Why no one will win the Randi or Chopra Challenges

Whoa! Deepak Chopra is offering one million dollars to anyone able to present a falsifiable theory of consciousness, in response to James Randi’s One Million Dollar Challenge to show that paranormal (psi) effects exist! Of course, Twitter and Facebook are going bonkers over this. And I have been going a little bit bonkers over all the responses, to be frank. Just to blow off some steam, here are my thoughts on Chopra’s challenge, and on the responses to it.

First of all, many people responded to Chopra’s call with sarcasm and cynicism, and made fun of Chopra’s lack of understanding of science.

It struck me how many of these people lack any understanding of science themselves, but I guess that’s Twitter for you. I’d like to say to these people who ‘fucking love science’: proclaiming yourself an atheist or tweeting ‘WOO WOO’ to @jref does not make you a scientist, any more than stringing together a coherent sentence from the words “quantum”, “universal”, and “spirit” does.

So, what’s this all about? Years ago, James Randi, a professional stage magician and renowned skeptic, put out a one-million-dollar prize for any individual able to demonstrate a true ‘paranormal’ ability. To anyone able to read the future, perform telekinesis, or make money as a Ghostbuster, Randi would pay one million dollars. To date, the prize remains unclaimed.

Deepak Chopra, on the other hand, is an Indian MD who writes books on consciousness and quantum mysticism using the Deepak Chopra Quote Generator, and apparently makes enough money to throw a million dollars at anyone coming up with a falsifiable theory of consciousness.

Neither of these challenges makes sense.

Randi’s challenge does not make sense because it rests on a straw man argument: it makes a caricature of psi and then shoots at it. No, there are no such things as seeing into the future, telekinesis, or mind reading. No matter how sad it makes me to admit this, Professor X and Jean Grey DO NOT EXIST (come on, you all at least fantasized about being able to read minds and get the remote and/or your beer and pizza without having to leave your couch!). Period. It does not work, and cannot work – not according to the laws of physics, and not according to present theories of psi. What might exist, though, are weak, anomalous effects that may only be detected in high-powered studies involving large numbers of subjects, set up in a very specific manner, pre-registered, replicated, and replicated again before we can even start drawing conclusions about the existence of psi. So no individual will ever be able to show paranormal ability, and thus claim the Randi prize. Safe bet, Mr. Randi.
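To make the power point concrete, here is a minimal sketch of my own (not part of either challenge), using the standard normal-approximation formula for the sample size of a one-sample, two-sided z-test at α = 0.05 and 80% power. As the hypothesized effect size shrinks toward the tiny values reported in psi research, the required number of subjects explodes – which is exactly why no single individual walking into Randi’s office could ever settle the question:

```python
import math

# Standard normal quantiles for alpha = 0.05 (two-sided) and power = 0.80.
Z_ALPHA = 1.96   # z at 1 - alpha/2
Z_BETA = 0.8416  # z at the desired power

def n_required(d: float) -> int:
    """Approximate sample size for a one-sample two-sided z-test
    to detect a standardized effect size d: n = ((z_a + z_b) / d)^2."""
    return math.ceil(((Z_ALPHA + Z_BETA) / d) ** 2)

# Large, small, and psi-sized effects:
for d in (0.8, 0.2, 0.05):
    print(d, n_required(d))   # 13, 197, and 3140 subjects respectively
```

A single self-proclaimed psychic is one data point; an effect of d ≈ 0.05 needs thousands of trials before it can even be distinguished from chance.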

Chopra’s challenge makes no sense because it is horribly ill-defined. Coming up with a falsifiable scientific theory of consciousness is not possible without properly defining ‘falsifiable’ and ‘consciousness’. What Chopra means to say is he will give a million dollars to anyone who can come up with a falsifiable materialist theory of conscious experience, that is, a theory of the subject of consciousness – the experience itself; not (necessarily) its contents. And that is an impossible challenge, because it is a contradiction in terms. You cannot come up with a falsifiable materialist theory of consciousness, and claim the Chopra prize. Safe bet, Mr. Chopra.

But how does mind relate to matter, then? Why can’t we have a falsifiable theory of consciousness?

I am not going to repeat Introduction to Philosophy of Mind here, but roughly speaking we have four classes of mind/matter theories:

  1. Only matter exists, mind is an illusion
  2. Mind exists, independent of matter
  3. Mind is dependent on matter (or vice versa)
  4. Only mind exists, matter is an illusion

Now, let’s be good scientists, and shoot at these propositions to falsify them, shall we? Classes 1 and 2 are fairly easy to shoot at, so I’ll use proper bullets ;-) Classes 3 and 4 are somewhat more challenging, though.

Let’s start with 1, which you may call orthodox materialism. It’s easy to debunk (with one caveat, though).

  • Cogito ergo sum. I have conscious experiences. Even if these experiences (including the feeling of being the subject of conscious experiences) are illusions, I am still experiencing these illusions. Therefore consciousness exists – even if all other apparent conscious beings in the universe were philosophical zombies (that is, beings that act rationally, but lack conscious experience).
  • If consciousness exists, there is ‘mind’. This rules out orthodox materialist monism (the notion that there is only matter, and that mind is an illusion).
  • Caveat: I can only falsify this for myself, because I cannot with certainty claim anyone else has conscious experiences. Vice versa, you cannot verify my conscious experiences, so you should not believe my claim, but base your evaluation on your own conscious experience (or lack thereof).

Number 2 is good old Cartesian substance dualism. Let’s shoot!

  • In order to move a body, the mind needs a way to operate it
  • Operating a body requires brain cells to fire
  • In order to make a brain cell fire, energy is required – the mind needs to add energy to the brain in order to make this work
  • Physics (i.e., our understanding of matter) does not allow the creation of energy within a closed system. How, then, can the mind get energy into the brain?
  • The probabilistic nature of quantum mechanics will not save you here, Church of the Quantum Spirit. Quantum mechanics describes physical reality at the finest-grained level, and contrary to classical mechanics, which is deterministic in nature, quantum mechanics is probabilistic. In other words: the equation x(t) = v·t gives the position of a moving object at time t with absolute certainty, whereas the Schroedinger equation (or rather, a transformation thereof) only gives the probability that a particle will be found at a given position at a given time. A typical ‘quantum woo’ argument is that this probabilistic nature potentially allows for a mechanism via which mind can influence matter. However, the outcome of a quantum measurement is inherently unpredictable. That may sound very convenient if you want to believe in free will, but in fact it is a terrible property for a cognitive system, or for social beings like us. Our entire social network, and our own mental sanity, flourish thanks to the fact that we are (in general) quite predictable in our actions and thoughts. Let’s please not introduce fundamental randomness in there, I’d say…
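The deterministic/probabilistic contrast above can be sketched in a few lines of Python (a toy illustration of mine, not a physics simulation): classical mechanics returns one certain position, while a quantum ‘measurement’ can only be sampled from a Born-rule probability distribution.

```python
import numpy as np

# Classical mechanics: x(t) = v * t yields one certain outcome.
def classical_position(v: float, t: float) -> float:
    return v * t

# Quantum mechanics (toy, discretized): |psi|^2 gives only a probability
# distribution over measurement outcomes; individual outcomes are random.
rng = np.random.default_rng(seed=42)
positions = np.array([-1.0, 0.0, 1.0])    # possible measured positions
born_probs = np.array([0.25, 0.5, 0.25])  # Born-rule probabilities (sum to 1)

print(classical_position(2.0, 3.0))                 # always 6.0
print(rng.choice(positions, size=5, p=born_probs))  # varies run to run (fixed here only by the seed)
```

The classical call gives the same answer every time; the quantum ‘measurement’ is fundamentally a dice roll – which is precisely the property that makes it useless as a control channel for the mind.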

Classes 3 and 4 are more difficult to shoot down. Since WordPress does not allow me to use mortar-grenade points, but only bullet points, I’ll switch back to full text.

Number 3 is the class of what I call ‘weak monism’. We accept that mind and matter both exist; however, the one substance is dependent on the other (or: one substance can be reduced to the other). This is the category in which we find mainstream theories of consciousness. Weak monist theories come in two flavours. Materialist (or physicalist) theories propose that mind is the result of physical processes, and can be described as such. The Orthodox Skeptics are adherents of these theories, as are most mainstream scientists. Idealist theories state that mind is supreme, and that matter is created by the mind. Chopra’s Church of the Quantum Spirit is of this denomination.

Weak monism, though, suffers from the dreaded Hard Problem. How does a change in one substance result in changes in the other? This goes for both materialism and idealism. Materialists need to explain how a change in matter (brain cells) translates into consciousness, and why some physical processes (action potentials) result in consciousness in some circumstances, whereas the same physical processes do not in others. However, if you’re from the Church of the Quantum Spirit, you have a hard problem too. If matter is a result of mind, how come not all mental activity results in changes in matter?

According to many materialists, including Dan Dennett, the Hard Problem is not really a problem at all. Consciousness simply is the sum of all brain activity, period. In slightly more subtle wording: consciousness is believed to be an emergent phenomenon, resulting from the complexity of the neural networks of our brain. This is called supervenience – reality can be described at different levels, and higher levels of description (consciousness, mind) are dependent on features of lower levels of description (brain, neurons). Or, as Kalat has put it in Biological Psychology for generations of psychology students: you can look at the Mona Lisa as a painting of La Gioconda and talk about it in terms of her mysterious smile, or you can give a detailed description of the canvas and pigments used. Same thing, different levels of description. Similarly, mind is the same thing as brain activity, but simply described in different terms. Obviously, we can easily swap around the words ‘mind’ and ‘matter’ to fix the Hard Problem for idealism.

Now, I hate to bring this news to the Orthodox Skeptics, but this is Woo in its purest form. You cannot call any theory that only says ‘if you make something complex enough it becomes conscious’ a serious theory! How complex does a system have to be in order to become conscious? At what level of description does consciousness emerge? Does the physical system need to be a brain, or would any physical system do? In other words – calling consciousness an emergent property of brain activity and leaving it at that is hardly any more scientific than declaring universal quantum love and spirit (or insert your favourite Deepak Chopra quantumism here).

There are several problems with the emergence/supervenience theories of consciousness, but I personally think John Searle brought up the best argument against supervenience theories. Let me paraphrase it in terms of the Chinese Room thought experiment: in this thought experiment, we lock up a man who only speaks English in a room. Via a slot in the door he is given sheets of paper with Chinese characters. Using a manual in the room, he is able to look up an appropriate response in Chinese. He writes the reply on another sheet of paper, which he returns via the slot. From the outside, it looks like the man knows Chinese! In reality, of course, he does not. Searle used this to argue that true artificial intelligence does not exist – if you are training a system to respond to a user in natural language, for example, what you’re doing is giving an artificial system a manual. The system does not understand language in the sense we understand language.
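Computationally speaking, the ‘manual’ at the heart of the thought experiment is nothing more than a lookup table. A toy sketch of my own (the phrases and replies are purely illustrative) makes the point: the program produces appropriate replies without any representation of meaning anywhere in the system.

```python
# A toy Chinese Room: replies come from a fixed manual (a lookup table);
# nothing in the system models what the symbols mean.
MANUAL = {
    "你好": "你好！",        # "hello" -> "hello!"
    "你好吗？": "我很好。",  # "how are you?" -> "I am fine."
}

def room_reply(message: str) -> str:
    # The man in the room just consults the manual; no component
    # here 'understands' Chinese in any sense.
    return MANUAL.get(message, "？")

print(room_reply("你好"))  # looks like fluent Chinese from outside the door
```

From outside the slot, `room_reply` passes for competence; inside, there is only symbol shuffling – which is exactly Searle’s point.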

The Chinese Room can also serve as a thought experiment on consciousness. Take a system (a body), and pop a computational unit in there that can map inputs to outputs (let’s call this magic device a ‘brain’). The brain, or parts of it, is not conscious in any sense – it simply maps inputs to outputs. However, the system as a whole, operating in the world, is conscious, or at least bears all the signs of it. This is pretty much in line with Alva Noë’s ideas of how consciousness depends on embodiment.

In his book “Intuition Pumps and Other Tools for Thinking”, Dan Dennett defuses Searle’s argument by stating that the thought experiment is flawed. It does not matter if the ‘guy inside’ understands Chinese or not – the system (that is, the room) does. Digging deeper for ‘understanding’ or ‘consciousness’ makes no sense. There is no ‘Hard Problem’ – conscious experience is just what a system is doing at a particular level of description.

Now, I would like to very explicitly state here that Dan Dennett is probably one of the greatest minds alive, and I am nowhere in his league. I am a great fan of his work, and I feel that it should be compulsory reading for any undergraduate in psychology. However, I think he is wrong here. The reason for that? He plays a trick on us in defusing the Chinese Room.

The trick is this: he smuggles in an external observer. The Chinese Room understands Chinese only if observed by, and interacting with, an external observer. The ‘understanding’ of Chinese by the room exists only in the mind of the observer! Otherwise, the actions of the Chinese Room are meaningless. Likewise, the brain-in-a-body-operating-in-the-world is only conscious if observed in an appropriate context. Following Searle, I do find this problematic. Consciousness is a first-person perspective. I know I am conscious, because I am both subject and object of my experiences. Who or what, then, is describing the activity of my brain-body in such a way that it enables my first-person consciousness? It cannot be me, because I am the result of this observation, and unless we allow paradoxical cause-effect relations (which I doubt any materialist would be very keen on), we are left with a very urgent question: in whose mind do I exist?

In sum, I see pretty big problems with materialist theories of consciousness. However, converting to idealism does not solve the problem. As argued earlier, idealism also suffers from the Hard Problem, and the above analysis applies to it as well. The Hard Problem is deviously difficult to defuse if you accept that mind and matter both exist.

One possible solution is to give consciousness ‘fundamental’ status: consciousness is a fundamental property of the universe, like the fundamental forces. Hameroff and Penrose’s Orch OR model rests on this assumption, and Giulio Tononi’s highly fashionable and critically acclaimed IIT 3.0 model of consciousness puts ‘consciousness exists’ forward as its ‘zeroth’ postulate. In a recent online article, Christof Koch even explicitly explored panpsychism (the idea that everything is conscious) as a solution to the mind-body problem. However, this does not explain why consciousness exists. And given that physicists are not satisfied with merely stating that ‘gravity exists’, we as psychologists should not be satisfied with stating that ‘consciousness exists’.

Anyway, in a rather large nutshell, this is why the Chopra Challenge makes no sense. Apart from the fact that it is poorly defined, we are nowhere near an empirically verifiable (or falsifiable) theory of consciousness. All we have been doing since the dawn of brain scanning is looking for neural correlates of consciousness – a very useful enterprise, because it provides boundary conditions for consciousness, but one that does not crack the Hard Problem at all. The Hard Problem is probably fundamentally unsolvable within a weak monist framework. In itself this does not prove Chopra right, of course.

On a separate note: what would advance our understanding is a potential falsification of the idea that mind can be reduced to matter. This is actually the reason I started doing psi research, apart from my lifelong wish of finally becoming a Ghostbuster when I grow up. If we can convincingly demonstrate that certain aspects of mental functioning cannot be reduced to physical processes, we would have a strong case to either revise our physical models or falsify materialism. Given the potentially huge impact of psi research, and the fact that the present corpus of data does not allow for a clear falsification of psi, I think it is a very worthwhile area of research. But that’s my 2c.

Oh, we have one class of theories left, don’t we? Absolute idealism or monist idealism states that only mind exists, and that matter is an illusion. Well, to quote Sherlock Holmes, “when you have eliminated the impossible, whatever remains, however improbable, must be the truth.” ;-)

Woo woo…


PS Regarding my last point, I can recommend reading Schroedinger’s “What is Life?”. It is a short book that you can read in a couple of hours, but it will stay with you for a lifetime. Yes, I know I stole that line from the reviews of the book, but it’s very true.


This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.