How does your brain work? The enormous number of neurones and interconnections has led some people to believe we'll never know. However, over the past few years teams of philosophers, neuroscientists, cognitive scientists, computer engineers and psychologists have participated in an explosion of new ideas and research about the human brain and mind (and the relationship between the two). So, what's the consensus? There isn't one, not yet.
When we talk about consciousness, what do we have to explain? Concerning what philosopher David Chalmers calls the 'easy' questions, we have to explain how light waves passing in through our eyes onto a two-dimensional surface (the retina) can be turned into three-dimensional images that we can manipulate in our 'mind's eye.' Or how it is that, despite the huge amount of input coming in through our five senses, we can concentrate on one thing at a time?
On the 'hard' side, we have to explain what it is like to experience things — the subjective feel of experience that is unique to each of us. For instance, try explaining the colour red to somebody without referring to anything in the outside world that's red. Finally, and hardest of all, we have to explain how it is that we have a consistent and seemingly continuous sense of self that we maintain throughout our waking hours, that others can identify as our 'personality,' and that we experience as a stream of consciousness.
The idea that lies behind most western1 people's view of mind is that we each have a separate entity called the mind or the soul, housed somewhere in the brain, that acts as a central area where our self receives input from the outside world, makes decisions based upon it, then sends commands back through the rest of the brain to the body. The mind can be made out of brain tissue (if you're scientifically inclined) or out of something more mystical and non-material. This idea is extremely tempting, because we feel as though we have a self that looks out through our eyes, hears with our ears, and controls the body we inhabit. This position was best articulated by the French philosopher René Descartes and is known as 'dualism' or, more derisively by its critics, as 'the doctrine of the ghost in the machine.'
Unfortunately, the major problem with the idea of a central processing unit in the brain is that there just isn't any evidence that one exists. Descartes thought that the pineal gland was a possible candidate, but apart from regulating the body’s response to light and dark (and producing the impressive 'third eye' of New Zealand's Tuatara2), it doesn't appear to do much in the way of producing consciousness.
Modern scientists haven't done much better in isolating any part of the brain that could house a mind. They've found various areas that process lower-level functions like sight, hearing and speech, but no single area has been found that could explain where the mind resides.
So, where is it, then? Most theories talk about a variety of areas being involved, and the rest of this Entry is devoted to explaining some of these.
If you stop and think about it, many things happen within our body that we either cannot control, are not aware of, or both. Try and stop your heart just by thinking about it. Or figure out how you catch a fast-flying ball without being immediately conscious of where your arms are or where the ball is. Or how you manage to ride a bike or drive a car without being aware of it (but while you were learning, you had to consciously think about every movement).
Obviously some things are happening outside of what we would traditionally call the 'mind.' And, as the last example shows, actions that begin fully conscious can become unconscious through practice. So here we run into another problem in supposing a split between mind and body. If there is a conscious mind separate from the rest of the body, clearly it doesn't control everything that goes on. It delegates tasks to various lower centres that are presumably non-intelligent (perhaps you could say instinctual). This is reinforced by research demonstrating that ordinary people react to things such as changes in the colour of dots before they are consciously aware of the change.
If this sounds like a reflex, that's almost correct, but on a scale far, far more complicated than the reflexes doctors tap when you get a check-up. Playing a dashing cover drive3 to a fast bowler, for instance, requires immensely complicated responses in a very short period of time. By the time the batsman knows what he's done, he's finished the shot. It's stretching the reflex analogy to suppose that there's a 'cover drive' reflex that bypasses the mind in the same way that there's an 'I've just put my hand on a hot object' reflex4.
The philosopher Daniel Dennett takes this argument a lot further, arguing that what we call consciousness is nothing more than a huge number of interlocking reflexes, or brain circuits, or (to use Dennett's term) 'demons.' Each demon, or group of demons, creates a 'draft' copy of perceptions coming in through the body's senses, and sends it to another group of demons, who send it on to another. This continuous shunting and the 'multiple drafts' that it produces appear to us as a stream of consciousness, without ever appearing as a complete or finished product in any particular area5. This theory is difficult and counterintuitive, to say the least. Dennett's most famous work, Consciousness Explained, has been nicknamed 'Consciousness Explained Away' by its critics.
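The shape of Dennett's model can be caricatured in a few lines of code. This is a toy sketch only — the demon names and draft fields below are invented for illustration, not anything Dennett specifies — but it captures the idea that each 'demon' revises a draft of a percept and hands it on, with no finished, central copy ever existing.

```python
# A toy caricature of Dennett's 'multiple drafts' model. Each demon is a
# small function that revises a draft of a percept and passes it along.
# Whichever draft you inspect at a given moment is 'the' percept; there
# is no final, central version. All names here are invented placeholders.

def edge_demon(draft):
    """Add edge information to the current draft."""
    draft["edges"] = "detected"
    return draft

def colour_demon(draft):
    """Add colour information to the current draft."""
    draft["colour"] = "red"
    return draft

def label_demon(draft):
    """Label the percept, based only on what earlier demons wrote."""
    draft["label"] = "ball" if draft.get("edges") == "detected" else "unknown"
    return draft

demons = [edge_demon, colour_demon, label_demon]
draft = {"raw": "retinal input"}
for demon in demons:
    draft = demon(draft)   # each pass produces another, revised draft

print(draft)
```

In the real theory the demons would run in parallel and revise each other's work continuously; a single linear pipeline is the simplest possible caricature of the shunting process.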
The strength of this theory is that it supports what we know about the evolution of humans. Each demon is something that our distant ancestors used (perhaps for some other function), and small genetic changes led to different interconnections between demons, as well as more accurate and faster responses to the world around us, which would be preserved by natural selection.
So what makes us different from animals? Surely they have their own 'demons' in their brains, too. According to Dennett, in humans the major difference lies in man's ability to use language, which enables the spread of ideas between people. This, along with the evolution of better long-term memory, allows for better and more complete 'drafts' which paint such a compelling worldview that it seems as though it is being presented to a self within us.
This seems to imply that without memory, brain function is more like a set of demons who can't talk to each other because they can't remember what it was they were talking about. It follows from this that in animals the demons would be less well-organised and hence a self would be less likely. But it also follows that as things get more organised in the brain, particularly with better memory, the closer animals will come to being conscious. So a chimpanzee is closer to having a stream of consciousness than, say, a salmon.
What it Means to Understand Chinese: Artificial Intelligence, Artificial Consciousness
This section briefly addresses John Searle's Chinese Room thought experiment and its implications for consciousness. The experiment, and some of the objections to it, can be summarised as follows:
Imagine you're locked in a room full of books, look-up tables and bits of paper. All this written material equips you to respond to cards with unintelligible symbols on them that are shoved through a mail-slit in the door of your cell.
Your job is to look at the symbols, locate them in the look-up tables, carefully copy down the response you find there on a piece of paper and push the paper back through the mail-slit.
Unbeknown to you, the 'symbols' are characters in the Chinese language. The incoming cards contain questions. The outgoing paper contains the intelligent answers to the questions. Now, the question is, while in this system, do you understand Chinese?
This is a thought experiment proposed by the philosopher John Searle, and as such shouldn't be taken as a literal account of anything that's possible at the moment. However, Searle argues that the above situation is analogous to an intelligent computer program — there are inputs and outputs, with look-up tables simulating the guts of the program. This 'program' is far, far more complicated than any of today's computers could handle. It could even convince a Chinese speaker that it understands Chinese, because it can answer any question put to it in an intelligent manner. (This, incidentally, would pass the so-called 'Turing Test' that is widely regarded as the acid test for an intelligent machine.) But, according to Searle, in such a situation, you do not understand Chinese — and this is the point he uses to argue that even the most intricate computer programs cannot have a full understanding of what they're doing, and therefore cannot be conscious.
Are you convinced? The experiment certainly seems true enough on the face of it. But is Searle asking the right question and is he using a fair analogy?
The main objection to the 'Chinese Room' argument is that Searle is emphasising the wrong area. Of course you wouldn't understand Chinese in this situation. Still, the Chinese Room as a whole does — you, the books and the bits of paper. Searle responds by changing the situation slightly, by imagining that the person has memorised the books and look-up tables. But he still doesn't understand Chinese.
But if you or I or anybody who doesn't speak Chinese as a first language tried to learn it, that's essentially what we'd do. We would internalise a list of words and symbols, and when we heard a question, we'd first translate it into something we understand via our internalised look-up tables, then translate our answer back. It would be a bit cruel to suggest that someone who does this doesn't understand the language.
The view that Searle is arguing against — that machines are capable of being conscious — is called 'strong AI.' His case against it seems like a bit of a false argument, though. Here's another thought experiment. Say somebody created lots of units out of silicon that behaved exactly like a human neurone, connected them up in exactly the same way a brain is wired, then fed them signals just like those the brain receives from the senses. If we believe that the brain is the organ of consciousness, and we don't believe in some sort of soul or vital spirit that is required for life, then why wouldn't this silicon brain give rise to consciousness in the way a human brain does?
Food for thought. The preceding discussion is useful because it offers another angle on the problem of exactly what it takes to be conscious and have conscious experiences (called 'qualia' in the terminology of the field). The argument around qualia is a subset of consciousness studies deserving of an Edited Entry of its own. So let's skirt around it for the moment and talk about:
Chaos and Creativity
One of the objections people have to the entirely material view of the brain that neuroscience appears to be espousing is this: if everything can be reduced to electrical signals and chemical reactions that obey predictable physical laws, where do free will and creativity come into it?
This is another apparent attraction of dualism, but it's not clear why something non-material is any more likely to exhibit free will than something material. The problem remains the same, namely how the workings of the brain (or soul, if you prefer) interact to produce original thought.
One school of thought suggests that quantum mechanics could be the missing link — after all, physics no longer seems as predictable and deterministic as it did a century ago. But replacing predictability with probability and chance, as quantum theory does, only replaces one problem with another. Neither gives us an idea of how ordered, creative thought can arise from the brain. However, one of the newest branches of science, that of chaos and complexity theory, may have an initial answer of sorts.
According to some modern neuroscientists and scientists conversant with chaos, consciousness resides in the brain and correlates exactly with brain states; but it's the way the interconnections work that's the interesting part. A little mental diagram-drawing is in order...
Picture, if you will, a number of dots representing neurones (draw them on a bit of paper if it makes you feel better). Colour one of them black. Now draw lines between each neurone so that they are connected one to the next, all in a row. If the black neurone is stimulated, clearly the signal is going to go linearly straight down the line — no new information, no creativity. Now draw more lines so that each neurone is connected to every other neurone. Now, if you stimulate the black neurone, the signal is going to go everywhere in a mishmash — chaotically, if you will.
What complexity theorists have made explicit recently is that at a certain percentage of connections, between linearity and chaos, there is an area where the system behaves in an ordered but creative way. This area is called the edge of chaos and in the brain, this system (comprising multiple re-entrant circuits and conditioned groups of neurones) may create what the neuroscientist Gerald Edelman calls the 'Dynamic Core.'
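The linearity-versus-chaos picture can be sketched numerically. The toy model below is not a realistic neural model (and certainly not Edelman's Dynamic Core); it simply wires up a number of neurones at random, with each possible connection present with probability p, and counts how far a single stimulation spreads. Near p = 0 the signal goes almost nowhere, near p = 1 it floods everything, and the interesting ordered-but-creative regime is claimed to live somewhere in between.

```python
import random

def propagate(n, p, steps, seed=0):
    """Build a random directed network of n neurones, where each pair is
    connected with probability p, then stimulate neurone 0 and count how
    many neurones are active after the given number of steps."""
    rng = random.Random(seed)
    links = {i: [j for j in range(n) if j != i and rng.random() < p]
             for i in range(n)}
    active = {0}                      # stimulate the 'black' neurone
    for _ in range(steps):
        # every active neurone passes the signal to its neighbours
        active |= {j for i in active for j in links[i]}
    return len(active)

# Sparse wiring barely spreads the signal; dense wiring floods the lot.
print(propagate(50, 0.01, 5), propagate(50, 0.5, 5))
```

Connection density is of course only one knob; real edge-of-chaos models also vary thresholds, inhibition and feedback loops, none of which this sketch attempts.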
This core can take in and cast off many different areas of the brain as time goes on. This explains how you can sense and think of different things while remaining the same person. The components may change, but the core remains in operation. For instance, say you're sitting in your lecture hall thinking about the scintillating sociology lesson you're being given. Then you see the pretty girl two rows ahead of you, the one you've had your eye on for weeks. Suddenly, the areas of your brain responsible for hearing the lecture and your thoughts about sociology drop out of consciousness and the areas involving your visual attention (and less-intellectual desires) kick in. The lecturer's voice fades and you start noticing details about the girl's clothes, hair and so on. These factors interact with a few memories of your previous experiences, good or bad, and a plan to talk to her begins to occur to you.
The point of the example is this. Each of these areas (or Dennett's 'demons') is made up of interconnected neurones obeying physical laws. Their interaction at the edge of chaos is such, however, that they can form new information in an ordered fashion.
Is this personality and consciousness as we know it? Maybe. Many other more subtle factors can be worked in to make this account more complete — the effect of genes and brain development, the chaos science of 'basins of attraction', to name but two. But those are explanations for another time. See below for some further reading.
Jack Cohen and Ian Stewart. The Collapse of Chaos. Easy-to-read and humorous introduction to the science of chaos.
Daniel Dennett. Consciousness Explained. Hard-going at times, and extremely counterintuitive, but definitely worth reading.
René Descartes. Discourse on Method and Meditations. The original 'dualist' who articulated the split between mind and body.
Gerald Edelman. Consciousness: How Matter Becomes Imagination. The source of the 'Dynamic Core' hypothesis.
John Searle. The Mystery of Consciousness. A series of book reviews, including one of Daniel Dennett's, with some replies. And an explanation of the 'Chinese Room' thought experiment.