Brains, minds and books

Books on the brain and consciousness pour off the presses-from Daniel Dennett, John Searle, Susan Greenfield and others. Andrew Brown surveys the recent literature and asks why our knowledge remains so sketchy and contested
November 20, 1997

"Nothing is easier than to familiarise one's self with the mammalian brain," wrote William James in Psychology in 1901. "Get a sheep's head, a small saw, chisel, scalpel and forceps and unravel its parts."

Nothing so confident was written for most of this century. Then, about ten years ago, discovering where our minds are in our brains suddenly seemed possible again. Behaviourism, which attempted a full description of the mind as if subjectivity were unnecessary, was discredited. New computer techniques made it possible to study living brains at work; and-though this last is hardly admissible-a generation of scientists were coming up who had played enough with LSD to know that consciousness cannot be explained away, but is intimately bound up with the physical world.

The new discoveries, however overwhelming, still seem unsatisfactory. What interests us about brains is where they stop and where minds start. As a matter of fact it is crude to suppose that there is one place in the brain where minds start. What we really want to know is not where the mind is in the brain, but how it is manifest there.

Why there is mind in the brain is what David Chalmers in The Conscious Mind (Oxford) has called the "hard" problem of consciousness. How it is there is the easy-though still unimaginably difficult-problem. Both problems are fascinating because the answers recede like mirages, always a little ahead of the advance of knowledge. The most optimistic conferences are still called "towards a science of consciousness." There is, in this field, nothing like the solid scientific basis of genetics or much of physics.

Nevertheless, computer-assisted techniques have made it possible to watch brains as they work. Until about 20 years ago brain scientists had to commit murder to dissect-or rely on nature to murder for them. If they wanted to know what a particular part of the human brain did, they sought people in whom it had been destroyed-usually by a stroke-and studied them. The development of CAT and other such scans has made it possible to study the brain's workings in unprecedented detail. It is an epiphany to see photographs of a brain scan and watch a sensation spread inside the skull like a puff of garish smoke before it dissipates; perhaps our own brain, scanned, would show the same picture of excitement.

Later the excitement wears away. "Seeing" thought is what humans have done ever since they learned to read facial expressions. But a scan makes the mystery look fresh and makes it appear soluble. There is even a particular area of the brain which lights up on a scan when we consciously attend to a task rather than doing it automatically. Has the researcher who identified that area found consciousness?

People professionally interested in consciousness-neuroscientists, philosophers, psychologists or computer experts-would agree that he or she has not. But this is about all they would agree on. They give different accounts of the failure, so far, to answer the questions about consciousness. They have different hopes for future success. Most of these disagreements do not follow the fault lines of scientific disciplines. You will find few believers in "strong" artificial intelligence (AI)-the belief that consciousness can be mapped on to a computer programme-among those who know most about the workings of the brain. But most of the disagreements are as much philosophical as scientific. This does not mean that philosophers rather than scientists will solve the riddle of consciousness, though some philosophers seem to think as much. It means that philosophical sophistication is required to make progress.

Some philosophers believe the problems are insoluble. Colin McGinn has argued that human beings are constituted in such a way that they cannot understand how "conscious states depend upon brain states." Conscious states are connected to brain states in subtle ways-ask anyone who has been drunk, anaesthetised or taken LSD. The challenge is to discover which laws-if any-govern these relations. To rephrase the question: why should anything give rise to experience?

The "Zombie philosophers," such as Daniel Dennett and the husband-and-wife team Pat and Paul Churchland, argue that this question is mistaken. To them, "consciousness" is simply the name ignorant people give to certain neural interactions. Proper scientific materialists will see that conscious experience is as unnecessary a concept as phlogiston, or ?lan vital. Just as there is no principle of life which animates otherwise inanimate matter, only inanimate molecules arranged in particular ways, we shall discover that there is nothing more to consciousness than a certain arrangement of electrical currents and chemistry-in neurones or even, perhaps, in silicon. When we have solved the "easy" problem of how the brain works, we shall discover that the "hard" problem, of why there is mind, has evaporated.

This is a difficult position to understand and to do justice to. Dennett is a writer of great vigour, who addresses important problems clearly, but his furies often seem out of proportion to their object. Consciousness Explained (Penguin), Dennett's best-known book, does a tremendous demolition job on the idea of a single "meaner," or centre of consciousness, inside us. Long after reading it, I asked myself whether his demolition job could really be said to explain consciousness rather than to explain it away.

To argue that "life" is simply an arrangement of the same molecules that make up dead matter seems rather to miss the point. True, there may be very little chemical change between a man who is dying and the same man five minutes later, dead. But who would take seriously a book entitled "Life Explained" which maintained that knowing those chemical facts is sufficient to understand what life is?

Such a criticism can be directed at both Dennett and the Churchlands. But Dennett goes further in one respect. He gets fairly close to behaviourism in arguing not only that the self is an illusion, but also that there is no first-person perspective in the world at all: the act of describing consciousness-whether to yourself or a third party-creates what it describes. This puts a very high value on language: without an interior monologue you have no interior life. It is not clear whether Dennett believes that pre-verbal babies, for example, are conscious-or in what sense.

Willingness to consider that there might be human beings without consciousness who can function perfectly well in the world links Dennett to one of the most remarkable books ever written on this subject: Julian Jaynes's The Origin of Consciousness in the Breakdown of the Bicameral Mind (Penguin). This is a rewarding book with which to start an enquiry into the field; not because it is right-it is almost certainly spectacularly wrong-but because it explodes like a bomb in the mind, leaving echoes which roll around for years. There is a certain poetic justice in this: one of the book's main themes is that the first great civilisations of Mesopotamia and Central America were built in response to hallucinated voices. Jaynes argues quite seriously that consciousness originated in the eastern Mediterranean about 3,000 years ago, at some point between the Iliad and the Odyssey. The characters of the Iliad, he says, were pre-conscious: what we would now call schizophrenic. "Iliadic man did not have subjectivity as do we; he had no awareness of his awareness of the world, no internal mind-space to introspect upon... Volition, planning, initiative is organised with no consciousness whatever and then 'told' to the individual in his familiar language, sometimes through the visual aura of a familiar friend or authority figure or 'god,' or sometimes as a voice alone... The Trojan war was directed by hallucinations. And the soldiers who were so directed were not at all like us. They were noble automatons who knew not what they did."

This implies that they did not deliberate. They acted on instinct; and when a problem arose for which instinct was inadequate, their left brains heard the gods, quite literally, speaking to them from the opposing area of the right brain. But by the time of the Odyssey the characters no longer heard the voices; they were alone in the world as we are, burdened with conscious choices and without gods.

Jaynes's dramatic characterisation of the Iliadic world leaves out an important point he has established earlier: that almost everything he says about Achilles is true of any piano player in performance. In other words, almost all our really impressive feats are performed unconsciously, or at least while we are unconscious of our actions.

This is one of the great evolutionary puzzles of consciousness. Many actions which are essential to preserve animal life are best performed unconsciously. A top tennis player, exhibiting the sort of speed and grace required to keep our ancestors alive on a savannah full of hungry lions, will have returned service before he is aware that the ball has crossed the net towards him. So why does he need to be conscious of his acts?

One answer is that consciousness is an adaptation to the problem of other people rather than of other animals. Nicholas Humphrey has suggested that this is what drove the evolution of human brain size: consciousness helps us form a model of what other people are likely to do (so as to be able to outsmart them). We then get an idea of ourselves by analogy with what we observe of others' behaviour.

But most present-day observers believe that consciousness goes much further back in time, and deeper into the animal kingdom, than the hominids. If the central question is "Why should anything give rise to consciousness?" one popular answer is that brains simply do so. We are conscious. Our brains cause consciousness. The debate must start from these facts. The most forceful and feared exponent of this view is John Searle, professor of philosophy at Berkeley.

Searle is a serve-and-volley philosopher, as he shows in The Mystery of Consciousness (Granta). If his first sentence does not blow an opponent away, he will rush to the net for the second. Here he is attacking "strong" AI: "The study of the mind starts with such facts as that human beings have beliefs, while thermostats and telephones don't... One gets the impression that people in AI who invent [theories which deny this] think they can get away with it because they don't really take it seriously and they don't think anyone else will either."

His famous 1980 essay on the Chinese room experiment laid out some of the strongest arguments for the view that consciousness is not computation. Both words are extremely slippery; but Searle was reacting against the strong AI position that what the brain does is to run a sort of computer programme; if we could reproduce the programme, we would have produced consciousness.

The essence of the Chinese room experiment is to imagine a man in a room who is manipulating slips of paper according to certain rules. Paper comes in with certain markings on it: he looks up the rule for each diagram and sends out of the room differently marked sheets of paper. If these markings are Chinese ideograms, and the rules are well chosen, he may be answering questions in Chinese. But that does not mean that he understands one word of Chinese. This, says Searle, proves the difference between consciousness (or understanding) and computation. You can simulate by computation tasks which in humans require understanding. But you have not thereby produced understanding.
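
The mechanical character of the room is easy to make concrete. What follows is a minimal sketch in Python-not Searle's own formulation, and with an invented rulebook and invented sample phrases-showing how a pure lookup procedure can produce apparently sensible replies without anything that could be called understanding.

```python
# A toy version of the rule-following in the Chinese room: the "man"
# applies a rulebook to incoming symbols he does not understand.
# The rulebook and the sample slips below are invented for illustration.

RULEBOOK = {
    "你好吗?": "我很好。",          # invented rule: this question maps to this reply
    "你是谁?": "我只是一个房间。",  # another invented rule
}

DEFAULT_REPLY = "请再说一遍。"  # also just a rule, used when nothing matches


def room(slip: str) -> str:
    """Return a reply by pure lookup; no meaning is consulted anywhere."""
    return RULEBOOK.get(slip, DEFAULT_REPLY)


if __name__ == "__main__":
    for incoming in ["你好吗?", "你是谁?", "天气怎么样?"]:
        print(incoming, "->", room(incoming))
```

To an observer outside the room the exchange may look like conversation; inside, there is only the table and the rule for consulting it.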

This argument has not stumped those who believe that what goes on in our minds is computation. Pat Hayes, a British AI researcher working in Florida, says: "The computationalist hypothesis is more radical than the claim that the mind could be simulated on a computer. Rather, it is the claim that the mind actually consists of computational activity in a biological computer."

A further line of attack is deployed against the computationalists by Searle and by Jaron Lanier, the computer scientist who coined the term "virtual reality" and is one of the most determined attackers of "strong" AI. Both claim that "computation" is a fuzzy and observer-dependent term. All sorts of systems in the universe can be perceived as computing or encoding symbolic operations. Lanier's favourite example is a meteor shower which, if measured, will yield a string of numbers which must be readable on some hypothetical computer as a programme. "Does this mean that the meteor shower is computing?" he asks.

Hayes's response is that pure computing is never found in nature, any more than are pure numbers. Both are abstractions, yet only exist bound up with concrete matter. Computing may be defined as the manipulation of symbols according to formal rules, but wherever you find computing going on, these symbols are physically incarnated-whether as pulses of electricity in a silicon chip or patterns of neuronal activity. Symbols are in some way physically connected to the things they represent, whether by nerve endings or sockets on circuit boards. Meteor showers are not-so they do not count.

With goodwill, it is possible to see that everyone agrees that brains are doing more than simply processing symbols, even if we disagree about how much more and what this is. One point made clearly by Gerald Edelman in Bright Air, Brilliant Fire (Penguin) is that any system which has evolved as an aid to survival, as our consciousness must have evolved, will have values-or emotional colours-built into it from the start. A worm has evolved an aversion to being stuck on a hook aeons before its descendants might begin to develop ideas of what a hook is.

Edelman won a Nobel prize for his work on the immune system. Since then he has worked on a Darwinian account of the brain. He wants to see how the cortex can structure itself in response to experience, by a process of selecting certain patterns of neural connection and allowing others to die away. These patterns then feed back into one another by a complicated process he calls re-entry, to build the kind of complicated models of the world that all animals live inside. His books are clear but dense; his theories are probably the closest to a scientific explanation of some of the processes that underlie the emergence of conscious life. But in the nature of things they raise more questions than they answer.

Edelman is the only one of these theorists to address clearly the question of how brains grow to be conscious. The fertilised egg, after all, is not conscious. It does not think or feel, yet it contains the instructions necessary to produce a baby that can feel and will think. The basic architecture of the brain is the same for all of us; but brains grow. They change physically as bodies grow and learn. A nun who spends her life contemplating God will have a brain significantly different from a bus driver's.

It is worth emphasising that even if the processes of the brain can be described as computation, we are infinitely more complex than any computer yet built or even conceivable. Susan Greenfield, an Oxford neurochemist, has just written the best perplexed person's guide to the brain so far published, The Human Brain (Weidenfeld & Nicolson), full of illustrations of this complexity. There are about as many neurones, she says, in each adult brain as there are trees in the Amazon rain forest; and there are about as many connections between these neurones as there are leaves in the rain forest. These figures suggest how complex the wiring can be-if the brain is considered purely as an electrical system. But it is not purely electrical. Signals travel within neurones as electricity, but they cross between them, at the synapses, through chemical processes involving specialised molecules called neurotransmitters. There are many different kinds of neurotransmitters, each of which can have different effects.

Further complicating the picture, the composition of this chemical soup varies with our mood and with the time of day. These chemical variations affect the electrical workings of the brain in the same sorts of ways that monetary policy affects the workings of the economy. That is how most drugs have their effect. But the brain, like the economy, cannot indefinitely be manipulated in this way. Cocaine does not work in the long run for the same sort of reason that Keynesian stimulation does not work in the long run.

One way of examining whether the brain's workings really amount to computation is to ask whether they are a rule-bound process-algorithmic, in the jargon. The most celebrated exponent of the idea that they cannot be algorithmic-that there is something necessarily uncomputable and unconstrained about the emergence of consciousness-is Roger Penrose, the Oxford mathematician. Working with Stuart Hameroff, an anaesthesiologist, he has developed a theory which makes consciousness the product of quantum events within microscopic cell-stiffening structures known as microtubules.

Penrose is a Platonist, which makes him doubly unfashionable among the great panjandrums in the field. The human mind has access to mathematical truths by a form of intuition, he believes, which no purely algorithmic process can ever match. Gödel's theorem proves that there will always be mathematical truths which cannot be proved inside the system which gives rise to them, yet which we can see are true. Therefore, Penrose says, consciousness must involve non-algorithmic knowledge. Against this, Dennett has argued that it greatly overrates the certainty of mathematical knowledge. Intuitions can be mistaken; and it is easy to imagine an algorithm that will generate intuitions which do not have to be right.

Part of Penrose's argument is that the emergence of consciousness in the universe and the relation of quantum laws to the rest of physics are both mysteries. Might they not prove to be the same mystery? Hameroff is the showman of the pair, but his interest in consciousness is professional: anaesthesiologists spend their working lives making consciousness appear and vanish to order. They know how to do it well enough by now, but the underlying laws are still a mystery.

The Penrose/Hameroff theory is important partly because it is so thoroughly rejected by almost everyone else in the field. But it is much the closest to what most people in the world believe. Most of the people who have ever believed in ghosts, zombies and other forms of disembodied spirit are probably alive today. Yet almost all scientific researchers are convinced that conscious states are dependent entirely on brain states. If we assume-as seems safe-that any truly comprehensive account of the brain's workings is decades away, and any computer which might mimic that is still further off, the question arises: why is the field so interesting and fashionable? What do people hope to find from it?

The University of Arizona in Tucson regularly hosts large conferences at which everyone in the field shows off to their peers for a week. Late one night in a bar at the last conference, Patrick Wilken, the Australian founder of the Association for the Scientific Study of Consciousness, concluded a long evening's argument: "Don't you see, Andrew, what we're trying to do here? We're trying to make a soul!" And they are.

John Lucas, the Oxford philosopher who first formulated the Gödelian arguments against AI later developed by Roger Penrose, ended his paper with the words: "Since the time of Newton, the bogey of mechanist determinism has obsessed philosophers. If we were to be scientific, it seemed that we must look on human beings as determined automata, and not as autonomous moral agents; if we were to be moral, it seemed that we must deny science its due, set an arbitrary limit to its progress in understanding human neurophysiology, and take refuge in obscurantist mysticism... But now we can begin to see how there could be room for morality, without its being necessary to abolish or even to circumscribe the province of science."

His hope was premature. The Churchlands, for instance, still think that an account of the mind will become available which will render first-person experience redundant. The arguments continue-if in decreasingly vicious circles. But Lucas may have been right in the long term. Brain science may have abolished the division between body and soul; but in the process, both ideas have changed. It turns out that there is neither pure ghost, nor pure machine, but that we are an animal which is both, or works like both at once.