Technology

Are we living in the age of the brain?

Understanding the brain won’t be done simply by mapping it down to the last synapse

December 22, 2014
Neuroscience has exploded into the mainstream in recent years

We’re surely now in the Age of the Brain. In the United States, the BRAIN Initiative, announced in 2013 and with a projected cost of $3bn, aims to map the activity of every neuron in the brain—first, those of mice and other animals, then of humans. The European Union has assigned €1bn to the ten-year Human Brain Project, which intends to deduce the brain’s wiring circuit in order to build a complete computer simulation of it. And now Japan has launched its own ten-year initiative, called Brain/MINDS, with a focus on understanding brain diseases and malfunctions such as Parkinson’s, schizophrenia and autism.

Of all these projects, the Japanese effort is the most modest, and likely to be the most useful. It will use a combination of brain imaging and genetics to try to figure out what goes wrong and why, in particular using marmosets as a model for humans. The European project, meanwhile, has already run into serious problems. Many neuroscientists are concerned that its ambitions are premature, and last July 130 researchers from labs around the world signed an open letter complaining of the “overly narrow approach, leading to a significant risk that it would fail to meet its goals”. The signatories say that the project could prove to be a huge waste of money, and criticize what they see as the opaque and unaccountable way the project is being run.

Some of those complaints are about infrastructure and management. But some go to the heart of what modern brain science is attempting to do, and what its realistic limits are. Those issues are searchingly explored in a new book, The Future of the Brain, edited by cognitive scientist Gary Marcus and neuroscientist Jeremy Freeman, which I have recently reviewed for Prospect.

One of the most striking features of the neuroscience literature is the contrast between the image of “thinking” presented there and our everyday experience. The emphasis in neuroscience is on how the brain does things: how we process visual information, how we record memories, how we move our limbs and comprehend language. It’s true of course that most of us are capable of all these impressive feats—but rarely with anything approaching computer-like efficiency. We make bad judgements, we misunderstand, and most of all, we live in mental turmoil. The mind feels like a battleground of clamouring voices, not a sleek and efficient circuit: “I’m bored with this task, but I have to finish it. Or perhaps tomorrow? Shall I just make a cup of tea?”

Resolution of conflicting mental signals is certainly not ignored by cognitive scientists or psychologists, but there seems often to be a disjuncture between the neuroscientific model of the brain as a problem-solving network and the actual experience of the brain as a medley, even a bedlam, of imperatives and impulses. Sigmund Freud may have been wrong in seeking to present his psychoanalytic theory as a kind of science, but he was surely right to present the mind in terms of conflict rather than unity. One thing we do know about the brain is that it is not just a very large network of neurons, but is both very diverse (there are many different types of neuron, as well as non-neuronal cells called glia) and highly modular (different parts perform different, specialized roles). Mapping this architecture is an important goal, and there are some deeply impressive techniques for doing that. But the risk is that this is like trying to understand human culture using Google Earth—or rather, cultures, for there is just a single geography but plenty of conflicts, compromises and confusion going on within it.

None of this would be disputed by neuroscientists. But it perhaps highlights the distinctions between an understanding of the brain and an understanding of the mind. The implication seems to be that it is hard to develop one while you’re working on the other.

Another danger that the big brain projects will have to navigate is the temptation to consider the brain in isolation. This has been a prevalent tendency ever since the brain became established as the “seat of the mind”: as the popular view has it, all that we are and all that we experience takes place within this wobbly mass of grey tissue. But of course, it doesn’t. To put it bluntly, no one has ever existed without a body around their brain. In a real (and an evolutionary) sense, the brain is an outgrowth of a nervous system that extends throughout the body. Without sensory input, the brain has nothing to do: it is just jelly. (That is of course different from saying that a brain deprived of sensory input goes blank.) The Human Brain Project acknowledges this, which is why it includes a “neurorobotics platform” that aims to create a simulated body for its simulated brain.

This isn’t just a matter of giving the brain something to do. Some cognitive scientists, such as Antonio Damasio at the University of Southern California and Anil Seth at the University of Sussex, argue that consciousness and brain activity have an explicitly “somatic”, embodied element. They think that emotions are not so much states of the brain as mental representations—indeed, interpretations—of the physiological states of the body. That is reflected in the everyday language of “gut instincts” or “thinking with the heart”. It is just one of the many reasons why transhumanist talk of downloading our brains to a hard drive as a form of immortality is naïve.

But the challenges for the American and European brain projects in particular run deeper than all this. They are data-gathering exercises akin to the Human Genome Project. We can now see what that latter project got us: a load of data. That’s no criticism; data is good, and it has already proved extremely useful in advancing our understanding of genomics. But now that we have the “genome book”, all three billion letters of it bound and housed in the Wellcome Trust, we are like English speakers who have learnt to recite Russian poems fluently without knowing what they mean.

We know a lot about how genes work. But just as we have only a rudimentary knowledge of how genomes relate to traits (genotypes to phenotypes), so too do we lack an understanding of how patterns of neural connectivity and interaction lead to thoughts, emotions, creativity and imagination, psychosis and joy. Let’s not overstate the case: it is extraordinary what we know about the basic neural mechanisms of, say, memory and vision. But not only is there no theory of the brain, there is no clear indication that such a theory is even possible. “What I still believe to be lacking,” says Marcus, “is a theory about how sets of neurons might come together to support something as complex as human cognition.”

It is disconcerting to find how nonchalant some neuroscientists seem to be about this. Many, apparently having learnt nothing from the experience with genomics, seem blithely confident that understanding and theory will somehow just fall out of the data, once we have collected enough of it. “It is a chicken and egg situation,” says one neuroscientist working on the Human Brain Project. “Once we know how the brain works, we'll know how to look at the data.” But collecting vast amounts of data without any notion of what you want to ask of it has never been a good way to do science.

Without doubt, formulating a “theory of the brain” is an immense challenge, probably one of the major challenges for science right now. You might imagine that it would be one of the key concerns of neuroscientists—after all, isn’t science supposed to be all about devising theories and then testing them with data? But the weird thing—I find it positively bizarre—is how much theory and hypothesis have been resisted in this field. Until recently they were given short shrift, and the one promising concept that was developed—so-called neural networks, which “learn” by reinforcing connections among their webs of artificial neurons—has turned out to be more valuable for artificial intelligence and “machine learning” than as a way to understand the human brain.
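By way of illustration only (a toy sketch, not anything used by the brain projects themselves), here is roughly what “learning by reinforcing connections” means in the simplest case. It is the classic perceptron rule, written here in Python with NumPy and an arbitrary AND task of my own choosing: a single artificial neuron strengthens or weakens its connection weights until it responds only to the right pattern of inputs.

```python
# Toy sketch: a single artificial neuron "learns" an AND function by
# reinforcing or weakening its two input connections (perceptron rule).
# Illustrative only; real brains and the big simulation projects are far richer.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=2)   # strengths of the two input connections
bias = 0.0
learning_rate = 0.1

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1], dtype=float)   # fire only when both inputs are active

for epoch in range(20):
    for x, target in zip(inputs, targets):
        output = 1.0 if weights @ x + bias > 0 else 0.0
        error = target - output
        # Connections that pushed the neuron towards the right answer are
        # strengthened; those that pushed it the wrong way are weakened.
        weights += learning_rate * error * x
        bias += learning_rate * error

print(weights, bias)   # after training, the neuron fires only for the input [1, 1]
```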

As Marcus points out, it seems reasonable to suppose that the brain is a kind of computer, but we still have no idea what kind of computer it is: how it manipulates and organizes information. The temptation has been to imagine that it must be a computer like the ones we build, using the same principles of computation that were outlined by pioneers of computational theory such as John von Neumann and Alan Turing. But that might not be true. According to artificial intelligence specialist Rodney Brooks of the Massachusetts Institute of Technology, “I believe that we are in an intellectual cul-de-sac, in which we model brains and computers on each other, and so prevent ourselves from having deep insights that would come with new models.” We don’t understand, for example, why the human brain finds easy some tasks that tax the best supercomputers (such as parsing text), and vice versa.

It could also be a mistake to imagine the brain as some optimized device that uses just a few fundamental principles. It has, after all, been cobbled together by evolution, and like so much else shaped that way, it only has to work “well enough”. Cognitive scientist V.S. Ramachandran has suggested that the brain might simply be a “bag of tricks”, or what Marcus has dubbed a “kluge”: a clumsy, makeshift solution that does the job but without any particular elegance. If that’s so, understanding the brain is going to be even harder than we might imagine. And it won’t be done simply by mapping it down to the last synapse.