The argument in Moral Minds is that we have evolved a moral instinct, a dedicated set of neural circuits designed to deliver moral verdicts of right and wrong. The foundation of this moral capacity is a universal moral grammar, a set of principles that assess the beliefs, desires and goals of an agent with respect to his or her actions, and the consequences for the welfare of others. What this thesis suggests is that much of our moral reasoning may be illusory, mediated instead by intuitive and unconscious processes that are, to some extent, immune to cultural influences.
Though Jonathan Derbyshire’s review captures much of my book Moral Minds quite accurately, there are some egregious errors that I would like to flag. I will quote directly from Derbyshire so that there is no misunderstanding.
Problem one. My moral sense test aims to probe moral intuitions by asking respondents how they imagine they would act in various hypothetical moral dilemmas. One such dilemma asked respondents to imagine themselves standing on a bridge from which they can see a tram hurtling towards five people stranded on the track. The only way to save their lives is to drop a heavy weight in front of the tram. A fat man also happens to be standing on the bridge. Should you push him to his death in order to stop the tram, or leave him, in which case those on the track will die?
Derbyshire writes that, “Hauser reports that only 10 per cent of respondents said it was morally permissible to push the fat man from the bridge. From this and similar results, he deduces a universal ‘intention principle,’ according to which intended harm is morally worse than harm that is foreseen but not directly intended. What is unclear, however, is why Hauser thinks data like these also license claims about the existence of a discrete moral faculty or ‘organ.’ It is one thing to articulate principles that help to make sense of our intuitive responses to moral dilemmas, but quite another to conclude from this that such principles must belong to a particular region of the brain.”
I did not claim that an understanding of the principles that guide moral judgement licenses inferences about neural localisation. What I did say was that an understanding of the principles that guide our judgements enables us to move into detailed studies of the brain, attempting to localise such psychological processes, chart their development and explore what happens when they break down. This is precisely what my students and I have done. For example, using functional magnetic resonance imaging, we recently published a paper showing that the right temporo-parietal junction is critically involved in dilemmas that entail information about a person’s beliefs. More importantly, the level of activation in this area is modulated by the outcome of an action. Thus, if a person believes he will do harm and his actions cause harm, the pattern of activation in this region is different than if the person believes he will do harm but fails to do so. We explored this area in part because of our interest in how beliefs, intentions, goals and actions figure into our moral judgements. Thus the theoretical and behavioural work motivates an exploration at the neural level.
Problem two. “Moral Minds is full of fascinating reports on psychological experiments, few of which offer any obvious support for Hauser’s ambitious claims about moral grammar.”
Moral Minds provides a novel way of looking at our moral psychology, building on the general insights of Chomsky, the more specific ideas expressed by Rawls, and most recently, the work of the philosophers John Mikhail and Sue Dwyer. Unlike Pinker’s The Language Instinct, which eloquently summarised not only Chomsky’s arguments about language but the mountain of evidence that had accumulated over the 40 years since his initial account, Moral Minds was exploratory. But half the battle in science is to ask new questions that are, we hope, sufficiently interesting for people to attempt to answer. When I began working on this problem three to four years ago, there were several questions that had never really been asked. For example, to this day we still have no evidence about critical periods for acquiring a moral system, whether the first moral system is acquired in a fundamentally different way from a second system acquired later in life, whether people can be “bi-moral,” and whether the neural representation of one moral system is different from the representation of two. Once the linguistic analogy is invoked, these become the obvious questions. Moral Minds has already set off a host of experiments, some of them from my own lab; interested readers may wish to download some of our recent papers.
Problem three. “And there is nothing here to suggest that this nascent discipline will conquer the ‘proprietary province of the humanities’ any time soon.”
I did not claim that a biology of morality will conquer the humanities. In fact, Derbyshire fails to quote the complete sentence, which reads: “Inquiry into our moral nature will no longer be the proprietary province of the humanities, but a shared journey with the natural sciences.” The natural sciences are coming into increasing contact with the social sciences and humanities. For me, and many of my colleagues, there is an appreciation that the best work will come from collaboration, one that recognises both that different disciplines have different strengths, and that each discipline brings some proprietary issues, some of which are open to interdisciplinary fertilisation. In the case of morality, the biological sciences can provide rich descriptions of how people judge moral dilemmas and how they act in such cases, but they cannot dictate what we ought to do. The field is abuzz, and the results are emerging quickly. I am glad to be alive to witness this renaissance, an inquiry into one of the most interesting aspects of human life.
Marc Hauser’s website
Buy Moral Minds at the Prospect bookshop
Participate in the moral sense test