The emerging moral psychology

Experimental results are beginning to shed light on the psychological foundations of our moral beliefs
April 26, 2008

Long thought to be a topic of enquiry within the humanities, the nature of human morality is increasingly being scrutinised by the natural sciences. This shift is now beginning to provide impressive intellectual returns. Philosophers, psychologists, neuroscientists, economists, primatologists and anthropologists, all borrowing liberally from each other's insights, are putting together a novel picture of morality—a trend that University of Virginia psychologist Jonathan Haidt has described as the "new synthesis in moral psychology." The picture emerging shows the moral sense to be the product of biologically evolved and culturally sensitive brain systems that together make up the human "moral faculty."

Hot morality

A pillar of the new synthesis is a renewed appreciation of the powerful role played by intuitions in producing our ethical judgements. Our moral intuitions, argue Haidt and other psychologists, derive not from our powers of reasoning, but from an evolved and innate suite of "affective" systems that generate "hot" flashes of feelings when we are confronted with a putative moral violation.

This intuitionist perspective marks a sharp break from traditional "rationalist" approaches in moral psychology, which gained a large following in the second half of the 20th century under the stewardship of the late Harvard psychologist Lawrence Kohlberg. In the Kohlbergian tradition, moral verdicts derive from the application of conscious reasoning, and moral development throughout our lives reflects our improved ability to articulate sound reasons for those verdicts. The highest stages of moral development are reached when people are able to reason about abstract general principles, such as justice, fairness and the Kantian maxim that individuals should be treated as ends and never merely as means.

But experimental studies give cause to question the primacy of rationality in morality. In one experiment, Jonathan Haidt presented people with a range of peculiar stories, each of which depicted behaviour that was harmless (in that no sentient being was hurt) but which also felt "bad" or "wrong." One involved a son who promised his mother, while she was on her deathbed, that he would visit her grave every week, and then reneged on his commitment because he was busy. Another scenario told of a man buying a dead chicken at the supermarket and then having sex with it before cooking and eating it. These weird but essentially harmless acts were, nonetheless, by and large deemed to be immoral.

Further evidence that emotions are in the driving seat of morality surfaces when people are probed on why they take their particular moral positions. In a separate study which asked subjects for their ethical views on consensual incest, most people intuitively felt that incestuous sex is wrong, but when asked why, many gave up, saying, "I just know it's wrong!"—a phenomenon Haidt calls "moral dumbfounding."

It's hard to argue that people are rationally working their way to moral judgements when they can't come up with any compelling reasons—or sometimes any reasons at all—for their moral verdicts. Haidt suggests that the judgements are based on intuitive, emotional responses, and that conscious reasoning comes into its own in creating post hoc justifications for our moral stances. Our powers of reason, in this view, operate more like a lawyer hired to defend a client than a disinterested scientist searching for the truth.

Our rational and rhetorical skill is also recruited from time to time as a lobbyist. Haidt points out that the reasons—whether good or bad—that we offer for our moral views often function to press the emotional buttons of those we wish to bring around to our way of thinking. So even when explicit reasons appear to have the effect of changing people's moral opinions, the effect may have less to do with the logic of the arguments than their power to elicit the right emotional responses. We may win hearts without necessarily converting minds.

A tale of two faculties

Even if you recognise the tendency to base moral judgements on how moral violations make you feel, you probably would also like to think that you have some capacity to think through moral issues, to weigh up alternative outcomes and make a call on what is right and wrong.

Thankfully, neuroscience gives some cause for optimism. Philosopher-cum-cognitive scientist Joshua Greene of Harvard University and his colleagues have used functional magnetic resonance imaging to map the brain as it churns over moral problems, inspired by a classic pair of dilemmas from the annals of moral philosophy called the Trolley Problem and the Footbridge Problem. In the first, an out-of-control trolley is heading down a rail track, ahead of which are five hikers unaware of the looming threat. On the bank where you're standing is a switch that, if flicked, will send the trolley on to another track on which just one person is walking. If you do nothing, five people die; flick the switch and just one person will die.

To flick or not to flick—what would you do? Like 90 per cent of people, you probably looked at the numbers (saving five and losing one, versus losing five) and decided to hit the switch. Now consider the Footbridge Problem: again, a trolley is heading towards five unsuspecting hikers, but this time there is no switch you can throw to save the hapless hikers. The only way to stop the trolley is to put a heavy weight in front of the impending threat. Unfortunately, the only sufficiently weighty object nearby is a large man standing on the footbridge with you. Do you push him in front of the trolley, and to his death, to save the five hikers? Or is this beyond the pale? Is inaction now mandated?

Even though the numbers are the same as before—losing one life or losing five—most people feel differently about this dilemma: now a clear majority (70–90 per cent in most studies) say it is not morally permissible to push the man, and those who say it is permissible tend to take longer to reach their decision than when reflecting on the Trolley Problem.

What is going on in the brain when people mull over these different scenarios? Thinking through cases like the Trolley Problem—what Greene calls an impersonal moral dilemma as it involves no direct violence against another person—increases activity in brain regions located in the prefrontal cortex that are associated with deliberative reasoning and cognitive control (so-called executive functions). This pattern of activity suggests that impersonal moral dilemmas such as the Trolley Problem are treated as straightforward rational problems: how to maximise the number of lives saved. By contrast, brain imaging of the Footbridge Problem—a personal dilemma that invokes up-close and personal violence—tells a rather different story. Along with the brain regions activated in the Trolley Problem, areas known to process negative emotional responses also crank up their activity. In these more difficult dilemmas, people take much longer to make a decision and their brains show patterns of activity indicating increased emotional and cognitive conflict within the brain as the two appalling options are weighed up.

Greene interprets these different activation patterns, and the relative difficulty of making a choice in the Footbridge Problem, as the sign of conflict within the brain. On the one hand is a negative emotional response elicited by the prospect of pushing a man to his death, saying "Don't do it!"; on the other are cognitive elements saying "Save as many people as possible and push the man!" For most people thinking about the Footbridge Problem, emotion wins out; in a minority, the utilitarian conclusion of maximising the number of lives saved prevails.

To further explore the causal role of emotions in generating a normal pattern of moral judgements, neuroscientist Antonio Damasio of the University of Southern California and colleagues have looked at the effect on moral judgement of damage to a part of the brain called the ventromedial prefrontal cortex (VMPC), a region previously implicated in processing negative social emotions. Faced with the Trolley Problem, patients with VMPC damage chose like most people with intact brains, opting to flick the switch to save five lives at the expense of one; but in the Footbridge Problem they took a coldly rational, utilitarian approach and said that it was morally permissible to push the large man in front of the trolley (using the same "one for five" calculus).

These findings fit with Greene's dual-processing view of competing affective–cognitive systems. Damage to the VMPC, which impairs the functioning of the emotional system, makes little difference in the Trolley Problem, which involves an impersonal action. But in the Footbridge Problem, patients with VMPC damage have no counterbalancing emotional voice to question the wisdom of rationality's precepts, and the utilitarian calculus carries the day.
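
To make this dual-process account more concrete, here is a minimal, purely illustrative sketch in Python. The function name, signal strengths and threshold are invented for the example rather than taken from Greene's or Damasio's work; the point is simply to show how a strong aversive response to up-close personal harm can override a utilitarian count of lives saved, and how silencing that response (as VMPC damage appears to do) leaves the utilitarian calculus unopposed.

```python
# Toy illustration of the dual-process account of moral judgement.
# All numbers and names are invented for demonstration; this is not a fitted model.

def judge(lives_saved: int, lives_lost: int,
          personal_harm: bool, emotion_intact: bool = True) -> str:
    """Return a verdict for a trolley-style dilemma."""
    # "Cognitive" signal: a simple utilitarian comparison of outcomes.
    utilitarian_pull = lives_saved - lives_lost            # e.g. 5 - 1 = 4

    # "Affective" signal: a strong aversion triggered by up-close personal
    # harm, absent in impersonal cases or if the emotional system is impaired.
    aversion = 6.0 if (personal_harm and emotion_intact) else 0.0

    return "permissible" if utilitarian_pull > aversion else "impermissible"


print(judge(5, 1, personal_harm=False))                       # Trolley: permissible
print(judge(5, 1, personal_harm=True))                        # Footbridge: impermissible
print(judge(5, 1, personal_harm=True, emotion_intact=False))  # VMPC patient: permissible
```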

A moral grammar

While there is a growing consensus that the moral intuitions revealed by moral dilemmas such as the Trolley and Footbridge problems draw on unconscious psychological processes, there is an emerging debate about how best to characterise these unconscious elements.

On the one hand is the dual-processing view, in which "hot," affectively laden intuitions that militate against personal violence are sometimes pitted against the ethical conclusions of deliberative, rational systems. An alternative perspective that is gaining increased attention sees our moral intuitions as driven by "cooler," non-affective general "principles" that are innately built into the human moral faculty and that we unconsciously follow when assessing social behaviour.

In order to find out whether such principles drive moral judgements, scientists need to know how people actually judge a range of moral dilemmas. In recent years, Marc Hauser, a biologist and psychologist at Harvard, has been heading up the Moral Sense Test (MST) project to gather just this sort of data from around the globe and across cultures.

The project is casting its net as wide as possible: the MST can be taken by anyone with access to the internet. Visitors to the "online lab" are presented with a series of short moral scenarios—subtle variations of the original Footbridge and Trolley dilemmas, as well as a variety of other moral dilemmas. The scenarios are designed to explore whether, and how, specific factors influence moral judgements. Data from 5,000 MST participants showed that people appear to follow a moral code prescribed by three principles:

• The action principle: harm caused by action is morally worse than equivalent harm caused by omission.

• The intention principle: harm intended as the means to a goal is morally worse than equivalent harm foreseen as the side-effect of a goal.

• The contact principle: using physical contact to cause harm to a victim is morally worse than causing equivalent harm to a victim without using physical contact.
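
One way to picture how these principles could operate is as comparative rules over the features of a harm scenario. The sketch below is a hypothetical encoding for illustration only; the class, feature names and simple counting scheme are assumptions, not part of the Moral Sense Test or Hauser's analysis. It predicts that, holding the amount of harm constant, a scenario with more of the aggravating features will be judged morally worse.

```python
# Hypothetical encoding of the action, intention and contact principles.
# Names and the scoring scheme are illustrative assumptions, not MST data.

from dataclasses import dataclass

@dataclass
class HarmScenario:
    by_action: bool    # harm caused by action rather than omission
    as_means: bool     # harm intended as a means rather than foreseen as a side-effect
    by_contact: bool   # harm delivered through direct physical contact

def badness(s: HarmScenario) -> int:
    """Count the aggravating features present; more features, worse verdict."""
    return sum([s.by_action, s.as_means, s.by_contact])

# Footbridge-style case: an action, harm as a means, direct contact.
footbridge = HarmScenario(by_action=True, as_means=True, by_contact=True)
# Trolley-style case: an action, but harm is a foreseen side-effect, no contact.
trolley = HarmScenario(by_action=True, as_means=False, by_contact=False)

print(badness(footbridge) > badness(trolley))   # True: predicted to be judged worse
```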

Crucially, the researchers also asked participants to justify their decisions. Most people appealed to the action and contact principles; only a small minority explicitly referred to the intention principle. Hauser and colleagues interpret this as evidence that some principles that guide our moral judgements are simply not available to, and certainly not the product of, conscious reasoning. These principles, it is proposed, are an innate and universal part of the human moral faculty, guiding us in ways we are unaware of. In a (less elegant) reformulation of Pascal's famous claim that "The heart has reasons that reason does not know," we might say "The moral faculty has principles that reason does not know."

The notion that our judgements of moral situations are driven by principles of which we are not cognisant will no doubt strike many as implausible. Proponents of the "innate principles" perspective, however, can draw succour from the influential Chomskyan idea that humans are equipped with an innate and universal grammar for language as part of their basic design spec. In everyday conversation, we effortlessly decode a stream of noise into meaningful sentences according to rules that most of us are unaware of, and use these same rules to produce meaningful phrases of our own. Any adult with normal linguistic competence can rapidly decide whether an utterance or sentence is grammatically valid or not without conscious recourse to the specific rules that determine grammaticality. Just as we intuitively know what we can and cannot say, so too might we have an intuitive appreciation of what is morally permissible and what is forbidden.

Marc Hauser and legal theorist John Mikhail of Georgetown University have started to develop detailed models of what such an "innate moral grammar" might look like. Such models usually posit a number of key components, or psychological systems. One system uses "conversion rules" to break down observed (or imagined) behaviour into a meaningful set of actions, which is then used to create a "structural description" of the events. This structural description captures not only the causal and temporal sequence of events (what happened and when), but also intentional aspects of action (was the outcome intended as a means or a side effect? What was the intention behind the action?).

With the structural description in place, the causal and intentional aspects of events can be compared with a database of unconscious rules, such as "harm intended as a means to an end is morally worse than equivalent harm foreseen as the side-effect of a goal." If the events involve harm caused as a means to the greater good (and particularly if caused by the action and direct contact of another person), then a judgement of impermissibility is more likely to be generated by the moral faculty. In the most radical models of the moral grammar, judgements of permissibility and impermissibility occur prior to any emotional response. Rather than driving moral judgements, emotions in this view arise as a by-product of unconsciously reached judgements as to what is morally right and wrong.
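
As a rough sketch of this architecture, and only a sketch, the pipeline can be imagined as follows. Everything here is an assumption made for illustration: the data structures, rules and function names are invented and do not reproduce Hauser's or Mikhail's formal models. The point is to show the proposed order of operations, with conversion rules producing a structural description, unconscious rules generating a permissibility judgement, and emotion arising only afterwards as a by-product.

```python
# Toy sketch of a "moral grammar" pipeline; structures and rules are
# illustrative assumptions, not the actual Hauser/Mikhail models.

from dataclasses import dataclass

@dataclass
class StructuralDescription:
    agent_acts: bool        # did an agent act, rather than merely allow harm?
    harm_as_means: bool     # was the harm intended as a means to a goal?
    direct_contact: bool    # was the harm delivered by direct physical contact?
    lives_lost: int
    lives_saved: int

def conversion_rules(event: dict) -> StructuralDescription:
    """Break an observed (or imagined) event into its causal and intentional structure."""
    return StructuralDescription(
        agent_acts=event["acts"],
        harm_as_means=event["means"],
        direct_contact=event["contact"],
        lives_lost=event["lost"],
        lives_saved=event["saved"],
    )

def moral_faculty(sd: StructuralDescription) -> str:
    """Compare the structural description against a store of unconscious rules."""
    if sd.agent_acts and sd.harm_as_means and sd.lives_lost > 0:
        return "impermissible"   # harm used as a means, even to a greater good
    return "permissible"

def emotional_response(judgement: str) -> str:
    """On the radical view, emotion follows the judgement as a by-product."""
    return "outrage" if judgement == "impermissible" else "equanimity"

event = {"acts": True, "means": True, "contact": True, "lost": 1, "saved": 5}
verdict = moral_faculty(conversion_rules(event))
print(verdict, emotional_response(verdict))      # impermissible outrage
```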

Just as an innate, universal grammar for languages doesn't entail that all people will speak the same language, the idea of a universal moral grammar should not be taken to imply that systems of ethics will be the same the world over. For example, the grammar for language might say that all grammatical sentences must contain a subject, a verb and an object, but leave open which order they must appear in. So some languages, such as English, settle on a subject–verb–object order, and others, such as Japanese, on subject–object–verb.

Hauser argues that a similar "principles and parameters" model of moral judgement could help make sense of universal themes in human morality as well as differences across cultures (see below). There is little evidence about how innate principles are affected by culture, but Hauser has some expectations as to what might be found. If the intention principle is really an innate part of the moral faculty, then its operation should be seen in all cultures. However, cultures might vary in how much harm as a means to a goal they typically tolerate, which in turn could reflect how extensively that culture sanctions means-based harm such as infanticide (deliberately killing one child so that others may flourish). These intriguing though speculative ideas await a thorough empirical test.

A full account of our moral psychology will also have to explain the variation in people's moral intuitions. Why do a minority of people think it is morally permissible to push the man in the Footbridge dilemma? Part of the answer is that people are likely to differ in the way their brains balance up affective or emotional responses with rational calculations. Such differences could result from as yet unidentified genetic factors or aspects of the environment and culture that tweak a common universal set of moral foundations.

Moral psychology also has to grapple with the problem of how and why societal norms of moral conduct change over time. Take attitudes towards homosexuals in developed western countries, which have changed enormously over the past 50 years. Arguments put forward by gay-rights advocates have undoubtedly played a part in shifting views about homosexuals. Yet the research on moral intuitions suggests that changes in the network of affective responses elicited by the thought of gays—driven by increased exposure to positive portrayals of gays in the media, for example—are likely to have been crucial to increasing acceptance.

Morality is a social phenomenon, and so it is little surprise that the way our social lives are structured—whether we live in small, tight-knit communities or large, anonymous cities—also sculpts our moral outlook. Haidt suggests that it is no coincidence that rural areas of the US, where communities are more bound together and interdependent, tend to be more conservative and religious, while urban dwellers tend to be more secular and liberal, with a focus on "individualising" ethics (see below). Viewed this way, the faultlines in the ongoing culture wars begin to come into focus, and the geographical distribution of red and blue states in the 2004 presidential election starts to make more sense.

Although current studies have only begun to scratch the surface, the take-home message is clear: intuitions that function below the radar of consciousness are most often the wellsprings of our moral judgements. Of course, the new wave of moral psychologists and neuroscientists are not the first to draw on the power of unconscious processes to explain the operation of the human mind. Freudian approaches have long stressed the role of unconscious thoughts, often with a sexual or aggressive edge to them, as drivers of both our behaviour and mental conflict. Yet the view of the non-conscious moral mind that is emerging bears little resemblance to the dark Freudian underworld of repressed memories, frustrated desires and unpalatable thoughts.

Despite the knocking it has received, reason is clearly not entirely impotent in the moral domain. We can reflect on our moral positions and, with a bit of effort, potentially revise them. An understanding of our moral intuitions, and the unconscious forces that fuel them, gives us perhaps the greatest hope of overcoming them.

Moral cultures

Studies in moral psychology have typically looked at two core areas of moral concern: harm and fairness. A number of researchers are now arguing that this focus needs to be expanded, and recent studies of morality across cultures are producing signs that issues of harm and fairness are just a subset of the moral world inhabited by the majority of the planet.

Psychologist and cultural anthropologist Richard Shweder of the University of Chicago has long argued that the moral concepts found throughout the world cluster into at least three overlapping ethical domains: the ethics of autonomy (individual rights and fairness), community (respect for tradition, authority and group loyalty) and divinity (sanctity and purity of the soul).

Different cultural traditions place a different emphasis on the importance of each domain in navigating the moral realm. So while protecting the domain of divinity is a ubiquitous concern in the Indian subcontinent, liberal, educated westerners typically place the ethics of autonomy at the centre of their moral worldview.

Recently, Jonathan Haidt, along with Jesse Graham and Craig Joseph, has suggested an expansion of Shweder's three domains into five foundations for morality. Haidt, Graham and Joseph propose that the world's diverse moralities are built on top of five psychological foundations, each primed to detect and react emotionally to transgressions of different moral concerns: harm to, and care of, individuals; justice and fairness; in-group loyalty; respect for authority and tradition; and issues of purity and sanctity.

Although we're all equipped with these psychological foundations, the ones that are actually built upon vary across and within cultures. Using questionnaires, Haidt and Joseph have found that self-identified liberals in the US typically draw on the harm/care and justice/fairness foundations in deciding moral issues. By contrast, religious and social conservatives generally take all five foundations to be relevant to their moral judgements. So when liberals and conservatives disagree, at stake is not just whose rights should be protected and how, but what counts as a legitimate moral concern in the first place. It is little wonder people so frequently talk past each other in the emotionally charged atmosphere of moral disputes.

There is also evidence that the different moral structures built on the universal five foundations are related to different emotional dispositions of conservatives and liberals. Recent work by David Pizarro and Yoel Inbar of Cornell University, in collaboration with Paul Bloom, a psychologist at Yale University, has explored how disgust, a morally charged emotion frequently evoked by transgressions in the domain of purity, relates to these competing social orientations. The researchers found that the more disgust-sensitive a person is, the more likely they are to hold conservative views on a range of social issues. Perhaps unsurprisingly, this link was strongest for the hot-button topics of abortion and gay marriage, views on which are heavily affected by attitudes to bodily purity.

None of this is to say that either the conservative or liberal outlook is inherently "better" than the other. Nor is it to suggest that you should expand the domain of your moral concerns if you're a liberal, or contract it if you're conservative, to settle on the "proper" moral domain. At the same time, however, such insights are not irrelevant to thinking about moral debates. A deeper appreciation of the roles the five proposed foundations for morality play in the moral visions of people with different backgrounds can be useful in its own right. Such an understanding has the potential to increase the sensitivity of disputants in moral debates to the mindset of those they seek to engage with or persuade. At the very least, it might be desirable to know thy enemy.
