No kidding

New brain scanners claim to distinguish truth from lies. Should we trust them?
September 23, 2009

A murder case in India last year attracted unusual scientific attention. A 24-year-old woman called Aditi Sharma was accused of killing her former fiancé, Udit Bharati. Aditi had married another man, but she had met Udit at a McDonald’s, where she allegedly laced his food with arsenic.

After she was arrested, Aditi agreed to take a brain-scanning test to prove her innocence. Investigators placed 32 electrodes on her head, then read the allegations to her in the first person—“I bought arsenic”; “I met Udit at McDonald’s”—along with neutral statements like “The sky is blue.” Aditi failed the test—according to prosecutors, the parts of her brain where memories are stored buzzed when the crime was recounted. The judge deemed this proof of “experiential knowledge” of the crime, and she and her husband were sentenced to life in prison. It was the first time anywhere in the world that this technology had been used to secure a conviction.

A foolproof—or rather liar-proof—machine has long been a law enforcement dream. For much of the 20th century, the polygraph held out such a promise. But it doesn’t detect lies, exactly: it measures a suspect’s physiological responses to stress, such as increases in blood pressure or heart rate. And so it can produce a guilty reading on innocent suspects who are merely nervous, and is vulnerable to the well-prepared liar who can control their emotions.

In the past 15 years, however, new brain-scanning technologies have emerged that promise to let investigators get at the truth by peering directly into a suspect’s head. Hopes are pinned above all on functional magnetic resonance imaging, or fMRI. An fMRI machine looks like any hospital MRI scanner: a white metal tube into which the patient is inserted on a stretcher. Its magnets make certain molecules in the brain resonate, allowing it to track the flow of oxygenated blood to whichever regions need it most at any moment; software then conjures colourful images of the brain and synchronises them with whatever the subject is doing. In theory at least, it allows us to watch the brain at work.
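(For readers who want the nuts and bolts: the analysis behind those colourful images boils down to asking, for each small chunk of brain, whether its signal rises and falls in time with the task. The Python sketch below illustrates the idea for a single voxel. The block design, response shape and noise level are all invented for illustration; real analysis pipelines are far more elaborate.)

```python
import numpy as np

# Toy illustration of how fMRI "activation" is found: correlate a
# voxel's blood-oxygen (BOLD) signal with the timing of a task.
# Every number below is invented for illustration.

rng = np.random.default_rng(0)
n_scans, tr = 120, 2.0                 # 120 brain volumes, one every 2 seconds

# Task design: 20-second blocks of task alternating with rest.
task = (np.arange(n_scans) // 10) % 2  # 0 = rest, 1 = task

# Blood flow lags neural firing by a few seconds, so smear the design
# with a crude gamma-shaped haemodynamic response function.
t = np.arange(0, 30, tr)
hrf = t**5 * np.exp(-t)
hrf /= hrf.sum()
expected = np.convolve(task, hrf)[:n_scans]

# A fake voxel: the expected response buried in scanner noise.
voxel = 0.5 * expected + rng.normal(0, 0.3, n_scans)

# The voxel is painted "active" if its signal tracks the task.
r = np.corrcoef(expected, voxel)[0, 1]
print(f"correlation with the task: {r:.2f}")   # comfortably above zero
```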

Daniel Langleben, a psychiatrist at the University of Pennsylvania, is at the forefront of this potential revolution. He became curious about deception while studying attention deficit disorder (ADD). Children with ADD are useless liars because of their habit of blurting out the truth; Langleben linked this to their lack of impulse control and surmised that lying involves extra mental effort. In 2000, he carried out his first deception experiment using fMRI, in which subjects were told to lie deliberately. When lies were told, the fMRI images showed increased activity in certain regions of the brain, including the prefrontal cortex, which is involved in reasoning and self-control. This chimed with his supposition that lying requires cognitive work. Although the experimental results were difficult to interpret, it appeared that an fMRI scan could detect a difference between a lie and the truth.

Since then, funds have been ploughed into researching the lie-detecting possibilities of this technology. Langleben is one of hundreds of scientists funded by the Pentagon and the US Department of Homeland Security. There are also possible civilian applications: schools might use scanners to check for plagiarism; airports could screen people’s brains along with their baggage.

A US company called No Lie MRI already sells brain scans to those seeking to prove their innocence in court, or to vet potential life partners. Scott Faro, a radiologist at Temple University Hospital in Pennsylvania, told Wired magazine that he sees fMRI as a boon for society at large. “If this is a more accurate test,” he said, “I don’t see any moral issues at all.”

Not everyone is so relaxed. The Cornell Law Review has argued that “fMRI is one of the few technologies to which the now clichéd moniker of ‘Orwellian’ legitimately applies.” And bioethicists such as Paul Root Wolpe—a colleague of Langleben’s—are grappling with the tricky questions these new methods pose: should the results stand as evidence in court, and if so how should they be classified—as akin to a DNA sample, or as testimony? Earlier this year, the neuroscientist Edward Vul published a widely noticed paper in Perspectives on Psychological Science claiming that half of the fMRI studies he analysed contained “voodoo correlations” between behaviour and brain activity. And a growing number of neuroscientists argue that the brain is too complex to be reduced to “mapped” areas of neural activity.
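Vul’s statistical complaint is easy to demonstrate with a toy simulation. If you trawl thousands of noisy brain measurements for the one that best tracks a behavioural score, and then report that same correlation, the number will look impressive even when the data are pure noise. A sketch in Python, in which every figure is invented:

```python
import numpy as np

# Toy demonstration of the "voodoo correlation" problem: search many
# noisy measurements for the one that best matches behaviour, then
# report that same correlation. Here the "brain data" is pure noise.

rng = np.random.default_rng(42)
n_subjects, n_voxels = 20, 5000

behaviour = rng.normal(size=n_subjects)            # e.g. a personality score
voxels = rng.normal(size=(n_voxels, n_subjects))   # fake activity, no real signal

# Pearson correlation of every voxel with behaviour, via z-scores.
b = (behaviour - behaviour.mean()) / behaviour.std()
v = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
correlations = v @ b / n_subjects

best = np.argmax(np.abs(correlations))
print(f"best voxel's correlation: {correlations[best]:+.2f}")   # typically ~0.7

# The honest check: measure that same voxel in a fresh group of subjects.
# With no true effect, the replication hovers near zero.
new_behaviour = rng.normal(size=n_subjects)
new_voxel = rng.normal(size=n_subjects)
print(f"same voxel, new subjects: {np.corrcoef(new_voxel, new_behaviour)[0, 1]:+.2f}")
```

The impressive-looking number is an artefact of the selection itself, which is why critics insist that the voxels be chosen on one set of data and tested on another.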

Seductive though the idea is, then, a true understanding of deception will probably require a much fuller knowledge of functions such as memory and perception. We are unlikely to ever find a single “lie zone.”

The conviction of Aditi Sharma caused disquiet among scientists everywhere. But earlier this year, Sharma and her husband, Pravin Khandelwal, were granted bail by the Bombay high court, which cited the lack of material evidence linking them to the crime. Their case may take years to be heard again. And if their names are to be cleared, one of India’s courts must conclude that Sharma’s brain scan failed to find the truth—or the lie—inside her head.