Technology

Could we ever record our dreams?

We may not be able to keep records of our night-time imaginings, but we might be able to influence their content

October 23, 2014
Joseph-Benoit Guichard's Dreams of Love

In the movies—in films like Inception and Brazil—dreams look like, well, movies. But how much “visual” information do your dreams really contain? When you think about it, even familiar faces aren’t exactly “seen” so much as “sketched”— you know who they are because it’s your dream after all, not because you necessarily picture them in every detail. Do you actually “hear” what they say, or just somehow “know” it? Besides, dreams aren’t only or even primarily visual and aural. Often what strikes us most about a dream is the emotional aura, whether that’s fear, excitement or whatever. How could that ever be recorded in a “dream” home movie?

All the same, dreams can have extraordinarily precise content. Even if it’s a romanticized trope to be taken with a pinch of salt, people have found inspiration for songs, books and scientific theories in dreams. Perhaps you, like me, will have woken from dreams purely because you have become bored with their pedantic detail.

No one knows why we dream. One idea is that replaying events in the brain helps us to consolidate them in the memory—but of course many dreams are about things that never happened to us, or weird versions of things that did. Or maybe dreams serve no purpose, but are just the residual product of neurons firing at random while we sleep.

Whatever the case, the challenge of decoding dreams from our neural activity is in many ways no different to that of understanding anything the brain does. This activity can be measured in terms of the flows of blood (which are sensed by functional magnetic resonance imaging, fMRI), the patterns of electrical activity (detected by scalp sensors in electroencephalography, EEG) and the pulses of electrical current fired off by neurons, which can be detected directly with tiny invasive probes in the brain. Thanks to the new technique called optogenetics, it’s also possible now to make selected neurons emit light when they are active. All these methods are rather like watching the activity in a city from an aerial view of the traffic or from the patterns of lights in streets and buildings, and trying to deduce from those observations what the roles, intentions and motives of the inhabitants are.

Understanding how this activity generates thoughts and feelings is one of the primary aims of the brain-mapping projects currently firing up in the USA and Europe, called the BRAIN Initiative and the Human Brain Project respectively. They are both hampered by the fact that we have no good theories to guide us. So at present, making deductions about what the brain is thinking is largely a matter of looking for correlations. If the activity in one part of the brain increases (signaled by increased blood flow, say) when we are presented with a particular stimulus (such as a pleasurable experience or a mathematical puzzle), we may suspect that that brain region is associated with that task or feeling. If we see a characteristic spike in the brain’s electrical activity when we encounter a syntactical error in language, we can use the spike as a future diagnostic of syntax error processing, just as an electrical surge in a power grid was once a signal that the commercial break had begun in the evening TV schedule (because everyone went to make a cup of tea). We don’t need to know how one thing leads to the other, just that one signals the existence of the other.
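
As an illustration only (none of the numbers here come from a real experiment), the sketch below shows how bare-bones that correlation-hunting can be: invent a stimulus, invent a noisy brain signal that tends to rise with it, and ask whether the two co-vary.

```python
# Minimal sketch of the correlation-based "black-box" approach described above.
# Everything here is illustrative: the signal, the stimulus timings and the
# threshold are invented, not taken from any real study.
import numpy as np

rng = np.random.default_rng(0)

n_timepoints = 200
stimulus = np.zeros(n_timepoints)
stimulus[rng.choice(n_timepoints, size=40, replace=False)] = 1.0  # moments when the puzzle is shown

# Hypothetical activity in one brain region: noise plus a bump whenever the stimulus appears
region_activity = 0.8 * stimulus + rng.normal(0, 1.0, n_timepoints)

# The "deduction" is nothing more than a correlation between stimulus and signal
r = np.corrcoef(stimulus, region_activity)[0, 1]
print(f"correlation = {r:.2f}")
if r > 0.3:  # arbitrary illustrative threshold
    print("This region's activity co-varies with the stimulus: treat it as a marker.")
```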

This kind of “black-box” (perhaps here a grey box) approach has enabled researchers at the University of California at Berkeley, led by neuroscientist Jack Gallant, to reconstruct spooky sketches of what movies people are watching just by monitoring their brains using fMRI. Gallant claims that his work is “opening a window into the movies in our minds.” The Berkeley group began in 2008 by decoding still images. They recorded the fMRI signals in an area of the brain associated with the early stages of visual processing, while subjects looked at 1,750 different images. A mathematical procedure allowed them to crack the code linking a particular distribution of light and dark in the images to the corresponding fMRI signal in the brain. With this Rosetta stone, the researchers could then deduce, with an accuracy of typically between 70 and 90 per cent, which of 120 candidate images a subject was looking at purely from the fMRI data of his or her brain activity. “Our results suggest that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone”, they claimed. A Japanese team at the ATR Computational Neuroscience Laboratories in Kyoto, led by Yukiyasu Kamitani, reported a similar result around the same time.
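
To give a feel for the logic of that identification step, here is a toy sketch in Python. The linear model, the simulated “voxels” and all the numbers are assumptions made purely for illustration; it mimics the shape of the approach, not the Berkeley group’s actual analysis.

```python
# Toy sketch of the identification idea: learn a mapping from image features to
# (simulated) voxel responses, then guess which candidate image produced a new
# brain response by finding the best-matching prediction.
import numpy as np

rng = np.random.default_rng(1)

n_train, n_candidates, n_features, n_voxels = 300, 120, 50, 200

# Hypothetical image features (e.g. patterns of light and dark) and a hidden
# "true" feature-to-voxel mapping that stands in for the visual cortex
train_features = rng.normal(size=(n_train, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
train_responses = train_features @ true_weights + rng.normal(0, 3.0, (n_train, n_voxels))

# "Crack the code": least-squares fit of voxel responses from image features
learned_weights, *_ = np.linalg.lstsq(train_features, train_responses, rcond=None)

# Identification: which of 120 candidate images best explains a new brain scan?
candidates = rng.normal(size=(n_candidates, n_features))
true_index = 42
observed = candidates[true_index] @ true_weights + rng.normal(0, 3.0, n_voxels)

predicted = candidates @ learned_weights                      # predicted response per candidate
scores = [np.corrcoef(p, observed)[0, 1] for p in predicted]  # match each prediction to the scan
print("guessed image:", int(np.argmax(scores)), "- true image:", true_index)
```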

But moving images are harder. Since fMRI responds quite slowly to changes in neural activity, many researchers doubted that it could capture enough information about motion. Certainly, it seems impossible to use this approach to decode a movie frame by frame, like a series of stills. But Gallant’s team found a trick that allowed them to separate out brain activity in the visual cortex capturing motion information from that which encodes spatial information (still images). Using that method, they created a “reference library” for decoding subjects’ fMRI scans as they watched a movie clip, built from lots of other clips (5,000 hours’ worth in total) plucked at random from the internet, whose brain responses were predicted by the model. Using this library, a previously unseen movie clip could be roughly reconstructed from the viewer’s fMRI data by superimposing several of the “library” clips that give the best matches to the brain scans, each given an appropriate weighting in the mix. The results are fuzzy, but eerily recognizable when played alongside the actual movie being watched. Remember that what you’re seeing in these reconstructions is not exactly someone’s thoughts, but a blend of pre-existing movies that approximates them. All the same, it’s pretty freaky.
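
Again purely as a cartoon of the method, the snippet below blends the best-matching “library” clips, weighted by how well each one’s (here simulated) brain signature matches the observed scan. The clips, the similarity measure and the number of clips blended are all invented for illustration.

```python
# Toy sketch of the "library" reconstruction: score every library clip against the
# observed brain activity, keep the best matches, and blend them with weights
# proportional to their scores.
import numpy as np

rng = np.random.default_rng(2)

n_library, clip_pixels, n_voxels = 1000, 64 * 64, 150

library_clips = rng.random((n_library, clip_pixels))          # stand-ins for downloaded clips
library_signatures = rng.normal(size=(n_library, n_voxels))   # predicted brain response per clip

observed_scan = rng.normal(size=n_voxels)                     # fMRI data while watching the unseen clip

# Score each library clip by how well its signature matches the observed scan
scores = library_signatures @ observed_scan / (
    np.linalg.norm(library_signatures, axis=1) * np.linalg.norm(observed_scan)
)

top_k = 30
best = np.argsort(scores)[-top_k:]
weights = scores[best] - scores[best].min()                   # crude non-negative weighting
weights /= weights.sum()

# The "fuzzy" reconstruction: a weighted blend of the best-matching clips
reconstruction = weights @ library_clips[best]
print("reconstructed frame shape:", reconstruction.reshape(64, 64).shape)
```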

Yet will it work for dreams? “It is currently unknown whether processes like dreaming and imagination are realized in the brain in a way that is functionally similar to perception”, the Berkeley team says. “If they are, then it should be possible to use [our] techniques to decode brain activity during dreaming or imagination.” That work has already begun. In 2012, Kamitani’s group used fMRI to monitor brain activity in sleeping subjects, at the same time recording EEG signals to reveal the onset of sleep, which is when we start to dream. At this point the subjects were woken and asked what they were dreaming about, before being left to go back to sleep. This was repeated many times for several participants, and the Japanese team pooled some of the most common dream images reported: general items such as “car”, “man” or “woman”. They then compared the fMRI data for those cases with the signals obtained for the visual cortex when wide-awake subjects looked at still pictures of the corresponding items. They found that when the images matched, so did the patterns of brain activity. “By analysing the brain activity during the nine seconds before we woke the subjects”, Kamitani said, “we could predict whether a man is in the dream or not, for instance, with an accuracy of 75–80 per cent.”
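
The decoding step itself can be caricatured very simply: build an average awake-viewing pattern for a category, then ask whether the pre-waking activity looks more like that pattern than like its absence. The sketch below does exactly that with invented data; the templates, noise levels and decision rule are illustrative assumptions, not Kamitani’s actual classifier.

```python
# Rough sketch of the dream-decoding step: compare pre-waking brain activity with
# awake-viewing templates for a category ("man present" vs "man absent") and pick
# whichever pattern it resembles more.
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 100

# Hypothetical awake-viewing templates for the two cases
template_present = rng.normal(0.5, 1.0, n_voxels)
template_absent = rng.normal(-0.5, 1.0, n_voxels)

# Simulated activity from the nine seconds before waking, loosely resembling "present"
pre_waking = template_present + rng.normal(0, 1.5, n_voxels)

def similarity(a, b):
    """Correlation between two activity patterns."""
    return np.corrcoef(a, b)[0, 1]

if similarity(pre_waking, template_present) > similarity(pre_waking, template_absent):
    print("prediction: a man is in the dream")
else:
    print("prediction: no man in the dream")
```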

But so long as we rely on fMRI, it seems unlikely that reconstructing mental images will get beyond Gallant’s fuzzy movies or Kamitani’s simple identification of generic themes or images. You’re unlikely to know what your dream-lover looked like, or what they said. “fMRI”, say Gallant and his colleagues, “has relatively modest spatial and temporal resolution, so much of the information contained in the underlying neural activity is lost when using this technique.” To do better, we need to zero in on particular neurons.

This is now possible. Neuroscientists have discovered that specific memories seem to be encoded in specific groups of neurons, at least in flies and rats. This doesn’t mean that an entire memory is stored in a single region of the brain – different aspects, such as the “factual” and emotional content, are imprinted in different regions. But somehow these components remain bound together, which is why a single trigger—the taste of Proust’s madeleine cake, say—can recall both the events of the past and the feelings associated with them.

Using optogenetics, neuroscientists can label the particular neurons associated with a given memory, for example by attaching fluorescent proteins to them. In this way they can in effect take snapshots of the neural map of a recollection, such as a sense of fear evoked by a smell. Even more astonishingly, they can include switches in the labels that are flipped by heat or light, so that a given memory can be invoked at will, even in situations that shouldn’t stimulate it.

It is conceivable, at least, that by using these technologies to map out which neurons are linked to which learnt images, sounds or feelings, we might be able to reconstruct detailed thoughts and dreams from patterns of neural activity. “With sufficiently good technology you could do that”, says neuroscientist Steve Ramirez of the Massachusetts Institute of Technology. “It’s just a problem of technical limitations.”

Of course, we all have thoughts and dreams that we’d prefer to keep to ourselves. Are we heading towards the mind-crime world of Minority Report? “We believe strongly that no one should be subjected to any form of brain-reading process involuntarily, covertly, or without complete informed consent”, say the Berkeley team.

On the other hand, how about a bedside machine that feeds nice dreams into your head as you sleep, or that detects nightmares and edits them into something less scary? That doesn’t sound so bad. Gallant doesn’t think that inserting movies directly into the brain will happen in the foreseeable future, however. “There is no known technology that could remotely send signals to the brain in a way that would be organized enough to elicit a meaningful visual image or thought,” he says. But it’s not an absurd idea—already, Ramirez and his coworkers have used optogenetic light-switching of neurons to implant a “false memory” in the brains of mice. Maybe we won’t be able to script our dreams in detail—but it doesn’t seem impossible to imagine selecting their cast of characters, along with the feelings they elicit. Sweet dreams indeed.