New pursuit of Schrödinger’s cat

Quantum theory is reliable but fraught with paradox. Philip Ball asks if scientists will now find an object existing in two places at once
September 21, 2011
The 1927 Solvay conference on electrons and photons: back row, third from right, Werner Heisenberg, sixth from right, Erwin Schrödinger; middle row, from right, Niels Bohr, Max Born, Louis de Broglie and centre, Paul Dirac. Front row, second from left, Max Planck, next to him, Marie Curie, then Hendrik Lorentz and Albert Einstein. Of the 29 pictured, 18 won Nobel prizes, Curie in both physics and chemistry

Quantum mechanics is more than a hundred years old, but we still don’t understand it. In recent years, however, physicists have found a fresh enthusiasm for exploring the questions about quantum theory that were swept under the rug by its founders. Advances in experimental methods make it possible to test ideas about why objects on the scale of atoms follow different rules from those that govern objects on the everyday scale. In effect, this becomes an enquiry into the sense in which things exist at all.

In 1900 the German physicist Max Planck suggested that light—a form of electromagnetic waves—consists of tiny, indivisible packets of energy. These particles, called photons, are the “quanta” of light. Five years later Albert Einstein showed how this quantum hypothesis explained the way light kicks electrons out of metals—the photoelectric effect. It was for this, not the theory of relativity, that he won his Nobel prize.
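Put in symbols, both ideas are strikingly compact. In the standard textbook shorthand (a sketch added here, not part of Ball's text), with h as Planck's constant:

```latex
% Planck's relation: light of frequency \nu comes in indivisible
% packets (photons), each carrying energy
E = h\nu
% Einstein's photoelectric equation: an electron kicked out of a metal
% leaves with at most the photon's energy minus the "work function"
% \phi, the energy cost of escaping the metal's surface
E_{\mathrm{kin}} = h\nu - \phi
```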

The early pioneers of quantum theory quickly discovered that the seemingly innocuous idea that energy is grainy has bizarre implications. Objects can be in many places at once. Particles behave like waves and vice versa. The act of witnessing an event alters it. Perhaps the quantum world is constantly branching into multiple universes.

As long as you just accept these paradoxes, quantum theory works fine. Scientists routinely adopt the approach memorably described by the Cornell physicist David Mermin as “shut up and calculate.” They use quantum mechanics to calculate everything from the strength of metal alloys to the shapes of molecules. Routine application of the theory underpins the miniaturisation of electronics, medical MRI scanning and the development of solar cells, to name just a few burgeoning technologies.

Quantum mechanics is one of the most reliable theories in science: its prediction of how light interacts with matter is accurate to the eighth decimal place. But the question of how to interpret the theory—what it tells us about the physical universe—was never resolved by founders such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger. Famously, Einstein himself was unhappy about how quantum theory leaves so much to chance: it pronounces only on the relative probabilities of how the world is arranged, not on how things fundamentally are.

Most physicists accept something like Bohr and Heisenberg’s Copenhagen interpretation. This holds that there is no essential reality beyond the quantum description, nothing more fundamental and definite than probabilities. Bohr coined the notion of “complementarity” to express the need to relinquish the expectation of a deeper reality beneath the equations. If you measure a quantum object, you might find it in a particular state. But it makes no sense to ask if it was in that state before you looked. All that can be said is that it had a particular probability of being so. It’s not that you don’t “know,” but rather that the question has no physical meaning. Similarly, Heisenberg’s uncertainty principle is not a statement about the limits of what we can know about a quantum particle’s position, but places bounds on the whole concept of position.

Einstein attacked this idea in a thought experiment in which two quantum particles were arranged to have interdependent states, whereby if one were aligned in one direction, then the other had to be aligned in the opposite direction. Suppose these particles move many light years apart, and then you measure the state of one of them. Quantum theory insists that this instantly determines the state of the other. Again, it’s not that you simply don’t know until you measure. It is that the state of the particles is literally undecided until then. But this implies that the effect of the measurement is transmitted instantly, and therefore faster than light, across cosmic distances to the other particle. Surely that’s absurd, Einstein argued.
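A minimal simulation sketch makes the prediction concrete (it illustrates the quantum statistics rather than deriving them): measure the two particles along axes separated by an angle theta, and at theta = 0 the results always disagree.

```python
import math, random

# Illustrative sketch, not from the article: sampling the outcomes
# quantum theory predicts for a "singlet" pair of spin-half particles
# measured along axes separated by an angle theta. Einstein's scenario
# is theta = 0, where the two results are always opposite.

def measure_singlet(theta):
    a = random.choice([+1, -1])  # first particle: either result, 50/50
    # Quantum theory: the distant partner disagrees with probability
    # cos^2(theta/2), however far away it is.
    b = -a if random.random() < math.cos(theta / 2) ** 2 else a
    return a, b

# Same axis: perfect anti-correlation in every single run.
print(all(a == -b for a, b in (measure_singlet(0.0) for _ in range(1000))))
```

A pre-agreed script could mimic this same-axis anti-correlation, which is why the decisive experiments (Bell tests) compare measurements along different axes, where the quantum statistics cannot be reproduced by any locally arranged answers.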

But it isn’t. Experiments have now established beyond doubt that this instantaneous action at a distance, called entanglement, is real—that’s just how quantum mechanics is.

This is not an abstruse oddity. Entanglement is exploited in quantum cryptography, where a message is encoded in entangled quantum particles, making it impossible to intercept and read in transit without the tampering being detected. Entanglement is also used in quantum computing, where the ability of quantum particles to exist in many states at once allows huge numbers of calculations to be conducted simultaneously, greatly accelerating the solution of mathematical problems. Although these technologies are in early development, already there are signs of commercial interest. Earlier this year the Canadian company D-Wave Systems announced the first sale of a quantum computer to Lockheed Martin, while fibre-optic-based quantum cryptography was used (admittedly more for publicity than for extra security) to transmit ballot information in the 2007 Swiss elections.
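The bookkeeping behind that parallelism is easy to state, if hard to picture: describing n quantum bits takes 2 to the power n numbers, as a one-line sketch shows.

```python
# Back-of-the-envelope bookkeeping behind "many states at once": a
# register of n qubits is described by 2**n complex amplitudes, one per
# classical bit-string, so the description doubles with every qubit.
for n in (1, 10, 50, 300):
    print(f"{n:>3} qubits -> {2**n:.3g} amplitudes")
# 300 qubits already need more amplitudes (~2e90) than there are atoms
# in the observable universe (~1e80).
```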

“Discussions of relations between information and physical reality are now of interest because such questions can have practical implications,” says Wojciech Zurek, a quantum theorist at the Los Alamos National Laboratory in New Mexico.

The quantum renaissance hinges on experimental innovations. Until the 1970s, experiments on quantum fundamentals relied mostly on indirect inference. But now it’s possible to make and probe individual quantum objects with great precision. Several technological advances have contributed: the advent of laser light composed of photons of identical, precise energy; the ability to make measurements with immense precision in time, space and mass; methods to hold atoms in electrical and magnetic traps (the subject of the 1997 Nobel prize in physics); and the manipulation of light with fibre optics (helped by advances in optical telecommunications).

But even if you accept the paradoxical aspects of the theory and just use the maths, the fundamental questions won’t go away. For example, if the act of measurement turns probabilities into certainties, how exactly does it do that? Physicists have long spoken of measurements “collapsing the wavefunction,” which expresses how the smeared-out, wave-like mathematical entity encoding all possible quantum states (the wavefunction) becomes focused into a particular place or state. But this was seen largely as metaphor. The collapse had to be imposed by fiat, since it didn’t feature in the mathematical theory.
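In the standard notation, the before-and-after looks like this, with the collapse step inserted by hand just as the paragraph describes:

```latex
% Before measurement: the wavefunction is a weighted sum (superposition)
% over the possible outcome states |i>, with complex coefficients c_i
\lvert \psi \rangle = \sum_i c_i \,\lvert i \rangle
% Measurement "collapses" this to a single definite state |i>, with
% probability given by the Born rule -- a postulate imposed by fiat,
% not a consequence of the theory's own equations of motion
P(i) = \lvert c_i \rvert^{2}
```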

Many physicists, such as Roger Penrose of Oxford University, believe that this collapse is a real physical event, similar to radioactive decay. If so, it requires an ingredient that lies outside current quantum theory. Penrose argues that the missing element is gravity, and that we’d understand wavefunction collapse if only we could marry quantum theory to general relativity, one of the major lacunae in contemporary physics.

Physicist Dirk Bouwmeester of the University of California at Santa Barbara and his co-workers hope to test that idea by placing tiny mirrors in quantum “superposition” states, meaning that they are in several places at once, and then watching the wavefunction collapse into a single location, triggered by a measurement in which photons are reflected from the mirrors. Ignacio Cirac and Oriol Romero-Isart at the Max Planck Institute for Quantum Optics in Garching, Germany, recently outlined a method for using light to trap and probe objects of about a nanometre in size, containing thousands or millions of atoms, and place them in superposition states, an arrangement that would allow tests of such wavefunction-collapse theories.

Wavefunction collapse is one reason why the world doesn’t follow quantum rules all the way up from the nano world to that of our everyday experience. If it did, these rules wouldn’t seem counter-intuitive. It’s only because we’re used to our coffee cups being on our desk or in the dishwasher, but not both at once, that it seems unreasonable for photons or electrons to behave in this way.

At some scale, the quantum-ness of the microscopic world gives way to classical, Newtonian physics. Why? The generally accepted answer is the process of decoherence. Crudely speaking, interactions of a quantum entity with its teeming environment act like a measurement, collapsing superpositions into a well-defined state. So large objects obey classical physics not because of their size per se but because they contain more particles and thus experience more interactions, and so decohere almost instantly.
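A toy calculation, with an invented overlap figure, shows why size enters only indirectly:

```python
import math

# A crude caricature of decoherence, not a real physical model: assume
# each environmental interaction (a stray photon or gas molecule, say)
# records a little which-state information, shrinking the superposition's
# tell-tale coherence by an assumed factor of 0.99. Coherence then falls
# exponentially with the number of interactions, and big, warm, busy
# objects rack up astronomical numbers of them.

overlap = 0.99  # illustrative coherence left after each interaction
for n in (1, 100, 10_000, 1_000_000):
    log10_coherence = n * math.log10(overlap)  # logs avoid float underflow
    print(f"{n:>9} interactions -> coherence ~ 1e{log10_coherence:.1f}")
# A million interactions leave coherence around 1e-4365: for everyday
# objects, superpositions are effectively erased the instant they form.
```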

But that doesn’t fully resolve the issue—as shown by Schrödinger’s famous cat. In his thought experiment, Schrödinger imagined a cat that is poisoned, or not, depending on the outcome of a quantum event. The experiment is concealed inside a box. Since the outcome of the event is undetermined until observation collapses the wavefunction, quantum theory seemed to insist that, until the box is opened, the cat would be both alive and dead. Physicists used to evade that absurdity by insisting that somehow the bigness of the cat would bring about decoherence even without observation, so that it would be either alive or dead but not both.
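In the usual shorthand, the paradox fits on one line: the unobserved cat’s state is a genuine superposition,

```latex
\lvert \text{cat} \rangle = \tfrac{1}{\sqrt{2}}
  \left( \lvert \text{alive} \rangle + \lvert \text{dead} \rangle \right)
% not merely "alive or dead, we just don't know which" -- that would be
% a classical mixture, which has different measurable consequences.
```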

Yet one can imagine suppressing decoherence by creating a Schrödinger cat experiment that is well isolated from its surroundings. Then what? Ask old-school “shut up and calculate” physicists if the cat can be simultaneously alive and dead, and they are likely to assert that this will still be censored somehow or other. But less conservative physicists may well now answer “why not?”

Perhaps we can simply do the experiment. Suppressing decoherence for anything the size of a cat remains all but impossible, but a microscopic “cat” is more amenable to isolation. Cirac and Romero-Isart have proposed an experiment in which the cat is replaced by a virus, held in a light trap and coaxed by laser light into a quantum superposition of states. They say it might even work for tiny aquatic animals called tardigrades or water bears, which, unlike viruses, are unambiguously living or dead. It’s not obvious how to set up an experiment like Schrödinger’s, but simply placing a living creature in two places at once would be mind-boggling enough.

For whatever reason, the fact is that everyday objects are always in a single state and we can make measurements on them without altering that state: we have never sighted a Schrödinger cat. Physicists Anthony Leggett, a Nobel laureate at the University of Illinois, and Anupam Garg of Northwestern University, also in Illinois, call these conditions macrorealism. But is our classical world truly macrorealistic, or does it just look that way? Leggett and Garg showed in theory how to distinguish a macrorealistic world from one that isn’t. Such tests are even tougher to conduct than those on wavefunction collapse, says Romero-Isart, but he thinks that his proposed experiment on nano-objects could make a start.
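Their criterion can be stated compactly. In one standard form of a Leggett-Garg test (the general idea, not Romero-Isart’s specific protocol), a two-valued quantity Q = ±1 is measured at three successive times, and the correlations C_ij between the results at times t_i and t_j are combined:

```latex
% Any macrorealistic world -- one where Q always has a definite value
% and a gentle measurement merely reveals it -- must satisfy
K = C_{12} + C_{23} - C_{13} \le 1
% Quantum mechanics allows K to reach 3/2, so measuring K > 1 would
% show that the system is not macrorealistic.
```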

Zurek, meanwhile, has developed a theory of how a quantum world can look classical without really being so. Whereas measuring a quantum system will alter it, classical systems can be probed without changing them: fifty people can read this text without altering it. In Zurek’s scheme, this may be true of quantum systems too, if they can leave imprints on their environment, which we then observe. Each observer sees (and thereby destroys) an imprint. Because each imprint is the same, they all agree on the properties of the system. But only certain quantum states can create many identical imprints. In a sense these robust states are “selected” in a quasi-Darwinian way, and so out of all the possible quantum attributes of the system, these are the ones we ascribe to the object. It’s as though a ripe apple creates redness imprints, which enable us to agree that it is red, while also possessing other quantum attributes that can’t be assigned a definite value in this way. Yet because the number of imprints in this quantum Darwinism model is finite (though continually multiplying), they could in principle become used up, after which the object would look different. It sounds crazy, but it is less implausible than it seems.
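A deliberately crude classical cartoon (a toy sketch, not Zurek’s actual quantum formalism) captures the bookkeeping: observers agree, each reading consumes an imprint, and the supply is finite.

```python
import random

# Toy cartoon of the imprint bookkeeping in "quantum Darwinism": a
# robust "pointer" property is copied redundantly into the environment;
# observers consume different imprints yet all agree, and the stock of
# imprints, while large, is finite.

pointer_property = "red"                    # the robust, copyable attribute
imprints = [pointer_property] * 50          # redundant records in the environment

readings = []
for _ in range(10):                         # ten independent observers
    i = random.randrange(len(imprints))
    readings.append(imprints.pop(i))        # each reading destroys one imprint

print(set(readings))                        # {'red'}: everyone agrees
print(len(imprints), "imprints remaining")  # 40: the supply is finite
```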

Ideas like this, however bizarre they might seem, can be made consistent with current quantum theory precisely because that theory leaves so much unanswered. It shouldn’t be long, however, before we can put them to the test. The days of having to “shut up and calculate” may be numbered.