Quantum computers will take us beyond the binary age, into a perplexing new era. And they're already here.
By Jay Elwes / December 11, 2017
Each day, humans create 2.5 quintillion bytes of data. A byte is the amount of data needed by a computer to encode a single letter. A quintillion is one followed by 18 zeros. We float on an ocean of data.
You’d arrive at an even bigger number if you put it in terms of “bits”, the ultimate basic building block out of which every wonder of the digital age is built. A bit is simply a one or a zero or, equivalently, a single switch inside an electronic processor that must be either on or off. Put eight in a row, and you’ve got enough combinations to label and store every character on your keyboard—there are thus eight bits to the byte.
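The eight-to-the-byte arithmetic is easy to check for yourself. A quick sketch (the letter “Q” is chosen purely as an example):

```python
# Eight switches, each on or off, give 2**8 = 256 combinations --
# enough to assign a distinct pattern to every keyboard character.
print(2 ** 8)  # 256

# The letter "Q" is stored as one byte: eight ons and offs.
print(format(ord("Q"), "08b"))  # 01010001
```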
These days your newspapers, your tax records, your shopping list and perhaps your love life are nothing more than a long series of “ons” and “offs” generated by the digital processors that lurk in your phone, your car, or your TV. The correct sequence of ones and zeros is all that computers need in order to control the traffic lights at the end of your street, run a nuclear power station, or find you a date for next Friday night. From one perspective, they are simply doing—on a vast scale—the tallying and reckoning we have always done on our fingers: on our digits.
The “digital age” is a colossal achievement of human ingenuity. But this world of ones and zeros is not an end state. Humankind has passed through other ages before: bronze, iron, the era of steam and then of the telegraph, each of which constituted a revolution, before being brought to a close by some further advance of human ingenuity. And that raises a question—if our present digital age will pass just like all the rest, what might come after it?
We are starting to see the answer to that question, and it looks as though the successor to the age of the digital computer will be a startlingly new kind of device—the quantum computer.
In 1981, Richard Feynman, the Nobel prize-winning physicist, presented a paper at the California Institute of Technology with the title “Simulating Physics with Computers.” “What kind of computer are we going to use to simulate physics?” Feynman asked, and he chased that first question with a second: “what kind of physics are we going to imitate?” The answer to that came clear as a bell. “Nature isn’t classical, dammit,” said Feynman, “and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem because it doesn’t look so easy.”
Feynman had it dead right. What he was proposing was not easy. Instead of a computer that ran according to the laws of classical physics—such as all conventional computers—he was proposing a computer that ran according to the most advanced picture of the physical world known to science: quantum mechanics. Feynman was putting forward the idea of a computer that ran according to a completely different set of scientific principles. It was a stunning suggestion. The laws of quantum mechanics relate to the behaviour of subatomic particles and packets of energy. The idea that quantum mechanical states could be harnessed and somehow used for computation was deeply provocative.
A quantum computer would work in a completely different way to the classical kind. Instead of “bits”, it would use “qubits,” that is, quantum bits. Feynman proposed that a machine of this sort would allow scientists to model quantum states and gain new insights into the behaviour of atoms and particles. But there were possibilities beyond pure science. Quantum computers would be able to carry out operations at many times the speed of traditional computers. Not only that, they might be able to do things that a conventional computer could not do at all.
All of which would have struck Feynman’s 1981 audience as pretty far-out. Even now the idea of a quantum computer has a tang of science fiction about it. Which it should not, because quantum systems already exist. You can go online and use one right now. In May 2016, IBM debuted its “Quantum Experience,” which allows users to access a quantum system through a cloud application and run algorithms and experiments. In the summer of 2017, IBM upgraded the processor behind the application and in November announced plans for an even more powerful device.
IBM is not alone. Google is currently experimenting with an even more powerful quantum chip, and has plans to upgrade it further. In April 2017, a number of Google’s senior researchers released a paper called “Characterizing Quantum Supremacy in Near-Term Devices.” In that abstruse-sounding title, the phrase “quantum supremacy” is the most significant. It denotes the moment when a quantum computer can perform a task that no classical computer could complete in any feasible amount of time. The paper’s authors, who include Hartmut Neven, Engineering Director at Google and the founder and manager of its Quantum Artificial Intelligence Lab, wrote that “quantum supremacy can be achieved in the near-term.”
The potential of quantum computer technology is enormous, and billions of dollars are being poured into research by companies including not only Google and IBM but also Facebook and Microsoft, by universities in the US, UK, Australia and elsewhere, and by the Chinese government (which has invested heavily in developing quantum communication systems). This brings with it a huge freight of complex challenges and questions. The most central question of all, aside from how you build one, is what a quantum computer would actually do. The answers are not straightforward, and involve negotiating a dense mash of computer science, physics, mathematics and philosophy.
Scott Aaronson is a Professor of Computer Science at the University of Texas at Austin. He is a leading authority on quantum computing and I spoke to him extensively in researching this article. “If you are interested in what is the ultimate limits of what we could compute or what we could know,” he said, “then in some sense you need to know something about quantum computing. Because this is the most powerful kind of computation based on the understood laws of physics.”
Humans have always looked at the heavens. The first Babylonian star catalogues date from 1,200 BC. The Egyptians used astronomy to calculate the timing of the flooding of the Nile and it was the Greek thinker, Aristarchus of Samos, who in the third century BC first suggested that the sun was at the centre of the solar system. Over a thousand years passed before that idea entered western science, when Copernicus made his pronouncements on the heliocentric model. In the seventeenth century, Isaac Newton set out the law of universal gravitation, an immense moment of intellectual progress which gave such a powerful picture of how the universe behaved that it remained broadly unchanged for nearly two hundred years.
The problem came towards the end of the nineteenth century and the beginning of the twentieth, when science focused on the question of the atom. A central nucleus surrounded by a group of other particles looked intuitively very similar to the Newtonian picture of a planet orbiting a star. Surely the answer was to take the old equations and apply them to the atom. When this was tried, it didn’t work. The atom could not be explained using classical physics. A new analysis was needed.
It was this need to explain the behaviour of subatomic particles that drove the development of quantum theory. Worked out by physicists in the first decades of the twentieth century, it remains an enormously powerful framework for describing the behaviour of subatomic particles. It was also a deeply counter-intuitive theory, nothing like the mechanics of the planets. For one thing, the solar system could be observed by anyone with a telescope: you could see it. The quantum world is different. One of its crowning oddities is that you are not allowed to look at it. If you look at a quantum state, you find that it isn’t there. Now, how do you make a computer out of that?
Imagine that you can see your face partly reflected in a window. Only some of the particles of light—called photons—are reflected back at you from its surface; most pass straight through. You have a partial reflection. So imagine the following experiment, which was described to me by Sandu Popescu, a professor at the HH Wills Physics Laboratory in Bristol, and which takes you towards the answer of how it is possible to do calculations using quantum particles.
It goes like this: we fire a single photon, that is, one particle of light, at a piece of glass. (It is possible to do a near-identical experiment using different particles, say, electrons.) We can adjust the reflectivity of the glass so that there is a 50 per cent chance that the photon will be reflected and a 50 per cent chance that it will go straight through. See the diagram below:
A caveat—as there is only a single photon, and as light doesn’t bounce off light, you don’t see the photon. And if you managed to put your eye in the path of the photon in order to see it, your eye would absorb the photon, meaning you would have taken it out of the experiment. Intuitively, you imagine that “looking” is passive, but in quantum mechanics, it is an “active” thing. It disturbs what you are looking at. You can’t “see” quantum phenomena.
Next, we put a mirror in the path of the reflected photon, so that it is reflected again, this time down to a second sheet of glass. The photon will then, just as before, either go through that second sheet of glass, or be reflected upwards from its surface, like this:
So if we release, say, 4,000 photons from the emitter on the left, then approximately 2,000 of them will go straight through the first glass surface and vanish, but 2,000 will be reflected up towards our mirror. Those 2,000 are then bounced back down to the second glass surface where 1,000 pass straight through and are lost and 1,000 are reflected upwards. And indeed if we run the experiment, that is what we find.
Then suppose we add a second mirror, beneath the first one, and place two photon detectors on the right-hand side of our experiment, to measure where our particles end up, like this:
Intuition tells us that the 4,000 photons will now bounce through the system and be roughly shared out evenly between the two detectors. Detector 1 would register around 2,000 clicks, as would Detector 2. We run the experiment. We fire the first photon, and there is a click in Detector 1. Second photon. Another click in Detector 1. And then we fire all of our photons and at the end of the experiment we find that all the photons have gone to Detector 1—4,000 clicks for Detector 1. Zero for Detector 2. This must be a mistake. We repeat the experiment and the same thing happens. It is not a mistake.
Something odd is happening. When we have only one mirror, some photons are reflected upwards from the first glass to that mirror and then go down through the second layer of glass to the Detector 2 position. However, when the second mirror is introduced, those photons somehow “know” and change their behaviour. Instead of going down to Detector 2, they go up to Detector 1. How can it possibly be that a particle taking the upper path is affected by something that occurs on the lower path? Why does the presence of the lower mirror make any difference to a photon that does not appear to interact with it?
The explanation is this: the photon takes both the upper and the lower path at the same time. It is in two places at once. This “being in two places at once” is known as “superposition.” And because it takes both the upper and lower path, the photon is influenced by the presence of the second mirror. It bears repeating: the particle is in two places at the same time.
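The bookkeeping behind this result can be sketched in a few lines of Python. This is a toy model, not lab code: each glass surface is treated as a standard 50/50 beam splitter, a reflection multiplies a path’s amplitude by the imaginary unit i, and the chance of a click at each detector is the squared magnitude of the corresponding amplitude:

```python
import math

def beam_splitter(a_upper, a_lower):
    """A 50/50 beam splitter: mixes the two path amplitudes,
    with each reflection picking up a phase factor of i."""
    s = 1 / math.sqrt(2)
    return (s * (a_upper + 1j * a_lower),
            s * (1j * a_upper + a_lower))

# The photon enters on one path only.
upper, lower = 1 + 0j, 0j

# First glass surface: the photon goes into superposition of both paths
# (the mirrors merely redirect the paths, so we can ignore them here).
upper, lower = beam_splitter(upper, lower)
print(round(abs(upper) ** 2, 3), round(abs(lower) ** 2, 3))  # 0.5 0.5

# Second glass surface: the two paths recombine and interfere --
# every photon ends up at one detector, none at the other.
upper, lower = beam_splitter(upper, lower)
print(round(abs(upper) ** 2, 3), round(abs(lower) ** 2, 3))  # 0.0 1.0
```

The 50/50 split after the first surface matches the one-mirror version of the experiment; the interference at the second surface reproduces the 4,000-clicks-to-one-detector result.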
It can be quite hard to accept this sort of quantum weirdness. Even Niels Bohr, one of the originators of quantum theory, remarked that, “If quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet.” Superposition is just one example. The quantum world reaches out into deeply strange territory, including the “Schrödinger’s Cat” thought experiment, in which the animal is both alive and dead at the same time, as well as quantum teleportation, time-travelling particles and multiverse theory. The quantum world is strange, and the idea of superposition encapsulates its intrinsic oddness. And yet every experiment ever conducted confirms that quantum theory accurately describes the behaviour of particles.
And how does this all relate to quantum computers? The example given above—of photons, glass sheets, mirrors and the resulting superposition—is a picture of a qubit, and as such it is a diagram of an elementary quantum computer circuit. There are other arrangements which look nothing like the above example, but which are its mathematical equivalent. In mathematics, the qubit is represented by a sphere called a Bloch Sphere (see below); if the north and south poles of that sphere represent the 1 and 0 of the classical computer bit, then the qubit can be any point on the surface of the sphere. So whereas the classical bit must be either a 1 or a 0, the qubit can be a 1, a 0, or any superposition of the two, and it is this that gives it its particular power. It can encode enormously more states than its binary predecessor.
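That geometric picture is easy to make concrete. In the sketch below (an illustrative parameterisation, not any particular machine’s interface), a qubit is just a pair of complex amplitudes fixed by the two angles that locate a point on the Bloch sphere:

```python
import cmath
import math

def qubit(theta, phi):
    """A qubit as a point on the Bloch sphere: theta = 0 is the
    north pole (a classical 0), theta = pi the south pole (a 1)."""
    amp0 = math.cos(theta / 2)                        # amplitude for "0"
    amp1 = cmath.exp(1j * phi) * math.sin(theta / 2)  # amplitude for "1"
    return amp0, amp1

# North pole: an ordinary classical 0, with certainty.
a0, a1 = qubit(0, 0)
print(round(abs(a0) ** 2, 3), round(abs(a1) ** 2, 3))  # 1.0 0.0

# A point on the equator: an equal superposition of 0 and 1.
a0, a1 = qubit(math.pi / 2, 0)
print(round(abs(a0) ** 2, 3), round(abs(a1) ** 2, 3))  # 0.5 0.5
```

The equator state is exactly the photon after the first glass surface in the experiment above: half the probability on each outcome.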
There is then the challenge of getting qubits to work together, like a series of switches. Pairs of qubits, like the one in the above example, are brought together so that the two particles, each in a state of superposition, interact with one another to become “entangled.” This means that the state of one of the superposed particles will be correlated with the other. The more qubits, the greater the number of potential states becomes, which in turn increases the capacity of your system. Your quantum computer is growing.
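Entanglement can be sketched in the same toy style. Here (an illustrative model, not real hardware control code) two qubits share the Bell state (|00⟩ + |11⟩)/√2, so a measurement can only ever return “00” or “11”: the two qubits always agree.

```python
import random

# Four basis states for two qubits; only "00" and "11" carry amplitude.
amplitudes = {"00": 2 ** -0.5, "01": 0.0, "10": 0.0, "11": 2 ** -0.5}

def measure():
    """Sample an outcome with probability equal to the squared amplitude."""
    outcomes = list(amplitudes)
    weights = [amplitudes[o] ** 2 for o in outcomes]
    return random.choices(outcomes, weights=weights)[0]

# However many times we measure, the qubits agree: their states are
# correlated, which is what "entangled" means in this picture.
results = [measure() for _ in range(10)]
print(results)
assert all(r in ("00", "11") for r in results)
```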
Once built at scale, quantum computers will be able to carry out some important practical operations at many times the speed of traditional computers. Not only that, they might be able to do things that a conventional computer could not do at all.
That, at least, is the theory.
“My first reaction is ‘this sounds like a publicity stunt’,” Scott Aaronson told me, when I asked him about IBM’s online device. “A five qubit quantum computer—we know it’s possible to build that. Fine. They put one on the internet. But any result I could get with a five qubit quantum computer I could easily get the same result by simulating it on my smartphone.”
Significantly, however, Aaronson conceded, “My view has changed.” Why? Partly it is because IBM’s online resource has stimulated “many research groups that were in far-flung places” into thinking about the field. But more than that, Aaronson told me, scientists have also learned valuable direct lessons from the IBM quantum application, just as they have from other quantum devices such as the “D-Wave” system, made by a Canadian company.
I asked about other systems in development: “Google is experimenting right now with a 22-qubit superconducting chip,” Aaronson said, “and is planning to upgrade very soon to 49 qubits.” The qubits in this new device will have a long coherence time, meaning the quantum state will be maintained for longer, increasing the machine’s power. Google being Google, its work has attracted a good deal of attention. A recent New Scientist headline read: “Google on track for quantum computer breakthrough by end of 2017.”
The manipulation of minuscule particles of light is being studied by Anthony Laing at the Quantum Engineering and Technology Labs, Bristol, where researchers under his supervision are operating an experimental quantum device that uses laser light, crystals, prisms and mirrors to create quantum states. At first glance it is a spaghetti mess of wires sprouting from a work-table studded with optical devices, circuit boards, small wedges of glass and a single Post-it note saying “do not touch.” Laing explained that pairs of infrared photons are directed via optical fibres into a chip where they can be manipulated into superposition. One of the biggest problems, Laing told me, is photon loss, in which imperfections in the optical fibres cause photons to scatter out and be lost. The longer the fibre, the greater the chance of this occurring.
And if your photon makes it to the end of the experiment, detecting it can present difficulties. The Bristol lab uses avalanche photodiodes, which Laing called the “workhorse photon detector.” But these still have an efficiency of only 65 per cent, meaning they will click for just 65 of every 100 photons. A new system using superconducting nanowires is planned, with an efficiency of up to 90 per cent.
And if these and all the other challenges were finally overcome and a fully-functioning universal quantum computer were ever achieved, what would it do? Sandu Popescu, the physicist, explained that, from the start, the motivation had been the idea “that there are some computations we can do presumably easier on a quantum computer than on any classical model.” This was confirmed by Peter Shor, a mathematician at the Massachusetts Institute of Technology, who in 1994 developed an algorithm that would run on a quantum computer to find the prime factors of a given integer.
It sounds mundane, but factoring large numbers into their prime divisors is notoriously difficult for classical computers—that’s why it’s used in so many encryption systems. If you add one more figure to the number you want to factorise, it takes roughly ten times longer for a conventional computer to work out. If you add six more digits, it takes a million times longer. Your bank account, your medical records, your private digital communications, are all protected by the power of encryption systems based on prime numbers. A quantum computer of sufficient power could make all of these privacy methods redundant. (A quantum bank robber could get very rich very quickly.) One expert told me that governments are storing their adversaries’ intelligence traffic so they can decode it using future quantum systems.
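To see why the classical version of the problem is so punishing, consider the simplest possible attack, trial division. This is a toy illustration, nothing like real cryptanalysis, and the number 10,403 = 101 × 103 is chosen purely as an example:

```python
def smallest_factor(n):
    """Find the smallest factor of n by brute force. The loop may run
    up to sqrt(n) times, so each extra digit on n multiplies the
    worst-case work by roughly a factor of three."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor found: n is prime

p = smallest_factor(10_403)
print(p, 10_403 // p)  # 101 103
```

A number of real cryptographic size, hundreds of digits long, puts this kind of search hopelessly out of reach of any classical machine.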
Shor’s factoring algorithm doesn’t make a quantum computer tick faster; it applies different mathematics, based on the concept of superposition (the quantum particle’s property of being in more than one place at once), which means that as you increase the length of the number to be factorised, the problem does not become more difficult in anything like so dramatic a way. It turned out that the mathematical structure of the problem made it particularly well-suited to attack by a quantum computer.
A different application for quantum computers is in physics, in the simulation of other quantum systems: the use that Feynman anticipated in his 1981 paper. Classical computers cannot simulate the dynamics of a quantum state because with each additional particle the complexity of the system increases exponentially. Quantum computers would not face that problem. “The capability to simulate things, that would revolutionise everything that deals with the physics of all kind of materials,” said Popescu. “From building better pharmaceuticals to understanding biology, to building crystals that you will use in a huge number of electronic devices—it would revolutionise everything. That is no doubt.”
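The exponential blow-up is easy to quantify: a full classical description of n entangled qubits needs 2 to the power n complex amplitudes. A back-of-the-envelope sketch, assuming 16 bytes per complex number:

```python
# Memory needed to store the full state vector of n qubits.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    print(f"{n} qubits: {amplitudes:,} amplitudes, "
          f"about {amplitudes * 16:.1e} bytes")
```

By 50 qubits the state vector already needs tens of petabytes; no classical memory keeps up for long.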
The capacity to simulate quantum physics would be “huge,” Aaronson told me, not only for designing new drugs, software and financial analytical tools, but also for “new materials, superconductors and photovoltaic [solar power technology].” He also foresees “applications to machine learning and data mining, which have occasioned a lot of excitement over the last few years,” adding that this would require “not only a quantum computer, but also a large memory that can be accessed in superposition (a so-called ‘quantum RAM’).”
And what if these and all the other challenges were finally overcome, just as the many engineering challenges in the age of steam, electricity and digital computing were overcome? Science would then have achieved the astounding feat of creating a fully-functioning universal quantum computer—but what then? What would that mean?
“My view is that on longer terms, this would change the face of the world,” Popescu told me. “If you want to say that in five years we will do this, I doubt that very much. I don’t know how long it will take, but it will certainly have dramatic implications.”
Google’s new chip, which uses “superconducting qubits,” is rumoured to be nearing readiness. Scientists in Australia are working on a phosphorus and silicon design. IBM, like Google, has opted to pursue superconducting qubits, while in Europe—and also China—the emphasis has been on photonics, which broadly follows the principles outlined above. A hundred years ago, motor-racing involved cars of all different designs. Now, because aerodynamics is better understood, all racing cars look much the same. But you couldn’t know back then which design would prove best, just as now it is impossible to say which kind of quantum computer will win out.
None of this is going to be easy. There is the great challenge of getting qubits to work together, like a series of switches; it’s notable that IBM’s online quantum computing demo had only five qubits, making it much less powerful than a standard everyday laptop.
Google’s work has attracted particular attention, and the pace of its achievements is pushed along by colossal investment that other companies—and countries—will struggle to match. The chip it is developing will have 49 qubits. One researcher told me that, if the device is to do half of what Google claims, “those would have to be 49 qubits better than any qubits that have ever existed before.”
And if it does work, Aaronson said, “it should be able to achieve a clear quantum speedup,” on certain types of problem. It would, he said “be a major scientific advance. Something to be genuinely excited about.”