A couple of weeks ago, I navigated Christmas traffic from New York to the town of Yorktown Heights in suburban Westchester County, the location of IBM’s Thomas J. Watson Research Center, to stand in front of what may be the most advanced quantum computer in the world.
The IBM Quantum System Two is clad in gray metal, about the size and shape of an industrial refrigerator. Which makes sense: Much of the interior architecture is made up of cooling equipment that keeps the three IBM Heron quantum processors “colder than deep space,” as Darío Gil, IBM’s senior vice president and director of research, told me in an interview. Up close, it emits a quiet electronic hum.
The Quantum System Two looks intimidating, like a descendant of the supercomputer from 2001: A Space Odyssey, minus HAL’s mysterious red eye. And it should: with hundreds of qubits, the quantum counterpart to classical computing bits, running on three connected processors, the Quantum System Two represents an important step forward on the very long road to taking quantum computing from the laboratory to the practical world, where such machines could one day solve problems that even the fastest classical supercomputers couldn’t crack in millions of years.
But appearances can be deceiving. The qubits that do the real work of quantum computing inside System Two are as sensitive and error-prone as the hardware around them looks indestructible. That’s why quantum computers need to be kept colder than cold: Even the slightest increase in temperature, vibration, or noise can cause qubits to abandon the fickle quantum state that allows them to work their magic.
And that’s why, as impressive as Quantum System Two looks, it still represents the early stages of the quantum computing revolution: more 1960s CDC 6600 than, well, HAL. For quantum computers to truly become large and reliable enough to begin meeting business needs, leading companies like IBM (and Google, Honeywell, and a variety of other competitors in the quantum race) will need to invest billions of dollars and overcome serious technical challenges. “We have been on this journey for many years and we are going to be on it for many more years,” Gil told me. “But we have a roadmap.”
Why quantum matters
The difference between a classical computer, which includes everything from the machine you’re reading this article on to the Frontier supercomputer at Oak Ridge National Laboratory, and a quantum computer starts at the most fundamental level: the bit, or, for quantum computers, the qubit.
While bits express information in the form of binary states (on/off, 1/0), qubits, because they can take advantage of the strange properties of quantum physics, can express multiple states at the same time, much as quantum particles can occupy many different states at once. (If that breaks your brain, don’t worry: Einstein felt the same way.) What this means in terms of computing is that while classical machines calculate possibilities one after another, quantum machines can perform many, many calculations simultaneously. In a recent interview with 60 Minutes, physicist Michio Kaku compared a quantum computer to a mouse in a maze that can explore all possible routes at the same time, instead of doing so step by step like a classical machine.
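The “many states at once” idea has a simple mathematical picture: a qubit’s state is a pair of amplitudes, and putting n qubits into equal superposition spreads amplitude over all 2^n possible bit strings simultaneously. Here’s a toy sketch in plain NumPy (simulating the math on a classical machine, not a real quantum device) that illustrates the mouse-in-the-maze analogy:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit's state is a vector of two
# complex amplitudes. The Hadamard gate puts a qubit into an
# equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Start each of n qubits in |0> and apply H: the combined register
# assigns equal amplitude to all 2^n basis states at once -- the
# sense in which the "mouse" explores every route simultaneously.
n = 3
qubit = H @ np.array([1.0, 0.0])   # (|0> + |1>) / sqrt(2)
register = qubit
for _ in range(n - 1):
    register = np.kron(register, qubit)

# Measurement probabilities: uniform over all 2^n = 8 outcomes.
probs = np.abs(register) ** 2
print(probs)   # eight entries of 0.125
```

The caveat, which the analogy hides, is that measuring the register collapses it to a single outcome; quantum algorithms get their advantage by interfering those amplitudes so that wrong answers cancel, not by simply reading out every path.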
That means a quantum computer could solve very complex problems that would always be out of reach of a classical computer, no matter how fast it works or how long it has to run. It’s as if computing has always existed in two dimensions and, suddenly, we discovered a third.
The practical possibilities are enormous. Beyond being able to crack still-unbreakable cryptography (one of the reasons security agencies around the world, including the United States and China, are so focused on quantum technology), quantum computers could one day perfectly model the behavior of the physical world, which, after all, is ultimately based on quantum physics, not classical physics. The Cleveland Clinic is already using an older IBM quantum computer in an effort to screen and optimize drugs targeting specific proteins.
Quantum technology will also become more important as classical computing reaches its physical limits. Until now, computers have become more powerful because scientists have managed to pack more and more transistors onto each chip. That’s how Moore’s Law (the idea that computers would become more powerful and cheaper with each passing year) went from prediction to reality. But today transistors are as small as five atoms across, which means there isn’t much more room to grow (or shrink). Quantum represents an escape hatch from these physical limits.
A roadmap to the future
The money behind what Politico called “the quantum hype cycle” is piling up. Last week, the House approved a $3.6 billion reauthorization of the National Quantum Initiative Act, while China has committed $15.3 billion in funding for quantum computing research. Alphabet’s Google and China’s Baidu have also delved into quantum technology, in addition to a growing ecosystem of quantum startups.
This summer, IBM’s own team of scientists (including recent Future Perfect 50 pick Jerry Chow) achieved a notable scientific breakthrough in the use of quantum computers when one of its quantum systems found a better answer to a complex physics problem than a conventional supercomputer did. On Monday, the company crossed a technical barrier with its new Condor processor, made up of 1,121 superconducting qubits, the largest quantum chip ever released.
As Jay Gambetta, vice president of quantum at IBM, told me, this series of advances means that “we are entering the era of quantum utility.” What that means is that while in the past quantum computers were primarily used to study, well, quantum computing, the hardware has now reached the point where it can be used to advance science in other fields and, eventually, much more.
But that utility remains as fragile as the qubits themselves. While the hardware continues to improve, the actual revenue of the entire quantum industry is less than $1 billion. Even IBM predicts that it won’t develop quantum computers with a useful number of qubits (that is, enough to do practical work) until the end of the decade.
The same ability that makes quantum computers potentially powerful – harnessing quantum mechanics to process information in a way that is simply impossible in a classical system – is what makes them so fragile. Allow the slightest disturbance to its qubits and errors will creep in that render a quantum computer’s output unusable.
That’s one reason IBM’s quantum research focuses primarily on error correction and mitigation: producing high-quality physical qubits in large quantities, which act as a kind of supporting scaffolding for the much smaller number of “logical qubits” that actually carry information. But a roadmap that still takes years before a machine can truly deliver on the promise of the quantum computing revolution means a lot of time and money is invested in what is still, for the most part, pure research.
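The logic behind spending many physical qubits on one logical qubit is redundancy. A classical repetition code captures the intuition (real quantum error correction is far more involved, since it must protect the state without measuring it directly, but the redundancy idea is the same). This toy sketch, with an assumed 10 percent per-copy error rate, shows majority voting driving the logical error rate well below the physical one:

```python
import random

# Toy *classical* repetition code: one logical bit is stored as
# three physical copies; decoding takes a majority vote.
def encode(bit):
    return [bit, bit, bit]

def noisy_channel(codeword, flip_prob):
    # Flip each physical copy independently with probability flip_prob.
    return [b ^ 1 if random.random() < flip_prob else b for b in codeword]

def decode(codeword):
    return 1 if sum(codeword) >= 2 else 0

random.seed(0)
trials = 10_000
p = 0.1  # assumed per-copy (physical) error rate

# Unprotected bit: fails whenever the single copy flips (~p).
raw_errors = sum(random.random() < p for _ in range(trials))

# Encoded bit: fails only when 2+ of 3 copies flip (~3p^2).
coded_errors = sum(
    decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials)
)
print(raw_errors / trials, coded_errors / trials)
# logical error rate falls from ~0.10 to roughly 3p^2 ≈ 0.03
```

Quantum codes such as the surface codes IBM and others study need far more overhead than 3-to-1, which is why a machine with over a thousand physical qubits still supports only a handful of useful logical ones.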
It’s impossible to be 100 percent sure whether we will have a useful quantum computer by the end of the decade, at least without the kind of all-knowing quantum computer we won’t have for decades or more. But I hope they get there. Advances in computing power have been the fundamental driving force of technological progress, from a satellite in space to a phone in your pocket. The kind of breakthrough that useful quantum computing promises would be truly revolutionary. The technology itself may be fragile, but there is no limit to what we can build on its foundation.
A version of this newsletter originally appeared in the Future Perfect newsletter. Sign up here!