April 15, 2024

By moving qubits, parallelism can boost the journey toward practical quantum computing, Harvard and QuEra scientists say

Editor’s Note: QuEra Computing recently published its roadmap for the next three years. It’s an ambitious plan that sees a path toward quantum computers that can address some real-world problems more effectively than classical computers. That ambition is driven by the success of a Harvard-led study recently published in Nature. In this exclusive, we delve into the researchers’ approach to error correction that could lead to quantum practicality.

Imagine a quantum computer as a subway system, a transportation network that takes its mathematically minded commuters to their jobs working on incredibly complex calculations.

A qubit, the smallest unit of quantum information, would be the individual passenger traveling alone on this long, fast and complex train of quantum calculations. Importantly, regardless of the approach scientists use to build these quantum information networks, the device’s computational potential depends on those qubits, making them as valuable as they are vulnerable. In fact, the slightest environmental noise can cause passengers carrying quantum information to make mistakes, which could lead to a complete computational disaster.

Most quantum computing approaches try to solve this problem by coupling extra error-correction qubits to accompany and protect what quantum scientists call the logical qubit. In other words, the logical qubit is the unit of information protected by a quantum error correction scheme realized with those appropriately named error-correction qubits.

It is a method that addresses the effects of errors, but allocating qubits to error correction drains valuable computational power from the system. In current approaches, you could think of those physical qubits as passengers each driving their own car, all headed to the same quantum-error-correction destination. It may work, but it is neither efficient nor scalable.

For a Harvard-led team of researchers, the solution to this challenge, drawing on the natural strengths of the neutral-atom quantum computer design, was to take a group of logical qubits on a train ride to what they call an “entangling zone,” where they can perform calculations. Think of this approach as a quantum computing bullet train that brings these all-important qubits to their destination.

The team, which also includes scientists from QuEra Computing, MIT, and NIST/University of Maryland, recently published a study on the approach that many experts suggest could redefine the scalability and control of quantum circuits and take the technology a step beyond the current noisy intermediate-scale era, toward quantum computing at scale.

Atom array

In an interview with The Quantum Insider, Dolev Bluvstein, a Harvard graduate student who led the research, and Harry Zhou, a research scientist at Harvard and QuEra Computing and a key collaborator on the work, offered a deep dive into the method from their recent study in Nature, which made headlines around the world and even earned praise from Scott Aaronson, the Schlumberger Centennial Chair of Computer Science at The University of Texas at Austin and director of its Quantum Information Center, who called it “possibly the biggest experimental breakthrough in quantum computing of 2023.”

According to the researchers, the team created an array of atoms in the neutral-atom quantum computer that can be reconfigured as needed to act as logical qubits. The qubits can then interact with each other in any required pattern. The system also allows both the manipulation of individual qubits and the verification of their states mid-computation.

Furthermore, this zoned architecture approach (along with the versatility of ways to organize qubits) can improve the reliability of quantum operations. Specifically, the team improved the performance of logical two-qubit operations (basic interactions between pairs of qubits) by using an error-correcting code known as the “surface code.” They also managed to create and work with groups of qubits that are resistant to certain errors.
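
The intuition behind that gain follows a standard rule of thumb from the surface-code literature (an illustrative sketch, not figures from the study): as long as physical errors stay below a threshold, enlarging the code, that is, increasing its “distance,” suppresses logical errors exponentially. A minimal sketch in Python, with the threshold and prefactor chosen purely for illustration:

```python
# Rule of thumb from the surface-code literature (illustrative only, not
# numbers from the study): below threshold, the logical error rate falls as
#   p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) / 2)
# so a larger code distance d means a more reliable logical qubit.

def logical_error_rate(p_physical: float, distance: int,
                       p_threshold: float = 0.01, prefactor: float = 0.1) -> float:
    """Approximate logical error rate for a distance-d surface code."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) / 2)

for d in (3, 5, 7):
    print(f"d={d}: p_logical ~ {logical_error_rate(0.003, d):.1e}")
# Each jump from d to d+2 suppresses logical errors by roughly another
# factor of p_physical / p_threshold (here 0.3).
```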

The team was able to perform advanced quantum operations, such as creating large entangled states and teleporting entanglement between logical qubits in a fault-tolerant manner.

Their system also runs complex circuits built from three-dimensional code blocks interconnected in a pattern reminiscent of higher-dimensional structures, allowing the researchers to entangle up to 48 logical qubits with a high level of connectivity. This capability is used to run simulations and algorithms efficiently, and they demonstrated that this logical encoding improves the accuracy and performance of quantum calculations, including detecting and correcting errors.

The team’s results suggest that we are moving toward more reliable, large-scale quantum processors that can handle complex calculations with fewer errors.

In the study, the researchers used up to 280 physical qubits, but needed to program fewer than ten control signals to execute complex calculations.

Transition to error-corrected devices

Bluvstein said the work signals a transition point in the field, where the fundamental units of the processor, as well as the classical controls for operating them, are now at the logical-qubit level rather than the physical-qubit level. And they say now is the time to switch to error-corrected devices.

“What’s really important is that if we keep making these devices bigger and bigger, we also need the performance to keep improving, so we can do some of these really complex algorithms, where you need every qubit to be able to implement billions of gates,” Bluvstein said. “In order to be able to implement these billions of gates, we really need to start, now in the field, moving to not only test our algorithms with physical qubits and learn how quantum algorithms work, but also start testing our algorithms with logical qubits. We need to determine how error-correcting algorithms work because we know that to, for example, solve really big problems, like really big chemistry problems, we need things on the scale of over a billion gates. And we can never achieve this with our physical qubits, so we must focus on error-correcting algorithms.”

Avoiding overhead costs

According to the researchers, the importance of their approach is that even as they increase the code size or add more codes, the classical control systems can still efficiently manage the additional laser beams, ensuring scalability without additional complexity.

“We can operate with many error-correcting codes and we can create larger error-correcting codes without any additional overhead in classical control,” Bluvstein said.

This finding is fundamental to the practicality of quantum circuits and a step toward realizing large-scale, fault-tolerant quantum computers.

This gets to the heart of the “overhead” challenge of quantum computing, which refers to the additional resources or effort required to accomplish a task. In this context, creating an error-corrected logical qubit requires many physical qubits. For example, to create a reliable logical qubit, you may need dozens or even hundreds of physical qubits just to correct errors.
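
As a rough, back-of-the-envelope illustration of that overhead (assumed textbook surface-code numbers, not figures from the study), a distance-d surface code spends roughly d² data qubits plus d² − 1 measurement qubits to protect a single logical qubit:

```python
# Back-of-the-envelope overhead estimate (textbook surface-code layout,
# not figures from the study): one logical qubit at code distance d costs
# about d**2 data qubits plus d**2 - 1 measurement (ancilla) qubits.

def physical_qubits_per_logical(distance: int) -> int:
    """Approximate physical-qubit cost of one surface-code logical qubit."""
    data = distance ** 2
    ancilla = distance ** 2 - 1
    return data + ancilla

for d in (3, 5, 7, 11):
    print(f"distance {d}: ~{physical_qubits_per_logical(d)} physical qubits per logical qubit")
# distance 3 -> ~17, distance 7 -> ~97, distance 11 -> ~241
```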

Parallelism

At the heart of the approach is parallelism. According to the researchers, moving and entangling atoms within the processor architecture enables a highly efficient, parallel control system. By pulsing a global entangling laser each time the qubits move past each other, the team can run entangling gates and prepare quantum states quickly and simultaneously. This approach to parallelism is unique to their study and is what allows them to scale up quantum circuits.
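
To make the control argument concrete, here is a toy sketch (hypothetical names and structure, not the team’s actual control software) of why global pulses keep the classical control count flat: one shared pulse entangles every qubit pair currently parked together in the entangling zone, no matter how many pairs there are.

```python
# Toy illustration (hypothetical, not the team's control software): shuttle
# qubit pairs into an "entangling zone", then fire ONE global laser pulse
# that applies an entangling gate to every co-located pair simultaneously.

def global_entangling_pulse(pairs_in_zone):
    """A single control signal acts on all pairs currently in the zone."""
    for a, b in pairs_in_zone:
        print(f"entangling gate applied to ({a}, {b})")

# Layer 1: move several pairs into the zone, then pulse once.
global_entangling_pulse([("q0", "q1"), ("q2", "q3"), ("q4", "q5")])

# Layer 2: rearrange the atoms into new pairings and pulse once more.
global_entangling_pulse([("q0", "q2"), ("q1", "q4"), ("q3", "q5")])

# Two control pulses yielded six entangling gates; with individually wired
# qubits, each gate would need its own dedicated control channel.
```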

According to Zhou, you may only need to turn on your flat-screen TV to understand the potential parallelism advantage of the neutral-atom approach compared to other quantum approaches, such as superconducting quantum computing.

“For example, if you look at your TV, imagine if you opened up your 4K TV with a million pixels and realized that there was actually a single, separate control cable leading to each little cell,” Zhou said. “That wouldn’t be very good technology, especially now that you’re moving to larger scales. Whereas parallel control really makes this more like plugging in a cable and going.”

Future

Researchers say their work is not over yet: there is still much to do. However, the team is confident that the techniques, along with other potential innovations, will continue to mature, and if they do, they see a path to large-scale, fault-tolerant quantum computers.

Next steps could include working on better ways to maintain the integrity of the quantum computing process, Bluvstein said.

“It’s pretty daunting, and to be completely honest, it’s still pretty daunting,” Bluvstein said. “But we have ideas about how to get there, and I think we have a really unique path with neutral atoms and these control techniques, especially in combination with the exceptional creativity and progress of the entire neutral atom community.”

Still, the study reflects an intangible benefit, momentum, that sparks the researchers’ optimism. As that momentum grows, they anticipate that more science will lead to more innovation and, as a result, quantum technology will continue to accelerate at an impressive pace.

As Zhou says: “If we can build these 10,000-qubit scale devices using all these techniques from across the community, then at that point I think we will be confident that we can build large-scale quantum computers.”

The Harvard-led work is critical to paving the way to practical quantum computing, according to QuEra, which published its roadmap this week. For more information, see QuEra Computing’s roadmap for advanced error-corrected quantum computers, pioneering the next frontier in quantum innovation.
