Google's AlphaQubit Breakthrough: 6% Better Quantum Error Detection With a Neural Network Decoder That Scales to 241 Qubits
Google DeepMind and Google Quantum AI recently shared details of a collaborative breakthrough that combines machine learning and quantum error correction expertise to improve the accuracy and reliability of quantum computers. In a paper published in Nature, Google introduced AlphaQubit, an AI-powered decoder designed to tackle error detection, one of the field's toughest challenges. AlphaQubit's neural network identifies errors 6 percent more accurately than the best existing methods and scales to simulated systems of up to 241 qubits, setting a new standard for reliability in quantum systems.
Tackling Quantum Computing’s Fragility
Quantum computers leverage unique quantum properties such as superposition and entanglement to solve certain problems exponentially faster than classical machines. However, qubits—the fundamental units of quantum computers—are extremely sensitive to disruptions caused by microscopic hardware defects, electromagnetic interference, heat, vibrations, or even cosmic rays. These instabilities make quantum error correction crucial for the advancement of the technology.
AlphaQubit addresses this issue using transformer-based neural networks, the model architecture that powers advanced AI systems such as large language models. The system processes data from repeated consistency checks performed on the physical qubits that make up a logical qubit, detecting "quantum computing errors with state-of-the-art accuracy."
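The idea of decoding from repeated consistency checks can be illustrated with a classical toy model: a three-bit repetition code whose parity checks play the role of the stabilizer measurements described above. The sketch below is purely illustrative and is not Google's method; where this example recovers the protected bit with a simple majority vote, AlphaQubit instead feeds the full syndrome history into a trained transformer.

```python
import random

def noisy_syndromes(error_rate: float, rounds: int, seed: int = 0):
    """Simulate 'rounds' of consistency checks on a 3-bit repetition
    code protecting a single logical 0. Each round, every data bit may
    flip with probability error_rate; the two parity checks compare
    neighboring bits and report a 0/1 syndrome."""
    rng = random.Random(seed)
    data = [0, 0, 0]  # logical 0, encoded redundantly
    history = []
    for _ in range(rounds):
        for i in range(3):
            if rng.random() < error_rate:
                data[i] ^= 1  # a bit-flip error strikes bit i
        # consistency checks: parity of adjacent data bits
        history.append((data[0] ^ data[1], data[1] ^ data[2]))
    return data, history

def majority_vote_decode(data):
    """A classical stand-in for a decoder: recover the logical bit
    by majority vote over the redundant copies."""
    return int(sum(data) > 1)

data, syndromes = noisy_syndromes(error_rate=0.05, rounds=10)
print("final data bits:", data)
print("syndrome history:", syndromes)
print("decoded logical bit:", majority_vote_decode(data))
```

A nonzero syndrome flags that neighboring bits disagree without revealing the data itself, which is why a decoder, whether a majority vote here or a neural network in AlphaQubit's case, is needed to infer which errors actually occurred.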
Record-Breaking Accuracy
“We began by training our model to decode the data from a set of 49 qubits inside a Sycamore quantum processor, the central computational unit of the quantum computer,” the Google DeepMind and Quantum AI team wrote in a blog post. “To teach AlphaQubit the general decoding problem, we used a quantum simulator to generate hundreds of millions of examples across a variety of settings and error levels.”
AlphaQubit made 6 percent fewer errors than tensor network methods, which are highly accurate but too slow for practical use. Benchmarked against the faster correlated matching decoders, it made 30 percent fewer errors. The AI decoder also excelled in scaling experiments, successfully identifying errors in simulated systems of up to 241 qubits, exceeding Sycamore's current hardware capabilities.
Despite its success, AlphaQubit isn't perfect. It's not yet fast enough to correct errors in real time on today's fastest superconducting quantum processors, which perform millions of checks every second. The system also requires large amounts of training data, which could become an issue as quantum devices grow in size and complexity.
However, Google sees this as just the beginning. By combining advances in machine learning and quantum error correction, the company aspires to build reliable quantum computers that can solve real-world problems. Read the full paper on Nature's website to learn more.