🏆 Harvard Unveils World’s First Logical Quantum Processor

Hamartia Antidote

Elite Member
Joined
Nov 17, 2013
Messages
39,348
Reaction score
22,778
Country of Origin
Country of Residence

Harvard researchers have achieved a significant milestone in quantum computing by developing a programmable logical quantum processor capable of encoding 48 logical qubits and performing hundreds of logical gate operations. This advancement, hailed as a potential turning point in the field, is the first demonstration of large-scale algorithm execution on an error-corrected quantum computer.

Harvard’s breakthrough in quantum computing features a new logical quantum processor with 48 logical qubits, enabling large-scale algorithm execution on an error-corrected system. This development, led by Mikhail Lukin, represents a major advance towards practical, fault-tolerant quantum computers.
In quantum computing, a quantum bit or “qubit” is one unit of information, just like a binary bit in classical computing. For more than two decades, physicists and engineers have shown the world that quantum computing is, in principle, possible by manipulating quantum particles – be they atoms, ions or photons – to create physical qubits.

But successfully exploiting the weirdness of quantum mechanics for computation is more complicated than simply amassing a large-enough number of physical qubits, which are inherently unstable and prone to collapse out of their quantum states.

Logical Qubits: The Building Blocks of Quantum Computing​

The real coins of the realm in useful quantum computing are so-called logical qubits: bundles of redundant, error-corrected physical qubits, which can store information for use in a quantum algorithm. Creating logical qubits as controllable units – like classical bits – has been a fundamental obstacle for the field, and it’s generally accepted that until quantum computers can run reliably on logical qubits, technologies can’t really take off. To date, the best computing systems have demonstrated one or two logical qubits, and one quantum gate operation – akin to just one unit of code – between them.
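For readers who want to see the redundancy idea in miniature, below is a minimal sketch of the textbook 3-qubit bit-flip repetition code, simulated with NumPy. It is not the code family used in the Harvard work, but it shows how one logical qubit spread over several physical qubits lets parity checks locate an error without ever reading out the encoded information.

```python
import numpy as np

# One logical qubit spread over three physical qubits (3-qubit bit-flip code).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit-flip error / correction
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # used for parity checks

def tensor(*factors):
    out = np.array([1], dtype=complex)
    for f in factors:
        out = np.kron(out, f)
    return out

# Encode an arbitrary logical state a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8
logical = a * tensor(ket0, ket0, ket0) + b * tensor(ket1, ket1, ket1)

# A stray bit flip hits the middle physical qubit.
corrupted = tensor(I, X, I) @ logical

# Measure the parity checks Z0Z1 and Z1Z2; the pair of +/-1 outcomes
# (the "syndrome") points at the faulty qubit without disturbing a or b.
s1 = round(np.real(corrupted.conj() @ (tensor(Z, Z, I) @ corrupted)))
s2 = round(np.real(corrupted.conj() @ (tensor(I, Z, Z) @ corrupted)))
faulty = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s1, s2)]

# Undo the error by flipping the flagged qubit back.
if faulty is not None:
    ops = [I, I, I]
    ops[faulty] = X
    corrupted = tensor(*ops) @ corrupted

print("logical state recovered:", np.allclose(corrupted, logical))  # True
```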

A team led by quantum expert Mikhail Lukin (right) has achieved a breakthrough in quantum computing. Dolev Bluvstein, a Ph.D. student in Lukin’s lab, was first author on the paper. Credit: Jon Chase/Harvard Staff Photographer

Harvard’s Breakthrough in Quantum Computing​

A Harvard team led by Mikhail Lukin, the Joshua and Beth Friedman University Professor in physics and co-director of the Harvard Quantum Initiative,
has realized a key milestone in the quest for stable, scalable quantum computing. For the first time, the team has created a programmable, logical quantum processor, capable of encoding up to 48 logical qubits, and executing hundreds of logical gate operations. Their system is the first demonstration of large-scale algorithm execution on an error-corrected quantum computer, heralding the advent of early fault-tolerant, or reliably uninterrupted, quantum computation.

Published in Nature, the work was performed in collaboration with Markus Greiner, the George Vasmer Leverett Professor of Physics; colleagues from MIT; and Boston-based QuEra Computing, a company founded on technology from Harvard labs. Harvard’s Office of Technology Development recently entered into a licensing agreement with QuEra for a patent portfolio based on innovations developed in Lukin’s group.

Lukin described the achievement as a possible inflection point akin to the early days in the field of artificial intelligence: the ideas of quantum error correction and fault tolerance, long theorized, are starting to bear fruit.

“I think this is one of the moments in which it is clear that something very special is coming,” Lukin said. “Although there are still challenges ahead, we expect that this new advance will greatly accelerate the progress towards large-scale, useful quantum computers.”

The breakthrough builds on several years of work on a quantum computing architecture known as a neutral atom array, pioneered in Lukin’s lab and now being commercialized by QuEra. The key components of the system are a block of ultra-cold, suspended rubidium atoms, in which the atoms – the system’s physical qubits – can move about and be connected into pairs – or “entangled” – mid-computation. Entangled pairs of atoms form gates, which are units of computing power. Previously, the team had demonstrated low error rates in their entangling operations, proving the reliability of their neutral atom array system.
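As a loose illustration of what entangling a pair means (a generic textbook sketch, not the actual Rydberg pulse sequence used in the lab), a controlled-phase interaction of the kind a Rydberg blockade provides is enough to turn two independent atoms into a maximally correlated Bell pair:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-atom rotation
I = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])            # controlled-phase (blockade-style) gate

state = np.kron([1.0, 0.0], [1.0, 0.0])        # both atoms start in |0>
state = np.kron(H, H) @ state                  # put each atom in superposition
state = CZ @ state                             # entangle the pair
state = np.kron(I, H) @ state                  # rotate the second atom back

# The result is the Bell state (|00> + |11>)/sqrt(2): measuring one atom
# now fixes the outcome of the other.
print(np.round(state, 3))                      # [0.707 0. 0. 0.707]
```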

Implications and Future Directions​

“This breakthrough is a tour de force of quantum engineering and design,” said Denise Caldwell, acting assistant director of the National Science Foundation’s Mathematical and Physical Sciences Directorate, which supported the research through NSF’s Physics Frontiers Centers and Quantum Leap Challenge Institutes programs. “The team has not only accelerated the development of quantum information processing by using neutral atoms, but opened a new door to explorations of large-scale logical qubit devices which could enable transformative benefits for science and society as a whole.”

With their logical quantum processor, the researchers now demonstrate parallel, multiplexed control of an entire patch of logical qubits using lasers. This approach is more efficient and scalable than controlling each physical qubit individually.
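One way to picture why block-level control helps (a simplified sketch of the general transversal-gate idea, not the specific control scheme reported in the paper): applying the same physical pulse to every qubit in a code block can enact a single logical gate.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
zero, one = np.array([1, 0]), np.array([0, 1])
ket = lambda q0, q1, q2: np.kron(np.kron(q0, q1), q2)

# One global pulse: the same X applied to all three physical qubits.
logical_X = np.kron(np.kron(X, X), X)

# On the 3-qubit repetition code it flips the *logical* bit in one shot,
# without addressing the physical qubits individually.
encoded = 0.6 * ket(zero, zero, zero) + 0.8 * ket(one, one, one)
flipped = logical_X @ encoded
print(np.allclose(flipped, 0.6 * ket(one, one, one) + 0.8 * ket(zero, zero, zero)))  # True
```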

“We are trying to mark a transition in the field, toward starting to test algorithms with error-corrected qubits instead of physical ones, and enabling a path toward larger devices,” said paper first author Dolev Bluvstein, a Griffin School of Arts and Sciences Ph.D. student in Lukin’s lab.

The team will continue to work toward demonstrating more types of operations on their 48 logical qubits, and toward configuring their system to run continuously, rather than the manual cycling it requires now.
 

Quantum computing startup says it will beat IBM to error correction​

Company builds on recent demonstration of error-tracking in similar hardware.​


The current generation of hardware, which will see rapid iteration over the next several years.

On Tuesday, the quantum computing startup QuEra laid out a road map that will bring error correction to quantum computing in only two years and enable useful computations with it by 2026, years ahead of when IBM plans to offer the equivalent. Normally, this sort of thing would be dismissed as hype. Except the company is QuEra, a spinoff of the Harvard University lab that demonstrated the ability to identify and manage errors using hardware that's similar in design to what QuEra is building.

Also notable: QuEra uses the same type of qubit that a rival startup, Atom Computing, has already scaled up to over 1,000 qubits. So, while the announcement should be viewed cautiously—several companies have promised rapid scaling and then failed to deliver—there are some reasons it should be viewed seriously as well.

It’s a trap!​

Current qubits, regardless of their design, are prone to errors during measurements, operations, or even when simply sitting there. While it's possible to improve these error rates so that simple calculations can be done, most people in the field are skeptical it will ever be possible to drop these rates enough to do the elaborate calculations that would fulfill the promise of quantum computing. The consensus seems to be that, outside of a few edge cases, useful computation will require error-corrected qubits.

Error-corrected qubits spread individual bits of quantum information across several hardware qubits and connect these with additional qubits that allow identification and correction of errors. As a result, these "logical qubits" may require a dozen or more hardware qubits to function well enough to be useful. So, enabling that means generating hardware with thousands or tens of thousands of qubits, each with a sufficiently low error rate to ensure we can catch and correct any glitches before they ruin calculations.
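For a rough sense of the overhead described above (illustrative ratios only, not any vendor's published specification), the arithmetic looks like this:

```python
# If each logical qubit consumes roughly a dozen to ~100 physical qubits,
# a machine with enough logical qubits to be useful needs thousands to
# tens of thousands of physical qubits, all at low error rates.
for physical_per_logical in (12, 100):
    for logical_qubits in (30, 100):
        total = logical_qubits * physical_per_logical
        print(f"{logical_qubits} logical x {physical_per_logical} physical each = {total} physical qubits")
```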

IBM and several of its competitors are using electronic devices called transmons as their hardware qubits. Transmons are relatively simple to control, and their quality has been improving iteratively as companies get experience with fabricating devices. But they require bulky wiring to control and are large enough that any useful quantum processor will require integrating multiple transmon-containing chips.

QuEra and some other companies have opted for qubits based on neutral atoms, with individual atoms held in traps formed by laser beams. These have several advantages. Unlike transmons, atoms do not suffer from device-to-device variations, and they're incredibly compact—many thousands can potentially be held in a square centimeter. Qubits based on the spin of an atomic nucleus also hold their information for a relatively long time before suffering an error (with "long time" meaning more than a second here). Operations and readouts can also be performed using lasers, eliminating any wiring challenges.

Finally, the atoms can be moved around, potentially allowing any atom to be entangled with any other. This provides a degree of flexibility that's impossible with the permanent wiring used to connect transmons.

Scaling challenges​

The main challenge at this point appears to be scaling the number of atoms available. "We need about a milliwatt of laser power to hold each qubit," QuEra's Yuval Boger told Ars. "So if you have 10,000 qubits, you need 10 watts, and this is 10 watts that's available for the optical tweezers [that trap the atoms]." That laser also has to be relatively low noise for everything to work properly.

That will mean a significant jump in laser power compared to the 250 milliwatts or so needed for QuEra's current hardware. As noted earlier, one of its competitors has already scaled a trapped-atom system up to over 1,000 qubits, so QuEra will not run into problems immediately. Still, the company plans to hit the 10,000-qubit mark by 2026, so it doesn't have much time before it will face larger challenges.
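Writing out the power-budget arithmetic in the quote above (a trivial sketch; the 1 mW-per-qubit figure is the round number Boger gives):

```python
mw_per_qubit = 1.0                      # ~1 mW of tweezer power per trapped atom
for qubits in (256, 3_000, 10_000):
    watts = qubits * mw_per_qubit / 1000
    print(f"{qubits:>6} qubits -> ~{watts:.2f} W of low-noise laser power")
# 256 qubits matches the ~250 mW quoted for current hardware;
# 10,000 qubits gives the ~10 W figure for the 2026 target.
```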

Another issue is that moving the atoms is relatively slow compared to other operations. That's not an issue on QuEra's existing hardware; it might become more of a factor when thousands of atoms have to be moved around a much larger grid. Atom Computing, which also uses trapped atoms, isn't convinced this is manageable and is considering keeping its grid of atoms static.

QuEra, in contrast, makes moving atoms central to its machine's architecture. While the company plans on storing its atoms in a two-dimensional grid, actual operations and measurements require that the atoms be moved into a separate area that the company calls an "entanglement zone."

The physical and conceptual separation has advantages. But once error correction is being performed on a hundred logical qubits, thousands of physical qubits will likely have to be shuffled in and out of the entanglement zone on a regular basis just to keep the error correction working, and that's before any computation is even attempted.

A logical road map​

As our earlier coverage described, the Harvard lab where the technology behind QuEra's hardware was developed has already demonstrated a key step toward error correction. It created logical qubits from small collections of atoms, performed operations on them, and determined when errors occurred (those errors were not corrected in these experiments).

But that work relied on operations that are relatively easy to perform with trapped atoms: two qubits were superimposed, and both were exposed to the same combination of laser light, essentially performing the same manipulation on both simultaneously. Unfortunately, only a subset of the operations likely to be needed for a calculation can be done that way. So the road map includes a demonstration of additional types of operations in 2024 and 2025.

QuEra's road map shows lots of logical qubits in 2026.

At the same time, the company plans to rapidly scale the number of qubits. Its goal for 2024 hasn't been settled on yet, but Boger indicated it is unlikely to be much more than double the current 256. By 2025, however, the road map calls for over 3,000 qubits, and over 10,000 a year later. This year's relatively small step will add to the pressure for progress in the years that follow.

If things go according to plan, the 3,000-plus qubits of 2025 can be combined to produce 30 logical qubits, meaning about 100 physical qubits per logical one. This allows fairly robust error correction schemes and has undoubtedly been influenced by QuEra's understanding of the error rate of its current atomic qubits. That's not enough to perform any algorithms that can't be simulated on today's hardware, but it would be more than sufficient to let people gain experience developing software with the technology. (The company will also release a logical qubit simulator to help here.)
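The physical-to-logical ratio the road map implies, using only the figures quoted here:

```python
roadmap = {2025: (3_000, 30), 2026: (10_000, 100)}   # year: (physical, logical)
for year, (physical, logical) in roadmap.items():
    print(year, "->", physical // logical, "physical qubits per logical qubit")
# Both milestones work out to roughly 100 physical qubits per logical qubit.
```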

QuEra will undoubtedly use this system to develop its error correction process—Boger indicated that the company expected it would be transparent to the user. In other words, people running operations on QuEra's hardware can submit jobs knowing that, while they're running, the system will be handling the error correction for them.

Finally, the 2026 machine will enable up to 100 logical qubits, which is expected to be sufficient to perform useful calculations, such as the simulation of small molecules. More general-purpose quantum computing will need to wait for higher qubit counts still.

It's probably a measure of quantum computing's progress that, while this road map seems optimistic and aggressive, it doesn't seem completely ludicrous. A few years ago, logical qubits were a theoretical construct; their basics have now been demonstrated. Two companies already have hardware with over 1,000 qubits. QuEra might face challenges—many companies in this space have found that their tech has failed to scale as expected. But the field as a whole appears to be moving steadily toward making logical qubits a reality.
 
Can anyone guide me in brief what is that?
 

Basically, quantum bits seem to be stable for only a fraction of a second before they start wobbling out of phase and their data values become untrustworthy. This is why they haven't cracked anything big yet. Think of it as a spinning child's top that starts wobbling when it slows to a certain speed.


Apparently these Harvard scientists have figured out either how to put them back in phase or how to predict the wobbling and dynamically adjust how the data is read to take the wobbling into account.
 
Appreciate your effort, but I must say I'll stick to topics like whether the AK-47 is better than the M16.
 
Sabine is fantastic and always reliable for bringing a reality check to all the hype.

 
