The right roadmap accelerates the path to fault-tolerant quantum computing (Part–II)

This post is the second part of a three-part series on the transition from NISQ to fault-tolerance. While the first post focussed on the requirements for transformative quantum computing, this second post covers the roadmaps being pursued by top hardware players to meet those requirements. This series is in collaboration with the GQI team. The full post is available below, and an executive summary is available on Quantum Computing Report.

Executive summary

Building a useful quantum computer will need better qubits and better gates. This is true in the noisy intermediate-scale quantum (NISQ) era, and it is true of building a futuristic transformative fault-tolerant quantum computer (FTQC). That said, building towards FTQC is not only a harder challenge, but also fundamentally different in many ways from the efforts needed for NISQ. That's because many aspects of NISQ development are not likely to be relevant for fault tolerance – error mitigation, variational algorithms, and a larger variety of gates, to name a few. FTQC requires running a few gates over and over again with high quality with respect to very specific errors, while other errors matter much less. So a team that knows which aspects of hardware to focus on could have an unfair advantage in progressing faster towards FTQC than teams focussing additionally or exclusively on NISQ.

A fault-tolerance blueprint accomplishes this goal of helping hardware teams focus on the errors that matter in the operations that matter. What exactly is a fault-tolerance blueprint? It is a detailed plan for a hardware manufacturer to build and operate a working FTQC. The blueprint describes everything from how to define and fabricate physical qubits, to how to manipulate those qubits to make the computation fault tolerant, to how to process the massive amounts of data coming out of the device to find the errors and correct them. Blueprints are the bridge between abstract ideas in quantum fault-tolerance on the one hand and the messy realities of quantum hardware on the other.

A blueprint is essential for a hardware manufacturer to get to fault-tolerance because every modality of building quantum computers comes with different strengths and limitations: constraints on what operations and connectivity are possible, and imperfections in the qubits and gates. Understanding the impact of these constraints and imperfections is as important as it is challenging – the errors and constraints need to be modeled using deep theoretical expertise and simulated using powerful design software. This understanding then informs the development of fault-tolerance blueprints: a great blueprint identifies the constraints and imperfections that are most critical to FTQC and finds ways to circumvent them. And it provides a clear target for hardware development to focus on. In doing this, it provides powerful leverage for the hardware efforts.

Some of the strongest teams building hardware already have their roadmaps for fault-tolerance. Among these teams, some have even published detailed blueprints – PsiQuantum, Xanadu, ORCA, Quandela and Photonic Inc. in photonics and optically-addressable spins, Google, IBM and AWS in superconducting qubits, Quera in neutral atoms, and Universal Quantum and EleQtron in trapped ions. These efforts, along with parallel efforts from academic teams, shed light on the relevant hardware constraints and imperfections and find ways to overcome them.

But this is just the beginning in terms of designing for fault tolerance. Every hardware modality being championed in the quantum industry comes with its own imperfections and constraints, which must be addressed in its blueprint. Building these blueprints is still in progress for many of the newer hardware modalities within the major platforms, as well as for some of the novel and promising hardware platforms. And the blueprints that do exist for the more mature modalities need to become more comprehensive, identifying and fixing all significant limitations on the way to FTQC. I expect the progress on this front to be swift and impactful for the future of quantum computing.

Designing for fault-tolerance is different from designing for NISQ

Improving NISQ devices and building transformative FTQCs are both challenging endeavors –  both demand more and better qubits, as well as higher quality gates. At first glance, the challenges we encounter in the path to useful NISQ and transformative FTQC appear similar. However, a deeper examination reveals a more nuanced picture.

Building a fault-tolerant quantum computer is different from, not merely harder than, building NISQ devices. And in some aspects, NISQ development could be more challenging than FTQC development. Let us look at some examples of how NISQ development differs from FTQC development:

  • Error mitigation: Error mitigation makes NISQ devices shine. As evidenced by the recent IBM demonstration, methods such as zero-noise extrapolation and probabilistic error cancellation could provide the boost needed to reach quantum utility from NISQ devices (a minimal sketch of the extrapolation step appears after this list). A lot of research effort has contributed to getting here, and more research is underway to make these methods even more powerful. On the other hand, FTQC is incompatible with the error mitigation schemes currently being pursued to make NISQ devices more useful. That's because these schemes use an approach that seems to be fundamentally at odds with fault tolerance. In particular, current error mitigation schemes need the same circuit to be run many times to obtain many classical outcomes, which are post-processed into a mitigated classical output. FTQC, on the other hand, needs the actual quantum information to be robust, something that it accomplishes using redundancy in the number of qubits, and robust quantum information cannot be obtained by running circuits many times.

  • Connectivity: Another significant approach to improving NISQ performance is to improve the connectivity between the physical qubits. Higher connectivity is helpful because it allows a user's desired circuits to be run on the actual hardware with significantly lower overhead. While it's clear that more connectivity is better for NISQ, the situation is more complicated for FTQC. Fault-tolerance schemes could also potentially benefit from long-distance connectivity, using approaches based on, e.g., quantum low-density parity-check (LDPC) codes, but the problem of finding good LDPC codes (which allow for fast logical computation while still being able to correct errors well) is still open.

  • Native gate set: NISQ benefits tremendously from having a large variety of native gates available: every NISQ algorithm is compiled down to the native gates, which can be performed with much higher fidelities than other gates. The larger the available variety of high-fidelity native gates, the shallower the circuit and hence the more accurate the outcome of the computation. In contrast, FTQC needs a much smaller variety of gates that are run over and over again. For instance, the recent Google error correction demo relied on only two gates – Hadamard and controlled-Z – along with measurements. Full-scale fault tolerance might require a couple of additional operations, but not too many more.

  • Critical errors for NISQ are different from critical errors for FTQC: Even the types of errors that can be tolerated by FTQC could be very different from NISQ devices. Dephasing is a standard error that both NISQ devices and FTQCs will need to address. The situation is different for erasure errors, which describe a qubit going missing in the middle of a computation, e.g., through optical loss. On a NISQ device, erasure errors are lethal from the user's perspective because they lead to the computer running a completely different circuit from the one that the user intended. FTQCs, on the other hand, tolerate erasure errors very well – almost an order of magnitude better than the more common dephasing and other so-called Pauli errors. This can have interesting implications for the direction of the industry: a hardware platform with high erasure rates but low Pauli error rates could be better suited for FTQC than for NISQ. Indeed, this is the case with linear optical quantum computing, pursued among others by PsiQuantum, one of the first companies shooting straight for FTQC and skipping the NISQ era altogether.
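To make the error-mitigation point above concrete, here is a minimal sketch (with made-up expectation values) of the classical post-processing behind zero-noise extrapolation: the same circuit is run with the noise deliberately amplified, and the results are extrapolated back to the zero-noise limit. Only classical numbers survive this procedure, which is exactly why it does not carry over to protecting quantum information.

```python
# Minimal sketch of zero-noise extrapolation post-processing (illustrative only;
# the expectation values below are made up, and real implementations use more
# careful noise amplification and fitting).
import numpy as np

# Expectation values of one observable, measured after running the same circuit
# with noise amplified by these scale factors (e.g. via gate folding).
noise_scales = np.array([1.0, 2.0, 3.0])
measured_expectations = np.array([0.71, 0.55, 0.42])  # hypothetical data

# Fit a low-degree polynomial in the noise scale and evaluate it at zero noise.
coefficients = np.polyfit(noise_scales, measured_expectations, deg=2)
zero_noise_estimate = np.polyval(coefficients, 0.0)

print(f"Mitigated (zero-noise) estimate: {zero_noise_estimate:.3f}")
```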

All this points to the fact that optimizing for FTQC involves prioritizing different aspects of the hardware than optimizing for NISQ does. This means that a team focussing squarely on FTQC can progress faster towards that goal than a team focussing on both NISQ and FTQC, or on NISQ exclusively.

Different hardware platforms: different constraints, different imperfections

Focussing on FTQC means very different things for the different hardware platforms. Different platforms need different methods to get to fault-tolerance: the choice of qubits, the way physical gates are implemented, the error correction codes used, the algorithms that find and correct errors, and the classical hardware used to run these algorithms. The fault-tolerance methods need to be different because each hardware platform comes with very different constraints and very different imperfections. Let's look a little more at these differences and also at the tools needed to assess their impact on FTQC performance.

Different constraints

Different hardware platforms have very different constraints on the connectivity and control of physical qubits. The most obvious differences arise between photonics on the one hand and matter-based platforms such as superconducting qubits, trapped ions, neutral atoms and electron spins on the other. For the latter, qubits are chunks of matter, and gates are applied by zapping these chunks with beams of electromagnetic waves – optical, microwave and radio-frequency radiation. In photonic quantum computing, the qubits are electromagnetic waves, and gates are applied by interfering them on chunks of matter such as half-silvered mirrors or ‘beamsplitters’. Of course, very different constraints act on qubits that are flying at the speed of light and on those that have been fabricated onto a quantum chip. Another difference between photonic and matter platforms is that photonic qubits are destroyed as soon as they are measured. This means that the usual circuit-based fault-tolerance methods, which rely on frequently resetting the same qubits, are not suitable for photonics – so measurement-based methods, which can operate without resetting qubits, are being considered for photonic fault tolerance.

Even within matter qubits, the differences in hardware constraints are bigger than their similarities. Some qubits are fabricated at fixed locations on chips (superconducting and spin qubits), while others are made to levitate using electromagnetic fields (trapped ions and neutral atoms) and can be shuttled around if required. Some platforms potentially allow for all-to-all connectivity (trapped ions) while other qubit types interact only with their neighbors (superconducting qubits). This has serious implications for the error correction codes that can be used. Local interactions – qubits talking only to their nearest neighbors – basically restrict a platform to some version of the so-called ‘surface code’, one of the earliest error-correction schemes proposed. Examples of surface-code implementations include Google, who use a rotated XZZX surface code in their recent experiments and chip designs, and IBM, who design for the heavy-hex code, a variant of the surface code. Platforms where it is possible to move some or all of the qubits around allow for codes that promise lower hardware cost per logical qubit, such as quantum LDPC codes.

The differences within matter platforms don’t end with connectivity. Trapped ions are notoriously slow in terms of gate action (microseconds), while superconducting qubits are much faster (nanoseconds). Some qubits need massive cryostats while others need vacuum chambers to operate, which leads to differences in the bandwidth and energy of the control and readout signals. Fast gates on nanosecond timescales would require incredibly fast decoding, which would necessitate less accurate or two-stage decoding procedures1; the latter would bring additional heat load, since one of the two decoding stages would likely run inside a cryostat. Slower platforms with higher fidelities could potentially use slower decoding algorithms that are more accurate.
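A back-of-the-envelope sketch (with assumed, illustrative numbers rather than measured figures) shows how strongly the round time sets the pace for the decoder: a distance-d rotated surface code produces d^2 - 1 syndrome bits per error-correction round for every logical qubit.

```python
# Back-of-the-envelope syndrome data rates per logical qubit. Assumptions
# (illustrative, not vendor figures): a distance-d rotated surface code has
# d**2 - 1 ancilla measurements per error-correction round, and round times
# of roughly 1 microsecond (superconducting-like) vs 1 millisecond (ion-like).
def syndrome_bits_per_second(distance: int, round_time_seconds: float) -> float:
    ancillas_per_round = distance**2 - 1
    return ancillas_per_round / round_time_seconds

d = 25  # a plausible code distance for early fault tolerance
for label, round_time in [("~1 us rounds", 1e-6), ("~1 ms rounds", 1e-3)]:
    rate = syndrome_bits_per_second(d, round_time)
    print(f"{label}: ~{rate:.1e} syndrome bits per second per logical qubit")
```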

Different imperfections

The quantum state of the physical qubits is subject to many different sources of noise from the environment and perturbations from imperfect gates acting on them. Here’s a brief table mentioning some of the most important imperfections in quantum computing platforms. The list is unordered and far from exhaustive – in reality each platform will see tens of different sources of imperfections.

  • Photonics: optical loss; stray photons; static fabrication defects
  • Superconducting qubits: imperfect control pulses; decay and dephasing; leakage to non-computational states; two-qubit depolarization and leakage
  • Trapped ions: readout bit flips; shuttling errors; slow and fast charge noise
  • Semiconductor spins: spin-orbit relaxation; nuclear spin decoherence; incoherent relaxation
  • Neutral atoms: imperfect readout and qubit re-trapping; shot-to-shot control variance from atomic motion

One aspect to note is that the sources of noise vary wildly between the different platforms. This is of course well understood within the teams building quantum computing hardware. Different hardware teams have identified these imperfections and are continuously chipping away at them.

Something that is less commonly appreciated is that the impact of different imperfections on the performance of logical qubits is very different. For instance, as we alluded to earlier, although a roughly 10% rate of erasure errors resulting from optical loss can be tolerated, only about a 1% rate of dephasing errors can be tolerated by similar fault-tolerance protocols. The tolerance to leakage errors is lower by yet another order of magnitude; these errors are significant in superconducting qubits and can spread in an uncontrolled way to neighboring qubits if left unmitigated.

Another way of looking at the same information is the error budget – what matters is not the raw probability of a physical error occurring from a hardware imperfection, but rather the percentage contribution of the hardware imperfection to errors on the logical qubits. Obtaining precisely this error budget, i.e. this percent contribution of each hardware imperfection, is possible with sophisticated modeling and simulation tools.

The design challenge: quantifying the impact of hardware limitations

Understanding and quantifying the impact of different hardware constraints and imperfections along the lines of what we just discussed is hard. That’s because pen-and-paper calculations are of little use for these tasks, and the design simulations involve several intricate steps: imperfections and constraints need to be mapped into concrete models so that the quantum system can be simulated; scalable simulations on hundreds or thousands of qubits need to be performed; and the measurement results from these simulations need to be fed into decoders – algorithms that use these results to work out which errors occurred where, and to correct them.

Usual methods of simulating quantum systems (such as those used for benchmarking the Google quantum supremacy demonstration) would struggle with anything over 50 or so qubits, so other methods are needed. Fortunately, there are special simulation methods2 that can deal with thousands of qubits or more in fault-tolerance circuits if certain conditions are met.
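As a quick illustration of this scale (a sketch using the open-source stim package that appears in the tool list below, with an arbitrary depolarizing rate), a stabilizer simulator can generate and sample a distance-25 surface-code memory circuit, comfortably over a thousand qubits, in seconds on a laptop:

```python
# Sketch: stabilizer simulation at a scale far beyond state-vector methods.
# The noise rate is arbitrary; the point is the qubit count and the speed.
import stim

circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=25,
    rounds=25,
    after_clifford_depolarization=1e-3,
)
print("qubits in circuit:   ", circuit.num_qubits)   # expect 2*25*25 - 1 = 1249
print("detectors per shot:  ", circuit.num_detectors)

detection_events = circuit.compile_detector_sampler().sample(1000)
print("sampled detector data:", detection_events.shape)
```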

The challenge then is to map real-world hardware into circuits that meet the simulability conditions. The mapping requires a deep understanding not only of the underlying hardware but also of the fault-tolerance methods. For instance, in discrete-variable photonic QC, the mapping involves turning an infinite-level quantum system with a large number of photons into a two-level system without throwing away the quantum information whose robustness we want to check. Another example is the case of over- and under-rotation errors, which are significant in all major matter qubit platforms. Sophisticated approximations are needed for these to meet the simulation conditions, and careful analytical and numerical checks need to be performed to make sure that the approximations don’t throw the baby out with the bathwater. Making this mapping work better and for more types of hardware imperfections will be important for the success of design software.
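One standard approximation of this kind, named here purely as an illustration and not necessarily the choice made in any particular blueprint, is Pauli twirling: a small coherent over-rotation is replaced by an equivalent stochastic Pauli error that stabilizer simulators can handle. A minimal numpy sketch:

```python
# Sketch: Pauli-twirl an over-rotation about Z into a stochastic dephasing error.
# The angle is an illustrative value, not a measured calibration error.
import numpy as np

theta = 0.05  # unwanted extra rotation angle, in radians

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * Z  # the over-rotation

# Pauli transfer matrix R[i, j] = (1/2) Tr( P_i U P_j U^dagger ).
R = np.real(np.array([[0.5 * np.trace(Pi @ U @ Pj @ U.conj().T)
                       for Pj in paulis] for Pi in paulis]))

# Twirling over the Pauli group keeps only the diagonal of the transfer matrix,
# which is the transfer matrix of a dephasing channel: X shrinks by (1 - 2 p_Z).
R_twirled = np.diag(np.diag(R))
p_z = (1 - R_twirled[1, 1]) / 2

print(f"coherent over-rotation of {theta} rad -> Pauli Z error with p = {p_z:.2e}")
print(f"analytic check, sin^2(theta/2)        = {np.sin(theta / 2) ** 2:.2e}")
```

Checking that such replacements do not distort the logical-level conclusions is exactly the kind of careful analytical and numerical validation mentioned above.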

If the mapping is successful, a hardware team can choose from several open-source design tools that can be connected to each other to simulate fault tolerance (a minimal end-to-end sketch follows the list below):

  • Early efforts from academia such as qecsim and panqec set the stage as helpful tools for error-correction researchers. These tools were based on stabilizer simulations (as opposed to circuit-level simulations), which are relatively limited in terms of the hardware errors that can be simulated.

  • Dedicated blazing-fast circuit simulation tools such as stim and QuantumClifford, and decoders such as PyMatching and Fusion Blossom, continue to cut simulation times and the associated compute costs by orders of magnitude.

  • Specialized tools, such as the continuous-variable fault-tolerance simulator FlamingPy from Xanadu and BPOSD for decoding LDPC codes, excel at specific tasks.

  • A recent entrant in the field is Plaquette from QC Design, an end-to-end design tool that offers a wide variety of relevant error models, simulators and decoders, both built-in and third-party.
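To make this pipeline, and the error-budget idea from the previous section, concrete, here is a minimal sketch that wires stim and PyMatching together. The circuit is stim's built-in rotated surface-code memory experiment; the two noise knobs and their rates are illustrative choices rather than any vendor's numbers, and a real error budget would ablate many more channels:

```python
# Sketch of an error budget by ablation: estimate the logical error rate with a
# full (illustrative) noise model, then with one channel switched off at a time.
import numpy as np
import stim
import pymatching

def logical_error_rate(gate_noise: float, measure_noise: float,
                       distance: int = 5, shots: int = 20_000) -> float:
    """Estimate the logical error rate of a noisy surface-code memory circuit."""
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=distance,
        after_clifford_depolarization=gate_noise,
        before_measure_flip_probability=measure_noise,
    )
    matcher = pymatching.Matching.from_detector_error_model(
        circuit.detector_error_model(decompose_errors=True))
    detections, observed = circuit.compile_detector_sampler().sample(
        shots, separate_observables=True)
    predicted = matcher.decode_batch(detections)
    return float(np.mean(np.any(predicted != observed, axis=1)))

p_full = logical_error_rate(gate_noise=2e-3, measure_noise=1e-2)
p_no_gates = logical_error_rate(gate_noise=0.0, measure_noise=1e-2)
p_no_meas = logical_error_rate(gate_noise=2e-3, measure_noise=0.0)

print(f"all noise on:             {p_full:.4f}")
print(f"share from gate noise:    {(p_full - p_no_gates) / p_full:.1%}")
print(f"share from readout noise: {(p_full - p_no_meas) / p_full:.1%}")
```

A full design study would sweep distances, error rates, codes and decoders on top of this, which is where dedicated design software earns its keep.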

Nevertheless, the following significant limitations still exist within open-source software, which are being addressed by proprietary offerings.

  • Support for computing modalities important to photonics and hybrid spin-photonic architectures is still missing.

  • Several important real-world hardware imperfections beyond Pauli errors are not straightforwardly simulatable with existing tools.

  • Furthermore, the impact of realistic circuit imperfections on the performance of novel fault-tolerance codes and decoders is not straightforward to study with current tools.

The proprietary Plaquette+ offering from QC Design addresses these gaps by providing software for novel computing modalities, the most general error models possible, and novel codes and decoders.

Different blueprints: the major FTQC roadmaps that define hardware development

The different constraints and the different imperfections necessitate different paths to fault-tolerance. The major hardware manufacturers are acutely aware of this and have developed ‘blueprints’ charting their paths to fault tolerance. A fault-tolerance blueprint is simply a detailed plan for a hardware manufacturer to build and operate a working fault-tolerant quantum computer. A blueprint describes everything from how to define and fabricate physical qubits, to how to connect and manipulate those qubits so that the resulting computation is fault tolerant, to how to process the massive amounts of data coming out of the device to find the errors and correct them.

A complete and helpful blueprint is one that respects the constraints and imperfections in the different hardware platforms. Blueprints are the bridge between abstract ideas in quantum fault-tolerance on the one hand and the messy realities of quantum hardware on the other.

Let’s take a look at some of the different hardware platforms where fault-tolerance blueprints have been published or mentioned. Before that, two points are in order.

First, this is clearly not an exhaustive list, since there are very likely hardware manufacturers whose detailed blueprints are in stealth, with teams working heads-down towards fault tolerance. Strong examples of teams that state the goal of fault-tolerant quantum computing include Quantinuum, who have been delivering on their ambitious trapped-ion hardware roadmap year after year, and several spin-qubit players like Quantum Motion, Silicon Quantum Computing and Diraq.

Second, we’ve seen so far that it is hard to develop blueprints because it is hard to fully analyze all the imperfections and their impact on logical performance, let alone address this impact. The way to work towards complete and helpful blueprints is iterative – start with an idealized blueprint that rests on many assumptions, i.e., one that operates within the hardware limitations but considers very few real-world imperfections, and then improve the blueprint from there. Improvements could take the form of more imperfections considered, or better logical performance for the same hardware cost. In this framework, the blueprints mentioned below are at different levels of maturity, but the progress in the field is swift and clear.

Photonics: PsiQuantum, Xanadu and Orca leading with blueprints focussing on different modalities

In photonics, blueprints have been developed for fault-tolerant quantum computing based on discrete variables, i.e., where single photons are the qubits, or on continuous variables, i.e., special states of light that are harder to generate but easier to measure than single photons.

Within discrete variables, one of the first real paths to fault-tolerant linear optical quantum computing was developed by the group of Simon Benjamin in 2015. They proposed an architecture3 that relies on first generating so-called ‘building-block states’, which are then entangled into a 3D cluster state; this 3D cluster state is in turn used for fault-tolerant quantum computing. The key hardware constraint that this paper addressed is that two-qubit gates in photonics are inherently probabilistic and can fail, e.g., 25–50% of the time depending on the exact photonic circuits used.
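A toy calculation (not taken from the cited blueprints) shows why this constraint dominates the architecture: without multiplexing or the fusion networks discussed below, the chance that every gate in a long state-building sequence succeeds decays exponentially with the number of gates.

```python
# Toy calculation: success probability of building a state with many
# probabilistic gates, assuming an illustrative 25% failure rate per gate
# and no multiplexing or repeat-until-success tricks.
gate_success = 0.75
for num_gates in (10, 50, 100):
    p_all_succeed = gate_success ** num_gates
    print(f"{num_gates:3d} gates, all succeed: {p_all_succeed:.1e}")
```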

Significant further advances were presented by PsiQuantum, who introduced a fusion-based quantum computing4 approach, which uses simpler and more feasible building-block states and also reduces the need for long and lossy delay lines. Follow-up works from PsiQuantum provide more details about their path to fault-tolerance, including ideas on the creation of these building-block states5 and on using concepts from time-domain photonics towards fault tolerance6.

Recently ORCA has published modifications to fusion-based quantum computing that exploit measurements performed on more than two qubits at once7 and further improvements to the generation of entangled resource states needed for fusion-based quantum computing8. Although these works are very likely not their full blueprint, it is positive to see teams other than PsiQuantum and Xanadu start to publish their fault-tolerance efforts.

In continuous variables, the Xanadu blueprint introduced a way to perform measurement-based quantum computing with so-called Gottesman-Kitaev-Preskill (GKP) qubits9. These qubits are special states of light that have no classical analog. They have not yet been generated in a photonic system, although there are proposals to generate them probabilistically using Gaussian boson sampling. The blueprint accounted for and addressed the challenge that qubit generation is probabilistic. Later, an improvement to this blueprint was published that removes the requirement of active switches in some parts of the architecture10.

Put together, these works from PsiQuantum, Xanadu and Orca address several hardware constraints and imperfections – chief among them that generating large entangled photonic states is hard. They also exploit a peculiarity of photonics: a large number of photonic qubits can live as optical pulses in a single optical fiber.

Hybrid spin-photon computation blueprints from Quandela and Photonic Inc.

Light is crucial not just to the architectures of the all-photonic players described above, but also to those of hardware manufacturers focussing on solid-state systems that are optically active.

A novel architecture11 was announced last week by Quandela, who have so far demonstrated key hardware advances in discrete-variable photonic quantum computing. In this architecture, solid-state spins act as qubits, and entanglement between these qubits is mediated by photons. The preprint analyzes in detail the requirements for fault-tolerance with the hybrid spin-photon approach and shows optimistic prospects for this technology.

Another very recent blueprint12 from Photonic Inc. presents, at a high level, a hybrid light-matter approach to fault-tolerance. Here too, solid-state spins in silicon are the qubits, and light pulses traveling in silicon photonic chips connect these qubits.

The hybrid approach promises to bypass the bottlenecks of all-photonic and all-matter platforms, once challenges around material purity, optical efficiency and more are successfully overcome.

Superconducting qubits: Google and IBM demonstrating building blocks of their roadmaps; AWS exploring novel strategies

Superconducting qubits are among the more mature platforms for quantum computing and are indeed the platforms pursued by strong teams from big tech companies like Google, IBM and AWS. Google and IBM have presented some of the first experimental demonstrations of error correction (more on that in the next post!).

Google, based on their publications and seminars, seems to be scaling up using a vanilla variant of the surface code on top of their transmon qubits. The choice of the vanilla surface code comes simply from the constraint of using 2D chips where qubits are connected most strongly to their nearest neighbors. The main hardware imperfections impacting logical performance are clearly listed in their recent publications, and they claim to have a telephone-book-sized blueprint behind their roadmap to beat these imperfections and get to a million physical qubits by 2029.13

The IBM quantum team has been thinking about error correction from the very early stages. Already in 2015, they had identified some of the high-level challenges to building logical qubits, ranging from ensuring on-chip microwave and cryogenic system integrity to developing new error correction codes and control software14. Needless to say, hardware has come a long way since then. IBM have also developed new error correction methods tailored to their hardware constraints and imperfections in the form of the heavy-hex code15, which informs the layout of several of the current IBM devices on the cloud. Researchers at IBM are analyzing the performance of LDPC codes and looking into implementing them on superconducting hardware16.

A different approach to fault-tolerance is being pursued by the AWS quantum team, who released a fault-tolerance blueprint at the end of 202017. They explore using ‘cat states’ in superconducting cavities or resonators as their qubits because of the inherent protection that these qubits offer against certain kinds of noise. The remaining noise is taken into account by choosing a tailored version of the surface code that performs better under the biased noise that cat qubits exhibit. The blueprint presents several details of how to implement the required operations in hardware and of the impact of hardware imperfections on logical performance. A few months later, researchers from AWS also presented an alternative blueprint that exploits GKP qubits similar to those that Xanadu plans to use, but in superconducting rather than photonic systems18. We should indeed expect these blueprints to keep evolving towards more and more hardware-efficient proposals.

Neutral atoms: Fighting leakage and exploiting levitation

The neutral-atom platform promises large qubit numbers with somewhat limited controllability, which could nevertheless be overcome by well-thought-through blueprints.

One of the first real efforts towards a neutral-atom fault-tolerance blueprint was put forth in 2017 by the group of Dan Browne19. The blueprint proposed using a surface code on a 2D lattice of neutral-atom qubits. Some real-world imperfections like crosstalk were also considered.

Other important imperfections were addressed by a subsequent blueprint20 from Harvard and Quera, in which leakage is converted into a more benign error via a newly designed physical operation implemented within rounds of error correction. This leads to significantly better error tolerances, since leakage errors are a particular weakness of fault-tolerance schemes.

Very recently in 2023, researchers from Quera, UChicago, Harvard, Caltech and UArizona considered21 the possibility of replacing surface codes with LDPC codes and showed that this approach could potentially cut the hardware requirements for fault-tolerance by an order of magnitude. This blueprint, like many of the others mentioned above, exploits the novel aspects of neutral atom hardware – namely that qubits are levitating in a trap and some amount of long-distance connectivity is available.

Trapped ions: scaling up via shuttling

In terms of fault tolerance, trapped-ion quantum computers offer the unique advantage of all-to-all connectivity within a trap, but also come with the disadvantage that it is challenging to place many ions within a single trap. This setting provides fertile ground for powerful roadmaps to fault-tolerance.

One of the earliest blueprints for trapped-ion fault-tolerance was published in 2015 by authors who are now in the founding teams of Universal Quantum and EleQtron22. The blueprint considered a surface code architecture. The main scaling challenge addressed here was that trapped-ion modules will inevitably hold limited numbers of qubits, so the blueprint proposed shuttling ions from one module to another to provide the large numbers of connections needed for fault-tolerance.

Subsequent progress was made in 2017 by academic researchers from Swansea, Madrid, Oxford, Sydney, Zürich, Innsbruck and Mainz, some of whom are now also founders or advisors of AQT, Q-CTRL, PlanQC and Quantum Motion23. This blueprint introduced new ways of performing the measurements needed to find errors in the computation using only the native gates available to trapped-ion processors. The work also looked at an important imperfection, namely the heating of the vibrational mode used to perform the two-qubit gates, and proposed addressing it using a second species of ions trapped alongside the computational species. It also provided detailed error models for use in large-scale numerical design simulations.

These blueprints are just the beginning

While the progress in the last few years in FTQC blueprints has been nothing short of extraordinary, this is just the beginning.

First, the published blueprints are not all equally comprehensive. While the publications from some of the players (e.g., PsiQuantum, Xanadu) capture at a high level all aspects of building, controlling and manipulating qubits towards fault tolerance, others are rather sparse in detail. Indeed, it’s likely that some teams do have detailed internal roadmaps towards fault tolerance (e.g., Google mentioning a telephone book of calculations and designs). But it’s hard to develop these blueprints – it takes insights from hardware and from fault-tolerance theory, as well as powerful software to simulate all of this – so it’s more than likely that most teams, especially those in small-to-mid-sized startups, don’t yet have such blueprints, which could give the bigger teams a major edge over them.

Also, so far, the blueprints have been restricted to very specific modalities of performing quantum computing within specific hardware platforms, in particular those pursued by the bigger teams within big tech or in well-funded startups. More concretely, there are few if any published ideas on fault tolerance with novel superconducting qubit modalities beyond transmons and microwave oscillators; with ions and color centers that can be controlled only by global microwave fields; with photonic platforms based on single-photon emitters; and many others.

And while the available blueprints do address some of the hardware constraints and imperfections that impact current and near-future hardware, there’s still a long way to go in terms of accounting for all imperfections and addressing them. And in this quest, there’s the possibility of exploiting the hardware idiosyncrasies of each of the different platforms to get better and better results.

Based on all this, it is only reasonable to expect ever improving blueprints that address hardware challenges and build upon hardware strengths to continuously bring down the costs of building a full-scale transformative fault-tolerant quantum computer.

Summary

  • Building a fault-tolerant quantum computer is a different challenge from building a NISQ computer. The requirements are different: often fewer types of gates are needed than for NISQ, albeit with high performance in very specific respects. So the path to fault-tolerance is different from the path to a useful NISQ computer.

  • Different hardware platforms come with different constraints on the connectivity and control of qubits. And many different hardware imperfections come together to have a complicated impact on the performance of FTQC. Sophisticated simulation tools are needed to appreciate and understand the impact of these imperfections on FTQC.

  • A blueprint is a detailed procedure to perform FTQC. A great blueprint respects the constraints and overcomes the imperfections in the hardware.

  • Several of the bigger quantum teams already have roadmaps to fault-tolerance. By highlighting those portions of the stack that need the most work and other portions that are already at an advanced stage, these roadmaps are incredibly valuable in focussing the hardware efforts.

  • These first blueprints are only the beginning: there’s still a dire need for roadmaps for each platform and for every different type of modality or building block; and subsequent generations of roadmaps will focus on bringing the costs down.


  1. [2306.17142] Belief propagation as a partial decoder↩︎

  2. [quant-ph/9807006] The Heisenberg Representation of Quantum Computers and [quant-ph/0406196] Improved Simulation of Stabilizer Circuits↩︎

  3. [1504.02457] Resource costs for fault-tolerant linear optical quantum computing↩︎

  4. [2101.09310] Fusion-based quantum computation↩︎

  5. [2106.13825] Creation of Entangled Photonic States Using Linear Optics↩︎

  6. [2103.08612] Interleaving: Modular architectures for fault-tolerant photonic quantum computing↩︎

  7. [2308.04192] High photon-loss threshold quantum computing using GHZ-state measurements↩︎

  8. [2310.06832] Flexible entangled state generation in linear optics↩︎

  9. [2010.02905] Blueprint for a Scalable Photonic Fault-Tolerant Quantum Computer↩︎

  10. [2104.03241] Fault-tolerant quantum computation with static linear optics↩︎

  11. [2311.05605] A Spin-Optical Quantum Computing Architecture↩︎

  12. [2311.04858] Scalable Fault-Tolerant Quantum Technologies with Silicon Colour Centres↩︎

  13. From the 2020 Google Summer Symposium: “... before the decade ends, ... we can build a large error-corrected quantum computer with 10^6, a million physical qubits. And, in essence, this would be an information architecture based on the surface code supported by transmon qubits. And we developed a tight schedule of well-defined—there's actually a telephone book thick of calculations and the designs behind this.”↩︎

  14. [1510.04375] Building logical qubits in a superconducting quantum computing system↩︎

  15. [1907.09528] Topological and subsystem codes on low-degree graphs with flag qubits↩︎

  16. [2308.07915] High-threshold and low-overhead fault-tolerant quantum memory + Error correcting codes for near-term quantum computers | IBM Research Blog↩︎

  17. [2012.04108] Building a fault-tolerant quantum computer using concatenated cat codes↩︎

  18. [2103.06994] Low overhead fault-tolerant quantum error correction with the surface-GKP code↩︎

  19. [1707.06498] A blueprint for fault-tolerant quantum computation with Rydberg atoms↩︎

  20. [2105.13501] Hardware-Efficient, Fault-Tolerant Quantum Computation with Rydberg Atoms↩︎

  21. [2308.08648] Constant-Overhead Fault-Tolerant Quantum Computation with Reconfigurable Atom Arrays↩︎

  22. [1508.00420] Blueprint for a microwave trapped-ion quantum computer↩︎

  23. [1705.02771] Assessing the progress of trapped-ion processors towards fault-tolerant quantum computation↩︎
