PsiQuantum presents a new, practical approach to fault-tolerant quantum computing. Checking for errors in a quantum computation introduces more errors, so it is usually assumed that such checks should be implemented in a minimalistic way. The new approach, one particularly suited to the high connectivity of photonic quantum computers, manages to achieve large checks using only fixed (small) hardware per check, leading to significantly higher performance.

It is a remarkable feature of the quantum world – one that at least one future Nobel prizewinner was skeptical could be true – that extremely low-noise logical qubits of a quantum computer can be built up from irredeemably noisy physical qubits undergoing imperfect interactions. Key was the realization that judiciously chosen multi-qubit measurements – so-called parity checks – can reveal information about possible errors without destroying the information being processed by the logical qubits of the computation.
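
To get a feel for what a parity check does, here is a purely classical caricature using the 3-bit repetition code. This is only an analogy (the quantum version measures joint operators without collapsing the encoded state), and the code below is an illustrative sketch rather than anything from the preprint.

```python
# A classical caricature of a parity check, using the 3-bit repetition
# code: encode a bit b as (b, b, b). The checks compare neighbouring bits
# and flag disagreements without ever reporting the encoded bit itself.

def parity_checks(codeword):
    """Two parity-check outcomes for the 3-bit repetition code."""
    x0, x1, x2 = codeword
    return (x0 ^ x1, x1 ^ x2)   # 0 means "agree", 1 means "disagree"

# With no errors, both checks read 0 whichever bit was encoded.
assert parity_checks([0, 0, 0]) == (0, 0)
assert parity_checks([1, 1, 1]) == (0, 0)

# A flip on the middle bit trips both checks, and the syndrome is the same
# for either encoded value: the checks locate the error without revealing b.
assert parity_checks([0, 1, 0]) == (1, 1)
assert parity_checks([1, 0, 1]) == (1, 1)
```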

In his recent arXiv preprint, Daniel Litinski has introduced a new way of achieving such fault tolerance. His blocklet approach brings together the best aspects of the two previous paradigms, which were based on either concatenated codes or low-density parity check (LDPC) codes. He does so by discovering a fault-tolerant way to measure large-weight checks (which offer higher error tolerances) using only constant-weight physical measurements. Blocklets also achieve high rates – meaning a large number of logical qubits can be encoded into a relatively small number of physical qubits.

Two Paradigms: Concatenation vs LDPC

Fault tolerance via code concatenation is mathematically attractive and powerful. It was used by Aharonov and Ben-Or in 1999 to provide the first proof that fault tolerance is in principle possible. More recently, concatenation was used to prove the remarkable result that arbitrarily long fault-tolerant quantum computation with a fixed ratio of physical to logical qubits is theoretically possible. Concatenation starts by designing a slightly better logical qubit from a small number of physical qubits using a suitable error correcting code. Each of the new qubits is then recursively replaced with a similarly encoded version of itself, achieving extremely rapid noise suppression. This sounds simple, but delicate juggling is necessary to ensure that errors do not spread as more and more qubits interact. While the performance of concatenated codes can theoretically be excellent, from a practical perspective one big downside is an explosion of complexity in the multi-qubit measurements required for correction, making it extremely difficult to embed this approach sensibly into the real-world constraints of space and time – the arena in which any machine must actually be assembled!
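
Here is a rough illustration of both sides of that trade-off, using made-up round numbers rather than the parameters of any real code: each level of recursion squares how far below "threshold" you are, while the qubit count per logical qubit multiplies.

```python
# Illustrative arithmetic behind concatenation (made-up round numbers,
# not parameters of any real code). If one level of encoding turns a
# physical error rate p into roughly p_th * (p / p_th)**2, then k levels
# of recursion suppress errors doubly exponentially, at the cost of a
# qubit count that grows exponentially with the number of levels.

p_th = 1e-2            # assumed per-level "threshold" error rate
p = 1e-3               # assumed physical error rate, safely below threshold
qubits_per_level = 7   # e.g. each qubit built from 7 qubits of the level below

for k in range(5):
    logical_error = p_th * (p / p_th) ** (2 ** k)
    physical_qubits = qubits_per_level ** k
    print(f"level {k}: {physical_qubits:>4} qubits per logical qubit, "
          f"logical error ~ {logical_error:.0e}")
```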

Most recent work has therefore focused on fault tolerance achieved through LDPC codes, the most famous of which is Kitaev’s toric code. In this approach any given physical qubit undergoes only a small (fixed) number of interactions, typically only with near neighbors. This makes it much simpler to see how errors can be identified before they spread and destroy the underlying logical qubits. It also has beautiful connections to areas of many-body quantum physics, particularly topological quantum field theories. From a practical perspective it is relatively easy to map the requisite multi-qubit measurements – the so-called “parity checks” – onto a layout of nearest-neighbor qubits on a lattice. Earlier this year, Google used this mapping to build an impressive demonstration of a logical qubit with a lower error rate than the physical qubits from which it is built.
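
For concreteness, here is a small sketch of the "low-density" property for Kitaev's toric code on a periodic lattice. This is a standard textbook construction written out purely for illustration; none of the parameters below come from the preprint.

```python
# Kitaev's toric code on an L x L periodic lattice: qubits live on edges,
# X-type "star" checks sit on vertices and Z-type "plaquette" checks sit
# on faces. Every check acts on only 4 nearby qubits, and every qubit
# appears in only 4 checks, which is the "low density" in LDPC.

L = 4

def edge(x, y, d):   # d = 0: horizontal edge, d = 1: vertical edge
    return (x % L, y % L, d)

# Star check at vertex (x, y): the 4 edges touching that vertex.
star_checks = [
    {edge(x, y, 0), edge(x - 1, y, 0), edge(x, y, 1), edge(x, y - 1, 1)}
    for x in range(L) for y in range(L)
]

# Plaquette check on the face with lower-left corner (x, y): the 4 edges
# around that face.
plaquette_checks = [
    {edge(x, y, 0), edge(x, y + 1, 0), edge(x, y, 1), edge(x + 1, y, 1)}
    for x in range(L) for y in range(L)
]

# Every check has weight 4, and every qubit (edge) participates in exactly
# 2 star checks and 2 plaquette checks.
assert all(len(c) == 4 for c in star_checks + plaquette_checks)
all_qubits = {q for c in star_checks for q in c}
assert len(all_qubits) == 2 * L * L
for q in all_qubits:
    assert sum(q in c for c in star_checks) == 2
    assert sum(q in c for c in plaquette_checks) == 2
```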

Litinski’s blocklet protocols require only small (fixed) hardware per parity measurement, just like the topological approach, but in a way that lets very large numbers of measurement outcomes contribute to some of the parity checks, achieving the attractive noise suppression of concatenated-code approaches. They are also easy to implement in real-world photonic architectures. They really are a “goldi(b)locks” paradigm for fault tolerance!

Photons are, like, everywhere, man.

Light is always on the move – it can travel around the world more than 7 times a second! For a quantum computer made from photons, the fundamental particles of light, it doesn’t make sense to be too restricted by concerns about what is located close to what else. It does, however, make sense to be concerned about how many things a photon encounters on its journey, since every device it passes has a chance of unintentionally diverting it onto another path, where it is lost.

These considerations led PsiQuantum to devise an approach to photonic quantum computing – fusion-based quantum computing (FBQC) – that differs significantly from approaches more suitable for matter-based qubits like atoms, ions and superconducting qubits [see sidebar]. Many of those differences come from the extreme flexibility we have in using optical switches and fiber to send photons anywhere and everywhere.

Quantum photonic chip
The optical switch is the critical enabling technology for taking advantage of this flexibility to reroute photons. These devices are not quantum – in fact, the internet runs on such switches routing laser light through optical fiber. However, the performance of off-the-shelf switches is not good enough to build a photonic quantum computer. For this reason, PsiQuantum invested a large amount of effort to build what are now the world’s leading optical switches.

Blocklets - born to be wild

The basic structure of FBQC is that photons are initially prepared in small, entangled resource states – the number of photons in a resource state does not change as the computation gets larger. Any given photon is then sent to undergo a simple entangling measurement (a fusion) with one (or several) photons from a different resource state. Where, when and with whom each fusion occurs defines the fusion network.
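
At the level of bookkeeping, a fusion network is just a pattern of pairings between photons from different resource states. The sketch below is purely schematic; the resource-state size and the particular pairings are made up for illustration and are not taken from any real PsiQuantum design.

```python
# Schematic bookkeeping for a fusion network (illustrative values only).

PHOTONS_PER_RESOURCE_STATE = 6   # fixed, and independent of computation size

# Label each photon by (resource_state_id, photon_index).
resource_states = {
    rs: [(rs, i) for i in range(PHOTONS_PER_RESOURCE_STATE)]
    for rs in range(4)
}

# A fusion network is, at this level of description, just a list of photon
# pairings: each fusion is a small entangling measurement on photons drawn
# from two different resource states. Where, when and with whom each fusion
# happens is purely a routing choice.
fusion_network = [
    ((0, 0), (1, 0)),
    ((1, 1), (2, 0)),
    ((2, 1), (3, 0)),
    ((3, 1), (0, 1)),
]

for a, b in fusion_network:
    assert a[0] != b[0], "each fusion joins photons from distinct resource states"
```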

Blocklets are a very large class of highly nonlocal fusion networks that require only fixed-size resource states and measurements (enjoying the advantages of LDPC codes), but with a recursive – almost fractal-like – layout of the fusion network, which allows them to achieve the highly compact qubit encoding of concatenated protocols. Keeping connectivity low is important – it prevents errors from spreading too much – but it turns out that moving beyond constant-weight checks opens the door to much higher error tolerances and lower footprints, and can be done while still maintaining very limited hardware connectivity.

The key is to think about the fault-tolerant protocol rather than the code. Instead of trying to directly measure the large-weight checks that appear in concatenated codes, the blocklet approach gives a recipe for measuring checks of all sizes using only constant-weight measurements.
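
As a generic illustration of the principle at work here (and emphatically not the specific blocklet recipe), note that a large-weight check operator can be written as a product of small, commuting pieces, so its value can be inferred by multiplying the outcomes of the small measurements. The sketch below tracks only that operator bookkeeping; the genuinely hard part, which the blocklet construction addresses, is arranging the small measurements so that they do not inject or spread errors harmfully.

```python
# Operator bookkeeping only: a weight-6 Z-type check written as a product
# of three weight-2 pieces. Z-type Pauli operators are tracked as binary
# vectors, and multiplying operators corresponds to adding vectors mod 2.
import numpy as np

n = 6  # qubits labelled 0..5

def z_string(qubits):
    """Binary vector marking which of the n qubits carry a Z."""
    v = np.zeros(n, dtype=int)
    v[list(qubits)] = 1
    return v

big_check = z_string(range(n))                   # Z0 Z1 Z2 Z3 Z4 Z5
small_pieces = [z_string([0, 1]), z_string([2, 3]), z_string([4, 5])]

# The three weight-2 pieces multiply to the weight-6 check ...
assert np.array_equal(np.bitwise_xor.reduce(small_pieces), big_check)

# ... so if their measurement outcomes are s1, s2, s3 (each +1 or -1),
# the outcome of the big check is simply their product.
outcomes = [+1, -1, -1]                          # hypothetical measurement results
print("inferred big-check outcome:", np.prod(outcomes))   # prints 1
```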

Crucially, the whole blocklet approach is fully compatible with photonic technologies such as interleaving and active volume compiling – technologies already shown to greatly reduce the overall resource requirements for building a useful quantum computer, the mission of PsiQuantum.

Exotic and powerful features of FBQC include:

  • Even before blocklets it was realized that FBQC can readily make use of vast generalizations of topological codes, so-called fault-tolerant complexes. These complexes can utilize a weird directionality of “time” that doesn’t necessarily match the direction of physical time measured by your watch! Exploring this flexible notion of time can uncover surprising relationships between seemingly distinct codes – for example, revealing that Floquet codes and surface codes are, in fact, equivalent.

  • By delaying photons in long loops of optical fiber we can massively amplify the effective amount of photonic hardware – a bit like if the many copies of yourself you see when you stand between two flat mirrors could all actually be put to useful work! How good would that be? We call this kind of temporal multiplexing interleaving (a rough back-of-envelope sketch follows this list).

  • When you look at a quantum circuit, you see that many qubits are just hanging around waiting for gates to finish on other qubits before they can get involved in the computation again. Keeping logical qubits idle like this is very resource intensive. But by making use of nonlocal connectivity, it is possible to implement time-disrespecting “portals” that excise almost all such idleness! This leads to vast reductions in the size of the circuits required for many quantum algorithms, a technique known as active volume compilation.
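
Here is the promised back-of-envelope sketch of interleaving. All numbers are illustrative assumptions, not specifications of any PsiQuantum system.

```python
# Back-of-envelope sketch of interleaving (illustrative numbers only):
# a fiber delay line keeps photons "in flight", so a single resource-state
# generator can stand in for many copies of itself, one per time bin.

speed_in_fiber_km_per_s = 2.0e5   # light travels at roughly 2/3 of c in fiber
fiber_length_km = 20.0            # assumed length of the delay line
clock_rate_hz = 1.0e9             # assumed resource-state generation rate

delay_s = fiber_length_km / speed_in_fiber_km_per_s
states_in_flight = round(delay_s * clock_rate_hz)

print(f"one generator plus {fiber_length_km:.0f} km of fiber keeps "
      f"~{states_in_flight:,} resource states in flight at once")
# With these numbers, a single piece of hardware serves ~100,000 time bins.
```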

Whither LDPC?

The investigation of the full potential of blocklets is still nascent. However, we already know of simple cases that outperform surface-code LDPC approaches which have undergone decades of optimization. For the photonic architectures we’ve looked at, blocklets have higher thresholds and lower resource counts than LDPC codes.

Recently, there have been exciting developments in LDPC codes that encode high numbers of logical qubits, such as the bivariate bicycle codes. These codes require hardware with qubit connectivity that extends beyond nearest-neighbor on a 2D chip. Such codes are very natural for photonic architectures, where locality is not a constraint, and can offer significant reductions in overhead. However, the footprint savings from such schemes only become available at low physical error rates, due to their lower thresholds compared with the surface code. Blocklet families, on the other hand, offer significantly higher thresholds and give a more rapid path to low-overhead fault tolerance. It would not be too surprising, however, to find that the optimum is some kind of hybrid, in which each approach is put to work at the scales where it operates best.
