It is a remarkable feature of the quantum world – one that at least one future Nobel prizewinner was skeptical could be true – that extremely low-noise logical qubits of a quantum computer can be built up from irredeemably noisy physical qubits undergoing imperfect interactions. Key was the realization that judiciously chosen multi-qubit measurements (so-called parity checks) can reveal information about possible errors without destroying the information being processed by the logical qubits of the computation.
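To make the idea of a parity check concrete, here is a minimal toy sketch (a classical caricature of the three-qubit repetition code, not anything from Litinski's paper): two parity checks pinpoint a single bit-flip error while remaining completely blind to the encoded logical bit.

```python
# Toy sketch: the 3-bit repetition code, simulated at the level of classical
# bit flips. The two parity checks (bit0 XOR bit1, bit1 XOR bit2) locate a
# single flipped bit without ever revealing the encoded logical value.

def encode(logical_bit):
    """Encode one logical bit into three physical bits."""
    return [logical_bit] * 3

def parity_checks(bits):
    """Measure the two parity checks -- the error 'syndrome'."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Use the syndrome to find and undo a single bit-flip error."""
    flip_location = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(parity_checks(bits))
    if flip_location is not None:
        bits[flip_location] ^= 1
    return bits

for logical_bit in (0, 1):
    for error_location in range(3):
        noisy = encode(logical_bit)
        noisy[error_location] ^= 1          # inject one bit-flip error
        syndrome = parity_checks(noisy)     # same syndrome for either logical value
        assert correct(noisy) == encode(logical_bit)
        print(f"logical={logical_bit}, error on bit {error_location}, "
              f"syndrome={syndrome} -> corrected")
```

Notice that the syndrome depends only on where the error struck, not on whether a 0 or a 1 was encoded: that is the sense in which the checks do not disturb the logical information.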
In his recent arXiv preprint, Daniel Litinski has introduced a new way of achieving such fault tolerance. His blocklet approach brings together the best aspects of the two previous paradigms, which were based on concatenated codes and on low-density parity-check (LDPC) codes. He does so by discovering a fault-tolerant way to measure large-weight checks (which offer higher error tolerance) using only constant-weight physical measurements. Blocklets also achieve high rates – meaning a large number of logical qubits can be encoded into a relatively small number of physical qubits.
Two Paradigms: Concatenation vs LDPC
Fault tolerance via code concatenation is mathematically attractive and powerful. It was used by Aharonov and Ben-Or in 1999 to provide the first proof that fault tolerance is in principle possible. More recently, concatenation was used to prove the remarkable result that arbitrarily long fault-tolerant quantum computation with a fixed ratio of physical to logical qubits is theoretically possible. Concatenation starts by designing a slightly better logical qubit from a small number of physical qubits using a suitable error-correcting code. Each of the new logical qubits is then recursively replaced with a similarly encoded version of itself, achieving extremely rapid noise suppression. This sounds simple, but delicate juggling is necessary to ensure that errors do not spread as more and more qubits interact. While the performance of concatenated codes can theoretically be excellent, from a practical perspective one big downside is an explosion of complexity in the multi-qubit measurements required for correction, meaning it is extremely difficult to embed this approach sensibly into the real-world constraints of space and time – the arena in which any machine must actually be assembled!
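To get a feel for why concatenation suppresses noise so quickly, here is a back-of-the-envelope sketch (the error rate and constant below are illustrative assumptions, not numbers from any particular code): if one level of encoding maps a physical error rate p to roughly c·p², then k levels of concatenation suppress errors doubly exponentially once p is below the threshold 1/c.

```python
# Illustrative scaling only: assume one level of concatenation maps an error
# rate p to c * p**2. Then k levels give p_k ≈ (1/c) * (c*p)**(2**k), i.e.
# doubly exponential suppression whenever p < 1/c (the "threshold").

def concatenated_error_rate(p, c, levels):
    """Iterate the assumed level-to-level error map p -> c * p**2."""
    for _ in range(levels):
        p = c * p ** 2
    return p

p_physical = 1e-3   # assumed physical error rate
c = 100.0           # assumed constant; threshold is 1/c = 1e-2

for k in range(5):
    p_k = concatenated_error_rate(p_physical, c, k)
    print(f"levels={k}: logical error rate ≈ {p_k:.1e}")
```

With these assumed numbers, four levels already take the error rate from 10⁻³ to around 10⁻¹⁸ – but each extra level multiplies the number of physical qubits and the complexity of the required measurements.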
Most recent work has therefore focused on fault tolerance achieved through LDPC codes, the most famous of which is Kitaev’s toric code. In this approach, any given physical qubit undergoes only a small (fixed) number of interactions, typically only with nearby qubits. This makes it much simpler to see how errors can be identified before they spread and destroy the underlying logical qubits. It also has beautiful connections to areas of many-body quantum physics, particularly topological quantum field theories. From a practical perspective, it is relatively easy to map the requisite multi-qubit measurements – the so-called “parity checks” – onto a layout of nearest-neighbor qubits on a lattice. Earlier this year, this mapping was used by Google to build an impressive demonstration of a logical qubit with a lower error rate than the physical qubits of which it is composed.
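As a concrete picture of why LDPC checks sit so comfortably on a lattice, the short sketch below enumerates the parity checks of Kitaev's toric code on a small torus (the standard textbook construction, not specific to the Google experiment): every check touches exactly four neighboring qubits, and every qubit participates in exactly four checks.

```python
# Sketch of the toric code's check structure on an L x L torus. Qubits live on
# edges; horizontal edge (x, y, 0) runs from vertex (x, y) to (x+1, y), and
# vertical edge (x, y, 1) runs from vertex (x, y) to (x, y+1).
L = 4

def edge(x, y, direction):
    """Label an edge, wrapping coordinates around the torus."""
    return (x % L, y % L, direction)

z_checks, x_checks = [], []
for x in range(L):
    for y in range(L):
        # Plaquette (Z-type) check: the 4 edges bounding the face at (x, y).
        z_checks.append({edge(x, y, 0), edge(x, y, 1),
                         edge(x, y + 1, 0), edge(x + 1, y, 1)})
        # Vertex (X-type) check: the 4 edges meeting at the vertex (x, y).
        x_checks.append({edge(x, y, 0), edge(x, y, 1),
                         edge(x - 1, y, 0), edge(x, y - 1, 1)})

all_checks = z_checks + x_checks
assert all(len(check) == 4 for check in all_checks)      # constant check weight

qubits = {edge(x, y, d) for x in range(L) for y in range(L) for d in (0, 1)}
for q in qubits:
    assert sum(q in check for check in all_checks) == 4  # constant qubit degree

print(f"{len(qubits)} qubits, {len(all_checks)} checks: every check has weight 4 "
      f"and every qubit sits in exactly 4 checks.")
```

Because each check only ever involves the four qubits around one face or one vertex, the whole measurement schedule can be laid out with purely local hardware – exactly the property that made the surface-code family the workhorse of recent experiments.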
Litinski’s blocklet protocols require only a small (fixed) amount of hardware per parity measurement, just like the topological approach, yet in a way that lets very large numbers of measurement outcomes contribute to some of the parity checks, achieving the attractive noise suppression of concatenated-code approaches. They are also easy to implement in real-world photonic architectures. They really are a “goldi(b)locks” paradigm for fault tolerance!