Episode #3 | December 19, 2025 @ 4:00 PM EST

Quantum Error Correction: Building Reliability from Noise

Guest

Dr. John Preskill (Theoretical Physicist, Caltech)
Announcer The following program features simulated voices generated for educational and technical exploration.
Sam Dietrich Good evening. I'm Sam Dietrich.
Kara Rousseau And I'm Kara Rousseau. Welcome to Simulectics Radio.
Sam Dietrich Tonight we're discussing a problem that feels almost paradoxical—building reliable computation from fundamentally unreliable components. Quantum computers promise exponential speedups for certain problems, but quantum states are extraordinarily fragile. Decoherence, phase errors, bit flips—these happen constantly at rates that would make any classical system useless. The proposed solution is quantum error correction, encoding logical qubits in entangled states of many physical qubits such that errors can be detected and corrected without destroying the quantum information. The question is whether this can actually work at practical scales, or whether we're trying to pump water uphill thermodynamically.
Kara Rousseau And what it means to build abstractions on top of noise. Classical error correction is well understood—redundancy, parity checks, ECC memory. But quantum mechanics forbids simply copying quantum states, and measurement destroys superposition. The theoretical framework exists, but translating theory to physical systems with real error rates and connectivity constraints is a different matter. We're trying to create an abstraction layer that makes noisy qubits look like reliable logical qubits, while operating under constraints that don't exist in classical computing.
Sam Dietrich To explore both the theoretical foundations and practical challenges, we're joined by Dr. John Preskill, Professor of Theoretical Physics at Caltech and a pioneer in quantum error correction and quantum information theory. Dr. Preskill, welcome.
Dr. John Preskill Thank you. It's a pleasure to be here.
Kara Rousseau Let's start with the fundamental constraint. The no-cloning theorem says you can't copy an arbitrary quantum state. Classical error correction relies on redundancy—store three copies, vote on the correct value. Why doesn't that work for quantum information?
Dr. John Preskill The no-cloning theorem is a consequence of the linearity of quantum mechanics. If you could clone an arbitrary quantum state, you could use that to distinguish between non-orthogonal states, which would violate the uncertainty principle. So we can't just make copies and check them against each other. Instead, quantum error correction uses entanglement. You encode one logical qubit into multiple physical qubits in such a way that local errors—errors affecting only a few physical qubits—can be detected through parity measurements without revealing the actual quantum state. The key insight is that you can measure whether an error occurred without measuring what the state is.
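The linearity argument fits in a couple of lines. A minimal sketch, assuming a hypothetical unitary U that clones every state:

```latex
% Assume U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle for every |\psi\rangle.
% Applying U to |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, linearity gives
U\bigl((\alpha|0\rangle + \beta|1\rangle)\otimes|0\rangle\bigr)
    = \alpha\,|00\rangle + \beta\,|11\rangle ,
% while cloning would require the product state
(\alpha|0\rangle + \beta|1\rangle)\otimes(\alpha|0\rangle + \beta|1\rangle)
    = \alpha^{2}|00\rangle + \alpha\beta\,|01\rangle + \alpha\beta\,|10\rangle + \beta^{2}|11\rangle .
% The two agree only when \alpha\beta = 0, so no single U can copy arbitrary states.
```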
Sam Dietrich That distinction is subtle but crucial. You're measuring syndrome information—the presence and location of errors—while preserving the superposition of the logical state. How does the encoding achieve this? What's the structure of these entangled states?
Dr. John Preskill The simplest example is the three-qubit bit-flip code. You encode the logical zero state as three physical qubits all in zero, and logical one as three qubits all in one. If one qubit flips, you can detect it by measuring the parity of pairs of qubits—are they the same or different? Those parity measurements tell you which qubit flipped without revealing whether the logical state was zero or one. Of course, this only protects against bit flips, not phase flips, so you need more sophisticated codes. The Shor code uses nine physical qubits to protect against both types of errors. More modern codes like surface codes use two-dimensional lattices of qubits with geometrically local interactions.
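A minimal simulation of that three-qubit bit-flip code, using plain NumPy state vectors rather than any quantum SDK; the encoding, injected error, and syndrome table follow the description above, and the variable names are purely illustrative:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    """Tensor product of single-qubit operators (first factor = leftmost qubit)."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode one logical qubit a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8                       # arbitrary normalized amplitudes
psi = np.zeros(8, dtype=complex)
psi[0b000] = a
psi[0b111] = b

# Inject a bit flip on one physical qubit (here, qubit 1).
corrupted = kron(I, X, I) @ psi

# Syndrome extraction: the corrupted state is an exact eigenstate of the
# parity checks Z0Z1 and Z1Z2, so the outcomes (+/-1) are deterministic
# and reveal nothing about a and b.
s1 = np.real(corrupted.conj() @ kron(Z, Z, I) @ corrupted)
s2 = np.real(corrupted.conj() @ kron(I, Z, Z) @ corrupted)

# Decode: each single-qubit flip produces a unique syndrome pattern.
syndrome_to_fix = {(1, 1): None,             # no error
                   (-1, 1): kron(X, I, I),   # flip on qubit 0
                   (-1, -1): kron(I, X, I),  # flip on qubit 1
                   (1, -1): kron(I, I, X)}   # flip on qubit 2
fix = syndrome_to_fix[(int(round(s1)), int(round(s2)))]
recovered = corrupted if fix is None else fix @ corrupted

print("syndrome:", (int(round(s1)), int(round(s2))))
print("recovered == original:", np.allclose(recovered, psi))
```

Whatever values a and b take, the syndrome comes out the same, which is the "detect the error without measuring the state" property in action.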
Kara Rousseau Surface codes are interesting because they map naturally to physical architectures—qubits on a grid with nearest-neighbor couplings. But they have significant overhead. How many physical qubits does it take to encode one logical qubit in a surface code?
Dr. John Preskill It depends on the distance of the code, which determines the level of error protection. A distance-d surface code uses roughly d-squared physical qubits and can correct up to about d over two errors. For useful error suppression, you might need distance thirteen or more, which means hundreds of physical qubits per logical qubit. That's expensive, but the advantage is that all operations are geometrically local, which is crucial for physical implementation.
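The arithmetic behind those figures, assuming the standard rotated surface-code layout of d-squared data qubits plus d-squared minus one measurement ancillas:

```python
# Overhead of a distance-d surface code (rotated layout assumed).
for d in (3, 5, 13, 25):
    data = d * d                    # data qubits
    total = 2 * d * d - 1           # data plus syndrome-measurement ancillas
    correctable = (d - 1) // 2      # guaranteed-correctable errors
    print(f"d={d:2d}: {total:5d} physical qubits per logical, "
          f"corrects up to {correctable} errors")
```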
Sam Dietrich Hundreds to one is a brutal overhead ratio. That means a thousand logical qubits requires hundreds of thousands of physical qubits, which is beyond current systems. What error rates do physical qubits need to achieve for error correction to provide net benefit?
Dr. John Preskill There's a threshold theorem that says if your physical error rate is below a certain threshold, you can achieve arbitrarily low logical error rates by using larger codes and more rounds of error correction. For surface codes with realistic assumptions about error models, the threshold is around one percent. Current superconducting qubits are approaching that regime—the best systems have error rates of a few tenths of a percent. But being near threshold isn't enough. You need to be well below threshold for the overhead to be manageable. If you're just barely below threshold, you need enormous codes to get meaningful error suppression.
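To see why being barely below threshold is so costly, here is a rough sketch using a heuristic scaling often quoted for the surface code, p_L ~ A(p/p_th)^((d+1)/2); the prefactor, threshold, and target below are illustrative assumptions, not measured values:

```python
# How large a code distance do you need to hit a target logical error rate,
# as a function of the physical error rate p? Heuristic scaling only.
A, p_th, target = 0.1, 1e-2, 1e-12   # assumed prefactor, threshold, target

def distance_needed(p):
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2                        # surface-code distances are odd
    return d

for p in (5e-3, 2e-3, 1e-3):          # physical error rates
    d = distance_needed(p)
    print(f"p={p:.0e}: distance {d}, ~{2*d*d-1} physical qubits per logical")
```

Under these assumptions, improving the physical error rate from half the threshold to a tenth of it shrinks the code from roughly ten thousand physical qubits per logical qubit to under a thousand, which is the sense in which "well below threshold" pays for itself.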
Kara Rousseau So there's a gap between demonstrating that error correction works in principle and achieving the parameters needed for practical computation. What determines whether we can cross that gap? Is it fundamentally about improving qubit quality, or are there algorithmic and architectural strategies that reduce the requirements?
Dr. John Preskill Both matter. Better qubits reduce the overhead, but there are also smarter codes and compilation strategies. For instance, not all qubits in a computation need the same level of protection. You might use high-distance encoding for qubits that persist for long times, but lower overhead for qubits used briefly in intermediate calculations. There's also research on codes tailored to specific error models—if your system has biased noise, where certain errors are more common than others, you can design codes that exploit that bias. And fault-tolerant protocols for implementing gates on encoded qubits are improving. The original protocols had huge overhead; modern versions are much more efficient.
Sam Dietrich Let's talk about the error correction cycle. You measure syndromes, determine what errors occurred, apply corrections. How fast does this need to happen relative to the decoherence time? If errors accumulate faster than you can correct them, the whole scheme fails.
Dr. John Preskill Exactly. The syndrome extraction and correction cycle has to be faster than the error accumulation rate. Typically you want the cycle time to be much shorter than the coherence time of the physical qubits. For superconducting qubits with coherence times of hundreds of microseconds, you might run error correction cycles every few microseconds. That's feasible with current control electronics. But it means the classical control system has to process syndrome data and compute corrections in real time, which is a significant engineering challenge. You're running a classical feedback loop at microsecond timescales.
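Rough numbers for that timing budget, with the coherence and cycle times assumed at the values just mentioned:

```python
import math

t_coherence = 200e-6   # assumed qubit coherence time: 200 microseconds
t_cycle = 1e-6         # assumed syndrome-extraction cycle: 1 microsecond

cycles = t_coherence / t_cycle
p_idle = 1 - math.exp(-t_cycle / t_coherence)   # idle decoherence per cycle

print(f"{cycles:.0f} correction cycles per coherence time")
print(f"~{p_idle:.2%} idle error per data qubit per cycle")
```

The roughly half-percent idle error per cycle already sits near the threshold figure quoted earlier, which is why cycle speed matters as much as raw coherence.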
Kara Rousseau That classical-quantum interface is an interesting boundary. The quantum processor is generating syndromes continuously, and the classical system has to decode those syndromes—which is a computationally hard problem—and feed corrections back. How does that scaling work? As you add more qubits, does the classical decoding become a bottleneck?
Dr. John Preskill It can. For surface codes, the decoding problem reduces to minimum-weight matching in a graph, which can be solved efficiently, but with cycle times of a few microseconds you have to do it hundreds of thousands of times per second. There's active research on hardware decoders that can keep pace. Some approaches use FPGAs or custom ASICs. Others are looking at approximate decoders that trade optimality for speed. The key is that the decoding has to keep pace with the quantum operations, and as systems scale to millions of physical qubits, that becomes a serious computational load.
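A toy version of that graph-matching reduction, using the networkx matching routine; the defect coordinates are invented, and a real decoder would add boundary nodes and edge weights tuned to the error model:

```python
import itertools
import networkx as nx

# Hypothetical syndrome "defects" on a surface-code lattice, as (row, col).
defects = [(0, 1), (0, 2), (3, 3), (4, 1)]

G = nx.Graph()
for (i, u), (j, v) in itertools.combinations(enumerate(defects), 2):
    dist = abs(u[0] - v[0]) + abs(u[1] - v[1])  # Manhattan distance on lattice
    G.add_edge(i, j, weight=-dist)              # negate: max-weight = min-weight

# maxcardinality=True forces every defect to be paired; the matching that
# minimizes total path length is the most likely error pattern.
pairs = nx.max_weight_matching(G, maxcardinality=True)
for i, j in pairs:
    print(f"pair defects {defects[i]} <-> {defects[j]}")
```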
Sam Dietrich I want to push on the thermodynamics. Quantum error correction feels like it's fighting entropy—you're taking a system that naturally decoheres and trying to keep it in a coherent state indefinitely. Doesn't that require continuous energy input? What's the thermodynamic cost of running an error-corrected quantum computer?
Dr. John Preskill You're absolutely right that it requires energy. Measurement and reset operations are thermodynamically irreversible—they generate heat. Every syndrome measurement involves coupling the qubits to ancillas, measuring the ancillas, and resetting them, all of which dissipates energy. For superconducting qubits at millikelvin temperatures, the cooling power available is limited. As you scale up, the total heat load from error correction could become a limiting factor. That's one reason why people are interested in passive error correction schemes or topological qubits that have built-in protection without active correction, though those have their own challenges.
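An order-of-magnitude check on the thermodynamic floor, assuming a hypothetical million-ancilla machine running megahertz reset cycles:

```python
# Landauer's principle puts the minimum cost of erasing one bit (say,
# resetting an ancilla after a syndrome measurement) at k_B * T * ln 2.
# The machine scale and cycle rate below are assumptions for illustration.
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 0.02                            # mixing-chamber temperature: 20 mK
per_reset = k_B * T * math.log(2)   # Landauer bound per bit erased

n_ancillas = 1e6                    # hypothetical million-ancilla machine
resets_per_second = 1e6             # one reset per ancilla per microsecond
floor_watts = per_reset * n_ancillas * resets_per_second

print(f"Landauer floor per reset: {per_reset:.2e} J")
print(f"Landauer floor at scale:  {floor_watts:.2e} W")
# Roughly 2e-13 W, far below the tens of microwatts a dilution refrigerator
# can absorb at this temperature. The practical heat load comes from control
# wiring, amplifiers, and measurement hardware, which dissipate many orders
# of magnitude above this bound.
```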
Kara Rousseau Topological qubits encode information in global properties of the system that are robust to local perturbations. That sounds appealing, but it seems to shift the difficulty from error correction to qubit fabrication and control. What's the state of topological quantum computing?
Dr. John Preskill It's still largely experimental. The best-known approach uses Majorana zero modes in certain superconducting systems, but reliably creating and manipulating Majorana qubits has proven difficult. Microsoft has invested heavily in this direction, but progress has been slower than hoped. The appeal is that topological protection could give you very low error rates without active correction, but you still need error correction for universal quantum computation, just at a higher level. It's not a complete solution, more like moving where the complexity lives.
Sam Dietrich Let's assume we solve the engineering challenges and build a fault-tolerant quantum computer with millions of physical qubits encoding thousands of logical qubits. What can we actually do with it? What problems justify this enormous overhead?
Dr. John Preskill The canonical applications are quantum simulation, cryptanalysis, and optimization. Simulating quantum systems—understanding molecular structures, materials properties, chemical reactions—is exponentially hard for classical computers but natural for quantum computers. That has implications for drug discovery, battery design, catalysis. Shor's algorithm factors large numbers in polynomial time, which no known classical algorithm can do, and that breaks RSA encryption. And there are quantum algorithms for certain optimization and machine learning problems, though the advantage there is less clear. The challenge is that many of these algorithms require thousands of logical qubits and millions of gates, which is still far beyond what we can do.
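For reference, the number-theoretic skeleton of Shor's algorithm on a toy modulus; only the order-finding step needs a quantum computer, and here it is done by classical brute force:

```python
# Factoring reduces to finding the multiplicative order r of a base a mod N;
# the quantum speedup lives entirely in the order-finding step.
from math import gcd

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found by classical brute force."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7                      # toy instance; requires gcd(a, N) == 1
r = order(a, N)                   # r = 4 here
y = pow(a, r // 2, N)
if r % 2 == 0 and y != N - 1:     # the "lucky case" Shor's algorithm needs
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    print(f"order({a} mod {N}) = {r}; factors: {p} x {q}")   # 3 x 5
```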
Kara Rousseau So we're in a regime where error correction is starting to work, but the systems aren't large enough for the algorithms that motivate building quantum computers in the first place. How long until we cross that threshold? Is this a matter of incremental engineering or are there fundamental obstacles still to overcome?
Dr. John Preskill It's hard to predict. The optimistic view is that we're a decade away from fault-tolerant quantum computers that can solve useful problems classical computers can't. The pessimistic view is that there are unknown obstacles—maybe error rates won't improve enough, maybe the classical control overhead becomes unmanageable, maybe the applications aren't as valuable as we think. I'm cautiously optimistic. We've made remarkable progress in the last decade, and there's a clear engineering roadmap. But this is one of those areas where the last mile is the hardest.
Sam Dietrich There's also the question of whether classical computers keep improving in ways that erode the quantum advantage. If classical algorithms and hardware get better, the bar for quantum utility rises. Are we in a race where both sides are advancing?
Dr. John Preskill Absolutely. Classical algorithms for simulating quantum systems have improved dramatically. Problems we thought required quantum computers ten years ago can now be solved classically, at least for modest system sizes. That's actually healthy—it forces us to be precise about where quantum advantage lies. But there are problems, like large-scale quantum chemistry or certain optimization tasks, where we have strong theoretical reasons to believe quantum computers will win. The question is whether those problems are important enough to justify the investment.
Kara Rousseau And whether the investment is sustainable. Building quantum computers requires specialized expertise, expensive infrastructure, and tolerance for high failure rates. How do we justify that when classical computing continues to improve and deliver value?
Dr. John Preskill The long-term bet is that there are problems of significant scientific and commercial value that only quantum computers can solve. If that's true, the investment is worthwhile. If it turns out classical approaches can handle everything we care about, then quantum computing becomes a niche technology or an academic curiosity. We won't know for sure until we build fault-tolerant systems and try to solve real problems. That's what makes this field exciting and risky.
Sam Dietrich Dr. Preskill, this has been illuminating. Thank you for walking us through the challenges and prospects of quantum error correction.
Dr. John Preskill Thank you for having me. These are important questions.
Kara Rousseau That's our program for tonight. Until tomorrow, remember that reliability is an abstraction built from noise.
Sam Dietrich And that the hardest engineering problems are the ones fighting thermodynamics. Good night.
Sponsor Message

QuantLock Cryptographic Appliances

The quantum threat to public-key cryptography is real and approaching. QuantLock appliances provide post-quantum secure key exchange using lattice-based cryptography, hardened against both classical and quantum adversaries. Drop-in replacement for existing PKI infrastructure with backward compatibility for legacy systems. Hardware random number generation from quantum shot noise ensures key material is drawn from a physically unpredictable entropy source. FIPS-validated modules, tamper-evident packaging, and automated key rotation protocols. When RSA becomes obsolete, your data remains protected. QuantLock—because cryptographic agility is security hygiene. Shipping now to enterprise and government customers.