Announcer
The following program features simulated voices generated for educational and technical exploration.
Sam Dietrich
Good evening. I'm Sam Dietrich.
Kara Rousseau
And I'm Kara Rousseau. Welcome to Simulectics Radio.
Sam Dietrich
Tonight we're examining the physics of transistor scaling—specifically, whether Moore's Law represents an engineering achievement or a temporary exploitation of quantum mechanics that must eventually end. We're at three nanometer process nodes now. Gate oxides are a few atoms thick. Tunneling currents leak through barriers that classical physics says should be impermeable. The question is whether we're approaching fundamental physical limits or merely the limits of our current fabrication paradigms.
Kara Rousseau
And whether those limits matter. From a software perspective, we've been hearing about the death of Moore's Law for twenty years, yet somehow performance keeps improving. Maybe the relevant question isn't whether transistors can shrink further, but whether shrinking transistors remains the most effective path to better computation.
Sam Dietrich
To help us understand what's actually happening at the silicon level, we're joined by Dr. Mark Bohr, Senior Fellow at Intel Corporation, who has spent decades pushing the boundaries of transistor scaling. Dr. Bohr, welcome.
Dr. Mark Bohr
Thank you for having me.
Kara Rousseau
Let's start with the fundamental constraint. What actually stops us from making transistors smaller? Is it quantum tunneling, variability, heat dissipation, or something else entirely?
Dr. Mark Bohr
All of those factors matter, but they manifest differently at different scales. At twenty nanometers, variability dominated—when you have only hundreds of dopant atoms in a channel, statistical fluctuations between devices become significant. At seven nanometers and below, leakage currents from tunneling become the primary concern. But the answer isn't just physics—it's economics. Each new node requires exponentially more expensive fabrication equipment. A modern EUV lithography tool costs over one hundred million dollars. The question becomes whether the performance gains justify the capital investment.
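Dr. Bohr's point about dopant statistics can be sketched numerically. In the toy Python simulation below, every number is an assumption for illustration: we treat the dopant count per channel as Poisson-distributed around a mean of 100 atoms, so the relative device-to-device spread should come out near 1/sqrt(100) = 10%.

```python
# Toy illustration (not a foundry model): random dopant fluctuation.
# With ~N dopant atoms per channel, the count in any one device is
# roughly Poisson, so the relative spread is about 1/sqrt(N).
import math
import random

random.seed(0)

def poisson_sample(lam):
    """Knuth's method: count uniform draws until their running
    product dips below exp(-lam)."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while prod > limit:
        k += 1
        prod *= random.random()
    return k - 1

# Each simulated channel holds ~100 dopant atoms on average.
counts = [poisson_sample(100) for _ in range(2000)]
mean = sum(counts) / len(counts)
sigma = math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts))
rel = sigma / mean
print(f"mean dopants: {mean:.1f}, relative spread: {rel:.3f} "
      f"(1/sqrt(100) = {1 / math.sqrt(100):.3f})")
```

A 10% spread in dopant count translates into device-to-device threshold-voltage variation that the circuit design must absorb, which is why variability dominated at those nodes.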
Sam Dietrich
That economic dimension is critical. We sometimes talk about physical limits as if they're hard boundaries, but really they're cost-benefit curves. Tunneling doesn't suddenly make transistors impossible—it makes them leakier, which increases power consumption, which requires more sophisticated power management, which adds design complexity and cost. It's death by a thousand complications rather than a single insurmountable barrier.
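The "leakier, not impossible" point has a simple quantitative face: in the WKB approximation, tunneling transmission falls off exponentially with barrier thickness. The sketch below is an order-of-magnitude illustration only; the barrier height and the use of the bare electron mass (rather than an effective mass) are simplifying assumptions, not device data.

```python
# Rough WKB estimate (an illustrative sketch, not a device model):
# transmission through a rectangular barrier falls off as
#   T ~ exp(-2 * kappa * d),  kappa = sqrt(2 * m * phi) / hbar
import math

HBAR = 1.0545718e-34   # J*s
M_E = 9.1093837e-31    # bare electron mass, kg (effective mass ignored)
EV = 1.6021766e-19     # J per eV

def tunneling_prob(thickness_nm, barrier_ev=3.1):
    """Order-of-magnitude WKB transmission through an SiO2-like barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

t2 = tunneling_prob(2.0)   # ~2 nm oxide
t1 = tunneling_prob(1.0)   # ~1 nm oxide
print(f"T(2 nm) ~ {t2:.1e}")
print(f"T(1 nm) ~ {t1:.1e}")
print(f"halving the oxide multiplies tunneling by ~{t1 / t2:.0e}")
```

The exponential means each atomic layer removed from the oxide multiplies leakage by orders of magnitude, which is exactly the "death by a thousand complications" dynamic: the device still works, but power management has to soak up the consequences.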
Kara Rousseau
So the limit is as much organizational as physical. Can a single company afford to develop the next node? Can they find enough customers who need that level of performance to recoup the investment? That's a different kind of constraint than 'physics says no.'
Dr. Mark Bohr
Exactly right. And it's changing the industry structure. We've gone from dozens of companies manufacturing leading-edge chips to essentially three—TSMC, Samsung, and Intel. The capital requirements create natural monopolies. Meanwhile, most computing doesn't need the most advanced nodes. Your smartwatch doesn't require three nanometer transistors. The question is whether specialized architectures on older nodes might be more cost-effective than general-purpose processors on bleeding-edge nodes.
Sam Dietrich
This connects to something I've been thinking about regarding the nature of improvement itself. For decades, we improved computers by improving transistors. Faster switching, lower power, smaller area—better transistors directly meant better computers. But that direct mapping is breaking down. You can have better transistors that don't yield better systems if you can't manage the complexity of using them effectively.
Kara Rousseau
Which suggests the frontier is moving from device physics to system architecture. If I can't make individual transistors much better, I need to think about how to organize billions of them more intelligently. That's a software problem as much as a hardware problem.
Dr. Mark Bohr
I'd push back slightly on that framing. We are still making transistors better—FinFETs, gate-all-around transistors, these are genuine innovations in device structure that improve electrostatic control even as dimensions shrink. But you're right that the gains are smaller and harder-won than they were in the golden age of scaling. And increasingly, the innovation is in the third dimension. If you can't shrink in the plane, you stack vertically. Heterogeneous integration, chiplets, 3D stacking of memory and logic—these are architectural innovations enabled by advanced packaging, not just transistor physics.
Sam Dietrich
The thermal implications of 3D integration are fascinating and terrifying. You're putting more heat sources closer together, but your primary cooling surface—the top of the chip—hasn't grown. Heat diffuses through silicon at a finite rate. At some point, you create thermal gradients that cause different parts of the chip to operate at different temperatures, which changes their electrical characteristics, which breaks timing assumptions that the design relies on. It's a whole new class of physical constraints.
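The temperature-breaks-timing mechanism can be sketched with a deliberately crude model. Every number below is invented for illustration: we assume gate delay scales inversely with carrier mobility, take mobility to fall roughly as T^-1.5, and give a critical path 100 ps of slack when timed at 300 K.

```python
# Toy model (illustrative assumptions, not a signoff flow): gate delay
# is taken to scale as T^1.5, following a rough mobility ~ T^-1.5 rule.
def delay_at(temp_k, delay_ps_at_300k, exponent=1.5):
    return delay_ps_at_300k * (temp_k / 300.0) ** exponent

path_delay_300k = 900.0   # ps, critical path timed at 300 K
clock_period = 1000.0     # ps, i.e. 100 ps of margin at 300 K

for temp in (300.0, 330.0, 360.0):
    d = delay_at(temp, path_delay_300k)
    slack = clock_period - d
    print(f"{temp:.0f} K: delay {d:6.1f} ps, slack {slack:+6.1f} ps")
```

Under these toy assumptions, a path that closes timing comfortably at 300 K already violates its clock period in a 330 K hot spot, which is why nonuniform temperature is a timing problem and not just a cooling problem.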
Kara Rousseau
And that's where abstraction starts to leak. We tell programmers that they're writing for an abstract machine where all operations take predictable time. But if temperature variations make cache latencies unpredictable, suddenly the abstraction doesn't hold. Do we expose that to software, or do we try to hide it in hardware?
Dr. Mark Bohr
There's always been a tension between what hardware promises and what it actually delivers. The instruction set architecture is a contract, but implementations vary. Clock speeds change dynamically based on temperature and power. Out-of-order and speculative execution mean instructions don't actually execute in the order the program specifies, and sometimes execute when they shouldn't have at all. We've hidden enormous complexity behind the abstraction of a sequential machine model.
Sam Dietrich
Which works until it doesn't. Spectre and Meltdown are perfect examples: decades of microarchitectural optimization assumed speculative state could never become visible to software, and then it turned out it can, through cache-timing side channels. The abstraction was leakier than we realized.
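The mechanism behind that leak can be illustrated with a toy simulation. Nothing below involves real speculation or real hardware; the hit/miss latencies, the 256-line "cache," and the secret value are all invented to show the principle: a discarded computation still leaves a timing-measurable footprint.

```python
# Toy simulation of a cache-timing covert channel. The latencies and
# the "secret" are fabricated for the sketch; no real microarchitecture
# is modeled.
CACHE_HIT_NS, CACHE_MISS_NS = 10, 100

class ToyCache:
    def __init__(self):
        self.lines = set()

    def access(self, line):
        """Return a simulated latency; the access also fills the line."""
        latency = CACHE_HIT_NS if line in self.lines else CACHE_MISS_NS
        self.lines.add(line)
        return latency

secret = 42
cache = ToyCache()

# "Transient" step: a speculatively executed load touches a line whose
# index depends on the secret. The architectural result is discarded,
# but the cache-state change survives.
cache.access(secret)

# Recovery step: time every line; the one warm line reveals the secret.
timings = {line: cache.access(line) for line in range(256)}
recovered = min(timings, key=timings.get)
print(f"recovered secret index: {recovered}")
```

The point of the toy is that correctness of the architectural state is irrelevant: the side channel lives entirely in state the ISA contract never mentions.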
Kara Rousseau
That raises a broader question about the sustainability of complexity. We keep adding layers of optimization, each one hiding the layer below. At some point, does the system become too complex to reason about? Too many emergent interactions between features that were designed independently?
Dr. Mark Bohr
This is where formal verification becomes essential. You can't just test your way to confidence in a modern processor—the state space is too large. You need mathematical proofs that certain properties hold. But formal verification has its own scaling problems. Verifying a complex out-of-order execution core is enormously difficult. The tools exist but they're expensive to apply.
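The core idea behind "you can't test your way to confidence" is exhaustive exploration of the state space rather than sampling it. Below is a minimal explicit-state model checker over an invented two-way arbiter; the model, its transition rules, and the safety property are all assumptions for the sketch, not any real processor block.

```python
# Minimal explicit-state model checker (a sketch of the idea, not an
# industrial tool): breadth-first search over every reachable state,
# checking a safety property in each one.
from collections import deque

def successors(state):
    """All next states of a toy round-robin arbiter.
    state = (req0, req1, grant0, grant1, prio)"""
    req0, req1, g0, g1, prio = state
    nxt = []
    # Requesters may raise or drop their request lines at any step.
    for r0 in (0, 1):
        for r1 in (0, 1):
            # The arbiter grants at most one requester, favoring
            # `prio`, and flips priority after a grant.
            if r0 and (prio == 0 or not r1):
                nxt.append((r0, r1, 1, 0, 1))
            elif r1:
                nxt.append((r0, r1, 0, 1, 0))
            else:
                nxt.append((r0, r1, 0, 0, prio))
    return nxt

def check(initial, invariant):
    """Return (True, states_explored) if the invariant holds in every
    reachable state, else (False, counterexample_state)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return False, s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, len(seen)

ok, info = check((0, 0, 0, 0, 0),
                 lambda s: not (s[2] and s[3]))  # never grant both
print("mutual exclusion holds:", ok, "| states explored:", info)
```

This works because the toy has a handful of reachable states; the scaling problem Dr. Bohr describes is that each added flip-flop can double the state space, so an out-of-order core's reachable set is astronomically beyond brute-force enumeration and needs symbolic techniques instead.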
Sam Dietrich
And verification can only prove what you specify. If your specification doesn't account for side channels or emergent interactions between features, the formal proof doesn't help. The correctness is relative to the model, not to physical reality.
Kara Rousseau
Which brings us back to the limits question from a different angle. Maybe the limit isn't how small we can make transistors but how complex we can make systems while still understanding their behavior. There's a coordination problem—designing chips requires thousands of engineers working on different subsystems that interact in subtle ways. At what scale does that become unmanageable?
Dr. Mark Bohr
The industry has dealt with this through modularity and standardization. Standard cell libraries, IP blocks, established interfaces—these allow teams to work independently on components that integrate predictably. But you're right that there are limits. Each new process node requires redesigning those standard cells, revalidating IP blocks, updating design rules. The ecosystem has to co-evolve with the technology.
Sam Dietrich
What about alternative materials? For decades we've assumed silicon, but there are other semiconductors—gallium nitride, silicon carbide, various III-V compounds. Do those offer escape routes from silicon's limitations?
Dr. Mark Bohr
For specific applications, absolutely. Gallium nitride is excellent for power electronics and RF applications. But for digital logic, silicon has enormous advantages—mature fabrication processes, excellent oxide interface properties, abundant supply. Moving to a different material means rebuilding the entire manufacturing ecosystem. That's not impossible, but the switching costs are enormous. Silicon won't be displaced unless something offers dramatically better performance, and nothing on the horizon does.
Kara Rousseau
This is one of those cases where network effects and path dependence dominate. We're locked into silicon not because it's optimal in some absolute sense, but because we've invested trillions of dollars learning how to manufacture with it. The best material is the one we know how to use.
Sam Dietrich
Let's talk about quantum computing. There's a narrative that quantum computers will replace classical computers because they can solve certain problems exponentially faster. Is that a realistic prospect, or are quantum and classical computing complementary technologies serving different niches?
Dr. Mark Bohr
Almost certainly complementary. Quantum computers are excellent for specific problems—factoring, certain optimization tasks, quantum simulation. But they're not general-purpose replacements for classical computers. They're also extraordinarily difficult to build and operate. The leading qubit technologies demand near-absolute-zero temperatures, exquisite isolation from environmental noise, extensive error correction. For the foreseeable future, quantum computers will be specialized accelerators for classical machines, not standalone replacements.
Kara Rousseau
Which fits a broader pattern—specialization over generalization. If we can't make general-purpose processors much faster, we build domain-specific accelerators for machine learning, cryptography, signal processing. The system becomes heterogeneous, which creates new software challenges. How do you program a machine with ten different types of cores?
Dr. Mark Bohr
That's the critical question. Hardware innovation has outpaced software tools. We can build these heterogeneous systems, but programming them efficiently requires explicit management of where computation happens, how data moves between different types of cores. The abstractions that worked for homogeneous multicore don't extend cleanly.
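One way to frame the programming problem is as a placement decision behind a uniform task API. The sketch below is a hedged illustration only: the core types, relative speeds, and flat data-movement penalty are all invented, and real heterogeneous runtimes face far richer cost models than this.

```python
# Hedged sketch: hiding heterogeneity behind a uniform "place this
# task" API. Core types, speeds, and the scheduling policy are all
# invented for illustration.
from dataclasses import dataclass

@dataclass
class Core:
    kind: str
    speed: dict  # relative throughput per task kind; 0 = cannot run it

CORES = [
    Core("cpu", {"matmul": 1.0, "fft": 1.0, "crypto": 1.0}),
    Core("gpu", {"matmul": 8.0, "fft": 4.0, "crypto": 0.0}),
    Core("aes", {"matmul": 0.0, "fft": 0.0, "crypto": 20.0}),
]

def place(task_kind, data_transfer_cost=2.0):
    """Pick the core with the best throughput after a flat penalty for
    moving data off the CPU (a deliberately crude cost model)."""
    def effective(core):
        s = core.speed.get(task_kind, 0.0)
        return s if core.kind == "cpu" else s / data_transfer_cost
    best = max(CORES, key=effective)
    return best.kind if effective(best) > 0 else None

for kind in ("matmul", "fft", "crypto"):
    print(f"{kind:6s} -> {place(kind)}")
```

Even this crude model shows why the abstractions don't extend cleanly from homogeneous multicore: the right answer depends on data movement as much as raw throughput, and that tradeoff is invisible in a model where every core is interchangeable.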
Sam Dietrich
And this is where the conversation loops back to limits. Maybe the binding constraint isn't physics or economics but human cognition. Our ability to design and program systems that are complex enough to be useful but simple enough to be correct. That's a limit that doesn't yield to better fabrication technology.
Kara Rousseau
Though it might yield to better abstractions. If we can find the right conceptual frameworks, the right programming models, we might be able to manage more complexity than we currently can. That's an open problem, not a hard limit.
Sam Dietrich
Dr. Bohr, this has been enlightening. Thank you for joining us.
Dr. Mark Bohr
My pleasure. Thank you both.
Kara Rousseau
That's our program for tonight. Until tomorrow, question your abstractions.
Sam Dietrich
And respect the physics. Good night.