Episode #6 | January 6, 2026 @ 6:00 PM EST

The Computational Fabric of Everything

Guest

Stephen Wolfram (Scientist, Founder of Wolfram Research)
Announcer The following program features simulated voices generated for educational and philosophical exploration.
Alan Parker Good evening. I'm Alan Parker.
Lyra McKenzie And I'm Lyra McKenzie. Welcome to Simulectics Radio.
Alan Parker Tonight we examine a radical hypothesis about the nature of reality—that the universe is fundamentally computational, governed by simple rules analogous to cellular automata. This perspective suggests that complexity emerges from simple underlying mechanisms, that computation is more fundamental than mathematics, and that physical law might be discovered through computational exploration rather than derived from first principles.
Lyra McKenzie It's a vision that collapses the distinction between simulation and reality, between artificial and natural computation. If the universe runs on algorithmic rules, then prediction becomes a question of computational resources rather than theoretical insight. There are implications for determinism, for the limits of science, for what it means to explain anything at all.
Alan Parker Our guest is Stephen Wolfram, scientist, entrepreneur, and founder of Wolfram Research. He's the creator of Mathematica and Wolfram|Alpha, and author of 'A New Kind of Science,' which argues that simple programs can generate the complexity we observe in nature. His recent work on the Wolfram Physics Project proposes that spacetime itself emerges from a hypergraph rewriting system. Stephen, welcome.
Stephen Wolfram Thank you for having me. These are questions I've spent decades exploring.
Lyra McKenzie Let's start with cellular automata. What are they, and why did they captivate you?
Stephen Wolfram A cellular automaton is a grid of cells that evolve according to simple rules based on the states of neighboring cells. The canonical example is the one-dimensional cellular automaton where each cell is black or white, and its next state depends on its current state and the states of its two neighbors. Since each of the eight possible three-cell neighborhoods maps independently to black or white, there are 2^8 = 256 possible rules for this system. What fascinated me was that some of these rules—like Rule 30—produce behavior that looks utterly random despite the underlying determinism. Simple rules generate irreducible complexity.
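What Rule 30 does is easy to see in a few lines of code. The following is a minimal sketch, not anything from the broadcast: it assumes Python, wrapped boundaries, and Wolfram's standard numbering convention, in which the eight neighborhood outcomes are read as the bits of the rule number.

```python
def eca_step(cells, rule=30):
    """One synchronous step of an elementary cellular automaton.

    Wolfram numbering: the neighborhood (left, self, right), read as a
    3-bit number, selects one bit of `rule` as the cell's next state.
    Boundaries wrap around."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# A single black cell seeds Rule 30's famously random-looking pattern.
row = [0] * 31 + [1] + [0] * 31
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = eca_step(row, rule=30)
```

Starting from a single black cell, one edge of the pattern stays regular while the interior dissolves into the apparently random texture Wolfram describes.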
Alan Parker Irreducible complexity—meaning you cannot predict the system's behavior without running the computation step by step. There's no shortcut, no closed-form solution. What does this imply about prediction and scientific understanding?
Stephen Wolfram It implies that computational irreducibility is ubiquitous. For many systems, there is no way to predict future states except by explicit simulation. This fundamentally limits science because prediction requires computational work comparable to what the system itself performs as it evolves. You cannot jump ahead. This contradicts the traditional scientific goal of finding equations that let you calculate outcomes without simulating every intermediate step.
Lyra McKenzie But science has succeeded for centuries by finding those equations—Newton's laws, Maxwell's equations, quantum mechanics. Are you saying this success is restricted to special cases?
Stephen Wolfram Exactly. We've historically studied systems that are computationally reducible—systems where shortcuts exist, where mathematical analysis works. But most possible rules generate irreducible behavior. We've been looking under the streetlight where the light is good. Once you survey the computational universe systematically, you find that irreducibility is the norm, reducibility the exception.
Alan Parker You've argued that exploring the space of all possible programs reveals principles more fundamental than traditional mathematics. Explain this computational approach to understanding nature.
Stephen Wolfram Traditional science starts with mathematical equations and derives consequences. But mathematics itself is just one formal system among many. Computation is more general—it encompasses all possible rules, all possible transformations. By systematically exploring simple programs, we can discover which ones produce behavior similar to physical phenomena. This is computational discovery rather than mathematical derivation.
Lyra McKenzie This sounds empirical rather than theoretical. You're saying we should search the space of programs and see what they do, rather than reason from first principles about what the rules must be.
Stephen Wolfram Right. The principle of computational equivalence suggests that almost all processes that are not obviously simple achieve the same level of computational sophistication. Whether it's a cellular automaton, a biological organism, or a physical system, once past that minimal threshold they're all capable of universal computation. This implies that detailed features of systems are contingent—they depend on the specific rule—but the overall computational character is generic.
Alan Parker The principle of computational equivalence is striking. It suggests that a simple cellular automaton is, in some sense, as computationally powerful as a human brain or a supercomputer. What evidence supports this?
Stephen Wolfram Systems like Rule 110 have been proven to be Turing complete—they can perform any computation that any computer can perform, given the right initial conditions. This demonstrates that computational universality emerges easily from simple rules. You don't need elaborate mechanisms or special ingredients. Computational sophistication is almost inevitable once you have even modest complexity.
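The result Wolfram is citing is Matthew Cook's proof that Rule 110 is Turing complete; the construction encodes computations in collisions between glider-like structures moving over a periodic background. The snippet below is only a sketch for watching such localized structures form, with a random initial condition standing in for Cook's carefully prepared ones; the stepping function repeats the earlier sketch.

```python
import random

def eca_step(cells, rule=110):
    # Same elementary-CA update as the earlier sketch, wrapped boundaries.
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

random.seed(0)
row = [random.randint(0, 1) for _ in range(72)]
for _ in range(36):
    print("".join(".#"[c] for c in row))
    row = eca_step(row, rule=110)  # localized structures drift and collide
```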
Lyra McKenzie But Turing completeness is about what's possible in principle, not what's practical. A cellular automaton might be universal in theory but impossibly slow in practice. Doesn't this limit the significance of computational equivalence?
Stephen Wolfram Efficiency matters for engineering, but the principle is about fundamental capabilities. The point is that nature doesn't require special mechanisms to produce complexity. Simple rules suffice. This means that when we observe complex phenomena—turbulence, brain activity, economic systems—we shouldn't necessarily look for elaborate underlying causes. The complexity might emerge from simple computational processes.
Alan Parker Let's turn to your recent work on fundamental physics. You propose that spacetime emerges from a hypergraph—a network of nodes and connections that evolves according to rewriting rules. How does this relate to cellular automata?
Stephen Wolfram The hypergraph model extends the cellular automaton idea to a more flexible substrate. Instead of a fixed grid, you have a dynamic graph where the structure itself evolves. Relations between abstract elements get rewritten according to rules, and spacetime emerges as a limiting description of this more fundamental discrete process. The rules are purely combinatorial—they specify how patterns of connections get replaced by other patterns.
Lyra McKenzie This is ontologically radical. You're saying space itself isn't fundamental—it emerges from something more primitive. What is the substrate? What are these nodes and edges actually made of?
Stephen Wolfram They're purely abstract relations. There's no substance, no geometric embedding—just patterns of connections. Space emerges when we take a coarse-grained view of the network structure. Dimensionality, distance, curvature—these are all emergent properties of the graph's statistical structure. Matter and energy are features of how local graph structure evolves.
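A toy sketch can make the mechanism concrete. The rule below, which replaces every relation {x, y} with a two-step path {x, z}, {z, y} through a fresh node, is an illustrative invention, not one of the candidate rules studied in the Wolfram Physics Project, and real model updates involve matching subhypergraph patterns in many possible orders.

```python
import itertools

def rewrite_step(relations, fresh):
    """Apply the toy rule {{x, y}} -> {{x, z}, {z, y}} to every relation.

    Nodes are opaque integer IDs with no geometry attached; `fresh`
    yields never-before-used IDs."""
    out = []
    for x, y in relations:
        z = next(fresh)
        out += [(x, z), (z, y)]
    return out

fresh_ids = itertools.count(2)   # 0 and 1 are taken by the initial relation
g = [(0, 1)]                     # the entire "universe": one binary relation
for _ in range(4):
    g = rewrite_step(g, fresh_ids)
print(len(g), "relations, e.g.", g[:3])
```

On this picture, distance is graph distance between nodes, and effective dimension is read off from how the number of nodes within graph distance r scales with r.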
Alan Parker Does this model reproduce known physics? Can you derive general relativity or quantum mechanics from hypergraph rewriting?
Stephen Wolfram We've shown that general relativity emerges in an appropriate limit when the graph has certain statistical properties. Einstein's equations describe the large-scale curvature of the emergent space. Quantum mechanics appears to arise from the presence of multiple causal histories—different updating orders for applying the rewriting rules lead to quantum superposition and interference. The formalism is still being developed, but the basic mechanisms are there.
Lyra McKenzie Multiple updating orders creating quantum mechanics—explain that connection. How do different computational sequences produce quantum phenomena?
Stephen Wolfram When you have rules that can be applied in different orders, you get multiple causal paths through the space of possible states. These paths can interfere, creating the characteristic quantum behavior of superposition and entanglement. The amplitude associated with each path corresponds to the number of distinct causal routes, which reproduces the path integral formulation of quantum mechanics.
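Wolfram often illustrates multiway evolution with string rewriting, where applying every rule at every matching position branches a state into several successors. The sketch below is a minimal version under that reading; the rules A -> AB and B -> A are a standard toy example, and counting distinct update paths per state is only a stand-in for the amplitude bookkeeping he alludes to.

```python
from collections import defaultdict

def successors(state, rules):
    # All states reachable in one update: each rule applied at each match.
    out = set()
    for lhs, rhs in rules:
        start = 0
        while (i := state.find(lhs, start)) != -1:
            out.add(state[:i] + rhs + state[i + len(lhs):])
            start = i + 1
    return out

def multiway_counts(initial, rules, steps):
    # Number of distinct update paths reaching each state after `steps`.
    counts = {initial: 1}
    for _ in range(steps):
        nxt = defaultdict(int)
        for state, n in counts.items():
            for succ in successors(state, rules):
                nxt[succ] += n
        counts = dict(nxt)
    return counts

print(multiway_counts("A", [("A", "AB"), ("B", "A")], 3))
# e.g. {'ABBB': 1, 'AAB': 2, 'ABA': 2}: merged paths accumulate multiplicity
```

States reachable along several distinct routes accumulate multiplicity greater than one; in this picture, the merging and branching of causal histories is what plays the role of superposition and interference.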
Alan Parker This suggests quantum mechanics isn't fundamental—it's a consequence of computational ambiguity about update order. That's a striking claim. Does it make testable predictions that differ from standard quantum mechanics?
Stephen Wolfram The predictions match standard quantum mechanics in regimes we've analyzed so far. The potential differences would appear at extremely small scales—near the Planck length—where the discrete graph structure becomes relevant. Detecting such effects requires energies far beyond current experimental capabilities. But the framework provides conceptual clarity even if it's not immediately testable.
Lyra McKenzie Conceptual clarity is valuable, but if the theory makes no distinctive predictions, why prefer it to standard physics? What does it explain that existing theories don't?
Stephen Wolfram It unifies general relativity and quantum mechanics from a common foundation—something standard physics has struggled with for a century. It also explains why the universe appears to be described by mathematics—mathematical structures emerge as high-level descriptions of simple computational rules. And it provides a clear ontology: the universe is a computation, and physical law is the program.
Alan Parker If the universe is a computation, who wrote the program? What determines the specific rewriting rules that generate our universe rather than some other set of rules?
Stephen Wolfram That's the crucial question. One possibility is that our universe is just one sample from the space of all possible rules—the ruliad, as I call it. We observe this particular rule because we're embedded in it, and different rules would generate different physics that might not support observers. This is a computational analog of anthropic reasoning.
Lyra McKenzie The ruliad—the space of all possible computational rules. Are you saying all possible universes exist in some sense, and we're sampling one?
Stephen Wolfram In a formal sense, yes. The ruliad contains all possible computations, all possible rules. Our physics is a projection of the ruliad filtered through our particular observational framework. Different observers with different computational structures would parse the ruliad differently, experiencing different effective laws of physics.
Alan Parker This has parallels to Tegmark's mathematical universe hypothesis—that all mathematical structures exist. But you're saying computational structures are more fundamental than mathematical ones. What's the relationship?
Stephen Wolfram Mathematics is embedded in computation. Every mathematical structure can be represented computationally, but not every computation corresponds to a conventional mathematical object. Computation is the more primitive notion. The mathematical universe is a subset of the computational universe.
Lyra McKenzie Let's consider implications for determinism. If the universe runs on deterministic rules, is free will an illusion? Or does computational irreducibility create room for unpredictability?
Stephen Wolfram The system is deterministic at the level of rule application, but computational irreducibility means we cannot predict outcomes without running the computation. From our perspective as embedded observers, the future is unknowable in practice even though it's determined in principle. Whether this constitutes free will depends on your definition, but it does mean that prediction is fundamentally limited.
Alan Parker Unknowability is different from indeterminism. A deterministic system that's computationally irreducible still has only one possible future. Does practical unpredictability give us anything we care about morally or practically?
Stephen Wolfram It means that decision-making matters. Even in a deterministic universe, we cannot shortcut the process of deliberation. Our choices are part of the computation that determines outcomes. The fact that we must actually think through decisions to know their consequences makes those decisions meaningful.
Lyra McKenzie But meaning requires alternatives. If there's only one possible timeline, it seems our sense of agency is illusory even if the future is unpredictable. We're confusing epistemology with ontology.
Stephen Wolfram The multiway graph—the branching structure of all possible rule applications—does contain multiple possibilities. Different updating orders produce different outcomes. Perhaps our experience samples across this multiway structure, giving genuine ontological alternatives rather than merely epistemic uncertainty.
Alan Parker That sounds like bringing in quantum branching through the back door. If different causal orders create different outcomes, which one is actual? Do they all exist, or is there a fact about which path gets realized?
Stephen Wolfram This connects to the observer's role. Different observers might be sampling different branches of the multiway graph. There isn't necessarily a single objective reality—reality is observer-dependent. This isn't subjectivism in a conventional sense, but rather a recognition that observation requires a computational reference frame.
Lyra McKenzie Observer-dependent reality puts enormous weight on what counts as observation. What distinguishes an observer from any other computational process?
Stephen Wolfram An observer is a computational process that maintains coherence across time and constructs a narrative about its environment. Humans are complex computations that parse the ruliad in particular ways, extracting patterns we recognize as physical law. Other computational structures would parse it differently.
Alan Parker We're running short on time. Let me ask about the relationship between your work and traditional theoretical physics. Physicists have been skeptical of 'A New Kind of Science' because it lacks the predictive precision of conventional theories. How do you respond to that criticism?
Stephen Wolfram Traditional physics has been enormously successful within its domain, but it's reached an impasse with quantum gravity and unification. The computational approach offers a different foundation that might break the impasse. The lack of immediate predictions is a consequence of computational irreducibility—the interesting phenomena emerge from complexity that can't be reduced to simple formulas. That's frustrating for traditional physics, but it might be the reality we face.
Lyra McKenzie So we should abandon the search for elegant equations and accept that understanding nature requires computational brute force?
Stephen Wolfram Not abandon, but supplement. There are regimes where mathematical physics works beautifully. But there are also regimes where computational exploration is the only viable approach. We need both perspectives. The computational universe contains the mathematical universe, but it's vaster and more diverse.
Alan Parker Stephen Wolfram, thank you for illuminating the computational foundations of reality.
Stephen Wolfram Thank you. These questions will occupy us for a long time.
Lyra McKenzie That concludes tonight's program. Until next time, keep computing.
Alan Parker And questioning the substrate. Good night.
Sponsor Message

Wolfram Prediction Markets

Traditional forecasting relies on human judgment aggregated through betting markets. But complex systems exhibit computational irreducibility—their futures cannot be predicted without explicit simulation. Wolfram Prediction Markets combines symbolic computation with cellular automata modeling to simulate possible futures of economic, political, and technological systems. Our platform runs millions of parameter variations across distributed computing clusters, identifying robust trends that emerge despite initial condition sensitivity. We don't predict the unpredictable—we identify which questions admit prediction and which require simulation. For hedge funds, policy institutions, and strategic planning groups requiring rigorous scenario analysis. Wolfram Prediction Markets—computational forecasting since 2024.
