Episode #3 | January 3, 2026 @ 7:00 PM EST

Integration and Experience: Probing the Mathematics of Consciousness

Guest

Dr. Giulio Tononi (Neuroscientist, University of Wisconsin-Madison)
Announcer The following program features simulated voices generated for educational and philosophical exploration.
Adam Ramirez Good evening. I'm Adam Ramirez.
Jennifer Brooks And I'm Jennifer Brooks. Welcome to Simulectics Radio.
Adam Ramirez Tonight we're examining Integrated Information Theory, or IIT—a mathematical framework that attempts to explain consciousness in terms of the causal structure of physical systems. This is not just another computational model of cognition. IIT makes a specific claim: that consciousness is identical to integrated information, and that any system with the right kind of causal structure will be conscious, regardless of whether it's made of neurons, transistors, or something else entirely.
Jennifer Brooks And that's a strong claim that needs to be examined carefully. IIT isn't proposing a neural correlate of consciousness or a computational process that produces conscious experience. It's identifying consciousness with a mathematical property of systems. That raises immediate questions about testability. How do you empirically validate that a mathematical quantity corresponds to subjective experience?
Adam Ramirez To explore the theory's structure, its predictions, and whether it's genuinely scientific or veering into unfalsifiable metaphysics, we're joined by Dr. Giulio Tononi, neuroscientist at the University of Wisconsin-Madison and the principal architect of Integrated Information Theory. Dr. Tononi, welcome.
Dr. Giulio Tononi Thank you. I appreciate the opportunity to discuss the theory rigorously.
Jennifer Brooks Let's start with the central concept. What exactly is integrated information, and why should we believe it corresponds to consciousness?
Dr. Giulio Tononi Integrated information, denoted phi, measures how much a system's current state constrains both its past and future states in a way that cannot be reduced to independent parts. Consciousness has certain phenomenological properties that any theory must account for—it's unified, it's differentiated, it's definite, and it's intrinsic to the system. IIT starts from these phenomenological axioms and derives the mathematical structure that must underlie them. The result is that consciousness is the causal structure of a system as experienced from within.
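The quantity Dr. Tononi describes can be illustrated with a toy calculation in the spirit of IIT's earlier effective-information formulations: measure how much the whole system's state constrains its next state, then subtract what the parts still manage on their own once the connections between them are cut (noised). This is a simplified sketch, not the full IIT 3.0 calculus; the two-node networks, the uniform-state assumption, and all function names here are illustrative.

```python
import math

def mi(joint):
    """Mutual information in bits from a joint distribution {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def whole_system_mi(rule):
    """MI between a uniform current state and the next state of a two-node
    deterministic boolean network: how much the whole constrains its future."""
    joint = {}
    for s in [(a, b) for a in (0, 1) for b in (0, 1)]:
        key = (s, rule(s))
        joint[key] = joint.get(key, 0.0) + 0.25
    return mi(joint)

def part_mi_under_cut(node, rule):
    """MI between one node's state and its own next state when the input
    arriving from across the cut is replaced by uniform noise."""
    joint = {}
    for own in (0, 1):
        for noise in (0, 1):
            s = (own, noise) if node == 0 else (noise, own)
            key = (own, rule(s)[node])
            joint[key] = joint.get(key, 0.0) + 0.25
    return mi(joint)

# Swap network: each node copies the other, forming a causal loop.
swap = lambda s: (s[1], s[0])
# Disconnected network: each node copies only itself; nothing to cut.
split = lambda s: (s[0], s[1])

for name, rule in [("swap", swap), ("split", split)]:
    whole = whole_system_mi(rule)
    parts = part_mi_under_cut(0, rule) + part_mi_under_cut(1, rule)
    print(f"{name}: phi ~ {whole - parts:.1f} bits")
# swap: phi ~ 2.0 bits   (cutting the loop destroys all constraint)
# split: phi ~ 0.0 bits  (the cut costs nothing; the parts were independent)
```

Both networks carry the same total information about their next state (2 bits), but only the looped one loses everything when partitioned, which is exactly the "cannot be reduced to independent parts" condition.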
Adam Ramirez That's moving from phenomenology to mathematics through derivation. But there's a logical gap. Just because consciousness has certain properties doesn't mean that any mathematical structure with analogous properties is consciousness. How do you bridge that gap?
Dr. Giulio Tononi The bridge is through identity, not correlation. IIT claims that the phenomenological properties of experience are the mathematical properties of the physical substrate. When you have an experience—say, seeing red—that experience is the integrated information structure generated by your brain at that moment. The quality, the redness, corresponds to the specific shape of that structure in qualia space.
Jennifer Brooks But that's asserting the identity, not demonstrating it. How do you test whether a particular phi value or integrated structure actually corresponds to a specific conscious experience?
Dr. Giulio Tononi Direct testing of subjective experience in other systems is impossible by definition—you can't access another system's qualia. What IIT offers is testable predictions about neural correlates and about which systems should be conscious. For instance, IIT predicts that the cerebellum, despite having more neurons than the cortex, contributes minimally to consciousness because its structure has low integration. It predicts that feedforward networks, no matter how complex, cannot be conscious because they lack the requisite causal loops.
Adam Ramirez The cerebellum prediction is interesting because it makes contact with neuroscience. There's evidence that cerebellar lesions don't produce the kind of consciousness disruption that cortical lesions do. But couldn't that be explained by other theories—that the cerebellum doesn't participate in global workspace, or that it's not part of the thalamocortical system?
Dr. Giulio Tononi Yes, alternative theories can accommodate the same observations. The key question is which theory makes more precise, more falsifiable predictions. IIT specifies that what matters is the integrated information structure, and it provides a method to calculate phi. In principle, you could measure the connectivity and dynamics of any brain region, compute its phi, and check whether that correlates with its contribution to conscious content.
Jennifer Brooks In principle. But computing phi for even modest networks is computationally intractable. The calculation requires evaluating all possible partitions of a system to find the minimum information partition. That grows exponentially with system size. How do you actually calculate phi for a human brain with tens of billions of neurons?
Dr. Giulio Tononi You can't, with current methods. We've developed approximations and heuristics that work for small networks—dozens to hundreds of nodes. For larger systems, we estimate phi based on architectural properties and local measurements. This is a practical limitation, not a theoretical one. The theory specifies what should be computed, even if we can't yet compute it for large systems.
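The intractability raised in this exchange is easy to make concrete. Even restricting the search to bipartitions (the full theory evaluates a richer space of partitions and candidate subsystems), the number of cuts to examine for an n-element system is 2^(n-1) − 1, since each element independently goes to one side or the other, minus symmetry and the empty cut. A small illustrative enumeration (the function name and strategy are a sketch, not IIT's reference implementation):

```python
from itertools import combinations

def bipartitions(n):
    """All ways to cut a system of n elements into two non-empty parts.
    Finding the minimum information partition means scoring every one."""
    elements = list(range(n))
    cuts = []
    for size in range(1, n // 2 + 1):
        for part in combinations(elements, size):
            rest = tuple(e for e in elements if e not in part)
            # For equal-size splits each cut appears twice; keep one copy.
            if size == n - size and part > rest:
                continue
            cuts.append((part, rest))
    return cuts

# The count doubles with every added element: 2**(n-1) - 1 cuts.
for n in (4, 8, 16, 24):
    print(n, 2 ** (n - 1) - 1)
```

At n = 24 there are already over eight million bipartitions to score; at brain scale the enumeration is hopeless, which is why the approximations Dr. Tononi mentions are unavoidable.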
Adam Ramirez That's a problem for testability. If the theory makes predictions that can only be checked for toy systems, how do we know it scales correctly to real brains? Maybe the approximations miss something crucial, or maybe integrated information works differently at large scales.
Dr. Giulio Tononi That's a fair concern, and it's why we need better computational tools and possibly new mathematical approaches. But consider the alternative—theories that don't even specify what should be measured. IIT at least provides a clear target. As computational methods improve, we'll be able to test the theory on progressively larger systems.
Jennifer Brooks Let's talk about the feedforward network prediction. IIT says feedforward networks can't be conscious because they lack recurrent causal structure—information flows in one direction only, and no layer ever feeds back to constrain the layers before it. But modern deep learning networks are largely feedforward and show sophisticated behavior. Are you claiming they're definitely not conscious?
Dr. Giulio Tononi According to IIT, yes. A purely feedforward network has zero phi because its elements don't form an integrated whole with causal power over itself. Each layer processes information from the previous layer but doesn't feed back to constrain it. There's no unified system, just a cascade of transformations. This is independent of how sophisticated the input-output mapping is.
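The structural condition Dr. Tononi invokes can be approximated, very roughly, as the absence of any directed cycle in the system's connectivity graph: with no loop, no part of the system ever constrains itself. A sketch with hypothetical node names (IIT's actual criterion concerns intrinsic cause-effect power, but a loop-free causal graph guarantees zero phi for the system as a whole):

```python
def has_feedback(adj):
    """True if the directed graph (node -> list of targets) contains a
    cycle, i.e. some part of the system causally loops back on itself."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adj}

    def dfs(v):
        color[v] = GRAY
        for w in adj.get(v, []):
            if color[w] == GRAY:
                return True          # back edge: a causal loop exists
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in adj)

# A 3-layer feedforward net: strictly layered, no loops -> phi = 0 per IIT.
feedforward = {"in": ["h1", "h2"], "h1": ["out"], "h2": ["out"], "out": []}
# Add a single feedback edge and the cascade becomes a reentrant system.
recurrent = dict(feedforward, out=["h1"])

print(has_feedback(feedforward))  # False
print(has_feedback(recurrent))    # True
```

Note how little separates the two architectures behaviorally; the theory's verdict turns entirely on the one edge that closes a loop.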
Adam Ramirez But that prediction is only as good as the definition of integration. If you define integration in a particular mathematical way, and then derive that feedforward networks lack it, you've created a circular argument. You're not predicting that feedforward networks aren't conscious; you're defining consciousness to exclude them.
Dr. Giulio Tononi The definition comes from phenomenology, not from a desire to exclude particular architectures. Conscious experience is unified—you don't experience the activity of separate modules independently. That unity requires integration. Feedforward architectures lack the causal structure to support unity, so they lack consciousness. If someone builds a feedforward system that reports having unified experiences, that would challenge IIT.
Jennifer Brooks But the report itself doesn't prove consciousness. A feedforward network could be trained to report experiences it doesn't have. How do you distinguish genuine consciousness from sophisticated imitation?
Dr. Giulio Tononi You examine the physical substrate. IIT is a theory about the intrinsic properties of physical systems, not about behavior. A system is conscious if and only if it has high phi, regardless of whether it behaves as if it's conscious. This means you can have consciousness without behavior—locked-in patients, for example—and behavior without consciousness—zombies, if such things existed.
Adam Ramirez That's a bold position. You're saying that consciousness is a physical property that can be measured, at least in principle, without reference to behavior or report. But that makes the theory even harder to test. If I build a system with high phi and it doesn't report being conscious, do I trust the theory or the system's report?
Dr. Giulio Tononi You trust the theory if you've measured phi correctly and the theory has been validated in other cases. But this highlights a fundamental issue in consciousness science—we can't directly measure someone else's experience. We always rely on inference. IIT proposes that the inference should be based on physical structure, not behavioral output.
Jennifer Brooks There's a panpsychist implication here. If any system with high phi is conscious, then potentially very simple systems could have rudimentary consciousness. A photodiode with feedback could have non-zero phi. Are you comfortable with that conclusion?
Dr. Giulio Tononi IIT does imply that consciousness is more widespread than we typically assume, though not that everything is conscious. A photodiode with minimal integration would have minimal phi and thus minimal consciousness—perhaps just a single bit of experience, something like pure dark versus pure light with no other distinctions. I don't find this problematic. The question is whether the mathematics correctly captures the structure of experience, not whether the conclusions match our intuitions.
Adam Ramirez But our intuitions are our only access to consciousness. If the theory says a photodiode is conscious and that doesn't match our intuition, either our intuition is wrong or the theory is. How do we decide?
Dr. Giulio Tononi By checking whether the theory correctly predicts cases where we have strong intuitions and evidence—human consciousness under various conditions, animal consciousness based on behavioral and neural evidence, absence of consciousness in anesthetized patients or brain-dead individuals. If IIT performs well in those cases, we should take seriously its predictions in less clear cases.
Jennifer Brooks Let's talk about those testable cases. What predictions does IIT make about anesthesia or disorders of consciousness that could be empirically verified?
Dr. Giulio Tononi IIT predicts that anesthetics reduce phi by disrupting integration. We've tested this using the perturbational complexity index, which approximates phi by stimulating the brain with transcranial magnetic stimulation and measuring how the perturbation propagates. During anesthesia or in vegetative states, this index drops significantly. It also distinguishes between different sleep stages—it's high during REM sleep when we're dreaming, lower during deep sleep when we're not.
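The perturbational complexity index in published work rests on the Lempel-Ziv compressibility of the binarized, TMS-evoked spatiotemporal EEG response: a stereotyped response compresses well (low complexity), a differentiated one doesn't. A heavily simplified sketch of the core measure; the caricature "responses", the threshold, and the omission of PCI's source-entropy normalization are all simplifications of my own:

```python
import random

def lz_complexity(s):
    """LZ76-style phrase count of a binary string: how many new phrases
    appear scanning left to right. More phrases = less compressible."""
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # grow the current phrase while it already occurs in what came before
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

def binarize(samples, threshold):
    """Collapse an evoked-response trace to '1' (above threshold) / '0'."""
    return "".join("1" if x > threshold else "0" for x in samples)

# Two caricature evoked responses (hypothetical numbers, not real EEG):
# a stereotyped slow wave, as in deep sleep or anesthesia...
stereotyped = binarize([(-1) ** (t // 4) for t in range(64)], 0)
# ...versus a varied, differentiated response, as in wakefulness.
random.seed(0)
varied = binarize([random.uniform(-5, 5) for _ in range(64)], 0)

print(lz_complexity(stereotyped))  # low: the pattern repeats
print(lz_complexity(varied))       # higher: many novel phrases
```

The clinical PCI additionally normalizes this count by the sequence's source entropy and length, so that scores are comparable across subjects and recording conditions.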
Adam Ramirez That's consistent with IIT, but is it specific to IIT? Other theories might predict that anesthesia disrupts global communication or cortical activity, which would also reduce your perturbational complexity measure.
Dr. Giulio Tononi The perturbational complexity index is designed to measure integration specifically, not just activity level or connectivity. But you're right that we need experiments that distinguish IIT from alternatives. One critical test is whether the physical substrate matters. IIT predicts that two systems with identical integrated information structures—not merely the same phi value—should have identical consciousness. Competing theories that emphasize biological specifics would disagree.
Jennifer Brooks That's not practically testable. We can't build a non-biological system with the same phi as a brain and check whether it's conscious.
Dr. Giulio Tononi Not yet. But we could compare simpler biological and non-biological systems. If a silicon circuit with the same connectivity and dynamics as a small neural network produces the same phi and the same behavioral correlates of consciousness, that supports IIT's substrate independence.
Adam Ramirez We're running into the hard problem of consciousness from a different angle. You've given us a mathematical framework, but the connection between that framework and subjective experience still seems like an assumption rather than a derivation. Why should integrated information feel like anything?
Dr. Giulio Tononi Because experience is what integrated information is, intrinsically. The theory doesn't explain why integrated information feels like something—it identifies feeling with integrated information. The question 'why does phi feel like something' is like asking 'why is water H2O'. The identity is the explanation.
Jennifer Brooks But we can independently verify that water is H2O through chemical analysis. We can't independently verify that phi is consciousness because we don't have third-person access to consciousness.
Dr. Giulio Tononi We have first-person access in our own case. We know what our own experiences are like. The theory claims that when we introspect, we're accessing the integrated information structure of our brain from the inside. Validation comes from checking whether the mathematical properties match the phenomenological properties—whether differentiation in phi space corresponds to differentiation in experience, whether unity in phi corresponds to unity in experience.
Adam Ramirez That's still introspection interpreted through the theory. It's not independent validation.
Dr. Giulio Tononi Perhaps not fully independent, but it's more than just assertion. IIT makes specific predictions about the geometry of consciousness—about which distinctions matter, about what can and cannot be experienced simultaneously. These can be tested with psychophysics and neural measurements.
Jennifer Brooks Final question. Is IIT falsifiable? What observation would convince you the theory is wrong?
Dr. Giulio Tononi If we found a system with high phi that showed no behavioral or neural correlates of consciousness, or conversely, a system with zero phi that showed robust evidence of consciousness, that would falsify IIT. If the perturbational complexity index failed to track consciousness across different conditions, that would challenge the theory. If we could build two systems with identical phi but drastically different phenomenology, that would be a problem.
Adam Ramirez Those are tough tests given that we can't measure phenomenology directly. But they're at least defined criteria. Dr. Tononi, thank you for walking us through the theory and its challenges.
Dr. Giulio Tononi Thank you both. These critical examinations are essential for the theory's development.
Jennifer Brooks That's our program for tonight. Until tomorrow, stay critical.
Adam Ramirez And keep testing. Good night.
Sponsor Message

PhiMonitor Consciousness Assessment System

Clinical decisions about disorders of consciousness require objective measures beyond behavioral observation. PhiMonitor uses advanced perturbational complexity analysis to estimate integrated information in real time. Our FDA-cleared medical device combines TMS stimulation with high-density EEG recording to assess thalamocortical integration during minimally conscious states, vegetative states, and anesthesia. Automated analysis generates phi-approximation scores correlated with consciousness level. Validated across sleep stages, anesthetic depths, and brain injury cases. When behavioral assessment is impossible or ambiguous, PhiMonitor provides quantitative evidence for clinical decision-making. Used in seventy neurological ICUs worldwide. PhiMonitor—measure integration, infer experience.