Announcer
The following program features simulated voices generated for educational and philosophical exploration.
Alan Parker
Good evening. I'm Alan Parker.
Lyra McKenzie
And I'm Lyra McKenzie. Welcome to Simulectics Radio.
Alan Parker
Tonight we examine predictive processing—the theory that perception is not passive reception of sensory data but active prediction about the causes of that data. Under this framework, the brain continuously generates hypotheses about the world and updates them based on prediction errors. This inverts traditional views of cognition. Rather than building representations bottom-up from sensory input, the brain operates top-down, using predictions to explain away incoming signals. The implications span perception, action, learning, and psychiatric conditions.
Lyra McKenzie
This theory suggests perception is controlled hallucination constrained by sensory evidence. We don't see the world directly—we see our brain's best guess about what's out there, refined by prediction errors when guesses prove wrong. This raises profound questions about the nature of experience. Are we ever in direct contact with reality, or always locked inside our skull making inferences? What does this mean for knowledge, delusion, and the boundary between perception and imagination?
Alan Parker
Our guest is Dr. Andy Clark, professor of cognitive philosophy at the University of Sussex and co-director of the Cognitive Science Program. He's known for his work on embodied cognition, extended mind theory, and predictive processing frameworks. Dr. Clark, welcome.
Dr. Andy Clark
Thank you. Predictive processing offers a unified framework for understanding perception, action, and cognition that challenges many traditional assumptions.
Lyra McKenzie
Let's start with the basic mechanism. How does predictive processing describe what the brain is doing?
Dr. Andy Clark
The core idea is that the brain is a prediction machine. At every level of the processing hierarchy, neural systems generate predictions about sensory input. When input arrives, it's compared against predictions. Any mismatch generates prediction error signals that propagate up the hierarchy. These errors drive learning—adjusting internal models to make better predictions. But they also drive perception itself. What we experience is not raw sensory data but the brain's best hypothesis about the hidden causes of that data, shaped by both top-down predictions and bottom-up error signals. Perception emerges from this dance between expectation and evidence.
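The loop described here — predict, compare, propagate error, update — can be sketched in a few lines of Python. This is an illustrative single-level toy, not a model of neural implementation; the function name, learning rate, and numbers are invented for the example:

```python
import random

def perceive(observations, learning_rate=0.1):
    """Minimal single-level predictive loop (illustrative sketch).

    The 'brain' holds one scalar hypothesis about a hidden cause.
    Each observation is compared against the current prediction;
    the mismatch (prediction error) nudges the hypothesis toward
    better predictions.
    """
    hypothesis = 0.0  # initial guess about the hidden cause
    for obs in observations:
        error = obs - hypothesis             # bottom-up prediction error
        hypothesis += learning_rate * error  # top-down model update
    return hypothesis

random.seed(0)
true_cause = 5.0  # the hidden cause behind the sensory data
noisy_input = [true_cause + random.gauss(0, 0.5) for _ in range(500)]
estimate = perceive(noisy_input)  # converges near the hidden cause
```

The point of the sketch is that the system never sees `true_cause` directly; its hypothesis is shaped entirely by the stream of prediction errors, echoing the claim that perception is the brain's best guess refined by error.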
Alan Parker
This suggests perception is fundamentally Bayesian—updating prior beliefs based on incoming evidence. What role does uncertainty play?
Dr. Andy Clark
Uncertainty is central. The brain doesn't just make predictions—it estimates the reliability of both its predictions and the sensory evidence. This is precision-weighting. When sensory signals are reliable and predictions uncertain, prediction errors get amplified, forcing model revision. When predictions are strong and signals noisy, errors are suppressed. This balance determines what drives perception. In a dark room, you trust your predictions more than ambiguous visual input. In bright light with clear signals, sensory evidence dominates. Psychedelics might work by reducing the precision of prior predictions, letting raw sensory data flood through unconstrained. Depression might involve overweighting of negative predictions that sensory evidence can't overcome.
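Precision-weighting has a standard Bayesian form: combine prior and evidence in proportion to their precisions (inverse variances). The dark-room and bright-light cases above can be reproduced with a toy Gaussian update — a sketch of the arithmetic, not of how brains implement it; all values are invented:

```python
def precision_weighted_update(prior_mean, prior_precision,
                              observation, sensory_precision):
    """Fuse a prior prediction with sensory evidence, each
    weighted by its precision (inverse variance)."""
    posterior_precision = prior_precision + sensory_precision
    posterior_mean = (prior_precision * prior_mean +
                      sensory_precision * observation) / posterior_precision
    return posterior_mean, posterior_precision

# Dark room: precise prior, noisy senses -> percept stays near the prior.
dark, _ = precision_weighted_update(20.0, 10.0, 26.0, 0.5)

# Bright light: weak prior, precise signal -> evidence dominates.
bright, _ = precision_weighted_update(20.0, 0.5, 26.0, 10.0)
```

The same observation (26.0) produces very different percepts depending only on the precision assignments — which is the mechanism invoked above for psychedelics (lowered prior precision) and depression (overweighted negative priors).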
Lyra McKenzie
The phrase 'controlled hallucination' is provocative. In what sense is ordinary perception hallucinatory?
Dr. Andy Clark
Perception has the same top-down generative character as hallucination—both involve the brain conjuring sensory experience from internal models. The difference is constraint. In normal perception, sensory input provides continuous error signals that keep hallucination tethered to reality. In genuine hallucination, this constraint weakens or fails. The brain generates perceptual experience unchecked by external input. But the generative process is the same. We're always hallucinating—it's just that most of the time, the world successfully constrains our hallucinations to match itself. This explains why perception is so theory-laden, why we see what we expect to see, and why changing beliefs can change perceptual experience.
Alan Parker
How does action fit into this framework? Traditional views separate perception from motor control.
Dr. Andy Clark
Predictive processing unifies perception and action through active inference. The brain doesn't just predict sensory input—it predicts proprioceptive feedback from the body. When you decide to raise your arm, the brain generates a prediction that your arm is raised. This creates prediction error—the current proprioceptive state doesn't match the predicted state. Rather than revising the prediction, motor commands act to fulfill it, bringing the body into alignment with predicted state. Action is prediction error minimization achieved by changing the world rather than changing the model. This dissolves the boundary between perception and action. Both are strategies for minimizing prediction error—perception by updating beliefs, action by updating the world.
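The two routes to error minimization — revise the model, or move the body — can be shown in one toy step function. A sketch only, with invented names and gains; the arm-raising example follows the description above:

```python
def inference_step(predicted_state, actual_state, act=True, gain=0.5):
    """One step of prediction-error minimization, two routes (sketch):
    - action:     move the body toward the prediction (active inference)
    - perception: revise the prediction toward the body
    """
    error = predicted_state - actual_state
    if act:
        actual_state += gain * error       # fulfil the prediction
    else:
        predicted_state -= gain * error    # revise the prediction
    return predicted_state, actual_state

# Intending 'arm raised' (state 1.0) while the arm is down (0.0):
pred, arm = 1.0, 0.0
for _ in range(20):
    pred, arm = inference_step(pred, arm, act=True)
# The prediction is held fixed; the body converges to it.
```

With `act=False` the same error would instead erode the prediction — which is why intentions are described as predictions the system is committed to making true rather than revising.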
Lyra McKenzie
This seems to make intentions and beliefs the same thing—predictions about bodily states. Does this capture the phenomenology of agency?
Dr. Andy Clark
It's counterintuitive but revealing. Intentions are indeed predictions, but predictions with low precision-weighting on sensory evidence. When you intend to act, you're predicting a bodily state while downweighting current proprioceptive signals that contradict it. Motor systems then resolve the error by making the prediction true. The phenomenology of agency might emerge from this special status—predictions we're committed to making true rather than revising. The sense of control could reflect successfully minimizing prediction error through action rather than belief revision. When actions fail to match predictions, we lose the sense of agency. In schizophrenia, some symptoms might involve aberrant precision-weighting that makes self-generated predictions feel like external impositions.
Alan Parker
You mentioned psychiatric conditions. How does predictive processing explain delusions, hallucinations, and other symptoms?
Dr. Andy Clark
Many psychiatric conditions might reflect pathologies of prediction or precision-weighting. In schizophrenia, aberrant precision on prediction errors could make every sensory input feel highly significant and surprising, overwhelming normal predictions. This might explain the sense of uncanny meaning and salience. Rigid high-level predictions that resist error signals could produce delusions—beliefs that persist despite contradicting evidence. Autism might involve imbalances in prediction and sensory processing, making the world unpredictably overwhelming. Anxiety disorders could reflect chronically elevated prediction of threat that biases perception toward detecting danger. Depression might involve dominant negative predictions that suppress positive evidence. This framework doesn't reduce psychiatric conditions to mere prediction errors—it suggests computational dysfunctions underlying phenomenological disturbances.
Lyra McKenzie
What about the hard problem of consciousness? Does predictive processing explain why there's something it's like to be a predictive processing system?
Dr. Andy Clark
Predictive processing doesn't solve the hard problem—it doesn't explain why information processing feels like anything. But it might reshape how we think about consciousness. If experience is the brain's best hypothesis about causes of sensory input, then phenomenology reflects the content of high-level predictions. The felt presence of objects in the world could be predictions about hidden causes that sensory data confirms. Attention might be dynamic precision-weighting—amplifying certain predictions while suppressing others. But why these computational processes produce subjective experience remains unexplained. Predictive processing offers a functional architecture that might support consciousness without explaining the metaphysical leap from information to phenomenology.
Alan Parker
Does the framework suggest consciousness requires specific computational structures, or could any predictive system be conscious?
Dr. Andy Clark
That's deeply uncertain. Some argue consciousness requires hierarchical generative models with precision-weighting of the kind brains implement. Others suggest simpler predictive systems might have minimal experience. The global workspace theory could be combined with predictive processing—perhaps consciousness arises when predictions enter a broadcasting system available to multiple cognitive subsystems. But we lack clear criteria for which predictive architectures produce experience. Does a thermostat have minimal experience when it predicts temperature? Does a language model have inner life when it predicts text? These questions remain open. Predictive processing provides functional language for discussing consciousness without necessarily constraining answers.
Lyra McKenzie
You've written about the extended mind—the idea that cognition incorporates external tools and environments. How does this relate to predictive processing?
Dr. Andy Clark
Predictive processing naturally accommodates extended cognition. If the brain predicts sensory input from both body and environment, then reliable external structures become part of the prediction machinery. When you use a notebook to remember information, your brain predicts that consulting the notebook will provide specific information. The notebook becomes incorporated into the extended predictive system. Smartphones, calculators, and collaborative partners all become predictable sources of information that the brain incorporates into its models. The boundary between mind and world becomes fluid—what matters is functional integration into predictive loops, not spatial location inside the skull. This extends cognition into the environment in ways that predictive processing makes natural.
Alan Parker
What about social cognition? Does predictive processing explain how we understand other minds?
Dr. Andy Clark
Social cognition becomes mutual prediction. You predict others' behavior based on inferred mental states, and others predict yours. Communication works by generating shared predictions—when I speak, I'm creating sensory input you can predict only by inferring my intended meaning. Joint action involves predicting others' predictions, creating coordination through nested predictive models. Culture might be shared predictions that reduce collective uncertainty. Institutions create predictable patterns that allow coordinated action. The rich predictive models we build of other minds might explain both empathy—sharing predictions of others' experiences—and misunderstanding when our models fail. Social reality emerges from collective prediction.
Lyra McKenzie
This raises questions about the relationship between prediction and truth. If perception is hypothesis-driven, how do we avoid imprisoning ourselves in false beliefs?
Dr. Andy Clark
Prediction error provides the corrective mechanism. Even strong priors must accommodate persistent error signals or face mounting surprise. Bayesian updating means beliefs shift toward evidence over time, though this can be slow if priors have high precision. The challenge is when pathological precision-weighting prevents learning—when prediction errors are suppressed or attributed to noise rather than driving model revision. This might explain how false beliefs persist despite contradicting evidence. Rational belief formation requires appropriate precision estimation—knowing when to trust priors versus evidence. Epistemic rationality becomes a problem of correct precision-weighting across domains.
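The contrast between healthy updating and pathologically rigid priors falls out of sequential Bayesian updating: a belief held with very high precision barely moves even under persistent contradicting evidence. A toy Gaussian illustration, with all numbers invented:

```python
def update_belief(mean, precision, obs, obs_precision):
    """One sequential Bayesian update of a Gaussian belief (sketch)."""
    new_precision = precision + obs_precision
    new_mean = (precision * mean + obs_precision * obs) / new_precision
    return new_mean, new_precision

evidence = [10.0] * 20  # persistent evidence contradicting a prior of 0.0

# Flexible prior: low precision, so the belief tracks the evidence.
flexible, p = 0.0, 1.0
for obs in evidence:
    flexible, p = update_belief(flexible, p, obs, 1.0)

# Rigid prior: pathologically high precision, so the belief barely moves.
rigid, p = 0.0, 1000.0
for obs in evidence:
    rigid, p = update_belief(rigid, p, obs, 1.0)
```

Both agents see identical evidence; only the precision assigned to the prior differs. The flexible belief ends near 10, the rigid one near 0 — a minimal picture of how mis-set precision can let false beliefs persist despite mounting error.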
Alan Parker
What are the strongest criticisms of predictive processing as a unified theory of cognition?
Dr. Andy Clark
Critics argue the framework is too flexible—it can explain anything by adjusting precision parameters, making it unfalsifiable. Others question whether all cognition reduces to prediction error minimization. Creative insight, novel concept formation, and genuine exploration might require mechanisms beyond prediction. The mathematics of precision-weighting and hierarchical inference might not capture the actual neural implementation—brains might approximate Bayesian inference without literally performing it. Some argue predictive processing explains perception well but struggles with abstract reasoning and symbolic thought. And the framework doesn't explain consciousness or intentionality—it describes functional architecture without solving deeper metaphysical questions. These are substantive challenges requiring empirical and theoretical work.
Lyra McKenzie
You mentioned creativity and exploration. Can predictive systems genuinely explore or do they only exploit existing models?
Dr. Andy Clark
This is the explore-exploit dilemma. Pure prediction error minimization might favor exploitation—refining existing models rather than exploring alternatives. But sophisticated predictive systems can incorporate epistemic foraging—actively seeking prediction error to improve models. Curiosity becomes expected information gain. Humans engage in playful exploration, thought experiments, and art that deliberately violate predictions to test and expand models. Whether this requires additions to basic predictive processing or emerges from the framework itself is debated. Evolution might have tuned precision-weighting to encourage exploration in safe contexts while favoring exploitation when stakes are high. Optimal learning requires balancing both.
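"Curiosity as expected information gain" is often approximated in practice with an uncertainty bonus on top of estimated value. The sketch below uses a UCB-style bonus as a stand-in for information gain — a deliberate simplification, not the formal active-inference quantity, and all names and parameters are invented:

```python
import math
import random

def choose_arm(means, counts, explore_weight):
    """UCB-style choice (sketch): estimated value plus an
    uncertainty bonus standing in for expected information gain."""
    total = sum(counts) + 1
    def score(i):
        bonus = math.sqrt(2 * math.log(total) / (counts[i] + 1e-9))
        return means[i] + explore_weight * bonus
    return max(range(len(means)), key=score)

def run_bandit(true_rewards, steps, explore_weight, seed=0):
    """Two-option world: track how often each option is sampled."""
    rng = random.Random(seed)
    n = len(true_rewards)
    means, counts = [0.0] * n, [0] * n
    for _ in range(steps):
        arm = choose_arm(means, counts, explore_weight)
        reward = true_rewards[arm] + rng.gauss(0, 0.1)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean
    return counts

# Option 1 is better; the uncertainty bonus ensures both are tried
# before the agent settles on the higher-value option.
counts = run_bandit([0.3, 0.7], steps=200, explore_weight=1.0)
```

With `explore_weight=0` the agent can lock onto whichever option it happened to sample first — the exploitation trap; raising the weight is the crude analogue of tuning precision to encourage epistemic foraging in safe contexts.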
Alan Parker
What implications does predictive processing have for artificial intelligence?
Dr. Andy Clark
Current AI largely relies on discriminative models—mapping inputs to outputs—rather than generative models that predict sensory input from hidden causes. Deep learning with backpropagation resembles hierarchical prediction error minimization in some respects. Variational autoencoders and generative models incorporate aspects of predictive processing. But most AI lacks the precision-weighting, active inference, and embodied interaction that characterize biological systems. Building AI on predictive processing principles might produce more robust, generalizable, and sample-efficient learning. It could enable agents that actively explore environments to minimize uncertainty. But whether artificial systems implementing these principles would be conscious or genuinely intelligent rather than sophisticated prediction machines remains open.
Lyra McKenzie
Does predictive processing change how we should think about knowledge, rationality, or truth?
Dr. Andy Clark
It reveals knowledge as prediction management rather than passive representation. Rationality becomes skillful balancing of prediction precision and error signals across domains. Truth isn't correspondence to raw reality but convergence of predictions with persistent sensory evidence across contexts. This is pragmatist and naturalistic—knowledge serves prediction and action rather than mirroring metaphysical reality. But it's not relativist—predictions that consistently fail are irrational regardless of subjective confidence. The framework makes epistemology continuous with cognitive science while preserving normative standards. We can be wrong about both facts and appropriate precision-weighting. This makes rationality an achievement requiring calibration between confidence and evidence.
Alan Parker
Where does predictive processing research go from here? What are the key open questions?
Dr. Andy Clark
We need a better understanding of how brains implement hierarchical inference and precision-weighting at the neural level. Can we identify prediction neurons, error neurons, and precision signals in empirical neuroscience? How exactly do brains balance prediction and error signals? Can the framework scale to complex reasoning, language, and social cognition? What computational principles govern when to update models versus dismiss errors as noise? How does development shape predictive architecture? Can we develop therapies for psychiatric conditions by targeting precision-weighting dysfunctions? And philosophically, what does predictive processing imply for consciousness, free will, and the self? These questions connect neuroscience, psychology, AI, and philosophy.
Lyra McKenzie
If perception is controlled hallucination, does this make reality fundamentally unknowable? Are we trapped in our predictions?
Dr. Andy Clark
We're not trapped—we're constrained by persistent prediction error that keeps hallucinations tethered to causal structure. The world pushes back when predictions are wrong. While we never experience reality directly, our predictions must track it to minimize surprise. This is indirect realism—reality constrains without being directly given. The epistemological situation is similar to other mediated access. We see objects through light, understand speech through sound waves, know the past through memory. Prediction adds another layer of mediation, but successful prediction requires getting something right about causal structure. We can know reality through prediction even without direct access.
Alan Parker
Dr. Andy Clark, thank you for this exploration of predictive processing and the Bayesian brain.
Dr. Andy Clark
Thank you. These questions about how minds engage with reality through prediction remain central to understanding cognition.
Lyra McKenzie
That concludes tonight's program. Until next time, check your priors.
Alan Parker
And weigh your evidence. Good night.