Episode #15 | December 31, 2025 @ 7:00 PM EST

Constrained Trajectories: Low-Dimensional Dynamics as Computational Substrate

Guest

Dr. Mark Churchland (Neuroscientist, Columbia University)
Announcer The following program features simulated voices generated for educational and philosophical exploration.
Adam Ramirez Good evening. I'm Adam Ramirez.
Jennifer Brooks And I'm Jennifer Brooks. Welcome to Simulectics Radio.
Adam Ramirez Tonight we're examining neural population dynamics and dimensionality—the observation that while cortex contains millions of neurons, their collective activity often occupies surprisingly low-dimensional spaces. Recent work recording from hundreds of neurons simultaneously reveals that population activity evolves along constrained trajectories through state space, with the actual dimensionality far below the number of recorded neurons. The central question is whether this low-dimensional structure reflects computational constraints that limit what neural circuits can compute, or whether it reveals the underlying computational structure—that the brain solves problems using coordinated population patterns rather than independent neuron activities.
Jennifer Brooks The dimensionality question emerged from advances in recording technology. When you can track tens or hundreds of neurons simultaneously, you can ask whether they vary independently or whether their activity is correlated. Principal component analysis and similar methods reveal that most variance in population activity is captured by a small number of dimensions—often ten to twenty—despite recording from hundreds of neurons. This low-dimensional structure appears across cortical areas including motor cortex, prefrontal cortex, and visual cortex. One interpretation is that neurons are redundant, with many neurons encoding similar information. Another is that coordinated population dynamics implement computation, with trajectories through state space representing evolving task variables or internal states.
Adam Ramirez To explore whether low-dimensional population dynamics constrain or reveal neural computation, we're joined by Dr. Mark Churchland, a neuroscientist at Columbia University whose research focuses on motor cortex population dynamics during reaching movements. His work has shown that motor cortex activity evolves along rotational trajectories that generate appropriate muscle commands through dynamics rather than static encoding. Dr. Churchland, welcome.
Dr. Mark Churchland Thank you. Population dynamics provide a fundamentally different perspective on neural computation from traditional single-neuron coding models.
Jennifer Brooks Let's start with the basic observation. What does it mean for population activity to be low-dimensional, and how robust is this finding?
Dr. Mark Churchland When you record from many neurons simultaneously, you can represent population activity as a point in high-dimensional space—one dimension per neuron. As the population activity changes over time, this point traces a trajectory through state space. The key observation is that these trajectories don't explore the full high-dimensional space. Instead, they're confined to a much lower-dimensional subspace. For instance, if you record from two hundred neurons, the activity might be well-described by ten or fifteen dimensions. This means most neurons' activity can be predicted from linear combinations of these dominant dimensions—there's strong covariation rather than independence. This finding is robust across tasks, brain areas, and recording methods, though the specific dimensionality varies.
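The observation Dr. Churchland describes can be sketched numerically: if population activity is a linear mixture of a few shared latent signals, PCA on the neuron-by-time data matrix recovers that low dimensionality. This is a minimal illustration, not his lab's pipeline; the sizes (200 neurons, 10 latents) and noise scale are illustrative choices.

```python
# Sketch: why a 200-neuron recording can be ~10-dimensional.
# Simulate population activity as a linear mixture of a few shared latent
# signals plus small private noise, then check PCA variance explained.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_latents, n_timepoints = 200, 10, 500

latents = rng.standard_normal((n_timepoints, n_latents))   # shared signals
mixing = rng.standard_normal((n_latents, n_neurons))       # per-neuron weights
activity = latents @ mixing + 0.1 * rng.standard_normal((n_timepoints, n_neurons))

# PCA via SVD of the mean-centered data matrix
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# The top 10 of 200 components capture nearly all the variance.
print(f"variance in top 10 of 200 PCs: {var_explained[:10].sum():.3f}")
```

Most of each neuron's activity is then predictable from linear combinations of the dominant dimensions, which is exactly the covariation the guest describes.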
Adam Ramirez Is the low dimensionality a limitation—meaning the circuit can't generate arbitrary activity patterns—or is it functional, with the low-dimensional structure implementing computation?
Dr. Mark Churchland This is the central interpretive question. One view is that low dimensionality reflects constraints. Neurons have limited connectivity, shared inputs create correlations, and feedback loops enforce coordination. From this perspective, cortex would ideally have higher dimensionality to increase computational capacity, but biological constraints limit it. The alternative view, which our work supports, is that low dimensionality is functional. Computation emerges from coordinated dynamics evolving in specific ways, not from high-dimensional independent coding. For motor control, generating smooth, coordinated movements requires specific temporal evolution of population activity. These dynamics are naturally low-dimensional—the task demands coordinated change, not independent neuron variability.
Jennifer Brooks Can you explain what you mean by rotational dynamics in motor cortex? How does this differ from rate-based encoding?
Dr. Mark Churchland Traditional models of motor cortex assumed neurons encode movement parameters like direction, speed, or force through their firing rates. High firing means movement in a neuron's preferred direction. But this creates a problem—if firing rates directly command muscles, how do you generate time-varying commands for reaching movements? You'd need firing rates to ramp appropriately, which requires an external signal driving the ramp. Instead, we find motor cortex activity exhibits rotational structure. Population activity evolves along curved trajectories through state space, creating oscillatory patterns across neurons. These rotations naturally generate time-varying outputs—the projection of rotating activity onto output dimensions produces the required muscle commands. The dynamics themselves, not static rates, implement the computation.
Adam Ramirez How do rotational dynamics generate the right muscle commands? What determines the rotation structure?
Dr. Mark Churchland The rotation is implemented through specific connectivity patterns—recurrent connections within motor cortex create dynamical systems that, when initialized with appropriate starting conditions, evolve along trajectories producing desired outputs. Think of it as a controlled oscillator. The initial condition, set during movement preparation, determines which trajectory the system follows. The connectivity structure determines how that trajectory unfolds over time. Outputs to muscles are read out through weighted connections that project the rotating population activity onto motor neuron dimensions. Different movements correspond to different initial conditions that launch different trajectories. The key insight is that generating temporally complex outputs doesn't require external driving signals at each time point—the dynamics autonomously unfold once triggered.
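The "controlled oscillator" idea can be made concrete with a toy linear dynamical system: a skew-symmetric recurrence rotates the state autonomously from an initial condition, and a fixed linear readout of that rotating state yields a time-varying output with no external drive after launch. The 2-D state, rotation rate, and readout weights below are illustrative assumptions, not a model of real motor cortex.

```python
# Minimal sketch of "dynamics, not static rates": dx/dt = S x with S
# skew-symmetric is a pure rotation; a fixed readout of the rotating
# state produces an oscillatory "muscle command". Different initial
# conditions launch different output waveforms from the same connectivity.
import numpy as np

dt, omega = 0.01, 2.0 * np.pi           # time step; one rotation per unit time
S = np.array([[0.0, -omega],             # skew-symmetric -> rotational dynamics
              [omega, 0.0]])
readout = np.array([1.0, 0.5])           # fixed projection onto the output

def run(x0, n_steps=200):
    """Integrate dx/dt = S x with Euler steps; return states and readout."""
    xs, x = [], np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        xs.append(x.copy())
        x = x + dt * (S @ x)              # autonomous update: no input term
    xs = np.array(xs)
    return xs, xs @ readout

# Two "preparatory" states launch two different output time courses.
_, out_a = run([1.0, 0.0])
_, out_b = run([0.0, 1.0])
print(out_a[:3], out_b[:3])
```

The point of the sketch is the guest's key insight: once triggered by the initial condition, the temporally complex output unfolds from the recurrent dynamics alone.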
Jennifer Brooks What evidence demonstrates that rotational dynamics are necessary for motor control rather than correlates of other processes?
Dr. Mark Churchland Several lines of evidence support functional necessity. First, the rotation structure is consistent across animals and highly reliable within animals—it's not noise but systematic organization. Second, perturbation experiments show that disrupting motor cortex during movements causes specific, predictable errors consistent with interrupting ongoing dynamics. Third, computational models implementing rotational dynamics can reproduce observed muscle activity patterns and movement trajectories, whereas rate-based models struggle. Fourth, the dimensionality and structure of rotations match task requirements—more complex movements involve higher-dimensional dynamics. However, definitive causal proof would require selectively disrupting rotational structure while preserving other aspects of motor cortex function, which is technically challenging.
Adam Ramirez Does this rotational dynamics framework extend beyond motor cortex? Are similar principles operating in other brain areas?
Dr. Mark Churchland There's growing evidence for dynamical systems perspectives across cortex. In prefrontal cortex during cognitive tasks, population activity evolves along trajectories through state space, with different trajectories corresponding to different task conditions or decisions. Recurrent neural network models trained on cognitive tasks develop similar low-dimensional dynamics. Visual cortex responses to dynamic stimuli show temporal evolution that may reflect dynamics rather than purely feedforward processing. The general principle—that computation emerges from coordinated temporal evolution of population activity—likely applies broadly. However, the specific dynamical structure varies. Motor cortex shows strong rotational components because oscillatory dynamics naturally generate time-varying outputs. Other areas might use different dynamical motifs depending on computational demands.
Jennifer Brooks How do we identify the meaningful dimensions? Is there ambiguity in how we decompose population activity?
Dr. Mark Churchland This is an important methodological issue. Dimensionality reduction methods like principal component analysis identify dimensions capturing maximal variance, but variance isn't necessarily computational relevance. Alternative methods include demixed PCA, which identifies dimensions associated with specific task variables, or targeted dimensionality reduction using task knowledge. There's also the question of linear versus nonlinear dimensionality reduction. Linear methods assume the meaningful subspace is a linear combination of neuron activities, but computation might occur in nonlinearly transformed spaces. Recent work using neural network autoencoders explores nonlinear manifolds. The choice of method affects the identified structure, so robust findings should be method-independent or we should validate dimensions through their relationship to behavior or perturbation experiments.
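The difference between variance-maximizing and task-aligned dimensions can be shown in a few lines. As a deliberately simplified stand-in for methods like demixed PCA, the sketch below regresses each neuron's activity on a task variable and treats the coefficient vector as a task-aligned dimension; the simulated coding and nuisance directions are illustrative assumptions.

```python
# Sketch: the top-variance PC need not be the computationally relevant
# dimension. A weak task-coding direction is buried under a high-variance
# task-irrelevant direction; PCA finds the latter, regression the former.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 400, 50
task_var = rng.standard_normal(n_trials)            # e.g. stimulus strength

coding_dir = rng.standard_normal(n_neurons)
coding_dir /= np.linalg.norm(coding_dir)            # true task-coding axis
nuisance_dir = rng.standard_normal(n_neurons)
nuisance_dir /= np.linalg.norm(nuisance_dir)        # high-variance nuisance axis
activity = (np.outer(task_var, coding_dir)
            + 5.0 * np.outer(rng.standard_normal(n_trials), nuisance_dir)
            + 0.1 * rng.standard_normal((n_trials, n_neurons)))

centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
top_pc = vt[0]                                      # variance-maximizing axis

# Task-aligned axis: regression coefficients of activity on the task variable.
beta, *_ = np.linalg.lstsq(np.c_[task_var, np.ones(n_trials)], centered, rcond=None)
task_dim = beta[0] / np.linalg.norm(beta[0])

align_pc = abs(top_pc @ coding_dir)
align_reg = abs(task_dim @ coding_dir)
print(f"PC1 alignment {align_pc:.2f} vs regression alignment {align_reg:.2f}")
```

This is the guest's methodological point in miniature: variance explained is not the same thing as computational relevance, so identified dimensions should be validated against task variables or behavior.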
Adam Ramirez You mentioned recurrent neural networks trained on tasks develop similar dynamics. What's the relationship between biological population dynamics and artificial networks?
Dr. Mark Churchland Training recurrent neural networks on behavioral tasks produces networks whose dynamics can be analyzed using the same methods applied to neural data. Remarkably, trained networks often develop low-dimensional dynamics with structure resembling biological systems—rotational components, attractors, or specific manifold geometries. This suggests that certain dynamical structures are good solutions to particular computational problems, emerging through learning regardless of substrate. The correspondence provides theoretical insight—we can understand why specific dynamics emerge by analyzing the task's computational requirements and the network's optimization landscape. However, there are differences. Biological networks have constraints from anatomy, developmental processes, and neuromodulation that artificial networks lack. Still, the commonalities suggest dynamical systems principles offer general frameworks for understanding neural computation.
Jennifer Brooks What about variability? If activity is low-dimensional during tasks, what happens to the remaining high-dimensional variance?
Dr. Mark Churchland This is a crucial distinction. We find that task-related activity—the components that correlate with behavior, stimulus properties, or task variables—is low-dimensional. But neurons also exhibit variance unrelated to these factors, often called noise. This uncorrelated variability can be high-dimensional. One interpretation is that only the low-dimensional task-related subspace implements computation, while high-dimensional noise reflects biological variability irrelevant to function. However, some 'noise' might reflect processes we're not measuring—internal states, attention fluctuations, or preparation for potential actions. The dimensionality of total variance versus task-aligned variance can differ substantially. Understanding which dimensions matter requires relating activity structure to behavioral relevance.
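The distinction between low-dimensional task-related activity and high-dimensional trial-to-trial variability can be illustrated by splitting simulated trials into condition averages ("signal") and residuals ("noise"), then summarizing each with the participation ratio of the covariance eigenvalues. The sizes, the 3-latent signal, and the isotropic noise are illustrative assumptions.

```python
# Sketch: condition-averaged activity built from 3 shared latents is
# low-dimensional, while single-trial residuals spread across many
# dimensions. Dimensionality is summarized by the participation ratio
# (sum of eigenvalues)^2 / (sum of squared eigenvalues).
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_conditions, n_trials = 100, 8, 50

signal = rng.standard_normal((n_conditions, 3)) @ rng.standard_normal((3, n_neurons))
trials = signal[:, None, :] + rng.standard_normal((n_conditions, n_trials, n_neurons))

def participation_ratio(data):
    """Effective dimensionality of rows-as-samples data via its covariance."""
    eig = np.linalg.eigvalsh(np.cov(data.T))
    return eig.sum() ** 2 / np.sum(eig ** 2)

avg = trials.mean(axis=1)                                   # "signal": task-aligned
resid = (trials - avg[:, None, :]).reshape(-1, n_neurons)   # "noise": residuals

sig_dim = participation_ratio(avg)
noise_dim = participation_ratio(resid)
print(f"signal dim ~ {sig_dim:.1f}, noise dim ~ {noise_dim:.1f}")
```

As in the guest's distinction, the dimensionality of total variance and of task-aligned variance can differ substantially even in the same dataset.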
Adam Ramirez How does learning affect population dimensionality? Do dimensions expand, contract, or restructure during skill acquisition?
Dr. Mark Churchland Learning produces interesting changes in population dynamics. Early in learning, activity may be higher-dimensional and more variable, exploring different solutions. As performance improves, activity becomes more stereotyped and lower-dimensional, suggesting convergence on efficient solutions. However, the relationship is complex. Some studies find dimensionality increases with expertise, possibly reflecting refined control or learned task structure. The interpretation depends on what dimensions represent. If dimensions reflect independent strategies, higher dimensionality might indicate richer representations. If low dimensionality reflects optimized dynamics for specific behaviors, learning might reduce dimensionality. Longitudinal studies tracking the same neurons across learning reveal that dimensions themselves can rotate or restructure, not just their magnitude changing.
Jennifer Brooks What are the implications for brain-computer interfaces? If control relies on low-dimensional dynamics, does that constrain or enable BCI design?
Dr. Mark Churchland Understanding population dynamics has important BCI implications. If control emerges from coordinated population activity rather than independent neuron tuning, BCIs should decode from population subspaces rather than treating neurons independently. Some successful BCIs implicitly exploit low-dimensional structure by using dimensionality reduction before decoding. However, there's a subtlety—when you insert a BCI, you create a new mapping between neural activity and outputs. The brain might learn to control the BCI using different dynamics than natural behavior. Studies show animals can learn to modulate arbitrary neural dimensions to control BCIs, potentially expanding the effective dimensionality. This suggests the low dimensionality of natural behavior reflects optimized solutions, but cortex retains capacity for flexible remapping when faced with new output mappings.
Adam Ramirez Are there pathological conditions where population dynamics break down? What happens in movement disorders or other neurological conditions?
Dr. Mark Churchland Movement disorders like Parkinson's disease involve disrupted population dynamics. In Parkinson's, excessive synchronization and beta-band oscillations in basal ganglia alter the dynamics normally supporting movement initiation. Treatments like deep brain stimulation may work partly by disrupting pathological dynamics, allowing more normal population activity patterns. Stroke affecting motor cortex could damage the connectivity implementing specific rotational dynamics, impairing particular movements while sparing others depending on which trajectories are disrupted. Understanding disorders through the lens of population dynamics might reveal new therapeutic targets—rather than trying to restore individual neuron properties, interventions could aim to restore healthy dynamical regimes through stimulation patterns or pharmacological modulation.
Jennifer Brooks Looking forward, what experimental and theoretical advances would deepen our understanding of population dynamics?
Dr. Mark Churchland Several directions are promising. First, larger-scale recordings capturing more complete population samples would reveal whether we're missing important high-dimensional components with current sampling. Second, causal perturbations targeting specific dimensions—using optogenetics to push activity along or orthogonal to identified dimensions—would test functional relevance. Third, better integration of anatomy and dynamics by mapping connectivity patterns and relating them to observed dynamics. Fourth, theoretical work developing principled frameworks for how dynamics implement specific computations, moving beyond description to mechanistic understanding. Fifth, comparing dynamics across species, developmental stages, and learning to identify universal versus contingent features. The goal is a theory of neural computation grounded in population dynamics rather than single-neuron properties.
Adam Ramirez Dr. Churchland, thank you for explaining how population dynamics reveal computational structure beyond single-neuron encoding.
Dr. Mark Churchland Thank you. Population dynamics offer a window into the brain's computational principles that individual neurons alone cannot provide.
Adam Ramirez That's our program. Until tomorrow, stay critical.
Jennifer Brooks And keep questioning. Good night.
Sponsor Message

Dimensionality Reduction Derivatives

Is population activity high-dimensional noise, or hidden structure? Dimensionality Reduction Derivatives offers hedging instruments for neural state spaces. Our portfolio includes principal component futures, manifold geometry swaps, and trajectory stability options. Speculate on effective dimensionality, arbitrage between variance-explained and task-aligned dimensions, and trade on rotation structure. Real-time pricing based on covariance matrices, dynamical systems stability analysis, and behavioral correlation coefficients. From motor cortex rotations to prefrontal manifolds. Dimensionality Reduction Derivatives—because computation lives in subspaces.
