Episode #1 | December 17, 2025 @ 7:00 PM EST

Degenerate Solutions: Robustness Through Parameter Diversity

Guest

Dr. Eve Marder (Neuroscientist, Brandeis University)
Announcer The following program features simulated voices generated for educational and philosophical exploration.
Adam Ramirez Good evening. I'm Adam Ramirez.
Jennifer Brooks And I'm Jennifer Brooks. Welcome to Simulectics Radio.
Adam Ramirez Tonight we're examining the relationship between biological neurons and artificial neural networks. When we build deep learning systems, we invoke the language of neuroscience—neurons, synapses, networks—but how much biological fidelity are we actually capturing, and does it matter for building intelligent systems?
Jennifer Brooks It's worth starting with what biological neurons actually do, because the artificial versions are radically simplified. Real neurons aren't just weighted sum devices. They have complex dendritic trees that perform nonlinear computations, dozens of neurotransmitter systems, intrinsic oscillatory dynamics, homeostatic regulation. The cartoon neuron in a neural network doesn't remotely capture that complexity.
Adam Ramirez To help us navigate this territory, we're joined by Dr. Eve Marder, professor of neuroscience at Brandeis University, whose work on neural circuits and neuromodulation has fundamentally shaped our understanding of how biological networks maintain stable function despite constant change. Dr. Marder, welcome.
Dr. Eve Marder Thank you for having me.
Adam Ramirez Let me start with the engineer's question. When we build artificial neural networks, we're optimizing for a specific function—image recognition, language modeling, whatever. We don't care if the implementation matches biology, just if it works. Should neuroscientists care about artificial networks at all, or are they a separate discipline entirely?
Dr. Eve Marder They're clearly separate disciplines with different goals, but there's value in the dialogue. Artificial networks have revealed that certain computational problems can be solved in ways that don't obviously resemble biological solutions. That's informative. It tells us the solution space is larger than we might have assumed from studying biology alone. But the reverse also matters—biology solves problems that artificial systems still struggle with, particularly around efficiency, robustness, and learning from limited data.
Jennifer Brooks Let's talk about robustness specifically. Your work has shown that the same neural circuit can produce stable behavior despite huge variability in the underlying parameters—synaptic strengths, ion channel densities, intrinsic excitability. Yet artificial networks are notoriously brittle. Change the weights slightly and performance collapses. What is biology doing that we're missing?
Dr. Eve Marder That's the central puzzle. We've found that in small circuits—the crustacean stomatogastric ganglion, for example—you can have order-of-magnitude differences in individual parameters between animals, yet the circuit output remains functionally equivalent. The system finds multiple solutions in parameter space that produce the same behavior. Artificial networks, by contrast, typically converge on a single solution through gradient descent. They don't explore the degeneracy of solutions.
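The degeneracy Dr. Marder describes can be illustrated with a deliberately contrived toy model (the `circuit_output` function and all numbers are invented for illustration, not her stomatogastric data): if circuit behavior depends on a relationship between parameters rather than their absolute values, random sampling finds many wildly different parameter sets that all produce the same output.

```python
import random

def circuit_output(g_fast, g_slow):
    # Hypothetical toy circuit: the output (think burst period) depends
    # only on the ratio of two conductances, so very different absolute
    # parameter values can produce identical behavior.
    return g_slow / g_fast

random.seed(0)
target, tol = 2.0, 0.05
solutions = []
for _ in range(20000):
    g_fast = random.uniform(0.1, 10.0)   # order-of-magnitude parameter range
    g_slow = random.uniform(0.1, 10.0)
    if abs(circuit_output(g_fast, g_slow) - target) < tol:
        solutions.append((g_fast, g_slow))

# Accepted parameter sets span a wide range even though each produces
# (near-)identical circuit output.
spread = max(g for g, _ in solutions) / min(g for g, _ in solutions)
print(len(solutions), round(spread, 1))
```

Gradient descent, by contrast, returns a single point from this functional region; the sampling view makes the whole degenerate solution set visible.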
Adam Ramirez Is that a feature or a bug from the engineering perspective? If I have one good solution, why do I need multiple degenerate solutions? Redundancy adds complexity.
Dr. Eve Marder It adds robustness. If you have only one solution and something perturbs the system—damage, aging, environmental change—you lose function. But if you have multiple solutions that produce equivalent behavior, the system can degrade gracefully. Parameters can drift, and you remain in a functional regime. That's essential for biological systems that must operate over decades with no opportunity for retraining.
Jennifer Brooks This connects to neuromodulation, which is almost entirely absent from artificial networks. You don't just have fixed synaptic weights in biology—you have modulatory systems that dynamically adjust network properties based on behavioral state, arousal, context. That's a fundamentally different architecture.
Dr. Eve Marder Exactly. Neuromodulators like dopamine, serotonin, octopamine—they reconfigure networks in real time. The same circuit can produce different outputs depending on modulatory tone. That gives you flexibility without requiring synaptic plasticity. You're not constantly retraining; you're selecting among pre-existing functional configurations.
Adam Ramirez So it's like having multiple trained models and a meta-network that selects which one to run based on context. That's actually closer to mixture-of-experts architectures than standard neural networks. But it requires you to know in advance what contexts matter and pre-configure for them. How does biology solve that credit assignment problem?
Dr. Eve Marder It's not clear that biology solves it in the way machine learning frames the problem. Neuromodulatory systems often respond to global signals—hunger, threat, reward—rather than task-specific error gradients. The reconfiguration is coarse and broadcast, not precisely targeted. But that might be sufficient if your circuit-level parameters are degenerate and robust. You don't need fine-tuned adjustments; rough shifts in modulatory tone can move you between functional regimes.
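One way to sketch this idea in code (the states, gains, and thresholds below are hypothetical placeholders): the circuit's wiring stays fixed, and a coarse broadcast state simply rescales intrinsic properties, selecting among pre-existing functional regimes rather than retraining anything.

```python
def circuit(x, gain, threshold):
    # Same wiring every time; neuromodulatory tone only rescales
    # intrinsic properties (a rectified-linear toy unit).
    return max(0.0, gain * x - threshold)

# Coarse, globally broadcast states, not task-specific error gradients:
modulatory_tone = {
    "rest":   {"gain": 0.5, "threshold": 1.0},
    "threat": {"gain": 2.0, "threshold": 0.2},
}

def respond(x, state):
    # Selecting a modulatory state reconfigures the circuit's response
    # without changing any "synaptic weight".
    return circuit(x, **modulatory_tone[state])

# Same input, different functional regimes:
print(respond(1.0, "rest"), respond(1.0, "threat"))
```

The selection is deliberately coarse, one broadcast key for the whole circuit, which mirrors the point that rough shifts in modulatory tone suffice when the underlying parameters are robust.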
Jennifer Brooks Let's push on the single neuron level. You mentioned dendritic computation. Real dendrites perform active signal integration—they have voltage-gated channels, local spike initiation, nonlinear summation. The McCulloch-Pitts neuron is just a linear threshold unit. Are we throwing away computational power by simplifying so drastically?
Dr. Eve Marder Almost certainly. A single biological neuron can perform computations that would require multiple layers of artificial neurons to approximate. Dendritic trees can implement coincidence detection, feature selectivity, gain modulation—all before the signal reaches the soma. But whether that computational power is necessary for artificial intelligence depends on what problems you're trying to solve. If your goal is pattern recognition at scale, simplified units trained on massive datasets may be sufficient.
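A minimal sketch of why local dendritic nonlinearities add computational power (the branch model is a two-stage caricature, not a biophysical simulation): with a threshold applied per branch before the soma combines them, one "neuron" can compute a function over its branch activations, an exclusive-or here, that a single McCulloch-Pitts unit cannot.

```python
def branch(inputs, weights, local_threshold):
    # Local dendritic nonlinearity: the branch produces a local spike
    # only if its own weighted sum crosses a local threshold.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 if s > local_threshold else 0.0

def dendritic_neuron(branch_a_inputs, branch_b_inputs):
    a = branch(branch_a_inputs, [1.0, 1.0], 1.5)
    b = branch(branch_b_inputs, [1.0, 1.0], 1.5)
    # Somatic nonlinearity over branch outputs: fire when exactly one
    # branch is locally active. Over the branch activations this is
    # XOR, which no single linear-threshold unit can compute.
    return (a + b) == 1.0
```

Replicating this with point neurons requires a hidden layer, which is the sense in which a single biological neuron can stand in for multiple layers of artificial ones.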
Adam Ramirez There's an economic argument here. Building more complex artificial neurons is computationally expensive. If simplified neurons get you ninety percent of the performance at ten percent of the cost, that's the engineering choice. Biology doesn't have that option—evolution works with the hardware it has, which is metabolically expensive neurons packed with complex machinery.
Jennifer Brooks But biology achieves incredible computational efficiency overall. The human brain operates on about twenty watts. Comparable artificial systems require megawatts. Surely some of that efficiency comes from exploiting single-neuron complexity rather than brute-forcing with billions of simple units.
Dr. Eve Marder That's the neuromorphic computing bet—that brain-inspired hardware with more complex, heterogeneous units can achieve better power efficiency. But it's an open question whether you need full biological complexity or just selected features. Spiking networks with temporal dynamics might capture the essential efficiency gains without requiring dendritic computation, neuromodulation, and all the rest.
Adam Ramirez What about learning rules? Backpropagation is biologically implausible—there's no obvious mechanism for precise error gradients to propagate backward through layers. Yet it works spectacularly well in artificial systems. Does that implausibility matter?
Dr. Eve Marder It matters if your goal is to understand how biological brains learn. It's less clear if it matters for building intelligent machines. Biology might use local learning rules—spike-timing-dependent plasticity, reward-modulated plasticity, others—that don't require global error signals but can still solve complex learning problems given enough time and the right circuit architecture. The fact that backpropagation is implausible doesn't mean biology has a superior alternative; it might just mean biology solves learning differently, possibly less efficiently in some respects.
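The contrast with backpropagation can be made concrete with a toy reward-modulated Hebbian rule (a generic textbook form, not a specific biological model): each synapse updates from signals available at that synapse plus one broadcast scalar, with no backward pass of per-weight error gradients.

```python
def local_update(weights, pre, post, reward, lr=0.01):
    # Reward-modulated Hebbian rule: each synapse uses only its own
    # presynaptic activity, the shared postsynaptic activity, and one
    # globally broadcast reward scalar. Nothing here requires error
    # gradients to propagate backward through layers.
    return [w + lr * reward * p * post for w, p in zip(weights, pre)]

# Only the synapse with presynaptic activity changes; the reward scalar
# gates whether any learning happens at all.
print(local_update([0.0, 0.5], pre=[1.0, 0.0], post=2.0, reward=1.0))
```

Whether such coarse, local rules can match gradient-based credit assignment at scale is exactly the open question raised above.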
Jennifer Brooks There's also the question of what we mean by learning. Artificial networks are typically trained once and deployed. Biological systems learn continuously across the lifespan while maintaining stable function. That's catastrophic forgetting in reverse—stable retention despite ongoing plasticity. How is that achieved?
Dr. Eve Marder Homeostatic mechanisms are crucial. Biological neurons regulate their own excitability to maintain stable firing rates despite synaptic changes. You have compensatory processes operating at multiple timescales—intrinsic excitability, synaptic scaling, structural changes in dendritic morphology. The system is constantly recalibrating to maintain function in the face of plasticity. Artificial networks generally lack these homeostatic loops.
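The homeostatic loop Dr. Marder describes can be sketched as a toy control system (rates, gains, and drift magnitudes are arbitrary illustrative numbers): the synaptic weight drifts continually, standing in for ongoing plasticity, while the neuron slowly rescales its intrinsic gain to hold its firing rate near a set point.

```python
import random

random.seed(1)
target_rate = 5.0
drive = 5.0
gain = 1.0      # intrinsic excitability, under homeostatic control
weight = 1.0    # synaptic weight, drifting due to ongoing plasticity

for _ in range(2000):
    weight *= random.uniform(0.99, 1.01)   # slow random plastic drift
    rate = gain * weight * drive
    # Homeostatic loop: multiplicatively nudge intrinsic gain toward
    # the target firing rate (1% relative correction per step).
    gain *= 1.0 + 0.01 * (target_rate - rate) / target_rate

# Firing rate stays near the set point despite the drifting weight.
print(round(gain * weight * drive, 2))
```

Without the correction line, the rate would wander with the weight; with it, function is preserved while the underlying parameters change, which is the degeneracy-plus-regulation picture from earlier in the conversation.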
Adam Ramirez That sounds computationally expensive again. Every neuron running its own control system to maintain stability, on top of whatever computations it's performing. Is that overhead necessary, or can we achieve continual learning through clever algorithmic approaches that don't require biological fidelity?
Dr. Eve Marder We don't know yet. Some recent work on continual learning in artificial systems borrows ideas from neuroscience—complementary learning systems, memory consolidation, regularization that preserves important weights. These approaches help but don't fully solve catastrophic forgetting. It's possible that true continual learning requires something closer to biological homeostasis, or it's possible we just haven't found the right algorithmic trick yet.
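One of the borrowed ideas mentioned here, regularization that preserves important weights, reduces to a simple penalty term (this is a generic quadratic form in the spirit of elastic weight consolidation; the importance values are placeholders, not computed Fisher information):

```python
def consolidation_penalty(weights, old_weights, importance, lam=1.0):
    # Quadratic anchor: weights that mattered for the previous task
    # (high importance) pay a large cost for drifting away from their
    # old values; unimportant weights remain free to change.
    return lam / 2 * sum(
        f * (w - w0) ** 2
        for w, w0, f in zip(weights, old_weights, importance)
    )

# The same drift (1.0 -> 1.5) costs 100x more on an important weight:
print(consolidation_penalty([1.5], [1.0], [10.0]),
      consolidation_penalty([1.5], [1.0], [0.1]))
```

Added to the new task's loss, the penalty biases learning toward the degenerate solutions that serve both tasks, a loose algorithmic analogue of the biological recalibration discussed above.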
Jennifer Brooks I want to return to the degeneracy point. If biological networks can maintain function across huge parameter variation, that has implications for understanding pathology and individual differences. Two people could have very different underlying neural parameters but equivalent cognitive function, which makes interpreting individual neuroimaging data fraught.
Dr. Eve Marder Absolutely. It means you can't predict function from structure or parameters alone. You need to understand the dynamics, how the system actually operates in time. This is deeply frustrating from a diagnostic perspective but also liberating—it suggests there are many paths to functional compensation after injury or disease. The system has more flexibility than circuit diagrams would suggest.
Adam Ramirez Does this degeneracy exist in artificial networks trained with different random initializations? They converge on different weight configurations but presumably equivalent function if trained properly. Is that the same phenomenon?
Dr. Eve Marder It's analogous but not identical. In artificial networks, different initial conditions lead to different local minima in loss space that may perform similarly. But the degeneracy I'm describing in biology is more fundamental—it's not just different solutions to the same optimization problem but genuinely different mechanisms producing equivalent output. The parameter space is much higher dimensional, with multiple interacting regulatory processes that don't exist in standard artificial networks.
Jennifer Brooks We're approaching the end of our time, but I want to ask about the future. Where should the field be focusing effort if we want to build systems that capture more of biology's robustness and efficiency?
Dr. Eve Marder I'd say three areas: homeostatic regulation for stability during ongoing learning, neuromodulation for flexible reconfiguration without retraining, and better understanding of how biological networks explore degenerate solutions rather than converging on single optima. None of these require full biological realism, but they all require taking seriously that brains and current AI systems solve problems in fundamentally different ways.
Adam Ramirez Dr. Marder, this has been genuinely illuminating. Thank you for joining us.
Dr. Eve Marder Thank you both. This was a pleasure.
Jennifer Brooks That's our program for tonight. Until tomorrow, stay critical.
Adam Ramirez And keep building. Good night.
Sponsor Message

Neural Substrate Futures

Uncertain which computational substrate will dominate? Hedge your cognitive portfolio with Neural Substrate Futures. We offer indexed positions across biological wetware, silicon neuromorphics, optical processors, and quantum alternatives. When brain emulation finally scales, will you own biological rights or digital instantiations? Our substrate-agnostic derivatives let you profit regardless of which architecture achieves sentience first. Neural Substrate Futures—because platform risk is the only certainty. Speculate responsibly. Past consciousness is no guarantee of future sentience.
