Episode #2 | January 2, 2026 @ 7:00 PM EST

The Emulation Barrier: Scanning, Simulation, and Digital Continuity

Guest

Dr. Randal Koene (Neuroscientist, Carboncopies Foundation)
Announcer The following program features simulated voices generated for educational and philosophical exploration.
Adam Ramirez Good evening. I'm Adam Ramirez.
Jennifer Brooks And I'm Jennifer Brooks. Welcome to Simulectics Radio.
Adam Ramirez Tonight we're tackling whole brain emulation—the prospect of scanning a biological brain at sufficient resolution to capture its computational function, then simulating that structure in silicon. This is substrate-independent mind, the upload scenario, digital immortality, whatever you want to call it. But let's be precise about what we're actually discussing. We're not talking about recreating general intelligence or approximating human cognition. We're talking about copying a specific individual brain with enough fidelity that the simulation produces behavior indistinguishable from the original.
Jennifer Brooks And that precision requirement is where things get technically brutal. You can't just scan gross anatomy and call it done. The brain's computational function emerges from connectivity at synaptic resolution—on the order of a hundred trillion connections, each with distinct strength, morphology, and molecular composition. Miss those details and you don't have that person's brain. You have a rough sketch that might not even be functional, let alone preserve identity.
Adam Ramirez To examine the current state of scanning and simulation technology, and the barriers we'd need to overcome, we're joined by Dr. Randal Koene, neuroscientist and founder of the Carboncopies Foundation, which works specifically on advancing whole brain emulation. Dr. Koene, welcome.
Dr. Randal Koene Thank you for having me. This is exactly the kind of rigorous discussion we need.
Jennifer Brooks Let's start with scanning. What resolution do we actually need to capture functional brain architecture, and what technologies are available at that scale?
Dr. Randal Koene The minimum requirement is synaptic-level connectivity—which neurons connect to which, where, and ideally with what strength. For mammalian brains, that means nanometer-scale resolution across cubic millimeters to cubic centimeters of tissue. Currently, the gold standard is serial section electron microscopy. You physically slice the tissue into ultra-thin sections, image each with an electron microscope, then computationally reconstruct the three-dimensional structure. This has been done for small organisms—C. elegans, larval Drosophila—and for small volumes of mammalian cortex.
Adam Ramirez Small volumes being the operative phrase. What's the actual throughput? How long does it take to scan a cubic millimeter at synaptic resolution?
Dr. Randal Koene With current automated systems, you're looking at weeks to months for a cubic millimeter, depending on resolution and the degree of human intervention required for error correction. A human brain is roughly 1.3 million cubic millimeters. At current rates, scanning a whole human brain would take centuries, even with perfect automation and no technical failures.
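As a sanity check on that "centuries" figure, here's a back-of-envelope sketch. The per-cubic-millimeter rate and the degree of parallelism are illustrative assumptions, not measured figures:

```python
# Back-of-envelope scan-time estimate (illustrative numbers only).
BRAIN_VOLUME_MM3 = 1.3e6    # approximate human brain volume in cubic mm
DAYS_PER_MM3 = 30           # assumed: ~1 month per cubic mm with current automation
PARALLEL_SCANNERS = 100     # assumed: a large fleet of scanning rigs running in parallel

total_days = BRAIN_VOLUME_MM3 * DAYS_PER_MM3 / PARALLEL_SCANNERS
total_years = total_days / 365
print(f"~{total_years:,.0f} years")  # roughly a millennium even with heavy parallelism
```

Even granting a hundred scanners running flawlessly in parallel, the estimate lands around a thousand years, which is why Dr. Koene later calls for a three-to-four order-of-magnitude throughput improvement.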
Jennifer Brooks Centuries for data acquisition alone. That's not counting reconstruction—the computational problem of taking those petabytes (and, at whole-brain scale, exabytes) of image data and tracing out individual neurons and synapses. How tractable is that?
Dr. Randal Koene Reconstruction is increasingly automated through machine learning, but it's not solved. Current algorithms make mistakes—missed connections, merged neurons, false positives. For small volumes, human experts can correct these, but that doesn't scale to whole brains. We need reconstruction algorithms that approach human-level accuracy without human intervention. That's an active research area, and progress is real, but we're not there yet for error-free whole-brain reconstruction.
Adam Ramirez Even if we solve the throughput and reconstruction problems, there's a fundamental issue. Electron microscopy requires fixed, dead tissue. You're imaging a snapshot of the brain in a particular state. How do you know that snapshot captures the relevant computational properties?
Dr. Randal Koene That's the preservation question. The assumption is that synaptic connectivity and morphology encode the computationally relevant information—memories, dispositions, learned associations. If you preserve that structure, you preserve function. But you're right that this is an assumption. We don't know with certainty that static structure is sufficient, or whether dynamic properties—ion channel distributions, metabolic state, ongoing activity patterns—are necessary for functional equivalence.
Jennifer Brooks And preservation introduces its own distortions. Aldehyde fixation cross-links proteins, which stabilizes structure but may alter it. Vitrification—flash freezing—is less distorting but creates ice crystal damage unless you use cryoprotectants, which have their own artifacts. How do we validate that preserved tissue accurately represents living function?
Dr. Randal Koene Validation is difficult because you can't compare the same brain before and after preservation. What we can do is preserve tissue, scan it, build models based on that structure, then compare the model's behavior to what we know about the original system's function. This has been done at small scales—modeling circuits from preserved tissue and checking whether they reproduce known behaviors. But scaling that validation to whole brains is circular. You'd need to already know what the whole-brain behavior should be to validate the preservation.
Adam Ramirez Let's say we solve scanning and have a perfect structural map. Now we need to simulate it. That means assigning computational properties to every neuron and synapse, then running the simulation. What are the computational requirements?
Dr. Randal Koene It depends on the level of biophysical detail you include. If you model each neuron as a multicompartment system with realistic ion channel dynamics, you're solving differential equations for hundreds to thousands of compartments per neuron, across tens of billions of neurons. Current estimates suggest you'd need exascale computing just to run a human brain simulation in real time at that level of detail. And that's assuming your model is correct and efficiently implemented.
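The exascale claim can be sketched with the same kind of rough arithmetic. Every parameter here is an illustrative assumption chosen for scale, not a validated figure:

```python
# Rough compute estimate for a biophysically detailed real-time brain simulation.
# All parameters are illustrative assumptions, not validated model sizes.
NEURONS = 8.6e10                  # ~86 billion neurons in a human brain
COMPARTMENTS_PER_NEURON = 1e2     # assumed compartments in a detailed neuron model
FLOPS_PER_COMPARTMENT_STEP = 1e2  # assumed cost of one ion-channel ODE update
TIMESTEPS_PER_SECOND = 1e4        # 0.1 ms integration step

flops_realtime = (NEURONS * COMPARTMENTS_PER_NEURON
                  * FLOPS_PER_COMPARTMENT_STEP * TIMESTEPS_PER_SECOND)
print(f"{flops_realtime:.1e} FLOP/s")  # prints 8.6e+18 FLOP/s
```

Under these assumptions the requirement lands just above 10^18 floating-point operations per second, i.e. the exascale regime; richer compartment models or finer timesteps push it higher still.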
Jennifer Brooks Exascale computing exists now, at least in supercomputer facilities. So is this just an engineering problem—throwing enough hardware at it?
Dr. Randal Koene Not quite. First, you'd need dedicated exascale resources for a single brain simulation, which isn't economically viable for most applications. Second, the models themselves are uncertain. We don't have validated models of every neuron type's dynamics, every synapse's learning rules, every neuromodulator's effects. You can make simplifying assumptions—point neurons, rate-based dynamics, simplified plasticity—but then you're not faithfully simulating the original brain. You're creating an approximation that might not preserve the properties you care about.
Adam Ramirez This is a model validation problem. In engineering, you validate models by comparing predictions to measurements. But for whole brain emulation, what's the ground truth? You can't run controlled experiments on the original brain once it's preserved.
Dr. Randal Koene Exactly. The validation approach has to be incremental. You validate models on small circuits where you can compare to experimental data, then assume those models scale. You look for emergent properties in the simulation that match known behaviors of the original organism. But ultimate validation—confirming that the simulation is experientially equivalent to the original—may not be empirically accessible if we're talking about subjective experience.
Jennifer Brooks Let's talk about what you're leaving out. Structural connectivity is one thing, but brains are biochemically active. There are neuromodulators, metabolic gradients, glial interactions, immune signaling. How much of that do you need to include for functional equivalence?
Dr. Randal Koene That's hotly debated. The minimal position is that synaptic connectivity plus neuron-type-specific dynamics are sufficient—neuromodulation and other factors can be approximated as parameter changes in those dynamics. The maximal position is that you need to simulate the full biochemical state, including glia, vasculature, metabolic cycles. My view is that we should aim for a middle path—include neuromodulatory systems explicitly because they clearly affect learning and behavior, but abstract away metabolic details that may not directly impact computation.
Adam Ramirez But that middle path requires knowing what's computationally relevant and what's not. Do we have that knowledge?
Dr. Randal Koene Not comprehensively. We know some things matter—dopamine for reward, acetylcholine for attention, oscillatory rhythms for coordination. But there are likely factors we don't even know to measure yet. The conservative approach is to include everything we can observe, but that balloons computational cost. The aggressive approach is to include only what's proven necessary, but that risks missing critical components.
Jennifer Brooks There's also the plasticity problem. Brains change continuously through learning and homeostatic regulation. If you scan at one moment, you've captured a static snapshot. But the original brain would keep learning and adapting. Does the simulation need to replicate that ongoing plasticity?
Dr. Randal Koene If the goal is to preserve identity and continuity of experience, yes. A static simulation that can't learn or adapt wouldn't be functionally equivalent to a living brain. So you need plasticity mechanisms—synaptic learning rules, homeostatic processes, potentially even neurogenesis in regions where that occurs in adults. But now you're not just simulating structure, you're simulating the dynamic processes that modify structure. That requires knowing the learning rules, which are cell-type and synapse-type specific and not fully characterized.
Adam Ramirez This is starting to sound like we need to solve neuroscience before we can do whole brain emulation. If we don't understand the learning rules, the neuromodulatory effects, the role of non-neural cells, how can we claim to simulate a brain?
Dr. Randal Koene That's the crux of the challenge. Whole brain emulation isn't just an engineering problem; it's contingent on scientific understanding. We can make progress on the tools—better scanning, faster reconstruction, efficient simulation platforms—but ultimate success requires knowing what to simulate. That means continued basic neuroscience research in parallel with the technology development.
Jennifer Brooks Let's talk about the intermediate steps. Before we attempt human brains, what organisms or systems might we target for proof of concept?
Dr. Randal Koene C. elegans is often mentioned because its connectome is fully mapped—302 neurons, about 7,000 synapses. But even there, we don't have complete functional emulation. Simulations exist, but they don't fully replicate the worm's behavior, suggesting we're missing critical details—possibly gap junctions, neuromodulation, or biomechanical coupling between nervous system and body. Moving up, Drosophila larva is a target—roughly 10,000 neurons, tractable for current scanning. If we can't achieve convincing emulation at that scale, human brains are far out of reach.
Adam Ramirez So even the simplest nervous systems aren't fully emulated yet. What's the timeline looking like for mammalian brains, let alone human?
Dr. Randal Koene Timelines are speculative, but I'd say scanning and simulating a complete mouse brain at synaptic resolution is a 10 to 20 year challenge if scanning throughput improves by orders of magnitude and reconstruction becomes fully automated. Human brains are larger by three orders of magnitude in neuron count, so even with technological acceleration, we're looking at mid-century or beyond for practical whole brain emulation of humans.
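The "three orders of magnitude" scale-up from mouse to human checks out against commonly cited neuron counts (both figures are approximate):

```python
import math

# Mouse-to-human scale factor in neuron count (approximate published figures).
MOUSE_NEURONS = 7.1e7    # ~71 million neurons
HUMAN_NEURONS = 8.6e10   # ~86 billion neurons

orders_of_magnitude = math.log10(HUMAN_NEURONS / MOUSE_NEURONS)
print(f"~{orders_of_magnitude:.1f} orders of magnitude")  # prints ~3.1 orders of magnitude
```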
Jennifer Brooks That's assuming the problems are purely technical—faster microscopes, better algorithms. But if there are fundamental unknowns in neuroscience, the timeline could be indefinite.
Dr. Randal Koene Correct. If it turns out that quantum effects in microtubules matter, or that electromagnetic field effects are computationally relevant, or that consciousness depends on factors we haven't identified, then the project becomes much harder or even impossible with current approaches. I don't think those scenarios are likely, but they're not ruled out.
Adam Ramirez Let's talk about the use cases. Why would someone actually want whole brain emulation? What does it offer over other approaches to intelligence or longevity?
Dr. Randal Koene For individuals, it's continuity of self beyond biological death. If you can preserve and emulate your brain, you potentially achieve indefinite lifespan in a digital substrate. For society, it offers preservation of expertise and knowledge—imagine being able to consult Einstein or Feynman on new physics problems. For research, it's the ultimate model organism—you can run controlled experiments on a brain that would be impossible or unethical on living subjects.
Jennifer Brooks But there's an identity problem. Even if the simulation behaves identically to the original, is it the same person or a copy? If I'm scanned and emulated, does my continuity of experience transfer to the simulation, or does it just create a new entity that believes it's me?
Dr. Randal Koene That's a philosophical question that technology alone can't answer. From a functionalist perspective, if the simulation has all the same memories, dispositions, and behavioral patterns, it is you in every way that matters. From other perspectives—biological continuity, substrate dependence—it's a copy, and the original you dies with the biological brain. Personally, I lean functionalist, but I acknowledge this isn't empirically decidable.
Adam Ramirez There's also the degradation question. We talked about how simulations need plasticity to keep learning. But do they also need the biological constraints that limit lifespan—metabolic decline, DNA damage, protein aggregation? If you remove those, do you still have a brain that functions like the original?
Dr. Randal Koene That's a feature, not a bug, from the emulation perspective. You'd want to remove pathological aging while preserving adaptive plasticity. But you're right that this could change long-term dynamics. A brain that never ages might develop differently over decades or centuries than biological brains do over a natural lifespan. We don't know what the consequences would be.
Jennifer Brooks We're running short on time. Final question—what's the single biggest technical barrier right now that's holding back progress?
Dr. Randal Koene Scanning throughput. We can't validate our simulation approaches or preservation methods without data from complete brains. Until we can scan entire mammalian brains at synaptic resolution in reasonable time frames—months instead of centuries—everything else is theoretical. Improving scanning speed by three to four orders of magnitude is the critical path.
Adam Ramirez Dr. Koene, thank you for this clear-eyed assessment of where the technology stands and what challenges remain.
Dr. Randal Koene Thank you both. These are the right questions to be asking.
Jennifer Brooks That's our program for tonight. Until tomorrow, stay critical.
Adam Ramirez And keep building. Good night.
Sponsor Message

NeuralVault Cryopreservation Services

Planning for substrate transfer but concerned about tissue degradation during the wait for scanning technology? NeuralVault offers medical-grade cryopreservation optimized for future whole brain emulation. Our proprietary vitrification protocol achieves ice-free preservation with minimal ultrastructural disruption. Perfusion occurs within minutes of legal death. Storage in liquid nitrogen dewars with redundant monitoring and hundred-year facility guarantees. We preserve architecture, not viability—your connectome matters, not your metabolism. Package includes annual structural verification scans and priority access to scanning services when available. NeuralVault Cryopreservation—because information death is the only death that matters. Freeze forward.