SR-015 | Constrained Trajectories: Low-Dimensional Dynamics as Computational Substrate
Core Insight: Low-dimensional population dynamics reflect functional computational structure rather than a limitation: the coordinated temporal evolution of activity implements computation through dynamical-systems motifs such as rotations and attractor trajectories that generate appropriate behavioral outputs.
Unresolved Questions:
- Can causal perturbations selectively targeting specific dynamical dimensions demonstrate their computational necessity beyond correlational evidence?
- Does the correspondence between biological dynamics and trained recurrent networks reveal universal computational principles or reflect similar optimization pressures?
- Are low-dimensional dynamics during tasks fundamental constraints or learned efficient solutions that could be reconfigured for novel task demands?
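The core observation is easy to demonstrate: simulate a population whose activity is driven by a two-dimensional rotational latent state, then recover the dimensionality with PCA. A minimal NumPy sketch; the population size, latent structure, and noise level are all illustrative assumptions, not a model of any specific dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons = 500, 100

# Latent rotational dynamics: a 2-D oscillation, a common motif in population models.
t = np.linspace(0, 4 * np.pi, T)
latents = np.stack([np.sin(t), np.cos(t)], axis=1)            # (T, 2)

# Each neuron reads out a random mixture of the latents plus private noise.
readout = rng.normal(size=(2, n_neurons))
activity = latents @ readout + 0.1 * rng.normal(size=(T, n_neurons))

# PCA via SVD of the mean-centered activity matrix.
centered = activity - activity.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
variance_explained = s**2 / np.sum(s**2)

# Although 100 neurons were recorded, two components capture nearly all variance.
print(round(float(variance_explained[:2].sum()), 3))
```

The open question in the entry is whether such low dimensionality is a constraint of the circuit or, as in this toy, a signature of the task structure driving it.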
SR-014 | Sustained or Silent: Working Memory's Implementation Debate
Core Insight: Working memory shows robust persistent activity during delays, but whether this activity is necessary for storage or whether synaptic mechanisms provide a sufficient substrate remains unresolved, with evidence suggesting both may contribute under different conditions.
Unresolved Questions:
- Can working memory survive complete activity suppression during delays if synaptic state is preserved and subsequently reactivated?
- Do different brain regions implement working memory through different mechanisms, with prefrontal cortex favoring persistent activity and sensory cortices synaptic storage?
- Does the relative contribution of activity versus synaptic mechanisms vary with memory load, task complexity, or cognitive demand?
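The first unresolved question can be made concrete with a toy "activity-silent" model: a slowly decaying synaptic facilitation trace stores the cued item through a delay with zero spiking, and a nonspecific readout pulse recovers it. The two-item setup, time constant, and linear readout are all illustrative assumptions, not a biophysical model:

```python
import numpy as np

dt, tau_syn = 0.01, 3.0   # assumed seconds-scale facilitation time constant
steps = int(2.0 / dt)     # a 2-second silent delay

# Encode: a brief cue drives item 0 and leaves a stimulus-specific synaptic trace.
rate = np.array([5.0, 0.0])          # Hz, two competing memoranda
facilitation = np.array([0.5, 0.0])  # trace left by the cue

# Delay: activity is fully suppressed while the trace decays slowly.
rate[:] = 0.0
for _ in range(steps):
    facilitation -= dt * facilitation / tau_syn

# Readout: a nonspecific "ping" drives both populations; the potentiated one wins.
ping = 1.0
response = ping * (1.0 + facilitation)
decoded = int(np.argmax(response))
print(decoded)  # item 0 is recovered despite zero delay-period activity
```

This is exactly the scenario probed by the first question: storage survives activity suppression only because synaptic state is preserved and reactivated.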
SR-013 | Hexagonal Computation: Grid Cells as Metric or Mirage
Core Insight: Grid cells exhibit striking hexagonal firing patterns consistent with path integration through metric coordinate representation, but whether this geometry is computationally necessary or merely one architectural solution remains unresolved pending causal manipulations.
Unresolved Questions:
- Can disrupting hexagonal regularity while preserving entorhinal function specifically impair path integration performance?
- Do non-spatial grid-like representations use identical neural mechanisms to spatial grids or constitute distinct computational systems?
- Does the hexagonal pattern offer measurable computational advantages over alternative geometric codes for continuous variable representation?
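The hexagonal geometry at issue has a standard idealization: a grid field as the rectified sum of three plane waves oriented 60 degrees apart. A sketch under that idealization (grid spacing and arena size are arbitrary; this is geometry, not a biophysical model):

```python
import numpy as np

# Hexagonal firing pattern as the sum of three plane waves 60 degrees apart,
# the standard geometric idealization of a grid field.
spacing = 0.5                                   # grid period in meters (illustrative)
k = 4 * np.pi / (np.sqrt(3) * spacing)          # wave number giving that spacing
angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (3, 2) wave directions

xs = np.linspace(0, 2, 200)
X, Y = np.meshgrid(xs, xs)
pos = np.stack([X, Y], axis=-1)                 # (200, 200, 2) positions in the arena

# Sum the three cosines and rectify: firing bumps appear on a hexagonal lattice.
phase = k * (pos @ dirs.T)                      # (200, 200, 3) per-wave phases
rate = np.maximum(0.0, np.cos(phase).sum(axis=-1))
print(float(rate.max()))                        # peak where all three waves align
```

The causal question is whether disrupting this regularity, rather than the underlying cells, selectively impairs path integration.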
SR-012 | Efficiency Through Silence: Sparse Coding as Optimization or Artifact
Core Insight: Sparse coding provides a mathematically elegant framework linking natural image statistics to V1 receptive fields, but whether cortex explicitly optimizes for sparsity or whether sparse-like activity emerges from other constraints remains unresolved.
Unresolved Questions:
- Can we experimentally manipulate population sparsity and measure functional consequences for perception and downstream processing?
- Does sparse coding extend beyond early sensory areas as organizing principle or require combination with other computations?
- Is sparsity an explicit optimization objective or emergent property of constraints like wiring costs and noise?
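The explicit-optimization reading of sparse coding corresponds to minimizing reconstruction error plus an L1 penalty. A minimal sketch of that inference step using ISTA (iterative soft thresholding) with a random, unlearned dictionary; all sizes and the penalty weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random dictionary with unit-norm columns, a stand-in for learned V1-like features.
n_features, n_atoms = 64, 128
D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Synthesize a signal from 5 active atoms, so a sparse explanation exists.
true_code = np.zeros(n_atoms)
true_code[rng.choice(n_atoms, 5, replace=False)] = 3 * rng.normal(size=5)
x = D @ true_code

# ISTA: gradient step on reconstruction error, then soft-threshold toward sparsity.
lam, step = 0.05, 1.0 / np.linalg.norm(D, 2) ** 2
a = np.zeros(n_atoms)
for _ in range(500):
    a = a + step * (D.T @ (x - D @ a))
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)

active = int(np.sum(np.abs(a) > 1e-3))
print(active, round(float(np.linalg.norm(x - D @ a)), 4))
```

The emergent-property alternative in the entry would produce similarly sparse activity without any circuit implementing this objective, which is what makes the two accounts hard to separate experimentally.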
SR-011 | Rhythms of Thought: Neural Oscillations as Computational Framework
Core Insight: Neural oscillations provide temporal scaffolding for coordinating distributed processing through phase-dependent excitability windows, multiplexing information streams, and routing signals between brain regions—functional mechanisms rather than mere correlates of neural activity.
Unresolved Questions:
- How are phase relationships between distant brain regions established and dynamically adjusted during complex cognition?
- Can we develop therapeutic interventions targeting specific oscillations with sufficient precision to modulate cognition without disrupting normal processing?
- Are oscillatory dynamics necessary for intelligent computation or specific solutions to biological constraints like transmission delays?
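The "phase-dependent excitability windows" claim can be illustrated with a toy gating computation: identical input spike trains are transmitted very differently depending on their alignment to an ongoing oscillation. A sketch only; the 40 Hz frequency, cosine gain, and threshold for "spikes" are illustrative assumptions, not a circuit model:

```python
import numpy as np

# Phase-dependent gain: an ongoing oscillation opens and closes excitability windows.
dt = 0.001
t = np.arange(0, 1.0, dt)
freq = 40.0                                      # gamma-band oscillation, Hz
gate = 0.5 * (1 + np.cos(2 * np.pi * freq * t))  # excitability, peaks once per cycle

# Two statistically identical spike trains: one aligned to the excitable phase,
# one arriving half a cycle later, at the trough.
aligned = np.cos(2 * np.pi * freq * t) > 0.95
anti = np.cos(2 * np.pi * freq * t + np.pi) > 0.95

transmitted_aligned = float(np.sum(gate[aligned]))
transmitted_anti = float(np.sum(gate[anti]))
print(round(transmitted_aligned / transmitted_anti, 1))  # aligned input dominates
```

The routing claim in the entry is this effect writ large: adjusting phase relationships between regions selects which inputs are transmitted.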
SR-010 | Branches of Thought: Dendritic Computation in Cortical Neurons
Core Insight: Dendrites transform neurons from point integrators into multi-compartment computational devices, with branches performing local nonlinear operations that enable feature multiplexing, coincidence detection, and efficient learning through spatially specific plasticity.
Unresolved Questions:
- How extensively are dendritic spikes deployed during natural behavior across different brain regions and tasks?
- Can simplified dendritic neuron models scale to brain-sized networks while retaining computational advantages over point neurons?
- Does dendritic computation offer specific algorithmic capabilities that cannot be efficiently replicated by deeper point neuron networks?
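The multi-compartment claim is often formalized as a two-layer abstraction: each branch applies a local sigmoidal nonlinearity before the soma sums branch outputs. With hand-set (illustrative, not fitted) weights, such a unit computes XOR, a mapping no single linear-threshold point neuron can implement:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two-layer abstraction of a dendritic neuron: local branch nonlinearities
# followed by a somatic sum and threshold.
def dendritic_neuron(x, W, b, soma_w, theta):
    branch_out = sigmoid(W @ x + b)        # each row of W is one branch's synapses
    return int(soma_w @ branch_out > theta)

# One branch detects (x1 AND NOT x2), the other (x2 AND NOT x1).
W = np.array([[10.0, -10.0], [-10.0, 10.0]])
b = np.array([-5.0, -5.0])
soma_w = np.array([1.0, 1.0])
outputs = {x: dendritic_neuron(np.array(x, float), W, b, soma_w, 0.5)
           for x in [(0, 0), (0, 1), (1, 0), (1, 1)]}
print(outputs)  # fires only for (0, 1) and (1, 0): XOR
```

The third question above asks whether this kind of per-neuron capability buys anything that simply stacking more point-neuron layers cannot.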
SR-009 | The Teaching Signal: Dopamine's Role in Value Learning
Core Insight: Dopamine neurons encode reward prediction errors matching temporal difference learning algorithms, serving as teaching signals that gate synaptic plasticity, though biological dopamine signaling encompasses additional functions beyond pure scalar reward prediction.
Unresolved Questions:
- What circuit mechanisms compute the subtraction between predicted and actual reward to generate dopamine prediction errors?
- How do multiple heterogeneous dopamine signals coordinate to support learning across different behavioral contexts and timescales?
- Does dopamine encode a unified reward currency or activate distinct circuits for qualitatively different reward types?
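The temporal-difference account is concrete enough to run: in tabular TD(0), the prediction error at reward time is large before learning and vanishes once the reward is predicted, mirroring the classic dopamine recordings. A minimal sketch with an illustrative 10-step cue-to-reward episode:

```python
import numpy as np

# Tabular TD(0) on a cue -> delay -> reward episode. delta is the reward
# prediction error the entry identifies with phasic dopamine.
n_states, reward_state = 10, 9
alpha, gamma = 0.1, 1.0
V = np.zeros(n_states + 1)          # value per time step; V[n_states] is terminal

def run_episode(V):
    deltas = np.zeros(n_states)
    for s in range(n_states):
        r = 1.0 if s == reward_state else 0.0
        delta = r + gamma * V[s + 1] - V[s]   # reward prediction error
        V[s] += alpha * delta
        deltas[s] = delta
    return deltas

first = run_episode(V)
for _ in range(500):
    last = run_episode(V)

# Early: full error at reward time. Late: no error at the (now predicted) reward,
# while value has propagated back to the cue at state 0.
print(round(first[reward_state], 2), round(last[reward_state], 2), round(V[0], 2))
```

The circuit-level question above is how the subtraction inside `delta` is physically computed upstream of dopamine neurons.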
SR-008 | Regulated Change: How Neurons Maintain Stability Through Scaling
Core Insight: Synaptic scaling maintains neural stability through multiplicative adjustment of all excitatory synapses in response to firing rate deviations, preserving information in relative synaptic weights while preventing runaway dynamics from Hebbian plasticity.
Unresolved Questions:
- Does synaptic scaling operate continuously during natural learning or primarily in response to extreme perturbations?
- Can synapse-specific homeostatic mechanisms coexist with global scaling without creating conflicts or instabilities?
- Would implementing biological homeostatic rules improve continual learning in artificial recurrent networks?
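The multiplicative mechanism described above can be sketched in a few lines: all weights are nudged by a single global factor until the firing rate returns to a set point, leaving relative weights untouched. A linear rate model with illustrative parameters, not a biophysical implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Multiplicative synaptic scaling toward a homeostatic firing-rate set point.
target_rate, lr = 5.0, 0.02
inputs = rng.uniform(1.0, 2.0, size=20)   # fixed presynaptic rates (illustrative)
w = rng.uniform(0.1, 1.0, size=20)
ratios_before = w / w.sum()

for _ in range(1000):
    rate = w @ inputs                     # simple linear rate model
    # One global factor scales every synapse; ratios are preserved exactly.
    w *= 1.0 + lr * (target_rate - rate) / target_rate

print(round(float(w @ inputs), 3))        # rate restored to the set point
ratios_after = w / w.sum()
```

Because the update is a common multiplier, the pattern stored in relative weights by Hebbian learning survives while the overall gain is renormalized, which is the stability argument in the entry.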
SR-007 | Selective Processing: Convergent Function, Divergent Mechanism
Core Insight: Biological and artificial attention both implement selective information processing but through fundamentally different mechanisms—neural feedback and gain modulation versus learned query-key-value transformations—shaped by different computational demands and constraints.
Unresolved Questions:
- Do transformers and biological attention exhibit similar behavioral capacity limits when tested systematically?
- Can learned sparse attention patterns in artificial systems match biological flexibility and context-dependence?
- What computational principles are truly shared across selective processing mechanisms versus terminological overlap?
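The artificial side of the contrast is fully specified: scaled dot-product attention over learned query-key-value projections. A self-contained NumPy sketch with random (untrained) projection matrices standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Scaled dot-product attention: the learned query-key-value selection mechanism
# contrasted with biological feedback and gain modulation.
def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # query-key similarity
    weights = softmax(scores, axis=-1)        # normalized selection over keys
    return weights @ V, weights

seq_len, d = 5, 8
x = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # random stand-ins for learned weights
out, weights = attention(x @ Wq, x @ Wk, x @ Wv)

# Each query distributes exactly one unit of "attention" across all keys.
print(bool(np.allclose(weights.sum(axis=-1), 1.0)), out.shape)
```

Note the structural difference from biological gain modulation: selection here is a normalized competition over content similarity, with no recurrent feedback loop.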
SR-006 | Wiring Diagrams and the Limits of Anatomical Reductionism
Core Insight: Connectomes reveal anatomical possibility space and constrain circuit models, but cannot uniquely determine function without synaptic strengths, neuron properties, and activity dynamics—the wiring diagram informs but does not prescribe computation.
Unresolved Questions:
- Can statistical sampling of connectivity provide sufficient circuit understanding without complete reconstruction?
- How much inter-individual connectome variability reflects meaningful experience encoding versus developmental noise?
- Will in vivo imaging ever achieve synapse-level resolution for tracking plasticity during learning?
SR-005 | Spikes, Sparse Coding, and Silicon: The Case for Brain-Inspired Hardware
Core Insight: Neuromorphic hardware achieves efficiency for sparse, event-driven sensory processing by merging memory and computation, but remains application-specific rather than general-purpose, requiring careful matching between problem structure and architectural features.
Unresolved Questions:
- Can on-chip learning scale to complex tasks without offline training on conventional hardware?
- How much analog circuit variability can learning algorithms tolerate before calibration becomes essential?
- Will event-based sensors become widespread enough to justify neuromorphic processing infrastructure?
SR-004 | Reading Intention, Writing Memory: The Engineering Limits of BCIs
Core Insight: Motor BCIs succeed because motor control is well-understood and has clear behavioral outputs for decoder training, while memory BCIs face qualitative barriers in recording scale, mechanistic understanding, and stimulation precision.
Unresolved Questions:
- Can flexible polymer electrodes achieve both chronic stability and high neuron counts for dexterous control?
- What is the minimal population size and spatial coverage needed to decode episodic memories?
- Does electrical stimulation create genuine synaptic plasticity for memory encoding or just transient activation?
SR-003 | Timing is Everything: The Promise and Limits of STDP
Core Insight: STDP provides a local, temporally precise learning rule with a solid biological basis, but it requires additional mechanisms such as eligibility traces and neuromodulatory gating to solve credit assignment, and it has not yet matched gradient-based learning on complex tasks.
Unresolved Questions:
- Can STDP-based networks achieve hierarchical representation learning comparable to self-supervised deep learning methods?
- How do eligibility traces maintain synaptic tags across seconds-long delays in realistic neural noise?
- Is STDP parameter diversity across circuits functional specialization or experimental variability?
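The rule itself is compact: the canonical pair-based STDP window potentiates when the presynaptic spike precedes the postsynaptic one and depresses otherwise, with exponential dependence on the timing difference. A sketch with illustrative amplitudes and time constant:

```python
import numpy as np

# Canonical pair-based STDP window. Parameters are illustrative; measured values
# vary across circuits (one of the open questions above).
A_plus, A_minus, tau = 0.01, 0.012, 20.0   # amplitudes and time constant (ms)

def stdp(delta_t):
    """Weight change for delta_t = t_post - t_pre in ms."""
    return np.where(delta_t > 0,
                    A_plus * np.exp(-delta_t / tau),    # pre before post: potentiate
                    -A_minus * np.exp(delta_t / tau))   # post before pre: depress

dts = np.array([-40.0, -10.0, 10.0, 40.0])
dw = stdp(dts)
print(np.round(dw, 5))   # depression for negative lags, potentiation for positive
```

Note that `delta_t` spans tens of milliseconds, while behavioral outcomes arrive seconds later, which is why the entry says eligibility traces and neuromodulatory gating are needed to bridge to credit assignment.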
SR-002 | Prediction or Prescription: The Bayesian Brain Under Scrutiny
Core Insight: Predictive coding provides an elegant computational framework for perception and action, but it must make increasingly specific contact with neural implementation to remain falsifiable science rather than an unfalsifiable meta-theory that accommodates any observation.
Unresolved Questions:
- Can predictive coding networks match standard deep learning performance with purely local learning rules?
- How do we empirically distinguish prediction signals from error signals in identified cortical cell types?
- Does active inference genuinely explain intentional action or redefine intention as precise prediction?
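The framework's central loop is simple to write down: a latent estimate generates a prediction of the input, error units carry the mismatch, and the estimate is updated by locally fed-back error. A minimal linear sketch in the spirit of Rao-and-Ballard-style models (sizes, learning rate, and the linear generative model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear predictive-coding loop: latent v predicts the input as W v; error units
# carry x - W v; v is updated using only that locally available error signal.
n_input, n_latent = 16, 4
W = rng.normal(size=(n_input, n_latent)) / np.sqrt(n_latent)
v_true = rng.normal(size=n_latent)
x = W @ v_true                      # input generated so an exact explanation exists

v = np.zeros(n_latent)
lr = 0.1
errors = []
for _ in range(200):
    eps = x - W @ v                 # prediction-error units
    v += lr * (W.T @ eps)           # error fed back through the same weights
    errors.append(float(np.sum(eps ** 2)))

print(round(errors[0], 3), errors[-1] < 1e-6)   # error driven toward zero
```

The falsifiability worry in the entry is that `eps` and `W @ v` must map onto identified cell types and laminar pathways; as pure mathematics, the loop fits almost any data.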
SR-001 | Degenerate Solutions: Robustness Through Parameter Diversity
Core Insight: Biological neural networks achieve robustness through degenerate parameter solutions and homeostatic regulation rather than convergence on single optima, enabling graceful degradation and continual learning that artificial systems currently lack.
Unresolved Questions:
- Can artificial systems achieve continual learning without biological homeostatic mechanisms?
- Does neuromorphic hardware require full biological complexity or just selected features?
- How do biological networks explore degenerate solutions rather than converging on single optima?
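The degeneracy claim has a simple geometric reading: when two parameters trade off against each other, the set of parameter combinations producing a target output is a manifold, not a point. A deliberately minimal sketch in the spirit of conductance-sampling studies (the one-line "model" and all ranges are illustrative, not a real neuron model):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy degeneracy: two "conductances" trade off, so many widely separated
# parameter pairs yield the same target output.
def output_rate(g_A, g_B):
    return g_A - 0.5 * g_B   # stand-in for two opposing currents

target, tol = 2.0, 0.05
samples = rng.uniform(0.0, 10.0, size=(100_000, 2))
rates = output_rate(samples[:, 0], samples[:, 1])
solutions = samples[np.abs(rates - target) < tol]

# The solution set spans much of parameter space: a manifold, not a single optimum.
spread = solutions.max(axis=0) - solutions.min(axis=0)
print(len(solutions), np.round(spread, 1))
```

The last question above asks how biological networks move along such manifolds, for example via homeostatic regulation, rather than collapsing onto one point the way gradient-trained artificial networks typically do.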