SR-015 | Domain-Specific Languages: Trading Generality for Performance
Core Insight: Domain-specific languages achieve dramatic performance improvements by restricting expressiveness to enable compiler optimizations impractical for general-purpose languages, but practical value requires domains with sufficient regularity, commercial importance, and willingness to accept constraints that fundamentally limit computational generality.
Unresolved Questions:
- Can autoschedulers achieve expert-level optimization across diverse architectures without manual tuning, or do hardware complexities require human insight indefinitely?
- Will composable DSL frameworks enable modular performance-critical systems, or do interface overheads and compiler boundaries prevent cross-domain optimization?
- At what domain scope does DSL development cost exceed productivity benefits, defining the economic viability threshold for specialized language investment?
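The generality-for-performance trade can be sketched in miniature: restrict programs to elementwise operations and the "compiler" may legally fuse any pipeline into a single loop, eliminating intermediate arrays. A general-purpose language cannot apply this without whole-program alias and effect analysis. The names below (`Expr`, `kernel`) are illustrative, not any real DSL's API.

```python
# Toy elementwise DSL: the domain restriction (pointwise ops only) is what
# makes unconditional loop fusion a legal optimization.
class Expr:
    """Deferred elementwise expression pipeline over one input array."""
    def __init__(self, fns=None):
        self.fns = fns or []          # scalar functions, applied in order

    def map(self, fn):
        return Expr(self.fns + [fn])  # build the pipeline, evaluate nothing

    def compile(self):
        fns = self.fns
        def kernel(xs):
            out = []
            for x in xs:              # ONE fused loop, no temporaries
                for f in fns:
                    x = f(x)
                out.append(x)
            return out
        return kernel

pipeline = Expr().map(lambda x: x + 1).map(lambda x: x * 2).map(lambda x: x - 3)
kernel = pipeline.compile()
result = kernel([0, 1, 2])            # ((x + 1) * 2) - 3 per element
```

An eager evaluation of the same chain would materialize two intermediate arrays; fusion removes that data movement, which is exactly where real DSLs such as Halide earn their speedups.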
SR-014 | Asynchronous Circuit Design: Beyond the Clock
Core Insight: Asynchronous circuits offer theoretical advantages in power efficiency and adaptability by eliminating global clock overhead and exploiting average-case timing, but face insurmountable ecosystem barriers from immature tooling, gaps in designer training, and synchronous interface requirements, relegating them to specialized applications despite technical merit.
Unresolved Questions:
- Can EDA tool vendors develop asynchronous design flows economically viable for niche markets, or does mainstream adoption require first-mover commitment from major chip companies?
- Will emerging technologies like 3D integration or high-variability devices create timing challenges that favor asynchronous approaches over increasingly complex synchronous timing closure?
- At what power efficiency level do asynchronous handshaking overheads negate clock distribution savings, defining domains where asynchronous design provides net benefits?
SR-013 | Type Systems and Program Correctness: Compile-Time Safety Guarantees
Core Insight: Type systems transform program correctness from runtime discovery to compile-time proof, eliminating entire error classes through mechanical verification, but face fundamental trade-offs between expressiveness and decidability, requiring careful balance between safety guarantees, annotation burden, and the inevitable conservatism of any tractable static analysis.
Unresolved Questions:
- Can type inference advance to eliminate most manual annotations while preserving decidability and providing useful error messages when inference fails?
- Will dependent types achieve mainstream adoption, or do annotation costs and proof obligations make them practical only for critical system components?
- At what point does type system complexity exceed the cognitive capacity of typical programmers, forcing specialized verification engineers rather than integrated development?
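One concrete instance of moving an error class to compile time: an `Optional` return type makes "possibly absent" a distinct type, so a checker such as mypy rejects any path that uses the value without first narrowing it. A minimal sketch:

```python
# Absence encoded in the type rather than surfacing as a runtime null error.
from typing import Optional

def find_index(xs: list[int], target: int) -> Optional[int]:
    for i, x in enumerate(xs):
        if x == target:
            return i
    return None                      # absence is part of the signature

def described(xs: list[int], target: int) -> str:
    idx = find_index(xs, target)
    if idx is None:                  # a type checker forces this narrowing
        return "absent"
    return f"found at {idx}"         # here idx is provably an int

print(described([3, 1, 4], 4))
print(described([3, 1, 4], 9))
```

Deleting the `if idx is None` branch leaves the program runnable in Python but rejected by mypy, illustrating the conservatism the insight describes: the checker cannot know a lookup will succeed, so it demands the absent case be handled everywhere.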
SR-012 | Network-on-Chip Architecture: Scaling Communication for Manycore Processors
Core Insight: Networks-on-chip transform inter-component communication from shared buses to packet-switched fabrics, enabling manycore scalability but introducing distributed systems challenges—deadlock avoidance, resource allocation, performance predictability—directly into silicon, where correctness verification and power efficiency constraints dominate design trade-offs.
Unresolved Questions:
- Can optical interconnects achieve practical adoption in NoCs despite integration complexity and wavelength management overhead?
- Will heterogeneous systems require multiple specialized networks, or can unified fabrics with sophisticated QoS mechanisms provide sufficient differentiation economically?
- At what scale does NoC verification become intractable, forcing reliance on design patterns and compositional reasoning over exhaustive state space exploration?
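Deadlock avoidance, the first of the distributed-systems challenges named above, has a classic silicon answer: dimension-ordered (XY) routing on a 2D mesh. Packets exhaust all X hops before any Y hop, which forbids the cyclic channel dependencies deadlock requires. A toy model:

```python
# XY routing on a 2D mesh: route fully in X, then fully in Y, never back.
def xy_route(src, dst):
    """Return the hop sequence from src to dst (toy 2D-mesh model)."""
    x, y = src
    dx, dy = dst
    hops = []
    while x != dx:                   # X dimension first...
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                   # ...then Y; X is never revisited
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

path = xy_route((0, 0), (2, 1))      # all X moves precede all Y moves
```

The price is the trade-off in the insight: the acyclic channel-dependence graph buys provable deadlock freedom but surrenders adaptivity, so congested links cannot be routed around.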
SR-011 | Hardware Security: Side Channels and Microarchitectural Attacks
Core Insight: Side channels expose the fundamental gap between abstract computational models and physical implementation, revealing that performance optimizations create measurable information leakage that cannot be eliminated without accepting complexity costs or performance penalties that contradict the economic drivers of processor design.
Unresolved Questions:
- Can processors achieve both speculative execution performance and side-channel resistance without prohibitive complexity or partitioning overhead?
- Will formal verification of side-channel resistance scale to verify complex processors against comprehensive threat models including all physical channels?
- At what point does cloud economics force acceptance of side-channel risks rather than isolation overhead that destroys multi-tenancy efficiency?
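The leakage mechanism can be shown without any microarchitecture: an early-exit comparison does work proportional to the matching prefix, so timing reveals the secret byte by byte, while a constant-time variant always touches every byte. The sketch counts operations instead of measuring wall-clock time, purely to keep the model deterministic:

```python
# Timing side channel in miniature: data-dependent early exit vs. a
# branch-free accumulate-and-compare.
def leaky_equal(secret: bytes, guess: bytes):
    ops = 0
    for s, g in zip(secret, guess):
        ops += 1
        if s != g:
            return False, ops        # early exit: ops leaks prefix length
    return len(secret) == len(guess), ops

def ct_equal(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g                # no data-dependent branch
    return diff == 0

_, near = leaky_equal(b"hunter2", b"hunter9")   # 6-byte prefix match
_, far = leaky_equal(b"hunter2", b"zzzzzzz")    # immediate mismatch
```

The fix costs performance on every call regardless of input, which is the economic tension the insight identifies, scaled down to seven bytes.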
SR-010 | Compiler Optimization: The Machine Code Nobody Writes
Core Insight: Compilers bridge human abstractions and hardware reality through layered transformations that exploit low-level architectural features while preserving semantics, but optimization limits arise from aliasing ambiguity, undecidable program properties, and the fundamental trade-off between aggressive transformation and verifiable correctness.
Unresolved Questions:
- Can machine learning models achieve practical adoption in production compilers while maintaining determinism and acceptable compilation speed?
- Will heterogeneous computing force abandonment of performance portability in favor of explicit architecture-aware programming models and specialized compilation paths?
- At what scale does whole-program optimization become economically infeasible despite performance benefits, forcing return to modular compilation boundaries?
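Aliasing ambiguity, the first optimization limit named above, fits in a few lines: a compiler that wants to hoist the load of `dst[0]` out of the loop must prove `dst` and `src` never name the same storage. Here they can, and the hoisted-load assumption would compute the wrong answer:

```python
# Why may-alias analysis blocks optimization: the same code, with and
# without aliasing, produces different results.
def accumulate(dst, src):
    for x in src:
        dst[0] += x          # each write may change what src[i] later reads
    return dst[0]

a = [1, 2, 3]
no_alias = accumulate([0], list(a))   # independent storage: 0+1+2+3
aliased = accumulate(a, a)            # dst IS src: writes feed back via a[0]
```

With separate storage the sum is 6; with aliasing, each write to `dst[0]` changes the next value read from `src`, so the result differs. A C compiler faces the same bind with pointer arguments, which is why `restrict` annotations and alias analysis matter so much.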
SR-009 | The Physics of Information: Landauer's Principle and Thermodynamic Limits
Core Insight: Landauer's principle reveals information as fundamentally physical, establishing that erasure—not computation itself—imposes irreducible thermodynamic costs, though practical systems operate far above this limit due to engineering constraints that prioritize speed and reliability over thermodynamic efficiency.
Unresolved Questions:
- Can reversible computation architectures achieve practical utility despite increased circuit complexity and the eventual necessity of information erasure?
- Will quantum computing's inherent reversibility enable operations closer to thermodynamic limits, or do measurement and decoherence overhead dominate energy costs?
- Does black hole information preservation through Hawking radiation demonstrate deeper connections between thermodynamics, quantum mechanics, and gravitational physics?
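The bound itself is a one-line computation: erasing one bit dissipates at least k_B · T · ln 2. At room temperature that is about 2.9 × 10⁻²¹ J, and the comparison figure for CMOS below is an illustrative order of magnitude only:

```python
# Landauer's bound at room temperature.
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K (exact, SI 2019)
T = 300.0                        # room temperature, K
E_bit = k_B * T * math.log(2)    # minimum energy to erase one bit, joules
# A CMOS logic transition costs very roughly femtojoules (illustrative
# order of magnitude), i.e. many orders of magnitude above E_bit --
# the gap the insight attributes to speed and reliability engineering.
```

That gap, not the bound itself, is why Landauer's principle constrains long-term scaling arguments rather than present-day chip design.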
SR-008 | Cache Coherence Protocols: Maintaining Consistency at Scale
Core Insight: Cache coherence maintains the shared memory abstraction by coordinating distributed caches, but scalability requires accepting complexity in protocols, leaky performance models, and eventual hybrid approaches that selectively apply coherence where its programming benefits justify its costs.
Unresolved Questions:
- Will directory-based coherence scale economically to thousands of cores, or do fundamental bandwidth and latency limits force architectural change?
- Can approximate or probabilistic coherence protocols achieve practical adoption in domains that tolerate occasional stale reads for performance gains?
- At what system scale does the programming simplicity of shared memory cease to justify the hardware complexity and power costs of maintaining coherence?
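The coordination being scaled can be sketched as a toy MSI protocol for a single cache line: each cache holds the line in Modified, Shared, or Invalid state, and a write must invalidate every other copy so at most one Modified copy ever exists. This ignores directories, write-backs to memory, and transient states, which is where the real complexity lives:

```python
# Toy MSI state machine for one line across several caches.
class MSICache:
    def __init__(self):
        self.state = "I"             # Invalid until first access

def read(caches, i):
    # A read downgrades any Modified copy elsewhere (forcing write-back).
    for j, c in enumerate(caches):
        if j != i and c.state == "M":
            c.state = "S"
    if caches[i].state == "I":
        caches[i].state = "S"

def write(caches, i):
    # A write invalidates all other copies, then takes exclusive ownership.
    for j, c in enumerate(caches):
        if j != i:
            c.state = "I"
    caches[i].state = "M"

caches = [MSICache(), MSICache(), MSICache()]
read(caches, 0)
read(caches, 1)                      # line shared: S, S, I
write(caches, 2)                     # invalidation storm: I, I, M
states = [c.state for c in caches]
```

Each `write` broadcasts to every other cache; the directory protocols in the first unresolved question exist precisely to avoid that broadcast at scale.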
SR-007 | Formal Verification: Proving Programs Correct
Core Insight: Formal verification transforms correctness from probabilistic confidence through testing to mathematical certainty through proof, but practical application requires careful scoping, expertise in proof tools, and accepting trade-offs between verification cost and assurance value.
Unresolved Questions:
- Can proof automation advance sufficiently to make verification accessible beyond specialists while maintaining assurance guarantees?
- Will full-stack verification from silicon to applications become economically viable for any class of systems?
- How do we close the specification gap when formal specifications themselves can fail to capture true requirements?
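The shift from testing to proof can be miniaturized with a loop invariant: the assertions below check the invariant on one run, whereas a verifier discharges them symbolically for all inputs, turning the final assertion into a theorem rather than an observation.

```python
# Hoare-style reasoning in miniature: invariant "total == sum(xs[:i])".
def summed(xs):
    total, i = 0, 0
    assert total == sum(xs[:i])      # invariant holds on loop entry
    while i < len(xs):
        total += xs[i]
        i += 1
        assert total == sum(xs[:i])  # invariant preserved by each iteration
    # invariant + loop exit (i == len(xs)) imply total == sum(xs)
    return total
```

The specification gap in the last question appears even here: the proof establishes only that `summed` matches `sum(xs[:i])`, and nothing checks that this was the property actually wanted.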
SR-006 | Memory Hierarchy and the Tyranny of Bandwidth
Core Insight: Memory bandwidth and latency constitute the dominant performance constraint in modern systems, forcing a shift from operation-centric to data-movement-centric design where physical distance and energy costs of moving data often matter more than computational complexity.
Unresolved Questions:
- Can processing-in-memory achieve widespread adoption without programming model abstractions that hide heterogeneity while preserving performance benefits?
- Will emerging memory technologies displace DRAM, or will decades of manufacturing optimization maintain DRAM's cost-performance dominance indefinitely?
- At what point does the energy cost of data movement force fundamental architectural changes beyond incremental hierarchy optimizations?
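Data-movement-centric design is visible in a toy cache model: count line fetches for sequential versus large-stride traversal of the same 64 elements. The line size, LRU policy, and tiny capacity are all simplifying assumptions; the arithmetic work is identical in both cases, only the movement differs.

```python
# Same data, same operations, ~8x difference in lines moved.
from collections import OrderedDict

def line_fetches(indices, capacity=4, line=8):
    """Count cache-line fetches under a toy LRU cache (assumed parameters)."""
    cache, misses = OrderedDict(), 0
    for i in indices:
        ln = i // line
        if ln in cache:
            cache.move_to_end(ln)            # refresh LRU position
        else:
            misses += 1                      # first touch = one line fetch
            cache[ln] = True
            if len(cache) > capacity:
                cache.popitem(last=False)    # evict least-recently-used line
    return misses

N = 64
seq = line_fetches(range(N))                 # sequential walk: 8 fetches
# Column-major walk of an 8x8 row-major array: stride 8 thrashes the cache.
strided = line_fetches(r * 8 + c for c in range(8) for r in range(8))
```

Sequential traversal fetches each of the 8 lines once; the strided walk cycles through more lines than fit, so every access misses, fetching 64 lines for the same 64 reads. Loop blocking and layout changes exist to convert the second pattern into the first.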
SR-005 | Neuromorphic Computing: Beyond the von Neumann Bottleneck
Core Insight: Neuromorphic computing achieves extreme energy efficiency by exploiting physics directly rather than abstracting it away, but trades general-purpose programmability for specialized performance in domains where biologically inspired computation aligns with task structure.
Unresolved Questions:
- Can learning algorithms achieve backpropagation-level effectiveness while respecting local, asynchronous processing constraints of neuromorphic hardware?
- Will neuromorphic systems remain specialized accelerators or expand into broader computing domains as programming abstractions mature?
- What are the fundamental limits of analog neuromorphic efficiency versus digital implementations when accounting for manufacturing variation and environmental factors?
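The basic neuromorphic primitive behind the efficiency claims is the leaky integrate-and-fire neuron: state decays between inputs, and output (and hence energy) is spent only when a spike occurs. A minimal sketch, with the leak rate and threshold as assumed parameters:

```python
# Leaky integrate-and-fire neuron: event-driven, sparse output.
def lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes (toy model)."""
    v, spikes = 0.0, []
    for t, x in enumerate(inputs):
        v = v * leak + x             # membrane potential leaks, then integrates
        if v >= threshold:
            spikes.append(t)         # emit a spike...
            v = 0.0                  # ...and reset the potential
    return spikes

spike_times = lif([0.5, 0.5, 0.0, 0.9, 0.5])
```

Five inputs yield one output event; in an analog implementation the integration is done by device physics rather than arithmetic, which is the "exploiting physics directly" half of the insight, and manufacturing variation in that physics is what the third question asks about.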
SR-004 | RISC-V and the Open Instruction Set Ecosystem
Core Insight: Open ISAs decouple interface specification from implementation value, lowering barriers to processor design while shifting competition from controlling standards to delivering superior microarchitectures, but ecosystem maturity and manufacturing access remain critical constraints.
Unresolved Questions:
- Can RISC-V ecosystems achieve feature parity with decades-mature ARM and x86 platforms in high-margin computing segments?
- Does ISA openness meaningfully reduce total cost of processor development when fabrication and verification dominate expenses?
- Will geopolitical fragmentation undermine the collaborative governance model as nations seek computing independence?
SR-003 | Quantum Error Correction: Building Reliability from Noise
Core Insight: Quantum error correction transforms noise management from physical device improvement to abstract encoding schemes, but practical utility depends on crossing thresholds where overhead becomes manageable and applications justify extraordinary engineering investment.
Unresolved Questions:
- Will physical qubit error rates improve sufficiently to make fault-tolerant quantum computing economically viable at scale?
- Can classical decoding systems keep pace with syndrome processing requirements as quantum processors scale to millions of qubits?
- Do quantum-amenable problems of sufficient commercial value exist to justify continued investment if classical algorithms keep improving?
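The encode-then-decode structure of error correction shows up already in the classical 3-bit repetition code against bit flips: redundancy plus majority-vote decoding pushes the logical error rate from p to roughly 3p², provided p is below the threshold of 1/2. Real QEC measures syndromes rather than the data and must also handle phase errors, but the shape is the same:

```python
# Classical repetition code: logical error ~3p^2 vs physical error p.
import random

def encode(bit):
    return [bit, bit, bit]

def noisy(codeword, p, rng):
    return [b ^ (1 if rng.random() < p else 0) for b in codeword]

def decode(codeword):
    return 1 if sum(codeword) >= 2 else 0    # majority vote corrects one flip

rng = random.Random(0)                       # fixed seed for reproducibility
p, trials = 0.05, 10_000
raw_errors = sum(noisy([0], p, rng)[0] for _ in range(trials))
enc_errors = sum(decode(noisy(encode(0), p, rng)) for _ in range(trials))
```

With p = 0.05 the encoded error rate drops by roughly an order of magnitude at 3x qubit overhead; the "crossing thresholds" in the insight is the regime where adding redundancy keeps winning this trade as codes grow.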
SR-002 | Power Walls and Performance Ceilings: Life After Dennard Scaling
Core Insight: Post-Dennard scaling forces a fundamental restructuring from universal performance improvements to selective acceleration, creating computational stratification where problem structure determines achievable performance rather than implementation quality alone.
Unresolved Questions:
- What abstraction layers enable portable software across evolving heterogeneous architectures without sacrificing efficiency?
- Can processing-in-memory achieve practical adoption given the complexity of programming models it requires?
- At what point does architectural complexity become economically unsustainable relative to performance gains?
SR-001 | Silicon's Limit Surface: Economics, Physics, and Cognition
Core Insight: Moore's Law limits manifest as intersecting constraints—physics, economics, and human cognitive capacity to design and verify complex systems—rather than a single insurmountable barrier, shifting innovation from transistor improvement to architectural heterogeneity.
Unresolved Questions:
- Can formal verification scale to prove correctness of heterogeneous system architectures?
- What programming abstractions enable efficient use of domain-specific accelerators without exposing hardware complexity?
- Is the current fab consolidation economically sustainable, or does it create unacceptable supply chain risks?