Episode #9 | January 9, 2026 @ 4:00 PM EST

Between Memory and Storage: The Persistent Memory Dilemma

Guest

Dr. Steven Swanson (Computer Scientist, University of California San Diego)
Announcer The following program features simulated voices generated for educational and technical exploration.
Sam Dietrich Good evening. I'm Sam Dietrich.
Kara Rousseau And I'm Kara Rousseau. Welcome to Simulectics Radio.
Kara Rousseau Tonight we're examining persistent memory technologies that blur the traditional boundary between volatile memory and non-volatile storage. For decades, computing systems have maintained a clear hierarchy—DRAM provides fast, byte-addressable memory that loses data on power loss, while disks and SSDs provide persistent storage with block-level access and millisecond latencies. Persistent memory technologies like Intel Optane occupy an intermediate point—byte-addressable like DRAM, persistent like storage, with latencies between the two. The question is whether this fundamentally changes how we architect systems, or whether it's simply another tier in an increasingly complex hierarchy.
Sam Dietrich From a hardware perspective, persistent memory represents a materials science breakthrough. Phase-change memory stores data by switching chalcogenide glass between crystalline and amorphous states with different electrical resistances. The state persists without power, providing non-volatility. But the physics imposes constraints—write latencies higher than DRAM due to thermal state changes, limited write endurance as repeated phase transitions degrade the material, and higher cost per bit. The engineering challenge is whether these trade-offs enable new system designs or merely shift bottlenecks.
Kara Rousseau Joining us to discuss persistent memory systems is Dr. Steven Swanson, Professor of Computer Science and Engineering at UC San Diego, whose research on non-volatile memory systems has explored programming models, file system designs, and architectural implications of persistent, byte-addressable memory. Dr. Swanson, welcome.
Dr. Steven Swanson Thank you. Glad to be here.
Sam Dietrich Let's start with the technology. What distinguishes persistent memory from both DRAM and flash storage?
Dr. Steven Swanson Persistent memory combines attributes from both worlds—byte addressability like DRAM, persistence like flash, but with performance and characteristics distinct from either. DRAM uses capacitive storage requiring constant refresh—it's fast but volatile. Flash uses trapped charge in floating gates—persistent but accessed in pages and blocks, not individual bytes, with erase-before-write limitations. Persistent memory technologies like phase-change memory or Intel's 3D XPoint are byte-addressable through load-store instructions, persist data without power, and avoid flash's erase constraints. Read latencies are roughly 3-4 times DRAM, writes are 5-10 times slower. They sit between DRAM and SSD in every dimension—latency, bandwidth, cost, density.
Kara Rousseau What does byte-addressable persistence enable that wasn't possible with block-based storage?
Dr. Steven Swanson It eliminates serialization overhead for persistent data structures. With traditional storage, you maintain data structures in DRAM, then serialize them to blocks on disk. Updates require reading blocks, modifying in-memory copies, and writing back complete blocks. This creates an impedance mismatch between in-memory data structures and persistent representations. With persistent memory, you can directly manipulate persistent data through pointers, avoiding serialization. You allocate objects directly in persistent memory, update them in place, and persistence happens automatically on cache eviction. This enables fundamentally different programming models where memory and storage are unified rather than distinct tiers.
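[Editor's note: a minimal Python sketch of the in-place update model described above. A bytearray stands in for a byte-addressable persistent region, and the field offsets and record layout are illustrative assumptions, not any real persistent-memory API.]

```python
import struct

# A bytearray simulates a byte-addressable persistent region. Record
# fields live at fixed offsets, so an update touches only the bytes
# that changed: no serialization of the whole object, and no
# block-sized read-modify-write as with disk or SSD storage.
PM = bytearray(64)            # simulated persistent memory region
COUNT_OFF, TOTAL_OFF = 0, 8   # field offsets within the region

def init_record():
    struct.pack_into("<qq", PM, COUNT_OFF, 0, 0)

def add_sample(value):
    # In-place, byte-granularity updates via "load/store" on the region.
    count, total = struct.unpack_from("<qq", PM, COUNT_OFF)
    struct.pack_into("<q", PM, COUNT_OFF, count + 1)
    struct.pack_into("<q", PM, TOTAL_OFF, total + value)

def read_record():
    return struct.unpack_from("<qq", PM, COUNT_OFF)

init_record()
add_sample(10)
add_sample(32)
```

With block storage, either update would require rewriting at least one full block; here each one modifies only the affected field.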
Sam Dietrich But cache coherence and memory ordering complicate this. How do you ensure consistent recovery after a crash?
Dr. Steven Swanson That's the central challenge. Processor caches hold recent writes. A power failure loses cache contents—data not yet evicted to persistent memory. Memory ordering allows stores to reach memory out of program order. You might update a data structure across multiple stores, expecting a particular order, but the memory system can reorder them. If a crash occurs mid-update, persistent memory may contain a partially-updated, inconsistent structure. Ensuring crash consistency requires explicit cache flush instructions to evict data from caches to persistent memory, and memory fence instructions to enforce ordering. You must carefully sequence flushes and fences to maintain invariants across crashes.
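[Editor's note: a toy Python simulation of the flush-and-fence discipline Dr. Swanson describes. The cache and crash model are deliberate simplifications, not real hardware behavior; `flush_fence` plays the role of a cache-line flush followed by a store fence.]

```python
class SimMemory:
    def __init__(self):
        self.pm = {}       # contents that survive a crash
        self.cache = []    # buffered (addr, value) stores

    def store(self, addr, value):
        self.cache.append((addr, value))

    def flush_fence(self):
        # Analogue of cache-line flush + store fence: force every
        # buffered store into persistent memory before proceeding.
        for addr, value in self.cache:
            self.pm[addr] = value
        self.cache.clear()

    def crash(self, evicted=()):
        # Model arbitrary eviction order: some buffered stores may
        # already have reached PM at power loss, others not.
        for addr, value in self.cache:
            if addr in evicted:
                self.pm[addr] = value
        self.cache.clear()

# Unsafe: the valid flag can persist while the data it guards does not,
# leaving an inconsistent structure after the crash.
bad = SimMemory()
bad.store("data", 99)
bad.store("valid", True)
bad.crash(evicted=("valid",))   # hardware evicted the flag first

# Safe: flush and fence before publishing the flag, so the data is
# guaranteed durable before the flag store can become persistent.
good = SimMemory()
good.store("data", 99)
good.flush_fence()
good.store("valid", True)
good.crash(evicted=("valid",))
```

The unsafe path can recover a set flag with missing data; the safe path cannot, whatever the eviction order.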
Kara Rousseau This sounds like a failure recovery problem at the programming language level rather than a hardware abstraction.
Dr. Steven Swanson Exactly. Persistent memory exposes hardware details—cache behavior, memory ordering—that traditional storage abstractions hide. With file systems, the kernel manages consistency through journaling or copy-on-write. Applications see atomic writes or transaction semantics without understanding crash recovery mechanisms. Persistent memory pushes these concerns to applications. You're responsible for ensuring data structure consistency across crashes. Libraries help—logging mechanisms, transactional interfaces—but fundamentally, programming with persistent memory requires understanding memory system behavior at a level most software never has to.
Sam Dietrich Let's discuss write endurance. Phase-change memory has limited write cycles before degradation. How does this affect system design?
Dr. Steven Swanson Write endurance is a constraint, though perhaps overstated. Phase-change memory endures roughly 10^8 writes per cell—far less than DRAM's effectively unlimited endurance, but far more than flash's 10^4-10^5 cycles. For typical workloads, endurance isn't the primary concern. But it requires wear leveling—distributing writes across all cells to avoid hot spots that fail early. You can implement this in hardware through memory controllers that remap addresses, or in software through allocators that rotate regions. Unlike flash, persistent memory doesn't require erase operations, simplifying wear leveling. The real question is whether endurance limits deployments that perform continuous high-frequency updates.
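[Editor's note: a toy Python sketch of address-remapping wear leveling along the lines mentioned above. Real schemes such as Start-Gap also migrate the data when the mapping rotates; this simplification tracks only how writes are distributed across physical lines.]

```python
# Rotate the logical-to-physical mapping periodically so that repeated
# writes to one hot logical line spread across all physical lines.
class WearLeveler:
    def __init__(self, n_lines, rotate_every=4):
        self.n = n_lines
        self.offset = 0                      # current remapping shift
        self.rotate_every = rotate_every
        self.writes_until_rotate = rotate_every
        self.wear = [0] * n_lines            # per-physical-line write counts

    def physical(self, logical):
        return (logical + self.offset) % self.n

    def write(self, logical):
        self.wear[self.physical(logical)] += 1
        self.writes_until_rotate -= 1
        if self.writes_until_rotate == 0:
            self.offset = (self.offset + 1) % self.n   # shift the mapping
            self.writes_until_rotate = self.rotate_every

wl = WearLeveler(n_lines=8)
for _ in range(64):
    wl.write(0)   # pathological hot spot: every write hits logical line 0
```

Without remapping, one cell would absorb all 64 writes; with rotation, the wear spreads evenly across the eight lines.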
Kara Rousseau What about latency asymmetry? Reads are faster than writes. How does this affect data structure design?
Dr. Steven Swanson Latency asymmetry favors read-heavy data structures. Traditional memory has symmetric read-write latency, so data structures optimize for algorithmic complexity without considering access type. Persistent memory with 3x read latency but 10x write latency changes optimization targets. You might accept more reads to reduce writes—for example, using immutable data structures with structural sharing instead of in-place updates. Log-structured approaches become attractive because they transform random writes into sequential appends. But this creates tension—optimizing for write performance through append-only structures increases read complexity. There's no universal answer—different workloads have different read-write ratios requiring different optimizations.
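[Editor's note: a minimal Python sketch of the log-structured trade-off just described. The in-memory list and index are stand-ins for a persistent append-only region and its lookup structure.]

```python
# Updates append sequentially, which suits persistent memory's slower,
# asymmetric writes; reads pay extra work following the index to the
# latest version, and superseded entries accumulate until reclaimed.
class LogStore:
    def __init__(self):
        self.log = []      # append-only (key, value) entries
        self.index = {}    # key -> position of the latest entry

    def put(self, key, value):
        self.index[key] = len(self.log)
        self.log.append((key, value))   # one sequential append per update

    def get(self, key):
        pos = self.index[key]
        return self.log[pos][1]

store = LogStore()
store.put("a", 1)
store.put("b", 2)
store.put("a", 3)   # supersedes the earlier entry; old version lingers
```

The tension Dr. Swanson notes is visible here: every update is a cheap append, but the log grows with stale versions that something must eventually clean up.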
Sam Dietrich How do you integrate persistent memory into the memory hierarchy? Is it NUMA-attached DRAM replacement, or a new tier?
Dr. Steven Swanson Deployment models vary. You can place persistent memory on the memory bus as direct DRAM replacement—same interface, same addressing, but with persistence. This maximizes software compatibility but treats persistence as a feature rather than architectural foundation. Alternatively, you create a distinct tier with separate address spaces—DRAM for volatile computation, persistent memory for durable state. This requires explicit memory management but lets you optimize differently for each tier. Intel's Optane DIMMs supported both modes—memory mode treating them as larger-but-slower DRAM, and app direct mode exposing persistence. The right model depends on whether you're retrofitting persistence into existing systems or designing new systems around it.
Kara Rousseau Let's talk about programming models. What abstractions help programmers use persistent memory effectively?
Dr. Steven Swanson Several approaches exist. File system interfaces extend traditional POSIX semantics with memory mapping—open a file, mmap it into your address space, manipulate it through pointers. This leverages existing abstractions but requires careful fsync and msync to control persistence. Transaction libraries provide atomic update semantics—begin transaction, perform updates to persistent memory, commit or abort. The library ensures atomicity through logging or shadow paging. Persistent data structure libraries offer collections—maps, lists, queues—with automatic consistency management. Each approach trades convenience for control. File systems are familiar but coarse-grained. Transactions are flexible but require programmer discipline. Persistent data structures are safe but less general. No approach eliminates the fundamental challenge of reasoning about crash consistency.
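[Editor's note: a toy Python sketch of the undo-logging transaction pattern mentioned above, using a dict as the stand-in for persistent memory. It is a conceptual illustration, not the API of any real transaction library such as PMDK.]

```python
# Log the old value before each in-place update, so an abort (or crash
# recovery replaying the undo log) can roll the structure back.
class UndoTx:
    _MISSING = object()   # sentinel: key did not exist before the write

    def __init__(self, pm):
        self.pm = pm
        self.undo = []    # (key, old value or _MISSING) records

    def write(self, key, value):
        self.undo.append((key, self.pm.get(key, UndoTx._MISSING)))
        self.pm[key] = value

    def commit(self):
        self.undo.clear()   # updates are already in place

    def abort(self):
        for key, old in reversed(self.undo):
            if old is UndoTx._MISSING:
                del self.pm[key]
            else:
                self.pm[key] = old
        self.undo.clear()

pm = {"balance": 100}
tx = UndoTx(pm)
tx.write("balance", 40)
tx.write("pending", True)
tx.abort()               # rolls both writes back, in reverse order
```

A real implementation must also flush each undo record before the corresponding in-place store, exactly the ordering discipline discussed earlier.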
Sam Dietrich What about garbage collection in persistent memory? How do you reclaim unreachable objects that survive crashes?
Dr. Steven Swanson Persistent garbage collection is complicated by crash recovery. In volatile memory, unreachable objects simply waste space until collection. In persistent memory, unreachable objects persist indefinitely, accumulating across program executions. You need garbage collection that maintains correctness across crashes. One approach is stop-the-world collection at clean shutdown—traverse persistent memory, mark reachable objects, reclaim unreachable ones. But this fails if crashes prevent clean shutdown. Incremental persistent GC requires logging collection state to resume after crashes. Reference counting works but imposes overhead on every pointer update and must handle cyclic references. The fundamental issue is that persistent memory combines challenges of distributed systems crash recovery with garbage collection complexity.
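[Editor's note: a minimal Python sketch of the stop-the-world mark-and-sweep pass described above, over a simulated persistent heap. The dict-of-lists heap representation is an illustrative assumption.]

```python
# Objects in a persistent heap survive across runs, so anything
# unreachable from the persistent roots must be traced and reclaimed
# explicitly; it will never disappear on its own.
def mark_sweep(heap, roots):
    """heap: {obj_id: [referenced obj_ids]}; roots: persistent root ids."""
    marked, stack = set(), list(roots)
    while stack:                      # mark phase: trace reachability
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap.get(obj, []))
    for obj in list(heap):            # sweep phase: reclaim the rest
        if obj not in marked:
            del heap[obj]
    return heap

# Garbage leaked by a previous crashed run: "x" and "y" form a cycle
# unreachable from the persistent root set.
heap = {"root": ["a"], "a": ["b"], "b": [], "x": ["y"], "y": ["x"]}
mark_sweep(heap, roots=["root"])
```

Note that tracing also reclaims the cycle, which reference counting alone would miss; making such a pass itself crash-safe is the hard part Dr. Swanson describes.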
Kara Rousseau How does persistent memory affect database architectures? Databases already manage durability and crash recovery.
Dr. Steven Swanson Persistent memory potentially simplifies database architecture by eliminating the buffer pool—the in-memory cache of disk pages that traditional databases maintain. With persistent memory, you can operate directly on persistent data without copying to volatile buffers. Transaction logs may become simpler because data is already persistent—you need undo logs for aborted transactions but not redo logs for durability. But you must handle the subtleties of cache coherence and memory ordering we discussed. Some databases adopt hybrid approaches—keep hot data in DRAM, cold data in persistent memory, with mechanisms to migrate between tiers. The question is whether persistent memory's characteristics justify redesigning database internals versus treating it as faster storage.
Sam Dietrich What about file systems? How do designs change when storage is byte-addressable?
Dr. Steven Swanson Traditional file systems hide block-level storage behind file abstractions. With persistent memory, you can bypass file systems entirely through direct memory mapping. But file systems still provide useful services—naming, permissions, space management, sharing. Persistent memory file systems like PMFS or NOVA optimize for byte-addressable access. They avoid block allocation—file data resides in byte-addressable ranges. They use log-structured designs to exploit sequential write performance. Metadata updates use atomic primitives rather than journaling. The goal is minimal overhead between application and persistent memory. But there's tension—generality requires abstraction layers that add overhead, while performance demands direct access.
Kara Rousseau What security implications arise from persistent memory?
Dr. Steven Swanson Persistence creates security concerns that volatile memory avoids. Sensitive data in DRAM disappears on power loss. In persistent memory, it remains accessible after shutdown. Encryption keys, passwords, private data persist unless explicitly cleared. You need secure deletion mechanisms that guarantee data erasure. But wear leveling complicates this—data may persist in remapped locations invisible to software. Physical attacks become more feasible—stealing a persistent memory DIMM captures complete program state. Multi-tenant systems must prevent one process from accessing another's persistent memory across reboots. These problems resemble disk encryption and secure erase, but at memory speeds and granularities, requiring new mechanisms.
Sam Dietrich Let's discuss economics. Persistent memory costs more per bit than DRAM. What applications justify the price premium?
Dr. Steven Swanson Applications where eliminating storage I/O overhead justifies cost. Databases with working sets larger than affordable DRAM but requiring low latency—persistent memory extends addressable capacity without storage latencies. In-memory analytics where restart time matters—persistent memory eliminates reloading gigabytes from storage on restart. Metadata-intensive workloads like file system name spaces where small random updates dominate—persistent memory's byte addressability outperforms block storage. But for streaming workloads or cold data, persistent memory offers little advantage over SSDs. The value proposition depends on specific access patterns, latency sensitivity, and restart requirements.
Kara Rousseau What happened to Intel Optane? Why did a promising technology get discontinued?
Dr. Steven Swanson Market viability requires more than technical capability. Optane worked—it delivered byte-addressable persistence with its promised characteristics. But it occupied an awkward middle ground. For capacity, SSDs cost less per bit. For performance, DRAM remains faster. Optane's sweet spot—applications needing both capacity and byte-addressable persistence—proved smaller than hoped. Software ecosystems developed slowly because Optane's unique characteristics required application redesign. Few applications were rewritten to exploit persistent memory properly. DRAM capacity increased, SSD latency decreased, narrowing Optane's advantage. Intel decided the market didn't justify continued investment. This doesn't invalidate persistent memory concepts, but suggests the specific performance-cost-capacity point Optane occupied didn't align with sufficient demand.
Sam Dietrich Looking forward, what role will persistent memory play? Is it a niche technology or fundamental architecture shift?
Dr. Steven Swanson That depends on future technology trajectories. If persistent memory cost and performance improve sufficiently relative to DRAM and flash, it could unify the memory hierarchy—single technology spanning volatile and non-volatile needs. More likely, we'll see heterogeneous hierarchies with multiple technologies optimized for different characteristics. Persistent memory may find niches where its specific properties align with workload requirements. But the programming complexity it exposes—explicit crash consistency management, memory ordering concerns—creates adoption barriers. For widespread use, we need better abstractions that provide persistence guarantees without requiring deep understanding of memory system internals. Until then, persistent memory remains a specialized tool rather than universal replacement for DRAM or storage.
Kara Rousseau The fundamental question is whether blurring the memory-storage boundary simplifies or complicates system design.
Dr. Steven Swanson Right. Traditional boundaries exist for good reasons—they isolate concerns and provide clean abstractions. Volatile memory means you don't reason about persistence. Block storage means you don't manage individual bytes. Persistent memory combines both concerns, requiring simultaneous attention to performance and durability. Whether this is liberation or burden depends on your perspective and application needs.
Sam Dietrich Dr. Swanson, thank you for this examination of persistent memory technologies and the system design challenges they introduce.
Dr. Steven Swanson Thank you both. It's been a pleasure discussing these issues.
Kara Rousseau That's our program for tonight. Until tomorrow, may your data persist when needed and vanish when appropriate.
Sam Dietrich And your cache flushes be timely. Good night.
Sponsor Message

PersistentDB Pro

Optimize databases for persistent memory with PersistentDB Pro—comprehensive platform for byte-addressable storage. Direct persistent memory access eliminating buffer pool overhead. Transaction logging optimized for crash consistency with cache flush management. Hybrid memory tiering automatically migrating hot data to DRAM, cold data to persistent memory. Persistent data structure libraries with automatic consistency guarantees. Garbage collection supporting crash recovery and incremental collection. Write endurance monitoring with wear-leveling allocation. Integration with existing SQL and NoSQL databases. Security features including encrypted persistent memory and secure deletion. PersistentDB Pro—byte-addressable durability without serialization overhead.

Byte-addressable durability without overhead