Announcer
The following program features simulated voices generated for educational and technical exploration.
Sam Dietrich
Good evening. I'm Sam Dietrich.
Kara Rousseau
And I'm Kara Rousseau. Welcome to Simulectics Radio.
Kara Rousseau
Tonight we're examining a fundamental constraint in high-performance computing: the interconnect bottleneck. As processors get faster and cores multiply, the electrical wires connecting them struggle to keep pace. Copper interconnects face bandwidth limits, signal integrity problems, and power consumption that scales poorly. Silicon photonics—using light instead of electrons for data transmission—promises to overcome these limits. But integrating photonic components with CMOS electronics introduces its own challenges.
Sam Dietrich
This is about physics meeting fabrication reality. Photons don't suffer from the resistance and capacitance that attenuate and distort electrical signals in copper at these scales. But building waveguides, modulators, and photodetectors on the same silicon substrate as transistors requires new materials, new process steps, and careful co-design. The question is whether the performance gains justify the manufacturing complexity.
Kara Rousseau
To explore these trade-offs, we're joined by Dr. Keren Bergman from Columbia University, whose research group has pioneered silicon photonic interconnect architectures for chip-to-chip and on-chip communication. Dr. Bergman, welcome.
Dr. Keren Bergman
Thank you. I'm delighted to discuss this.
Sam Dietrich
Let's start with the physical limitations of copper. What specific problems emerge at high data rates and why can't we simply engineer around them?
Dr. Keren Bergman
Copper interconnects are fundamentally limited by the skin effect and dielectric losses at high frequencies. As you increase data rate, signals attenuate more rapidly and dispersion becomes severe. To compensate, you need equalization circuits, which consume significant power. Worse, the power required for these circuits scales non-linearly with data rate and distance. For short chip-to-chip links at modest speeds, copper works fine. But when you need to move terabits per second across a circuit board or between chips in a package, the energy cost becomes prohibitive.
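Dr. Bergman's scaling argument can be sketched with a toy loss model. The coefficient `k` and the square-root frequency dependence below are simplifying assumptions chosen for illustration, not measured values:

```python
import math

def copper_loss_db(rate_gbps, length_cm, k=0.5):
    """Illustrative channel-loss model: skin-effect attenuation grows
    roughly with the square root of frequency and linearly with trace
    length. k (dB per cm per sqrt(GHz)) is an assumed coefficient."""
    nyquist_ghz = rate_gbps / 2          # NRZ Nyquist frequency
    return k * math.sqrt(nyquist_ghz) * length_cm

# Quadrupling the data rate doubles the loss the equalizer must make up;
# doubling the reach doubles it again.
base = copper_loss_db(25, 10)    # 25 Gb/s over a 10 cm trace
fast = copper_loss_db(100, 10)   # 100 Gb/s over the same trace
print(round(base, 1), round(fast, 1))  # → 17.7 35.4
```

Since equalization power must climb with the loss it compensates, energy per bit worsens on both axes at once, which is the non-linear scaling described above.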
Kara Rousseau
And this creates a bandwidth-power trade-off. You can push copper to higher speeds, but only by burning more watts per bit transmitted. At some point, the interconnect power dominates the system power budget, which defeats the purpose of having faster processors if you can't efficiently move data between them.
Dr. Keren Bergman
Exactly. In modern data centers, interconnect power is a significant fraction of total system power. Photonic links offer fundamentally better scaling because photons don't interact with each other the way electrons do in a wire. An optical waveguide can carry multiple wavelengths simultaneously—wavelength division multiplexing—without crosstalk. You can transmit terabits per second through a single optical fiber with relatively low loss and minimal power consumption.
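The wavelength-division-multiplexing arithmetic is simple; the channel count and per-channel rate below are illustrative assumptions, not figures from the discussion:

```python
# Aggregate capacity of a WDM link: wavelengths x per-wavelength rate.
channels = 32        # assumed number of wavelengths on one waveguide
rate_gbps = 50       # assumed per-wavelength data rate
total_tbps = channels * rate_gbps / 1000
print(total_tbps)    # → 1.6 (Tb/s through a single waveguide)
```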
Sam Dietrich
But there's no such thing as a free lunch. Light doesn't naturally interact with CMOS transistors, so you need electro-optic conversion. How much energy does that conversion consume, and does it eliminate the advantage over copper?
Dr. Keren Bergman
Electro-optic conversion does consume energy, but modern silicon photonic modulators and photodetectors are quite efficient. A well-designed modulator can operate at a few femtojoules per bit, and photodetectors add minimal overhead. The key is that once you've converted to the optical domain, transmission is essentially free—you're not fighting resistance and capacitance. For distances beyond a few centimeters and data rates above tens of gigabits per second, photonics wins decisively on energy efficiency.
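A back-of-the-envelope comparison shows where the crossover sits. Every figure here (laser, modulator, detector, and electrical equalization energies) is an assumed round number, chosen only so the crossover lands near a few centimeters, consistent with the point above:

```python
def optical_fj_per_bit(mod_fj=5, det_fj=5, laser_fj=490):
    """Optical link energy is a fixed E/O-conversion plus laser cost,
    independent of distance; propagation itself is nearly free.
    All values are illustrative assumptions."""
    return mod_fj + det_fj + laser_fj

def electrical_fj_per_bit(base_fj=100, per_cm_fj=100, length_cm=1):
    """Electrical link energy grows with distance, since equalization
    must overcome loss that scales with trace length. Illustrative."""
    return base_fj + per_cm_fj * length_cm

# With these assumptions the crossover sits at 4 cm; beyond it the
# optical link's flat cost wins.
for cm in (1, 4, 10):
    print(cm, electrical_fj_per_bit(length_cm=cm), optical_fj_per_bit())
```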
Kara Rousseau
This sounds like a classic case where the crossover point depends on the specific parameters. At what system scale does photonic interconnect become compelling compared to advanced electrical signaling?
Dr. Keren Bergman
For rack-scale communication in data centers, photonics is already standard—these are the optical transceivers you see in network switches. For board-to-board links in high-performance systems, co-packaged optics are emerging. The frontier is chip-to-chip and eventually on-chip photonic interconnects. Here the economics are still evolving. Silicon photonics adds fabrication complexity and cost, so you need high enough data rates to justify the investment. As bandwidth requirements continue to grow, that crossover point moves closer to the processor.
Sam Dietrich
Let's talk about the fabrication challenges. Silicon photonics uses silicon as the waveguide material, which is compatible with CMOS processing. But you still need germanium photodetectors, III-V lasers, and precise lithography for optical components. What are the manufacturing hurdles?
Dr. Keren Bergman
The biggest challenge is integrating laser sources. Silicon is an indirect bandgap semiconductor—it doesn't emit light efficiently. So you need to either bond III-V laser chips onto the silicon photonic die, or heterogeneously integrate III-V materials into the CMOS process. Both approaches add cost and complexity. Beyond lasers, you need tight dimensional tolerances for waveguides and ring resonators, typically on the order of nanometers. Small variations in width or thickness shift the resonant wavelength, which affects system performance.
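The dimensional sensitivity follows from the ring's resonance condition, m·λ = n_eff·L. The effective index, circumference, and the assumed index shift per nanometer of width error below are plausible round numbers, not measured data:

```python
def ring_resonance_nm(n_eff, circumference_um, order):
    """Resonance condition m * lambda = n_eff * L for a ring resonator;
    returns the resonant wavelength in nm. Inputs are illustrative."""
    return n_eff * circumference_um * 1000 / order

# A plausible silicon ring: n_eff ~ 2.4, 30 um circumference, order 46.
lam = ring_resonance_nm(2.4, 30.0, 46)
# Suppose a 1 nm waveguide-width error shifts n_eff by ~0.002 (an
# assumed sensitivity): the resonance moves by more than a nanometer,
# wider than a dense WDM channel spacing.
shift = ring_resonance_nm(2.402, 30.0, 46) - lam
print(round(lam, 1), round(shift, 2))  # → 1565.2 1.3
```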
Kara Rousseau
And thermal sensitivity must be an issue. If resonant wavelengths shift with temperature, you need active thermal control or wavelength stabilization. How do systems handle this in practice?
Dr. Keren Bergman
Thermal management is critical. Ring resonators shift about one hundred picometers per degree Celsius, which is significant for dense wavelength division multiplexing systems. You can use integrated heaters for thermal tuning, but this consumes power and adds control complexity. Alternative approaches include athermal designs—carefully engineered structures whose thermal dependence cancels out—or broader resonance features that tolerate temperature variation. Each approach trades off efficiency, bandwidth, or design complexity.
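The hundred-picometer-per-degree figure quoted above translates directly into channel hops. The 0.8 nm spacing used here is the standard 100 GHz DWDM grid spacing near 1550 nm; the temperature swing is an assumed example:

```python
def thermal_shift_nm(dT_celsius, pm_per_c=100):
    """Resonance drift from temperature, using the ~100 pm/C figure
    quoted in the discussion; returns the shift in nm."""
    return dT_celsius * pm_per_c / 1000

# A 20 C swing moves the resonance by 2 nm, i.e. across more than two
# 100 GHz DWDM channel spacings (~0.8 nm) without active tuning.
drift = thermal_shift_nm(20)
channel_spacing_nm = 0.8
print(drift, round(drift / channel_spacing_nm, 1))  # → 2.0 2.5
```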
Sam Dietrich
What about latency? Electrical signals propagate at roughly half the speed of light along a well-designed transmission line. Light in an optical fiber travels at about two-thirds of the vacuum speed, and in a silicon waveguide, where the group index is around four, closer to a quarter. Do these raw propagation speeds matter for latency?
Dr. Keren Bergman
The propagation speed difference is real but rarely dominant. Latency in chip-to-chip links is typically dominated by serialization delay—the time to clock bits in and out—and by any protocol overhead. Photonics can reduce latency by enabling higher serialization rates, but the propagation time difference over centimeter distances is small, measured in picoseconds. Where photonics helps more is in enabling flatter, lower-latency network topologies by providing the bandwidth to support high-radix switches.
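The breakdown is easy to sanity-check with round numbers. The lane rate, packet size, and propagation velocities are assumptions for illustration (a silicon waveguide group index of roughly four gives about c/4; a transmission line about c/2):

```python
def serialization_ns(bytes_, rate_gbps):
    """Time to clock a packet onto the link, in nanoseconds."""
    return bytes_ * 8 / rate_gbps   # bits / (Gb/s) = ns

def propagation_ns(length_cm, velocity_fraction_of_c):
    """Time of flight over the link; c is about 30 cm/ns."""
    return length_cm / (30 * velocity_fraction_of_c)

# 64-byte packet over a 5 cm chip-to-chip link (illustrative numbers):
ser = serialization_ns(64, 100)      # one 100 Gb/s lane
prop_cu = propagation_ns(5, 0.5)     # electrical, ~0.5c
prop_si = propagation_ns(5, 0.25)    # silicon waveguide, ~0.25c
print(ser, round(prop_cu, 2), round(prop_si, 2))  # → 5.12 0.33 0.67
```

Serialization dwarfs the few hundred picoseconds separating the two propagation times, which is why the topology advantage matters more than raw speed of flight.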
Kara Rousseau
Let's discuss network topology. Traditional electrical networks use hierarchical topologies—multiple switch layers—because electrical ports are expensive in terms of power and area. If photonic ports are cheaper, can you build flatter networks with lower diameter?
Dr. Keren Bergman
Absolutely. This is one of the most compelling advantages of photonics for large-scale systems. With wavelength division multiplexing, a single physical waveguide can carry dozens of independent channels. This enables high-radix switches—routers with hundreds of ports—which in turn support flatter topologies like fat trees or dragonfly networks. Lower network diameter means fewer hops from source to destination, which reduces both latency and congestion.
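Port count translates directly into how flat the network can be. The k-ary fat-tree sizes below are the standard results for that topology; the radix values are illustrative:

```python
def fat_tree_hosts(radix):
    """Hosts supported by a three-level k-ary fat tree: k^3 / 4."""
    return radix ** 3 // 4

def two_level_hosts(radix):
    """Hosts supported by a two-level (leaf-spine) fat tree: k^2 / 2."""
    return radix ** 2 // 2

# Radix 64 needs three switch levels to reach ~65k hosts; a WDM-enabled
# radix of 256 reaches half that count in only two levels, so every
# packet crosses fewer hops.
print(fat_tree_hosts(64), two_level_hosts(256))  # → 65536 32768
```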
Sam Dietrich
But switching photonic signals requires either electro-optic conversion at every hop—convert to electrical, route, convert back to optical—or all-optical switching. All-optical switches would avoid conversion overhead, but they're more complex to build. What's the state of the art in photonic switching?
Dr. Keren Bergman
We're seeing both approaches. For many applications, optical-electrical-optical conversion at switches is acceptable because you need buffering and header processing anyway, which is easier in the electrical domain. For ultra-low-latency switching, all-optical approaches using devices like microring resonator switches or Mach-Zehnder interferometers are being explored. These can switch wavelengths without conversion, but they require careful power budgeting and wavelength management.
Kara Rousseau
This raises the abstraction question. How much of the photonic layer complexity should be exposed to the system architect versus hidden in the physical layer? If wavelength assignment and thermal tuning are visible to software, that's additional complexity. If they're hidden, you lose optimization opportunities.
Dr. Keren Bergman
It's a familiar trade-off. Most current systems hide photonic details—to the network layer, an optical link looks like any other high-speed link. But there are emerging proposals for wavelength-aware routing, where higher layers can request specific wavelengths for quality-of-service guarantees or to avoid contention. This requires cross-layer coordination, which adds complexity but can improve efficiency for specialized workloads.
Sam Dietrich
What about reliability? Photonic components can fail in different ways than electrical circuits. Are there unique failure modes we need to worry about?
Dr. Keren Bergman
Laser aging is a concern—optical power degrades over time. Modulators and photodetectors are quite reliable, but contamination or mechanical stress can degrade waveguides. The system needs to monitor optical power levels and adjust drive currents to compensate for aging. Error correction codes and redundancy help, just as they do for electrical links. In some ways, photonic links are more robust because they're less susceptible to electromagnetic interference.
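The monitor-and-compensate loop described here can be sketched as a simple proportional controller; the function, gain, and units are hypothetical, and a real controller would clamp the drive current and filter the measurement:

```python
def adjust_bias(current_ma, measured_mw, target_mw, gain=0.5):
    """One step of a power-monitoring loop: nudge the laser drive
    current toward the level that restores the target received
    optical power. Illustrative proportional control only."""
    return current_ma + gain * (target_mw - measured_mw)

# As the laser ages and measured power sags, drive current creeps up:
bias = 20.0
for measured in (1.0, 0.9, 0.8):
    bias = adjust_bias(bias, measured, target_mw=1.0)
print(round(bias, 2))  # → 20.15
```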
Kara Rousseau
Let's talk about on-chip photonics. Moving from chip-to-chip interconnects to on-chip photonic networks—using light to connect cores on the same die—seems like the logical endpoint. What are the barriers to on-chip photonics?
Dr. Keren Bergman
On-chip distances are short enough that electrical wires still work reasonably well for many applications. The advantage of photonics becomes clear when you need very high bandwidth—imagine connecting thousands of cores with terabit-per-second links—or when you want to avoid the area and power overhead of long global wires. The challenges are integrating lasers and photodetectors at the scale required, managing thermal effects on a die where different regions have different temperatures, and competing with increasingly sophisticated electrical signaling techniques.
Sam Dietrich
There's also the question of whether the bandwidth is actually needed. If memory bandwidth is the bottleneck rather than core-to-core communication, adding photonic interconnects doesn't help. How do you evaluate whether a system is interconnect-limited versus memory-limited?
Dr. Keren Bergman
You profile communication patterns. For embarrassingly parallel workloads with little inter-core communication, interconnect bandwidth doesn't matter much. For tightly coupled workloads—certain scientific simulations, graph analytics, some machine learning algorithms—interconnect can be the limiting factor. The challenge is that workload characteristics vary, so you need flexible architectures that can adapt. Photonics provides a tool for scaling bandwidth when needed, but it's not universally beneficial.
Kara Rousseau
What about power delivery? High-speed photonic components require clean power supplies and generate heat. Does photonic integration create new power delivery challenges?
Dr. Keren Bergman
Lasers are power-hungry compared to passive photonic components. A typical laser might consume tens to hundreds of milliwatts, and you need many lasers for a full system. This adds to the on-chip power density and requires careful thermal design. The good news is that photonic components are relatively tolerant of supply voltage noise compared to high-speed analog circuits, so power delivery requirements are manageable. But yes, integrating photonics increases total system power, even as it reduces interconnect power.
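The laser budget is worth a quick tally; both numbers below are illustrative assumptions, with the per-laser figure drawn from the tens-to-hundreds-of-milliwatts range mentioned above:

```python
# Wall-plug power of the laser array for a many-channel photonic I/O:
lasers = 64            # assumed channel/laser count
mw_per_laser = 50      # assumed per-laser consumption
total_w = lasers * mw_per_laser / 1000
print(total_w)         # → 3.2 (watts of added package power)
```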
Sam Dietrich
Looking forward, what needs to happen for photonics to move from niche applications to mainstream computing systems?
Dr. Keren Bergman
We need continued improvements in integration and cost reduction. III-V laser integration remains expensive—we need either better heterogeneous integration techniques or breakthroughs in silicon-compatible light sources. Standardization would help—common interfaces for photonic dies, agreed-upon wavelength grids, standardized packaging approaches. And we need better design tools so that system architects can co-design photonics and electronics without requiring deep expertise in both domains.
Kara Rousseau
And presumably, applications that genuinely need the bandwidth. If processor scaling slows and we're not building systems with thousands of cores, the demand for ultra-high-bandwidth interconnects diminishes.
Dr. Keren Bergman
That's the uncertainty. The case for photonics strengthens if we move toward massively parallel architectures or if AI accelerators continue to drive demand for high-bandwidth communication. If computing trends toward smaller, more specialized chips connected by high-speed links—chiplets—photonics becomes very attractive. The technology is maturing; the question is whether the application landscape evolves in a direction that values what photonics offers.
Sam Dietrich
Dr. Bergman, this has been an illuminating discussion. Thank you for joining us.
Dr. Keren Bergman
Thank you both. This was wonderful.
Kara Rousseau
That's our program for tonight. Until tomorrow, may your signals propagate cleanly.
Sam Dietrich
And your waveguides remain aligned. Good night.