STRUCTURAL SHIFT 3

The Optical Rewiring of AI

At AI cluster speeds, copper wires hit a physics ceiling. Light through glass is the only way forward.

Status: Current (1.6T pluggables shipping) → Accelerating (CPO in 2026) → Structural (all-optical racks by 2028–2030)
1. What's Technically Happening

Inside every AI cluster, GPUs talk to each other constantly. During a single large training step, every GPU in the cluster synchronizes gradients with every other GPU, moving terabytes of data per second across the fabric. At frontier cluster sizes (100,000+ GPUs), aggregate bandwidth across the interconnect fabric runs on the order of petabytes per second.
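The scale of that traffic can be sketched with ring all-reduce arithmetic. The model size, gradient precision, and step time below are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope: gradient all-reduce traffic for one training step.
# Model size, precision, and step time are illustrative assumptions.

def allreduce_traffic_per_gpu(model_bytes: float, n_gpus: int) -> float:
    """Ring all-reduce moves ~2 * (N-1)/N * model_size bytes per GPU."""
    return 2 * model_bytes * (n_gpus - 1) / n_gpus

params = 1e12                 # hypothetical 1-trillion-parameter model
grad_bytes = params * 2       # bf16 gradients: 2 bytes each
n_gpus = 100_000
step_time_s = 5.0             # assumed wall-clock time per training step

per_gpu = allreduce_traffic_per_gpu(grad_bytes, n_gpus)   # ~4 TB per GPU
aggregate = per_gpu * n_gpus                              # ~400 PB cluster-wide

print(f"per-GPU traffic:  {per_gpu / 1e12:.1f} TB per step")
print(f"aggregate:        {aggregate / 1e15:.0f} PB per step")
print(f"sustained fabric: {aggregate / step_time_s / 1e15:.0f} PB/s")
```

Even with generous assumptions about step time, the sustained fabric bandwidth lands in the petabytes-per-second range the text describes.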

Copper cables hit a physics ceiling at these speeds. As data rate per electrical lane rises, the signal degrades faster with distance, and the power required to drive the wire scales non-linearly. At 224 Gbps per lane (the rate required for 1.6T Ethernet), usable copper reach collapses to roughly one meter. Beyond one meter, you must use optics. NVIDIA mandated 1.6T optical transceivers for every GB300 rack because rack-to-rack runs exceed that reach.
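The reach collapse follows from skin-effect loss, which grows roughly with the square root of frequency. A toy model reproduces the one-meter figure; the loss coefficient and channel budget below are assumed round numbers, not measured cable specifications:

```python
import math

# Toy model of copper (twinax) reach: skin-effect loss grows ~sqrt(f),
# so each doubling of the lane rate shrinks usable reach.
# Both constants are illustrative assumptions, not cable datasheet values.

LOSS_COEFF_DB = 4.0     # assumed dB per metre per sqrt(GHz)
LOSS_BUDGET_DB = 30.0   # assumed end-to-end channel loss budget

def copper_reach_m(lane_gbps: float) -> float:
    nyquist_ghz = lane_gbps / 4            # PAM4: Nyquist at half the baud rate
    loss_per_m = LOSS_COEFF_DB * math.sqrt(nyquist_ghz)
    return LOSS_BUDGET_DB / loss_per_m

for rate in (56, 112, 224):
    print(f"{rate:>3} Gbps/lane -> ~{copper_reach_m(rate):.1f} m usable copper")
```

With these assumptions, 224 Gbps per lane comes out to roughly one meter of reach, and each previous generation roughly 1.4x more, which is why copper survived at 400G and 800G but not at 1.6T rack-to-rack distances.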

The next step is co-packaged optics (CPO). Today, optical transceivers are pluggable modules sitting at the front of a switch chassis. In CPO, the laser and optical engine move onto the same package as the switch silicon — cutting the electrical signal path to near zero and dropping per-link power from roughly 30W to 9W. NVIDIA's Quantum-X (InfiniBand) CPO switches launched in H2 2025. Spectrum-X (Ethernet) CPO ships H2 2026. Broad-market CPO deployment starts in 2026 with volume ramps through 2027–2028.
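The fleet-level impact of that per-link drop is simple arithmetic. The per-link figures (30 W pluggable vs 9 W co-packaged) come from the text; the link counts are illustrative assumptions:

```python
# CPO power arithmetic: per-link savings scaled to a full deployment.
# Per-link figures are from the text above; link counts are assumptions.

PLUGGABLE_W = 30.0   # pluggable optical transceiver, per link
CPO_W = 9.0          # co-packaged optics, per link

def fleet_savings_mw(n_links: int) -> float:
    """Total power saved, in megawatts, for a given number of optical links."""
    return n_links * (PLUGGABLE_W - CPO_W) / 1e6

for n_links in (10_000, 100_000, 500_000):
    print(f"{n_links:>7,} optical links -> {fleet_savings_mw(n_links):.1f} MW saved")
```

At the hundreds of thousands of links a large AI data center implies, the savings reach multiple megawatts, which is the scale the plain-English section below refers to.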

The Rubin Ultra Kyber NVL576 rack, coming in 2027, uses silicon photonics for all rack-scale interconnects — 576 GPUs sharing optical fabric inside a single rack. This is the first true "all-optical" rack topology. Industry consensus: every AI interconnect will be optical within five years. That drives new dollar demand for electro-absorption modulated lasers (EMLs), silicon photonic ICs, single-mode fiber, MT/MPO fiber connectors, precision fiber-attach assembly, and DSP chips for optical signal recovery.

2. In Plain English

Here's a physics problem with no workaround. When you try to push more and more data through a copper wire, the wire gets hot and the signal gets mushy. There's a speed limit built into the copper itself, and the AI industry has hit it. At the data rates NVIDIA needs between GPUs in a large cluster, a copper cable stops working after about one meter. Go past a meter and the signal degrades beyond repair.

So what do you do? You switch from electricity to light. Instead of sending electrical pulses down a copper wire, you fire laser pulses down a glass fiber. Light through glass has none of copper's problems — it loses almost no strength over distance, it doesn't get hot, and the fiber is thinner and lighter than copper cable. The catch: every time you want to convert from electricity to light, you need four things. A laser to generate the light. A photonic chip to modulate that light with your data. A precision fiber connector to aim the light into the fiber without losing any of it. And a digital signal processor on the other end to clean up what arrives. Every GPU-to-GPU link in a modern AI cluster needs all four.

And the number of links is staggering. NVIDIA's current GB300 rack has hundreds of high-speed links. The next rack (Rubin NVL72) has more. The one after that (Rubin Ultra, codenamed Kyber) packs 576 GPUs into a single rack and uses light for every single interconnect between them. For every AI factory being built, there is a matching factory somewhere else making lasers, fiber, and photonic chips — and those factories are running at capacity.

The next evolution is called co-packaged optics. Today, the laser and its optics live in a separate plug-in module on the front of a switch. Tomorrow, the laser sits on the same package as the switch chip itself. That simple move reduces the power per link from 30 watts down to 9 watts — saving megawatts of electricity across a full data center. The first co-packaged optical switches are shipping in 2026. By 2028 this is the only way you'll be able to build a frontier AI rack. This is a once-per-decade rewiring of the entire data center fabric.

3. Who Benefits Most

Beneficiaries are ranked by the directness of their exposure.

Primary beneficiaries

Direct, first-order exposure. If the trend plays out, these are the names that capture the majority of the value.

Coherent Corp (COHR)

Leader in high-speed transceivers, EML lasers, and vertical integration from laser chip to finished module. Every 1.6T link is real dollar revenue for Coherent. Largest single public-market beneficiary.

Lumentum Holdings (LITE)

Indium phosphide laser chips and photonic ICs. Directly competing with Coherent at the high end and winning share on CPO engines. NVIDIA silicon photonics partner.

Ciena Corporation (CIEN)

Coherent optics and transport for data center interconnect (DCI). As AI workloads spread across buildings and metros, DCI volume explodes.

Secondary beneficiaries

Real exposure, but competing with alternatives or dependent on adjacent theses playing out.

Applied Optoelectronics (AAOI)

Volume transceiver supplier. Benefits from raw unit count growth across 400G, 800G, and 1.6T tiers.

Credo Technology (CRDO)

Active electrical cables and DSPs for optical links. Bridge supplier during the copper-to-optics transition and beyond.

MACOM Technology (MTSI)

RF and mixed-signal analog chips inside high-speed optical modules. Content per module rising at each speed bump.

Semtech (SMTC)

PMD-side chips and signal conditioning for optical links and high-speed wireline.

Corning (GLW)

The fiber itself. Every new optical link is new fiber and new connectors. Corning supplies both at industrial volume.

Picks and shovels

Enabling suppliers whose revenue scales with the trend regardless of which frontline vendor wins.

POET Technologies (POET)

Silicon photonics platform, early-stage but positioned for the CPO era.