Constraint 2: The Memory Supercycle

A GPU without memory is useless. Three companies make all the memory, and they are sold out through 2026.

Status: Current — SK Hynix CFO confirmed 2026 HBM supply fully sold out

What's Technically Happening

Every modern AI chip is paired with stacks of high-bandwidth memory (HBM): DRAM dies stacked vertically, wired together through thousands of microscopic through-silicon vias, and mounted beside the GPU on the same package. An NVIDIA Blackwell package uses 8 HBM3E stacks. The Rubin GPU that follows in 2026 will use HBM4, with more dies per stack and more stacks per package. Bandwidth per stack, dies per stack, and stacks per package are all rising at once, generation after generation.
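
To see how quickly HBM content per chip compounds, here is a back-of-the-envelope sketch in Python. Only Blackwell's 8-stack count comes from the text above; the per-stack capacities and the Hopper and Rubin figures are rough public numbers and assumptions, not data from this report.

```python
# Rough HBM capacity per GPU package across generations.
# Only Blackwell's 8-stack count is taken from the text above; per-stack
# capacities and the Hopper/Rubin figures are illustrative assumptions.

generations = {
    # name            (HBM stacks, GB per stack)
    "H100  (HBM3)":   (5, 16),   # ~80 GB per package
    "B200  (HBM3E)":  (8, 24),   # ~192 GB per package
    "Rubin (HBM4)":   (8, 36),   # assumed ~288 GB per package
}

for name, (stacks, gb_per_stack) in generations.items():
    total_gb = stacks * gb_per_stack
    print(f"{name}: {stacks} stacks x {gb_per_stack} GB = {total_gb} GB of HBM")
```

Under these assumptions, HBM gigabytes per package roughly triple across two generations, so memory demand grows faster than GPU unit shipments even before counting the hyperscalers' custom accelerators discussed below.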

HBM is made by three companies: SK Hynix, Samsung, and Micron. SK Hynix holds roughly 62% of HBM shipments as of Q2 2025, and its CFO publicly stated that the company's entire 2026 HBM supply was fully sold out as of mid-2025. HBM4 mass production began in February 2026 to align with Rubin's launch window. HBM4 requires more complex through-silicon via bonding, a thinner base die, and tighter yield control than HBM3E, all of which compress the effective output per wafer.
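
The yield-compression point can be made concrete with a toy calculation: stack assembly yield compounds with every die bonded, so taller stacks turn the same wafer output into fewer sellable units. The per-bond yield and stack heights below are hypothetical, chosen only to illustrate the effect.

```python
# Toy model of why taller stacks compress effective output per wafer:
# assembly yield compounds once per bonded die. Numbers are hypothetical.

per_die_bond_yield = 0.99  # assumed success rate of each die-bonding step

for generation, dies_per_stack in [("HBM3E, 8-high", 8), ("HBM4, 16-high", 16)]:
    stack_yield = per_die_bond_yield ** dies_per_stack
    print(f"{generation}: assembly yield ~ {stack_yield:.1%}")

# ~92% for the 8-high stack vs ~85% for the 16-high stack at the same
# per-bond yield, before counting the thinner base die and tighter TSV
# tolerances that push the real number lower still.
```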

Downstream pressure is visible in pricing. DRAM spot prices rose 40–50% through the first half of 2026. SK Hynix has publicly warned the memory wafer shortage could extend until 2030. In October 2025, Samsung and SK Hynix signed a letter of intent with OpenAI's Stargate project to eventually supply 900,000 DRAM wafers per month — a figure that would consume a meaningful share of global fab output. Micron's 2026 HBM capacity was fully booked as of its Q2 2026 earnings.
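
How large is 900,000 wafers per month? The text does not give global DRAM capacity, so the sketch below tries a range of assumed figures; under any plausible assumption, the LOI represents a very large slice of output.

```python
# Put the Stargate LOI (900,000 DRAM wafers/month) in context.
# Global DRAM wafer-start capacity is NOT given in the text; the figures
# below are assumed ballparks, purely for illustration.

stargate_wafers_per_month = 900_000

for assumed_global_capacity in (1_500_000, 2_000_000, 2_500_000):
    share = stargate_wafers_per_month / assumed_global_capacity
    print(f"vs {assumed_global_capacity:>9,} wafers/mo -> {share:.0%} of output")
```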

The critical insight: the HBM shortage is the one supply constraint that directly blocks GPU shipments. TSMC can make the chip; if there is no HBM stack to pair with it, the chip cannot be sold. Unlike CoWoS, where expansion capex is already in flight, DRAM fab expansion takes 3–5 years from decision to first wafer. There is no short path out of this.
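
A minimal sketch of that bottleneck logic, with hypothetical monthly volumes (only the 8-stacks-per-GPU figure echoes the Blackwell example above; everything else is made up for illustration):

```python
# Shippable GPUs are gated by the scarcest input, not by logic wafers alone.
# All monthly volumes below are hypothetical, for illustration only.

gpu_dies_available   = 500_000    # packaged GPU dies TSMC could supply
cowos_capacity       = 450_000    # packages the CoWoS lines could assemble
hbm_stacks_available = 3_000_000  # HBM stacks the three memory makers ship
stacks_per_gpu       = 8          # e.g. a Blackwell-class package

shippable_gpus = min(
    gpu_dies_available,
    cowos_capacity,
    hbm_stacks_available // stacks_per_gpu,  # 3,000,000 / 8 = 375,000
)
print(f"Shippable GPUs this month: {shippable_gpus:,}")
# -> 375,000: HBM, not wafers or packaging, sets the ceiling here.
```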


In Plain English

A GPU without memory is like an engine without a gas tank. It can spin, but it cannot run. The "gas tank" for an AI chip is HBM — high-bandwidth memory — and it isn't a single chip. It's a stack of memory chips piled on top of each other like a sandwich, welded together with thousands of tiny vertical wires going through the entire pile. Each new generation of AI chip needs more of these sandwiches, and each sandwich is taller and thicker than the last.

Only three companies in the world can make these memory sandwiches: SK Hynix, Samsung, and Micron. SK Hynix alone supplies more than half the world's HBM. The CFO of SK Hynix said in mid-2025 that they had already sold every sandwich they planned to make in all of 2026. Micron said the same thing on their next earnings call. The factories are running flat out and the orders are booked months ahead.

What makes this worse is that you cannot build new memory factories quickly. A DRAM fab takes 3 to 5 years to go from "we signed a contract" to "the first chip ships." You cannot spend your way out on a quarterly timeline. So for the next several years, the number of AI chips that can physically ship is limited by the number of memory sandwiches SK Hynix, Samsung, and Micron can produce. That's why DRAM prices jumped 40–50% in the first half of 2026 — it's a pure supply squeeze.

And demand keeps compounding. Every new NVIDIA chip uses more HBM than the one before it. Every hyperscaler's custom chip — Google's TPU, Amazon's Trainium, Microsoft's Maia — also uses HBM. OpenAI's Stargate project alone is aiming to secure 900,000 DRAM wafers per month, which is a meaningful chunk of the world's entire DRAM output. The memory supercycle is not a cyclical upswing. It's a structural rewiring of demand for the rest of the decade.


Who Benefits Most

Beneficiaries are ranked by the directness of their exposure. Tickers that exist in our explorer link to the company brief.

Primary beneficiaries

Direct, first-order exposure. If the trend plays out, these are the names that capture the majority of the value.

MU: Micron Technology

The only US-listed HBM maker. Fully booked through 2026 and sold out for 2027. Every increase in HBM content per GPU expands Micron's addressable revenue. The purest public-market bet on this trend.

000660.KS: SK Hynix (Korea)

Market leader at roughly 62% HBM share. Directly paired with NVIDIA Rubin. Dominant HBM4 position through at least 2027. Korea-listed.

005930.KS: Samsung Electronics (Korea)

#2 HBM producer, also diversified across DRAM, NAND, and foundry. A less pure play, but still a primary beneficiary of the supercycle.

Secondary beneficiaries

Real exposure, but these names compete with alternatives or depend on adjacent calls playing out.

RMBS: Rambus

Licenses the physical-layer IP inside DRAM interfaces. More DRAM means more royalties. A hidden beneficiary that collects from every DRAM maker.

LRCX: Lam Research

Etch equipment for every new DRAM fab expansion. Each HBM capacity announcement is a future Lam order.

AMAT: Applied Materials

Deposition and materials engineering tools for DRAM. Benefits from expansion capex announced by SK Hynix and Samsung.

KLAC: KLA Corporation

HBM stacking is extremely yield-sensitive, and KLA's inspection content per HBM stack is among its fastest-growing revenue streams.

Picks and shovels

Enabling suppliers whose revenue scales with the trend regardless of which frontline vendor wins.

ENTG: Entegris

Specialty chemicals, filtration, and materials used in every DRAM fab. Direct consumables exposure.

CAMT: Camtek

Optical inspection and metrology for advanced packaging and HBM stacks. Niche but high-growth.