NVIDIA (NVDA)
Summary
What they do:
Designs the GPUs and the NVLink networking fabric used to train and run virtually every large AI model on earth. Sits at Layer L06 (The Finished Chip) as the dominant merchant AI accelerator vendor, with TSMC fabricating every chip.
Why they matter:
NVIDIA holds 80%+ AI accelerator market share by revenue, and the CUDA software ecosystem — 15+ years deep, 4 million developers, every major AI framework built on it — creates the deepest lock-in in semiconductor history; if NVIDIA stopped shipping, global AI training would halt within months.
Recent performance:
Q4 FY2026 revenue $68B (+73% YoY), data center revenue $62.3B (+75% YoY), EPS $1.62 (beat by 3.6%). Full-year FY2026 data center revenue $194B. Q1 FY2027 guided at $78B, ~$6B above consensus.
Our Verdict
The most structurally dominant company in AI — CUDA lock-in, NVLink fabric monopoly, and 80%+ market share create a position that is fully recognized by the market at ~38x trailing earnings, leaving upside dependent on sustained 50%+ growth through the Rubin transition and beyond.
Structural trends
Structural score: 93/100
Moat: 10/10 (ecosystem monopoly)
AI Exposure (play type): Pure Play, ~90% AI
AI Growth (consensus): ~68% YoY
Relative Value: 77 (Compelling)
Price (live via Yahoo Finance): $196.51 (+3.80%)
Market Cap: $4.8T
P/E Ratio: 40.2
P/S Ratio: 22.1x
52W High: $212.19
52W Low: $95.04
52W Change: +106.8%
Beta: 2.33
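Taken together, the card's own numbers allow a quick growth-adjusted sanity check. This is a back-of-envelope sketch: the PEG-style framing, and using the consensus AI growth figure as the denominator, are our assumptions, not the page's methodology.

```python
# Growth-adjusted multiple from the stat card above (our framing).
pe_trailing = 40.2    # trailing P/E from the card
growth_pct = 68.0     # consensus AI growth, ~68% YoY

peg = pe_trailing / growth_pct
print(f"PEG-style ratio: {peg:.2f}")  # well under 1.0: rich in absolute
                                      # terms, cheap per unit of growth
```

The takeaway matches the verdict: the multiple only looks stretched if the growth rate does not persist.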
Walk into a hyperscaler data center — a building the size of several football fields, humming with tens of thousands of servers. Open one of the server trays. Inside, you will see between four and eight large green circuit boards, each with a GPU chip roughly the size of a postage stamp mounted under a massive heatsink. Those are NVIDIA GPUs — the chips that do the actual mathematical work of training AI models.
Each GPU is connected to stacks of HBM memory (High Bandwidth Memory: fast memory chips stacked vertically and bonded directly onto the GPU package, so data can move between memory and processor without crossing a circuit board). A single NVIDIA Blackwell GPU package is actually two GPU dies and eight HBM memory stacks fused together on a silicon interposer, a precision silicon base plate roughly the size of a drink coaster, manufactured by TSMC using its CoWoS advanced packaging process.
But the GPU is only half the story. Across the rack, NVIDIA's NVLink cables connect GPUs directly to each other at 7,200 gigabits per second in each direction, roughly 18 times faster than the fastest standard Ethernet connection. In a Blackwell NVL72 rack, 72 GPUs are wired together so tightly that the AI software treats them as one enormous processor. That rack-scale integration is what makes NVIDIA different from a company that just sells chips: they sell the entire compute fabric.
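The "18 times" comparison can be checked directly. A quick sketch, assuming 400 Gb/s Ethernet (400GbE) is the "fastest standard Ethernet" baseline the text implies:

```python
# Verify the NVLink-vs-Ethernet speedup claim.
nvlink_gbps = 7200    # NVLink bandwidth cited in the text, Gb/s
ethernet_gbps = 400   # assumed 400GbE baseline

ratio = nvlink_gbps / ethernet_gbps
print(f"NVLink is {ratio:.0f}x faster")  # 18x, matching the text
```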
Human scale reference
A single NVL72 rack costs roughly $3 million, consumes 120 kilowatts of power (enough to run 40 homes), and can train an AI model that would have required an entire data center five years ago.
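The household comparison in the callout works out as follows. This is a sketch: the 3 kW average-household draw and the $0.10/kWh electricity rate are illustrative assumptions of ours, not figures from the page.

```python
# Sanity-check the NVL72 callout's power numbers.
rack_kw = 120                          # rack draw cited in the callout
home_kw = 3.0                          # assumed average household draw

homes_powered = rack_kw / home_kw      # matches the "40 homes" claim

# Rough annual energy and power bill for one rack.
hours_per_year = 24 * 365              # 8,760 hours
annual_kwh = rack_kw * hours_per_year  # ~1.05 million kWh per year
annual_power_cost = annual_kwh * 0.10  # ~$105k/yr at an assumed $0.10/kWh
```

Even at roughly $105k per year, electricity is small next to the ~$3 million hardware cost, which is why demand, not power price, is the binding constraint in the text's framing.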
The reason NVIDIA went from a $400 billion company to a $4.5 trillion company in three years comes down to one architectural shift: AI models got big enough that the only way to train them is to wire thousands of GPUs together and treat them as one machine. That is exactly what NVIDIA's full-stack approach — GPU silicon + NVLink fabric + CUDA software — is built to do. The Blackwell generation entered full production in late 2025 and is sold out through mid-2026. The next generation, Rubin (6 new chips unveiled at CES 2026, first samples shipped to customers in February 2026), enters production in H2 2026. Analyst expectations for 2026 capex across the top five hyperscalers now approach $700 billion, and NVIDIA is the default recipient of that spend.
Supply Chain Dependencies
Upstream Suppliers
foundry · weight 0.9
memory_supplier · weight 0.8
memory_supplier · weight 0.4
memory_supplier · weight 0.3
packaging · weight 0.5
eda_customer · weight 0.8
eda_customer · weight 0.7
test_equipment · weight 0.4
packaging_customer · weight 0.5
ip_customer · weight 0.5
connectivity_customer · weight 0.6
connectivity_customer · weight 0.5
optical_customer · weight 0.7
power_customer · weight 0.7
power_customer · weight 0.5
connector_customer · weight 0.4
Downstream Customers
server_assembler · weight 0.8
server_assembler · weight 0.7
server_assembler · weight 0.5
optical_partner · weight 0.6
networking_partner · weight 0.7
hyperscaler_customer · weight 0.7
hyperscaler_customer · weight 0.6
hyperscaler_customer · weight 0.5
hyperscaler_customer · weight 0.6
hyperscaler_customer · weight 0.4
hyperscaler_customer · weight 0.3
gpu_supplier · weight 0.5
gpu_supplier · weight 0.7
gpu_supplier · weight 0.9
gpu_supplier · weight 0.8
gpu_supplier · weight 0.8
gpu_supplier · weight 0.8
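The weighted lists above read naturally as edges of a dependency graph. A minimal sketch that totals upstream exposure by relationship type — the labels and weights are copied verbatim from the upstream list (the page anonymizes partners by relationship category), while the aggregation itself is our own:

```python
from collections import defaultdict

# Upstream edges as (relationship_type, weight), from the list above.
upstream = [
    ("foundry", 0.9),
    ("memory_supplier", 0.8), ("memory_supplier", 0.4), ("memory_supplier", 0.3),
    ("packaging", 0.5),
    ("eda_customer", 0.8), ("eda_customer", 0.7),
    ("test_equipment", 0.4),
    ("packaging_customer", 0.5),
    ("ip_customer", 0.5),
    ("connectivity_customer", 0.6), ("connectivity_customer", 0.5),
    ("optical_customer", 0.7),
    ("power_customer", 0.7), ("power_customer", 0.5),
    ("connector_customer", 0.4),
]

# Total weight per relationship type: a crude proxy for how concentrated
# NVIDIA's upstream exposure is in each category.
totals = defaultdict(float)
for kind, weight in upstream:
    totals[kind] += weight

for kind, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{kind:22s} {total:.1f}")
```

Note that the single heaviest individual edge is the foundry at 0.9, consistent with the text's observation that TSMC fabricates every chip.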
The Catch
NVIDIA's dominance depends on hyperscaler capex continuing at $700B+ annually — if even two of the four major cloud providers slow their AI infrastructure spend simultaneously, NVIDIA's forward demand softens before the market sees it in earnings. CEO Jensen Huang himself acknowledged the risk obliquely: "compute equals revenues... without investing in compute, there cannot be revenue growth." The inverse is also true: if hyperscalers decide the compute-to-revenue conversion is not delivering adequate returns, the capex tap closes. Custom silicon represents a second structural risk: Broadcom's AI backlog of $73B and growing hyperscaler self-design programs create a ceiling on NVIDIA's long-term market share that the market has not fully discounted.
If They Win
If NVIDIA maintains its training monopoly through Rubin and beyond, and agentic AI demand sustains exponential token growth, they become the Standard Oil of AI compute — the company that collects a toll on every dollar of AI infrastructure spending on earth. FY2028 revenue could exceed $400B, with margins and switching costs that compound for a decade. The Spectrum-X networking business, the Vera CPU, and the NVLink-as-platform strategy (extending to custom silicon from AWS and others) turn NVIDIA from a chip company into the operating system of every AI data center on earth.
Not financial advice. All scores generated via AI algorithms using public data.