ANET
Arista Networks
Summary
What they do:
Builds the high-speed Ethernet switches that connect every GPU to every other GPU inside hyperscaler AI clusters: the network fabric sitting between servers that routes training and inference traffic at 400G to 800G per port, with 1.6T imminent, dominating the networking layer of the AI infrastructure stack.
Why they matter:
AI training requires thousands of GPUs sharing data constantly, and idle GPUs waiting on slow networks are the most expensive wasted resource in computing. Arista's EOS software and 7800R spine switches are the default fabric inside AI deployments at Meta, Microsoft, and two other hyperscalers, with switching costs measured in engineering years, not dollars.
Recent performance:
Q4 2025 revenue $2.49B (+28.9% YoY), beating the high end of guidance. Full year 2025 revenue $9.0B (+28.6%), operating margin 48.2%. First quarterly net income above $1B. 2026 guidance raised to $11.25B (+25% YoY) with AI networking at $3.25B.
Our Verdict
Dominant AI networking franchise with 48% operating margins, $3.25B AI revenue target doubling year-on-year, and EOS software lock-in at every major hyperscaler — but at ~40x forward earnings, the stock prices in strong execution, and customer concentration plus NVIDIA Spectrum-X competition bound the upside.
Structural trends
Structural: 71/100
Moat: 8/10 (EOS software lock-in + hyperscaler operational stickiness)
AI Exposure: High (~45% AI)
Play Type: Established
AI Growth: ~40%+
Rel. Value: 63 (Attractive)
Price (live): $154.37 (+1.55%)
Live via Yahoo Finance · refreshes every 5 min
Market Cap: $194.4B
P/E Ratio: 56.1
P/S Ratio: 21.6x
52W High: $164.94
52W Low: $66.59
52W Chg: +131.8%
Beta: 1.48
Walk into the networking room of a hyperscaler data center — a space separate from the server halls, often climate-controlled independently, where rows of network switches are stacked in racks connected by thousands of fiber-optic cables. Each cable carries data at 400 or 800 gigabits per second — soon 1.6 terabits. The switches directing that traffic are overwhelmingly Arista.
The physical architecture follows a hierarchy. At the bottom, "leaf" switches sit in each server row, connecting individual servers to the network — one switch might connect 32 or 64 servers. Above them, "spine" switches connect the leaf switches to each other, creating a fabric where any server can reach any other server. In the largest AI clusters, there may be a third "super-spine" tier connecting multiple spine pods together. Arista makes switches for every tier.
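The capacity arithmetic behind a two-tier leaf-spine fabric can be sketched with a toy calculation. The switch radix and oversubscription ratio below are illustrative assumptions, not Arista product specs, and the same-switch-at-both-tiers simplification is ours:

```python
def leaf_spine_capacity(radix: int, oversub: float = 1.0) -> dict:
    """Estimate server capacity of a 2-tier leaf-spine fabric.

    radix:   ports per switch (same switch model assumed at both tiers)
    oversub: leaf downlink:uplink ratio (1.0 = non-blocking)
    """
    uplinks = int(radix / (1 + oversub))   # leaf ports facing the spines
    downlinks = radix - uplinks            # leaf ports facing servers
    spines = uplinks                       # each leaf uplink lands on a distinct spine
    leaves = radix                         # each spine port feeds one leaf
    return {"spines": spines, "leaves": leaves, "servers": leaves * downlinks}

# Non-blocking fabric from 64-port switches: 64 leaves x 32 servers each.
print(leaf_spine_capacity(64))  # {'spines': 32, 'leaves': 64, 'servers': 2048}
```

Scaling past the two-tier limit is exactly why the largest AI clusters add the super-spine tier: with the same radix, a third tier multiplies reachable servers again.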
For AI-specific clusters — where GPUs communicate via the data center's Ethernet network (as opposed to NVLink, which operates within a rack) — Arista's switches handle the "east-west" traffic: data flowing between GPU servers during distributed training. This traffic is enormous. In a training run, every GPU needs to share gradient updates with every other GPU after each computation step. The speed and latency of Arista's switches directly affect how fast the model trains. Meta built a 24,576-GPU AI cluster on Arista's switching fabric. Three of Arista's four hyperscaler AI customers have deployed more than 100,000 cumulative GPUs on Arista Ethernet, with the fourth migrating from InfiniBand.
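Why link speed translates directly into training speed can be approximated with the standard ring all-reduce bandwidth model, in which each GPU moves roughly 2(N-1)/N times the gradient size per step. The model size, GPU count, and link speeds below are illustrative assumptions, not figures from Arista or its customers:

```python
def allreduce_seconds(param_count: float, bytes_per_param: int,
                      n_gpus: int, link_gbps: float) -> float:
    """Bandwidth-only lower bound for one ring all-reduce of the gradients.

    Ignores switch latency, congestion, and overlap with compute.
    """
    grad_bytes = param_count * bytes_per_param
    # Ring all-reduce: each GPU sends/receives ~2*(N-1)/N * grad_bytes.
    wire_bytes = 2 * (n_gpus - 1) / n_gpus * grad_bytes
    link_bytes_per_s = link_gbps * 1e9 / 8
    return wire_bytes / link_bytes_per_s

# 70B-parameter model, fp16 gradients, 1024 GPUs, one link per GPU:
t400 = allreduce_seconds(70e9, 2, 1024, 400)
t800 = allreduce_seconds(70e9, 2, 1024, 800)
print(f"400G: {t400:.1f}s per step, 800G: {t800:.1f}s per step")
```

Doubling the per-port speed halves this floor, which is the economic logic behind the 400G-to-800G-to-1.6T upgrade cycle: every second shaved off the exchange is a second thousands of GPUs are not idle.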
GPU clusters require dramatically more networking than traditional cloud workloads. A traditional cloud server might need one or two 100G network connections. An AI GPU server needs multiple 400G or 800G connections — one for each GPU's network path. The networking spend per GPU is 3-5x higher than networking spend per traditional server. Arista's AI networking revenue target of $3.25B for 2026 — up from near-zero three years ago — is a direct consequence of this multiplication.
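The bandwidth multiplication above reduces to simple arithmetic. The NIC counts and speeds here are illustrative assumptions for a generic cloud server versus a generic 8-GPU server:

```python
# Illustrative per-server network bandwidth, not vendor figures.
traditional = 2 * 100   # Gb/s: two 100G NICs on a typical cloud server
gpu_server = 8 * 400    # Gb/s: one 400G network path per GPU, 8 GPUs

print(gpu_server / traditional)  # raw bandwidth multiple per server
```

The raw bandwidth multiple (16x here) exceeds the 3-5x spend multiple because per-gigabit switch cost falls with each speed generation; the spend ratio is what flows into Arista's revenue.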
Arista is a pure-play data center networking company. Full year 2025 revenue was $9.0B, with core cloud/AI products at 65% of revenue, campus and routing adjacencies at 18%, and subscription software and services at 17%. The company had approximately 5,200 employees exiting 2025 and more than 10,000 cumulative customers. Cloud and AI titans contributed 48% of 2025 revenue, enterprise and financials 32%, and AI and specialty providers (including Oracle and emerging neo clouds) 20%.
Supply Chain Dependencies
Upstream Suppliers
chip_supplier · weight 0.7
optical_supplier · weight 0.6
networking_partner · weight 0.7
component_customer · weight 0.4
component_customer · weight 0.4
analog_customer · weight 0.3
connector_customer · weight 0.4
materials_customer · weight 0.4
The Catch
Arista's two largest customers together represent 36-40% of revenue, and four hyperscaler customers drive the vast majority of AI networking growth. If any one of these customers redirects AI networking spend toward NVIDIA's Spectrum-X (which bundles switching with GPU connectivity) or toward custom switch silicon, Arista's AI growth story decelerates at exactly the moment the $186B market cap prices in acceleration. The memory cost situation adds a near-term margin risk — Jayshree Ullal described current memory prices as "horrendous" and "an order of magnitude exponentially higher," and the company has already committed to price increases on memory-intensive SKUs. Unlike packaging or chip companies that face multi-year supply buildouts, Arista's risk is concentrated in customer decisions, not physical supply constraints.
If They Win
If Arista maintains hyperscaler switching dominance through the 1.6T transition, successfully defends against NVIDIA Spectrum-X, and diversifies to five or more hyperscaler-scale customers, they become the networking backbone of every AI data center on earth — the infrastructure that every GPU cluster runs on, with software lock-in, operational stickiness, and a growing share of the most capital-intensive buildout in technology history. The 800G-to-1.6T transition beginning in 2027 would drive a multi-year switch refresh cycle at higher ASPs, while campus and enterprise expansion to $1.25B+ provides a second growth engine less dependent on hyperscaler concentration. At 48% operating margins with zero debt and $10.7B cash, the financial model compounds at a rate few technology companies can match.
Others in "Connect Servers to Other Servers"
Not financial advice. All scores generated via AI algorithms using public data.