The memory industry is waking up again, but don’t let the familiar patterns fool you. While it might look like another classic rebound on the surface, the DNA of this cycle has fundamentally changed.
We aren’t just looking at inventory normalization or a temporary price hike. We are witnessing a structural shift with AI as its heartbeat. Here is why the old playbook no longer applies.
1. The End of “Wait and See” Investing
In the past, the memory cycle was a predictable dance: prices recovered, margins expanded, CapEx grew, and shipments followed. We are entering that phase now, but with a twist.
Rising prices have given producers their safety margins back, allowing them to flip the switch on large-scale capital expenditure. However, unlike previous cycles where investment followed demand, the AI era requires producers to build ahead of the curve.
2. DRAM: AI Servers Have Killed Seasonality
Remember when server DRAM demand used to ebb and flow with the seasons? Those days are over.
AI servers operate on a different logic. We are seeing continuous infrastructure expansion and capacity being built out well before it’s “needed.” This isn’t a seasonal trend; it’s a structural constant.
- Projected AI Server Shipments (2026): ~2.61 million units (+22% YoY)
- Market Share Shift: AI servers will jump from 9% of the total server market in 2023 to 17% by 2026.
This transition effectively flattens the traditional memory cycle, replacing volatility with steady, structural growth.
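The baselines implied by those two bullets are worth a quick sanity check. Below is a minimal sketch, assuming the +22% YoY and the 9%-to-17% share figures refer to the same unit base (total server shipments); the 2025 and total-market numbers are derived here, not quoted from the source.

```python
# Back-of-the-envelope check on the AI server figures above.
# Assumption: the +22% YoY growth and the 9% -> 17% share figures
# use the same unit base (total server shipments), which the source
# implies but does not state outright.

ai_2026 = 2.61e6   # projected AI server units, 2026
yoy = 0.22         # +22% YoY growth into 2026

ai_2025 = ai_2026 / (1 + yoy)   # implied 2025 AI server shipments
total_2026 = ai_2026 / 0.17     # implied total server market, 2026

print(f"Implied 2025 AI server shipments: {ai_2025 / 1e6:.2f}M units")
print(f"Implied 2026 total server market: {total_2026 / 1e6:.1f}M units")
# -> roughly 2.14M AI servers in 2025 and ~15.4M total servers in 2026
```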
3. HBM: The New Core of the Ecosystem
High Bandwidth Memory (HBM) is no longer a niche product—it is the center of the semiconductor universe. Looking at GPU roadmaps, the memory requirements are staggering:
- 2025–2026: ~288GB HBM per NVIDIA GPU
- 2027 Forecast: ~1,024GB per GPU
That is roughly a 3.6x increase in per-GPU memory capacity in just two years. Because HBM is critical for both GPUs and custom AI ASICs, it has turned memory investment into a forward-looking infrastructure cycle rather than a reactive one.
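The multiplier and the implied annual growth rate fall out of the two roadmap points directly. Below is a minimal sketch, assuming the ~288GB and ~1,024GB figures bracket a two-year span; the per-year rate is derived, not quoted.

```python
# Growth implied by the HBM-per-GPU roadmap points above.
hbm_2025 = 288    # GB per GPU, 2025-2026 generation
hbm_2027 = 1024   # GB per GPU, 2027 forecast
years = 2

multiple = hbm_2027 / hbm_2025
cagr = multiple ** (1 / years) - 1   # compound annual growth rate

print(f"Capacity multiple: {multiple:.2f}x")   # ~3.56x
print(f"Implied annual growth: {cagr:.0%}")    # ~89% per year
```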
4. NAND: The Silent Giant of AI Inference
While HBM gets all the headlines for training, NAND is the engine of inference. AI inference workloads require massive storage bandwidth, and NVIDIA’s emerging ICMS (Inference Context Memory Storage) architecture signals that storage is the next bottleneck.
The numbers for NVIDIA-related SSD demand are explosive:
- 2026: ~35 million TB (35 EB)
- 2027: ~120 million TB (120 EB)
This isn’t incremental growth; it’s a total reimagining of what an AI server requires—roughly 1,162TB of SSD per server.
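Those figures imply both a steep year-over-year multiple and a ceiling on how many fully provisioned servers each year’s capacity could cover. Below is a minimal sketch, assuming every terabyte ships at the ~1,162TB-per-server configuration, which the source does not actually claim; in practice only a subset of AI servers may be provisioned that densely.

```python
# Implied growth and server coverage from the SSD demand figures above.
# Assumption: all quoted capacity ships at the ~1,162TB-per-server
# configuration (a simplifying assumption, not a source claim).

ssd_2026_tb = 35e6    # NVIDIA-related SSD demand, 2026 (35 EB)
ssd_2027_tb = 120e6   # 2027 (120 EB)
tb_per_server = 1162

print(f"YoY multiple: {ssd_2027_tb / ssd_2026_tb:.1f}x")              # ~3.4x
print(f"Servers covered in 2026: {ssd_2026_tb / tb_per_server:,.0f}")  # ~30,100
print(f"Servers covered in 2027: {ssd_2027_tb / tb_per_server:,.0f}")  # ~103,300
```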
5. The Ripple Effect: Equipment and Materials
In the equity markets, there is a specific sequence to these things. We’ve already seen memory prices rise and producers rerate. Now, the cycle is moving toward the “engine room”: equipment and materials suppliers.
As capacity expansion kicks into gear, we expect a massive surge in equipment bookings and materials consumption. Historically, this is the phase where the supply chain sees its strongest earnings leverage.
Final Thought
This isn’t a “smartphone” or “PC” cycle. This is a complete overhaul of server architecture driven by exponential data movement. By moving away from consumer-led demand toward AI infrastructure, the memory industry is becoming less volatile and more structural.
The cycle hasn’t just turned; it has evolved.