
AI infrastructure is entering a new phase.
For the last few years, the market focused mostly on compute — GPUs, accelerators, and HBM. But as those components become more powerful, the bottleneck increasingly shifts to how data moves between them. That is why optical interconnects are becoming so important. The next constraint in AI infrastructure is not just processing power. It is connectivity. OFC 2026 strongly reinforced that point, with the conference highlighting optical networking as a core enabler of the next era of AI data centers.
1. Why now? Because AI is creating a connectivity bottleneck
As GPU and HBM performance rises, the need for faster interconnects rises with it.
The reason is simple: higher memory bandwidth and more capable accelerators only create real system-level gains if the surrounding network can keep up. That is why optical interconnect demand tends to rise alongside advances in GPUs and HBM. Better chips create more traffic, and more traffic increases the value of photonics. In that sense, the next bottleneck in AI infrastructure is shifting from compute to connectivity.
2. What OFC 2026 confirmed
OFC 2026 made one thing very clear: the transition toward optical networking is no longer optional.
The event itself described strong global momentum around AI infrastructure and the optical technologies enabling it. At the same time, the market discussion around 1.6T and beyond was centered primarily on pluggable modules, which tells you where the near-term commercial urgency still sits. In other words, the structural demand is real, but investors still need to focus on where deployment and revenue are happening first.
That is an important distinction.
The biggest opportunities in the next few years may not come from owning “all optics” in some vague sense. They are more likely to come from the specific bottleneck layers where supply is concentrated, technical barriers are high, and customers are already spending.
3. AI data-center optics need to be viewed in three layers
The optical opportunity inside AI data centers is not one single market.
It is better understood as three separate layers.
The first is scale-up, which usually refers to the shortest and fastest links inside a server, rack, or tightly coupled pod. The second is scale-out, which connects large numbers of GPU servers across the data center fabric. The third is scale-across, meaning links between buildings, campuses, or even regions. These segments do not have the same economics, the same technology choices, or the same timing.
That is why a broad statement like “optical demand is rising” is true but incomplete.
To understand who makes money, you have to ask which layer is scaling first, where copper breaks down, and where pluggables, switching, or new packaging architectures actually get adopted.
4. Copper is running into physical limits
One of the strongest arguments for optics is not fashion. It is physics.
As link speeds rise, the usable reach of direct-attached copper gets shorter. Broadcom wrote in March 2026 that with SerDes rates at 100 Gbps per lane, the effective reach of DAC had already shrunk to around 5 meters. Looking ahead, the challenge becomes even tougher as the industry pushes toward 200G-per-lane signaling.
That is why the market keeps moving toward optical and assisted-copper approaches.
Interestingly, this does not mean copper disappears immediately. Marvell’s 1.6T PAM4 DSP for active electrical cables was designed specifically to extend 200G/lane copper connectivity to more than 3 meters, which shows how hard the industry is working to squeeze a little more life out of copper inside the rack. But the need for that kind of signal conditioning also proves the broader point: the raw physical limits are getting tighter, and optics becomes more necessary as bandwidth scales further.
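The copper-versus-optics decision described above can be sketched as a simple lookup. The reach figures are the two data points cited in this section (Broadcom's ~5 m DAC reach at 100G/lane, Marvell's >3 m DSP-assisted AEC reach at 200G/lane); this is an illustrative decision rule, not a physical channel model, and the function name is my own:

```python
# Hedged sketch: choose an interconnect medium from link length and lane rate,
# using only the reach figures cited in the text. Not a channel model.

# (lane rate in Gbps) -> approximate usable copper reach in meters
COPPER_REACH_M = {
    100: 5.0,  # ~5 m passive direct-attach copper (Broadcom figure)
    200: 3.0,  # >3 m active electrical cable, DSP-assisted (Marvell figure)
}

def medium_for_link(length_m: float, lane_gbps: int) -> str:
    """Return 'copper' if the cited copper reach covers the link, else 'optical'."""
    reach = COPPER_REACH_M.get(lane_gbps, 0.0)
    return "copper" if length_m <= reach else "optical"

print(medium_for_link(3.0, 100))   # in-rack link at 100G/lane -> copper
print(medium_for_link(3.0, 200))   # same link at 200G/lane -> copper, but only with AEC
print(medium_for_link(10.0, 200))  # cross-rack link at 200G/lane -> optical
```

The point the sketch makes is the one in the text: as the lane rate doubles, the copper row of the table shrinks, and a growing share of links fall through to the optical branch.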
5. Why OCS matters more near term than many people think
A lot of market attention goes to co-packaged optics (CPO), but in practical architecture terms, optical circuit switching, or OCS, may matter sooner than many expect.
Google’s Apollo paper described what it called the first large-scale production deployment of optical circuit switches for datacenter networking. The paper argues that OCS is attractive because it is data-rate and wavelength agnostic, low latency, and extremely energy efficient, and because it steers light without intermediate processing. That is a powerful idea for AI fabrics, where energy and scale are both becoming more difficult constraints.
So the near-term framing may be this:
CPO is a long-term direction, but OCS is already proving architectural value.
That does not mean OCS replaces everything, but it does suggest that switching photonics — not just packaging photonics — deserves much more attention in the AI data-center conversation.
6. Pluggables are likely to make the money first
This is probably the most important market point.
The long-term direction of the industry may include CPO, near-packaged optics (NPO), and deeper photonic integration. But the more immediate commercial wave still appears to be pluggables. OFC’s own 2026 market panel on 1.6Tbps and beyond explicitly said the discussion would focus primarily on pluggable form factors, which is a strong signal about where the market sees near-term adoption.
That aligns with what suppliers actually showcased.
At OFC 2026, Coherent announced demonstrations spanning 1.6T, 3.2T, and emerging architectures for 12.8T and beyond, while emphasizing that AI infrastructure is driving an accelerated transition to higher-speed pluggable architectures. It also specifically demonstrated multiple 1.6T transceivers and early 3.2T pluggable technologies. In other words, the commercial path for the next two to three years is likely to be shaped more by 1.6T and then 3.2T pluggables than by full-scale CPO deployment.
That is why the investment takeaway is not just “buy optics.”
It is to focus on the parts of the stack that sit directly inside the current shipping roadmap. In this cycle, the nearer-term revenue center still looks more pluggable than co-packaged.