Fueling AI-driven data centers with explosive market growth
The explosive growth of AI, particularly large language models and generative AI, is driving the adoption of co-packaged optics (CPO). AI workloads need high bandwidth, low latency, and energy efficiency to connect millions of GPUs in hyperscale data centers or "AI factories." Key drivers include surging data-transfer volumes, energy efficiency, scalability, and industry investment.
In scale-out networks, CPO enables long-distance, high-bandwidth connections (e.g., between racks) with lower latency and power, ideal for AI-driven cloud fabrics and Ethernet/InfiniBand networks. Pluggables will remain at compute nodes until CPO matures. In scale-up AI networks, CPO replaces copper, offering better connectivity, longer reach, and lower power for GPU-to-GPU or node-to-switch fabrics, vital for AI training and HPC. Initial CPO deployments will target scale-up networks before expanding to scale-out.
At GTC 2025, NVIDIA unveiled its Spectrum-X Photonics and Quantum-X Photonics silicon photonics switches, a milestone for CPO in AI infrastructure. These switches use CPO to connect GPUs through 1.6 Tbps ports. NVIDIA's adoption of CPO for its Rubin architecture overcomes NVLink's limits, enabling faster, more scalable, lower-power interconnects.
The CPO market, valued at $46M in 2024, is projected to reach $8.1B by 2030, with a 137% CAGR, driven by shifts from pluggables to CPO and copper to optics, addressing power, density, scalability, bandwidth, and distance constraints.
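The projection above is internally consistent: growing $46M (2024) to $8.1B (2030) over six years does imply roughly a 137% compound annual growth rate, as this quick check shows.

```python
# Sanity check of the market projection: $46M in 2024 to $8.1B in 2030.

def cagr(start_value, end_value, years):
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(46e6, 8.1e9, 2030 - 2024)
print(f"{growth:.0%}")  # ~137%
```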
CPO: a complex ecosystem for scalable network connectivity
Co-packaged optics (CPO) integrates optical transceivers with switch ASICs or processors for high-bandwidth, low-power interconnects in scale-out (cloud fabrics) and scale-up (AI/GPU clusters) networks. The CPO supply chain involves semiconductor foundries, photonics manufacturers, packaging providers, and fiber optic specialists. Key players such as NVIDIA, TSMC, Broadcom, Coherent, and the hyperscalers drive demand, fueled by AI workloads.
The supply chain spans raw materials, components, integration, and deployment:
- Raw materials: silicon wafers (Shin-Etsu), SOI (Soitec), indium phosphide (AXT), and glass (Schott) support ASIC and photonic-circuit co-packaging; scale-out networks use cost-effective substrates, while scale-up networks need high-performance materials.
- Photonic integrated circuits (PICs): Lumentum, Coherent, and Intel supply lasers and transceivers, with scale-out using standard PICs and scale-up requiring custom PICs for NVLink.
- Switch ASICs: Broadcom and NVIDIA target high port density or low latency.
- Fibers and connectors: optical fibers (Corning) and connectors (Foxconn) enable long-reach (scale-out) or high-density (scale-up) links.
- Packaging: TSMC's CoWoS and ASE lead, with scale-out prioritizing cost and scale-up demanding density.
- Assembly and test: Foxconn and Keysight ensure reliability.
- System integration: companies like Cisco deploy CPO switches for interoperable cloud (scale-out) or custom AI (scale-up) systems.
CPO evolves to meet AI demands, with scale-out focusing on cost and volume, and scale-up on performance and customization, transforming data center connectivity.
CPO as the backbone of scalable AI and cloud infrastructures
Driven by AI demand, co-packaged optics (CPO) is transforming data centers, with NVIDIA, Broadcom, and TSMC leading the shift. CPO relies on photonic integrated circuits (PICs) combining lasers, modulators, and waveguides for efficient electro-optical signal conversion. Scale-out networks use standard PICs for cost-effective Ethernet switches, while scale-up networks need custom PICs for high-capacity AI interconnects like NVLink, reaching terabit-scale throughput via PAM-4 or NRZ modulation. Switch ASICs built on TSMC's 5nm/3nm processes enable efficient routing.
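The modulation formats mentioned above trade symbol rate for bits per symbol: NRZ carries 1 bit per symbol, PAM-4 carries 2, so at the same baud rate PAM-4 doubles the raw lane rate. The figures below are illustrative assumptions, not numbers from the article.

```python
# Illustrative lane-rate arithmetic for NRZ vs. PAM-4 signaling.
# Raw lane rate = symbol rate (baud) x bits per symbol.

def lane_rate_gbps(baud_gbaud, bits_per_symbol):
    """Raw per-lane data rate in Gbps (before FEC/encoding overhead)."""
    return baud_gbaud * bits_per_symbol

nrz = lane_rate_gbps(56, 1)    # 56 GBaud NRZ  -> 56 Gbps lane
pam4 = lane_rate_gbps(56, 2)   # 56 GBaud PAM-4 -> 112 Gbps lane
print(nrz, pam4)
```

Under this arithmetic, a terabit-scale port is assembled from parallel lanes, e.g. a 1.6 Tbps port from 16 lanes at 100 Gbps or 8 lanes at 200 Gbps.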
CPO scales bandwidth in scale-out (more optical engines, faster lanes) and scale-up (faster lanes, more wavelengths) networks. Photonic packaging uses 2.5D (side-by-side on a substrate) or 3D (stacked with through-silicon vias or EMIB) approaches. 2.5D offers high-density interconnects and simplicity but faces scalability and thermal issues; 3D reduces footprint and power use but increases complexity. Bandwidth density (Tbps/mm) at the ASIC/photonic-chiplet edge is the key metric. Photonic interposers enable 2D optical I/O for stacked chiplets, boosting density, reducing latency, and simplifying integration for HPC and data centers.
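The two scaling paths above can be sketched with simple arithmetic: aggregate bandwidth is the product of optical engines, lanes per engine, and per-lane rate, and edge bandwidth density divides that total by the available chiplet shoreline. All parameter values here are hypothetical, chosen only to show the scaling.

```python
# Hypothetical sketch of CPO bandwidth scaling; all figures are
# illustrative assumptions, not vendor specifications.

def aggregate_bandwidth_tbps(engines, lanes_per_engine, lane_rate_gbps):
    """Total optical bandwidth in Tbps across all engines."""
    return engines * lanes_per_engine * lane_rate_gbps / 1000

def bandwidth_density_tbps_per_mm(total_tbps, shoreline_mm):
    """Edge (shoreline) bandwidth density at the ASIC/chiplet boundary."""
    return total_tbps / shoreline_mm

# Scale-out path: add optical engines. Scale-up path: faster lanes
# (or, equivalently, more wavelengths per fiber).
base = aggregate_bandwidth_tbps(engines=8, lanes_per_engine=16, lane_rate_gbps=100)
faster = aggregate_bandwidth_tbps(engines=8, lanes_per_engine=16, lane_rate_gbps=200)
print(base, faster)  # 12.8 Tbps -> 25.6 Tbps by doubling the lane rate

density = bandwidth_density_tbps_per_mm(base, shoreline_mm=25.6)
print(density)  # 0.5 Tbps/mm for the assumed shoreline
```

Doubling the lane rate doubles aggregate bandwidth without adding engines, which is why the text pairs scale-up with faster lanes and more wavelengths rather than more packages.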