Yole Group

Villeurbanne, France
http://www.yolegroup.com
  • Booth: N0689

Feel free to meet and talk with our experts at booth N0689

Overview

Yole Group is a leading international market research and strategy consulting firm, delivering in-depth analyses across market trends, technology developments, teardowns, and reverse costing. Leveraging deep semiconductor expertise, its team of analysts also provides custom consulting services, offering strategic, technical, and market insights tailored to address specific business challenges and opportunities.

More information at www.yolegroup.com.


  Products

  • Generative AI 2025
    Generative AI momentum is driving massive demand for data center GPUs, AI ASICs, CPUs, DPUs, and networking ASICs, with the market surpassing $370B by 2030....

  • Driven by Generative AI, the data center processor market is expected to reach $372B by 2030

    The data center processor market is experiencing rapid expansion, driven by growing demand for generative AI applications, which require high-performance computing. The global market for data center processors reached $147B in 2024 and is projected to hit $372B by 2030. GPUs and AI ASICs are at the heart of generative AI and largely drive the market, with double-digit growth. CPUs and networking processors, such as DPUs, are also essential in this market and are experiencing steady growth. In the AI context, where GPUs and AI ASICs dominate, FPGAs have seen a sharp decline and are expected to remain flat in the medium term. The rapid expansion of cryptocurrencies such as Bitcoin has led to strong growth in crypto ASICs within crypto farms, where they are essential for validating cryptocurrency transactions.
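    The growth rate implied by the two figures above can be back-computed from the endpoints; a minimal sketch (the $147B and $372B values are from the text, while the resulting annual rate is derived, not quoted):

```python
# Back-compute the implied CAGR from the market sizes cited in the text:
# $147B in 2024 growing to a projected $372B in 2030.
start_2024 = 147.0  # $B, data center processor market in 2024
end_2030 = 372.0    # $B, projected for 2030
years = 2030 - 2024

cagr = (end_2030 / start_2024) ** (1 / years) - 1
print(f"Implied CAGR 2024-2030: {cagr:.1%}")  # prints 16.7%
```

    In other words, the forecast corresponds to roughly 17% compound annual growth, consistent with the "double-digit growth" attributed to GPUs and AI ASICs.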

    Nvidia leads the AI race, but Google and AWS are betting big on in-house AI ASICs

    Generative AI, driven by OpenAI since 2022, has transformed the data center processor market and greatly benefited Nvidia's GPUs. Facing Nvidia's dominant position and the strategic stakes of AI, hyperscalers such as Google and AWS are partnering with Broadcom, Marvell, and Alchip to co-design their own AI ASICs and achieve greater independence. Within this shift towards AI ASICs, numerous startups like Groq, Cerebras, and Graphcore are seeking market positions with innovative approaches, prompting a wave of M&A and fundraising activity. This push towards performance efficiency is driving a transition to Arm-based CPUs, thereby disrupting Intel and AMD’s longstanding leadership with the x86 architecture. With expertise in cooling solutions and high power capacity, cryptocurrency mining farms are now also entering the AI market by hosting the most powerful GPUs.

    Multi-chiplet architectures and advanced nodes are shaping the future of Generative AI

    Chiplets play a critical role in GPUs, CPUs, and ASICs, optimizing yield while enabling increasingly larger dies on ever more advanced nodes. In 2024, the latest CPUs are on 3nm, while GPUs and AI ASICs remain at 4nm, though 3nm is expected to arrive as early as 2025 with AWS Trainium 3. To meet AI demands, compute performance has grown eightfold since 2020 and continues to accelerate, with Nvidia announcing its Rubin Ultra for 2027 at 100 PetaFLOPS in FP4 for inference. However, memory plays a crucial role in AI applications as models become larger and the need for low latency and high bandwidth increases. HBM currently fulfills this critical role in Nvidia, AMD, Google, and AWS solutions, but many AI ASIC startups, such as Groq and Graphcore, are striving to establish SRAM-based processors to improve performance.

  • Power Electronics for Data Centers 2025
    Redesigning data center power infrastructure to meet AI requirements: PSU market to reach $14B by 2030....

  • New power levels trigger a new TAM for PSUs, with a 15.5% CAGR over 2024-2030

    The PSU is an essential part of the data center powertrain hardware. There are three mainstream PSU standards with different specifications: ATX (Advanced Technology eXtended), CRPS (Common Redundant Power Supply), and Open Compute Project (OCP). CRPS dominates across most segments thanks to its flexibility and modularity. OCP is hyperscaler-centric, driven by scale and cost efficiency. Custom PSUs thrive in HPC, where standardization cannot meet extreme demands. ATX remains in legacy/low-power setups, but is declining. As of 2025, the total PSU market for data centers is estimated to be worth more than $7B. Given the growing demand for higher power, PSUs above 3 kW are expected to dominate the market (~80% of its value) by 2030. To meet efficiency and density requirements, wide-bandgap (WBG) semiconductors (i.e., GaN and SiC) are therefore increasingly adopted. The market value of SiC and GaN devices will grow at a double-digit CAGR over 2024-2030.
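    As a rough consistency check, compounding the ~$7B 2025 estimate at the stated 15.5% CAGR lands near the $14B 2030 figure cited for this report; a small sketch (both inputs are from the text, the intermediate result is derived):

```python
market_2025 = 7.0   # $B, estimated total data center PSU market in 2025
cagr = 0.155        # stated CAGR for 2024-2030

# Compound the 2025 base forward five years to 2030.
market_2030 = market_2025 * (1 + cagr) ** (2030 - 2025)
print(f"Projected 2030 PSU market: ${market_2030:.1f}B")  # prints $14.4B
```

    The result (~$14.4B) is consistent with the ~$14B headline figure, given rounding of the 2025 base.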

    Nvidia is influencing the power supply chain

    On one hand, major PSU suppliers include Delta, Liteon, Huawei and Advanced Energy collectively holding over 60% market share. Therefore, Delta is not only the N°1 in PSU market share, but they have also secured a privileged position, working closely with Nvidia across almost the entire powertrain. On the other hand, Nvidia, as the leader in computing GPUs, is setting stringent requirements, leading to rapid adoption of technologies like liquid cooling and high-power PSUs. PSU makers are aligning with Nvidia’s power, thermal, and mechanical requirements (e.g., 5.5kW–33kW PSU, liquid-cooled racks).

    AI is driving the adoption of new technologies

    The architecture of future data centers is being completely rethought by industry giants to maximize efficiency. What was once considered a long-term design goal – such as DC power distribution – is now being actively pursued, with Meta and Microsoft planning deployments in the second half of 2026. In parallel, the 80 PLUS Ruby standard has emerged as the highest official PSU efficiency certification to date, requiring up to 96.5% efficiency at 50% load and 92% at full load. Introduced in January 2025, it was specifically designed to meet the demands of AI workloads. At the same time, the target of 100 W/in³, once considered a roadmap milestone, has already been achieved by several players. As a result, wide-bandgap adoption has become essential to meet both the rising power levels and new efficiency requirements. Rather than choosing between Si, GaN, or SiC, the new trend is to combine their strengths in a single architecture, commonly referred to as a hybrid design.
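    To put the Ruby efficiency and power-density figures in perspective, consider a hypothetical 5.5 kW PSU (a size the text cites as an Nvidia-aligned rating); the 96.5%-at-50%-load and 100 W/in³ numbers are from the text, while the loss and volume are derived for illustration:

```python
psu_rated_w = 5500.0   # hypothetical 5.5 kW PSU (a size cited in the text)
eff_half_load = 0.965  # 80 PLUS Ruby: 96.5% efficiency at 50% load

# Heat dissipated at 50% load: input power minus delivered power.
out_w = psu_rated_w * 0.5
loss_w = out_w / eff_half_load - out_w
print(f"Heat at 50% load: {loss_w:.0f} W")  # prints 100 W

# Minimum volume needed to hit the 100 W/in^3 density target at rated power.
volume_in3 = psu_rated_w / 100.0
print(f"Volume at 100 W/in^3: {volume_in3:.0f} in^3")  # prints 55 in^3
```

    Even at Ruby-level efficiency, each such PSU still dissipates on the order of 100 W of heat, which is part of why liquid cooling and WBG devices go hand in hand at these power levels.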

  • Co-Packaged Optics for Data Centers 2025
    NVIDIA's silicon photonics CPO scales AI data centers, driving the co-packaged optics market from $46M in 2024 to $8.1B by 2030, with a 137% CAGR....

  • Fueling AI-driven data centers with explosive market growth

    The explosive growth of AI, particularly large language models and generative AI, drives co-packaged optics (CPO) adoption. AI workloads need high bandwidth, low latency, and energy efficiency to connect millions of GPUs in hyperscale data centers or "AI factories." Key drivers include data transfer needs, energy efficiency, scalability, and industry investment.
    In scale-out networks, CPO enables long-distance, high-bandwidth connections (e.g., between racks) with lower latency and power, ideal for AI-driven cloud fabrics and Ethernet/InfiniBand networks. Pluggables will remain at compute nodes until CPO matures. In scale-up AI networks, CPO replaces copper, offering better connectivity, longer reach, and lower power for GPU-to-GPU or node-to-switch fabrics, vital for AI training and HPC. Initial CPO deployments will target scale-up networks before expanding to scale-out.
    At GTC 2025, NVIDIA unveiled Spectrum-X and Quantum-X silicon photonics switches, a milestone for CPO in AI infrastructure. These switches use CPO to connect GPUs with 1.6Tbps ports. NVIDIA’s CPO adoption for its Rubin architecture overcomes NVLink’s limits, enabling faster, scalable, low-power interconnects.
    The CPO market, valued at $46M in 2024, is projected to reach $8.1B by 2030, with a 137% CAGR, driven by shifts from pluggables to CPO and copper to optics, addressing power, density, scalability, bandwidth, and distance constraints.
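    A 137% CAGR means the market more than doubles every year; compounding the 2024 base forward reproduces the 2030 projection (the $46M base and 137% rate are from the text, the result is derived):

```python
base_2024_musd = 46.0  # $M, CPO market in 2024
cagr = 1.37            # 137% CAGR, i.e., 2.37x growth per year

# Compound six years forward and convert $M to $B.
proj_2030_busd = base_2024_musd * (1 + cagr) ** 6 / 1000
print(f"Projected 2030 CPO market: ${proj_2030_busd:.2f}B")  # prints $8.15B
```

    The compounded figure (~$8.15B) matches the quoted $8.1B within rounding.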

    CPO: a complex ecosystem for scalable network connectivity

    Co-packaged optics (CPO) integrates optical transceivers with switch ASICs or processors for high-bandwidth, low-power interconnects in scale-out (cloud fabrics) and scale-up (AI/GPU clusters) networks. The CPO supply chain involves semiconductor foundries, photonics manufacturers, packaging providers, and fiber optic specialists. Key players like Nvidia, TSMC, Broadcom, Coherent, and hyperscalers drive demand, fueled by AI workloads.
    The supply chain covers raw materials, components, integration, and deployment. Silicon wafers (Shin-Etsu), SOI (Soitec), indium phosphide (AXT), and glass (Schott) support ASIC and photonic circuit co-packaging. Scale-out networks use cost-effective substrates; scale-up networks need high-performance materials. Photonic integrated circuits (PICs) from Lumentum, Coherent, and Intel supply lasers and transceivers, with scale-out using standard PICs and scale-up needing custom PICs for NVLink. Switch ASICs from Broadcom and Nvidia target high port density or low latency. Optical fibers (Corning) and connectors (Foxconn) enable long-reach (scale-out) or high-density (scale-up) links. TSMC’s CoWoS and ASE lead packaging, with scale-out prioritizing cost and scale-up needing density. Assembly and testing by Foxconn and Keysight ensure reliability. System integrators like Cisco deploy CPO switches for interoperable cloud (scale-out) or custom AI (scale-up) systems.
    CPO evolves to meet AI demands, with scale-out focusing on cost and volume, and scale-up on performance and customization, transforming data center connectivity.

    CPO as the backbone of scalable AI and cloud infrastructures

    AI-driven co-packaged optics (CPO) transforms data centers, led by Nvidia, Broadcom, and TSMC. CPO uses photonic integrated circuits (PICs) with lasers, modulators, and waveguides for efficient signal conversion. Scale-out networks use standard PICs for cost-effective Ethernet switches, while scale-up networks need custom PICs for high-capacity AI interconnects like NVLink, achieving terabit-scale throughput via PAM-4 or NRZ modulation. Switch ASICs on TSMC’s 5nm/3nm processes enable efficient routing.
    CPO scales bandwidth in scale-out (more optical engines, faster lanes) and scale-up (faster lanes, more wavelengths) networks. Photonic packaging uses 2.5D (side-by-side on substrate) or 3D (stacked with vias or EMIB) approaches. 2.5D offers high-density interconnects and simplicity but faces scalability and thermal issues. 3D reduces footprint and power use but increases complexity. Bandwidth density (Tbps/mm) at ASIC/photonic chiplet edges is key, with photonic interposers enabling 2D optical I/O for stacked chiplets, boosting density, reducing latency, and simplifying integration for HPC and data centers.

  • Next-Gen DRAM 2025 - Focus on HBM and 3D DRAM
    Explosive AI growth pushes HBM to 50% of total DRAM revenues by 2030, hitting nearly $100B...

  • HBM shipments expand rapidly with over 190% YoY growth in 2024, and revenue is on track to hit $98B by 2030.

    The High-Bandwidth Memory (HBM) market is experiencing exponential growth, primarily driven by the proliferation of AI workloads and HPC applications. The generative AI boom, catalyzed by the introduction of ChatGPT in late 2022, resulted in an unprecedented 187% YoY increase in HBM bit shipments in 2023, followed by an additional 193% surge in 2024. This trajectory is expected to persist. HBM is significantly outpacing the broader DRAM market. Global HBM revenue is projected to increase from $17B in 2024 to $98B by 2030, reflecting a 33% CAGR. Consequently, HBM’s revenue share within the DRAM market is expected to expand from 18% in 2024 to 50% by 2030. The market will see another major inflection point in 2025. Current supply constraints underscore the strategic importance of HBM in AI data centers and advanced computing platforms, as evidenced by reports from SK Hynix and Micron indicating that their HBM production capacity is fully allocated through 2025. These supply-side limitations reinforce the need for investments in capacity expansion to meet the escalating demand.
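    The HBM revenue and share figures above jointly pin down the size of the overall DRAM market; a quick derivation (the $17B/18% and $98B/50% pairs are from the text, while the DRAM totals are implied, not quoted):

```python
hbm_2024, share_2024 = 17.0, 0.18  # $B revenue and HBM share of DRAM, 2024
hbm_2030, share_2030 = 98.0, 0.50  # $B revenue and share projected for 2030

# Total DRAM revenue implied by each (revenue, share) pair.
dram_2024 = hbm_2024 / share_2024
dram_2030 = hbm_2030 / share_2030
print(f"Implied DRAM revenue: ${dram_2024:.0f}B (2024) -> ${dram_2030:.0f}B (2030)")
```

    The implied totals (~$94B in 2024, ~$196B in 2030) show that the forecast assumes the whole DRAM market roughly doubling while HBM grows nearly sixfold.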

    HBM leaders intensify the race, while China steadily progresses in AI and advanced memory technologies.

    SK hynix currently leads the HBM market, having begun mass production of 12Hi HBM3E in late 2024 and already initiating customer sampling of its next-generation 12Hi HBM4 (36GB) in early 2025, a momentum reflected by record quarterly profits. Samsung is now accelerating efforts to strengthen its position, actively developing its HBM portfolio, refining DRAM designs, and working on 4nm logic dies for its upcoming HBM4 generation, with trial production underway and customer sampling planned within 2025. Micron, having skipped HBM3, entered the market directly with HBM3E in 2024, supplying Nvidia’s H200 GPUs. Though currently limited in production capacity compared to SK hynix and Samsung, Micron is rapidly expanding its output, aiming to reach 60,000 WPM by late 2025, with HBM4 production set to begin in 2026. In response to U.S. restrictions on AI chips and HBM, Chinese companies have launched large-scale investments to build domestic alternatives. Huawei is reportedly leading a consortium – including XMC, SJSemi, TFME, and JCET – to establish HBM capabilities. Meanwhile, CXMT is sampling HBM2 and actively developing HBM2E, supported by investment in advanced packaging. Despite a technological gap of ~6 years with industry leaders, Chinese players can leverage strong domestic demand for locally developed AI accelerators, supported by significant government backing and a well-established industry network. These factors are expected to help them gain a foothold in the HBM market in the coming years.

    As scaling challenges grow, new designs, CMOS bonding and 3D architectures are redefining DRAM development

    Despite increasing scaling challenges, planar DRAM is expected to continue evolving through the 0c/0d nodes (2033–2034), leveraging a combination of architectural and process innovations. The industry currently relies on the 6F² DRAM cell structure, which dominates all commercial products as of 2025. However, further miniaturization will eventually necessitate a transition toward 4F² cells based on vertical transistors (VT), integrated within a CMOS Bonded Array (CBA) architecture. After the 0c/0d nodes, the transition to a 3D DRAM architecture is expected to be inevitable. As of 2025, all major DRAM manufacturers – including Samsung, SK hynix, Micron, and CXMT – are actively exploring multiple architectural pathways to enable 3D DRAM integration. Hybrid bonding is regarded as a key enabler for future HBM generations, particularly for high-stack configurations. However, due to yield and throughput challenges, HBM suppliers have extended microbump-based approaches to HBM4 and HBM4E. Hybrid bonding is now projected to enter the market with HBM5 (around 2029), especially for premium 20Hi stacks.

  • High-End Performance Packaging 2025
    High-End Performance Packaging is breaking performance barriers, with chiplet integration enabling the AI revolution. The HEPP market is expected to reach $28.5B by 2030 at a CAGR of 23%....

  • The High-end Packaging market will exceed $28B by 2030, with a whopping 23% CAGR over 2024-2030

    The High-end Packaging market was worth $8B in 2024 and is projected to exceed $28B by 2030, a 23% CAGR over 2024-2030. Breaking it down by end market, the biggest High-end Performance Packaging segment is ‘Telecom & Infrastructure,’ which generated over 67% of revenue in 2024. It is followed by ‘Mobile & Consumer,’ the fastest-growing segment with a 50% CAGR. In terms of package units, High-end Packaging is projected to grow at a 33% CAGR over 2024-2030, from ~1B units in 2024 to over 5B units by 2030. This growth reflects both healthily increasing demand and the very high ASPs of high-end packages compared to less advanced packaging, as 2.5D & 3D platforms force a transition of value from the front end to the back end.
    3D stack memories – HBM, 3DS, 3D NAND & CBA DRAM – are the most significant contributors and will represent more than 70% of the market by 2029. The fastest-growing platforms are CBA DRAM, 3D SoC, Active Si Interposer, 3D NAND stack, and embedded Si bridge.
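    Combining the revenue and unit forecasts above gives the implied average selling price per package; a small sketch (the revenues and unit counts are from the text, the per-unit ASPs are derived, and the 2030 unit count is taken as 5B, the low end of the "over 5B" figure):

```python
rev_2024, units_2024 = 8.0, 1.0   # $B revenue, B units in 2024
rev_2030, units_2030 = 28.5, 5.0  # $B revenue, B units projected for 2030

# Implied blended average selling price per package, in dollars.
asp_2024 = rev_2024 / units_2024
asp_2030 = rev_2030 / units_2030
print(f"Implied ASP: ${asp_2024:.2f} (2024) -> ${asp_2030:.2f} (2030)")  # $8.00 -> $5.70
```

    Units growing at 33% while revenue grows at 23% implies a declining blended ASP, consistent with higher-volume platforms ramping alongside the premium ones.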

    Foundries, IDMs, and top OSATs compete in the same High-end Packaging market space

    The barrier to entry in the High-end Packaging supply chain is increasingly high, with major foundries and IDMs disrupting the advanced packaging domain with their FE capabilities. The adoption of hybrid bonding makes things more difficult for OSATs, as only players with fab capabilities and ample resources can afford significant yield losses and large investments.
    In 2024, the group of memory players – YMTC, Samsung, SK hynix, and Micron – dominated with 54% of the high-end packaging market, as 3D stack memory leads other platforms in revenue, unit, and wafer production. Far more memory packages are purchased than logic packages. Individually, TSMC leads with 35% market share, followed by YMTC with 20% of the total market. New players like Kioxia, Micron, SK hynix, and Samsung are expected to penetrate the 3D NAND market, rapidly gaining share. Samsung is number three with 16%, followed by SK hynix with 13% and Micron with 5%. These players will see a healthy market share increase as their 3D stack memories gain more traction and they continue developing new ones. Intel is next with 6%.
    Top OSATs like ASE, SPIL, JCET, Amkor, and TF still handle the final packaging assembly and test. They are trying to win market share by proposing high-end packaging solutions based on UHD FO and mold interposers. They are also collaborating with top foundries and IDMs to secure their involvement in this type of activity.
    The realization of High-end Packaging is now increasingly dependent on FE technology, and hybrid bonding is becoming a new trend. BESI, collaborating with AMAT, is playing a key role in this trend, supplying equipment to big players such as TSMC, Intel, and Samsung, all of which are competing for supremacy. Other equipment providers like ASMPT, EVG, SET, SUSS MicroTec, Shibaura, and TEL are important pieces of the supply chain puzzle.

    Semiconductor packaging technology is a key pillar of a digital future

    The main technology trend for all high-end performance packaging platforms, no matter the type, is reducing interconnection pitch – it is related to TSVs, TMVs, microbumps, and even hybrid bonding, which is already the most aggressive solution. In addition, the via diameter and wafer thickness are expected to be reduced. This technology evolution is necessary for integrating more complex monolithic dies, on the one hand, and chiplets, on the other, to support faster data processing and transmission while ensuring less power consumption and loss and allowing higher density integration and bandwidth for future generations.
    3D SoC hybrid bonding
    3D SoC hybrid bonding seems to be a key technology pillar for next-generation advanced packaging, as it allows a smaller interconnection pitch while increasing the total SoC surface. This enables possibilities such as stacking chiplets from partitioned SoC dies, allowing heterogeneous integration packaging. TSMC is the leading player in 3D SoIC packaging using hybrid bonding, thanks to its 3DFabric platform. In addition, collective die-to-wafer bonding is expected to be used, starting with a small portion of HBM4E with a 16-high DRAM stack.
    Chiplet and heterogeneous integration is another important trend fueling the adoption of HEP packaging, and products using this approach are already on the market. Examples are Sapphire Rapids using EMIB, Ponte Vecchio using Co-EMIB, and Meteor Lake using Foveros from Intel. AMD is another important player adopting this approach in products such as Ryzen and EPYC, starting with the 3rd generation, along with a 3D chiplet architecture in the MI300. Nvidia finally adopted a chiplet design for its next-generation Blackwell series. More packages incorporating partitioned or duplicated dies are expected to hit the market in the coming years, as clearly announced by important players like Intel, AMD, and Nvidia. On top of that, this approach is expected to be used for high-end ADAS in the years to come.
    The general trend is to have more 2.5D platforms combined with 3D platforms in the same package, which some in the industry are already calling 3.5D packaging. We therefore expect future packages to combine chiplets using 3D SoC, 2.5D interposers, embedded silicon bridges, and co-packaged optics. New 2.5D & 3D packaging platforms will hit the market later, making HEP packaging much more complex.