The Semiconductor Industry: An Unprecedented Landscape
Release time: 2025-12-25
Source: Compiled from tomshardware.
A growing number of forecasts from AMD, NVIDIA, Broadcom, and leading research firms indicate that the semiconductor market will surpass the $1 trillion mark before the end of this decade, driven by an artificial intelligence (AI) infrastructure build-out that is several times larger than any expansion in the industry's history.
The latest analysis from Creative Strategies dubs this shift the "Gigacycle", noting that the unprecedented scale of demand for AI is reshaping the economic landscapes of computing, memory, networking, and storage simultaneously. Global semiconductor revenue stood at approximately $650 billion in 2024, yet multiple current forecasts project it will break the $1 trillion barrier by 2028 or 2029. AI is the primary factor behind this upward revision of forecasts.
Lisa Su, CEO of AMD, recently raised the company's long-term outlook, stating that the AI hardware market will reach $1 trillion by 2030. She also predicted that AMD's overall compound annual growth rate (CAGR) will hit 35%, with the data center business seeing a CAGR of around 60%. She has also publicly pushed back against recent claims of an AI bubble.
Meanwhile, NVIDIA has put forward even more ambitious expectations. During its Q2 2026 earnings call, the company described the AI infrastructure market as being worth $3 trillion to $4 trillion over the next five years. This figure is based on system-level deployments in hyperscale data centers, autonomous AI projects, and enterprise clusters.
More broadly, all major silicon chip categories are expanding in tandem. Creative Strategies anticipates that by 2026, revenue from data-processing silicon chips will account for over half of total semiconductor revenue. The AI accelerator market, which was valued at less than $100 billion in 2024, is expected to grow to $300-$350 billion by 2029 or 2030. This growth will significantly boost system spending. The AI server market is projected to surge from roughly $140 billion in 2024 to $850 billion by 2030, and this growth trajectory will reshape chip demand even without considering custom silicon chips.
In this context, the research and development of ASIC chips has taken center stage in the roadmap of hyperscale data center development. Broadcom estimates that its custom chip business will exceed $100 billion by the end of this decade. The company has previously disclosed a $10 billion AI infrastructure order, which is believed to be from OpenAI.
Memory and packaging remain bottlenecks. High-Bandwidth Memory (HBM) revenue is expected to rise from about $16 billion in 2024 to over $100 billion by 2030. Each new generation of HBM consumes more wafers than traditional DRAM, and as AI clusters scale up, this will drive growth across the entire memory market. Advanced packaging faces similar pressure: Chip on Wafer on Substrate (CoWoS) capacity is expected to increase by more than 60% from the end of 2025 to the end of 2026.
"The defining feature of the semiconductor Gigacycle is that the market expansion is large enough to create entirely new growth opportunities at every link in the value chain," Creative Strategies stated, describing this combined effect as a period of synchronous growth across all segments rather than a cycle concentrated in a specific area.
The Semiconductor Gigacycle
The semiconductor industry has witnessed cyclical fluctuations of varying scales throughout its development journey. The PC era brought sustained growth; the smartphone revolution spurred what many refer to as a super cycle; and the rise of cloud computing has further fueled this upward trajectory.
What is unfolding now is an entirely different scenario.
AI infrastructure construction represents the largest potential market expansion in the history of the semiconductor industry — a gigascale growth cycle that far outpaces all previous growth periods, both in terms of absolute dollar value and the scope of its impact on every link in the value chain.
The figures point to an unprecedented scale. Global semiconductor revenue is set to surge from approximately $650 billion in 2024 to over $1 trillion by the end of this decade, with some forecasts even projecting the trillion-dollar milestone to be achieved as early as 2028-2029.
This expansion is not driven by a single product category or regional market, but by a fundamental restructuring of the industry's development path. It is propelled by infrastructure demands that are simultaneously reshaping all categories of semiconductor technology.
The semiconductor industry will never be the same again.
What Makes the Gigacycle Unique
The PC era primarily benefited microprocessors and general-purpose memory. The smartphone revolution focused on driving the development of mobile application processors and NAND flash memory. The rise of cloud computing has continuously boosted demand for server processors and network equipment.
Building AI infrastructure is a different ballgame. The architectural requirements of AI training and inference workloads create simultaneous bottlenecks in computing, memory, networking, and storage.
No single category dominates the majority of new spending. Every submarket is constrained, yet every submarket is expanding.
By 2026, revenue from data-processing silicon chips will for the first time account for over half of total semiconductor revenue — a sign that data center and AI workloads are becoming the new focus of the industry.
The Scale of TAM Expansion
Industry forecasts have been revised sharply upward over the past 18 months.
Lisa Su, CEO of AMD, stated that the AI hardware market, covering CPUs, GPUs, ASICs, and networking, will exceed $1 trillion by 2030. At AMD's November 2025 Analyst Day, she spoke candidly about this opportunity: "The market is expanding at a pace we could not have imagined just a few years ago. There is no doubt that the data center is the largest growth opportunity right now."
AMD aims to achieve an overall revenue compound annual growth rate (CAGR) of over 35% in the coming years, with the data center business expected to see a CAGR of around 60%. The company projects that its annual AI-related revenue will reach tens of billions of dollars by 2027, and that its data center revenue will surpass $100 billion within the next three to five years.
Current consensus forecasts indicate that the dedicated AI accelerator market alone will reach approximately $300-$350 billion by 2029-2030, up from less than $100 billion in 2024. When combining accelerators, CPUs, networking, and HBM, total spending on data center silicon chips will approach $900 billion to $1 trillion by the end of this decade.
Trillion-Dollar Infrastructure Buildout
Jensen Huang has clearly outlined the scale of the future. During NVIDIA's Q2 2026 earnings call, he detailed the numbers: "Over the next five years, we will seize the $3 trillion to $4 trillion AI infrastructure opportunity. And we are still in the early stages of this buildout."
He emphasized that this is no speculation: "Investing $3 trillion to $4 trillion over the next five years is quite reasonable."
The demand for computing power is real and unprecedented. Capital expenditures of the four major cloud service providers — Amazon, Google, Microsoft, and Meta — have doubled in just two years, reaching approximately $600 billion annually.
NVIDIA has announced that cumulative revenue from its Blackwell and Rubin businesses, including networking, will reach $500 billion by 2026 — an unprecedented level of revenue visibility in semiconductor history. Market demand is expanding from U.S. hyperscale data centers to independently developed AI factories and enterprise - level deployments, and the customer base is growing as the market opportunity expands.
Computing Infrastructure: Key Drivers
· GPU Market Dynamics
The GPU market remains the largest investment category in AI infrastructure development. NVIDIA projects that GPU shipments will grow by approximately 85% in 2025 and another 50%-60% in 2026. The company targets further growth in 2027 and expects its annual revenue to exceed $600 billion by 2030.
This physical scale requires a significant expansion of manufacturing capacity, packaging throughput, and memory availability.
At the system level, AI servers are emerging as a standalone trillion-dollar category. The AI server market is expected to grow from approximately $140 billion in 2024 to $800-$850 billion by 2030, representing a CAGR of over 30% (which may be a conservative estimate).
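The implied growth rate can be sanity-checked with a quick back-of-the-envelope calculation (a sketch using the article's round dollar figures, not a forecast of its own):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

# AI server market: ~$140B in 2024 -> ~$800-850B in 2030 (6 years)
low = cagr(140, 800, 6)   # ~0.337, i.e. roughly 34% per year
high = cagr(140, 850, 6)  # ~0.351, i.e. roughly 35% per year
print(f"Implied AI server CAGR: {low:.1%} to {high:.1%}")
```

Both ends of the range come out above 30% per year, consistent with the article's note that "over 30%" may be conservative.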
Ultimately, a tiny fraction of total wafer output generates a disproportionate share of industry revenue: AI chips accounted for less than 0.2% of global wafer starts in 2024 but already contributed about 20% of semiconductor revenue, roughly 100 times the revenue per wafer of the industry average, achieving an unprecedented level of silicon value density.
AMD is also experiencing similar growth momentum. Lisa Su described market demand as "insatiable": the company expects AI data center chip revenue to grow at an annual rate of 80% through 2030, with overall revenue growing at 35% annually.
· The Rise of Custom Silicon
The custom chip market is growing at an astonishing rate, positioning ASICs to challenge the dominance of general - purpose GPUs. This trend is driven by a structural shift in capital allocation among hyperscale data centers, as they seek to take greater control of their chip stacks to meet their core workload requirements, while also reducing third - party costs and improving efficiency.
Through 2027, leading ASIC providers are expected to achieve a revenue CAGR of 119%, far outpacing the projected 82% CAGR for AI GPUs. Custom chips are expected to increase from just 2% of hyperscale data center capital expenditures in 2023 to 13% in 2027.
Broadcom is a prime example of this expansion. CEO Hock Tan has set an ambitious target of growing the company's custom chip business to over $100 billion by the end of this decade. Broadcom's AI-related sales are projected to reach $90 billion by fiscal 2030, and potentially as high as $120 billion in a more optimistic scenario. The company recently disclosed a $10 billion AI infrastructure order from a hyperscale data center customer (widely reported to be OpenAI), focusing on custom ASIC chips, which is expected to significantly boost revenue in fiscal 2026 and 2027.
Tan revealed that demand for custom AI chips from just three major customers could reach $60-$90 billion by 2027. Google, Meta, and OpenAI are already its deep-pocketed clients, and Apple and Arm are also likely to join this group.
This growth is not a zero - sum game. The total addressable market (TAM) is expanding at a remarkable rate, allowing both custom chips and commercial GPUs to grow in parallel to meet diverse computing needs.
· CPU Market Revival
The CPU market is undergoing an AI-driven revival. The server CPU market is expected to grow at an 18% CAGR, reaching approximately $60 billion by 2030, up from $26 billion in 2025.
This expansion is driven by both the growing demand for agent-based AI on general-purpose servers and the architectural requirements of integrated rack-level systems such as NVIDIA's NVL72. Unlike traditional standard servers, which are usually equipped with one or two CPUs, the NVL72 requires 36 host CPUs per rack. This trend is likely to intensify as the number of accelerators per rack increases to 144 or more and the number of DPUs per accelerator rack also rises.
Our own CPU growth model predicts that the CPU market will grow from around $20 billion to $60 billion by 2030. AMD endorses similar projections, estimating that the rise of AI alone will generate approximately $30 billion in incremental CPU revenue by 2030, with a clear goal of capturing over 50% of the data center CPU market share during this period. This market expansion will also benefit Intel and Arm across general-purpose CPUs, specialized CPUs, and AI core node platforms.
· Connectivity Architecture: Networking Infrastructure
The concept of AI factories fundamentally relies on high - performance connectivity architectures that integrate massive accelerator clusters into a unified supercomputer. As AI workloads scale to clusters with over 100,000 accelerators, the speed, coverage, and energy efficiency of interconnections have become just as crucial as the computing units themselves.
By 2030, the market for networking silicon chips, excluding storage, is expected to reach approximately $75 billion. The AI data center switch market alone will grow from around $4 billion in 2024 to about $19 billion in 2030, with a CAGR of nearly 30%.
The optical interconnect market is following a similar trajectory. The optical transceiver market is expected to reach around $22-$27 billion by 2030, and more optimistic forecasts suggest it will exceed $30 billion in the early 2030s as 800G and 1.6T deployments accelerate.
The transition to 1.6-terabit networks is creating supply chain strains, mirroring the constraints in the computing sector. The number of AI network ports is projected to grow to approximately 150 million by 2029, representing a CAGR of about 40%-50%. Demand for advanced lasers and modulator components continues to far outstrip supply, with cutting-edge optical components from multiple suppliers effectively sold out until 2026.
Memory and Storage: Capacity Bottlenecks
· HBM Super Cycle
High-Bandwidth Memory (HBM) has become a key driver of accelerated computing, and its demand far outpaces supply as GPU clusters scale up.
Global HBM industry revenue is expected to double to around $30 billion in 2025. The HBM market will then grow from approximately $16 billion in 2024 to over $100 billion by 2030, a more than sixfold increase; by then, the HBM market alone will surpass the entire DRAM industry's 2024 revenue (including HBM).
By 2030, HBM is projected to account for roughly half of total DRAM industry revenue, up from less than 20% today.
The complexity of the manufacturing process will have spillover effects on the entire memory market. Producing the same memory capacity with HBM3E consumes approximately three times as many wafers as standard DDR5. This ratio is expected to rise to 4:1 for HBM4, which will naturally restrict the overall industry supply of non-HBM products and intensify supply-demand tensions across the entire memory market.
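The spillover can be illustrated with the article's approximate wafer-consumption ratios (a sketch; the absolute wafer counts below are hypothetical and serve only to show the scale of the effect):

```python
# Wafers consumed to produce a given memory capacity, relative to standard
# DDR5, per the approximate ratios cited above (3:1 for HBM3E, 4:1 for HBM4).
WAFER_MULTIPLIER = {"DDR5": 1, "HBM3E": 3, "HBM4": 4}

def wafers_needed(ddr5_equivalent_wafers: float, memory_type: str) -> float:
    """Wafers required to match the capacity of `ddr5_equivalent_wafers` of DDR5."""
    return ddr5_equivalent_wafers * WAFER_MULTIPLIER[memory_type]

# Shifting 100 wafers' worth of DDR5-equivalent capacity to HBM:
print(wafers_needed(100, "HBM3E"))  # 300.0 wafers
print(wafers_needed(100, "HBM4"))   # 400.0 wafers
```

Every wafer of capacity moved to HBM therefore removes three to four wafers from the pool available for conventional DRAM, which is what tightens supply of non-HBM products.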
· Explosive Growth in Enterprise Storage
The shift toward Retrieval-Augmented Generation (RAG) and inference models is driving explosive growth in demand for warm storage. Demand from AI servers will push enterprise-class server solid-state drive (SSD) demand up 7- to 11-fold by 2030. Data creation is projected to nearly quadruple from 2024 levels, exceeding 500 zettabytes by 2029.
The enterprise-class SSD market is expected to reach the mid-$40 billion range by 2030, up from just over $1 billion today. By 2026, server SSDs are projected to overtake smartphones as the largest application for NAND flash memory, officially establishing AI servers as the primary users of cutting-edge storage devices.
· System-Level Memory Super Cycle
The current environment represents a system - level memory super cycle, where both DRAM and NAND flash will benefit simultaneously. Hyperscale data center operators predict that server DRAM will grow by approximately 50% by 2026. Multiple forecasts indicate that supply constraints for DRAM and NAND flash will persist until 2027 as AI capital expenditures are added to traditional workloads.
Unlike previous cycles, expanding supply is challenging due to space limitations and the long production cycles required for advanced HBM and stacked NAND. Technical barriers and supply constraints have led to price inelasticity, meaning prices will rise sharply when demand is strong.
· Wafer Fabrication Equipment and Advanced Packaging
Spending on wafer fabrication equipment (WFE) already reflects this shift. Investment in 300-mm wafer fabrication equipment is expected to exceed $100 billion for the first time in 2025 and climb to approximately $140 billion by 2028, with cumulative 300-mm spending surpassing $300 billion from 2026 to 2028.
By 2028, capacity for advanced processes (7nm and below) is expected to grow by nearly 70%, and annual spending on advanced node process equipment alone is projected to exceed $50 billion by 2028.
The back-end of the production line is experiencing its own super cycle. Test equipment sales soared by over 20% in 2025, hitting an all-time high and even outpacing WFE growth. Sales of assembly and packaging tools also achieved near-double-digit growth. TSMC's Chip on Wafer on Substrate (CoWoS) capacity alone is expected to increase by more than 60% from the end of 2025 to the end of 2026.
Global semiconductor companies plan to build approximately $1 trillion worth of new manufacturing facilities by 2030. Companies in the U.S. semiconductor industry ecosystem alone have announced over $500 billion in private-sector investments to boost domestic capacity.
· Greenfield Economics: Benefits for All
A prominent feature of the semiconductor Gigacycle is that the market expansion is large enough to create entirely new growth opportunities at every link in the value chain.
This is not a zero - sum game. The bottleneck - driven nature of AI systems ensures that value capture can expand simultaneously across memory, storage, networking, packaging, and other sectors.
AI is expected to bring over $500 billion in incremental revenue to the semiconductor market in the coming years. By 2030, cumulative data center capital expenditures will reach approximately $6.7 trillion, of which about $5 trillion will be allocated to building AI - ready facilities. Annual spending on data center infrastructure is surging, with a peak expected to reach $900 billion.
· What Makes This Moment Unique
The PC era mainly benefited a handful of companies, and the smartphone era allowed mobile-focused suppliers to capture concentrated profits.
The AI Gigacycle is lifting the entire semiconductor ecosystem — from logic to memory, from networking to packaging, and the entire WFE - supporting supply chain.
The new nature of demand means both new entrants and existing enterprises can find growth points. Constraints across various categories have led to a widespread distribution of pricing power. Synchronous expansion has broken the previous pattern where growth in one submarket inevitably led to a decline in another.
When computing resources are limited, memory and networking resources benefit; when memory resources are constrained, computing and storage resources gain. This synchronization of demand ensures that resource limitations in one area do not reduce spending in another, but instead drive it up.
The largest market expansion in semiconductor history is benefiting everyone.