Morgan Stanley 2026 Semiconductor Report: Buy Packaging, Buy Testing, Buy Chinese Chips, Avoid Traditional Tracks


Author: Jianwei Zhizhu Miscellaneous Talks

Source: Morgan Stanley Greater China Semiconductors Research

Report Date: May 8, 2026

  1. Core Contradiction

Global AI capital expenditure exceeds expectations for expansion, but computing power supply is evolving from “NVIDIA dominates all” to a three-track parallel development of “GPU + ASIC + Chinese domestic chips.” The core contradiction is not whether demand is sufficient, but who can capture a share of this expansion, and how quickly non-AI semiconductors are marginalized in this process.

  2. Key Conclusions (ranked by trading importance)

  3. In-Depth Sector Breakdown

3.1 Advanced Packaging (CoWoS / SoIC)— The Most Certain Mainline

【Core Contradiction】Demand is exploding, but capacity is effectively available only from TSMC; non-TSMC packagers (Amkor/ASE/UMC) face a share squeeze.

【Key Drivers】The four major cloud providers (AWS/Google/Microsoft/Meta) increased capital expenditure by +95% YoY in Q1 2026, with full-year cloud capex expected to reach $685 billion. AI server demand directly drives CoWoS/SoIC queue demand.

Key Data and Milestones:

·NVIDIA accounts for about 59% of CoWoS consumption, Broadcom about 20%, AMD about 9%

·2026 total AI computing wafer consumption value approximately $27.2 billion, a record high

·TSMC AI chip revenue CAGR of 60% over 2024–2029, with AI revenue exceeding 30% of total revenue in 2026
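As a sanity check on the compounding implied here, a constant CAGR converts into a total growth multiple as follows (a generic back-of-envelope helper; only the 60% rate comes from the report):

```python
def cagr_multiple(cagr: float, years: int) -> float:
    """Total growth multiple implied by a constant annual growth rate."""
    return (1 + cagr) ** years

# A 60% CAGR over 2024-2029 (5 compounding years) implies roughly a 10.5x expansion.
multiple = cagr_multiple(0.60, 5)
print(f"Implied growth multiple 2024->2029: {multiple:.1f}x")
```

The same helper makes it easy to cross-check other CAGR claims in the report against their endpoint figures.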

【Transmission Path】

Cloud Capex → NVIDIA/Broadcom/Google TPU orders → CoWoS/SoIC as bottleneck → TSMC bargaining power increases → AI revenue share continues to grow.

【Trading Insights】

TSMC is the core holding of this core theme: the holding logic is clear and does not depend on timing. SoIC is the second growth curve starting in 2025; watch for opportunities among OSAT suppliers (ASE and others) involved in SoIC assembly.

3.2 Testing Equipment (Handler / Socket / Probe Card)— Lowest valuation, most certain growth

【Core Contradiction】

As chip complexity increases, testing time doubles structurally, but the market’s reassessment of testing equipment TAM is severely lagging.

【Key Drivers】

Each generation of GPU chips doubles testing time (Hopper 350s → Blackwell 700-1000s → Rubin 1200-1400s → next gen 1800-2000s); test socket pin counts jump from mobile-level 1500 to AI/HPC-level 6000, and potentially over 10,000 in the next generation.
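Since tester and handler hours scale roughly linearly with per-device test time at fixed unit volumes, the per-generation times above map directly onto required capacity multiples (a minimal sketch; midpoints are assumed where the report gives a range):

```python
# Per-generation final-test times cited in the report (seconds per device);
# midpoints are used where the report gives a range.
test_time_s = {
    "Hopper": 350,
    "Blackwell": 850,   # midpoint of 700-1000s
    "Rubin": 1300,      # midpoint of 1200-1400s
    "Next-gen": 1900,   # midpoint of 1800-2000s
}

base = test_time_s["Hopper"]
for gen, t in test_time_s.items():
    # At fixed unit volumes, handler/tester hours scale linearly with test time.
    print(f"{gen}: {t}s per device -> {t / base:.1f}x Hopper-era tester capacity")
```

Even before any unit growth, the next generation alone implies more than a 5x increase in tester and handler hours versus the Hopper era, which is the structural case for the equipment names.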

Core data points:

·Global Handler market size: $436 million in 2023 → $6.6 billion in 2027, CAGR over 35%

·CPO optical testing demand will scale from 2025, entering combined electrical + optical testing stage (Insertion 4i) by 2027

【Transmission Path】

Increases in chip size/layers/complexity → longer testing times → Handler/Socket volume and prices rise together → new CPO optical testing demands add on → second growth curve begins.

【Trading Insights】

The three testing-chain names (Hon Precision / WinWay / MPI) are the lowest-valuation, highest-certainty segment of the AI infrastructure chain, suitable as mid-term core holdings. Thin sell-side coverage and low valuations make them the most cost-effective focus at present.

3.3 Chinese AI Chips (Domestic GPU/ASIC)— Irreversible long-term trend, clear short-term segmentation

【Core Contradiction】

Export controls push domestic substitution demand, but domestic chip technology/mass production maturity varies; whether they can anchor large customer orders is the key differentiator.

【Key Drivers】

DeepSeek verifies the feasibility of low-cost inference → domestic cloud providers accelerate switching → SMIC’s 7nm capacity expansion supports mass production → domestic chips’ TCO advantage (30-60% lower than NVIDIA) creates positive feedback.

Market Size and Landscape:

2026E domestic market share: Huawei 62%, Cambrian 14%, Kunlun Chip 5%, T-Head 5%, others 14%.

Comparison of three key targets among the “Ten Dragons” (MS focus):

【Transmission Path】

Export controls → Domestic substitution → SMIC 7nm capacity expansion → Huawei/Cambrian volume growth → local cloud providers (ByteDance/Alibaba/Tencent) switch procurement → inference cost reduction → more applications explode → new wave of computing power demand.

【Trading Insights】

Cambrian offers the highest certainty and is the first choice; TianShu Zhixin (Iluvatar) has the greatest upside flexibility but is not yet profitable and carries higher risk. Huawei (unlisted) is the biggest variable: its share growth exerts indirect pressure on other domestic manufacturers and needs ongoing monitoring. Time window: 2026–2027 is the critical turning point for domestic AI chips to go from backup to main force.

3.4 Non-AI Semiconductors (Consumer / Automotive / Industrial Control)— Structural bearish bias, weak recovery not a strong rebound

【Core Contradiction】

Supply chain resources are being systematically siphoned by AI, and the recovery pace of traditional semiconductors remains slower than expected; the market overestimates rebound elasticity.

【Key Drivers】

Foundry capacity/T-Glass substrates/storage are all tilting toward AI; non-AI chips are queued later, wafer and OSAT costs rise; chip design companies’ gross margins are under pressure.

· Excluding NVIDIA AI GPU and storage, non-AI semiconductor growth in 2026 is expected to decline significantly

· MCU inventory days remain at historic highs (flat after Q1 2025 peak); major vendors like STM/GD digest inventory slowly

· Logic foundry utilization rate is expected to recover to 80% only in H2 2026, with limited recovery elasticity

· SiC outperforms GaN: SICC (OW) recommended, SiC penetration expected to surpass 50% by 2030; avoid InnoScience (EW), capacity expansion depreciation suppresses profit

【Trading Insights】

Avoid pure traditional semiconductor exposure; MCU bottom confirmed but weak recovery, not recommended for heavy bets on strong rebound. SiC is the only segment worth noting among traditional sectors.

3.5 Storage (HBM / NAND / DDR4)— Intense internal differentiation, signals require discernment

【Core Contradiction】

AI drives explosive demand for HBM; rising prices of DDR4/NAND are due to supply being squeezed by AI, not genuine demand recovery, leading to distorted signals and limited price elasticity.

【Trading Insights】

Maintain a bullish stance on HBM, Hynix benefits most; Macronix (NOR Flash, Top Pick) benefits from shortages with reasonable valuation; NAND/DDR4 price increases do not equate to demand improvement, beware of chasing rallies.

  4. Macroeconomic and Geopolitical Variables: Explanatory Variables for Sector Judgments

【Geopolitical】 Continued tightening of export controls

NVIDIA’s exports to China are restricted → Chinese domestic AI chip substitution demand certainty increases; China’s cloud capex in 2026E reaches $105 billion, rapidly approaching 14% of global cloud capex.

【Macro】Energy constraints (U.S. side)

U.S. data center power supply tightness is a potential ceiling for GPU demand growth, but not a substantial constraint in the short term (2026).

【Industry Structure】AI siphoning effect

AI demand’s siphoning effect on non-AI supply chains (T-Glass, traditional DRAM, consumer foundry capacity) is the core explanation for the continued underperformance of non-AI semiconductors, beyond cyclical factors.

【Cost Side】Tech inflation

Rising wafer/OSAT/storage costs compress chip design margins (especially in non-AI sectors); TSMC and other foundries’ bargaining power continues to increase.

  5. Recommended Portfolio and Trading Framework

Based on sector assessments, construct the following trading framework:

  6. One-Sentence Summary

Buy packaging (TSMC), buy testing equipment (Hon Precision / WinWay / MPI), buy Chinese AI chip leaders (Cambrian); avoid overly optimistic expectations for non-AI semiconductor strong recovery, keep storage focus on HBM, neutral on traditional DRAM/NAND. Time window: 2026–2027, as AI capex cycle is far from over.

Risk Warning: This note is compiled from Morgan Stanley public research reports, for internal research only, not investment advice. Market uncertainties exist, actual results may differ significantly from forecasts. Investors should exercise caution.

“Building Future AI Infrastructure—CPUs, GPUs, ASICs, Optical Modules, and Chinese Chips”

Robust Outlook for AI Semiconductors

Morgan Stanley characterizes the AI semiconductor outlook as “Strong,” driven by three demand forces: explosive killer applications, tech giants’ compute arms race, and national AI development needs. Meanwhile, the report identifies four growth constraints (budget, the U.S. energy bottleneck, Chinese chip capacity, regulation), which are fundamentally supply-side issues rather than signs of fading demand.

Long-term, three structural variables warrant vigilance:

  1. Tech inflation (rising wafer/packaging/storage costs squeezing chip design margins);

  2. AI siphoning effect (supply chain resources tilting toward AI, marginalizing non-AI semiconductors);

  3. DeepSeek effect (low-cost inference verification, accelerated domestic inference demand, domestic foundry capacity upgrade for AI GPUs). The overlay of these three forms the underlying logic for all subsequent sector judgments.

Valuation comparisons: foundries, backend, storage, IDM (integrated device manufacturers), and semiconductor equipment

Valuation comparisons: fabless, power semiconductors, FPGAs, analog chips

Semiconductor supercycle

The core conclusion is sector divergence rather than an overall recovery. Logic foundry utilization is expected to rebound to 80% in 2H26, but excluding NVIDIA AI GPUs and storage, non-AI semiconductor growth in 2026 is expected to decline significantly. Inventory reduction from the highs is a positive signal, and historical data show inventory down-cycles often coincide with rising semiconductor indices, but this recovery’s structural divergence exceeds past patterns.

AI Semiconductor Supply Chain and Niche Memory

By 2030, the global semiconductor market could reach $1.5 trillion, with half from AI semiconductors

Key long-term anchor: the global semiconductor market by 2030 may reach $1.5 trillion, with AI semiconductors contributing about $753 billion; the bullish scenario for cloud AI TAM reaches $235 billion by 2025 (mainly from NVIDIA AI GPU), with a CAGR of 38% from 2023–2030, providing a top-level market space basis for all subsequent sector valuations.

Cloud Semiconductors: A Brighter Outlook

The four major cloud providers (AWS/Google/Microsoft/Meta) increased capex by +95% YoY in Q1 2026, the single most significant demand data point. The Capex/EBITDA ratio is expected to stay around 50%, indicating sustainable willingness to expand. Aspeed, a leading cloud AI server BMC chip supplier, sees continuous upward profit-forecast revisions; its trajectory confirms that cloud demand is genuine.

Major cloud service providers’ cloud capex remains strong

MS cloud capex tracker projects global top 10 cloud providers’ capex at $685 billion in 2026, about 10% above market consensus; the historical chart of global cloud capex closely synchronized with TSMC’s capex supports the “not a short cycle” judgment; about 65% of assets are short lifecycle, implying continuous procurement needs and rigid demand.

TSMC’s announced power deployment impacts

Using rack specifications and power figures from the main customers (NVIDIA, AMD, Broadcom, AWS), the report builds bottom-up estimates of CoWoS wafer demand; NVIDIA’s Rubin NVL144 rack power of 220kW and 45 racks imply 2027 annual CoWoS demand of 136k wafers, the core number for the supply-demand tightness judgment.
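The bottom-up logic can be sketched in a few lines; every input below (rack shipments, CoWoS sites per wafer) is a hypothetical placeholder for illustration, since the report only discloses its output (~136k wafers of 2027 CoWoS demand):

```python
# Bottom-up CoWoS wafer-demand estimate, in the spirit of the report's method.
# All inputs are hypothetical placeholders, NOT the report's actual assumptions.
racks_per_year = 45_000      # assumed annual rack shipments (hypothetical)
gpus_per_rack = 144          # NVL144 naming implies 144 GPU dies per rack
gpu_sites_per_wafer = 48     # assumed CoWoS sites per wafer (hypothetical)

wafers = racks_per_year * gpus_per_rack / gpu_sites_per_wafer
print(f"Implied annual CoWoS demand: {wafers:,.0f} wafers")
```

With these placeholder inputs the result lands on the same order as the report’s ~136k figure; the exercise mainly shows how sensitive the estimate is to the sites-per-wafer and rack-count assumptions.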

Given sustained strong AI demand, TSMC may expand CoWoS capacity to 165k wafers/month before 2027

Direct CoWoS supply-side data: TSMC’s capacity expands sharply from end-2025 through end-2027 toward the 165k wafers/month target, while non-TSMC capacity (Amkor/UMC/ASE) grows from a far smaller base; NVIDIA accounts for about 59% of total CoWoS consumption and Broadcom about 20%, indicating high customer concentration and sensitivity to any single customer’s order changes.

SoIC (System on Integrated Chips) expansion will be a key focus for TSMC in the coming years

SoIC is defined as a key strategic direction for TSMC, with capacity expanding substantially from end-2025 through end-2027 and demand from NVIDIA, AMD, Apple, and Qualcomm/Broadcom all included; SoIC offers higher integration and deeper technical barriers than CoWoS, representing the second growth curve in advanced packaging, with rapid volume growth expected in 2026–2027.

TSMC may double CoWoS and SoIC capacity in 2025, a trend expected to continue into 2026

In 2026, AI computing wafer consumption could reach $27.2 billion, with NVIDIA holding a large share

A line-by-line listing of all major AI chips in 2026 (NVIDIA B300/Rubin/H200, Google TPU, AWS Trainium3, Microsoft Maia, OpenAI Nexus) with their CoWoS capacity allocation, shipment volumes, wafer consumption, and wafer value; total 2026 AI-chip wafer consumption is estimated at about $27.2 billion, dominated by NVIDIA, the most convincing bottom-up estimate of TSMC’s AI revenue scale.

HBM (High Bandwidth Memory) consumption in 2026—up to 32 billion Gb

Total HBM demand in 2026 is about 32 billion Gb, with NVIDIA’s consumption accounting for about 58%; the report lists each AI chip’s HBM specs (capacity, generation, supplier): the Google TPU series mainly consumes HBM3e 12hi, while AWS/Microsoft consume HBM3/HBM4; Hynix, Samsung, and Micron share supply, with Hynix benefiting most thanks to its HBM technology lead.
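A bottom-up view of how accelerator shipments translate into HBM bits, assuming the public HBM3e 12-hi stack spec (36 GB per stack); the shipment count and stacks-per-GPU below are hypothetical placeholders, not report figures:

```python
# Bottom-up HBM bit demand for a single accelerator line.
# Stack capacity reflects the public HBM3e 12-hi spec; shipment and
# stack-count inputs are hypothetical placeholders for illustration.
GB_TO_Gb = 8
stacks_per_gpu = 8           # assumed HBM stacks per accelerator (hypothetical)
gb_per_stack = 36            # HBM3e 12-hi: 12 x 24Gb dies = 36 GB
gpus_shipped = 5_000_000     # assumed annual shipments (hypothetical)

total_gb = gpus_shipped * stacks_per_gpu * gb_per_stack * GB_TO_Gb
print(f"Implied HBM demand: {total_gb / 1e9:.1f} billion Gb")
```

Summing this calculation across all chip lines (with each line’s actual specs and volumes) is how an aggregate figure like the report’s 32 billion Gb is built up.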

NVIDIA GB200/300 rack output estimates

NVIDIA GB200/300 server rack supply-demand assumptions

TSMC’s AI chip revenue CAGR may reach 60% between 2024 and 2029

TSMC’s AI chip revenue CAGR 2024–2029 reaches 60%, with AI revenue accounting for over 30% of total revenue in 2026; revenue components include general AI chips, custom ASICs, CoWoS packaging/testing, AI server CPUs, with customer structure: Apple 19%, NVIDIA 21%, Broadcom 11%; gross margin and EBITDA continue to expand, confirming AI’s positive contribution to TSMC’s overall profitability.

TSMC’s advanced wafer demand segmentation

Intelligent Agentic AI—Expanding CPU opportunities

AI shifts from inference to “action” phase, with CPU/GPU ratio shifting from GPU-heavy (1:12) to CPU-heavy (≥1:1), driven by API calls, code execution, multi-agent concurrency; MS estimates Agentic AI could add $32.5–$60 billion in CPU market size (by 2030), with MediaTek benefiting as an AI server CPU designer.
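The ratio-shift arithmetic can be illustrated with placeholders; the 1:12 and 1:1 attach ratios come from the report, while the installed base and CPU ASP below are hypothetical assumptions:

```python
# How the CPU:GPU attach-ratio shift translates into incremental CPU demand.
# The 1:12 -> 1:1 ratios are from the report; the installed base and ASP
# are hypothetical placeholders for illustration only.
gpu_installed_base = 10_000_000   # assumed accelerators in service (hypothetical)
cpu_asp = 2_500                   # assumed server CPU ASP in USD (hypothetical)

def cpu_tam(cpus_per_gpu: float) -> float:
    """Dollar CPU market implied by a given CPU:GPU attach ratio."""
    return gpu_installed_base * cpus_per_gpu * cpu_asp

before = cpu_tam(1 / 12)   # GPU-heavy era: one CPU per 12 GPUs
after = cpu_tam(1.0)       # agentic era: at least one CPU per GPU
print(f"Incremental CPU TAM: ${(after - before) / 1e9:.1f}B")
```

Even with modest placeholder inputs, the ratio shift alone generates an incremental TAM in the tens of billions of dollars, consistent in scale with the report’s $32.5–$60 billion range.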

AI storage causes NAND shortages; we expect NOR Flash supply-demand imbalance to persist through 2026

Shortage of DDR4 will continue into late 2026; spot prices have an upper limit

AI ASIC, CPO, and chip testing

AI Semiconductors: Now and Future—“Key Drivers”

Presenting the drivers, constraints, technical solutions, and growth perspectives of AI semiconductors in parallel; especially contrasting three growth perspectives—Inference vs. Training, Edge vs. Cloud, Custom ASIC vs. AI GPU—these comparisons serve as a mental map for understanding all subsequent sector divergence points.

Cloud service providers (CSPs), even with NVIDIA’s powerful AI GPUs, still need custom chips

According to CSP plans, more ASIC projects are coming

How does the competition between TSMC’s CoWoS and Intel’s EMIB look?

Larger package sizes are becoming a key industry trend

The jump in chip testing time from Hopper’s 350 seconds to the next-gen GPU’s 1800–2000 seconds is the core structural driver data for testing equipment; test socket pin counts rising from mobile/PC level 1500 to AI/HPC level 6000 and beyond 10,000; global testing equipment CAGR 2024–2027 projected at 35%, TSMC’s packaging roadmap shows continuous interposer expansion, jointly supporting a long-term bullish outlook for testing equipment.

Roles of Hon Precision, WinWay, and MPI in the semiconductor supply chain

Evolution of testing equipment and components: Co-packaged Optical (CPO)

Hon Precision: a key winner benefiting from longer testing times; Morgan Stanley rating: Overweight (OW)

MPI: a leader in probe card technology with CPO options; Morgan Stanley rating: Overweight (OW)

WinWay: a testing socket leader with advantages in AI packaging complexity; rating: Overweight (OW)

Chinese Semiconductors: OSAT, Compound Semiconductors, MCU, and AI GPU

Optimistic on backend equipment (ASMP), neutral on Chinese OSAT

Prefer SiC (Silicon Carbide) over GaN (Gallium Nitride): SICC (Overweight) and InnoScience (Reduce)

MCU: Bottomed but not yet recovered

Domestic AI semiconductor market size and share continue to grow

China’s AI accelerator market landscape is clear: Huawei dominates with 62%, Cambrian 14%, others below 10%; Chinese AI GPU companies’ market value continues to grow, with more IPOs upcoming, reflecting a rising market scale and active capital markets, forming the background for subsequent target analysis.

We project China’s total accessible AI GPU market (TAM) to reach $67 billion by 2030

China’s advanced process capacity expansion to meet local AI GPU demand

Recent market tracking of China’s AI GPU demand

AI chip value chain—China and US—Decoupling of AI computing

China’s infrastructure strength is shrinking perceived technological gaps

Radar charts across nine dimensions compare US-China AI infrastructure gaps: China scores close to the US in policy support, AI data center space, and software optimization (LLMs), with major gaps in front-end wafer fabrication, HBM memory, and optical networking. The report proposes a three-step strategy to compensate for per-chip compute shortfalls: multi-die packaging → larger racks and clusters → capacity expansion. Huawei’s CloudMatrix 384 A3 SuperPod exemplifies this strategy.

Inference economics: Total Cost of Ownership (TCO) and per-token cost

Domestic AI chips’ TCO is 30–60% lower than NVIDIA, with top domestic accelerators achieving comparable or better per-token inference costs; this is the core evidence supporting the view that “China’s domestic substitution is driven by both political and economic rationales,” underpinning the long-term bullish outlook on China’s AI chip sector.
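The TCO-per-token logic behind this claim can be sketched as follows; the function structure is the standard TCO framing, and every number in it (prices, power, throughput, utilization, electricity cost) is a hypothetical placeholder rather than a report figure:

```python
# Per-token inference cost from a simple total-cost-of-ownership model.
# All numeric inputs below are hypothetical placeholders for illustration.
def cost_per_million_tokens(capex_usd, years, power_kw, usd_per_kwh,
                            tokens_per_s, utilization):
    """Amortized cost per million tokens over the chip's service life."""
    hours = years * 365 * 24
    opex = power_kw * hours * usd_per_kwh          # lifetime electricity cost
    total_cost = capex_usd + opex
    tokens = tokens_per_s * utilization * hours * 3600
    return total_cost / tokens * 1e6

# A chip priced 50% lower can tolerate materially lower throughput
# (and somewhat higher power) and still match per-token cost.
reference = cost_per_million_tokens(30_000, 4, 1.0, 0.08, 2_000, 0.6)
domestic = cost_per_million_tokens(15_000, 4, 1.2, 0.08, 1_200, 0.6)
print(f"reference: ${reference:.3f}/M tokens, domestic: ${domestic:.3f}/M tokens")
```

The point of the sketch is structural, not numeric: when capex dominates TCO, a 30–60% hardware price discount lets a slower chip reach comparable or better per-token cost, which is the economic half of the domestic-substitution thesis.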

Order status and potential orders for domestic AI accelerators

TPS (Tokens per Second)—Performance analysis

Lower prices enable domestic chips to achieve stronger performance per dollar

The “Ten Dragons” of Chinese AI GPGPU companies. Focus on Cambrian, Mu Xu, TianShu Zhixin

Comparison of Cambrian, Mu Xu, and TianShu Zhixin (Iluvatar)

A horizontal comparison of the three most prominent Chinese AI chip companies: Cambrian (SMIC 7nm ASIC, major-client lock-in, the only one profitable), MetaX Mu Xu (SMIC 12nm GPGPU, state-fund holdings, a significant technology gap), and TianShu Zhixin (Iluvatar; TSMC 7nm GPGPU, supply-chain resilience). Weighing profitability, customer structure, and process node, the implicit conclusion is that Cambrian has the highest certainty.
