China Securities (CSC Financial): AI Computing Power and Commercial Spaceflight Enter an Industry Acceleration Period
A research report from China Securities (CSC Financial) points out that AI computing power and commercial spaceflight are entering an industry acceleration period. On the computing side, the evolution of applications is profoundly reshaping infrastructure: agents are shifting computational loads from GPU-intensive to CPU-intensive, and the CPU-to-GPU ratio in data centers is expected to rise significantly; explosive demand for AI computing power, combined with memory price increases and capacity shortages, has driven server CPU shortages and price hikes this year; and the pressure to cut large-model inference costs is accelerating industry giants' deployment of ASICs, steering the industry toward heterogeneous GPU + ASIC collaboration. On the space side, with the 2026 Space Day as a catalyst and multiple reusable rockets undergoing intensive flight validation, the added launch capacity will accelerate satellite internet constellation deployment, pushing commercial spaceflight into a stage of high-quality development.
The full text is as follows:
Focus on Inference Computing Power and Opportunities in Commercial Space Development
Agents are shifting computational loads from GPU-intensive to CPU-intensive, and the CPU-to-GPU ratio in data centers is expected to rise significantly. Traditional large language model (LLM) inference follows a single request-response pattern, but intelligent agents involve complex multi-step inference cycles: observing the environment, reasoning, making decisions, executing actions, receiving feedback, and more, so a single agent task can involve dozens of LLM calls or more. As agents grow in number and complexity, CPUs will carry an increasingly heavy share of the load. In the medium to long term, agentic AI will generate enormous general-purpose computing demand. As the proportion of high-complexity agent tasks rises, the CPU-to-GPU ratio in AI data centers is expected to shift from the current 1:8 toward 1:4, and eventually to between 1:2 and 1:1, significantly increasing market demand for CPUs.
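To make the projected ratio shift concrete, here is a back-of-the-envelope sketch. The ratios come from the report; the GPU fleet size is a made-up illustration, not a figure from the source:

```python
# CPU:GPU ratios projected in the report, expressed as GPUs per CPU.
ratios = {"current": 8, "near-term": 4, "future": 2}

gpus = 100_000  # hypothetical GPU fleet size (illustrative assumption)

for stage, gpus_per_cpu in ratios.items():
    cpus = gpus // gpus_per_cpu
    print(f"{stage} (1:{gpus_per_cpu}): {cpus:,} CPUs")
```

Moving from 1:8 to 1:2 quadruples the CPU count attached to the same GPU deployment, which is the mechanism behind the demand increase the report describes.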
The explosive growth in AI computing power demand, combined with memory price increases and capacity shortages, has jointly driven server CPU shortages and price hikes this year. From late 2025 to early 2026, CPU price increases have followed a clear progression, spreading from consumer-grade to enterprise-grade parts, mirroring the dynamic seen in memory. Growing AI computing demand has caused ongoing CPU shortages in two ways: demand for CPUs themselves is rising, and the raw materials and fab capacity that CPUs depend on are being heavily absorbed by GPU manufacturers. Overall CPU demand remains strong, but component shortages mean server shipments are still relatively slow, leaving a backlog of unfulfilled orders. Coupled with rapidly rising agent-driven demand, short-term CPU shortages are unlikely to ease, and price hikes will continue in the near term.
Global tech giants are accelerating CPU deployment, underscoring the growing importance of CPUs in AI infrastructure. In March, Nvidia began selling its Vera CPU as a standalone product, positioning it as a processor designed for the agentic AI and reinforcement learning era. Arm also launched its first self-developed CPU, the Arm AGI CPU, in March, marking a historic restructuring of its business model: shifting from licensing instruction sets and reference core IP to supplying chips directly to cloud providers and server manufacturers. In response to these shifts within the Arm ecosystem, CPU incumbents Intel and AMD are leveraging their deep experience in complex instruction set architectures and advanced packaging to build defenses through heterogeneous computing and open ecosystems. Major cloud providers such as AWS, Google, and Microsoft are also accelerating in-house development and pushing into the server CPU market.
AI large models are shifting from training to inference, and token cost is becoming the core bottleneck constraining AI business expansion. While traditional general-purpose GPUs (such as Nvidia's) dominate the software ecosystem, the performance-cost gap in AI inference is increasingly squeezing large-model service providers. ASIC chips, with better energy efficiency under specific workloads, highly targeted customization, and the elimination of redundant computation, are becoming the preferred route to significantly lower per-token inference costs. According to Marvell's forecast, the global AI ASIC market will jump from $6.6 billion in 2023 to $55.4 billion in 2028, a compound annual growth rate of 53%. In the medium to long term, as AI applications scale, the logic of infrastructure buildout will shift from training power to inference efficiency, greatly boosting demand for ASIC chips.
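As a quick sanity check, the 53% growth rate implied by the two endpoints in the Marvell forecast can be verified with the standard CAGR formula (the dollar figures are taken directly from the report's citation; nothing else is assumed):

```python
# Compound annual growth rate implied by the cited forecast:
# $6.6B in 2023 growing to $55.4B in 2028, i.e. over 5 years.
start, end, years = 6.6, 55.4, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 53.0%
```

The endpoints and the stated 53% CAGR are mutually consistent.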
Cost reduction and efficiency gains, combined with supply-chain security concerns, are prompting leading global AI companies to diversify their AI chip sources. Facing the ever-tighter software-hardware coupling and strong ecosystem lock-in of general-purpose GPU vendors, more large customers prefer decoupled hardware and software solutions to ease procurement concerns. On April 14, Meta extended its custom AI chip (MTIA) partnership with Broadcom to 2029, planning to deploy several gigawatts of computing power built on 2nm process technology; meanwhile, OpenAI, Google, AWS, and others are ramping up joint development with external ASIC developers such as Broadcom and Marvell. This confirms that tech giants are moving away from reliance on a single general-purpose GPU supplier, with compute deployment shifting from past monopolies toward heterogeneous collaboration: general-purpose GPUs as the mainstay, supplemented by self-developed or customized ASICs.
Many AI chip startups are choosing the ASIC route. Over 60% of AI chip startups currently opt for ASIC development, and three main differentiated strategies have emerged. First, targeting extreme scenarios: companies such as Cerebras abandon general-purpose approaches and build ASICs for ultra-large-scale training or ultra-low-latency inference, filling gaps left by general-purpose chips. Second, deep scenario-specific optimization and technical refinement: rather than chasing peak general-purpose performance, these firms match vertical-industry needs (e.g., solving storage bottlenecks) to reach scalable profitability with lower ecosystem and customer-migration costs. Third, ecosystem-binding strategies: leveraging resources from traditional x86 players or major tech giants for customized development, becoming a complement within their ecosystems. As heterogeneous collaboration deepens, the importance and market share of ASICs in AI infrastructure are expected to keep rising.
The 2026 China Space Day is approaching, putting the focus on high-quality development in commercial spaceflight. On April 17, the China National Space Administration held a press conference for the 2026 China Space Day, to be held on April 24 in Chengdu, Sichuan Province. Looking back at 2025, China conducted 92 space launches, a 35% increase year-over-year; in commercial spaceflight, construction of China's satellite internet systems accelerated, large-scale constellation production lines advanced rapidly, and the Zhuque-3 and Long March 12A reusable rockets conducted initial flight tests. Looking ahead to 2026, China's space missions will continue at a high tempo, with multiple reusable rockets, including Zhuque-3, expected to complete flight validation soon. The increased rocket supply is expected to further accelerate satellite internet system construction, with high-level safety assurance supporting high-quality development in commercial spaceflight.
Risk warnings: (1) Macroeconomic downturn risk: the computer industry spans numerous sectors; under macroeconomic pressure, if industry IT spending falls short of expectations, demand will be directly affected. (2) Accounts receivable bad-debt risk: most computer companies operate on project-based contracts that require acceptance before payment; longer payment cycles from downstream clients may increase bad debts and lead to further asset impairment losses. (3) Intensifying industry competition: demand in the computer industry is relatively certain, but increased supply-side competition may reshape the industry structure. (4) Changes in the international environment: escalating international trade frictions and US pressure on Chinese technology may affect companies with a high proportion of overseas revenue.
(Source: Cailian Press)