So, I found it interesting to follow what TSMC just released about the chip situation. Their CEO was very straightforward: this AI chip shortage isn't a passing problem; it will stay with us at least until 2027. And that's not speculation, it's what the company is seeing in practice.
The numbers speak for themselves. Q1 revenue hit $35.9 billion, up more than 40% year over year. More telling still: capital expenditures were pushed to $44 billion, essentially the most the company can spend. CEO Wei Jhe-jia was very clear on the conference call: demand for AI is "extremely strong," but production capacity can't keep up. He repeated several times that "there are no shortcuts": building a new fab takes two to three years, no matter what.
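A quick sanity check on those growth figures (using only the numbers from the text; the variable names are just for illustration):

```python
q1_revenue = 35.9e9   # USD, Q1 revenue per TSMC's release
yoy_growth = 0.40     # "over 40% year over year"

# Back out the implied year-ago quarter from the stated growth rate
implied_prior_q1 = q1_revenue / (1 + yoy_growth)
print(f"implied year-ago Q1: ${implied_prior_q1 / 1e9:.1f}B")  # → $25.6B
```

In other words, TSMC added roughly $10 billion of quarterly revenue in a single year while its capex was already at its ceiling.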
What's most striking is that TSMC is doing something rare: expanding 3-nanometer capacity globally. New fabs are planned in Taiwan, Arizona, and Japan, but none of them reaches mass production before 2027 to 2028. Meanwhile, demand keeps exploding.
And if you think this is just a manufacturing problem, look at what's happening in the GPU rental market. SemiAnalysis data shows the hourly rental price for H100s rose nearly 40% in six months, from $1.70 per hour in October to $2.35 in March. Some suppliers are reporting further increases of 20% to 30% on top of that. The entire H series is sold out, and all Blackwell capacity has already been reserved through September 2026.
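Checking the rental-market figure directly (this is just the arithmetic on the numbers quoted above, not a pricing model):

```python
oct_rate = 1.70   # USD per H100 GPU-hour, October
mar_rate = 2.35   # USD per H100 GPU-hour, March

# Percentage increase over the six-month window
six_month_increase = (mar_rate - oct_rate) / oct_rate
print(f"six-month increase: {six_month_increase:.1%}")  # → 38.2%, i.e. "nearly 40%"
```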
The consequence? Chinese cloud providers have started passing the increases downstream. Tencent Cloud, Alibaba Cloud, and Baidu Smart Cloud have already raised prices multiple times this year: Tencent announced hikes of up to 4x for some models, Alibaba went up by as much as 34%, and Baidu by 5% to 30%. More increases are coming.
The logic is simple: the four big cloud providers (Google, Microsoft, Meta, Amazon) plan to spend around $100 billion in capex in 2026, an increase of over 60% versus 2025. NVIDIA controls 85-90% of the GPU market, so practically all of that money flows to NVIDIA chips, which TSMC manufactures. On the supply side, TSMC can spend at most $44 billion, and even then its new fabs take years to start producing.
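A back-of-the-envelope sketch of that gap, using only the figures stated in the text (the NVIDIA share band and the 60% growth rate are taken as given, not independently verified):

```python
capex_2026 = 100e9           # combined Google/Microsoft/Meta/Amazon capex plan, 2026
growth_vs_2025 = 0.60        # "over 60%" growth versus 2025
capex_2025 = capex_2026 / (1 + growth_vs_2025)

nvidia_share = (0.85, 0.90)  # NVIDIA's stated share of the GPU market
to_nvidia = [capex_2026 * s for s in nvidia_share]

tsmc_cap = 44e9              # TSMC's capex ceiling

print(f"implied 2025 capex: ${capex_2025 / 1e9:.1f}B")                          # → $62.5B
print(f"spend routed to NVIDIA/TSMC: ${to_nvidia[0] / 1e9:.0f}B-${to_nvidia[1] / 1e9:.0f}B")
print(f"TSMC supply-side capex cap:  ${tsmc_cap / 1e9:.0f}B")
```

Roughly $85-90 billion of demand funnels toward chips made by a supplier whose own expansion budget tops out at $44 billion, which is the structural gap the next paragraph describes.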
This gap between supply and demand is structural. The chip shortage isn't just a temporary bottleneck; it's a reality that will dominate the AI infrastructure market for the next two years. For those buying capacity, prices are only likely to go up. For TSMC, NVIDIA, and HBM memory manufacturers, it's a virtually guaranteed growth period.