It’s easy to forget how quickly ZTE collapsed in 2018. One ban from the United States, and a company with 80,000 employees could do nothing: no Qualcomm chips, no Google license, game over. Eight years later, China’s AI industry is telling a completely different story.
The truth is, chips were never really the bottleneck. CUDA is the real obstacle: the computing platform NVIDIA has been building for nearly two decades, now used by millions of developers worldwide. The entire ecosystem is a flywheel that is almost impossible to stop. But this time, Chinese AI companies didn’t choose direct confrontation. They took a more pragmatic route.
Since late 2024, the strategy has shifted. Mixture-of-Experts (MoE) models became the focus: models that no longer activate the entire network for each task. Look at DeepSeek V3: 671 billion parameters, but only about 5.5% of them active for any given token. The training cost? Roughly $5.6 million, against an estimated $78 million for GPT-4, about one-fourteenth the price.
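These figures are easy to sanity-check. A quick sketch, using the 37 billion activated-parameter figure from DeepSeek-V3’s public model card (the text above only gives the resulting ratio) alongside the costs quoted above:

```python
# Back-of-envelope check of the sparse-activation and cost figures.
total_params = 671e9   # DeepSeek-V3 total parameters
active_params = 37e9   # parameters activated per token (public model card figure)
print(f"active fraction: {active_params / total_params:.1%}")  # 5.5%

deepseek_cost = 5.6e6  # USD, reported DeepSeek-V3 training cost
gpt4_cost = 78e6       # USD, widely cited GPT-4 training estimate
print(f"cost ratio: about 1/{gpt4_cost / deepseek_cost:.0f}")  # about 1/14
```

The sparsity is the whole point of MoE: per-token compute scales with the active parameters, not the total, which is how a 671B-parameter model trains at a fraction of a dense model’s cost.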
And the API pricing is a genuine game-changer. Input tokens cost $0.028 to $0.28 per million, versus $5 for GPT-4: by those figures, roughly 18 to 180 times cheaper. This isn’t just a marketing move; it’s a structural shift in how the AI industry operates. In February, weekly usage of Chinese models on OpenRouter grew 127% in just three weeks, overtaking US models.
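The price ratio follows directly from the numbers quoted above (taking them at face value; published API prices change frequently):

```python
# Price-ratio check using the per-million-input-token figures quoted above.
deepseek_low, deepseek_high = 0.028, 0.28  # USD per 1M input tokens (quoted range)
gpt4_price = 5.0                           # USD per 1M input tokens (quoted figure)
print(f"{gpt4_price / deepseek_high:.0f}x to {gpt4_price / deepseek_low:.0f}x cheaper")
# 18x to 179x cheaper
```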
But the real breakthrough is in infrastructure. In Jiangsu, a 148-meter server production line was built around Loongson 3C6000 and TaiChu Yuanqi chips, going from agreement to operation in just 180 days. And clusters of domestic chips are now starting to train entire large models, not just serve inference. That is a qualitative shift. In January, GLM-Image became the first SOTA image generation model trained entirely on domestic chips. In February, China Telecom completed full training of a billion-parameter model in Shanghai on a domestic compute pool.
In Huawei’s Ascend ecosystem, there are now 4 million developers, over 3,000 partners, and 43 major models pre-trained with Ascend. The FP16 computing power of Ascend 910B has reached the level of NVIDIA A100. Not perfect yet, but it works. And ecosystem building shouldn’t wait for perfection—real business demand should drive development.
The energy situation adds another layer to the advantage. Virginia and Georgia have paused new data center permits due to power constraints. By 2030, US data centers are expected to consume 426 terawatt-hours, possibly over 12% of total electricity. China, meanwhile, generates 10.4 trillion kilowatt-hours a year, 2.5 times the US figure. And industrial electricity in western China costs $0.03 per kilowatt-hour, a quarter to a fifth of the $0.12–$0.15 typical in the US.
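The electricity price spread follows from the rates quoted above:

```python
# Electricity price spread from the figures above.
cn_price = 0.03               # USD/kWh, western China industrial rate (quoted)
us_low, us_high = 0.12, 0.15  # USD/kWh, US range cited above
print(f"China pays 1/{us_low / cn_price:.0f} to 1/{us_high / cn_price:.0f} of US rates")
# China pays 1/4 to 1/5 of US rates
```

For power-hungry training clusters, a 4–5x gap in the per-kWh rate compounds over years of operation, which is why the article treats cheap power as a structural advantage rather than a footnote.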
So while America worries about electricity, China quietly builds computing power and sells tokens to the global market. DeepSeek’s user distribution tells the story: 30.7% China, 13.6% India, 6.9% Indonesia, 4.3% US, 3.2% France. 26,000 companies worldwide use it, with 3,200 institutions on the enterprise version. In China it holds 89% market share; in other countries, 40–60%.
The parallel to Japan’s semiconductor tragedy is clear. In 1986, Japan signed the US-Japan Semiconductor Agreement under pressure and surrendered control of its own industry; its market share fell from 51% to 7%. The lesson is simple: if you don’t build your own ecosystem, you lose your industry.
Today in AI, China has chosen a harder path—from extreme algorithm optimization, to scaling local chips from inference to training, to developing 4 million developers in the Ascend ecosystem, and exporting tokens globally. Every step involves real money spent, real short-term losses. But this is the cost of independence.
On February 27, three domestic AI chip companies reported results. Cambricon: revenue up 453%, its first profitable year. Moore Threads: revenue up 243%, but a $1 billion loss. Muxi: revenue up 121%, but an $8 billion loss. Fire and ice at once. But the market needs alternatives to NVIDIA, and this is a geopolitical opportunity that cannot be missed. Every loss is an investment in independence, and that is real progress.