OpenAI Turns Left, DeepSeek Turns Right
On April 24, 2026, DeepSeek V4 Preview Edition was officially released.
This domestically developed large model ships in two sizes, a 1.6-trillion-parameter Pro version and a 284-billion-parameter Flash version, and its core selling point is aimed squarely at the market: a million-token context window is now a standard free feature across all official services.
Almost simultaneously, across the Pacific, OpenAI launched GPT-5.5: more raw compute, richer agent capabilities, and a much higher price.
A “million-token context,” in plain language, means AI is no longer a “goldfish” that remembers only your last few sentences, but a “superbrain” that can swallow all three volumes of “The Three-Body Problem” at once, understand a two-hour movie in a second, and even spot the typos.
For example, you can throw all your company’s contracts, emails, and financial reports from the past three years at V4, and it will help you find the breach of contract clause hidden in the attachment on page 47. In the past, this required a team of lawyers; now, it’s free.
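The arithmetic behind the "three volumes at once" claim is easy to check. A minimal sketch, using a common rule of thumb of roughly 4 characters per token for English text and an assumed novel length (neither figure comes from the article):

```python
# Back-of-envelope check: do three novels fit in a million-token window?
# Both constants below are illustrative assumptions, not measured values.

CHARS_PER_TOKEN = 4          # rough rule of thumb for English text
NOVEL_CHARS = 1_200_000      # an assumed length for one long novel

def tokens_needed(num_chars):
    """Estimate how many tokens a text of num_chars characters occupies."""
    return num_chars // CHARS_PER_TOKEN

three_novels = tokens_needed(3 * NOVEL_CHARS)
print(three_novels, three_novels <= 1_000_000)
```

Under these assumptions, three full novels land comfortably under the million-token ceiling, which is what makes the "dump three years of contracts at once" workflow plausible.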
GPT-5.5 prices this kind of superbrain openly: the standard version charges $5 per million input tokens and $30 for output; while the high-end GPT-5.5 Pro version is sold at a staggering $30 per million input tokens and $180 for output.
But according to DeepSeek’s official pricing, inputs that hit the cache in V4-Flash cost only 0.2 RMB per million tokens, and outputs cost 2 RMB; even V4-Pro, which rivals top closed-source models, costs 1 RMB for cache hits, 12 RMB for cache misses, and only 24 RMB for outputs.
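Putting the two price lists side by side makes the gap concrete. A minimal sketch using the per-million-token prices quoted above; the job size and the 7.1 RMB/USD exchange rate are illustrative assumptions:

```python
# Cost of one hypothetical job under the article's quoted prices.
# The job size and exchange rate are assumptions for illustration only.

RMB_PER_USD = 7.1  # assumed exchange rate

def job_cost(input_tokens, output_tokens, price_in, price_out):
    """Cost of one request, given per-million-token input/output prices."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# A hypothetical contract-review job: 1M tokens in, 50k tokens out.
IN_TOK, OUT_TOK = 1_000_000, 50_000

gpt55_usd = job_cost(IN_TOK, OUT_TOK, 5.0, 30.0)        # GPT-5.5 standard, USD
v4_flash_rmb = job_cost(IN_TOK, OUT_TOK, 0.2, 2.0)      # V4-Flash cache hit, RMB
v4_flash_usd = v4_flash_rmb / RMB_PER_USD

print(f"GPT-5.5:  ${gpt55_usd:.2f}")
print(f"V4-Flash: ${v4_flash_usd:.4f} (~{gpt55_usd / v4_flash_usd:.0f}x cheaper)")
```

On these assumptions the same job costs dollars on GPT-5.5 and fractions of a cent on V4-Flash, a difference of roughly two orders of magnitude.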
People always think that the AI competition between China and the US is a race of model capabilities, but in reality, it has long become a divergence of business models.
OpenAI was once the dragon-slaying youth shouting “benefiting all mankind,” but now it’s selling expensive, polished real estate; meanwhile, DeepSeek is turning near-free computing power into water, electricity, and coal.
When OpenAI becomes a savvy contractor, why does DeepSeek spend almost nothing to turn top-tier AI into free tap water? What hidden currents are behind this shift in pricing power?
Ulanqab’s Cold Wind
The decisive battle for large models takes place in a data center in Inner Mongolia, at minus 20°C.
Just before V4’s release, DeepSeek’s job listings suddenly included two unexpected positions: Senior Data Center Delivery Manager and Senior Operations Engineer, each with a maximum monthly salary of 30k RMB on a 14-month pay scheme, stationed in Ulanqab, Inner Mongolia.
This from a company once proud of its “minimalist, pure, algorithm-only” approach, an asset-light operation.
In the past two years, their proudest label was “using less than $6 million in training costs to produce DeepSeek-R1,” which caused a crash in the US stock AI sector.
But the enormous computing power V4 requires, combined with tightening US export controls on AI chips, shattered this asset-light idyll.
In 2025, the US Department of Commerce further tightened export restrictions on AI chips to China. Nvidia’s H100 and H800 are already cut off, and even the downgraded H20 has been added to the control list. This means DeepSeek’s future expansion of computing power must fully shift to Huawei’s Ascend ecosystem.
In the V4 release notes, the official statement clearly states that the new model is “powered by Huawei Ascend,” and it was also revealed that after the batch release of Ascend 950 supernodes in the second half of the year, the Pro version’s price will be significantly reduced.
This shift isn’t something that can be achieved by changing a few lines of code or adaptation layers; it requires starting from scratch, building a complete domestic computing infrastructure at the physical level.
The trillion-parameter scale of V4 (with pretraining data reaching 33 trillion tokens), plus the massive computational demand of a million-context window, means you need thousands of Ascend chips, data centers capable of housing these chips, power grids supplying electricity to these data centers, and a team maintaining these machines in minus 20°C cold winds without downtime.
Liang Wenfeng has taken his methodology from the world of bits into the realm of atoms.
Computing power ultimately takes root in reinforced concrete and power lines.
On one side are AI elites in Silicon Valley, dressed in plaid shirts, coding and sipping pour-over coffee; on the other are operations personnel wrapped in military coats, guarding data centers deep in the Inner Mongolian plains. This disparity forms the underlying tone of China’s resistance to the US-led computing-power blockade.
Transforming from a pure algorithm company to a “heavy asset” player building its own data centers means DeepSeek has bid farewell to the guerrilla era of “small efforts, big miracles,” and has donned the armor of heavy infantry.
This transformation comes at a huge cost: building data centers, buying chips, and laying cables are each a bottomless pit. More importantly, the heavy-asset model makes operating costs rise exponentially, while DeepSeek’s commercial revenue remains extremely limited.
This pricing strategy is essentially sacrificing profit to build an ecosystem, using free access to gain influence over infrastructure.
A tough guy who once refused all giants and relied on quantitative trading to subsidize AI with his own money—how long can he hold out in this bottomless pit?
$20 Billion Compromise
In April, DeepSeek announced its first external funding round, with a valuation of up to 300 billion RMB (about $44 billion), planning to raise 50 billion RMB, including 30 billion RMB from outside investors. Rumors of Tencent and Alibaba competing to invest are rampant.
Many assume this is because building data centers is too expensive. In fact, the core driver of DeepSeek’s fundraising isn’t just buying GPUs; it’s that “pure technological ideals” are hard to hold onto in the face of the giants’ talent poaching.
During the critical sprint of V4 development, major domestic companies launched aggressive targeted headhunting of DeepSeek’s core team. Since late 2025, at least five key members have left. The original model’s lead author Wang Bingxuan went to Tencent; core contributor Luo Fuli was poached by Lei Jun with a 10-million-yuan annual salary to Xiaomi; and core author Guo Daya joined ByteDance’s Seed team.
This is the market at its most naked: when your competitors hold unlimited ammunition and you are running on your own funds, talent becomes your weakest point.
You can ask talented people to work overtime for the dream of changing the world, but when big companies slap a check with tens of millions of cash and stock options on the table, promising unlimited computing resources, the pricing power of idealism is no longer in your hands.
Liang Wenfeng’s dilemma is actually a common predicament for every entrepreneur trying to build a “slow company” in China. In a market where giants can buy anyone with money, the route of “no fundraising, no commercialization, just focusing on technology” is extremely luxurious. Its cost is that you must accept your team being potentially bought out at any time.
This 300 billion valuation funding isn’t Liang Wenfeng’s capitulation to capital; it’s a war of attrition to protect V4’s R&D team, a human redemption battle against the giants. He must sit at the capital’s table, using real money to give the remaining team members enough reason to stay.
The possible entry of Tencent and Alibaba means DeepSeek is no longer the lonely, purely idealistic tech startup. It has become a company with external shareholders and commercial pressures. This transformation will inevitably dilute the “research freedom” Liang Wenfeng once took pride in—freedom from external interference.
But he had no choice.
When idealism is forced to wear the armor of capital, where does the confidence to keep this massive machine running, and to sustain the Ulanqab data center day and night, come from?
Another Kind of “Big Effort, Big Miracle”
The answer isn’t in algorithms, but in the power grid.
Silicon Valley’s current anxiety isn’t about chip shortages, but about electricity shortages. Elon Musk is building a massive data center in Memphis, Tennessee; OpenAI is even discussing investing in nuclear power plants; Microsoft announced restarting the Three Mile Island nuclear plant in Pennsylvania to power AI data centers. The limit of computing power is electricity—a cold, hard physical fact.
In the US, the power consumption of a large AI data center is comparable to a medium-sized city’s daily usage. But the US power grid is an aging network built in the 1950s, slow to expand, regionally fragmented, and unable to keep pace with the rapid expansion of AI computing needs.
Supporting China’s AI race against the US are not only those algorithm geniuses earning tens of millions a year but also the silent high-voltage transmission lines.
The reason Ulanqab’s data center can rise from the ground is due to Inner Mongolia’s abundant green electricity and China’s top-ranked power grid dispatching capabilities. Public data shows Ulanqab’s green energy capacity reaches 16k kW, accounting for about 65.9%, with local low-cost green electricity about 50% cheaper than in eastern regions. Plus, with an average annual temperature of only 4.3°C and nearly 10 months of natural cooling, energy savings of 20% to 30% are achievable.
When V4 runs, what truly feeds it is China’s vast and extremely cheap electrical infrastructure. This is another dimension of “big effort, big miracle.”
Here’s a stark and fascinating historical contrast: In 1986, the US used the Japan-U.S. Semiconductor Agreement to crush Japan’s semiconductor industry, forcing Japan to open markets and accept price controls. Japan’s global market share in semiconductors fell from 40% in 1986 to 15% in 2011. Japan took thirty years to recover.
Today, the US is trying to use the same logic to lock down China’s AI—blocking chips, restricting computing power, cutting off supply chains. But China’s counterattack path is completely different.
Japan’s failure was due to its semiconductor industry’s heavy reliance on US technology licensing and market access. Once cut off, it lost its ability to survive independently. China’s AI counterattack starts from the very bottom—rebuilding physical infrastructure, making chips domestically, building data centers, laying power grids, and open-sourcing models.
This is an extremely heavy, costly route, but also one that is very hard to “kill.” While Silicon Valley builds magnificent Babel towers in the cloud, China digs trenches in the soil.
If cloud-based computing power battles are a brutal, heavy-asset consumption war, beyond building data centers in Inner Mongolia and laying cables, is there another way to escape cloud dominance?
Escaping the Cloud
As Silicon Valley giants build ever larger data centers, even planning trillion-dollar-scale computing clusters like OpenAI, China’s counterattack has quietly shifted underground.
The ultimate weapon against US computing power blockade isn’t creating chips more powerful than H100, but embedding large models into everyone’s smartphones.
Since we can’t outgun heavy firepower in cloud data centers, let’s bring the battlefield back to 1.4 billion smartphones and edge devices. This is a classic guerrilla tactic, and a highly resistant one to blockade—you can ban exports of high-end GPUs, but you can’t confiscate every phone in China.
In 2026, driven by the computing-power anxiety DeepSeek sparked, Chinese smartphone makers Xiaomi, OPPO, and vivo launched a frantic “edge-side shift.” They no longer treat phones merely as displays for cloud API calls; through extreme model distillation and compression, they have squeezed a miniature superbrain into domestic phones costing a few thousand yuan.
The core of this tech route is “distillation.” Simply put, it’s training a small model (“student”) to mimic the thinking of a large model (“teacher”), so the small model learns the teacher’s “way of thinking” rather than memorizing all its “knowledge.”
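The "student mimics teacher" idea can be sketched in a few lines: the student is trained to match the teacher's temperature-softened output distribution, typically via a KL-divergence loss. A toy pure-Python illustration; real pipelines use frameworks like PyTorch, and the logits here are made up:

```python
# Toy sketch of a knowledge-distillation loss: the student matches the
# teacher's *softened* next-token distribution, not just the top answer.
# All logits below are invented for illustration.

import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; T > 1 softens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that roughly tracks the teacher incurs a small loss;
# a mismatched student incurs a larger one.
teacher = [2.0, 1.0, 0.1]
close_student = [1.8, 1.1, 0.2]
bad_student = [0.1, 0.1, 3.0]
print(distill_loss(teacher, close_student) < distill_loss(teacher, bad_student))
```

Training against the softened distribution is what transfers the teacher's "way of thinking": the relative probabilities of wrong-but-plausible answers carry information that hard labels throw away.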
Through extreme distillation and quantization compression, a large model that originally required hundreds of GPUs to run has been shrunk to only 1.2GB to 2.5GB, capable of running smoothly on a single mobile chip.
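The 1.2GB to 2.5GB figure is consistent with simple memory arithmetic: parameter count times bits per weight. The parameter counts below are illustrative assumptions, not DeepSeek's published specs:

```python
# Back-of-envelope weight-memory arithmetic behind the 1.2-2.5 GB claim.
# Parameter counts are illustrative assumptions, not published specs.

def model_size_gb(params_billion, bits_per_weight):
    """Approximate size of the weights alone, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 7B model in 16-bit floats needs ~14 GB -- beyond a phone's RAM.
print(f"7B @ 16-bit: {model_size_gb(7, 16):.1f} GB")
# Distilled to ~3B parameters and quantized to 4 bits, it fits the range.
print(f"3B @ 4-bit:  {model_size_gb(3, 4):.1f} GB")
```

Distillation cuts the parameter count and quantization cuts the bits per parameter; only the combination brings a useful model under a phone's memory budget.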
Apps like MNN Chat for mobile AI already enable users to run DeepSeek R1 distilled models locally on their phones. The significance of edge AI is that you don’t need a constant 5G connection, nor do you have to pay $100 monthly to Silicon Valley giants. The large model is in your pocket, running offline, without spending a penny on cloud computing.
Since I can’t build a giant boiler room for centralized heating, I’ll give each household a small stove.
Of course, edge AI isn’t perfect. Limited by the computing power and memory of mobile chips, the capabilities of edge models are far inferior to those of large cloud models. It can help you write emails, translate texts, summarize articles, but if you want it to derive a complex mathematical theorem or analyze a hundreds-page legal contract, it still struggles.
But that’s enough. For most ordinary people, the AI they need isn’t a superbrain capable of deriving mathematical theorems, but a “personal assistant” that helps handle daily chores.
When large models become extremely cheap, even pocket-sized, how will they change the corners of the world forgotten by Silicon Valley?
Digital Equality in the Global South
If you sit in a Manhattan glass office with a panoramic view, you might think GPT-5.5 priced at $100 is worth it because it can write a perfect M&A report in a second.
But if you stand in a cornfield in Uganda, East Africa, facing withered crops due to climate anomalies, no one can afford the $100 subscription fee because Uganda’s per capita monthly income is less than $150.
Silicon Valley giants discuss how to dominate the world with AI, but Ugandan farmers and poor students in Southeast Asia, thanks to DeepSeek’s open source, are entering the digital age for the first time.
GPT-5.5 serves those who can pay, and its corpus is almost entirely in English. If you ask it questions in Swahili or Javanese, it answers haltingly, and consumes several times more tokens than in English. Silicon Valley giants, citing “low commercial return,” have abandoned these marginal markets.
Meanwhile, China’s open-source models have become the digital infrastructure for the Global South.
In Uganda, local NGO Sunbird AI used a Chinese open-source model, fine-tuned into the Sunflower system, expanding local language support from 6 to 31 languages. This system is now deployed in Uganda’s agricultural extension system, sending planting advice to farmers in Swahili.
In Malaysia, tech companies fine-tuned open-source bases into AI models compliant with Islamic law, supporting Malay and Indonesian languages, and ensuring outputs meet Muslim cultural and religious standards. From Indonesia’s digital ID systems to Kenya’s Swahili medical Q&A, Chinese technology is penetrating the social fabric of these countries.
According to data released early 2026 by the world’s largest AI model API aggregator platform OpenRouter, Chinese AI models’ token consumption on the platform surpassed that of US competitors for the first time. In a certain week, the top 10 models globally consumed 87 trillion tokens, with Chinese models accounting for about 61%.
Open source has broken the US monopoly on AI discourse, allowing resource-scarce developing countries to leap over the digital divide. This isn’t a grand narrative of US-China rivalry; it’s the real “rural encircling the city” in the AI era.
China’s open-source AI strategy is objectively becoming an extremely effective form of “soft power” export. As Silicon Valley giants build high walls in the cloud, trying to become the new era’s digital landlords, those “tech refugees” who can’t pay rent finally find their own spark in open source and edge-side soil.
Tap Water
Technology should never be a luxury reserved for the elite.
Silicon Valley has built exquisite, heavily guarded luxury apartments, open only to VIPs. But we’ve laid a pipeline of tap water reaching every household.
This pipeline starts in data centers in Inner Mongolia at minus 20°C, and runs through the roar of ultra-high-voltage transmission lines and the war over a 300-billion-RMB valuation. Every segment is heavy, expensive, and filled with forced compromises. Liang Wenfeng once wanted to build a purely technical company, but reality pushed him to build data centers, seek funding, and go toe-to-toe with giants. He had no choice, because he chose the harder path: not making AI a luxury, but turning it into tap water.
And the endpoint of this pipeline is in a domestic smartphone costing a few thousand yuan, in the rough fingers of Ugandan farmers, in the lives of ordinary people eager to cross the digital divide.
No matter how high the walls of computing power are built, they cannot stop the flow of tap water to the lowlands.