I was reading about ZTE's history a moment ago, and an odd thought occurred to me: the chip war today is not the same as it was eight years ago.

Do you remember the ZTE story? In April 2018, the U.S. Department of Commerce issued a very simple ban: no chips, no software, nothing from America. A company with 80,000 employees and revenues exceeding 100 billion yuan stopped operating in a single day. Without Qualcomm chips, no base stations; without Google's Android license, no phones. Everything collapsed. It took only 23 days for ZTE to admit that its core operations were no longer possible. It paid $1.4 billion to stay afloat.

But this time, the war over AI chips is unfolding in a completely different way.

When the U.S. imposed the first export restrictions on NVIDIA A100 and H100 chips in October 2022, everyone thought that was the end. Then came the second round in October 2023, followed by the third in December 2024. Escalation continues; the blockade tightens. But this time, Chinese companies didn’t give up — they chose a harder path.

The real problem isn't the chips themselves, but something called CUDA. This is a parallel-computing platform NVIDIA has developed since 2006, and it has become the foundation of the entire AI industry. Every major framework, from Google's TensorFlow to Meta's PyTorch, is deeply integrated with CUDA. A PhD student specializing in AI starts learning inside a CUDA environment from day one. Every line of code they write reinforces NVIDIA's monopoly. By 2025, there are 4.5 million developers in the CUDA ecosystem, used by more than 40,000 companies worldwide. More than 90% of AI developers worldwide are tied to NVIDIA.

This is the real trench. CUDA is a self-reinforcing flywheel: the more developers use it, the more tools and libraries appear, the richer the ecosystem becomes, and the more developers it attracts. Once this flywheel starts spinning, it is almost impossible to stop.

But the Chinese found a way out of this dilemma — not by trying to compete with NVIDIA directly on chips.

The solution came from algorithms. From the end of 2024 through 2025, Chinese AI companies shifted en masse to mixture-of-experts (MoE) models. The idea is simple: instead of activating the full model for every input, split it into many small experts and activate only the ones most relevant to the task. DeepSeek V3 is a clear example: 671 billion parameters in total, but only 37 billion are activated during inference. Just 5.5% of the full size.
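The routing mechanism behind this can be sketched in a few lines. This is a toy illustration with made-up sizes (8 experts, top-2 routing, random weights), not DeepSeek's actual architecture or configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: 8 experts, route each token to the top 2.
# All sizes here are illustrative; real models use far larger experts
# and more sophisticated, load-balanced routing.
NUM_EXPERTS, TOP_K, D_MODEL, D_HIDDEN = 8, 2, 16, 32

router_w = rng.normal(size=(D_MODEL, NUM_EXPERTS))
experts = [
    (rng.normal(size=(D_MODEL, D_HIDDEN)), rng.normal(size=(D_HIDDEN, D_MODEL)))
    for _ in range(NUM_EXPERTS)
]

def moe_forward(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    logits = x @ router_w                     # one router score per expert
    top = np.argsort(logits)[-TOP_K:]         # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the chosen k
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w_in, w_out = experts[i]
        out += w * (np.maximum(x @ w_in, 0) @ w_out)  # tiny ReLU-MLP expert
    return out

token = rng.normal(size=D_MODEL)
y = moe_forward(token)
print(y.shape)  # (16,)
```

Only 2 of the 8 experts run per token, so only 25% of the expert parameters are active. The same principle, at vastly larger scale, is how DeepSeek V3 activates 37 billion of its 671 billion parameters.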

The result? Dramatically lower training costs. DeepSeek used 2,048 H800 GPUs and trained for 58 days at a cost of $5.576 million; GPT-4 cost about $78 million. An order-of-magnitude difference. And this showed directly in pricing: DeepSeek is 25 to 75 times cheaper than Claude. In February 2026, the share of Chinese models on OpenRouter, the world's largest API aggregation platform, surged 127% in just three weeks. A year earlier their share was below 2%; now it is approaching 60%.

But this is only a solution for inference. The training problem still remains.

This is where local chips come in. In 2025, a domestic company began building a 148-meter production line in Jiangsu, going from contract signing to production in just 180 days. The fully domestically produced Loongson 3C6000 processor comes off it, along with the T100 card from Tsinghua University's Taichu Yuanqi. The line produces five servers every minute, represents an investment of 1.1 billion yuan, and targets 100,000 units annually.

Most importantly, these chips have actually started handling real training tasks. In January 2026, Zhipu AI launched the GLM-Image model with Huawei, the first advanced image-generation model fully trained on domestically made Chinese chips. In February, the massive "Star" model was trained on a Chinese domestic computing pool with tens of thousands of processing units.

This is a qualitative shift. Inference can run on merely adequate chips, but training demands enormous computing power and extremely high interconnect bandwidth, raising the requirements roughly tenfold. Huawei's Ascend is the core solution here. By the end of 2025, the Ascend ecosystem had more than 4 million developers and over 3,000 partner companies; 43 major models had been trained on Ascend, and more than 200 open-source models had been adapted to it. At MWC in March 2026, Huawei launched its new SuperPoD architecture. The Ascend 910B's processing power reached the level of NVIDIA's A100. The gap still exists, but it has shifted from "not usable at all" to "genuinely usable."

You can't wait for chips to become perfect. Broad deployment must start when they are good enough, and real business needs should drive development. ByteDance, Tencent, and Baidu aim to double their purchases of domestic computing servers in 2026. The Ministry of Industry and Information Technology announced that China's intelligent computing capacity has reached 1,590 EFLOPS. 2026 is a pivotal year for deploying domestic computing.

There's another factor few have paid attention to: electricity.

In early 2026, Virginia suspended approvals for new data center construction projects. Georgia followed. Illinois and Michigan imposed restrictive measures. U.S. data center electricity consumption reached 183 terawatt-hours in 2024, about 4% of total consumption. It is expected to double by 2030 to 426 terawatt-hours, potentially exceeding 12%. Arm’s CEO predicted that AI data centers will consume 20–25% of U.S. electricity by 2030. The U.S. power grid is already stretched. The PJM grid, covering 13 states, faces a 6 GW capacity shortfall. By 2033, the U.S. will face a 175 GW gap. Wholesale electricity costs have risen 267% in data center regions.

China's situation is the complete opposite. Annual electricity generation is 10.4 trillion kWh, compared with 4.2 trillion in the U.S., so China produces about 2.5 times as much. Household consumption in China is 15% of the total, while in the U.S. it is 36%, which means far more industrial power can be directed to computing. Electricity in the regions where U.S. AI companies operate costs $0.12–0.15 per kWh. In western China it is about $0.03, a quarter to a fifth of the U.S. price.

While the U.S. faces an electricity crisis, Chinese AI is quietly going global. But this time, what goes abroad isn't a product or a factory: it's tokens, the smallest units AI models process. They are produced in Chinese computing plants and transmitted via submarine cables around the world.

The distribution of DeepSeek users tells a clear story: China 30.7%, India 13.6%, Indonesia 6.9%, the U.S. 4.3%, France 3.2%. It supports 37 languages and is very popular in emerging markets like Brazil. 26,000 global companies have accounts, and 3,200 institutions have used the enterprise edition. In 2025, 58% of newly established emerging AI companies listed DeepSeek in their technical architecture. In China, DeepSeek captured 89% of the market. In sanctioned countries, the share ranges from 40–60%.

This is exactly like the fight for industrial independence 40 years ago. In Tokyo in 1986, the Japanese government signed the U.S.-Japan Semiconductor Agreement under massive American pressure. The key provisions: open the semiconductor market so that the U.S. share is no less than 20%; prohibit exporting Japanese chips at prices lower than cost; impose a 100% penalty tariff on $300 million worth of exports. At the same time, the U.S. refused Fujitsu’s acquisition of Fairchild.

By 1988, Japan controlled 51% of the global semiconductor market, while the U.S. had only 36.8%. Among the top ten global companies, Japan held six spots: NEC was second, Toshiba third, Hitachi fifth, Fujitsu seventh, Mitsubishi eighth, and Matsushita ninth. But after the agreement, everything changed. The U.S. used Section 301 mechanisms and applied comprehensive pressure. At the same time, it backed Samsung and Hynix to hit the Japanese market with low prices. Japan's DRAM share fell from 80% to 10%. By 2017, Japan's share of the IC market was only 7%. The giants exited one by one, through spin-offs, acquisitions, or outright withdrawal.

Japan’s tragedy was agreeing to be the best producer in a global system dominated by one power, without ever thinking about building its own independent system. When the tide receded, it realized it only had production.

Today, China stands at a similar crossroads but is taking an entirely different path. We face massive external pressure: three rounds of chip restrictions with continuous escalation. But this time, we chose the harder path: algorithm improvements, local chips leaping from inference to training, 4 million Ascend developers, global token distribution. Each step builds an independent industrial system that Japan never had.

On February 27, 2026, three local chip companies published performance reports on the same day. The results were split between fire and ice. The first company's revenue rose 453% and it turned a profit for the first time. The second grew 243% but posted a net loss of 1 billion. The third grew 121% and lost 800 million.

The space once held by NVIDIA's roughly 95% market share is gradually being filled by local companies' numbers. Regardless of current performance, the market needs an alternative. This is a very rare structural opportunity created by geopolitical tensions.

Financial losses are not management failures; they are a war tax that must be paid to build an independent ecosystem: R&D investment, software support, and the human cost of engineers solving porting problems one by one. These financial reports record the reality of this war for computing power more honestly than any industry report. This is not an inspiring victory; it is a grinding battle fought on the front lines, blood and all.

But the shape of the war has already changed. Eight years ago, we asked, “Can we survive?” Today the question is, “What is the cost to survive?” The cost itself is progress.