From "Usable" to "Good to Use": Has Domestic Computing Power Made the Leap? The Joys and Worries Behind the Financial Reports of the "Four Little Dragons"
From December 2025 to early 2026, in just over a month, Moore Thread and MuXi Co., Ltd. listed successively on the STAR Market, while Biran Technology and TianShu Zhixin listed back-to-back on the Hong Kong Stock Exchange. The four companies, all bearing the halo of the "Four Little Dragons of domestic GPUs," collectively raised more than HKD 1.64B. The domestic computing-power chip sector has finally moved past the long "PPT chip design" stage and into the "examination moment," when financial reports come under scrutiny.
2025 was a year of "proof" for domestic computing power: proving that domestically produced GPUs can be mass-produced at scale, that ten-thousand-card ("WanKa") clusters can operate stably, that the capital market is willing to bet on the future of domestic computing power, and that domestic computing power can support a trillion-yuan or even larger market.
Recently, several domestic chip companies released their first annual financial reports after going public. Overall data shows that all companies experienced significant revenue growth, but collective losses also reveal the true situation of this industry. Against the backdrop of AI computing demand shifting from training to inference, where do domestic computing power companies stand?
Domestic Computing Power Delivers a “Perfect” Report
In 2025, MuXi Co., Ltd. led the four dragons with revenue of 1.64B yuan, a year-on-year increase of 121.26%; net loss attributable to the parent was 789 million yuan, a significant narrowing of 43.97% from the previous year. The company's revenue jumped from 53 million yuan in 2023 to 1.64B yuan in 2025, a more than 30-fold increase. The core driver was the sharp rise in GPU product sales: in 2025, GPU boards, mainly the XiYun C series for training and inference, sold 33,649 units, up 147.31%; by the end of the reporting period, cumulative GPU sales exceeded 55k units. Meanwhile, MuXi maintained high R&D investment, with R&D expenses of 1.03B yuan in 2025, up 14.04% year-on-year and equal to 62.49% of revenue.
Moore Thread (688795) achieved revenue of 1.51B yuan in 2025, a year-on-year increase of 243.37%; gross profit reached 987 million yuan, up 218.43%; net loss attributable to the parent and net loss after non-recurring items narrowed by 38.16% and 33.38%, respectively. Excluding share-based payment effects, the 2025 net loss was 648 million yuan, a reduction of 847 million yuan from the previous year, a narrowing of 56.65%. Moore Thread likewise kept R&D spending high, with annual R&D costs of 1.31B yuan, equal to 86.68% of revenue.
TianShu Zhixin achieved revenue of 1.03B yuan in 2025, up 91.6%, and gross profit of 558 million yuan, up 110.5%, with gross profit growing faster than revenue; its adjusted net loss was about 438 million yuan, narrowing 32.1% year-on-year. The core general-purpose GPU business generated 923 million yuan in revenue, a 149.6% increase, accounting for 89.3% of the total. By segment, the TianGai training series earned 584 million yuan, up 116.7%, while the ZhiKai inference series earned 339 million yuan, a 238.2% surge; explosive growth in inference was one of the most eye-catching highlights of TianShu Zhixin's 2025 financial report.
Biran Technology's 2025 revenue was 1.03B yuan, up 207.2%, with gross profit of 557 million yuan, up 210.8%, and a gross margin of 53.8%. The company's annual loss, however, was 16.49B yuan, a 972.3% increase. The figure looks shocking at first glance, but the company explained that it mainly reflects book-value changes related to redemption obligations, share-based compensation expenses, and listing costs; on an adjusted basis, the annual loss was 874 million yuan. R&D investment reached 1.48B yuan, up 78.5%, mainly for the next-generation GPU architecture and AI software platform upgrades. In 2025, Biran completed full-scale mass production and delivery of its flagship general-purpose GPUs, the BR106 and BR166; the BR166 series entered mass production in August 2025 and reached volume deployment less than half a year after launch, becoming a core driver of the revenue leap.
Across the industry, all four domestic GPU makers saw significant revenue growth in 2025 but still posted collective losses. Moore Thread, MuXi Co., Ltd., and TianShu Zhixin narrowed their losses year-on-year, while Biran's losses widened on heavier R&D spending. Frost & Sullivan China consultant Chi Yu told media that, in terms of industry stage, domestic GPUs are still in an early phase of rapid development; even the relative leaders still trail overseas incumbents such as NVIDIA by a clear margin.
From “usable” to “good to use,” domestic computing power still faces challenges
The hot financial report figures cannot hide the deep challenges faced by domestic computing power companies.
Among these, the most urgent is improving cluster stability and engineering capabilities. Large model training demands extremely high stability of computing clusters. A technical leader from Moore Thread admitted that among users choosing domestic computing power, “long-term cluster stability” ranks first, followed by “framework compatibility and migration costs” and actual training and inference performance. This ranking itself indicates a fact: for companies doing large-scale model training, slightly lower performance is acceptable, but frequent training interruptions and repeated checkpoint rollbacks are the real nightmares.
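The "checkpoint rollback" pain point above comes down to one mechanism: periodically persisting training state so that a failure costs only the work done since the last save. Below is a minimal sketch in plain Python, standing in for a real framework's checkpoint API; the atomic write via `os.replace` is the key detail, since a crash mid-write must not corrupt the last good checkpoint.

```python
import os
import pickle
import tempfile

def save_checkpoint(path, state):
    # Write to a temp file, then atomically replace: a crash mid-write
    # leaves the previous checkpoint intact.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    if not os.path.exists(path):
        return {"step": 0, "loss": None}
    with open(path, "rb") as f:
        return pickle.load(f)

def train(path, total_steps, ckpt_every, fail_at=None):
    """Run (or resume) a toy training loop, checkpointing every `ckpt_every` steps."""
    state = load_checkpoint(path)
    step = state["step"]
    while step < total_steps:
        if fail_at is not None and step == fail_at:
            raise RuntimeError("simulated hardware fault")
        step += 1
        state = {"step": step, "loss": 1.0 / step}
        if step % ckpt_every == 0:
            save_checkpoint(path, state)
    return state

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
try:
    train(ckpt, total_steps=100, ckpt_every=10, fail_at=57)  # fault at step 57
except RuntimeError:
    pass
# Resume: only steps 51-57 are lost; work restarts from the step-50 checkpoint.
final = train(ckpt, total_steps=100, ckpt_every=10)
print(final["step"])  # 100
```

The frequency trade-off (how often to save) is what cluster operators tune against their observed failure rate.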
"Moore Thread's KUAE ten-thousand-card cluster, built on the MTT S5000, delivers 10 ExaFLOPS of floating-point compute, with MFU reaching 60% in dense-model training and holding around 40% on MoE models, effective training time above 90%, and linear scaling efficiency of 95%," the leader explained.
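Taken at face value, the quoted figures can be multiplied together to estimate the useful compute a dense training run actually receives. The combination below is a back-of-envelope simplification that treats MFU, availability, and scaling linearity as independent factors:

```python
# Back-of-envelope: what the quoted cluster figures imply for delivered compute.
# All inputs are the figures quoted in the text; multiplying them assumes the
# factors are independent, which is a simplification.

peak_flops = 10e18      # 10 ExaFLOPS peak for the ten-thousand-card cluster
mfu_dense = 0.60        # Model FLOPs Utilization in dense-model training
availability = 0.90     # effective (non-interrupted) training-time fraction
linearity = 0.95        # linear scaling efficiency across the cluster

# Useful model FLOPs per second actually delivered to a dense training run:
delivered = peak_flops * mfu_dense * availability * linearity
print(f"{delivered:.3e} FLOP/s")  # ~5.13e18, i.e. roughly half the peak
```

Even with headline numbers this strong, barely half the nameplate compute reaches the model, which is why every one of these factors is an engineering battleground.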
In real industrial environments, however, keeping a ten-thousand-card AI cluster stable remains extremely difficult. Industry reports indicate that clusters at this scale currently suffer one or more failures per day, caused by GPU HBM memory errors, high-speed interconnect jitter, thermal throttling from uneven cooling, and even power-module fluctuations. Nor is this unique to domestic computing power: it is a common problem for AI infrastructure worldwide, and even NVIDIA's DGX SuperPOD cannot run entirely interruption-free in practice.
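The "one or more failures per day" figure feeds directly into checkpoint scheduling: save too often and checkpoint writes dominate, save too rarely and each failure loses more work. Young's approximation, a standard HPC rule of thumb not mentioned in the article, balances the two; the MTBF and checkpoint cost below are illustrative assumptions, not disclosed numbers:

```python
import math

def young_interval(ckpt_cost_s, mtbf_s):
    """Young's approximation for the checkpoint interval that minimizes
    total overhead (time writing checkpoints + work lost to failures)."""
    return math.sqrt(2 * ckpt_cost_s * mtbf_s)

mtbf = 24 * 3600 / 2   # two failures/day -> MTBF of 12 hours (illustrative)
ckpt_cost = 300        # 5 minutes to write a large-model checkpoint (assumed)
interval = young_interval(ckpt_cost, mtbf)
print(f"checkpoint every ~{interval / 3600:.1f} h")  # roughly every 1.4 hours
```

Under these assumptions the optimal cadence is well under two hours, which illustrates why checkpoint write bandwidth itself becomes a cluster design constraint.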
Domestic makers' disadvantage in stability lies mainly in the depth of engineering experience. NVIDIA has deployed hundreds of large-scale clusters over the past decade, accumulating a vast store of fault patterns and tuning know-how that cannot be caught up with simply by "stacking headcount." Domestic companies can often get ten-thousand-card interconnects running in the lab, but once inside customers' real production environments, facing complex network topologies, mixed workloads, and long-running operation under non-ideal conditions, all manner of "unexpected" problems surface.
Second, the ecosystem remains a perennial topic for domestic computing power. Domestic GPU vendors currently tend toward a pragmatic "compatibility ecosystem" approach. A technical leader from Moore Thread said the company's self-developed MUSA architecture is highly compatible with NVIDIA CUDA, and that the MUSIFY automatic porting tool lets developers migrate mainstream GPU applications to MUSA GPUs at minimal cost, greatly improving porting efficiency and shortening development cycles. TianShu Zhixin and Biran also invest heavily in their software stacks, ensuring that mainstream frameworks such as PyTorch, TensorFlow, and Megatron-LM run efficiently on their hardware.
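To make the idea behind tools like MUSIFY concrete, here is a deliberately toy sketch of source-to-source porting: mechanically rewriting CUDA runtime API names to a vendor prefix. The `musa*` names and the rename table are illustrative assumptions; real porting tools also handle headers, kernel-launch syntax, library calls, and much more:

```python
import re

# Toy illustration of source-to-source porting: rewrite CUDA runtime API
# prefixes to an assumed vendor prefix. This only shows the general idea
# of mechanical API translation, not how MUSIFY actually works.
CUDA_TO_MUSA = [
    (r"\bcudaMalloc\b", "musaMalloc"),
    (r"\bcudaMemcpy\b", "musaMemcpy"),
    (r"\bcudaFree\b", "musaFree"),
    (r"\bcuda_runtime\.h\b", "musa_runtime.h"),
]

def port_source(src: str) -> str:
    # Apply each rename; \b word boundaries avoid touching longer identifiers.
    for pattern, repl in CUDA_TO_MUSA:
        src = re.sub(pattern, repl, src)
    return src

cuda_src = '#include <cuda_runtime.h>\nfloat *d; cudaMalloc(&d, n); cudaFree(d);'
print(port_source(cuda_src))
```

The hard part, as the next paragraph argues, is not this mechanical translation but everything the renamed calls must then do identically underneath.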
Yet the compatibility mode, while shortening time to market, brings a structural dilemma: developers remain locked into habits formed around CUDA, and domestic platforms stay in the position of "ecosystem followers." The deeper issue is that what looks like a shortcut in management terms may carry heavy long-run costs, and domestic GPU vendors need to guard against remaining followers forever.
NVIDIA’s moat has never been just hardware computing power, but the CUDA ecosystem built over the past fifteen years — millions of developers, thousands of acceleration libraries, and countless application cases. To move from “compatibility” to “dominance,” domestic vendors must find ways to make developers willing to actively write native code and contribute open-source libraries for domestic platforms, rather than just treating them as CUDA “fallbacks.”
Moore Thread and MuXi Co., Ltd. have already realized this. Besides providing the MUSIFY auto porting tool, Moore Thread has open-sourced several software libraries like Torch-MUSA and vLLM-MUSA, attempting to gradually cultivate a native MUSA ecosystem. MuXi is building an industrial ecosystem with its “1+6+X” strategy, centered on the digital computing base, promoting deep penetration of domestic GPUs across six key industries. But ecosystem building is not a one-day effort; it requires continuous investment over years or even decades, and a sufficient user base to generate positive feedback.
“All roads lead to Rome”
Faced with challenges, domestic computing power companies are seeking breakthroughs in their own ways. From disclosed strategic layouts, differentiation has become the main theme of this round of competition — even if their paths differ, their goal is similar: how to improve domestic computing power.
Biran’s strategic direction can be summarized as “system first, inference positioning.” In 2025, the company delivered a 2,048-card optical interconnect GPU supernode cluster. But the actual operational efficiency and commercialization effect of the cluster still need larger-scale deployment to test.
In product iteration, Biran plans to launch the next-generation BR20X chip and full series in 2026, optimizing for inference while maintaining training advantages — upgrading computing density, memory capacity, bandwidth, and interconnect capabilities, supporting low-precision calculations like FP8/FP4. As of the end of 2025, Biran held cash and financial assets totaling 2.9B yuan, plus 5.63B yuan raised at the start of 2026, providing relatively ample funds. However, for a chip company still in a large-scale R&D investment phase, how long these funds can support technological iteration and market expansion remains a concern.
TianShu Zhixin has chosen a more aggressive route. In January, the company announced a four-generation chip architecture roadmap: the TianShu architecture is to surpass NVIDIA Hopper in 2025; in 2026, TianXuan is to target Blackwell and TianJi to surpass it; in 2027, TianQuan is to surpass Rubin, after which the company will pivot toward breakthrough computing-chip architecture design.
This roadmap’s technological promises ultimately need real products to deliver, and no third-party benchmark data has yet publicly verified their performance claims. In commercialization, TianShu Zhixin has served over 340 clients, deploying more than 1,000 projects across internet, AI large models, scientific research, finance, healthcare, and education sectors. The company also launched TongYang series edge computing products for robotics and smart terminals. The release of the fourth-generation architecture and edge products shows TianShu Zhixin’s attempt to attack training, inference, and edge computing simultaneously. But multi-pronged efforts also mean dispersed R&D resources; whether they can establish sufficiently deep moats in any one area remains to be seen.
MuXi’s strategy can be summarized as “full-stack product and open ecosystem.” The company has formed four major GPU product matrices: XiYun C series (training and inference integrated general-purpose), XiSi N series (AI inference), XiCai G series (graphics rendering), and XiSuo X series (scientific intelligence).
In July 2025, the XiYun C600 series, the company's first built entirely on domestic process technology, was unveiled at WAIC; it reached risk (trial) mass production by the end of 2025, with volume sales planned for the first half of 2026. Note that "fully domestic process" usually refers to specific process nodes, and the gap with the industry's most advanced processes remains a key variable in product competitiveness. The first XiSuo X-series product, the X206, equipped with 128GB of memory, officially launched in January 2026. MuXi plans to further advance the XiSuo X206 and XiYun C700 in 2026. Net IPO proceeds of about 3.9B yuan will be invested over three to four years in the R&D and industrialization of new high-performance general-purpose GPUs. Such a multi-year commitment means limited short-term payoff; matching the pace of technical iteration to the market window will be the major test.
Unlike peers focused solely on AI computing, Moore Thread insists on a full-feature GPU route, covering gaming graphics, AI computing, physics simulation, scientific computing, and ultra-high-definition video encoding/decoding. This broad coverage offers advantages but also means facing more focused competitors in each segment.
Moore Thread's GPUs support AI acceleration, graphics rendering, physics simulation, scientific computing, and ultra-HD video encoding and decoding, and the company claims to be among the few domestic vendors with native support for the full range of precisions from FP8 to FP64. On the cluster-engineering side, Moore Thread launched "HuaGang," a new-generation full-feature GPU architecture supporting intelligent-computing clusters beyond the 100,000-card scale. Its results in scientific computing and biopharma are notable: disclosed data show that on the molecular-dynamics engine SPONGE, the MTT S5000 delivers 1.7 times the performance of international flagship products, and on the biopharmaceutical molecular-docking tool DSDP, 8.1 times. All of these figures come from the company's own disclosures, without independent third-party validation in standardized test environments. Whether such vertical advantages can translate into a sustainable business remains to be seen.
Beyond the "Four Little Dragons," Huawei Ascend and Cambrian are also influential players in the domestic computing-power landscape. Industry analyses indicate that NVIDIA's share of China's AI accelerator-card market has fallen sharply, from about 95% before sanctions to around 55% in 2025, while domestic vendors shipped a combined 1.65 million units, roughly a 41% share, led by Huawei with 810,000 units. Cambrian's 2025 revenue reached 6.5B yuan, up 453%, with a net profit of 812 million yuan. These figures show that substitution by domestic computing power is accelerating overall, and that the competitive landscape facing the "Four Little Dragons" is more complex than it may appear: they must not only chase NVIDIA but also compete with local players like Huawei and Cambrian.
Looking ahead to 2026, the phase of proof may shift toward “surpassing” — surpassing not only international competitors’ technical indicators but also the trust threshold of users for domestic computing power. According to Frost & Sullivan forecasts, the share of domestic general-purpose GPU products could rise from 17.4% in 2024 to over 50% by 2029. Behind this leap in market share lies a systematic contest of technology, ecosystem, engineering capability, and business models.
For listed domestic computing power companies, the new challenges start after going public: how to balance high R&D investment with sustainable profitability? How to carve out a unique path between ecosystem compatibility and independent innovation? How to seize the opportunities amid the structural shift from training to inference demand?
(Article by Leo Zhang, ToB insights; author: Zhang Shenyu; editor: Yang Lin)