The world's first large model earnings report! MiniMax predicts three super PMFs for 2026, and AI platform companies are on the move

Here it is! The world's large model companies finally have a truly quantifiable financial sample.
Listed on the Hong Kong Stock Exchange for only 52 days, MiniMax has delivered its first annual report since IPO:
As of February 2026, ARR had surpassed $150 million; 2025 revenue grew 158.9% year-over-year, gross profit soared 437%, and the net loss margin narrowed significantly…
But more importantly, as the first annual report from a global large model company, it offers the market a valuable window into how large models actually get commercialized, and serves as an important gauge of whether Chinese AI companies can win in global competition.
Starting from this, we can gain insights not only into MiniMax’s next steps but also into the overall evolution of the AI large model industry.
How does the world's first large model company annual report look?
In 2025, the second year of MiniMax’s true commercialization, the company achieved total annual revenue of $79.04 million, a staggering increase of 158.9% year-over-year, with over 70% of revenue coming from international markets.
Meanwhile, the company's adjusted net loss was $250 million, with the net loss margin shrinking significantly. In short: it is earning more and losing less.
MiniMax's revenue more than doubled, and the dual-engine model of AI-native products plus open platform has come into clearer focus.
Specifically, the main sources of MiniMax’s revenue can be divided into two categories: AI-native products aimed at the C-end and open platforms plus other AI-based enterprise services aimed at the B-end.
Revenue from AI-native products refers to subscription income from applications such as MiniMax, MiniMax Voice, Hailuo AI, and Xingye.
By the end of 2025, MiniMax had served over 236 million users across more than 200 countries and regions. This user scale is quite competitive among global internet products.
As product commercialization continued, along with increased user engagement and willingness to pay, this segment contributed $53.08 million in revenue in 2025, a 143% increase year-over-year, accounting for 67.2% of total revenue.
The open platform for enterprises and developers, along with other AI-based enterprise services billed by usage, generated $25.96 million last year, a 197.8% increase.
At the earnings call, MiniMax founder Yan Junjie also shared the latest figures: in productivity scenarios driven by the text models, growth has become even more pronounced this year.
For example, average daily token consumption of the M2-series text models in February 2026 was more than six times the December 2025 level, and token consumption from the Coding Plan grew more than tenfold.
New registrations on the open platform for enterprise clients and individual developers in February were more than four times the December 2025 figure.
Currently, MiniMax has over 214,000 enterprise clients and developers across more than 100 countries and regions, with over 50% of revenue from development platforms coming from overseas markets.
This “C-end + B-end” dual-driven business model provides MiniMax with stable, predictable recurring revenue and significantly improves profitability.
In 2025, MiniMax's gross profit reached $20.08 million, up a sharp 437% year-over-year and far outpacing revenue growth.
At the same time, gross margin improved rapidly: from -24.7% in 2023 to 12.2% in 2024, and then by another 13.2 percentage points to 25.4% in 2025.
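As a quick sanity check (my own arithmetic, not from the report), the cited margins are consistent with the revenue and gross-profit figures above:

```python
# Rough sanity check of the reported figures (USD millions). The 2024 values
# are back-derived from the stated growth rates, so treat them as approximate.
revenue_2025 = 79.04
gross_profit_2025 = 20.08

margin_2025 = gross_profit_2025 / revenue_2025       # ≈ 0.254, matching the 25.4% cited

revenue_2024 = revenue_2025 / (1 + 1.589)            # 158.9% YoY growth → ≈ 30.5
gross_profit_2024 = gross_profit_2025 / (1 + 4.37)   # 437% YoY growth → ≈ 3.7
margin_2024 = gross_profit_2024 / revenue_2024       # ≈ 0.122, matching the 12.2% cited

print(f"2025 gross margin: {margin_2025:.1%}")
print(f"implied 2024 gross margin: {margin_2024:.1%}")
```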
The improvement in gross margin is mainly due to gains in model and system efficiency, as well as infrastructure optimization.
Combined with the significantly narrowed net loss rate, from a profit perspective, MiniMax is closer to a sustainable commercialization cycle.
Achieving such capabilities and results requires substantial upfront investment and groundwork.
In 2025, MiniMax’s R&D expenses were $250 million, a 33.8% increase year-over-year. For an AI large model company, more meaningful than raw numbers is the R&D efficiency:
MiniMax’s R&D expenditure as a percentage of total revenue decreased from 619% in 2024 to 320% in 2025, indicating a gradual improvement in R&D efficiency.
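The same kind of back-derivation (again my own arithmetic; small rounding gaps versus the reported 619% and 320% are expected) shows how the R&D-intensity ratios follow from the other reported figures:

```python
# How the R&D-intensity ratios fall out of the other reported numbers
# (USD millions; 2024 figures are back-derived from the stated growth rates).
rnd_2025 = 250.0
revenue_2025 = 79.04

rnd_2024 = rnd_2025 / (1 + 0.338)          # 33.8% YoY growth → ≈ 187
revenue_2024 = revenue_2025 / (1 + 1.589)  # 158.9% YoY growth → ≈ 30.5

print(f"2024 R&D / revenue: {rnd_2024 / revenue_2024:.0%}")  # ≈ 612%, vs. 619% reported
print(f"2025 R&D / revenue: {rnd_2025 / revenue_2025:.0%}")  # ≈ 316%, vs. 320% reported
```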
Meanwhile, “efficiency” is also reflected in MiniMax’s cash reserves.
As of the end of 2025, the company’s cash reserves stood at $1.05 billion (including cash and equivalents, restricted cash, term deposits, etc.), higher than $880 million at the end of 2024.
Since going public in Hong Kong, the capital market has responded positively, with MiniMax’s stock price rising, further strengthening its cash “ammunition” reserves.
Overall, this performance indicates that MiniMax has entered an acceleration phase. So, what kind of technology and products support this explosive growth?
Intensive product iterations, accelerating commercialization

2025 was a year of comprehensive technological advancement and accelerated commercialization for MiniMax.
During the year, MiniMax built out multimodal R&D capabilities, with globally competitive models across the major modalities of language, video, speech, and music. From Q4 2025 to early 2026, within the space of 108 days, MiniMax released its third-generation language models M2, M2.1, and M2.5 in succession, an industry-leading pace of model iteration.
Especially in the second half of the year, MiniMax launched a full-scale product explosion.
Looking back in time, it’s clear that since October last year, MiniMax has been in a rapid model iteration mode.
When M2 was released, the R&D team adopted a traditional full-attention mechanism to ensure stability on real-world tasks, while breaking the "impossible triangle" of intelligence level, running speed, and computational cost.
In real-world settings, the model took first place in the University of Hong Kong's AI-Trader simulated A-share trading contest, earning nearly 3,000 yuan in 20 days on a 100,000-yuan principal.
By December, MiniMax passed the HKEX hearing, and then launched the flagship Coding&Agent model M2.1.
M2.1 achieved global SOTA in multilingual programming and addressed a major shortcoming of vibe coding by mastering front-end and back-end development conventions.
Later, as OpenClaw took off, its creator praised M2.1 as the best open-source model at the time.
Less than two months after M2.1’s launch, around the Spring Festival, M2.5 was also released, still focusing on coding and intelligent agent capabilities, but lighter, faster, and more cost-effective.
Its inference speed reaches 100 tokens per second; it can handle full-stack development across front end, back end, and database; and running it as an agent costs only about $1 per hour.
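Assuming the 100 TPS and $1-per-hour figures both refer to sustained output generation (the article does not say), a rough back-of-the-envelope conversion gives the implied per-token cost:

```python
# Back-of-the-envelope: what "$1/hour at 100 TPS" implies per million tokens.
# Assumes sustained generation at the quoted speed and ignores input tokens,
# so this is an illustrative estimate, not official pricing.
tps = 100                      # tokens per second
cost_per_hour = 1.0            # USD
tokens_per_hour = tps * 3600   # 360,000 output tokens per hour
cost_per_million = cost_per_hour / tokens_per_hour * 1_000_000
print(f"implied cost: ${cost_per_million:.2f} per million output tokens")  # ≈ $2.78
```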
Today, Notion co-founder Akshay Kothari announced that Notion Custom Agents had officially adopted its first open-weight model, MiniMax M2.5, a sign that M2.5 is increasingly integrated into the core scenarios of mainstream international productivity tools.
During this period of continuous model upgrades, MiniMax also updated the model agent capability “scaffolding” — the MiniMax Agent platform — twice.
In January, MiniMax Agent launched version 2.0, introducing a desktop version that deeply integrates into local work environments, directly reading files on the computer.
The newly launched Experts system allows users to customize domain-specific experts through unique knowledge and workflows for various task scenarios.
After the Spring Festival, the Agent platform was updated again, further strengthening community co-creation within the Experts system, which now hosts over 10,000 applications.
Returning to MiniMax's product iteration path: the platform also responded to the OpenClaw craze with a MaxClaw mode, which users can deploy to the cloud in one click from the web, with no complex configuration to worry about.
Preparing for three major PMFs in 2026, transforming into an AI platform company
Today, the pace of large model iteration is accelerating, meaning competition is fiercer, and the market is changing rapidly.
The next-generation AI company that wins in the future will not be just a pure technology provider but an organization with strong capabilities in technology, products, and commercialization.
The future winner must first be an action-oriented company, one that abandons the benchmark-chasing "exam-taking" mindset.
Current competition is no longer about who scores higher on benchmarks but about who can deliver results in the chaotic real-world business environment.
Take MiniMax’s latest M2.5 as an example: it has true full-stack delivery capabilities, covering backend logic and database design.
Coupled with MiniMax Agent, it connects cloud and local environments, allowing AI to proactively read local files and absorb implicit business knowledge, turning the large model into a long-term “digital partner” with industry expertise.
What determines the survival of a large model company are also technical judgment, market intuition, and speed of evolution.
As MiniMax CEO Yan Junjie said during the earnings call, it’s about having the ability to define a new intelligence paradigm and being prepared in advance.
Since the second half of 2025, MiniMax has been actively preparing for several super PMFs they expect to emerge this year:
Programming: AI programming will reach L4-to-L5 intelligence, moving from "tool" to "colleague-level" collaboration. L4 means proposing innovative solutions to engineering and complex algorithm problems; L5 means multiple agents organizing and collaborating as effectively as human teams.
Office: Office scenarios span far more professions, making the market broader and the tasks more complex than programming, but MiniMax believes they will replicate the rapid progress programming made last year.
Multimodal creation: AI creation tools will generate "ready-to-deliver" medium- and long-form content, even in streaming and real-time output formats. For example, MiniMax has upgraded the Media Agent in Hailuo AI to support versatile multimodal creation and one-click video production.
This means new technological challenges are imminent, and larger-scale intelligent supply will explode, opening a huge window for innovation at the application layer.
AI companies like MiniMax that are centered on multimodal models will see demand amplified, with token volume likely increasing by one to two orders of magnitude.
To meet these challenges, MiniMax has models such as M3 and Hailuo 3 in the pipeline, designed specifically for these scenarios.
Faced with such opportunities, MiniMax will fully upgrade from a large model company into an "AI platform company," taking intelligence density and token throughput as its core metrics and driving AI to become the next generation of global production infrastructure.
In MiniMax’s view, when the boundaries of intelligence are broken through, new scenarios and users will emerge, further forming new ecosystems and commercial dividends.
The companies capable of defining this intelligence boundary and benefiting from its product and commercial advantages are what they call “AI platform companies.”
In simpler terms, the value of an AI platform company can be roughly estimated as intelligence density × token throughput. When both are strong enough, the platform's value emerges naturally.
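The heuristic can be written down directly; the inputs below are invented placeholders, purely to show how the two factors multiply:

```python
# Toy illustration of the article's heuristic: platform value scales with
# intelligence density × token throughput. The inputs are made-up
# placeholders, not MiniMax figures.
def platform_value(intelligence_density: float, token_throughput: float) -> float:
    """The two factors multiply: doubling either doubles the estimate."""
    return intelligence_density * token_throughput

# A 10x jump in throughput (within the 1-2 orders of magnitude mentioned
# above) at constant intelligence density scales the estimate by 10x.
base = platform_value(1.0, 1.0)
scaled = platform_value(1.0, 10.0)
print(scaled / base)  # 10.0
```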
In MiniMax’s roadmap, models like M3 and Hailuo 3 are the ultimate practitioners of this formula — their development will continuously optimize inference architecture and computational efficiency.
It’s foreseeable that the boundaries of intelligence are being redefined, and 2026 will be a critical watershed for AI’s transition from the “tool era” to the “ecosystem era.”
Source: Quantum Bit
Risk Warning and Disclaimer
Market risks exist; investments should be cautious. This article does not constitute personal investment advice and does not consider individual users’ specific investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions in this article are suitable for their particular circumstances. Invest accordingly at your own risk.