NVIDIA's Market Share Is Dropping Significantly: Where Are the Opportunities in the New Stage of the AI Revolution?
This is the ninth article in the AI Investment Research 100 Series.
In the previous articles we looked at Intel, AMD, and ARM. All three stocks have posted substantial gains over the past year: AMD doubled, Intel tripled, and ARM hit a new all-time high. After such runs, a simple question arises:
Are the stocks that have already run up still worth holding? And are there still opportunities among those that haven't moved yet?
To answer that question, one core term is unavoidable: inference. As the stocks of the companies above rose, this word appeared again and again in the analyses around them.
So: how big is the inference market? What stage is it at? Which companies will benefit, and how? Which of them has the market already priced in, and which has it not?
This piece runs about 15,000 words. It is dense but readable; consider bookmarking it first and reading when you have time.
1. How Big Is the Inference Market?
Model training is like writing a program; inference is running that program every day. Once GPT was trained, hundreds of millions of people began asking it questions daily, and every interaction consumes inference compute. When Claude Code runs a task, the agent may loop through 100 rounds on its own, and each round is inference.
Multiple industry studies and media reports point in the same direction: once models enter production, inference becomes the dominant share of lifecycle compute cost, with estimates ranging from 80% to 90%. In other words, in the coming AI era, 8 or 9 of every 10 dollars on the compute bill will be spent on inference.
Yet over the past three years, market discussion has focused almost entirely on training, because training is the sexier story: who has more H100s, who has bigger parameter counts, who trains the next-generation model first. Inference has been treated as an afterthought once training is done.
That cognitive bias is now reversing, and this is the fundamental reason this batch of semiconductor companies has been revalued over the past year.
So inference is a big market, but how big exactly? It can be measured from five specific angles.
First, the number of users. ChatGPT has 900 million weekly active users and 50 million paying users. The Chinese figures are even more striking: daily token call volume rose from 100 billion in early 2024 to 140 trillion in 2026, a 1,400-fold increase. Adoption is still far from saturation.
Second, usage intensity. OpenAI processed 6 billion tokens per minute in October 2025; by April 2026 that figure had reached 15 billion, a 2.5x increase in half a year. Enterprise revenue now accounts for over 40%, and enterprise users consume tokens at dozens of times the rate of individual consumers.
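The growth multiples cited above are easy to sanity-check with back-of-envelope arithmetic. A minimal sketch (all input figures are the article's own estimates, not independently verified):

```python
# Back-of-envelope check of the growth multiples cited in the article.

# Daily token call volume (Chinese market): 100 billion -> 140 trillion
early_2024 = 100e9
in_2026 = 140e12
print(f"Token volume growth: {in_2026 / early_2024:.0f}x")   # 1400x

# OpenAI tokens processed per minute: 6 billion (Oct 2025) -> 15 billion (Apr 2026)
oct_2025 = 6e9
apr_2026 = 15e9
print(f"Half-year throughput growth: {apr_2026 / oct_2025:.1f}x")  # 2.5x
```

Both ratios match the article's stated multiples, so the figures are at least internally consistent.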
Third, conversation length. Context windows have grown from a few hundred tokens in the early days to millions today: DeepSeek's API documentation lists V4 Pro / Flash context lengths of 1 million tokens, with a maximum output of 384k. Longer contexts mean more memory and compute consumed per inference call.
Fourth, the models themselves are getting more expensive to run. Reasoning models such as OpenAI's o1, DeepSeek R1, and Claude's extended thinking "think" internally for thousands or even tens of thousands of tokens before answering. Jensen Huang, citing DeepSeek R1 as an example, has noted that reasoning models can require far higher compute loads, potentially hundreds of times more.
In the past, you asked the AI a question and got an answer immediately; now you ask it a hard problem and it thinks for half a minute before answering. That half minute of thinking is pure additional inference compute.
Fifth, agents. A single agent task typically calls the model 10 to 100 times. OpenAI Codex's weekly active users have already passed 4 million (as of April 22, 2026), and that is just one product from one company. One industry estimate puts the total compute consumption of AI agents at more than 10 times that of large language models of similar parameter scale.
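The agent multiplier comes from the loop itself: one user request fans out into many model calls, each with a growing context. A minimal sketch of such a loop (`call_model` is a hypothetical placeholder, not any real API; here it pretends the task finishes after 10 rounds):

```python
# Hypothetical agent loop: one task triggers many inference calls.
def call_model(messages):
    """Placeholder for an LLM inference call; returns (reply, done_flag)."""
    step = len(messages)
    return f"step {step}", step >= 10  # pretend the task completes at round 10

def run_agent(task, max_rounds=100):
    messages = [task]
    inference_calls = 0
    for _ in range(max_rounds):
        reply, done = call_model(messages)
        inference_calls += 1
        messages.append(reply)  # each round feeds the growing context back in
        if done:
            break
    return inference_calls

print(run_agent("refactor this module"))  # one task -> 10 inference calls
```

Each round also re-sends the accumulated context, so compute grows faster than the call count alone suggests.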
Multiply these five factors together and total inference demand expands by orders of magnitude within three to five years. This is not an exaggerated narrative; it is a judgment increasingly aligned with the mainstream view.