Luo Fuli: Large models enter the post-training era, with top teams' pre-training and post-training compute ratio reaching 1:1
According to Beating Monitoring, Luo Fuli, head of Xiaomi's large-model team, said that competition among large models has shifted from the Chat era, dominated by pre-training, to the Agent era, dominated by post-training. The key question now is "how to effectively scale reinforcement learning (RL) on Agents."
This paradigm shift is directly reshaping how compute is allocated. Luo Fuli said that in the Chat era, the ratio of compute devoted to research, pre-training, and post-training was roughly 3:5:1; in the current Agent era a reasonable allocation is 3:1:1, meaning top-tier model teams now invest roughly equal compute in pre-training and post-training.
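To make the shift concrete, here is a minimal sketch (illustrative only; the function name and the normalized budget of 1.0 are assumptions, not figures from the report) that converts the two reported ratios into shares of a fixed compute budget:

```python
# Illustrative only: turns the reported allocation ratios into budget shares.
# The total budget of 1.0 is a placeholder, not a number from the report.

def allocate(total_budget: float, ratio: dict[str, float]) -> dict[str, float]:
    """Split a compute budget across phases according to a ratio such as 3:5:1."""
    denom = sum(ratio.values())
    return {phase: total_budget * weight / denom for phase, weight in ratio.items()}

chat_era  = allocate(1.0, {"research": 3, "pre-training": 5, "post-training": 1})
agent_era = allocate(1.0, {"research": 3, "pre-training": 1, "post-training": 1})

print(chat_era)   # ~{'research': 0.33, 'pre-training': 0.56, 'post-training': 0.11}
print(agent_era)  # {'research': 0.6, 'pre-training': 0.2, 'post-training': 0.2}
```

Under the 3:1:1 split, pre-training and post-training each take about 20% of the budget, which is the "roughly equal investment" Luo Fuli describes.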
System-architecture requirements have also changed substantially. RL infrastructure used to be built around the model inference engine and handled pure-text computation; it now has to be built around the Agent, supporting heterogeneous cluster scheduling and tolerating the uncertainty introduced when uncontrollable factors interrupt agents partway through complex workflows.
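As a rough illustration of what "tolerating interruptions" could mean in practice (an assumed sketch, not a description of Xiaomi's actual stack; all names here are hypothetical), an agent-centric rollout collector might wrap each episode in a timeout and retry so that a flaky tool call or stalled environment degrades a single rollout rather than failing the whole batch:

```python
# Illustrative sketch only; not Xiaomi's actual RL infrastructure.
# Each agent rollout gets a timeout and a bounded number of retries,
# so interruptions lose one sample instead of the whole batch.
import concurrent.futures
import random
import time

def run_episode(task_id: int) -> dict:
    """Placeholder for one agent rollout; here it just simulates flaky work."""
    if random.random() < 0.2:           # simulate a tool/environment failure
        raise RuntimeError(f"task {task_id} interrupted")
    time.sleep(0.01)                     # stand-in for actual agent work
    return {"task_id": task_id, "status": "ok", "reward": random.random()}

def collect_rollouts(task_ids, timeout_s=5.0, max_retries=2):
    """Run rollouts in parallel; retry interrupted ones, never fail the batch."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        for task_id in task_ids:
            for attempt in range(max_retries + 1):
                future = pool.submit(run_episode, task_id)
                try:
                    results.append(future.result(timeout=timeout_s))
                    break
                except Exception:
                    if attempt == max_retries:
                        # Give up on this rollout but keep the rest of the batch.
                        results.append({"task_id": task_id, "status": "failed"})
    return results

print(collect_rollouts(range(10)))
```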