DeepMind Researcher Speculates on DeepSeek V4 Delay: Training Data Doubled to 33T Tokens, Causing Severe Instability
According to monitoring by Dongcha Beating, the DeepSeek V4 technical report reveals that V4-Flash and V4-Pro were pre-trained on 32T and 33T tokens respectively, roughly double the ~15T tokens used for V3. The report admits that training ran into ‘significant instability challenges’: loss spikes (sudden jumps in training loss) recurred throughout pre-training, attributed to outliers in the MoE layers, and the routing mechanism itself amplified those outliers, so simple checkpoint rollbacks were ineffective.

DeepSeek describes two mitigations that were applied in actual training: Anticipatory Routing, which decouples routing index computation from backbone network updates and is triggered automatically only when a loss spike is detected, at roughly 20% extra overhead; and SwiGLU Clamping, which clamps activation values to a fixed range to directly suppress outliers. The report states that both methods are effective but acknowledges that ‘the underlying principles are not yet fully understood.’

Google DeepMind researcher Susan Zhang, who previously worked at Meta AI and OpenAI, commented that the instability caused by doubling the training data ‘explains the delay,’ describing the two fixes as ‘band-aids’ while also affirming DeepSeek’s technical transparency.
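For readers unfamiliar with the second mitigation, below is a minimal PyTorch sketch of what clamping a SwiGLU block might look like. The report gives no implementation details, so the module structure and the CLAMP_LIMIT bound here are assumptions for illustration, not DeepSeek's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClampedSwiGLU(nn.Module):
    """SwiGLU feed-forward block with a hard clamp on the gated activation.

    CLAMP_LIMIT is a hypothetical bound chosen for illustration; the V4
    report does not disclose the actual range DeepSeek used.
    """
    CLAMP_LIMIT = 50.0  # assumed value, not from the report

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_ff, bias=False)
        self.w_up = nn.Linear(d_model, d_ff, bias=False)
        self.w_down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Standard SwiGLU: silu(x @ W_gate) * (x @ W_up)
        gated = F.silu(self.w_gate(x)) * self.w_up(x)
        # Clamp activations to a fixed range to suppress the outliers
        # the report blames for loss spikes.
        gated = torch.clamp(gated, -self.CLAMP_LIMIT, self.CLAMP_LIMIT)
        return self.w_down(gated)
```

The appeal of this approach is its simplicity: a single element-wise clamp bounds the values flowing out of the feed-forward block, at negligible compute cost.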
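The first mitigation hinges on detecting a loss spike at all. The report does not say how detection works; one plausible scheme, sketched below, compares each step's loss against a rolling average and flips a flag that a training loop could use to switch routing paths. The window size, threshold, and LossSpikeDetector class are all invented for this illustration.

```python
from collections import deque

class LossSpikeDetector:
    """Flags a spike when the current loss exceeds a rolling baseline by a
    fixed factor. Window and threshold are assumed values; the report only
    says the fallback is triggered automatically when a spike is detected."""

    def __init__(self, window: int = 100, threshold: float = 1.5):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, loss: float) -> bool:
        # Spike if loss exceeds threshold * rolling mean of recent losses.
        spike = bool(self.history) and loss > self.threshold * (
            sum(self.history) / len(self.history)
        )
        self.history.append(loss)
        return spike

# Hypothetical usage: enable the decoupled "anticipatory" routing path
# only while a spike is active, accepting the extra overhead temporarily.
detector = LossSpikeDetector()
for step, loss in enumerate([2.1, 2.0, 1.9, 6.0, 2.0]):  # toy loss values
    use_anticipatory_routing = detector.update(loss)
```

Gating the fallback on detection matches the report's claim that the ~20% overhead of Anticipatory Routing is paid only when instability actually appears.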