DeepMind researcher speculates on DeepSeek V4 delay: doubling training data to 33T caused serious instability
According to Beating Monitoring, the DeepSeek V4 technical report discloses that V4-Flash and V4-Pro were pre-trained on 32T and 33T tokens respectively, roughly double the ~15T tokens used for V3.
The report acknowledges that "significant instability challenges were encountered" during training, with loss spikes recurring throughout; the root cause was traced to anomalies in the MoE layers. The routing mechanism itself can further amplify these anomalies, and a simple rollback cannot fully resolve the issue.
DeepSeek identified two mitigations and applied both in actual training: Anticipatory Routing, which decouples the routing-index computation from backbone network updates and is triggered automatically only when a loss spike is detected, at roughly 20% extra overhead; and SwiGLU Clamping, which clamps activation values to a fixed range to suppress the anomalies directly.
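The report does not include code for either mitigation, so the PyTorch sketch below is only an illustration of what the two ideas could look like. The class names, the clamp bound of 50, the top-2 routing, and the 2x spike threshold are all assumptions for illustration, not DeepSeek's actual implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClampedSwiGLU(nn.Module):
    """SwiGLU feed-forward block with activation clamping (hypothetical sketch)."""

    def __init__(self, d_model: int, d_ff: int, clamp: float = 50.0):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_ff, bias=False)
        self.w_up = nn.Linear(d_model, d_ff, bias=False)
        self.w_down = nn.Linear(d_ff, d_model, bias=False)
        self.clamp = clamp  # bound is illustrative; the report gives no number

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = F.silu(self.w_gate(x)) * self.w_up(x)  # standard SwiGLU
        h = h.clamp(-self.clamp, self.clamp)       # suppress outlier activations
        return self.w_down(h)

class AnticipatoryRouter(nn.Module):
    """MoE top-k router that can route with a frozen gate copy during a spike.

    A real MoE router also returns gating weights; this sketch returns only
    the expert indices to keep the decoupling idea visible.
    """

    def __init__(self, d_model: int, n_experts: int, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.top_k = top_k
        self.frozen_gate = None  # populated only while spike mode is active

    def enter_spike_mode(self) -> None:
        # Decouple routing-index computation from backbone updates by
        # snapshotting the gate, so indices stop drifting with the optimizer.
        self.frozen_gate = copy.deepcopy(self.gate).requires_grad_(False)

    def exit_spike_mode(self) -> None:
        self.frozen_gate = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = self.frozen_gate if self.frozen_gate is not None else self.gate
        return gate(x).topk(self.top_k, dim=-1).indices

def is_loss_spike(loss: float, ema_loss: float, factor: float = 2.0) -> bool:
    # Hypothetical trigger: flag a spike when loss jumps well above its
    # running average; the training loop would then call enter_spike_mode().
    return loss > factor * ema_loss
```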
The report states that both are effective but admits that “the underlying principles are not yet fully understood.”
Google DeepMind researcher Susan Zhang (formerly of Meta AI and OpenAI) commented that instability from doubling the training data "explains the delay," called the two fixes "band-aids," and nonetheless praised DeepSeek's technical transparency.