DeepMind Researcher Speculates on Delay of DeepSeek V4: Training Data Doubled to 33T Causing Severe Instability

According to monitoring by Dongcha Beating, the DeepSeek V4 technical report reveals that V4-Flash and V4-Pro were pre-trained on 32T and 33T tokens respectively, roughly double the approximately 15T tokens used for V3. The report admits that training ran into ‘significant instability challenges’: loss spikes (sudden jumps in training loss) recurred throughout the run and were attributed to outliers in the MoE layers, with the routing mechanism itself amplifying those outliers, so simple checkpoint rollbacks were ineffective.

DeepSeek describes two mitigations that were applied during actual training. The first, Anticipatory Routing, decouples the routing index calculation from backbone network updates and is triggered automatically only when a loss spike is detected, at an additional overhead of about 20%. The second, SwiGLU Clamping, clamps activation values to a fixed range to directly suppress outliers. The report states that both methods are effective but acknowledges that ‘the underlying principles are not yet fully understood.’

Google DeepMind researcher Susan Zhang, who previously worked at Meta AI and OpenAI, commented that the instability caused by doubling the training data ‘explains the delay,’ describing the two fixes as ‘band-aids’ while also crediting DeepSeek for its technical transparency.
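To make the two mitigations concrete, here is a minimal PyTorch-style sketch. The class names, the clamp limit, and the spike-detection threshold are illustrative assumptions for this article, not values or code taken from the DeepSeek report.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClampedSwiGLU(nn.Module):
    """SwiGLU feed-forward block whose activations are clamped to a fixed
    range to suppress outliers, in the spirit of 'SwiGLU Clamping'.
    The limit of +/-20 is an illustrative assumption."""

    def __init__(self, d_model: int, d_hidden: int, clamp_limit: float = 20.0):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_hidden, bias=False)
        self.w_up = nn.Linear(d_model, d_hidden, bias=False)
        self.w_down = nn.Linear(d_hidden, d_model, bias=False)
        self.clamp_limit = clamp_limit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: silu(gate) * up, then clamp the product before the down-projection
        h = F.silu(self.w_gate(x)) * self.w_up(x)
        h = torch.clamp(h, -self.clamp_limit, self.clamp_limit)
        return self.w_down(h)


class LossSpikeDetector:
    """Flags a loss spike when the current loss exceeds a running average by a
    fixed factor; a training loop could use this flag to switch the MoE router
    into the decoupled 'anticipatory' mode the report describes. The factor and
    momentum values here are assumptions."""

    def __init__(self, spike_factor: float = 1.5, momentum: float = 0.99):
        self.spike_factor = spike_factor
        self.momentum = momentum
        self.running_loss = None

    def update(self, loss: float) -> bool:
        if self.running_loss is None:
            self.running_loss = loss
            return False
        is_spike = loss > self.spike_factor * self.running_loss
        self.running_loss = (self.momentum * self.running_loss
                             + (1 - self.momentum) * loss)
        return is_spike
```

In this sketch the clamping is always on, while the spike detector only raises a flag; the report indicates the routing-side intervention is activated on demand, which matches the stated ~20% overhead being paid only when a spike is detected.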
