Why Large Models Fail to Generate 'Ma Jiaqi': MiniMax's Token Analysis Reveals Nearly 5% of Tokens Forgotten in Post-Training
According to monitoring by Dongcha Beating, MiniMax has published a technical blog post detailing its root-cause investigation into why its M2 series of large models could not output the name 'Ma Jiaqi'. What began as a single case ultimately exposed a systemic degradation problem affecting the entire vocabulary.

The root cause was traced to the tokenizer (the component that segments text into units for model processing), which had merged 'Jiaqi' into a standalone token during training. In pre-training, the model saw large amounts of internet text and learned this token; in the post-training dialogue data, however, fewer than 5 samples contained 'Jiaqi'. During post-training, high-frequency tokens such as tool_call markers and code symbols kept updating the surrounding vector space, pushing low-frequency tokens like 'Jiaqi' in the wrong direction. The model still 'recognizes' Ma Jiaqi and can accurately answer questions about him; it has merely lost the ability to output this particular token.

The team then ran a comprehensive scan of the roughly 200,000-token vocabulary and found that about 4.9% of tokens had significantly degraded. Japanese was hit hardest: 29.7% of Japanese tokens showed significant degradation, far exceeding Korean (3.3%), Russian (3.7%), Chinese (3.9%), and English (3.5%). Other notably degraded tokens included internet SEO spam terms such as 'legendary private server' and 'painless abortion', degraded by the same mechanism as 'Jiaqi'.

The severe degradation in Japanese also solved an old mystery. The model had occasionally mixed Russian or Korean characters into Japanese dialogues, but the cause had been unknown. The analysis showed that once the parameters of Japanese tokens drifted, they became confused with tokens from other languages in the vector space, leading to incorrect token activation in Japanese output (language mixing) and pushing adjacent low-frequency Chinese tokens out of the normal probability range (token forgetting).

The fix was to construct a synthetic dataset covering the entire vocabulary, letting the model practice every token through simple repetition tasks. The results were immediate: the proportion of Russian characters mixed into Japanese responses dropped from 47% to 1%, and the stability of output parameters across the whole vocabulary (measured by cosine similarity) rose from a low of 0.329 to above 0.97 for all tokens.
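For readers who want a concrete picture of the vocabulary-wide scan described above, the sketch below shows one plausible way to flag degraded tokens: compare each token's output-projection vector before and after post-training and report the rows whose cosine similarity falls below a cutoff. This is an illustrative assumption, not MiniMax's published code; the function name, the 0.97 cutoff, and the toy tensor sizes are placeholders.

```python
# Minimal sketch of a token-drift scan (hypothetical, not MiniMax's code).
import torch
import torch.nn.functional as F

def scan_token_drift(pre_lm_head: torch.Tensor,
                     post_lm_head: torch.Tensor,
                     threshold: float = 0.97):
    """Flag tokens whose output vectors drifted during post-training.

    pre_lm_head / post_lm_head: [vocab_size, hidden_dim] rows of the output
    projection taken from the pre-trained and post-trained checkpoints.
    """
    # Per-token cosine similarity between the two checkpoints.
    sims = F.cosine_similarity(pre_lm_head, post_lm_head, dim=-1)
    degraded = (sims < threshold).nonzero(as_tuple=True)[0]
    return degraded, sims

# Toy example with random stand-in weights (a real scan would load two
# checkpoints and cover the full ~200,000-token vocabulary).
vocab_size, hidden = 1_000, 64
pre = torch.randn(vocab_size, hidden)
post = pre.clone()
post[:50] += 0.8 * torch.randn(50, hidden)  # simulate drifted rows
degraded, sims = scan_token_drift(pre, post)
print(f"{len(degraded) / vocab_size:.1%} of tokens below threshold")
```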
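The blog's remedy, "simple repetition tasks" covering every token, could look roughly like the following: generate one echo-style example per vocabulary entry so each token appears at least once as a training target. The prompt template, the use of a Hugging Face-style tokenizer, and the model name in the usage comment are assumptions for illustration, not MiniMax's actual recipe.

```python
# Rough sketch of a full-vocabulary repetition dataset (hypothetical recipe).
from transformers import AutoTokenizer

def build_repetition_dataset(tokenizer_name: str):
    """Yield one echo-style training example per token in the vocabulary."""
    tok = AutoTokenizer.from_pretrained(tokenizer_name)
    for token_id in range(tok.vocab_size):
        piece = tok.decode([token_id])
        if not piece.strip():          # skip pure-whitespace / control pieces
            continue
        yield {
            "prompt": f"Repeat the following text exactly: {piece}",
            "response": piece,          # forces the model to emit this token
        }

# Usage (tokenizer name is a placeholder):
# examples = list(build_repetition_dataset("MiniMaxAI/MiniMax-M2"))
```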