MIT Researchers Reveal LLM Strong Superposition Mechanism: Doubling Width Halves Error Rate
AIMPACT News, May 3 (UTC+8). MIT researchers have revealed the mechanism by which large language model performance scales reliably with size, providing the first experimental validation of the “superposition” phenomenon. The study finds that LLMs bypass dimensionality limits by storing multiple concepts within the same dimensions; this “strong superposition” lets a model represent all concepts simultaneously, with errors arising from the interference noise that the overlaps generate.

The team validated the findings on a simplified model from Anthropic and on open-source models including OPT, GPT-2, Qwen2.5, and Pythia: doubling the model width cuts the error roughly in half, with a measured scaling exponent of 0.91, close to the theoretical value of 1.

The research also addresses two key questions. First, scaling gains should stop once model width matches vocabulary size. Second, for natural-language tasks, the flatness of the word-frequency distribution limits how quickly the search space can shrink, but architectures designed to encourage superposition can achieve better performance at the same scale.
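The reported relationship is a simple power law: error ∝ width^(−α) with α ≈ 0.91, so doubling the width multiplies the error by 2^(−0.91) ≈ 0.53, i.e. roughly halves it. The sketch below uses synthetic data (not the paper's measurements) to show how such an exponent can be recovered with a log-log linear fit:

```python
import numpy as np

# Illustration of the power-law scaling described in the article:
# error ∝ width^(-alpha), so doubling the width multiplies the error
# by 2^(-alpha). With the reported alpha = 0.91, 2**-0.91 ≈ 0.53.

widths = np.array([256, 512, 1024, 2048, 4096, 8192])
alpha_true = 0.91                     # exponent reported in the article
rng = np.random.default_rng(0)
# Synthetic errors for illustration only: the power law plus small noise.
errors = widths ** (-alpha_true) * rng.lognormal(0.0, 0.05, widths.size)

# Fit the exponent by least squares in log-log space:
# log(error) = -alpha * log(width) + c
slope, intercept = np.polyfit(np.log(widths), np.log(errors), 1)
print(f"fitted exponent alpha: {-slope:.2f}")        # ≈ 0.91 here
print(f"error ratio per width doubling: {2 ** slope:.2f}")  # ≈ 0.53
```

On this synthetic data the fit recovers α ≈ 0.91, and the per-doubling error ratio of about 0.53 matches the article's claim that doubling model width roughly halves the error.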