Pre-training sped up 2 to 3 times: Nous's new TST scheme faces "collision" controversy
ME News Report, May 14th (UTC+8): according to BlockBeats monitoring, Nous Research has released a new pre-training scheme for large models called Token Substitution Training (TST). The scheme packs adjacent tokens together before training, which can shorten pre-training time by a factor of 2 to 3 under the same computational effort.

TST consists of two phases. During the first 20% to 40% of training, the model no longer reads tokens one by one; instead, adjacent tokens are "packed" by averaging them, the pack is fed in as input, and the model predicts which tokens appear in the next pack, without regard to their internal order. Afterwards, the model reverts to conventional next-token prediction. Because the underlying architecture remains unchanged, the resulting model is indistinguishable from a standard model at inference time. The method has been validated on MoE models with up to 10 billion parameters.

The core idea of the scheme is to "trade data for compute": consuming data faster in order to save training time. If high-quality text resources are ever exhausted, this accelerated data consumption could become a drawback.

A few hours after the paper was published, some readers pointed out that TST's mechanism is strikingly similar to "Beyond Next Token Prediction," an earlier work released in 2024. The authors later acknowledged on Hugging Face that this was "unfortunate convergent research" and promised to update the paper with additional citations. (Source: BlockBeats)
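For intuition, here is a minimal PyTorch sketch of the packed phase as the report describes it. This is an illustration, not Nous's actual code: `pack_size`, the function names, and the model interface (a module mapping pack embeddings to vocabulary logits) are all assumptions made for the example. It shows the two ingredients mentioned above: averaging adjacent token embeddings into packs, and an order-free "which tokens are in the next pack" training target.

```python
import torch
import torch.nn.functional as F

def pack_embeddings(token_embeds: torch.Tensor, pack_size: int) -> torch.Tensor:
    """Average each run of `pack_size` adjacent token embeddings into one pack.

    token_embeds: (batch, seq_len, d_model); seq_len must be divisible by pack_size.
    Returns a sequence pack_size times shorter: (batch, seq_len // pack_size, d_model).
    """
    b, t, d = token_embeds.shape
    return token_embeds.view(b, t // pack_size, pack_size, d).mean(dim=2)

def pack_target_sets(token_ids: torch.Tensor, pack_size: int, vocab_size: int) -> torch.Tensor:
    """Multi-hot targets: 1 wherever a vocabulary token occurs in a pack.

    The target is a set, so the order of tokens inside a pack is deliberately lost.
    """
    b, t = token_ids.shape
    packs = token_ids.view(b, t // pack_size, pack_size)
    targets = torch.zeros(b, t // pack_size, vocab_size, device=token_ids.device)
    return targets.scatter_(2, packs, 1.0)

def packed_phase_loss(model, token_embeds, token_ids, pack_size, vocab_size):
    """Loss for the first phase: each pack predicts the *contents* of the next pack."""
    packed = pack_embeddings(token_embeds, pack_size)   # shorter input sequence
    logits = model(packed)                              # (batch, n_packs, vocab_size)
    targets = pack_target_sets(token_ids, pack_size, vocab_size)
    # Shift by one pack: position i predicts the token set of pack i + 1.
    return F.binary_cross_entropy_with_logits(logits[:, :-1], targets[:, 1:])
```

Because the packed sequence is `pack_size` times shorter, each step of this phase covers the same text with far fewer sequence positions, which is presumably where the reported time saving comes from; after the first 20% to 40% of training, a loss like the one sketched above would be dropped in favor of ordinary next-token prediction on unpacked tokens.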