Meituan LongCat-2.0-Preview quietly launched: no announcement, no open source
CryptoWorld News: Meituan has launched a new model, LongCat-2.0-Preview, on the LongCat API platform. The update log is dated April 20, but no official announcement or technical report has been released. Earlier LongCat-series models were accompanied by official blog posts and technical reports and were open-sourced on Hugging Face and GitHub; the 2.0-preview log includes no open-source link, and the model is available only through the API. The log lists three capabilities: it is targeted at agent development; it supports tool invocation, multi-step reasoning, and long-context tasks; and it excels at code generation and complex instruction execution.

On April 24, multiple media outlets, citing insiders, reported that the model has more than one trillion total parameters, adopts a mixture-of-experts (MoE) architecture, supports a 1-million-token context window, and has roughly the same parameter count as DeepSeek v4. Training and inference reportedly ran entirely on domestic computing clusters, using 50,000 to 60,000 domestically produced accelerator cards, making it the largest-scale training task completed with domestic computing power to date. During testing, a daily free quota of 10 million tokens was provided.
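Since the model is reachable only through the API and the log highlights tool invocation for agent development, a minimal sketch of what such a call might look like is shown below, assuming the LongCat platform exposes an OpenAI-compatible chat-completions endpoint. The base URL, model identifier, and the get_stock_price tool are all assumptions for illustration, not confirmed details of the platform.

```python
# Hypothetical sketch: calling LongCat-2.0-Preview with a tool definition
# through an assumed OpenAI-compatible endpoint. Base URL, model name,
# and the tool schema are illustrative assumptions; consult the LongCat
# API platform documentation for actual values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.longcat.chat/openai/v1",  # assumed endpoint
    api_key="YOUR_LONGCAT_API_KEY",
)

# One tool the model may choose to invoke during multi-step reasoning.
tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",  # hypothetical tool for this demo
        "description": "Look up the latest price for a ticker symbol.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
}]

response = client.chat.completions.create(
    model="LongCat-2.0-preview",  # assumed model identifier
    messages=[{"role": "user", "content": "What is Meituan trading at?"}],
    tools=tools,
)

# If the model invoked the tool, it returns call arguments instead of a
# final answer; an agent loop would run the tool and feed the result back
# in a follow-up message.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

In a full agent loop, the tool result would be appended to the message history with role "tool" and the conversation re-sent, repeating until the model produces a final text answer.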