Meituan quietly releases a new large model: no announcement, no open source
According to Beating Monitoring, Meituan has launched a new model, LongCat-2.0-Preview, on the LongCat API platform. The update log is dated April 20, but Meituan has released no official announcement or technical report to date. Each previous model in the LongCat series (Flash-Chat, Flash-Thinking, Flash-Lite, Flash-Omni, Next) was accompanied by an official blog post and a technical report, and was open-sourced simultaneously on Hugging Face and GitHub. The 2.0-Preview update log contains no open-source links; the model is available only via API.
The update log lists only three capabilities: the model is designed for agent development, with native support for tool invocation, multi-step reasoning, and long-context tasks; it is proficient in code generation, automation workflows, and executing complex instructions; and it is deeply integrated with Claude Code, OpenClaw, OpenCode, and Kilo Code.
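Since the model is offered only through an API and the log highlights native tool invocation, usage would presumably look something like the sketch below. This is a minimal illustration assuming the LongCat platform exposes an OpenAI-compatible chat-completions endpoint; the base URL, model identifier, and the compatibility itself are assumptions, not details confirmed by Meituan.

```python
# Hypothetical sketch: exercising LongCat-2.0-Preview's tool-invocation
# capability via an OpenAI-compatible chat-completions endpoint.
# The base URL, model name, and API shape are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.longcat.example/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

# One tool definition in the standard function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Look up the latest price for a ticker symbol.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
}]

response = client.chat.completions.create(
    model="LongCat-2.0-Preview",  # assumed model identifier
    messages=[{"role": "user", "content": "What is Meituan trading at?"}],
    tools=tools,
)

# If the model chose to invoke the tool, the call arrives as structured
# arguments rather than free text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```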
On April 24, multiple media outlets citing informed sources reported more details: the model has over one trillion total parameters, uses a MoE (mixture-of-experts) architecture, supports a 1-million-token context window, and its parameter count is roughly on par with DeepSeek V4, released the same day. Insiders said that the training and inference of LongCat-2.0-Preview were completed entirely on domestic computing clusters, using 50,000 to 60,000 domestically produced accelerators, reportedly the largest training task ever completed on domestic computing power. During testing, a free daily quota of 10 million tokens was provided.
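The pairing of a trillion-plus total parameters with an MoE architecture reflects a standard trade-off: a router activates only a few experts per token, so the compute per token corresponds to a small fraction of the total parameter count. Below is a generic top-k routing illustration; every number and shape is made up and does not reflect LongCat-2.0-Preview's unpublished design.

```python
# Generic illustration of MoE top-k expert routing; all dimensions are
# toy values, not LongCat-2.0-Preview's actual (unpublished) configuration.
import numpy as np

n_experts, top_k, d_model = 64, 2, 8
rng = np.random.default_rng(0)

# Router: a linear layer scoring each expert for a given token.
router_w = rng.normal(size=(d_model, n_experts))
token = rng.normal(size=d_model)

logits = token @ router_w
top = np.argsort(logits)[-top_k:]   # indices of the chosen experts
weights = np.exp(logits[top])
weights /= weights.sum()            # softmax over the selected experts only

# Each expert is its own feed-forward block; only the chosen ones run,
# so active parameters per token are roughly top_k/n_experts of the total.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
output = sum(w * (token @ experts[i]) for w, i in zip(weights, top))
print(f"routed to experts {top.tolist()} with weights {weights.round(3)}")
```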