Mistral AI releases Vibe remote coding agent and 128-billion-parameter model Mistral Medium 3.5
AIMPACT News, May 3rd (UTC+8) — Mistral AI has released the Vibe remote coding agent and a public preview of Mistral Medium 3.5.

Vibe is a CLI-accessible coding agent that can complete software tasks such as writing code, refactoring, generating tests, and investigating CI failures. It runs on a cloud backend, so users can hand off long-running tasks even when away from their computers, and it supports running multiple agents in parallel. Each agent session runs in an isolated sandbox and, on completion, can open a Pull Request on GitHub and notify the user.

Mistral Medium 3.5 is a dense model with 128 billion parameters and a 256k-token context window, marking Mistral's first flagship dense model. It supports instruction following, reasoning, and coding tasks, and was trained from scratch with a visual encoder to handle variable image sizes. On the SWE-Bench validation set it scored 77.6%, surpassing Devstral 2 and Qwen3.5 397B A17B, and it is now the default model for Vibe and Le Chat. Inference effort can be configured per request.