DeepMind releases AI math research assistant: Multi-agent architecture beats GPT-5.5 Pro and solves previously "unsolvable" problems
According to BlockBeats, Google DeepMind has released an "AI co-mathematician", a multi-agent interactive research workstation for mathematicians. The system scored 47.9% on FrontierMath Tier 4, currently the hardest research-level math benchmark, solving 23 of 48 problems and surpassing the previous record of 39.6% set by GPT-5.5 Pro.

The system does not rely on a new-generation base model; it runs directly on Gemini 3.1 Pro. On its own, that model scores only 19% on Tier 4, but the agent framework more than doubles its performance. DeepMind built a multi-layer architecture around it: at the top, a "project coordinator" breaks a research task into multiple workflows and distributes them to sub-agents responsible for literature retrieval, coding, and reasoning. Any generated proof must then pass a review by multiple "review agents" before submission (a minimal sketch of this pattern follows the article). This heavy scaffolding suggests that, in top-tier mathematical reasoning, the gains from orchestration may exceed those from model upgrades.

The blind test was run by Epoch AI. To prevent cheating, the DeepMind team never saw the questions, and each problem was allowed 48 hours of runtime. The results not only topped the leaderboard but also solved three problems that no previous model had cracked.

Although billed as an assistant, it works more like a brainstorming colleague. Group theorist Marc Lackenby used it in actual research to settle an open conjecture from the Kourovka Notebook. Interestingly, the system's initial strategy was flagged as "flawed" by its own review agent, but Lackenby recognized the clever idea hidden in the flawed approach, filled in the gaps himself, and completed the proof.

For now, the AI co-mathematician is open only for internal testing by a small number of mathematicians.
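The architecture described above maps onto a coordinator / specialist / reviewer loop. Below is a minimal Python sketch of that pattern. Everything in it is an assumption for illustration: the names (`Coordinator`, `call_model`, the `"NO FLAWS"` verdict convention), the unanimous-approval rule, and the prompt formats are hypothetical, not DeepMind's implementation; `call_model` is a toy stub standing in for real LLM calls.

```python
from dataclasses import dataclass


def call_model(role: str, prompt: str) -> str:
    """Toy stub so the sketch runs end to end; swap in a real LLM client.
    (Per the article, the actual system backs every role with Gemini 3.1 Pro.)"""
    if role.startswith("reviewer"):
        return "NO FLAWS"  # assumed verdict format, purely illustrative
    return f"[{role}] notes on: {prompt.splitlines()[-1][:50]}"


@dataclass
class Coordinator:
    """Hypothetical 'project coordinator': splits a research task into
    workflows and routes them to specialist sub-agents."""
    specialists: dict[str, str]  # role name -> prompt template (assumed shape)
    n_reviewers: int = 3         # number of independent review agents (assumed)

    def decompose(self, task: str) -> list[str]:
        plan = call_model("coordinator", f"Break into subtasks:\n{task}")
        return [line for line in plan.splitlines() if line.strip()]

    def run_subtask(self, subtask: str) -> str:
        # Fan the subtask out to the sub-agent roles named in the article:
        # literature retrieval, coding, and reasoning.
        notes = [
            call_model(role, template.format(subtask=subtask))
            for role, template in self.specialists.items()
        ]
        return "\n".join(notes)

    def review(self, proof: str) -> bool:
        # The article only says proofs must pass multiple review agents;
        # requiring unanimous approval here is our assumption.
        verdicts = [
            call_model(f"reviewer-{i}", f"Find flaws in this proof:\n{proof}")
            for i in range(self.n_reviewers)
        ]
        return all("NO FLAWS" in v for v in verdicts)

    def solve(self, task: str) -> str | None:
        notes = "\n".join(self.run_subtask(s) for s in self.decompose(task))
        proof = call_model("prover", f"Write a full proof using:\n{notes}")
        return proof if self.review(proof) else None


if __name__ == "__main__":
    coord = Coordinator(specialists={
        "literature": "Survey prior work relevant to: {subtask}",
        "coding": "Write verification code for: {subtask}",
        "reasoning": "Reason step by step about: {subtask}",
    })
    print(coord.solve("a toy research task"))
```

The sketch makes the article's point structurally: every role is the same base model prompted differently, so the reported jump from 19% to 47.9% on Tier 4 comes from task decomposition and adversarial review, not from a stronger model.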