METR updates its AI agent capability benchmark: Gemini 3.1 Pro's reliability surpasses all frontier models to take the top spot
BlockBeats News, April 16 (UTC+8). According to BlockBeats monitoring, the AI safety evaluation organization METR has updated its “Time Horizon” benchmark, adding test data for Google's Gemini 3.1 Pro. The benchmark tracks the upper limit of an AI agent's ability to complete programming tasks independently, and since its release in February this year it has become an important reference for measuring growth in AI agent capabilities.
The methodology has human software-engineering experts (averaging about five years of experience) and AI agents attempt the same set of more than 100 software tasks, with each task's difficulty measured by how long the humans take to complete it. Two core indicators result: the 50% Time Horizon (the hardest task, by human time, that the AI completes with 50% probability) and the 80% Time Horizon (the hardest task the AI completes with 80% probability).
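One way such horizons can be computed is to fit success probability against the log of human task duration and read off the 50% and 80% points. The sketch below illustrates the idea; the task data, durations, and fit details are hypothetical and not METR's actual pipeline.

```python
# Sketch of reading a "time horizon" off per-task results by fitting
# success probability against log task duration. All data and names
# here are hypothetical; this is not METR's actual pipeline.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical results: (human completion time in minutes, agent success 0/1)
tasks = [(1, 1), (4, 1), (15, 1), (30, 1), (60, 0), (120, 1),
         (240, 0), (480, 1), (720, 0), (960, 0), (1440, 0)]
log_t = np.log([t for t, _ in tasks])
success = np.array([s for _, s in tasks], dtype=float)

def p_success(x, mu, s):
    # Decreasing logistic: P(success) falls as log task duration x grows;
    # mu is the log duration at which P = 0.5.
    return 1.0 / (1.0 + np.exp((x - mu) / s))

(mu, s), _ = curve_fit(p_success, log_t, success, p0=[np.log(60.0), 1.0])

def horizon(p):
    # Task duration (minutes) at which the fitted success probability is p.
    return float(np.exp(mu + s * np.log((1.0 - p) / p)))

print(f"50% time horizon: {horizon(0.5):.0f} min")
print(f"80% time horizon: {horizon(0.8):.0f} min")  # always <= the 50% horizon
```

Because the 80% threshold demands a higher success rate, it always yields a shorter horizon than the 50% threshold, which is why a model can rank differently on the two indicators.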
Gemini 3.1 Pro's ranking flips between the two indicators. On the 50% Time Horizon it placed second, well behind Claude Opus 4.6; on the more stringent 80% Time Horizon, however, it overtook Claude Opus 4.6 to take the top spot.
Claude Opus 4.6 can tackle more difficult tasks, but its success rate fluctuates greatly. Gemini 3.1 Pro has a lower ceiling but is more stable within its capability range. For production scenarios that require predictable outcomes, the latter may be more practical.
Compared with the previous generation, Gemini 3 Pro (50% Time Horizon of about 3.7 hours), Gemini 3.1 Pro improved by about 71%, implying a horizon of roughly 6.3 hours. Over a longer timeline, METR's data shows the time horizon of frontier models growing from a few seconds with GPT-2 in 2019 to more than ten hours today, roughly doubling every 4.3 months. METR says it “has not seen any signs of exponential growth slowing.”
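The growth claim can be sanity-checked with quick arithmetic: under a 4.3-month doubling time, climbing from a few seconds to more than ten hours takes roughly the span from 2019 to today. A minimal check, where the 5-second starting point is an assumed value:

```python
# Back-of-envelope check of the growth claim. The 5-second starting
# value for GPT-2 is an assumption; "more than ten hours" and the
# 4.3-month doubling time are the figures METR reports.
import math

start_s = 5.0              # assumed GPT-2-era horizon, 2019
end_s = 10.0 * 3600.0      # "more than ten hours" today
doubling_months = 4.3      # METR's reported doubling time

doublings = math.log2(end_s / start_s)
print(f"{doublings:.1f} doublings ≈ {doublings * doubling_months / 12:.1f} years")
# -> 12.8 doublings ≈ 4.6 years, consistent with the 2019-to-today window
```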
It should be noted that METR's tasks cover software engineering, machine learning, and cybersecurity, and all are clearly defined, self-contained tasks that can be scored automatically. In follow-up research, METR found that when scoring shifted from algorithmic checks to holistic human evaluation, AI performance dropped significantly. A 12-hour time horizon therefore does not mean an AI can replace a human for half a day of real work.
(Source: BlockBeats)