ARC Prize Foundation announces the largest human testing study in the ARC-AGI series: humans have cleared every level, but AI still falls short
ME News Report, April 15 (UTC+8): according to BlockBeats monitoring, the ARC Prize Foundation has released the human performance dataset for ARC-AGI-3, the largest human testing study in the ARC-AGI series to date, with 458 participants. The dataset includes 342 complete human gameplay replay records covering 25 public environments, all open-sourced. ARC-AGI-3 comprises 135 abstract reasoning environments in which testers receive no gameplay instructions and must explore, infer the rules, and develop strategies on their own. Tests were conducted at an offline testing center in San Francisco, each session lasting 90 minutes, with participants earning roughly $130 in base pay plus $5 for each environment they passed. All tests were run under "first-attempt" conditions: each participant saw each environment only once and attempted it only once, measuring the ability to learn and adapt when facing entirely new problems. Humans and AI were given exactly the same information, with no information gap.
Core conclusion: every environment in ARC-AGI-3 was passed by humans, each by at least two independent participants and most by more than five. The ARC Prize Foundation states: "We have not yet achieved AGI; this dataset is the evidence."
Since ARC-AGI-3 was previewed, the open environments have received nearly one million AI evaluation submissions. Based on this data, the foundation also announced two scoring-rule adjustments: first, the per-level human benchmark changes from the "second-best player" to the "median player," reducing the influence of luck on scores; second, the maximum score per level rises from 100% to 115%, so that poor performance on a single level cannot drag down the overall result. The net effect of the two adjustments is a slight increase of roughly 0.5 percentage points in both human and AI scores. (Source: BlockBeats)
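The two rule changes can be illustrated with a small sketch. This is an assumption about the shape of the scoring, not the ARC Prize Foundation's actual implementation: a per-level AI score is normalized against a human benchmark (formerly the second-best human, now the median human) and capped (formerly at 100%, now at 115%).

```python
from statistics import median

def level_score(ai_score: float, human_scores: list[float],
                benchmark: str = "median", cap: float = 1.15) -> float:
    """Hypothetical per-level score: AI result divided by a human
    benchmark, then capped. Function name and formula are illustrative
    assumptions based on the article's description of the rule change."""
    if benchmark == "median":
        ref = median(human_scores)          # new rule: median player
    else:
        ref = sorted(human_scores)[-2]      # old rule: second-best player
    return min(ai_score / ref, cap)         # cap raised from 1.00 to 1.15

# Toy data: five human results on one level, plus one AI result.
humans = [40.0, 55.0, 60.0, 80.0, 95.0]
old = level_score(66.0, humans, benchmark="second_best", cap=1.00)  # 66/80
new = level_score(66.0, humans, benchmark="median", cap=1.15)       # 66/60
```

On this toy data the same AI result scores slightly higher under the new rules (66/60 = 1.10 versus 66/80 = 0.825), consistent with the article's note that both human and AI scores tick up slightly.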