V4-Pro ranks first on Codeforces with a score of 3206, beating GPT-5.4, but on long context and knowledge it still falls behind Opus and Gemini
According to Beating Monitoring, the V4 technical report includes a comparison between DeepSeek-V4-Pro-Max (the highest inference-strength mode) and closed-source flagship models. The comparison set covers Opus 4.6 Max, GPT-5.4 xHigh, and Gemini 3.1 Pro High, as well as the open-source models Kimi K2.6 and GLM-5.1; it excludes the recently released Opus 4.7 and GPT-5.5.
In coding, V4-Pro-Max scored 3206 on Codeforces, exceeding GPT-5.4's 3168 and Gemini 3.1 Pro's 3052 and setting a new record for this benchmark. On LiveCodeBench, its 93.5 was likewise the highest across the board. On SWE Verified it scored 80.6, only 0.2 percentage points below Opus 4.6's 80.8.
On long-context benchmarks, V4-Pro-Max ranked second in both 1M tests: it scored 62.0 on CorpusQA 1M, trailing Opus 4.6's 71.7 but ahead of Gemini 3.1 Pro's 53.8, and 83.5 on MRCR 1M, where Opus 4.6 led at 92.9 by nearly 10 percentage points.
On agent tasks, it scored 73.6 on MCPAtlas Public, just below Opus 4.6's 73.8, and 67.9 on Terminal-Bench 2.0, lower than GPT-5.4's 75.1 and Gemini 3.1 Pro's 68.5.
In knowledge and reasoning, V4-Pro-Max still shows a clear gap: it scored 90.1 on GPQA Diamond (vs. Gemini's 94.3), 57.9 on SimpleQA-Verified (vs. Gemini's 75.6), and 37.7 on HLE (vs. Gemini's 44.4). As an open-source model, V4-Pro-Max is the first to match and even surpass closed-source flagships on multiple coding and long-context benchmarks, but it still lags behind Gemini 3.1 Pro on knowledge-intensive evaluations.
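For readers who want to check the deltas quoted above, here is a minimal Python sketch that tabulates the scores as reported in this article and prints the gap to the strongest quoted rival on each benchmark. The numbers are copied from the report comparison described above; the dictionary layout and the script itself are our own illustration, not anything published with the V4 technical report.

```python
# Scores as quoted in the article: benchmark -> {model: score}.
# Only the rivals explicitly named per benchmark are included.
SCORES = {
    "Codeforces":         {"V4-Pro-Max": 3206, "GPT-5.4": 3168, "Gemini 3.1 Pro": 3052},
    "LiveCodeBench":      {"V4-Pro-Max": 93.5},
    "SWE Verified":       {"V4-Pro-Max": 80.6, "Opus 4.6": 80.8},
    "CorpusQA 1M":        {"V4-Pro-Max": 62.0, "Opus 4.6": 71.7, "Gemini 3.1 Pro": 53.8},
    "MRCR 1M":            {"V4-Pro-Max": 83.5, "Opus 4.6": 92.9},
    "MCPAtlas Public":    {"V4-Pro-Max": 73.6, "Opus 4.6": 73.8},
    "Terminal-Bench 2.0": {"V4-Pro-Max": 67.9, "GPT-5.4": 75.1, "Gemini 3.1 Pro": 68.5},
    "GPQA Diamond":       {"V4-Pro-Max": 90.1, "Gemini 3.1 Pro": 94.3},
    "SimpleQA-Verified":  {"V4-Pro-Max": 57.9, "Gemini 3.1 Pro": 75.6},
    "HLE":                {"V4-Pro-Max": 37.7, "Gemini 3.1 Pro": 44.4},
}

for bench, scores in SCORES.items():
    v4 = scores["V4-Pro-Max"]
    rivals = {m: s for m, s in scores.items() if m != "V4-Pro-Max"}
    if not rivals:
        print(f"{bench:<20} V4-Pro-Max {v4} (no rival score quoted)")
        continue
    # Compare against the strongest rival quoted for this benchmark.
    best = max(rivals, key=rivals.get)
    gap = v4 - rivals[best]
    verdict = "leads" if gap > 0 else "trails"
    print(f"{bench:<20} V4-Pro-Max {v4} {verdict} {best} ({rivals[best]}) by {abs(gap):.1f}")
```

Running it reproduces the comparisons in the text, e.g. a 0.2-point deficit to Opus 4.6 on SWE Verified and a 9.4-point deficit on MRCR 1M.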
Note that the comparison above does not include the recently released GPT-5.5 and Opus 4.7, so the gap between V4 and the latest generation of closed-source models still needs to be verified by third-party testing.