SemiAnalysis hands-on test: GPT-5.5 returns to the frontier, but OpenAI quietly buried the fact that Opus surpassed it
According to media reports, the semiconductor and AI research firm SemiAnalysis has released a head-to-head evaluation of coding assistants covering GPT-5.5, Opus 4.7, and DeepSeek V4. The core conclusion: GPT-5.5 marks OpenAI’s first return to the frontier of coding models in six months, and SemiAnalysis’s engineers have begun switching between Codex and Claude Code, where previously almost all of them used Claude exclusively. GPT-5.5 is built on a new pre-training run codenamed “Spud,” OpenAI’s first expansion of pre-training scale since GPT-4.5.
In hands-on testing, a division of labor emerged: Claude handles new-project planning and initial setup, while Codex focuses on reasoning-heavy bug fixes. Codex is stronger at understanding data structures and logical reasoning, but poor at inferring a user’s vague intent. Given the same dashboard task, Claude automatically reproduced the reference page’s layout but fabricated large amounts of data, whereas Codex skipped the layout yet produced far more accurate data.
The article also exposes some benchmark maneuvering: OpenAI published a blog post in February this year calling on the industry to adopt SWE-bench Pro as the new standard for coding benchmarks, yet the GPT-5.5 announcement instead leaned on a new benchmark called “Expert-SWE.” The reason hides in the fine print at the bottom of the announcement: on SWE-bench Pro, GPT-5.5 is surpassed by Opus 4.7 and falls far short of Anthropic’s unreleased Mythos (77.8%).
As for Opus 4.7, Anthropic published a postmortem one week after release, admitting that Claude Code shipped with three bugs between March and April that persisted for weeks and affected nearly all users. Multiple engineers had previously reported performance regressions in 4.6, only to be dismissed as subjective perception. On top of that, the new tokenizer in 4.7 inflates token usage by up to 35%, which Anthropic concedes is effectively a hidden price increase.
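A one-line calculation shows why this counts as a price increase even when the listed rates stay the same; the per-million-token price in the sketch below is illustrative, not Anthropic’s actual rate.

```python
# Same workload, same list price, but the new tokenizer emits 35% more tokens:
# the bill grows by exactly the tokenizer overhead.
price = 15.00                        # $ per million tokens (illustrative)
old_tokens = 1_000_000               # workload under the old tokenizer
new_tokens = int(old_tokens * 1.35)  # identical workload, 35% more tokens

print(price * old_tokens / 1_000_000)  # 15.0
print(price * new_tokens / 1_000_000)  # 20.25 -> +35% cost for the same work
```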
DeepSeek V4 is rated as “keeping pace with the frontier but not leading it,” and as the lowest-cost alternative among closed-source models. The article also notes that “Claude still outperforms DeepSeek V4 Pro on high-difficulty Chinese writing tasks,” quipping that “Claude beats the Chinese models in their own language.”
The article puts forward a key idea: model pricing should be judged on “cost per task,” not “cost per token.” GPT-5.5’s unit price is twice GPT-5.4’s ($5 per million input tokens, $30 per million output tokens), but it completes the same task with fewer tokens, so its actual cost is not necessarily higher. Preliminary SemiAnalysis data shows Codex running at an input-to-output ratio of 80:1, below Claude Code’s 100:1.
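To make the cost-per-task framing concrete, here is a minimal back-of-the-envelope sketch. The $5/$30 prices for GPT-5.5 and the 2x multiple over GPT-5.4 come from the article; the per-task token counts below are purely illustrative assumptions, loosely shaped by the reported 80:1 input-output ratio.

```python
# Cost-per-task vs cost-per-token: a minimal sketch.
# Known from the article: GPT-5.5 is priced at $5 per million input tokens and
# $30 per million output tokens, twice GPT-5.4's unit price (implying $2.50/$15).
# The token counts per task are hypothetical.

def cost_per_task(input_tokens: int, output_tokens: int,
                  in_price: float, out_price: float) -> float:
    """Dollar cost of one task, with prices given in $ per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Assume the older model needs 800k input / 10k output tokens for a task
# (roughly the 80:1 input-output ratio SemiAnalysis reports for Codex),
# and the newer model solves the same task in 40% of the tokens.
old = cost_per_task(800_000, 10_000, in_price=2.50, out_price=15.00)
new = cost_per_task(320_000, 4_000, in_price=5.00, out_price=30.00)

print(f"GPT-5.4: ${old:.2f} per task")  # $2.15
print(f"GPT-5.5: ${new:.2f} per task")  # $1.72, despite double the unit price
```

Since cost scales as price times tokens, doubling the unit price puts the break-even point at exactly half the tokens; any efficiency gain beyond that makes the newer model cheaper per task.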