SemiAnalysis hands-on test: GPT-5.5 returns to the frontier, but OpenAI quietly buried the benchmark where Opus beats it

According to BlockBeats monitoring, the semiconductor and AI research firm SemiAnalysis has published a side-by-side evaluation of programming assistants covering GPT-5.5, Opus 4.7, and DeepSeek V4. The core conclusion: GPT-5.5 marks OpenAI's first return to the frontier of coding models in six months, and SemiAnalysis's engineers have begun switching between Codex and Claude Code, where previously almost everyone used Claude exclusively. GPT-5.5 is built on a new pre-training run codenamed "Spud," OpenAI's first expansion of pre-training scale since GPT-4.5.

In hands-on testing, a division of labor emerged: Claude handles planning and initial scaffolding for new projects, while Codex excels at reasoning-heavy bug fixes. Codex is stronger at understanding data structures and logical reasoning, but poor at inferring a user's vague intent. On the same dashboard task, Claude automatically reproduced the reference page's layout but fabricated large amounts of data, while Codex skipped the layout but produced far more accurate data.

The article also surfaces a telling detail about benchmark reporting: in a blog post this February, OpenAI called on the industry to adopt SWE-bench Pro as the new standard for coding benchmarks, yet the GPT-5.5 announcement instead leads with a new benchmark called "Expert-SWE." The reason is buried in the fine print at the bottom of the announcement: on SWE-bench Pro, GPT-5.5 is beaten by Opus 4.7 and falls far short of Anthropic's unreleased Mythos (77.8%).

On Opus 4.7, Anthropic published a postmortem one week after release, admitting that Claude Code shipped with three bugs between March and April that persisted for several weeks and affected nearly all users. Multiple engineers had previously reported performance regressions in 4.6, but those reports were dismissed as subjective perception. In addition, the new tokenizer in 4.7 inflates token usage by up to 35%, which Anthropic concedes amounts to a hidden price increase.
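
To see why a 35% jump in token usage is effectively a price increase, here is a minimal sketch; the per-token price and token counts are hypothetical, chosen only to make the arithmetic concrete.

```python
# Hypothetical illustration of the "hidden price increase": same text,
# unchanged per-token price, but the 4.7 tokenizer emits up to 35% more
# tokens than 4.6, so the per-task bill rises by the same factor.
# The price and token counts below are made up for the example.

PRICE_PER_MTOK = 15.00                # assumed price, USD per million tokens
old_tokens = 200_000                  # hypothetical usage under the 4.6 tokenizer
new_tokens = int(old_tokens * 1.35)   # +35% under the 4.7 tokenizer

old_cost = old_tokens * PRICE_PER_MTOK / 1_000_000   # $3.00
new_cost = new_tokens * PRICE_PER_MTOK / 1_000_000   # $4.05

print(f"effective price increase: {new_cost / old_cost - 1:.0%}")  # -> 35%
```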

DeepSeek V4 is rated as "keeping pace with the frontier but not leading it," and as the lowest-cost alternative to the closed-source models. The article also notes that "Claude still outperforms DeepSeek V4 Pro on high-difficulty Chinese writing tasks," remarking that "Claude beats the Chinese models in their own language."

The article introduces a key concept: model pricing should be judged by "cost per task" rather than "cost per token." GPT-5.5's unit price is twice that of GPT-5.4 ($5 input, $30 output per million tokens), but it completes the same task with fewer tokens, so the actual cost is not necessarily higher. Preliminary SemiAnalysis data puts Codex's input-to-output token ratio at 80:1, lower than Claude Code's 100:1.
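
A minimal sketch of the cost-per-task arithmetic: GPT-5.5's per-token prices are taken from the article, while the GPT-5.4 prices (inferred from "half the unit price") and the per-task token counts are hypothetical. Under the assumption that GPT-5.5 needs half the tokens, the 2x unit price washes out entirely.

```python
# Cost-per-task comparison. GPT-5.5's prices ($5 in / $30 out per million
# tokens) come from the article; GPT-5.4's prices are derived from the
# stated "half the unit price," and the token counts are hypothetical.

PRICES = {  # (input, output) in USD per million tokens
    "gpt-5.5": (5.00, 30.00),   # from the article
    "gpt-5.4": (2.50, 15.00),   # assumption: half of GPT-5.5's unit price
}

def cost_per_task(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one task, given its token usage."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical scenario: GPT-5.5 finishes the task with half the tokens
# GPT-5.4 needs, so the 2x unit price cancels out exactly. Both runs
# happen to use the article's 80:1 Codex input-to-output ratio.
print(cost_per_task("gpt-5.4", input_tokens=800_000, output_tokens=10_000))  # 2.15
print(cost_per_task("gpt-5.5", input_tokens=400_000, output_tokens=5_000))   # 2.15
```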
