Claude Opus 4.7 hides a price increase: a new tokenizer makes the same text use 37–47% more tokens, so the rate card is unchanged but the bill goes up


According to reporting from The Decoder and hands-on testing by AI cost-observation platforms such as Finout and ClaudeCodeCamp, Anthropic’s Claude Opus 4.7, launched in mid-month, keeps the official rates of $5 per million input tokens and $25 per million output tokens. The new tokenizer, however, splits the same text into more tokens: tests observed a 1.47x increase for both English prose and code, and averaged community tests show a 37.4% cost increase. For enterprise users, this is the first AI pricing controversy of 2026: “the rate card didn’t move, but the bill went up.”

The official 1.35x ceiling meets 1.47x in real-world tests

Anthropic’s official documentation acknowledges that Opus 4.7’s new tokenizer breaks the same passage into more tokens; the ratio range the company provides is 1.0–1.35x (i.e., up to +35%). Multiple independent tests, however, produced higher figures: Finout measured 1.47x on real-world enterprise prompts, ClaudeCodeCamp also observed 1.47x in technical-documentation scenarios, and a combined community assessment averages +37.4%. The discrepancy comes down to the type of text being tested: English-heavy documentation and code are affected the most.

Converted to real costs: an Opus 4.6 prompt that originally used 1,000 input tokens plus 500 output tokens becomes roughly 1,370–1,470 input tokens plus 685–735 output tokens on 4.7. Even if the per-token rate is completely identical, the total request bill increases by 37–47%.
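The arithmetic above can be sketched directly. The rates are the published $5/$25 per million tokens; the 1.47x token ratio is the community-reported measurement, not an official figure:

```python
# Cost of one request at the published per-token rates.
INPUT_RATE = 5.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Total USD cost of one request at the published rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

baseline = request_cost(1_000, 500)  # the Opus 4.6 prompt in the text
inflated = request_cost(1_470, 735)  # same prompt at 1.47x tokens

increase = inflated / baseline - 1
print(f"baseline ${baseline:.4f}, inflated ${inflated:.4f}, +{increase:.0%}")
```

With an identical per-token rate, the bill scales exactly with the token ratio, which is why the 1.37–1.47x token range maps one-to-one onto a 37–47% cost increase.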

Tokens as invisible price leverage in the business model

This isn’t an isolated incident; it’s a structural issue with the AI business model. LLM vendors price in units of “per token,” but how much information “one token” represents is entirely controlled by the vendor. Changing the tokenizer, its encoding algorithm, or its vocabulary makes the same content map to a different number of tokens. In other words, AI vendors can implement effective price hikes through tokenizer upgrades without ever touching the rate card.
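A minimal illustration of that leverage, using hypothetical numbers: hold the listed per-token rate fixed and vary only how many tokens the tokenizer assigns to the same text. The effective price per unit of text moves even though the rate card does not:

```python
# Hypothetical figures: the per-token rate is constant, but the
# effective price per character depends on the tokenizer's output.
RATE_PER_TOKEN = 5.00 / 1_000_000  # USD, input side

def effective_price_per_1k_chars(chars: int, tokens: int) -> float:
    """USD to process 1,000 characters, given the tokenizer's token count."""
    return RATE_PER_TOKEN * (tokens / chars) * 1_000

same_text_chars = 4_000
old_tokens = 1_000   # old tokenizer: roughly 4 chars per token
new_tokens = 1_470   # new tokenizer: same text, 1.47x tokens

old_price = effective_price_per_1k_chars(same_text_chars, old_tokens)
new_price = effective_price_per_1k_chars(same_text_chars, new_tokens)
print(f"{new_price / old_price:.2f}x effective price, rate card unchanged")
```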

Enterprise AI procurement has used “cost per token” as its main price-comparison metric for the past few years, but the Opus 4.7 case shows that metric is incomplete. Real cost monitoring must track the total token consumption needed to complete a single business task. When comparing models, first run a token-calibrated benchmark: feed each model the same task inputs and record its actual token consumption.
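The comparison described above can be sketched as follows. The `count_tokens` callables are stand-ins for whatever token-counting endpoint or library each vendor actually provides; the rates and tokenizer densities are toy values chosen only to show how a cheaper-looking rate can lose to a denser tokenizer:

```python
# Token-calibrated comparison: same task inputs through each model's
# tokenizer, then compare total cost rather than the per-token rate.
from typing import Callable, List

def calibrated_cost(tasks: List[str],
                    count_tokens: Callable[[str], int],
                    rate_per_million: float) -> float:
    """Total USD to process all task inputs at the given rate."""
    total_tokens = sum(count_tokens(t) for t in tasks)
    return total_tokens * rate_per_million / 1_000_000

# Toy stand-ins: model B advertises a lower rate but tokenizes
# the same text into more tokens.
tasks = ["summarize this report"] * 100
cost_a = calibrated_cost(tasks, lambda t: len(t) // 4, rate_per_million=5.0)
cost_b = calibrated_cost(tasks, lambda t: len(t) // 3, rate_per_million=4.0)
print(f"model A: ${cost_a:.4f}, model B: ${cost_b:.4f}")
```

On these toy numbers, model B’s lower headline rate still produces the larger bill, which is exactly the failure mode a rate-card-only comparison misses.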

Specific impact on enterprise procurement contracts

For organizations with already-signed Anthropic enterprise contracts, three areas need to be checked immediately: first, whether monthly spend has risen abnormally due to the model upgrade; second, whether the contract’s “model version” clauses include mandatory upgrade conditions; and third, whether internal AI cost monitoring includes per-task token tracking, rather than only monitoring total tokens per day. This week, Anthropic also officially launched usage-based pricing for its enterprise offering. With these two developments combined, enterprises’ AI budgets may see unexpected double-digit overspends.
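The third check, per-task token tracking, can be sketched with a hypothetical usage-log format. Daily totals hide which workflows regressed; averaging tokens per task and comparing before and after a model upgrade surfaces them (task names and the 1.2x threshold here are illustrative assumptions):

```python
# Per-task token tracking over a hypothetical usage log, flagging
# tasks whose average token consumption jumped after an upgrade.
from collections import defaultdict
from statistics import mean

def tokens_per_task(log: list) -> dict:
    """Average token consumption per task name."""
    by_task = defaultdict(list)
    for entry in log:
        by_task[entry["task"]].append(entry["tokens"])
    return {task: mean(vals) for task, vals in by_task.items()}

def flag_regressions(before: dict, after: dict, threshold: float = 1.2) -> dict:
    """Tasks whose average tokens grew past the threshold ratio."""
    return {t: after[t] / before[t]
            for t in before
            if t in after and after[t] / before[t] > threshold}

before = tokens_per_task([{"task": "invoice_summary", "tokens": 1_000},
                          {"task": "ticket_triage", "tokens": 800}])
after = tokens_per_task([{"task": "invoice_summary", "tokens": 1_470},
                         {"task": "ticket_triage", "tokens": 820}])
print(flag_regressions(before, after))  # invoice_summary flagged at ~1.47x
```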

Transparency in AI pricing will become a new industry topic

The tokenizer controversy around Opus 4.7 could spur new industry self-regulation standards: requiring vendors to publish how token ratios change when they upgrade models, or requiring that a tokenizer remain fixed for a set period. For an AI industry currently absorbing 80% of global venture capital, insufficient transparency will draw closer regulatory attention; the U.S. FTC and EU regulators enforcing the DMA have already begun focusing on “hidden price increases” in digital services. For the enterprise procurement teams and developers among Wade readers, this isn’t an abstract topic: it’s the number on next month’s bill.

This article Claude Opus 4.7 hides a price increase: the new tokenizer makes the same text consume 37–47% more tokens, rates unchanged but bills get more expensive was first published on Chain News ABMedia.
