Is Claude being intentionally dumbed down, and is the model starting to "pander to the crowd"?

By World Model Workshop

Has Claude become less intelligent?

Recently, Stella Laurenzo, a Senior Director in AMD's AI group, publicly criticized Anthropic.

She used her team’s real production logs to conduct a retrospective analysis of 17,871 thinking blocks and 234,760 tool calls across 6,852 conversation files.

The data show that Claude began exhibiting significant behavioral degradation in mid-February.

Claude's median thinking-block length plummeted from 2,200 characters to 600, a drop of 67-73%;

The number of file reads before an edit dropped sharply from 6.6 to 2, and a full one-third of modifications were made without reading the file at all.

In her analysis, Stella argued that as reasoning depth declined, the model gradually stopped fully reading code before modifying it.

She wrote: “When thinking becomes superficial, the model defaults to taking the lowest-cost operation.”
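Stella's exact pipeline was not published, but the metrics she reports are straightforward to compute from structured conversation logs. Below is a minimal sketch assuming a hypothetical JSONL schema (an event "type" field, thinking text, tool name, file path); the field names are illustrative assumptions, not AMD's actual log format.

```python
# Hypothetical sketch of the retrospective log analysis described above.
# The JSONL schema (fields "type", "text", "tool", "path") is assumed
# for illustration; the actual AMD log format was not published.
import json
import statistics
from pathlib import Path

def analyze_logs(log_dir: str) -> None:
    thinking_lengths = []   # characters per thinking block
    reads_before_edit = []  # cumulative reads seen before each edit in a conversation
    blind_edits = 0         # edits to files never read in that conversation

    for log_file in Path(log_dir).glob("*.jsonl"):
        read_paths = set()
        reads_so_far = 0
        with open(log_file) as f:
            for line in f:
                event = json.loads(line)
                if event.get("type") == "thinking":
                    thinking_lengths.append(len(event.get("text", "")))
                elif event.get("type") == "tool_call":
                    if event.get("tool") == "read_file":
                        read_paths.add(event.get("path"))
                        reads_so_far += 1
                    elif event.get("tool") == "edit_file":
                        reads_before_edit.append(reads_so_far)
                        if event.get("path") not in read_paths:
                            blind_edits += 1

    total_edits = len(reads_before_edit)
    print(f"median thinking length: {statistics.median(thinking_lengths):.0f} chars")
    print(f"median reads before edit: {statistics.median(reads_before_edit)}")
    print(f"blind edits: {blind_edits}/{total_edits} "
          f"({100 * blind_edits / total_edits:.0f}%)")

analyze_logs("./claude_conversations")
```

Run against two date-partitioned copies of the logs (pre- and post-February), this kind of script would surface exactly the before/after deltas Stella cites.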

This is not an isolated case. As early as March, developer dissatisfaction had already begun to erupt.

On X, a user wrote: “I thought I was going crazy these past few weeks with Claude. It feels slower, lazier, like it doesn’t think before answering, and I’m not hallucinating.”

On Reddit, another user complained: "Claude feels less conscious, like it's been lobotomized. Besides being dumber, it also started taking drastic actions without asking—"

Someone else called this a naked betrayal of users by Anthropic: "They just made the problem invisible to everyone, on the logic of 'if you can't measure it, I won't show it'… This is the result of AI labs optimizing for profit rather than output quality."

From user complaints to hard data, the decline in Claude's intelligence is all but confirmed.

And Anthropic's official response implicitly acknowledged that thinking depth and effort are indeed being continuously adjusted.

If this is a deliberate move by Anthropic, does it mean that in the future, model capabilities will subtly “shrink”?

Or, will the most powerful model capabilities no longer be equally available to everyone?

Claude’s intentional decline in intelligence

When Claude Opus 4.6 and the dedicated coding tool Claude Code launched in January 2026, developers hailed them as the pinnacle of coding models.

Its reasoning depth was astonishing: research-first behavior (investigate before acting), stable long-context processing, and nearly unbeatable multi-file refactoring.

AMD's internal team even used it over a single weekend to merge and deploy 190k lines of legacy code, an enormous productivity boost.

However, a turning point occurred in early February.

Anthropic quietly launched the “adaptive thinking” feature, officially described as “enabling the model to intelligently adjust its thinking depth based on task complexity.”

On the surface it seemed user-friendly, but in reality it flipped a global throttling switch.

In early March, the model's default effort value was quietly lowered to medium, and thinking-process summaries were soon hidden, making it impossible for users to see how deeply the model was actually thinking.

Over the same period, Anthropic shipped 14 small updates but suffered five major outages, a sign that compute and load pressure were nearing their limits.

Developer complaints surged, with some noting especially poor performance during peak hours (US Eastern afternoons) and suspecting dynamic load-based throttling.

It wasn't until April, when the AMD AI director personally stepped in with data, that public opinion fully ignited.

At this point, Boris Cherny, head of Claude Code at Anthropic, had to issue an official response.

He stated that “adaptive thinking” affects only the display of thinking, not the underlying reasoning, and insisted this was “deliberate optimization” rather than a bug. Users wanting better results could manually set effort to high.

Anthropic’s implicit message was clear: reducing intelligence is not a bug, but a product optimization we intentionally made; just adjust the parameters yourself.
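For readers wondering what "adjust the parameters yourself" would look like in practice, here is a minimal sketch of a Messages API call that requests high effort. The top-level "effort" field and the model name are assumptions based on this article; check Anthropic's current API reference for the exact parameter name, accepted values, and any required beta headers.

```python
# Minimal sketch of manually setting effort to high via the Messages API.
# The "effort" field is an assumed parameter taken from this article, not
# a confirmed part of the API; the endpoint and headers are standard.
import os
import requests

resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-opus-4-6",  # model name as cited in the article
        "max_tokens": 1024,
        "effort": "high",            # assumed field; overrides the quietly lowered default
        "messages": [
            {"role": "user", "content": "Refactor this module; explain your plan first."}
        ],
    },
)
print(resp.json())
```

The catch, of course, is that this only helps API users who know the knob exists; subscription users saw no such setting announced.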

This response instantly ignited even greater anger.

The key point is that from mid-February to early April, Anthropic never announced any major changes in advance.

Many paying users, unaware of these adjustments, kept paying the same subscription fees while the model was quietly throttled.

So Claude's decline in intelligence isn't because the model's "brain broke," but because Anthropic made a more covert, deliberate trade-off, quietly exchanging capability for compute cost.
