Claude Code really is getting dumber! Anthropic admits to three major bugs and fully resets user subscription quotas

Anthropic admits that the decline in Claude Code quality was caused by three technical mistakes, which have now been fixed; as compensation, all user subscription quotas have been reset. However, the string of errors has sparked community backlash, with some developers switching to competitors such as OpenAI's Codex.

Claude Code getting dumber is not an illusion, product manager apologizes

Have you recently felt that Claude Code has become dumber? That's not your imagination. Claude Code product manager Boris Cherny personally posted an apology on the community platform Threads, admitting that product quality has indeed declined over the past month.

After an internal investigation, Anthropic's development team identified three issues affecting the user experience. All of them stem from Claude Code's product settings and its agent SDK; the underlying AI model itself has not degraded. The problems have been fixed in version 2.1.116.

Image source: Boris Cherny

Recently, AMD AI department head Stella Laurenzo and many Reddit forum users criticized Claude Code for suddenly becoming lazy and forgetful, with some even questioning whether Anthropic had deliberately weakened the model to cut costs. Anthropic has denied these rumors in its statement.

Three major technical mistakes caused Claude Code to become dumber

According to an official article from Anthropic, an internal investigation confirmed three independent update errors that degraded the user experience. The company also detailed the technical specifics and the progress of each fix:

  1. Default reasoning level was downgraded: On March 4, the development team lowered Claude Code's default reasoning effort from high to medium, aiming to resolve interface freezes and high latency that some users experienced under heavy load. Anthropic later admitted this trade-off was wrong, since most users prefer a higher default intelligence level. The setting was reverted on April 7.
  2. Cache optimization caused memory gaps: To reduce latency and cost for conversations idle for over an hour, the team shipped a cache-clearing mechanism on March 26 that discarded earlier thought processes. A bug in the code caused the system to keep clearing memory on subsequent conversation turns as well, making the model appear forgetful and driving abnormal quota consumption. This bug was fixed on April 10.
  3. Overly strict token limits in system prompts: Ahead of a new model release, the team added a length restriction to system prompts on April 16 to curb verbose responses, requiring tool-call text to stay within 25 characters. Combined with other settings, this severely degraded code quality. The team rolled back the change on April 20.
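The second bug, as described above, amounts to a state flag that is set when a conversation goes idle but never reset afterward. Anthropic has not published the actual code; the following is a minimal hypothetical sketch of that failure mode, with invented names (`Conversation`, `handle_turn`, `IDLE_LIMIT_SECONDS`):

```python
from dataclasses import dataclass, field

IDLE_LIMIT_SECONDS = 3600  # clear cached reasoning after 1 hour idle


@dataclass
class Conversation:
    last_active: float
    thoughts: list = field(default_factory=list)
    # The hypothesized bug: once set, this flag is never reset, so every
    # later turn keeps clearing memory even when the user is active.
    should_clear: bool = False


def handle_turn(conv: Conversation, now: float, message: str) -> None:
    if now - conv.last_active > IDLE_LIMIT_SECONDS:
        conv.should_clear = True
    if conv.should_clear:
        conv.thoughts.clear()  # model "forgets" all prior reasoning
        # Fix would be to reset the flag here: conv.should_clear = False
    conv.thoughts.append(f"reasoning for: {message}")
    conv.last_active = now
```

In this sketch, one long idle gap legitimately triggers a clear, but because the flag stays set, every turn after it also wipes the accumulated reasoning, matching the reported symptoms of forgetfulness and abnormal quota consumption.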

To compensate affected developers, Anthropic announced that all subscribers' usage quotas would be fully reset starting April 23 (U.S. time), and pledged to have a larger share of internal staff use the public release and to review and test system prompt changes more strictly.

Community backlash continues, some users switch to OpenAI Codex

Even though Anthropic transparently explained the reasons for Claude Code’s quality decline and offered compensation, developer communities remain dissatisfied with the experience during this period.

On the well-known developer forum HackerNews, some users expressed deep disappointment that Anthropic had spent the past month denying the product had deteriorated, particularly criticizing the mechanism that cleared the cache after just one hour of idleness.

Many engineers pointed out that leaving conversations idle for several hours is common in their workflows, and that the unannounced change severely disrupted project continuity.

Product manager Boris Cherny also responded directly on HackerNews, explaining that reloading large amounts of context after a long idle period would instantly consume a high proportion of a user's quota, which is why the team tried to optimize this behavior.

Image source: HackerNews. Boris Cherny personally responds to user questions on HackerNews.

Despite official explanations, the series of update errors and communication gaps have already damaged brand trust.

According to forum comments, many impatient paying users have already canceled their subscriptions and moved their work to competitors such as OpenAI's Codex in search of more stable development assistance.

Further reading:
Claude Code rumored to withdraw from Pro plan, Max subscription required! Anthropic executive: still testing

Want to download the desktop version of Claude Code? Experts reveal Google search poisoning, executing installation commands gets compromised

Still buying AI relay stations on Taobao? Whistleblower: Claude Code source code leaked, dozens of infected versions detected
