Anthropic Identifies Three Product-Layer Changes Behind Claude Code Quality Decline, Not a Model Issue

Gate News message, April 23 — Anthropic’s engineering team confirmed that the Claude Code quality degradation users reported over the past month stemmed from three independent product-layer changes, not from the API or the underlying model. The three problems were fixed on April 7, April 10, and April 20, respectively; the current release is v2.1.116.

The first change occurred on March 4, when the team lowered Claude Code’s default reasoning effort from “high” to “medium” to address occasional extreme latency spikes in Opus 4.6 at high reasoning intensity. After widespread user complaints about degraded performance, the team reverted the change on April 7. The default is now “xhigh” for Opus 4.7 and “high” for other models.
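The per-model defaults described above amount to a small lookup with a fallback. The sketch below is purely illustrative; the function and table names are assumptions, not Anthropic's actual configuration code.

```python
# Hypothetical sketch of per-model default reasoning-effort selection.
# Model names and effort labels follow the article; structure is assumed.

DEFAULT_EFFORT = {
    "opus-4.7": "xhigh",  # highest default, restored after the rollback
}
FALLBACK_EFFORT = "high"  # all other models

def default_reasoning_effort(model: str) -> str:
    """Return the default effort for a model, falling back to 'high'."""
    return DEFAULT_EFFORT.get(model, FALLBACK_EFFORT)
```

A table-plus-fallback keeps the special case (Opus 4.7) explicit while every other model inherits the restored “high” default.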

The second issue was a bug introduced on March 26. The system was designed to clear old reasoning records once conversation inactivity exceeded one hour, reducing session-recovery costs. An implementation flaw, however, caused the clearing to run again on every subsequent turn rather than once, so the model progressively lost prior reasoning context. This manifested as growing forgetfulness, repeated operations, and abnormal tool invocations. The bug also caused cache misses on every request, accelerating users’ quota consumption. Two unrelated internal experiments masked the reproduction conditions, stretching the debugging process past a week. After the fix shipped on April 10, the team had Opus 4.7 review the problematic code and found that Opus 4.7 could identify the bug while Opus 4.6 could not.
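One plausible shape for a "clears every turn instead of once" bug is an inactivity check against a timestamp that is never refreshed: once the hour elapses, the condition stays true forever. The sketch below is a minimal reconstruction under that assumption; all class and field names are hypothetical, not Anthropic's code.

```python
INACTIVITY_LIMIT_S = 3600  # clear old reasoning after one hour of inactivity

class BuggySession:
    """Hypothetical buggy version: the timestamp is never refreshed,
    so once the hour elapses the clear fires on EVERY turn,
    steadily erasing prior reasoning context."""
    def __init__(self, t0: float):
        self.created = t0
        self.reasoning: list[str] = []

    def on_turn(self, now: float, record: str) -> None:
        if now - self.created > INACTIVITY_LIMIT_S:  # bug: fixed reference point
            self.reasoning.clear()
        self.reasoning.append(record)

class FixedSession:
    """Hypothetical fixed version: last_active is refreshed each turn,
    so the clear runs at most once per genuine inactivity gap."""
    def __init__(self, t0: float):
        self.last_active = t0
        self.reasoning: list[str] = []

    def on_turn(self, now: float, record: str) -> None:
        if now - self.last_active > INACTIVITY_LIMIT_S:
            self.reasoning.clear()
        self.last_active = now  # the refresh the buggy version lacks
        self.reasoning.append(record)
```

In this model, three quick turns after an hour-long gap leave the buggy session holding only the latest record, while the fixed session clears once and then accumulates normally, matching the "progressive loss of context" symptom described above.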

The third change launched on April 16 alongside Opus 4.7. The team added instructions to the system prompt to curb redundant output. Several weeks of internal testing showed no regression, but after launch the new instructions interacted with other parts of the prompt and degraded coding quality. Extended evaluation revealed a 3% performance drop in both Opus 4.6 and 4.7, leading to a rollback on April 20.

These three changes affected different user groups at different times, and their combined effect produced a widespread but inconsistent quality decline that complicated diagnosis. Anthropic stated it will now require more internal employees to run the same public build as users, run the full model evaluation suite for every system prompt modification, and adopt staged rollouts. As compensation, Anthropic has reset usage quotas for all subscription users.

