Anthropic Admits Claude Model Degradation Was Caused by Recent Changes
VentureBeat
Anthropic has acknowledged that recent changes to its Claude AI model's operating instructions and inference harnesses inadvertently degraded performance. Users had reported reduced reasoning capability and increased hallucinations, complaints the company initially dismissed as unfounded claims of "nerfing." A technical post-mortem attributed the decline to a change in the default reasoning effort, a caching bug, and limits on system prompt verbosity. Anthropic has since reverted these changes and added safeguards, including expanded internal testing and tighter controls on prompt modifications, to prevent future regressions and restore user trust.
Tags
ai
product
Original Source
VentureBeat — venturebeat.com