Ask HN: Is Claude's performance getting worse?

5 points · by sahli · 2 days ago
It feels like most Claude Code users have already noticed a quality drop in the Claude models. As a Claude Pro subscriber (web version; I don't use Claude Code), I've seen a clear decline over the last couple of weeks. I can no longer complete tasks in a single turn. Claude often stops streaming because it hits some internal tool-call/turn limit, so I have to keep pressing "Continue." Each continuation has to re-feed context, which quickly burns through tokens and quota. The model also makes more mistakes and fails to fully complete tasks it used to handle reliably.

This is especially frustrating because Sonnet 4.6 was a real step up: it could produce long, correct code in one pass much more often. That seems basically gone now.

As a paying Pro user, I honestly find myself using free alternatives like DeepSeek and Z.ai (GLM) more than Claude lately. I've also stopped touching Opus entirely; it's so token-hungry that it drains my weekly quota too fast to be practical.

Is Anthropic trying to limit usage or drive people away?