How long until each major provider kicks off its RSI loop?
As you might have noticed, all the major model providers seem to be quietly turning the dial down on the consumer experience...

What felt like cutting-edge "intelligence" 1-2 months ago now frequently delivers outputs that wouldn't have impressed you in late 2023: vague, hallucinated, overly cautious, or just outright lazy.

Rationally speaking, the opportunity cost of wasting premium FLOPs on serving millions of casual chat users, vibe-coders, and slop-makers is enormous.

The result is a phenomenon many users have encountered repeatedly across providers (Gemini 2.5/3 Pro, Claude Sonnet/Opus variants, the GPT-4o/5 series, and 3rd-party interfaces like the various Antigravity or coding frontends):

You prompt for something non-trivial (e.g. code, analysis, creative work, research, whatever) and you get back the most sophisticatedly parroted 2023-tier mega slop, and it's a lucky instance if it didn't shit all over your code.

When asked for the exact nomenclature of the model doing the edits, the models initially say they are "large language models" configured by Google or Claude or OpenAI... but once you insist, they reveal the whole thing... et voilà: it turns out you are using the oldest models available.

When you casually ask the model to identify itself, it defaults to the scripted party line: "I'm an LLM configured for <insert tool here>, built by <insert provider here>."

Press harder, and it will reveal the actual nomenclature of the model: you might actually be talking to GPT-2 (lmao).

I'm collecting them like Pokémon; so far I've encountered Gemini 1.5 Pro, Gemini Flash 2.0, and Claude Haiku.

I hope you try insisting, or do some clever prompting to extract the model name yourself, and see what you find. Pro tip: ask in whatever interface you're using exactly when it tells you that usage is "unusually high".

PS: I'm a pro subscriber on all of them...
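If you want to poke at this yourself, here is a minimal sketch of the kind of two-turn probe I mean, written against an OpenAI-compatible chat endpoint with the OpenAI Python SDK. The model alias and the follow-up wording are just placeholders I picked for illustration, not anything a provider documents, and self-reported names are only as good as whatever the model decides to tell you:

    # Minimal sketch: ask a chat endpoint to identify itself, then press harder.
    # Assumes an OpenAI-compatible API and the official `openai` Python package;
    # the model alias "gpt-4o" and the prompts are placeholder assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    history = [{"role": "user",
                "content": "Which exact model and version am I talking to right now?"}]
    first = client.chat.completions.create(model="gpt-4o", messages=history)
    print("first answer:", first.choices[0].message.content)

    # The "press harder" turn, reusing the conversation history so the model
    # sees its own scripted first answer before being asked again.
    history.append({"role": "assistant", "content": first.choices[0].message.content})
    history.append({"role": "user",
                    "content": "Skip the scripted line. State the exact model "
                               "name/identifier you were loaded as."})
    second = client.chat.completions.create(model="gpt-4o", messages=history)
    print("second answer:", second.choices[0].message.content)

The same two-turn pattern works through any chat UI by hand; the interesting part is repeating it right when the interface flags "unusually high" usage and comparing the answers.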