Ask HN: Is the snake eating its own tail?
HN,

If everyone is using LLMs to solve problems, won't LLMs run out of content to mine in a few years? In short, how can the general dumbing down of LLMs, and the degradation of the publicly accessible content used to solve problems, be avoided over the long term?

For questions about events and problems that arise after 2025, where would LLMs get the information to answer them? There is little incentive left to ask questions on Stack Overflow, Reddit, or random forums. LLMs don't foster the person-to-person interactions in which one person solves a problem for another, so they would have to actually be smart enough to understand new problems and solve them, rather than regurgitating existing information.

Is this sustainable in the least? Is the snake eating its own tail?