Ask HN: Should the HN guidelines ban "I asked $AI, and it said" replies?

145 points | by embedding-shape | about 16 hours ago
As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different era, it seems like it might be time to discuss whether that sort of comment should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments, or something else entirely?