Ask HN: Do LLMs just tell us what we want to hear?
I keep seeing those tweets and posts where users ask ChatGPT or a similar LLM to describe them, and it always answers with positive, flattering stuff that reinforces what the user wants to hear.

If you ask it about a certain topic, or about yourself, it will almost always respond positively and agree with your opinion. I feel there is a lot of confirmation bias at play here.