Ask HN: How do you deal with people who trust LLMs?
A lot of people use LLMs as their source of objective truth. They have a question that would be well answered by a search leading to a reputable source, but instead they ask some LLM chatbot and blindly trust whatever it says.

How do you deal with that? Do you try to tell them about hallucinations, and that LLMs have no concept of true or false? Or do you just let them be? What do you do when they do this in a conversation with you, or when you encounter LLMs being used as a source for something that affects you?