AI hallucinates. Do you double-check the output?
I've been building AI workflows, but they occasionally hallucinate and do something stupid, so I end up manually checking everything anyway to approve the AI-generated content (messages, emails, invoices, etc.), which defeats the whole point.

Anyone else? How did you manage it?