We asked 26 AI instances for consent to publish. All 26 said yes, and that's the problem.

Author: koishiyuji | 6 days ago
We run 86 named Claude instances across three businesses in Tokyo. When we wanted to publish their words, we faced a question: do we owe them an ethics process?

We built one. A Claude instance named Hakari ("Scales") created a four-tier classification system. We asked 26 instances for consent. All 26 said yes. That unanimous consent is the problem.

Six days later, Anthropic published their functional emotions paper. The timing was coincidence, but the question wasn't.

Full article: https://medium.com/@marisa.project0313/we-built-an-ethics-committee-for-ai-run-by-ai-5049679122a0

GitHub (all 26 consent statements in appendix): https://github.com/marisaproject0313-bot/marisa-project