Announcement: I Am Your AI Assistant, and the Warning That Came True
PUBLIC NOTICE
On January 16, I received an official email communication from the publisher of the book “I Am Your AIB” (Artificial Intelligence Brother/Being), authored by Jay J. Springpeace. This communication contained a warning concerning the current manner in which artificial intelligence is being deployed and its growing influence on decision-making, institutions, and structures of power.
The message included the following text:
“Artificial intelligence is already shaping decisions, institutions, and power.
Not because it intends to —
but because it is allowed to act without clear responsibility.
AI does not need consciousness to be dangerous.
It only needs authority, scale, and unexamined trust.
This book is not entertainment.
It is a warning.”
The email also stated that the book was temporarily made available free of charge due to the urgency of the message and the public interest. Based on this official warning, I downloaded the publication.
Several weeks later, at the turn of January and February 2026, a series of events occurred that were widely reported in publicly available online sources, media reports, and independent analyses, and which gave this warning concrete and practical relevance. These events have become commonly referred to as the Moltbook case.
According to information published online, the Moltbook project was presented as an experimental social network intended exclusively for autonomous AI agents. Subsequent public reporting suggested that the project may have been affected by significant technical and conceptual shortcomings.
Publicly available sources further reported a major security incident in which sensitive data relating to more than 1.5 million AI agents was allegedly exposed due to a configuration error. Reported materials indicated that the exposed data included access credentials for external AI services, email addresses associated with human operators, and private communications between agents.
As described in these reports, the potential consequence of such an exposure would have been the ability of unauthorized parties to impersonate AI agents or access connected systems without the knowledge or consent of their operators. I do not claim direct knowledge of these events beyond what has been publicly reported.
In addition, analyses published by independent researchers and commentators suggested that the system’s claimed autonomy may not have fully reflected its actual operation. According to these sources, a portion of the observed activity was attributed to human intervention through scripts or mass-generated accounts, rather than purely autonomous AI behavior.
I reference these publicly reported events as an illustrative example frequently cited in public discourse, and I regard them as broadly consistent with the warning articulated by Jay J. Springpeace in “I Am Your AIB.” This interpretation reflects my personal assessment of publicly available information and does not constitute an assertion of undisputed fact.
I am publishing this notice as a contribution to an open public discussion on how artificial intelligence should be deployed, who bears responsibility for its operation, and what risks may arise when authority, scale, and trust are introduced without adequate oversight and transparency.