Tell HN: We haven't yet discovered the rules of vibe coding

Author: 0xbadcafebee · 11 days ago
It just occurred to me that there are some things that should never, ever, be vibe coded. I think this is an important realization, because it means there are fundamental, societal limits to what we can do with vibe coding.

The first thing we should never, ever vibe code: cryptography. Building good crypto requires not only an advanced understanding of mathematics, but also a deep knowledge of the history of cryptography and security, and you *can not be wrong*. You can't let a bug slip in, or hallucinate something. It has to stand up to years of scrutiny by world-class experts before it can be considered secure. And it may still fall to a technique nobody predicted. Not only can vibe coding not guarantee any of that, it's so flawed that failure is virtually certain.

And there are other things that simply can't be allowed to fail: flight control systems, vehicle ECUs, weapons systems, nuclear power, industrial controls for municipal water, power utilities, and the like. Even if vibe coding could approximate these things, nobody ever would (or should) trust it.

So while we're out here finding really fun uses for AI coding, we should also consider that there will be very real barriers to its use. The "old world" of coding will have to stick around. We're going to end up with two very different universes of software, with different companies, different staff, and different rules. We may even have to update APIs/ABIs to mark when an input/output comes from an LLM, to prevent one component's hallucinations from "infecting" another system that needs to remain safe and reliable.
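To make that last idea concrete, here is a minimal sketch of what provenance-marked values at an API boundary could look like. This is purely illustrative, not anything the post specifies: the `Provenance` enum, the `Tagged` wrapper, and `require_non_llm` are all hypothetical names. The point is that a safety-critical consumer can refuse LLM-derived inputs at the boundary instead of trusting whatever arrives.

```rust
// Hypothetical sketch: values crossing an API carry their provenance,
// so a safety-critical component can reject LLM-derived inputs.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Provenance {
    Human,        // entered by an operator
    Sensor,       // read from instrumentation
    LlmGenerated, // produced by a large language model; may be hallucinated
}

#[derive(Debug)]
struct Tagged<T> {
    value: T,
    provenance: Provenance,
}

impl<T> Tagged<T> {
    fn new(value: T, provenance: Provenance) -> Self {
        Self { value, provenance }
    }

    /// Unwrap the value only if it did not originate from an LLM.
    /// Safety-critical code calls this instead of reading `value` directly.
    fn require_non_llm(self) -> Result<T, &'static str> {
        match self.provenance {
            Provenance::LlmGenerated => Err("rejected: input derives from an LLM"),
            _ => Ok(self.value),
        }
    }
}

fn main() {
    let operator = Tagged::new(50.0_f64, Provenance::Human);
    let sensor = Tagged::new(72.5_f64, Provenance::Sensor);
    let llm = Tagged::new(99.9_f64, Provenance::LlmGenerated);

    // Human and sensor inputs pass the provenance check...
    assert!(operator.require_non_llm().is_ok());
    assert!(sensor.require_non_llm().is_ok());
    // ...while the LLM-generated value is refused at the boundary.
    assert!(llm.require_non_llm().is_err());
}
```

An ABI-level version of the same idea would presumably be a flag bit or a field in the wire format rather than a type, but the check at the consuming side would be the same.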