AI Wrote My Project; an Nginx Engineer Rebuilt the Architecture

Author: zhidao9 · about 1 month ago
I've been writing code for over a decade. I work on the nginx team. Recently I had AI (Claude Code + Opus 4.6) write a programmable HTTP benchmarking tool from scratch — C + QuickJS, ~2,000 lines, running in a day. Then I started refactoring its architecture, one commit at a time.

What I learned doesn't match either camp — not "AI will replace us all" nor "it's just hype."

*AI's most dangerous bugs are invisible.* jsbench lets users write JS scripts calling fetch() for load testing. AI wrote the feature and the tests. Report: 16,576 requests, 0 errors. PASS. But every single fetch had failed. The worker thread had no event loop — fetch() couldn't send anything. The code just unconditionally counted each call as success. AI-written code and AI-written tests shared the same blind spot. Not crashes — programs that run fine, pass all tests, and produce wrong results.

*With the right direction, AI is your entire team.* fetch() didn't support concurrency — Promise.all with three requests took 900ms instead of 300ms. AI had implemented "fake async": Promise signature, synchronous blocking inside. I knew the fix: register with a global event loop, return a pending Promise, let the loop drive I/O. I gave AI the problem, the architecture, the existing code, and the constraints. It got it right in one shot — 9 files, 905ms → 302ms. If I'd just said "fetch has a bug," it would've patched around the broken architecture. But a clear direction got a correct structural change.

*Judgment is the real multiplier.* I combined epoll and timers into one "engine" object — one per thread. Simple idea, but 6 files, 20+ call sites. AI changed every one without missing any. If the judgment had been wrong, AI would've applied the mistake just as thoroughly.
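The per-thread "engine" idea can be sketched in a few lines: one object owns the epoll instance and the timers, and every call site goes through it instead of touching epoll directly. All names below are illustrative guesses, not jsbench's actual API.

```c
#include <stdint.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Hypothetical per-thread engine: epoll plus timers behind one interface. */
typedef struct engine {
    int epfd;   /* epoll instance owned by this worker thread */
    /* timer storage (e.g. a deadline-ordered min-heap) omitted */
} engine_t;

static engine_t *engine_create(void)
{
    engine_t *e = calloc(1, sizeof(*e));
    if (e == NULL)
        return NULL;

    e->epfd = epoll_create1(0);
    if (e->epfd < 0) {
        free(e);
        return NULL;
    }
    return e;
}

static void engine_destroy(engine_t *e)
{
    close(e->epfd);
    free(e);
}

/* Call sites that used to call epoll_ctl() directly now go through
   the engine, which is what made the refactor touch 20+ places. */
static int engine_watch(engine_t *e, int fd, uint32_t events)
{
    struct epoll_event ev = { .events = events, .data.fd = fd };
    return epoll_ctl(e->epfd, EPOLL_CTL_ADD, fd, &ev);
}
```

The point of the wrapper is not the code itself but that the change is mechanical across many files, which is exactly the kind of edit AI applies thoroughly, for better or worse.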
One architectural call, applied across dozens of files — enormous leverage either way.

*What becomes more valuable:* Architectural judgment — AI implements any direction but won't choose one. Code review — AI produces bugs as fast as code; spotting logic/architecture problems is now a defensive necessity. Domain depth — I knew fetch() needed an event loop because I've written event-driven systems for a decade, not because of a good prompt. AI amplifies abilities you already have; it doesn't create them.

*One line:* In the AI era, technical knowledge isn't for writing code — it's for seeing what's wrong with AI's code. See it, and you have leverage. Miss it, and you're trusting a tool that will confidently tell you everything is fine.

Full series (ongoing): https://github.com/hongzhidao/jsbench/tree/main/docs
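To close with the invisible-bug lesson in code: the whole failure mode reduces to one unconditional increment. This is a hypothetical sketch of the pattern, not jsbench's actual reporting code.

```c
#include <stdbool.h>

typedef struct {
    long requests;
    long errors;
} stats_t;

/* The pattern behind "16,576 requests, 0 errors": the outcome of the
   send path is never consulted, so every call counts as a success. */
static void record_buggy(stats_t *s, bool sent_ok)
{
    (void)sent_ok;   /* result ignored: the invisible bug */
    s->requests++;
}

/* The minimal honest version: record an error whenever the request
   never actually went out. */
static void record_fixed(stats_t *s, bool sent_ok)
{
    s->requests++;
    if (!sent_ok)
        s->errors++;
}
```

Nothing here crashes or fails a test; the buggy version only looks wrong if you already know the worker had no event loop and therefore nothing could have been sent.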