Code review your plans, not just your implementations.
It's 2026 and human language now more or less compiles. We've slowly moved away from writing code and towards writing detailed plans. Plans have gotten to the point where they're built into our tools (Cursor's Plan mode; Claude Code has one too). Why shouldn't we review these plans like it's a code review?

Eventually we won't be looking at Python, the same way we don't look at Assembly today. I never check the binary output of the GCC compiler because I trust it. The workflow I'm seeing and using is completely different: I want to see teams code reviewing the plan, not just the implementation.

AI is not deterministic yet, so we're not quite at the GCC-compiler level. However, a good plan review is worth 10x an implementation review. Code is a commodity; the plan is the unsolved part. You can spend hours letting your agent implement and then throw it all away, or get buy-in from your team and (almost) one-shot most tasks. Of course, this was true even before AI: aligning on what to build always mattered more than how to build it. But tools like Claude Code and Cursor make the plan the only part that really matters.

The team should align on a structured text file. Call it plan.md, or whatever fits the tool you're implementing with. It describes the feature, the logic, and, most importantly, the measurement of success.

Here's the actual workflow:
1. Pick up a task and create a plan.md file using Claude Code / Cursor. Iterate on this for as long as you need to. Make sure you have good success criteria the agent can build towards.
2. Open a draft PR with that text file. Drop it in Slack. The team aligns on the approach in Slack or GitHub comments. I usually prefer Slack for iterating on the plan and GitHub comments for comments on the code.
3. Once the team thumbs-ups the plan, point the agent at it. Since the success criteria are written out, the agent can self-verify.
4. Once you're happy with the implementation, update the PR with the generated code and have your teammates review it as they would any other code review, except now they have much more context, since they've already reviewed your plan.
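As a sketch, a plan.md for the workflow above might look like the following. The section names, the feature, and every number in it are invented for illustration; nothing here is a format required by Cursor or Claude Code. The point is that the success criteria are concrete enough for the agent to check its own work against.

```markdown
# Plan: add rate limiting to the public API

## Feature
Reject requests over 100 req/min per API key with HTTP 429,
so a single client can't degrade service for everyone else.

## Logic
- Token-bucket counter per API key, stored in Redis
- Middleware checks the bucket before the request reaches handlers
- 429 responses include a Retry-After header

## Success criteria
- [ ] Requests under the limit succeed unchanged (existing tests pass)
- [ ] The 101st request within a minute returns 429 with Retry-After
- [ ] p95 latency overhead of the rate-limit check is under 5 ms
```

The checklist form matters: each criterion is something the agent (or a reviewer) can verify mechanically, rather than a vague goal like "the API should be protected."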