Launch HN: mrge.io (YC X25) – Cursor for code review
Hey HN, we're building mrge (<a href="https://www.mrge.io/home">https://www.mrge.io/home</a>), an AI code review platform that helps teams merge code faster with fewer bugs. Our early users include Better Auth, Cal.com, and n8n: teams that handle a lot of PRs every day.
<p>Here's a demo video: <a href="https://www.youtube.com/watch?v=pglEoiv0BgY" rel="nofollow">https://www.youtube.com/watch?v=pglEoiv0BgY</a></p>
We (Allis and Paul) are engineers who ran into this problem at our last startup. Code review quickly became our biggest bottleneck, especially once we started using AI to write more of our code. We had more PRs to review, subtle AI-written bugs slipped through unnoticed, and we (the humans) increasingly found ourselves rubber-stamping PRs without deeply understanding the changes.
<p>We're building mrge to help solve that. Here's how it works:</p>
1. Connect your GitHub repo via our GitHub app in two clicks (and optionally download our desktop app). GitLab support is on the roadmap!
2. AI review: when you open a PR, our AI reviews your changes directly in an ephemeral, secure container. It has context on not just that PR but your whole codebase, so it can pick up on patterns and leave comments directly on the changed lines. Once the review is done, the sandbox is torn down and your code is deleted; we don't store it, for obvious reasons.
3. Human-friendly review workflow: jump into our web app (it's like Linear, but for PRs). Changes are grouped logically (not alphabetically), and important diffs are highlighted, visualized, and ready for faster human review.
The AI reviewer works a bit like Cursor in that it navigates your codebase with the same tools a developer would use, such as jumping to definitions or grepping through code.
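To make the idea concrete, here is a minimal sketch of the kind of grep tool such a reviewer could call. This is purely illustrative: the function name and its interface are our assumptions, not a description of mrge's actual tool set.

```python
import re
from pathlib import Path


def grep_tool(root: str, pattern: str, glob: str = "**/*.py") -> list[str]:
    """A minimal 'grep' tool an AI reviewer might be given: return
    'path:lineno:line' matches for a regex across the checked-out codebase.
    (Hypothetical sketch; real tooling would handle binaries, encodings, etc.)"""
    rx = re.compile(pattern)
    hits = []
    for path in sorted(Path(root).glob(glob)):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if rx.search(line):
                hits.append(f"{path.relative_to(root)}:{lineno}:{line.strip()}")
    return hits
```

The model would call tools like this in a loop, much as a developer alternates between searching and reading code.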
But one big challenge is that, unlike Cursor, mrge doesn't run in your local IDE or editor. We had to recreate something similar entirely in the cloud.
Whenever you open a PR, mrge clones your repository and checks out your branch in a secure, isolated temporary sandbox. We provision the sandbox with shell access and a Language Server Protocol (LSP) server. The AI reviewer then reviews your code, navigating the codebase just as a human reviewer would, using shell commands and common editor features like "go to definition" and "find references". When the review finishes, we immediately tear down the sandbox and delete the code; we don't want to store it permanently, for obvious reasons.
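In outline, that clone-review-teardown lifecycle looks something like the following Python sketch. This is a hedged illustration under our own assumptions: `review_fn` stands in for the AI review step, and the real system runs in isolated containers rather than a local temp directory.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path


def ephemeral_review(repo_url: str, branch: str, review_fn) -> list[str]:
    """Clone a repo into a throwaway sandbox, run a review, then delete everything.

    `review_fn` is a placeholder for the AI review step: it receives the
    checkout path and returns a list of review comments.
    """
    sandbox = Path(tempfile.mkdtemp(prefix="review-sandbox-"))
    try:
        subprocess.run(
            ["git", "clone", "--branch", branch, repo_url, str(sandbox / "repo")],
            check=True,
            capture_output=True,
        )
        return review_fn(sandbox / "repo")
    finally:
        # Tear down the sandbox so no code is retained after the review.
        shutil.rmtree(sandbox, ignore_errors=True)
```

The key property is the `finally` block: whether the review succeeds or fails, the checkout is deleted.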
We know cloud-based review isn't for everyone, especially when security or compliance requires on-prem deployments. But the cloud approach lets us run state-of-the-art AI models without local GPU setups and gives an entire team a consistent, single AI review per PR.
The platform itself focuses entirely on making <i>human</i> code review easier. A big inspiration came from productivity-focused apps like Linear and Superhuman, products that show just how much thoughtful design can impact everyday workflows. We wanted to bring that same feeling to code review.
That's one reason we built a desktop app: it lets us deliver a more polished experience, complete with keyboard shortcuts and a snappy interface.
Beyond performance, the main thing we care about is making code easier for humans to read and understand. For example, traditional review tools sort changed files alphabetically, which forces reviewers to figure out the order in which they should review the changes. In mrge, files are automatically grouped and ordered by their logical connections, so reviewers can jump right in.
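One way to sketch that kind of grouping is to treat changed files as a graph and emit its connected components. Note the linking signal here (an `edges` list, e.g. from imports or co-change history) is our assumption for illustration, not a description of mrge's actual heuristics.

```python
from collections import defaultdict


def group_changed_files(changed: list[str], edges: list[tuple[str, str]]) -> list[list[str]]:
    """Group changed files into logically connected components via union-find.

    `edges` is a hypothetical stand-in for whatever links two files
    (imports, shared modules, co-change history). Larger groups come first.
    """
    parent = {f: f for f in changed}

    def find(x: str) -> str:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        if a in parent and b in parent:
            parent[find(a)] = find(b)  # merge the two components

    groups = defaultdict(list)
    for f in changed:
        groups[find(f)].append(f)
    return sorted((sorted(g) for g in groups.values()), key=len, reverse=True)
```

With a single edge linking two API files, the API pair would surface as one group ahead of unrelated one-file groups, instead of being interleaved alphabetically.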
We think the future of coding isn't AI replacing humans; it's giving us better tools to quickly understand changes at a high level, abstracting away more and more of the code itself. As code volume keeps growing, this shift will only become more important.
You can sign up now (<a href="https://www.mrge.io/home">https://www.mrge.io/home</a>). mrge is currently free while we're still early. Our plan is to later charge closed-source projects per seat and to keep mrge free for open-source projects.
We're very actively building and would love your honest feedback!