Ask HN: How do you give AI agents access without over-granting permissions?
To make AI agents more efficient, we need to build feedback loops with real systems: deployments, logs, configs, environments, dashboards.

But this is where things break down.

Most modern apps don’t have fine-grained permissions.

Concrete example: Vercel. If I want an agent to read logs or inspect env vars, I have to give it a token that also allows it to modify or delete things. There’s no clean read-only or capability-scoped access.

And this isn’t just Vercel. I see the same pattern across cloud dashboards, CI/CD systems, and SaaS APIs that were designed around trusted humans, not autonomous agents.

So the real question:

How are people actually restricting AI agents in production today?

Are you building proxy layers that enforce policy? Wrapping APIs with allowlists? Or just accepting the risk?

It feels like we’re trying to connect autonomous systems to infrastructure that was never designed for them.

Curious how others are handling this in real setups, not theory.
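For concreteness, this is the kind of policy-enforcing proxy I have in mind, as a minimal sketch: the agent only gets the proxy's URL, the real token stays server-side, and only GET requests to an allowlisted set of paths are forwarded. The upstream URL, allowed paths, and env var name below are placeholders, not any specific vendor's API:

```python
# Minimal sketch of an allowlist proxy for an agent.
# The agent never sees the real token; it only talks to this proxy,
# which forwards read-only requests to an allowlisted set of paths.
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.example-host.com"          # placeholder upstream API
ALLOWED_PREFIXES = ("/v1/logs", "/v1/env-names")   # read-only capabilities we grant
TOKEN = os.environ["UPSTREAM_TOKEN"]               # lives on the proxy, not with the agent

class ReadOnlyProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Enforce the allowlist before anything reaches the upstream API.
        if not self.path.startswith(ALLOWED_PREFIXES):
            self.send_error(403, "path not in allowlist")
            return
        req = urllib.request.Request(
            UPSTREAM + self.path,
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header(
                "Content-Type", resp.headers.get("Content-Type", "application/json")
            )
            self.end_headers()
            self.wfile.write(body)

    # Mutating verbs are refused outright, so the agent cannot modify or delete.
    def do_POST(self):   self.send_error(405, "read-only proxy")
    def do_PUT(self):    self.send_error(405, "read-only proxy")
    def do_PATCH(self):  self.send_error(405, "read-only proxy")
    def do_DELETE(self): self.send_error(405, "read-only proxy")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ReadOnlyProxy).serve_forever()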
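```

This only narrows the blast radius, though; it doesn't fix the underlying problem that vendors hand out one broad token instead of read-only or capability-scoped ones.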