Show HN: TurnZero – Persistent experts for LLMs
To reduce cold starts in AI sessions, I've built a tool that runs as an MCP server and loads context before Turn 0.
Two things happen:
1. **Personal Priors** - your workflows and standards load once per session and persist across every supported AI client.
2. **Expert Priors** - when a prompt is stack-specific, relevant priors are injected based on semantic similarity, reducing errors and unwanted behaviour from the AI (see the sketch after this list).
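For intuition, here is a minimal sketch of what similarity-based prior selection could look like. This is an illustration, not TurnZero's actual code: the embedding model, the sample priors, and the `select_priors` helper are all assumptions.

```python
# Hypothetical sketch of expert-prior selection by semantic similarity.
# Not TurnZero's implementation; model choice and threshold are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs locally

EXPERT_PRIORS = [
    "Django: prefer select_related/prefetch_related to avoid N+1 queries.",
    "Rust: avoid unwrap() outside tests; propagate errors with ?.",
    "React: derive UI state from props instead of duplicating server data.",
]
prior_vecs = model.encode(EXPERT_PRIORS, normalize_embeddings=True)

def select_priors(prompt: str, top_k: int = 2, threshold: float = 0.35) -> list[str]:
    """Return the stored priors most semantically similar to the prompt."""
    q = model.encode([prompt], normalize_embeddings=True)[0]
    sims = prior_vecs @ q                 # cosine similarity (unit vectors)
    ranked = np.argsort(-sims)[:top_k]    # best matches first
    return [EXPERT_PRIORS[i] for i in ranked if sims[i] >= threshold]

# A stack-specific prompt pulls in the matching prior; a generic one pulls none.
print(select_priors("Why is my Django queryset firing hundreds of SQL queries?"))
```

Because the embedding and matching run locally, a step like this never has to send the raw prompt anywhere, which is consistent with the client-side injection claim below.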
**Privacy guarantee**: local-first by design. Raw prompts are never stored, and injection always happens client-side.
```bash
pipx install turnzero
turnzero setup   # registers the MCP server with Claude Code, Cursor, Claude Desktop, and Gemini CLI
turnzero verify  # confirms everything is wired correctly
```
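For context on what `setup` does: MCP clients such as Claude Desktop discover servers through a JSON config file (`claude_desktop_config.json`), so the registration step presumably writes an entry along these lines. The entry name and the `serve` argument are assumptions, not documented TurnZero config:

```json
{
  "mcpServers": {
    "turnzero": {
      "command": "turnzero",
      "args": ["serve"]
    }
  }
}
```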
Demo: [https://asciinema.org/a/8IV2yoLNTloSlZo0](https://asciinema.org/a/8IV2yoLNTloSlZo0)
Repo: [https://github.com/turnzero-ai/turnzero](https://github.com/turnzero-ai/turnzero)