Show HN: CacheZero – Karpathy's LLM wiki idea, as an npm install
Karpathy's recent tweet about LLM knowledge bases went viral (17M views). He described a system where you dump raw content into a folder, an LLM compiles it into an interconnected wiki, and you search/query it through Obsidian.

I built the whole pipeline as a single CLI tool:

- Chrome extension to bookmark tweets, articles, YouTube videos, anything
- Hono server + LanceDB for vector search
- Claude Code compiles your bookmarks into wiki pages with wikilinks and citations
- Browse in Obsidian with graph view
- Publish as a static site with Quartz

npm i -g cachezero

https://github.com/swarajbachu/cachezero

Would love feedback on the compile step specifically — the wiki quality depends heavily on the SCHEMA.md prompt and I'm still iterating on it.