Show HN: Bring local LLMs to life, build anything (free, with demo video)
I was frustrated. I had multiple local LLMs sitting idle: why bother with them when cloud models have more parameters and perform better?
On top of that, every tool out there that lets you build with LLMs is absurdly overpriced.
So I built what I needed: a completely free, local-first AI platform that actually unlocks the power of local models.
This isn't just another agent. It has a key innovation that many smart people in AI have overlooked:
Most AI agents just use tools to chase human-defined goals.
This one creates its own goals, subgoals, and tools on the fly; the sketch below shows the idea.
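To make that concrete, here is a minimal sketch of such a self-directed loop. It is not whale-rider's actual code; the Ollama client, the model name, and the prompts are assumptions purely for illustration.

```python
# Minimal sketch (not whale-rider's implementation) of a self-directed agent:
# the model proposes its own goal, decomposes it into subgoals, and writes a
# small Python tool for each subgoal before executing it.
import json
import ollama  # pip install ollama; any local inference client would work

MODEL = "qwen2.5:32b"  # hypothetical model name; use whatever you have pulled locally


def call_local_llm(prompt: str) -> str:
    """Send one prompt to the local model and return the raw text reply."""
    resp = ollama.chat(model=MODEL, messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]


def run_self_directed_agent(context: str) -> None:
    # 1. The model invents its own goal from the context, not a human-defined one.
    goal = call_local_llm(
        f"Given this context, state one concrete goal to pursue:\n{context}"
    )

    # 2. The model decomposes the goal into subgoals (naively parsed as JSON here).
    subgoals = json.loads(call_local_llm(
        "Decompose this goal into 3-5 subgoals. "
        f"Reply with a JSON array of strings only.\nGoal: {goal}"
    ))

    # 3. For each subgoal, the model writes its own tool (a Python function) on the fly.
    for sub in subgoals:
        tool_src = call_local_llm(
            "Write a single Python function named tool() that works toward this "
            f"subgoal and returns a short status string. Reply with code only.\nSubgoal: {sub}"
        )
        namespace: dict = {}
        exec(tool_src, namespace)  # illustration only; sandbox generated code in practice
        print(sub, "->", namespace["tool"]())


if __name__ == "__main__":
    run_self_directed_agent("You have a folder of unsorted CSV exports from a web shop.")
```

The point of the loop is that goal, subgoals, and tools are all produced by the model at run time rather than fixed by the developer up front.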
The tool moves to another level when you use an LLM with more parameters: a 32B model performs far better than a 16B one, and you'll see at least a 100x performance gain.
And yes, it runs entirely locally. Completely free.
Project:
[https://github.com/ARAldhafeeri/whale-rider](https://github.com/ARAldhafeeri/whale-rider)
Demo video:
[https://youtu.be/ciFfjajS_xA?si=NQt5DStF1BS3m4Fj](https://youtu.be/ciFfjajS_xA?si=NQt5DStF1BS3m4Fj)