A small tool I made for local LLMs: LLM-neofetch-plus

Author: HFerrahoglu · 2 months ago
Hey everyone!

I've had this in mind for a while, and I finally did it: it shows system information like regular NeoFetch, but I've added extra features for people running local LLMs (Ollama, llama.cpp, etc.).

For example:

- How much VRAM does your GPU have, and what kind of GPU is it (NVIDIA, AMD, Intel, Apple M series)?
- How many billion parameters can your machine comfortably run (is 70B or 13B more sensible)? A rough sketch of the sizing math follows this list.
- Which GGUF quantization does what (Q4_K_M vs. Q8_0, etc.)?
- A comparison of Ollama / llama.cpp / vLLM / LM Studio
- A disk speed test + JSON/Markdown export (see the second sketch below for the disk-test idea)
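For the curious, the sizing suggestion boils down to arithmetic like this. This is a simplified sketch: the bits-per-weight values are common approximations for these GGUF quants, and the 80% usable-VRAM margin is just an illustrative assumption, not the exact logic in the tool.

    # Simplified sketch of the sizing arithmetic. The bits-per-weight values
    # are common approximations for these GGUF quants; the 80% usable-VRAM
    # margin (KV cache, activations, fragmentation) is an illustrative guess.
    GGUF_BITS_PER_WEIGHT = {
        "Q4_K_M": 4.85,  # the usual "good default" quant
        "Q8_0": 8.5,     # near-lossless: 8-bit weights plus per-block scales
        "F16": 16.0,     # unquantized half precision
    }

    def max_params_billions(vram_gb: float, quant: str, usable: float = 0.8) -> float:
        """Largest parameter count (in billions) whose weights fit at `quant`."""
        bytes_per_weight = GGUF_BITS_PER_WEIGHT[quant] / 8
        return vram_gb * 1e9 * usable / bytes_per_weight / 1e9

    for q in GGUF_BITS_PER_WEIGHT:
        print(f"24 GB VRAM @ {q:6s} -> ~{max_params_billions(24, q):.0f}B params")

On a 24 GB card this gives roughly 32B-parameter weights at Q4_K_M but only about 18B at Q8_0, which is why the quant choice matters as much as the raw VRAM number.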
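The disk test is there because loading a multi-gigabyte GGUF is largely bounded by sequential read speed. Here is a minimal write-then-read sketch of the general idea, not the exact implementation:

    # Minimal sequential disk-speed sketch (write, then read back, a temp file).
    # Simplified: the read pass may be served from the OS page cache right
    # after the write, so treat the read figure as an upper bound.
    import os, tempfile, time

    def disk_speed_mb_s(size_mb: int = 256) -> tuple[float, float]:
        chunk = os.urandom(1024 * 1024)  # 1 MB of incompressible data
        with tempfile.NamedTemporaryFile(delete=False) as f:
            path = f.name
            t0 = time.perf_counter()
            for _ in range(size_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # make sure the data actually hits the disk
            write_s = time.perf_counter() - t0
        try:
            t0 = time.perf_counter()
            with open(path, "rb") as f:
                while f.read(1024 * 1024):
                    pass
            read_s = time.perf_counter() - t0
        finally:
            os.unlink(path)
        return size_mb / write_s, size_mb / read_s

    w, r = disk_speed_mb_s()
    print(f"write ~{w:.0f} MB/s, read ~{r:.0f} MB/s")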
Simple installation:

pip install llm-neofetch-plus

llm-neofetch -d 3 ← this is the detailed version, which shows the suggestions and so on.

GitHub: https://github.com/HFerrahoglu/llm-neofetch-plus

If anyone tries it, could you tell me whether you liked it or not, and what we should change? Thanks!