Show HN: Polymcp and Ollama for simple local and cloud LLM execution

1 point | by justvugg | about 19 hours ago
We've added first-class Ollama support to Polymcp to make running large language models easy, whether you're working locally or deploying in the cloud.

By using Ollama as a backend provider, Polymcp can coordinate MCP servers and models with minimal configuration. This lets you focus on building agents instead of wiring infrastructure.

Code example:

```python
from polymcp.polyagent import PolyAgent, OllamaProvider

agent = PolyAgent(
    llm_provider=OllamaProvider(model="gpt-oss:120b"),
    mcp_servers=["http://localhost:8000/mcp"],
)

result = agent.run("What is the capital of France?")
print(result)
```

What this enables:

• Clean orchestration: Polymcp manages MCP servers while Ollama handles model execution.
• Same workflow, everywhere: Run the same setup on your laptop or in the cloud (see the sketch below).
• Flexible model choice: Works with models like gpt-oss:120b, Kimi K2, Nemotron, and others supported by Ollama.

The goal is to provide a straightforward way to experiment with and deploy LLM-powered agents without extra glue code.

Would love feedback or ideas on how you'd use this.
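As a rough illustration of the "same workflow, everywhere" point, here is a minimal sketch of pointing the same agent at a remote Ollama instance instead of a local one. It assumes the provider resolves the Ollama endpoint from the standard OLLAMA_HOST environment variable; the hostname below is hypothetical, and the actual configuration option in Polymcp may differ, so check the repo.

```python
import os

from polymcp.polyagent import PolyAgent, OllamaProvider

# Assumption: the Ollama endpoint is picked up from OLLAMA_HOST (the standard
# Ollama convention); Polymcp may expose a different option for this.
# The host below is a placeholder for a cloud-hosted Ollama instance.
os.environ["OLLAMA_HOST"] = "http://my-ollama.example.com:11434"

agent = PolyAgent(
    llm_provider=OllamaProvider(model="gpt-oss:120b"),
    mcp_servers=["http://localhost:8000/mcp"],
)

print(agent.run("What is the capital of France?"))
```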
Repo: https://github.com/poly-mcp/Polymcp