Ask HN: What's the point of MCP?

Author: tomelliot, 16 days ago
I've been messing around with MCP servers to get a feel for the ecosystem. I'm seeing a real issue, and I'm not sure what solutions are possible with this architecture - am I missing something?

The LLM always sits in the middle of any pipeline. That means you'll always have potentially messy and lossy translation in between every tool call (not to mention it's incredibly slow/wasteful compared to piping data directly between processes).

The example I was using: I wanted Claude to orchestrate some analysis on Stripe data for me. I asked it to get all transactions from last month and write them to disk (as step one, before actually doing anything). Because the data coming out of Stripe goes back through the LLM before going to disk, it completely borked the output and wrote only a small fraction of the data.

I'm trying to piece together the puzzle that lets a chatbot do useful things for me in my life. Is there a future state where this issue isn't an inherent problem? Some workarounds I've thought of:

- Have a Python interpreter and have the LLM write code. But then what's the point of an MCP server when you'd just use the Stripe Python library or APIs directly?
- Have some kind of inter-MCP-server communication protocol. At this point we're writing an OS for the LLM to live inside.
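One pattern that sidesteps the issue described above is to keep the bulk data out of the model's context entirely: the tool itself streams records to disk and hands the LLM only a small summary. Below is a minimal sketch of that idea; the function name and shape are hypothetical, not part of MCP or the Stripe SDK (a real tool would wrap something like Stripe's paginated transaction listing and feed it in as the record iterator).

```python
import json

def export_transactions(records, path):
    """Stream records straight to disk; return only a summary.

    The raw rows never pass through the LLM -- a tool built this way
    avoids the lossy retyping the post describes, because the model
    only sees the returned count and path, not the data itself.
    (Hypothetical sketch: in practice `records` might come from the
    Stripe API's paginated listing of last month's transactions.)
    """
    count = 0
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")  # one JSON object per line
            count += 1
    return {"written": count, "path": path}
```

The design choice is the same one Unix pipes make: the heavy data moves between processes, and only control-plane metadata goes back to the orchestrator (here, the LLM).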