Show HN: PicoFlow – A minimal Python workflow library for LLM agents

Author: shijizhi_1919 · 6 days ago
Hi HN,

I've been experimenting with LLM agents for a while and often felt that for simple workflows (chat, tool calls, small loops), existing frameworks add a lot of abstraction and boilerplate.

So I built a small Python library called PicoFlow. The goal is simple: express agent workflows using normal async Python, not framework-specific graphs or chains.

**Minimal chat agent**

Each step is just an async function, and workflows are composed with `>>`:

```python
from picoflow import flow, llm, create_agent

LLM_URL = "llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY"

@flow
async def input_step(ctx):
    return ctx.with_input(input("You: "))

agent = create_agent(
    input_step >> llm("Answer the user: {input}", llm_adapter=LLM_URL)
)

agent.run()
```

No chains, no graphs, no separate prompt/template objects. You can debug by putting breakpoints directly in the async steps.

**Control flow is just Python**

Loops and branching are written with normal Python logic, not DSL nodes:

```python
from picoflow import Flow  # assuming Flow is exported at the top level, like flow/llm/create_agent

def repeat(step):
    async def run(ctx):
        # Re-run the wrapped step until the context is marked done.
        while not ctx.done:
            ctx = await step.acall(ctx)
        return ctx
    return Flow(run)
```

The framework only schedules steps; it doesn't try to own your control flow.
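Branching works the same way. As an illustration, here is a sketch of a two-way branch in the same style; the `branch` helper and its arguments are my own example, not part of PicoFlow's API, and it assumes the same `Flow`/`acall` interface used by `repeat` above:

```python
from picoflow import Flow  # same assumed top-level export as above

def branch(predicate, if_true, if_false):
    # Plain Python routing: run one of two sub-flows depending on the context.
    async def run(ctx):
        step = if_true if predicate(ctx) else if_false
        return await step.acall(ctx)
    return Flow(run)
```

Such a branch composes like any other step, e.g. `repeat(branch(needs_tool, call_tool, answer))`, with a hypothetical predicate `needs_tool` and hypothetical steps `call_tool` and `answer`.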
**Switching model providers = changing the URL**

Another design choice: model backends are configured via a single LLM URL.

OpenAI:

```python
LLM_URL = "llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY"
```

Switching to another OpenAI-compatible provider (for example SiliconFlow or a local gateway):

```python
LLM_URL = "llm+openai://api.siliconflow.cn/v1/chat/completions?model=Qwen/Qwen2.5-7B-Instruct&api_key_env=SILICONFLOW_API_KEY"
```

The workflow code doesn't change at all; only runtime configuration does. This makes A/B testing models and switching providers much cheaper in practice.
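Because the backend is a plain string, provider choice can live entirely in deployment config. A minimal sketch of one way to wire that up (the `PICOFLOW_LLM_URL` variable name and the fallback are my own convention, not something the library defines):

```python
import os

# Hypothetical convention: read the backend URL from the environment,
# falling back to OpenAI. The workflow definition never changes.
DEFAULT_LLM_URL = "llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY"
LLM_URL = os.environ.get("PICOFLOW_LLM_URL", DEFAULT_LLM_URL)
```

An A/B test is then just two processes launched with different `PICOFLOW_LLM_URL` values.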
**When this is useful (and when it's not)**

PicoFlow is probably useful if you:

- want to prototype agents quickly
- prefer explicit control flow
- don't want to learn a large framework abstraction

It's probably not ideal if you:

- rely heavily on prebuilt components and integrations
- want a batteries-included orchestration platform

**Repo:** [https://github.com/the-picoflow/picoflow](https://github.com/the-picoflow/picoflow)

This is still early and opinionated. I'd really appreciate feedback on whether this style of "workflow as Python" is useful to others, or whether people are already solving this in better ways.

Thanks!