Show HN: Worqlo – A conversational layer for enterprise workflows
Most enterprise work isn't slow because of bad data. It's slow because the interface to that data is scattered.

A single question like "Which deals are stalled?" touches dashboards, spreadsheets, a CRM, BI tools, internal scripts, and a few Slack threads. Acting on the answer requires switching between systems again. The friction is in the middle.

Worqlo is an experiment in removing that friction by using conversation as the interface layer and deterministic workflows as the execution layer.

The idea is simple: natural language in → validated workflow out.

The LLM handles intent. A structured workflow engine handles execution: CRM queries, field updates, notifications, permissions, and audit logging. The model never executes actions directly.

Below is how it works.

Why Conversation?

People think in questions. Systems think in schemas. Dashboards sit between them.

Interfaces multiply because every system exposes its own UI. Engineers end up building internal tools, filters, queries, analytics pages, and one-off automations. That's the UI tax.

Conversation removes the surface area. Workflows add safety and determinism.

Architecture (simplified)
User → LLM (intent) → Router → Workflow Engine → Connectors → Systems

LLM
Extracts intent and parameters. No execution privileges.

Intent Router
Maps intent to a known workflow template.

Workflow Engine
Executes steps in order:
- schema validation
- permission checks
- CRM queries
- API updates
- notifications
- audit logs

Connectors
Strict adapters for CRMs, ERPs, internal APIs, and messaging systems.

The workflow engine will refuse to run if:
- fields don't exist
- data types mismatch
- permissions fail
- workflow template doesn't match user intent

This prevents the usual LLM failure cases: hallucinated fields, incorrect API calls, unsafe actions, etc.
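As a rough sketch of what that gate could look like, assuming a Python engine (WorkflowTemplate, route, and the pipeline_query template are illustrative names, not Worqlo's actual API):

  from dataclasses import dataclass

  @dataclass
  class WorkflowTemplate:
      name: str
      required_params: dict   # parameter name -> expected Python type
      allowed_roles: set      # roles allowed to trigger this workflow (RBAC)

  TEMPLATES = {
      "pipeline_query": WorkflowTemplate(
          name="pipeline_query",
          required_params={"region": str, "period": str},
          allowed_roles={"sales", "sales_ops"},
      ),
  }

  def route(intent: dict, user_role: str) -> WorkflowTemplate:
      # Map a parsed intent onto a known template; refuse anything that doesn't validate.
      template = TEMPLATES.get(intent.get("name"))
      if template is None:
          raise LookupError("no workflow template matches this intent")
      for field, expected_type in template.required_params.items():
          if field not in intent["params"]:
              raise KeyError(f"missing field: {field}")
          if not isinstance(intent["params"][field], expected_type):
              raise TypeError(f"wrong type for field: {field}")
      if user_role not in template.allowed_roles:
          raise PermissionError(f"{user_role} is not allowed to run {template.name}")
      return template

  # A pipeline question might parse into something like this and pass the gate:
  route({"name": "pipeline_query", "params": {"region": "DACH", "period": "this_week"}},
        user_role="sales")

Anything the LLM invents that isn't declared in a template fails here, before any connector is touched.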
Example Query

User:
"Show me this week's pipeline for DACH"

Internal flow:

  intent = llm.parse("pipeline query")
  validate(intent)
  fetch(data)
  aggregate(stats)
  return(summary)

Follow-up:
"Reassign the Lufthansa deal to Julia and remind Alex to follow up"

Workflow:
- find deal by name
- validate ownership change
- write CRM update
- send Slack notification
- write audit log

Everything runs through deterministic steps.
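To make "deterministic steps" concrete, here is a hedged sketch of how that follow-up could run as an ordered step list; the connector calls are stand-in stubs for illustration, not Worqlo's real adapters:

  def find_deal(ctx):
      # CRM connector lookup; stubbed with a fixed record here.
      ctx["deal"] = {"id": 42, "name": ctx["deal_name"], "owner": "previous.owner"}

  def validate_ownership_change(ctx):
      if ctx["deal"]["owner"] == ctx["new_owner"]:
          raise ValueError("deal is already owned by the target user")

  def write_crm_update(ctx):
      ctx["deal"]["owner"] = ctx["new_owner"]   # real version: an update through the CRM connector

  def send_slack_notification(ctx):
      print(f"@{ctx['remind_user']}: follow up on the {ctx['deal']['name']} deal")

  def write_audit_log(ctx):
      print("audit: reassign_deal", ctx["deal"]["id"], "->", ctx["new_owner"])

  REASSIGN_DEAL = [find_deal, validate_ownership_change, write_crm_update,
                   send_slack_notification, write_audit_log]

  def run(workflow, ctx):
      for step in workflow:   # fixed order; any failure stops the run before later steps execute
          step(ctx)

  run(REASSIGN_DEAL, {"deal_name": "Lufthansa", "new_owner": "Julia", "remind_user": "Alex"})

The order is fixed by the template, and a failure at any step (for example, the ownership check) stops the workflow before anything is written.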
Why Start With Sales

Sales CRMs are structured and predictable. Workflows repeat (reassign, nudge, follow-up). Latency matters. Output is measurable. That makes the domain a good test environment for conversational workflows.

The long-term idea is not sales-specific.
The same pattern applies to operations, finance, marketing, and HR.

Why Not Just Use "ChatGPT + API"?

Because that breaks fast.

LLMs are not reliable execution engines. They hallucinate field names, IDs, endpoints, and logic. Enterprise systems require safe, auditable actions.

Worqlo treats the LLM as a parser, not a worker. Execution lives in a controlled environment with:
- workflow templates
- schema contracts
- RBAC
- logs
- repeatable results

This keeps the convenience of natural language and the reliability of a classic automation engine.
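As one more hedged sketch, a schema contract at the connector level might look like the following; the field names and endpoint are invented for illustration. The connector only writes fields it has declared, so a hallucinated field name fails before any API call:

  DEAL_SCHEMA = {"owner": str, "stage": str, "amount": float}   # declared contract for a CRM deal

  def update_deal(deal_id: int, changes: dict) -> None:
      for field, value in changes.items():
          if field not in DEAL_SCHEMA:
              raise KeyError(f"unknown CRM field: {field}")        # catches hallucinated field names
          if not isinstance(value, DEAL_SCHEMA[field]):
              raise TypeError(f"{field} expects {DEAL_SCHEMA[field].__name__}")
      # Only after the contract passes would the real CRM request go out.
      print(f"PATCH /deals/{deal_id}", changes)

  update_deal(42, {"owner": "Julia"})             # passes the contract
  # update_deal(42, {"win_probability": "high"})  # rejected: undeclared field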
What We're Testing

We want to see whether:
- conversation can replace UI for narrow, structured tasks
- deterministic execution can coexist with natural language intent
- multi-turn workflows can actually reduce operational load
- a connector model can scale without creating another integration mess
- engineers prefer exposing functionality through workflows instead of UI layers

It's still early. But the model seems promising for high-volume, low-level operational work.