Ask HN: Prompt engineering for large language models

2 points · by Scotrix · 3 months ago
I’m working on a project where I need to extract user intents and map them to deterministic tool/function/API executions, then refine and transform the results with another set of tools. Since gathering the right intent and parameters (there are a lot of subtle differences between potential prompts) is quite challenging, I’m using a long, consecutively executed list of prompts, fine-tuned to gather exactly the right pieces of information needed for somewhat reliable tool execution. I tried this with a bunch of agent frameworks (including langchain/langgraph), but it gets very messy very quickly, and this messiness easily creates a lot of side effects.

So I wonder: is there a tool, approach, or anything else that keeps better control over chains of LLM executions without ending up in a messy configuration and/or code-execution implementation? Maybe even something more visual, or am I the only one struggling with this?
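For context, a minimal sketch of the pipeline shape described above: an LLM (not shown here) would be constrained to emit a structured intent (e.g. via JSON-schema / function-calling output), and everything after that point is deterministic, a tool registry plus a refinement pass. All names (`Intent`, `convert_currency`, the refiner) are illustrative, not from any particular framework.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical intent structure: in practice an LLM constrained to a JSON
# schema (structured output / function calling) would produce this, so the
# non-deterministic part ends at the boundary of this object.
@dataclass
class Intent:
    name: str
    params: Dict[str, Any]

# Registry of deterministic tools keyed by intent name.
TOOLS: Dict[str, Callable[..., Any]] = {}

def tool(name: str) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
    """Decorator that registers a plain function as a tool."""
    def register(fn: Callable[..., Any]) -> Callable[..., Any]:
        TOOLS[name] = fn
        return fn
    return register

@tool("convert_currency")  # illustrative example tool
def convert_currency(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

# Second set of tools that refine/transform the raw result.
REFINERS = [lambda r: f"Result: {r}"]

def run(intent: Intent) -> Any:
    """Dispatch an extracted intent to its tool, then apply refiners."""
    if intent.name not in TOOLS:
        raise ValueError(f"unknown intent: {intent.name}")
    result = TOOLS[intent.name](**intent.params)
    for refine in REFINERS:
        result = refine(result)
    return result
```

Usage: `run(Intent("convert_currency", {"amount": 10.0, "rate": 1.1}))` returns `"Result: 11.0"`. Keeping the registry and refiners as plain Python keeps the execution chain inspectable, which is what tends to get lost inside heavier agent frameworks.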