Ask HN: Are we close to solving memory for LLMs/agents?

2 points · Author: wkyleg · about 22 hours ago
The main issue I experience with LLMs, and the one that seems to most inhibit my further adoption, is agents' inability to remember relevant context.

A few years ago everyone was layering RAG, embeddings, and databases on top of models. Now models with access to local markdown and memory files (like OpenClaw) seem to readily outperform those databases using grep and simple UNIX tools.

Is this an inherent issue in scaling LLMs? Does Obsidian really work that much better for most people? Is anyone finding anything that actually outperforms markdown?

At this point the main bottleneck in my adoption seems to be memory and persistent long-term context, not the quality or reliability of the models.

I'm curious whether there are any technical or scaling metrics we could use to forecast where this will end up going.
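For concreteness, the "local markdown + grep" pattern the post contrasts with RAG pipelines can be sketched in a few lines. This is a hypothetical illustration of the general approach, not OpenClaw's actual implementation; the `memory/` directory, `remember`, and `recall` names are my own.

```python
# Minimal sketch of file-based agent memory: persist notes as plain
# markdown files and retrieve context by substring search (grep-style),
# instead of embedding lookups against a vector database.
from pathlib import Path

MEMORY_DIR = Path("memory")  # hypothetical directory of .md notes


def remember(topic: str, note: str) -> None:
    """Append a note as a markdown bullet to a per-topic file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    with open(MEMORY_DIR / f"{topic}.md", "a", encoding="utf-8") as f:
        f.write(f"- {note}\n")


def recall(query: str) -> list[str]:
    """Return every memory line containing the query, case-insensitively."""
    hits = []
    for md in sorted(MEMORY_DIR.glob("*.md")):
        for line in md.read_text(encoding="utf-8").splitlines():
            if query.lower() in line.lower():
                hits.append(f"{md.name}: {line}")
    return hits


remember("preferences", "User prefers TypeScript over Python")
print(recall("typescript"))
```

The appeal is that the memory store is transparent: the same files can be read, edited, and versioned by a human, and retrieval is exact-match rather than approximate, which is one plausible reason this beats embedding search for small, curated memories.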