Everyone is trying vectors and graphs for AI memory. We went back to SQL.

Posted by Arindam1729, 2 months ago
When we first started building with LLMs, the gap was obvious: they could reason well in the moment, but forgot everything as soon as the conversation moved on.

You could tell an agent, “I don’t like coffee,” and three steps later it would suggest espresso again. It wasn’t broken logic; it was missing memory.

Over the past few years, people have tried a bunch of ways to fix it:

1. Prompt stuffing / fine-tuning – Keep prepending history. Works for short chats, but tokens and cost explode fast.

2. Vector databases (RAG) – Store embeddings in Pinecone/Weaviate. Recall is semantic, but retrieval is noisy and loses structure.

3. Graph databases – Build entity-relationship graphs. Great for reasoning, but hard to scale and maintain.

4. Hybrid systems – Mix vectors, graphs, key-value, and relational DBs. Flexible but complex.

And then there’s the twist: relational databases! Yes, the tech that’s been running banks and social media for decades is looking like one of the most practical ways to give AI persistent memory.

Instead of exotic stores, you can:

- Keep short-term vs. long-term memory in SQL tables

- Store entities, rules, and preferences as structured records

- Promote important facts into permanent memory

- Use joins and indexes for retrieval

(There’s a rough sketch of what this can look like at the end of this post.)

This is the approach we’ve been working on at Gibson. We built an open-source project called Memori (https://memori.gibsonai.com/), a multi-agent memory engine that gives your AI agents human-like memory.

It’s kind of ironic: after all the hype around vectors and graphs, one of the best answers to AI memory might be the tech we’ve trusted for 50+ years.

I would love to know your thoughts about our approach!
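To make the bullet points above concrete, here is a minimal sketch in Python and SQLite of what the pattern can look like. The schema and the `remember`/`promote`/`recall` helpers are illustrative assumptions for this post, not Memori’s actual tables or API:

```python
import sqlite3

# Illustrative schema (not Memori's actual one): short-term rows are cheap
# observations; important ones get promoted into indexed long-term tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE short_term_memory (
    id         INTEGER PRIMARY KEY,
    agent_id   TEXT NOT NULL,
    content    TEXT NOT NULL,
    category   TEXT NOT NULL,            -- e.g. 'preference', 'rule', 'fact'
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE entities (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL            -- e.g. 'coffee'
);

CREATE TABLE long_term_memory (
    id        INTEGER PRIMARY KEY,
    agent_id  TEXT NOT NULL,
    entity_id INTEGER NOT NULL REFERENCES entities(id),
    content   TEXT NOT NULL,
    category  TEXT NOT NULL
);

-- Index the columns retrieval filters on, so recall stays a cheap lookup.
CREATE INDEX idx_ltm_lookup ON long_term_memory (agent_id, entity_id);
""")


def remember(agent_id, content, category):
    """Record a new observation in short-term memory; returns its row id."""
    cur = conn.execute(
        "INSERT INTO short_term_memory (agent_id, content, category) VALUES (?, ?, ?)",
        (agent_id, content, category),
    )
    return cur.lastrowid


def promote(memory_id, entity_name):
    """Promote an important short-term fact into permanent memory."""
    conn.execute("INSERT OR IGNORE INTO entities (name) VALUES (?)", (entity_name,))
    conn.execute(
        """INSERT INTO long_term_memory (agent_id, entity_id, content, category)
           SELECT s.agent_id, e.id, s.content, s.category
           FROM short_term_memory s, entities e
           WHERE s.id = ? AND e.name = ?""",
        (memory_id, entity_name),
    )


def recall(agent_id, entity_name):
    """Retrieve what the agent knows about an entity via a plain indexed join."""
    rows = conn.execute(
        """SELECT m.content
           FROM long_term_memory m
           JOIN entities e ON e.id = m.entity_id
           WHERE m.agent_id = ? AND e.name = ?""",
        (agent_id, entity_name),
    )
    return [content for (content,) in rows]


# The "I don't like coffee" example from above:
mem_id = remember("agent-1", "User dislikes coffee", "preference")
promote(mem_id, "coffee")
print(recall("agent-1", "coffee"))  # ['User dislikes coffee']
```

Nothing exotic: every step is an ordinary insert or an indexed join, so the usual relational tooling (migrations, backups, EXPLAIN, access control) applies to the agent’s memory too.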