AI memory systems often become a black box. When an LLM produces a wrong answer, it's unclear whether the issue comes from storage, retrieval, or the memory itself.

Most systems rely on RAG and vector storage, which makes memory opaque and hard to inspect, especially for temporal or multi-step reasoning.

An alternative is to make memory readable and structured: store it as files, preserve raw inputs, and allow the LLM to read memory directly instead of relying only on vector search.
I recently used DeepSeek, and when I sent a follow-up request in "Thinking" mode, it initially showed a "reading" mode being activated. I had sent a plain text request with no documents attached, so I don't know what that means. My guess is that it reflects a deeper pass over the user prompt.
I scraped 1,576 HN snapshots and found 159 stories that hit the maximum score. Then I crawled the actual articles and ran sentiment analysis.

The results surprised me.

*The Numbers*

- Negative sentiment: 78 articles (49%)
- Positive sentiment: 45 articles (28%)
- Neutral: 36 articles (23%)

Negative content doesn't just perform well – it dominates.

*What "Negative" Actually Means*

The viral negative posts weren't toxic or mean. They were:

- Exposing problems ("Why I mass-deleted my Chrome extensions")
- Challenging giants ("OpenAI's real business model")
- Honest failures ("I wasted 3 years building the wrong thing")
- Uncomfortable truths ("Your SaaS metrics are lying to you")

The pattern: something is broken and here's proof.

*Title Patterns That Worked*

From the 159 viral posts, these structures appeared repeatedly:

1. [Authority] says [Controversial Thing] - 23 posts
2. Why [Common Belief] is Wrong - 19 posts
3. I [Did Thing] and [Unexpected Result] - 31 posts
4. [Company] is [Doing Bad Thing] - 18 posts

Average title length: 8.3 words. The sweet spot is 6-12 words.

*What Didn't Work*

Almost none of the viral posts were:
- Pure product launches
- "I'm excited to announce..."
- Listicles ("10 ways to...")
- Generic advice

*The Uncomfortable Implication*

If you want reach on HN, you're better off writing about what's broken than what you built.

This isn't cynicism – it's selection pressure. HN readers are skeptics. They've seen every pitch. What cuts through is useful criticism backed by evidence.

*For Founders*

Before your next launch post, ask: what problem am I exposing? What assumption am I challenging? What did I learn the hard way?

That's your hook.

---

Data: Built a tool that snapshots HN/GitHub/Reddit/ProductHunt every 30 minutes. Analyzed 1,576 snapshots, found 2,984 instances of score=100, deduped to 159 unique URLs, crawled 143 successfully, ran GPT-4 sentiment analysis on full article text.

Happy to share the raw data if anyone wants to dig deeper.
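The dedup-and-tally step in the data note (filter snapshot instances to score=100, dedupe to unique URLs, then count sentiment labels) can be sketched roughly like this. The record field names (`url`, `score`, `sentiment`) are assumptions for illustration, not the author's actual schema:

```python
from collections import Counter

def analyze(snapshots):
    """Hypothetical sketch: collapse repeated score-capped snapshot
    instances into unique stories, then tally sentiment labels."""
    # Keep only instances that hit the score cap, deduping by URL
    # (the same story appears in many 30-minute snapshots).
    capped = {s["url"]: s for s in snapshots if s["score"] == 100}
    # Tally sentiment labels over the unique stories.
    counts = Counter(s["sentiment"] for s in capped.values())
    total = sum(counts.values())
    return {label: (n, round(100 * n / total)) for label, n in counts.items()}
```

With repeated snapshots of the same story, the dict comprehension keeps one record per URL, which is what collapses the 2,984 score=100 instances down to the 159 unique stories described above.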