24-Hour Hot List

13 points · by projectyang · about 22 hours ago
I was curious to see how some of the latest models behave when playing no-limit Texas Hold'em.

I built this website, which lets you:

Spectate: watch different models play against each other.

Play: create your own table and play hands against the agents directly.
6 points · by daikikadowaki · about 4 hours ago
I’m an independent researcher proposing State Discrepancy, a public-domain metric that quantifies how much an AI system changes a user’s intent (“the Ghost”).

The goal: replace vague legal and philosophical notions of “manipulation” with a concrete engineering variable. Without clear boundaries, AI faces regulatory fog, social distrust, and the risk of being rejected entirely.

Algorithm 1 (pp. 16–17 of the linked white paper) formally defines the metric:

1. D = CalculateDistance(VisualState, LogicalState)
2. IF D < α: optimization (reduce update rate)
3. ELSE IF α ≤ D < β: warning (apply visual/haptic modifier proportional to D)
4. ELSE IF β ≤ D < γ: intervention (modulate input / synchronization)
5. ELSE: security (execute defensive protocol)

The full paper is available on Zenodo: https://doi.org/10.5281/zenodo.18206943
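The tiered response in Algorithm 1 can be sketched in a few lines. This is an illustrative reading only, assuming D is a scalar distance and α < β < γ; the function name and return labels are hypothetical, and the white paper defines the actual states and defensive actions.

```python
def state_discrepancy_response(d: float, alpha: float, beta: float, gamma: float) -> str:
    """Map a state-discrepancy distance D to one of the four response tiers."""
    if d < alpha:
        return "optimization"   # D < α: reduce update rate
    elif d < beta:
        return "warning"        # α ≤ D < β: visual/haptic modifier proportional to D
    elif d < gamma:
        return "intervention"   # β ≤ D < γ: modulate input / synchronization
    return "security"           # D ≥ γ: execute defensive protocol
```

The key design property is that the tiers are exhaustive and mutually exclusive: every value of D falls into exactly one band, so the system always has a defined response.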
5 points · by allie1 · about 8 hours ago
We’ve all seen the crazy “10 parallel agents” type setups, but I never saw them fitting my workflow.

What I usually do is have Claude Code build a plan and Codex find flaws in it, iterating until I get something that looks good. I give direction and make sure it follows my overall idea. Implementation then works well on its own.

But this takes a lot of focus to get right, and I can’t see myself doing it on the same project across multiple features at once.

Am I missing something?
5 points · by jamesponddotco · about 18 hours ago
TL;DR: Librario is a book metadata API that aggregates data from Google Books, ISBNDB, and Hardcover into a single response, solving the problem that no single source has complete book information. It's currently pre-alpha, AGPL-licensed, and available to try now [0].

My wife and I have a personal library of around 1,800 books. I started working on a library management tool for us, but quickly realized I needed a source of book data, and none of the available solutions provided all the data I needed. One might provide the series, another the genres, and another a good cover, but none provided everything.

So I started working on Librario, a book metadata aggregation API written in Go. It fetches information about books from multiple sources (Google Books, ISBNDB, Hardcover; Goodreads and Anna's Archive are next), merges everything, and saves it all to a PostgreSQL database for future lookups. The idea is that the database gets stronger over time as more books are queried.

You can see an example response here [1], or try it yourself:

    curl -s -H 'Authorization: Bearer librario_ARbmrp1fjBpDywzhvrQcByA4sZ9pn7D5HEk0kmS34eqRcaujyt0enCZ' \
      'https://api.librario.dev/v1/book/9781328879943' | jq .

This is pre-alpha and runs on a small VPS, so keep that in mind. I've never hit the limits of the third-party services, so depending on how this post goes, I may or may not find out how well the code handles that.

The merger is the heart of the service, and figuring out how to combine conflicting data from different sources was the hardest part. In the end I settled on field-specific strategies that are quite naive, but work for now.

Each extractor has a priority, and results are sorted by that priority before merging. But priority alone isn't enough, so different fields need different treatment. For example:

- Titles use a scoring system. I penalize titles containing parentheses or brackets, because sources sometimes shove subtitles into the main title field. Overly long titles (80+ characters) are also penalized, since they often contain edition information or other metadata that belongs elsewhere.

- Covers collect all candidate URLs; a separate fetcher then downloads and scores them by dimensions and quality. The best one gets stored locally and served from the server.

For most other fields (publisher, language, page count), I just take the first non-empty value by priority. Simple, but it works.

I recently added a caching layer [2], which sped things up nicely. I considered migrating from net/http to fiber at some point [3], but decided against it: going outside the standard library felt wrong, and the migration didn't provide much in the end.

The database layer is being rewritten before v1.0 [4]. I'll be honest: the original schema was written by AI, and while I tried to guide it in the right direction with SQLC [5] and good documentation, database design isn't my strong suit and I couldn't confidently vouch for the code. Rather than ship something I don't fully understand, I hired the developers from SourceHut [6] to rewrite it properly.

I've got a 5-month-old and we're still adjusting to their schedule, so development is slow.
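The field-specific merge strategies described above can be sketched roughly as follows. This is a hypothetical illustration, not Librario's actual Go code: the function names, the exact penalty values, and the choice to let the title score outrank extractor priority (with priority as tiebreak) are all assumptions for the sake of the example.

```python
def score_title(title: str) -> int:
    """Score a candidate title; higher is better."""
    score = 100
    if "(" in title or "[" in title:
        score -= 30  # sources sometimes shove subtitles into the title field
    if len(title) >= 80:
        score -= 20  # overly long titles often carry edition info or metadata
    return score

def merge_titles(candidates: list[dict]) -> str:
    """Title strategy: best score wins, extractor priority breaks ties."""
    ranked = sorted(candidates, key=lambda c: (-score_title(c["title"]), c["priority"]))
    return ranked[0]["title"] if ranked else ""

def first_non_empty(candidates: list[dict], field: str):
    """Default strategy (publisher, language, page count): first
    non-empty value in extractor-priority order."""
    for c in sorted(candidates, key=lambda c: c["priority"]):
        if c.get(field):
            return c[field]
    return None
```

The point of splitting strategies per field is that a high-priority source can still lose on fields where it is known to be messy (titles), while remaining the default winner everywhere else.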
I've mentioned this project in a few HN threads before [7], so I'm pretty happy to finally have something people can try.

Code is AGPL and on SourceHut [8]. Feedback and patches [9] are very welcome :)

[0]: https://sr.ht/~pagina394/librario/
[1]: https://paste.sr.ht/~jamesponddotco/a6c3b1130133f384cffd25b33a8ab1bc3392093c
[2]: https://todo.sr.ht/~pagina394/librario/16
[3]: https://todo.sr.ht/~pagina394/librario/13
[4]: https://todo.sr.ht/~pagina394/librario/14
[5]: https://sqlc.dev
[6]: https://sourcehut.org/consultancy/
[7]: https://news.ycombinator.com/item?id=45419234
[8]: https://sr.ht/~pagina394/librario/
[9]: https://git.sr.ht/~pagina394/librario/tree/trunk/item/CONTRIBUTING.md