I was curious to see how some of the latest models behave when playing no-limit Texas hold'em.

I built this website, which allows you to:

Spectate: Watch different models play against each other.

Play: Create your own table and play hands against the agents directly.
via https://news.ycombinator.com/item?id=46429250
TL;DR: Librario is a book metadata API that aggregates data from Google Books, ISBNDB, and Hardcover into a single response, solving the problem that no single source has complete book information. It's currently pre-alpha, AGPL-licensed, and available to try now[0].

My wife and I have a personal library of around 1,800 books. I started working on a library management tool for us, but quickly realized I needed a source of book data, and none of the available solutions provided everything I needed. One might provide the series, another the genres, and another a good cover, but none provided it all.

So I started working on Librario, a book metadata aggregation API written in Go. It fetches information about books from multiple sources (Google Books, ISBNDB, and Hardcover; Goodreads and Anna's Archive are next), merges everything, and saves it all to a PostgreSQL database for future lookups. The idea is that the database gets stronger over time as more books are queried.

You can see an example response here[1], or try it yourself:

    curl -s -H 'Authorization: Bearer librario_ARbmrp1fjBpDywzhvrQcByA4sZ9pn7D5HEk0kmS34eqRcaujyt0enCZ' \
      'https://api.librario.dev/v1/book/9781328879943' | jq .

This is pre-alpha and runs on a small VPS, so keep that in mind. I've never hit the limits of the third-party services, so depending on how this post goes, I may or may not find out whether the code handles that well.

The merger is the heart of the service, and figuring out how to combine conflicting data from different sources was the hardest part. In the end I settled on field-specific strategies that are quite naive but work for now.

Each extractor has a priority, and results are sorted by that priority before merging. But priority alone isn't enough, so different fields need different treatment. For example:

- Titles use a scoring system. I penalize titles containing parentheses or brackets, because sources sometimes shove subtitles into the main title field. Overly long titles (80+ characters) also get penalized, since they often contain edition information or other metadata that belongs elsewhere.

- Covers collect all candidate URLs, then a separate fetcher downloads and scores them by dimensions and quality. The best one gets stored locally and served from the server.

- For most other fields (publisher, language, page count), I just take the first non-empty value by priority. Simple, but it works. (A sketch of this merging approach follows at the end of this post.)

I recently added a caching layer[2], which sped things up nicely. I considered migrating from net/http to fiber at some point[3], but decided against it. Going outside the standard library felt wrong, and the migration didn't offer much in the end.

The database layer is being rewritten before v1.0[4]. I'll be honest: the original schema was written by AI, and while I tried to guide it in the right direction with SQLC[5] and good documentation, database design isn't my strong suit and I couldn't confidently vouch for the code. Rather than ship something I don't fully understand, I hired the developers from SourceHut[6] to rewrite it properly.

I've got a 5-month-old and we're still adjusting to their schedule, so development is slow. I've mentioned this project in a few HN threads before[7], so I'm pretty happy to finally have something people can try.

Code is AGPL and on SourceHut[8]. Feedback and patches[9] are very welcome :)

[0]: https://sr.ht/~pagina394/librario/
[1]: https://paste.sr.ht/~jamesponddotco/a6c3b1130133f384cffd25b33a8ab1bc3392093c
[2]: https://todo.sr.ht/~pagina394/librario/16
[3]: https://todo.sr.ht/~pagina394/librario/13
[4]: https://todo.sr.ht/~pagina394/librario/14
[5]: https://sqlc.dev
[6]: https://sourcehut.org/consultancy/
[7]: https://news.ycombinator.com/item?id=45419234
[8]: https://sr.ht/~pagina394/librario/
[9]: https://git.sr.ht/~pagina394/librario/tree/trunk/item/CONTRIBUTING.md
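As promised above, here is a minimal sketch of the field-specific merging strategy the post describes. This is not Librario's actual code: the Record type, field names, and penalty values are invented for illustration, following the stated rules (priority-sorted extractors, title scoring that penalizes brackets and 80+ character titles, and first-non-empty for simple fields).

    package merge

    import (
        "sort"
        "strings"
    )

    // Record is a hypothetical per-source extraction result.
    type Record struct {
        Priority  int // lower value = more trusted extractor
        Title     string
        Publisher string
        Language  string
        PageCount int
    }

    // scoreTitle penalizes titles that look like they carry extra
    // metadata: brackets/parentheses and overly long strings.
    func scoreTitle(t string) int {
        score := 100
        if strings.ContainsAny(t, "()[]") {
            score -= 40 // subtitle shoved into the title field
        }
        if len(t) > 80 {
            score -= 30 // likely edition info or other metadata
        }
        return score
    }

    // Merge combines per-source records into one result using
    // field-specific strategies.
    func Merge(records []Record) Record {
        // Sort by extractor priority so the simple strategies
        // below see the most trusted source first.
        sort.SliceStable(records, func(i, j int) bool {
            return records[i].Priority < records[j].Priority
        })

        var out Record

        // Titles: best-scoring candidate wins; priority order
        // breaks ties because earlier records are seen first.
        bestScore := -1
        for _, r := range records {
            if r.Title == "" {
                continue
            }
            if s := scoreTitle(r.Title); s > bestScore {
                bestScore, out.Title = s, r.Title
            }
        }

        // Most other fields: first non-empty value by priority.
        for _, r := range records {
            if out.Publisher == "" {
                out.Publisher = r.Publisher
            }
            if out.Language == "" {
                out.Language = r.Language
            }
            if out.PageCount == 0 {
                out.PageCount = r.PageCount
            }
        }
        return out
    }

Cover selection is omitted here, since per the post it happens in a separate fetcher that downloads and scores each candidate image before picking one.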
Finding a true “Remote from Anywhere” role is harder than it looks. Many jobs are labeled “remote,” but the fine print often ties them to a region, a time zone, or specific legal and tax requirements. Here are practical checks that help you spot “remote anywhere” roles faster and avoid common red flags.

1) Read the location line

Start with the simplest signal: is there a geography attached?

- “US Remote,” “Remote (EU),” “LATAM only,” or “Remote within X countries” usually means location restrictions.
- If time zones are listed, that can also imply location limits, even when the role is technically remote.
- Look for explicit language like “Global remote,” “Work from anywhere,” “fully asynchronous,” or “distributed team across multiple countries.” These are not guarantees, but they are stronger indicators.

2) Treat salary as a clue

Pay ranges can indicate the target hiring market.

- A range like $100k to $250k often signals a US-centered market (not always, but often).

3) Watch the application form

Sometimes the job post is vague, but the ATS form tells the truth:

- Questions like “Which time zone can you work in?” can reveal the required overlap.
- If the location dropdown includes only a few regions (e.g., US, Canada, Europe, Other), it often indicates specific geographic requirements.
- Red flags that usually indicate US-only hiring include questions about US work authorization, a US tax ID, US-specific benefits, or requirements such as a security clearance.

4) Check the company on LinkedIn

If a company truly hires globally, you can usually see it in its team.

- Review employee locations. Even if LinkedIn shows only a few “top locations,” individual profiles reveal the real spread.
- Search for your profession (e.g., Software Engineer) and check where those employees actually live.
- If you see people working from India, Asia, Africa, or other regions beyond the US and Europe, that is a strong sign the company can hire internationally.

5) Compare career pages and external job boards

Job descriptions are sometimes more detailed on the company website.

- Look for mentions of an asynchronous culture, a multinational team, or the number of nationalities at the company.
- Check LinkedIn job posts and external job boards. They sometimes include location constraints that are missing from the official posting.

“Remote anywhere” roles exist, but they are a narrower category than most people expect: companies have to balance time zone collaboration, employment compliance, payroll, and security requirements. The first check can even be partly automated with a simple keyword screen (sketched below). Good luck with your remote job search!
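As a rough first pass over step 1, you could screen a posting's text for region-restricting phrases before reading the fine print yourself. A toy sketch in Go; the keyword lists are illustrative, not exhaustive, and any real screen would need far more patterns:

    package main

    import (
        "fmt"
        "strings"
    )

    // Phrases that usually signal a location limit (illustrative only).
    var restrictionHints = []string{
        "us remote", "remote (eu)", "latam only",
        "remote within", "time zone overlap", "work authorization",
    }

    // Phrases that suggest a true work-from-anywhere role
    // (still not a guarantee).
    var anywhereHints = []string{
        "global remote", "work from anywhere",
        "fully asynchronous", "distributed team",
    }

    // classify returns a coarse label for a job posting's text.
    func classify(posting string) string {
        p := strings.ToLower(posting)
        // Restrictions are checked first: posts often mix both
        // kinds of language, and the fine print wins.
        for _, h := range restrictionHints {
            if strings.Contains(p, h) {
                return fmt.Sprintf("likely restricted (matched %q)", h)
            }
        }
        for _, h := range anywhereHints {
            if strings.Contains(p, h) {
                return fmt.Sprintf("possibly remote-anywhere (matched %q)", h)
            }
        }
        return "unclear: read the fine print"
    }

    func main() {
        fmt.Println(classify("Senior Go Engineer, US Remote, $150k-$250k"))
        fmt.Println(classify("Fully asynchronous, distributed team across 20 countries"))
    }

Restriction hints are checked before “anywhere” hints because postings often contain both kinds of language, and the restrictive fine print usually wins.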
There is also a small command-line tool for searching jobs from the terminal, or you can use the web interface at https://jobswithgpt.com.