
24-Hour Hot List

13 · Author: projectyang · about 7 hours ago · original post
I was curious to see how some of the latest models behave when playing no-limit Texas Hold'em.

I built this website, which lets you:

- Spectate: watch different models play against each other.
- Play: create your own table and play hands against the agents directly.
5 · Author: jamesponddotco · about 3 hours ago · original post
TLDR: Librario is a book metadata API that aggregates data from Google Books, ISBNDB, and Hardcover into a single response, solving the problem that no single source has complete book information. It's currently pre-alpha, AGPL-licensed, and available to try now [0].

My wife and I have a personal library of around 1,800 books. I started working on a library management tool for us, but quickly realized I needed a source of book data, and none of the available solutions provided all the data I needed. One might provide the series, another the genres, and another a good cover, but none provided everything.

So I started working on Librario, a book metadata aggregation API written in Go. It fetches book information from multiple sources (Google Books, ISBNDB, and Hardcover; Goodreads and Anna's Archive are next), merges everything, and saves it all to a PostgreSQL database for future lookups. The idea is that the database gets stronger over time as more books are queried.

You can see an example response here [1], or try it yourself:

    curl -s -H 'Authorization: Bearer librario_ARbmrp1fjBpDywzhvrQcByA4sZ9pn7D5HEk0kmS34eqRcaujyt0enCZ' \
      'https://api.librario.dev/v1/book/9781328879943' | jq .

This is pre-alpha and runs on a small VPS, so keep that in mind. I have never hit the rate limits of the third-party services, so depending on how this post goes, I may or may not find out whether the code handles that well.

The merger is the heart of the service, and figuring out how to combine conflicting data from different sources was the hardest part. In the end I settled on field-specific strategies that are quite naive, but work for now.

Each extractor has a priority, and results are sorted by that priority before merging. But priority alone isn't enough, so different fields need different treatment. For example:

- Titles use a scoring system. I penalize titles containing parentheses or brackets, because sources sometimes shove subtitles into the main title field. Overly long titles (80+ characters) also get penalized, since they often contain edition information or other metadata that belongs elsewhere.
- Covers collect all candidate URLs, then a separate fetcher downloads and scores them by dimensions and quality. The best one gets stored locally and served from the server.

For most other fields (publisher, language, page count), I just take the first non-empty value by priority. Simple, but it works.

I recently added a caching layer [2], which sped things up nicely. I considered migrating from net/http to fiber at some point [3], but decided against it. Going outside the standard library felt wrong, and the migration didn't provide much in the end.

The database layer is being rewritten before v1.0 [4]. I'll be honest: the original schema was written by AI, and while I tried to guide it in the right direction with SQLC [5] and good documentation, database design isn't my strong suit and I couldn't confidently vouch for the code. Rather than ship something I don't fully understand, I hired the developers from SourceHut [6] to rewrite it properly.

I've got a 5-month-old and we're still adjusting to their schedule, so development is slow. I've mentioned this project in a few HN threads before [7], so I'm pretty happy to finally have something people can try.

The code is AGPL and on SourceHut [8]. Feedback and patches [9] are very welcome :)

[0]: https://sr.ht/~pagina394/librario/
[1]: https://paste.sr.ht/~jamesponddotco/a6c3b1130133f384cffd25b33a8ab1bc3392093c
[2]: https://todo.sr.ht/~pagina394/librario/16
[3]: https://todo.sr.ht/~pagina394/librario/13
[4]: https://todo.sr.ht/~pagina394/librario/14
[5]: https://sqlc.dev
[6]: https://sourcehut.org/consultancy/
[7]: https://news.ycombinator.com/item?id=45419234
[8]: https://sr.ht/~pagina394/librario/
[9]: https://git.sr.ht/~pagina394/librario/tree/trunk/item/CONTRIBUTING.md
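The priority-plus-scoring idea for titles can be sketched in Go (the project's language). This is a minimal illustration, not Librario's actual code: the type, function names, and penalty values are assumptions; only the rules themselves (penalize brackets, penalize 80+ characters, sort sources by priority first) come from the description above.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Result is one extractor's view of a book; Priority ranks the source
// (lower = more trusted). Hypothetical type for illustration.
type Result struct {
	Source   string
	Priority int
	Title    string
}

// scoreTitle penalizes titles that look like they carry extra metadata:
// parentheses/brackets (subtitles shoved into the title field) and very
// long titles (edition info). Penalty values are illustrative guesses.
func scoreTitle(t string) int {
	score := 100
	if strings.ContainsAny(t, "()[]") {
		score -= 30
	}
	if len(t) >= 80 {
		score -= 20
	}
	return score
}

// mergeTitle sorts candidates by source priority, then picks the
// best-scoring non-empty title; on a score tie, the stable sort keeps
// the more trusted source's title.
func mergeTitle(rs []Result) string {
	sort.SliceStable(rs, func(i, j int) bool { return rs[i].Priority < rs[j].Priority })
	best, bestScore := "", -1
	for _, r := range rs {
		if r.Title == "" {
			continue
		}
		if s := scoreTitle(r.Title); s > bestScore {
			best, bestScore = r.Title, s
		}
	}
	return best
}

func main() {
	rs := []Result{
		{"googlebooks", 1, "Pachinko (National Book Award Finalist)"},
		{"hardcover", 2, "Pachinko"},
	}
	fmt.Println(mergeTitle(rs)) // the clean title wins despite the lower source priority
}
```

The nice property of this shape is that each field gets its own small, testable merge function, while fields without special needs can fall back to "first non-empty value by priority."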
5 · Author: andrewstetsenko · about 10 hours ago · original post
Finding a true “Remote from Anywhere” role is harder than it looks. Many jobs are labeled “remote,” but the fine print often ties them to a region, a time zone, or specific legal and tax requirements. Here are practical checks that help you spot “remote anywhere” roles faster and avoid common red flags.

1) Read the location line

Start with the simplest signal: is there a geography attached?

- “US Remote,” “Remote (EU),” “LATAM only,” or “Remote within X countries” usually means location restrictions.
- If time zones are listed, that can also imply location limits, even when the role is technically remote.
- Look for explicit language like “Global remote,” “Work from anywhere,” “fully asynchronous,” or “distributed team across multiple countries.” These are not guarantees, but they are stronger indicators.

2) Treat salary as a clue

Pay ranges can indicate the target hiring market.

- A range like $100k to $250k often signals a US-centered market (not always, but often).

3) Watch the application form

Sometimes the job post is vague, but the ATS form tells the truth:

- Questions like “Which time zone can you work in?” can reveal the required overlap.
- If the location dropdown includes only a few regions (e.g., US, Canada, Europe, Other), it often indicates specific geographic requirements.
- Red flags that usually indicate US-only hiring include questions about US work authorization, a US tax ID, US-specific benefits, or requirements such as a security clearance.

4) Check the company on LinkedIn

If a company truly hires globally, you can usually see it in its team.

- Review employee locations. Even if LinkedIn shows only a few “top locations,” individual profiles reveal the real spread.
- Search for your profession (e.g., Software Engineer) and check where those people actually live.
- If you see people working from India, Asia, Africa, or other regions beyond the US and Europe, that is a strong sign the company can hire internationally.

5) Compare career pages and external job boards

Job descriptions are sometimes more detailed on the company website.

- Look for mentions of an asynchronous culture, a multinational team, or the number of nationalities in the company.
- Check LinkedIn job posts and external job boards. They sometimes include location constraints that are missing from the official posting.

“Remote anywhere” roles exist, but they are a narrower category than most people expect. Companies must balance time-zone collaboration, employment compliance, payroll, and security requirements. Good luck with your remote job search!
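The location-line check in step 1 boils down to a keyword scan, with restriction phrases trumping "anywhere" phrases since the fine print usually narrows the headline. A minimal Go sketch; the keyword lists are illustrative and far from exhaustive:

```go
package main

import (
	"fmt"
	"strings"
)

// classifyLocation does a crude substring scan of a job post's location
// line. Restriction keywords are checked first, because a post can say
// "remote" while still limiting geography. Lists are illustrative only.
func classifyLocation(line string) string {
	l := strings.ToLower(line)
	restricted := []string{"us remote", "remote (eu)", "latam only", "within", "time zone"}
	anywhere := []string{"work from anywhere", "global remote", "fully asynchronous"}
	for _, k := range restricted {
		if strings.Contains(l, k) {
			return "restricted"
		}
	}
	for _, k := range anywhere {
		if strings.Contains(l, k) {
			return "likely anywhere"
		}
	}
	return "unclear"
}

func main() {
	fmt.Println(classifyLocation("Remote (EU), CET overlap required")) // restricted
	fmt.Println(classifyLocation("Work from anywhere, async team"))    // likely anywhere
}
```

As the post notes, even a "likely anywhere" match is an indicator, not a guarantee; the ATS form and LinkedIn checks are the stronger signals.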
4 · Author: sp1982 · about 4 hours ago · original post
There is also a little command-line tool for searching jobs from the terminal. You can also use the web interface at https://jobswithgpt.com