2 points | by stared | about 1 month ago
Hi HN!

"Never perfect. Perfection goal that changes. Never stops moving. Can chase, cannot catch." - Abathur (https://www.youtube.com/watch?v=pw_GN3v-0Ls)

StarCraft 2 is one of the most balanced games ever, thanks to Blizzard's pursuit of perfection. It has been over 15 years since the release of Wings of Liberty and over 10 years since the last installment, Legacy of the Void. Yet balance updates continue to appear, changing how the game plays. Thanks to that, StarCraft is still alive and well!

I decided to create an interactive visualization of all balance changes, both by patch and by unit, with smooth transitions.

I had this idea quite a few years ago, but LLMs made it possible: otherwise I wouldn't have had the time to code it, or to collect changes from hundreds of patches (not all of which have balance updates). It took far more time than expected, both in parsing the data and in getting the D3.js transitions right.

It was pretty much pure vibe coding with Claude Code and Opus 4.5, while constantly using Playwright skills and consulting Gemini 3 Pro (https://github.com/stared/gemini-claude-skills). While Opus 4.5 was much better at executing, it was often essential to use Gemini for insights, for cleaner code, or to inspect screenshots. The difference in quality was huge.

Still, it was tricky, as LLMs do not know D3.js nearly as well as React. In hindsight, the transition logic might have been better written by hand, with LLMs used only for the details. But that too was a lesson.

Enjoy!

Source code is here: https://github.com/stared/sc2-balance-timeline
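For anyone curious what the tricky part looks like: below is a minimal TypeScript sketch of the D3.js enter/update/exit pattern with transitions that a visualization like this leans on. It is not the project's actual code; the data shape, scale, and selectors are invented for illustration.

  import * as d3 from "d3";

  // Hypothetical shape: one bar per unit stat in the selected patch.
  interface StatRow { unit: string; value: number; }

  // Classic enter/update/exit with transitions; assumes an <svg> that
  // already contains a <g class="bars"> and a linear x scale.
  function render(rows: StatRow[], x: d3.ScaleLinear<number, number>): void {
    d3.select<SVGGElement, unknown>("g.bars")
      .selectAll<SVGRectElement, StatRow>("rect")
      .data(rows, (d) => d.unit)            // key by unit so bars persist across patches
      .join(
        (enter) => enter.append("rect")
          .attr("y", (_d, i) => i * 22)
          .attr("height", 18)
          .attr("width", 0),                // new bars grow in from zero width
        (update) => update,
        (exit) => exit.transition().duration(400).attr("width", 0).remove()
      )
      .transition().duration(400)           // animate to the new patch's value
      .attr("width", (d) => x(d.value));
  }

This manual lifecycle management is plausibly why the models struggled: React-style declarative rendering hides exactly the enter/update/exit bookkeeping that D3 makes you do yourself.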
1 point | by raghavchamadiya | about 1 month ago
I keep seeing two extreme futures discussed around AI.

One is techno-utopia: AI does everything, productivity explodes, humans are free to create and chill.

The other is collapse: AI replaces jobs, wealth concentrates, consumption dies, society implodes.

What I don't see discussed enough is the mechanism between those states.

If AI systems genuinely outperform humans at most economically valuable tasks, wages are no longer the primary distribution mechanism. But capitalism today assumes wages are how demand exists. No wages means no buyers. No buyers means even the owners of AI have no customers.

That feels less like a social problem and more like a systems contradiction.

Historically, automation shifted labor rather than deleting it. But AI is different in that it targets cognition itself, not just muscle or repetition. If the marginal cost of intelligence trends toward zero, markets built on selling human time start to behave strangely.

Some questions I keep circling:

* Who funds demand in a post-labor economy?
* Is UBI enough, or does ownership of productive models need to be broader?
* Do we end up with state-mediated consumption rather than market-mediated consumption?
* Does GDP even remain a meaningful metric when production is decoupled from employment?

I'm not arguing AI doom or AI salvation here. I'm trying to understand the transition dynamics: the part where things either adapt smoothly or break loudly.

Curious how others here model this in their heads, especially folks building or deploying these systems today.
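To make the wage-demand loop concrete, here is a deliberately crude toy model, not a forecast; every parameter is invented. Output splits into wages and profits, automation shifts the split each step, and demand is wages plus whatever fraction of profits gets recycled as transfers (a stand-in for UBI).

  // Toy model of the post's core loop; all numbers are made up.
  function demandPath(steps: number, automationPerStep: number, transferRate: number): number[] {
    let wageShare = 0.6;                    // share of output paid out as wages
    const path: number[] = [];
    for (let i = 0; i < steps; i++) {
      wageShare *= 1 - automationPerStep;   // automation displaces wage income
      const profitShare = 1 - wageShare;
      // Demand = wage income + recycled profits (UBI-like transfers).
      path.push(wageShare + transferRate * profitShare);
    }
    return path;
  }

  console.log(demandPath(20, 0.1, 0));   // no recycling: demand decays with the wage bill
  console.log(demandPath(20, 0.1, 1));   // full recycling: demand stays flat at 1.0

Even this cartoon shows the contradiction: with no recycling, demand withers exactly as fast as wage income does. Everything interesting is in what it omits, namely who sets the transfer rate and through what mechanism.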
1 point | by pmaze | about 1 month ago
I think LLMs are overused to summarise and underused to help us read deeper. I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.

I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insights I had baked into the prompts, and the results weren't particularly surprising. On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave genuinely interesting results and required very little orchestration in comparison.

One of my favourite trails of excerpts goes from Jobs' reality distortion field to Theranos' fake demos, to Thiel on startup cults, to Hoffer on mass-movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency: Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems, as if the task itself summoned a Foucault's Pendulum mindset.

Details:

* The books are picked from HN's favourites (which I collected before: https://hnbooks.pieterma.es/).
* Chunks are indexed by topic using Gemini Flash Lite. Indexing the whole library cost about £10.
* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes.
* There are several ways to browse. The most useful are embedding similarity (sketched below), topic tree siblings, and topics co-occurring within a chunk window.
* Everything is stored in SQLite and manipulated with a set of CLI tools.

I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/

I'm curious if this way of reading resonates for anyone else, LLM-mediated or not.
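As referenced in the list above, here is a rough TypeScript sketch of what the embedding-similarity browse could look like over a better-sqlite3 database. The schema, a chunks(id, book, text, embedding) table with embedding as a JSON-encoded float array, and all names are guesses for illustration, not the project's actual code.

  import Database from "better-sqlite3";

  // Assumed schema: chunks(id, book, text, embedding JSON float array).
  const db = new Database("library.db");

  function cosine(a: number[], b: number[]): number {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
      na += a[i] * a[i];
      nb += b[i] * b[i];
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
  }

  // Rank other chunks by similarity to a seed chunk, excluding the seed's
  // own book so trails jump between books instead of staying inside one.
  function neighbours(seedId: number, k = 10) {
    const seed = db.prepare("SELECT book, embedding FROM chunks WHERE id = ?")
      .get(seedId) as { book: string; embedding: string };
    const seedVec = JSON.parse(seed.embedding) as number[];
    const rows = db.prepare("SELECT id, book, text, embedding FROM chunks WHERE book != ?")
      .all(seed.book) as { id: number; book: string; text: string; embedding: string }[];
    return rows
      .map((r) => ({ ...r, score: cosine(seedVec, JSON.parse(r.embedding)) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k);
  }

The cross-book restriction is the design choice that makes this a trail-builder rather than a within-book search, matching how the example trail hops from Jobs to Theranos to Thiel to Hoffer.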