I built a CLI tool that scans your Metabase instance to find which SQL questions reference a column or table you're about to drop or rename.

    metabase-impact --metabase-url http://localhost:3000 --api-key "mb_xxx" --drop-column orders.user_id

It outputs the affected questions with direct links so you can fix or archive them before deploying.

Built this after breaking dashboards one too many times. It uses sqlglot for SQL parsing (handles aliases and complex queries). Only works on native SQL questions, not MBQL/GUI queries.
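For context, this is a minimal sketch of the kind of alias-aware column lookup that sqlglot makes possible; it is my own illustration of the technique, not the tool's actual code:

    import sqlglot
    from sqlglot import exp

    def references_column(sql: str, table: str, column: str) -> bool:
        # Parse the query and resolve table aliases back to real table names
        tree = sqlglot.parse_one(sql)
        aliases = {}
        for t in tree.find_all(exp.Table):
            aliases[(t.alias or t.name).lower()] = t.name.lower()
        # Check every column reference against the target table.column
        for col in tree.find_all(exp.Column):
            qualifier = aliases.get(col.table.lower(), col.table.lower()) if col.table else None
            if col.name.lower() == column.lower() and qualifier in (None, table.lower()):
                return True
        return False

    print(references_column("SELECT o.user_id FROM orders AS o", "orders", "user_id"))  # True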
I built this because Cursor, Claude Code, and other agentic AI tools kept giving me tests that looked fine but failed when I ran them. Or worse: I'd ask the agent to run them and it would start looping, fixing the tests, those fail, then it starts "fixing" my code so the tests pass, or just deletes assertions so they "pass".

Out of that frustration I built KeelTest, a VS Code extension that generates pytest tests and executes them. I got hooked and decided to push the project forward. When tests fail, it tries to figure out why:

- Generation error: attempts to fix it automatically, then tries again
- Bug in your source code: flags it and explains what's wrong

How it works:

- Static analysis to map dependencies, patterns, and services to mock
- Generate a plan for each function and the edge cases to cover
- Generate those tests
- Execute them in a "sandbox"
- Self-heal failures or flag source bugs (a rough sketch of this loop is at the end of this post)

Python + pytest only for now. Alpha stage: not all codebases work reliably. But testing on personal projects and a few production apps at work, it's been consistently decent. It works best on simpler applications and sometimes glitches on monorepo setups. Supports Poetry/UV/plain pip setups.

Install from the VS Code marketplace: https://marketplace.visualstudio.com/items?itemName=KeelCode.keeltest

A more detailed writeup of how it works: https://keelcode.dev/blog/introducing-keeltest

The free tier is 7 test files/month (current limit is <=300 source LOC). To make it easier to try without signing up, I'm giving away a few API keys (they share a quota of ~30 generated test files):

KEY-1: tgai_jHOEgOfpMJ_mrtNgSQ6iKKKXFm1RQ7FJOkI0a7LJiWg

KEY-2: tgai_NlSZN-4yRYZ15g5SAbDb0V0DRMfVw-bcEIOuzbycip0

KEY-3: tgai_kiiSIikrBZothZYqQ76V6zNbb2Qv-o6qiZjYZjeaczc

KEY-4: tgai_JBfSV_4w-87bZHpJYX0zLQ8kJfFrzas4dzj0vu31K5E

I'd love your honest feedback on where this could go next, and on which setups it failed and how it failed; it has quite verbose debug output at this stage!
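Here is a rough, hypothetical sketch of what the "execute, self-heal, or flag" step could look like; the function names are placeholders of my own, not KeelTest's actual internals:

    import subprocess
    import sys

    def run_pytest(test_file: str) -> subprocess.CompletedProcess:
        # Run the generated test file in a subprocess and capture the report
        return subprocess.run(
            [sys.executable, "-m", "pytest", test_file, "-q"],
            capture_output=True, text=True,
        )

    def regenerate(test_file: str, pytest_output: str) -> None:
        # Placeholder for the LLM repair step: rewrite only the failing tests,
        # never the source code under test, and never delete assertions
        pass

    def heal_or_flag(test_file: str, max_attempts: int = 3) -> str:
        for _ in range(max_attempts):
            result = run_pytest(test_file)
            if result.returncode == 0:
                return "passed"
            regenerate(test_file, result.stdout)
        # Tests that keep failing after repair are reported as possible source bugs
        return "possible source bug"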
Just built a small tool with some comparisons of country sizes vs. planets. Greenland seems larger than I thought.

The tool lets you drag a country onto another planet to see its size there.
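For the curious, the underlying comparison presumably comes down to surface fractions; this is my assumption about the math, not the tool's code:

    import math

    EARTH_RADIUS_KM = 6371
    MARS_RADIUS_KM = 3390
    GREENLAND_AREA_KM2 = 2_166_000  # approximate

    def surface_fraction(area_km2: float, planet_radius_km: float) -> float:
        # Fraction of a sphere's surface covered by a region of the given area
        return area_km2 / (4 * math.pi * planet_radius_km ** 2)

    print(f"Greenland on Earth: {surface_fraction(GREENLAND_AREA_KM2, EARTH_RADIUS_KM):.2%}")
    print(f"Greenland on Mars:  {surface_fraction(GREENLAND_AREA_KM2, MARS_RADIUS_KM):.2%}")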
Hey HN!

I'm Collin, a 20-year-old law student from Amsterdam. I built Cited AI, an AI that gives accurate, verifiable answers drawn from your documents or context.

As a law student who uses AI a lot, I know how important accuracy and verifiability are. I remember asking chatbots like ChatGPT or Claude about case law or long documents, only to have them either invent facts that don't exist or fail to give me the exact passage in my source document so I could verify the answer. Even when I specifically asked for precise citations, finding the passage in the original text was still a hassle. That's why I started building Cited last November.

It should handle all kinds of content, including complex PDFs (even math-heavy documents) and documents up to 75,000 words. To keep the full document in the LLM's context, it doesn't use RAG or chunking.

I'd love to hear your feedback!
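As a rough illustration of the general approach described above (whole document in context, verbatim supporting quotes), here's a minimal sketch assuming an OpenAI-style client and a long-context model; this is not Cited's actual implementation:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def answer_with_citations(document: str, question: str) -> str:
        # Put the entire document in the prompt (no RAG, no chunking) and
        # ask the model to quote the exact supporting passages verbatim
        prompt = (
            "Answer the question using ONLY the document below. "
            "After each claim, quote the exact supporting passage verbatim in quotation marks.\n\n"
            f"DOCUMENT:\n{document}\n\nQUESTION: {question}"
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # any long-context model; 75,000 words needs ~100k tokens of context
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content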