1. Posted by abrarnasirj, about 1 month ago (original post)
Hi HN, I'm Abrar Nasir Jaffari, co-founder of HackLikeMe. We built an agentic CLI because we were tired of context-switching between LLM web chats and the terminal when doing DevSecOps work. Most AI coding assistants are just wrappers for file editing. We've built six specialized agents (Coder, FullStack, Security, DevOps, Plan, Monitor) with native terminal access.

It doesn't just suggest code; it can:

- Run nmap to audit your local network.
- Use tshark to analyze packet captures.
- Manage Docker containers and kubectl clusters.

The "Pause to Think" feature: before executing a command, it generates a reasoning plan so you can see why it's about to run a specific script.

The beta offer: we launched yesterday and are currently in beta. We're giving free Pro access to the first 100 HN users, no credit card required. We're running on a mix of AWS and GCP (leveraging some credits we just landed), so we can offer decent compute for the reasoning models during the beta.
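A minimal sketch of what a "pause to think" gate could look like: the agent surfaces its proposed command plus a rationale, and nothing runs until the caller approves. This is hypothetical illustration code, not HackLikeMe's implementation; the function name and structure are assumptions.

```python
import shlex
import subprocess

def plan_and_run(command: str, rationale: str, approve):
    """Show the reasoning plan for a proposed command, then execute it
    only if the approve() callback accepts the plan (the 'pause' gate)."""
    plan = f"About to run: {command}\nReason: {rationale}"
    if not approve(plan):
        return None  # plan rejected; nothing is executed
    result = subprocess.run(shlex.split(command),
                            capture_output=True, text=True, check=False)
    return result.stdout

# Example: auto-approve and run a harmless command in place of nmap.
out = plan_and_run("echo scan-complete",
                   "Verify the execution path before wiring in real tools.",
                   approve=lambda plan: True)
print(out.strip())  # prints "scan-complete"
```

In an interactive CLI, `approve` would render the plan and prompt the user instead of returning a constant.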
3. Posted by FrankHobson, about 1 month ago (original post)
I've tried most popular personal finance apps over the last few years, and I always end up quitting.

For me, the main reasons are:

- Core functionality hidden behind paywalls
- UX that feels bloated or optimized for upsells
- $100+/year pricing for some
- Needing multiple separate tools (budgeting, tracking, investments) with manual syncing and often no decent mobile app

I'm starting an open personal finance tool as a side project because I want something I'd actually stick with long-term.

Before locking myself into the wrong design, I'd love to hear from others:

- Why did you stop using finance apps (if you used any)?
- What features are must-haves vs. nice-to-haves?
- What made a tool "click" for you, or never click at all?

Happy to hear if this feels redundant or already solved better elsewhere.
1. Posted by teugent, about 1 month ago (original post)
We've validated the Sigma Runtime architecture (v0.4.12) on Google Gemini-3 Flash, confirming that long-horizon identity control and stability can be achieved without retraining or fine-tuning the model.

The system maintains two distinct personas ("Fujiwara", a stoic Edo-period ronin, and "James", a formal British analyst) across 220 dialogue turns in stable equilibrium. This shows that cognitive coherence and tone consistency can be controlled at runtime rather than in model weights.

Unlike LangChain or RAG frameworks that orchestrate prompts, Sigma Runtime treats the model as a dynamic field with measurable drift and equilibrium parameters. It applies real-time feedback, injecting entropy or coherence corrections when needed, to maintain identity and prevent both drift and crystallization. The effect is similar to RLHF-style fine-tuning, but applied externally and vendor-agnostically.

This decouples application logic from any specific LLM provider. The same runtime behavior has been validated on GPT-5.2 and Gemini-3, with Claude tests planned next.

We use narrative identities like "Fujiwara" or "James" because their linguistic styles make stability easy to verify by eye. If the runtime can hold these for 100+ turns, it can maintain any structured identity or agent tone.

Runtime versions ≥ v0.4 are proprietary, but the architecture is open under the Sigma Runtime Standard (SRS): https://github.com/sigmastratum/documentation/tree/main/srs

A reproducible early version (SR-EI-037) is available here: https://github.com/sigmastratum/documentation/tree/bf473712ada5a9204a65434e46860b03d5fbf8fe/sigma-runtime/SR-EI-037/code

Registered under DOI: 10.5281/zenodo.18085782; non-commercial implementations are fully open.

HN discussion focus:

- Runtime-level vs. weight-level control
- Model-agnostic identity stability
- Feedback-based anti-crystallization
- Can cognitive coherence be standardized?
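The feedback loop described (measure drift, inject a coherence correction when it crosses a threshold) can be sketched in miniature. The drift metric below is a deliberately crude word-overlap toy, and all names are assumptions; the actual SRS metrics and correction mechanism are not public.

```python
def style_drift(reply: str, persona_markers: set) -> float:
    """Toy drift score: fraction of persona marker words absent from
    the reply. 0.0 = fully on-style, 1.0 = fully drifted.
    (A stand-in for whatever metric the runtime actually uses.)"""
    words = set(reply.lower().split())
    missing = persona_markers - words
    return len(missing) / len(persona_markers)

def correction_prompt(drift: float, threshold: float = 0.5):
    """Feedback step: once drift crosses the threshold, return a
    coherence correction to prepend to the next model call."""
    if drift <= threshold:
        return None  # in equilibrium; no intervention needed
    return "Stay in character: formal British analyst; measured, precise tone."

# A reply that has drifted far from the "James" persona markers.
markers = {"indeed", "rather", "precisely"}
drift = style_drift("lol yeah totally agree", markers)
print(round(drift, 2), correction_prompt(drift))
```

A real runtime would replace `style_drift` with an embedding- or classifier-based measure and feed the correction back into the conversation automatically, closing the loop each turn.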