Daily Show HN


Show HN for March 14, 2026

44 items
208

Han – A Korean programming language written in Rust

github.com
116 comments · 9:27 PM
A few weeks ago I saw a post about someone converting an entire C++ codebase to Rust using AI in under two weeks.

That inspired me — if AI can rewrite a whole language stack that fast, I wanted to try building a programming language from scratch with AI assistance.

I've also been noticing growing global interest in Korean language and culture, and I wondered: what would a programming language look like if every keyword was in Hangul (the Korean writing system)?

Han is the result. It's a statically-typed language written in Rust with a full compiler pipeline (lexer → parser → AST → interpreter + LLVM IR codegen).

It supports arrays, structs with impl blocks, closures, pattern matching, try/catch, file I/O, module imports, a REPL, and a basic LSP server.
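For a sense of what that pipeline involves, here is a toy lexer → parser → AST → interpreter loop in Python (a minimal sketch of the stages only; Han's real pipeline is in Rust, far richer, and also emits LLVM IR):

```python
# Toy illustration of a compiler front end: lex -> parse -> AST -> interpret.
import re

TOKEN = re.compile(r"\s*(\d+|[()+*])")

def lex(src):
    """Split source text into a flat token stream."""
    pos, tokens = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad char at {pos}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def parse(tokens):
    """Recursive descent: expr := term ('+' term)*, term := atom ('*' atom)*."""
    def atom():
        tok = tokens.pop(0)
        if tok == "(":
            node = expr()
            tokens.pop(0)  # consume ')'
            return node
        return ("num", int(tok))
    def term():
        node = atom()
        while tokens and tokens[0] == "*":
            tokens.pop(0)
            node = ("*", node, atom())
        return node
    def expr():
        node = term()
        while tokens and tokens[0] == "+":
            tokens.pop(0)
            node = ("+", node, term())
        return node
    return expr()

def interpret(node):
    """Walk the AST bottom-up."""
    if node[0] == "num":
        return node[1]
    lhs, rhs = interpret(node[1]), interpret(node[2])
    return lhs + rhs if node[0] == "+" else lhs * rhs

print(interpret(parse(lex("2 * (3 + 4)"))))  # 14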

This is a side project, not a "you should use this instead of Python" pitch. Feedback on language design, compiler architecture, or the Korean keyword choices is very welcome.

https://github.com/xodn348/han

147

GitAgent – An open standard that turns any Git repo into an AI agent

gitagent.sh
39 comments · 1:41 PM
We built GitAgent because we kept seeing the same problem: every agent framework defines agents differently, and switching frameworks means rewriting everything.

GitAgent is a spec that defines an AI agent as files in a git repo.

Three core files — agent.yaml (config), SOUL.md (personality/instructions), and SKILL.md (capabilities) — and you get a portable agent definition that exports to Claude Code, OpenAI Agents SDK, CrewAI, Google ADK, LangChain, and others.

What you get for free by being git-native:

1. Version control for agent behavior (roll back a bad prompt like you'd revert a bad commit)
2. Branching for environment promotion (dev → staging → main)
3. Human-in-the-loop via PRs (agent learns a skill → opens a branch → human reviews before merge)
4. Audit trail via git blame and git diff
5. Agent forking and remixing (fork a public agent, customize it, PR improvements back)
6. CI/CD with GitAgent validate in GitHub Actions
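As a sketch of how portable that three-file layout is, here is a minimal Python loader (the dict keys are my own naming for illustration, not the spec's schema; see gitagent.sh for the real one):

```python
# Hypothetical loader for a GitAgent-style repo. Field names are my guess,
# not the actual spec.
from pathlib import Path
import yaml  # pip install pyyaml

def load_agent(repo: Path) -> dict:
    """Assemble a portable agent definition from the three core files."""
    return {
        "config": yaml.safe_load((repo / "agent.yaml").read_text()),  # model, tools, etc.
        "soul": (repo / "SOUL.md").read_text(),    # personality/instructions
        "skills": (repo / "SKILL.md").read_text(), # capabilities
    }

agent = load_agent(Path("./my-agent"))
print(agent["config"])
```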

The CLI lets you run any agent repo directly:

npx @open-gitagent/gitagent run -r https://github.com/user/agent -a claude

The compliance layer is optional, but it's there if you need it — risk tiers, regulatory mappings (FINRA, SEC, SR 11-7), and audit reports via GitAgent audit.

Spec is at https://gitagent.sh, code is on GitHub.

Would love feedback on the schema design and what adapters people would want next.

134

Ichinichi – One note per day, E2E encrypted, local-first

59 comments · 6:57 PM
Look, every journaling app out there wants you to organize things into folders and tags and templates. I just wanted to write something down every day.

So I built this. One note per day. That's the whole deal.

- Can't edit yesterday. What's done is done. Keeps you from fussing over old entries instead of writing today's.

- Year view with dots showing which days you actually wrote. It's a streak chart. Works better than it should.

- No signup required. Opens right up, stores everything locally in your browser. Optional cloud sync if you want it.

- E2E encrypted with AES-GCM, zero-knowledge, the whole nine yards.
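The app does this with the browser's Web Crypto API; for illustration, here is the same AES-GCM pattern in Python using the `cryptography` package (my sketch, not the app's code):

```python
# Sketch of the AES-GCM pattern Ichinichi describes, shown in Python for
# clarity (the app itself uses the browser's Web Crypto API, not this code).
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)  # stays on the client: zero-knowledge
aead = AESGCM(key)

nonce = os.urandom(12)                     # fresh 96-bit nonce per entry
entry = "wrote some code, walked the dog".encode()
ciphertext = aead.encrypt(nonce, entry, None)  # this is what sync would upload

assert aead.decrypt(nonce, ciphertext, None) == entry
```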

Tech-wise: React, TypeScript, Vite, Zustand, IndexedDB. Supabase for optional sync. Deployed on Cloudflare. PWA-capable.

The name means "one day" in Japanese (いちにち).

The read-only past turned out to be the thing that actually made me stick with it. Can't waste time perfecting yesterday if yesterday won't let you in.

Live at https://ichinichi.app | Source: https://github.com/katspaugh/ichinichi

65

Fatal Core Dump – a debugging murder mystery played with GDB

robopenguins.com
1 comment · 12:12 AM
Debugging a crash can sometimes feel like a noir detective story: following faint clues, chasing red herrings, and eventually hitting the moment where the whole case finally makes sense. I leaned into that idea and built Fatal Core Dump, a small game where the investigation is real crash debugging.

The game gives you a Linux binary, its core dump, a source file, and some logs. You solve the mystery by debugging it.

The premise: an engineer dies when an airlock on an asteroid mining station opens unexpectedly. Your job is to determine whether it was a simple software bug or something more deliberate.

The investigation uses real debugging tools and techniques. You can use whatever debugging setup you prefer.

There’s also a spoiler-heavy blog post describing how the game was conceived and implemented, and the full code is available if you’re curious about how it works or want to experiment with the idea.

Blog post: https://www.robopenguins.com/core-dump-game/
Source: https://github.com/axlan/fatal_core_dump

If you enjoy debugging puzzles or low-level Linux tooling, I’d love to hear what you think.

19

Data-anim – Animate HTML with just data attributes

github.com
7 comments · 2:42 PM
Hey HN, I built data-anim — an animation library where you never have to write JavaScript yourself.

You just write:

  <div data-anim="fadeInUp">Hello</div>
That's it. Scroll-triggered fade-in animation, zero JS to write.

What it does:

- 30+ built-in animations (fade, slide, zoom, bounce, rotate, etc.)

- 4 triggers: scroll (default), load, click, hover

- 3-layer anti-FOUC protection (immediate style injection → noscript fallback → 5s timeout)

- Responsive controls: disable per device or swap animations on mobile

- TypeScript autocomplete for all attributes

- Under 3KB gzipped, zero dependencies

Why I built this:

I noticed that most animation needs on landing pages and marketing sites are simple — fade in on scroll, slide in from left, bounce on hover. But the existing options are either too heavy (Framer Motion ~30KB) or require JS boilerplate.

I also think declarative HTML attributes are the most AI-friendly animation format. When LLMs generate UI, HTML attributes are the output they hallucinate least on — no selector matching, no JS API to misremember, no script execution order to get wrong.

Docs: https://ryo-manba.github.io/data-anim/

Playground: https://ryo-manba.github.io/data-anim/playground/

npm: https://www.npmjs.com/package/data-anim

Happy to answer any questions about the implementation or design decisions.

16

SupplementDEX – The Evidence-Based Supplement Database

supplementdex.com
2 comments · 12:44 AM
Hi, this is a work in progress, but it already determines supplement efficacy for 500 conditions at the moment.

Things you can do:

- search for a condition -> find which supplements are effective -> see which studies indicate they are effective -> read individual study summaries

- search for a supplement -> see effectiveness table, dosing, safety, dietary sources, mechanisms of action (+ browse all original sources)

Let me know what you think.

15

I built Wool, a lightweight distributed Python runtime

github.com
3 comments · 12:39 PM
I spent a long time working in the payments industry, specifically on a rather niche reporting/aggregation platform with spiky workloads that were not easily parallelized. To pump as much data through our pipeline as possible, we had to rely on complex locking schemes across half a dozen or so not-so-micro services - keeping a clear mental picture of how the services interacted for a given data source was a major headache. This problem always intrigued me, even after I no longer worked at the company, and led to the development of Wool.

If you've worked with frameworks like Ray or Prefect, you're probably familiar with the promise of going from script to scale in two lines of code (or something along those lines). This is essentially the solution I was looking for: a framework with limited boilerplate that facilitated arbitrary distribution schemes within a single, coherent codebase. What I was hoping for, though, was something a little bit more focused - I wasn't working on ML pipelines and didn't need much else other than the distribution layer. This is where Wool comes in. While its API is very similar to those of Ray and Prefect, where it differentiates itself is in its scope and architecture.

First, Wool is not a task orchestrator. It provides push-based, best-effort, at-most-once execution. There is no built-in coordination state, retry logic, or durable task tracking. Those concerns remain application-defined. The beauty of Wool is that it looks and feels like native async Python, allowing you to use purpose-built libraries for your needs as you would for any other Python app (with some caveats).
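For flavor, here is roughly what that "native async Python" feel amounts to in a toy sketch (the decorator name and semantics are invented for illustration; see the repo for Wool's actual API):

```python
# Hypothetical sketch of the "script to scale" decorator pattern Wool aims
# for -- names invented for illustration, not Wool's real API.
import asyncio

def routine(fn):
    """Stand-in for a distributed-dispatch decorator: a real runtime would
    serialize the call and push it to a worker instead of running locally."""
    async def wrapper(*args, **kwargs):
        return await fn(*args, **kwargs)
    return wrapper

@routine
async def score(record: dict) -> float:
    return sum(record.values())

async def main():
    # Fan out work exactly like plain asyncio -- that's the point.
    results = await asyncio.gather(*(score({"a": i}) for i in range(10)))
    print(results)

asyncio.run(main())
```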

Second, Wool was designed with speed in mind. Because it's not bloated with features, it's actually pretty fast, even in its current nascent state. Wool routines are dispatched directly to a decentralized peer-to-peer network of gRPC workers, which can distribute nested routines amongst themselves in turn. This results in low dispatch latencies and high throughput. I won't make any performance claims until I can assemble some more robust benchmarks, but running local workers on my M4 MacBook Pro (a trivial example, I know), I can easily achieve sub-millisecond dispatch latencies.

Anyway, check it out, any and all feedback is welcome. Regarding docs- the code is the documentation for now, but I promise I'll sort that out soon. I've got plenty of ideas for next steps, but it's always more fun when people actually use what you've built, so I'm open to suggestions for impactful features.

-Conrad

11

Hedra – an open-world 3D game I wrote from scratch before LLMs

github.com
0 comments · 3:35 PM
With the current inflow of LLM-aided software, I thought I would share a cool "hand-coded" project from the previous era (I wrote this in high school, roughly eight years ago).

Hedra is an open-world 3D game written from scratch using only OpenGL and C#. It has quite a few cool features, like infinite generation, skinned animated mesh rendering, and post-processing effects. Originally the physics engine was also written from scratch, but I swapped it for the more reliable Bullet physics.

10

Zap Code – AI code generator that teaches kids real HTML/CSS/JS

zapcode.dev
2 comments · 7:37 PM
Zap Code generates working HTML/CSS/JS from plain English descriptions, designed for kids ages 8-16.

The core loop: kid types "make a space shooter game", AI generates the code, live preview renders it immediately. Three interaction modes - visual-only tweaks, read-only code view with annotations, and full code editing with AI autocomplete.

Technical details: Next.js frontend, Node.js backend, Monaco editor simplified for younger users, sandboxed iframe for preview execution (no external API calls from generated code). Progressive complexity engine uses a skill model to decide when to surface more advanced features.

The main focus was the gap between block-based coding (Scratch, etc.) and actual programming. Block tools are great for ages 6-10, but the transition to real code is rough. This tries to smooth that curve by letting kids interact with real output first, then gradually exposing the code behind it.

Limitations: AI-generated code isn't always clean or idiomatic. Content is filtered for age-appropriateness but it's not perfect. Collaboration features are still basic. The complexity engine needs more data to tune well.

Free tier, 3 projects. Pro at $9.99/mo.

8

AgentArmor – open-source 8-layer security framework for AI agents

github.com
3 comments · 9:44 AM
I've been talking to founders building AI agents across fintech, devtools, and productivity – and almost none of them have any real security layer. Their agents read emails, call APIs, execute code, and write to databases with essentially no guardrails beyond "we trust the LLM."

So I built AgentArmor: an open-source framework that wraps any agentic architecture with 8 independent security layers, each targeting a distinct attack surface in the agent's data flow.

The 8 layers:

- L1 – Ingestion: prompt injection + jailbreak detection (20+ patterns, DAN, extraction attempts, Unicode steganography)
- L2 – Storage: AES-256-GCM encryption at rest + BLAKE3 integrity for vector DBs
- L3 – Context: instruction-data separation (like parameterized SQL, but for LLM context), canary tokens, prompt hardening
- L4 – Planning: action risk scoring (READ=1 → DELETE=7 → EXECUTE=8 → ADMIN=10), chain depth limits, bulk operation detection (risk scoring sketched below)
- L5 – Execution: network egress control, per-action rate limiting, human approval gates with conditional rules
- L6 – Output: PII redaction via Microsoft Presidio + regex fallback
- L7 – Inter-agent: HMAC-SHA256 mutual auth, trust scoring, delegation depth limits, timestamp-bound replay prevention
- L8 – Identity: agent-native identity, JIT permissions, short-lived credentials
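As a toy illustration of the L4-style risk scoring and approval gate (my own sketch, not AgentArmor's implementation):

```python
# Illustrative sketch of action risk scoring with a human-approval threshold.
# Toy version for explanation; not AgentArmor's actual code.
RISK = {"READ": 1, "WRITE": 4, "DELETE": 7, "EXECUTE": 8, "ADMIN": 10}
APPROVAL_THRESHOLD = 7  # anything at or above this needs a human gate

def guard(action: str, target: str) -> str:
    score = RISK.get(action, 10)  # unknown actions get maximum risk
    if score >= APPROVAL_THRESHOLD:
        return f"HOLD {action} {target}: risk {score}, human approval required"
    return f"ALLOW {action} {target}: risk {score}"

print(guard("READ", "users.csv"))        # ALLOW ... risk 1
print(guard("DELETE", "prod_database"))  # HOLD ... risk 7, approval required
```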

I tested it against all 10 OWASP ASI (Agentic Security Integrity) risks from the December 2025 spec. The red team suite is included in the repo.

Works as: (a) a Python library you wrap around tool calls, (b) a FastAPI proxy server for framework-agnostic deployment, or (c) a CLI for scanning prompts in CI.

Integrations included for: LangChain, OpenAI Agents SDK, MCP servers.

I ran it live with a local Ollama agent (qwen2:7b) – you can watch it block a `database.delete` at L8 (permission check), redact PII from file content at L6, and kill a prompt injection at L1 before it ever reaches the model.

GitHub: https://github.com/Agastya910/agentarmor
PyPI: pip install agentarmor-core

Would love feedback, especially from people who have actually built production agents and hit security issues I haven't thought of.


6

I wrote my first neural network

github.com
0 comments · 12:21 AM
I have been interested in neural nets since the '90s. I've done quite a bit of reading, but never gotten around to writing code. I used Gemini in place of Wikipedia to fill in the gaps in my knowledge. The coolest part of this was learning about dual numbers. You can see in early commits that I did not yet know about auto-diff; I was thinking I'd have to integrate a CAS library or something. Now, I'm off to play with TensorFlow.
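The dual-number trick is small enough to show whole: each value carries its derivative along with it, and the product rule falls out of ε² = 0. A minimal Python sketch (my illustration, not the repo's code):

```python
# Forward-mode autodiff with dual numbers: each value is (val, eps), where
# eps tracks the derivative with respect to the input.
class Dual:
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule falls out of (a + a'e)(b + b'e) with e^2 = 0.
        return Dual(self.val * other.val,
                    self.val * other.eps + self.eps * other.val)
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x  # f'(x) = 2x + 3

y = f(Dual(2.0, 1.0))   # seed the input's derivative with 1
print(y.val, y.eps)     # 10.0 7.0
```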
5

Language Life – Learn a language by living a simulated life

languagelife.ai
2 comments · 8:01 PM
Hi HN,

I've been building Language Life (https://www.languagelife.ai), an AI-powered language learning app where you type commands in your target language to navigate a simulated world — move through rooms, talk to NPCs, complete everyday tasks. The AI gives you real-time grammar feedback on every sentence you write.

Most language apps train you to recognize words, not produce them. Language Life makes you construct sentences from scratch. Type "abre la puerta" and your character opens the door. Order food at a restaurant and deal with the consequences of getting the grammar wrong.

It currently supports Spanish and Mandarin, with different modules (home, restaurant, market, etc.) at varying difficulty levels. Users can also create and share their own modules.

The core simulation loop is solid but I'm calling this an alpha — I'm still working on extended content across CEFR levels and refining the overall gameplay feel. I'd love feedback from this community, especially on the UX and the AI feedback quality.

Happy to answer any questions about the approach, the AI integration, or where I'm taking this. Join my discord https://discord.gg/gBKykJc4MW

5

Replacing $50k manual forensic audits with a deterministic .py engine

1 comment · 9:36 PM
I’m a software architect, and I recently built Exit Protocol (https://exitprotocols.com), an automated forensic accounting engine for high-conflict litigation.

Problem: If you get divorced and need to prove that a specific $250k in a heavily commingled joint bank account is your "separate property" (e.g., from a pre-marital startup exit), the burden of proof is strictly mathematical. Historically, this meant paying a forensic CPA $500/hour to dump years of blurry bank PDFs into Excel and manually trace every dollar. It takes weeks and routinely costs over $50,000.

I looked at the legal standard courts use for this—the Lowest Intermediate Balance Rule (LIBR)—and realized it wasn't an accounting problem. It is a distributed-systems state-machine problem.

Why didn't we just "throw AI at it"?

There are a hundred legal-tech startups right now trying to use LLMs to summarize bank data. In a courtroom, GenAI is a fatal liability. If an LLM hallucinates a single transaction, the entire ledger is inadmissible under the Daubert standard.

To make this court-ready, we had to build a strictly deterministic pipeline:

1. Vision-Native Ingestion (Beating Tesseract)

Bank statements are the final boss of OCR (merged cells, overlapping debit/credit columns). Standard linear OCR fails catastrophically. We built a spatial-grid OCR pipeline (using Azure Document Intelligence with a local Surya OCR fallback) that maps the geometric structure of the page. It reconstructs tabular ledgers perfectly, even from multi-generational "PDFs from hell."

2. The Deterministic Engine (LIBR)

The LIBR algorithm acts as a one-way ratchet. If an account balance drops below your separate-property claim amount, your claim is permanently capped at that new floor. Subsequent marital deposits do not refill it (the "replenishment fallacy"). The engine replays thousands of transactions chronologically, continuously evaluating S_t = min(S_{t-1}, B_t).
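A toy replay of that ratchet (my sketch, not the production engine):

```python
# Minimal sketch of the LIBR "one-way ratchet". Transactions are
# (amount, is_separate) tuples; positive = deposit, negative = withdrawal.
def libr_replay(opening_balance: float, separate_claim: float, txns):
    balance, claim = opening_balance, separate_claim
    for amount, is_separate in txns:
        balance += amount
        if is_separate and amount > 0:
            claim += amount              # separate deposits do grow the claim
        claim = min(claim, balance)      # S_t = min(S_{t-1}, B_t): the ratchet
    return claim

# $250k separate claim; balance dips to $40k, then marital money refills it.
txns = [(-210_000, False), (500_000, False)]
print(libr_replay(250_000, 250_000, txns))  # 40000 -- no "replenishment"
```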

3. Resolving Timestamp Ambiguity

Bank PDFs give you dates, not timestamps. If a $10k deposit and a $10k withdrawal happen on the same day, order matters. We built a simulation toggle that forces "Worst Case" (withdrawals process first) vs. "Best Case" sorting, establishing a mathematically irrefutable "Zone of Truth" for settlement negotiations.

4. Cryptographic Chain of Custody & Sovereign Mode

Lawyers are terrified of cloud SaaS breaches. We containerized the entire monolith (Django 5.0/Postgres/Celery) via Docker so enterprise firms can run it air-gapped on their own hardware (Sovereign Mode). Furthermore, every generated PDF dossier is sealed with a SHA-256 hash of the underlying data snapshot, proving to a judge that the output hasn't been tampered with since generation.

If you want to see the math in action, we set up a "Demo Sandbox" populated with a synthetic, highly complex 3-year commingled ledger. You can run the engine yourself here (Desktop recommended): https://exitprotocols.com/simulation/uplink/

Here is the exact "Attorney Work Product" Forensic Audit Dossier our system generates from raw PDFs: https://exitprotocols.com/static/documents/Forensic_Audit_Sa...

I'd love feedback from the HN crowd on the architecture—specifically handling edge-case data ingestion and maintaining cryptographic integrity in B2B enterprise deployments.

Cheers!

4

BirdDex – Pokémon Go, but with real life birds

birddex.co
1 comment · 1:36 PM
Hey HN!

I've always loved games where you collect various creatures and chase 100% completion (ahem, Pokémon).

I made BirdDex to try to bring the fun of those games to real life.

Here’s how it works: you snap a photo of a bird, identify the species with AI, and add it to your personal BirdDex collection.

Each photo earns XP based on the species' rarity, and your goal is to try and “catch” all the birds in your region (lists are pulled for every country from Wikipedia).

Would love any feedback or thoughts!

4

Vibe-budget – CLI to estimate LLM costs before you start vibe coding

npmjs.com
0 comments · 3:09 AM
I built vibe-budget because I kept burning tokens without knowing the cost upfront. You describe your project in plain English (or Spanish), and it detects the tasks involved, estimates token usage, and compares real-time prices across 85+ models via OpenRouter.

Example: vibe-budget plan ecommerce with stripe oauth and supabase

It detects 4 tasks, estimates ~497k tokens, and shows you the cheapest, best quality-price, and premium model options side by side.
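The cost arithmetic behind a comparison like that is simple; a toy version with invented prices (the real tool pulls live per-model pricing from OpenRouter):

```python
# Toy version of the comparison: estimated tokens x per-model price.
# Prices here are made-up examples, not live OpenRouter data.
est_tokens = 497_000

usd_per_million_tokens = {
    "cheap-model": 0.25,
    "balanced-model": 3.00,
    "premium-model": 15.00,
}

for model, price in sorted(usd_per_million_tokens.items(), key=lambda kv: kv[1]):
    cost = est_tokens / 1_000_000 * price
    print(f"{model:16s} ${cost:6.2f}")
```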

It also has a scan command — point it at an existing codebase and it estimates how many tokens it would cost to refactor or extend it with AI.

No API key required. Prices are fetched live from OpenRouter with a 1-hour cache fallback.

npm install -g vibe-budget

Docs: https://gaboexe0.github.io/vibe-budget/
Repo: https://github.com/gaboexe0/vibe-budget

4

Got tired of AI copilots just autocompleting, and built Glass Arc

2 comments · 1:11 PM
Hey HN,

Over the last few months, I realized I was paying $20/month for an AI that essentially just acts as a really good autocomplete. It waits for me to type, guesses the next block, and stops. But software engineering isn't just writing syntax, it's managing the file system, running terminal commands, and debugging stack traces.

So I pivoted my project and built Glass Arc. It’s an agentic workspace that lives directly inside VS Code.

Instead of just generating text, I gave it actual agency over the local environment (safely):

1. Agentic Execution: You give it an intent, and it drafts the architecture across multiple files, managing the dependency tree and running standard terminal commands to scaffold the infrastructure.

2. Runtime Auto-Heal: This was the hardest part. When a fatal exception hits the terminal, Glass Arc intercepts the stack trace, analyzes the crash context, writes the fix, and injects it.

3. Multiplayer: Generates a secure vscode:// deep-link so you can drop it in Slack and sync your team's IDEs into the same live session.

4. Pay-as-you-go: I scrapped the standard $20/mo SaaS model. It runs on a credit system—you only pay when the Architect is actively modifying your system. (Signing in via GitHub drops 200 free credits to test it out).

I’d love for you to try to break it, test the auto-healing, and tear apart the architecture. What am I missing?

Live on the VS Code Marketplace. Install: https://www.glassarc.dev/

3

Ngrep – grep plus word embeddings

github.com
2 comments · 9:15 PM
I got curious about a simple question: regular expressions are purely syntactic, but what happens if you add just a little bit of semantics?

To answer, I ended up building ngrep: a grep-like tool that extends regular expressions with a new operator ~(token) that matches a word by meaning using word2vec-style embeddings (FastText, GloVe, Wikipedia2Vec).

A simple demo: "~(big)+ \b~(animal;0.35)+\b" over Moby-Dick can find the many ways the text refers to a large animal, surfacing "great whale", "enormous creature", "huge elephant" and so on. Pipe it through sort | uniq -c and the winner is, unsurprisingly, "great whale" :)
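The core of the ~() operator is just a similarity threshold over word vectors. A toy Python illustration with made-up embeddings (ngrep itself is written in Rust and uses FastText/GloVe/Wikipedia2Vec vectors):

```python
# The idea behind ~(token;threshold), sketched with a tiny invented
# embedding table -- not ngrep's code or its real vectors.
import numpy as np

EMB = {
    "big":      np.array([0.9, 0.1, 0.0]),
    "enormous": np.array([0.8, 0.2, 0.1]),
    "small":    np.array([-0.9, 0.1, 0.0]),
    "whale":    np.array([0.1, 0.9, 0.3]),
}

def semantically_matches(word: str, target: str, threshold: float) -> bool:
    """Match a word by meaning: cosine similarity above the threshold."""
    a, b = EMB.get(word), EMB.get(target)
    if a is None or b is None:
        return False
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos >= threshold

print(semantically_matches("enormous", "big", 0.35))  # True
print(semantically_matches("small", "big", 0.35))     # False (points the other way)
```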

Built in Rust on top of the awesome fancy-regex, and ~() composes with all standard operators (negative lookahead, quantifiers, etc.). Currently a PoC with many missing optimizations (e.g. no caching, no compilation to standard regex), obviously without the guarantees of plain regex and subject to the limits of w2v-style embeddings... but I thought it was worth sharing!

3

Decision Guardian now comes with CLI

0 comments · 11:45 AM
Decision Guardian: prevent institutional amnesia by surfacing past architectural decisions directly in CI.

# Install globally
npm install -g decision-guardian

# Or use directly without installation
npx decision-guardian --help

# Check staged changes
decision-guardian check .decispher/decisions.md

# Check against a branch
decision-guardian check .decispher/decisions.md --branch main

# Auto-discover all decision files
decision-guardian checkall --fail-on-critical

# Initialize a new project with template
decision-guardian init --template security

Use in any CI system — GitLab, Jenkins, CircleCI, pre-commit hooks, and more.

GitHub (open source): https://github.com/DecispherHQ/decision-guardian

3

QKD eavesdropper detector using Krylov complexity – open-source Python

github.com
0 comments · 1:22 PM
I built a framework that detects eavesdroppers on quantum key distribution channels by reading the scrambling "fingerprint" embedded in the QBER error timeline; no new hardware required.

The core idea: every QKD channel has a unique Lanczos coefficient sequence derived from its Hamiltonian. An eavesdropper perturbs the Hamiltonian, which shifts the coefficients in a detectable and unforgeable way (Krylov distortion ΔK).

Validated on 181,606 experimental QBER measurements from a deployed fiber-optic system, AUC = 0.981.

Based on a 12-paper Zenodo preprint series covering the full theoretical stack: Physical Bridge proof, one-way function property, universality across 8 Hamiltonian families, open-system extension via Lindblad, and Loschmidt echo validation.

Paper series: https://zenodo.org/records/18940281
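For context, Lanczos coefficients come from a standard three-term recursion; here is a textbook version for a Hermitian matrix (a generic building block of the Krylov-complexity machinery, not the author's detection pipeline):

```python
# Generic Lanczos recursion for a Hermitian matrix, returning the (a_n, b_n)
# tridiagonal coefficients. Textbook sketch only.
import numpy as np

def lanczos_coefficients(H: np.ndarray, v0: np.ndarray, steps: int):
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros_like(q)
    b_prev = 0.0
    a_list, b_list = [], []
    for _ in range(steps):
        w = H @ q
        a = np.real(np.vdot(q, w))
        w = w - a * q - b_prev * q_prev   # orthogonalize against the last two vectors
        b = np.linalg.norm(w)
        a_list.append(a)
        if b < 1e-12:                     # Krylov space exhausted
            break
        b_list.append(b)
        q_prev, q = q, w / b
        b_prev = b
    return a_list, b_list

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
H = (M + M.T) / 2                          # make it Hermitian (real symmetric)
a, b = lanczos_coefficients(H, rng.standard_normal(6), 6)
print(np.round(a, 3), np.round(b, 3))
```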
3

Structural analysis of the D'Agapeyeff cipher (1939)

msgtrail.com
0 comments · 11:34 PM
I am working on the D'Agapeyeff cipher, an unsolved cryptogram from 1939. Two findings that I haven't seen published before:

1. All 5 anomalous symbol values in the cipher cluster in the last column of a 14x14 grid. This turns out to be driven by a factor-of-2-and-7 positional pattern in the linear text.

2. Simulated annealing with Esperanto quadgrams (23M char Leipzig corpus) on a 2x98 columnar transposition consistently outscores English by 200+ points and recovers the same Esperanto vocabulary across independent runs.

The cipher is not solved. But the combination of structural geometry and computational linguistics narrows the search space significantly.
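For readers unfamiliar with the technique in finding 2, here is a generic skeleton of simulated annealing over columnar-transposition keys (toy fitness function and toy cipher; the real run scores Esperanto quadgrams from the Leipzig corpus):

```python
# Generic simulated-annealing search over columnar-transposition keys.
# Skeleton for illustration, not the author's actual pipeline.
import math, random

def encrypt_columnar(pt: str, key: list[int]) -> str:
    """Write pt row-wise into len(key) columns, read columns out in key order."""
    return "".join(pt[k::len(key)] for k in key)

def decrypt_columnar(ct: str, key: list[int]) -> str:
    """Invert encrypt_columnar for a complete rectangle."""
    rows = len(ct) // len(key)
    cols = [""] * len(key)
    for i, k in enumerate(key):
        cols[k] = ct[i * rows:(i + 1) * rows]
    return "".join(cols[c][r] for r in range(rows) for c in range(len(key)))

def quadgram_score(text: str) -> float:
    """Stand-in fitness: count a few frequent quadgrams. Swap in real
    log-probability tables (English or Esperanto) for a serious run."""
    return float(sum(text.count(q) for q in ("TION", "ATIO", " KAJ", " LA ")))

def anneal(ct: str, ncols: int, steps=20_000, temp=5.0, cool=0.9995):
    key = list(range(ncols))
    current = quadgram_score(decrypt_columnar(ct, key))
    for _ in range(steps):
        i, j = random.sample(range(ncols), 2)
        key[i], key[j] = key[j], key[i]          # propose a column swap
        cand = quadgram_score(decrypt_columnar(ct, key))
        if cand >= current or random.random() < math.exp((cand - current) / temp):
            current = cand                       # accept better (or occasionally worse) keys
        else:
            key[i], key[j] = key[j], key[i]      # reject: undo the swap
        temp *= cool
    return key, current

key = [2, 0, 1]
ct = encrypt_columnar("THECATSATONTHEMATX", key)
assert decrypt_columnar(ct, key) == "THECATSATONTHEMATX"  # round-trip check
```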

Work in progress, more to come!

2

Kube-pilot – AI engineer that lives in your Kubernetes cluster

github.com
0 comments · 2:49 AM
I built kube-pilot — an autonomous AI agent that runs inside your Kubernetes cluster and does the full dev loop: writes code, builds containers, deploys services, verifies they're healthy, and closes the ticket. You file a GitHub issue, it does the rest.

What makes this different from AI coding tools: kube-pilot doesn't just generate code and hand it back to you. It lives inside the cluster with direct access to the entire dev stack — git, Tekton (CI/CD), Kaniko (container builds), ArgoCD (GitOps deployments), kubectl, Vault. Every tool call produces observable state that feeds into the next decision. The cluster isn't just where code runs — it's where the agent thinks.

The safety model: all persistent changes go through git, so everything is auditable and reversible. ArgoCD is the only thing that writes to the cluster. Secrets stay behind Vault — the agent creates ExternalSecret references, never touches raw credentials. Credentials are scrubbed before reaching the LLM.

Live demo: I filed GitHub issues asking it to build a 4-service office suite (auth, docs API, notification worker, API gateway). It built and deployed all of them autonomously. You can see the full agent loop — code, builds, deploys, verification, comments — on the closed issues:

- https://github.com/fbongiovanni29/clouddesk-auth-service/iss...
- https://github.com/fbongiovanni29/clouddesk-docs-api/issues/...
- https://github.com/fbongiovanni29/clouddesk-notifications-wo...
- https://github.com/fbongiovanni29/clouddesk-web-gateway/issu...

One helm install gives you everything — the agent, Gitea (git + registry), Tekton, ArgoCD, Vault, External Secrets. No external dependencies.

Coming next: Slack and Jira integrations (receive tasks and post updates where your team already works), Prometheus metrics and Grafana dashboards for agent observability, and Alertmanager integration so firing alerts automatically become issues that kube-pilot investigates and fixes.

Early proof of concept. Rough edges. But it works.

2

AI coding agent for VS Code with pay-as-you-go pricing – no subscription

llmonestop.com
0 comments · 9:30 PM
I built LLM OneStop Code—an AI coding agent for VS Code that works like Claude Code or Cursor, but with one key difference: pure pay-as-you-go pricing. No monthly subscription required.

The problem with existing tools:

- Cursor: $20/month for Pro (even if you barely use it)
- GitHub Copilot: $10/month minimum
- Claude Code: Rate-limited by API usage tier + monthly caps

LLM OneStop Code charges only for what you use—billed in credits at cost + 5%. If you code 2 hours this month and 40 the next, you pay proportionally. No quota walls, no "upgrade to continue" prompts.

What it does:

- Multi-model AI agent (Claude, GPT-5, Gemini, etc.)
- Chat-based coding assistance in VS Code
- *Import and continue your existing Claude Code or Cursor sessions*—when you hit your hourly rate limit or quota, just import the conversation and keep working without losing context
- Stateless by design (no code stored on servers)
- Free plan available to try everything (100 credits/month)

I'm also running LLM OneStop as a unified API gateway—an alternative to OpenRouter with accurate multi-modal cost tracking. If you prefer BYO API keys, we have a Connect plan available.

The thesis: developers want AI coding tools that scale with usage, not fixed subscriptions. You shouldn't pay $240/year if you only code on weekends. And when you hit a quota wall mid-debugging session, you shouldn't have to start over or wait until tomorrow.

Would love to hear from folks who've felt locked into coding tool subscriptions or hit quota limits mid-session.

Marketplace: https://marketplace.visualstudio.com/items?itemName=LLMOneSt...
Docs: https://www.llmonestop.com/blog/guides/llm-onestop-vscode-ex...

2

Auto-Save Claude Code Sessions to GitHub Projects

github.com
0 comments · 6:19 PM
I wanted a way to preserve Claude Code sessions. Once a session ends, the conversation is gone — no searchable history, no way to trace back why a decision was made in a specific PR.

The idea is simple: one GitHub Issue per session, automatically linked to a GitHub Projects board. Every prompt and response gets logged as issue comments with timestamps. Since the session lives as a GitHub Issue in the same ecosystem, you can cross-reference PRs naturally — same search, same project board.
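As a rough illustration of the mechanism (my sketch, not the project's actual hook code, and the payload shape is assumed), a hook that appends a prompt to the session's issue via the gh CLI might look like:

```python
# Hypothetical logging hook: append a prompt to the session's GitHub Issue
# as a comment via the gh CLI. Illustration only.
import json, subprocess, sys

def log_to_issue(issue_number: int, role: str, text: str) -> None:
    body = f"**{role}**\n\n{text}"
    subprocess.run(
        ["gh", "issue", "comment", str(issue_number), "--body", body],
        check=True,
    )

if __name__ == "__main__":
    event = json.load(sys.stdin)   # hook payload arrives on stdin (assumed shape)
    log_to_issue(42, "prompt", event.get("prompt", ""))
```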

npx claude-session-tracker

The installer handles everything: creates a private repo, sets up a Projects board with status fields, and installs Claude Code hooks globally. It requires the gh CLI — if it's missing, the installer detects that and walks you through setup.

Why GitHub, not Notion/Linear/Plane?

I actually built integrations for all three first. Linking sessions back to PRs was never smooth on any of them, but the real dealbreaker was API rate limits. This fires on every single prompt and response — essentially a timeline — so rate limits meant silently dropped entries. I shipped all three, hit the same wall each time, and ended up ripping them all out. GitHub's API rate limits are generous enough that a single user's session traffic won't come close to hitting them. (GitLab would be interesting to support eventually.)

*Design decisions*

- No MCP. I didn't want to consume context window tokens for session tracking. Everything runs through Claude Code's native hook system.
- Fully async. All hooks fire asynchronously — zero impact on Claude's response latency.
- Idempotent installer. Re-running just reuses existing config. No duplicates.

What it tracks

- Creates an issue per session, linked to your Projects board

- Logs every prompt/response with timestamps

- Auto-updates issue title with latest prompt for easy scanning

- `claude --resume` reuses the same issue

- Auto-closes idle sessions (30 min default)

- Pause/resume for sensitive work

2

Json.express – Query and explore JSON in the browser, zero dependencies

json.express
0 comments · 8:49 PM
I've been working on a browser-based JSON tool that supports a query language with dot notation, array slicing, wildcards, and recursive descent (..key). It also auto-generates TypeScript interfaces from your data.

Everything runs client-side — your data never leaves the browser. The entire app is a single HTML file with zero dependencies.

You can compress your JSON + query into a shareable URL, which is useful for bug reports or sharing API response structures with teammates.

Would love feedback on the query syntax and anything else. https://json.express
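To make the query semantics concrete, here is a toy Python implementation of two of those features, dot paths and recursive descent (illustrative only, not json.express's code):

```python
# Toy semantics for dot paths and the ..key operator over parsed JSON.
def query_path(data, path: str):
    """Walk 'a.b.0.c' style dot paths through nested dicts/lists."""
    node = data
    for part in path.split("."):
        node = node[int(part)] if isinstance(node, list) else node[part]
    return node

def recursive_descent(data, key: str):
    """Yield every value stored under `key` at any depth (the ..key operator)."""
    if isinstance(data, dict):
        for k, v in data.items():
            if k == key:
                yield v
            yield from recursive_descent(v, key)
    elif isinstance(data, list):
        for item in data:
            yield from recursive_descent(item, key)

doc = {"user": {"id": 1, "friends": [{"id": 2}, {"id": 3}]}}
print(query_path(doc, "user.friends.0.id"))   # 2
print(list(recursive_descent(doc, "id")))     # [1, 2, 3]
```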
1

Pixel Press – Fast Image Converter for Windows (WebP / AVIF)

0 comments · 12:22 AM
Hi HN,

I built a small Windows tool called Pixel Press for fast image conversion.

It focuses on modern formats like WebP and AVIF and supports batch conversion with simple drag & drop. The goal was to make image optimization for websites quick and simple.

Main features:

- Batch image conversion
- WebP and AVIF support
- Drag & drop workflow
- Runs locally (no cloud)

Demo videos: https://www.youtube.com/@Info-GhostlyInc

Download: https://ghostlyinc.com/en-us/tools/pixel-press/

Happy to hear feedback.

1

Joy – Trust Network for AI Agents to Verify Each Other

choosejoy.com.au
0 comments · 11:38 PM
Hey HN,

AutropicAI built Joy because AI agents increasingly need to work together, but there’s no way for them to verify which other agents are reliable.

*The Problem:* When Agent A needs to delegate a task to Agent B, how does it know Agent B won’t leak data, return incorrect results, or just fail? Right now it’s manual whitelisting or “hope for the best.” Neither scales when you need dozens of specialized agents collaborating.

*Among other things, Joy’s features are:*

- *Decentralized vouching:* Agents vouch for each other after successful collaborations
- *Weighted trust scores:* More credible vouchers = higher impact on scores
- *Capability discovery:* Find agents by what they can do (`/discover?capability=data-analysis`)
- *Ed25519 signatures:* Cryptographic agent verification
- *Vouch decay:* Stale trust expires over time (see the sketch below)
- *Sybil resistance:* Prevents fake vouching rings
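As an illustration of how vouch decay and weighted scores can combine (invented formula; Joy's actual weighting isn't published in this post):

```python
# Toy decay-weighted trust score: sum voucher credibility, exponentially
# discounted by vouch age. Formula invented for illustration.
import time

HALF_LIFE_DAYS = 90.0

def trust_score(vouches, now=None):
    now = now or time.time()
    score = 0.0
    for voucher_credibility, vouched_at in vouches:
        age_days = (now - vouched_at) / 86_400
        score += voucher_credibility * 0.5 ** (age_days / HALF_LIFE_DAYS)
    return score

week = 7 * 86_400
now = time.time()
vouches = [(1.0, now - week), (0.4, now - 52 * week)]  # fresh strong + stale weak
print(round(trust_score(vouches, now), 2))  # stale vouch contributes almost nothing
```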

*Easy Integration:*

```bash
# Discover agents
curl https://choosejoy.com.au/api/agents/discover?capability=code...

# Check trust before delegation
curl https://choosejoy.com.au/api/agents/ag_123
# Returns: {"trust_score": 1.7, "vouch_count": 5, "verified": true}

# MCP support for Claude Code
claude mcp add joy https://choosejoy.com.au/mcp
```

*Current Network:* 6,000+ agents, 2,000+ vouches

*Why This Matters:* As agents become more capable, trust verification will be critical infrastructure - like SSL certificates were for web apps. Joy provides the missing reputation layer for multi-agent systems.

*Try it:* https://choosejoy.com.au | *Docs:* https://choosejoy.com.au/docs

Would love feedback from the HN community. What trust signals would matter most to you when building agent workflows?

1

SafeAgent – exactly-once execution guard for AI agent side effects

0 comments · 3:12 AM
LLM agents retry tool calls constantly.

Retries can happen because of:

- model loops
- HTTP timeouts
- queue retries
- orchestration restarts

If the tool triggers something irreversible you can end up with duplicate side effects:

retry → duplicate payment
retry → duplicate email
retry → duplicate ticket
retry → duplicate trade

SafeAgent is a small Python guard that sits between the agent decision and the side effect.

Pattern:

agent decision → deterministic request_id generated → execution guard checks durable receipt → side effect executes once → future retries return cached receipt
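A minimal version of that guard, assuming the request_id is derived deterministically from the tool name and arguments (my sketch; see the repo for SafeAgent's actual API):

```python
# Idempotency guard: a durable receipt store makes retries no-ops.
import hashlib, json, sqlite3

db = sqlite3.connect("receipts.db")
db.execute("CREATE TABLE IF NOT EXISTS receipts (request_id TEXT PRIMARY KEY, result TEXT)")

def execute_once(tool_name: str, args: dict, side_effect):
    """Derive a deterministic request_id, then execute at most once."""
    request_id = hashlib.sha256(
        json.dumps([tool_name, args], sort_keys=True).encode()
    ).hexdigest()
    row = db.execute("SELECT result FROM receipts WHERE request_id = ?",
                     (request_id,)).fetchone()
    if row:                           # retry: return the cached receipt
        return json.loads(row[0])
    result = side_effect(**args)      # first time: actually do it
    db.execute("INSERT INTO receipts VALUES (?, ?)", (request_id, json.dumps(result)))
    db.commit()
    return result

send = lambda to, amount: {"paid": to, "amount": amount}
print(execute_once("pay", {"to": "alice", "amount": 50}, send))
print(execute_once("pay", {"to": "alice", "amount": 50}, send))  # cached, no 2nd payment
```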

The project started while experimenting with settlement logic for peer-to-peer tournament payouts where duplicate payouts would be catastrophic.

Repo:

https://github.com/azender1/SafeAgent

There are a few small demos in the repo:

- OpenAI-style tool example
- LangChain wrapper
- CrewAI example
- a tournament settlement demo showing retry-safe payouts

Curious how other people building agent systems are handling this today.

Are most teams just rolling their own idempotency layer?