Daily Show HN


Show HN for February 27, 2026

68 items
99

Claude-File-Recovery, recover files from your ~/.claude sessions #

github.com
40 comments · 4:26 PM
Claude Code deleted my research and plan markdown files and informed me: “I accidentally rm -rf'd real directories in my Obsidian vault through a symlink it didn't realize was there. I made a mistake.”

Unfortunately, the backup of my documentation hadn't run for a month. So I built claude-file-recovery, a CLI tool and TUI that extracts files from your ~/.claude session history; thankfully, I was able to recover mine. It can extract any file that Claude Code ever read, edited, or wrote, and it can recover an earlier version of a file as it existed at a certain point in time. I hope you never need it, but you can find it on my GitHub and on pip.

pip install claude-file-recovery
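The core idea can be sketched in a few lines of Python. This is a rough sketch, not the tool's actual implementation: the JSONL layout and field names (`tool`, `file_path`, `content`) are assumptions about how a session transcript might record file writes.

```python
import json
from pathlib import Path

def recover_files(session_file):
    """Return the last content seen for each file path in a session
    transcript (JSONL, one tool call per line). Field names are
    assumptions, not the real ~/.claude schema."""
    latest = {}
    for line in Path(session_file).read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        if entry.get("tool") in ("Write", "Edit"):
            path, content = entry.get("file_path"), entry.get("content")
            if path and content is not None:
                latest[path] = content  # later entries win
    return latest
```

Recovering a file at a particular point in time then just means stopping the scan at the entry you care about instead of taking the last one.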

88

Badge that shows how well your codebase fits in an LLM's context window #

github.com
41 comments · 3:14 PM
Small codebases were always a good thing. With coding agents, there's now a huge advantage to having a codebase small enough that an agent can hold the full thing in context.

Repo Tokens is a GitHub Action that counts your codebase's size in tokens (using tiktoken) and updates a badge in your README. The badge color reflects what percentage of an LLM's context window the codebase fills: green for under 30%, yellow for 50-70%, red for 70%+. The context window size is configurable and defaults to 200k (the size of Claude models' context window).

It's a composite action. Installs tiktoken, runs ~60 lines of inline Python, takes about 10 seconds. The action updates the README but doesn't commit, so your workflow controls the git strategy.
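The color logic boils down to a few comparisons. A sketch (the function name is hypothetical, and I've treated the unspecified 30-50% range as yellow):

```python
def badge_color(tokens, context_window=200_000):
    """Map a repo's token count to a badge color. Thresholds follow
    the description above; 30-50% is treated as yellow here."""
    pct = 100 * tokens / context_window
    if pct < 30:
        return "green"
    if pct < 70:
        return "yellow"
    return "red"
```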

The idea is to make token size a visible metric, like bundle size badges for JS libraries. Hopefully a small nudge to keep codebases lean and agent-friendly.

GitHub: https://github.com/qwibitai/nanoclaw/tree/main/repo-tokens

61

Vibe Code your 3D Models #

github.com
20 comments · 5:27 PM
Hi HN,

I’m the creator of SynapsCAD, an open-source desktop application I've been building that combines an OpenSCAD code editor, a real-time 3D viewport, and an AI assistant.

You can write OpenSCAD code, compile it directly to a 3D mesh, and use an LLM (OpenAI, Claude, Gemini, ...) to modify the code through natural language.

Demo video: https://www.youtube.com/watch?v=cN8a5UozS5Q

A bit about the architecture:

- It’s built entirely in Rust.

- The UI and 3D viewport are powered by Bevy 0.15 and egui.

- It uses a pure-Rust compilation pipeline (openscad-rs for parsing and csgrs for constructive solid geometry rendering) so there are no external tools or WASM required.

- Async AI network calls are handled by Tokio in the background to keep the Bevy render loop smooth.

Disclaimer: This is a very early prototype. The OpenSCAD parser/compiler doesn't support everything perfectly yet, so you will definitely hit some rough edges if you throw complex scripts at it.

I mostly just want to get this into the hands of people who tinker with CAD or Rust.

I'd be super happy for any feedback, architectural critiques, or bug reports — especially specific OpenSCAD snippets that break the compiler, dropped into the GitHub issues!

GitHub (Downloads for Win/Mac/Linux): https://github.com/ierror/synaps-cad

Happy to answer any questions about the tech stack or the roadmap!

12

CodeLeash: framework for quality agent development, NOT an orchestrator #

codeleash.dev
4 comments · 3:54 AM
Hi HN,

I built my first project using an LLM in mid-2024. I've been excited ever since. But of course, at some point it all turns into a mess.

You see, software is an intricate, interwoven collection of tiny details. Good software gets many details right and does not regress as it gains functionality.

My bootstrapped startup, ApprovIQ (https://approviq.com), is trying to break into a mature market with multiple fully featured competitors. I need to get the details right: MVP quality won't sell. So I opted for Test-Driven Development, the classic red/green/refactor. Writing tests that fail, then making them pass, forces you to document in your tests every decision that went into the code, which makes it a universal way to construct software. With TDD, you don't need to hold context in your head about how things should work. Your software can be as intricate as you like and still be resilient to regression. Bug in a third-party dependency? Get a failing test, make it pass. Anyone who undoes your fix will see the test fail.

At the same time as doing TDD with Claude Code, I discovered that agents obey any instructions put in front of them. So I started adding super-advanced linting: architectural guideline enforcement, scripts that walk the codebase's AST and enforce my architecture, even one that allows only our brand colors in the codebase. That one is great because it prevents agents from picking ugly, generic "AI" colors in frontends. Because the check blocks commits with off-brand colors, our product looks far less like an AI built it, without any human involvement.
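A brand-color check like that can be surprisingly small. A minimal sketch (the palette and function name are made up, not CodeLeash's actual rule):

```python
import re

BRAND_COLORS = {"#1a2b3c", "#ffffff"}  # hypothetical palette
HEX_COLOR = re.compile(r"#[0-9a-fA-F]{6}\b")

def off_brand_colors(source):
    """Return every 6-digit hex color in the source that is not
    in the brand palette; a non-empty result blocks the commit."""
    found = {m.group(0).lower() for m in HEX_COLOR.finditer(source)}
    return sorted(found - BRAND_COLORS)
```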

In time, I was no longer in the details of what the agent was building; I was mostly supervising the TDD process while it implemented our product. Once that got tedious, I automated it into a state machine too.

All the ideas that now allow me to build at high quality are in this repo.

This isn't your weekend vibe project; I've spent months refining the framework. There are rough edges, but better out and rough than hiding until perfect.

Hopefully some ideas here help you or your agent. I recommend cloning it and letting your agent have a look! If you want to contribute, please do, and if you want to get in touch, contact details are in my profile.

Thanks for looking.

11

BananaOS, vibecoded operating system that boots on a 486 with ~11MB RAM #

4 comments · 5:43 PM
My 10-year-old son has been deep in low-level rabbit holes lately and ended up vibe-coding his own operating system. Since he’s still a kid and not on HN himself, I’m posting this on his behalf with his permission.

This started as curiosity about how computers actually boot, and somehow escalated into writing a kernel, building a GUI, and setting up CI that produces a bootable OS image on every commit.

BananaOS is a small experimental operating system built mainly for learning and exploration of low-level systems programming. It currently targets i386 BIOS systems and is designed to run on extremely constrained hardware. Fun fact: Wallpaper logic, one of the most important OS functionalities, is directly implemented in the kernel. That cracked my son up!

Some highlights:

Multiboot-compliant kernel loaded via GRUB

VESA framebuffer graphics with double buffering

Custom window manager with movable and resizable windows

Dock-style application launcher

PS/2 keyboard and mouse input handling

PCI enumeration and AHCI SATA support

Basic applications (terminal, notepad, calculator, file explorer, settings)

Memory detection and allocation based on available RAM

Boots in QEMU with about 11.2 MB RAM

Includes an ISR workaround to emulate CMOV so it can boot on Intel 486 CPUs

One thing I found particularly fun: he also added GitHub Actions workflows that automatically build the OS image for every commit, so the repo continuously produces fresh bootable artifacts.

The project is very much experimental and should only be run inside a virtual machine.

Repo (with build instructions and screenshots):

https://github.com/Banaxi-Tech/BananaOS

Quick start (Linux only; check dependencies and see the README):

git clone https://github.com/Banaxi-Tech/BananaOS
cd BananaOS
make
qemu-system-i386 -cdrom bananaos.img -m 128M

Retro mode:

qemu-system-i386 -cpu 486 -cdrom bananaos.img -m 11.2M

He’s mainly building this to understand kernels, memory management, drivers, and how operating systems actually work below user space.

Feedback from people who have built hobby operating systems or worked close to hardware would be especially appreciated.

8

PDF reader with interactive visualizations for any concept #

zerodistract.com
2 comments · 5:47 PM
Hey HN,

I built a PDF reader that can generate interactive visualizations right inside the document. The goal is to reduce the “intuition gap” when reading dense papers: select a concept, click Visualize, and it tries to produce a solid visual interactive app where you can rotate/zoom/step through (not just a text explanation).

Upload any PDF, select text, click Visualize.

Demo (no signup): https://zerodistract.com/try/pdf/67cdee74-810b-4f1b-af7d-010...

Product link: https://zerodistract.com

I’d love feedback, especially on what feels useful vs. distracting, and where it breaks.

8

Goatpad #

goatpad.xyz
0 comments · 5:19 PM
Think Notepad, but with goats!

It started as a joke with some friends, and then I realized this was the perfect project to see how far I could get with Claude without opening my IDE (something I'd wanted to try for a while with a small app).

I was pretty shocked to find that I only needed to manually intervene for:

1. Initializing the repo
2. Generating sprites: I tried a few image-gen tools but couldn't get a non-messy-looking sprite to generate to my liking, so I ended up using some free goat sprites I found instead (credited in the About section)
3. Uploading images/sprite sheets (raw Claude Code can't do this for some reason?)
4. DNS stuff

Aside from agents timing out/hanging periodically and some style hand-holding, it was pretty straightforward and consistently accurate natural-language coding end to end. I suspect this is due in large part to replicating an existing, well-documented style of app, but it was good practice for other projects I have planned.

The goats slowly (or quickly if you change modes) eat your note and if they consume more than half of it, you lose the file forever. I did this as an exercise to practice some gamelike visuals I've wanted to implement, but was surprised to find that this is actually a perfect forcing function to help me stay focused on text editor style tasks.

I tend to get distracted mid-stream, and the risk of losing the file when I tab away has mitigated that more than I expected.

Enjoy!

6

A self-hosted OAuth 2.0 server for authenticating AI agents and machines #

github.com
0 comments · 3:09 AM
MachineAuth is a self-hosted OAuth 2.0 server for authenticating AI agents and machines.

What is an AI agent in this context? A software bot (like OpenCLAW, Claude Code, etc.) that makes API calls to access protected resources. Instead of sharing long-lived API keys, your agents can authenticate using OAuth 2.0 Client Credentials and receive short-lived JWT tokens.

Why?

     No more sharing API keys
     Short-lived tokens (configurable)
     Easy credential rotation
     Industry-standard security
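For reference, the Client Credentials grant is just one form-encoded POST to the token endpoint. A sketch of the request body (the endpoint URL and scope names are deployment-specific):

```python
from urllib.parse import urlencode

def token_request_body(client_id, client_secret, scope=None):
    """Build the form body for an OAuth 2.0 Client Credentials grant
    (RFC 6749, section 4.4); POST it to the server's token endpoint
    to receive a short-lived JWT access token."""
    params = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        params["scope"] = scope
    return urlencode(params)
```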
5

I Built a Smart Radio That Auto-Skips Talk and Ads Using ML #

tunejourney.com
3 comments · 12:09 AM
Hi, I built TuneJourney to solve a specific annoyance: radio ads and DJ chatter. The core feature is an in-browser "AI Skip Talk" filter.

The Tech: Instead of processing on a server, it uses the Web Audio API to capture the stream locally and runs a lightweight ML classification model directly in your browser. It estimates the music vs. speech probability in near real-time. If enabled, it automatically triggers a "next" command to hop to another station the moment an ad, news segment, or DJ starts talking.
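The skip decision on top of the classifier presumably needs some debouncing so a brief vocal passage doesn't trigger a station hop. A toy sketch of that layer (class name, threshold, and frame logic are my assumptions, not the app's actual values):

```python
class TalkSkipper:
    """Require several consecutive speech frames above the sensitivity
    threshold before firing a skip, so short vocals don't trigger it."""
    def __init__(self, sensitivity=0.7, consecutive=3):
        self.sensitivity = sensitivity
        self.consecutive = consecutive
        self._run = 0  # consecutive speech frames seen so far

    def update(self, speech_prob):
        if speech_prob > self.sensitivity:
            self._run += 1
        else:
            self._run = 0  # music frame resets the streak
        return self._run >= self.consecutive  # True => skip to next station
```

Raising the sensitivity threshold (the slider in the app) makes fewer frames count as speech, so the skip fires less eagerly.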

Features:

- In-browser inference: entirely local and privacy-focused; no audio data ever leaves your machine.
- WebGL + point clustering: renders 70,000 stations across 11,000 locations smoothly.
- Real-time activity: see other users on the globe and what they are listening to in real time.
- System integration: full media-key support for physical keyboards and system-level Next/Prev buttons.
- Customization: includes a talk-sensitivity slider for the ML model so you can tweak the threshold.

Check it out: https://tunejourney.com

Let me know what you think! I am interested if this project is worth further investment, building a mobile app, etc.

5

Forge-GPU – 55 C lessons for SDL's GPU API, built with Claude Code #

github.com
0 comments · 10:26 PM
Open-source tutorial series teaching real-time graphics programming with SDL's GPU API. Covers everything from Hello Window to SSAO, with math lessons, engine lessons, and a UI track building font rendering from scratch. Every lesson is a standalone C program with commented code explaining why, not just what. The whole project was built with Claude Code. Each lesson also distills into a reusable Claude Code skill — copy them into your own project and build games with AI that actually understands the GPU patterns.
4

SVG Weave. A node graph editor that animates SVGs with AI #

svgweave.com
2 comments · 4:59 PM
Hey HN,

I'm a solo dev and I kept wasting hours hand-writing CSS @keyframes to animate SVGs. Write a keyframe, preview, tweak the timing, preview again, repeat. For anything beyond a simple fade it turns into dozens of rules across multiple elements. I wanted something where I could just describe the motion and get working animations back.

SVG Weave is a visual node graph editor for this. You place nodes on a canvas (SVG input, prompt, render) and connect them with wires. Type what you want the animation to do, hit render, and the AI streams CSS @keyframes back in real time. You see the SVG come alive as tokens arrive.

Things that might be interesting technically:

- Style-inject mode: when only animations are needed, the AI outputs just a <style> block instead of rewriting the full SVG. Faster, and avoids corrupting path data.
- Overlap detection: the system prompt makes the model analyze element layering and restrict partially covered elements to opacity/scale only, preventing hidden edges from being revealed during translation.
- State transitions: connect two SVGs (start and end) and the AI generates a single animated SVG that morphs between them using CSS transforms, opacity, and clip-path.
- Chaining: the output of one render feeds in as input to the next prompt, so you can build complex animations step by step.
- Shadow DOM isolation in the preview modal, so SVG styles don't leak into the host page.

You can also generate SVGs from text descriptions or vectorize raster images in the editor.

Stack: Next.js, React Flow, Convex, Gemini via OpenRouter.

Free signup gets you 20 credits (one render costs about 10). Requires an account to save projects but you can see the full editor immediately.

svgweave.com

4

MCP server for AI compliance documentation (Colorado AI Act) #

github.com
1 comment · 10:44 PM
I built an MCP server that gives AI agents access to compliance documentation — starting with the Colorado AI Act (SB 24-205), effective June 30, 2026.

The problem: organizations deploying AI in hiring, lending, insurance, or healthcare decisions need specific documentation — risk management policies, impact assessments, consumer notifications, bias testing docs, and appeal mechanisms. Most teams either pay $50K+ for a GRC platform, hire a law firm at $500/hr, or wing it.

What I built: compliance protocols that are both human-readable (PDF) and agent-readable (structured JSON via MCP/CLI/API). Your AI assistant can check if you're a deployer, pull protocol schemas, and help you implement them.

Tools available via MCP:

- colorado_ai_act_check — are you a deployer?
- list_protocols — browse by vertical
- get_protocol_schema — structured format for agent implementation
- assess_compliance — gap analysis

Install: npx -y aop-mcp-server

The Colorado AI Act is the first state-level AI governance law with teeth ($20K/violation, AG enforcement). More states are coming.

Site: https://appliedoperationsprotocols.com

3

I Built a $1 Escalating Internet Billboard – Called Space #

spacefilled.com
3 comments · 9:07 PM
Hey HN —

I made something simple called Space.

It’s one digital billboard.

Anyone can buy it. It starts at $1. Every time someone buys it, the price increases by exactly $1.

That’s the whole mechanic.

Why I Built It

I wanted to test a constraint:

What happens when ownership is singular, public, and progressively more expensive?

At $1 it’s impulse. At $100 it’s intentional. At $1,000 it’s a statement.

By the time it reaches $1,000, it will have generated $500,500 in total revenue — purely from the $1 incremental mechanic.
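The $500,500 figure is just the arithmetic series 1 + 2 + ... + 1000. In code:

```python
def total_revenue(final_price):
    """Total revenue once the board has sold at $1, $2, ..., final_price:
    the arithmetic series n * (n + 1) / 2."""
    return final_price * (final_price + 1) // 2
```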

I’m curious about:

How price escalation changes meaning

Whether late buyers value symbolism over reach

What people choose to display when cost forces consideration

The Constraint Layer

The constraint is the point.

Only one “space” exists at a time.

Price is deterministic (+$1 per transaction).

The entire history is embedded in the current price.

The value increases because participation increases it.

No auctions. No bidding logic. No variable pricing.

Just math and participation.

Technical Side (Where I’d Love Feedback)

This has been more interesting to build than I expected.

Some things I’ve been dealing with:

Race conditions around concurrent purchases

Locking logic so two buyers don’t claim the same price

Ensuring atomic increments on the backend

Payment confirmation before state mutation

Preventing replay or double-submission exploits

Keeping it minimal without overengineering it
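The concurrency items above reduce to making read-price-then-claim atomic. A minimal single-process sketch (class and method names are hypothetical; a real backend would use a database transaction or row lock instead of an in-memory lock):

```python
import threading

class Billboard:
    """A lock makes the price check-and-increment atomic, so two
    buyers can never claim the same price."""
    def __init__(self):
        self._lock = threading.Lock()
        self.price = 1
        self.owner = None

    def buy(self, buyer, expected_price):
        with self._lock:
            if expected_price != self.price:
                return False  # someone bought first; client must retry
            self.owner = buyer
            self.price += 1  # deterministic +$1 step
            return True
```

The `expected_price` compare-and-set also defends against replay/double-submission: a stale or duplicated purchase request simply fails instead of charging twice.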

Right now it’s intentionally lightweight. But I’m thinking about:

Should price increments be fully on-chain / provable?

Is there a cleaner way to handle concurrency at scale?

Would you introduce time decay or leave it purely linear?

Should the historical ownership chain be immutable + public?

What safeguards would you add?

Part of me wants to keep it naive and raw. Part of me wants it architecturally tight.

3

CBrowser – Simulate how a confused first-timer experiences your website #

github.com
0 comments · 5:56 PM
https://cbrowser.ai

Most browser automation asks "did the button work?" CBrowser asks "will a real person find the button?"

It's an MCP server (83 tools) that connects to Claude and gives it a real browser. But the core idea is cognitive journey simulation — you pick a persona (first-timer, elderly user, power user, someone with ADHD, low vision, motor tremors) and CBrowser walks your site the way they would. Each persona has 25 cognitive traits (patience, risk tolerance, comprehension, working memory) that determine how they react to friction. When frustration or confusion crosses a threshold, they abandon — just like a real user.
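As a mental model of that abandonment mechanic, the accumulation might look something like this (the trait name, weighting, and threshold are my guesses, not CBrowser's actual model):

```python
def simulate(persona, frictions, threshold=1.0):
    """Toy abandonment model: friction accumulates faster for
    low-patience personas; the journey ends once frustration
    crosses the threshold."""
    frustration = 0.0
    for step, friction in enumerate(frictions):
        frustration += friction * (1.0 - persona["patience"])
        if frustration >= threshold:
            return ("abandoned", step)
    return ("completed", len(frictions))
```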

Each persona also has a motivational values profile based on Schwartz's 10 Universal Values (security, achievement, self-direction, benevolence, etc.) plus Self-Determination Theory needs and Maslow levels. These aren't decorative — they drive which influence patterns each persona is susceptible to. A security-motivated user responds differently to urgency cues than an achievement-motivated one. You can test whether your site's persuasion patterns actually work on your target audience, or only on people who already think like your designers.

What it actually does:

cognitive_journey_init + browser tools = watch a simulated persona attempt a task on your live site

compare_personas = same task, multiple personas, side-by-side friction analysis

competitive_benchmark = run the same journey on your site and a competitor's

empathy_audit = test with disability personas (WCAG violations + lived-experience barriers)

Plus the usual browser automation: navigate, click, fill, screenshot, cross-browser testing, visual regression, performance baselines

The self-healing selectors cache alternatives (aria-label, role, ID, text) at match time and fall back through them ranked by confidence, with a 60% gate so you don't get silent false passes.

TypeScript, MIT-licensed; connect it to Claude.ai via the demo MCP server or install it locally with npm. Built on Playwright.

I'd love feedback on the persona model — are 25 cognitive traits + 10 motivational values too many? Too few? Are the abandonment thresholds realistic?

3

AgentGames.co – my interactive story game creator #

agentgames.co
0 comments · 9:36 PM
Hey all,

I recently built the first version of my project to play and create interactive story games with AI agents.

Each game can have up to 20 agents, each agent can have an image, voice, access to other agents, and resources to enhance the experience. All game configuration has logic conditions you can set. For example, from agent A you can only access agent B if you've visited agent C or you have a specific resource item.
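The agent-A-to-agent-B example above is a small predicate over game state. A sketch of how such a rule might be evaluated (the rule schema here is hypothetical):

```python
def can_access(target, state, rules):
    """Return True if the rule for `target` is satisfied: any listed
    agent already visited, or any listed resource item held."""
    rule = rules.get(target)
    if rule is None:
        return True  # no rule: always reachable
    return (any(a in state["visited"] for a in rule.get("visited_any", []))
            or any(i in state["items"] for i in rule.get("items_any", [])))
```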

You can create anything from a murder mystery to a dungeon crawler. The resources are dynamic and can set drop rates, make stackable for currencies or health, and give them their own images as well.

To play: Type or speak in game. If you want to go somewhere, say where you want to go. To create: Describe what you want. You can manually configure as well.

You can try it for free. I'd really love your feedback. There was a lot of trickiness along the way building this, so I learned a lot. More improvements to come.

Happy to answer any questions, thank you!

2

Caret – tab to complete at any app on your Mac #

trycaret.com
0 comments · 11:06 PM
We've been heads-down building this for a few weeks and want real feedback before we open it up. Caret runs in the background and completes your thoughts across every app: email, terminal, notes, wherever. Hit Tab and it finishes the sentence. We're onboarding the first users ourselves, one by one. So if you're interested, one of the three of us will get on a call with you to set you up and understand how you work. Looking for 20 people.
2

Csv.repair – Free browser-based tool to fix broken CSV files #

github.com
0 comments · 4:00 PM
I built csv.repair, a free, open-source, browser-based tool for analyzing, querying, and repairing broken or oversized CSV files. No file uploads: everything runs locally in your browser. Your data never leaves your machine.

What it does:

- Handles CSV files with millions of rows (virtual scrolling + Web Workers)

- Inline cell editing (double-click to edit, Tab to move between cells)

- Run SQL queries directly on your data (powered by AlaSQL)

- Auto-repair: trim whitespace, remove empty/duplicate rows, standardize dates, fix encoding

- Health diagnostics that flag malformed rows, inconsistent columns, encoding problems

- Column statistics and distribution charts

- Undo/redo, search & replace, keyboard shortcuts

- Export the cleaned file when you're done

Tech stack: React + TypeScript + Vite + PapaParse + AlaSQL + Recharts + Tailwind CSS. Installable as a PWA.
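Two of the auto-repairs above (whitespace trimming, dropping empty/duplicate rows) are easy to picture. A sketch, not the tool's actual code:

```python
def auto_repair(rows):
    """Trim cell whitespace and drop rows that are empty or exact
    duplicates of an earlier (trimmed) row."""
    seen, cleaned = set(), []
    for row in rows:
        trimmed = tuple(cell.strip() for cell in row)
        if not any(trimmed) or trimmed in seen:
            continue
        seen.add(trimmed)
        cleaned.append(list(trimmed))
    return cleaned
```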

Live: https://csv.repair

I'd love feedback especially from people who regularly deal with messy CSV files.

2

Pitch An App – Crowdsourced app ideas with voting and revenue sharing #

pitchanapp.com
0 comments · 3:38 PM
Pitch An App (https://www.pitchanapp.com) is a platform where users submit app ideas, the community votes on them, and ideas that hit the vote threshold get built.

Submitters earn revenue share when their app generates income. Voters get 50% off forever.

9 apps have been built through the platform so far.

The core assumption: non-technical people have good software ideas but no way to act on them. Community voting serves as demand validation before any development starts.

Honest limitations: the vote threshold model means only popular ideas get built, so niche-but-valuable ideas might not make the cut. Working on ways to address that.

Happy to answer questions about the model or the tech.

2

Music Discovery #

secondtrack.co
0 comments · 3:32 PM
I made a tool for discovering new music. It links to artists' Bandcamp pages and, eventually, to your own records listed on the site. I want an intuitive, simple way to discover, buy, and sell records. Looking for any and all feedback.
2

Patterns for coordinating AI agents on real software projects #

github.com
0 comments · 5:21 PM
I'm a former chef with no CS degree, running 3 AI agents (PM, Builder, Auditor) building a D&D 3.5e combat engine. Every pattern in this repo was born from a specific failure.

The failure catalog is the real value. Examples:

- "Ghost Target" (pattern #3): Agent confidently "fixed" a bug that didn't exist. Now we verify gaps before writing code. - "Write-Only Field" (pattern #8): Agent wrote data at character creation that nothing ever read at runtime. 17 fields exposed. Now every data write requires a consume-site proof chain. - "Parallel Drift" (pattern #9): Two code paths computing the same attack logic diverged by 21 modifiers. Now every resolver change requires parity verification across all paths. - "Coverage Theater" (pattern #12): 30+ rows in the coverage map went stale. Tests passed but tracked nothing. Now coverage updates are mandatory in every debrief. - "Governance Consumption Failure" (pattern #15): PM wrote rules. Nobody read them. Process docs had no enforcement mechanism. Now methodology lives in auto-loaded config, not optional docs.

21 patterns total, each with: the failure that created it, the rule that prevents it, and a real example from production.

Not a framework you install — it's a methodology repo. The patterns are language/tool agnostic.

Happy to answer questions about what it's like building real software with multi-agent coordination when you're not a developer.

2

Let your OpenClaw find you clients #

clawhub.ai
0 comments · 3:42 PM
Returns real-time, verified email addresses and phone numbers of businesses anywhere.

How it works: auto-expanding queries over Google Maps business listings and their associated websites (email enrichment).

2

Smart card eID driver written in Zig #

github.com
0 comments · 9:30 PM
In Serbia, for a long time, the only way to use an eID was by having a Windows machine. Over the last year, I slowly implemented an open source module by reading the PKCS#11 standard and sniffing the USB traffic of the official (Windows) module. The result is a shared library that can be used on *nix systems, removing the need for Windows for many citizens of Serbia.
2

Lar-JEPA – A Testbed for Orchestrating Predictive World Models #

github.com
0 comments · 2:38 AM
Hey HN,

The current paradigm of agentic frameworks (LangChain, AutoGPT) relies on prompting LLMs and parsing conversational text strings to decide the next action. This works for simple tasks but breaks down for complex reasoning because it treats the agent's mind like a scrolling text document.

As research shifts toward Joint Embedding Predictive Architectures (JEPAs) and World Models, we hit an orchestration bottleneck. JEPAs don't output text; they output abstract mathematical tensors representing a predicted environmental state. Traditional text-based frameworks crash if you try to route a NumPy array.

We built Lar-JEPA as a conceptual testbed to solve this. It uses the Lár Engine, a deterministic, topological DAG ("PyTorch for Agents"), to act as the execution spine.

Key features for researchers:

- Mathematical routing (no prompting): you write deterministic Python RouterNodes that evaluate the latent tensors directly (e.g., if collision_probability > 0.85: return "REPLAN").
- Native tensor logging: we custom-patched our AuditLogger with a TensorSafeEncoder. You can pass massive PyTorch/NumPy tensors natively through the execution graph, and it gracefully serializes them into metadata ({ "__type__": "Tensor", "shape": [1, 768] }) without crashing JSON stringifiers.
- System 1 / System 2 testing: formally measure fast-reflex execution vs. deep-simulation planning.
- Continuous learning: includes a Default Mode Network (DMN) architecture for "Sleep Cycle" memory consolidation.
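The routing example, spelled out: a deterministic router over a latent state rather than a prompted LLM. A minimal sketch (the key name and threshold follow the post's example; the function name is illustrative):

```python
def system2_router(state):
    """Route on numbers, not text: veto and replan when the predicted
    collision probability crosses the threshold from the post."""
    if state["collision_probability"] > 0.85:
        return "REPLAN"
    return "EXECUTE"
```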

We've included a standalone simulation where a Lár System 2 Router analyzes a mock JEPA's numerical state prediction, mathematically detects an impending collision, vetoes the action, and replans—all without generating a single word of English text.

Repo: https://github.com/snath-ai/Lar-JEPA

Would love to hear your thoughts on orchestration for non-autoregressive models.

1

NightClear – AI anxiety clearing for bedtime #

apps.apple.com
0 comments · 1:16 PM
Hi HN,

I built NightClear because during the day I'm productive. At night, my brain becomes a worst-case-scenario simulator.

Based on CBT's "structured worry time" — instead of suppressing anxious thoughts, you give them a container:

1. Type what's on your mind (free-form, private)
2. An LLM analyzes it: core issue, controllable vs. not
3. It generates one micro-action for tomorrow
4. A breathing exercise

SwiftUI, SwiftData (local), LLM API. Dark mode only. No account needed. No social/gamification.

Free: 3/week. Pro: $4.99/mo or $29.99/yr (7-day trial).

iOS only. Would love feedback on the AI analysis quality.

1

Bypassing Stripe's 20-item limit for multi-model AI billing #

0 comments · 1:26 PM
We’ve been building out the usage-based billing for Tonic Fabricate, and I ran into a Stripe limitation that isn’t really advertised but can completely break your checkout flow if you’re doing multi-model AI billing.

Like many AI-native applications, Fabricate supports a variety of LLMs (Sonnet, Haiku, Opus, etc.) for various agents and tasks. Users pay for usage by the token. We’d love to use Stripe Billing Meters to charge users by the token — I trust Stripe’s math more than my own when it comes to floating point arithmetic with very large numbers.

But Stripe caps subscriptions at 20 items. Each meter requires its own price, which in turn consumes one of those 20 item slots.

This becomes a problem fast with AI. If you multiply 4-5 models by 5 token types (input, output, cached input, and ephemeral cache tiers), you’re already at 20+ meters. Adding even one more model exceeds the limit.

The workaround we settled on was to move the pricing logic entirely into Fabricate and report abstract "billing units" to Stripe instead of raw tokens.

The setup is pretty simple:

1. We created a single meter in Stripe with a fixed price of $0.00000001 per unit.

2. Internally, we assigned an integer multiplier to every model/token combination (e.g., Sonnet input might be 350 units per token, while Haiku cached input is 10).

3. At reporting time, we do the integer math: (tokens * multiplier) for each category, sum them up, and send that one aggregate number to Stripe.
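Steps 2 and 3 amount to a dot product in integer arithmetic. A sketch (only the Sonnet-input and Haiku-cached-input multipliers come from the post; the others are placeholders):

```python
# Billing units per token for each (model, token type). The Sonnet-input
# and Haiku-cached-input values are from the post; the rest are made up.
MULTIPLIERS = {
    ("sonnet", "input"): 350,
    ("sonnet", "output"): 1750,        # hypothetical
    ("haiku", "cached_input"): 10,
}

def billing_units(usage):
    """Collapse per-model token counts into the single aggregate
    number reported to the one Stripe meter. Integer math only,
    so no floating-point drift."""
    return sum(MULTIPLIERS[key] * tokens for key, tokens in usage.items())
```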

This effectively decouples our model list from Stripe’s API constraints. We can add as many models or token tiers as we want without ever touching Stripe configuration again.

The obvious downside is that the Stripe dashboard just shows a massive number of "units" rather than a breakdown of what the customer actually used. We decided we didn't care. We have our own internal telemetry for usage analytics; Stripe’s only job is to multiply a number by a price and generate a valid invoice.

If you’re setting up usage-based billing with more than a few dimensions, I’d recommend abstracting your units from day one. It’s much easier than trying to migrate your billing architecture once you’ve already hit that 20-item ceiling. And of course if you're Stripe (hi, Stripe!), I really recommend making changes to your product to offer native support for this very popular use case.

1

DevSquad – Claude Code Plugin That Works with Gemini CLI and Codex #

github.com
0 comments · 1:12 PM
A year ago, I couldn't code. I still can't. But AI coding tools got me here.

If you use Claude Code on anything bigger than a small script, you've hit the token limits faster than you'd like, or experienced context rot mid-session. The agent loses track of files, forgets architecture decisions, starts making things up. I was hitting this every single day.

DevSquad is a Claude Code plugin, not a framework, not a new CLI. You install it, and it hooks into Claude Code's execution to delegate subtasks to Gemini and Codex. When context gets heavy, specific work gets offloaded: tests to one model, docs to another, refactoring to a third.

For a quick proof of concept, I initially made a CLAUDE.md that did the job, but I often had to remind it, and it wasn't predictable or reliable. Hence hook-enforced delegation, not polite suggestions via CLAUDE.md. The plugin intercepts at defined trigger points and routes work through structured handoffs. Each agent operates in a focused context window instead of everyone fighting over one.
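Conceptually, the difference between a CLAUDE.md suggestion and hook-enforced delegation is deterministic routing. A minimal sketch (the task categories and backend names are illustrative, not DevSquad's actual code):

```python
# Conceptual sketch: a trigger classifies the subtask, then a fixed table
# decides which backend handles it, rather than hoping the main agent
# follows a prose instruction.
ROUTES = {
    "tests": "codex",     # hypothetical mapping: test-writing goes to Codex
    "docs": "gemini",     # documentation goes to Gemini
    "refactor": "codex",
}

def route(task_kind: str) -> str:
    """Deterministic routing; unknown kinds stay with the primary agent."""
    return ROUTES.get(task_kind, "claude")

assert route("docs") == "gemini"
assert route("architecture") == "claude"  # falls back to the main agent
```

The point is that the mapping is enforced by the hook at every trigger, so delegation happens even when the model would otherwise "forget" to do it.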

I know what you're thinking: there have been a dozen agent orchestrators on Show HN this month. Here's why I built another one: all of those require you to set up a new environment. Docker, new CLI, YAML configs. DevSquad hooks into the tool you already use.

How I built it: I'm a product person, not a developer. I led product at an AdTech company for years but never wrote production code. DevSquad was built entirely with AI coding tools — Claude Code, Google Antigravity, and yes, DevSquad itself once the early version worked. Make of that what you will.

Two questions I'd genuinely love HN's take on: How are you handling Claude Code's token limits on large projects? And — right now this is built for vibe coders. Does it matter if the person who built a tool can't write the code by hand, as long as the tool works?

1

Browse2API – Turn any website into an API #

browse2api.com
0 comments · 1:09 PM · View on HN
Hey HN, I built this because I was tired of writing custom scrapers for every website. Browse2API records your browser interactions and turns them into production-ready API endpoints. You can try the live demo APIs on the page - no signup needed. Looking for early users who want custom APIs built for their use case. I'd love any feedback or suggestions on the product, the demo APIs, or use cases you'd want supported.

Happy to answer questions!

1

Organic Programming – A .proto is all you need #

github.com
0 comments · 1:08 PM · View on HN
Hi HN,

I've been obsessed with metaprogramming and generative systems since 1998. With the rise of AI agents that actually write code, I felt the time was right to formalize an idea I've been circling for years: Organic Programming.

The premise is simple: a single .proto file (Protocol Buffers) is the universal contract between every component in your system — old code, new code, human-written, agent-generated. From that contract, everything else is automatically generated: API stubs, CLI wrappers, client/server skeletons, serialization. You write only the domain logic.

The building block is the "holon" — a composable unit at any scale that exposes multiple facets (gRPC service, CLI tool, library API, tests), all derived from the same .proto. Holons compose in-process, over the network, or at the shell (classic Unix piping). Two holons from different tech stacks interlock like LEGO bricks — one serves, the other dials.

It's bio-inspired (small autonomous units that compose into larger organisms) and Unix-inspired (everything pipes, do one thing well).

What exists today:

- A specification ("seed"): constitution, protocol, conventions
- SDKs for 14 languages (Go is the reference, others at various stages)
- Hello-world examples in 12+ languages
- A cross-language Go+Dart assembly demo
- CLI tooling (op, who)

Repo: https://github.com/organic-programming/seed

Org: https://github.com/organic-programming

About me: https://pereira-da-silva.com/en

I'd love honest feedback — especially from anyone wrestling with composability in the age of agentic software.

1

Brainrot messed up my kid's attention span, so I built a tool to fix it #

agentkite.com
1 comment · 5:34 PM · View on HN
Hi HN! I am Dan (aadivar), one of two people who have built Kite, a tool with a simple goal: get rid of brain rot and internet addiction.

Modern websites are built to keep you engaged using algorithmic feeds, infinite scroll, social proof counters, autoplay recommendations, urgency timers, etc. Even when you intend to check one thing, you get pulled into a 45-minute session you didn't choose.

I personally noticed this with my daughter, who kept getting pulled into YouTube Shorts rabbit holes when she was supposed to be using the platform for science and music lessons. Trying to limit her time on the app didn't help much, and blocking YouTube entirely would block the educational content as well.

I wanted a way to block only YouTube Shorts while keeping the other longform video content. Unfortunately, a simple solution for my problem did not exist. Thus the idea for Kite was born.

With Kite, anyone can use natural language to say:

- "Hide shorts"
- "Replace infinite scroll with pagination"
- "Hide like counts"

And the tool does exactly what you need, thus removing brainrot rabbit holes and other addictive elements.


Kite is available under a closed beta right now, and we would especially love feedback from folks who spend a lot of time on YouTube, Twitter/X, Reddit, LinkedIn, or shopping platforms.

Beta sign-up here: https://www.agentkite.com

Happy to answer any technical questions about how the agent safely modifies websites on the client side.


By the way, we also built AttentionGuard, which detects manipulation patterns such as FOMO cues, engagement traps, and social pressure signals in real time and flags them. The extension is open source (MIT license) and live for Chrome and Firefox now. More details on our website!

1

I got tired of syncing Claude/Gemini/AGENTS.md and .cursorrules #

0 comments · 5:39 PM · View on HN
I use Claude, Codex, Cursor, and Gemini on different projects. Each one has its own md file in its own format: CLAUDE.md, AGENTS.md, .cursorrules, GEMINI.md. Four files saying roughly the same thing, four chances to get out of sync!

I kept forgetting to update one, then wondering why Cursor was hallucinating my project structure while Claude had it right.

So I built an MCP server that reads a single YAML file (project.faf) and generates all four formats. 7 bundled parsers handle the differences between them. You edit one file, and bi-sync keeps everything current.

It's an MCP server, so Claude Desktop can use it directly. 61 tools, 351 tests, no CLI dependency.
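A toy sketch of the one-file-to-four-formats idea (the field names and render logic here are hypothetical; the real tool uses format-aware parsers and bi-sync rather than writing identical text everywhere):

```python
# Hypothetical sketch: one parsed source of truth, four generated targets.
from pathlib import Path

faf = {  # pretend this was parsed from project.faf (YAML)
    "name": "demo",
    "stack": "Next.js + Supabase",
    "rules": ["prefer TypeScript", "no default exports"],
}

def render(data: dict) -> str:
    """Render the shared project description as markdown-ish text."""
    rules = "\n".join(f"- {r}" for r in data["rules"])
    return f"# {data['name']}\nStack: {data['stack']}\n\n## Rules\n{rules}\n"

TARGETS = ["CLAUDE.md", "AGENTS.md", "GEMINI.md", ".cursorrules"]

def sync(data: dict, root: Path) -> None:
    body = render(data)
    for name in TARGETS:
        (root / name).write_text(body)  # one edit, four files stay in sync
```

In the real tool each target gets format-specific output, but the invariant is the same: every agent file is derived, never hand-edited.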

Try it: npx claude-faf-mcp

Source: https://github.com/Wolfe-Jam/claude-faf-mcp

The .faf format itself is IANA-registered (application/vnd.faf+yaml).

Curious if others are dealing with this multi-AI config problem, or if there's a simpler approach I'm not seeing.

1

OpportuAI – remote jobs, AI tools and digital products aggregator #

opportunai.vercel.app
0 comments · 1:06 PM · View on HN
I got tired of checking multiple job boards every morning so I built OpportuAI.

It automatically scrapes 277+ remote jobs daily from 8 sources (Remotive, RemoteOK, We Work Remotely, HN Hiring, Wellfound, NoDesk, Arbeitnow, Jobicy) and aggregates 159+ AI tools from Product Hunt, BetaList, and Reddit.

Stack: React + Vite, Vercel (free tier), Supabase, Python scrapers hitting public APIs and RSS feeds.
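The aggregation step might look like the following sketch, assuming RSS-style sources (several of the listed boards also expose JSON APIs). The feed here is inline sample data rather than a live fetch, and the field names are assumptions:

```python
# Sketch of fetch-parse-dedupe across multiple job feeds.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss><channel>
  <item><title>Backend Engineer</title><link>https://example.com/j/1</link></item>
  <item><title>Data Analyst</title><link>https://example.com/j/2</link></item>
</channel></rss>"""

def parse_jobs(feed_xml: str, source: str) -> list:
    """Extract title/url pairs from one RSS feed."""
    root = ET.fromstring(feed_xml)
    return [
        {"title": i.findtext("title"), "url": i.findtext("link"), "source": source}
        for i in root.iter("item")
    ]

def aggregate(feeds: dict) -> list:
    """Merge all sources, deduplicating cross-posted listings by URL."""
    seen, jobs = set(), []
    for source, xml_text in feeds.items():
        for job in parse_jobs(xml_text, source):
            if job["url"] not in seen:
                seen.add(job["url"])
                jobs.append(job)
    return jobs
```

In production this would fetch each feed over HTTP and persist results to Supabase, but the dedupe-by-URL step is the part that keeps "277+ jobs from 8 sources" from turning into duplicates.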

Also has a submit page if anyone wants to list their tool or job: opportunai.vercel.app/submit

Would love feedback from this community!

1

My iPhone notifies me about cloud outages before they blow up here #

apps.apple.com
0 comments · 7:27 PM · View on HN
I was sick of finding out a cloud provider was down while casually doomscrolling on X in the middle of work. So I built Pingy as a fun side project: it sends me a push notification whenever a cloud provider is experiencing an outage or a degradation.

AWS, GCP, Azure, GitHub, Stripe, OpenAI, Supabase, Vercel, Cloudflare, and 50+ more are tracked. I just pushed it to the App Store yesterday; if you are paranoid like me, check out Pingy on the App Store.

1

Vis Pro – A Formula-Based Workout Program Editor #

vis.fitness
0 comments · 7:29 PM · View on HN
Hey HN,

About 5 years ago, I built a weightlifting app for 5/3/1 that got me on the front page of HN [0]. After that, life happened. I had kids and so decided to get a job and put that project on ice. Eventually I grew too disappointed with my job, and decided to try building something again.

The biggest feedback I kept getting from users was simple: “Let me create my own programs.”

That’s how Vis started.

The initial idea was to create a B2B platform where gyms and trainers could build programs using formulas (e.g., percentages of 1RM) and reusable blocks instead of spreadsheets. I built what I still think is the best workout editor out there, but I quickly found out that B2B sales is hard (or maybe I just suck at it). It also just didn’t feel like a big enough sell for gyms.

So I pivoted. I focused on the iOS app for a while [1], and now re-packaged the editor so individuals can use it directly.

With Vis Pro, you can:

- Define programs using formulas (e.g., 0.85 * SQUAT_1RM, or even better: RPE(8, SET_REPS) * SQUAT_1RM)

- Build workouts by re-using pre-defined or even custom blocks

- Share your programs and workouts with others

The core idea is that programs are parametric instead of static. Change your 1RM and the entire program recalculates automatically.
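A rough sketch of how parametric evaluation can work (the real engine is a Chevrotain grammar in JS; this Python version and its RPE values are illustrative only):

```python
# Sketch: formulas reference named maxes and context variables; changing a
# max re-evaluates every prescription. RPE values approximate a standard
# RPE chart and are illustrative, not the app's actual table.
RPE_TABLE = {(8, 5): 0.81, (8, 3): 0.86, (9, 5): 0.84}

def evaluate(formula: str, maxes: dict, set_reps: int) -> float:
    env = {
        "__builtins__": {},  # no builtins: only whitelisted names resolve
        "SET_REPS": set_reps,
        "RPE": lambda rpe, reps: RPE_TABLE[(rpe, reps)],
        **maxes,
    }
    # eval is fine for a trusted sketch; the real editor parses the grammar
    return eval(formula, env)

maxes = {"SQUAT_1RM": 140.0}
evaluate("0.85 * SQUAT_1RM", maxes, 5)              # about 119.0
evaluate("RPE(8, SET_REPS) * SQUAT_1RM", maxes, 5)  # about 0.81 * 140
```

Bump `SQUAT_1RM` to 150 and every formula that references it yields new numbers on the next evaluation; that is the "change your 1RM and the program recalculates" behavior.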

You can try it out without an account at https://vis.fitness/pro/try/create-program

The whole thing is built with NextJS, using Chevrotain (surprisingly solid) for the formula engine.

It's been super interesting using Codex since late December. It's been a huge force multiplier, enabling me to ship really cool features like formula autocomplete and syntax highlighting in a couple of hours. I'm used to reviewing a lot of code from my time at Google, so that hasn't been a problem, but it's interesting to feel that review speed is now the limiting factor. Without that review, though, the codebase would become unmaintainable real quick.

The next step is building an MCP server to allow users to create programs using LLMs and have them show up directly in the editor (and your phone).

Would love feedback, whether you even lift or not!

[0] https://news.ycombinator.com/item?id=31508009

[1] https://apps.apple.com/us/app/vis-next-generation-workouts/i...

1

OneSentence – An offline macOS voice utility built entirely with AI #

onesentence.app
0 comments · 1:15 PM · View on HN
Hi HN, I’m sharing OneSentence, an offline voice utility for macOS (M-series). I built this for two reasons: first, I wanted to see how far I could push cheap AI, and second, I wanted to use this utility. The idea was born out of using Emacs packages with Whisper to dictate to my machine. I had found it effective to simply speak and articulate context to coding agents. OneSentence does four things well: privacy, speech-to-text, text-to-speech, and template insertion.

The development process was probably the most interesting part. I used Gemini 3 Pro Preview and 3 Flash Preview almost exclusively (yes, not Claude). It went from chatting, to writing light specs and decomposing them into tasks, to finally setting up a Ralph-style orchestrator that I would leave running overnight. I also had more than a few late nights; making progress this way was exhausting, but still fun. It's not something I ever would have built without AI. The models helped me maintain strict linters, achieve upper-80s test coverage, write the Cloudflare services, and walked me through the maze of Apple's sandboxing, certificate provisioning, and signing. End to end, everything was made through Gemini, down to the scripting and recording of the promo video. The linter became a quality-of-context goad. Yes, I really do want explicit type interfaces even though Swift does fine without them, and no, you will not write more than 800 lines of code in a file.

Pain points: The UI tests are slow, test logs flood context, and the AI hallucinates (who knew). Cloning lib code into tmp/ to answer questions became a habit.

Under the hood, it relies on a sizeable set of technologies. It feels like the whole thing is like Aesop's polite gnat that rested on a bull. In other words: built on the shoulders of giants. Georgi Gerganov, Hyeongju Kim, Steve Yegge, Sindre Sorhus, Gwendal Roué, Zorg, the team who made Whisper, and so so many others: you have my thanks.

The app is built in Swift; it uses Whisper.cpp and ONNX for inference, supports Whisper and Parakeet for speech-to-text, and uses Supertonic for text-to-speech. I got to try a few new (to me) tools: Prek for pre-commit hooks, Tuist for project generation, dvc for model versioning and management, beads for agent work tracking, and more. Gemini converted raw Whisper .pt files to CoreML using PyTorch, and I spent a lot of time experimenting to see how much of a difference using the Apple Neural Engine would make (interestingly, not as much as I expected in my use case, but both modes work). Parakeet is also in there just for kicks (Whisper produces better results).

I originally planned to launch on the Mac App Store, but the reviewers insisted I remove a behavior I felt was central to the app. So instead I decided to distribute it directly, using Cloudflare Workers/R2, and LemonSqueezy for sales & licensing.

Supertonic's diffusion models are interesting to use; they never read a text exactly the same way twice. If you do decide to try OneSentence, just for fun, turn off the "Refine Punctuation" setting and see what happens when you have it read a sentence with lots of exclamation marks; my boys got a kick out of it.

I set the defaults for myself. Transcription defaults to Whisper Large V2—which is relatively old, large and slow, but I found the transcription quality excellent.

I am offering this primarily as a one-time purchase, but there is also a cheap subscription option if people prefer it. I know there are other product options, other models, and even macOS's own functionality. Choice is good.

This has been my evenings-and-weekends project for the last several months, and as a daily driver, I get a lot of value out of it.

You can check it out here: https://onesentence.app/ Use the promo code I3MDE1MQ to get 40% off in the next two weeks.

I'll be around to answer any questions.

Cheers!

1

I've been building autonomous AI agents for 2 years – before OpenClaw #

splox.io
0 comments · 1:22 PM · View on HN
Hey HN,

I've been building Splox for 2 years — autonomous AI agents that connect to your tools and act on your behalf. Before OpenClaw, before the hype.

You describe what you want in plain English. The agent executes it across 10,000+ services via MCP with 1-click OAuth. No LLM keys to manage, no Docker, no self-hosting.

Three agents live today:

- Autonomous Trader — connects to Hyperliquid, monitors markets, executes positions, manages risk. Runs 24/7.

- Omni Tool Agent — email, Sheets, Notion, Slack, Telegram — anything you'd waste 2 hours on daily. 10,000+ tools via MCP.

- Coder — connects to your servers, local machines, Kubernetes clusters. Reads codebases, deploys, manages infrastructure end-to-end.

People are using it to manage social media accounts, run Telegram user bots, automate customer support — all by just connecting a tool. No code required.

What makes it truly autonomous: a built-in Event Hub. Agents subscribe to real-time events — webhooks, scheduled triggers, even detecting silence — and react without human input. They run indefinitely.

For power users, there's a visual graph-based workflow builder, so you can deploy your own complex multi-step agent pipelines: https://app.splox.io

For end users: https://chat.splox.io

Would love feedback.