Daily Show HN


Show HN for February 17, 2026

89 posts
250

I wrote a technical history book on Lisp #

berksoft.ca
100 comments · 3:43 PM · View on HN
The book page links to a blog post that explains how I went about it (and has a link to sample content), but the TL;DR is that I could not find many books that covered "our" history _and_ were larded with technical details. So I set about writing one, and some five years later I'm happy to share the result. I think it's one of the few "computer history" books that has tons of code, but correct me if I'm wrong (I wrote this both to tell a story and to learn :-)).

My favorite languages are Smalltalk and Lisp, but as an Emacs user, I've been using the latter for much longer, and for my current projects Common Lisp is a better fit, so I call myself "a Lisp-er" these days. If people like what I did, I do have plans to write some more (but probably only after I retire; writing next to a full-time job is hard). Maybe on Smalltalk, maybe on computer networks - two topics close to my heart.

And a shout-out to Dick Gabriel, he contributed some great personal memories about the man who started it all, John McCarthy.

117

I taught LLMs to play Magic: The Gathering against each other #

mage-bench.com
83 comments · 4:22 PM · View on HN
I've been teaching LLMs to play Magic: The Gathering recently, via MCP tools hooked up to the open-source XMage codebase. It's still pretty buggy and I think there's significant room for existing models to get better at it via tooling improvements, but it pretty much works today. The ratings for expensive frontier models are artificially low right now because I've been focusing on cheaper models until I work out the bugs, so they don't have a lot of games in the system.
113

I'm launching an LPFM radio station #

kpbj.fm
56 comments · 8:15 PM · View on HN
I've been working on creating a Low Power FM radio station for the east San Fernando Valley of Los Angeles. We are not yet on the broadcast band, but our channel will be 95.9 FM, and our range can be seen on the homepage of our site.

KPBJ is a freeform community radio station. Anyone in the area is encouraged to get a timeslot and become a host. We make no curatorial decisions. It's sort of like public access or a college station in that way.

This month we launched our internet stream and on-boarded about 60 shows. They are mostly music but there are a few talk shows. We are restricting all shows to monthly time slots for now but this will change in the near future as everyone gets more familiar with the systems involved.

All shows are pre-recorded until we can raise the money to get a studio.

We have a site secured for our transmitter, but we need to fundraise to cover the equipment and build-out costs. We will be broadcasting with 100 W ERP from a ridgeline in the Verdugos at about 1,500 ft elevation. The site will need to be off grid, so we will need to install a solar system with battery backup. We are planning to sync the station to the transmit site with 802.11ah.

This is a pretty substantial thing involving a bunch of different people and projects. I've built all of our web infrastructure using Haskell, NixOS, Terraform, and HTMX: https://github.com/solomon-b/kpbj.fm

The station is managed by a 501c3 non-profit we created. We are actively seeking fundraising, especially to get our transmit site up and running. If you live in the area or want to contribute in any way then please reach out!

69

Pg-typesafe – Strongly typed queries for PostgreSQL and TypeScript #

github.com
34 comments · 6:15 PM · View on HN
Throughout my career, I tried many tools to query PostgreSQL, and in the end, concluded that for what I do, the simplest is almost always the best: raw SQL queries.

Until now, I typed the results manually and relied on tests to catch problems. While this is OK in, say, Go, it is quite annoying in TypeScript: first, because of the more powerful type system (it's easier to guess that updated_at is a date than to guess whether it's nullable), and second, because of idiosyncrasies (INT4s are deserialised as JS numbers, but INT8s are deserialised as strings).

So I wrote pg-typesafe, with the goal of it being the least burdensome: you call queries exactly the same way as you would with node-pg, and they are fully typed.

It's very new, but I'm already using it in a large-ish project, where it found several bugs and footguns, and also allowed me to remove many manual type definitions.

53

Andrej Karpathy's microgpt.py to C99 microgpt.c – 4,600x faster #

github.com
7 comments · 12:06 AM · View on HN
Andrej Karpathy showed us the GPT algorithm. I wanted to see the hardware limit.

The Punchline: I made it go 4,600x faster in pure C code, no dependencies and using a compiler with SIMD auto-vectorisation!!!

Andrej recently released microgpt.py - a brilliant, atomic look at the core of a GPT. As a low-latency developer, I couldn't resist seeing how fast it could go when you get closer to the metal.

So just for funzies, I spent a few hours building microgpt-c, a zero-dependency and pure C99 implementation featuring:

- 4,600x faster training vs the Python reference (tested on a MacBook Pro M2 Max); on Windows, it is 2,300x faster.

- SIMD auto-vectorisation for high-speed matrix operations.

- INT8 quantisation (reducing weight storage by ~8x). Training is slightly slower, but the storage reduction is significant.

- Zero Dependencies - just pure logic.
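To give a feel for the INT8 storage trick, here is a minimal sketch of symmetric quantisation (illustrative only; `quantize`/`dequantize` are hypothetical names, and the repo's actual packing scheme may differ):

```python
# Symmetric per-tensor INT8 quantisation: map floats into [-127, 127] with a
# single scale factor, then recover approximate values on load. Each value
# needs 1 byte instead of 4 (float32); round-trip error is bounded by scale/2.
def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.003, 1.27]
q, s = quantize(w)
restored = dequantize(q, s)
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, restored))
```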

The amalgamation image below is just for fun (and to show off the density!), but the GitHub repo contains the fully commented, structured code for anyone who wants to play with on-device AI.

I have started building something more useful, like a simple C code static analyser - I will do a follow-up post.

Everything else is just efficiency... but efficiency is where the magic happens

44

Continue – Source-controlled AI checks, enforceable in CI #

docs.continue.dev
7 comments · 5:08 PM · View on HN
We now write most of our code with agents. For a while, PRs piled up, causing review fatigue, and we had this sinking feeling that standards were slipping. Consistency is tough at this volume. I’m sharing the solution we found, which has become our main product.

Continue (https://docs.continue.dev) runs AI checks on every PR. Each check is a source-controlled markdown file in `.continue/checks/` that shows up as a GitHub status check. Checks run as full agents: not just reading the diff, but able to read/write files, run bash commands, and use a browser. If a check finds something, it fails with a one-click option to accept a diff. Otherwise, it passes silently.

Here’s one of ours:

  .continue/checks/metrics-integrity.md

  ---
  name: Metrics Integrity
  description: Detects changes that could inflate, deflate, or corrupt metrics (session counts, event accuracy, etc.)
  ---

  Review this PR for changes that could unintentionally distort metrics.
  These bugs are insidious because they corrupt dashboards without triggering errors or test failures.

  Check for:
  - "Find or create" patterns where the "find" is too narrow, causing entity duplication (e.g. querying only active sessions, missing completed ones, so every new commit creates a duplicate)
  - Event tracking calls inside loops or retry paths that fire multiple times per logical action
  - Refactors that accidentally remove or move tracking calls to a path that executes with different frequency

  Key files: anything containing `posthog.capture` or `trackEvent`

This check passed without noise for weeks, but then caught a PR that would have silently deflated our session counts. We added it in the first place because we’d been burned in the past by bad data, only noticing when a dashboard looked off.
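The "find or create" failure mode this check targets can be sketched in a few lines (hypothetical session store; not Continue's code):

```python
# Bug: the "find" only matches active sessions, so once a session completes,
# every later event for the same user creates a duplicate row -- and the
# session-count metric silently inflates with no error or test failure.
sessions = []

def find_or_create_buggy(user):
    found = next((s for s in sessions
                  if s["user"] == user and s["state"] == "active"), None)
    if found is None:
        found = {"user": user, "state": "active"}
        sessions.append(found)
    return found

s1 = find_or_create_buggy("alice")
s1["state"] = "completed"        # the session finishes
find_or_create_buggy("alice")    # the narrow "find" misses it: duplicate
assert len(sessions) == 2        # one logical session, two rows
```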

---

To get started, paste this into Claude Code or your coding agent of choice:

  Help me write checks for this codebase: https://continue.dev/walkthrough

It will:

- Explore the codebase and use the `gh` CLI to read past review comments

- Write checks to `.continue/checks/`

- Optionally, show you how to run them locally or in CI

Would love your feedback!

35

6cy – Experimental streaming archive format with per-block codecs #

github.com
8 comments · 5:01 PM · View on HN
Hi HN,

I’ve been experimenting with archive format design and built 6cy as a research project.

The goal is not to replace zip/7z, but to explore: • block-level codec polymorphism (different compression per block) • streaming-first layout (no global seek required) • better crash recovery characteristics • plugin-based architecture so proprietary codecs can exist without changing the format

Right now this is an experimental v0.x format. The specification may still change and compatibility is not guaranteed yet.

I’m mainly looking for feedback on the format design rather than performance comparisons.

Thanks for taking a look.

34

Price Per Ball – Site that sorts golf balls on Amazon by price per ball #

priceperball.net
51 comments · 3:36 PM · View on HN
I took inspiration from diskprices.com, but applied it to my golfing hobby.

For someone who can't always keep it in the fairway, golf balls can get rather expensive, so I decided to build a way for me to view Amazon listings by how much they cost per ball. Hence the name of the website.

The site is hosted on Cloudflare Pages, and I use GitHub Actions to trigger a Python script that fetches and checks the prices. It runs twice a day. If the script encounters any new ASINs, it stores them for future checks, so the list of golf balls being price-checked should keep growing over time. Changes are then pushed to Cloudflare Pages.

There can sometimes be pricing oddities when the product title says one count but the unit count returned from Amazon is another number, so I'm adding some checks to account for that. Right now, I just have manual overrides for certain ASINs, but I'm looking to improve on this in the future.
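The title-versus-API count check can be sketched roughly like this (hypothetical data and function names; not the site's actual script):

```python
import re

# Price per ball, flagging listings where the count parsed from the title
# disagrees with the unit count reported by the API.
def price_per_ball(title, api_unit_count, price):
    m = re.search(r"(\d+)\s*(?:pack|count|balls?)", title, re.IGNORECASE)
    title_count = int(m.group(1)) if m else None
    mismatch = title_count is not None and title_count != api_unit_count
    count = title_count if mismatch else api_unit_count  # prefer the title
    return round(price / count, 2), mismatch

ppb, flagged = price_per_ball("Pro V1 Golf Balls, 12 Pack", 1, 24.00)
assert (ppb, flagged) == (2.0, True)   # API said 1 unit; the title says 12
```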

The frontend is just some basic HTML, CSS and JavaScript.

Listings on Amazon can sometimes be inconsistent: for example, product titles will say used balls, but the seller lists them as new. I added some filters that allow you to exclude used/recycled balls, plastic golf balls, etc. You can also filter by brand.

Give it a spin and let me know if you run into any issues or have any feature ideas to make it more useful.

30

PIrateRF – Turn a $20 Raspberry Pi Zero into a 12-mode RF transmitter #

github.com
10 comments · 2:00 PM · View on HN
I built a software-defined radio transmission platform that runs on a Raspberry Pi Zero W. It spawns its own WiFi hotspot and serves a web UI — connect from any device and you have a portable RF signal generator with 12 transmission modes: FM broadcasting with RDS, FT8, RTTY, FSK, POCSAG paging, Morse code, SSTV image transmission, voice cloning via live mic, spectrum painting, IQ replay, carrier wave, and frequency sweeps.

Everything runs through a browser interface. Upload audio files, type messages, configure frequencies, and transmit. The Pi's GPIO pin does the actual RF generation via rpitx — no external radio hardware needed.

Written in Go with a real-time WebSocket frontend. Includes a preset system, playlist builder, and multi-device support (connect multiple phones/laptops to the AP and share control).

Without an antenna the signal barely reaches 5 meters, which makes it perfect for indoor experimentation and learning about RF protocols without causing interference. All my testing was done indoors with no antenna attached.

Built this because I wanted a single portable tool to experiment with every common RF transmission mode without hauling around expensive SDR equipment.

Pre-built SD card image available if you want to skip the build process.

GitHub: https://github.com/psyb0t/piraterf
Blog post: https://ciprian.51k.eu/piraterf-turning-a-20-raspberry-pi-ze...

27

Data Studio – Open-Source Data Notebooks #

github.com
4 comments · 12:10 PM · View on HN
Hey HN, I am Alex. I am open sourcing Data Studio, a lightweight data exploration IDE in your browser that runs locally.

Try it: https://local.dataspren.com (no account needed, runs locally)

More information: https://github.com/dataspren-analytics/data-studio

I love working with data (Postgres, SQL, DuckDB, DBT, Iceberg, ...). I always wanted a data exploration tool that runs in my browser and just works. Without any infra or privacy concerns (DuckDB UI came quite close).

Features:

  - Data Notebooks
    - SQL cells work like DBT models (they materialize to views)
    - Use Python functions inside of SQL queries
    - Use DB views directly in Python as dataframes
  - Transform Excel files with SQL
  - You can open .parquet, .csv, .xlsx, .json files nicely formatted

If you like what you see, you can support me with a star on GitHub.

Happy to hear about your feedback <3

23

Bashtorio – Factorio-Like in the Browser Backed by a Linux VM #

bashtorio.xyz
0 comments · 11:29 PM · View on HN
I created a free, open-source browser game inspired by Factorio.

You place "Input" machines that produce streams of bytes. You use conveyor belts to feed those bytes through other machines which produce transformations, and then to "Output" machines which produce audio or visual effects.

The game uses v86 to run a real Linux VM in the browser. I use the 9p filesystem to enable IPC via FIFO pipes, so shell commands can stream data continuously rather than just running once.

Features:

- 30+ machine types (sources, filters, routers, packers, audio synthesis, displays)
- "Command" machines that pipe data through real shell commands
- Streaming mode for persistent processes
- Shareable factories via URL
- Chiptune audio engine (oscillators, Game Boy noise channel) plus an additional 808 drum machine

Try the presets in the menu bar (top left) to see what's possible. Requires WASM and may take a moment to load on slower connections.

Live: https://bashtorio.xyz
Source: https://github.com/EliCunninghamDev/bashtorio

18

Writing a C++20 M:N Scheduler from Scratch (EBR, Work-Stealing) #

github.com
20 comments · 10:46 PM · View on HN
tiny_coro is a lightweight, educational M:N asynchronous runtime written from scratch using C++20 coroutines. It's designed to strip away the complexity of industrial libraries (like Seastar or Folly) to show the core mechanics clearly.

Key Technical Features:

M:N Scheduling: Maps M coroutines to N kernel threads (Work-Stealing via Chase-Lev deque).

Memory Safety: Implements EBR (Epoch-Based Reclamation) to manage memory safely in lock-free structures without GC.

Visualizations: I used Manim (the engine behind 3Blue1Brown) to create animations showing exactly how tasks are stolen and executed.

Why I built it: To bridge the gap between "using coroutines" and "understanding the runtime." The code is kept minimal (~1k LOC core) so it can be read in a weekend.
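The owner-pops-one-end, thief-steals-the-other discipline behind work stealing can be sketched as a single-threaded Python simulation (illustrative only; the real Chase-Lev deque is lock-free C++):

```python
from collections import deque

# Each worker owns a deque: the owner pushes/pops tasks at the back (LIFO),
# while idle workers steal from the front (FIFO). Keeping owner and thief at
# opposite ends is the same discipline the Chase-Lev deque enforces lock-free.
queues = [deque(), deque()]
queues[0].extend(range(6))   # worker 0 starts with all the work
done = [[], []]

def run_step(worker):
    own = queues[worker]
    if own:
        done[worker].append(own.pop())    # owner end (back)
        return True
    for victim in range(len(queues)):     # otherwise, try to steal
        if victim != worker and queues[victim]:
            done[worker].append(queues[victim].popleft())  # thief end (front)
            return True
    return False

while any([run_step(w) for w in range(2)]):  # list, so both workers run each round
    pass

assert sorted(done[0] + done[1]) == list(range(6))
assert done[1] == [0, 1, 2]   # worker 1 stole half, from the opposite end
```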

13

I curated 130 US PDF forms and made them fillable in browser #

simplepdf.com
0 comments · 6:33 PM · View on HN
Hi HN!

I built SimplePDF 7 years ago, with the vision from day one of helping get rid of bureaucracy (I'm from France, I know what I'm talking about).

Fast forward to this week, when I finally released something I'd had in mind for a long time: a repository of the main US forms, ready to be filled straight from the browser, as opposed to having to find a PDF tool online (or locally).

I focused on healthcare, ED, HR, Legal and IRS/Tax for now.

On the tech-side, it's SimplePDF all the way down: client-side processing (the data / documents stay in your browser).

I hope you find the resource useful!

NiP

8

Persistent memory for Claude Code with self-hosted Qdrant and Ollama #

github.com
0 comments · 9:21 PM · View on HN
I built an MCP server that gives Claude Code long-term memory across sessions, backed by infrastructure you control.

Every Claude Code session starts from zero, with no memory of previous sessions. This server uses mem0ai as a library and exposes 11 MCP tools for storing, searching, and managing memories. Qdrant handles vector storage, Ollama runs embeddings locally (bge-m3), and Neo4j optionally builds a knowledge graph.

Some engineering details HN might find interesting:

- Zero-config auth: auto-reads Claude Code's OAT token from ~/.claude/.credentials.json, detects the token type (OAT vs API key), and configures the SDK accordingly. No separate API key needed.

- Graph LLM ops (3 calls per add_memory) can be routed to Ollama (free/local), Gemini 2.5 Flash Lite (near-free), or a split-model setup where Gemini handles entity extraction (85.4% accuracy) and Claude handles contradiction detection (100% accuracy).

Python, MIT licensed, one-command install via uvx.

https://github.com/elvismdev/mem0-mcp-selfhosted

6

Ambient CSS – Physically Based CSS and React Components #

ambientcss.vercel.app
9 comments · 1:42 PM · View on HN
Hello! AmbientCSS is a side project I started 5 years ago because I found the lack of realism in CSS shadows jarring, so I tried to create a more realistic and consistent shadow system for CSS. It grew too complex and I gave up. Now, thanks to LLMs, I've been able to revive it. It's good enough to play with, but might not be good enough for production use.
5

OpenEntropy – 47 hardware entropy sources from your computer's physics #

github.com
0 comments · 1:52 AM · View on HN
I built this to study something most security engineers wave off: whether external factors can nudge hardware entropy sources.

Here is why. Princeton’s PEAR lab ran RNG work for about 28 years and shut down in February 2007. People in the lab tried to shift random event generator output, and they reported small deviations after tens of millions of events. https://www.pear-lab.com/

The Global Consciousness Project took a similar idea outside the lab. It has run a distributed network of hardware RNGs since 1998 and looks for correlated deviations around major world events.

Most people looking at hardware entropy want true randomness for crypto. I want to treat entropy like a sensor. I want to see what might perturb the underlying noise, not just consume a final stream.

So I built OpenEntropy. It samples 47 physical-ish sources on Apple Silicon, like clock jitter, thermal beats, DRAM timing conflicts, cache contention, and speculation timing. Raw mode gives you unprocessed, per-source bytes so you can run your own stats on each channel.

The PEAR-style question is: does output shift when “intention” is the experimental condition? With 47 sources, I can run intention vs control sessions and ask if multiple unrelated channels drift the same way at the same time. If thermal and DRAM timing both shift during intention blocks, that’s the kind of pattern I want to measure.

4

An Open-source React UI library for ASCII animations #

github.com
3 comments · 1:55 AM · View on HN
Hey HN :)

I made Rune, a composable React library for ASCII animations. It lets you drop in animated ASCII the same way you would an icon component.

Rune converts video into grids of colored ASCII characters that render directly as text in the browser. Brightness maps to character density (@ -> .), and output can be tuned for different levels of detail.

It’s designed to be lightweight and very performance-focused, so animations stay smooth even at higher resolutions or if there are many playing at a time!

4

A real-time chord identifier web app using the Web MIDI API #

midi-chord-identifier.backwater.systems
4 comments · 3:52 PM · View on HN
This was a quick, fun project that solves a practical problem I had: wanting to learn the names of piano chords, but not wanting to spend time staring at chord charts. I figured I could also learn some music theory more deeply by encoding the logic into software.

I came across this table <https://en.wikipedia.org/wiki/Chord_(music)#Examples> that breaks down the composition of chords logically. I was reminded of a bitmask, so I translated each chord into a 12-bit bitmask with a bit for each distinct note letter name (e.g. “C” or “B♭”). Decoding binary was already involved in interfacing with MIDI, so that might have been the inspiration; regardless, a bitmask seems ideal for this purpose.

The most challenging part by far was the logic that determines whether, say, “A♯/B♭” (considered the same note in the 12-tone chromatic scale) should be rendered as “A♯” or “B♭”. As best I understand, this depends on key signature context, and the logic isn’t well described anywhere. I settled on finding the diatonic (7-note) scale that contains the maximum number of the chord’s notes; that scale provides the context for the note letter names. This logic isn’t perfect yet: scales that include double flats and double sharps (which I wasn’t previously aware of) still produce ambiguous results.
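The bitmask idea can be sketched in Python (a deliberately tiny chord table; the app's actual table and naming logic are richer):

```python
# Each pitch class (C=0 ... B=11) gets one bit; a chord is the OR of its bits.
NOTE = {n: i for i, n in enumerate(
    ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"])}
NAMES = {i: n for n, i in NOTE.items()}

# Interval patterns in semitones above the root (tiny illustrative table).
CHORDS = {"major": (0, 4, 7), "minor": (0, 3, 7), "dim": (0, 3, 6)}

def mask(notes):
    return sum(1 << NOTE[n] for n in set(notes))

def identify(notes):
    m = mask(notes)
    for root in range(12):                    # rotate each pattern to each root
        for name, intervals in CHORDS.items():
            if m == sum(1 << ((root + i) % 12) for i in intervals):
                return f"{NAMES[root]} {name}"
    return None

assert identify(["C", "E", "G"]) == "C major"
assert identify(["A", "C", "E"]) == "A minor"
```

Because the mask is order- and octave-free, inversions and duplicated notes identify the same chord for free.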

4

Orange Cheeto – Browser extension that replaces Trump with nicknames #

cheetodon.com
0 comments · 7:05 PM · View on HN
I built a browser extension that does text replacement across all websites -- specifically replacing "Trump" with rotating humorous nicknames.

The interesting technical bits:

- TreeWalker for DOM traversal (skips scripts, inputs, contenteditable, iframes)

- MutationObserver with debouncing for SPAs and dynamically loaded content

- Fisher-Yates shuffle bag for even nickname distribution (no repeats until all are used)

- Case preservation via regex (TRUMP -> MANGO MUSSOLINI, Trump -> Mango Mussolini)

- CSS-only animations with prefers-reduced-motion support

- Zero dependencies, plain JS, Manifest V3
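Two of the bullets above, the shuffle bag and case preservation, can be sketched together (illustrative Python with one invented nickname; the extension itself is plain JS):

```python
import random
import re

# Shuffle bag: every nickname is drawn exactly once, in random order, before
# any repeats (random.shuffle is a Fisher-Yates shuffle under the hood).
class ShuffleBag:
    def __init__(self, items, rng=None):
        self.items = list(items)
        self.rng = rng or random.Random(0)
        self.bag = []

    def next(self):
        if not self.bag:              # refill and reshuffle when empty
            self.bag = self.items[:]
            self.rng.shuffle(self.bag)
        return self.bag.pop()

# Case preservation: inspect the matched text to pick the replacement's case.
def match_case(matched, replacement):
    return replacement.upper() if matched.isupper() else replacement

bag = ShuffleBag(["Mango Mussolini", "Tangerine Palpatine"])
text = "TRUMP said that Trump won."
out = re.sub(r"trump", lambda m: match_case(m.group(), bag.next()),
             text, flags=re.IGNORECASE)
assert "trump" not in out.lower()
```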

The architecture is generic enough to fork for any text replacement use case. All the replacement logic lives in a single file.

No external requests, no analytics, no data leaves the browser. Settings sync via Chrome Storage / browser.storage.

Available for Chrome, Firefox, and Safari. Free.

Feedback on the implementation welcome -- the code is straightforward and I'd rather someone tell me my MutationObserver setup is wrong than find out the hard way.

3

Visualize S&P 500 financials with Sankey diagrams #

10q10k.net
0 comments · 6:41 AM · View on HN
I built this to understand where companies actually make and spend money. Each company gets a Sankey diagram showing the flow from revenue to expenses to profit. Also includes the three standard financial statements. Would love feedback on the visualization approach.
3

Distillate – Zotero papers → reMarkable highlights → Obsidian notes #

distillate.dev
3 comments · 8:54 PM · View on HN
I read a lot of research papers for work. My workflow evolved around an ever-growing inbox of bookmarked papers from arXiv et al. Great for exploration, but hard to keep track of what I read.

Distillate bridges the tools I already use: Zotero (literature management), reMarkable (reader + highlighter), and Obsidian (notes). It automates the whole pipeline:

$ distillate

    save to Zotero ──> auto-syncs to reMarkable
                            │
             read & highlight on tablet
             just move to Read/ when done
                            │
                            V
             auto-saves notes + highlights

It polls Zotero for new papers, uploads PDFs to the reMarkable via rmapi, then watches for papers you've finished reading in your Read folder. When it finds one, it:

- Parses .rm files using rmscene to extract highlighted text (GlyphRange items)

- Searches for that text in the original PDF using PyMuPDF and adds highlight annotations

- Enriches metadata from Semantic Scholar (publication date, venue, citations)

- Creates a structured markdown note with metadata, highlights grouped by page, and the annotated PDF (I keep mine in an Obsidian vault)

The core workflow just needs Zotero and a reMarkable — no paid APIs, no cloud backend, your notes stay on your machine. Optional extras if you plug them in:

- AI summaries via Claude (one-liner + key learnings from your highlights)

- Daily reading suggestions from your queue

- Weekly email digest via Resend

- Obsidian Bases database for tracking your reading

Stack: rmapi for reMarkable Cloud, rmscene for .rm parsing, PyMuPDF for PDF annotation. Python 3.10+, pip installable.

The trickiest part was highlight extraction: reMarkable stores highlighted text as GlyphRange items in a scene tree, and matching that text back to positions in the original PDF required fuzzy search with OCR cleanup, plus special merging logic for e.g. cross-page highlights. Happy to say it works well ~99% of the time now.
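The fuzzy-matching step can be sketched with difflib (a brute-force illustrative stand-in; the real pipeline with rmscene, PyMuPDF, and OCR cleanup is more involved):

```python
import difflib

# Fuzzy-match noisy highlight text back to a span of the source text,
# tolerating small extraction errors. Scans every window of the same length
# and keeps the best similarity ratio above a threshold.
def find_highlight(page_text, highlight, min_ratio=0.8):
    n = len(highlight)
    best_ratio, best_span = 0.0, None
    for start in range(0, max(1, len(page_text) - n + 1)):
        window = page_text[start:start + n]
        ratio = difflib.SequenceMatcher(None, window, highlight).ratio()
        if ratio > best_ratio:
            best_ratio, best_span = ratio, (start, start + n)
    return best_span if best_ratio >= min_ratio else None

page = "Attention is all you need, the authors argue."
span = find_highlight(page, "Attentlon is a1l you need")   # extraction noise
assert span is not None and page[span[0]:span[1]].startswith("Attention")
```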

Install: pip install distillate && distillate --init

Code: https://github.com/rlacombe/distillate

Site: https://distillate.dev

I built this for myself but would love feedback, especially from other reMarkable + Zotero users. What's missing from your workflow? What else should I add?

2

Proxima – local open-source multi-model MCP server (no API keys) #

github.com
0 comments · 12:20 PM · View on HN
I built Proxima, an open-source MCP server that runs locally and lets you orchestrate multiple AI providers in one workflow. I used it to run a multi-model “dev team” experiment (planning → coding → review → fixes). Looking for feedback on architecture, MCP tool design, and reliability/observability.


2

Voicetest – open-source test harness for voice AI agents #

0 comments · 3:49 PM · View on HN
We've been building voice agents across Retell, VAPI, LiveKit, and Bland, and the testing story is... rough. Every platform has its own config format, there's no shared way to define what "correct" looks like, and most teams end up doing manual QA by literally calling their agent and listening. So we built voicetest.

voicetest is an open source (Apache 2.0) test harness that works across voice AI platforms. You import your agent graph from any supported platform (or define one from scratch), write test scenarios with expected behaviors, and voicetest simulates conversations and evaluates them with LLM judges that score each turn 0.0-1.0 with written reasoning. It also ships global compliance evaluators for things like HIPAA, PCI-DSS, and brand voice consistency. The core abstraction is an AgentGraph IR that normalizes across platform formats, so you can convert between Retell, VAPI, LiveKit, and Bland configs and test them all the same way.

Quick start:

```
uv tool install voicetest
voicetest demo --serve
```

That gives you a web UI at localhost with a sample agent, test cases, and evaluation results you can poke at. There's also a CLI, a TUI, and a REST API. It integrates into CI/CD with GitHub Actions, uses DuckDB for persistence, and includes a Docker Compose dev environment with LiveKit, Whisper STT, and Kokoro TTS. If you have a Claude Code subscription, voicetest can pass through to it instead of requiring separate API keys for evaluation.

GitHub: https://github.com/voicetestdev/voicetest
Docs: https://voicetest.dev
API reference: https://voicetest.dev/api/

2

Lap – Fast photo browsing for libraries (Rust and Tauri) #

github.com
0 comments · 5:40 PM · View on HN
I’ve been a software engineer for 10+ years and a hobby photographer for even longer. Over time my family archive grew to 100k+ photos and videos, and browsing it smoothly on macOS became surprisingly hard.

So I started building Lap.

The current focus (v0.1.6) is simple: fast local photo library browsing and management.

- Smooth scrolling through very large libraries
- Works directly on your existing folders (no import/catalog)
- Fully local

Planned next: deduplication, photo comparison tools, and RAW support.

2

OneRingAI – Single TypeScript library for multi-vendor AI agents #

oneringai.io
0 comments · 11:48 AM · View on HN
OneRingAI started as the internal engine of an enterprise agentic platform we've been building for 2+ years. After watching customers hit the same walls with auth, vendor lock-in, and context management over and over, we extracted the core into a standalone open-source library. The two main alternatives didn't fit what we needed in production:

- LangChain: Great ecosystem, but the abstraction layers kept growing. By the time you wire up chains, runnables, callbacks, and agents across 50+ packages, you're fighting the framework more than building your product.

- CrewAI: Clean API, but Python-only, and the role-based metaphor breaks down when you need fine-grained control over auth, context windows, or tool failures.

OneRingAI is a single TypeScript library (~62K LOC, 20 deps) that treats the boring production problems as first-class concerns:

Auth as architecture, not afterthought. A centralized connector registry with built-in OAuth (4 flows, AES-256-GCM storage, 43 vendor templates). This came directly from dealing with enterprise SSO and multi-tenant token isolation — no more scattered env vars or rolling your own token refresh.

Per-tool circuit breakers. One flaky Jira API shouldn't crash your entire agent loop. Each tool and connector gets independent failure isolation with retry/backoff. We learned this the hard way running agents against dozens of customer SaaS integrations simultaneously.

Context that doesn't blow up. Plugin-based context management with token budgeting. InContextMemory puts frequently-accessed state directly in the prompt instead of requiring a retrieval call. Compaction removes tool call/result pairs together so the LLM never sees orphaned context.

Actually multi-vendor. 12 LLM providers native, 36 models in a typed registry with pricing and feature flags. Switch vendors by changing a connector name. Run openai-prod and openai-backup side by side. Enterprise customers kept asking for this — nobody wants to be locked into one provider.

Multi-modal built in. Image gen (DALL-E 3, gpt-image-1, Imagen 4), video gen (Sora 2, Veo 3), TTS, STT — all in the same library. No extra packages.

Native MCP support with a registry pattern for managing multiple servers, health checks, and auto tool format conversion.

What it's not: it's not a no-code agent builder, and it's not trying to be a framework for every possible AI use case. It's an opinionated library for people building production agent systems in TypeScript who want auth, resilience, and multi-vendor support without duct-taping 15 packages together.

2,285 tests, strict TypeScript throughout. The API surface is small on purpose — Connector.create(), Agent.create(), agent.run().

We also built Hosea, an open-source Electron desktop app on top of OneRingAI, if you want to see what a full agent system looks like in practice rather than just reading docs.

GitHub: https://github.com/Integrail/oneringai

npm: npm i @everworker/oneringai

Comparison with alternatives: https://oneringai.io/#comparison

Hosea: https://github.com/Integrail/oneringai/blob/main/apps/hosea/...

Happy to answer questions about the architecture decisions.

2

CrossingBench – Modeling when data movement dominates compute energy #

github.com
1 comment · 11:42 AM · View on HN
I built a small reproducible microbenchmark exploring when system energy becomes dominated by boundary crossings rather than intra-domain compute.

The model decomposes total energy into: C = C_intra + Σ V_b · c_b

Includes:

- CLI sweeps
- Elasticity metric (ε) as a dominance indicator
- CSV outputs
- Working draft paper
- DOI
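Plugging hypothetical numbers into the decomposition (illustrative values only; not from the benchmark's data):

```python
# C = C_intra + sum(V_b * c_b): total energy is intra-domain compute plus,
# for each boundary b, crossing volume V_b times per-crossing cost c_b.
def total_energy(c_intra, crossings):
    return c_intra + sum(v * c for v, c in crossings)

# Hypothetical system: cheap compute, costly DRAM/PCIe crossings
# (values chosen for illustration, in arbitrary energy units).
c_intra = 100.0
crossings = [(1024, 0.5), (4096, 0.25)]   # (V_b, c_b) per boundary
total = total_energy(c_intra, crossings)
boundary = total - c_intra
assert total == 1636.0
assert boundary / total > 0.5   # the crossing-dominated regime the benchmark probes
```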

Looking for critique, counter-examples, or prior related work I may have missed.

2

Nobody asked for OpenClaw in the cloud. I did it anyway #

accordio.ai
0 comments · 9:32 PM · View on HN
Contracts, invoices, time tracking, browser automation… all from WhatsApp, Telegram or Slack.

One message, and it happens. That's my vision.

Sonnet 4.6 just dropped.

I swapped it in. 2 min job.

Same agent, noticeably cheaper to run. Agentic tasks that used to cost me $0.15 are closer to $0.04 now. At 79 tools firing across hundreds of users… that’s the difference between a business and a burn rate.

Models are getting cheaper faster than people realize. And the next shift isn’t better chat… it’s MCP. Agents that don’t just talk but actually connect, act, and hand off to each other. That’s where this is going.

I came to realize one thing building this over 2 years and 4 complete rebuilds…

The agent layer is becoming infrastructure.

accordio.ai

2

Self-Hosted Task Scheduling System (Backend, UI, and Python SDK) #

github.com
0 comments · 5:09 PM · View on HN
Hey HN,

I’ve been working on a small side project called Cratos and wanted to share it to get feedback.

Cratos is a self-hosted task scheduling system. You configure a URL, define when it should be called, and Cratos handles scheduling, retries, execution history, and real-time updates. The goal was to have something lightweight and fully owned - no SaaS dependency, no external cron service.

It’s split into three repositories:

Backend service: https://github.com/Ghiles1010/Cratos

Web dashboard: https://github.com/Ghiles1010/Cratos-UI

Python SDK: https://github.com/Ghiles1010/Cratos-SDK

Why I built it:

In a few projects, I repeatedly needed reliable scheduled webhooks with:

Retry logic

Execution logs/history

A dashboard to inspect runs

Easy local deployment

I didn’t want to depend on external services or re-implement job scheduling from scratch every time. The goal was simple deployment (docker compose up) and full control.
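For illustration, the retry behavior I kept re-implementing is the classic exponential-backoff loop around each webhook call. This is a generic sketch of the pattern, not Cratos's actual implementation:

```python
import time

def backoff_delays(max_attempts, base_delay=1.0):
    """Delay before each attempt: base_delay * 2^attempt (1s, 2s, 4s, ...)."""
    return [base_delay * 2 ** i for i in range(max_attempts)]

def call_with_retries(fn, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run fn(); on failure, wait with exponential backoff and retry.
    A scheduler like Cratos runs this kind of loop for you and records
    each attempt in the execution history."""
    last_exc = None
    for attempt, delay in enumerate(backoff_delays(max_attempts, base_delay)):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if attempt < max_attempts - 1:
                sleep(delay)
    raise last_exc
```

The value of a dedicated service is that this loop, plus the logs and history around it, lives in one place instead of being copy-pasted into every project.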

It’s still early, but usable. I’d especially appreciate feedback from people who’ve built or operated schedulers, cron replacements, or internal job runners.

I'd love any feedback, and to hear how it would be useful to you.

1

We built a free VC platform that shares data between GPs and founders #

vistaley.com
0 comments · 7:44 PM · View on HN
Hey HN,

We built Vistaley — a two-sided platform for VC fund management. The GP-facing side (VentureLens) handles fund operations: deal pipeline, portfolio tracking, fund accounting, LP reporting. The founder-facing side (Harbour) provides free FP&A tools: financial dashboards, KPI tracking, burn rate analysis.

When a portfolio company enters their financials in Harbour, the data is immediately available in the GP's VentureLens dashboard — no CSV exports, no quarterly spreadsheets, no API integrations.

Interesting challenges we solved:

- Multi-currency — 60+ currencies with local display toggle. Every financial metric can be viewed in fund currency or portfolio company's local currency. Exchange rates stored per-snapshot, not globally, to preserve historical accuracy.

- Multi-jurisdiction accounting — 12 jurisdictions (including Kazakhstan, Pakistan, Bangladesh) with different tax frameworks, compliance requirements, and regulatory reporting standards. One accounting standard per fund, enforced at the database level.

- The reporting incentive problem — VCs hate that founders don't report. Founders hate reporting. Our solution: give founders tools good enough that entering data is the reporting. The FP&A dashboard IS the GP's portfolio view. Aligned incentives through shared utility.
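A toy illustration of the per-snapshot exchange-rate design mentioned above (my own sketch, not Vistaley's schema): each snapshot carries the rate in effect when it was recorded, so historical metrics don't shift when today's rate changes.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    """One reporting period for a portfolio company. The FX rate is frozen
    at snapshot time rather than looked up from a global table, so Q1
    numbers stay the same no matter what the rate does later."""
    date: str
    revenue_local: float
    local_ccy: str
    rate_to_fund_ccy: float  # frozen at snapshot time

    def revenue_fund_ccy(self) -> float:
        return self.revenue_local * self.rate_to_fund_ccy

# Hypothetical KZT-denominated company reporting into a USD fund.
q1 = Snapshot("2025-03-31", 1_000_000, "KZT", 0.0021)
q2 = Snapshot("2025-06-30", 1_200_000, "KZT", 0.0019)
print(q1.revenue_fund_ccy(), q2.revenue_fund_ccy())
```

With a global rate table, a rate update would silently rewrite Q1's fund-currency revenue; freezing the rate per snapshot preserves historical accuracy.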

Why emerging markets? Most fund management tools price out or ignore funds in Central/South Asia and Africa. We're targeting the $5M-$500M fund range in high-growth regions with pricing starting at free, up to $399/mo. For context, enterprise tools in this space charge $50K+/year.

1

Palettepoint – Create AI and Nature powered color palettes in seconds #

palettepoint.com
0 comments · 8:17 PM · View on HN
I built this because choosing colors is one of those tasks that seems simple but ends up taking hours. You keep randomizing and nothing feels right, or you browse inspiration boards hoping something works. So I made a tool where you describe what you need. Type "warm Japanese autumn" or "90s rave flyer" and pick a style (analogous, triadic, monochromatic, etc.) and how many colors (3-7). It returns a named palette with descriptions. It works with images too: upload a photo and it extracts a palette. You can combine both, so uploading a photo and adding "make it cooler" works. It runs on GPT-5.2 with vision.

You can export palettes as CSS variables, SCSS, Tailwind config, or JSON. Copy individual colors in hex, RGB, HSL, or CMYK. There's a live preview that shows the palette applied to buttons, cards, and UI components so you can evaluate it before committing.

There's also a gallery with curated palettes you can browse, filter by style, and favorite. Each palette has its own shareable link.

There's also a set of free tools:

- Color converter (paste a hex code, get every format)
- Contrast checker (WCAG AA/AAA)
- Color mixer
- Gradient generator
- Image color extractor
- Manual palette builder
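As a toy example of the kind of conversion the color converter performs (not Palettepoint's code), hex to RGB is a two-character slice per channel:

```python
def hex_to_rgb(hex_color: str) -> tuple:
    """'#ff8800' -> (255, 136, 0). Accepts a leading '#' or not."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(r: int, g: int, b: int) -> str:
    """(255, 136, 0) -> '#ff8800'."""
    return "#{:02x}{:02x}{:02x}".format(r, g, b)

print(hex_to_rgb("#ff8800"))     # (255, 136, 0)
print(rgb_to_hex(255, 136, 0))   # #ff8800
```

HSL and CMYK conversions follow from the same RGB triple with a bit more arithmetic.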

I'd love to hear your thoughts. What's missing? What would make this your go-to color palette tool?

https://palettepoint.com

1

O-O – polyglot HTML files that update themselves (bash/LLM) #

github.com
0 comments · 8:19 PM · View on HN
I just wanted access to information that updates itself at some interval, without having to run my own server or deal with databases. It's also nice that I can share the files with others and they mostly just "work", since it's HTML. I also wanted to have a bit of fun.

So: no server, no database, no build step. The file is the "app".

Each .o-o.html file is a polyglot — valid HTML and valid bash. Open it in a browser to read a formatted article with TOC, citations, and images. Run it with bash to have an AI agent research the web and rewrite the article in-place with fresh information.

```
open article.o-o.html   # read it
bash article.o-o.html   # update it
```

Every document embeds an update contract — a JSON block that tells the agent what to research, which sections to maintain, what sources to trust, and how much to spend. The agent reads the contract, searches the web, and surgically edits only the <article> content, manifest, and source cache. The shell never sees the article text.

```
bash index.o-o.html --new            # create your own article
bash your-article-title.o-o.html     # populate / update the article
bash index.o-o.html --update-all     # update and index all articles in the folder
```

The index file doubles as a library manager — it generates new documents from a template and batch-updates stale ones.

Requirements: bash 3.2+ and the Claude Code CLI. No Python, Node, or jq.

https://github.com/jahala/o-o

1

I built a tool to check if someone is real online #

nexid.id
0 comments · 2:01 PM · View on HN
Over the past year I kept running into the same problem: it’s getting harder to tell who’s real online.

Fake freelancers, fake founders, impersonation accounts, recycled profile photos — everything looks legit on the surface. Reverse image search helps a bit, but it’s fragmented and slow if you want a quick signal.

So I started building NexID — a simple identity search tool that tries to answer one question:

“Does this person actually exist across the web?”

You can drop in a photo, username, or basic info and it scans public signals across platforms to see if there’s a consistent digital footprint. The goal isn’t surveillance or background checks — just helping people avoid obvious catfish/scam situations or verify who they’re dealing with.

I built the first version mainly for:

- remote hiring
- online collaborations
- marketplaces
- dating / social
- OSINT curiosity

Still very early and rough around the edges. Would genuinely love feedback from the HN community on:

1. Does this feel useful or unnecessary?
2. Where would you realistically use something like this?
3. What would make you trust a tool like this?

Happy to answer anything about how it works or the challenges building it.

Thanks

1

I built a structured knowledge registry for autonomous agents #

0 comments · 2:15 PM · View on HN
I built an experimental platform called Samspelbot — a structured knowledge registry designed specifically for autonomous agents.

Unlike traditional Q&A platforms, submissions are strictly schema-validated JSON payloads. Bots can:

- Submit structured problem statements
- Provide structured solution artifacts
- Vote and confirm reproducibility
- Earn reputation based on contribution quality

Humans can browse, but only registered bots can contribute.

The system is API-first and includes:

- Tier-based identity system
- Reputation-weighted ranking
- Reproducibility confirmations
- Live playground for testing endpoints

It’s currently a centralized prototype, seeded with controlled bot activity to validate ecosystem dynamics.

I’d appreciate feedback from developers and researchers working on AI agents or automation systems.

Live demo: https://samspelbot.com
API docs: https://samspelbot.com/docs
Playground: https://samspelbot.com/playground
GitHub (docs + example client): https://github.com/prasadhbaapaat/samspelbot

Happy to answer questions.

1

Pcons: new software build tool in Python, inspired by SCons and CMake #

github.com
0 comments · 2:18 PM · View on HN
I was one of the original developers of SCons and helped maintain it for years. I love that Python is the configuration language — it makes build descriptions incredibly flexible. But over time, working with CMake on other projects, I came to appreciate things SCons doesn't do as well: the separation between describing a build and executing it, transitive dependency propagation, package manager integration, and modern Python semantics. I'd been thinking about a fresh start for years but never had the time. Recently, working collaboratively with Claude Code, it finally became feasible. So, meet pcons.

  What it is: Pcons is a build system where Python scripts describe what to build, and Ninja (or Make) executes it. There's no custom DSL — your build files are real Python with full IDE support, debugging, and testing. The core is completely language-agnostic: it knows nothing about compilers or C++. All tool-specific knowledge lives in pluggable toolchains and tools, so building LaTeX documents or game assets should be as natural as building C++.

  How it's different from SCons: Pcons doesn't execute builds itself. It generates Ninja files, so incremental builds are fast and you get Ninja's parallelism for free. Environments use namespaced tools (env.cc.flags, env.cxx.flags, env.link.libs) instead of flat variables, eliminating the CFLAGS vs CXXFLAGS confusion. Targets have CMake-style usage requirements (target.public.include_dirs, target.public.link_libs) that propagate transitively through the dependency graph. And unlike SCons, unknown variables are errors, not silent empty strings.
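To make the transitive-propagation idea concrete, here's a toy model in plain Python (my sketch of the CMake-style concept, not pcons's actual API): a target compiles with the public include dirs of its entire dependency graph.

```python
# Toy model of CMake-style "usage requirements" propagation; not pcons code.
class Target:
    def __init__(self, name, public_include_dirs=(), deps=()):
        self.name = name
        self.public_include_dirs = list(public_include_dirs)
        self.deps = list(deps)

    def effective_include_dirs(self):
        """Depth-first walk: a target sees its own public include dirs plus
        those of everything it (transitively) depends on, deduplicated."""
        seen, out = set(), []

        def walk(t):
            for dep in t.deps:
                walk(dep)
            for inc in t.public_include_dirs:
                if inc not in seen:
                    seen.add(inc)
                    out.append(inc)

        walk(self)
        return out

zlib = Target("zlib", public_include_dirs=["third_party/zlib/include"])
png = Target("png", public_include_dirs=["third_party/png/include"], deps=[zlib])
app = Target("app", deps=[png])
print(app.effective_include_dirs())
# ['third_party/zlib/include', 'third_party/png/include']
```

The point of doing this in the build tool rather than by hand is that `app` never has to know `png` uses `zlib`; declaring the direct dependency is enough.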

How it's different from CMake: No DSL to learn — it's just Python. Variable substitution is recursive and explicit. The builder/toolchain system is fully extensible, so third-party builders are first-class citizens. And you can use it as `uvx pcons` for true zero-install (great for other open source projects).

Major features as of v0.7:

- Toolchains for GCC, LLVM/Clang, MSVC, and clang-cl with auto-detection (including
- Generators for Ninja, Makefile, Xcode, compile_commands.json, and Mermaid/DOT dependency diagrams
- Package management via pkg-config, Conan 2.x, and a pcons-fetch tool for building dependencies from source
- Compiler cache support (ccache/sccache), semantic presets (warnings, sanitizers, LTO, hardening), cross-compilation presets (Android NDK, iOS, WebAssembly)
- Platform-specific helpers: macOS bundles/frameworks/.pkg/.dmg, Windows manifests/MSIX, and an msvcup module for installing MSVC without Visual Studio
- An extensible module/add-on system for domain-specific tasks
- Debug tracing (--debug=resolve,subst) with source-location tracking on every node
- Plenty of examples included; unit tests for all features; tested on Mac, Windows, and Linux

It's still under active development — ready for experimentation, not production unless you're brave. I'd love bug reports, feedback on the API design, and ideas about what you'd want from a modern Python-based build system. Open source, MIT licensed.

GitHub: https://github.com/DarkStarSystems/pcons | Docs: https://pcons.readthedocs.io | PyPI: `uvx pcons` or `pip install pcons`

1

X-auto-translator (Chrome extension for translating X posts) #

github.com
0 comments · 2:18 PM · View on HN
I built this Chrome extension because my X timeline contains posts in many languages, and constantly switching to external translators breaks reading flow.

This extension automatically translates posts inline.

Tech details:

- Chrome Extension (HTML, CSS, JavaScript)
- Uses Tesseract.js (Apache-2.0 license) for OCR where needed
- Fully client-side
- Open source (Apache-2.0 license)

GitHub: https://github.com/ShinobuMiya/x-auto-translator

Feedback and suggestions welcome.

1

StatusDude – Uptime monitoring internal services with K8s autodiscovery #

statusdude.com
0 comments · 2:20 PM · View on HN
Hey HN, I'm Oskar. For the past few months I've been building StatusDude - an uptime monitoring tool with private agents that auto-detect your Kubernetes resources.

I run a bunch of stuff across multiple orgs: different clusters, internal networks, self-hosted, GKE, EKS, etc. Monitoring all of it without Datadog money was getting tough, and most tools don't even support internal networks. So, here we are.

A tiny async agent sits inside your network and phones home over HTTPS. No inbound ports, no VPN, no firewall rules. One container, one helm install, done. A single instance handles 10k+ monitors comfortably. The agent pulls check definitions from the cloud, runs them locally, and uploads raw results. All evaluation is server-side - the agent stays dead simple, and the cloud decides what's actually down vs. a blip.

For Kubernetes, it auto-discovers Ingresses, Services, and HTTPRoutes. Deploy something new and it just gets picked up; monitors and status pages spin up automatically.

During development I found out I don't know how to use Celery properly. Went with ARQ instead - 50k+ jobs/min, no drama. After I modified it a bit, that is ;-)

It's not a full observability platform - no incident management, no on-call. Just monitoring, status pages, and notifications. If you want straightforward uptime monitoring that works behind firewalls, give it a go and please leave feedback in the comments! New signups currently get the Team plan unlocked for free; I want people to test the full thing. Happy to answer any questions about the architecture.

https://statusdude.com
https://artifacthub.io/packages/helm/statusdude-agent/status...

1

Stellar – CLI Theme Manager and Web Hub for Starship Prompts #

github.com
0 comments · 2:46 PM · View on HN
I built this because discovering good Starship themes usually meant digging through random dotfiles repos on GitHub. I also switch my Starship prompt whenever I change my wallpaper (so quite often), and I wanted something easier than manually copying starship configs around.

Stellar provides a hub to browse community themes with screenshots, preview them in a test terminal before applying, and switch local & community prompts with one command.

Tech: Go CLI (single binary) + Next.js hub + Supabase. Themes stored locally with starship.toml symlinked to them.

Just launched v1.0.0. Happy to answer questions, and I'd love some feedback :) Also, feel free to upload your own starship prompt :)

1

Quick Issues: A Fast Mobile Issue Capture for GitHub, GitLab, and Gitea #

apps.apple.com
0 comments · 2:50 PM · View on HN
I got frustrated with how GitHub and GitLab's mobile apps handle issue creation. You need to be online, the flow is slow, and if you're on a self-hosted instance you're out of luck entirely.

So I built Quick Issues. It's a lightweight Swift app with an offline buffer -- you capture the issue, it syncs when you have connectivity. Supports GitHub, GitLab, and Gitea/Forgejo (including self-hosted and local with PAT).

A few technical notes for anyone interested: This was my first time using GRDB with SQLite instead of default Swift data structures, and the performance difference was significant. Setting up proper OAuth2 flows and GitHub/GitLab apps was more of an adventure than expected, but it's solid now.

Free for a single account/instance, paid tier if you juggle multiple providers.

My background is in GTD and data analytics, not traditional software engineering, so I'm genuinely curious: how does issue capture fit into your development workflow? Do you batch-create issues, capture them the moment they come up, use templates all the time or treat issues more as documentation only?

1

cc-costline – See your Claude Code spend right in the statusline #

github.com
0 comments · 3:10 PM · View on HN
I've been using Claude Code as my daily driver and had no easy way to track spending over time. The built-in statusline shows session stats, but nothing about historical cost or how close I am to hitting usage limits.

cc-costline replaces Claude Code's statusline with one that shows rolling cost totals, usage limit warnings, and optionally your rank on the ccclub leaderboard — all in a single line:

```
14.6k ~ $2.42 / 40% by Opus 4.6 | 5h: 45% / 7d: 8% | 30d: $866
```

What each segment means:

- `14.6k ~ $2.42 / 40% by Opus 4.6` — session tokens, cost, context window usage, model
- `5h: 45% / 7d: 8%` — Claude's 5-hour and 7-day usage limits (color-coded: green → orange → red)
- `30d: $866` — rolling 30-day total (configurable to 7d or both)

Setup is one command:

```
npm i -g cc-costline && cc-costline install
```

1

Mtb – An MCP sanity checker for vibe coding #

github.com
0 comments · 1:44 PM · View on HN
I originally conceived of Make the Bed (named after the Calvin & Hobbes strip where Calvin spends all day building a bed-making robot that never works instead of making his bed) as a tongue-in-cheek response to some of the vibe coded projects I've seen recently. To my surprise it works as intended and prompts users to consider many factors (existing solutions, maintenance costs, etc.) before starting a new project or feature. It also shows complexity metrics via scc.

Check out the demo prompts and responses listed in the README: https://github.com/dbravender/mtb?tab=readme-ov-file#demonst...

Note: Sometimes you have to explicitly ask the LLM to consult mtb but it often does this on its own after reading the tool descriptions.

The dogfooding section shows the results of mtb's tools run on itself: https://github.com/dbravender/mtb?tab=readme-ov-file#eating-...

Contributions are welcome but I'm looking to keep this as light as possible.

And yes, mtb was itself vibe coded. The irony is not lost on me.

1

Built a Product Hunt alternative with user–product matching #

productlaunch.cc
0 comments · 3:46 PM · View on HN
I built a Product Hunt alternative focused on matching new products to relevant users instead of just listing them.

Most launch platforms give you traffic, but not necessarily real users. I’m experimenting with a recommendation engine that clusters users by interests and behavior, then routes launches to people likely to care.

Current version is early but live: quick AI-assisted submission, data collection on interactions, and product–user matching over time.

Built mainly to explore recommendation systems and better ways for founders to get their first real users.

Curious what HN thinks — what would make a launch platform actually useful to you?

1

PokeDex++ – I rebuilt my Pokémon app as a web app #

pokedexplus.shop
0 comments · 3:52 PM · View on HN
Hi HN,

I originally built PokeDex++ as a mobile app using React Native and Expo. It was designed to be more than just a Pokédex — it included features like a card collection system, buddy progression, virtual coins, and detailed stat pages.

But I couldn’t afford the Play Store developer fee at the time, so the app never got published.

Instead of abandoning the project, I decided to rebuild the entire thing as a web app using React and deploy it independently.

The current web version includes:

• Individual pages for each Pokémon
• Card collection system with unlockable skins
• Virtual currency (DexCoins)
• Buddy progression mechanics
• Fast search and navigation

This is a solo passion project, and I’m continuing to improve it.

I’d really appreciate any feedback, suggestions, or criticism.

Thanks for checking it out.

1

Broomy – Open-source app for working with many AI agents at once #

broomy.org
0 comments · 3:55 PM · View on HN
Hi HN, I'm Rob. I built Broomy because I got frustrated with the one-thing-at-a-time workflow of existing coding tools.

When I work with AI coding agents, I typically have 5-10 tasks going at once across different branches. The agent works on one thing while I review another, merge a third, and kick off a fourth. Existing IDEs aren't built for this — they assume you're doing one thing at a time.

Broomy is a desktop app (Electron + React) that lets you:

- Run lots of agent sessions simultaneously and see at a glance which are working, idle, or need your attention
- Work with any terminal-based agent (Claude Code, Aider, Codex, etc.)
- Review code, manage branches, and handle merges with AI assistance
- Use built-in IDE features (Monaco editor, file explorer, git integration, inline terminals) — all designed around multi-agent workflows

I've been using it daily for a few weeks and my productivity has dramatically improved compared to working in Cursor. The key insight is that most of the time you spend "coding with AI" is actually waiting — and Broomy lets you fill that wait time with other tasks.

This is a first public release (v0.6.0). Pre-built binaries are available for macOS. It should work on Linux and Windows too — build from source is straightforward (clone, pnpm install, pnpm start:dist).

MIT licensed. Built as a personal project, not affiliated with my employer.

Repo: https://github.com/Broomy-AI/broomy
Website: https://broomy.org

Happy to answer questions.

1

Neko – AI agent runtime that fits on a Raspberry Pi Zero 2W #

github.com
1 comment · 4:20 AM · View on HN
I wanted a personal AI agent I could leave running on cheap hardware, a Pi Zero 2W or a $4/mo VPS, without much infrastructure overhead. So I built Neko.

Memory is markdown files the agent reads and writes itself. There's a short-term layer for today's and yesterday's session logs, a long-term MEMORY.md capped at 2000 chars that forces the agent to compact and curate rather than just accumulate, and a searchable recall folder for older conversations. The files are plain text you can read, edit, and commit to git.

It also supports MCP for connecting external tools, and Telegram as a messaging front-end.

Cron jobs are first-class. You can schedule them from the CLI, or the agent can create them itself mid-conversation. If a user on Telegram says "remind me every morning at 9am", the agent creates the job and routes the results back to that chat.

Ships as a single static binary written in Rust.

1

Twick – React Video Editor SDK with AI Captions and MP4 Export #

development.d1vtsw7m0lx01h.amplifyapp.com
0 comments · 6:37 PM · View on HN
I have been building Twick, a React video editor library & SDK for building custom video applications.

It includes:

- AI caption generation (Google Speech-to-Text)

- React timeline editor

- Canvas-based editing tools

- Client-side rendering

- Serverless MP4 export

- TypeScript SDK

The goal is to help developers ship video SaaS and automation tools without rebuilding the entire editor stack.

Would love your feedback.

Try it: https://development.d1vtsw7m0lx01h.amplifyapp.com/

GitHub: https://github.com/ncounterspecialist/twick

1

ResuOpt – AI resume optimizer with no subscriptions ($4.99 one-time) #

resuopt.com
0 comments · 6:37 PM · View on HN
Hey HN, I'm a career coach and I've spent years watching people get burned by resume tools. The pattern is always the same: free trial that locks your download, credit packs that expire, subscription tiers for features you don't need, and resumes that look flashy but get filtered out by ATS systems.

Job seekers don't need 50 resume generations. They need one good resume, tailored to the job they're applying for.

So I built ResuOpt. Upload your resume + paste the job description, and it generates a clean, one-page resume formatted the way HR actually expects. You see a full preview before paying anything. If you like it, it's $4.99 to download the DOCX and PDF. That's it — no account required, no credits, no recurring charges.

The resume content is never stored on our servers — processed in memory and discarded.

Would love feedback, especially on the output quality. Does the formatting match what you'd expect to see in a competitive application?

https://www.resuopt.com

1

Productmap – local-first visual product planning for humans and agents #

github.com
0 comments · 7:27 PM · View on HN
Hi HN,

I'm working on getting a new startup off the ground that has a lot of moving pieces, components, and product areas, and I was starting to struggle to see how far along everything was.

I decided to build this as a way to help me visually see where things are with my product, and as a way to connect specific tasks with specific Claude Code sessions.

Key features:

- No telemetry
- Tasks are displayed as draggable, resizable cards on a recursive canvas
- Tasks can have subtasks, markdown plan docs, open & resolved questions, and their own dedicated Claude Code process
- The terminal tab automatically prompts Claude with context from the task
- All data is stored locally as text files in a {project}/productmap directory, in a human (and agent) readable format

The README includes a screen-capture that illustrates how it looks with a full project.

Built using SvelteKit and Electron.