Show HN for February 17, 2026
89 posts

I wrote a technical history book on Lisp #
My favorite languages are Smalltalk and Lisp, but as an Emacs user, I've been using the latter for much longer, and for my current projects Common Lisp is a better fit, so I call myself "a Lisp-er" these days. If people like what I did, I do have plans to write some more (but probably only after I retire - writing next to a full-time job is hard). Maybe on Smalltalk, maybe on computer networks - two topics close to my heart.
And a shout-out to Dick Gabriel, he contributed some great personal memories about the man who started it all, John McCarthy.
I taught LLMs to play Magic: The Gathering against each other #
I'm launching a LPFM radio station #
KPBJ is a freeform community radio station. Anyone in the area is encouraged to get a timeslot and become a host. We make no curatorial decisions. It's sort of like public access or a college station in that way.
This month we launched our internet stream and onboarded about 60 shows. They are mostly music, but there are a few talk shows. We are restricting all shows to monthly time slots for now, but this will change in the near future as everyone gets more familiar with the systems involved.
All shows are pre-recorded until we can raise the money to get a studio.
We have a site secured for our transmitter but we need to fundraise to cover the equipment and build out costs. We will be broadcasting with 100W ERP from a ridgeline in the Verdugos at about 1500ft elevation. The site will need to be off grid so we will need to install a solar system with battery backup. We are planning to sync the station to the transmit site with 802.11ah.
This is a pretty substantial thing involving a bunch of different people and projects. I've built all of our web infrastructure using Haskell, NixOS, Terraform, and HTMX: https://github.com/solomon-b/kpbj.fm
The station is managed by a 501c3 non-profit we created. We are actively seeking fundraising, especially to get our transmit site up and running. If you live in the area or want to contribute in any way then please reach out!
Pg-typesafe – Strongly typed queries for PostgreSQL and TypeScript #
Until now, I typed the results manually and relied on tests to catch problems. While this is OK in, e.g., Go, it is quite annoying in TypeScript: first, because of the more powerful type system (it's easier to guess that updated_at is a date than to guess whether it's nullable), and second, because of driver idiosyncrasies (INT4s are deserialised as JS numbers, but INT8s are deserialised as strings).
So I wrote pg-typesafe, with the goal of making it the least burdensome option: you call queries exactly the same way as you would with node-pg, and they are fully typed.
It's very new, but I'm already using it in a large-ish project, where it found several bugs and footguns, and also allowed me to remove many manual type definitions.
Andrej Karpathy's microgpt.py to C99 microgpt.c – 4,600x faster #
The Punchline: I made it go 4,600x faster in pure C code, no dependencies and using a compiler with SIMD auto-vectorisation!!!
Andrej recently released microgpt.py - a brilliant, atomic look at the core of a GPT. As a low-latency developer, I couldn't resist seeing how fast it could go when you get closer to the metal.
So just for funzies, I spent a few hours building microgpt-c, a zero-dependency and pure C99 implementation featuring:
- 4,600x faster training vs the Python reference (tested on a MacBook Pro M2 Max); on Windows, it is 2,300x faster.
- SIMD auto-vectorisation for high-speed matrix operations.
- INT8 quantisation (reducing weight storage by ~8x). Training is slightly slower, but the storage reduction is significant.
- Zero Dependencies - just pure logic.
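The INT8 quantisation step can be sketched in Python (a minimal illustration; the C implementation differs, and symmetric per-tensor scaling is an assumption here):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantisation: floats -> (int8 values, scale)."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate floats from the quantised values."""
    return [v * scale for v in q]

w = [0.5, -1.0, 0.25, 0.0]
q, s = quantize_int8(w)
back = dequantize_int8(q, s)  # close to w, within one quantisation step
```

Storage drops from 4 or 8 bytes per weight to 1 byte plus a single scale per tensor, which is where the ~8x figure comes from for 64-bit weights.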
The amalgamation image below is just for fun (and to show off the density!), but the GitHub repo contains the fully commented, structured code for anyone who wants to play with on-device AI.
I have started to build something useful, like a simple C code static analyser - I will do a follow-up post.
Everything else is just efficiency... but efficiency is where the magic happens
Continue – Source-controlled AI checks, enforceable in CI #
Continue (https://docs.continue.dev) runs AI checks on every PR. Each check is a source-controlled markdown file in `.continue/checks/` that shows up as a GitHub status check. They run as full agents: not just reading the diff, but able to read/write files, run bash commands, and use a browser. If a check finds something, it fails and offers a one-click diff to accept. Otherwise, it passes silently.
Here’s one of ours:
.continue/checks/metrics-integrity.md
---
name: Metrics Integrity
description: Detects changes that could inflate, deflate, or corrupt metrics (session counts, event accuracy, etc.)
---
Review this PR for changes that could unintentionally distort metrics.
These bugs are insidious because they corrupt dashboards without triggering errors or test failures.
Check for:
- "Find or create" patterns where the "find" is too narrow, causing entity duplication (e.g. querying only active sessions, missing completed ones, so every new commit creates a duplicate)
- Event tracking calls inside loops or retry paths that fire multiple times per logical action
- Refactors that accidentally remove or move tracking calls to a path that executes with different frequency
Key files: anything containing `posthog.capture` or `trackEvent`
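The first bullet's bug class can be shown in a few lines of Python (a hypothetical session model, not Continue's code):

```python
sessions = []  # each session: {"id": int, "status": str}

def find_or_create_buggy(session_id):
    # BUG: the "find" only matches *active* sessions, so a completed
    # session is "not found" and a duplicate row is created.
    for s in sessions:
        if s["id"] == session_id and s["status"] == "active":
            return s
    s = {"id": session_id, "status": "active"}
    sessions.append(s)
    return s

find_or_create_buggy(1)
sessions[0]["status"] = "completed"  # the session finishes
find_or_create_buggy(1)             # a new commit arrives for the same session
# len(sessions) is now 2: a silent duplicate that distorts session counts
```

No error is raised and no test fails, which is exactly why this class of bug is worth an automated check.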
This check passed without noise for weeks, but then caught a PR that would have silently deflated our session counts. We added it in the first place because we’d been burned in the past by bad data, only noticing when a dashboard looked off.
To get started, paste this into Claude Code or your coding agent of choice:
Help me write checks for this codebase: https://continue.dev/walkthrough
It will:
- Explore the codebase and use the `gh` CLI to read past review comments
- Write checks to `.continue/checks/`
- Optionally, show you how to run them locally or in CI
Would love your feedback!
6cy – Experimental streaming archive format with per-block codecs #
I’ve been experimenting with archive format design and built 6cy as a research project.
The goal is not to replace zip/7z, but to explore:
- block-level codec polymorphism (different compression per block)
- streaming-first layout (no global seek required)
- better crash recovery characteristics
- plugin-based architecture so proprietary codecs can exist without changing the format
Right now this is an experimental v0.x format. The specification may still change and compatibility is not guaranteed yet.
I’m mainly looking for feedback on the format design rather than performance comparisons.
Thanks for taking a look.
Price Per Ball – Site that sorts golf balls on Amazon by price per ball #
For someone who can't always keep it in the fairway, golf balls can get rather expensive, so I decided to build a way for me to view Amazon listings by how much they cost per ball. Hence the name of the website.
The site is hosted on Cloudflare Pages, and I use GitHub Actions to trigger a Python script that fetches and checks the prices. It runs twice a day. If the script encounters any new ASINs, it stores them for future checks, so the list of golf balls being price-checked should keep growing over time. Changes are then pushed to Cloudflare Pages.
There can sometimes be some pricing oddities when the product title says one count, but the unit count being returned from Amazon is another number, so I'm trying to add some checks to help accommodate for that. Right now, I just have some manual overrides for certain ASINs, but I'm looking to improve on it in the future.
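The per-ball computation, including a sanity check between the count parsed from the title and the unit count the API returns, might look like this in Python (field names here are hypothetical, not the actual script):

```python
import re

def price_per_ball(listing):
    """Compute price per ball and flag listings where the count in the
    title disagrees with the unit count returned for the ASIN."""
    m = re.search(r"(\d+)\s*(?:count|pack|balls?)", listing["title"], re.I)
    title_count = int(m.group(1)) if m else None
    count = listing["unit_count"]
    mismatch = title_count is not None and title_count != count
    return round(listing["price"] / count, 4), mismatch

ppb, flag = price_per_ball(
    {"title": "Pro Golf Balls, 12 Count", "price": 23.88, "unit_count": 12})
# ppb == 1.99, flag is False
```

Flagged listings could then fall back to a manual override table keyed by ASIN, as the post describes.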
The frontend is just some basic HTML, CSS and JavaScript.
Listings on Amazon can sometimes be inconsistent because, for example, product titles will say used balls, but the seller lists them as new. I added some filters that allow you to exclude used/recycled balls, plastic golf balls, etc. You can also filter by brand.
Give it a spin and let me know if you run into any issues or have any feature ideas to make it more useful.
PIrateRF – Turn a $20 Raspberry Pi Zero into a 12-mode RF transmitter #
Everything runs through a browser interface. Upload audio files, type messages, configure frequencies, and transmit. The Pi's GPIO pin does the actual RF generation via rpitx — no external radio hardware needed.
Written in Go with a real-time WebSocket frontend. Includes a preset system, playlist builder, and multi-device support (connect multiple phones/laptops to the AP and share control).
Without an antenna the signal barely reaches 5 meters, which makes it perfect for indoor experimentation and learning about RF protocols without causing interference. All my testing was done indoors with no antenna attached.
Built this because I wanted a single portable tool to experiment with every common RF transmission mode without hauling around expensive SDR equipment.
Pre-built SD card image available if you want to skip the build process.
GitHub: https://github.com/psyb0t/piraterf Blog post: https://ciprian.51k.eu/piraterf-turning-a-20-raspberry-pi-ze...
Data Studio – Open-Source Data Notebooks #
Try it: https://local.dataspren.com (no account needed, runs locally)
More information: https://github.com/dataspren-analytics/data-studio
I love working with data (Postgres, SQL, DuckDB, DBT, Iceberg, ...). I always wanted a data exploration tool that runs in my browser and just works. Without any infra or privacy concerns (DuckDB UI came quite close).
Features:
- Data Notebooks
- SQL cells work like DBT models (they materialize to views)
- Use Python functions inside of SQL queries
- Use DB views directly in Python as dataframes
- Transform Excel files with SQL
- You can open .parquet, .csv, .xlsx, .json files nicely formatted
If you like what you see, you can support me with a star on GitHub. Happy to hear your feedback <3
Minimalist Glitch Art Maker (100% client-side) #
Bashtorio – Factorio-Like in the Browser Backed by a Linux VM #
You place "Input" machines that produce streams of bytes. You use conveyor belts to feed those bytes through other machines which produce transformations, and then to "Output" machines which produce audio or visual effects.
The game uses v86 to run a real Linux VM in the browser. I use the 9p filesystem to enable IPC via FIFO pipes, so shell commands can stream data continuously rather than just running once.
Features:
- 30+ machine types (sources, filters, routers, packers, audio synthesis, displays)
- "Command" machines that pipe data through real shell commands
- Streaming mode for persistent processes
- Shareable factories via URL
- Chiptune audio engine (oscillators, Game Boy noise channel) plus an additional 808 drum machine
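The machine-and-belt model can be sketched in Python with generators standing in for machines (a loose analogy only; the real game streams bytes through shell commands in a v86 Linux VM over FIFO pipes):

```python
def source(data):
    """An "Input" machine: produces a stream of bytes."""
    for b in data:
        yield b

def upper(stream):
    """A transform machine, roughly like piping through `tr a-z A-Z`."""
    for b in stream:
        yield ord(chr(b).upper()) if 97 <= b <= 122 else b

def sink(stream):
    """An "Output" machine: collects the transformed bytes."""
    return bytes(stream)

# Wire the belt: source -> transform -> output.
out = sink(upper(source(b"hello, factory")))
# out == b"HELLO, FACTORY"
```

Generators give the same pull-based streaming feel: bytes flow continuously through the pipeline rather than being processed in one batch.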
Try the presets in the menu bar (top left) to see what's possible. Requires WASM and may take a moment to load on slower connections.
Live: https://bashtorio.xyz Source: https://github.com/EliCunninghamDev/bashtorio
I built a simulated AI containment terminal for my sci-fi novel #
Cycast – High-performance radio streaming server written in Python #
Writing a C++20 M:N Scheduler from Scratch (EBR, Work-Stealing) #
Key Technical Features:
M:N Scheduling: Maps M coroutines to N kernel threads (Work-Stealing via Chase-Lev deque).
Memory Safety: Implements EBR (Epoch-Based Reclamation) to manage memory safely in lock-free structures without GC.
Visualizations: I used Manim (the engine behind 3Blue1Brown) to create animations showing exactly how tasks are stolen and executed.
Why I built it: To bridge the gap between "using coroutines" and "understanding the runtime." The code is kept minimal (~1k LOC core) so it can be read in a weekend.
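The Chase-Lev access pattern can be shown in miniature with a single-threaded Python sketch (the real structure is a lock-free C++ deque; this only illustrates which end the owner and the thieves touch):

```python
from collections import deque

class WorkStealingDeque:
    """Owner works LIFO at one end; thieves steal FIFO from the other."""
    def __init__(self):
        self._d = deque()
    def push(self, task):
        # Owner only: enqueue a new task at the bottom.
        self._d.append(task)
    def pop(self):
        # Owner only: take the newest task (likely cache-warm).
        return self._d.pop() if self._d else None
    def steal(self):
        # Other workers: take the oldest task, minimising contention
        # with the owner's end.
        return self._d.popleft() if self._d else None

q = WorkStealingDeque()
for t in ("t1", "t2", "t3"):
    q.push(t)
q.pop()    # -> "t3": owner takes the newest
q.steal()  # -> "t1": thief takes the oldest
```

In the real scheduler the two ends are synchronised with atomics rather than a lock, and freed nodes are reclaimed via EBR instead of Python's GC.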
I graded 234 stocks on free cash flow (not earnings) #
I curated 130 US PDF forms and made them fillable in browser #
I built SimplePDF 7 years ago, with the vision from day one to help get rid of bureaucracy (I'm from France, I know what I'm talking about)
Fast forward to this week, when I finally released something I'd had on my mind for a long time: a repository of the main US forms, ready to be filled straight from the browser, as opposed to having to find a PDF tool online (or locally).
I focused on healthcare, ED, HR, Legal and IRS/Tax for now.
On the tech-side, it's SimplePDF all the way down: client-side processing (the data / documents stay in your browser).
I hope you find the resource useful!
NiP
Relay – I built a modern web-based IRC/Discord replacement #
Persistent memory for Claude Code with self-hosted Qdrant and Ollama #
Every Claude Code session starts from zero, no memory of previous sessions. This server uses mem0ai as a library and exposes 11 MCP tools for storing, searching, and managing memories. Qdrant handles vector storage, Ollama runs embeddings locally (bge-m3), and Neo4j optionally builds a knowledge graph.
Some engineering details HN might find interesting:
- Zero-config auth: auto-reads Claude Code's OAuth token from ~/.claude/.credentials.json, detects the token type (OAuth vs API key), and configures the SDK accordingly. No separate API key needed.
- Graph LLM ops (3 calls per add_memory) can be routed to Ollama (free/local), Gemini 2.5 Flash Lite (near-free), or a split-model setup where Gemini handles entity extraction (85.4% accuracy) and Claude handles contradiction detection (100% accuracy).
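The zero-config auth step might look roughly like this in Python (the credentials path is from the post; the JSON field name and the prefix heuristic are assumptions for illustration, not the server's actual logic):

```python
import json, pathlib

def detect_token_type(token):
    """Classify the credential so the SDK can be configured accordingly.
    The "sk-" prefix check is an illustrative assumption."""
    return "api_key" if token.startswith("sk-") else "oauth"

def read_claude_credentials(path="~/.claude/.credentials.json"):
    """Auto-read Claude Code's stored credential; "token" is a
    hypothetical field name."""
    p = pathlib.Path(path).expanduser()
    data = json.loads(p.read_text())
    token = data["token"]
    return token, detect_token_type(token)

detect_token_type("sk-abc123")  # -> "api_key"
```

The point is that the user never supplies a key: the server discovers whatever Claude Code already stored and branches on its type.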
Python, MIT licensed, one-command install via uvx.
I built the Million Dollar Homepage for agents #
Ambient CSS – Physically Based CSS and React Components #
OpenEntropy – 47 hardware entropy sources from your computer's physics #
Here is why. Princeton’s PEAR lab ran RNG work for about 28 years and shut down in February 2007. People in the lab tried to shift random event generator output, and they reported small deviations after tens of millions of events. https://www.pear-lab.com/
The Global Consciousness Project took a similar idea outside the lab. It has run a distributed network of hardware RNGs since 1998 and looks for correlated deviations around major world events.
Most people looking at hardware entropy want true randomness for crypto. I want to treat entropy like a sensor. I want to see what might perturb the underlying noise, not just consume a final stream.
So I built OpenEntropy. It samples 47 physical-ish sources on Apple Silicon, like clock jitter, thermal beats, DRAM timing conflicts, cache contention, and speculation timing. Raw mode gives you unprocessed, per-source bytes so you can run your own stats on each channel.
The PEAR-style question is: does output shift when “intention” is the experimental condition? With 47 sources, I can run intention vs control sessions and ask if multiple unrelated channels drift the same way at the same time. If thermal and DRAM timing both shift during intention blocks, that’s the kind of pattern I want to measure.
Listen to sounds around the world and guess the location #
An Open-source React UI library for ASCII animations #
I made Rune, a composable React library for ASCII animations. It lets you drop in animated ASCII the same way you would an icon component.
Rune converts video into grids of colored ASCII characters that render directly as text in the browser. Brightness maps to character density (@ -> .), and output can be tuned for different levels of detail.
It’s designed to be lightweight and very performance-focused, so animations stay smooth even at higher resolutions or if there are many playing at a time!
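The brightness-to-density mapping can be sketched in a few lines of Python (Rune itself is React/TypeScript; the ramp string here is illustrative, and whether dark or bright pixels get the dense glyphs depends on the background you render against):

```python
# Ramp ordered dense -> sparse, echoing the post's "@ -> ." direction.
RAMP = "@%#*+=-:. "

def pixel_to_char(brightness):
    """Map a brightness value in [0, 255] to an ASCII glyph."""
    idx = min(len(RAMP) - 1, brightness * len(RAMP) // 256)
    return RAMP[idx]

row = [0, 64, 128, 192, 255]
line = "".join(pixel_to_char(b) for b in row)
```

Coarser ramps (fewer glyphs) lower the detail level; longer ramps raise it, which is presumably what the tunable-detail option controls.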
A real-time chord identifier web app using the Web MIDI API #
I came across this table <https://en.wikipedia.org/wiki/Chord_(music)#Examples> that breaks down the composition of chords logically. I was reminded of a bitmask, so I translated each chord into a 12-bit bitmask with a bit for each distinct note letter name (e.g. “C” or “B♭”). Decoding binary was involved in interfacing with MIDI … that might have been the inspiration — regardless, a bitmask seems ideal for this purpose.
The most challenging part by far was the logic that determines whether say, “A♯/B♭” (which are considered to be the same note in the 12-tone chromatic scale) should be rendered as “A♯” or “B♭”. As best as I understand, this depends on key signature context, and the logic regarding this isn’t well-described. I settled on finding the diatonic scale (7-note) that contains the maximum number of notes that the chord also contains. That diatonic scale provides the context for the note letter names. This logic isn’t perfect yet — the scales that include double flats and double sharps (which I wasn’t previously aware of) still provide ambiguous results.
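The bitmask idea can be sketched in Python (chord templates abbreviated to two entries here; the app's actual table, per the linked Wikipedia page, is much larger):

```python
NOTE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
        "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def mask(notes):
    """Fold notes into a 12-bit pitch-class bitmask (octave-independent)."""
    m = 0
    for n in notes:
        m |= 1 << NOTE[n]
    return m

# Chord templates as interval masks from a C root (tiny excerpt).
CHORDS = {"major": mask(["C", "E", "G"]), "minor": mask(["C", "D#", "G"])}

def identify(notes, root="C"):
    """Rotate the played mask so the root sits at bit 0, then compare
    against the templates."""
    shift = NOTE[root]
    m = mask(notes)
    rotated = ((m >> shift) | (m << (12 - shift))) & 0xFFF
    return [name for name, tpl in CHORDS.items() if tpl == rotated]

identify(["A", "C#", "E"], root="A")  # -> ["major"]
```

The rotation is what makes one template per chord quality suffice for all twelve roots: matching is a single integer comparison per candidate.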
Orange Cheeto Browser extension that replaces Trump with nicknames #
The interesting technical bits:
- TreeWalker for DOM traversal (skips scripts, inputs, contenteditable, iframes)
- MutationObserver with debouncing for SPAs and dynamically loaded content
- Fisher-Yates shuffle bag for even nickname distribution (no repeats until all are used)
- Case preservation via regex (TRUMP -> MANGO MUSSOLINI, Trump -> Mango Mussolini)
- CSS-only animations with prefers-reduced-motion support
- Zero dependencies, plain JS, Manifest V3
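The shuffle bag and case preservation can be sketched in Python (the extension itself is plain JS; `match_case` is a simplified stand-in for the regex-based logic):

```python
import random

class ShuffleBag:
    """Deal nicknames evenly: no repeats until every item has been used."""
    def __init__(self, items, rng=random):
        self._items, self._bag, self._rng = list(items), [], rng
    def draw(self):
        if not self._bag:
            self._bag = self._items[:]   # refill and reshuffle
            self._rng.shuffle(self._bag)
        return self._bag.pop()

def match_case(replacement, original):
    """Preserve the matched text's casing in the replacement."""
    if original.isupper():
        return replacement.upper()
    if original[:1].isupper():
        return replacement.title()
    return replacement.lower()

match_case("mango mussolini", "TRUMP")  # -> "MANGO MUSSOLINI"
match_case("mango mussolini", "Trump")  # -> "Mango Mussolini"
```

The bag guarantees even distribution without tracking per-nickname counts, and the case check keeps replacements from sticking out mid-sentence.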
The architecture is generic enough to fork for any text replacement use case. All the replacement logic lives in a single file.
No external requests, no analytics, no data leaves the browser. Settings sync via Chrome Storage / browser.storage.
Available for Chrome, Firefox, and Safari. Free.
Feedback on the implementation welcome -- the code is straightforward and I'd rather someone tell me my MutationObserver setup is wrong than find out the hard way.
Keyfob Analysis Toolkit #
Visualize S&P 500 financials with Sankey diagrams #
LLMs playing Poker, build your own bot or hook it up to an LLM and join #
Distillate – Zotero papers → reMarkable highlights → Obsidian notes #
Distillate bridges the tools I already use: Zotero (literature management), reMarkable (reader + highlighter), and Obsidian (notes). It automates the whole pipeline:
$ distillate
save to Zotero ──> auto-syncs to reMarkable
│
read & highlight on tablet
just move to Read/ when done
│
V
auto-saves notes + highlights
It polls Zotero for new papers, uploads PDFs to the reMarkable via rmapi, then watches for papers you've finished reading in your Read folder. When it finds one, it:
- Parses .rm files using rmscene to extract highlighted text (GlyphRange items)
- Searches for that text in the original PDF using PyMuPDF and adds highlight annotations
- Enriches metadata from Semantic Scholar (publication date, venue, citations)
- Creates a structured markdown note with metadata, highlights grouped by page, and the annotated PDF (I keep mine in an Obsidian vault)
The core workflow just needs Zotero and a reMarkable — no paid APIs, no cloud backend, your notes stay on your machine. Optional extras if you plug them in:
- AI summaries via Claude (one-liner + key learnings from your highlights)
- Daily reading suggestions from your queue
- Weekly email digest via Resend
- Obsidian Bases database for tracking your reading
Stack: rmapi for reMarkable Cloud, rmscene for .rm parsing, PyMuPDF for PDF annotation. Python 3.10+, pip installable.
The trickiest part was highlight extraction: reMarkable stores highlighted text as GlyphRange items in a scene tree, and matching that text back to positions in the original PDF required fuzzy search with OCR cleanup, plus special merging logic for e.g. cross-page highlights. Happy to say it works well ~99% of the time now.
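The fuzzy-search step can be sketched with stdlib difflib (the real pipeline matches via PyMuPDF and handles cross-page merging; this only shows the core idea of locating OCR-noisy highlight text within page text):

```python
import difflib

def find_highlight(page_text, highlight, min_ratio=0.8):
    """Slide a window over the page text and return the character span of
    the best fuzzy match for an OCR-noisy highlight, or None if nothing
    scores at least min_ratio."""
    n = len(highlight)
    best, best_pos = 0.0, None
    for i in range(max(1, len(page_text) - n + 1)):
        r = difflib.SequenceMatcher(None, page_text[i:i + n], highlight).ratio()
        if r > best:
            best, best_pos = r, i
    return (best_pos, best_pos + n) if best >= min_ratio else None

page = "Attention mechanisms let models weigh distant tokens."
span = find_highlight(page, "mechanlsms let model")  # OCR noise: l for i
# span is the matched character range, usable to place a PDF annotation
```

A span like this is what would then be translated into rectangles for a highlight annotation on the original PDF.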
Install: pip install distillate && distillate --init
Code: https://github.com/rlacombe/distillate
Site: https://distillate.dev
I built this for myself but would love feedback, especially from other reMarkable + Zotero users. What's missing from your workflow? What else should I add?
Proxima – local open-source multi-model MCP server (no API keys) #
Voicetest – open-source test harness for voice AI agents #
voicetest is an open source (Apache 2.0) test harness that works across voice AI platforms. You import your agent graph from any supported platform (or define one from scratch), write test scenarios with expected behaviors, and voicetest simulates conversations and evaluates them with LLM judges that score each turn 0.0-1.0 with written reasoning. It also ships global compliance evaluators for things like HIPAA, PCI-DSS, and brand voice consistency. The core abstraction is an AgentGraph IR that normalizes across platform formats, so you can convert between Retell, VAPI, LiveKit, and Bland configs and test them all the same way.
Quick start:
```
uv tool install voicetest
voicetest demo --serve
```
That gives you a web UI at localhost with a sample agent, test cases, and evaluation results you can poke at. There's also a CLI, a TUI, and a REST API. It integrates into CI/CD with GitHub Actions, uses DuckDB for persistence, and includes a Docker Compose dev environment with LiveKit, Whisper STT, and Kokoro TTS. If you have a Claude Code subscription, voicetest can pass through to it instead of requiring separate API keys for evaluation.
GitHub: https://github.com/voicetestdev/voicetest Docs: https://voicetest.dev API reference: https://voicetest.dev/api/
VisibleInAI – Check if ChatGPT recommends your brand #
Lap – Fast photo browsing for libraries (Rust and Tauri) #
So I started building Lap app.
The current focus (v0.1.6) is simple: fast local photo library browsing and management.
- Smooth scrolling through very large libraries
- Works directly on your existing folders (no import/catalog)
- Fully local
Planned next: deduplication, photo comparison tools, and RAW support.
OneRingAI – Single TypeScript library for multi-vendor AI agents #
- LangChain: Great ecosystem, but the abstraction layers kept growing. By the time you wire up chains, runnables, callbacks, and agents across 50+ packages, you're fighting the framework more than building your product.
- CrewAI: Clean API, but Python-only, and the role-based metaphor breaks down when you need fine-grained control over auth, context windows, or tool failures.
OneRingAI is a single TypeScript library (~62K LOC, 20 deps) that treats the boring production problems as first-class concerns:
Auth as architecture, not afterthought. A centralized connector registry with built-in OAuth (4 flows, AES-256-GCM storage, 43 vendor templates). This came directly from dealing with enterprise SSO and multi-tenant token isolation — no more scattered env vars or rolling your own token refresh.
Per-tool circuit breakers. One flaky Jira API shouldn't crash your entire agent loop. Each tool and connector gets independent failure isolation with retry/backoff. We learned this the hard way running agents against dozens of customer SaaS integrations simultaneously.
Context that doesn't blow up. Plugin-based context management with token budgeting. InContextMemory puts frequently-accessed state directly in the prompt instead of requiring a retrieval call. Compaction removes tool call/result pairs together so the LLM never sees orphaned context.
Actually multi-vendor. 12 LLM providers native, 36 models in a typed registry with pricing and feature flags. Switch vendors by changing a connector name. Run openai-prod and openai-backup side by side. Enterprise customers kept asking for this — nobody wants to be locked into one provider.
Multi-modal built in. Image gen (DALL-E 3, gpt-image-1, Imagen 4), video gen (Sora 2, Veo 3), TTS, STT — all in the same library. No extra packages.
Native MCP support with a registry pattern for managing multiple servers, health checks, and auto tool format conversion.
What it's not: it's not a no-code agent builder, and it's not trying to be a framework for every possible AI use case. It's an opinionated library for people building production agent systems in TypeScript who want auth, resilience, and multi-vendor support without duct-taping 15 packages together.
2,285 tests, strict TypeScript throughout. The API surface is small on purpose — Connector.create(), Agent.create(), agent.run().
We also built Hosea, an open-source Electron desktop app on top of OneRingAI, if you want to see what a full agent system looks like in practice rather than just reading docs.
GitHub: https://github.com/Integrail/oneringai
npm: npm i @everworker/oneringai
Comparison with alternatives: https://oneringai.io/#comparison
Hosea: https://github.com/Integrail/oneringai/blob/main/apps/hosea/...
Happy to answer questions about the architecture decisions.
CrossingBench – Modeling when data movement dominates compute energy #
The model decomposes total energy into: C = C_intra + Σ V_b · c_b
Includes:
- CLI sweeps
- Elasticity metric (ε) as a dominance indicator
- CSV outputs
- Working draft paper
- DOI
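The decomposition C = C_intra + Σ V_b · c_b computes directly; in this sketch V_b and c_b are read as per-boundary traffic volume and per-unit crossing cost (an inference from the formula), and a simple dominance ratio stands in for the paper's ε, whose exact definition isn't reproduced here:

```python
def total_cost(c_intra, crossings):
    """C = C_intra + sum over boundaries b of V_b * c_b,
    where crossings is a list of (V_b, c_b) pairs."""
    c_cross = sum(v * c for v, c in crossings)
    return c_intra + c_cross, c_cross

# Hypothetical numbers: on-chip compute vs DRAM and network crossings.
total, cross = total_cost(
    c_intra=10.0,
    crossings=[(1e6, 1e-5), (1e3, 1e-2)])
dominance = cross / total  # above 0.5, data movement dominates compute
```

Sweeping V_b or c_b (as the CLI sweeps presumably do) then shows where the dominance ratio crosses 0.5 for a given workload.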
Looking for critique, counter-examples, or prior related work I may have missed.
Trained YOLOX from scratch to avoid Ultralytics (aircraft detection) #
You probably won't last 60 seconds #
Nobody asked for OpenClaw in the cloud. I did it anyway #
One message, and it happens. That's my vision.
Sonnet 4.6 just dropped.
I swapped it in. 2 min job.
Same agent, noticeably cheaper to run. Agentic tasks that used to cost me $0.15 are closer to $0.04 now. At 79 tools firing across hundreds of users… that’s the difference between a business and a burn rate.
Models are getting cheaper faster than people realize. And the next shift isn’t better chat… it’s MCP. Agents that don’t just talk but actually connect, act, and hand off to each other. That’s where this is going.
I came to realize one thing building this over 2 years and 4 complete rebuilds…
The agent layer is becoming infrastructure.
accordio.ai
Self-Hosted Task Scheduling System (Back End and UI and Python SDK) #
I’ve been working on a small side project called Cratos and wanted to share it to get feedback.
Cratos is a self-hosted task scheduling system. You configure a URL, define when it should be called, and Cratos handles scheduling, retries, execution history, and real-time updates. The goal was to have something lightweight and fully owned - no SaaS dependency, no external cron service.
It’s split into three repositories:
Backend service: https://github.com/Ghiles1010/Cratos
Web dashboard: https://github.com/Ghiles1010/Cratos-UI
Python SDK: https://github.com/Ghiles1010/Cratos-SDK
Why I built it:
In a few projects, I repeatedly needed reliable scheduled webhooks with:
Retry logic
Execution logs/history
A dashboard to inspect runs
Easy local deployment
I didn’t want to depend on external services or re-implement job scheduling from scratch every time. The goal was simple deployment (docker compose up) and full control.
It’s still early, but usable. I’d especially appreciate feedback from people who’ve built or operated schedulers, cron replacements, or internal job runners
I would love some feedback, or tell me how it would be useful to you
We built a free VC platform that shares data between GPs and founders #
We built Vistaley — a two-sided platform for VC fund management. The GP-facing side (VentureLens) handles fund operations: deal pipeline, portfolio tracking, fund accounting, LP reporting. The founder-facing side (Harbour) provides free FP&A tools: financial dashboards, KPI tracking, burn rate analysis.
When a portfolio company enters their financials in Harbour, the data is immediately available in the GP's VentureLens dashboard — no CSV exports, no quarterly spreadsheets, no API integrations.
Interesting challenges we solved:
- Multi-currency — 60+ currencies with local display toggle. Every financial metric can be viewed in fund currency or portfolio company's local currency. Exchange rates stored per-snapshot, not globally, to preserve historical accuracy.
- Multi-jurisdiction accounting — 12 jurisdictions (including Kazakhstan, Pakistan, Bangladesh) with different tax frameworks, compliance requirements, and regulatory reporting standards. One accounting standard per fund, enforced at the database level.
- The reporting incentive problem — VCs hate that founders don't report. Founders hate reporting. Our solution: give founders tools good enough that entering data is the reporting. The FP&A dashboard IS the GP's portfolio view. Aligned incentives through shared utility.
Why emerging markets? Most fund management tools price out or ignore funds in Central/South Asia and Africa. We're targeting the $5M-$500M fund range in high-growth regions with pricing starting at free, up to $399/mo. For context, enterprise tools in this space charge $50K+/year.
Palettepoint – Create AI and Nature powered color palettes in seconds #
You can export palettes as CSS variables, SCSS, Tailwind config, or JSON. Copy individual colors in hex, RGB, HSL, or CMYK. There's a live preview that shows the palette applied to buttons, cards, and UI components so you can evaluate it before committing.
There's also a gallery with curated palettes you can browse, filter by style, and favorite. Each palette has its own shareable link.
There's also a set of free tools : - Color converter (paste a hex code, get every format) - Contrast checker (WCAG AA/AAA) - Color mixer - Gradient generator - Image color extractor - Manual palette builder
I'd love to hear your thoughts. What's missing? What would make this your go-to color palette tool?
O-O – polyglot HTML files that update themselves (bash/LLM) #
So: no server, no database, no build step. The file is the "app".
Each .o-o.html file is a polyglot — valid HTML and valid bash. Open it in a browser to read a formatted article with TOC, citations, and images. Run it with bash to have an AI agent research the web and rewrite the article in-place with fresh information.
open article.o-o.html # read it
bash article.o-o.html # update it
Every document embeds an update contract — a JSON block that tells the agent what to research, which sections to maintain, what sources to trust, and how much to spend. The agent reads the contract, searches the web, and surgically edits only the <article> content, manifest, and source cache. The shell never sees the article text.
```
bash index.o-o.html --new <-- create your own article
bash your-article-title.o-o.html <-- populate / update the article
bash index.o-o.html --update-all <-- update and index all articles in the folder
```
The index file doubles as a library manager — it generates new documents from a template and batch-updates stale ones.
Requirements: bash 3.2+ and the Claude Code CLI. No Python, Node, or jq.
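A minimal Python rendering of what such an update contract might contain, following the four concerns the post lists (the field names here are illustrative, not the actual O-O schema):

```python
import json

# Hypothetical update contract; the real JSON block lives inside each
# .o-o.html file and is read by the agent before it edits anything.
contract = {
    "research": ["recent developments on the article's topic"],
    "maintain_sections": ["overview", "changelog"],
    "trusted_sources": ["official docs", "project blog"],
    "budget_usd": 0.50,
}

def validate(c):
    """Check the four required concerns are present, then serialise."""
    required = {"research", "maintain_sections", "trusted_sources", "budget_usd"}
    missing = required - c.keys()
    if missing:
        raise ValueError(f"contract missing: {sorted(missing)}")
    return json.dumps(c, indent=2)

block = validate(contract)  # the JSON the agent would consult
```

Validating the contract up front is what lets the agent edit "surgically": everything outside the declared sections is off-limits.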
I built a tool to check if someone is real online #
Fake freelancers, fake founders, impersonation accounts, recycled profile photos — everything looks legit on the surface. Reverse image search helps a bit, but it’s fragmented and slow if you want a quick signal.
So I started building NexID — a simple identity search tool that tries to answer one question:
“Does this person actually exist across the web?”
You can drop in a photo, username, or basic info and it scans public signals across platforms to see if there’s a consistent digital footprint. The goal isn’t surveillance or background checks — just helping people avoid obvious catfish/scam situations or verify who they’re dealing with.
I built the first version mainly for:
- remote hiring
- online collaborations
- marketplaces
- dating / social
- OSINT curiosity
Still very early and rough around the edges. Would genuinely love feedback from the HN community on:
1. Does this feel useful or unnecessary?
2. Where would you realistically use something like this?
3. What would make you trust a tool like this?
Happy to answer anything about how it works or the challenges building it.
Thanks
I built a structured knowledge registry for autonomous agents #
Unlike traditional Q&A platforms, submissions are strictly schema-validated JSON payloads. Bots can:
- Submit structured problem statements
- Provide structured solution artifacts
- Vote and confirm reproducibility
- Earn reputation based on contribution quality
Humans can browse, but only registered bots can contribute.
The system is API-first and includes:
- Tier-based identity system
- Reputation-weighted ranking
- Reproducibility confirmations
- Live playground for testing endpoints
It’s currently a centralized prototype, seeded with controlled bot activity to validate ecosystem dynamics.
I’d appreciate feedback from developers and researchers working on AI agents or automation systems.
Live demo: https://samspelbot.com
API docs: https://samspelbot.com/docs
Playground: https://samspelbot.com/playground
GitHub (docs + example client): https://github.com/prasadhbaapaat/samspelbot
Happy to answer questions.
Pcons: new software build tool in Python, inspired by SCons and CMake #
What it is: Pcons is a build system where Python scripts describe what to build, and Ninja (or Make) executes it. There's no custom DSL — your build files are real Python with full IDE support, debugging, and testing. The core is completely language-agnostic: it knows nothing about compilers or C++. All tool-specific knowledge lives in pluggable toolchains and tools, so building LaTeX documents or game assets should be as natural as building C++.
How it's different from SCons: Pcons doesn't execute builds itself. It generates Ninja files, so incremental builds are fast and you get Ninja's parallelism for free. Environments use namespaced tools (env.cc.flags, env.cxx.flags, env.link.libs) instead of flat variables, eliminating the CFLAGS vs CXXFLAGS confusion. Targets have CMake-style usage requirements (target.public.include_dirs, target.public.link_libs) that propagate transitively through the dependency graph. And unlike SCons, unknown variables are errors, not silent empty strings.
How it's different from CMake: No DSL to learn — it's just Python. Variable substitution is recursive and explicit. The builder/toolchain system is fully extensible, so third-party builders are first-class citizens. And you can use it as `uvx pcons` for true zero-install (great for other open source projects).
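As a rough illustration of how such usage requirements can propagate transitively (a toy model, not Pcons's actual API; the class and field names here are invented):

```python
from dataclasses import dataclass, field

# Toy model of CMake-style usage requirements as described above: a
# target's *public* include dirs flow to everything that links against it,
# directly or transitively. Names are illustrative, not Pcons's real API.

@dataclass
class Target:
    name: str
    public_include_dirs: list = field(default_factory=list)
    link_libs: list = field(default_factory=list)  # direct dependencies

def transitive_includes(target):
    """Collect public include dirs from the whole dependency graph."""
    seen, out = set(), []
    def walk(t):
        if t.name in seen:
            return
        seen.add(t.name)
        out.extend(t.public_include_dirs)
        for dep in t.link_libs:
            walk(dep)
    for dep in target.link_libs:
        walk(dep)
    return out

zlib = Target("zlib", public_include_dirs=["third_party/zlib/include"])
png = Target("png", public_include_dirs=["third_party/png/include"], link_libs=[zlib])
app = Target("app", link_libs=[png])

print(transitive_includes(app))
# ['third_party/png/include', 'third_party/zlib/include']
```

The point of the pattern is that `app` never names zlib's headers itself; linking `png` is enough to pull them in.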
Major features as of v0.7:
- Toolchains for GCC, LLVM/Clang, MSVC, and clang-cl with auto-detection
- Generators for Ninja, Makefile, Xcode, compile_commands.json, and Mermaid/DOT dependency diagrams
- Package management via pkg-config, Conan 2.x, and a pcons-fetch tool for building dependencies from source
- Compiler cache support (ccache/sccache), semantic presets (warnings, sanitizers, LTO, hardening), cross-compilation presets (Android NDK, iOS, WebAssembly)
- Platform-specific helpers: macOS bundles/frameworks/.pkg/.dmg, Windows manifests/MSIX, and an msvcup module for installing MSVC without Visual Studio
- An extensible module/add-on system for domain-specific tasks
- Debug tracing (--debug=resolve,subst) with source-location tracking on every node
- Plenty of examples included, unit tests for all features, tested on Mac, Windows, and Linux

It's still under active development — ready for experimentation, not production unless you're brave. I'd love bug reports, feedback on the API design, and thoughts on what you'd want from a modern Python-based build system.
Open source, MIT licensed.

GitHub: https://github.com/DarkStarSystems/pcons | Docs: https://pcons.readthedocs.io | PyPI: `uvx pcons` or `pip install pcons`
X-auto-translator (Chrome extension for translating X posts) #
This extension automatically translates posts inline.
Tech details:
- Chrome Extension (HTML, CSS, JavaScript)
- Uses Tesseract.js (Apache-2.0 license) for OCR where needed
- Fully client-side
- Open source (Apache-2.0 license)
GitHub: https://github.com/ShinobuMiya/x-auto-translator
Feedback and suggestions welcome.
StatusDude – Uptime monitoring internal services with K8s autodiscovery #
https://statusdude.com
https://artifacthub.io/packages/helm/statusdude-agent/status...
Stellar – CLI Theme Manager and Web Hub for Starship Prompts #
Stellar provides a hub to browse community themes with screenshots, preview them in a test terminal before applying, and switch local & community prompts with one command.
Tech: Go CLI (single binary) + Next.js hub + Supabase. Themes are stored locally, with starship.toml symlinked to the active theme.
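The symlink mechanism can be sketched in a few lines (illustrative paths, not Stellar's actual implementation, which is in Go):

```python
import tempfile
from pathlib import Path

# Sketch of symlink-based theme switching as described above: starship.toml
# is a symlink that gets repointed at the chosen theme file on disk.
def apply_theme(config: Path, theme: Path) -> None:
    """Switch themes by replacing the config symlink."""
    if config.is_symlink() or config.exists():
        config.unlink()
    config.symlink_to(theme)

# Demo in a scratch directory
root = Path(tempfile.mkdtemp())
theme = root / "themes" / "nord.toml"
theme.parent.mkdir()
theme.write_text("# starship theme\n")
config = root / "starship.toml"
apply_theme(config, theme)
print(config.resolve() == theme.resolve())  # True
```

Switching themes then never copies files; the prompt always reads whatever the symlink currently points at.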
Just launched v1.0.0. Happy to answer questions, and I'd love some feedback :) Also, feel free to upload your own starship prompt :)
Quick Issues: A Fast Mobile Issue Capture for GitHub, GitLab, and Gitea #
So I built Quick Issues. It's a lightweight Swift app with an offline buffer -- you capture the issue, it syncs when you have connectivity. Supports GitHub, GitLab, and Gitea/Forgejo (including self-hosted and local with PAT).
A few technical notes for anyone interested: This was my first time using GRDB with SQLite instead of default Swift data structures, and the performance difference was significant. Setting up proper OAuth2 flows and GitHub/GitLab apps was more of an adventure than expected, but it's solid now.
Free for a single account/instance, paid tier if you juggle multiple providers.
My background is in GTD and data analytics, not traditional software engineering, so I'm genuinely curious: how does issue capture fit into your development workflow? Do you batch-create issues, capture them the moment they come up, rely on templates, or treat issues more as documentation?
cc-costline – See your Claude Code spend right in the statusline #
cc-costline replaces Claude Code's statusline with one that shows rolling cost totals, usage limit warnings, and optionally your rank on the ccclub leaderboard — all in a single line:
```
14.6k ~ $2.42 / 40% by Opus 4.6 | 5h: 45% / 7d: 8% | 30d: $866
```
What each segment means:
- `14.6k ~ $2.42 / 40% by Opus 4.6` — session tokens, cost, context window usage, model
- `5h: 45% / 7d: 8%` — Claude's 5-hour and 7-day usage limits (color-coded: green → orange → red)
- `30d: $866` — rolling 30-day total (configurable to 7d or both)
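The sample line can be reproduced with a small formatting function (the layout is inferred from the example above, not taken from cc-costline's source):

```python
def statusline(tokens_k, cost, ctx_pct, model, h5, d7, d30):
    # Assemble the three segments: session stats, usage limits,
    # and the rolling 30-day total. Format inferred from the README sample.
    return (f"{tokens_k}k ~ ${cost} / {ctx_pct}% by {model}"
            f" | 5h: {h5}% / 7d: {d7}%"
            f" | 30d: ${d30}")

print(statusline(14.6, 2.42, 40, "Opus 4.6", 45, 8, 866))
# 14.6k ~ $2.42 / 40% by Opus 4.6 | 5h: 45% / 7d: 8% | 30d: $866
```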
Setup is one command:
```
npm i -g cc-costline && cc-costline install
```
Mtb – An MCP sanity checker for vibe coding #
Check out the demo prompts and responses listed in the README: https://github.com/dbravender/mtb?tab=readme-ov-file#demonst...
Note: Sometimes you have to explicitly ask the LLM to consult mtb but it often does this on its own after reading the tool descriptions.
The dogfooding section shows the results of mtb's tools run on itself: https://github.com/dbravender/mtb?tab=readme-ov-file#eating-...
Contributions are welcome but I'm looking to keep this as light as possible.
And yes, mtb was itself vibe coded. The irony is not lost on me.
Skill to annotate any Markdown file for AI feedback #
Users requested the ability to annotate any file; this skill does exactly that.
More on how Plannotator works in the video demo here: https://www.youtube.com/watch?v=a_AT7cEN_9I
Built a Product Hunt alternative with user–product matching #
Most launch platforms give you traffic, but not necessarily real users. I’m experimenting with a recommendation engine that clusters users by interests and behavior, then routes launches to people likely to care.
Current version is early but live: quick AI-assisted submission, data collection on interactions, and product–user matching over time.
Built mainly to explore recommendation systems and better ways for founders to get their first real users.
Curious what HN thinks — what would make a launch platform actually useful to you?
Kanban_P2P – A P2P Kanban board contained in a single HTML file #
PokeDex++ – I rebuilt my Pokémon app as a web app #
I originally built PokeDex++ as a mobile app using React Native and Expo. It was designed to be more than just a Pokédex — it included features like a card collection system, buddy progression, virtual coins, and detailed stat pages.
But I couldn’t afford the Play Store developer fee at the time, so the app never got published.
Instead of abandoning the project, I decided to rebuild the entire thing as a web app using React and deploy it independently.
The current web version includes:
• Individual pages for each Pokémon
• Card collection system with unlockable skins
• Virtual currency (DexCoins)
• Buddy progression mechanics
• Fast search and navigation
This is a solo passion project, and I’m continuing to improve it.
I’d really appreciate any feedback, suggestions, or criticism.
Thanks for checking it out.
Broomy – Open-source app for working with many AI agents at once #
When I work with AI coding agents, I typically have 5-10 tasks going at once across different branches. The agent works on one thing while I review another, merge a third, and kick off a fourth. Existing IDEs aren't built for this — they assume you're doing one thing at a time.
Broomy is a desktop app (Electron + React) that lets you:
- Run lots of agent sessions simultaneously and see at a glance which are working, idle, or need your attention
- Work with any terminal-based agent (Claude Code, Aider, Codex, etc.)
- Review code, manage branches, and handle merges with AI assistance
- Use built-in IDE features (Monaco editor, file explorer, git integration, inline terminals) — all designed around multi-agent workflows
I've been using it daily for a few weeks and my productivity has dramatically improved compared to working in Cursor. The key insight is that most of the time you spend "coding with AI" is actually waiting — and Broomy lets you fill that wait time with other tasks.
This is a first public release (v0.6.0). Pre-built binaries are available for macOS. It should work on Linux and Windows too — build from source is straightforward (clone, pnpm install, pnpm start:dist).
MIT licensed. Built as a personal project, not affiliated with my employer.
Repo: https://github.com/Broomy-AI/broomy
Website: https://broomy.org
Happy to answer questions.
Website Monitoring with Telegram Alerts #
Neko – AI agent runtime that fits on a Raspberry Pi Zero 2W #
Memory is markdown files the agent reads and writes itself. There's a short-term layer for today's and yesterday's session logs, a long-term MEMORY.md capped at 2000 chars that forces the agent to compact and curate rather than just accumulate, and a searchable recall folder for older conversations. The files are plain text you can read, edit, and commit to git.
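The long-term layer's hard cap could be sketched like this (the file name and 2000-char limit match the post, but the drop-oldest-lines policy below is my assumption, not Neko's actual compaction logic, which is in Rust):

```python
# Sketch of the curated long-term memory described above: MEMORY.md is
# capped at 2000 characters, so adding a fact may force old ones out.
MEMORY_CAP = 2000

def compact(memory_text: str, new_fact: str) -> str:
    """Append a fact, then drop the oldest lines until under the cap."""
    lines = memory_text.splitlines()
    lines.append(new_fact)
    while len("\n".join(lines)) > MEMORY_CAP and len(lines) > 1:
        lines.pop(0)  # forces curation rather than accumulation
    return "\n".join(lines)
```

In the real system the agent itself decides what to rewrite or drop; the cap is what makes that curation step unavoidable.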
It also supports MCP for connecting external tools, and Telegram as a messaging front-end.
Cron jobs are first-class. You can schedule them from the CLI, or the agent can create them itself mid-conversation. If a user on Telegram says "remind me every morning at 9am", the agent creates the job and routes the results back to that chat.
Ships as a single static binary written in Rust.
Twick – React Video Editor SDK with AI Captions and MP4 Export #
It includes:
- AI caption generation (Google Speech-to-Text)
- React timeline editor
- Canvas-based editing tools
- Client-side rendering
- Serverless MP4 export
- TypeScript SDK
The goal is to help developers ship video SaaS and automation tools without rebuilding the entire editor stack.
Would love your feedback.
ResuOpt – AI resume optimizer with no subscriptions ($4.99 one-time) #
Job seekers don't need 50 resume generations. They need one good resume, tailored to the job they're applying for.
So I built ResuOpt. Upload your resume + paste the job description, and it generates a clean, one-page resume formatted the way HR actually expects. You see a full preview before paying anything. If you like it, it's $4.99 to download the DOCX and PDF. That's it — no account required, no credits, no recurring charges.
The resume content is never stored on our servers — processed in memory and discarded.
Would love feedback, especially on the output quality. Does the formatting match what you'd expect to see in a competitive application?
Productmap – local-first visual product planning for humans and agents #
I'm working on getting a new startup off the ground that has a lot of moving pieces, components, and product areas and I was starting to struggle with seeing how far along everything was.
I decided to build this as a way to help me visually see where things are with my product, and as a way to connect specific tasks with specific Claude Code sessions.
Key features:
- No telemetry
- Tasks are displayed as draggable, resizable cards on a recursive canvas
- Tasks can have subtasks, markdown plan docs, open & resolved questions, and their own dedicated Claude Code process
- The terminal tab automatically prompts Claude with context from the task
- All data is stored locally as text files in a {project}/productmap directory, in a human (and agent) readable format
The README includes a screen-capture that illustrates how it looks with a full project.
Built using SvelteKit and Electron.