Show HN of November 1, 2025 (20 posts)
Duper – The Format That's Super #
Built in Rust, with bindings for Python and WebAssembly, as well as syntax highlighting in VSCode. I made it for those like me who hand-edit JSONs and want a breath of fresh air.
It's at a good enough point that I felt like sharing it, but there's still plenty I wanna work on! Namely, I want to add (real) Node support, make a proper LSP with auto-formatting, and get it out there before I start thinking about stabilization.
KeyLeak Detector – Scan websites for exposed API keys and secrets #
The problem: Modern web development moves fast. You're vibe-coding, shipping features, and suddenly your AWS keys are sitting in a <script> tag visible to anyone who opens DevTools. I've personally witnessed this happen to at least 3-4 production apps in the past year alone.
KeyLeak Detector runs through your site (headless browser + network interception) and checks for 50+ types of leaked secrets: AWS/Google keys, Stripe tokens, database connection strings, LLM API keys (OpenAI, Claude, etc.), JWT tokens, and more.
It's not perfect—there are false positives—but it's caught real issues in my own projects. Think of it as a quick sanity check before you ship.
Use case: Run it on staging before deploying, or audit your existing sites. Takes ~30 seconds per page.
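At its core, this style of scanner is pattern matching over page content. Here's a minimal Go sketch (not KeyLeak's actual code) that checks a page body against a few well-known key formats; the real tool drives a headless browser, intercepts network traffic, and ships 50+ patterns.

```go
package main

import (
	"fmt"
	"regexp"
)

// A few well-known secret formats; real scanners ship dozens of these.
var secretPatterns = map[string]*regexp.Regexp{
	"AWS access key ID": regexp.MustCompile(`\bAKIA[0-9A-Z]{16}\b`),
	"Stripe live key":   regexp.MustCompile(`\bsk_live_[0-9a-zA-Z]{24,}\b`),
	"Google API key":    regexp.MustCompile(`\bAIza[0-9A-Za-z_\-]{35}\b`),
}

// ScanBody reports the name of every pattern that matches the page body.
func ScanBody(body string) []string {
	var hits []string
	for name, re := range secretPatterns {
		if re.MatchString(body) {
			hits = append(hits, name)
		}
	}
	return hits
}

func main() {
	// A leaked key sitting in an inline <script> tag, as described above.
	page := `<script>const cfg = {key: "AKIAIOSFODNN7EXAMPLE"};</script>`
	fmt.Println(ScanBody(page))
}
```

Pattern-only matching is also where the false positives come from: anything shaped like a key triggers a hit, which is why the author positions it as a sanity check rather than proof.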
MIT licensed, for authorized testing only.
UnisonDB – Log-native KV database that replicates like a message bus #
For the past few months, I’ve been building UnisonDB — a log-native database where the Write-Ahead Log (WAL) is the database, not just a recovery mechanism.
I started this because every time I needed data to flow — from core to edge, or between datacenters — I ended up gluing together a KV database + CDC + Kafka.
It worked, but it always felt like overkill: too many moving parts for even small workloads, and too little determinism.
What is it?
UnisonDB unifies storage and streaming into a single log-based core. Every write is:
- Durable (appended to the WAL)
- Ordered (globally sequenced for safety)
- Streamable (available to any follower in real time)
It combines B+Tree storage (predictable reads, no LSM compaction storms) with WAL-based replication (sub-second fan-out to 100+ nodes).
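A toy Go sketch of the "WAL is the database" idea, with UnisonDB's actual durability, B+Tree storage, and gRPC transport elided: every write is appended to one sequenced log, and followers simply tail that log.

```go
package main

import (
	"fmt"
	"sync"
)

// Entry is one globally ordered WAL record.
type Entry struct {
	Seq   uint64
	Key   string
	Value string
}

// WAL is an in-memory stand-in for a write-ahead log that doubles as
// the replication stream: appends are sequenced and fanned out.
type WAL struct {
	mu        sync.Mutex
	nextSeq   uint64
	log       []Entry
	followers []chan Entry
}

// Subscribe returns a channel receiving every future append, the way
// a follower tails the leader's WAL.
func (w *WAL) Subscribe() <-chan Entry {
	w.mu.Lock()
	defer w.mu.Unlock()
	ch := make(chan Entry, 64)
	w.followers = append(w.followers, ch)
	return ch
}

// Append records the write, assigns the global sequence number, and
// fans it out to every follower.
func (w *WAL) Append(key, value string) Entry {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.nextSeq++
	e := Entry{Seq: w.nextSeq, Key: key, Value: value}
	w.log = append(w.log, e)
	for _, ch := range w.followers {
		ch <- e // buffered; a real system must handle slow followers
	}
	return e
}

func main() {
	wal := &WAL{}
	follower := wal.Subscribe()
	wal.Append("user:1", "alice")
	wal.Append("user:2", "bob")
	for i := 0; i < 2; i++ {
		e := <-follower
		fmt.Printf("follower got seq=%d %s=%s\n", e.Seq, e.Key, e.Value)
	}
}
```

The global sequence number is the piece that makes this both a consistency mechanism and a streaming protocol, and it is also why write throughput trails unordered LSM stores (see the tradeoffs below).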
Key Ideas
1. Storage + Streaming = One System — no CDC, no Kafka, no sidecar pipelines
2. B+Tree-Backed — predictable reads, zero compaction overhead
3. Multi-Model — KV, wide-column, and large objects (LOB) in one atomic transaction
4. Replication-Native — WAL streams via gRPC; followers tail in real time
5. Reactive by Design — every write emits a ZeroMQ notification
6. Edge-Friendly — replicas can go offline and resync instantly
Performance & Tradeoffs
1. Write throughput is lower than pure LSM stores (e.g. BadgerDB), because writes are globally ordered for replication safety. Deliberate tradeoff: consistency > raw write speed.
2. Still ~2× faster than BoltDB with replication enabled.
Tech Details
Written in Go
FlatBuffers for zero-copy serialization
gRPC for streaming replication
Jod – Conversational observability with MCP, no more dashboard juggling #
You can ask things like:
“Why did latency spike last night?”
“Show me 5xx errors from the payments service.”
“Create a time series graph showing error counts for the last 6 hours.”
…and Jod will pull, summarize, or even visualize the answers for you.
Before Jod, we spent countless hours digging through CloudWatch and deployment logs, juggling 10+ dashboards just to trace one issue. It often took as much time as writing the actual code. During incidents, things got even worse: too much noise, endless context switching, and a lot of repetitive work. We figured we couldn’t be the only ones feeling that pain, so we decided to build something that could make the process a little easier.
Right now, Jod connects to CloudWatch through an MCP server, which streams responses to the backend over SSE, and the client displays everything in a conversational interface. You can ask questions about your logs, request visualizations with the @Graph annotation, or dig deeper into errors and trends. We’ve actually debugged and fixed multiple issues in Jod’s own codebase using Jod itself.
That said, it’s still early days, and there’s a lot we want to improve. On our short-term roadmap, we plan to:
- Add support for metrics and traces, not just logs.
- Expand to other providers like Azure and GCP.
- Release a standalone MCP server so developers can plug it into their own AI clients.
If any of this resonates with you, we’d love for you to try it out: https://jodmcp.com. It’s free to get started!
We’d really appreciate your feedback, bug reports, and suggestions on this. Thank you.
Proxmox-GitOps: Container Automation Framework #
Core Concepts:
- Recursive Self-management: Control plane seeds itself by pushing its monorepository onto a locally bootstrapped instance, triggering a pipeline that recursively provisions the control plane onto PVE.
- Monorepository: Centralizes infrastructure as a comprehensive IaC artifact (for mirroring, like the project itself on GitHub), using submodules for modular composition.
- Single Source of Truth: Git represents the desired infrastructure state.
- Loose coupling: Containers are decoupled from the control plane, enabling runtime replacement and independent operation.
Micro-RLE – ≤264-byte compression for UART/MCU logs, zero RAM growth #
Micro-RLE is the smallest drop-in I could come up with: 264 B of Thumb code, 36 B of state, no malloc, worst-case 14 cycles/byte and still lossless for every 8-bit pattern.
On the usual sensor streams (ADC, IMU, GPS) it’s 33-70% smaller than raw output and boots in < 600 µs, so you can fire-and-forget from main() before the PLL even locks.
Repo is a single .c file and a 3-function API—replace the weak emit() hook with your UART / DMA / ring-buffer and you’re done.
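The repo itself is a single C file, but the shape of the API is easy to sketch. Here's a Go illustration of a small streaming RLE coder with a replaceable emit hook, using the classic "double byte + count" scheme (a lone byte passes through; a run of n ≥ 2 becomes the byte twice plus n−2), which stays lossless for every 8-bit pattern. The actual byte format and state layout in the repo may differ:

```go
package main

import "fmt"

// Coder mirrors a tiny streaming API (construct / Feed / Flush) with a
// replaceable emit hook standing in for a UART, DMA, or ring buffer.
type Coder struct {
	emit func(b byte) // output hook, swapped in by the caller
	prev byte
	run  int
}

func NewCoder(emit func(byte)) *Coder { return &Coder{emit: emit} }

// Feed consumes one input byte, extending or flushing the current run.
func (c *Coder) Feed(b byte) {
	if c.run > 0 && b == c.prev && c.run < 257 {
		c.run++
		return
	}
	c.Flush()
	c.prev, c.run = b, 1
}

// Flush emits any pending run; call once at end of stream.
func (c *Coder) Flush() {
	switch {
	case c.run == 1:
		c.emit(c.prev) // lone byte passes through unchanged
	case c.run >= 2:
		c.emit(c.prev) // doubled byte signals a run...
		c.emit(c.prev)
		c.emit(byte(c.run - 2)) // ...followed by the extra count
	}
	c.run = 0
}

func main() {
	var out []byte
	c := NewCoder(func(b byte) { out = append(out, b) })
	for _, b := range []byte{7, 7, 7, 7, 7, 7, 1, 2, 2} {
		c.Feed(b)
	}
	c.Flush()
	fmt.Println(out) // runs collapse: 6×7 → 7 7 4, lone 1 → 1, 2×2 → 2 2 0
}
```

Because state is just the previous byte and a run counter, memory use is fixed regardless of input, which is the "zero RAM growth" property the title claims.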
Size proof:
arm-none-eabi-size micro_rle.o
   text    data     bss
    264       0      36
MIT licensed, link in the repo. Happy to hear where else this fits!
LocalSend Alternative Built with Iroh #
- LocalSend-like speeds within the same local network; for transfers over the internet, speeds depend on your ISP.
ZigNet: How I Built an MCP Server for Zig in 1.5 Days #
Chess960v2 – Stockfish tournament with different starting positions #
The Core Idea: Stockfish will play against itself across all 960 possible starting positions. The goal is to collect a unique dataset and determine if any initial setups provide a statistically significant advantage against perfect play.
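The 960 positions come from a simple construction: bishops on opposite-colored squares, queen and knights on any free squares, and the remaining three squares filled R, K, R from left to right, which forces the king between the rooks. A Go sketch (the orchestrator's language, though this is an illustration, not the project's code) enumerating them:

```go
package main

import "fmt"

// chess960Positions builds every legal Chess960 back rank: one bishop
// per square color, queen and knights anywhere else, and the last
// three free squares filled R, K, R so the king sits between the rooks.
func chess960Positions() []string {
	var out []string
	for db := 0; db < 8; db += 2 { // dark-squared bishop
		for lb := 1; lb < 8; lb += 2 { // light-squared bishop
			for q := 0; q < 8; q++ { // queen on any free square
				if q == db || q == lb {
					continue
				}
				for n1 := 0; n1 < 8; n1++ { // first knight
					if n1 == db || n1 == lb || n1 == q {
						continue
					}
					for n2 := n1 + 1; n2 < 8; n2++ { // second knight
						if n2 == db || n2 == lb || n2 == q {
							continue
						}
						rank := []byte("--------")
						rank[db], rank[lb], rank[q], rank[n1], rank[n2] = 'B', 'B', 'Q', 'N', 'N'
						rkr := []byte{'R', 'K', 'R'}
						for i := range rank {
							if rank[i] == '-' {
								rank[i], rkr = rkr[0], rkr[1:]
							}
						}
						out = append(out, string(rank))
					}
				}
			}
		}
	}
	return out
}

func main() {
	positions := chess960Positions()
	fmt.Println(len(positions)) // 4 bishop × 4 bishop × 6 queen × 10 knight choices = 960
}
```

The count factors neatly: 4 dark-bishop squares × 4 light-bishop squares × 6 queen squares × C(5,2) = 10 knight placements = 960, with the classical RNBQKBNR setup among them.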
Current Status (Test Run):
The tournament is already live and running. You can follow the progress on the website: chess960v2.com
To speed up testing, time control is set to 0.5 seconds per move (instead of the planned 3 seconds for the main tournament).
Expected test duration: ~2 months.
This will help verify system stability, the game processing pipeline, and gather initial data.
Plan for Main Tournament (Early 2026):
Full time control: 3 seconds per move
Duration: exactly 1 year
The result will be a complete database of all games with detailed statistics
Technical Side: The project is written in Go (Golang) and serves as an orchestrator that manages multiple Stockfish processes, distributes positions, collects PGN files, and analyzes results. The source code will be published in the coming weeks once I've cleaned it up and written documentation.
Why this might interest the HN community:
Data: Will produce a valuable dataset for chess enthusiasts, analysts, and ML applications
Open Source & Go: Soon anyone can look under the hood of the Go implementation, which could be useful for those working with similar high-performance applications
Scale: A year of autonomous operation presents an interesting engineering challenge
Chess 960: The discipline itself sparks debates about "solving" classical chess, and this project could add data to that discussion
This is a modest announcement for now, without active social media promotion. I'd appreciate any feedback, improvement suggestions, or collaboration offers. Questions are welcome.
Project website: chess960v2.com
BrokerMatch – Benefits strategy decision engine for startups #
Just launched my MVP – a language-learning app for learning through listening #
I’m planning to launch the full version in the next two weeks. If you’re interested, you can join the waiting list here: https://www.makeform.ai/f/yyXGVerS
Dwellable – The app I built after waiving home inspection during Covid #
My wife and I bought our first home in 2020, during peak COVID. When visiting homes, our agent actually asked us to waive the home inspection, which was terrifying. I ended up walking through the house myself with a set of handwritten notes: "check if Kitchen outlets are GFI", "make sure toilet flushes", "test the bath fan", etc.
That experience was the seed for Dwellable, a home maintenance and inspection app for homeowners like us who didn’t know where to start (both before and after buying).
The app automatically pulls your property records (square footage, year built, fuel type, etc.) and uses AI to recommend reminders and seasonal maintenance tasks. It’s built entirely native on iOS and Android, and it’s free right now while we gather feedback from users.
Tech stack:
- Backend: Python + gRPC
- iOS: Native SwiftUI
- Android: Native Jetpack Compose
- Landing page: GitHub Pages (vanilla JS + HTML)