Daily Show HN


Show HN for January 6, 2026

70 posts
378

Prism.Tools – Free and privacy-focused developer utilities #

blgardner.github.io
103 comments · 12:33 PM · View on HN
Hi HN, I'm Barry and I've built Prism.Tools (https://blgardner.github.io/prism.tools/) – a collection of client-side developer utilities that respect your privacy.

Many of these tools date back to the days when I ran a BBS and started my community's first ISP, serving three local communities with dial-up Internet, web hosting, etc. The tools have been refined to reflect the changes in tech since then and are designed for novice and pro alike. As I locate more tools others may find useful, I will refine and add them to the collection. Use them, share them, or not. They will be here if you need them...

40+ dev tools (JSON formatters, regex tester, base64 encoder, Git command helper, etc.) that run entirely in your browser. Zero tracking, zero analytics, zero data collection – everything processes locally. Self-contained HTML files with no build process or frameworks.

I realized I had a lot of tools/utilities I'd built over the years for my own use. I loathe having to sign up just to access simple utilities that I could create myself. I've refined them and put them in one safe place so I can easily access them if/when needed. I decided to make them available via GitHub Pages for anyone who may find them useful. Prism.Tools is the result.

Each tool is a standalone HTML file with embedded CSS and JavaScript. No frameworks, no npm packages, no build steps – just open the file and it works.

The entire toolset:

- 100% client-side processing – your data never leaves your browser.

- No external dependencies except for specific libraries from cdnjs.cloudflare.com (marked.js for markdown, exifr for image metadata, etc.)

- Consistent dark UI – every tool follows the same design language for familiarity.

- Vanilla JS where possible – only reaching for Public CDN Resources when necessary.

The constraint of "single HTML file" was intentional. It forces simplicity and ensures tools remain maintainable. It also means users can inspect, modify, or self-host any tool trivially.

These tools have helped me with debugging production issues, quick formatting tasks, and learning Git commands (the Git command helper has been particularly helpful).

Just visit https://blgardner.github.io/prism.tools/ and try any tool. No signup, no install.

What tools are missing that you find yourself needing? Any performance issues with specific tools? UI/UX friction points?

All tools follow the same privacy-first philosophy... Your data stays in your browser. No accounts, no tracking, no servers processing your information. The project is also a demonstration that you don't always need React, Vue, or complex build pipelines – sometimes vanilla JavaScript in a single HTML file is exactly the right tool for the job.

Vanilla JavaScript (ES6+), CSS3 with CSS Grid. Minimal external libraries: marked.js, exifr, highlight.js, sql-formatter (all from CDN). No frameworks, no bundlers, no npm. Hosted on GitHub Pages.

Happy to answer questions about the technical implementation, design decisions, or specific tools!

All tools are inspectable – just view source on any page to see exactly how they work!

58

VaultSandbox – Test your real MailGun/SES/etc. integration #

vaultsandbox.com
13 comments · 1:50 PM · View on HN
I've spent the last few months working on something I wish I'd had years ago. I kept running into the same issue: CI green, production mail broken. TLS handshake failures, DKIM alignment mismatches, SPF soft-fails ... the stuff that only surfaces when real mail servers are involved. Most test tools (Mailpit, MailHog) are catch-alls. They confirm "an email was sent" but don't validate the protocol. They also aren't designed for network-exposed environments: no auth, unprotected Web UI, easy to enumerate messages.

VaultSandbox is my attempt at fixing that. It's a self-hosted SMTP gateway (AGPLv3) that validates SPF, DKIM, DMARC, and rDNS on every incoming message. You keep your production email provider (Postmark, SendGrid, SES) in tests and you just change the recipient domain. No mocking, no config changes. There are client SDKs (Node, Python, Go, Java, .NET), plus a Web UI and a CLI for manual testing.

Some technical details:

Deterministic Tests Instead of polling or sleep loops, the SDKs use Server-Sent Events (SSE) so test assertions trigger the moment the mail hits the gateway.
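The SSE-driven assertion pattern can be sketched in a few lines of Python. This is an illustrative in-process simulation, not the actual VaultSandbox SDK API (`InboxStream`, `publish`, and `wait_for_message` are made-up names):

```python
import threading, queue, time

class InboxStream:
    """Stands in for an SSE subscription to the gateway."""
    def __init__(self):
        self._events = queue.Queue()

    def publish(self, message):
        # Gateway side: mail just hit the SMTP listener.
        self._events.put(message)

    def wait_for_message(self, timeout=5.0):
        # Test side: blocks until the event arrives -- no sleep/poll loop.
        return self._events.get(timeout=timeout)

stream = InboxStream()

# Simulate the gateway receiving a message shortly after the test starts.
threading.Thread(
    target=lambda: (time.sleep(0.1), stream.publish({"to": "ci@sandbox.test", "spf": "pass"})),
).start()

# The assertion fires the moment the message lands, not on the next poll tick.
msg = stream.wait_for_message()
assert msg["spf"] == "pass"
```

The point of the pattern is that test latency tracks actual delivery latency instead of a polling interval.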

Minimal infrastructure footprint Built with NestJS and Angular, with no external database dependency to keep the container footprint small and easier to reason about.

Post-Quantum Encryption I use ML-KEM-768 for the encryption layer. Incoming mail is encrypted immediately using a client-generated public key and the plaintext is discarded. The server only ever stores encrypted message data and cannot decrypt it. I chose PQ because I wanted to build something I wouldn't have to revisit in five years. If it handles large PQ keys reliably, everything else is easy.

Quick start: https://vaultsandbox.dev/getting-started/quickstart/

Site: https://vaultsandbox.com

I'd love feedback, especially on whether AGPLv3 would be a blocker for something you'd self-host in dev.

55

DDL to Data – Generate realistic test data from SQL schemas #

31 comments · 12:47 PM · View on HN
I built DDL to Data after repeatedly pushing back on "just use production data and mask it" requests. Teams needed populated databases for testing, but pulling prod meant security reviews, PII scrubbing, and DevOps tickets. Hand-written seed scripts were the alternative: slow, fragile, and out of sync the moment schemas changed.

Paste your CREATE TABLE statements, get realistic test data back. It parses your schema, preserves foreign key relationships, and generates data that looks real: emails look like emails, timestamps are reasonable, and uniqueness constraints are honored.
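A minimal sketch of the idea, not the actual implementation: pull column names and types out of the DDL, then generate rows that honor the UNIQUE constraint (`parse_columns` and `fake_value` are hypothetical helpers):

```python
import random, re, string

DDL = """
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    email VARCHAR(255) UNIQUE,
    created_at TIMESTAMP
);
"""

def parse_columns(ddl):
    # Grab everything between the outermost parentheses, split on commas.
    body = re.search(r"\((.*)\)", ddl, re.S).group(1)
    cols = []
    for line in body.split(","):
        parts = line.split()
        if parts:
            cols.append((parts[0], parts[1].upper()))
    return cols

def fake_value(name, sql_type, seen):
    if "INT" in sql_type:
        return len(seen[name]) + 1                      # serial-style ids
    if name == "email":
        local = "".join(random.choices(string.ascii_lowercase, k=8))
        value = f"{local}@example.com"
        while value in seen[name]:                      # honor UNIQUE
            value = f"{local}{random.randint(0, 999)}@example.com"
        return value
    if "TIMESTAMP" in sql_type:
        return f"2025-0{random.randint(1, 9)}-1{random.randint(0, 9)} 12:00:00"
    return "text"

cols = parse_columns(DDL)
seen = {name: set() for name, _ in cols}
rows = []
for _ in range(5):
    row = {}
    for name, sql_type in cols:
        row[name] = fake_value(name, sql_type, seen)
        seen[name].add(row[name])
    rows.append(row)

assert len({r["email"] for r in rows}) == 5     # uniqueness held
assert all("@" in r["email"] for r in rows)     # emails look like emails
```

A real implementation additionally has to topologically sort tables by foreign keys so parent rows exist before children reference them.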

No setup, no config. Works with PostgreSQL and MySQL.

https://ddltodata.com

Would love feedback from anyone who deals with test data or staging environments. What's missing?

38

Repogen – a static site generator for package repositories #

github.com
7 comments · 4:36 PM · View on HN
Hi HN,

Package repositories don't need to be complicated. They're just static files: metadata indexes and the packages themselves. Yet somehow hosting your own feels like you need dedicated infrastructure and deep knowledge of obscure tooling.

repogen is an SSG for package repos. Point it at your .deb/.rpm/.apk files, it generates the static structure, you upload to S3 or any web host. Done. $0.02/month to host packages for your whole team.

It supports Debian, RPM, Alpine, Pacman, and Homebrew. Has incremental mode for updating repos without redownloading everything. Handles signing. Very alpha, but it works. Would love to get feedback!

26

llmgame.ai – The Wikipedia Game but with LLMs #

llmgame.ai
23 comments · 3:07 AM · View on HN
I used to play the Wikipedia Game in high school and had an idea for applying the same mechanic of clicking from concept to concept to LLMs.

Will post another version that runs with an LLM entirely in the browser soon, but for now, please enjoy as long as my credits last...

Warning: the LLM does not always cooperate

20

DevicePrint – device fingerprinting without cookies #

34 comments · 7:51 PM · View on HN
Hi HN,

I built DevicePrint after running into problems with duplicate accounts and unreliable cookies in my own projects.

DevicePrint is a lightweight device fingerprinting tool designed for developers. It helps identify devices across sessions without relying on cookies.

Use cases include fraud detection, preventing duplicate signups, and security-sensitive workflows.
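A minimal sketch of the general technique (not DevicePrint's actual signal set or algorithm): hash a canonical form of stable client attributes into a deterministic identifier.

```python
import hashlib, json

def fingerprint(attrs: dict) -> str:
    # Canonical JSON makes the hash independent of attribute ordering.
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical attribute set; real fingerprinting uses many more signals
# (canvas, fonts, audio stack, etc.).
device = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "languages": ["en-US", "de"],
}

fp1 = fingerprint(device)
fp2 = fingerprint(dict(reversed(device.items())))   # same attrs, new dict order
assert fp1 == fp2            # stable across sessions, no cookie involved

other = dict(device, screen="1920x1080")
assert fingerprint(other) != fp1
```

The hard part in practice is the opposite property: keeping the identifier stable when attributes drift (browser updates, monitor changes), which simple hashing does not handle.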

I'd really appreciate feedback — especially around privacy concerns or edge cases you’ve run into.

Link: https://deviceprint.io

19

ccrider - Search and Resume Your Claude Code Sessions – TUI / MCP / CLI #

github.com
4 comments · 2:14 PM · View on HN
I built a tool that stores your full Claude Code history to let you easily find and resume sessions. It has TUI, CLI and MCP interfaces. It's a single Go binary, and the session history is synced to SQLite each time you use it.

Default mode is the TUI with a session browser and full-text search. Once a session is selected you can browse and search within it, resume it or export to markdown.
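The sync-to-SQLite-with-full-text-search design can be sketched with SQLite's FTS5 module (illustrative schema, not ccrider's actual one, and it assumes your SQLite build includes FTS5):

```python
import sqlite3

db = sqlite3.connect(":memory:")

# One FTS5 virtual table over all session messages.
db.execute("""
    CREATE VIRTUAL TABLE messages USING fts5(session_id, role, content)
""")
db.executemany(
    "INSERT INTO messages VALUES (?, ?, ?)",
    [
        ("s1", "user", "refactor the auth middleware"),
        ("s1", "assistant", "sure, here is the middleware diff"),
        ("s2", "user", "why does the build fail on linux"),
    ],
)

# Full-text query: which sessions mention "middleware"?
hits = db.execute(
    "SELECT DISTINCT session_id FROM messages WHERE messages MATCH ?",
    ("middleware",),
).fetchall()
assert hits == [("s1",)]
```

FTS5 also provides `bm25()` ranking and `snippet()` for showing match context, which is roughly what a session browser needs for search results.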

The MCP server provides tools to let Claude search back through the session for pre-compact context or pull from prior sessions. I use this constantly.

I've seen elaborate continuity systems to give Claude Code access to history but this simple approach has been very effective.

Installation:

macOS: brew install neilberkman/tap/ccrider

Linux/other: git clone https://github.com/neilberkman/ccrider && cd ccrider && go build

MCP server: claude mcp add --scope user ccrider $(which ccrider) serve-mcp

Source: https://github.com/neilberkman/ccrider

17

I built an Open Source screen timer for the m5stickc (Arduino) #

partridge.works
0 comments · 1:51 PM · View on HN
I've never posted on Show HN before, but I wanted to share my Xmas 2025 project: a new approach to controlling our kids' screen time.

This also involved massively over-engineering a solution in order to play with a shiny new gadget (and avoid the in-laws at Christmas, obviously).

I've shared some learnings on AI coding with embedded devices, and how I approached the product design/hardware selection side of things.

The Web App is at https://screenie.org - and I'm Open Sourcing the device and web app code later today (links to follow on that site)

16

Symbolic Circuit Distillation: prove program to LLM circuit equivalence #

github.com
2 comments · 8:57 PM · View on HN
Hi HN,

I have been working on a small interpretability project I call Symbolic Circuit Distillation. The goal is to take a tiny neuron-level circuit (like the ones in OpenAI's "Sparse Circuits" work) and automatically recover a concise Python program that implements the same algorithm, along with a bounded formal proof that the two are equivalent on a finite token domain.

Roughly, the pipeline is:

1. Start from a pruned circuit graph for a specific behavior (e.g. quote closing or bracket depth) extracted from a transformer.

2. Treat the circuit as an executable function and train a tiny ReLU network ("surrogate") that exactly matches the circuit on all inputs in a bounded domain (typically sequences of length 5–10 over a small token alphabet).

3. Search over a constrained DSL of common transformer motifs (counters, toggles, threshold detectors, small state machines) to synthesize candidate Python programs.

4. Use SMT-based bounded equivalence checking to either:

- Prove that a candidate program and the surrogate agree on all inputs in the domain, or

- Produce a counterexample input that rules the program out.

If the solver finds a proof, you get a small, human-readable Python function plus a machine-checkable guarantee that it matches the original circuit on that bounded domain.
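The proof-or-counterexample contract can be illustrated on the bracket-depth task by brute-force enumeration over the finite domain (the project uses an SMT solver; this sketch substitutes exhaustive checking, which gives the same guarantee on a small domain, and all function names here are made up):

```python
from itertools import product

TOKENS = ("(", ")", "a")
MAX_LEN = 6

def surrogate(seq):
    """Stand-in for the ReLU surrogate: running bracket depth, clamped at 0."""
    depth = 0
    for t in seq:
        if t == "(":
            depth += 1
        elif t == ")":
            depth = max(0, depth - 1)
    return depth

def bad_candidate(seq):
    """Plausible-looking DSL counter that ignores the clamping order."""
    return max(0, seq.count("(") - seq.count(")"))

def bounded_equiv(f, g):
    """Prove f == g on every sequence up to MAX_LEN, or return a counterexample."""
    for n in range(MAX_LEN + 1):
        for seq in product(TOKENS, repeat=n):
            if f(seq) != g(seq):
                return False, seq
    return True, None

ok, cex = bounded_equiv(surrogate, bad_candidate)
assert not ok and cex == (")", "(")   # ")(" distinguishes the two

def good_candidate(seq):
    """Counter that clamps at zero as it scans, like the surrogate."""
    depth = 0
    for t in seq:
        depth = depth + 1 if t == "(" else (max(0, depth - 1) if t == ")" else depth)
    return depth

assert bounded_equiv(surrogate, good_candidate) == (True, None)
```

An SMT encoding replaces the inner loop with symbolic variables, which is what makes longer sequences and larger alphabets tractable.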

Why I built this

Mechanistic interpretability has gotten pretty good at extracting "small crisp circuits" from large models, but turning those graphs into clean, human-readable algorithms is still very manual. My goal here is to automate that last step: go from "here is a sparse circuit" to "here is a verified algorithm that explains what it does", without hand-holding.

What works today

- Tasks: quote closing and bracket-depth detection from the OpenAI circuit_sparsity repo.

- Exact surrogate fitting on a finite token domain.

- DSL templates for simple counters, toggles, and small state machines.

- SMT-based bounded equivalence between: sparse circuit -> ReLU surrogate -> Python program in the DSL.

Limitations and open questions

- The guarantees are bounded: equivalence is only proven on a finite token domain (short sequences and a small vocabulary).

- Currently focused on very small circuits. Scaling to larger circuits and longer contexts is open engineering and research work.

- The DSL is hand-designed around a few motifs. I am not yet learning the DSL itself or doing anything very clever in the search.

What I would love feedback on

- Are the problem framing and guarantees interesting to people working on mechanistic interpretability or formal methods?

- Suggestions for next benchmarks: which circuits or behaviors would you want to see distilled next?

- Feedback on the DSL design, search strategy, and SMT setup.

Happy to answer questions about implementation details, the SMT encoding, integration with OpenAI's Sparse Circuits repo, or anything else.

10

Tt – P2P terminal sharing over WebRTC #

2 comments · 4:38 PM · View on HN
Made this to use my laptop terminal session on mobile without data going through a server.

tt start -p mypassword → get a URL (qrcode) → open in browser → enter password → connected.

Direct webrtc datachannel between your machine and browser.

Signaling server is a cloudflare worker, exchanges ~2KB of sdp/ice metadata then gets out of the way. Password never transmitted - argon2id derives 256-bit key locally on both ends. All terminal i/o gets nacl secretbox encryption before hitting the datachannel. double encrypted with dtls underneath but I wanted the relay to see nothing useful even during signaling.
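The key-derivation shape can be sketched with stdlib `hashlib.scrypt` standing in for argon2id (tt itself uses argon2id and NaCl secretbox; this is illustrative only, and how the salt is shared between the two ends is an assumption not stated in the post):

```python
import hashlib, os

password = b"mypassword"
salt = os.urandom(16)   # must be shared out-of-band; how tt does this is not stated

def derive_key(pw, salt):
    # Both ends run the same KDF locally; only ciphertext ever crosses the
    # datachannel, so the relay never sees the password or the key.
    return hashlib.scrypt(pw, salt=salt, n=2**14, r=8, p=1, dklen=32)

host_key = derive_key(password, salt)
browser_key = derive_key(password, salt)

assert host_key == browser_key      # same key derived independently on both ends
assert len(host_key) == 32          # 256 bits for the symmetric cipher
```

Both scrypt and argon2id are deliberately memory-hard, so an attacker who captures ciphertext can't cheaply brute-force weak passwords.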

go + pion/webrtc, about 14k loc. browser is xterm.js + webcrypto for argon2id. stun default, turn for symmetric nat.

my use case: checking on claude code runs from my phone when im not at my desk. hotel wifi, spotty mobile data. spent time on turn fallback, keepalive with reconnection, buffered writes during disconnects. pwa so it works from home screen and survives app switching. holds up ok on 3g.

Trade-offs: ice gathering takes 2-5s on connect. browser cant initiate, need cli on host. codes expire 24h.

Single binary, no deps. daemon mode for multiple sessions. tests with race detector, chaos tests for disconnects, network condition simulation. crypto and webrtc at ~72% coverage.

https://github.com/artpar/terminal-tunnel

ps: default relay runs on my cloudflare free tier so no guarantees. you can self-host the worker or run tt relay locally.

9

I'm 35 and couldn't finish reading articles anymore, so I built this #

parsely.obasic.app
11 comments · 5:50 AM · View on HN
Hey HN, the full story is on the page, but here's the TL;DR:

I'm 35 and somewhere along the way I lost the ability to finish reading articles. I'd open them, read 2-3 paragraphs, get distracted, and close the tab feeling like a failure. My "Read Later" list became a graveyard of 2,000+ unread articles.

I tried everything – focus apps, reader modes, blocking extensions. Nothing worked. Then I realized: maybe the problem isn't me. Modern web articles are designed for engagement metrics, not comprehension. Sidebars, popups, related articles, comment sections – everything is optimized to pull your attention away.

So I built Parsely. It shows one paragraph at a time, blurs everything else. Stupidly simple. But it worked. I'm finally finishing articles again.

Tech stack: WXT framework, TypeScript, Mozilla Readability (same as Firefox Reader View), Shadow DOM for style isolation.

Code is MIT licensed: https://github.com/TeamOliveCode/parsely

Happy to answer questions about the implementation or commiserate about our collective attention spans!

7

Plano – Edge and service proxy with orchestration for AI agents #

github.com
1 comment · 7:20 PM · View on HN
Hey HN — I’m Adil from Katanemo (with Salman, Shuguang, and Meiyu)

We previously shared an early version of this project as ArchGW. Based on customer feedback, the scope expanded from “LLM routing and model access” into something broader: delivery infrastructure for agentic applications. We renamed it to Plano and reworked the architecture accordingly.

The problem

On-the-ground AI practitioners will tell you that calling an LLM is not the hard part. The really hard part is delivering agentic applications to production quickly and reliably, then iterating without rewriting system code every time. In practice, teams keep rebuilding the same concerns that sit outside any single agent’s core logic:

Teams need model agility: the ability to pull from a large set of LLMs and swap providers without refactoring prompts or streaming handlers. They need to learn from production by collecting signals and traces that tell them what to fix. They need consistent policy enforcement for moderation and jailbreak protection, rather than sprinkling hooks across codebases. And they need multi-agent patterns like handoff and specialization without turning their app into orchestration glue.

These concerns get rebuilt and maintained inside fast-changing frameworks and application code, coupling product logic to infrastructure decisions. It’s brittle, and pulls teams away from core product work into plumbing they shouldn’t have to own.

What Plano does

Plano moves core delivery concerns out of process into a modular proxy and dataplane designed for agents. It supports inbound listeners (agent orchestration, safety and moderation hooks), outbound listeners (hosted or API-based LLM routing), or both together.

Plano provides the following capabilities via a unified, protocol-native, framework-friendly dataplane:

- Orchestration: Low-latency routing and handoff between agents. Add or change agents without modifying app code, and evolve strategies centrally instead of duplicating logic across services.

- Guardrails & Memory Hooks: Apply jailbreak protection, content policies, and context workflows (rewriting, retrieval, redaction) once via filter chains. This centralizes governance and ensures consistent behavior across your stack.

- Model Agility: Route by model name, semantic alias, or preference-based policies. Swap or add models without refactoring prompts, tool calls, or streaming handlers.

- Agentic Signals™: Zero-code capture of behavior signals, traces, and metrics across every agent, surfacing traces, token usage, and learning signals in one place.

The goal is to keep application code focused on product logic while Plano owns delivery mechanics.

More on Architecture

Plano has two main parts:

Envoy-based data plane. Uses Envoy’s HTTP connection management to talk to model APIs, services, and tool backends. We didn’t build a separate model server—Envoy already handles streaming, retries, timeouts, and connection pooling. Some of us are core Envoy contributors at Katanemo.

Brightstaff, a lightweight controller written in Rust. It inspects prompts and conversation state, decides which upstreams to call and in what order, and coordinates routing and fallback. It uses small LLMs (1–4B parameters) trained for constrained routing and orchestration. These models do not generate responses and fall back to static policies on failure. The models are open sourced here: https://huggingface.co/katanemo

Plano runs alongside your app servers (cloud, on-prem, or local dev), doesn’t require a GPU, and leaves GPUs where your models are hosted.

Repo https://github.com/katanemo/plano + docs https://docs.planoai.dev/

6

Git-workty – I got tired of Git stash and built a worktree wrapper #

github.com
0 comments · 5:25 AM · View on HN
I context-switch a lot. Someone pings me about a bug while I'm mid-feature, and I either:

1. `git stash` (and forget what's in there forever)

2. Make a "WIP: stuff" commit (clutters history)

3. Ignore them until I'm done (not great)

Worktrees solve this perfectly – each task gets its own directory, fully isolated. But the commands are clunky and I kept forgetting the syntax.

So I built a small CLI wrapper. Now my workflow is:

```

wnew feat/login # creates worktree + cd into it

# ...do work...

wcd # fuzzy-pick another worktree, cd there

wgo main # jump straight to main

```

Dashboard shows everything at a glance:

```

▶ feat/login ● 3 ↑2↓0 ~/.workty/repo/feat-login

  main                  ↑0↓0   ~/src/repo
```

Written in Rust, ~2k lines. Refuses to delete dirty worktrees unless you force it.

https://github.com/binbandit/workty

Curious if anyone else has this problem or solved it differently.

6

Sidestream – an AI chat app with a side of insight #

github.com
2 comments · 8:44 PM · View on HN
Hi. My name is Eric Brandon and I’ve built an AI chat app that feels really different.

While you talk with an AI model in a chat pane, a second AI model is reading over the conversation and seeking out useful, interesting, surprising, amusing, and fact-checking information that wouldn’t have appeared in the main chat.

You can read those “discoveries” on their own, or click them into your main chat to steer the conversation in a new direction.

This is based on the observation that talking to smart people is usually more enjoyable, interesting, and informative than talking to a smart AI.

There’s many possible reasons why - the AI isn’t smart enough, it has no “real” emotions, it has no real long-term memory of your relationship, and so on.

But certainly one big reason is that the AI has been trained and instructed to simulate a “helpful assistant.” And helpful assistants stay on topic. They don’t interject with something super interesting, or wise, that is only thematically related. They don’t chime in with amusing related anecdotes. They don’t complicate the conversation with contrasting views.

I find this chat experience much, much more interesting and useful than any of the first-party apps from Anthropic, OpenAI, or Google.

This is combined with many power-user features like branching conversations, access to powerful models like chatGPT 5 Pro without a subscription, sophisticated output for sharing chats, and much more.

I find the freedom of having access to all the latest big-lab models in one app, and even in one chat, extremely convenient.

This app is a bit of a glimpse into the future, I believe. Today's AI ecosystem means that having this experience:

- Requires more technical sophistication than the average user has, because you need to bring your own API keys

- Costs more than regular chat, because you can't benefit from subsidized monthly plans, and because the "discoveries" add to the AI token costs of every conversation

But this user experience is so much better than the standard one that it’s hard to believe, in the future, when you can bring your subscriptions to third-party apps, and when inference is cheaper, that this won’t become the standard experience.

You can read about the app and download it at https://sidestream-app.com and https://github.com/ericbrandon/sidestream

It’s a non-commercial, open-source project built just because I wanted it for myself, but I hope you enjoy too!

5

Shoot-em-up Deck Building game #

muffinman-io.itch.io
1 comment · 3:25 PM · View on HN
Hey, I've shared this game while it was still a work in progress in one of the Tell HN threads. I was pleasantly surprised that a lot of people tried it, and I got some positive feedback and constructive criticism.

Anyway, I found a gamedev library I really like - Kaplay [1]. What I like about it is that easy things are easy. Meaning I could have fun building the game without learning (or fighting) a new tool. I always wanted to make games, but this is the first one I actually did.

And I have to tell you - game development sparks joy! It combines so many things (game design, programming, art, sounds, UI...), and it is incredibly fun.

The game is a combination of action and turn-based gameplay. Probably too ambitious a goal, but I think I managed to make a fun little game. If you're not sure what I'm talking about, try playing the tutorial; it should help. There is also a global leaderboard, which is dominated by my friend, and I would appreciate it if someone could dethrone him :)

It is completely free. I'm thinking of open sourcing it, but the code isn't up to my regular standards, as I was learning on the go. Also, I was more excited about adding stuff than going back and refactoring. So I'm still deciding on that one.

Before you ask, no, I haven't vibe-coded it. For me personally, that would mean delegating the fun stuff to something else. I’ve been working on it for about a month, in the evenings after my kid is asleep.

All feedback is appreciated!

[1] https://kaplayjs.com/

P.S. I also plan a blog post or two about it. I already started an interactive demo on how missiles are coded.

5

Bull.sh: Financial Modeling Agent CLI #

github.com
0 comments · 7:17 PM · View on HN
Built a free, open-source agentic CLI tool for financial modeling and analysis. I hadn't played around with real equity valuation modeling for a while and wanted to build tooling to get myself back into the game.

Bull.sh lets you query and store 10-Qs and 10-Ks in a local vector store to chat with them, build an investment thesis from scratch, or build full framework models through the CLI to export to Excel.

It's open source; it just requires your own Anthropic API key and (optionally) a free AlphaVantage API key if you want to save some tokens on scraping. Feel free to play around with it.

Some ideas I have for features (amongst improvements to the existing orchestration & valuation frameworks):

1. Bull vs. Bear Agent Debate

Spawn two adversarial agents: one argues the bull case, one argues the bear case. They critique each other's assumptions, then a moderator agent synthesizes a balanced thesis. Surface the strongest counterarguments automatically.

2. Earnings Call Analyzer

Auto-fetch earnings call transcripts (or transcribe audio), detect management tone shifts vs. prior quarters, extract forward guidance, flag hedging language ("we believe", "challenging environment"). Compare CEO sentiment to online & social sentiment.

3. Supply Chain Knowledge Graph

Parse 10-K supplier/customer disclosures + news to build a graph of company relationships. Visualize dependencies, detect concentration risk, propagate shocks ("if TSMC goes down, who's exposed?"). Use Neo4j or networkx.

4. Real-time 8-K Alert System

Monitor SEC EDGAR for new 8-K filings, classify by materiality (executive departure, M&A, guidance change), push alerts via webhook/Slack/email. Let users set watchlists and filter by event type.

5. Thesis Backtester

Save thesis snapshots with timestamps. When the stock moves ±20%, resurface the original thesis and score which predictions were right/wrong. Build a track record dashboard showing analyst accuracy over time.

4

LoRA Trained on SFMTA CAD Drawings to Aerial Images #

0 comments · 5:37 PM · View on HN
Hey, I'm Kieran and I've been playing with the intersection (pun intended) of generative AI and civil engineering for roadways. I did a test run training a LoRA on the new Flux 2 Dev model using Fal's trainer, with a custom dataset pairing publicly available striping CAD drawings of street layouts with aerial images of the same areas.

The use case here is to allow urban planners to instantly visualize their proposed changes as they work with their existing tooling.

This was just a quick experiment with a small data size that exceeded my expectations so I wanted to share with you all.

Watch a demo with instructions on how to test it out: https://www.youtube.com/watch?v=zS8pGoOfe00

Try it now (includes free credits) https://3dstreet.app/generator

If you're really excited about running on your own hardware here are the lora weights: https://v3b.fal.media/files/b/0a87f41f/glySGbKtv8lzigPWzQDjb...

I can writeup a longer blog post if interested in the details. This was only trained on 12 image pairs with text descriptions but it still cost about $100 on Fal. I'd love to do a larger run, but it does take a while to prepare all of the data and I'm hesitant to drop $2k. I'd be curious for the experts out there if you think the quality will increase if I use a larger sample size.

4

Kurisu – Statistical dashboard in a single HTML file #

nocturnefoundry.gumroad.com
0 comments · 10:06 PM · View on HN
I built Kurisu to solve a specific problem I kept encountering: analyzing sensitive data in offline settings, without IT approval or cloud uploads, and without needing to log in and out of accounts on every device I own.

The constraints led to interesting technical decisions:

*Single-file architecture:* The entire app is one HTML file. No build process, no npm install, no deployment. You can WhatsApp it to yourself and open it on any device with a browser. This bypasses corporate IT restrictions and makes distribution trivial.

*Statistical engine in Web Workers:* Implements Spearman Rank Correlation, Kruskal-Wallis, and Mann-Kendall tests client-side. The automated dashboard scores potential visualizations by statistical significance rather than generating random charts.
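For reference, Spearman rank correlation is small enough to implement directly. A pure-Python version of the same computation (Kurisu's engine is JavaScript in a Web Worker, so this is only illustrative):

```python
def ranks(xs):
    """Average ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the tie group, then assign everyone the average rank.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1    # 1-based average rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Perfectly monotonic data correlates at exactly +/-1.
assert abs(spearman([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]) - 1.0) < 1e-12
assert spearman([1, 2, 3, 4], [8, 6, 4, 2]) == -1.0
```

Computing rho via ranks rather than the shortcut 6Σd²/n(n²−1) formula keeps tied values correct, which matters for real spreadsheet data.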

*Local-first processing:* All data stays in the browser. Useful for compliance-heavy scenarios (HIPAA, financial data, NDAs) where cloud analytics tools are prohibited.

*Real use case:* I work 9am-9pm, then code nights. During potential investor/client meetings over coffee, being able to drop their Excel file into Kurisu and generate insights in 30 seconds without "let me set this up in Excel" has been valuable.

*Technical tradeoffs:*

- Export occasionally crops at the bottom edge (html2canvas limitation, fixing in v1.1)

- Single file = harder to maintain at scale; lots of things break all the time, but distribution simplicity wins for now

Future plans: Optional AI model integration (bring your own API/local LLM), embedded tiny model experiments, desktop wrapper if there's demand.

This is my first shipped product after 3 months of learning to code via AI assistance. Would especially appreciate feedback on:

1. The statistical implementation correctness

2. Single-file distribution model viability

3. Technical debt concerns as it grows

4. Ways to optimise the app to work with larger datasets and more complex calculations within the web worker

Built from Pakistan where most SaaS pricing doesn't make sense. $29 one-time felt right but feedback on this would be appreciated as well.

4

A stealth ESP32 radar hidden in a phone charger #

juanfr.gumroad.com
2 comments · 3:56 PM · View on HN
This is a short technical article documenting a personal project.

The goal was to hide a radar-based presence detection system inside an ordinary phone charger.

The focus is not on firmware features, but on design decisions:

- physical constraints

- radar orientation

- electrical isolation

- simplicity over configurability

It’s not a tutorial and not a product announcement. Just a record of engineering decisions made under real-world constraints.

Feedback is welcome.

3

Doom (1993) Playable in a GitHub Readme #

github.com
0 comments · 12:58 PM · View on HN
I just had this really fun idea of making a playable version of DOOM on my GitHub profile after building "Doom" in a QR code last year (https://news.ycombinator.com/item?id=43729683) and I finally stopped procrastinating to try and build it.

DoomMe is the DOOM E1M1 map running in GitHub's markdown viewer, which doesn't support JavaScript, WebAssembly, or even iframes. It works as a stateless engine: a capture of every possible position in the map (at 64-unit spacing), shot in 4 directions and stitched together with graph logic across 8,000+ WebP images and markdown files.

Because it's stateless, I had to manually edit the WAD file to "open" the gates of E1M1, then use omgifol to capture all valid positions inside the map!
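The stateless navigation can be sketched as a graph where every (x, y, facing) state is a pre-rendered page and every move is just a link to another page (grid size and naming here are made up, not DoomMe's actual layout):

```python
FACINGS = ["n", "e", "s", "w"]
STEP = 64  # capture spacing in map units

def page_name(x, y, facing):
    # Hypothetical naming scheme: one markdown page per state,
    # each embedding a single pre-rendered WebP frame.
    return f"frames/{x}_{y}_{facing}.md"

def links(x, y, facing, walkable):
    """Outgoing links from one state: turn left/right, or walk forward."""
    i = FACINGS.index(facing)
    out = {
        "turn_left": page_name(x, y, FACINGS[(i - 1) % 4]),
        "turn_right": page_name(x, y, FACINGS[(i + 1) % 4]),
    }
    dx, dy = {"n": (0, STEP), "e": (STEP, 0), "s": (0, -STEP), "w": (-STEP, 0)}[facing]
    if (x + dx, y + dy) in walkable:   # the omgifol-derived set of valid positions
        out["forward"] = page_name(x + dx, y + dy, facing)
    return out

walkable = {(0, 0), (0, 64), (64, 0)}
moves = links(0, 0, "n", walkable)
assert moves["forward"] == "frames/0_64_n.md"
assert moves["turn_right"] == "frames/0_0_e.md"
```

Since every page links only to other static pages, "game state" lives entirely in which URL you're looking at, which is exactly what a JavaScript-free markdown viewer can support.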

Open source under the MIT License, and the current version is <190 MB in size with all assets.

- Play it on my profile: https://github.com/Kuberwastaken

- Blog post covering most of the process, implementation and failed tries: https://kuber.studio/blog/Projects/How-I-Made-DOOM-Run-Insid...

3

Syntopic reading with Claude Code – connections across 100 books #

trails.pieterma.es favicontrails.pieterma.es
0 評論3:47 PM在 HN 查看
LLMs are often used to reduce the amount of reading we do. I’m hopeful they can be used to augment our reading habits instead.

I gave Claude Code a set of CLI tools to explore a library of 100 non-fiction books (sampled from HN’s favourites) and surface sequences of excerpts that link ideas.

I was first intrigued when it linked Steve Jobs’ self-deception to an excerpt about Theranos, and it ended up finding many more compelling trails.

I hope someone else finds them interesting as well.

Implementation notes: https://pieterma.es/syntopic-reading-claude/

3

Doo – Generate auth and CRUD APIs from struct definitions #

github.com favicongithub.com
1 評論3:59 PM在 HN 查看
Built Doo because I was tired of writing 200 lines of auth boilerplate for every API.

Example (complete API):

    import std::Http::Server;
    import std::Database;

    struct User {
        id: Int @primary @auto,
        email: Str @email @unique,
        password: Str @hash,
    }

    fn main() {
        let db = Database::postgres()?;
        let app = Server::new(":3000");

        app.auth("/signup", "/login", User, db);
        app.crud("/todos", Todo, db);  // Todo = any struct you define

        app.start();
    }
Result:

- POST /signup with email validation + password hashing (automatic from @email, @hash)

- POST /login with JWT

- Full CRUD endpoints for GET, POST, GET/:id, PUT/:id, DELETE/:id

- Compiles to native binary

Status: Alpha v0.3.0. Auth, CRUD, validation, and Postgres working. Actively fixing bugs.

https://github.com/nynrathod/doolang

What would you need to see before using this in production?

3

A dice baseball game I built with my second grader over winter break #

apps.apple.com faviconapps.apple.com
0 評論8:11 PM在 HN 查看
Over winter break, instead of the usual “kids playing in the snow” thing, we ended up doing something a little different. Mostly because it was abnormally warm, and partly because my kid had other ideas.

We built a small iPhone game together.

My son is a second grader and loves baseball. For years, our family has played a simple dice-based baseball game at restaurants while waiting for food. No screens. Just dice, rules scribbled on napkins, and a lot of arguing over whether something was a double or a ground out. It’s been one of those low-tech things that quietly stuck.

On the first night of winter break, he asked if we could turn it into an actual game.

So we did.

We spent the week breaking down how baseball works, how turns and randomness feel fair, and how games should be quick and fun. I introduced him to basic iPhone development concepts, UI thinking, and how we can use AI as a helper. Not to do the work for us, but to brainstorm, prototype, and iterate faster.

He was immediately hooked.

One of my favorite moments: he came up with a “7th minute stretch” idea. If a game session goes long enough, the app pauses and encourages you to get off the phone for 30 seconds. Do jumping jacks, grab water, stretch, whatever. It’s intentionally anti-doomscrolling, and very much his idea.

The result is Dice Baseball. A simple, fast, family-friendly dice baseball game for iPhone. No accounts. No ads. No paid features. There’s an optional tip jar if someone wants to support it, but that’s it.

For me, the best part wasn’t shipping the app. It was watching a kid realize that software isn’t magic. It’s something you can build, improve, and think critically about. He’s already talking about updates and new ideas.

If you’re curious, here it is ... https://apps.apple.com/us/app/dice-baseball/id6757132879

Thanks for checking it out.

2

Sidestream – an AI chat app with a side of insight #

github.com favicongithub.com
0 評論8:12 PM在 HN 查看
Hi. My name is Eric Brandon and I’ve built an AI chat app that feels really different.

While you talk with an AI model in a chat pane, a second AI model is reading over the conversation and seeking out useful, interesting, surprising, amusing, and fact-checking information that wouldn’t have appeared in the main chat.

You can read those “discoveries” on their own, or click them into your main chat to steer the conversation in a new direction.
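The two-model setup described above can be sketched roughly like this. This is Python pseudocode with stand-in `chat` and `observer` callables, not Sidestream's actual API.

```python
# Sidecar pattern: a second model watches the transcript and emits
# "discoveries" out-of-band, without steering the main conversation.
def run_turn(history, user_msg, chat, observer):
    history.append({"role": "user", "content": user_msg})
    reply = chat(history)                     # main conversation model
    history.append({"role": "assistant", "content": reply})
    discovery = observer(history)             # side model reads the transcript
    return reply, discovery

history = []
reply, discovery = run_turn(
    history,
    "Tell me about Steve Jobs",
    chat=lambda h: "He co-founded Apple.",
    observer=lambda h: "Related: reality distortion fields and Theranos.",
)
```

The key property is that the observer only reads the shared history; the main chat never sees its output unless the user clicks a discovery in.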

This is based on the observation that talking to smart people is usually more enjoyable, interesting, and informative than talking to a smart AI.

There are many possible reasons why - the AI isn’t smart enough, it has no “real” emotions, it has no real long-term memory of your relationship, and so on.

But certainly one big reason is that the AI has been trained and instructed to simulate a “helpful assistant.” And helpful assistants stay on topic. They don’t interject with something super interesting, or wise, that is only thematically related. They don’t chime in with amusing related anecdotes. They don’t complicate the conversation with contrasting views.

I find this chat experience much, much more interesting and useful than any of the first-party apps from Anthropic, OpenAI, or Google.

This is combined with many power-user features like branching conversations, access to powerful models like ChatGPT 5 Pro without a subscription, sophisticated output for sharing chats, and much more.

I find the freedom of having access to all the latest big-lab models in one app, and even in one chat, extremely convenient.

This app is a bit of a glimpse into the future, I believe. Today’s AI ecosystem means that having this experience:

- Requires more technical sophistication than the average user has, because you need to bring your own API keys

- Costs more than regular chat, because you can’t benefit from subsidized monthly plans, and because the “discoveries” add to the AI token costs of every conversation

But this user experience is so much better than the standard one that it’s hard to believe, in the future, when you can bring your subscriptions to third-party apps, and when inference is cheaper, that this won’t become the standard experience.

You can read about the app and download it at https://sidestream-app.com and https://github.com/ericbrandon/sidestream

It’s a non-commercial, open-source project built just because I wanted it for myself, but I hope you enjoy too!

2

Twitter Viewer – View Profiles,Search Tweets,Download Videos (No Login) #

twitterwebviewer.com favicontwitterwebviewer.com
2 評論3:50 PM在 HN 查看
Hey HN! Happy New Year! I built Twitter Viewer because Twitter now requires login to view public content, which creates unnecessary barriers for researchers, journalists, and casual users.

What it does:

- View any public Twitter profile & tweets (no account needed)

- Search tweets by keywords

- Download Twitter videos in MP4 format

- 100% anonymous (no tracking, no data collection)

Tech stack:

- Next.js 14 (App Router) for SSR and fast loading

- Tailwind CSS for styling

- Hosted on Vercel with Edge runtime

- Twitter API integration (via proxy)

Why I built this: Since Twitter changed its policy in 2023, you can't even view a single tweet without creating an account. This is frustrating for people who just want to quickly check a profile or share a link.

The tool has been in private beta for a month with positive feedback. I'm launching publicly today and would love your thoughts!

Live: https://twitterwebviewer.com

Happy to answer any questions about the implementation or features!

2

Kafka-compatible streaming on S3 with stateless brokers #

1 評論4:53 PM在 HN 查看
Hi HN,

KafScale is a Kubernetes-native, Kafka-compatible streaming system where S3 is the source of truth and brokers hold no persistent state. It's written in Go.

Built this after years of operating Kafka and hitting the same walls: broker failures that take hours to recover, partition rebalancing that blocks deploys, disk capacity planning that never ends.

How it works:

- Producers and consumers use standard Kafka clients

- Brokers buffer in memory, flush to S3

- etcd stores metadata and consumer group state

- Recovery means restarting a pod and reading from S3

- Optional Iceberg processor reads segments directly from S3, bypassing brokers entirely for batch/analytical workloads
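As a rough illustration of the buffer-and-flush step, here is a Python sketch with a dict standing in for S3. KafScale itself is written in Go, and the segment naming and flush threshold here are assumptions, not its actual layout.

```python
# Stateless broker idea: records accumulate in memory and are written to
# object storage as immutable segments named by their base offset.
class SegmentBuffer:
    def __init__(self, topic, partition, flush_bytes=1 << 20):
        self.topic, self.partition = topic, partition
        self.flush_bytes = flush_bytes
        self.records, self.size, self.next_offset = [], 0, 0

    def append(self, payload: bytes, store):
        self.records.append(payload)
        self.size += len(payload)
        self.next_offset += 1
        if self.size >= self.flush_bytes:   # flush once the buffer is full
            self.flush(store)

    def flush(self, store):
        if not self.records:
            return
        base = self.next_offset - len(self.records)
        key = f"{self.topic}/{self.partition}/{base:020d}.segment"
        store[key] = b"".join(self.records)   # stand-in for an S3 PUT
        self.records, self.size = [], 0

store = {}
buf = SegmentBuffer("orders", 0, flush_bytes=8)
for msg in (b"abcd", b"efgh"):
    buf.append(msg, store)
```

Because segments are immutable and named deterministically, a restarted broker can rebuild its view of a partition just by listing keys, which is what makes the brokers disposable.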

What you give up: latency is 400-500ms (S3 round-trip), no transactions, no compacted topics. It's not a 100% replacement.

What you get: brokers are disposable, scaling is just replica count, there's no disk management, and you get direct access to streamed data via S3 ACLs.

License: Apache 2.0

GitHub: https://github.com/novatechflow/kafscale

2

Terraforming Puzzle – Terraform the solar system, incl Venus and Mars #

puzzmallow.com faviconpuzzmallow.com
0 評論8:05 AM在 HN 查看
I made a hex based logic puzzle where you place different coloured terrain hexes to terraform various planets and moons across the system.

To solve a puzzle, you have to follow the numbered labels indicating how many of each terrain type appear in each row, as well as the shape rules for each terrain type.

So far I have created:

- Mars

- The Moon

- Ganymede

- Venus

- Titan

2

CoinFountains – Let's replicate and scale what we are grateful for #

coinfountains.com faviconcoinfountains.com
0 評論12:55 PM在 HN 查看
4th pivot of this app

Discover and share positive experiences with others. Post plans to replicate what you're grateful for and see what resonates with the community. Toss coins to show your interest in growing the same positive experiences.

Who uses CoinFountains?

- Early adopters: find and share what's working before everyone else.

- Entrepreneurs: help grow and scale what's already working.

- Visionaries: see potential in what's working and help it grow.

2

ReelSafe – Save and AI search your fav videos from YouTube, Insta or FB #

reelsafe.anishkv.in faviconreelsafe.anishkv.in
0 評論3:49 PM在 HN 查看
Hi HN,

I built ReelSafe because I kept losing videos I knew I had saved.

I’d save an Instagram reel or a YouTube Short thinking “I’ll find this later,” but later meant endlessly scrolling through bookmarks with no way to search by what was inside the video.

So I built a small app that:

- Lets you save reels, shorts, and videos from multiple platforms

- Uses AI to understand what’s in the video (topics, actions, context)

- Allows searching your saved videos using natural language (e.g. “that breathing exercise video” or “keto recipe reel”)

The focus is on retrieval, not recommendations.

This is an early version and I’m especially interested in feedback around:

- Whether AI-based search feels useful or overkill

- What signals would make search results feel more trustworthy

- Privacy expectations around analyzing saved content

Link: https://reelsafe.anishkv.in

Happy to answer any questions and share technical details if useful.

— Anish

2

A lightweight, E2E encrypted pastebin built with Svelte 5 and Hono #

github.com favicongithub.com
0 評論3:51 PM在 HN 查看
I built this because I needed a simple way to send snippets to colleagues or copy/paste text from my phone to a random computer without logging into anything. I used a few other services for a while, but the downtime and general bloat finally got to me. I decided to build my own over the New Year break.

It is live here: https://yp.pe

Full disclosure: I vibe coded the vast majority of this using Claude Code, but I kept a pretty tight leash on the logic. The full commit history is public if you want to see the fumbles and the process. I will be the first to admit that some of my initial architecture decisions were not the best, and I completely own up to that, but I am happy with where the end result landed.

The Features:

- Fast and Lightweight: No ads and no tracking. CORS policy blocks Cloudflare analytics.

- Real-time Collab: Uses Yjs/CRDTs. It is limited to 10 concurrent editors by default on a first come first served basis, but it allows unlimited viewers.

- "Smart" Slugs: Slugs are kept as short as possible. I specifically removed ambiguous characters like capital I and lowercase l so it is easy to type the URL manually into another computer address bar.

- Note Controls: You can set notes to expire after a certain time or after a specific number of views. By default, any note not accessed in 90 days is automatically cleaned up.

- Privacy: No logins. E2E encryption for password protected notes. Passwords and hashes never leave your browser, only encrypted blobs do. There is a Playwright test in the repo that verifies this.

- The Rest: Custom slugs, syntax highlighting via highlight.js, rate limiting, PWA installable.
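The ambiguous-character slug idea could look something like this. This is a Python sketch; the alphabet, starting length, and retry counts are assumptions, not the site's actual implementation.

```python
# "Smart" slugs: generate the shortest slug from an alphabet with ambiguous
# glyphs (I, l, 1, O, 0) removed, growing the length only when collisions
# make shorter slugs impractical.
import secrets
import string

AMBIGUOUS = set("Il1O0")
ALPHABET = [c for c in string.ascii_letters + string.digits
            if c not in AMBIGUOUS]

def new_slug(taken, start_len=3, max_len=8):
    for length in range(start_len, max_len + 1):
        for _ in range(20):   # a few random tries per length before growing
            slug = "".join(secrets.choice(ALPHABET) for _ in range(length))
            if slug not in taken:
                return slug
    raise RuntimeError("slug space exhausted")

slug = new_slug(set())
```

Using `secrets` rather than `random` keeps slugs unguessable, which matters a little since anyone who knows a slug can edit the note.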

The Caveats: I wanted to avoid the complexity of ownership, so the rules are simple: anyone can edit or delete any note. It is designed for quick, ephemeral use rather than long-term storage. If someone takes your slug, they can delete it and you can take it back. It is a bit of a free-for-all, but it keeps the codebase clean.

Technical Stack:

- Frontend: Svelte 5 with runes

- Backend: Hono

- Infrastructure: Runs on Cloudflare Workers, using Durable Objects for the real-time sync and D1 for the database.

It has not been tested at scale, but since it is on Workers, I hope it holds up. Now that the holidays are over and I am heading back to work, I will not have a ton of time to maintain this, so PRs are very welcome if you find bugs.

I am hosting a public instance for now at yp.pe. If the costs get crazy I might not be able to keep it public, but I tried to make it as easy as possible to self-host with deployment scripts and documentation in the repo.

1

Sentience – Semantic Visual Grounding for AI Agents (WASM and ONNX) #

0 評論3:57 PM在 HN 查看
Hi HN, I’m the solo founder behind SentienceAPI. I spent this past December building a browser automation runtime designed specifically for LLM agents.

The Problem: Building reliable web agents is painful. You essentially have two bad choices:

Raw DOM: Dumping document.body.innerHTML is cheap/fast but overwhelms the context window (100k+ tokens) and lacks spatial context (agents try to click hidden or off-screen elements).

Vision Models (GPT-4o): Sending screenshots is robust but slow (3-10s latency) and expensive (~$0.01/step). Worse, they often hallucinate coordinates, missing buttons by 10 pixels.

The Solution: Semantic Geometry

Sentience is a "Visual Cortex" for agents. It sits between the browser and your LLM, turning noisy websites into clean, ranked, coordinate-aware JSON.

How it works (The Stack):

Client (WASM): A Chrome Extension injects a Rust/WASM module that prunes 95% of the DOM (scripts, tracking pixels, invisible wrappers) directly in the browser process. It handles Shadow DOM, nested iframes ("Frame Stitching"), and computed styles (visibility/z-index) in <50ms.

Gateway (Rust/Axum): The pruned tree is sent to a Rust gateway that applies heuristic importance scoring with simple visual cues (e.g. is_primary).

Brain (ONNX): A server-side ML layer (running ms-marco-MiniLM via ort) semantically re-ranks the elements based on the user’s goal (e.g., "Search for shoes").

Result: Your agent gets a list of the top 50 most relevant interactable elements with exact (x, y) coordinates, importance values, and visual cues, helping the LLM agent make decisions.
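A toy version of the prune-score-rank pipeline might look like this. It is Python for illustration only; the real system uses Rust/WASM heuristics plus an ONNX re-ranker, and the field names and weights here are invented.

```python
# Prune invisible elements, score the rest with simple heuristics, and
# return the top-k as coordinate-aware records for the LLM.
def rank_elements(elements, k=50):
    visible = [e for e in elements
               if e["visible"] and e["w"] > 0 and e["h"] > 0]

    def score(e):
        s = 0.0
        if e.get("interactive"):
            s += 2.0                 # clickable things matter most
        if e.get("is_primary"):
            s += 1.0                 # visual cue from the gateway stage
        s += min(e["w"] * e["h"], 10_000) / 10_000   # modest area bonus
        return s

    ranked = sorted(visible, key=score, reverse=True)[:k]
    return [{"x": e["x"], "y": e["y"], "tag": e["tag"],
             "score": round(score(e), 3)} for e in ranked]

els = [
    {"tag": "button", "x": 10, "y": 20, "w": 80, "h": 30, "visible": True,
     "interactive": True, "is_primary": True},
    {"tag": "div", "x": 0, "y": 0, "w": 0, "h": 0, "visible": False},
]
top = rank_elements(els)
```

A semantic re-ranker would then re-order this heuristic shortlist against the user's stated goal before it ever reaches the agent.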

Performance:

Cost: ~$0.001 per step (vs. $0.01+ for Vision)

Latency: ~400ms (vs. 5s+ for Vision)

Payload: ~1400 tokens (vs. 100k for Raw HTML)

Developer Experience (The "Cool" Stuff): I hated debugging text logs, so I built Sentience Studio, a "Time-Travel Debugger." It records every step (DOM snapshot + Screenshot) into a .jsonl trace. You can scrub through the timeline like a video editor to see exactly what the agent saw vs. what it hallucinated.

Links:

Docs & SDK: https://www.sentienceapi.com/docs

Python SDK: https://github.com/SentienceAPI/sentience-python

TypeScript SDK: https://github.com/SentienceAPI/sentience-ts

Studio Demo: https://www.sentienceapi.com/docs/studio

Build Web Agent: https://www.sentienceapi.com/docs/sdk/agent-quick-start

Screenshots with importance labels (gold stars): https://sentience-screenshots.sfo3.cdn.digitaloceanspaces.co... 2026-01-06 at 7.19.41 AM.png

I’m handling the backend in Rust and the SDKs in Python/TypeScript. The project is now in beta; I would love feedback on the architecture or the ranking logic!

1

VoltCode Run multiple Claude/Gemini tasks in parallel #

github.com favicongithub.com
0 評論1:55 PM在 HN 查看
I built VoltCode to solve AI coding bottlenecks by running multiple AI agents simultaneously (tested on M1 Pro). Generate components with Claude while Gemini writes tests—all in one IDE.

Problem: Current AI coding is sequential. You wait for Claude to finish one component before starting the next. With complex apps, you're constantly blocked—generate navbar → wait → generate auth → wait → write tests → wait. Each task takes 10-30s, adding up to minutes of idle time.

How it works: VoltCode's parallel task engine runs multiple AI agents simultaneously. Chat with Claude Code to generate a dashboard while Gemini writes API routes and another Claude instance creates tests. All tasks execute in parallel with intelligent queue management.

Key insight: Modern apps need multiple components. Instead of sequential AI calls, batch them. The bottleneck isn't AI speed—it's waiting for one task to finish before starting the next.
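The sequential-vs-parallel difference is essentially this. Below is a Python asyncio sketch; the agent names and the fake `generate` call are placeholders, not VoltCode's internals.

```python
# Instead of awaiting each generation sequentially, dispatch them
# concurrently and collect all results when the slowest finishes.
import asyncio

async def generate(agent: str, task: str) -> str:
    await asyncio.sleep(0.01)          # stand-in for a 10-30s model call
    return f"{agent} finished: {task}"

async def run_parallel(jobs):
    coros = [generate(agent, task) for agent, task in jobs]
    return await asyncio.gather(*coros)   # all jobs in flight at once

results = asyncio.run(run_parallel([
    ("claude", "build navbar"),
    ("gemini", "write tests"),
]))
```

With N independent tasks, wall-clock time drops from the sum of the task durations to roughly the maximum of them, which is the whole pitch.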

Built with Tauri + React. Supports Claude Code, Gemini, Codex with MCP protocol. Live preview updates as each parallel task completes. Task panel shows all running jobs with progress.

The magic moment: Watch your app build itself as multiple AIs work simultaneously.

1

Doom Playable in a GitHub Readme #

kuber.studio faviconkuber.studio
2 評論7:21 PM在 HN 查看
GitHub READMEs don’t support JavaScript, iframes, or WebAssembly, so how does someone run DOOM on it at all?

I built DoomMe (great name, I know) with a stateless graph of 8000+ Markdown files and WebP images of every possible position you can be or face towards on the E1M1 map to make it one very big "choose your own adventure" book on GitHub.

Open Source with MIT License

- GitHub repo: https://github.com/Kuberwastaken/DoomMe - Play it on my profile: https://github.com/Kuberwastaken

1

PutHouse – The AI Options Trader #

puthouse.com faviconputhouse.com
0 評論12:35 PM在 HN 查看
Like many of you, I liked the idea of selling premium. It feels like a great strategy for passive income. But despite trading for years, I was still my own worst enemy, and I kept seeing people here asking for a way to automate trading because, let’s be honest, the manual grind is brutal. I noticed three things that kept me suffering:

Time sink: Checking prices, monitoring delta, watching for assignment risk, deciding when to roll or close. I was spending 10+ hours a week managing positions. That’s not "passive" income; that’s a second job.

Emotional decisions: When I saw extra premium, I’d pick strikes out of greed. When I was winning, I’d want to hold longer. And when the market moved, I didn’t want to miss out. I knew the rules and still broke them.

Gatekeeping: I’d see screenshots of huge gains with zero explanation. Just a P/L flex. I don’t think most people are malicious but the end result is the same: outcomes without process.

I stopped trying to trade better and started trying to remove myself from the equation. I built a systematic tool called PutHouse to automate the mechanics of covered calls and cash-secured puts.

What I’ve learned since running PutHouse across different market conditions:

Time is the real ROI: Getting back ~10 hours a week to live life and have it managed for you has been the biggest win.

Systematic > intuitive: A boring, repeatable strategy beats a gut feeling every single time.

Transparency matters: I need to know what the data is telling me because metrics such as IV, RV, RSI, OI, and delta are important.

I’m opening this up because I want to stress-test the logic with people who actually trade this way. I’m not looking to sell you anything. I genuinely want feedback on what I’ve built:

What would you not trust about an algorithmic approach to selling premium?

For those who trade manually, what’s the "edge case" or market condition that would make you worry about an automated system?

What metrics are you looking at that I might be missing? (Currently: IV, RV, RSI, OI, Delta, DTE, Min Volume, Min Price, Bid/Ask Spread).

The goal isn't a "money printer". It's consistency and risk management. If you’ve built something similar or have thoughts on the mechanics, I’d love to hear what you would be most worried about.

1

I built an agent that analyzes your Google Search Console data #

chatseo.app faviconchatseo.app
0 評論8:07 AM在 HN 查看
I built this after getting frustrated with digging through Google Search Console tables and exports just to decide what SEO work to do next for my first project (a recipe mobile app).

ChatSEO connects to Google Search Console and lets you ask questions in plain language (e.g. pages close to page 1, keywords losing traction, or what actions are likely to have the biggest impact).

Under the hood, it combines GSC data with external keyword and SERP data to prioritize SEO actions instead of just showing dashboards.

It’s still early: onboarding is basic, recommendations can be noisy, and pricing isn’t final.

I’d love feedback from people who actively use GSC on whether this approach is useful or what feels wrong.

1

SCIM Gateway for Go – RFC-compliant server with plugin architecture #

0 評論9:34 PM在 HN 查看
I built a SCIM 2.0 gateway library for Go that makes it straightforward to expose any backend as a standards-compliant identity provider.

SCIM (System for Cross-domain Identity Management) is the standard protocol for user provisioning between systems like Okta, Azure AD, and your application. Existing Go implementations were either incomplete or unmaintained.

Key technical decisions:

- Plugin pattern: backends return raw data, the library handles the protocol (filtering, pagination, PATCH operations)

- Full RFC 7643/7644 compliance: all filter operators, complex path expressions, bulk operations with cycle detection

- Per-plugin authentication: each backend can use different auth (Basic, Bearer, custom JWT)

- Minimal dependencies: only google/uuid, stdlib for everything else

- Thread-safe: proper mutex usage, 76% test coverage, zero panics

Can run as standalone server or embedded http.Handler. Includes SQLite, PostgreSQL, and in-memory examples. The plugin interface is simple:

    func (p *Plugin) GetUsers(ctx context.Context, params QueryParams) ([]*User, error) {
        return p.db.GetAllUsers(), nil  // Library handles filtering, pagination, etc...
        // Or you can optimize your queries by using params
    }
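To illustrate the division of labor (in Python for brevity, though the library itself is Go): the plugin returns raw users and the gateway layer applies the SCIM filter and pagination. This sketch handles only the `eq` operator and invented dict-shaped users, so treat it as a model of the pattern, not the library's code.

```python
# Gateway-side work: filter + paginate whatever the plugin hands back.
def apply_filter(users, scim_filter):
    if not scim_filter:
        return users
    attr, op, value = scim_filter.split(maxsplit=2)  # e.g. 'userName eq "bob"'
    assert op == "eq", "only eq is sketched here"
    value = value.strip('"')
    return [u for u in users if u.get(attr) == value]

def list_users(plugin_users, scim_filter=None, start_index=1, count=100):
    matched = apply_filter(plugin_users, scim_filter)
    page = matched[start_index - 1 : start_index - 1 + count]
    return {"totalResults": len(matched),
            "startIndex": start_index,
            "Resources": page}

resp = list_users(
    [{"userName": "bob"}, {"userName": "alice"}],
    scim_filter='userName eq "bob"',
)
```

SCIM's `startIndex` is 1-based per RFC 7644, hence the `start_index - 1` slice; a plugin that wants efficiency can instead translate the filter into its own query, as the comment in the Go snippet suggests.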
Inspired by the Node.js scimgateway but redesigned for Go's type system and concurrency model.

GitHub: https://github.com/marcelom97/scimgateway

Happy to discuss design tradeoffs and answer questions!

1

A playground to test stablecoin gas fees and concurrent txns on Tempo #

tempo-playground.vercel.app favicontempo-playground.vercel.app
1 評論6:56 AM在 HN 查看
Hi HN,

I built Tempo Playground, a small hands-on playground to explore Tempo’s transaction model for real-world payments.

The goal was to make Tempo’s ideas tangible instead of just reading documentation.

What you can try right now:

- Pay transaction fees in any USD-denominated stablecoin (no native gas token required)

- Execute concurrent transactions from the same account (parallel execution)

- Batch multiple calls into a single atomic transaction

Some early observations:

- Gas costs are around $0.001 per transaction

- Finality is roughly 0.5s, which feels close to web2 UX for payments

Link: https://tempo-playground.vercel.app

It’s intentionally minimal and still rough around the edges.

I’d love feedback on:

- What else would you want to test to evaluate a payments-focused chain?

- Any edge cases or stress scenarios you think I should explore next?

Happy to answer technical questions or share implementation details.

1

LeetDuck – AI voice-to-voice mock interviewer for LeetCode.com #

leetduck.com faviconleetduck.com
0 評論3:56 PM在 HN 查看
I made this because I had trouble "thinking aloud" during technical interviews, even when I knew the optimal solution.

I'm using it right now to practice vocalizing my ideation, which is a huge component of technical interviews that LeetCode doesn't cover.

It integrates seamlessly into leetcode.com and sits on your screen as a little "rubber duck."

LeetDuck is automatically configured to proactively ask you a random behavioral question at the start, respond to runs and submits, and ask you about the time and space complexity when you successfully complete the problem.

Let me know what you guys think! Is this actually something other people would use too?