Show HN – December 21, 2025
WalletWallet – create Apple passes from anything #
Gaming Couch – a local multiplayer party game platform for 8 players #
I’ve been working on Gaming Couch, a web-based game platform where up to 8 players use their smartphones as controllers to play real-time action mini-games on a central browser screen.
TL;DR:
- 18 competitive mini-games for up to 8 players
- Runs entirely in the browser
- Phones act as controllers (no apps, no accounts required)
- Focused on fast, chaotic, real-time party games (no trivia)
- Currently in public early access
Try it here: https://gamingcouch.com. Open the link on a computer, host a session, scan the QR code with your phone(s) and play!
What is it?
Gaming Couch is a party game platform where friends play short competitive action games together on one screen, using their phones as controllers (there's also support for physical gamepads if that's more your thing!)
I intentionally avoided trivia and text-heavy games. Many people don’t write or read English fluently, and I wanted games where reaction, timing, and chaos matter more than spelling.
It’s currently in early public access with 18 mini-games, all made by me and two friends. All game rounds last ~1 minute, scores carry over, and after each round players vote on the next game. If you’re solo, 3 games support bots, but it’s best with a full couch of people, as half the fun comes from the social aspect of playing together!
Why I built it:
For the last 15+ years, my friends and I have loved video game nights, but organizing them has always been a PITA when you have more than 4 people playing:
- Different games were under different Steam accounts requiring downloads and installation.
- Extra controllers were missing (somebody forgot to bring theirs) or they wouldn’t pair.
- Consoles were expensive and not always available if we were on the road.
Once I started building it, other dev friends asked if they could make games for it too, which led me to realize this could also be a platform for small party games, especially for gamejam devs who don’t want to or have time to build multiplayer infrastructure from scratch. This is why supporting third-party games is the next major feature I’m working on.
Tech stack:
- Games run locally in the host’s browser (no streaming of games)
- Phones connect via WebRTC to the host session (1–10ms latency in ideal conditions with P2P connection)
- Fallback to TURN when a direct P2P connection isn’t possible, e.g. due to strict firewall settings on corporate networks or use of VPNs
- Website/Platform made with React + TypeScript
- Existing games made with Unity or just plain JS/TS.
- Backend: Supabase (Postgres + auth only, currently only used for optional user accounts)
How is it different from e.g. Jackbox, Airconsole or Nintendo?
Jackbox is absolutely great, but it’s heavily dependent on English literacy and "being funny" on the spot. I wanted something focused on fast, chaotic, real-time action games that work even if your friends speak different languages or just want to smash buttons. Also, I'm not a fan of their party pack model...
AirConsole is the best-known comparison to Gaming Couch in terms of technology and execution, but I feel there is a gap for a curated experience where the UI is unified, rounds are 60 seconds, and the competitive "meta-game" (scoreboards/voting) is baked into the platform. In any case, AirConsole was acquired by a car-software company and has pivoted its focus from couch gaming toward in-car entertainment.
Nintendo games are usually the gold standard in the party game category but the HW and games cost so much! With Gaming Couch, I want to keep the accessibility threshold as low as possible so everyone is able to play without upfront HW or SW costs.
What do you think of this? Are you an interested player or perhaps a developer who has had an idea to develop a fun 8 player mini-game but has been daunted by the idea thus far?
RenderCV – Open-source CV/resume generator, YAML → PDF #
Run `rendercv render cv.yaml` → get a perfectly typeset PDF.
Highlights:
1. Version-controllable: Your CV is just text. Diff it, tag it.
2. LLM-friendly: Paste into ChatGPT, tailor to a job description, paste back, render. Batch-produce variants with terminal AI agents.
3. Perfect typography: Typst under the hood handles pixel-perfect alignment and spacing.
4. Full design control: Margins, fonts, colors, and more; tweak everything in YAML.
5. Comes with JSON Schema: Autocompletion and inline docs in your editor.
Battle-tested for 2+ years, thousands of users, 120k+ total PyPI downloads, 100% test coverage, actively maintained.
GitHub: https://github.com/rendercv/rendercv
Docs: https://docs.rendercv.com
Overview on RenderCV's software design (Pydantic + Jinja2 + Typst): https://docs.rendercv.com/developer_guide/understanding_rend...
I also wrote up the internals as an educational resource on maintaining Python projects (GitHub Actions, packaging, Docker, JSON Schema, deploying docs, etc.): https://docs.rendercv.com/developer_guide/
The Official National Train Map Sucked, So I Made My Own #
I’m a junior developer. I wanted to share a side project I’ve been working on.
The national railway carrier (BDZ) has no public API. They have an official map but the UI is quite dated, often lags, and doesn't show the full route context.
I wrote a short write-up about the process here: https://www.pavlinbg.com/posts/bg-train-tracker
I know it's still rough around the edges (I'm still working on it), but I’d love to hear your feedback or suggestions!
Rust/WASM lighting data toolkit – parses legacy formats, generates SVGs #
I built this to scratch my own itch and put it on crates.io and PyPI where nothing like it existed.
The old file formats (EULUMDAT from 1990, IES from 1991) still work fine for basic photometry. But the industry is moving toward spectral data – full wavelength distributions instead of just lumen values.
The new standards (TM-33, ATLA-S001) are barely supported by existing tools.
So this handles both: legacy formats for compatibility, spectral data for anyone who wants to work with the new standards.
Stack: Rust core, then UniFFI for bindings. One codebase compiles to WASM/Leptos, egui, SwiftUI, Jetpack Compose, PyO3.
At one point the generated Swift boilerplate got so large GitHub classified it as a Swift project. 3D viewer is Bevy, loaded on-demand.
Feedback welcome – especially on the SVG output and the 3D viewer.
https://github.com/holg/eulumdat-rs (MIT/Apache-2.0)
HN Sentiment API – I ranked tech CEOs by how much you hate them #
505k+ comments, Oct 31 - Present.
Here's the leaderboard:
LOVED:
- Steve Jobs: 44% positive, 7% negative
- Linus Torvalds: 43% positive, 5% negative
- Gabe Newell: 34% positive, 8% negative
MID:
- Bill Gates: 22% positive, 8% negative
- Tim Cook: 15% positive, 30% negative
- Bezos: 12% positive, 18% negative
HATED:
- Zuckerberg: 4% positive, 35% negative
- Sam Altman: 8% positive, 38% negative
- Musk: 5% positive, 45% negative
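The LOVED/MID/HATED buckets above can be reproduced with a toy classifier. The net-sentiment thresholds here are my guesses chosen to match the leaderboard, not the API's actual logic:

```python
# Hypothetical bucketing sketch: classify by net sentiment
# (percent positive minus percent negative). Thresholds are invented.

def bucket(positive: float, negative: float) -> str:
    net = positive - negative
    if net >= 25:
        return "LOVED"
    if net >= -20:
        return "MID"
    return "HATED"

leaderboard = {
    "Steve Jobs": (44, 7),   # net +37 -> LOVED
    "Tim Cook": (15, 30),    # net -15 -> MID
    "Musk": (5, 45),         # net -40 -> HATED
}
for name, (pos, neg) in leaderboard.items():
    print(name, bucket(pos, neg))
```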
Try it yourself:
# Who does HN talk about the most?
curl "https://api.hnpulse.com/entities?label=person&sort=mentions"
# What are people saying about remote work?
curl "https://api.hnpulse.com/comments?entity=remote work&limit=3"
# Is OpenAI's reputation getting worse?
curl "https://api.hnpulse.com/trends?entity=openai&bucket=day"
# What technology gets mentioned alongside SF?
curl "https://api.hnpulse.com/entities?co-occur=SF&label=technolog..."
Stack: Go, PostgreSQL, GPT-4o mini for entity extraction
Docs: https://docs.hnpulse.com
API: https://api.hnpulse.com
I built a 1‑dollar feedback tool as a Sunday side project #
So, as a “principle” experiment, I built my own today as a side project and priced it at 1 dollar. If something is cheap to run and easy to replicate, it should be priced accordingly, and it’s also fun marketing.
A $1 feedback tool.
Shipped today, got the first users (and money) today, writing this post today. Sunday side project, then back to the main product tomorrow.
ArkhamMirror – CIA's Analysis of Competing Hypotheses, Runs in Browser #
Message received. I built a standalone version that runs entirely in your browser.
Live tool: https://mantisfury.github.io/ArkhamMirror/ach/
Full ArkhamMirror repo: https://github.com/mantisfury/ArkhamMirror
What it does:
- Implements Heuer's 8-step ACH methodology (the CIA technique for avoiding confirmation bias)
- Guides you through identifying hypotheses, gathering evidence, building the consistency matrix, and running sensitivity analysis
- Exports to JSON, Markdown, or PDF
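As a rough illustration of the consistency-matrix step, here is a toy Python version. The hypotheses, ratings, and scoring are invented for the example; Heuer's full method also weights evidence by credibility and diagnosticity:

```python
# Minimal ACH consistency-matrix sketch: rate each piece of evidence
# against each hypothesis as consistent ("C"), inconsistent ("I"), or
# neutral ("N"), then rank hypotheses by how much evidence contradicts
# them. The least-contradicted hypothesis survives.

matrix = {
    "H1: insider leak":  ["C", "I", "C"],
    "H2: external hack": ["C", "C", "N"],
    "H3: accident":      ["I", "I", "I"],
}

def inconsistency_score(ratings):
    return sum(1 for r in ratings if r == "I")

ranked = sorted(matrix, key=lambda h: inconsistency_score(matrix[h]))
print(ranked[0])  # hypothesis with the least contradicting evidence
```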
Privacy model:
- All data stored in browser localStorage
- Zero network calls after initial page load, except to/from your AI provider (if applicable)
- No backend, no accounts, no telemetry
- Works offline once loaded
Built as part of ArkhamMirror, a 100% local investigative platform I made as a non-coder using AI assistants.
Optional AI assistance:
- Connect your own API key (OpenAI, Groq) for AI-powered suggestions
- Or use local LLMs (Ollama, LM Studio, local Anthropic through a proxy) if you run it locally
- The AI helps suggest hypotheses, evidence items, and ratings – but you make all decisions
Why standalone? The full ArkhamMirror platform is powerful but requires Docker + databases. This gives journalists, analysts and anyone who's curious a zero-friction way to try the ACH methodology immediately.
Analysts have been stuck with crappy spreadsheets for a long time. Now they (and you) have a free upgrade. Source: https://github.com/mantisfury/ArkhamMirror/tree/main/ach-sta...
If you try it out, please let me know what you think.
Kanmail – Turn your inbox into a Kanban board #
Been working away shifting Kanmail over to a Go backend (using the excellent wails) and rewriting the frontend in TypeScript. This "new" v2 is much quicker, overcomes a bunch of old limitations (no age limit on paginating folders) and finally gets a working Linux AppImage (*my Linux test setup is somewhat limited and I expect there's plenty of edge cases here - hit me up if you have issues).
Anyway, would love any feedback on site or app, the previous round was incredibly useful.
Lockify – developer-friendly CLI for managing encrypted env variables #
I built Lockify because I wanted a simple way to encrypt and decrypt files locally without relying on cloud services.
It’s a small Go-based CLI focused on being simple, fast, and easy to use from the terminal.
I’d really appreciate any feedback, especially around usability and CLI design.
BetterQR – I got tired of $20/mo+ subscriptions for simple QR codes #
I built BetterQR because I refuse to pay more than my Netflix subscription for 3 QR codes!
I just needed 3 QR codes for an event, but every tool I found wanted a ~$30/month subscription. It felt like a massive pricing mismatch for what is essentially a simple redirect and a PNG.
I ended up hacking together a minimal alternative in an afternoon and decided to polish it up and ship it.
It's intentionally lean:
- Static & dynamic codes (that don't break).
- Analytics (just enough to see if anyone is actually scanning).
- No "subscription or die" model for basic usage. Up to 3 codes are free forever. If you need more, the paid tier is priced like a utility, not a luxury suite.
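For anyone curious what makes a "dynamic" code dynamic: the printed QR encodes a stable short URL, and only the server-side mapping behind it changes. A minimal sketch (names and storage are my invention, not BetterQR's implementation):

```python
# Toy dynamic-redirect store: the QR code never changes, the
# destination behind its short code can be swapped at any time.
import secrets

redirects = {}  # short code -> destination URL

def create_code(destination: str) -> str:
    code = secrets.token_urlsafe(4)  # e.g. "q3Zx9A"
    redirects[code] = destination
    return code

def update_code(code: str, new_destination: str) -> None:
    redirects[code] = new_destination  # printed QR keeps working

code = create_code("https://example.com/event-2025")
update_code(code, "https://example.com/event-2026")
print(redirects[code])
```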
I'm sharing this mainly to get feedback:
- What QR features people actually care about
- Where the "free vs paid" line should be
- Whether this solves a real annoyance or just mine
Happy to answer questions about implementation, trade-offs, tech stack or anything else.
Twitch Plays Claude – Crowd-controlled live coding experiment #
I built a live experiment called "Twitch Plays Claude". It’s exactly what it sounds like: inspired by Twitch Plays Pokémon, but instead of moving a sprite, the crowd controls an LLM (Claude Opus 4.5) to live-code a single index.html file.
I’m really curious to see if this results in a chaotic mess or if a "wisdom of the crowd" effect kicks in to build a coherent application.
How it works:
Any user in the chat can submit a prompt using !idea <prompt>. This can be as simple as "Add a small button here", or it can try to modify the whole page, like "Make the website a 3D space simulation using Three.js". The composition is where the chaos emerges. For instance, you can write "!idea add a mario movie projected automatically on a screen in the space".
I implemented two modes to manage the chaos:
- Anarchy: Chat inputs are batched. I included a "pressure estimate" logic in the system prompt so the AI tries to satisfy the weighted demand of the crowd.
- Democracy: Inputs are synthesized by Claude, then voted on by chat before execution. Each complete cycle lasts about 1:30-2 mins.
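The "pressure estimate" for Anarchy mode could look something like this sketch: batch the chat's !idea prompts and weight duplicates so the model sees which demands have the most support. The function and output format are my invention, not the stream's actual system:

```python
# Hypothetical batching of !idea prompts with crowd-pressure weights.
from collections import Counter

def batch_ideas(chat_lines):
    ideas = [line[len("!idea "):].strip()
             for line in chat_lines if line.startswith("!idea ")]
    counts = Counter(ideas)
    # Render a weighted summary suitable for a system prompt.
    return [f"{idea} (requested x{n})" for idea, n in counts.most_common()]

batch = batch_ideas([
    "!idea add a button",
    "!idea make it 3D",
    "!idea add a button",
    "hello chat",          # non-commands are ignored
])
print(batch)
```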
To keep it interesting, the crowd sets a "Collective Goal" every 30 minutes. If the goal changes, the page resets; if kept, iteration continues.
The stack:
- Backend: FastAPI, Gunicorn, Nginx, and a custom Twitch bot.
- Frontend: The stream updates the DOM using morphdom via websockets (used only to signal that something has changed). This was important to prevent full page refreshes and keep the visual experience smooth. If a refresh bug does occur, the chat can reload the page using !refresh.
- Sandbox: It's heavily sandboxed, but I allowlisted libraries like Three.js so people can try to build 3D scenes or mini-games.
- AI: Claude Opus 4.5 for both democratic synthesis and code (patch) production. I implemented a custom system so Claude doesn't have to rewrite the full index.html each time.
I plan to keep this running for a few days. The GitHub repo auto-updates with every commit from the stream.
Depending on how it goes, I might implement hierarchical clustering on semantic embeddings to improve the Democracy mode, or give the chat control over the system prompt itself and/or reset the page.
Links:
- Live Stream: https://www.twitch.tv/artix187
- Result (Live Website, at your own risk): https://artix.tech/tpc
- Crowd-produced code: https://github.com/ArtixJP/twitch-plays-claude
Please let me know what you think or if you have any idea to improve the system!
Build apps with 500 models locally. No tracking, no cloud, just code #
100% Open Source
The core idea: You should be able to prompt a full-stack application into existence, but the environment should be local, the models should be swappable (Ollama/LM Studio support was a priority), and the output should be standard code you actually own.
A few things I focused on:
Context Management: One of the hardest parts was figuring out how to feed the right file context back to the LLM without blowing out the token limit. I’ve implemented a custom indexing approach to keep the "vibe coding" flow snappy.
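One common way to keep file context under a token budget is greedy packing: score files by relevance, then take the highest-scoring ones until the budget runs out. This is a hedged sketch of the general idea, not codinit.dev's actual indexing:

```python
# Greedy context packing under a token budget (illustrative only).

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough 4-chars-per-token heuristic

def pack_context(files, budget):
    """files: list of (path, content, relevance). Returns chosen paths."""
    chosen, used = [], 0
    for path, content, _ in sorted(files, key=lambda f: -f[2]):
        cost = estimate_tokens(content)
        if used + cost <= budget:
            chosen.append(path)
            used += cost
    return chosen

files = [
    ("app.ts", "x" * 400, 0.9),    # ~100 tokens, most relevant
    ("util.ts", "x" * 4000, 0.5),  # ~1000 tokens, too big for budget
    ("log.txt", "x" * 40, 0.1),    # ~10 tokens
]
print(pack_context(files, budget=200))
```

A real indexer would use embeddings or import graphs for the relevance score; the greedy knapsack step stays the same.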
The "Local" Win: Since it runs locally, you can use it on a plane or in a coffee shop without burning through a data plan or worrying about API latency.
Stack: It’s primarily focused on the Node.js ecosystem right now, as that's where I found the most consistent generation results.
It’s definitely a work in progress. Specifically, I'm still fine-tuning how it handles large-scale refactors across multiple files, which is where most AI coders tend to trip up.
I’m really curious to hear from this community—do you actually prefer using local models for this kind of work, or is the "intelligence gap" between a local Llama 3 and Claude Sonnet 4.5 still too big for your daily use?
Site: https://codinit.dev
Zero Trust API – Image CDR in Rust/WASM (Rebuild Images from Pixels) #
API Docs & Demo: [Zero Trust App](https://zero-trust-web.vercel.app/)
RapidAPI (Get a key): [Zero Trust API](https://rapidapi.com/image-zero-trust-security-labs/api/zero...)
I built an LLM agent that finds you online and roasts you #
This is an agent that looks up public information and roasts you based on what it finds.
The Santa theme is just seasonal UI. Underneath, it is a straightforward agent that does web lookup, identity confirmation, and multi turn interaction.
Happy to answer questions or hear what breaks.
MomentBridge – A 24KB site to share life moments (pure HTML/CSS/JS) #
Tech stack:
- Pure HTML, CSS, and vanilla JavaScript (no frameworks, no build step)
- 30+ moment cards with smooth hover animations
- Fully responsive design
- Total file size: ~24KB (including images/icons)
- Hosted on GitHub Pages
Live demo: https://vnglstzrs.github.io/momentbridge/
I built this as an experiment in minimal front‑end design and to encourage digital mindfulness. All feedback is welcome—and yes, the project is for sale if someone wants to develop it further.
SHA-256 quasi-collision with 184/256 matching bits #
Code and verification script:
https://github.com/POlLLOGAMER/SHA-256-Colision-Finder-NEW.i...
OSF project:
https://doi.org/10.17605/OSF.IO/EA739
Not a full collision. Open to feedback.
The equation for smoke vortices also describes 100M° fusion plasma #
The math is identical: potential vorticity conservation.
Particles carry vorticity, grid does the FFT Poisson solve, B-splines shuffle data between them. The trick is tracking how the flow map deforms (Jacobian evolution via RK4) and reinitializing before it blows up.
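The "Jacobian evolution via RK4" mentioned above relies on a standard fourth-order Runge-Kutta step. Here is a textbook sketch of that integrator with a sanity check, not the repo's actual solver:

```python
# One classic RK4 step for dy/dt = f(t, y); the same scheme applied to
# the flow-map Jacobian just uses a matrix-valued y.

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on dy/dt = y over [0, 1]; the exact answer is e.
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(round(y, 6))  # ≈ 2.718282
```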
Why bother: grid methods smear turbulent structures. Lagrangian particles don't. Fusion people care because the scrape-off layer is where reactors fail. Turbulent blobs hit the walls.
Matches Arakawa finite-differences on energy/enstrophy conservation. Zonal flows emerge without forcing. Linear dispersion validates against theory.
Lyrics to Rolling WebVTT Converter #
Ava – open-source AI voice assistant that runs in the browser #
I built a voice assistant that runs entirely in the browser. No backend, no API calls. Everything runs on your device.
The goal was to see how far browser based AI has come, and whether a full voice pipeline can work client side with acceptable latency.
How it works:
- Speech to text: Whisper tiny.en via WebAssembly
- LLM: Qwen 2.5 0.5B running via llama.cpp WASM port
- Text to speech: Native browser SpeechSynthesis API
Responses stream in real time. TTS starts speaking as soon as a sentence is generated instead of waiting for the full reply.
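The sentence-by-sentence TTS handoff can be sketched like this: buffer streamed tokens and emit each sentence as soon as it completes, so speech starts while the model is still generating. This is my illustration; Ava's actual pipeline may differ:

```python
# Yield complete sentences from a token stream as soon as they finish,
# so a TTS engine can start speaking before the full reply arrives.
import re

def sentences_from_stream(tokens):
    buf = ""
    for tok in tokens:
        buf += tok
        # A sentence ends at .!? followed by whitespace.
        while (m := re.search(r"[.!?]\s", buf)):
            yield buf[: m.end()].strip()
            buf = buf[m.end():]
    if buf.strip():
        yield buf.strip()  # flush whatever is left at end of stream

stream = ["Hel", "lo there. ", "How are", " you? ", "Good"]
print(list(sentences_from_stream(stream)))
```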
After the first load, it works completely offline. Nothing leaves the device.
Why this matters: it shows what is now possible with modern browsers, small LLMs, and WASM.
Caveats:
- Requires Chrome or Edge 90+ due to SharedArrayBuffer for WASM threading
- Around 380MB initial download, cached afterwards
- English only for now
- The 0.5B model is limited, but small enough to run locally
Tested on macOS and Linux desktop browsers. I could not get this working reliably on mobile yet due to memory and threading limits. Getting all of this working in the browser took far longer than expected due to many low level WASM and browser issues.
Demo https://ava.muthu.co
Source https://github.com/muthuspark/ava
I would love feedback, especially from anyone experimenting with local AI or browser based ML, and ideas on improving performance or mobile support.
ArgueWiki, where arguments live forever #
You can only rank supporting arguments against other supporting arguments, opposing against opposing, etc., in the hope of neutralizing confirmation bias for a given position. I.e., even if you agree with a perspective, you'd still have to decide which is the better argument for it.
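One way to implement side-restricted ranking is an Elo-style update that only compares arguments on the same side, so you rank the best supporting argument without ever voting support vs. oppose. This is my illustration of the idea, not ArgueWiki's actual algorithm:

```python
# Elo-style pairwise ranking restricted to arguments on the same side.

def elo_update(r_winner, r_loser, k=32):
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected)
    return r_winner + delta, r_loser - delta

ratings = {"sup_a": 1500, "sup_b": 1500, "opp_a": 1500}
side = {"sup_a": "support", "sup_b": "support", "opp_a": "oppose"}

def vote(winner, loser):
    if side[winner] != side[loser]:
        raise ValueError("can only rank arguments on the same side")
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

vote("sup_a", "sup_b")  # allowed: both are supporting arguments
print(ratings["sup_a"] > ratings["sup_b"])  # True
```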
I made this because I've spent too much time arguing with people on the internet, and sometimes you see the same tired arguments and rebuttals, and I just wished there was a place where you could point people so that they can walk through all the arguments and counterarguments themselves so it's not people just repeating themselves in circles.
I know it's a pretty common hobby horse for rationality & debate nerds; I've seen a lot of varieties of the same thing on the internet while seeing how else things were done. It started with the idea of argument mapping/trees and how different statements could connect to other statements, and perhaps there could be a massive web visual of how all statements interconnect...but it made me think of how people thought Obsidian's graph thing was cool, but ultimately pointless.
Anyways, I wanted to go the opposite direction; instead of tons of features relating to fallacies/rebuttals, etc, I wanted to make the objects as simple as possible, such that they were more easily digestible. And ultimately the form is pretty loose for how people construct their Arguments.
This is my first side project; I primarily work in film & entertainment, but minored in Math/CompSci and always wanted to build a website (what a dream, huh?). Just had a baby and not a lot of time, and only vanilla webDev experience (I STILL maintain my personal website with Dreamweaver, but am probably gonna revamp it now that I have more experience.) Over the course of the past year just learning the ins and outs of Vue, going thru a few iterations of frameworks, libraries, tweaking, DBs, migrations, local dev, etc.
Comparing it with all the AI side projects that are currently out there, it feels like a pretty humble CRUD site, but it feels nice to put something out there.
The styling obviously isn't anything to write home about, but I wanted to keep it minimalist and closer to a wiki aesthetic, but responsive. Accessibility probably leaves much to be desired, but that's why I ultimately leaned on headless and NuxtUI for interactive components.
At this point, though, the question of content/users remains. I tried seeding with an LLM, but did not really enjoy tuning the quality of content generated, the voice, the personas, etc. I'm now considering perhaps keyword scanning for debates on X/Twitter and converting those into content that links back. It feels a bit cringe going into fully automated reply bot territory for seeding/promoting it, but I'm not really sure what other avenues to pursue if it's just done.
Open to feedback, especially around UX. A lot could probably be refined, but I'm sort of unsure how to make it easier for a new user to understand or to want to contribute. Higher-level feedback on the structure is also welcome; someone said to me that Statements and Arguments might still be too abstract for people and it should be one unified object, but I feel like I'd need more feedback to think about that.
Loan Sweet Spot – Mortgage visualizer vibe coded in 3h with Gemini #
I built LoanSweetSpot.com to solve a personal frustration: standard mortgage calculators give you a grid of numbers, but I wanted to visualize the "knee" of the curve—the sweet spot where a small extra payment saves a disproportionate amount of interest (and time).
The Build Process: This was a pure "vibe coding" experiment. I acted as the product manager/architect, and Gemini acted as the junior dev handling the syntax.
Stack: Zero dependencies (except Chart.js via CDN), single index.html file.
Hosting: Cloudflare Pages (Free tier).
Time: From blank file to production in one sitting (~3-4 hours).
Features:
Interactive Payoff Curve: The graph is the controller. Drag the blue dot to adjust the term.
The "Danger Zones": Vertical red lines appear automatically when your total cost hits 2.0x or 3.0x the principal (predatory interest warning).
Privacy: 100% client-side math.
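The math behind the danger-zone lines is standard amortization: for a fixed rate, total repaid grows with the term, and the red line marks where it first crosses a multiple of the principal. A hedged reconstruction (LoanSweetSpot's exact formulas may differ):

```python
# Standard fixed-rate amortization, plus a search for the term (in
# whole years) where total cost first exceeds `multiple` x principal.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def danger_zone_years(principal, annual_rate, multiple):
    for years in range(1, 101):
        total = monthly_payment(principal, annual_rate, years) * years * 12
        if total >= multiple * principal:
            return years
    return None

# At 7%, total repaid crosses 2x the principal in the low-20s of years.
print(danger_zone_years(300_000, 0.07, 2.0))
```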
I’d love to hear what you think of the UX, or your thoughts on this workflow.
Vibey – Vibe Code in your Browser #
https://github.com/martinpllu/vibey
Features include:
- Uses OpenRouter so you can pick your model (Gemini 3 Flash works really well)
- Everything runs locally in your browser – no backend, data only goes to OpenRouter for AI calls
- Auto-attaches browser errors to help the AI debug
- Screenshot capture to show the AI what's happening
- Restore any previous version of your code
- Community gallery to share and discover apps
I've been having fun using Vibey to quickly build little apps and games. I'd love to hear your thoughts.
Modern Trello Website #
The target audience is:
- Students planning coursework and deadlines
- Solo builders & freelancers managing multiple projects
- Casual users who want to get organised
lmk what you think!
deeploy 0.1 – Terminal-first deployment platform (self-hosted) #
Why? I wanted deployments to feel native to my terminal workflow. No browser tabs, no dashboards.
Stack: Go, Bubble Tea (TUI), Docker, Traefik, Let's Encrypt, PostgreSQL.
You install a daemon on your VPS (one curl), run the TUI on your machine, and deploy via git push. Zero downtime, auto SSL.
Early release (0.1). Would love feedback on the approach.