Daily Show HN


Show HN for December 30, 2025

39 items
37

Tidy Baby is a SET game but with words #

tidy.baby
7 comments · 3:57 PM · View on HN
Hi HN —

Tidy Baby is a new game made by me and Wyna Liu (of NYT Connections!) that is inspired by the legendary card-based game SET that we assume many of you love (we too love SET).

In SET, you’ve got four dimensions: shape, number, color, and shading, each with three variants.

In Tidy Baby you only have to deal with three dimensions:

- word length (3, 4, or 5 letters)
- part of speech (noun, verb, or adjective)
- style (bold, underline, or italic)

Like in SET, you are trying to form sets of three cards where, along each dimension, the set is either all the same or all different. If you’ve never played SET there are more details/examples at “how to play” in the game.
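
For the curious, the validity rule is easy to state in code. A minimal sketch in Python (our illustration only, with an assumed tuple encoding for cards, not the game's actual code):

```python
# A card is a (length, part_of_speech, style) tuple - a made-up encoding.
def is_valid_set(a, b, c):
    """Valid if, along each dimension, the three cards are
    either all the same or all different."""
    return all(
        len({x, y, z}) in (1, 3)  # 1 = all same, 3 = all different
        for x, y, z in zip(a, b, c)
    )

# Same length, all-different parts of speech, all-different styles: valid.
print(is_valid_set(
    (3, "noun", "bold"),
    (3, "verb", "italic"),
    (3, "adjective", "underline"),
))  # True
```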

The mechanics of Tidy Baby are sort of inspired by a solitaire/practice version of SET I sometimes play where you draw two random cards and have to name the third card that would make a valid set.

In Tidy Baby you are presented with two “game cards” and a grid of up to nine candidates to complete a valid set – your job is to pick the right one before the clock runs out.

Unlike in SET, you get points for “partial” sets where your set is valid on one or two dimensions (but not all three). It’s actually a pretty fun challenge to try to get only sets that are invalid along all three dimensions.

In building the game, we were sort of surprised that the biggest challenge was ensuring that all words were unambiguously one part of speech. You’d be surprised how hard it is to find three-letter adjectives that are not also common verbs or nouns. We did our best!

We’ve got three “paces” in the game: Steady, Strenuous, and Grueling (s/o MECC!)

Let us know what you think!

36

I remade my website in the Sith Lord Theme and I hope it's fun #

cookie.engineer
16 comments · 6:08 PM · View on HN
I used the time over Christmas and the days before New Year's to redesign my website.

This time I decided to make it in the theme of an evil Sith Lord that commands the Galactic Cookie Empire, because I found my previous cookie consent game a bit boring after a while.

Here's the website's welcome page and the cookie consent game: https://cookie.engineer/index.html

(the cookie consent game isn't started on any other page of my website, only on the welcome page)

I also made a "making of" weblog article series, in case you're interested in the development process, how I implemented things, and what kind of trouble I ran into along the way:

- Making of the Game: https://cookie.engineer/weblog/articles/making-of-my-website...

- Making of the Avatar: https://cookie.engineer/weblog/articles/making-of-my-website...

- Debuggers to toy around with: https://cookie.engineer/design/consent/index.html

It "should" work on modern browsers. I tested it on Firefox on Linuxes, Chrome/Chromium on Linuxes, and Safari on a MacBook. I don't have an iPhone, so I can't test that, but my two old Android phones were also working fine with the meta viewport hack (I can't believe this is still the "modern" way to do things after 15 years. Wtf).

Best experience is of course with a bigger display. On smaller screen sizes, the game will use a camera to zoom around the game world and follow the player's spaceship. Minimum window width is 1280 pixels for no camera, and I think 800 pixels to be playable (otherwise the avatar gets in the way too much in the boss fights).

Oh, there's also a secret boss fight that you can unlock when you toy around with the Dev Tools :)

What's left to do on the avatar animation side:

- I have to port CMUdict to JavaScript / ECMAScript. That's what I'm working on right now, as I'm not yet satisfied with the timings of the phonemes. Existing tools and pipelines that do this in Python aren't realtime, which leads to my next point.

- I want to switch to using waveform energy detection and a zero-crossing-rate detector to time phonemes more correctly (see the sketch after this list). I believe that changes in waveforms and their structures can signal differences between phonemes, but that's a gut feeling, not a scientific fact. Existing phoneme animation papers were kind of shit and broken (see my making-of article 2). The phoneme boundary detector is highly experimental, though, and is gonna need a couple more weeks until it's finished.
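
For reference, here is a minimal sketch of frame-level energy and zero-crossing rate in Python/NumPy; it illustrates the two standard features, not the site's actual JavaScript detector:

```python
import numpy as np

def frame_features(samples, frame_len=512, hop=256):
    """Per-frame energy and zero-crossing rate for a 1-D float array,
    the two signals commonly used to locate phoneme boundaries."""
    feats = []
    for start in range(0, len(samples) - frame_len, hop):
        frame = samples[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        # Fraction of adjacent samples whose sign flips.
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return feats

# A large jump in energy or ZCR between consecutive frames is a
# candidate phoneme boundary (e.g. a vowel giving way to a fricative).
```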

That's it for now. Hope you're gonna enjoy your stay on my website and have fun playing the Cookie Consent Game :)

Oh, also, because it might not be obvious: No LLMs were used in the making of this website. Pretty much everything is hand-coded, and unbundled and unminified on purpose so visitors can learn from the code if they want to.

~Cookie

31

Brainrot Translator – Convert corporate speak to Gen Alpha and back #

brainrottranslator.com
9 comments · 4:51 PM · View on HN
Hey HN, I built this because the generational gap online is getting wider (and weirder). It’s an LLM-wrapper that translates "Boomer" (normal/corporate English) into "Brainrot" (Gen Alpha slang) and vice versa. It also has an "Image-to-Rot" feature that uses vision to describe uploaded images in slang. It’s mostly for fun, but actually kind of useful for deciphering what your younger cousins are saying. Would love to hear what you think!
29

RAMBnB.xyz P2P marketplace for RAM rentals #

rambnb.xyz
9 comments · 11:32 PM · View on HN
Airbnb is missing out on the biggest rental opportunity of 2026 so I built the solution.

Need to open Microsoft Teams and your other favorite Electron apps? Get a temporary increase of memory.

16 GB ought to be enough for everybody, but in case it's not, you can rent more.

11

Cover letter generator with Ollama/local LLMs (Open source) #

coverlettermaker.co
11 comments · 2:11 AM · View on HN
I built an open source web app that generates cover letters using local AI models (Ollama, LM Studio, vLLM, etc.) so your resume and job application data never leaves your machine.

No placeholders. No typing. Letters are ready to copy and paste.

The workflow is:

1. Upload your resume (PDF); it gets parsed and cached in your browser.
2. Paste the job description.
3. Get a personalized cover letter in ~5 seconds.

It connects to any OpenAI-compatible local LLM endpoint. I use it with Ollama + llama3.2, but it works with any local model server.
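
To show what "OpenAI-compatible" means in practice, here is a minimal sketch of the kind of call involved; the endpoint is Ollama's default, but the prompt and variable names are my assumptions, not the app's actual code:

```python
import requests

resume_text = "...parsed resume text..."
job_text = "...pasted job description..."

# Ollama serves an OpenAI-compatible API under /v1 by default.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3.2",
        "messages": [
            {"role": "system",
             "content": "You write concise, personalized cover letters."},
            {"role": "user",
             "content": f"Resume:\n{resume_text}\n\nJob description:\n{job_text}"},
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```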

Key features:

- 100% local and private, depending on the LLM of your choice
- Smart resume parsing with pdf-parse
- Multi-language support (you can add more languages)
- Editable output with one-click copy

I made this because I was tired of wasting time writing letters while applying for jobs. All the other tools I tried weren't as quick as I wanted because I still needed to modify the letters to replace placeholders.

I also didn't find any tool that lets me use my local LLM for free, and I didn't want to pay for ChatGPT/Claude API calls for every job application.

The output quality is good, and it can bypass some AI detectors.

It's open source too and free to use. You can self-host it or run it locally in development mode.

GitHub: https://github.com/stanleyume/coverlettermaker

Cheers :)

8

MCP Mesh – one endpoint for all your MCP servers (OSS self-hosted) #

github.com
0 comments · 4:42 PM · View on HN
Hey HN! I’m Gui from deco (decocms.com). We’ve been using this tool internally as the foundation for a few customer AI platforms, and today we’re open-sourcing it as MCP Mesh.

MCP is quickly becoming the standard for agentic systems, but once you go past a couple of servers, the same problems show up for every team:

- M×N config sprawl (every client wired to every server, each with its own JSON + ports + retries)
- Token + tool bloat (dumping tool definitions into every prompt doesn't scale)
- Credentials + blast radius (tokens scattered across clients, hard to audit, hard to revoke)
- No single place to debug (latency, errors, "what tool did it call, with what params?")

MCP Mesh sits between MCP clients and MCP servers and collapses that mess into one production endpoint you can actually operate.

What it does:

- One endpoint for Cursor / Claude / VS Code / custom agents → all MCP traffic routes through the mesh
- RBAC + policies + audit trails at the control plane (multi-tenant org/workspace/project scoping)
- Full observability with OpenTelemetry (traces, errors, latency, cost attribution)
- Runtime strategies as "gateways" to deal with tool bloat: Full-context (small toolsets), Smart selection (narrow toolset before execution), Code execution (load tools on-demand / run code in a sandbox)
- Token vault + OAuth support, proxying remote servers without spraying secrets into every client
- MCP Apps + Bindings so apps can target capability contracts and you can swap MCP providers without rewriting everything

A small but surprisingly useful thing: the UI shows every call, input/output, who ran it, and lets you replay calls. This ended up being our “Wireshark for MCP” during real workflows.

It’s open-source + self-hosted (run locally with SQLite; Postgres or Supabase for prod).

You can start with `npx @decocms/mesh` or clone + run with Bun.

We’d love your feedback!

Links below:

Repo: https://github.com/decocms/mesh

Landing: https://www.decocms.com/mcp-mesh

Blog post: https://www.decocms.com/blog/post/mcp-mesh

edit: layout

7

I created a 2025 Wrapped for WhatsApp Conversations #

textunwrapped.com
3 comments · 1:54 AM · View on HN
Hey HN!

As I sat down to write my 2025 reflection, I realized one thing I was missing was a 'year wrapped' for my relationships — I got my music, got my photos, got my fitness wraps — but what about my relationships?

Specifically, I wanted to figure out what my text conversations say about my relationships and myself, and if there’s been evolution throughout the year.

Who reaches out more? What's our tone like, and how do we handle conflict resolution? What were our month-by-month successes and conflicts?

So I built an app that analyzes WhatsApp conversations (.txt files) and surfaces the patterns — using Anthropic’s API for the AI-generated analysis and Instant as my database.
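
For context, WhatsApp's exported .txt is just timestamped plain-text lines; here is a minimal parsing sketch in Python (the exact line format varies by locale and device, so the pattern is illustrative):

```python
import re

# Typical line: "12/30/25, 9:41 PM - Alice: see you tomorrow!"
LINE = re.compile(r"^(?P<ts>[\d/.,: ]+[AP]?M?) - (?P<sender>[^:]+): (?P<msg>.*)$")

def parse_chat(path):
    messages = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = LINE.match(line.strip())
            if m:
                messages.append((m["ts"].strip(), m["sender"], m["msg"]))
            elif messages:
                # Continuation of a multi-line message.
                ts, sender, msg = messages[-1]
                messages[-1] = (ts, sender, msg + "\n" + line.strip())
    return messages
```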

It’s called Text Unwrapped.

You sign up, and upload a conversation from WhatsApp. That’s sent to Anthropic's Claude AI with a bunch of different prompts. Here are some of the things you get:

- Relationship score & synopsis on overall communication
- Personality profiles (Myers-Briggs, tone analysis, top themes & emojis)
- A month-by-month timeline, outlining key texts and themes for that month
- Actionable insights for each person
- A deep dive on a topic of your choice (say you want to dive into defensiveness or avoidance)
- POVs from different schools of psychology, like CBT and Jungian

You can try this yourself. I made it so each sign up gets 1 free credit (1 credit = 1 conversation analysis).

I am not a technical person: I vibe-coded this. I used Claude Code (Opus 4.5), and Instant as the backend.

I’ve been playing around with making apps for the last few years, but it was always hard to make a leap. As of this March, I was able to start turning a lot of my passion projects into real ideas. I’ve made a few personal apps, but this is the first one I wanted to share on HN.

It took me about 3 days to build this. Once I had a strong spec in place, I needed to make very few changes to what Claude produced (mainly upgrading the design and double-checking permissions). Outside of that, Instant was a big help: Claude was able to use it and add auth in less than 2 minutes.

The hardest part was adding Stripe, but mainly because I hadn't done this before. Claude Code guided me through the webhook setup, and the main challenge was listening for the 'checkout complete' event to validate payment and add credits to the user.
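
For anyone hitting the same step, here is a minimal sketch of that webhook in Python with Flask; the Stripe event type and verification call are real, but the credit-granting helper is hypothetical:

```python
import os
import stripe
from flask import Flask, request, abort

app = Flask(__name__)

@app.post("/stripe/webhook")
def stripe_webhook():
    try:
        event = stripe.Webhook.construct_event(
            request.data,
            request.headers.get("Stripe-Signature", ""),
            os.environ["STRIPE_WEBHOOK_SECRET"],  # from the Stripe dashboard
        )
    except (ValueError, stripe.error.SignatureVerificationError):
        abort(400)
    if event["type"] == "checkout.session.completed":
        session = event["data"]["object"]
        add_credits(session["client_reference_id"], 1)  # hypothetical helper
    return "", 200
```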

I know privacy is a big concern here. For what it’s worth, I don’t store the actual conversation file — it’s deleted as soon as the conversation analysis is completed. I only store the analysis in the database.

Hope you enjoy it!

5

A dynamic key-value IP allowlist for Nginx #

github.com
0 comments · 10:29 PM · View on HN
I am currently working on a larger project that needs a short-lived HTTP "auth" layer based on a separate, out-of-band authentication process. Since each IP only needs to be allowed for a few minutes at a time on specific server names, I created this project to solve that. It should work with any Redis-compatible database. For the docker-compose example, I used Valkey.

This is mostly useful if you have multiple domains that you want to control access to. If you want to allow 1.1.1.1 to mywebsite.com and securesite.com, and 2.2.2.2 to securesite.com and anothersite.org for certain TTLs, you just need to set hash keys in your Redis-compatible database of choice like:

1.1.1.1:

  - mywebsite.com: 1 (30 sec TTL)
  - securesite.com: 1 (15 sec TTL)

2.2.2.2:

  - securesite.com: 1 (3600 sec TTL)
  - anothersite.org: 1 (never expires)

Since you can use any Redis-compatible database as the backend, per-entry TTLs are encouraged.
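
A minimal sketch of writing those entries with redis-py; note that per-field TTLs (HEXPIRE) need Redis 7.4+ or a Valkey equivalent, and the literal hash layout here is my assumption:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Allow 1.1.1.1 on two server names, each field with its own TTL.
r.hset("1.1.1.1", mapping={"mywebsite.com": 1, "securesite.com": 1})
r.hexpire("1.1.1.1", 30, "mywebsite.com")   # field expires in 30 s
r.hexpire("1.1.1.1", 15, "securesite.com")  # field expires in 15 s

# 2.2.2.2: one timed entry, one that never expires.
r.hset("2.2.2.2", mapping={"securesite.com": 1, "anothersite.org": 1})
r.hexpire("2.2.2.2", 3600, "securesite.com")
```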

An in-process cache can also be used, but is not enabled unless you pass --enable-l1-cache to kvauth. That makes successful auth_requests a lot faster since the program is not reaching out to the key/value database on every request.

I didn't do any hardcore profiling on this but did enable the chi logger middleware to see how long requests generally took:

kvauth-1 | 2025/12/30 21:32:28 "GET http://127.0.0.1:8888/kvauth HTTP/1.0" from 127.0.0.1:42038 - 401 0B in 300.462µs # disallowed request

nginx-1 | 192.168.65.1 - - [30/Dec/2025:21:32:28 +0000] "GET / HTTP/1.1" 401 179 "-" "curl/8.7.1"

kvauth-1 | 2025/12/30 21:32:37 "GET http://127.0.0.1:8888/kvauth HTTP/1.0" from 127.0.0.1:40160 - 401 0B in 226.189µs # disallowed request

nginx-1 | 192.168.65.1 - - [30/Dec/2025:21:32:37 +0000] "GET / HTTP/1.1" 401 179 "-" "curl/8.7.1"

# IP added to redis allowlist

kvauth-1 | 2025/12/30 21:34:02 "GET http://127.0.0.1:8888/kvauth HTTP/1.0" from 127.0.0.1:54032 - 200 0B in 290.648µs # allowed, but had to reach out to valkey

kvauth-1 | 2025/12/30 21:34:02 "GET http://127.0.0.1:8888/kvauth HTTP/1.0" from 127.0.0.1:54044 - 200 0B in 4.041µs

nginx-1 | 192.168.65.1 - - [30/Dec/2025:21:34:02 +0000] "GET / HTTP/1.1" 200 111 "-" "curl/8.7.1"

kvauth-1 | 2025/12/30 21:34:06 "GET http://127.0.0.1:8888/kvauth HTTP/1.0" from 127.0.0.1:51494 - 200 0B in 6.617µs # allowed, used cache

kvauth-1 | 2025/12/30 21:34:06 "GET http://127.0.0.1:8888/kvauth HTTP/1.0" from 127.0.0.1:51496 - 200 0B in 3.313µs

nginx-1 | 192.168.65.1 - - [30/Dec/2025:21:34:06 +0000] "GET / HTTP/1.1" 200 111 "-" "curl/8.7.1"

IP allowlisting isn't true authentication, and any production implementation of this project should use it as just a piece of the auth flow. This was made to solve the very specific problem of a dynamic IP allow list for NGINX.

4

ARES Dashboard – Open-Source AI Red-Teaming and Governance Platform #

github.com
1 comment · 1:21 AM · View on HN
Hi HN,

I’m sharing ARES Dashboard v1.0.0, an open-source platform for AI red-teaming, evaluation, and governance.

ARES is built to support structured red-team campaigns against LLMs, with a focus on safety, misuse resistance, and enterprise-style controls rather than one-off prompt testing.

Key features:

- Campaign-based AI red-teaming workflows
- Multi-tenant architecture with role-based access control
- Audit logging and compliance-oriented design
- Demo mode for users without API keys
- Extensive documentation covering security boundaries, threat models, and operational practices

The project is early but intentionally designed with production and enterprise readiness in mind. Current gaps are mostly operational (monitoring, load testing, production deployments), which are the next priorities.

GitHub: https://github.com/Arnoldlarry15/ARES-Dashboard

I’d really appreciate feedback from folks working in AI safety, security, infra, or governance — especially around real-world red-teaming workflows and operational concerns.

Thanks for taking a look!

4

DevCompare – a live, auto-updating comparison of AI coding tools #

devcompare.io
0 comments · 7:03 AM · View on HN
Hi HN,

AI coding tools change fast and I kept running into comparisons that were outdated within weeks. So I built DevCompare to automate the whole thing.

DevCompare regenerates comparisons daily across 18 criteria, including features, pricing, repo understanding, latency, and real developer sentiment from Reddit and HN.

The site is fully automated via cron, cached at the edge, and loads as a simple table with expandable details. No signups, no paywall.

https://devcompare.io

Happy to answer questions

4

cck – Claude Code file change tracking and auto CLAUDE.md #

0 comments · 4:43 PM · View on HN
Every Claude Code session starts fresh. You re-explain your project structure, build commands, and conventions. Every. Single. Time.

I built cck to solve this. Two modes:

*1. CLAUDE.md Generation*

```bash
git clone https://github.com/takawasi/claude-context-keeper && cd claude-context-keeper && pip install .
cck sync
```

Scans your codebase, generates CLAUDE.md. Claude reads it at session start.

*2. Per-Turn Context Injection (the real power)*

```bash
cck setup --cb-style
cck watch --with-history &
cck hook install --use-history
```

This tracks every file change in SQLite and injects recent changes on every turn:

```
[CCK] Recent changes:
15:23:45 ~ src/main.py
15:22:30 + src/utils/helper.py
15:20:12 ~ tests/test_main.py
```

Claude sees exactly what you just edited. No more "I just changed X" explanations.

Built from 300+ Claude Code sessions. Zero AI calls, pure static analysis.

GitHub: https://github.com/takawasi/claude-context-keeper

3

ADK-Studio – a visual builder for creating AI agent workflows with Rust #

1 comment · 6:08 AM · View on HN
Hi HN,

I’ve been working on ADK-Rust, an open-source framework for building and deploying AI agents in Rust.

The motivation came from building agent systems where performance, safety, and predictable behavior mattered more than rapid prototyping. Most agent frameworks and workflow tools today are Python- or JS-first and tend to be runtime-heavy when taken to production.

Recently, I added ADK-Studio — a visual, low-code environment for building AI agent workflows on top of ADK-Rust.

You can think of ADK-Studio as a Rust-native alternative to tools like n8n, but focused specifically on AI agents:

- Visual, drag-and-drop workflow design (sequential, parallel, loop, router agents)
- Tool integration (functions, MCP servers, browser automation, search)
- Real-time execution with SSE streaming and event traces
- Code generation: visual workflows compile down to production Rust code
- Build and run agents as native executables directly from the studio

The goal is to let people prototype agent workflows visually, then ship them as fast, memory-safe Rust binaries instead of long-running JS/Python services.

Making AI Agents with ADK Studio is super simple:

1. Install ADK-Studio: `cargo install adk-studio`
2. Start the ADK-Studio server: `adk-studio --port 6000`
3. Open it in your browser: http://localhost:6000

I would really appreciate feedback from folks building agent systems, workflow engines, or AI inference infrastructure — especially around design tradeoffs vs existing tools like n8n.

Project site: https://adk-rust.com
GitHub: https://github.com/zavora-ai/adk-rust

Best,
James

3

C/C++ source code graph RAG based on Clang/clangd #

github.com
1 comment · 4:39 AM · View on HN
Graph RAG for C/C++ Development

1. Overview

This project enables deep code analysis with Large Language Models. By constructing a Neo4j-based Graph RAG, it lets developers and AI agents perform complex, multi-layered queries on C/C++ codebases that traditional search tools simply can't handle. With only 4 MCP APIs and a vanilla agent, it can already accomplish a lot of tasks on real codebases.

2. How it works

Using clangd and clang, the system parses and indexes your source files to create a high-fidelity code graph. It captures everything from high-level folder structures to granular relationships: entities like Folders, Files, Namespaces, Classes/Structs, Variables, and Methods, and relationships like CALLS, INCLUDES, INHERITS, OVERRIDES, and more.
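
As an illustration of the kind of multi-layered query this enables, here is a sketch using the official Neo4j Python driver; the node labels and relationship types match the entities above, but property names like `name` are my assumption:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Who can reach this function? Transitive callers via CALLS edges,
# plus any overrides that could dispatch to it.
query = """
MATCH (caller:Method)-[:CALLS*1..3]->(target:Method {name: $name})
OPTIONAL MATCH (o:Method)-[:OVERRIDES]->(target)
RETURN DISTINCT caller.name AS caller, collect(o.name) AS overrides
"""

with driver.session() as session:
    for record in session.run(query, name="parse_packet"):
        print(record["caller"], record["overrides"])
```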

The system generates summaries and embeddings for every level of the codebase (from functions up to entire folders) using a bottom-up approach. This structured context helps AI agents understand the "big picture" without getting lost in the syntax.

To get you started easily, the project includes: an example MCP (Model Context Protocol) server, and a demonstration AI agent to showcase the graph’s power. You can easily build your own custom agents and servers on top of the graph RAG.

3. Efficiency & Performance

- Incremental updates: the system detects changes between commits and updates only what's necessary.
- Parallel processing: parsing and summary generation are distributed across worker processes with optimized data sharing.
- Smart caching: results are cached to minimize redundant computations, saving you both time and LLM costs.

4. A benchmark: The Linux Kernel

When building a code graph for the Linux kernel (WSL2 release) on a workstation (12 cores, 64GB RAM), it takes about 4 hours using 10 parallel worker processes, with peak memory usage at ~36GB. Note that this figure does not include summary generation; with summaries, the total time will vary based on your LLM provider.

3

I built my own Metronome Desktop App #

shredono.me
0 comments · 11:48 PM · View on HN
Title says it all. I kept searching for a metronome app that had everything I need as I learn to play faster on guitar and just couldn't find one, so I built my own instead. Sharing with everyone else in case they have a need for something like this :)

Has a few cool nifty features like:

* Keybinds for everything. I hate my mouse

* Progressive mode where the bpm increases every x bars by y bpm

* Speed burst training that switches between two speeds, either after a preset number of bars or when you press a key

3

Flipper Zero MCP – Control Your Flipper Using AI via USB or WiFi #

github.com
0 comments · 3:59 PM · View on HN
I built a modular MCP server that lets AI control a Flipper Zero.

The basic idea: you tell Claude "write a BadUSB script that opens a rickroll" and it generates the DuckyScript, validates it, saves it to your Flipper, and can execute it.

I've launched the project with 14 MCP tools across 4 modules:

1. BadUSB: generate/validate/save/diff/execute DuckyScript from natural language

2. Music: create and load FMF files to be played over the Flipper's piezo speaker ("make me the theme song to Castlevania")

3. System: device info, SD card status, connection health

4. Connection: health checks, reconnect

...the code is modular so you can create your own modules.

To me, the interesting technical bit is the WiFi support. Flipper's protobuf RPC is designed to work over USB serial. The stock WiFi dev board firmware is for debugging, not RPC.

I wrote custom ESP32-S2 firmware, a TCP <-> UART bridge that exposes the full RPC interface over your network. It includes a captive portal for WiFi config and handles Flipper's Expansion Protocol negotiation. Firmware is in the repo: /firmware/tcp_uart_bridge
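
To make the bridge idea concrete, here is a minimal Python sketch of the same forwarding loop using pyserial; the real firmware is C on the ESP32-S2, so this is an illustration of the concept, not the shipped code:

```python
import select
import socket
import serial  # pyserial

uart = serial.Serial("/dev/ttyUSB0", 115200, timeout=0)  # non-blocking reads
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 5000))
srv.listen(1)
conn, _ = srv.accept()

# Shuttle raw protobuf-RPC bytes between the TCP client and the UART.
while True:
    readable, _, _ = select.select([conn], [], [], 0.01)
    if readable:
        data = conn.recv(4096)
        if not data:  # client disconnected
            break
        uart.write(data)
    pending = uart.read(4096)  # returns immediately (timeout=0)
    if pending:
        conn.sendall(pending)
```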

Architecture:

- MCP client (Claude Desktop, Cursor, etc.) <-> MCP server (Python, stdio) <-> Flipper Zero (protobuf RPC over USB or TCP)

- Transport-agnostic: same protobuf either way

- Modular: easy to add new Flipper capabilities

This is (I believe) the first MCP server for Flipper Zero. There are MCP servers for ESP32s and Arduinos, but those control the microcontroller itself. This controls the Flipper as a tool.

I look forward to feedback, especially from any other Flipper users who get it running over WiFi!

2

Reveal – A stateless, zero-DB blur curtain for suspicious links #

app.iddqd.kr
0 comments · 7:38 AM · View on HN
IDDQD Internet builds zero-DB, zero-signup tools powered by pure HTML/JS for instant browser execution. Even with AI features, we keep it stateless and record-free.

Hi HN,

I built Reveal, a simple web-based tool to preview suspicious or potentially NSFW/jump-scare links safely. Instead of clicking blindly, you load the target URL behind a heavy blur curtain and use a slider to reveal the content at your own pace.

Key Technical Highlights:

Pure Client-Side: No database, no backend logging.

Stateless Architecture: The target URL is stored within the browser's address bar (URL parameters), making it easy to share without saving a single byte on our servers.

Vanilla Stack: Built with Vanilla JS, Bootstrap 5 (Glassmorphism), and Web Audio API for real-time feedback.

If you’re tired of "link-roulette" from friends or colleagues, this might be a useful addition to your toolkit. I'm an indie developer focused on "Vibe Coding" and keeping the web record-free.

App: https://app.iddqd.kr/reveal/
Source: https://github.com/iddqd-park

Park Sil-jang

Dev Team Lead at IDDQD Internet. E-solution & E-game Lead. Bushwhacking Code Shooter. Currently executing mandates as Choi’s Schemer.

HQ (EN): https://en.iddqd.kr/

GitHub: https://github.com/iddqd-park

2

Paper Tray – dramatically better file organization for Google Drive #

papertray.ai
0 comments · 1:33 PM · View on HN
Hi HN,

I'm a solo founder working on a project that uses AI to help with finding files in Google Drive.

The Problem: With Google Docs and Sheets, it's easy to make documents but very hard to find them again unless you organize them manually, which takes time.

This is a big problem for startups, who often use Google Docs for the convenience but struggle with information management. It leads to a lot of time wasted searching for documents and a lack of clarity.

The Solution: Paper Tray uses AI to organize Drive files automatically. It 'tags' each file so you can then use a filter interface to find them.

By default, it tags by the type of document (meeting notes, plan, pitch deck, etc.), the topic of the document, and the department it belongs to (product, engineering, sales, etc.).

The result is that it takes just a few seconds to find most of your documents, in an intuitive and satisfying way.

You can add your files easily with a Chrome Extension that adds a button to the header of Google Docs and Sheets. Click this button to add the file and manage the tags.

Business model: a 7-day free trial followed by monthly/annual subscription, currently priced at $12/mo, or $9/mo with an annual plan.

2

I built a standalone WASM playbook builder for sports coaches #

playmaker.click
0 comments · 10:23 AM · View on HN
Hello HN, I made a simple web app for drawing up plays for amateur sports coaches: https://playmaker.click/playbook

It's inspired by Excalidraw; I love how you can hit buttons and go into different modes so I tried to replicate that for desktop users. It should work nicely on tablets, phones not so much.

The main motivation was to learn some Rust and WebAssembly; the recent WASM 64-bit address space update is interesting to me. So I put together a little playbook engine that handles player movement, trail recording, and ball possession tracking. It runs the animation/interpolation stuff in the browser.

The rest of the stack is Laravel/Vue on the frontend, Tailwind for styling. Nothing too fancy. I wanted something where I could drag players around, record movements, and play them back like a little animation (double tap outside the field). Supports football (soccer), basketball, rugby, and hockey fields, reasonably badly drawn

There are rough edges for sure; the Rust code could probably be cleaner. I got a bit of help from Claude, but where applicable I dug down into the reasoning as to "why", so it was a good learning experience figuring out how to expose structs to JS via wasm-bindgen and learning about serde.

The playbook all runs locally in the browser, and saves the play config into the URL using LZ-String so it can be easily shared without blowing browser URL address bar limits, but I haven't tested silly-big plays on it yet.
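
To illustrate the URL-state trick, here is a sketch in Python using zlib + base64url in place of LZ-String; the state keys and URL are made up:

```python
import base64
import json
import zlib

def encode_state(play: dict) -> str:
    raw = json.dumps(play, separators=(",", ":")).encode()
    packed = base64.urlsafe_b64encode(zlib.compress(raw, 9)).decode()
    return "https://example.com/playbook#" + packed  # hypothetical URL

def decode_state(fragment: str) -> dict:
    return json.loads(zlib.decompress(base64.urlsafe_b64decode(fragment)))

url = encode_state({"sport": "soccer", "players": [[10, 20], [35, 40]]})
```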

Happy to answer any questions about the setup or my approach in general. Cheers! PG

1

Cover letter maker with Ollama/local LLMs (Open source) #

1 comment · 10:31 AM · View on HN
I made an open source web app that generates cover letters using local AI models (Ollama, LM Studio, vLLM, OpenRouter, etc.) so your CV and job application data never leaves your browser. No placeholders. No typing. Letters are ready to copy and paste. 100% local and private, depending on the LLM of your choice. Multi-language support (you can add more languages).

It connects to any OpenAI-compatible local LLM endpoint. I use it with Ollama + llama3.2, but it works with any server.

The generated letters are unique, since they're based on your own experience and skills from your resume, and they're written as if directly responding to that specific job posting.

I honestly don't feel bad about making or using this because, while actively applying for jobs, I've seen that a high percentage of recruiters now use AI to generate job descriptions and during the interview process.

I was tired of wasting time writing and personalising letters while applying for jobs. All the other tools I tried weren't as quick as I wanted because I still needed to modify the letters to replace placeholders.

I also didn't find any tool that lets me use my local LLM for free, and I didn't want to pay for ChatGPT/Claude API calls for every job application.

The output quality is good, and it can bypass some AI detectors.

It's open source too and free to use. You can self-host it or run it locally in development mode.

GitHub: https://github.com/stanleyume/coverlettermaker

Cheers :)

1

Habit Tracking as an RPG in Google Sheets #

befitting-iodine-673.notion.site
0 comments · 10:25 AM · View on HN
I built this for myself after failing to stick with most habit trackers.

Instead of streaks, I use RPG mechanics: daily quests, XP, gold, rewards, and penalties for missed habits. Everything runs inside Google Sheets using formulas and a small script.

I’ve been using it daily for over 8 months. Curious if others here have tried gamifying personal systems, or have ideas on how to improve this approach.

1

Omnvert Network Toolkits – Ping/MTR, DNS, Headers, TLS, RDAP, PCAP Flows→JSON #

omnvert.com
0 comments · 10:25 AM · View on HN
I built Omnvert as a fast, privacy-first toolbox for the checks I repeat during incidents and deployments: ping/MTR, DNS propagation (recursive vs authoritative), redirect chains + headers, TLS handshake details, RDAP/WHOIS, ASN/prefix, subnet math — plus a small PCAP toolkit to turn captures into scriptable output (JSON/NDJSON), top conversations, and flow exports.

1

Botchat – a privacy-preserving, multi-bot AI chat tool #

app.botchat.ca
0 comments · 4:54 PM · View on HN
This started as a hobby project, curious if others find it useful. Feedback welcome!

botchat is a privacy-preserving, multi-bot chat tool that lets you interact with multiple AI models simultaneously.

Give bots personas, so they look at your question from multiple angles. Leverage the strengths of different models in the same chat. And most importantly, protect your data.

botchat never stores your conversations or attachments on any servers and, if you are using our keys (the default experience), your data is never retained by the AI provider for model training.

1

Squirreling: a browser-native SQL engine #

blog.hyperparam.app
0 comments · 5:03 PM · View on HN
I made a small (~9 KB), open source SQL engine in JavaScript built for interactive data exploration. Squirreling is unique in that it’s built entirely with modern async JavaScript in mind and enables new kinds of interactivity by prioritizing streaming, late materialization, and async user-defined functions. No other database engine can do this in the browser.
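
To picture what async user-defined functions in a streaming engine look like, here is a toy analogue in Python; Squirreling itself is JavaScript, and nothing below is its actual API:

```python
import asyncio

async def fetch_label(user_id):
    """An async UDF: e.g. a per-row lookup that hits the network."""
    await asyncio.sleep(0.01)  # stand-in for an HTTP call
    return f"user-{user_id}"

async def rows():
    """A streaming source: rows arrive one at a time."""
    for user_id in (1, 2, 3):
        yield {"id": user_id}

async def select_with_udf():
    # Conceptually: SELECT id, fetch_label(id) FROM rows
    async for row in rows():
        row["label"] = await fetch_label(row["id"])
        print(row)  # emit each row as soon as it's ready (streaming)

asyncio.run(select_with_udf())
```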

More technical details in the post. Feedback welcome.

1

HyperCell – Open-source Excel calculation engine for Java #

github.com
0 comments · 10:27 AM · View on HN
Hi HN! I built HyperCell to solve a problem we kept hitting at our company (Scoop Analytics):

Business teams model everything in Excel – pricing logic, risk calculations, financial forecasts. Engineers then rewrite that logic in Java. Every time. This translation process causes bugs, delays, and drift between "what the business thinks the logic is" vs "what's actually running."

HyperCell takes a different approach: load the Excel file, compile the formulas into a DAG, and execute them in-memory. No translation. The spreadsheet IS the code.

Technical details:

- Parses Excel formulas using ANTLR4
- Builds a dependency graph for intelligent recalculation (sketched below)
- 200+ functions (SUM, VLOOKUP, INDEX/MATCH, IF, NPV, IRR, etc.)
- Cross-validated against Excel with 82,881 formulas at 100% accuracy
- ~50ms to load a typical workbook, sub-millisecond recalculation
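
The recalculation loop is easy to picture. Here is a minimal Python sketch of dirty-cell recalculation over a dependency DAG, with toy lambdas standing in for parsed Excel formulas; HyperCell itself is Java, so this is only an illustration:

```python
from graphlib import TopologicalSorter

# Cell -> formula over other cells; stand-ins for parsed Excel formulas.
formulas = {
    "B1": lambda v: v["A1"] * 2,
    "C1": lambda v: v["A1"] + v["B1"],
}
deps = {"B1": {"A1"}, "C1": {"A1", "B1"}}  # edges of the dependency DAG
values = {"A1": 10.0}

def recalc(changed):
    # Recompute only cells downstream of the changed inputs, in topo order.
    dirty = set(changed)
    for cell in TopologicalSorter(deps).static_order():
        if cell in formulas and (deps[cell] & dirty or cell in dirty):
            values[cell] = formulas[cell](values)
            dirty.add(cell)

values["A1"] = 21.0
recalc({"A1"})
print(values)  # {'A1': 21.0, 'B1': 42.0, 'C1': 63.0}
```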

Use cases we've seen:

- Insurance companies running rating models
- Fintech doing real-time pricing
- E-commerce with vendor-specific shipping rules

It's Apache 2.0 licensed. We extracted it from our production system where it's been running for 2+ years.

Happy to answer any questions about the architecture or implementation!

1

Endpoint State Policy – Policy as Data #

github.com
1 comment · 3:46 AM · View on HN
Endpoint State Policy (ESP) is a policy-as-data system that keeps policy intent separate from execution.

Policies define desired state and evidence as structured data, not scripts. They’re compiled into constrained contracts that execution engines must follow, producing attestations instead of free-form output.
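
To make "policy as data" concrete, here is a minimal Python sketch of the shape involved; the field names and evaluation logic are my invention, not ESP's actual schema:

```python
# A policy is data: desired state plus the evidence that proves it.
policy = {
    "id": "ssh-root-login-disabled",
    "desired_state": {"file": "/etc/ssh/sshd_config",
                      "setting": "PermitRootLogin", "value": "no"},
    "evidence": ["file_hash", "effective_config"],
}

def execute(policy, observed):
    """A constrained engine: it may only compare observed state to the
    declared desired state and emit a structured attestation."""
    want = policy["desired_state"]
    return {
        "policy_id": policy["id"],
        "compliant": observed.get(want["setting"]) == want["value"],
        "evidence": {k: observed.get(k) for k in policy["evidence"]},
    }

print(execute(policy, {"PermitRootLogin": "no", "file_hash": "ab12..."}))
```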

The contract model limits what execution can do, preventing policy logic from turning into ad-hoc tooling, while allowing the same policy to run across different environments and backends.

ESP focuses on portable intent, constrained execution, and verifiable outcomes — not embedding policy into tools.