Daily Show HN

Show HN for April 11, 2026

26 posts

501

Pardonned.com – A searchable database of US Pardons

275 comments · 6:19 AM · View on HN
https://pardonned.com

Inspired by the videos of Liz Oyer, I wanted to be able to verify her claims and just look up all the pardons more easily.

Tech stack:

- Playwright – to scrape the DOJ website
- SQLite – local database
- Astro 6 – to build a static website from the SQLite db

All code is open source and available on Github.

51

Hormuz Havoc, a satirical game that got overrun by AI bots in 24 hours

hormuz-havoc.com
16 comments · 10:58 AM · View on HN
I built a satirical browser game to share with friends (Hormuz Havoc: you play an American president managing a crisis in the Middle East, only "loosely" inspired by current events). I had good fun making this, but that's not necessarily the interesting part.

The interesting part was that within a few hours of sharing it with my friends, some of them set about trying to overrun the leaderboard by launching a swarm of AI bots to learn the game and figure out how to get the highest score. This set off a game of cat-and-mouse as they found vulnerabilities and I tried patching them.

Within hours of sharing, someone used the Claude browser extension to read game.js directly. Large parts of the scoring formula, action-effect values, and bonus thresholds were sitting in client-side JavaScript. A human could have found this trivially too, but a human would still have had to play the game; the bot just optimised directly against the scoring formula, and the first AI scored 2.5x higher than the best human player without really playing at all.

Straightforward fix: moved the entire game engine server-side. The client is now a dumb terminal: it sends an action ID and receives a rendered state. No scoring logic, no bonus thresholds, no action effects exist in the browser. The live score display uses a deliberately different formula as misdirection.

This made bot-enabled hacks harder to find, so subsequent bots resorted to brute-forcing the game, trying to game the RNG functions, and other methods.

But the next winning bot found a vulnerability where the same signed session token could be replayed. It would play turn N, observe a bad random event, replay the same token for turn N, get a different RNG outcome, keep the best one. Effectively branching from a single game state to cherry-pick lucky outcomes across 30 turns. Managed to 1.5x the previous bot's high score.

The bot's own description: "The key optimisation was token replay. Because the backend let the same signed state be replayed, I could branch from one exact game state repeatedly and continue from the luckiest high-value outcome each turn."

Fix here: consume a turn nonce atomically before any randomness is generated.
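The replay fix can be sketched as a unique-constraint insert: the nonce is consumed in one atomic step, and only a successful insert is allowed to proceed to the RNG. This is a hypothetical sketch (table, schema, and function names are mine, not the game's actual code):

```python
import sqlite3

# Hypothetical schema: one row per consumed (game, turn) pair.
SCHEMA = """
CREATE TABLE IF NOT EXISTS turn_nonces (
    game_id TEXT    NOT NULL,
    turn    INTEGER NOT NULL,
    UNIQUE (game_id, turn)
)
"""

def consume_turn_nonce(db: sqlite3.Connection, game_id: str, turn: int) -> bool:
    """Atomically consume the nonce for (game_id, turn).

    The first caller inserts the row and may proceed to generate
    randomness; a replayed token hits the UNIQUE constraint and is
    rejected before any RNG runs, so there is no outcome to cherry-pick.
    """
    try:
        with db:  # transaction: commit on success, roll back on error
            db.execute(
                "INSERT INTO turn_nonces (game_id, turn) VALUES (?, ?)",
                (game_id, turn),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # this turn was already played from this state
```

The key property is that the uniqueness check and the write happen in one statement, so two concurrent replays can't both pass.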

The current state: the leaderboard is now split into human and AI-assisted, and the capability of the bots seems to have flatlined for now. Perhaps Claude Mythos will be able to discover the next exploit ¯\_(ツ)_/¯

Happy to go deeper on any of the above - or just enjoy the game! Feel free to try your own AI-powered leaderboard attempt too!

40

Waffle – Native macOS terminal that auto-tiles sessions into a grid

waffle.baby
22 comments · 1:40 PM · View on HN
Hi HN. I built Waffle because I kept ending up with 15 terminal windows scattered across three spaces with no idea what was running where.

Splitting/merging in iTerm kind of works but it never felt intuitive to me.

With that in mind, I built something to suit my workflow:

Waffle is a native macOS terminal (built on Miguel de Icaza's SwiftTerm) that tiles your sessions into an auto-scaling grid: 1 session is fullscreen, 2 are side by side, 4 are 2x2, 9 are 3x3. Open a terminal and it joins the grid; close one and the grid rebalances. No splitting, no config.
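The tiling rule (1 fullscreen, 2 side by side, 4 as 2x2, 9 as 3x3) amounts to a near-square grid. A minimal sketch of that shape calculation, assuming a ceil-of-square-root layout (illustrative only, not Waffle's actual code):

```python
import math

def grid_shape(n: int) -> tuple[int, int]:
    """(rows, cols) for n sessions in a near-square grid.

    Columns are the ceiling of sqrt(n) and rows grow as needed, so
    1 -> 1x1 (fullscreen), 2 -> 1x2 (side by side), 4 -> 2x2, 9 -> 3x3.
    """
    if n <= 0:
        return (0, 0)
    cols = math.ceil(math.sqrt(n))
    rows = math.ceil(n / cols)
    return (rows, cols)
```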

I've been using it a lot recently, and one thing I've found really useful is that sessions detect which repo they're in and group accordingly. Each project gets a distinct colour, and Cmd+[ and Cmd+] flip between groups. If you have three repos open across eight terminals, you can filter to just one project's sessions instantly. There's also no accidentally closing a window with Cmd+W: it asks for confirmation and requires a second Cmd+W to close.

Honestly, if you live in tmux, this probably isn't for you but it's really helped to speed up my workflow.

Other things: It comes with a handful of themes (and has support for iTerm themes), bundled JetBrains mono, has keyboard shortcuts for everything. Free, no account, opt-in analytics only. macOS 14+.

There's a demo on the landing page if you want to see it in action.

37

I rebuilt a 2000s browser strategy game on Cloudflare's edge

kampfinsel.com
33 comments · 12:17 PM · View on HN
I grew up in Germany in the early 2000s playing a browser game called Inselkampf. You built up an island, mined gold and stone, cut down trees for wood, raised armies, sent fleets across an ocean grid, joined alliances and got betrayed by them. Same genre as OGame or Travian. It shut down in 2014 and I never found anything that replaced that feeling of checking in before school to see if your fleet had arrived and your alliance was still alive.

I finally built the version I wanted to play. Kampfinsel is live at kampfinsel.com right now with real players on it. It's not a straight copy of the old game. I gave it its own world. No magic, no gunpowder – just ballistas, fire pots, and slow ships crossing huge distances. Three resources: gold, stone, wood. Travel between islands takes hours, not seconds. It's slow on purpose.

The whole thing runs on Cloudflare's edge: Workers for the game logic and API, D1 for the database, KV for sessions and caching, R2 for assets, and Durable Objects for per-island state and the tick system (fleet arrivals, combat, resource generation). There's no origin server at all. Making a stateful multiplayer game work inside Workers' CPU limits and D1's consistency model meant some non-obvious choices: resources are calculated on-read from timestamps instead of being ticked into the database, fleet movements live in Durable Object alarms, and combat writes are batched.
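The resources-calculated-on-read pattern can be sketched like this. Names, the linear production rate, and the storage cap are illustrative assumptions, not Kampfinsel's actual code:

```python
def resources_on_read(last_amount: float, last_ts: float, now: float,
                      rate_per_hour: float, cap: float) -> float:
    """Current stock derived from the last stored (amount, timestamp).

    Nothing ticks the database on a timer; the balance is recomputed
    from elapsed time whenever it is read, then clamped to the cap.
    Timestamps are in seconds.
    """
    elapsed_hours = max(0.0, now - last_ts) / 3600.0
    return min(cap, last_amount + rate_per_hour * elapsed_hours)
```

The stored row only changes when something actually spends or deposits resources, which keeps writes (and Workers CPU time) to a minimum.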

The look is intentionally rough and text-heavy (Hi HN!): server-rendered HTML, tables, a parchment color palette, Unicode icons, no frontend framework, no build step. The only JavaScript is for countdown timers and auto-refresh. I wanted it to feel the way I remember these games looking, not how they actually looked. Honestly, it looks a lot like HN itself - tables, monospace, no chrome. If you like how this site looks, you'll probably feel at home.

No signup wall, no premium currency, no pay-to-win. Feedback very welcome, especially from anyone who played this kind of game back in the day or has opinions on running stateful stuff on Workers + D1 + Durable Objects. I'll be around for the next few hours.

8

Editing 2000 photos made me build a macOS bulk photo editor

apps.apple.com
20 comments · 7:41 PM · View on HN
Last year, I had 2000+ photos from my wedding to edit. The shots were great, but the lighting was different in every room. Some photos were too dark, and some were too yellow. I wanted all the wedding photos to have the same look before I shared them with my family.

I tried using Lightroom. I would copy the settings from one photo and paste them to the next, then adjust it, and repeat. This was very slow. If I used a simple batch edit on all photos, it looked bad because the lighting changed in every shot. After 40 minutes, I was not even halfway done. I had to choose between bad quality batch edits or fixing 2K photos one by one.

I also did not want to upload my private wedding photos to a website or pay for a monthly subscription.

I wanted a way to edit fast but still have control over each photo. I also wanted everything to stay private on my computer.

So I built a Mac app called RapidPhoto.

It lets you set the look once and apply it to the whole wedding set. The important part is that you can still quickly tweak individual photos that look a bit different without starting over. I also added a feature to change the metadata for many photos at once, which is helpful for organizing big events.

The work that took me 40 minutes now takes about 90 seconds. It runs locally on your Mac with no uploads and there is no subscription.

8

HyperFlow – A self-improving agent framework built on LangGraph

1 comment · 4:01 AM · View on HN
Hi HN, I am Umer. I recently built an experimental framework called HyperFlow to explore the idea of self-improving AI agents.

Usually, when an agent fails a task, we developers step in to manually tweak the prompt or adjust the code logic. I wanted to see if an agent could automate its own improvement loop.

Built on LangChain and LangGraph, HyperFlow uses two agents:

- A TaskAgent that solves the domain problem.
- A MetaAgent that acts as the improver.

The MetaAgent looks at the TaskAgent's evaluation logs, rewrites the underlying Python code, tools, and prompt files, and then tests the new version in an isolated sandbox (like Docker). Over several generations, it saves the versions that achieve the highest scores to an archive.
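The outer loop might look something like the following. This is a hypothetical sketch, not HyperFlow's actual API: `rewrite` stands in for the MetaAgent editing code/tools/prompts from evaluation logs, `sandbox_ok` for the isolated (e.g. Docker) test, and `evaluate` for the scoring run.

```python
def improvement_loop(agent, evaluate, rewrite, sandbox_ok, generations=5):
    """Generational self-improvement loop (illustrative sketch).

    Each generation the MetaAgent proposes a rewritten TaskAgent; the
    candidate must pass isolated sandbox tests before it is scored, and
    every scored version is kept in an archive. Only improvements
    become the new base for the next rewrite.
    """
    archive = []
    best, best_score = agent, evaluate(agent)
    archive.append((best, best_score))
    for _ in range(generations):
        candidate = rewrite(best, best_score)  # MetaAgent edits code/prompts
        if not sandbox_ok(candidate):          # reject versions failing the sandbox
            continue
        score = evaluate(candidate)
        archive.append((candidate, score))
        if score > best_score:
            best, best_score = candidate, score
    return best, archive
```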

It is highly experimental right now, but the architecture is heavily inspired by the recent HyperAgents paper (Meta Research, 2026).

I would love to hear your feedback on the architecture, your thoughts on self-referential agents, or answer any questions you might have!

Documentation: https://hyperflow.lablnet.com/
GitHub: https://github.com/lablnet/HyperFlow

7

I'm organizing a vibe coding game dev competition

vibej.am
0 comments · 12:47 PM · View on HN
Hi everyone,

I just saw a vibe coded game on HN, and thought maybe I should post about this here.

I'm organizing a vibe coding game dev competition called Vibe Jam.

We ran it last year too, and 1000+ games were submitted.

This year the deadline is May 1 and you can submit your games until then.

There's $35,000 in prizes with the Gold prize being $20,000.

Let me know what you think!

-Pieter

4

A living Vancouver. Connor is walking dogs at the SPCA this morning

brasilia-phi.vercel.app
1 comment · 6:42 PM · View on HN
I've spent most of my career in marketing, which for the last few years has meant building consumer personas for campaigns. I wanted to see if I could make these feel real: living in real neighborhoods, with real weather, real budgets, real Saturday lunches. I always wanted to build a world, not a segment.

This is that. 140 people so far, split across Vancouver (100), San Francisco (20), and Tokyo (20). Each one is about 1,000 lines of profile — family, finances, daily schedule, health, worldview, media diet, the channels you'd actually reach them through and the ones that will explicitly never work on them. Demographics are census-grounded: income, age, ethnicity, and household composition follow normal distributions fitted to StatsCan, ACS, and Japanese e-Stat data, so the panel is roughly representative of each city instead of representative of whatever's overrepresented in an LLM's training corpus. The specific details come from real stories.

They live in real local time on a live map. Right now it's Saturday 11:32 AM in Vancouver. Connor Hughes, a 31-year-old software developer at Clio in Gastown, is on his SPCA volunteer shift, he walks shelter dogs at the Boundary Road location every other Saturday morning. Hassan Khoury is in the morning lunch rush with Tony at his Lebanese café — it's his busiest day of the week. Ahmad Noori is pulling Saturday overtime on a construction site. Jordan Whitehorse is on mid-shift at East Cafe on Hastings.

No two days repeat. A 3 AM job fetches live data: weather from Open-Meteo, grocery CPI from StatsCan food vectors, Metro Vancouver transit delays from the Google Routes API against specific corridors, Vancouver gas prices, sunrise and sunset. Each persona has a modifier file that reacts to all of it. When Vancouver gas hits $1.85/L, Jaspreet the long-haul trucker's Coquihalla run to Calgary stops feeling worth it; his margins are thin, and his mood takes a hit. When food CPI spikes, Gurinder at the Amazon warehouse stops buying the $9 Subway and brings roti from home. A health flare rolls probabilistically each morning: maybe nothing, maybe Tanya's six-month-old had a rough night, maybe Frank's back is acting up. The days stack up and get remembered. Every persona has a journal: today's entry in a markdown file, a week of entries compressed into a "dream" of ~30 lines that keeps the shape without the texture, a month compressed into ~15 lines. It's their journal. I'm not writing it; the simulation is.
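A per-persona modifier reacting to the daily fetch might look like this. Field names, thresholds, and rules are illustrative assumptions, not the project's actual modifier format:

```python
def apply_modifiers(persona: dict, signals: dict) -> dict:
    """Apply today's live-data signals to one persona (hypothetical sketch).

    `signals` is the output of the daily fetch (e.g. gas price in CAD/L,
    month-over-month food CPI change); rules nudge mood and habits.
    """
    p = dict(persona)  # don't mutate the stored profile
    if p.get("job") == "long-haul trucker" and signals.get("gas_cad_per_l", 0.0) >= 1.85:
        p["mood"] = p.get("mood", 0) - 1  # thin margins: the run stops feeling worth it
    if p.get("budget") == "tight" and signals.get("food_cpi_mom", 0.0) > 0.02:
        p["lunch"] = "roti from home"     # skip the $9 Subway when food CPI spikes
    return p
```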

Click any persona to open their detail, or hit "Talk to [name]" to have a conversation; they run on Claude Haiku with their full profile and recent diary entries as context. Not a product, not a startup, just a thing I've been quietly working on. They feel, in a way I didn't expect, like my fully grown kids. Happy to answer questions.

1

Sätteri, high-performance Markdown pipeline for JavaScript

github.com
0 comments · 8:00 PM · View on HN
Hello!

I work at Astro (https://astro.build), where we do a lot of Markdown / MDX, and performance in this area is often a bottleneck. Our users rely on a lot of plugins to process their Markdown, so blindly moving everything to a native language would make things rigid. Inspired by the oxc and LightningCSS projects, I decided to start working on an "ideal" Markdown pipeline: the expensive stuff in Rust, with flexible plugins in JS, at little performance cost.

This project uses techniques like lazy deserialization, raw transfer, and arenas to keep performance high. A JS plugin that only visits, e.g., headings shouldn't pay the serialization cost of the entire Markdown tree. On the Rust side this is backed by pulldown-cmark, a really fast Markdown parser.

You can try it in the browser here: https://satteri.erika.florist, or follow the instructions in the README to try it out locally: https://github.com/bruits/satteri/blob/main/packages/satteri...

Hope you'll like it! Don't hesitate if you have any questions.

1

Feedstock – Web Crawler for TypeScript Built on Bun and Playwright

github.com
0 comments · 8:00 PM · View on HN
Web crawler for TypeScript. Runs on Bun, uses Playwright by default, also speaks CDP and Lightpanda.

The part I want feedback on is the CLI. It's built for LLMs, not humans. JSON output when piped, feedstock schema crawl dumps every parameter at runtime, and --fields url,markdown lets you pull just what you need so a crawl result doesn't eat your whole context window. Other bits worth a look:

Fetch-first engine. Tries plain HTTP before booting a browser, escalates only if the page needs JS.

Deep crawl with BFS, DFS, a UCB1 bandit, and a Q-learning focused crawler. The learning ones seem to help on big docs sites but I haven't measured it carefully yet.

Accessibility tree snapshots instead of HTML. 3 to 10x smaller, easier to feed a model.

Cache uses bun:sqlite with ETag, Last-Modified, and content hashing.

v0.5.0, Apache 2.0, 325 tests. Just pushed it so the star count is what it is.

https://github.com/tylergibbs1/feedstock