Daily Show HN


Show HN for March 10, 2026

45 items
113 points

What was the world listening to? Music charts, 20 countries (1940–2025) #

88mph.fm
51 comments · 4:18 PM
I built this because I wanted to know what people in Japan were listening to the year I was born. That question spiraled: how does a hit in Rome compare to what was charting in Lagos the same year? How did sonic flavors propagate as streaming made musical influence travel faster than ever? 88mph is a playable map of music history: 230 charts across 20 countries, spanning 8 decades (1940–2025). Every song is playable via YouTube or Spotify. It's open source and I'd love help expanding it — there's a link to contribute charts for new countries and years. The goal is to crowdsource a complete sonic atlas of the world.
89 points

RunAnywhere – Faster AI Inference on Apple Silicon #

github.com
23 comments · 5:14 PM
Hi HN, we're Sanchit and Shubham (YC W26). We built MetalRT, a fast inference engine for Apple Silicon. Across LLMs, speech-to-text, and text-to-speech, it beats llama.cpp, Apple's MLX, Ollama, and sherpa-onnx on every modality we tested. Custom Metal shaders, no framework overhead.

Also, we've open-sourced RCLI, the fastest end-to-end voice AI pipeline on Apple Silicon. Mic to spoken response, entirely on-device. No cloud, no API keys.

To get started:

  brew tap RunanywhereAI/rcli https://github.com/RunanywhereAI/RCLI.git
  brew install rcli
  rcli setup   # downloads ~1 GB of models
  rcli         # interactive mode with push-to-talk
Or:

  curl -fsSL https://raw.githubusercontent.com/RunanywhereAI/RCLI/main/install.sh | bash
The numbers (M4 Max, 64 GB, reproducible via `rcli bench`):

LLM decode – 1.67x faster than llama.cpp, 1.19x faster than Apple MLX (same model files):

- Qwen3-0.6B: 658 tok/s (vs mlx-lm 552, llama.cpp 295)
- Qwen3-4B: 186 tok/s (vs mlx-lm 170, llama.cpp 87)
- LFM2.5-1.2B: 570 tok/s (vs mlx-lm 509, llama.cpp 372)
- Time-to-first-token: 6.6 ms

STT – 70 seconds of audio transcribed in *101 ms*. That's 714x real-time. 4.6x faster than mlx-whisper.

TTS – 178 ms synthesis. 2.8x faster than mlx-audio and sherpa-onnx.

We built this because demoing on-device AI is easy but shipping it is brutal. Voice is the hardest test: you're chaining STT, LLM, and TTS sequentially, and if any stage is slow, the user feels it. Most teams fall back to cloud APIs not because local models are bad, but because local inference infrastructure is.

The thing that's hard to solve is latency compounding. In a voice pipeline, you're stacking three models in sequence. If each adds 200ms, you're at 600ms before the user hears a word, and that feels broken. You can't optimize one stage and call it done. Every stage needs to be fast, on one device, with no network round-trip to hide behind.
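The arithmetic is easy to sketch (the 200 ms figures are the hypothetical ones from the paragraph above, not measurements):

```python
def pipeline_latency_ms(stages):
    """Sequential stages add latency; there is no overlap to hide behind."""
    return sum(stages)

baseline = pipeline_latency_ms([200, 200, 200])  # 600 ms: feels broken
one_fast = pipeline_latency_ms([50, 200, 200])   # 450 ms: one fast stage isn't enough
all_fast = pipeline_latency_ms([50, 50, 50])     # 150 ms: every stage must be fast
```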

We went straight to Metal. Custom GPU compute shaders, all memory pre-allocated at init (zero allocations during inference), and one unified engine for all three modalities instead of stitching separate runtimes together.

MetalRT is the first engine to handle all three modalities natively on Apple Silicon. Full methodology:

LLM benchmarks: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...

Speech benchmarks: https://www.runanywhere.ai/blog/metalrt-speech-fastest-stt-t...

How: Most inference engines add layers between you and the GPU: graph schedulers, runtime dispatchers, memory managers. MetalRT skips all of it. Custom Metal compute shaders for quantized matmul, attention, and activation - compiled ahead of time, dispatched directly.

Voice pipeline optimization details: https://www.runanywhere.ai/blog/fastvoice-on-device-voice-ai...

RAG optimizations: https://www.runanywhere.ai/blog/fastvoice-rag-on-device-retr...

RCLI is the open-source voice pipeline (MIT) built on MetalRT: three concurrent threads with lock-free ring buffers, double-buffered TTS, 38 macOS actions by voice, local RAG (~4 ms over 5K+ chunks), 20 hot-swappable models, and a full-screen TUI with per-op latency readouts. Falls back to llama.cpp when MetalRT isn't installed.

Source: https://github.com/RunanywhereAI/RCLI (MIT)

Demo: https://www.youtube.com/watch?v=eTYwkgNoaKg

What would you build if on-device AI were genuinely as fast as cloud?

71 points

DD Photos – open-source photo album site generator (Go and SvelteKit) #

github.com
22 comments · 1:13 PM
I was frustrated with photo sharing sites. Apple's iCloud shared albums take 20+ seconds to load, and everything else comes with ads, cumbersome UIs, or social media distractions. I just want to share photos with friends and family: fast, mobile-friendly, distraction-free.

So I built DD Photos. You export photos from whatever you already use (Lightroom, Apple Photos, etc.) into folders, run `photogen` (a Go CLI) to resize them to WebP and generate JSON indexes, then deploy the SvelteKit static site anywhere that serves files. Apache, S3, whatever. No server-side code, no database.
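For illustration, here is roughly the kind of per-album JSON index such a CLI can emit; the field names are my guess, not the actual ddphotos schema:

```python
from pathlib import Path

def build_index(album_dir: str) -> dict:
    """Scan one album folder and build the index a static frontend can fetch."""
    exts = {".jpg", ".jpeg", ".png", ".webp"}
    photos = sorted(p.name for p in Path(album_dir).iterdir()
                    if p.suffix.lower() in exts)
    return {"album": Path(album_dir).name, "count": len(photos), "photos": photos}
```

Because the frontend fetches a static JSON file like this instead of querying anything, no server-side code or database is needed.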

Built over several weeks with heavy use of Claude Code, which I found genuinely useful for this kind of full-stack project spanning Go, SvelteKit/TypeScript, Apache config, Docker, and Playwright tests. Happy to discuss that experience too.

Live example: https://photos.donohoe.info
Repo: https://github.com/dougdonohoe/ddphotos

67 points

I Was Here – Draw on street view, others can find your drawings #

washere.live
52 comments · 5:04 AM
Hey HN, I made a site where you can draw on street-level panoramas. Your drawings persist and other people can see them in real time.

Strokes get projected onto the 3D panorama so they wrap around buildings and follow the geometry, not just a flat overlay. Uses WebGL2 for rendering, Mapillary for the street imagery.
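One common piece of this technique is mapping a 3D view direction to equirectangular texture coordinates so a stroke stays attached to the imagery rather than the screen. A generic sketch, not the site's actual WebGL code, assuming a y-up, -z-forward convention:

```python
import math

def direction_to_equirect_uv(x: float, y: float, z: float):
    """Map a unit view direction (y-up, -z forward) to (u, v) in [0, 1]^2
    on an equirectangular panorama."""
    lon = math.atan2(x, -z)                   # longitude around the vertical axis
    lat = math.asin(max(-1.0, min(1.0, y)))   # latitude, clamped for safety
    u = lon / (2 * math.pi) + 0.5
    v = 0.5 - lat / math.pi
    return u, v
```

Storing strokes in panorama space like this (or as points on the scene geometry) is what makes them persist and wrap around buildings instead of floating as a flat overlay.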

The idea is for it to become a global canvas: anyone can leave a mark anywhere, and others stumble onto it.

29 points

Web-Based ANSI Art Viewer #

sure.is
7 comments · 8:40 AM
My love letter to ANSI art. Full width rendering, scrolling by baud rate, text is selectable, and more.

There are 3 examples at the top if you're feeling lucky.

28 points

A usage circuit breaker for Cloudflare Workers #

8 comments · 1:09 PM
I run 3mins.news (https://3mins.news), an AI news aggregator built entirely on Cloudflare Workers. The backend has 10+ cron triggers running every few minutes — RSS fetching, article clustering, LLM calls, email delivery.

The problem: Workers Paid Plan has hard monthly limits (10M requests, 1M KV writes, 1M queue ops, etc.). There's no built-in "pause when you hit the limit" — CF just starts billing overages. KV writes cost $5/M over the cap, so a retry loop bug can get expensive fast.

AWS has Budget Alerts, but those are passive notifications — by the time you read the email, the damage is done. I wanted active, application-level self-protection.

So I built a circuit breaker that faces inward — instead of protecting against downstream failures (the Hystrix pattern), it monitors my own resource consumption and gracefully degrades before hitting the ceiling.

Key design decisions:

- Per-resource thresholds: Workers Requests ($0.30/M overage) only warns at 80%. KV Writes ($5/M overage) can trip the breaker at 90%. Not all resources are equally dangerous, so some are configured as warn-only (trip=null).

- Hysteresis: Trips at 90%, recovers at 85%. The 5% gap prevents oscillation — without it the system flaps between tripped and recovered every check cycle.

- Fail-safe on monitoring failure: If the CF usage API is down, maintain last known state rather than assuming "everything is fine." A monitoring outage shouldn't mask a usage spike.

- Alert dedup: Per-resource, per-month. Without it you'd get ~8,600 identical emails for the rest of the month once a resource hits 80%.

Implementation: every 5 minutes, queries CF's GraphQL API (requests, CPU, KV, queues) + Observability Telemetry API (logs/traces) in parallel, evaluates 8 resource dimensions, caches state to KV. Between checks it's a single KV read — essentially free.

When tripped, all scheduled tasks are skipped. The cron trigger still fires (you can't stop that), but the first thing it does is check the breaker and bail out if tripped.
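A minimal sketch of the per-resource warn/trip/recover logic described above; the names and signature are mine, not the linked implementation's:

```python
def evaluate(used, limit, tripped,
             warn_at=0.80, trip_at=0.90, recover_at=0.85):
    """Evaluate one resource dimension; returns (tripped, status).

    Pass trip_at=None for warn-only resources like cheap Workers requests.
    """
    ratio = used / limit
    if tripped:
        if ratio >= recover_at:   # hysteresis: stay tripped until below recover_at
            return True, "tripped"
        tripped = False           # fell below recover_at: close the breaker
    if trip_at is not None and ratio >= trip_at:
        return True, "tripped"
    if ratio >= warn_at:
        return False, "warn"
    return False, "ok"
```

The gap between trip_at and recover_at is what prevents the flapping between states on every check cycle.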

It's been running in production for two weeks. Caught a KV reads spike at 82% early in the month — got one warning email, investigated, fixed the root cause, never hit the trip threshold.

The pattern should apply to any metered serverless platform (Lambda, Vercel, Supabase) or any API with budget ceilings (OpenAI, Twilio). The core idea: treat your own resource budget as a health signal, just like you'd treat a downstream service's error rate.

Happy to share code details if there's interest.

Full writeup with implementation code and tests: https://yingjiezhao.com/en/articles/Usage-Circuit-Breaker-for-Cloudflare-Workers

16 points

Ash, an Agent Sandbox for Mac #

ashell.dev
16 comments · 3:19 PM
Ash is a macOS sandbox that restricts AI coding agents. It limits access to files, networks, processes, IO devices, and environment variables. You can use Ash with any CLI coding agent by wrapping it in a single command: `ash run -- <agent>`. I typically use it with Claude to stay safe while avoiding repetitive prompts: `ash run -- claude --dangerously-skip-permissions`.

Ash restricts resources via the Endpoint Security and Network Extension frameworks. These frameworks are significantly more powerful than the sandbox-exec tool.

Each session is driven by a policy file. Any out-of-policy action is denied by default. You can audit denials in the GUI app, which lets you view out-of-policy actions and retroactively add them to your policy file.

Ash also comes with tools for building policies. You can use an "observation session" to watch the typical behavior of a coding agent and capture that behavior in a policy file for future sandbox sessions. Linting, formatting, and rule merging are all built into the Ash CLI to keep your policy files concise and maintainable.

Download Ash at https://ashell.dev

15 points

Modulus – Cross-repository knowledge orchestration for coding agents #

modulus.so
4 comments · 6:52 PM
Hello HN, we're Jeet and Husain from Modulus (https://modulus.so) - a desktop app that lets you run multiple coding agents with shared project memory. We built it to solve two problems we kept running into:

- Cross-repo context is broken. When working across multiple repositories, agents don't understand dependencies between them. Even if we open two repos in separate Cursor windows, we still have to manually explain the backend API schema while making changes in the frontend repo.

- Agents lose context. Switching between coding agents often means losing context and repeating the same instructions again.

Modulus shares memory across agents and repositories so they can understand your entire system.

It's an alternative to tools like Conductor for orchestrating AI coding agents to build product, but we focused specifically on multi-repo workflows (e.g., backend repo + client repo + shared library repo + AI agents repo). We built our own Memory and Context Engine from the ground up specifically for coding agents.

Why build another agent orchestration tool? It came from our own problem. While working on our last startup, Husain and I were working across two different repositories. Working across repos meant manually pasting API schemas between Cursor windows — telling the frontend agent what the backend API looked like again and again. So we built a small context engine to share knowledge across repos and hooked it up to Cursor via MCP. This later became Modulus.

Soon, Modulus will allow teams to share knowledge with others to improve their workflows with AI coding agents - enabling team collaboration in the era of AI coding. Our API will allow developers to switch between coding agents or IDEs without losing any context.

If you want to see a quick demo before trying it out, here is our launch post: https://x.com/subhajitsh/status/2024202076293841208

We'd greatly appreciate any feedback you have and hope you get the chance to try out Modulus.

15 points

Svglib, an SVG parser and renderer for Windows #

github.com
1 comment · 3:04 PM
svglib is an SVG file parser and renderer library for Windows. It uses Direct2D for GPU-assisted rendering and XMLLite for XML parsing.

This is meant for Win32 applications and games to easily display SVG images.

10 points

2D RPG base game client recreated in modern HTML5 game engine with AI #

github.com
5 comments · 8:09 PM
When I was much younger, I used to play a Korean MMORPG called Helbreath, and I also hosted a bunch of private servers for it. I eventually moved on, but I always loved the game’s aesthetics, its 2D nature, and its atmosphere. That may just be nostalgia talking.

The community-maintained private server and client, which to my knowledge were based on leaked official files, were written in fairly archaic C++. If you're interested in the original sources, I've included the main client and server files, Client.cpp and Server.cpp, in the reference folder. I always felt that if the project were rewritten in something more modern and better structured, a lot more could be done with it. But rewriting an MMORPG client and server from scratch is not exactly the kind of thing you do on a whim. That said, there was a guy who got pretty far with a C# rewrite and an XNA-based client, though that project is now also discontinued.

Now that AI has become quite capable, I decided to see how far I could get by hooking up the original assets in a modern HTML5 game engine. I wanted HTML5 because I figured a nearly 30-year-old 2D game should run just fine in a browser. I ended up choosing Phaser 3 for a few reasons: mainly, it's 2D-only, free, HTML5-first (JS/TS), and code-first, which mattered because I wanted good Cursor integration for AI assistance. Another thing I liked was its integration with React, which let me build the UI using browser technologies and render it at native resolution on top of the WebGL canvas, rather than building the UI inside the game engine itself, which runs at 1024x576. The original game ran at 640x480.

After about 1.5 months of talking to AI on evenings and weekends, and roughly $200 worth of Cursor usage later, I finished hooking up the original assets in a modern game engine that seems to run just fine in a browser.

By "base game client", I mean that it's not fully hooked up in terms of how the full (MMO)RPG should function, but it does include all the original assets and core mechanics needed to provide a solid foundation if you want to build your own 2D (MMO)RPG on top of it. Continuing to build with AI should also work just fine, since this is how I managed to get that far. The asset library is quite rich, if you ask me, but there is one caveat: these assets are not in the public domain. They are still the property of someone, or some entity, that inherited the IP from the original developer, which is no longer in business. You can read more about that on the GitHub page.

8 points

Local-first firmware analyzer using WebAssembly #

xray.boldwark.com
0 comments · 1:13 PM
Hi HN,

I just wanted to share what I've been working on for the past few months: a firmware analyzer for embedded Linux systems that helps uncover security issues, running entirely in the browser.

This is a very early alpha. It's going to be rough around the edges. But I think it provides quite a lot of value already.

So please go ahead, drop a firmware image (only .tar rootfs archives for now), and try to break it :)

6 points

A retention mechanic for learning that isn't Duolingo manipulation? #

dailylabs.co
4 comments · 12:26 AM
I've spent the last few years shipping learning products at scale - Andrew Ng's AI upskilling platform, my MIT Media Lab spinoff focused on AI coaching. The retention problem was the same everywhere: people would engage with content once and not return. Not because the content was bad, but because there was no mechanism or motivation to make it a habit.

The standard industry answer is gamification - streaks, points, badges. Duolingo has shown this works for language, but I'm skeptical it generalizes. Duolingo's retention is built on a very specific anxiety loop that feels increasingly manipulative and doesn't translate well to topics like astrophysics or reading dense research papers.

I've been building Daily - 5 min/day structured social learning on any topic, personalized by knowledge level. Early and small (20 users). The interesting design question I keep running into: what actually drives someone to return to learn something they want to learn but don't need to learn? No external accountability, no credential at the end, no job pressure. Pure intrinsic motivation is notoriously hard to sustain.

My current hypothesis: the return trigger isn't gamification, it's social - knowing someone else is learning the same thing, or that someone will notice if you stop. Testing this in month 1.

Has anyone built in this space or thought carefully about the retention mechanic for purely intrinsic learning? Curious what the HN crowd has seen work.

5 points

Agentic Data Analysis with Claude Code #

rubenflamshepherd.com
0 comments · 4:44 PM
Hey HN, as a former data analyst, I’ve been tooling around trying to get agents to do my old job. The result is this system that gets you maybe 80% of the way there. I think this is a good data point for what the current frontier models are capable of and where they are still lacking (in this case — hypothesis generation and general data intuition).

Some initial learnings:

- Generating web app-based reports goes much better if there are explicit templates/pre-defined components for the model to use.

- Claude can "heal" broken charts if you give it access to chart images and run a separate QA loop.

Would love either feedback from the community or to hear from others who have tried similar things!

4 points

Silly Faces #

juliushuijnk.nl
15 comments · 2:14 PM
Tap on the canvas to create silly faces.

Discover new brushes by combining them on the canvas.

This used to be a React Native app from the pre-AI ages. Recently ported with AI to be just a WordPress plugin.

4 points

AI agent that runs real browser workflows #

ghostd.io
7 comments · 11:59 AM
I’ve been experimenting with letting an AI agent execute full workflows in a browser.

In this demo I gave it my CV and asked it to find matching jobs. It scans my inbox, opens the listings, extracts the details and builds a Google Sheet automatically.

4 points

Draxl, agent-native source code with stable AST node IDs #

github.com
0 comments · 9:17 PM
Hello,

I’m building Draxl, a source format for a world where code is edited by millions of agents.

AI agents will produce far more code than humans do today. Rebased branches, concurrent edits, and long-lived forks will become much more common. Code management needs more precise control at that scale.

Draxl embeds stable AST node IDs directly in the source, so tools can target syntax by identity instead of by line position. Here's a small example:

  @m1 mod demo {
    @d1 /// Add one to x.
    @f1[a] fn add_one(@p1[a] x: @t1 i64) -> @t2 i64 {
      @c1 // Cache the intermediate value.
      @s1[a] let @p2 y = @e1 (@e2 x + @l1 1);
      @s2[b] @e3 y
    }
  }
The syntax is:

  @id[rank]->anchor
* `@id` gives the next node a stable identity

* `[rank]` orders siblings inside ranked slots

* `->anchor` attaches detached docs or comments to an existing sibling id

The same code lowers to ordinary Rust:

  mod demo {
      /// Add one to x.
      fn add_one(x: i64) -> i64 {
          // Cache the intermediate value.
          let y = (x + 1);
          y
      }
  }
In Draxl, functions, statements, expressions, docs, and comments can all carry stable IDs. Ordered siblings carry explicit ranks. Detached docs and comments can carry explicit anchors.

That lets a tool say "replace expression `@e3`" or "insert a statement into `@f1.body[ah]`" instead of "change these lines near here."
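A toy illustration of the difference, not Draxl's actual API: edits address nodes by stable identity, so they survive the line shuffles that break line-based patches.

```python
# Nodes keyed by stable ID; an edit names the ID, not a line number,
# so it still applies after unrelated lines move around.
nodes = {"e3": "y", "s1": "let y = (x + 1);"}

def replace_node(nodes, node_id, new_src):
    """Replace a node's source by its stable ID."""
    if node_id not in nodes:
        raise KeyError(f"unknown node {node_id!r}")
    nodes[node_id] = new_src
```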

That should make semantic replay more reliable under heavy concurrent editing. It should also reduce false merge conflicts and localize real ones more precisely.

It also opens up other uses. You could attach ownership, policy, or review metadata directly to AST nodes.

I’m interested in early feedback: whether this source model feels useful, and whether editing code this way seems like a better fit for agent-heavy workflows. Where are the best places on the internet to discuss this sort of thing?

Connect with me: https://x.com/rndhouse

3 points

Hotwire Club – A Learning Community for Hotwire (Turbo/Stimulus/Rails) #

hotwire.club
0 comments · 8:52 AM
Hotwire Club publishes free technical tutorials on Hotwire, each linking to a solution on Patreon (around 2/3 are free; the rest are available on a paid plan at $5 per month). We just open-sourced our tooling stack.

Recent articles include:

- Turbo Frames - Using External Forms
- Turbo Frames - Loading Spinner
- Faceted Search with Stimulus and Turbo Frames

3 points

Don't share code. Share the prompt #

openprompthub.com
1 comment · 6:59 PM
Hey HN, I'm Mario. I recently talked to a colleague about AI, agents, and how software development will change in the future. We wondered why we should even share code anymore when AI agents are already really good at implementing software from prompts alone. Why can't everyone get customized software through prompts?

"Share the prompt, not the code."

Well, I thought, great idea, let's do that. That's why I built Open Prompt Hub: https://openprompthub.io.

Think GitHub just for prompts.

The idea is simple: users can upload prompts that you and your AI tools can then use to generate a script, app, or web service (or to prime an agent for a certain task). Just paste one into your agent or IDE and watch it build for you. If a prompt doesn't fully cover your use case, fork it, tweak it, et voilà: tailor-made software, ready to use!

The prompts are simple Markdown files with a frontmatter block for meta information (the spec can be found here: https://openprompthub.io/docs). They're versioned, carry information on which AI models have built them successfully, and include instructions on how the AI agent can test the resulting software.

Users can note which models they have successfully or unsuccessfully executed a prompt with (builds or fails). This helps in assessing whether a prompt produces reliable output.

Want to create an open prompt file? Here is a prompt that will guide you through it: https://openprompthub.io/open-prompt-hub/create-open-prompt

Security is always a topic when dealing with AI and prompts. I've added several checks that examine every prompt for injections and malicious behavior: statistical analysis, plus two LLM-based checks for behavior classification and prompt-injection detection.

It's an MVP for now. But all the mentioned features are already included.

If this sounds good, let me know. Try a prompt, fork it, or tell me what you'd change in the spec or security scanner. I'm really curious what would make you trust and reuse prompts, or whether you like the general idea at all.

3 points

An on-device Mac app for real-time posture reminders #

apps.apple.com
0 comments · 8:17 PM
Built this because I wanted a simple posture app I could leave running all day without extra friction. It runs locally on Mac, watches for posture drift in real time, and nudges you when you start slouching.

I’d especially love feedback on accuracy, privacy expectations, and whether the reminders feel useful in practice.

Cheers!

2 points

A mission-based game to help students apply math in real life #

owsterlabs.com
0 comments · 10:23 AM
I was the weakest at maths and still get nightmares about it. After years of trying, shipping, and playing, I realised that rather than turning each concept into a module, it's much better to provide a place where students can apply their learning and see the outcome.

With this, I present Owster Labs. This is a game-based learning module where you go through missions and find solutions. Unlike normal games, this is a very watered-down version of CFD. You have a UAV to work on: understand the situation, prepare your UAV to fly below mountains and evade enemy aircraft. Do your calculations, find the best path, and learn the art of trade-offs and how maths is used in real life.

The system will play the outcome based on your solution and provide feedback. The whole mechanism is focused on making it look great and less fearful for kids.

Kindly try it on your PC and let me know what you think.

2 points

Get AI to write code that it can read #

github.com
0 comments · 2:17 PM
Not a new concept, but here's something Ana and I made for our cross-system, cross-agent management tool: our own proprietary web design DSL, heh.

Like any codebase, this will change over time. If you have tried something similar, your ideas and PRs are welcome. Let's build some cool shit!

2 points

Find Engineering Manager Jobs Efficiently #

rolebeaver.com
0 comments · 2:34 PM
I built a site to find engineering manager jobs and make the manager job search less painful. This was created out of my own job search last year. I was using various job aggregator sites and spreadsheets and wanted a more efficient method.

The main value of the site: high-quality job listings and data, updated daily. It saves countless hours of sifting through irrelevant or stale job listings.

Give it a try and feel free to share any feedback.

2 points

KaraMagic – automatic karaoke video maker #

karamagic.com
0 comments · 7:58 PM
Hi all, this is an early version of a side project of mine. Would love some feedback and comments.

I like karaoke and I grew up with the Asian style karaoke with the music video behind and the karaoke lyrics at the bottom.

Sometimes I want to do a song and there is no karaoke version video like that.

A few years ago I came across ML models that cleanly separate the vocals and the instrumental music of a song. I thought of the idea to chain together ML models that can take an input music video file, extract the audio (ffmpeg), separate the tracks (ML), transcribe the lyrics (ML), burn the lyrics back with timing into the video (ffmpeg), and output a karaoke version of the video.

This is an early version of the app, Mac only so far (I use a Mac, despite it being an Electron app; I do eventually want to make a Windows build), and I've only let a few friends try it. Let me know what you think!

2 points

Draft2final – CLI converts Markdown into manuscript and screenplay PDFs #

draft2final.app
0 comments · 9:13 AM
I got tired of the last-mile problem in writing: I write in Markdown, but submitting a novel manuscript or screenplay means either wrestling with Word/InDesign, installing a multi-GB LaTeX stack, or accepting output that looks like a printed webpage — and gets rejected.

So I built a small CLI that knows what specific publication formats are supposed to look like and just renders them correctly.

  draft2final story.md --as manuscript
  draft2final script.md --as screenplay
The manuscript format handles the actual spec: Courier/TNR at 12pt, 1-inch margins, running headers with word count, ~250 words/page, proper title page, widow/orphan control. The screenplay format handles scene headings, action blocks, character cues, parentheticals, MORE/CONT'D, and dual-dialogue — using blockquote syntax so the source file stays clean Markdown.

Some implementation details that might interest people here:

The binary is 4MB. I wanted it installable in one command with no system dependencies — no LaTeX, no Pandoc, no headless Chrome. The font situation was the hardest part: proper manuscript formatting needs specific fonts, and some formats need CJK support, which means potentially large font files. I solved this with JIT font downloading — the CLI ships tiny and only pulls what it needs at render time.

A 30-page document renders in under 0.5s.

It handles right-to-left scripts and mixed Arabic/Latin/CJK text, which most converters either break or require manual configuration for.

It's open source. Each format is a self-contained TypeScript module, so adding new input or output formats is relatively straightforward — I'd like to eventually support APA, Chicago, and stage play formats, and contributions are welcome. That said, I'd especially like feedback from anyone who knows the manuscript or screenplay specs well — I'm sure there are edge cases I haven't hit yet.

Docs + syntax guide: https://draft2final.app/guide

npm install -g draft2final

2 points

I wrote an application to help me practice speaking slower #

steady.cates.fm
0 comments · 7:55 AM
I’ve spoken fast and a bit unclearly my entire life. It’s one of those small curses you’re probably just born with. Speech therapy didn’t help and practicing on my own never really stuck. The worst part is worrying people won’t understand me, especially when presenting something important. I often wear a headset when presenting that plays my voice back to me in real time so I can hear myself speak and try to slow down. It hasn’t stopped me from doing things. I have a great career and good friends. But I still end up repeating myself a lot.

I’ve never found a system that fits into a normal schedule with work, kids, and everything else. So yesterday I built a small tool to help me practice pacing my speech in short sessions whenever I have a few minutes. It gives you paced stories to read, with tongue twisters mixed in, plus a free practice mode where it calculates how fast you were speaking at the end.
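The free-practice readout is just words over elapsed minutes; a trivial sketch of the calculation (my own reconstruction, not the app's code):

```python
def words_per_minute(word_count, seconds):
    """How fast you were speaking: words over elapsed minutes."""
    return word_count * 60.0 / seconds

# e.g. 300 words in 90 seconds is 200 WPM, well above a typical
# conversational pace of roughly 140-160 WPM.
```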

It runs fully client-side, uses Google's speech API, and is open source: https://github.com/interactivecats/steady. Note: I just noticed a bug on mobile where it double-counts; I'm working on a fix (on desktop, where I use it, it works fine).

2 points

Rails Blocks update (ViewComponents are finally available) #

railsblocks.com
0 comments · 4:45 PM
Hi, I run Rails Blocks, a UI component library for Rails apps that I started last summer.

Over the last few months, I reworked the docs for all 52 component sets to support 3 formats:

- ViewComponents

- Shared partials

- Markdown docs

I wanted the docs to be more useful for teams & save you time.

You can check out the improvements here: https://railsblocks.com

Next up, I’ll be adding a few tools to save even more time when coding using LLMs:

- CLI tooling

- An MCP server

- AI Skills

2 points

VR.dev – Open-source verifiers for what AI agents did #

vr.dev
0 comments · 1:21 PM
Hey HN,

Quick origin story: vr.dev started as a virtual reality project. The domain fit perfectly. The developer adoption did not. Rather than let a good domain go to waste, I pivoted to the other kind of VR: verification and rewards for AI agents.

The problem I kept running into: agents report success but system state tells a different story. The database row is still active. The IMAP sent folder is empty. The tests pass because the agent modified the tests. Real benchmarks put agent success at 12-30%, and even among reported successes a large fraction are procedurally wrong in ways that are hard to catch without actually checking state.

So I built a library of verifiers that check real system state rather than trusting agent self-reports. There are 38 of them across 19 domains right now, organized into three tiers: HARD (deterministic probes against databases, files, APIs, git), SOFT (LLM rubric scoring for things like tone or coherence that don't have a deterministic test), and AGENTIC (verifiers that actively probe the environment via headless browser, IMAP, or shell).

The design decision I'd most like feedback on is the composition model. SOFT scores are gated behind HARD checks, so if the deterministic check fails, the composed score is 0.0 regardless of what the LLM judge says. The idea is to make reward hacking structurally harder rather than just hoping the judge catches it.
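The gating idea can be sketched in a few lines. This is an illustrative toy, not vr.dev's actual API — the `compose` function and its signature are my own invention, but it shows the structural property: a failed HARD check zeroes the score no matter what the SOFT judge said.

```python
def compose(hard_results: list[bool], soft_scores: list[float]) -> float:
    """Gate SOFT scores behind HARD checks: any deterministic
    failure zeroes the composed score, regardless of the LLM judge."""
    if not all(hard_results):
        return 0.0  # fail closed
    # Otherwise average the rubric scores (each in 0.0-1.0).
    return sum(soft_scores) / len(soft_scores) if soft_scores else 1.0

# A passing HARD tier lets SOFT scores through...
print(compose([True, True], [0.5, 0.75]))   # 0.625
# ...but one failed deterministic probe zeroes everything.
print(compose([True, False], [0.9, 0.95]))  # 0.0
```

The point of multiplying (rather than averaging) the tiers together is that an agent can't trade a convincing-sounding transcript against a database row that never changed.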

MIT licensed, runs locally via `pip install vrdev`, no dependency on the hosted API — which matters if you're using it in a training loop. Full verifier list at https://vr.dev/registry.

Curious whether the HARD/SOFT/AGENTIC taxonomy makes sense to people, whether fail_closed is the right default, and whether anyone has built something similar and run into problems I haven't hit yet.

https://vr.dev https://github.com/vrDotDev/vr-dev https://pypi.org/project/vrdev/

1

Krira Augment – Production Ready RAG in Minutes #

kriralabs.com faviconkriralabs.com
0 comments4:11 PMView on HN
Building a RAG prototype with tools like LangChain is relatively easy, but scaling it into a reliable production system often requires a lot of engineering work — infrastructure, monitoring, scaling, pipelines, and maintenance.

To explore this problem, I’ve been building Krira Augment at Krira Labs.

I’m the founder and CEO of Krira Labs, and my goal is to build AI infrastructure that helps developers and companies create production-ready systems for RAG, AI agents, MCP servers, and related workflows.

I’ve built an early prototype of Krira Augment to experiment with this idea. If this sounds interesting, you can join the waitlist here:

-> https://www.kriralabs.com/waitlist

You can watch the Krira Augment demo here: -> https://youtu.be/v9i2tOorwqY?si=-IbuqSoO4bTfnw0D

I’m currently bootstrapping this project and would love feedback from the Hacker News community.

1

Parascene – a platform for AI, algorithmic, and traditional art #

sh.parascene.com faviconsh.parascene.com
0 comments1:20 PMView on HN
Hi HN — I am building Parascene as a place to create and share different kinds of art.

It supports AI-generated images, algorithmic/generative art, and traditional artwork. The goal is to put them in the same space so people can explore techniques, prompts, and creative ideas across mediums.

Current features include image generation tools, a public gallery, prompt sharing, and profiles where artists can post work.

Another goal is to encourage, enable, and reward local AI usage by sharing generation capabilities among the community.

I’m interested in feedback on the concept and the product. Does combining these mediums in one platform make sense?

https://www.parascene.com/help

1

Sandsofti.me – Visualize the time you have left with loved ones #

sandsofti.me faviconsandsofti.me
0 comments7:02 PMView on HN
I built this after reading “The Tail End” by Tim Urban. I wanted a simple way to understand my lifespan relative to the people most important to me.

SandsOfTi.me helps you visualize your approximate lifespan, your healthy years remaining, and the time you likely have left with your parents, children, spouse, and more.

Stuff of note:

- Single-page app (HTML/JS)

- No sign-up required

- Nothing you input is stored

- Save your visualization as an image, or export your data so you can restore it later

- All lifespan estimates are based on U.S. Social Security Administration Cohort Life Tables (already considering other data sources, but using more than one standard gets complex)
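The core arithmetic behind a visualization like this is simple. Here's a back-of-the-envelope sketch using made-up life-expectancy figures (the site itself draws on SSA cohort life tables, and real actuarial remaining-life expectancy varies with current age, not just birth cohort):

```python
def years_left(age: float, life_expectancy: float) -> float:
    """Rough remaining years: expectancy minus current age, floored at 0."""
    return max(life_expectancy - age, 0.0)

def shared_years(age_a: float, le_a: float, age_b: float, le_b: float) -> float:
    """You only overlap for as long as both people are alive."""
    return min(years_left(age_a, le_a), years_left(age_b, le_b))

# e.g. a 35-year-old (LE 82) and their 65-year-old parent (LE 85):
# the parent's ~20 remaining years bound the shared time.
print(shared_years(35, 82, 65, 85))  # 20.0
```

The "Tail End" insight is the next step: multiply that overlap by how often you actually see each other, and the number shrinks dramatically.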

Feedback and suggestions are very welcome!

Related resources:

- SSA Cohort Life Tables - https://www.ssa.gov/oact/HistEst/CohLifeTables/2024/CohLifeT...

- The Tail End, by Tim Urban @ Wait But Why: https://waitbutwhy.com/2015/12/the-tail-end.html

- “When This Number Hits 5200 - You Will be Dead”, by Kurzgesagt: https://www.youtube.com/watch?v=JXeJANDKwDc

1

Gate – deterministic write-path checkpoint for AI agents #

zehrava.com faviconzehrava.com
0 comments1:21 PMView on HN
The write-path control plane for AI agents.

AI agents can read systems freely. The problem is writes — CRM imports, email sends, database updates, refunds. Once the agent does it, you can't undo it.

Gate sits between the agent and production. Agents submit an intent, Gate evaluates a YAML policy, and only then issues a signed execution order. No LLM at evaluation time — same input always produces the same output.

The flow:

- POST /v1/intents → policy evaluated → approved/blocked/pending

- POST /v1/intents/:id/execute → gex_ token (15 min TTL)

- Worker uses token → reports outcome

- Full audit trail: proposed → policy_checked → approved → execution_requested → execution_succeeded
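The "no LLM at evaluation time" property is the interesting part: policy evaluation is plain rule matching, so the same intent always gets the same decision. A hypothetical sketch — the rule fields and decision names here are illustrative, not Gate's actual YAML schema:

```python
# Rules are evaluated top-down, first match wins; default is deny.
RULES = [
    {"action": "refund", "max_amount": 100, "decision": "approved"},
    {"action": "refund", "decision": "pending"},  # over the cap -> human review
    {"action": "*", "decision": "blocked"},       # default deny
]

def evaluate(intent: dict) -> str:
    """Deterministic: same intent in, same decision out. No model calls."""
    for rule in RULES:
        if rule["action"] in (intent["action"], "*"):
            cap = rule.get("max_amount")
            if cap is not None and intent.get("amount", 0) > cap:
                continue  # rule doesn't apply; fall through to the next one
            return rule["decision"]
    return "blocked"  # fail closed if nothing matched

print(evaluate({"action": "refund", "amount": 40}))   # approved
print(evaluate({"action": "refund", "amount": 500}))  # pending
print(evaluate({"action": "delete_user"}))            # blocked
```

Only an "approved" decision would then mint the short-lived signed execution token the worker needs, so the policy check can't be bypassed by a persuasive agent.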