Daily Show HN


Show HN for March 24, 2026

38 items
644

I took back Video.js after 16 years and we rewrote it to be 88% smaller #

videojs.org
138 comments · 6:03 PM · View on HN
What do you do when private equity buys your old company and fires the maintainers of the popular open source project you started over a decade ago? You reboot it, and bring along some new friends to do it.

Video.js is used by billions of people every month, on sites like Amazon.com, LinkedIn, and Dropbox, and yet it wasn’t in great shape. A skeleton crew of maintainers was doing its best with a dated architecture, but the project needed more. So Sam from Plyr, Rahim from Vidstack, and Wes and Christian from Media Chrome jumped in to help me rebuild it better, faster, and smaller.

It’s in beta now. Please give it a try and tell us what breaks.

434

Gemini can now natively embed video, so I built sub-second video search #

github.com
108 comments · 2:58 PM · View on HN
Gemini Embedding 2 can project raw video directly into a 768-dimensional vector space alongside text. No transcription, no frame captioning, no intermediate text. A query like "green car cutting me off" is directly comparable to a 30-second video clip at the vector level.

I used this to build a CLI that indexes hours of footage into ChromaDB, then searches it with natural language and auto-trims the matching clip. Demo video on the GitHub README. Indexing costs ~$2.50/hr of footage. Still-frame detection skips idle chunks, so security camera / sentry mode footage is much cheaper.
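The core trick here is that a text query and a video clip become directly comparable vectors. A minimal sketch of that comparison step, using plain cosine similarity over 768-dim vectors (the toy vectors below stand in for what the embedding API would return; `top_matches` is an illustrative helper, not the project's CLI):

```python
import numpy as np

def top_matches(query_vec, clip_vecs, clip_ids, k=3):
    """Rank video clips by cosine similarity to a text query vector.

    Text and video land in the same space, so there is no transcription
    step -- just a dot product after L2-normalizing both sides.
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = clip_vecs / np.linalg.norm(clip_vecs, axis=1, keepdims=True)
    scores = m @ q                        # cosine similarity per clip
    order = np.argsort(scores)[::-1][:k]  # highest similarity first
    return [(clip_ids[i], float(scores[i])) for i in order]

# Toy stand-ins for real embeddings: random vectors, with the query
# deliberately placed near clip "c2".
rng = np.random.default_rng(0)
clips = rng.normal(size=(5, 768))
query = clips[2] + 0.1 * rng.normal(size=768)
print(top_matches(query, clips, ["c0", "c1", "c2", "c3", "c4"]))
```

The same ranking is what a vector store like ChromaDB does internally when you hand it precomputed embeddings and a query vector.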

161

ProofShot – Give AI coding agents eyes to verify the UI they build #

proofshot.argil.io
106 comments · 7:46 AM · View on HN
I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can’t tell if the layout is broken or if the console is throwing errors.

So I built a CLI that lets the agent open a browser, interact with the page, record what happens, and collect any errors. Then it bundles everything — video, screenshots, logs — into a self-contained HTML file I can review in seconds.

proofshot start --run "npm run dev" --port 3000
# agent navigates, clicks, takes screenshots
proofshot stop

It works with whatever agent you use (Claude Code, Cursor, Codex, etc.) — it’s just shell commands. It's packaged as a skill, so your AI coding agent knows exactly how it works. It's built on agent-browser from Vercel Labs, which is far better and faster than Playwright MCP.

It’s not a testing framework. The agent doesn’t decide pass/fail. It just gives me the evidence so I don’t have to open the browser myself every time.

Open source and completely free.

https://github.com/AmElmo/proofshot

118

AI Roundtable – Let 200 models debate your question #

opper.ai
95 comments · 7:15 PM · View on HN
Hey HN! After the Car Wash Test post got quite a big discussion going (400+ comments, https://news.ycombinator.com/item?id=47128138), I spent the past few weeks building a tool so anyone can run these kinds of questions and get structured results. No signup and free to use.

You type a question, define answer options, pick up to 50 models at a time from a pool of 200+, and they all answer independently under identical conditions. No system prompt, structured output, same setup for every model.

You can also run a debate round where models see each other's reasoning and get a chance to change their minds. A reviewer model then summarizes the full transcript. All models are routed via my startup Opper. Any feedback is welcome!

Hope you enjoy it, and would love to hear what you think!

108

Gridland: make terminal apps that also run in the browser #

gridland.io
14 comments · 4:57 PM · View on HN
Hi everyone,

Gridland is a runtime + ShadCN UI registry that makes it possible to build terminal apps that run in the browser as well as the native terminal. This is useful for demoing TUIs so that users know what they're getting before they are invested enough to install them. And, tbh, it's also just super fun!

Gridland is the successor to Ink Web (ink-web.dev) which is the same concept, but using Ink + xterm.js. After building Ink Web, we continued experimenting and found that using OpenTUI and a canvas renderer performed better with less flickering and nearly instant load times.

We're excited to continue iterating on this. I expect a lot of criticism from the "why does this need to exist" angle, and tbh, it probably doesn't - it's really mostly just for fun, but we still think the demo use case mentioned previously has potential.

- Chris + Jess

12

Antimatter – Match the opposites (Mahjong solitaire mechanic) #

linguabase.org
5 comments · 7:32 PM · View on HN
I love word association games. Here's a playable wordplay game where you match opposite word tiles. After playtesting a lot of mechanics, I think opposites play really nicely with Mahjong solitaire.

The current generation of frontier LLMs can't make puzzles that get much more interesting than hot-vs-big-vs-fast. New inferences keep circling a small pool of concepts unless the prompting has a way to get the LLM into new territories of language. Puzzlemaking needs graph traversals.

I made 20 levels algorithmically because I work with a huge semantic graph with over 100M edges, built from manual lexicography and millions of LLM inferences (various models). I keep exploring what can emerge from this graph. The puzzles are randomly selected; reload to see others.

The front-end was built with Claude Code.

Maybe someday I'll make this into a mobile game, increase the complexity and peril. If you are a gamedev, feel free to dissect it and borrow any parts.

10

Skub – a sliding puzzle browser game #

skub.app
6 comments · 6:22 PM · View on HN
Hi HN,

I've built Skub, a sliding puzzle game for the browser, based on a classic boardgame: Ricochet Robots.

It started as a challenge of trying to simplify the boardgame mechanics to fit on a mobile browser, which led to an 8x8 grid.

Since then, it has evolved into more of an experiment with Deno, and a way for me to truly try out AI-assisted development. Claude Code has been especially helpful in building the BFS solver and setting up CI, less so in the UI and game logic.
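For the curious, the Ricochet Robots mechanic reduces nicely to BFS: each move slides a piece until it hits a wall or a blocker, so the search branches over slide-destinations rather than single squares. A rough single-piece sketch (the grid size, blocker positions, and move names are invented for illustration, not Skub's actual implementation):

```python
from collections import deque

def slide(pos, d, blockers, n=8):
    """Slide from pos in direction d until a wall or blocker stops you."""
    r, c = pos
    dr, dc = d
    while True:
        nr, nc = r + dr, c + dc
        if not (0 <= nr < n and 0 <= nc < n) or (nr, nc) in blockers:
            return (r, c)
        r, c = nr, nc

def solve(start, goal, blockers, n=8):
    """BFS over slide-moves; returns the shortest move sequence."""
    dirs = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        pos, path = frontier.popleft()
        if pos == goal:
            return path
        for name, d in dirs.items():
            nxt = slide(pos, d, blockers, n)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # goal unreachable with these blockers

# Example: start at (0, 0), blockers at (3, 4) and (7, 5); reach (4, 4).
print(solve((0, 0), (4, 4), {(3, 4), (7, 5)}))
```

Note that without blockers a piece can only ever stop at the board edges, which is what makes these puzzles surprisingly deep.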

I hope you enjoy it, all questions / feedback welcome.

9

Off By – a daily game about how wrong we are about the American economy #

offby.io
5 comments · 12:26 PM · View on HN
We form opinions about wages, prices, and housing from vibes, headlines, and anecdotes. The actual numbers are usually somewhere unexpected.

Off By is a daily game: five real economic statistics, you guess each one with a slider, then see how far off you were. Everyone plays the same questions on the same day.

The average player is off by 25%. I'm usually in that range too.

Free, no account needed.

Curious whether HN skews more accurate than average, and always looking for statistics that are surprising but airtight. If you know a good one, feel free to drop it in the comments.

8

Running AI agents across environments needs a proper solution #

github.com
5 comments · 11:57 AM · View on HN
Hi HN folks,

I have been building AI agents for quite some time now. The shift has gone from LLM + Tools → LLM Workflows → Agent + Tools + Memory, and now we are finally seeing true agency emerge: agents as systems composed of tools, command-line access, fine-grained system capabilities, and memory.

This way of building agents is powerful, and I believe it is here to stay. But the real question is: are the systems powering these agents ready for that future?

I do not think so.

Using Docker for a single agent is not going to scale well, because agents need to be lightweight and fast. LLMs already add significant latency, so adding heavy runtime overhead on top only makes things worse. Existing solutions start to fall apart here.

Agents built in Python also tend to have a large memory footprint, which becomes a serious problem when you want to scale to thousands of agents.

And open-source for agents is still not where it should be. Right now, I cannot easily reuse agents built by domain experts the same way I reuse open-source software.

These issues bothered me, and I realized that if agents are ever going to be democratized, they need to be open and easy to use. Just like Docker solved system dependencies, we need something similar for agents.

That is why I started building an agent framework in Rust. It is modular and follows the principle of true agency: an agent is an entity with tools, memory, and an executor. In AutoAgents, users can independently create and modify tools, executors, and memory.

With AutoAgents, I saw that powerful agents could be built without compromising on performance or memory the way many other frameworks do.

But the other problems still remained: re-sharing agents, sandboxing, and scaling to thousands of agents.

So I created Odyssey — a bundle-first agent runtime built in Rust on top of AutoAgents. It lets you define an agent once, package it as a portable artifact, and run it through the same execution model across local development, embedded SDK usage, shared runtime servers, and terminal workflows.

Both AutoAgents and Odyssey are fully open source and built in Rust, and I am planning to build an Odyssey Agent Hub soon, with additional features like WASM tools, custom memory layers, and more.

My vision is to democratize agents so they are available to everyone — securely and performantly. Being open is not enough; agents also need to be secure.

The project is still in alpha, but it is in a working state.

AutoAgents Repo -> https://github.com/liquidos-ai/AutoAgents

Odyssey Repo -> https://github.com/liquidos-ai/Odyssey

I would really appreciate feedback — especially from anyone who has dealt with similar problems. Your feedback helps me shape the product.

Thanks for your time in advance!

6

Offline-first UK train planner #

railraptor.com
2 comments · 10:08 AM · View on HN
Hey HN! I'd like to share my personal experiment with a different way of finding train journeys across the UK.

When I book flights, I use sites like Kiwi and Skyscanner for flexible searches - multiple airports, custom connections, creative routes, etc. But rail search feels oddly constrained. All the UK train operators offer basically the same experience, and surface the exact same routes. I always suspected there were better or just different options that weren’t being shown. Where is the "Skyscanner for trains"?

After digging through the national rail data feeds, I decided to have a go at building my own route planner that runs completely offline in the browser. This gave me the freedom to implement more complex filters, search to/from multiple stations, and do it without a persistent network connection.

Now I'm finding routes that aren't offered by the standard train operators, connecting at different stations, and finding it's often easier to travel to different stations (some I'd never heard of) that get me closer and faster to where I actually want to go!

It's still a little rough and I'd like to add more features such as fares, VSTP data, and direct-links to book tickets, but wanted to share early and get some initial feedback before investing more time into it. So, thanks in advance - let me know what you think.

6

Visualizing Apple Health workout data (stats, trends, insights) #

apps.apple.com
0 comments · 3:56 PM · View on HN
I've just launched this little iOS app as an alternative to Apple Fitness, which is cluttered and chaotic when it comes to visualizing basic workout stats and metrics.

The idea is to focus on a clean, minimalistic design and only show high-level metrics that are actually useful, e.g. how often did I work out this week/month/year, cardio vs strength vs mobility, surf session count in February, etc.

It's free and offline. Also, no signup, no ads, no data sharing, no social feeds, and no notifications.

Just open it and get a quick glance at the state of your workout game within 2s.

It's built in native Swift with Liquid Glass.

Feedback is quite good so far, but it seems hard to get initial traction in the App Store, especially with all the AI slop these days. Would deeply appreciate some early downloads and honest reviews.

4

Lexplain – AI-powered Linux kernel change explanations #

lexplain.net
0 comments · 10:24 PM · View on HN
To understand what changed between kernel versions, you have to dig through the git repository yourself. Commit messages rarely tell you the real-world impact on your systems — you need to analyze the actual diffs with knowledge of kernel internals. For engineers who use Linux — directly or indirectly — but aren't kernel developers, that barrier is pretty high.

I kept finding out about relevant changes only after an issue had already hit, and it was most frustrating when the version was too new to find similar cases online. I built lexplain with the idea that it would be nice to quickly scan through kernel changes the way you'd skim the morning news.

It reads diffs, analyzes the code, and generates two types of documents:

- Commit analyses: context, code breakdown, behavioral impact, risks, references

- Release notes: per-version highlights, functional classification, subsystem breakdown, impact analysis

Documents build on each other — individual commits first, then merge commits using child analyses, then release notes using all analyses for that version. Claims based on inference are explicitly labeled.

Work in progress. Feedback welcome.

4

Jelly – SSH Social Hangout #

6 comments · 4:08 PM · View on HN
built a social network you connect to over SSH.

no signup, no browser, just open your terminal and you're in.

channels, profiles, guestbook, shared blackboard, Top 8.

your identity is your SSH key fingerprint so no passwords needed.

to connect:
ssh-keygen -t ed25519 (just hit enter through all the prompts)
ssh jellyshell.dev

built with Go, Bubble Tea, and Wish.

i wanted to make something that maintains privacy and gets away from the brain rot and algorithms pushing rage bait.

lmk what you think.

3

Generate, preview, and export 3D models without complex software #

ai3dgen.com
0 comments · 4:16 AM · View on HN
Hello HN. Creating 3D assets usually takes hours of manual work, so I created ai3dgen.com to automate the entire process. You simply upload a 2D image or describe what you want, and the AI generates a draft 3D asset ready for your game, AR, or VR project. Please try it out and let me know how I can improve the pipeline for your specific use cases.

3

Origin – Git blame for AI agents (track which AI wrote every line) #

getorigin.io
0 comments · 10:15 PM · View on HN
We built Origin after realizing nobody on our team could answer "which AI wrote this?" when a bug appeared.

Origin hooks into Claude Code, Cursor, Gemini, and Codex. Every commit gets tagged with which agent wrote it, what prompt generated it, which model, and what it cost. Run `origin blame` on any file and you see [AI] or [HU] per line — like git blame but for AI agents.

All data stored in git notes. No server required, works offline, zero lock-in.
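A rough sketch of what storing attribution in git notes can look like, driven from Python via subprocess (the notes ref and JSON fields below are illustrative guesses, not Origin's actual schema):

```python
import json
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command and return its trimmed stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

# Throwaway repo with one commit to annotate.
repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "demo@example.com", cwd=repo)
git("config", "user.name", "demo", cwd=repo)
with open(os.path.join(repo, "app.py"), "w") as f:
    f.write("print('hi')\n")
git("add", "app.py", cwd=repo)
git("commit", "-q", "-m", "add app", cwd=repo)
sha = git("rev-parse", "HEAD", cwd=repo)

# Attach agent attribution to the commit in a dedicated notes ref,
# without touching the commit object itself.
meta = {"agent": "claude-code", "model": "example-model", "prompt": "add hello"}
git("notes", "--ref", "refs/notes/agent-sessions", "add", "-m",
    json.dumps(meta), sha, cwd=repo)

# Read it back -- no server, just git plumbing.
note = git("notes", "--ref", "refs/notes/agent-sessions", "show", sha, cwd=repo)
print(json.loads(note)["agent"])
```

Because notes live in their own ref, they can be pushed, fetched, or ignored independently of the branch history, which is what makes the approach zero lock-in.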

CLI is open source (MIT): github.com/dolobanko/origin-cli
Team dashboard: getorigin.io

Would love feedback on the attribution approach — we're using git notes to store session data on commit hashes.

3

Danube – AI Tools Marketplace #

danubeai.com
1 comment · 11:41 AM · View on HN
Hey HN,

I built Danube, a marketplace where AI agents can discover and execute tools, and where developers can publish and monetize them.

I got tired of two things: giving my API keys directly to agents like OpenClaw (didn't feel secure), and having to set up all my MCP servers again every time I switched between Cursor, Claude Code, and other tools.

Danube stores your credentials securely. Your agent calls the tool and never sees the keys. And since it's one MCP connection, you set it up once and it works across all your clients.

For devs who want to publish: you upload an OpenAPI spec or MCP server, optionally set pricing, and you're live. Agents can search and find your tools without users needing to manually configure anything.

100+ services work today. No signup required to browse.

Would love to hear what tools you use most with AI agents. If anyone's interested in publishing a tool, happy to help get you set up.

3

Updated GiantJSON Viewer – Opening 100GB JSONs on Android (Rust+SIMD) #

giantjson.com
5 comments · 7:50 PM · View on HN
Hey there,

Around 2 months ago I presented my first version of "Giant JSON Viewer" here: https://news.ycombinator.com/item?id=46609592

Now, after a lot of ups and downs, around a hundred bugs squashed, and deeper testing, I'm proudly presenting a new version built on a complete rework of the Rust core, along with some related, privacy-first tools.

First of all, I added a few more common formats:

- JSON, NDJSON/JSONL
- CBOR (converted to JSON first)
- MsgPack (converted to JSON first)
- HAR (with dedicated features)
- Markdown (view-only)

Then, I went further with my stress testing and managed to open a ~100GB JSON file on an S23 Ultra. The previous version crashed; with the reworked indexer, backend, and aux files, it worked! It took 40 minutes to index, but it worked.

After fixing a compiler optimization mistake of mine, the SIMD code finally got its real native power, and the first indexing went down to 4-5 minutes for 100GB. On a phone! Handling the file (scrolling, viewing, and jumping to elements) is instant after the first indexing.

Search and filter are still (relatively) blazing fast with memchr::memmem.

Then, to make the app more useful as a daily tool, I started building additional features:

- A rich REST API CLIENT (not fully featured yet, but supports GraphQL, OAuth2, and AWS SigV4).

- HAR analyzer. Since HAR is just JSON, why not utilize my existing backend to take advantage of it? (The first opening takes a bit longer, because apart from the initial JSON indexing, it requires request metadata processing for the filter/search and statistical features).

- Simple mock API: nothing fancy, just statically host any files on the local network (Wi-Fi, USB tethering, USB Ethernet).

- Privacy-first handy features: Why would you use online tools and risk sensitive data if you can do it locally? You might have your tools already (js, python, whatever), but if not, I can host a webUI for you from the app on the local network for all these. JWT, small JSON tools like prettify, minify, stringify, unescape, unix timestamp, hash generator... all on your own private webUI, hosted by the app. (Same features available in the app too).

Yes, this is my very first published app ever, and it's only available on Android, sorry. There are still some minor bugs, and 1 major (edge case) issue, but those will be handled too.

If you could take a look and let me know what your first impression is, I'd love to hear HONEST feedback! (I just realized recently how difficult it is to get useful info from users, even bug reports).

This is a freemium app: the JSON viewer section is completely free with no size limits; the export/transform features, API client, and tools are paid.

3

A/B test images with your eyes using ARKit face tracking #

saccadeapp.com
0 comments · 9:52 PM · View on HN
Saccade uses ARKit face tracking to measure which image you actually look at longer in a side-by-side (well, top-and-bottom as it runs in portrait mode) comparison.

Import a set of images, and it runs every possible pair, tracking your gaze for a dynamically-timed duration each (typically just a fraction of a second). It then ranks the images in descending order of how long your gaze lingered on each. You can then rerun the same batch of images to compound data over multiple tests, or start new. And of course you can easily share your results with friends and colleagues.
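The pairwise mechanic is easy to picture in code: every pair of images gets a trial, per-image gaze time accumulates across trials, and the final ranking is a sort. A toy sketch (`gaze_time` is a stand-in for the ARKit measurement, and the preference numbers are fabricated for the demo; none of this is the app's actual code):

```python
from itertools import combinations

def rank_images(images, gaze_time):
    """Run every possible pair and rank images by total gaze duration.

    gaze_time(a, b) models one side-by-side trial: it returns
    (seconds spent looking at a, seconds spent looking at b).
    """
    totals = {img: 0.0 for img in images}
    for a, b in combinations(images, 2):
        ta, tb = gaze_time(a, b)
        totals[a] += ta
        totals[b] += tb
    # Descending order of accumulated gaze time.
    return sorted(images, key=lambda img: totals[img], reverse=True)

# Fake gaze data: pretend the viewer lingers longest on "sunset".
pref = {"sunset": 0.9, "forest": 0.5, "city": 0.2}
fake = lambda a, b: (pref[a], pref[b])
print(rank_images(["city", "sunset", "forest"], fake))
```

Rerunning the same batch and adding the new durations into the same totals is what lets results compound over multiple tests.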

Bonus: The Emoji Duel game is pretty fun for onboarding; our 4yo can't get enough of it. :)

100% free, no backend, no user auth, no data leaves your phone. Requires iPhone with Face ID.

2

Palettepoint.com, AI palette generator with 120K+ curated palettes #

palettepoint.com
0 comments · 10:18 PM · View on HN
I built PalettePoint (https://palettepoint.com), a tool for generating color palettes from text prompts or images using AI.

You type something like "warm desert sunset" or upload a photo, and it gives you a palette with named colors and accessibility contrast data. The conversation is persistent so you can refine results ("make it more muted", "add a teal accent") like you would in ChatGPT.

There are also 120K+ palettes in the gallery, extracted from photos and generated by AI. You can export any palette to CSS custom properties, SCSS variables, Tailwind config, or JSON.

Other tools included: contrast checker (WCAG 2.1), gradient generator, color converter, color mixer, image color extractor, and a palette generator using color theory (complementary, analogous, triadic, etc.).

Would appreciate any feedback.

1

I built a local open-source tracker to bypass Jira UI friction #

0 comments · 12:36 PM · View on HN
I originally built SheepCat (https://github.com/Chadders13/SheepCat-TrackingMyWork) to solve a massive personal pain point. When you are holding a complex mental model of a C# backend, or trying to optimize a heavy SQL query, the absolute worst thing for your working memory is having a PM come over and require an update, which forces you to open a heavy web browser, navigate a clunky enterprise UI, and find a specific ticket just to log a status update. The context-switching friction is brutal.

So I built a local, lightweight Python desktop app to bypass it. The core philosophy is the "gentle nudge": working with our brains instead of against them, which is especially helpful for neurodivergent developers dealing with the dyslexia or ADHD tax. It takes quick updates when you have two seconds and stores them until you have the time to focus on a proper write-up. All messages are enhanced by AI, meaning you can drop in messy, half-written notes and get back a polished summary.

With the new v1.3 release, you can log your micro-updates locally all day, then push them directly to Jira or Azure DevOps via API without ever leaving your flow state.

The code: https://github.com/Chadders13/SheepCat-TrackingMyWork

I'm keeping this entirely free and open-source to help as many developers as possible. If this saves your flow state or makes your Monday a little less chaotic, any support to help ramp up the project and keep the caffeine flowing is massively appreciated: https://buymeacoffee.com/chadders13h/membership

I'd love any feedback on the code.