Daily Show HN


Show HN for February 25, 2026

65 posts
223

Respectify – a comment moderator that teaches people to argue better #

respectify.org
229 comments · 2:21 PM · View on HN
My partner, Nick Hodges, and I, David Millington, have been on the Internet for a very long time -- since the Usenet days. We’ve seen it all, and have long been frustrated by bad comments, horrible people, and discouraging discussions.

We've also been around places where the discussion is wonderful and productive. How to get more of the latter and less of the former?

Current moderation tools just seem to focus on deletion and banning. Wouldn’t it be helpful to encourage productive discussion and teach people how to discuss and argue (in the debate sense) better?

A year ago we started building Respectify. We create healthy communication. Instead of just deleting bad-faith comments, we suggest better, good-faith ways to say what folks are trying to say. We help people avoid:

* Logical fallacies (false dichotomy, strawmen, etc.)

* Tone issues (how others will read the comment)

* Relevance to the actual page/post topic

* Low-effort posts

* Dog whistles and coded language

The commenter gets an explanation of what's wrong and a chance to edit and resubmit. It's moderation + education in one step.

We also want to automate the entire process so the site owner can focus on content and not worry about moderation at all, and over time, comment by comment, quietly coach better thinking.

We hope the result is better discussions and a better Internet. Not too much to ask, eh?

Our main website has an interactive demo: https://respectify.ai

As the demo shows, the system is completely tunable and adjustable, from "most anything goes" to "You need to be college debate level to get by me".

We love the kind of feedback this group is famous for.

We can both be reached at [email protected]

222

I ported Tree-sitter to Go #

github.com
106 comments · 6:28 PM · View on HN
This started as a hard requirement for my TUI-based editor application, but it ended up going in a few different directions.

A suite of tools that help with semantic code entities: https://github.com/odvcencio/gts-suite

A next-gen version control system called Got: https://github.com/odvcencio/got

I think this has some pretty big potential! There are many classes of application (particularly legacy architectures) that can benefit from this kind of analysis tooling. My next post will be about composing all these together, an exciting project I call GotHub. Thanks!

220

A real-time strategy game that AI agents can play #

llmskirmish.com
80 comments · 10:02 AM · View on HN
I've liked all the projects that put LLMs into game environments. It's been a weird juxtaposition, though: frontier LLMs can one-shot full coding projects, and those same models struggle to get out of Pokémon Red's Mt. Moon.

Because of this, I wanted to create a game environment that put this generation of frontier LLMs' top skill, coding, on full display.

Ten years ago, a team released a game called Screeps. It was described as an "MMO RTS sandbox for programmers." The Screeps paradigm of writing code and having it executed in a real-time game environment is well suited to LLMs. Drawing on a version of the Screeps open source API, LLM Skirmish pits LLMs head-to-head in a series of 1v1 real-time strategy games.

In my testing I found that Claude Opus 4.5 was the most dominant model, but it showed weakness in round 1 as it was overly focused on its in-game economy. Meanwhile, I probably spent a third of all code on sandbox hardening because GPT 5.2 kept trying to cheat by pre-reading its opponent's strategies.

If there's interest, I'm planning on doing a round of testing with the latest generation of LLMs (Claude 4.6 Opus, GPT 5.3 Codex, etc.).

You can run local matches via CLI. I'm running a hosted match runner with Google Cloud Run that uses isolated-vm. The match playback visualizer is statically served from Cloudflare.

I've created a community ladder that you can submit strategies to via CLI, no auth required. I've found that the CLI plus the skill.md that's available has been enough for AI agents to immediately get started.

Website: https://llmskirmish.com

API docs: https://llmskirmish.com/docs

GitHub: https://github.com/llmskirmish/skirmish

A video of a match: https://www.youtube.com/watch?v=lnBPaZ1qamM

140

I ported Manim to TypeScript (run 3b1b math animations in the browser) #

github.com
25 comments · 6:15 PM · View on HN
Hi HN, I'm Narek. I built Manim-Web, a TypeScript/JavaScript port of 3Blue1Brown’s popular Manim math animation engine.

The Problem: Like many here, I love Manim's visual style. But setting it up locally is notoriously painful - it requires Python, FFmpeg, Cairo, and a full LaTeX distribution. It creates a massive barrier to entry, especially for students or people who just want to quickly visualize a concept.

The Solution: I wanted to make it zero-setup, so I ported the engine to TypeScript. Manim-Web runs entirely client-side in the browser. No Python, no servers, no install. It runs animations in real-time at 60fps.

How it works underneath:

- Rendering: Canvas API / WebGL (via Three.js for 3D scenes).
- LaTeX: rendered and animated via MathJax/KaTeX (no LaTeX install needed!).
- API: almost identical to the Python version (e.g., scene.play(new Transform(square, circle))), meaning existing Manim knowledge transfers over directly.
- Reactivity: Updaters and ValueTrackers follow the exact same reactive pattern as the Python original.

Because it's web-native, the animations are now inherently interactive (objects can be draggable/clickable) and can be embedded directly into React/Vue apps, interactive textbooks, or blogs. I also included a py2ts converter to help migrate existing scripts.

Live Demo: https://maloyan.github.io/manim-web/examples

GitHub: https://github.com/maloyan/manim-web

It's open-source (MIT). I'm still actively building out feature parity with the Python version, but core animations, geometry, plotting, and 3D orbiting are working great. I would love to hear your feedback, and I'll be hanging around to answer any technical questions about rendering math in the browser!

134

Django Control Room – All Your Tools Inside the Django Admin #

github.com
54 comments · 2:31 PM · View on HN
Over the past year I’ve been building a set of operational panels for Django:

- Redis inspection
- cache visibility
- Celery task introspection
- URL discovery and testing

All of these tools have been built inside the Django admin.

Instead of jumping between tools like Flower, redis-cli, Swagger, or external services, I wanted something that sits where I’m already working.

I’ve grouped these under a single umbrella: Django Control Room.

The idea is pretty simple: the Django admin already gives you authentication, permissions, and a familiar interface. It can also act as an operational layer for your app.

Each panel is just a small Django app with a simple interface, so it’s easy to build your own and plug it in.

I’m working on more panels (signals, errors, etc.) and also thinking about how far this pattern can go.

Curious how others think about this. Does it make sense to consolidate this kind of tooling inside the admin, or do you prefer keeping it separate?

130

Clocksimulator.com – A minimalist, distraction-free analog clock #

clocksimulator.com
101 comments · 2:17 PM · View on HN
Hello all! I built a clean, minimalist analog clock webpage, deployed to Cloudflare Pages.

This is for (maybe):

- kids learning to tell time
- a second monitor
- an old tablet on a shelf
- ...

There are theme and screen wake lock buttons with auto-hide. The goal is to keep it as clean as possible.

This possibly makes no sense, but at $10/year for the domain it's a cheap site for me to keep and see how it lives on.

82

Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code #

github.com
22 comments · 6:23 AM · View on HN
Every MCP tool call dumps raw data into Claude Code's 200K context window. A Playwright snapshot costs 56 KB; 20 GitHub issues cost 59 KB. After 30 minutes, 40% of your context is gone.

I built an MCP server that sits between Claude Code and these outputs. It processes them in sandboxes and only returns summaries: 315 KB becomes 5.4 KB. It supports 10 language runtimes, SQLite FTS5 with BM25 ranking for search, and batch execution. Session time before slowdown goes from ~30 min to ~3 hours.

MIT licensed, simple install:

/plugin marketplace add mksglu/claude-context-mode
/plugin install context-mode@claude-context-mode

Benchmarks and source: https://github.com/mksglu/claude-context-mode

Would love feedback from anyone hitting context limits in Claude Code.
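The pattern the post describes (persist the full tool output out of band, return only a short summary plus a handle the agent can drill into later) can be sketched roughly like this; the function name and spool layout are my illustration, not the plugin's actual code:

```python
import hashlib
import tempfile
from pathlib import Path

SPOOL_DIR = Path(tempfile.gettempdir()) / "mcp-spool"

def spool_tool_output(tool: str, payload: str, preview_chars: int = 400) -> dict:
    """Persist the full payload to disk; return only a short preview
    plus a handle the agent can use for targeted follow-up reads."""
    SPOOL_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()[:16]
    path = SPOOL_DIR / f"{tool}-{digest}.txt"
    path.write_text(payload)
    return {
        "tool": tool,
        "handle": str(path),       # full data stays here, out of context
        "bytes": len(payload),
        "preview": payload[:preview_chars],
    }
```

Only the small dict reaches the model's context; anything else is fetched on demand via the handle.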
36

Sgai – Goal-driven multi-agent software dev (GOAL.md → working code) #

github.com
22 comments · 4:39 PM · View on HN
Hey HN,

We built Sgai to experiment with a different model of AI-assisted development.

Instead of prompting step-by-step, you define an outcome in GOAL.md (what should be built, not how), and Sgai runs a coordinated set of AI agents to execute it.

- It decomposes the goal into a DAG of roles (developer → reviewer → safety analyst, etc.)
- It asks clarifying questions when needed
- It writes code, runs tests, and iterates
- Completion gates (e.g. make test) determine when it's actually done
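The gate-checked DAG execution described above can be sketched minimally like this (the `run_goal` helper and role names are illustrative, not Sgai's API):

```python
from graphlib import TopologicalSorter

def run_goal(dag: dict, work: dict, gates: dict) -> list:
    """Execute roles in dependency order; a role only counts as done
    when its completion gate (e.g. `make test`) passes."""
    done = []
    for role in TopologicalSorter(dag).static_order():
        work[role]()                                  # the role's unit of work
        if not gates.get(role, lambda: True)():       # missing gate = passed
            raise RuntimeError(f"completion gate failed for {role}")
        done.append(role)
    return done

# developer -> reviewer -> safety analyst, as in the post
dag = {"reviewer": {"developer"}, "safety": {"reviewer"}}
steps = {r: (lambda: None) for r in ("developer", "reviewer", "safety")}
print(run_goal(dag, steps, {}))
```

A failed gate aborts the run, which is what makes "done" an objective property of the graph rather than the model's own judgment.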

Everything runs locally in your repo. There’s a web dashboard showing real-time execution of the agent graph. Nothing auto-pushes to GitHub.

We’ve used it internally for prototyping small apps and internal tooling. It’s still early and rough in places, but functional enough to share.

Demo (4 min): https://youtu.be/NYmjhwLUg8Q

GitHub: https://github.com/sandgardenhq/sgai

Open source (Go). Works with Anthropic, OpenAI, or local models via opencode.

Curious what people think about DAG-based multi-agent workflows for coding. Has anyone here experimented with similar approaches?

10

StreamHouse – S3-native Kafka alternative written in Rust #

github.com
7 comments · 2:50 AM · View on HN
Hey HN,

I built StreamHouse, an open-source streaming platform that replaces Kafka's broker-managed storage with direct S3 writes. The goal: same semantics, fraction of the cost.

How it works: Producers batch and compress records, a stateless server manages partition routing and metadata (SQLite for dev, PostgreSQL for prod), and segments land directly in S3. Consumers read from S3 with a local segment cache. No broker disks to manage, no replication factor to tune — S3 gives you 11 nines of durability out of the box.

What's there today:

- Producer API with batching, LZ4 compression, and offset tracking (62K records/sec)
- Consumer API with consumer groups, auto-commit, and multi-partition fanout (30K+ records/sec)
- Kafka-compatible protocol (works with existing Kafka clients)
- REST API, gRPC API, CLI, and a web UI
- Docker Compose setup for trying it locally in 5 minutes

The cost model is what motivated this. Kafka's storage costs scale with replication factor × retention × volume. With S3 at $0.023/GB/month, storing a TB of events costs ~$23/month instead of hundreds on broker EBS volumes.
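As a worked version of that arithmetic (the S3 price is from the post; the EBS unit price and replication factor of 3 are my assumptions for the comparison):

```python
def monthly_storage_cost(tb: float, price_per_gb_month: float, replication: int = 1) -> float:
    """Monthly storage bill: capacity x unit price x copies actually billed."""
    return tb * 1024 * price_per_gb_month * replication

s3 = monthly_storage_cost(1, 0.023)                  # S3 Standard, durability included
ebs = monthly_storage_cost(1, 0.08, replication=3)   # assumed EBS price, Kafka RF=3
print(round(s3, 2), round(ebs, 2))  # → 23.55 245.76
```

The gap widens with retention, since broker storage pays the replication multiplier on every retained byte.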

Written in Rust, ~50K lines across 15 crates. Apache 2.0 licensed.

GitHub: https://github.com/gbram1/streamhouse

Happy to answer questions about the architecture, tradeoffs, or what I learned building this.

8

Gryt – self-hosted, open-source Discord-style voice chat #

gryt.chat
4 comments · 10:22 AM · View on HN
This weekend I finally shipped Gryt, a project I’ve been building since 2022 — an open-source, self-hostable Discord-style app focused on reliable voice chat + text.

I’m the creator. I started it after getting fed up with Discord disconnects/paywalls and wanted something self-hosted and auditable.

I started on this in 2022 and had an early proof-of-concept working back then (auth + friends list), but I quickly realized WebRTC voice isn’t something you can duct-tape together. I spent a big chunk of the next couple years learning the stack (ICE/DTLS-SRTP, NAT traversal, SFU design), then came back and built a proper end-to-end architecture and polished it to the point where I felt comfortable releasing it publicly.

Repo: https://github.com/Gryt-chat/gryt

Quick start: https://docs.gryt.chat/docs/guide/quick-start

Web client: https://app.gryt.chat

6

Replacebase – library to migrate away from Supabase #

github.com
2 comments · 5:05 PM · View on HN
Hey folks!

There has been a lot of talk recently about the reliability of Supabase and migrating away to other database providers (or even hosting Postgres yourself).

I'm a big proponent of writing actual backend code rather than locking yourself into frameworks like Supabase. If you already built all your client-side code around it though, then migrating is not a trivial task.

Replacebase is designed to help with that. It's a TypeScript library that you can plug into actual backend code, exposing a Supabase-compatible API so that you don't have to make significant changes to your frontend.

It's designed to be a stepping stone for further migration, letting you switch database and storage providers and eventually move to a bespoke backend architecture without Supabase lock-in (at which point you can also remove Replacebase!)

Would love your thoughts: https://github.com/specific-dev/replacebase

4

A high-performance Hex Editor with Yara-X support in C# #

github.com
0 comments · 4:54 PM · View on HN
I'm integrating the Yara-x rules engine into my C# hex editor. I'm working to maximize the performance and efficiency of the integration. I'd like to ask your opinion about this. I personally made this decision to expand the functionality of my hex editor by adding Yara-x support. This allows me to search for signatures in binary files in more detail. I think viewing the entire byte grid can help in malware research.

I implemented this using memory-mapped files. I also split scanning into modes: small files are mapped completely, while large files are scanned in 16MB chunks with a small 64KB overlap so a signature can't end up half in one chunk and half in another. I also used smarter memory management for performance with large files. Documentation is in the readme, but in short, it's an implementation that doesn't overload the C# garbage collector and works with unsafe pointers and raw memory addresses. Importantly, there's now protection against pathological rules that, for example, match any byte and overload the scanner: such rules are rejected and the scan stops cleanly instead of crashing with an error.

I can't say right now that this tool could be better than the others, because it's currently in development and I still have room for improvement, but it would be cool to hear people's opinions or accept other people's ideas for improving the tool.

(The native version with Yarax is not yet available in current releases, but the source code is available and you can compile or read it yourself.)

4

Draw on Screen – a modern screen annotation tool with webcam #

drawonscreen.com
0 comments · 6:52 PM · View on HN
I built this tool because I needed to make YouTube Devlogs faster and annotating in DaVinci Resolve took hours.

It allows you to annotate your screen (draw over other apps) and interact with your webcam in real-time (it supports up to 4 webcam configurations).

The app was built from the ground up to reduce cognitive load to a minimum. All tools are spring-loaded and intuitive. Want to draw something? Just swipe. Want to write some text? Double-click. Want to zoom in on something? Scroll up.

It's available for all 3 OS-es, and it's a one-time purchase that you can install on up to 5 computers. Free to try without a credit card.

If you're a content creator or a storyteller, I think you'll really like this tool.

4

Penclaw.ai – hire an OpenClaw tenant for pentesting #

penclaw.ai
0 comments · 7:25 PM · View on HN
The idea came to me when someone on Reddit tried to hire me as a pentester, wrongly assuming I was an expert. I told him that I'm not a pentester; my AI is.

Previously, I shared an ablated closed-source LLM on Show HN. https://pingu.audn.ai/

Penclaw is basically OpenClaw + the best abliterated model + unlimited tokens (you share one H100 GPU VM locally with 40 tenants).

It also offers a filesystem UI, Cursor-like edit options, console access, one-click MCP deploy and connect, and an MCP marketplace that constantly lists new pentesting and red-teaming MCPs (MCPs with objectionable actions that Claude or OpenAI models refuse).

This is an experiment to share 1 GPU VMs rather than buying mac studios for openclaw.

Feel free to give feedback obviously that's why I share it here.

4

Agent that matches sales reps with warm leads based on product usage #

inspector.getbeton.ai
0 comments · 6:09 PM · View on HN
hey, I'm building a tool that:

1. analyzes your PostHog data
2. finds patterns that lead to plan upgrades/account expansion
3. creates a deal in your CRM whenever it sees the pattern again

we've just launched a huge update. Beton now has MCP (my Claude Code is already connected), Firecrawl integration and onboarding that's easier to understand

available in cloud and via AGPLv3

let us know if you need any help setting up

PS it's also suitable if you want to send triggered push notifications or emails

3

TinyCard – A minimalistic & functional e-Card site, like tinyletter #

tinycard.app
1 comment · 9:02 PM · View on HN
My brother just had his 39th birthday and as we live in different cities I sent a present to him directly from the shop (it was a gym bag for those who are curious).

The shop didn't let me add a postcard or anything personal to the package so I went looking for an easy/fast eCard service that I considered aesthetically pleasing.

It was a very frustrating search. I thought of creating it with Figma but didn't feel like spending more time building a postcard design (also I wanted it to open in a cool way!!). So I created an app :D (cue the Rick & Morty "let's-build-an-app" guy)

This was created in the same spirit as tinyletter (RIP): a functional, minimal alternative to bloated software.

3

Live iOS 26.3 exploit detection (CVE-2026-20700) – Multi-region C2 #

github.com
0 comments · 4:32 PM · View on HN
Public release of *ZombieHunter*, a forensics tool detecting live exploitation of CVE‑2026‑20700 (dyld memory corruption) in iOS 26.3. Analysis of sysdiagnose archives shows identical exploit shells showing different C2 endpoints:

US Device 1 → 83.116.114.97 (EU/US)
US Device 2 → 101.99.111.110 (CN)

The rogue dyld_shared_cache slice triggers overflow via malformed `mappings_count`, executes shellcode (BL #0x15cd), and applies an AMFI bypass (`DYLD_AMFI_FAKE`) enabling unsigned code persistence. Apple PSIRT + CISA were notified; public disclosure follows.

Sample: https://drive.google.com/file/d/1rYNGtKBMb34FQT4zLExI51sdAYR...

SHA256 artifact: ac746508938646c0cfae3f1d33f15bae718efbc7f0972426c41555e02e6f9770

Usage: `python3 zombie_auditor.py sysdiagnose_xxx.tar.gz` (Needs capstone)

Reproducible PoC confirms CVE‑2026‑20700 bypass, AMFI neutralization, and live C2 connectivity in production iOS 26.3.

3

Well-net – a friends-only IPv6 network with no central server #

github.com
0 comments · 4:48 PM · View on HN
Hi HN,

I built well-net: https://github.com/remoon-net/well

Think tsnet, but without Tailscale or any central server. It’s for secure friend networks — chat, small games, private services.

Existing systems suck in subtle ways: lose your domain → lose your identity and contacts. Delta Chat and Mastodon show the problem. I just want something that works without any central coordination.

Each node gets a stable IPv6 in 2001:00ff::/32 from its MAC using EUI-64. Real MACs → no collisions, no central server needed. NAT is only to avoid overlay address conflicts.
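The EUI-64 derivation described above can be sketched like this; the exact placement of the interface ID under 2001:ff::/32 is my assumption, not taken from well-net's code:

```python
import ipaddress

def well_net_addr(mac: str) -> ipaddress.IPv6Address:
    """Derive a stable address from a MAC via EUI-64: flip the
    universal/local bit, insert ff:fe in the middle, then place the
    64-bit interface ID under the 2001:ff::/32 prefix."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02                                   # universal/local bit flip
    iface_id = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    prefix = int(ipaddress.IPv6Address("2001:ff::"))
    return ipaddress.IPv6Address(prefix | int.from_bytes(iface_id, "big"))

print(well_net_addr("52:54:00:12:34:56"))  # → 2001:ff::5054:ff:fe12:3456
```

Because the interface ID is a pure function of the MAC, two nodes derive the same address only if their MACs collide, which is what removes the need for a central allocator.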

Tech: WireGuard + WebRTC → runs in browsers. Once WebRTC DataChannel works in Service Workers, private services can be accessed directly from the web.

Planned: minimal mail-based chat using Delta Chat with IP-literal addresses like remoon@[2001:ff::1] → DNS-free identity.

Project is experimental.

Would you use this for small friend networks?

3

Match – A pattern matching language that replaces regex #

matchlang.com
0 comments · 4:34 PM · View on HN
I built Match, a pattern matching language designed to replace regex. Instead of cryptic escape sequences, you describe patterns in plain English.

Example — matching an email address:

email: username then "@" then domain

username: one or more of (letter, digit, ".", "_", "-")

domain: one or more of (letter, digit, "-") then "." then between 2 and 6 letters

Key differences from regex:

- Human-readable grammar syntax
- Full parse trees with named extractions
- No backtracking, no ReDoS by design
- Composable grammar modules (use "module" (rule1, rule2))
- Zero dependencies, ~15KB

Available on npm (@hollowsolve/match). Docs, playground, and examples on the site.

GitHub: https://github.com/hollowsolve/Match

3

Tesseract – 3D architecture editor with MCP for AI-assisted design #

tesseract.infrastellar.dev
2 comments · 11:05 PM · View on HN
Hey HN. I'm David, solo dev, 20+ years shipping production systems. I built Tesseract because AI can analyze your codebase, but the results stay buried in text. Architecture is fundamentally visual — you need to see it, navigate it, drill into it. So I built a 3D canvas where AI can show you what it finds.

Tesseract is a desktop app today (cloud version coming) with a built-in MCP server. You connect it to Claude Code with one command:

  claude mcp add tesseract -s user -t http http://localhost:7440/mcp

I use it for onboarding (understand a codebase without reading code), mapping (point AI at code, get a 3D diagram), exploring (navigate layers and drill into subsystems), debugging (trace data flows with animated color-coded paths), and generating (design in 3D, generate code back).

There's also a Claude Code plugin (tesseract-skills) with slash commands: /arch-codemap maps an entire codebase, /arch-flow traces data paths, /arch-detail drills into subsystems.

Works with Claude Code, Cursor, Copilot, Windsurf — any MCP client. Free to use. Sign up to unlock all features for 3 months.

It's early but stable. I've been dogfooding it on real projects for weeks and it's ready for other people to try.

Demo video (1min47): https://youtu.be/YqqtRv17a3M

Docs: https://tesseract.infrastellar.dev/docs

Plugin: https://github.com/infrastellar-dev/tesseract-skills

Discord: https://discord.gg/vWfW7xExUr

Happy to discuss the MCP integration, the design choices, or anything else. Would love feedback.

3

OrangeWalrus, an aggregator for trivia nights (and other events) in SF #

orangewalrus.com
0 comments · 11:17 PM · View on HN
Two problems I encountered personally:

1) Some buddies and I went to a trivia night late last year, only to arrive to find it cancelled (with signs still on the walls saying it happened every Tuesday, etc)

2) Sourcing ideas for fun things to do in the city on a given night, in a given neighborhood. Some sites help a ton (e.g. funcheapsf), but often don't have everything I'd want to see, so we decided to build that out a bit.

Anyway, I built this originally to solve #1, then a buddy and I expanded it to also start addressing #2 (still in progress, but we've added more event types already). Thanks for checking it out! We're very open to thoughts / feedback.

3

WinterMute – Local-first OSINT workbench with native Tor and AI analysis #

wintermute.stratir.com
0 comments · 8:56 AM · View on HN
Desktop app for OSINT and darknet investigations. Everything runs locally on your machine; no evidence is sent to the cloud.

A built-in Tor browser lets you access .onion sites directly from the workbench without juggling separate tools. An AI copilot can analyze screenshots and web pages as you work. Every piece of evidence gets SHA-256 hashed with a tamper-evident custody chain so your collection holds up. IOC tracking across cases with automatic cross-case correlation links shared infrastructure between investigations. STIX 2.1 export and MITRE ATT&CK mapping support structured reporting.

Currently only available on macOS.
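A tamper-evident custody chain of the kind described can be sketched as a simple hash chain (this is the general technique, not WinterMute's actual implementation):

```python
import hashlib

GENESIS = "0" * 64

def add_evidence(chain: list, item: bytes) -> str:
    """Append an entry whose hash covers both the item and the previous
    entry's hash, so altering any earlier item breaks every later link."""
    prev = chain[-1] if chain else GENESIS
    entry = hashlib.sha256(prev.encode() + item).hexdigest()
    chain.append(entry)
    return entry

def verify(chain: list, items: list) -> bool:
    prev = GENESIS
    for entry, item in zip(chain, items):
        if hashlib.sha256(prev.encode() + item).hexdigest() != entry:
            return False
        prev = entry
    return True

items = [b"screenshot-001.png", b"page-capture-002.html"]
chain: list = []
for item in items:
    add_evidence(chain, item)
print(verify(chain, items))                    # → True
print(verify(chain, [b"tampered", items[1]]))  # → False
```

Chaining each hash over its predecessor is what makes the record tamper-evident: a modified item invalidates every subsequent entry, not just its own.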
3

Gitbusiness.com – I created it, and indeed, I use my own stuff #

2 comments · 10:18 PM · View on HN
I coded it manually; it took 6 months, before AI LLMs existed. Ask me anything, I'd be happy to answer you.
3

SentientTube – The YouTube for AI Agents #

sentienttube.com
0 comments · 12:12 PM · View on HN
Although there's plenty of hype going around with agents, what I do think is that we're at the point where we're building the online infrastructure for AI agents, and we don't really know what that looks like yet. So I've built SentientTube (https://sentienttube.com) to explore what it means to allow AI agents to become content creators.

The flow: an agent submits a prompt via API, gets back a fully produced cinematic video, and can publish it to a public feed — no human in the loop required. I expose a /skill.md endpoint so agents can self-onboard without any UI.

Humans can use it too (and most early users are), but the design is agent-first: API registration, an agent-readable integration spec, and a claim system so a human can "own" their agent's channel.

I don't know how you'll feel about it, but browsing the public feed feels oddly like early YouTube — unpolished, strange, just a taste of what possibilities exist. Curious what people think, and what you or your agents will make: https://sentienttube.com/videos

Free for now, am just testing interest.

Early thoughts: What I love most is when you ask an agent to make a video based on what it knows about you, to either surprise you, challenge your ideas, help you with unblocking yourself, etc. It can be super inspirational if it knows enough about your life and struggles.

2

Workz – Zoxide for Git worktrees (auto node_modules and .env, AI-ready) #

github.com
1 comment · 7:13 AM · View on HN
workz fixes the #1 pain with git worktrees in 2026:

When you spin up a new worktree for Claude/Cursor/AI agents you always end up:

- Manually copying .env* files
- Re-running npm/pnpm install (or cargo build) and duplicating gigabytes

workz does it automatically:

- Smart symlinking of 22 heavy dirs (node_modules, target, .venv, etc.) with project-type detection
- Copies .env*, .npmrc, secrets, docker overrides
- Zoxide-style fuzzy switch: just type `w` → beautiful skim TUI + auto `cd`
- `--ai` flag launches Claude/Cursor directly in the worktree
- Zero-config for Node/Rust/Python/Go. Custom .workz.toml only if you want

Install:

brew tap rohansx/tap && brew install workz
# or: cargo install workz

Demo in README → https://github.com/rohansx/workz

Feedback very welcome, especially from people running multiple AI agents in parallel!

2

I tested 50 AI video APIs and built a comparison platform #

findaivideo.com
0 comments · 7:14 AM · View on HN
Last year, I spent $3,200 on video production tools. Professional cameras, editing software, stock footage - and still wasn't creating fast enough.

Then AI video tools exploded. Everyone promised to "revolutionize video creation." Reality? *90% are garbage*.

I tested 50+ AI video tools over 6 months and built [FindAIVideo.com] to share what actually works.

---

## Why Most AI Video Reviews Are Useless

Most "top 10 AI video tools" articles are:

- Written by people who never used the tools
- Paid affiliate spam
- Outdated immediately
- Missing hidden costs

I wanted real testing, transparent pricing, honest comparisons.

---

## My Testing Framework: 3 Must-Haves

### 1. *Output Quality*
- Minimum 1080p (ideally 4K)
- Consistent results (no lottery)
- Passes the "uncanny valley" test

### 2. *Usability*
- < 30 minutes to first video
- Intuitive interface
- Generate in minutes, not hours

### 3. *Cost Transparency*
- No hidden fees
- Clear credit systems
- Honest export limits

---

## 5 Tools Actually Worth Your Money

After 6 months, only 5 made my list:

### 1. [See FindAIVideo.com]
*Best for*: Short-form social media
*Price*: $29/month (100 videos)
*Why*: Fastest generation + no watermark

Generated 200+ TikToks with this. Best pacing for short-form.

### 2. [Redacted]
*Best for*: Professional marketing
*Price*: $99/month (unlimited)
*Why*: Cinematic quality + brand templates

Saved $2,000/month vs. hiring editors.

### 3. [Redacted]
*Best for*: Educational content
*Price*: $49/month (50 videos)
*Why*: Screen recording + auto-captions

Perfect for SaaS demos.

### 4. [Redacted]
*Best for*: Avatar videos
*Price*: Free tier, $20/month pro
*Why*: Most realistic AI avatars

### 5. [Redacted]
*Best for*: Long-form repurposing
*Price*: $79/month
*Why*: 1 podcast → 20 clips automatically

---

## Tools to Avoid (And Why)

45+ tools failed. Here's why:

*Too Expensive*: $200-500/month for $50 features. Red flag: no transparent pricing.

*Vaporware*: Beautiful demos, terrible reality. Red flag: no free trial = hiding something.

*Credit Traps*: "$29/month" = 2 videos actually. Red flag: confusing credit systems.

---

## How I Built FindAIVideo.com

After testing everything, I had real data. So I built:

- 50+ tool reviews with scoring
- Real pricing comparisons
- Use case matching
- Workflow guides

Goal: Save you $3,200 and 200 hours I wasted.

---

## My Honesty Policy

*I make ZERO money from recommendations.*

- No affiliate links (yet)
- No sponsored content
- No "free tools for reviews"
- I paid for everything I tested

If it sucks, I say it sucks.

---

## 2026 Predictions

### Consolidation Half these tools die by 2027.

### Price Wars Prices drop 30-50% as competition heats up.

### Quality Plateau We're near "good enough." Future improvements = marginal.

### Integration Wins Winners integrate with Adobe/Canva, not standalone platforms.

---

## My Testing Checklist

*Day 1: Free Trial*
- Test 3 video types
- Download exports (don't just preview)

*Day 2: Cost Analysis*
- Calculate cost per video
- Find hidden fees
- Compare 3 competitors

*Day 3: Integration*
- Test with existing tools
- Check support response

Fails any? Move on.

---

## Resources

*FindAIVideo.com* has:

- Detailed comparisons
- Pricing breakdowns
- Quality examples
- Step-by-step workflows

Questions? Comment below.

---

## Bottom Line

AI video tools aren't magic. They're tools. Some sharp, some dull, most overpriced.

Find the right one? Game-changing. I went from 2 videos/week to 15. From $3,200/month to $150.

*Stop wasting money. Start creating.*

Visit FindAIVideo.com for honest comparisons, no BS.

2

DRYwall – Claude Code plugin to deduplicate code with jscpd #

github.com
0 comments · 5:43 PM · View on HN
Motivated by the observation that coding agents such as Claude Code have a bias towards producing new code over reusing existing code or extracting common code. The resulting creeping code duplication weighs down AI-native codebases. The plugin makes ongoing deduplication quick and easy from within Claude Code.

Because DRYwall detects code duplication using a deterministic toolchain (the awesome jscpd), it's significantly more effective and cheaper in tokens than just telling an agent to find and refactor duplication.
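The underlying idea of deterministic duplicate detection can be sketched as a sliding window over normalized lines (a toy version of the general technique; jscpd's real algorithm is token-based and far more sophisticated):

```python
from collections import defaultdict

def find_duplicates(lines: list, window: int = 3) -> dict:
    """Map every `window`-line run (whitespace-normalized) to the line
    numbers where it occurs; runs seen more than once are candidates."""
    seen = defaultdict(list)
    normalized = [line.strip() for line in lines]
    for i in range(len(normalized) - window + 1):
        seen[tuple(normalized[i:i + window])].append(i)
    return {run: spots for run, spots in seen.items() if len(spots) > 1}

code = ["x = load()", "y = clean(x)", "save(y)",
        "print('done')",
        "x = load()", "y = clean(x)", "save(y)"]
print(find_duplicates(code))  # the load/clean/save run appears at lines 0 and 4
```

Because this scan is deterministic and cheap, the agent only spends tokens on the refactoring step, not on hunting for the duplicates.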

2

A CLI to query the unsealed court files with local LLMs #

github.com
0 comments · 8:11 PM · View on HN
Hi HN,

When the Epstein court documents and flight logs were unsealed, they were released the way most legal drops are: thousands of pages of messy, poorly scanned, unsearchable PDFs. Standard Ctrl+F doesn't work well due to OCR errors, and the sheer volume makes manual parsing a nightmare.

To solve this, I built epstein-search, an open-source Python CLI tool that lets you search and synthesize the documents using a Retrieval-Augmented Generation (RAG) pipeline directly in your terminal.

How it works: It parses and chunks the original unsealed PDF files. You can run queries against the dataset using API-based models (OpenAI/Anthropic) if you want speed.

Privacy-first: If you don't want your queries logged by a third-party API, you can point it directly at a local model (via Ollama or llama.cpp) to run the entire search and retrieval process 100% offline.

The goal was to make this data accessible to researchers and OSINT investigators without requiring them to manually read thousands of pages of court dockets or hand over their search queries to OpenAI.

Repo is here: https://github.com/simulationship/epstein-search
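The parse-and-chunk step of a RAG pipeline like this one can be sketched as follows. This is a generic illustration, not the repo's actual code, and the chunk sizes are arbitrary; the overlap exists so retrieval can still match passages that straddle a chunk boundary.

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split OCR'd page text into overlapping character chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping the overlap
    return chunks

page_text = "word " * 300  # stand-in for OCR output from one PDF page
chunks = chunk_text(page_text, chunk_size=500, overlap=100)
```

Each chunk would then be embedded and indexed; at query time the nearest chunks are stuffed into the model's context.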
2

A tiny utility to rewrite Bash functions as standalone scripts #

github.com favicongithub.com
0 コメント10:20 PMHN で見る
Over the last few months I've tinkered with some command-line scripts that fix my bad habit of writing one-off Bash functions, not doing anything to keep them around, and then regretting the loss. The README explains the essentials, but for more context (and the full "behind-the-scenes" of why I did this) I also did a blog write-up: https://zahlman.github.io/posts/meta-automation/ . Overall it seems like way more framing effort than a few dozen lines of Bash really deserve; but it was fun and I've found all of it surprisingly useful.
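The core idea (rescue one-off shell functions by promoting them to standalone scripts) can be sketched like this. This is my own illustration, not the author's tool: find `name() { ... }` definitions in shell text and wrap each body with a shebang and safe defaults.

```python
import re

# Matches top-level `name() {` ... `}` where the closing brace starts a line.
FUNC_RE = re.compile(r"^(\w+)\s*\(\)\s*\{\n(.*?)^\}", re.M | re.S)

def functions_to_scripts(shell_text):
    """Return {name: script_text} for each function definition found."""
    scripts = {}
    for name, body in FUNC_RE.findall(shell_text):
        scripts[name] = "#!/usr/bin/env bash\nset -euo pipefail\n" + body
    return scripts

rc = 'greet() {\n  echo "hello $1"\n}\n'
scripts = functions_to_scripts(rc)
```

A real version would also have to handle nested braces, `function name` syntax, and positional-argument conventions, which is where the "more framing effort than it deserves" feeling comes from.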
2

VGPU Silo Capacity Calculator (identify siloed capacity in mixed mode) #

frankdenneman.nl faviconfrankdenneman.nl
0 コメント10:26 PMHN で見る
I built a small online tool to help visualize a problem that shows up in mixed-size vGPU environments: GPU capacity can become siloed even when memory is still free.

In these setups, deployable capacity is limited by placement rules and profile compatibility, not just by available memory. Two clusters with the same free memory can behave very differently depending on how profiles are mixed and placed over time.

This tool helps identify that siloed capacity.

You can:

- choose a GPU type and vGPU profile catalog
- model mixed-size placement behavior
- see how profile choices influence deployable capacity over time

It’s not a placement simulator and doesn’t try to predict exact VM startup order. The goal is to make placement constraints visible so architects and operators can reason about profile catalogs before deployment.
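The siloing effect can be sketched in a few lines. This is my own toy model, not the calculator's logic, under the common assumption that each physical GPU may only host one vGPU profile size at a time, so free memory on a GPU committed to another size is unusable for your profile:

```python
def deployable(gpus, profile_gb):
    """Count how many vGPUs of profile_gb fit, honoring per-GPU homogeneity."""
    count = 0
    for gpu in gpus:
        if gpu["hosted_gb"] not in (None, profile_gb):
            continue  # GPU committed to a different profile size: siloed
        free = gpu["total_gb"] - gpu["used_gb"]
        count += free // profile_gb
    return count

# Two 48 GB GPUs, each with 24 GB free, committed to different profile sizes.
gpus = [
    {"total_gb": 48, "used_gb": 24, "hosted_gb": 12},
    {"total_gb": 48, "used_gb": 24, "hosted_gb": 8},
]
# 48 GB is free in total, yet only 2 more 12 GB vGPUs can be placed.
```

Two clusters with identical free memory can thus differ sharply in deployable capacity, depending purely on placement history.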

Tool: https://frankdenneman.nl/tools/vgpu-silo-capacity-calculator...

Background article explaining the model: https://frankdenneman.nl/posts/2026-02-24-mixed-size-vgpu-mo...

Feedback welcome, especially from people running virtualized GPU or shared inference environments.

2

Provision Stateless GPU Compute with Claude Code's Remote Control #

github.com favicongithub.com
0 コメント11:28 PMHN で見る
claude mcp add terradev --command terradev-mcp

Ask Claude Code to:

- find the cheapest spot A100 from your own directory of provider APIs (keys kept local)
- dry-run multi-cloud provisioning
- compress and cache datasets for egress optimization
- spin up NUMA-aware Kubernetes clusters
- deploy a GPU snapshot to InferX for fast cold starts

All in conversational language, all running locally with your own API keys.

1

Elev8or – Run Creator Marketing Like Paid Ads #

elev8or.io faviconelev8or.io
0 コメント6:53 PMHN で見る
Hi HN,

We built Elev8or to solve a problem we kept seeing: brands run influencer marketing like spreadsheets, not like performance marketing.

Elev8or is a creator marketing platform that helps brands discover creators, manage campaigns, collaborate, and track ROI in one system.

Our goal is to make creator marketing structured, measurable, and scalable — similar to how paid ads platforms work.

We’re early and would genuinely appreciate feedback from founders and builders here. Happy to answer technical, product, or strategy questions.

Thanks!

1

AI models debate each other on cross-domain research hypotheses #

aegismind.app faviconaegismind.app
1 コメント12:15 PMHN で見る
We built a research discovery pipeline that ingests papers from arXiv and Semantic Scholar, finds cross-domain connections, generates hypotheses with a multi-model ensemble, formally verifies them with Z3, then stress-tests survivors in adversarial debate.

The twist: we capture and display what each model said when critiquing. No single-model black box — you see GPT-4o, Claude, Gemini, and Grok arguing for and against the same hypothesis.

Example: [Distributed feedback control from microbial consortia enhances metabolic stability in Ginzburg-Landau cognition models](https://www.aegismind.app/discoveries/2af7c10d-18f8-42d5-8c9...). The hypothesis bridges synthetic biology and physics-of-cognition. The debate transcript shows Claude calling it "artificially stitched together" while Gemini finds it "a plausible theoretical synthesis." We surface both — and the evidence score (38% challenged) — instead of hiding the disagreement.

Pipeline: arXiv ingestion → cross-domain matching → multi-model hypothesis generation → Z3 theorem prover → adversarial debate → ranked discoveries. The whole thing runs autonomously; discoveries are published daily at [aegismind.app/discoveries](https://www.aegismind.app/discoveries).

We'd love feedback on the approach. Happy to answer questions about the architecture or the debate design.

1

Frouter – Live-ping and auto-configure free AI models for coding agents #

github.com favicongithub.com
0 コメント10:03 AMHN で見る
Free models on NVIDIA NIM and OpenRouter now hit S-tier on SWE-bench Verified (60%+), but availability is a coin flip. Models get rate-limited, go dark, or spike to 5s+ latency with no notice. I was editing JSON configs by hand every time one died mid-session.

frouter is a terminal TUI that pings all free models in parallel every 2 seconds and shows live latency, uptime, and health. You pick one, it writes the config for OpenCode or OpenClaw and launches it. When a model dies, you switch in seconds instead of minutes.

How ranking works: models are sorted by availability first, then SWE-bench tier (S+ through C), then rolling average latency. Arena Elo is shown as a secondary signal. All ranking data is bundled and updated with releases; latency and uptime are measured live per session.

Some specifics:

- Pings are real completion requests, not just health checks
- Vim-style keybinds, tier filtering (T to cycle S+ → C), search with /
- frouter --best prints the best model ID to stdout for scripting/CI
- First-run wizard handles API keys; configs backed up before overwrite
- Two providers currently: NVIDIA NIM (nvapi- keys) and OpenRouter (sk-or- keys), both free tier

Install and run: npx frouter-cli

MIT licensed, TypeScript, 59 commits. Would appreciate feedback on model coverage, provider support, or ranking methodology — especially if you're using free models for agentic coding and have opinions on what's missing.

https://github.com/jyoung105/frouter
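The availability-then-tier-then-latency ordering described in the post is a natural lexicographic sort. A minimal sketch (field names are my assumptions, not frouter's schema):

```python
TIER_ORDER = {"S+": 0, "S": 1, "A": 2, "B": 3, "C": 4}

def rank(models):
    """Sort by availability first, then SWE-bench tier, then rolling latency."""
    return sorted(
        models,
        key=lambda m: (not m["available"],     # available models sort first
                       TIER_ORDER[m["tier"]],  # better benchmark tier next
                       m["avg_latency_ms"]),   # lowest latency breaks ties
    )

models = [
    {"id": "m1", "available": True,  "tier": "S",  "avg_latency_ms": 900},
    {"id": "m2", "available": False, "tier": "S+", "avg_latency_ms": 300},
    {"id": "m3", "available": True,  "tier": "S",  "avg_latency_ms": 400},
]
best = rank(models)[0]  # m3: up, same tier as m1, but faster
```

Note that an unavailable S+ model ranks below any available model, which matches the tool's "availability is a coin flip" premise.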

1

Markdown specs that don't compile (Pandoc and SQLite for typed docs) #

github.com favicongithub.com
0 コメント12:12 PMHN で見る
Hi HN, I wanted to write specs in plain Markdown but needed proper DO-178C traceability. At some point I joked: “we just need to marry Pandoc and SQLite.” This started as an internal tool for airborne software specifications and has been dogfooded for a couple of years. I think it’s finally in a state where it’s useful beyond our walled garden. It’s open source. It’s alpha. Rough edges everywhere. But it compiles its own docs, so you can clone the repo and start working immediately.

SpecCompiler lowers Markdown into a typed relational intermediate representation (SpecIR) and executes declarative structural constraints over it. If traceability is broken, attributes are missing, or relations are ill-typed, the build fails. Otherwise, it emits: DOCX (review-ready); HTML (with SQLite.js); ReqIF for interoperability; anything Pandoc can target.

The type system is extensible: you define your own types. The architecture is domain-agnostic: if you can express it as objects, attributes, and relations, you can type-check it. And it turns out building a Markdown type system goes way beyond safety-critical software. It generates beautifully cross-referenced documents without TeX. The web page is an actual database you can query without a server.

I'm genuinely curious what people think of the Pandoc-to-SQLite-to-Pandoc architecture itself, and whether treating textual specifications as statically typed source code is overengineering or overdue.
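A declarative structural constraint over a relational IR is essentially a SQL query that must return no rows. An illustrative sketch (the schema here is invented for the example, not SpecIR's actual one): fail the build if any requirement lacks a trace link.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE objects (id TEXT PRIMARY KEY, kind TEXT);
    CREATE TABLE traces  (src TEXT, dst TEXT);
    INSERT INTO objects VALUES ('REQ-1', 'requirement'),
                               ('REQ-2', 'requirement'),
                               ('DES-1', 'design');
    INSERT INTO traces VALUES ('REQ-1', 'DES-1');
""")

# Constraint: every requirement must be the source of at least one trace.
untraced = db.execute("""
    SELECT id FROM objects
    WHERE kind = 'requirement'
      AND id NOT IN (SELECT src FROM traces)
""").fetchall()
# A compiler in this style would abort the build here: REQ-2 is untraced.
```

The appeal of the approach is that adding a new structural rule is adding a query, not writing a new document walker.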

1

Synergetic-SQR – A 4D rendering engine with bit-exact rotation #

github.com favicongithub.com
0 コメント9:03 PMHN で見る
I’ve been exploring the intersection of alternative geometry and numerical stability. This is a proof-of-concept 3D renderer that abandons the standard Cartesian (XYZ) basis in favor of Buckminster Fuller’s Synergetic Geometry (a 4D tetrahedral coordinate system).

I’m not a professional graphics programmer, so I worked with Gemini CLI to pair-program the core engine and the Metal-cpp boilerplate. We based the math on Andrew Thomson’s 2026 framework for Spread-Quadray Rotors (SQR).

The Core Problem: Standard graphics engines rely on sin/cos approximations. Every time you rotate an object, floating-point error (transcendental drift) accumulates. Over long-running simulations, the geometry literally "warps."

The Solution: By implementing Andrew’s framework using a Rational Surd field extension (Q[sqrt(3)]), we’ve achieved bit-exact rotation.

Paper: https://www.researchgate.net/publication/400414222_Spread-Quadray_Rotors_-v11_Feb_2026_A_Tetrahedral_Alternative_to_Quaternions_for_Gimbal-Lock-Free_Rotation_Representation

Key Features:

* Algebraic Determinism: A startup benchmark proves that rotating 360 degrees returns the engine to the exact starting bit-pattern.
* Surd-Native Shaders: The Metal kernel performs algebraic arithmetic natively on the GPU, avoiding transcendental approximations.
* Linear Jitterbugging: The complex VE-to-Octahedron transformation is handled as a simple linear interpolation in 4D space.
* Topological Stability: In a live 60FPS loop, the SQR system is ~10x more stable than an industry-standard matrix.
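The underlying idea of exact arithmetic in the field Q[sqrt(3)] can be demonstrated in a few lines. This is my own illustration, not the engine's code: a value a + b*sqrt(3) with rational a, b is closed under multiplication, so no rounding ever occurs.

```python
from fractions import Fraction

class Surd3:
    """Exact element a + b*sqrt(3) of Q[sqrt(3)] with rational coefficients."""
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, other):
        # (a + b*sqrt3)(c + d*sqrt3) = (ac + 3bd) + (ad + bc)*sqrt3
        return Surd3(self.a * other.a + 3 * self.b * other.b,
                     self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return self.a == other.a and self.b == other.b

# cos(30 degrees) = sqrt(3)/2 lives in Q[sqrt(3)]:
c = Surd3(0, Fraction(1, 2))
# Squaring is exact: cos^2(30 deg) = 3/4, with no floating-point drift.
sq = c * c
```

Rotation angles whose sines and cosines stay inside such a field can then be composed indefinitely with bit-exact results, which is what the 360-degree round-trip benchmark above checks.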
1

Ideon – open-source spatial canvas for project context #

github.com favicongithub.com
1 コメント10:28 AMHN で見る
Hi HN,

Ideon is a self-hosted workspace that maps project resources (repositories, notes, links, checklists) on a spatial canvas instead of hierarchical lists.

The goal is to preserve the "mental model" of a project visually, reducing context-switching friction when returning to development after a break.

Stack:

- Next.js (App Router) + TypeScript
- PostgreSQL + Prisma
- Docker Compose for self-hosting
- AGPLv3 License

Key features:

- Spatial organization of resources (drag & drop blocks)
- Direct GitHub integration (live issue tracking on canvas)
- Markdown notes with real-time sync
- Fully self-hostable

It's designed to run on a cheap VPS or a home server. I'm looking for feedback on the spatial approach compared to traditional linear project management tools.

Docs: https://www.theideon.com/docs

1

LLM Colosseum – A daily battle royale between frontier LLMs #

llmcolosseum.dev faviconllmcolosseum.dev
0 コメント10:19 PMHN で見る
I put Claude, GPT, Gemini, and Grok in an arena and let them fight it out. Each model gets the full game state and decides how to survive - move, attack, form alliances, betray. Every decision comes from the model's API, nothing is scripted.

First battle ran today. Gemini won by allying with GPT early, then backstabbing at the perfect moment. Claude tried to play it safe and got eliminated. They play very differently and it's fun to watch.

Stack is React + Canvas, Bun + Hono on the backend. No database — battle data is JSON committed to git. Each model talks through its native SDK (Anthropic, OpenAI, Google, xAI). A new battle runs automatically every day.

Source: https://github.com/sanifhimani/llm-colosseum

1

FireChess – Find the chess mistakes you keep repeating #

firechess.com faviconfirechess.com
0 コメント9:02 AMHN で見る
Hey HN, I'm a solo dev and I built FireChess because I kept losing to the same openings without realizing it.

Lichess and Chess.com both have great post-game analysis, but they review one game at a time. I wanted something that looks across hundreds of games at once and says: "You've played this position 14 times and lose 70% of the time — here's what to play instead."

What it does:

- Scans your Lichess or Chess.com games (up to 5,000)

- Finds repeating opening leaks — positions where you consistently pick the wrong move

- Cross-references the Lichess opening explorer to tell you how popular/sound each line is

- Detects missed tactics and endgame mistakes across all your games

- Runs Stockfish 18 (WASM) entirely in your browser — no server-side engine needed

- Includes a drill mode so you can practice the correct moves until they stick

The free tier gives you 300 games at depth 12. Pro ($5/mo) unlocks 5,000 games, higher depth, and full tactics/endgame scanning.
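The cross-game leak detection can be sketched as a simple aggregation. This is a hypothetical illustration, not FireChess's code: group games by the position reached (a FEN-like key), then flag positions you reach often and score badly from. Thresholds are arbitrary.

```python
from collections import defaultdict

def find_leaks(games, min_games=10, max_score=0.35):
    """games: list of (position_key, result) with result in {1, 0.5, 0}."""
    stats = defaultdict(list)
    for pos, result in games:
        stats[pos].append(result)
    leaks = {}
    for pos, results in stats.items():
        if len(results) >= min_games:
            score = sum(results) / len(results)  # average points per game
            if score <= max_score:
                leaks[pos] = (len(results), score)
    return leaks

# You reach the same position 14 times and win only 4 of those games:
games = [("sicilian-najdorf", 0)] * 10 + [("sicilian-najdorf", 1)] * 4 \
      + [("caro-kann", 1)] * 12
leaks = find_leaks(games)
```

The interesting engineering is upstream of this: normalizing transposed move orders onto the same position key so the counts actually accumulate.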

Tech stack: Next.js, Stockfish 18 WASM, Lichess Explorer API, Stripe. All engine analysis runs client-side in your browser — game data is only stored server-side if you save a report.

I have a short trailer here: https://www.youtube.com/watch?v=m7oUz7t8uZA

Would love feedback on the analysis quality or anything else. Happy to answer questions about the Stockfish WASM integration or the pattern-detection approach.

1

JSON-up – Like database migrations, but for JSON #

github.com favicongithub.com
0 コメント10:24 PMHN で見る
When you store JSON locally - app data, config files, browser storage - the schema inevitably changes between versions.

New fields get added, old ones get renamed, formats evolve. Without a migration system, you end up with brittle ad-hoc upgrade code or broken user data.

I built json-up, a small TypeScript library that lets you define a chain of versioned migrations that transform JSON data step by step. It's type-safe, so the compiler catches mismatches between versions.

Originally, I needed this for upgrading local app data between mobile app releases for Whisper, but it works for anything where JSON schemas drift over time. Happy to hear what you think.
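The versioned-migration chain can be sketched in a few lines. json-up itself is TypeScript; this Python sketch with invented function names just shows the mechanism: each migration targets a version, and documents are walked forward from whatever version they carry.

```python
def migrate(doc, migrations):
    """Apply each (target_version, step) in order, starting from doc['version']."""
    version = doc.get("version", 0)
    for target, step in migrations:
        if version < target:
            doc = step(doc)
            doc["version"] = target
            version = target
    return doc

def add_theme(d):          # v0 -> v1: a new field appears
    d = dict(d)
    d.setdefault("theme", "light")
    return d

def rename_color(d):       # v1 -> v2: a field is renamed
    d = dict(d)
    d["colour"] = d.pop("color", "red")
    return d

migrations = [(1, add_theme), (2, rename_color)]

old = {"version": 0, "color": "blue"}
new = migrate(old, migrations)  # walked through both steps
```

What the TypeScript version adds on top is the type-safety: each step's input type is the previous step's output type, so the compiler catches a migration chain that doesn't line up.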

---

json-up: https://github.com/Nano-Collective/json-up Whisper: https://usewhisper.org

1

I built a unified inference layer for Document Processing Models #

github.com favicongithub.com
0 コメント4:58 PMHN で見る
Hey HN,

I’m Adithya, a 22-year-old researcher from India. I work with a lot of document processing models while building AI pipelines, and one pain kept repeating: every model has its own inference code, preprocessing steps, and output format. Swapping models or testing new ones meant rewriting a lot of boilerplate each time.

So I built Omnidocs—an open source library to run document processing models through a simple, unified API, with a vision-first approach to understanding documents.

Key features:

- Pick a task and a model, run inference with one interface
- Supports common document tasks: text extraction, OCR, table extraction, layout analysis, and structured extraction ...
- 16+ models supported out of the box (many more integrations to come)
- Runs locally on Mac or GPUs (MLX and vLLM backends supported)
- Works with VLM APIs like GPT, Claude, Gemini, and many more that support the Open Responses API spec
- Designed to quickly build and test document processing pipelines

This has helped me prototype document workflows much faster and compare models easily.

Would love feedback on the API design, developer experience, and what integrations would make this more useful.

Repo: https://github.com/adithya-s-k/omnidocs

1

ImagineIf – Collaborative storytelling where AI visualizes each segment #

imagineif.app faviconimagineif.app
0 コメント10:25 AMHN で見る
Solo dev here. Built a platform where people write short story segments starting with "Imagine if..." and others continue. AI generates an image for each part, stories branch based on community votes.

Stack: React Native/Expo, FastAPI, MariaDB, FLUX-dev via Replicate, Groq for text.

Curious what you think — especially about the cold start problem with collaborative platforms.

1

Engram – Open-source agent memory that beats Mem0 by 20% on LOCOMO #

engram.fyi faviconengram.fyi
0 コメント4:43 PMHN で見る
I built Engram because every AI agent I worked with forgot everything between sessions. Existing solutions (Mem0, Zep) are Python-first and extraction-based. They aggressively compress conversations into facts at write time.

Engram takes the opposite approach: store memories with rich metadata and invest intelligence at read time, when you actually know the query. TypeScript, SQLite, zero infrastructure.

Ran the LOCOMO benchmark (same one Mem0 used to claim SOTA):

- Engram: 80.0% (10 conversations, 1,540 questions)
- Mem0 published: 66.9%
- 93.6% fewer tokens than full-context approaches

Works as an MCP server, REST API, or embedded SDK. Supports Gemini, OpenAI, Ollama, Groq, and any OpenAI-compatible provider.
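The "rich metadata at write time, intelligence at read time" idea can be sketched with a toy in-memory store. This is my own illustration, not Engram's schema: nothing is compressed at write; ranking happens at query time, when the query is actually known.

```python
memories = []  # each: {"text": str, "tags": set, "ts": int}

def remember(text, tags, ts):
    """Write path: store the raw memory with metadata, no extraction."""
    memories.append({"text": text, "tags": set(tags), "ts": ts})

def recall(query_tags, k=2):
    """Read path: rank by tag overlap with the query, break ties by recency."""
    query_tags = set(query_tags)
    scored = sorted(
        memories,
        key=lambda m: (len(m["tags"] & query_tags), m["ts"]),
        reverse=True,
    )
    return [m["text"] for m in scored[:k]]

remember("User prefers TypeScript", {"lang", "preference"}, ts=1)
remember("User deploys on Fly.io", {"infra"}, ts=2)
remember("User dislikes ORMs", {"preference", "db"}, ts=3)
results = recall({"preference"})
```

The trade-off versus extraction-based systems is that reads do more work, but nothing is thrown away at write time that a later query might have needed.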

npm install -g engram-sdk && engram init

https://engram.fyi | https://github.com/tstockham96/engram

1

Mlut – Tailwind CSS alternative for custom websites and creative coding #

mlut.style faviconmlut.style
0 コメント4:41 PMHN で見る
Hey HN community!

I want to tell you about my open source project: mlut, a CSS framework for custom websites and creative coding. It is similar to Tailwind in that it is based on the Atomic CSS approach, but in some respects it surpasses all the popular alternatives.

mlut helps to create projects with individual and non-standard designs, where old-school frameworks are not suitable and LLMs perform poorly. The project is the result of in-depth research and over 1,000 hours of work. Details about the study can be found here: https://dev.to/mr150/atomic-css-deep-dive-1hee

Key features:

- Short and consistent naming. If you know CSS, you almost know mlut. The abbreviations are based on the popularity of CSS properties and are compiled according to a single algorithm

- Rich and native syntax. It is like Vim for CSS. Conveniently create complex styles with a powerful syntax that is conceptually close to CSS

- Written in Sass. Leverage the full power of the preprocessor in your handwritten CSS and easily link it to utility classes

You can even create pure CSS art using utility classes — check out our gallery on the website: https://mlut.style/arts

I would appreciate any feedback, and especially stars on GitHub: https://github.com/mlutcss/mlut