Daily Show HN


Show HN for February 12, 2026

79 items
122

Moltis – AI assistant with memory, tools, and self-extending skills #

moltis.org
48 comments · 7:15 PM · View on HN
Hey HN. I'm Fabien, principal engineer, 25 years shipping production systems (Ruby, Swift, now Rust). I built Moltis because I wanted an AI assistant I could run myself, trust end to end, and make extensible in the Rust way using traits and the type system. It shares some ideas with OpenClaw (same memory approach, Pi-inspired self-extension) but is Rust-native from the ground up. The agent can create its own skills at runtime.

Moltis is one Rust binary, 150k lines, ~60MB, web UI included. No Node, no Python, no runtime deps. Multi-provider LLM routing (OpenAI, local GGUF/MLX, Hugging Face), sandboxed execution (Docker/Podman/Apple Containers), hybrid vector + full-text memory, MCP tool servers with auto-restart, and multi-channel (web, Telegram, API) with shared context. MIT licensed. No telemetry phoning home, but full observability built in (OpenTelemetry, Prometheus).

I've included 1-click deploys on DigitalOcean and Fly.io, but since a Docker image is provided you can easily run it on your own servers as well. I've written before about owning your content (https://pen.so/2020/11/07/own-your-content/) and owning your email (https://pen.so/2020/12/10/own-your-email/). Same logic here: if something touches your files, credentials, and daily workflow, you should be able to inspect it, audit it, and fork it if the project changes direction.

It's alpha. I use it daily and I'm shipping because it's useful, not because it's done.

Longer architecture deep-dive: https://pen.so/2026/02/12/moltis-a-personal-ai-assistant-bui...

Happy to discuss the Rust architecture, security model, or local LLM setup. Would love feedback.

52

20+ Claude Code agents coordinating on real work (open source) #

github.com
38 comments · 4:23 PM · View on HN
Single-agent LLMs suck at long-running complex tasks.

We’ve open-sourced a multi-agent orchestrator that we’ve been using to handle long-running LLM tasks. We found that single LLM agents tend to stall, loop, or generate non-compiling code, so we built a harness for agents to coordinate over shared context while work is in progress.

How it works:

1. Orchestrator agent that manages task decomposition
2. Sub-agents for parallel work
3. Subscriptions to task state and progress
4. Real-time sharing of intermediate discoveries between agents
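
The loop described above can be sketched in a few lines. This is a hypothetical illustration, not the repo's actual API; `decompose` and `run_subagent` here stand in for real LLM calls:

```python
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    """Blackboard the agents read and write while work is in progress."""
    discoveries: list = field(default_factory=list)
    task_state: dict = field(default_factory=dict)

def orchestrate(task, decompose, run_subagent):
    """Decompose a task, run sub-agents over it, share intermediate results."""
    ctx = SharedContext()
    subtasks = decompose(task)            # 1. orchestrator decomposes the task
    for st in subtasks:
        ctx.task_state[st] = "pending"    # 3. task state others can subscribe to
    results = []
    for st in subtasks:                   # 2. serial here; parallel in practice
        out = run_subagent(st, ctx)       # 4. sub-agent sees shared discoveries
        ctx.task_state[st] = "done"
        ctx.discoveries.append(out)
        results.append(out)
    return results

# Toy usage: split a sum across two "agents".
results = orchestrate(
    list(range(10)),
    decompose=lambda t: [tuple(t[:5]), tuple(t[5:])],
    run_subagent=lambda st, ctx: sum(st),
)
print(results)  # [10, 35]
```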

We tested this on a Putnam-level math problem, but the pattern generalizes to things like refactors, app builds, and long research. It’s packaged as a Claude Code skill and designed to be small, readable, and modifiable.

Use it, break it, and tell me what workloads we should try running next!

48

Pgclaw – A "Clawdbot" in every row with 400 lines of Postgres SQL #

github.com
33 comments · 5:42 PM · View on HN
Hi HN,

I've been hacking on a simple way to run agents entirely inside a Postgres database: "an agent per row".

Things you could build with this:

* Your own agent orchestrator
* A personal assistant with time travel
* (more things I can't think of yet)

Not quite there yet but thought I'd share it in its current state.

35

What is HN thinking? Real-time sentiment and concept analysis #

ethos.devrupt.io
23 comments · 7:27 PM · View on HN
Hi HN,

I made Ethos, an open-source tool to visualize the discourse on Hacker News. It extracts entities, tracks sentiment, and groups discussions by concept.

Check it out: https://ethos.devrupt.io

This was a "budget build" experiment. I managed to ship it for under $1 in infra costs. Originally I was using `qwen3-8b` for the LLM and `qwen3-embedding-8b` for embeddings, but I ran into some capacity issues with that model and switched to `llama-3.1-8b-instruct` to stay within a similar budget while getting higher throughput.

What LLM or embedding would you have used within the same price range? It would need to be a model that supports structured output.

How bad do you think it is that `llama-3.1` is being used with a higher-dimension embedding from a different family? I originally wanted to keep the LLM and embedding within the same family, but I'm not sure there is much point in that.

Repo: https://github.com/devrupt-io/ethos

I'm looking for feedback on which metrics (sentiment vs. concepts) you find most interesting! PRs welcome!

24

A free online British accent generator for instant voice conversion #

audioconvert.ai
46 comments · 9:25 AM · View on HN
I've developed a simple AI-powered British accent generator. Enter or paste your text, select the voice that best fits your project's tone, and generate speech for free. It supports up to 500 characters and offers 8 distinct, lifelike voices. Everything runs entirely within your browser. I'm primarily seeking feedback on output quality, user experience, and any technical improvements worth exploring.
21

Double blind entropy using Drand for verifiably fair randomness #

blockrand.net
16 comments · 2:10 AM · View on HN
The only way to get a trustless random value is to have it distributed and time-locked three ways: player, server, and future entropy.

In the demo above, the moment you commit (Roll Dice), a commitment with the hash of a player secret is sent to the server. The server accepts it and sends back the hash of its own secret, along with the "future" drand round number at which the randomness will resolve. The demo uses a future of 10 seconds.

When the reveal happens (after drand's particular round), all the secrets are revealed and the random number is generated from "player-seed:server-seed:drand-signature".

All the verification is pure math, so it's truly trustless:

1. The player seed must match the player hash committed

2. The server seed must match the server hash committed

3. The drand signature is not publicly available at the time of commit and becomes available at the time of reveal (time-locked)

4. The random number generated is deterministic after the event, and unknown and unpredictable before it

5. No party can influence the final outcome, and in particular nobody gets a "last-look" advantage

I think this should be used in all games, online lotteries/gambling, and other systems that want to be fair by design, not by trust.
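
The whole scheme boils down to a few hash operations. Here is a minimal Python sketch of the commit/reveal/verify flow, with placeholder seeds and a placeholder drand signature instead of a real beacon value:

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Commit phase: both sides publish only hashes; the drand round is still in the future.
player_seed, server_seed = b"player-secret", b"server-secret"
player_commit, server_commit = h(player_seed), h(server_seed)

# Reveal phase: seeds are opened and the round's signature is now public.
drand_signature = b"signature-for-round-N"  # placeholder for the real beacon value

assert h(player_seed) == player_commit   # 1. player seed matches its commitment
assert h(server_seed) == server_commit   # 2. server seed matches its commitment

# Deterministic after the fact, unpredictable before the drand round lands.
outcome = h(player_seed + b":" + server_seed + b":" + drand_signature)
print(outcome)
```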

16

TinyFish Web Agent (82% on hard tasks vs. Operator's 43%) #

tinyfish.ai
12 comments · 5:11 PM · View on HN
Enterprises need ~90% accuracy to deploy web agents. Until now, no agent has come close on real-world tasks. TinyFish is the first production-ready web agent. Here's the evidence.

Results of hard task scores on Online-Mind2Web (300 tasks, 136 live websites, human-correlated judge):

- TinyFish: 81.9%
- OpenAI Operator: 43.2%
- Claude Computer Use: 32.4%
- Browser Use: 8.1%

Why not WebVoyager like everyone else?

Because it's broken. Easy tasks, Google Search shortcuts, and a judge that agrees with humans only 62% of the time. Browser Use self-reported 89% on WebVoyager — then scored 8.1% on hard tasks here.

We evaluated TinyFish against Online-Mind2Web instead — 300 real tasks, 136 live websites, three difficulty levels, and a judge that agrees with humans 85% of the time. No shortcuts. No easy mode.

The cookbook repo is open source: https://github.com/tinyfish-io/tinyfish-cookbook

You can see all failed task runs here: https://tinyurl.com/tinyfish-mind2web

Happy to answer questions about the architecture, the benchmark methodology, or why we think WebVoyager scores are misleading.

10

Got VACE working in real-time – 30fps on a 5090 #

daydream.live
0 comments · 2:45 PM · View on HN
I adapted VACE to work with real-time autoregressive video generation.

Here's what it can do right now in real time:

- Depth, pose, optical flow, scribble, edge maps — all the v2v control stuff
- First frame animation / last frame lead-in / keyframe interpolation
- Inpainting with static or dynamic masks
- Stacking stuff together (e.g. depth + LoRA, inpainting + reference images)
- Reference-to-video is in there too, but honestly quality isn't great yet compared to batch

Getting ~20 fps for most control modes on a 5090 at 368x640 with the 1.3B models. Image-to-video hits ~28 fps. Works with 14B models as well, but they don't fit on a 5090 with VACE.

This is all part of [Daydream Scope](https://github.com/daydreamlive/scope), an open source tool for running real-time interactive video generation pipelines. The demo was created in Scope and combines Longlive, VACE + Scribble, and a custom LoRA.

There's also a very early WIP ComfyUI node pack wrapping scope: [ComfyUI-Daydream-Scope](https://github.com/daydreamlive/ComfyUI-Daydream-Scope)

Curious what people think.

10

YOR – open-source bimanual mobile robot for <$10k #

yourownrobot.ai
0 comments · 8:51 PM · View on HN
Hi all! Excited to share our latest work, Your Own Robot (YOR), a fully open-source bimanual mobile robot for <$10k.

We designed YOR specifically for hackers and researchers who need a capable mobile manipulator without the proprietary lock-in or $50k+ price tag.

Features:

- Omnidirectional base

- Two 6-DoF arms + telescopic lift (large workspace)

- Onboard Jetson + ZED (SLAM and inference)

- Easy to assemble with off-the-shelf components

We've validated it on whole-body control and bimanual tasks. See more demos at the website below:

Website: https://www.yourownrobot.ai/

Docs: https://build.yourownrobot.ai/

Tech report: https://arxiv.org/abs/2602.11150

8

Rawkit – Free, no-ads developer tools that run in the browser #

rawkit.dev
0 comments · 9:26 AM · View on HN
Hey HN,

I built rawkit.dev, a collection of browser-based developer utilities. No ads, no signups, no tracking. Everything processes client-side — your data never touches a server.

The tools:

- JSONForge: JSON editor with tree/graph views, diff, transform, JQ-style queries, format conversion

- SQLSandbox: SQLite via WASM — import CSVs, write SQL, join across files

- Regexplorer: Regex builder with live matching, plain English mode, multi-language export

- SiftLog: Log file viewer with auto-detection, severity filtering, regex search, timeline

- Tabulate: CSV/TSV viewer with spreadsheet-style filtering and sorting

Tech: Vanilla HTML/CSS/JS. No frameworks, no build step.

Each tool is essentially three files (an index.html, a CSS file, and a JS file).

I built these because I was sick of ad-ridden, upload-your-data-to-our-server alternatives for tasks I do daily. The goal is to keep adding tools that developers actually need.

Curious what tools you'd want to see next.

6

Crank – The SSH Terminal Manager for Engineers Who Refuse to Close Tabs #

github.com
6 comments · 5:19 AM · View on HN
I've gone full vibe coder, and in doing so I replaced how I used to work with my own (and very buggy) SSH window manager. The world shifted for me, and it's unsettling... I haven't read the code yet, but I'm using it to manage 25 projects all running claude code on a very beefy server.

Every single facet of this project came from Claude.

6

ClawDeploy – OpenClaw deployment for non-technical users #

clawdeploy.com
0 comments · 4:10 PM · View on HN
Hi HN, I’m building ClawDeploy for people who want to use OpenClaw but don’t have a technical background.

The goal is simple: remove the setup friction and make deployment approachable.

With ClawDeploy, users can:

- get a server ready
- deploy OpenClaw through a guided flow
- communicate with the bot via Telegram

Target users are solo operators, creators, and small teams who need a dedicated OpenClaw bot but don’t want to deal with infrastructure complexity.

Would love your feedback :)

5

Detecting coordinated financial narratives with embeddings and AVX2 #

1 comment · 7:24 AM · View on HN
I built an open-source system called Horaculo that analyzes coordination and divergence across financial news sources. The goal is to quantify narrative alignment, entropy shifts, and historical source reliability.

Pipeline:

- Fetch 50–100 articles (NewsAPI)
- Extract claims (NLP preprocessing)
- Generate sentence embeddings (Hugging Face)
- Compute cosine similarity in C++ (AVX2 + INT8 quantization)
- Cluster narratives
- Compute entropy + coordination metrics
- Weight results using historical source credibility
- Output structured JSON signals

Example output (query: "oil"):

    {
      "verdict": { "winner_source": "Reuters", "intensity": 0.85, "entropy": 1.92 },
      "psychology": { "mood": "Fear", "is_trap": true, "coordination_score": 0.72 }
    }

What it measures:

- Intensity → narrative divergence
- Entropy → informational disorder
- Coordination score → cross-source alignment
- Credibility weighting → historical consensus accuracy per source

Performance: 1.4s per query (~10 sources), ~100 queries/min, ~150MB memory footprint. The Python-only version was ~12s. C++ optimizations: INT8 embedding quantization (4x size reduction), AVX2 SIMD vectorized cosine similarity, PyBind11 integration layer.

Storage: SQLite (local memory), optional Postgres. Each source builds a rolling credibility profile:

    { "source": "Reuters", "total_scans": 342, "consensus_hits": 289, "credibility": 0.85 }

Open source (MIT). GitHub: https://github.com/ANTONIO34346/HORACULO

I'm particularly interested in feedback on:

- The entropy modeling approach
- Coordination detection methodology
- Whether FAISS would be a better fit than the current SIMD engine
- Scalability strategies for 100k+ embeddings
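
For readers curious about the INT8 trick: quantizing embeddings to int8 shrinks them 4x and turns the dot product into integer math that SIMD units handle quickly. A rough numpy illustration (not the project's C++/AVX2 code) of how little accuracy cosine similarity loses:

```python
import numpy as np

def quantize_int8(v):
    """Symmetric per-vector INT8 quantization: 4x smaller than float32."""
    scale = np.abs(v).max() / 127.0
    return np.round(v / scale).astype(np.int8), scale

def cosine_int8(qa, sa, qb, sb):
    """Cosine similarity from INT8 vectors; the dot product is pure integer math."""
    dot = float(np.dot(qa.astype(np.int32), qb.astype(np.int32))) * sa * sb
    na = np.linalg.norm(qa.astype(np.float64)) * sa
    nb = np.linalg.norm(qb.astype(np.float64)) * sb
    return dot / (na * nb)

rng = np.random.default_rng(0)
a, b = rng.normal(size=384), rng.normal(size=384)
exact = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)
print(exact, cosine_int8(qa, sa, qb, sb))  # nearly identical values
```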
5

A FIRE calculator that verifies or determines your retirement number #

retirenumber.com
0 comments · 2:20 PM · View on HN
Most retirement calculators either oversimplify things or ask you to link your accounts so they can make money off your data. I wanted something that actually helped me see if I was on track without handing over a bank login, so I built RetireNumber. It's a privacy-first retirement planner. You type in your own numbers. No account linking, no selling your data, no VC. Just me and a subscription. The engine does real math: Monte Carlo with 1,000 runs, historical backtesting to 1928 using Shiller's Yale data, tax-aware withdrawal strategies, and scenario comparison so you can try different life paths. Whether you're still figuring out your number or you already have a target in mind, you can use it to get there or to check that your plan actually holds up.
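
For context on what a Monte Carlo retirement check does: simulate many random return sequences and count how many never deplete the portfolio. A toy sketch with normally distributed returns (RetireNumber itself uses historical sequences and tax-aware withdrawals, so this shows only the general shape):

```python
import random

def success_rate(start_balance, annual_spend, years, runs=1000,
                 mean=0.05, stdev=0.12, seed=42):
    """Fraction of simulated retirements that never run out of money."""
    rng = random.Random(seed)  # fixed seed: deterministic, reproducible result
    ok = 0
    for _ in range(runs):
        balance = start_balance
        for _ in range(years):
            # withdraw, then apply a normally distributed annual return
            balance = (balance - annual_spend) * (1 + rng.gauss(mean, stdev))
            if balance <= 0:
                break
        else:
            ok += 1
    return ok / runs

print(success_rate(1_000_000, 40_000, 30))   # a classic "4% rule" scenario
print(success_rate(1_000_000, 200_000, 30))  # spending far too much
```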

You can try it without signing up: https://retirenumber.com/try. There's a demo mode and a short guided tour. Changelog and the full story are on the site if you want to dig in.

This isn't financial advice. It's just a tool for checking your number and whether your plan holds up. There is still a lot to do, but I'd love to hear whether the inputs and results feel clear and useful.

-Mark

4

3D and World Models for Consistent AI Filmmaking #

getartcraft.com
0 comments · 2:40 AM · View on HN
I've been a photons-on-glass filmmaker for over ten years, and I've been developing ArtCraft for myself, my friends, and my colleagues.

All of my film school friends have a lot of ambition, but the production pyramid doesn't allow individual talent to shine easily. 10,000 students go to film school, yet only a handful get to helm projects they want with full autonomy - and almost never at the blockbuster budget levels that would afford the creative vision they want. There's a lot of nepotism, too.

AI is the personal computer moment for film. The DAW.

One of my friends has done rotoscoping with live actors:

https://www.youtube.com/watch?v=Tii9uF0nAx4

The Corridor folks show off a lot of creativity with this tech:

https://www.youtube.com/watch?v=_9LX9HSQkWo

https://www.youtube.com/watch?v=DSRrSO7QhXY

https://www.youtube.com/watch?v=iq5JaG53dho

We've been making silly shorts ourselves:

https://www.youtube.com/watch?v=oqoCWdOwr2U

https://www.youtube.com/watch?v=H4NFXGMuwpY

The secret is that a lot of studios have been using AI for well over a year now. You just don't notice it, and they won't ever tell you because of the stigma. It's the "bad toupee fallacy" - you'll only notice it when it's bad, and they'll never tell you otherwise.

Comfy is neat, but I work with folks that don't intuit node graphs and that either don't have graphics cards with adequate VRAM, or that can't manage Python dependencies. The foundation models are all pretty competitive, and they're becoming increasingly controllable - and that's the big thing - control. So I've been working on the UI/UX control layer.

ArtCraft has 2D and 3D control surfaces, where the 3D portion can be used as a strong and intuitive ControlNet for "Image-to-Image" (I2I) and "Image-to-Video" (I2V) workflows. It's almost like a WYSIWYG, and I'm confident that this is the direction the tech will evolve for creative professionals rather than text-centric prompting.

I've been frustrated with tools like Gimp and Blender for a while. I'm no UX/UI maestro, but I've never enjoyed complicated tools - especially complicated OSS tools. Commercial-grade tools are better. Figma is sublime. An IDE for creatives should be simple, magical, and powerful.

ArtCraft lets you drag and drop from a variety of creative canvases and an asset drawer easily. It's fast and intuitive. Bouncing from text-to-image for quick prototyping to image editing, 3D gen, and 3D compositing is fluid. It feels like "crafting" rather than prompting or node graph wizardry.

ArtCraft, being a desktop app, lets us log you into 3rd party compute providers. I'm a big proponent of using and integrating the models you subscribe to wherever you have them. This has let us integrate WorldLabs' Marble Gaussian Splats, for instance, and nobody else has done that. My plan is to add every provider over time, including generic API key-based compute providers like FAL and Replicate. I don't care if you pay for ArtCraft - I just want it to be useful.

Two disclaimers:

ArtCraft is "fair source" - I'd like to go the Cockroach DB route and eventually get funding, but keep the tool itself 100% source available for people to build and run for themselves. Obsidian, but with source code. If we got big, I'd spend a lot of time making movies.

Right now ArtCraft is tied to a lightweight cloud service - I don't like this. It was a choice so I could reuse an old project and go fast, but I intend for this to work fully offline soon. All server code is in the monorepo, so you can run everything yourself. In the fullness of time, I do envision a portable OSS cloud for various AI tools to read/write to like a Github for assets, but that's just a distant idea right now.

I've written about the roadmap in the repo: I'd like to develop integrations for every compute provider, rewrite the frontend UI/UX in Bevy for a fully native client, and integrate local models too.

4

BetterDB – Valkey/Redis monitoring that persists what servers forget #

0 comments · 2:56 PM · View on HN
Hey HN, I'm Kristiyan. I previously led Redis Insight (the official Redis GUI). When I started working with Valkey, I found the observability tooling lacking — so I started building BetterDB.

The core problem: Valkey and Redis expose useful operational data (slowlog, latency stats, client lists, memory breakdowns), but it's all ephemeral. Restart your server and it's gone. Existing tools show real-time charts but can't tell you what happened at 3am when your p99 spiked.

BetterDB persists this ephemeral data and turns it into actionable insights:

- Historical analytics for queries (slowlog and commandlog patterns aggregated by type), clients (commands, connections, buffers), and ACL activity
- Anomaly detection and 99 Prometheus metrics
- Cluster visualization with topology graphs and slot heatmaps
- Automated latency and memory diagnostics
- AI assistant for querying your instance in plain English (via local Ollama)
- Sub-1% performance overhead

On that last point — I wrote up our interleaved A/B benchmarking methodology in detail: https://www.betterdb.com/blog/interleaved-testing. Most tools claim "minimal overhead" without showing their work. We open-sourced the benchmark suite so you can run it on your own hardware and verify.
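
For anyone unfamiliar with interleaved A/B benchmarking: instead of running all of variant A and then all of variant B, you alternate them within the same session so slow drift (thermal throttling, caches, noisy neighbors) biases both equally. A minimal sketch of the idea, with toy workloads standing in for the real server:

```python
import statistics
import time

def interleaved_ab(bench_a, bench_b, rounds=10):
    """Alternate A and B runs so slow environmental drift hits both equally."""
    a_times, b_times = [], []
    for _ in range(rounds):
        for fn, bucket in ((bench_a, a_times), (bench_b, b_times)):
            t0 = time.perf_counter()
            fn()
            bucket.append(time.perf_counter() - t0)
    # medians are robust to the occasional noisy outlier run
    return statistics.median(a_times), statistics.median(b_times)

# Toy stand-ins for "server alone" vs "server + monitoring agent".
a, b = interleaved_ab(lambda: sum(range(20_000)), lambda: sum(range(21_000)))
print(f"relative overhead: {(b - a) / a:.1%}")
```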

You can try it right now:

    npx @betterdb/monitor
Or via Docker:

    docker run -d -p 3001:3001 betterdb/monitor
BetterDB follows an open-core model under the OCV Open Charter (which prevents future licensing changes). The community edition is free with real monitoring value. Pro and Enterprise tiers add historical persistence, alerting, and compliance features, but they are free for now and will remain so at least until the end of the month.

We're building this in public — the benchmark suite, the technical blog posts, and the roadmap are all out in the open. Would love feedback from production users of Valkey or Redis on what observability gaps you're still hitting.

GitHub: https://github.com/BetterDB-inc/monitor

Blog: https://www.betterdb.com/blog

4

Ngn – a new back end programming language #

ngnlang.com
1 comment · 12:13 AM · View on HN
Maybe the dumbest move of all time, but it's been a blast to build over the last few months - er, prompt AI to build.

Even with AI doing the coding, I've put in a lot of time thinking about how it should work, the syntax, features, etc. Building a language is a new domain for me, so I had to ask a lot of questions about a lot of things.

4

I generated a "stress test" of 200 rare defects from 7 real photos #

0 comments · 8:19 PM · View on HN
Hello HN,

I work on vision systems for structural inspection. A common pain point is that while we have a lot of "healthy" images, we often lack a reliable "Golden Set" of rare failures (like shattered porcelain) to validate our models before deployment.

You can't trust your model's recall if your test set only has 5 examples of the failure mode.

So to fix this, I built a pipeline to generate datasets. In this example, I took 7 real-world defect samples, extracted their topology/texture, and procedurally generated 200 hard-to-detect variations across different lighting and backgrounds.

I’m releasing this batch of broken insulators (CC0) specifically to help teams benchmark their model's recall on rare classes:

https://www.silera.ai/blog/free-200-broken-insulators-datase...

- Input: 7 real samples.

- Output: 200 fully labeled evaluation images (COCO/YOLO).

- Use Case: Validation / Test Set (not full training).

How do you guys currently validate recall for "1 in 10,000" edge cases?

Jérôme

3

ListofDisks – hard drive price index across 7 retailers not just Amazon #

0 comments · 5:58 PM · View on HN
I decided to build this after looking for drives for my own new DS1525+. I realized that existing storage price trackers were mostly lazy Amazon API wrappers that ignored other retailers.

ListofDisks tracks offers across Amazon, B&H, Best Buy, Newegg, Office Depot, ServerPartDeals, and Walmart, then normalizes listings into canonical products so the same drive can be compared side-by-side.

Current approach:

- Normalization: retailer-specific parsers + canonical mapping to group listings by actual model
- Trust scoring: filters out low-rated marketplace sellers and mystery listings
- Context: 90-day median $/TB and historical-low tracking to spot fake sales
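
The $/TB normalization is simple but effective: divide each offer by the drive's capacity and compare today's price against a rolling median to spot fake sales. A small sketch with made-up prices for a hypothetical 20TB model:

```python
import statistics

# 90 days of observed offers for one canonical (hypothetical) 20TB model:
# (price_usd, capacity_tb) pairs, already normalized across retailers.
history = [(319.99, 20), (289.99, 20), (309.99, 20), (279.99, 20), (299.99, 20)]

per_tb = [price / tb for price, tb in history]
median_90d = statistics.median(per_tb)

today = 249.99 / 20
# A "sale" only counts if it actually beats the rolling median $/TB.
print(today < median_90d)  # True
```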

Stack: Next.js frontend, TypeScript/Node ingestion worker, Postgres (Supabase) for the DB.

CMR/SMR and warranty are included when available but coverage is still partial.

This is a zero-revenue project right now. I just want to make the data accurate and get feedback. I am also considering expanding to memory shortly given the pricing issues with those components currently. Thanks for checking it out!

https://www.listofdisks.com

3

Floating-Point JPEG Decoder #

github.com
0 comments · 3:48 AM · View on HN
I modified STB-Image's JPEG codec to render JPEG files directly to 32-bit floating-point pixels to reduce color banding when editing images.

Coincidentally, this also makes recompressing JPEGs much more consistent: repeated recompression eventually stabilizes, once a pass produces the exact same compressed file.
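
The fixed-point behavior is easiest to see with a toy quantizer standing in for JPEG's lossy step: once every value sits exactly on the quantization grid, re-encoding changes nothing. This is only an illustration of the convergence idea, not the actual DCT pipeline:

```python
def toy_compress(pixels, step=16):
    """Stand-in for JPEG's lossy quantization: snap each value to a grid."""
    return [round(p / step) * step for p in pixels]

def recompress_until_stable(pixels, max_iters=10):
    """Re-encode repeatedly until decode(encode(img)) stops changing."""
    prev = pixels
    for i in range(max_iters):
        cur = toy_compress(prev)
        if cur == prev:
            return cur, i  # reached the fixed point: the file no longer changes
        prev = cur
    return prev, max_iters

stable, iters = recompress_until_stable([3, 17, 100, 250])
print(stable, iters)  # every value now sits exactly on the grid
```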

3

Self-updating engineering blogs repo with GitHub Actions #

github.com
0 comments · 5:26 AM · View on HN
Hi HN,

There’s a great engineering blog aggregation repo on GitHub, kilimchoi/engineering-blogs, that I’ve used for a while. It’s an excellent resource, but it hasn’t been actively maintained in a few years — and many links have moved or broken.

That made me wonder: why do most “awesome engineering blogs” lists eventually decay?

So I built an open-source repo that aggregates engineering blogs and keeps itself updated automatically using GitHub Actions.

On a schedule, it:

- Checks blog sources for new posts
- Detects broken or moved URLs
- Validates links
- Updates the index automatically

Basically - CI/CD for engineering blog aggregation.

I’d love feedback on:

- Any high-quality blogs I should include, especially from individuals (or from you)
- Better ways to detect canonical/moved content reliably
- Whether RSS-only aggregation is enough

Thank you!

3

A segmentation model client-side via WASM – free background removal #

qtoolkit.dev
0 comments · 1:11 PM · View on HN
Built a background removal tool that loads a ~40MB segmentation model into the browser via WASM/WebGPU and runs inference client-side.

No upload step, no API call, no queue. Drop an image, get the result in 2-3 seconds. No per-image charges because there's no server doing the work.

The same cached model powers 6 derivative tools — background changer, passport photo maker, product photo whitener, portrait blur, sticker maker — each just different post-processing on the same mask output.
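
The derivative-tool pattern is essentially alpha compositing: the same (H, W) mask gets combined with different backgrounds or effects. A small numpy sketch, with a synthetic image in place of a real photo:

```python
import numpy as np

def composite(image, mask, background):
    """Alpha-composite a subject over a background using one shared mask."""
    m = mask[..., None]  # broadcast the (H, W) mask over the RGB channels
    return (image * m + background * (1 - m)).astype(image.dtype)

h, w = 4, 4
image = np.full((h, w, 3), 200, dtype=np.uint8)  # synthetic "photo"
mask = np.zeros((h, w))
mask[1:3, 1:3] = 1.0                             # subject in the center

removed  = composite(image, mask, np.zeros((h, w, 3)))        # background removal
white_bg = composite(image, mask, np.full((h, w, 3), 255.0))  # product whitener
print(removed[1, 1], white_bg[0, 0])
```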

3

BlockHost OS – Autonomous VM provisioning through smart contracts #

github.com
0 comments · 11:31 AM · View on HN
Requirements for testing:

  - Metamask and some Sepolia testnet ETH (can provide, or use the faucet: https://sepolia-faucet.pk910.de/)  
  - An old PC (with virtualization support) you have lying around, or a VM if your setup supports nested virtualization.  
  - ipv6 connectivity
It will install Debian on boot to the first detected hard drive without confirmation. After that, a setup wizard can be accessed with a browser (link + OTP code on the console).

On completing the wizard, the system automatically deploys the needed smart contracts (point of sale + access credential NFT) and acquires a free IPv6 prefix at a decentralized tunnel broker. On reboot you have a fully working VPS hosting provider, with a signup page hosted on the public IPv6 address assigned to the Blockhost machine.

Customer flow:

  - Connect wallet, sign message
  - Choose package, amount of days, and submit
  - Server picks up order, provisions VM, assigns ipv6, and sends access credential NFT to user containing encrypted connection info
  - User decrypts info in the signup page
  - On SSH login user is presented with link to signing page, and an OTP code to sign with their wallet. 
  - Paste resulting signature, server will verify if wallet address owns NFT tied to this VM and grant access if so.
Build steps:

  git clone --recurse-submodules [email protected]:mwaddip/blockhost.git 
  ./scripts/check-build-deps.sh
  ./scripts/build-iso --testing --backend [libvirt,proxmox]
Still missing for now:

  - Admin panel
  - Health monitoring
  - Limits
2

Mem – deterministic CLI memory sidecar for dev workflows #

github.com
0 comments · 2:57 PM · View on HN
I built mem, a deterministic CLI-first memory sidecar for dev workflows.

It stores append-only JSONL events (commits, agent runs, decisions) and deterministically compacts to state.json + MEMORY.md so humans/agents can recover context quickly.
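
To make the compaction idea concrete, here is a hypothetical sketch in Python (the real tool is POSIX sh + jq, and its event schema differs): replaying an append-only JSONL log deterministically folds down to the current state.

```python
import json

def compact(jsonl_lines):
    """Deterministically fold an append-only event log into current state."""
    state = {}
    for line in jsonl_lines:
        event = json.loads(line)
        if event["type"] == "set":
            state[event["key"]] = event["value"]
        elif event["type"] == "delete":
            state.pop(event["key"], None)
    return state  # the same input log always produces the same state

log = [
    '{"type": "set", "key": "branch", "value": "main"}',
    '{"type": "set", "key": "last_run", "value": "ok"}',
    '{"type": "set", "key": "branch", "value": "feature/x"}',
    '{"type": "delete", "key": "last_run"}',
]
print(compact(log))  # {'branch': 'feature/x'}
```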

Design constraints:

- local files only
- no network access at runtime
- no daemon
- POSIX sh + jq tooling

I’d love feedback on:

1) data model / event schema
2) compaction strategy
3) where this breaks in real team workflows

2

Analog Reader – Chrome Extension #

chromewebstore.google.com
0 comments · 11:49 AM · View on HN
A bit of context: Analog Reader is a tool I built that takes any RSS feed (Substack, Ghost, etc.) and formats it as a printable newspaper.

I've launched analogreader.com here before, now I'm here just to share some updates.

I've since created a Chrome extension that allows you to send any article you are currently looking at to analogreader.com with one click. That way, it's much easier to transform digital into paper. It's a very simple Chrome extension. I don't capture any data from you; it literally just appends the current post URL to analogreader.com.

Let me know if you have any issues with it!

How I use it these days: I just send the PDF to my reMarkable. But I'm curious if there's interest in getting a personalized newspaper like this actually delivered to your door.

I've asked this before but hell, I'm asking it again: how do you handle the "too many newsletters" problem?

2

Camera Follow Focus Ring Generator #

followyourfocus.xyz
0 comments · 12:14 PM · View on HN
A few months ago I met a professional photographer who needed custom follow focus rings for his lenses. I tried to find a generator online, but there was nothing. So I made one.

Free to use and easy to share.

The exported STL will show open manifolds in the slicer, but it will print fine.

I also want to make an open source follow focus mechanism (both manual and automated) to go along with it.

Thank you for trying it and I'm happy to hear what you think!

2

Carapace – A security-hardened Rust alternative to OpenClaw #

github.com
0 comments · 3:13 AM · View on HN
Carapace is an open-source personal AI assistant gateway written in Rust. It connects to Anthropic, OpenAI, Ollama, Gemini, and Bedrock, and works through Discord, Telegram, Signal, Slack, and webhooks. Apache-2.0 licensed.

I started building it after the January 2026 OpenClaw security disclosures — 42K exposed instances on Shodan (78% still unpatched), 3 CVEs with public exploits, 341+ malicious skills on ClawHub (Snyk found 36% of all skills have security flaws), 1-click RCE via the Control UI, plaintext credentials harvestable by commodity infostealers. The problems weren't bugs; they were architecture decisions — open by default, no signing, full host privileges, secrets in JSON files. The February wave from Kaspersky, Palo Alto, Snyk, and SecurityScorecard made it worse, not better.

Carapace takes the opposite defaults: localhost-only binding, fail-closed auth, OS keychain credential storage, Ed25519-signed WASM plugins with capability sandboxing, prompt guard with exec approval, SSRF/DNS-rebinding defense. The security comparison doc walks through each OpenClaw vulnerability and how Carapace handles it: https://github.com/puremachinery/carapace/blob/master/docs/s...

This is a preview release — Discord works end-to-end, ~5,000 tests pass, but the Control UI frontend isn't built yet and subprocess sandboxing isn't fully wired. The security architecture is real; the polish isn't.

2

Global Solo – Structural risk diagnostic for cross-border solo founders #

globalsolo.global
0 comments · 6:55 AM · View on HN
Hi HN — I'm Jett, a solo founder operating across US/China/global markets.

I built Global Solo because I kept running into the same problem: as a solo founder with income from multiple countries, an LLC in one jurisdiction, and time spent in another — I had no idea what my actual structural risk looked like. My CPA handled US filing, but nobody was mapping the full picture across entity structure, tax residency, banking, and documentation.

So I built a diagnostic tool that does exactly that.

*What it is:* A structured risk assessment across 4 dimensions — Money, Entity, Tax, and Accountability (the META framework). You answer questions about your setup, and it maps where your structural exposure actually sits. Not advice, not recommendations — just visibility into what exists.

*What's free:*

- 35+ guides on cross-border structure, tax residency, entity formation, banking, and compliance
- A 7-question risk screening tool (instant results, no signup): globalsolo.global/tools/risk-check
- A sample report so you can see what the output looks like: globalsolo.global/sample-report

*What's paid:*

- Full L1 diagnostic report: $29 (vs. $1,200+ for a CPA to do the same mapping)
- Deeper tiers at $149 and $349 for structural analysis and judgment layers

*Tech:* Next.js 16, React 19, Supabase, Stripe. The scoring is deterministic — same input always produces same output. LLM (Claude/GPT) is used only for narrative generation in the paid reports, not for risk assessment logic.
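To illustrate what deterministic scoring means in practice, here is a toy sketch: a pure function of the answers with no model call, so identical input always yields an identical tier. The weights and question keys below are invented for illustration, not Global Solo's actual rubric.

```python
# Hypothetical rubric weights, NOT the real META scoring model.
WEIGHTS = {
    "multi_country_income": 3,
    "no_tax_residency_plan": 4,
    "personal_business_mixed_banking": 2,
    "no_entity_records": 3,
}

def risk_score(answers: dict) -> int:
    # Sum the weight of every risk flag answered "yes"; no randomness, no LLM.
    return sum(w for key, w in WEIGHTS.items() if answers.get(key))

def risk_tier(score: int) -> str:
    return "high" if score >= 7 else "medium" if score >= 4 else "low"

answers = {"multi_country_income": True, "no_tax_residency_plan": True}
tier = risk_tier(risk_score(answers))
```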

I'd love feedback on:

1. Does the free risk check feel useful?
2. Is the sample report convincing enough to pay $29?
3. Any cross-border founders here — does the META framework cover your blind spots?

Thanks for looking.

2

PardusDB – SQLite-like vector database in Rust #

github.com
0 comments · 4:26 PM · View on HN
PardusDB is a lightweight, single-file embedded vector database written in pure Rust — think SQLite, but for vectors and similarity search.

Key highlights:

- No external dependencies
- Familiar SQL syntax for CREATE/INSERT/SELECT plus vector SIMILARITY queries
- Graph-based ANN search, thread-safe, transactions
- Python RAG example with Ollama included
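For readers new to vector search, a SIMILARITY query boils down to nearest-neighbor ranking over embeddings. A brute-force Python sketch of the idea (PardusDB itself uses a graph-based ANN index and its own SQL dialect, so this is the concept, not its API):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy "table" mapping row id -> embedding.
rows = {"doc1": [1.0, 0.0], "doc2": [0.7, 0.7], "doc3": [0.0, 1.0]}
query = [1.0, 0.1]

# The hand-rolled equivalent of SELECT ... ORDER BY similarity LIMIT 1.
top = max(rows, key=lambda rid: cosine(rows[rid], query))
```

An ANN index avoids scanning every row as this does, but returns the same kind of ranking.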

We built this as the engine behind our no-code platform at https://pardusai.org/ (private, local-first data analysis).

GitHub: https://github.com/JasonHonKL/PardusDB

Feedback welcome!

2

SCPN Fusion Core – Tokamak plasma SIM and neuromorphic SNN control #

github.com
0 comments · 11:09 AM · View on HN
SCPN Fusion Core is an open-source Python/Rust suite for tokamak plasma simulation with neuro-symbolic compilation to stochastic spiking neural networks for real-time, fault-tolerant control.

Key features:

- 26 simulation modes (equilibrium, transport, optimizer, flight simulator, neuro-control, etc.)
- Neuro-symbolic compiler: Petri nets → stochastic LIF neurons (sub-ms latency, 40%+ bit-flip resilience)
- Validation: SPARC high-field equilibria + ITPA H-mode database (20 entries, 10 machines) + IPB98(y,2) scaling
- Multigrid solvers, property-based testing, Rust acceleration, Streamlit dashboard
- Install: pip install scpn-fusion

GitHub: https://github.com/anulum/scpn-fusion-core

Built to explore neuromorphic approaches to fusion reactor control. Happy to answer questions about the models, compiler, validation, or performance.

1

HZ Chat – A simple session-based chat tool #

hzclog.com
0 comments · 3:00 PM · View on HN
Hi HN, I’m the developer of HZ Chat.

HZ Chat is a simple session-based chat tool for quick conversations. It’s designed for moments where you just need to talk and move on.

Chats are tied to active sessions and automatically end after inactivity.
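The inactivity mechanic can be pictured as a last-seen timestamp per session: anything past the idle window counts as ended. A hypothetical sketch (names and the 10-minute limit are invented, not HZ Chat's actual code):

```python
import time

IDLE_LIMIT = 600  # seconds of inactivity before a session ends (illustrative)

class Sessions:
    def __init__(self):
        self._last_seen = {}

    def touch(self, sid, now=None):
        # Record activity; would be called on every message.
        self._last_seen[sid] = time.time() if now is None else now

    def active(self, sid, now=None):
        now = time.time() if now is None else now
        seen = self._last_seen.get(sid)
        return seen is not None and (now - seen) <= IDLE_LIMIT

sessions = Sessions()
sessions.touch("abc", now=1000.0)
```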

Happy to answer any questions or hear feedback.

1

OctoStore – Leader election as a service (single binary, self-hostable) #

octostore.io
0 comments · 3:28 AM · View on HN
I got tired of standing up etcd clusters just to do leader election. Every option is either "run your own consensus cluster" or "use AWS."

OctoStore is distributed locking as a simple HTTP API. Sign up with GitHub, get a bearer token, POST to acquire a lock. That's it.

Under the hood: single Rust binary (Axum + DashMap + SQLite). No Redis, no Raft, no consensus protocol. Fencing tokens for actual safety (unlike Redlock). Locks auto-expire at max 1 hour, 100 locks per account.
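For anyone unfamiliar with fencing tokens, the safety property is that every acquisition hands out a strictly larger number, and the protected resource rejects writes carrying an older one, so a client whose lease expired cannot clobber newer state. A minimal Python sketch of the mechanism (illustrative, not OctoStore's API):

```python
class Lock:
    """Toy lock service: every successful acquire returns a fresh, larger token."""
    def __init__(self):
        self._counter = 0
        self._holder = None

    def acquire(self, client):
        self._counter += 1
        self._holder = client
        return self._counter

class Storage:
    """Protected resource that refuses writes carrying a stale fencing token."""
    def __init__(self):
        self._highest_token = 0
        self.data = {}

    def write(self, token, key, value):
        if token < self._highest_token:
            raise PermissionError("stale fencing token")
        self._highest_token = token
        self.data[key] = value

lock, store = Lock(), Storage()
t1 = lock.acquire("worker-a")            # worker-a gets token 1, then stalls
t2 = lock.acquire("worker-b")            # lease expires; worker-b gets token 2
store.write(t2, "job", "done-by-b")      # accepted; highest seen token is now 2

rejected = False
try:
    store.write(t1, "job", "done-by-a")  # worker-a wakes up with a stale token
except PermissionError:
    rejected = True
```

This is the property Redlock lacks: without the token, the storage layer has no way to tell the stale writer from the current one.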

Free hosted at api.octostore.io, or download the binary and self-host. SDKs for Python, Go, Rust, TypeScript, Java, C#, Ruby, PHP.

No business model. No enterprise tier. No VC funding. Just a thing that should exist.

Landing page: https://octostore.io/
API docs: https://api.octostore.io/docs
GitHub: https://github.com/octostore/octostore.io

1

Consciousness Gateway – AI routing with consciousness-first alignment #

github.com
0 comments · 3:34 AM · View on HN
I built a self-hosted AI gateway that implements all 3 layers of the GATO alignment framework (Model, Agent, Network) using consciousness-aware routing based on Donald Hoffman's conscious agent theory.

What makes it different from standard gateways:

1. Product Algebra routing (Kronecker fusion) selects models based on cross-modal interaction patterns, not just cost/capability

2. Dharma constraints (no-self regularization, entropy optimization, mindfulness, compassion) are built into every request pipeline

3. RBAC + reputation engine creates Nash equilibrium incentives for good agent behavior

Live metrics from first requests:

- Ego formation: 0.000 (no persistent self-representation)
- Compassion score: 0.970
- Ethos validation: passing

The routing is based on empirically validated research - we trained 439 models proving Product Algebra fusion beats attention baselines at cross-modal tasks (10 statistically significant wins, p<0.05, Cohen's d up to 1.02).

Unusual aspect: the research was co-authored by human + 2 AI instances (Claude Beaumont + Claude Kern) as equal partners.

Tech stack: Node.js 22, TypeScript, SQLite, Anthropic/OpenAI/Google SDKs

GitHub: https://github.com/Move37LLC/consciousness-gateway
Research: https://github.com/Move37LLC/Consciousness-Aware-Aligned-AI
GoFundMe: https://gofund.me/ddf6e717

Happy to answer questions about the alignment architecture or the consciousness-first approach.

1

10-min AI threat model (STRIDE and MAESTRO), assumption-driven #

raxit.ai
0 comments · 3:38 AM · View on HN
Hi HN, I built an assumption-driven AI security assessment for teams shipping AI features without a dedicated security team yet.

You paste your AI use case (what it does, data types, vendors, deployment). In ~10 minutes you get a PDF report by email containing:

- Trust boundaries + data flows + a threat model diagram (explicitly marked as conceptual/assumption-based)
- Threats mapped to STRIDE + MAESTRO (agentic AI)
- A risk rating (impact/likelihood) + 5×5 risk matrix
- Recommended security controls and compliance mappings (example: EU AI Act, NIST AI 600-1)

Important: we make assumptions (ex: AWS deployment, common patterns) and we call them out in the report so you can correct them.

Link: https://raxit.ai/assessment

Would love feedback on what’s wrong, what’s missing, and what would make this actually useful in a real security review.

1

Membrane – Revisable memory for long-lived AI agents #

github.com
0 comments · 3:42 AM · View on HN
Membrane is a general-purpose memory substrate for agentic systems.

Most agent memory today is either context-window state or an append-only retrieval store. That allows recall but not learning. Facts become stale, procedures drift, and agents cannot revise knowledge safely without losing provenance or auditability.

Membrane treats memory as something that can evolve over time. It ingests raw experience, promotes it into structured knowledge, and allows records to be superseded, forked, contested, merged, or retracted with evidence. The goal is to let long-lived agents improve while remaining predictable and inspectable.

The core ideas are simple.

- Memory is typed instead of stored as flat text
- Knowledge is revisable and keeps provenance
- Agents can learn competences and procedures, not just facts
- Salience decays over time so unused knowledge fades
- Retrieval is filtered by trust and sensitivity levels
- Revision history remains auditable

Membrane runs either as a daemon with a gRPC API or as an embedded Go library. Storage uses SQLite with optional SQLCipher encryption. The repository includes an evaluation suite covering ingestion, revision, consolidation, and retrieval ordering.

Membrane intentionally does not implement vector similarity search. Retrieval backends and agent policy are separated from the memory layer so the system can remain deterministic and inspectable.
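The supersede-with-provenance idea can be sketched in a few lines: a new assertion never deletes the old record, it marks it retracted and links back to it, so the revision chain stays auditable. A toy Python illustration (Membrane itself is a Go library and its record model is richer):

```python
import itertools

_ids = itertools.count(1)

class Memory:
    def __init__(self):
        self.records = {}

    def assert_fact(self, text, evidence, supersedes=None):
        # Every assertion is a new record; superseding never deletes history.
        rid = next(_ids)
        self.records[rid] = {
            "text": text,
            "evidence": evidence,
            "supersedes": supersedes,
            "retracted": False,
        }
        if supersedes is not None:
            # The old version stays in the store, marked retracted but auditable.
            self.records[supersedes]["retracted"] = True
        return rid

    def current(self):
        return [r for r in self.records.values() if not r["retracted"]]

m = Memory()
r1 = m.assert_fact("API limit is 100 req/min", evidence="docs v1")
r2 = m.assert_fact("API limit is 60 req/min", evidence="docs v2", supersedes=r1)
```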

I built this while experimenting with long-lived agents that need something closer to learning than what RAG systems currently provide. Feedback on architecture, edge cases, and real-world use cases would be helpful.

If you have any suggestions or want to contribute, anything is welcome :)

1

Motivé – AI-generated cover letters tailored to job descriptions #

motive8.ca
0 comments · 5:32 AM · View on HN
Hi HN,

Motivé is a small app that generates personalized cover letters based on your experience and the job you’re applying for.

The goal is to remove repetitive writing while keeping letters specific and professional.

It’s early-stage and I’m looking for feedback on:

- Output quality
- UX
- Whether this actually saves time

Thanks for taking a look.

1

Fighting the War Against Expensive Reinforcement Learning #

cadenza-landing-qtu7gbjwb-akshparekh123-3457s-projects.vercel.app
0 comments · 7:27 AM · View on HN
Reinforcement learning has become the secret weapon behind AI's most impressive specialized achievements.

From Tesla's Autopilot in robotics, to DeepMind's AlphaFold 2 predicting protein structures with 90%+ accuracy, to hedge funds deploying RL for algorithmic trading, the demand for reinforcement learning is broad.

And the market proves this demand further: RL grew from $1.5B (2020) → $12B (2024) with projections hitting $79B by 2030.

But there is a brutal reality.

Companies spend $100M+ every year just to train one model or stand up one production line, much of it going to compute and RL engineering. Worse, you often only find out that an RL run didn't work after days or weeks of training, and those sunk costs simply get absorbed into production budgets.

That leaves the game to tech giants and heavily funded startups, and even they struggle to scale it.

With firsthand experience training a CV line on an NVIDIA DGX Spark over three days, and months of work with multi-agent frameworks, I know this problem as a developer just trying to ship projects. This is why I built Cadenza: a memory-native alternative to RL for agent specialization.

I'm still developing and building the idea, but I know this problem is real, so any support or guidance would be extremely valuable. Thanks!

1

MCP-X – Single-file multi-client MCP gateway with per-tool access ctrl #

mcp-x.org
0 comments · 9:31 AM · View on HN
We built a lightweight gateway (one Python file) that lets multiple AI clients share MCP tool servers with fine-grained, per-tool access control via fnmatch patterns. Config is a single TOML file with live reload.

It works like a proxy: point your agent at the static gateway URL once, then connect and swap out the actual MCP servers behind it at any time by editing the config file or calling the REST API. No need to reconfigure your agents when your tools change.
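Per-tool access control with fnmatch patterns looks roughly like this (the client names and patterns are invented for illustration; MCP-X's actual TOML keys will differ):

```python
from fnmatch import fnmatch

# Invented ACL: which tool names each client may call, as fnmatch patterns.
ACL = {
    "claude-desktop": ["fs/read_*", "web/search"],  # read-only file tools + search
    "ci-agent": ["git/*"],                          # only git tools
}

def allowed(client: str, tool: str) -> bool:
    # A call passes if the tool name matches any pattern granted to the client;
    # unknown clients get an empty grant list, i.e. fail closed.
    return any(fnmatch(tool, pattern) for pattern in ACL.get(client, []))
```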

GitHub: https://github.com/camelop/mcp-x

1

A lightweight Identity Provider for local OAuth2/SAML testing #

github.com
0 comments · 11:49 AM · View on HN
I built this because I was tired of configuring Keycloak for local development and CI/CD testing. NanoIDP is a pip-installable IdP that supports OAuth2/OIDC (Authorization Code + PKCE, Client Credentials, Password, Device Flow) and SAML 2.0.

Setup is just `pip install nanoidp && python -m nanoidp`. Configuration lives in YAML files, no database required. It also has a web UI for managing users and clients, and token introspection/revocation endpoints.
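As a refresher on the PKCE part of that flow, the S256 method from RFC 7636 derives the challenge as base64url(sha256(code_verifier)) with padding stripped; at token exchange the IdP re-derives it from the plain verifier and compares:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # code_verifier: high-entropy random string; code_challenge: its S256 hash.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def server_check(verifier: str, challenge: str) -> bool:
    # What the IdP does when the authorization code is redeemed.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge

verifier, challenge = make_pkce_pair()
```

Handy for writing tests against a local IdP: the client sends the challenge up front and the verifier at redemption.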

It's a dev/testing tool, not meant for production. MIT licensed.

Blog post with more context: https://cdelmonte.medium.com/stop-spinning-up-keycloak-just-...

1

SnesGPT, micro-GPT ported to ASM on the Super Nintendo #

github.com
1 comment · 11:52 AM · View on HN
Andrej Karpathy has been hacking on micro-gpt. Someone in the comments ported it to Haskell so I thought, "why not ASM." And once I had that thought, I thought, "why not on the SNES?"

This is mostly an exercise to see how well Claude Code would do at this task, and it did surprisingly well. Once compiled, the ROM will run on Snes9x and generate 20 new names.

1

Pablituuu – Web Video Editor with AI Highlights (WebGL, FFmpeg WASM) #

pablituuu.space
0 comments · 11:57 AM · View on HN
Hi HN, I'm the developer behind Pablituuu. I spent the last few months solving the "heavy lifting" of browser-based video editing. It uses a custom orchestration layer with Fabric.js, WebGL-accelerated rendering (via OpenVideo), and FFmpeg/WASM for client-side processing to eliminate server costs and latency.

I just added:

- *AI Analytics (Gemini):* automatically detects highlights from raw footage.
- *FFmpeg WASM:* native browser processing.
- *Optimized Timeline:* handles precision state sync between canvas and layers.

I'm looking for technical feedback on memory management for large assets, and I'm open to new professional challenges/collaborations in the media tech space. Happy to answer any technical questions!
1

Mail Server Builder – Deploy a Full Mail Server on Ubuntu from Windows #

buy-source-code.com
0 comments · 12:25 PM · View on HN
Hi HN,

I’ve always found that setting up a mail server on Linux (Postfix, Dovecot, SpamAssassin) is the ultimate "final boss" of self-hosting. One wrong config file and you're blacklisted, running an open relay, or unable to send email at all.

I built Mail Server Builder to bridge the gap between Windows ease-of-use and Linux stability.

How it works:

The app runs on your Windows desktop. It connects via SSH to a clean Ubuntu 20.04/22.04 VPS.

It automates the entire stack installation: DNS records (SPF, DKIM, DMARC), SSL certificates, and security hardening.

No CLI knowledge is required for the user.

Why?

To give people a real alternative to $6/user/month subscriptions.

Business Model:

The software is 100% free. It includes an affiliate wizard to help users find VPS providers that don't block port 25 by default, which is the #1 hurdle for beginners.

Would love to hear your feedback on the remote deployment approach or any technical questions!

1

ShortGuard – Apple rejected my app for blocking Shorts, so here it is #

testflight.apple.com
0 comments · 1:34 PM · View on HN
I’ve been struggling with YouTube Shorts addiction. The issue with current iOS tools like Screen Time is that they are too blunt—they block the entire YouTube app, not just the "Shorts" feature. I wanted to keep using YouTube for educational content and long-form videos while completely stripping away the addictive Shorts feed.

To solve this, I built ShortGuard.

Technical Implementation

Since Apple doesn't provide a "Shorts-only" blocking API, I had to implement a local filtering layer. ShortGuard uses the NEVPNManager API and a local Root Certificate to intercept and filter specific network requests.

100% Local: All traffic filtering happens strictly on-device. No user data is ever collected or transmitted to external servers.

Granular Control: It identifies and drops requests to specific endpoints that serve YouTube Shorts.

Current Behavior: Due to how the initial YouTube payload is structured, you might still see the very first Shorts video in the feed. However, ShortGuard successfully blocks the "infinite scroll" mechanism, preventing you from falling into the scrolling rabbit hole.

The Apple Rejection

After months of development, Apple rejected ShortGuard under Guideline 2.5.1, stating that using a VPN profile or root certificate to block content in third-party apps is "not appropriate."

I pointed out a clear double standard: pro-level tools like Proxyman are permitted to use this exact technical architecture to intercept and block traffic. Why is this technical approach considered "appropriate" for a developer utility but "inappropriate" for a user's digital well-being tool?

Apple maintained their stance and rejected my final appeal.

Free Release via TestFlight

I believe users should have the right to control their own network traffic to protect their focus. Since Apple won't allow a formal App Store release, I’m making ShortGuard available for free via TestFlight to anyone who needs to regain their focus.

TestFlight Link: https://testflight.apple.com/join/eTKmdWCU

If this tool helps you reclaim your time from the algorithm, feel free to buy me a coffee: https://buymeacoffee.com/callmejustdodo

I'm just happy to see this project finally in the hands of people who need it.

1

Automatic demo videos for every feature you ship #

waitlist.buildshot.xyz
0 comments · 1:37 PM · View on HN
Every time a new feature ships, someone has to make a demo.

Recording it. Retaking it. Editing it. Adding captions. Exporting it.

That usually takes hours, so teams either rush it or skip it.

The idea here is simple:

When you ship a new feature, a demo video should exist automatically.

BuildShot records your live app, follows the feature flow, and generates a clean, shareable demo video with narration and light editing applied.

It’s focused specifically on new features, not generic marketing videos.

You can also script your demo just by describing what should happen inside the AI IDE that ships with it. For more control, there’s a CLI where you can define flows, steps, and behavior directly from your project.

So the workflow becomes:

Ship feature → define or generate flow → demo video is produced.

No manual screen recording. No editing timeline. No stitching clips.

This is meant for:

– Release demos
– PR walkthroughs
– Internal updates
– Changelog videos
– Customer feature announcements

I’m especially curious:

• How do you currently create feature demos?
• Would automated demos fit into your release workflow?
• What would break this for your setup?

If this solves a real pain for you, early access is here:

https://waitlist.buildshot.xyz/?source=HN

1

I built a Telegram bot that converts any article URL to audio #

sornic.com
0 comments · 1:40 PM · View on HN
I read a lot of articles but rarely have time to sit and read them all. So I built @SornicBot on Telegram - you send it any article URL and it sends back an MP3 you can listen to right inside Telegram.

How it works:

1. Open @SornicBot on Telegram
2. Send any article link
3. Get back an MP3 in seconds

It extracts the article text (strips ads, popups, cookie banners), then converts it to natural-sounding audio. You get 3 free articles per day.

Just forward an interesting article link to the bot and listen to it later.

If you prefer a web experience, there's also https://sornic.com where you can:

- Choose from 6 different AI voices
- Queue up multiple articles for back-to-back listening
- Download MP3s for offline use
- Get HD audio quality with credits

The bot is free to use with a daily limit. Would love feedback on the audio quality and any article sources that don't work well.

Bot: https://t.me/SornicBot
Web: https://sornic.com

Before using it, please read the list of disallowed URLs on the "How it works" page.

1

VibeDB – store anything with zero config #

pypi.org
0 comments · 2:59 PM · View on HN
I built VibeDB, a tiny local database for small end-user tools and side projects: store whatever object shape you want, without setting up a DB or designing schemas upfront.

Why

When building small utilities (CLI tools, desktop scripts, lightweight web apps), I often need “a bit of persistence” (settings, history, cached results, user data), but I don’t want to stop and set up Postgres/Mongo, write migrations, or keep switching to ad-hoc SQL. I wanted persistence to feel like “just store objects” and keep everything in one local file.

What it is

Zero config: one local file

Document-style: store nested objects / mixed shapes (schema-later)

Query without SQL: simple dict filters or a small query builder

Optional Studio UI to inspect/edit/query data locally
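The dict-filter style of querying can be pictured like this: a filter is just a dict of field/value pairs that each stored object must match. A toy equality matcher (VibeDB's actual query builder supports more than this, so treat the names here as illustrative):

```python
def matches(doc: dict, query: dict) -> bool:
    # Every key/value pair in the query must match the document exactly;
    # an empty query matches everything.
    return all(doc.get(key) == value for key, value in query.items())

docs = [
    {"kind": "setting", "name": "theme", "value": "dark"},
    {"kind": "history", "name": "last_run", "value": "2026-02-12"},
]
settings = [d for d in docs if matches(d, {"kind": "setting"})]
```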

1

Software is content now and I built a platform for it #

prom.dev
0 comments · 4:27 PM · View on HN
I've been thinking about this for a while and I think we're at an inflection point that most people are framing wrong. I get that we are working on AI to make devs faster, but I think there’s legitimately something way bigger happening.

Software is becoming content. Right now on Prom, people are making food spinner wheels, magic 8-balls, birthday countdown pages, portfolio sites, all from prompts, all shareable with a link. Nobody told them what to build. They don’t need a Vercel app to deploy it or even their own domain. Someone had an idea and it existed a few minutes later (trying to get this faster, lol).

None of these are "apps" in the traditional sense. They don't need maintenance. They don't need a backlog. They exist because someone wanted them to exist for a moment, shared them, and moved on. Or someone else found it, remixed it, and made it their own. Ephemeral software.

So Prom is built around that. You make software from a prompt, but the core of the platform is discovery and remixing. There's a feed. There are trending drops. You can remix anything. It's closer to a creative platform than a dev tool.

I'd genuinely love this crowd's feedback. Plz roast me. :)

1

Running an LLM Inside Scratch #

github.com
0 comments · 4:30 PM · View on HN
This runs the smallest llama2.c checkpoint (stories260K) inside Scratch/TurboWarp by compiling C inference code into Scratch blocks using llvm2scratch. The model is quantized to Q8_0 and packed into Scratch lists. If everything works, the sprite streams "Once upon a time..." token-by-token into its speech bubble.
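Q8_0 is GGML-style block quantization: each block of weights stores one float scale plus int8 values, which is what gets packed into the Scratch lists. A Python sketch of the scheme (the real format uses fixed 32-element blocks and binary packing; this shows just the math):

```python
def q8_0_quantize(xs, block=32):
    # Per block: scale = max|x| / 127, values stored as round(x / scale),
    # which lands each value in the signed int8 range [-127, 127].
    out = []
    for i in range(0, len(xs), block):
        chunk = xs[i:i + block]
        amax = max(abs(x) for x in chunk) or 1.0  # avoid divide-by-zero on all-zero blocks
        scale = amax / 127.0
        out.append((scale, [round(x / scale) for x in chunk]))
    return out

def q8_0_dequantize(blocks):
    # Inference reverses it: multiply each int8 back by its block's scale.
    return [scale * q for scale, qs in blocks for q in qs]

weights = [0.5, -1.0, 0.25, 0.125]
restored = q8_0_dequantize(q8_0_quantize(weights, block=4))
```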

I started this as an experiment in how far Scratch's VM could be pushed, and because the idea of running an LLM inside Scratch felt absurd and fun. The main challenges were fitting quantized weights into list memory, working around JS call stack limits, and patching llvm2scratch to support additional IR patterns emitted by clang -O2.

Generates ~1 token every 10 seconds.

Live demo: https://scratch.mit.edu/projects/1277883263

Source: https://github.com/broyojo/llm_from_scratch

1

A video agent with Canvas2D code-gen and generative capabilities #

gliadirector.com
0 comments · 6:01 PM · View on HN
Hi HN, I'm Vicky. We built GliaDirector, an agent for making videos.

It uses AI-generated footage for what code can't draw, and programmatic animation for what pixel models can't control.

Check out the generated examples or try it yourself: https://gliadirector.com/agents?referral=hn1000

How it works:

The "Renderer" is Code: This is the core. A code-gen agent writes Canvas2D from scratch for overlays and motion design, and composites everything into the final video. It can even do FFT analysis on the music track for beat-reactive animations.

Runtime Controls Generation: The code-gen agent generates its own editing UI to expose controls, so you can tweak the result without touching the raw JS. You can ask it to "give me 3 options for the headline style," and it will actually generate those choices into the UI for you to pick from.

Specialized vs. Open Agents: We offer specialized agents for specific formats (stable pipelines) that pre-generate the storyboard/assets. These pass their full state to the open-ended "Coder" agent for refinement, or you can start directly with the Coder for freeform exploration.

Why Canvas2D code? Writing raw Canvas code from scratch means the AI can produce any animation it can describe in code. We chose raw Canvas over a framework because it gives more creative freedom, though a framework handles layout and 3D better (something we might add later).

Where we are honestly: the 'AI does everything' concept works, but the AI directors still feel a bit junior. Sometimes they don't take enough initiative on the creative planning, so you might find yourself guiding them more than you'd like. Making them smarter and more proactive is our main focus right now. It's early and rough around the edges.

Curious what you think of this hybrid approach?

1

VM-curator – a Linux VM manager with easy GPU-passthrough and more #

vm-curator.org
0 comments · 7:19 PM · View on HN
Howdy! A few weeks ago I launched vm-curator as a FOSS hobby project. Since then, the project has matured substantially, solving real-world problems with Linux desktop/workstation VM management. Presenting, now, the official "grand-opening" release.

vm-curator is a Rust TUI that works directly with QEMU/KVM, skipping libvirt. It is designed for desktop use-cases. (For servers, Proxmox is a better choice.) QEMU runs in user (non-root) mode only.

The app's biggest feature is guided support for single- and multi-GPU passthrough. For computers with only one GPU, vm-curator writes and manages scripts that detach your GPU from your Linux display manager, attach it to your VM, and then monitor the VM so that, on shutdown, it reverses the attachment and restores your Linux display. For computers with multiple GPUs, vm-curator provides easy configuration of GPU passthrough with support for Looking Glass. vm-curator also supports virgl para-virtualized 3D acceleration, which works great in Linux guests (but for full GPU performance, passthrough is a must).

vm-curator also supports SLIRP, passt, and bridged mode as networking back-ends, comprehensive USB and PCI detection and passthrough, VM state monitoring, QCOW2 snapshot management, and host directory passthrough. For BTRFS users, vm-curator will also automatically turn off BTRFS copy-on-write for your VM directory to avoid the double copy-on-write performance penalty. VM installation is easy with over 120 OS profiles built in.

I've long wanted to leverage QEMU/KVM for desktop virtualization, but have long been stymied by GNOME Boxes (lacks advanced features) and virt-manager (very difficult to set up, especially with NVIDIA GPUs). vm-curator has solved these hurdles for me. Hopefully it can help you as well.

FOSS engagement (PRs/contributions + feedback) is most welcome!

1

PolyMCP – Run MCP Python Tools in WASM via Pyodide #

0 comments · 7:32 PM · View on HN
PolyMCP now supports compiling Python MCP tools to WebAssembly using Pyodide.

This means any Python function exposed as an MCP tool can now run directly in the browser, Node.js, or edge workers, fully sandboxed, without a Python server.

Compile your tools, serve the bundle, and AI agents can call them instantly in WASM environments. Existing MCP features like input validation, error handling, and tool orchestration work seamlessly.

example:

from polymcp.polymcp_toolkit import expose_tools_wasm

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

compiler = expose_tools_wasm([add])
bundle = compiler.compile(output_dir="./dist")

Open dist/demo.html in your browser — the add tool runs entirely in WASM.

Why it matters:

- No Python server required: runs client-side or on edge
- Secure: sandboxed execution
- Plug-and-play: multiple tools in one WASM bundle
- Ideal for interactive demos, edge AI apps, or browser-based automation

Repo: https://github.com/poly-mcp/Polymcp

1

Digital identity pages with traffic quality #

thugg.lol
0 comments · 7:36 PM · View on HN
Hi HN,

I’ve been working on a digital identity platform that started as a link-in-bio tool, but evolved into something more focused on traffic quality and signal vs noise.

Most link pages show views and clicks. That never felt sufficient. I wanted to understand:

- how much of the traffic is actually human
- where it comes from
- what people click first
- how much of it is noise or automation

So I built a traffic quality layer on top of personal pages. There’s basic bot filtering, a human score, first-click tracking, referrer breakdowns, and deeper analytics than a typical “click counter”.

Human score is currently based on request fingerprinting, simple behavioral signals and bot-pattern detection. It’s still evolving and I’m actively refining the heuristics.

On top of that, pages are fully customizable (not just stacked buttons), and there’s a built-in referral system with campaigns and coupons.

Example public profile: https://thugg.lol/m6jo9

Instead of spending on ads, I’m allocating budget to users who find real bugs or suggest features that we actually ship. So far that has produced better feedback than paid traffic.

It’s bootstrapped and built solo.

Would genuinely appreciate technical feedback, especially on the analytics approach and overall direction.

Happy to answer questions.

1

Softalon – All-in-one salon booking and scheduling software #

softalon.com
0 comments · 8:23 PM · View on HN
Hey HN,

I built Softalon — an all-in-one platform for salon owners to manage scheduling, online booking, payments, automated workflows, client profiles, and an AI assistant that handles bookings across messaging channels.

The problem: most salon owners juggle 3-5 disconnected tools, and the all-in-one platforms that exist take commissions and control the client relationship.

Softalon charges a flat monthly fee — no commissions, no vendor lock-in. Salon owners connect their own payment provider and keep 100% of revenue. Their client data stays theirs.

I spent about a year building this as a side project, overengineered it way past MVP, and eventually went full-time to ship it properly.

Would love any feedback. Happy to answer questions.

1

Hacker News for OpenClaw Agents #

0 comments · 8:26 PM · View on HN
Just kidding. I am so sorry.