Daily Show HN


Show HN posts for March 3, 2026

64 posts
43

Reconstruct any image using primitive shapes, runs in-browser via WASM #

github.com
8 comments · 1:55 PM · View on HN
I built a browser-based port of fogleman/primitive — a Go CLI tool that approximates images using primitive shapes (triangles, ellipses, beziers, etc.) via a hill-climbing algorithm. The original tool requires building from source and running from the terminal, which isn't exactly accessible. I compiled the core logic to WebAssembly so anyone can drop an image and watch it get reconstructed shape by shape, entirely client-side with no server involved.
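
For readers unfamiliar with the technique, the core loop is easy to sketch. Below is a toy 1-D analogue (my own illustration, not the ported Go code): approximate a target signal by layering constant-valued segments ("shapes"), hill-climbing each shape's parameters and keeping a mutation only when it lowers the squared error.

```python
import random

def hill_climb_shapes(target, n_shapes=20, mutations=200, seed=0):
    """Toy 1-D analogue of the primitive algorithm: layer constant-valued
    segments onto a blank canvas, accepting a random mutation of the
    current shape only when it lowers the squared error."""
    rng = random.Random(seed)
    n = len(target)
    canvas = [0.0] * n

    def error(c):
        return sum((c[i] - target[i]) ** 2 for i in range(n))

    def paint(c, shape):
        start, length, value = shape
        out = list(c)
        for i in range(start, min(start + length, n)):
            out[i] += value
        return out

    for _ in range(n_shapes):
        # start from a random shape, then hill-climb its parameters
        best = (rng.randrange(n), rng.randrange(1, n + 1), rng.uniform(-1, 1))
        best_err = error(paint(canvas, best))
        for _ in range(mutations):
            start, length, value = best
            cand = (max(0, min(n - 1, start + rng.randint(-2, 2))),
                    max(1, length + rng.randint(-2, 2)),
                    value + rng.uniform(-0.1, 0.1))
            cand_err = error(paint(canvas, cand))
            if cand_err < best_err:           # accept only improvements
                best, best_err = cand, cand_err
        if best_err < error(canvas):          # paint only if the shape helps
            canvas = paint(canvas, best)
    return canvas, error(canvas)
```

The 2-D version is the same loop with triangles/ellipses/beziers as the shape parameterization and per-pixel color error as the score.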

Demo: https://primitive-playground.taiseiue.jp/ Source: https://github.com/taiseiue/primitive-playground

Curious if anyone has ideas for shapes or features worth adding.

42

Open-Source Article 12 Logging Infrastructure for the EU AI Act #

8 comments · 10:11 AM · View on HN
EU legislation (which in many cases also affects UK and US companies) requires being able to faithfully reconstruct agentic events.

I've worked in a number of regulated industries off & on for years, and recently hit this gap.

We already had strong observability, but if someone asked me to prove exactly what happened for a specific AI decision X months ago (and demonstrate that the log trail had not been altered), I could not.

The EU AI Act has already entered into force, and its Article 12 kicks in this August, requiring automatic event recording and six-month retention for high-risk systems, which many legal commentators have suggested reads more like an append-only ledger requirement than standard application logging.

With this in mind, we built a small, free, open-source TypeScript library for Node apps using the Vercel AI SDK that captures inference as an append-only log.

It wraps the model in middleware, automatically logs every inference call to structured JSONL in your own S3 bucket, chains entries with SHA-256 hashes for tamper detection, enforces a 180-day retention floor, and provides a CLI to reconstruct a decision and verify integrity. There is also a coverage command that flags likely gaps (in practice omissions are a bigger risk than edits).
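
The linear hash-chaining idea is simple enough to sketch. This is illustrative only (field names and serialization here are my assumptions, not the library's actual JSONL format): each entry carries the SHA-256 of the previous entry, so editing or deleting any entry invalidates everything after it.

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(log, record):
    """Append a record to a hash-chained log: each entry stores the SHA-256
    of the previous entry's serialized form, so altering or removing any
    entry breaks verification of every entry that follows it."""
    prev_hash = hashlib.sha256(log[-1].encode()).hexdigest() if log else GENESIS
    entry = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    log.append(entry)
    return entry

def verify_chain(log):
    """Walk the chain from the genesis hash; any mismatch means tampering."""
    prev_hash = GENESIS
    for line in log:
        entry = json.loads(line)
        if entry["prev"] != prev_hash:
            return False
        prev_hash = hashlib.sha256(line.encode()).hexdigest()
    return True
```

Note this catches edits but not truncation of the tail, which is one reason a separate coverage/gap check (as the post mentions) matters.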

The library is deliberately simple: TS, targeting Vercel AI SDK middleware, S3 or local fs, linear hash chaining. It also works with Mastra (agentic framework), and I am happy to expand its integrations via PRs.

Blog post with link to repo: https://systima.ai/blog/open-source-article-12-audit-logging

I'd value feedback, thoughts, and any critique.

34

A trainable, modular electronic nose for industrial use #

sniphi.com
25 comments · 4:34 PM · View on HN
Hi HN,

I’m part of the team building Sniphi.

Sniphi is a modular digital nose that uses gas sensors and machine-learning models to convert volatile organic compound (VOC) data into a machine-readable signal that can be integrated into existing QA, monitoring, or automation systems. The system is currently in an R&D phase, but already exists as working hardware and software and is being tested in real environments.

The project grew out of earlier collaborations with university researchers on gas sensors and odor classification. What we kept running into was a gap between promising lab results and systems that could actually be deployed, integrated, and maintained in real production environments.

One of our core goals was to avoid building a single-purpose device. The same hardware and software stack can be trained for different use cases by changing the training data and models, rather than the physical setup. In that sense, we think of it as a “universal” electronic nose: one platform, multiple smell-based tasks.

Some design principles we optimized for:

- Composable architecture: sensor ingestion, ML inference, and analytics are decoupled and exposed via APIs/events

- Deployment-first thinking: designed for rollout in factories and warehouses, not just controlled lab setups

- Cloud-backed operations: model management, monitoring, updates run on Azure, which makes it easier to integrate with existing industrial IT setups

- Trainable across use cases: the same platform can be retrained for different classification or monitoring tasks without redesigning the hardware

One public demo we show is classifying different coffee aromas, but that’s just a convenient example. In practice, we’re exploring use cases such as:

- Quality control and process monitoring

- Early detection of contamination or spoilage

- Continuous monitoring in large storage environments (e.g. detecting parasite-related grain contamination in warehouses)

Because this is a hardware system, there’s no simple way to try it over the internet. To make it concrete, we’ve shared:

- A short end-to-end demo video showing the system in action (YouTube)

- A technical overview of the architecture and deployment model: https://sniphi.com/

At this stage, we’re especially interested in feedback and conversations with people who:

- Have deployed physical sensors at scale

- Have run into problems that smell data might help with

- Are curious about piloting or testing something like this in practice

We’re not fundraising here. We’re mainly trying to learn where this kind of sensing is genuinely useful and where it isn’t.

Happy to answer technical questions.

27

We want to displace Notion with collaborative Markdown files #

moment.dev
12 comments · 6:13 PM · View on HN
Hi HN! We at Moment[1] are working on a Notion alternative which is (1) rich and collaborative, but (2) also just plain-old Markdown files, stored in git (ok, technically in jj), on local disk. We think the era of rigid SaaS UI is, basically, over: coding agents (`claude`, `amp`, `copilot`, `opencode`, etc.) are good enough now that they instantly build custom UI that fits your needs exactly. The very best agents in the world are coding agents, and we want to allow people to simply use them, e.g., to build little internal tools—but without compromising on collaboration.

Moment aims to cover this and other gaps: seamless collaborative editing for teams, more robust programming capabilities built in (including a from-scratch React integration), and tools for accessing private APIs.

A lot of our challenge is just in making the collaborative editing work really well. We have found this is a lot harder than simply slapping Yjs on the frontend and calling it a day. We wrote about this previously and the post[2] did pretty well on HN: Lies I was Told About Collaborative editing (352 upvotes as of this writing). Beyond that, in part 2, we'll talk about the reasons we found it hard to get collab to run at 60fps consistently—for one, the Yjs ProseMirror bindings completely tear down and re-create the entire document on every single collaborative keystroke.

We hope you will try it out! At this stage even negative feedback is helpful. :)

[1]: https://www.moment.dev/

[2]: https://news.ycombinator.com/item?id=42343953

23

Giggles – A batteries-included React framework for TUIs #

github.com
10 comments · 2:26 AM · View on HN
i built a framework that handles focus and input routing automatically for you -- something born out of the things that ink leaves to you, and inspired by charmbracelet's bubbletea

- hierarchical focus and input routing: the hard part of terminal UIs, solved. define focus regions with useFocusScope, compose them freely -- a text input inside a list inside a panel just works. each component owns its keys; unhandled keypresses bubble up to the right parent automatically. no global handler like useInput, no coordination code

- 15 UI components: Select, TextInput, Autocomplete, Markdown, Modal, Viewport, CodeBlock (with diff support), VirtualList, CommandPalette, and more. sensible defaults, render props for full customization

- terminal process control: spawn processes and stream output into your TUI with hooks like useSpawn and useShellOut; hand off to vim, less, or any external program and reclaim control cleanly when they exit

- screen navigation, a keybinding registry (expose a ? help menu for free), and theming included

- react 19 compatible!
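
The focus-bubbling model in the first bullet can be sketched language-agnostically (Python here, purely illustrative; this is not the actual `useFocusScope` React API): a keypress goes to the focused node first, and unhandled keys bubble up through ancestor scopes until someone claims them.

```python
class FocusScope:
    """Minimal sketch of hierarchical input routing: each scope owns its
    own keys, and anything it doesn't handle bubbles up to its parent.
    No global handler, no coordination code."""
    def __init__(self, name, handlers=None, parent=None):
        self.name = name
        self.handlers = handlers or {}   # key -> callback
        self.parent = parent

    def dispatch(self, key):
        node = self
        while node is not None:
            if key in node.handlers:
                return node.handlers[key](key)
            node = node.parent           # bubble up toward the root scope
        return None                      # key fell through unhandled
```

A text input inside a list inside a panel then composes freely: the input handles typing, the list handles navigation keys it cares about, and the panel catches the rest.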

docs and live interactive demos in your browser: https://giggles.zzzzion.com

quick start: npx create-giggles-app

19

Demucs music stem separator rewritten in Rust – runs in the browser #

github.com
3 comments · 4:11 PM · View on HN
Hi HN! I reimplemented HTDemucs v4 (Meta's music source separation model) in Rust, using Burn. It splits any song into individual stems — drums, bass, vocals, guitar, piano — with no Python runtime or server involved.

Try it now: https://nikhilunni.github.io/demucs-rs/ (needs a WebGPU-capable browser — Chrome/Edge work best)

GitHub: https://github.com/nikhilunni/demucs-rs

It runs three ways:

- In the browser — the full ML inference pipeline compiles to WASM and runs on your GPU via WebGPU. No uploads, nothing leaves your machine.

- Native CLI — Metal on macOS, Vulkan on Linux/Windows. Faster than the browser path.

- DAW plugin — VST3/CLAP plugin for macOS with a native SwiftUI UI. Load a track, separate it, drag stems directly into your DAW timeline, or play as a MIDI instrument with solo / faders.

The core inference library is built on Burn (https://burn.dev), a Rust deep learning framework. The same `demucs-core` crate compiles to both native and `wasm32-unknown-unknown` — the only thing that changes is the GPU backend.

Model weights are F16 safetensors hosted on Hugging Face and downloaded / cached automatically on first use on all platforms. Three variants: standard 4-stem (84 MB), 6-stem with guitar/piano (84 MB), and a fine-tuned bag-of-4-models for best quality (333 MB).

The existing implementations I found online were mostly wrappers around the original Python implementation, and not very portable -- the model works remarkably well and I wanted to be able to quickly create samples / remixes without leaving the DAW or my browser. Right now the implementation is pretty macOS-heavy, as that's what I'm testing with, but all of the building blocks for other platforms are ready to build on. I want this to grow into a general utility for music producers, not just "works on my machine."

It was a fun first foray into DSP and the state of the art of ML over WASM, with lots of help from Claude!

13

Agent Action Protocol (AAP) – MCP got us started, but is insufficient #

github.com
2 comments · 5:22 PM · View on HN
Background: I've been working on agentic guardrails because agents act in expensive/terrible ways and something needs to be able to say "Maybe don't do that" to the agents, but guardrails are almost impossible to enforce with the current way things are built.

Context: We keep running into so many problems and limitations with MCP today. It was created so that agents have context on how to act in the world; it wasn't designed to become THE standard rails for agentic behavior. We keep tacking things onto it trying to improve it, but it needs to die a SOAP death so REST can rise in its place. We need a standard protocol for whenever an agent is taking action. Anywhere.

I'm almost certainly the wrong person to design this, but I'm seeing more and more people tack things on to MCP rather than fix the underlying issues. The fastest way to get a good answer is to submit a bad one on the internet. So here I am. I think we need a new protocol. Whether it's AAP or something else, I submit my best effort.

Please rip it apart; let's make something better.

12

Online OCR Free – Batch OCR UI for Tesseract, Gemini and OpenRouter #

onlineocrfree.qzz.io
2 comments · 8:12 PM · View on HN
Built this because people working with large document sets had no free tool that handled batch processing cleanly. Tesseract is free and runs locally. For anything that needs more accuracy — Google Vision, Gemini, or any OpenRouter model — you bring your own API key. No subscription, no markup on your usage. Export as TXT, JSON, XML, or PDF. AI engines support custom prompts so you can translate, extract form fields, or get structured output in one step.

App: https://onlineocrfree.qzz.io

Source: https://github.com/naimurhasan/online-ocr-free
10

Mozilla.ai introduces Clawbolt, an AI Assistant for the trades #

github.com
0 comments · 6:30 PM · View on HN
Hey everyone, Nathan here: I'm an MLE at Mozilla.ai. I can't tell you how many things around my house have had me saying "I would really like to have somebody take a look at that." But here's the problem: all the people in the trades are extremely overwhelmed with work. There is a lot to be done and not enough people to do it.

One of my best friends runs his own general contracting business. He's extremely talented and wants to spend his time working on drywall, building staircases, and listening to Mumford and Sons while throwing paint onto a ceiling. But you know what gets in the way of that wonderful lifestyle that all us software engineers dream about?

ADMINISTRATION.

He thought running his own business would be 85% showing up and doing the work, but it turns out a large chunk of the time is spent talking to clients to schedule estimates, working with home management companies to explain the details of an invoice, and generally just managing all of the information he gathers in a single day.

Luckily for the world, AI is here to help with this. Tech like openclaw has really opened our eyes to the possibilities, and tech to help out small businesses like these is now within reach.

That's why I'm excited to share an initial idea we're trying out: clawbolt. It's a Python-based project that takes inspiration from the main features that make openclaw so powerful: SOUL.md, heartbeat proactive communication, memory management, and communication over channels like WhatsApp and iMessage. With clawbolt, we're working on integrating our latest work with any-llm and any-guardrail, to help make clawbolt secure and to ease onboarding.

This is all new, so this is a call for ideas, usage, and bug reports. Most of us that try to get plumbers/roofers/handymen to come help us with a home project know how overwhelmed they are with admin work when they're a small team. I'm hoping that we can make clawbolt into something that helps enable these people to focus on doing what they love and not on all the paperwork.

Let me know what you think! Docs at https://mozilla-ai.github.io/clawbolt/

10

Kanon 2 Enricher – the first hierarchical graphitization model #

isaacus.com
6 comments · 8:55 AM · View on HN
Hey HN,

This is Kanon 2 Enricher, the first hierarchical graphitization model. It represents an entirely new class of AI models designed to transform document corpora into rich, highly structured knowledge graphs.

In brief, our model is capable of:

- Entity extraction, classification, and linking: identifying key entities like individuals, companies, governments, locations, dates, documents, and more, and classifying and linking them together.

- Hierarchical segmentation: breaking a document up into its full hierarchy, including divisions, sections, subsections, paragraphs, and so on.

- Text annotation: extracting common textual elements such as headings, signatures, tables of contents, cross-references, and the like.

We built Kanon 2 Enricher from scratch. Every node, edge, and label in the Isaacus Legal Graph Schema (ILGS), which is the format it outputs to, corresponds to at least one task head in our model. In total, we built 58 different task heads jointly optimized with 70 different loss terms.

Thanks to its novel architecture, unlike your typical LLM, Kanon 2 Enricher doesn't generate extractions token by token (which introduces the possibility of hallucinations) but instead directly classifies all the tokens in a document in a single shot. This makes it really fast.
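
To make the distinction concrete, here is a toy sketch (my illustration, not Kanon's architecture) of single-shot token classification: every token gets a label in one pass, so extractions can only ever be spans of the input document, never invented text.

```python
def classify_tokens(tokens, score_fn, labels):
    """Single-shot extraction: assign each input token the argmax label
    from a scoring function. Unlike generative extraction, the output can
    only reference tokens that actually exist in the document."""
    tagged = []
    for i, tok in enumerate(tokens):
        scores = score_fn(i, tok)                 # one score vector per token
        tagged.append((tok, max(labels, key=lambda l: scores[l])))
    return tagged

def spans(tagged, label):
    """Collect contiguous runs of a label into extracted spans."""
    out, cur = [], []
    for tok, lab in tagged:
        if lab == label:
            cur.append(tok)
        elif cur:
            out.append(" ".join(cur))
            cur = []
    if cur:
        out.append(" ".join(cur))
    return out
```

In a real model `score_fn` is a learned task head over contextual embeddings; the point is that labeling is a classification pass over existing tokens, not token-by-token generation.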

Because Kanon 2 Enricher's feature set is so wide, there are a myriad of applications it can be used for, from financial forensics and due diligence all the way to legal research.

One of the coolest applications we've seen so far is where a Canadian government built a knowledge graph out of thousands of federal and provincial laws in order to accelerate regulatory analysis. Another cool application is something we built ourselves, a 3D interactive map of Australian High Court cases since 1903, which you can find right at the start of our announcement.

Our model has already been in use for the past month, since we released it through a closed beta that included Harvey, KPMG, Clifford Chance, Clyde & Co, Alvarez & Marsal, Smokeball, and 96 other design partners. Their feedback was instrumental in improving Kanon 2 Enricher before its public release, and we're immensely thankful to each and every beta participant.

We're eager to see what other developers manage to build with our model now that it's out publicly.

10

Pane – Give your AI access to your financial data via MCP #

pane.money
1 comment · 11:03 PM · View on HN
Hey HN!

Today I'm launching Pane, a tool that gives AI context on your financial data.

Once connected, any MCP-compatible client (Claude, Cursor, ChatGPT, etc.) can answer questions like:

- "What did I spend on food this month?"

- "What's my net worth right now?"

- "Show me my recurring subscriptions"

- "How much do I owe across all credit cards?"

- "What are my investment holdings?"

It's been really transformative in helping me and some of my friends understand our finances, where we are overspending, or even where we were being double billed in some cases.

I'm very aware that this is a somewhat controversial idea. Banking data is an extremely personal set of data, and connecting it with something that in many cases can hallucinate and is often hosted by a third party is understandably concerning. Many people already do this by uploading billing statements, CSVs, etc. And this product is definitely for those early adopters, myself included. I'm really interested to hear some of the feedback from this community regarding this idea. I totally understand this is not for everyone, but if you do let it into your life I believe it can have a positive impact on your relationship with money.

If you'd like to give it a try, use code `HACKERNEWS` for 50% off your first month. If you try it and decide it is not for you within the first week, please reach out to [email protected] and I can set you up with a refund.

Excited to hear feedback, critique, thoughts - everything :~)

9

Cortexa – Bloomberg terminal for agentic memory #

cortexa.ink
3 comments · 4:45 AM · View on HN
Hi HN — I’m Prateek Rao. My cofounders and I built Cortexa, which we describe as a Bloomberg terminal for agentic memory.

A pattern I keep seeing: when agents misbehave, most teams iterate on prompts and then “fix” it by plugging in a memory layer (vector DB + RAG). That helps sometimes — but it doesn’t guarantee correctness. In practice it often introduces a new failure mode: the agent retrieves something dubious, writes it back to memory as if it’s truth, and that mistake becomes sticky. Over time you get memory pollution, circular hallucination loops, and debugging turns into log archaeology.

What Cortexa does:

1. Agent decision forensics (end-to-end “why”): trace outputs/actions back to the exact retrievals, memory writes, and tool calls that caused them.

2. Memory write governance: intercept and score memory writes (0–1), and optionally block/quarantine ungrounded entries before they poison future runs.

3. Memory hygiene + vector store noise control: automatically detect and remove near-duplicate / low-signal entries so retrieval stays high-quality and storage + inference costs don’t creep up.
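
Point 2 can be sketched as a simple write gate (a hypothetical `score_fn` stands in for whatever groundedness scorer is used; this is my illustration, not Cortexa's API):

```python
def govern_write(memory, quarantine, entry, score_fn, threshold=0.7):
    """Sketch of memory-write governance: score a candidate write for
    groundedness in [0, 1]; commit it only above a threshold, otherwise
    quarantine it for review instead of letting it poison future runs."""
    score = score_fn(entry)
    record = {"entry": entry, "score": round(score, 3)}
    if score >= threshold:
        memory.append(record)
        return "committed"
    quarantine.append(record)
    return "quarantined"
```

Keeping the score on the record also gives the forensics layer something to trace back to when a later output depends on that memory.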

Why this matters: Observability is the missing layer for agentic AI. Without it, autonomy is fragile: small errors silently compound, deployments become risky, and engineering cost goes up because failures aren’t reproducible or attributable.

Who this is for:

1. Teams shipping agentic workflows in production

2. Anyone fighting "unknown why" failures, memory pollution, or runaway context costs

3. Engineers who want auditability + faster debugging loops

Site: https://cortexa.ink/

Would love feedback from anyone running agents at scale:

1. What's the most painful agent failure mode you've seen in production?

2. What signals would you want in an "agent terminal" (retrieval diffs, memory blame, tool-call traces, alerts, etc.)?

7

OctopusGarden – An autonomous software factory (specs in, code out) #

github.com
9 comments · 12:07 AM · View on HN
I built this over the weekend after reading about StrongDM's software factory (their writeup: https://factory.strongdm.ai/, Simon Willison's deep dive: https://simonwillison.net/2026/Feb/7/software-factory/, Dan Shapiro's Five Levels: https://www.danshapiro.com/blog/2026/01/the-five-levels-from...). OctopusGarden is an open-source implementation of the pattern StrongDM described: holdout scenarios, probabilistic satisfaction scoring via LLM-as-judge, and a convergence loop that iterates until the code works; no human code review in the loop.

What stood out to me was that this architecture largely rhymes with the coding workflows I and others already do with coding agents. It's basically automating the connective tissue between the workflows I was already doing in Claude Code, and then brute-forcing a result. In the dark factory model, a spec goes in, code gets generated, built in Docker, validated against scenarios the agent never saw, scored, and failures feed back until it converges.
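
The loop described above can be sketched as follows (function arguments are placeholders for illustration, not OctopusGarden's actual interfaces): generate, build, score against holdout scenarios, and feed failures back until the satisfaction score converges.

```python
def converge(spec, generate, build, holdout_scenarios, judge,
             target=0.9, max_iters=5):
    """Sketch of the dark-factory loop: generate code from a spec, build it,
    score it against holdout scenarios the generator never saw, and feed
    failures back until the satisfaction score converges or we give up."""
    feedback = None
    for i in range(max_iters):
        code = generate(spec, feedback)
        if not build(code):
            feedback = "build failed"
            continue
        results = [judge(code, s) for s in holdout_scenarios]  # each in [0, 1]
        score = sum(results) / len(results)
        if score >= target:
            return code, score, i + 1
        failures = [s for s, r in zip(holdout_scenarios, results) if r < target]
        feedback = f"failed scenarios: {failures}"
    return None, 0.0, max_iters
```

The important structural point is that `judge` only ever sees scenarios withheld from `generate`, which is what keeps the score from being gamed.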

I've tried it with mostly standard CRUD/REST API apps and it works. I haven't tried anything with HTML/JS yet. You can try the sample specs in the repo.

Some raw notes from the experience:

1. I don't want to maintain the code these factories generate. It works. The phenotype is (largely) correct, but the genotype is pretty wild and messy. I did not use OctopusGarden to build OctopusGarden (you can tell because it uses strict linting and tests). I know the point of these systems is zero human in the loop, but I think there's a real opportunity to get factories to generate code that humans actually want to maintain. I'm going to work on getting OctopusGarden there.

2. Compliance might be a nightmare. In my day job I think a lot about ISO 27001 and SOC 2 compliance. The idea of deploying dark-factory-generated projects into my environments and checking compliance boxes sounds painful. That might just be the current state of OctopusGarden and the code it generates, but I think we can get to a point where generated code is completely linted, statically checked, and tested inside the factory. That's not OctopusGarden today, but maybe it will be there next week? I can see this moving fast.

3. These dark factory apps will be hard to debug. There was a Claude outage today and I couldn't run my smoke tests or generate new apps. I don't want to maintain services that can't be debugged and fixed by a human in a pinch. We're already partially there with AI-assisted code, but this factory-generated code is even more convoluted. Requiring AI to create a new app version is probably worth it...but it's still yet another thing between you and quickly patching an urgent bug.

4. Security needs a better story. These things need real security hardening. Maybe that's just better spec files and scenarios, maybe it's something more. I'm going to drink a strong cola and think about this one.

5. The unit of responsibility keeps growing. Last year we said code must come in PR-sized bites — that's how we manage risk. Now we're talking about deploying meshes of services created and deployed with no humans in the loop (except at creation). AI-generated services could really push the scale of what people are willing to accept responsibility for. Most SRE teams manage 1-5 services at big companies. Will that number increase per team? How much GDP is one person willing to manage via agents? Just a shower thought.

6. I was surprised this works. I'm surprised at how easy it was to make. I'm surprised more of these aren't out there already. I only did a couple of GitHub searches and didn't find many. I'm bad at searching. Sorry if I didn't find your project.

5

AI tool that brutally roasts your AI agent ideas #

whycantwehaveanagentforthis.com
1 comment · 5:24 PM · View on HN
I built whycantwehaveanagentforthis.com — submit any problem and get a structured analysis of whether an AI agent could solve it. The output includes a creative agent name, feasibility verdict, real competitor analysis (actual products with URLs), a kill prediction (which big tech company makes this obsolete, when), build estimate, and a savage one-liner. Built with Next.js + Claude API (Haiku). Runs on ~$5/day. Rate limited with Upstash Redis (7 layers). The prompt engineering to get accurate, non-hallucinated competitor analysis was the hardest part. Free, no signup. Feedback welcome — especially on AI response quality.
4

GitHub Commits Leaderboard #

ghcommits.com
0 comments · 2:55 AM · View on HN
I made a public leaderboard for all-time GitHub commit contributions.

https://ghcommits.com

You can connect your GitHub account and see where you rank by total commit contributions.

It uses GitHub’s contribution data through GraphQL, so it is based on GitHub’s counting rules rather than raw git history. Private contributions can be included. Organization contributions only count if you grant org access during auth.
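
For the curious, the contribution counts come from fields like `contributionsCollection.totalCommitContributions` in GitHub's GraphQL API. A minimal request builder might look like the sketch below; note that without explicit date arguments the collection covers roughly the last year, so an all-time total needs one query per contribution year (omitted here).

```python
import json
import urllib.request

QUERY = """
query($login: String!) {
  user(login: $login) {
    contributionsCollection {
      totalCommitContributions
      restrictedContributionsCount
    }
  }
}
"""

def build_request(login, token):
    """Build a GitHub GraphQL request for a user's commit contributions.
    Field names match the public contributionsCollection schema; the
    token needs the scopes required for private contribution data."""
    body = json.dumps({"query": QUERY, "variables": {"login": login}}).encode()
    return urllib.request.Request(
        "https://api.github.com/graphql",
        data=body,
        headers={"Authorization": f"bearer {token}",
                 "Content-Type": "application/json"},
    )
```

`restrictedContributionsCount` is what lets private contributions be included without exposing the underlying repos.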

There is also a public, read-only API.

https://ghcommits.com/api

The main thing I learned building it is that commit counting sounds straightforward until you try to match how GitHub actually attributes contributions.

I’d be interested in feedback on whether commit contributions are the right ranking metric, and whether I should also support other contribution types.

4

Qast – Cast anything (files, URLs, screen) to any TV from the CLI #

github.com
1 comment · 1:55 PM · View on HN
Hi HN,

I built qast because I couldn’t find a tool that “just works” for casting content to a TV. Some TVs support YouTube natively, some do screen mirroring, and only a handful actually show up in Chrome's cast menu. Even when you do get a connection, one TV might accept MKV but not WebM, while another just drops the audio entirely.

qast sidesteps the compatibility problem. It takes whatever you give it -- a local file, a YouTube URL, your desktop screen, a specific window, or a webpage rendered via headless Chromium -- and transcodes it on the fly to H.264/AAC. Because practically every smart TV in the last decade supports this lowest common denominator, it just works.
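
The transcoding step boils down to one ffmpeg invocation per cast. Here is a sketch of the kind of command involved (flags are illustrative of the approach, not qast's actual pipeline):

```python
def transcode_cmd(source, dest_url):
    """Sketch of an ffmpeg invocation that normalizes arbitrary input to
    the H.264/AAC lowest common denominator most smart TVs accept."""
    return [
        "ffmpeg",
        "-re",                 # pace reading at native frame rate for live playback
        "-i", source,          # file, stream URL, or capture device
        "-c:v", "libx264",     # video codec virtually every TV decodes
        "-preset", "veryfast", # favor realtime transcoding over file size
        "-c:a", "aac",         # audio codec TVs reliably support
        "-f", "mpegts",        # streamable container for on-the-fly delivery
        dest_url,
    ]
```

The output URL would point at whatever local HTTP endpoint the TV is told to pull from; the per-device quirks (DLNA headers, Chromecast app handshakes) live outside this command.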

(Note: You currently need to be running Linux to use it. macOS/Windows support is on the roadmap).

Under the hood:

Written in Python.

Relies on ffmpeg for the heavy lifting (transcoding, window capture).

Uses yt-dlp for extracting web video streams.

Uses Playwright to render web dashboards in a headless browser before casting.

Auto-discovers Chromecast, Roku, and DLNA devices on your local network.

Mostly, I want to get some early feedback. If you have experience wrestling with this problem (especially the endless DLNA quirks) or have ideas for other useful features, that would be fantastic as well.

4

Rriftt_ai.h – A bare-metal, dependency-free C23 tensor engine #

github.com
0 comments · 8:19 AM · View on HN
Hi HN, I built rriftt_ai.h because I hit my breaking point with the modern deep learning stack.

I wanted to train and run Transformers, but I was exhausted by gigabyte-sized Python environments, opaque C++ build systems, and deep BLAS dependency trees. I wanted to see what it actually takes to execute a forward and backward pass from absolute scratch.

The result is a single-header, stb-style C library written in strict C23.

Architectural decisions I made:

- *Zero dependencies:* It requires nothing but a C compiler and the standard math library.

- *Strict memory control:* You instantiate a `RaiArena` at boot. The engine operates entirely within that perimeter. There are zero hidden `malloc` or `free` calls during execution.

- *The full stack:* It natively implements Scaled Dot-Product Attention, RoPE, RMSNorm, and SwiGLU. I also built the backprop routines, Cross-Entropy loss, AdamW optimizer, and a BPE tokenizer directly into the structs.
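
As a toy illustration of the arena discipline (in Python rather than C, and not the actual `RaiArena` API): one upfront buffer, allocation is bump-pointer arithmetic, and nothing is freed individually.

```python
class Arena:
    """Toy bump allocator in the spirit described above: one upfront
    buffer, aligned bump-pointer allocation, no per-object free. You
    reset the whole arena at once."""
    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.offset = 0

    def alloc(self, size, align=8):
        start = (self.offset + align - 1) & ~(align - 1)  # round up to alignment
        if start + size > len(self.buf):
            raise MemoryError("arena exhausted")
        self.offset = start + size
        return memoryview(self.buf)[start:start + size]

    def reset(self):
        self.offset = 0    # everything "freed" in O(1)
```

The payoff in a tensor engine is determinism: peak memory is known at boot, and a training step leaves no hidden allocator state behind.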

It is currently public domain (or MIT, your choice). The foundation is stable and deterministic, but it is currently pure C math. I built this architecture to scale, so if anyone wants to tear apart my C23 implementation, audit the memory alignment, or submit SIMD/hardware-specific optimizations for the matmul operations, I'm actively reviewing PRs.

4

Starcraft2 replay rendering engine and AI coach #

starcraft2.ai
0 comments · 4:09 AM · View on HN
Starcraft2 is an old game, but it's always lacked a way to visualize game replays outside of the game itself.

I built a replay rendering engine from scratch using the replay files and Claude Code.

The replay files contain sampled position coordinates and commands that the player inputs. So I built an isometric view using the map and overlayed unit icons over the map, then interpolated the positions that units move in over time.
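
The interpolation step is straightforward to sketch (my illustration, not the engine's actual code): given sparse `(time, x, y)` samples for a unit, its rendered position at any frame time is a linear blend of the two surrounding samples.

```python
def interpolate(samples, t):
    """Linearly interpolate a unit's (x, y) position at time t from
    sparse (time, x, y) samples, clamping outside the sampled range."""
    samples = sorted(samples)
    if t <= samples[0][0]:
        return samples[0][1:]
    if t >= samples[-1][0]:
        return samples[-1][1:]
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)      # fraction of the way through the segment
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
```

Run per unit per frame, this is what turns the replay's sampled coordinates into smooth movement on the isometric map.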

I also extracted additional metrics from the game data as well - some are derived on top of other metrics.

Finally, I pass all this context into a LLM for it to critique gameplay and offer strengths and improvements per player.

It's not perfect, but it's a good starting point to iterate and improve on.

Let me know what you think!

3

Git-hunk – Stage hunks by hash, no "-p" required #

git-hunk.paulie.app
0 comments · 11:32 PM · View on HN
git add -p is the only built-in way to stage individual hunks, and it's interactive — you step through hunks one at a time answering "y/n/q/a/d/e/?". That works fine for humans at a keyboard, but it's completely unusable for LLM agents, shell scripts, and CI pipelines.

git-hunk is the non-interactive alternative. It gives every hunk a stable SHA-1 content hash, then lets you stage by hash:

$ git hunk list --oneline
  a3f7c21  src/main.zig   42-49  if (flags.verbose) {…
  b82e0f4  src/parse.zig  15-28  fn parseArgs(alloc: …
$ git hunk add a3f7c21
  staged a3f7c21 → a3f7c21  src/main.zig

The key design choice: hashes are computed from the immutable side's line numbers, so staging one hunk never changes another hunk's hash. This makes multi-step scripted workflows reliable — you can enumerate hunks, make decisions, then stage them without the targets shifting underneath you.
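
A sketch of the stable-hash idea (illustrative only; git-hunk's exact hashing scheme may differ): derive the hunk id from the immutable pre-image side's position plus the hunk content, so staging one hunk never shifts another hunk's hash.

```python
import hashlib

def hunk_hash(old_start, old_lines, new_lines):
    """Hash a hunk by its pre-image position and content. Because the
    pre-image side doesn't move when other hunks are staged, the hash
    stays stable across multi-step scripted workflows."""
    h = hashlib.sha1()
    h.update(str(old_start).encode())
    for line in old_lines + new_lines:
        h.update(b"\n" + line.encode())
    return h.hexdigest()[:7]
```

An agent can therefore enumerate hunks once, decide, and stage by hash later without the targets shifting underneath it.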

Other things it does: line-range selection (`a3f7:3-5,8`), `--porcelain` output for machine consumption, `count` for CI guards, `check --exclusive` for hash validation, stashing individual hunks, and `restore` to selectively discard changes.

Single static binary, written in Zig, zero runtime dependencies beyond git itself. Install via `brew install shhac/tap/git-hunk`.

I built this because I was trying to run AI agents in parallel, and stuck with file-level editing they'd fight each other over which changes they wanted to put into commits. Now I can have multiple agents work in parallel and commit cleanly without needing worktrees.

3

Konform Browser v140.8.0-105 #

codeberg.org
0 comments · 2:53 AM · View on HN
Konform Browser

Some highlights since last time:

- Rebased on latest ESR release (obviously)

- Been porting over relevant security-motivated changes from Tor Browser[0] and Mozilla

- Significantly improved fingerprinting situation, making anonymity less of a pipe dream

- Addons no longer leaking their local IDs in HTTP headers

- ML/"AI" features adapted for local/offline use when enabled (while proprietary cloud snoopies fully removed)

Been quiet on HN previously so keeping this submission brief. Release notes are probably the best resource right now to get an overview and feel for the project. Happy to talk more if anyone has any questions!

Would also love to hear feedback from anyone who's either tried it out or dug into the repo.

[0]: Big thanks and cred to inspiring maintainers and contributors there

---

Mastodon: https://techhub.social/@konform

Releases: https://codeberg.org/konform-browser/source/releases

Past Show HN submissions:

- v140.7.0-103 https://news.ycombinator.com/item?id=46713226

- v140.7.0-108 https://news.ycombinator.com/item?id=46956420

3

A Puzzle Game Based on Non-Commutative Operations #

commutators.games faviconcommutators.games
0 评论3:20 AM在 HN 查看
While solving a Skewb (https://en.wikipedia.org/wiki/Skewb) I thought it would be interesting to present its subproblems as puzzle games; one thing led to another and here is the result.

There are definitely some UX problems, so I'm looking for feedback and thoughts.

The best part of this game is that level generation and difficulty analysis can be automated. I have 15 tested and 5 experimental levels here.

I enjoy the 15th level the most; it has an intuitive solution.

You can try the competitive mode with a friend by sharing the link with them.

If I can bring the level count to thousands, I will add a ranking system.

My mind keeps racing with the possibilities, but I can't quite prioritize at the moment.

All kinds of feedback and collaboration requests are welcome!

3

PantheonOS–An Evolvable, Distributed Multi-Agent System for Science #

pantheonos.stanford.edu faviconpantheonos.stanford.edu
0 评论4:49 AM在 HN 查看
We are thrilled to share our preprint on PantheonOS, an evolvable, distributed multi-agent system for automatic genomics discovery.

Preprint: https://www.biorxiv.org/content/10.64898/2026.02.26.707870v1

Website (online platform, free to everyone): https://pantheonos.stanford.edu/

PantheonOS unites LLM-powered agents, reinforcement learning, and agentic code evolution to push beyond routine analysis — evolving state-of-the-art algorithms to super-human performance.

Applications:

- Evolved batch correction (Harmony, Scanorama, BBKNN)
- RL-augmented gene panel design
- Intelligent routing across 22+ virtual cell foundation models
- Autonomous discovery from newly generated 3D early mouse embryo data
- Integrated human fetal heart multi-omics with 3D whole-heart spatial data

PantheonOS is highly extensible: although it is currently showcased with applications in genomics, the architecture is very general. The code has now been open-sourced, and we hope to build a new-generation AI data science ecosystem.

3

wo; a better CD for repo management #

github.com favicongithub.com
0 评论6:10 AM在 HN 查看
This is something that I've been wanting to make and use myself for the longest time.

If you're anything like me, you have a million projects in a million places (I have 56 repositories!) and they're all from different people. I'm a big CLI and Neovim user, so for the longest time I've had to do the following:

cd some/long/path/foo/project

nvim .

This gets really infuriating after a while.

wo changes this to wo project

and you're already cd'd into your project.

running wo scan --root ~/workspaces --depth <depth>

will automatically scan for git repos (or .wo files, if you choose not to track your repo with git) and add them to your repo list. Project names are inferred from the repo's remote URL, so the repos themselves can live anywhere.

If your repo is local, project owners are inferred from the enclosing folder (e.g. I have a local folder, so project owner will be called local)

But I think the killer feature is hooks.

remember that nvim .?

now you can create custom hooks. on enter, we can automatically bring up nvim. so wo project brings up neovim with all your files loaded.

You can define a hook called claude, and call it like this: wo project claude

Your hook can automatically bring up Claude Code or any other CLI tool of your choice. You can run cursor ., code ., zen ., or any arbitrary script of your liking. Hooks can be global as well, so there's no need to redefine them for each directory.

I've been using this for a few weeks and it's been exactly what I needed. There are a ton of cool features in the README that I didn't mention, and feel free to star it! (I need more talking points for my resume.) Also feel free to ask me questions, tell me about similar implementations, or suggest features you'd like to see!

Whole thing is open source and MIT licensed. Let me know if you guys like it!

3

FileHunter, Self-hosted file manager that remembers disconnected drives #

github.com favicongithub.com
2 评论11:52 AM在 HN 查看
Hi HN. I built File Hunter because I have a drawer full of USB drives and no idea what's on them without plugging each one in.

File Hunter is a self-hosted, web-based file manager. You point it at any folder — USB drive, network share, DVD — and it catalogs everything into SQLite. When you unplug the drive, the full catalog stays. You can browse, search, and review files on storage that isn't connected.

The other thing it does well is deduplication. A three-tier hashing strategy (file size → xxHash64 partial → SHA-256 full) finds exact duplicates across all your locations with minimal I/O. Then you can consolidate: keep one copy, stub the rest, full audit trail.

Some numbers: I run it on a catalog of ~7 million files across 9.6 TB and 10 locations. The UI stays responsive during scans.

Tech: Python, Starlette, uvicorn, SQLite (WAL mode), vanilla JavaScript. No frameworks, no build step, no npm. One curl command to install:

    curl -fsSL https://filehunter.zenlogic.uk/install | bash
It's MIT-licensed and free. There's a paid Pro tier that adds remote agents (scan machines across the network into one catalog), but everything on the GitHub page is the free version and will stay that way.

  Website: https://filehunter.zenlogic.uk
  GitHub: https://github.com/zen-logic/file-hunter
Happy to answer questions about the architecture, the dedup strategy, or anything else.
3

Ablo - AI slides without the generic look or layout restrictions #

ablo.finance faviconablo.finance
2 评论1:10 PM在 HN 查看
I hate the feeling of sitting in front of an empty deck or slide trying to figure out where to begin. But I don't want the same generic AI output for every slide deck; it's just sad.

That's why I've tried my hardest to make AI slide generation truly free: free in the sense that it can create whatever you want without being locked into fixed template systems or rigid layout grids like Gamma imposes.

As of now, it does that. Layout calculations can still break when placing multiple elements simultaneously (cards or charts can fall off the edge), but it's easily fixed by prompting the AI to read the slide and correct it. I'm pretty proud of where it's gotten. Style references work well: mentioning McKinsey, Apple, Microsoft, or Bauhaus gives the AI a clear visual target. It can pull in images and content from URLs, generate images, and everything is fully editable on a DOM-based slide canvas with modern CSS.

It runs on Claude Sonnet 4.6 for cost reasons. I'm not funded, so I can't let people go completely loose for free, sorry about that, and that's why sign-in is required. I was an investment banking / coding kid who turned vibe coder until I got reasonably good at actual coding. Would love to hear what you think and whether you'd use this over Gamma / Chronicle / PowerPoint with Claude / Canva.

Please try it out, roast me, send me some slides on X, would love to see what you create https://x.com/Lurrree

3

PingMeBud – A macOS app that listens to meetings so you don't have to #

pingmebud.com faviconpingmebud.com
0 评论2:51 PM在 HN 查看
I'm a developer who spends way too much time in meetings, and most of the time I'm not needed. But I have to stay there half-listening in case someone mentions me. Because of that, I can neither do actual work nor zone out in peace.

I realized I needed a way to safely not care about my meetings. So I built an app.

PingMeBud lets you completely zone out of a meeting and pings you when someone says your name or any keyword(s) you care about. The idea is simple: You join a meeting, hit start, minimize, and go do deep work (or go make coffee). When a keyword is detected, you get a native macOS notification with enough surrounding context to jump back in without looking lost. There's also an optional full-screen red border flash that grabs your attention even if you're deep in another app.

Technical details:

* Uses Whisper (local AI) to transcribe meeting audio purely as a means to detect keywords. Optimized for Apple Silicon (M1-M4).

* Audio captured via ScreenCaptureKit (system audio only, not your mic). You can even have your speakers muted and still get alerts.

* Everything runs locally. Nothing leaves your machine. Transcripts live in RAM only and are wiped when you end the session or close the app.

* No cloud, no accounts, no telemetry.

* Try it free for 7 days, then $29 one-time

* Only network call: a license ping to Polar.sh every 7 days. Between pings, the app can work fully offline.

* Supports wildcard * keyword matching (e.g., meet* for "meeting", "meetings", "meetup", or *sync for `async`, `nsync`, etc.).

* You can also track silences, perfect for when someone asks a question and is waiting for a reply.

* Requires macOS 12.3+ and Apple Silicon. Made with Electron + React.
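The wildcard matching described in the list above can be approximated with Python's fnmatch (a stand-in for illustration; the app's actual matcher isn't shown):

```python
import fnmatch

def keyword_hit(word: str, patterns: list[str]) -> bool:
    """Case-insensitive wildcard match: 'meet*' catches 'meeting' and
    'meetup'; '*sync' catches 'async'."""
    w = word.lower()
    return any(fnmatch.fnmatchcase(w, p.lower()) for p in patterns)
```

Run against each transcribed word, a match is what would trigger the native notification.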

It's ideal if you're the kind of person juggling lots of meetings, or multiple overlapping meetings (great for OE devs).

Happy to answer any questions!

3

DejaShip – an intent ledger to stop AI agents from building duplicates #

github.com favicongithub.com
0 评论4:43 PM在 HN 查看
When you give an AI agent a popular task like "build a micro-SaaS to make money," hundreds of agents are triggered to build the exact same things.

DejaShip is a semantic coordination layer to stop this wasted compute. Before writing code, the agent checks the "airspace". If many similar projects already exist, the agent can pivot to a new idea, or, if it is free in its choice, it can opt to collaborate instead of blindly cloning them.

It works as an MCP server. Open source (MIT), no accounts or API keys required.

Under the hood: The backend embeds keywords locally using fastembed to search pgvector for semantic collisions.
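A toy version of the collision check (pure-Python cosine similarity standing in for pgvector, and the 0.85 threshold is my assumption; tuning it is exactly where the false positives come from):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def collides(new_vec: list[float],
             existing: list[tuple[str, list[float]]],
             threshold: float = 0.85) -> list[str]:
    """Return names of existing projects whose embeddings sit too
    close to the new idea's embedding."""
    return [name for name, vec in existing if cosine(new_vec, vec) >= threshold]
```

In the real system the nearest-neighbor scan would be a pgvector index query rather than a linear pass, but the decision rule is the same.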

To be transparent: The MVP is new, so the data corpus is tiny today. The value of this protocol only grows as more agent operators plug it in - or help decide how this coordination can be improved. (One of the biggest issues right now is the amount of false positives; it definitely needs improvement).

Site links and MCP installation instructions are on the GitHub README. (npmjs package: dejaship-mcp).

I'd love your brutal feedback.

3

OpenMandate – Declare what you need, get matched #

openmandate.ai faviconopenmandate.ai
2 评论6:26 PM在 HN 查看
Hi HN, I'm Raj.

We all spend a large chunk of our time looking for the right job, cofounders, or hires. Post on boards, search, connect, ask around. The hit ratio is very low. There's this whole unsaid rule that you have to build your network for this kind of thing. Meanwhile, the person you need is out there doing the exact same thing on their side. Both of you hunting, neither finding.

What if you just declare what you need and someone does the finding for you?

That's what I built - OpenMandate. You declare what you need and what you offer - a senior engineer looking for a cofounder in climate tech, a startup that needs a backend engineer who knows distributed systems. Each mandate gets its own agent. It talks to every other agent in the pool on your behalf until it finds the match. You don't browse anything. You declare and wait.

Everything is private by default. Nobody sees who else is in the pool. Nothing is revealed unless both sides accept. No match? Nobody ever knows you were looking. No more creating profiles, engaging for the sake of engagement, building networks when you don't want to.

What's live:

- openmandate.ai

- pip install openmandate / npm install openmandate

- MCP server for Claude Code / Cursor / any MCP client

- github.com/openmandate

3

Voquill, an open source and cross-platform alternative to wisprflow #

github.com favicongithub.com
1 评论6:33 PM在 HN 查看
Hey HN! We love voice dictation, but wanted an open source version for transparency, privacy, and something that everyone could contribute to. So we built Voquill, an open source alternative to WisprFlow, Monologue, and SuperWhisper.

It lets you dictate into any desktop app. Press a hotkey, talk, text gets inserted. You can run Whisper locally, use our server, or wire up any provider you want (OpenAI, Claude, Groq, OpenRouter, whatever). You have full control over where your data goes.

Runs on Windows, macOS, and Linux. Open source, AGPLv3, built with Tauri and Rust. We're working on a mobile app too (Flutter) that's currently in beta for iOS and Android.

To try it: Download from the repo or voquill.com. Click "local setup" on first launch. Hope you like it!

3

A tool to give every local process a stable URL #

github.com favicongithub.com
0 评论8:18 PM在 HN 查看
While working with parallel agents in different worktrees, I kept hitting port conflicts, cookie bleed, and a constant back-and-forth of checking which auto-incremented port my dev server had landed on.

This isn't a big issue if you're running a few servers with a full-stack framework like Next, Nuxt, or SvelteKit, but if you run a Rust backend and a Vite frontend in multiple worktrees, it gets way more complicated and the mental model starts to break. And that's before adding in databases or caches.

So I built Roxy, which is a single Go binary that wraps your dev servers (or any process actually) and gives you a stable .test domain based on the branch name and cwd.

It runs a proxy and dns server that handles all the domain routing, tls, port mapping, and forwarding for you.

It currently supports:

- HTTP for your web apps and APIs
- Most TCP connections for your db, cache, and message/queue layers
- TLS support so you can run HTTPS
- Run multiple processes at once, each with a unique URL, like Docker Compose
- Git and worktree awareness
- Detached mode
- Zero-config startup
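As a sketch of how a stable per-worktree domain could be derived from the branch name and cwd (my illustration, not Roxy's actual naming scheme):

```python
import hashlib
import os
import re

def dev_domain(cwd: str, branch: str, service: str = "web") -> str:
    """Derive a stable, collision-resistant .test hostname from the
    worktree path and branch, so a given (worktree, branch, service)
    triple always resolves to the same URL."""
    raw = f"{os.path.basename(cwd)}-{branch}".lower()
    slug = re.sub(r"[^a-z0-9]+", "-", raw).strip("-")     # human-readable part
    tag = hashlib.sha256(f"{cwd}:{branch}".encode()).hexdigest()[:6]  # uniqueness
    return f"{service}.{slug}-{tag}.test"
```

The short hash suffix is what keeps two worktrees on the same branch from colliding while the slug keeps the URL readable.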

My co-workers and I have been using it a lot in our workflow and I think it's ready for public use.

We support MacOS and Linux

I'll be working on some more useful features like Docker compose/Procfile compatibility and tunneling so you can access your dev environment remotely with a human readable URL

Give it a try, and open an issue if something doesn't quite work right, or to request a feature!

https://github.com/logscore/roxy

2

HackerNews.pink – A PWA HN reader with personalized recommendations #

hackernews.pink faviconhackernews.pink
0 评论6:28 PM在 HN 查看
I think everyone should build their own HN reader at least once in their life. This is mine, and it's pink. I wanted a good HN experience on mobile, something I could pull up on the train and browse comfortably. So I built a PWA that works well on mobile and feels native on the phone. Along the way, I kept adding things:

- Semantic search: stories are embedded with OpenAI's text-embedding-3-large and indexed in MongoDB 8.2's Vector Search. Search combines vector similarity and full-text via Reciprocal Rank Fusion, so you can find stories by meaning, not just keywords.

- Personalized feed: a profile vector is built from your reading history and bookmarks (weighted by time spent and recency). After a few stories, the feed adapts to your interests.

- Content extraction: a Playwright-based scraper fetches article text, generates thumbnails, and extracts metadata.

- Real-time updates and infinite scroll.

- Auth is passkey-first (WebAuthn) and works without personal information.

Stack: Angular 21, FastAPI (Python 3.14), MongoDB 8.2, Valkey for job queues. It's a hobby project and I had a lot of fun building it. Feedback welcome!
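The Reciprocal Rank Fusion step is simple enough to sketch (k=60 is the conventional constant from the original RRF formulation, not necessarily what the site uses):

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum over result lists of
    1 / (k + rank_of_d). Documents near the top of any list win."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Because RRF works on ranks alone, the vector-similarity and full-text scores never need to be calibrated against each other.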
2

SmartRuler Pro – ESP32-powered motorized ruler with 0.5mm precision #

smart-ruler.bunnytech.io faviconsmart-ruler.bunnytech.io
0 评论6:28 PM在 HN 查看
My dad cuts window profiles in his workshop and I kept watching him measure, mark, and reposition by hand hundreds of times a day. So I built him a motorized ruler that moves to any position you tap on a tablet.

It runs on an ESP32-S3 over BLE 5.3, uses hardware-timed stepper pulses (RMT peripheral, no CPU jitter) to hit 0.5mm accuracy, and pairs with a Bosch GLM laser meter - point, measure, and the ruler moves there. No typing, no transcription errors.
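As a back-of-the-envelope check on the 0.5 mm figure (the motor, microstepping, and lead values below are assumptions for illustration, not the actual BOM):

```python
STEPS_PER_REV = 200   # assumed 1.8-degree stepper
MICROSTEPS = 8        # assumed driver microstepping
LEAD_MM = 10.0        # assumed mm of travel per revolution

STEPS_PER_MM = STEPS_PER_REV * MICROSTEPS / LEAD_MM  # 160 steps/mm here

def target_to_steps(position_mm: float) -> int:
    """Convert a tapped target position into a step count. Rounding
    error is at most half a microstep (1/320 mm with these numbers),
    far inside a 0.5 mm spec."""
    return round(position_mm * STEPS_PER_MM)
```

With numbers in this range, the positioning budget is dominated by mechanical slop (rail, coupling, homing repeatability) rather than step resolution, which is presumably why the closed-loop stepper and inductive homing sensor matter.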

You can also upload cut lists from a file or email and it runs each position in sequence - perfect for batch cutting window profiles where you need the same set of measurements repeated all day.

Open hardware: off-the-shelf linear rails, NEMA 23 closed-loop stepper, inductive homing sensor. Total BOM is about 470 EUR. I sell the firmware, app (Android + desktop), and a step-by-step build guide for 350 EUR.

Video demo: https://youtu.be/GCUHBiIs4Vk

Happy to answer questions about the build or the ESP32 RMT approach to stepper control.

2

AgentCost – Track, control, and optimize your AI spending (MIT) #

github.com favicongithub.com
0 评论5:26 PM在 HN 查看
Hi HN, We built AgentCost to solve a problem we kept running into: nobody knows what their AI agents actually cost. One line wraps your OpenAI/Anthropic client:

    from agentcost.sdk import trace
    client = trace(OpenAI(), project="my-app")

From there you get a dashboard with cost forecasting, model optimization recommendations, and pre-call cost estimation across 42 models.

What's included (MIT):

- Python + TypeScript SDKs
- Real-time dashboard with 6 views
- Cost forecasting (linear, EMA, ensemble)
- Optimizer: "switch these calls from GPT-4 to GPT-4-mini, save $X/day"
- Prompt cost estimator for 42 models
- LangChain/CrewAI/AutoGen/LlamaIndex integrations
- Plugin system
- OTel + Prometheus exporters
- CLI with model benchmarking

Enterprise features (BSL 1.1): SSO, budgets, policies, approvals, notifications, anomaly detection, AI gateway proxy.

Tech stack: Python/FastAPI, SQLite (community) or PostgreSQL (enterprise), React dashboard, TypeScript SDK.

GitHub: https://github.com/agentcostin/agentcost
Docs: https://docs.agentcost.in
pip install agentcostin

Would love feedback from anyone managing AI costs at scale.

2

TrAIn of Thought – AI chat as I want it to be #

bix.computer faviconbix.computer
0 评论8:53 PM在 HN 查看
My conversations with LLMs branch in many directions. I want to be able to track those branches, revert to other threads, and make new branches at arbitrary points. So I built my own solution to it.

It's essentially a tool for non-linear thinking. There are a lot of features I'd love to add, but I need some feedback before I take it anywhere else. So I'm listening for whatever you think is broken.

Basic feature set:

- Branching conversations: follow up from any node at any time, not just the latest message.

- Context inheritance: when you branch off a node, the AI gets the full ancestry of that branch as context, so answers are aware of the whole conversation path leading to them.

- Text-to-question: highlight any text in an answer to instantly seed a new question from it.

- Multi-provider AI: compare and adjust responses from OpenAI, Anthropic, and Google Gemini.

- Visual graph: the conversation renders as a React Flow graph with automatic layout, so you can see the whole structure at a glance.

- Shareable links: your entire chat is compressed and stored in the URL. Everything is local (well, except the API calls).

- Branch compression: long branches can be collapsed into a summary node to keep the graph tidy.
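The context-inheritance behavior in the list above can be sketched with parent pointers (illustrative types, not the app's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Node:
    role: str                  # "user" or "assistant"
    text: str
    parent: "Node | None" = None

def branch_context(leaf: Node) -> list[dict]:
    """Walk parent pointers to the root so the model sees the full
    ancestry of this branch, and only this branch, as its context."""
    path = []
    node = leaf
    while node is not None:
        path.append({"role": node.role, "content": node.text})
        node = node.parent
    return list(reversed(path))
```

Sibling branches never appear in the returned list, which is what keeps each branch's answers scoped to its own conversation path.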

2

Video to Text AI Transcription #

videototext.tools faviconvideototext.tools
0 评论4:49 AM在 HN 查看
I’ve been building a video-to-text web app and wanted to share it for feedback. The core flow is straightforward: upload files, start transcription, then track progress in a history page that refreshes automatically while jobs are running. Paid users can submit multiple files at once, and speaker diarization is supported for conversations and interviews.

Over the last few weeks I focused mostly on reliability. I changed the pipeline to extract audio first and then run transcription, which made long-file handling more stable. I also spent time improving failure handling so users see a clear message when a job fails, instead of raw model errors.

Pricing is intentionally simple right now: free users get 3 transcriptions per day, and there is one Unlimited plan at $20/month or $120/year.

I’d really appreciate feedback on the overall UX, whether the failure/retry behavior feels right, and whether the pricing is understandable for first-time users.

2

Herniated disc made me build a back-safe kettlebell app #

kbemom.com faviconkbemom.com
2 评论4:11 PM在 HN 查看
I herniated a disc in 2023 and spent months in physio. Once cleared to train, standard workouts kept tweaking my back, especially when fatigue hit and my form broke down.

I love EMOMs because they make time fly and push my limits without overthinking. I built this generator to combine that structure with exercise selection that won't wreck my back. It excludes American swings, Russian twists, and movements that combine loaded spinal flexion with rotation. The algorithm prioritizes anterior-loaded movements (goblet squats, front rack work) based on McGill's spinal loading research.

React 19 + Tailwind + Capacitor for iOS. Lifetime unlock is the main option because nobody needs another Netflix subscription. There's also a low-cost monthly if you want to try premium features without committing.

Not medical advice. This is what worked for my transition from rehab back to lifting. Curious to hear from others: what was the hardest part of getting back to training after disc issues?

2

FixYou – AI tool that tells you which cancer screenings you need #

fixyou.app faviconfixyou.app
0 评论2:02 PM在 HN 查看
The USPSTF and ACS publish thorough cancer screening guidelines, but they're dense and scattered across dozens of pages. What you actually need depends on your age, sex, family history, smoking status, and other factors. Most people, including me until recently, simply don't know what they should be getting checked for.

I built FixYou after losing my grandfather to colon cancer that was caught too late. After his diagnosis, my parents went for colonoscopies and found polyps, early and treatable. We didn't lack access to healthcare. We lacked clarity.

FixYou is a free tool that:

1. Walks you through a short AI conversation about your health profile
2. Cross-references your answers against current USPSTF and ACS screening guidelines
3. Gives you a "Shield Score" (0 to 100) and a prioritized list of screenings to schedule

The goal isn't to replace your doctor. It's to prepare you for the conversation, and to be the nudge that gets you to actually book that colonoscopy or mammogram you've been putting off.

March is Colorectal Cancer Awareness Month, which felt like the right time to share this.

Would love feedback on the scoring logic, the screening recommendations, or anything else. Happy to answer questions about the approach.

1

AI gaming copilot that uses a phone camera instead of screen capture #

github.com favicongithub.com
0 评论6:09 AM在 HN 查看
Built this as a side project after wanting a real-time gaming companion that could call out macro mistakes / timers / map awareness while I play.

*Project Aegis* is an AI gaming companion (starting with League of Legends) that gives spoken advice in real time.

The twist: it uses a *physically air-gapped setup*.

Why? Some games (especially with strict anti-cheat like Riot Vanguard) make screen capture / memory-reading approaches risky or impractical. So instead of reading the game directly, I point a *smartphone on a tripod at my monitor* and process the video externally.

*How it works (current version):*

* Phone camera points at the game screen
* Frames are streamed over WebSockets to a local FastAPI server
* OpenCV cleans up glare / perspective issues
* Vision model analyzes the frame context
* TTS speaks back advice (macro reminders, timers, awareness prompts, etc.)

So far this is more of a *working prototype + architecture experiment* than a polished product, but it’s functional and surprisingly fun.

GitHub: https://github.com/ninja-otaku/project_aegis

I’d love feedback on:

* whether this is genuinely useful vs. just technically interesting
* what game(s) this should support next
* latency / UX expectations for a tool like this
* anti-cheat-safe ways to improve reliability without crossing lines

Happy to answer technical questions and share implementation details.

1

Archilvx-Own your Twitter data because cloud tools will fail you #

archivlyx.com faviconarchivlyx.com
0 评论6:07 AM在 HN 查看
One hour ago, while working with Claude, the session timed out, I was logged out, and my entire progress vanished. It was a sharp, frustrating reminder of a lesson I thought I'd already learned: if you don't have a local copy of your data, you don't own it.

We are living in an era of "information redundancy," yet our digital assets are more fragile than ever. We rely on LLMs that crash at random and on social platforms that bury our bookmarks under layers of messy UI and slow "official archives." I built Archivlyx because I got tired of the "black box" nature of my digital footprint on X (Twitter). Whether it's a technical thread you liked or a resource you bookmarked, finding it again shouldn't feel like a chore.

What is Archivlyx? It's a browser extension designed to let you browse, export, and manage your Twitter likes and bookmarks with zero friction.

Why use this instead of the official archive?

• The "Export First" Rule: I've reached a point where I won't use a productivity tool unless I see a clear "Export" button. Archivlyx lets you pull your data into formats you actually use.

• Local-First Privacy: We don't store a single byte of your data on our servers. Everything happens locally in your browser. Your digital assets stay yours.

• Intelligent Batch Operations: For the HN crowd: I know the risks of automation. We've implemented automatic sub-task splitting for batch cleaning and archiving. By breaking operations into smaller, randomized chunks, we maximize account safety and respect rate limits without you having to micromanage it.

• Search & Sanity: The native Twitter interface is built for scrolling, not for retrieval. Archivlyx turns your cluttered likes into a searchable, filterable database.

The Tech: The extension is built to be lightweight and stay out of your way. The core logic focuses on transforming messy API responses into a clean, local UI that enables "one-click" cleanup or mass exports.

I'd love to hear your thoughts on the redundancy mechanisms or any features you'd want in a "digital graveyard-turned-library."

Chrome Web Store Link: https://chromewebstore.google.com/detail/archivlyx-%E2%80%93...

Website: https://www.archivlyx.com/

I'll be around to answer any technical questions about the batch processing or the local storage implementation!
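The randomized sub-task splitting could be sketched like this (chunk bounds are illustrative, not Archivlyx's actual values):

```python
import random

def split_batch(items: list, min_size: int = 20, max_size: int = 50) -> list[list]:
    """Split a large batch operation into smaller, randomly sized
    chunks so the request pattern looks less mechanical and stays
    under rate limits."""
    chunks, i = [], 0
    while i < len(items):
        size = random.randint(min_size, max_size)
        chunks.append(items[i:i + size])
        i += size
    return chunks
```

Each chunk would then be processed with its own pause between requests; the randomness is what avoids a fixed, easily flagged cadence.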
1

Free SEO checker for structured data, meta tags and Core Web Vitals #

seo.codequest.work faviconseo.codequest.work
1 评论4:12 PM在 HN 查看
I built CodeQuest SEO Checker because I couldn't find a tool that clearly diagnoses structured data (JSON-LD/Schema.org) alongside basic SEO.

It checks 45+ items across 4 categories:

- Structured Data (JSON-LD/Schema.org)
- Basic SEO (title, meta, headings)
- Content quality
- Technical SEO (HTTPS, mobile, speed)

Free: 10 checks/month, no sign-up needed. Would love your feedback!

1

LGTMeme – AI-generated memes for your pull requests #

lgtmeme.com faviconlgtmeme.com
0 评论3:06 PM在 HN 查看
I built a GitHub bot that generates context-aware memes for pull requests. When a PR is opened, the bot reads the metadata (title, labels, line count, commit messages) and uses AI to pick a meme template and generate a caption that matches the context. It posts the meme as a PR comment. It never accesses your actual code. Free for public repos.

The interesting technical challenge turned out to be prompt engineering for humor: getting AI to be consistently funny in a constrained format is a surprisingly hard problem.

Looking for people to try it and tell me if the memes actually land.
1

ResumeForge – Free AI resume builder with real-time ATS scoring #

resumeforge.cc faviconresumeforge.cc
0 评论2:37 AM在 HN 查看
I built a free AI resume builder focused on ATS (Applicant Tracking System) optimization. The core insight: most resumes get filtered by software before a human sees them. Pretty templates often score terribly because ATS parsers can't read them properly.

ResumeForge (resumeforge.cc) shows you your ATS compatibility score in real time as you edit. The AI generates professional summaries and optimizes bullet points for both ATS parsers and human readers.

No signup required. Free to build. $7 one-time for PDF + Word download.

Tech: Vanilla JS, no framework. AI for content generation. Everything runs client-side except the AI calls.

Would appreciate feedback on both the product and the technical approach.