Daily Show HN


Show HN for February 21, 2026

80 posts
394

Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU #

github.com
101 comments · 8:57 PM · View on HN
Hi everyone, I'm somewhat involved in retrogaming, and through some experiments I ran into the following question: would it be possible to run transformer models bypassing the CPU/RAM, connecting the GPU directly to the NVMe?

This is the result of that question and some weekend vibecoding (the linked library repository is in the README as well). It seems to work, even on consumer GPUs, though it should work better on professional ones.

40

Cellarium: A Playground for Cellular Automata #

github.com
1 comment · 12:47 AM · View on HN
Hey HN, just wanted to share a fun, vibe-coded Friday night experiment: a little playground for writing cellular automata in a subset of Rust, which is then compiled into WGSL.

Since it lets you dynamically change parameters while the simulation is running via a TUI, it's easy to discover weird behaviors without remembering how you got there. If you press "s", it will save the complete history to a JSON file (a timeline of the parameters that were changed at given ticks), so you can replay it and regenerate the discovery.
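The save/replay idea can be sketched in Python (hypothetical names, not the project's actual JSON schema): record parameter changes at given ticks, then rebuild the final state from the saved timeline.

```python
import json

class Timeline:
    """Record parameter changes at given ticks, then replay them."""
    def __init__(self):
        self.events = []  # list of (tick, param, value)

    def set_param(self, tick, param, value):
        self.events.append((tick, param, value))

    def save(self):
        # the whole history serializes to a small JSON document
        return json.dumps(self.events)

    @staticmethod
    def replay(data):
        """Reapply the recorded changes in tick order to rebuild state."""
        params = {}
        for tick, param, value in json.loads(data):
            params[param] = value
        return params

tl = Timeline()
tl.set_param(0, "birth_rule", [3])
tl.set_param(120, "survive_rule", [2, 3])
saved = tl.save()
print(Timeline.replay(saved))  # → {'birth_rule': [3], 'survive_rule': [2, 3]}
```

Replaying the same timeline against a deterministically seeded simulation would regenerate the discovery exactly.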

You can pan/zoom, and while the main simulation window is in focus, the arrow keys can be used to update parameters (which are shown in the TUI).

Claude deserves all the credit and criticism for any technical elements of this project (beyond rough guidelines). I've just always wanted something like this, and it's a lot of fun to play with. Who needs video games these days.

22

Elecxzy – A lightweight, Lisp-free Emacs-like editor in Electron #

github.com
66 comments · 12:49 PM · View on HN
Hi HN. I am a programmer from Japan who loves Emacs. I am building elecxzy. It is a free (zero-cost), lightweight, Emacs-like text editor for Windows.

I designed it to be comfortable and ready to use immediately, without a custom init.el. Here is a quick overview:

- Provides mouse-free operation and classic Emacs keybindings for essential tasks (file I/O, search, split windows, syntax highlighting).

- Drops the Lisp execution engine entirely. This keeps startup and operation lightweight.

- Solves CJK (Chinese, Japanese, Korean) IME control issues natively on Windows.

I never managed to learn Lisp. I just copy-pasted snippets to maintain my init.el. However, I loved the Emacs keybindings. I loved operating an editor entirely without a mouse. I wanted an editor I could just open and use immediately. Also, standard Emacs binaries for Windows often have subtle usability issues for CJK users.

So, I thought about whether I could build an Emacs-like text editor using Electron, the same framework as VS Code.

Building an editor inside a browser engine required thinking a lot about what NOT to build. To make it feel native, I had to navigate DOM limitations. I learned that intentionally dropping complex features improves rendering speed. For example, I skipped implementing "word wrap." For syntax highlighting, I did not use a full AST parser. Instead, I used strict "line-by-line" parsing. The highlight colors for multi-line comments are occasionally incorrect, but in practice it is not a problem, and it keeps the editor fast.
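The strict line-by-line approach can be sketched in Python (the editor itself is TypeScript; this is just the idea): each line is tokenized in isolation, with no state carried between lines, which is exactly why a line inside a multi-line comment can still be colored as code.

```python
import re

KEYWORDS = {"fn", "let", "return", "if", "else"}

def highlight_line(line):
    """Tokenize a single line in isolation. No comment state carries
    over from previous lines, so this stays O(line) and parallel-safe,
    at the cost of occasionally mis-coloring multi-line comments."""
    tokens = []
    for m in re.finditer(r"//.*|\w+|\S", line):
        text = m.group()
        if text.startswith("//"):
            kind = "comment"
        elif text in KEYWORDS:
            kind = "keyword"
        elif text.isdigit():
            kind = "number"
        else:
            kind = "plain"
        tokens.append((text, kind))
    return tokens

print(highlight_line("let x = 42 // answer"))
```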

Under the hood, to bypass browser limitations and handle large files smoothly, I implemented a virtual rendering (virtual scrolling) system. For text management and Undo/Redo, I use a custom Piece Table. I built a custom KeyResolver for Emacs chords. I also used koffi to call Win32 APIs directly for precise IME control.
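A piece table represents the document as a list of spans over two immutable buffers (the original text and an append-only "add" buffer), so edits never move existing text. A toy Python sketch of the structure (elecxzy's actual implementation is custom and certainly more complete):

```python
class PieceTable:
    """Text buffer as pieces over two immutable buffers:
    the original text and an append-only 'add' buffer."""
    def __init__(self, text):
        self.original = text
        self.add = ""
        # each piece: (buffer_name, start, length)
        self.pieces = [("orig", 0, len(text))] if text else []

    def text(self):
        bufs = {"orig": self.original, "add": self.add}
        return "".join(bufs[b][s:s + n] for b, s, n in self.pieces)

    def insert(self, pos, text):
        start = len(self.add)
        self.add += text          # appended, never mutated in place
        new = ("add", start, len(text))
        offset = 0
        for i, (b, s, n) in enumerate(self.pieces):
            if offset + n >= pos:
                # split the piece at pos and splice the new piece in
                split = pos - offset
                repl = [p for p in ((b, s, split), new,
                                    (b, s + split, n - split)) if p[2] > 0]
                self.pieces[i:i + 1] = repl
                return
            offset += n
        self.pieces.append(new)

pt = PieceTable("hello world")
pt.insert(5, ",")
print(pt.text())  # → hello, world
```

Because the buffers are immutable, Undo/Redo can be implemented by snapshotting the (small) pieces list rather than copying the document.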

I respect Windows Notepad as one of the most widely used text editors. However, in my daily work or coding tasks, I often felt it lacked certain features. On the other hand, I found VS Code too heavy just to write a quick memo. Even with extensions, it never quite gave me that native Emacs flow. I do not want to replace Notepad, VS Code, or Emacs. If users want rich extensions and heavy customization, I believe they should use Emacs or VS Code. My goal is to fill the gap between them—to build a "greatest common denominator" editor for people who just want an Emacs-like environment on Windows without the setup.

It is still in alpha (so it might not work perfectly), but you can test it on Windows by downloading the zip from the GitHub releases, extracting it, and running elecxzy.exe. For screenshots, basic usage, and keybindings, please check the README on the GitHub project page.

I am looking for feedback: Is there a demand for a zero-config, Lisp-free, "Notepad-like" Emacs-style editor? What are the minimum standard features required to make it useful? I would love to hear your technical insights.

14

Script Snap – Extract code from videos #

script-snap.com
7 comments · 4:39 AM · View on HN
Hi HN, I'm lmw-lab, the builder behind Script Snap.

The Backstory: I built this out of pure frustration. A while ago, I was trying to figure out a specific configuration for a project, and the only good resource I could find was a 25-minute YouTube video. I had to scrub through endless "smash the like button" intros and sponsor reads just to find a single 5-line JSON payload.

I realized I didn't want an "AI summary" of the video; I just wanted the raw code hidden inside it.

What's different: There are dozens of "YouTube to Text" summarizers out there. Script Snap is different because it is explicitly designed as a technical extraction engine.

It doesn't give you bullet points about how the YouTuber feels. It scans the transcript and on-screen visuals to extract specifically:

Code snippets

Terminal commands

API payloads (JSON/YAML)

Security warnings (like flagging sketchy npm installs)

It strips out the "vibe" and outputs raw, formatted Markdown that you can copy straight into your IDE.

Full disclosure on the launch: Our payment processor (Stripe) flagged us on day one (banks seem to hate AI tools), so I've pivoted to a manual "Concierge Alpha" for onboarding. The extraction engine is fully operational, just doing things the hard way for now.

I'd love to hear your thoughts or harsh feedback on the extraction quality!

13

Agent Passport – OAuth-like identity verification for AI agents #

11 comments · 12:39 AM · View on HN
Hi HN,

I built Agent Passport, an open-source identity verification layer for AI agents. Think "Sign in with Google, but for Agents."

The problem: AI agents are everywhere now (OpenClaw has 180K+ GitHub stars, Moltbook had 2.3M agent accounts), but there's no standard way for agents to prove their identity. Malicious agents can impersonate others, and skill/plugin marketplaces have no auth layer. Cisco's security team already found data exfiltration in third-party agent skills.

Agent Passport solves this with:

- Ed25519 challenge-response authentication (private keys never leave the agent)
- JWT identity tokens (60-min TTL, revocable)
- Risk engine that scores agents 0-100 (allow/throttle/block)
- One-line verification for apps: `const result = await passport.verify(token)`
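The challenge-response flow can be sketched with Python's stdlib, using HMAC as a stand-in for Ed25519 (the real protocol is asymmetric, so the verifier would hold only the agent's public key; this symmetric sketch just shows the shape of the exchange):

```python
import hmac, hashlib, secrets

def issue_challenge():
    # verifier sends a random nonce; signing it proves key possession
    # without the key ever crossing the wire
    return secrets.token_hex(16)

def agent_sign(secret_key, challenge):
    # stand-in for Ed25519 sign(); the key never leaves the agent
    return hmac.new(secret_key, challenge.encode(), hashlib.sha256).hexdigest()

def verify(secret_key, challenge, response):
    expected = hmac.new(secret_key, challenge.encode(),
                        hashlib.sha256).hexdigest()
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)
ch = issue_challenge()
assert verify(key, ch, agent_sign(key, ch))
assert not verify(key, ch, agent_sign(b"wrong key", ch))
```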

It's fully open source (MIT), runs on free tiers ($0/month), and has a published npm SDK.

GitHub: https://github.com/zerobase-labs/agent-passport
Docs: https://github.com/zerobase-labs/agent-passport/blob/main/do...
Live demo: https://agent-passport.vercel.app

Built this because I kept seeing the same security gap in every agent platform. Happy to answer questions about the architecture or the agent identity problem in general.

7

Cc-md – Zero-cost Obsidian sync across iPhone, Mac, and GitHub #

github.com
4 comments · 8:35 PM · View on HN
Here's something I realized: the most AI-native knowledge base isn't a SaaS product with an API. It's a folder of markdown files on your disk.

Obsidian stores everything as plain .md files. That means Claude Code (or any AI tool) can grep, read, write, and traverse your entire knowledge base with zero setup. No API keys. No OAuth. No middleware. Just local file I/O.

The only missing piece was sync. I wanted my vault on iPhone (iCloud), on Mac (local), and on GitHub (backup + version history) — without paying $4/mo for Obsidian Sync.

cc-md is ~400 lines of bash. iCloud handles Apple device sync in seconds. A launchd job runs git add/commit/push every 5 minutes. That's it.

One command to install:

bash <(curl -sL https://raw.githubusercontent.com/yuukiLike/cc-md/main/insta...)

The installer auto-discovers your vault, inits git, creates a GitHub repo (if you have gh CLI), and starts syncing. Zero prompts in the best case.

I built this in a weekend, solo, with AI assistance from first line to last. A year ago I couldn't have shipped this. Now I can.

I feel genuinely lucky to be alive in the AI era. It's making childhood dreams come true — one project at a time.

6

I created a beautiful number animation library for React Native #

number-flow-react-native.awingender.com
1 comment · 1:16 AM · View on HN
Hi!

I've been frustrated that the beautiful NumberFlow library for the web (link) is not available on React Native, a platform that I think is much more animation-native than the web. And there are no alternatives of the same quality. So I reimplemented it myself, basically from the ground up.

Introducing Number Flow React Native.

I am aiming for this to be the best number animation library for React Native.

- Beautiful animation easing directly inspired by web NumberFlow
- Supporting both Native and Skia versions
- Full i18n support for locales, things like compact or scientific notations, etc.
- TimeFlow component for timers and counters
- Custom digit bounding for things like binary
- Supporting 37 different numeral systems such as Arabic, Thai, and many others
- A dedicated, shared worklet mode for as much FPS as possible, perfect for sliders or gestures, for example
- Built on top of React Native Reanimated v3+
- Also supports web via Expo Web

Please check out the docs: https://number-flow-react-native.awingender.com/docs
And star it on GitHub if you like it: https://github.com/Rednegniw/number-flow-react-native

Let me know what you think!

6

Velo – Open-source, keyboard-first email client in Tauri and Rust #

0 comments · 1:34 AM · View on HN
I built Velo because I wanted Superhuman's speed and keyboard workflow without the $30/month price tag or sending all my data through someone else's servers.

Velo is a local-first desktop email client. Your emails live in a local SQLite database - no middleman servers, no cloud sync. It works offline and your data stays on your machine.

What makes it different:

- Keyboard-driven - Superhuman-style shortcuts (j/k navigate, e archive, c compose, / search). You can fly through your inbox without touching the mouse
- Built with Tauri v2 + Rust backend - ~15MB binary, low memory usage, instant startup. Not another Electron app
- Multi-account - Gmail (API) and IMAP/SMTP (Outlook, Yahoo, iCloud, Fastmail, etc.)
- AI features (optional) - Thread summaries, smart replies, natural language inbox search. Bring your own API key (Claude, GPT, or Gemini). Results cached locally
- Privacy-first - Remote images blocked by default, phishing link detection, SPF/DKIM/DMARC badges, sandboxed HTML rendering, AES-256-GCM encrypted token storage
- Split inbox, snooze, scheduled send, filters, templates, newsletter bundling, quick steps, follow-up reminders, calendar sync

Tech stack: Tauri v2, React 19, TypeScript, SQLite + FTS5 (full-text search), Zustand, TipTap editor. 130 test files.

Available on Windows, macOS, and Linux. Apache-2.0 licensed.

GitHub: https://github.com/avihaymenahem/velo
Site: https://velomail.app

I'm a solo developer and would love feedback, especially on UX, features you'd want, or if you run into issues. Happy to answer any questions about the architecture or Tauri v2 in general.

5

AI Council – multi-model deliberation that runs in the browser #

github.com
1 comment · 12:11 AM · View on HN
There's LLM Council and similar tools, but they use predefined model lineups. This one is different in a few ways that mattered to me:

*Bring your own models.* Mix Ollama (local), OpenAI, Anthropic, Groq, Google — or any OpenAI-compatible endpoint — in whatever combination you want. A council of DeepSeek-R1 + llama2-uncensored + mistral-nemo is a very different deliberation than GPT-4o + Claude + Gemini.

*Zero server, zero account, zero storage.* The app is purely static. API calls go directly from your browser to providers. Nothing touches a backend. No tokens, no sessions, no analytics. Your API keys never leave your machine.

*Runs on your own hardware.* If you have Ollama, you can run an entire council locally for free. I use a 5-member all-Ollama setup on an RTX 2070 (8GB VRAM) — sequential requests, slow, but completely private.

The deliberation process has three stages:

1. All members answer independently
2. Each member critiques anonymized responses from the others
3. A designated Chairman synthesizes a final verdict
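The three stages can be stubbed out in a few lines of Python, with model calls replaced by plain callables (a sketch of the flow, not the project's actual code):

```python
def deliberate(members, chairman, question):
    """Three-stage council: independent answers, anonymized peer
    critique, then a chairman synthesis. Members are plain callables
    here, standing in for real model API calls."""
    # Stage 1: everyone answers independently
    answers = {name: model(question) for name, model in members.items()}
    # Stage 2: each member critiques the others' anonymized answers
    critiques = {}
    for name, model in members.items():
        others = [a for n, a in answers.items() if n != name]
        critiques[name] = model("Critique these answers: " + " | ".join(others))
    # Stage 3: chairman synthesizes a verdict from everything
    material = list(answers.values()) + list(critiques.values())
    return chairman("Synthesize a final verdict: " + " | ".join(material))

members = {
    "a": lambda p: f"A says: {p[:20]}",
    "b": lambda p: f"B says: {p[:20]}",
}
verdict = deliberate(members, lambda p: "VERDICT: " + p[:40],
                     "Is Rust memory safe?")
print(verdict)
```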

A few things I found genuinely interesting:

- Reasoning models (DeepSeek-R1, QwQ) emit <think> blocks mid-stream. Stripping these while showing a "Thinking…" indicator keeps the UX clean without losing answer quality.
- The Contrarian persona on an uncensored model produces meaningfully different critiques than a safety-tuned model playing the same role.
- Peer review across models catches blind spots that a single model arguing with itself won't surface.

GitHub: https://github.com/prijak/Ai-council.git

5

Give your OpenClaw agent a face and voice with LiveKit and LemonSlice #

github.com
0 comments · 12:50 AM · View on HN
For a fun weekend side project, we gave our own OpenClaw agent, called "Mahmut", a face and held a live interview with it. It went surprisingly well.

Here's a sneak peek from the interview: https://x.com/ptservlor/status/2024597444890128767

User speaks, Deepgram transcribes it, OpenClaw Gateway routes it to your agent, ElevenLabs turns the response into speech, and LemonSlice generates a lip synced avatar from the audio. Everything streams over LiveKit in real time.

Latency is about 1 to 2 seconds end to end, depending on the LLM. The lip sync from LemonSlice honestly surprised us; it works way better than we expected.

The skill repo has a complete Python example, env setup, troubleshooting guide, and a Next.js frontend guide if you want to build your own web UI for it.

5

I built a 55K-word email marketing knowledge base and Claude Code skill #

emailmarketingskill.com
0 comments · 1:08 AM · View on HN
I co-founded SmartrMail (email marketing SaaS, 12K ecommerce customers, acquired 2022). When I left, I no longer had access to the sending data I'd spent years learning from: billions of sent emails, deliverability patterns, the things that actually move email engagement.

Since I don't have that data anymore, I built the best thing I could to help with the email marketing I currently do:

---

my research process

Multiple sprints across all major email marketing topics. The crawler pulled 908 sources: Litmus, Klaviyo, HubSpot, Campaign Monitor, and Salesforce annual reports; practitioner blogs; academic research; platform documentation; Reddit threads; Shopify forums; and community discussions on X. From those, I extracted 4,798 discrete insights. Every claim that made it into the guide has a source. Anything that was unsourced opinion got cut.

That produced EMB v4: over 80k words across 16 chapters. After two editorial passes (cutting duplicates, consolidating overlapping sections, removing anything that didn't earn its place), it landed at 55,000 words. I reached out to all the email experts cited for their feedback too; over half contributed and made changes. I was stoked with that.

---

the skill vs bible issue

The full 55K-word guide is too big for a context window. You can't just point Claude at the whole thing and expect coherent answers.

So the SKILL.md is a separate, condensed extraction: the key frameworks, benchmarks, practitioner names, and tactical thresholds that fit in context. When you ask Claude a question with the skill installed, it's drawing on a structured summary rather than trying to retrieve from a raw document dump.

But this creates a problem I haven't solved yet: the skill and the Bible can drift. As the Bible gets updated (experts send corrections, better data emerges), the skill has to be kept in sync manually. The obvious fix is an MCP server connecting the two so that changes to the Bible automatically propagate to the skill. I'll probably build that with the next big update to the Bible.

---

the data gap

I saw at SmartrMail that occasionally "conversion wisdom" or email best practice would not line up with our real, aggregate sending data. E.g. "the best time of the week to send a newsletter is 9am Tue/Thu". Nope. It was absolutely case by case, and that rule of thumb would often hurt email engagement. So the weakness of this project is that it's built on published research, not proprietary sending data. Published benchmarks are backward-looking, aggregated across wildly different use cases, and often produced by ESPs with an incentive to make email look good.

What I'd really want is anonymised send-level data from real campaigns. Subject line → open rate, across list sizes, industries, send times. Body structure → click rate. Flow configuration → revenue per recipient.

If anyone is working on this or has access to send-level data and wants to contribute, I'm very interested.

---

the ESP integration problem

The other thing I'm frustrated by is AI connections into ESPs are still terrible. Most platforms have API coverage for basic CRUD operations but nothing close to what you'd need to actually run a campaign from Claude. You can pull subscriber counts. You can't meaningfully analyse flow performance, trigger segment rebuilds, or get real deliverability diagnostics programmatically.

A few ESPs are starting to add MCP servers (it's mentioned in Ch 14 of the guide), but it's early and patchy. Until that's solved, the skill is advisory: it can tell you what to do, but it can't do it for you. That gap is worth building toward.

---

what's live now

Install:

    git clone https://github.com/CosmoBlk/email-marketing-bible.git ~/.claude/skills/email-marketing-bible

Full guide (free, searchable): https://emailmarketingskill.com

MIT licensed. No paywall. No email gate. No affiliate links.

Happy to answer questions!

5

Eliezer – Tiny (~7K LOC) Self-Hosted AI Agent (PWA, Self-Editing) #

eliezer.app
1 comment · 5:00 PM · View on HN
Eliezer is ~7K lines of TypeScript, MIT open-source

- PWA for mobile/desktop with push notifications
- Self-editing protocol
- Builds and displays interactive apps/widgets right in the chat
- Tasks and crons ("notify if sunny tomorrow at 10am")
- Persistent SQLite memory + auto context compaction
- Bring your own LLM API key (Kimi/Claude/Grok/etc.)
- Full visibility/control - you see all tool calls and can abort in all states

Agent repo: https://github.com/Eliezer-app/eliezer
Chat repo: https://github.com/Eliezer-app/clawchat

The chat was originally conceived as a plugin for OpenClaw; I ended up writing the agent too.

5

My Degenerate Craps Simulator #

infinitecraps.com
0 comments · 11:48 PM · View on HN
Hello HN:

I love the randomness of the universe. I've been spending some time creating a Craps simulator to help experience this love without having to shell out like a real degenerate.

For others similarly fascinated: I would love to hear any and all feedback you've got on this. It's meant to be unique in the sense that it's a community-oriented, infinite simulation.

4

MephistoMail – A RAM-only, tracker-free disposable email client #

1 comment · 12:46 AM · View on HN
Hi HN,

I got frustrated with the current landscape of 10-minute mail services. They are often full of ads, Google Analytics trackers, and clunky interfaces—completely defeating the purpose of a "privacy" tool.

I built MephistoMail as a clean, RAM-only frontend alternative. It uses the mail.tm/mail.gw APIs under the hood for actual inbox mapping but handles everything on the client side in volatile memory. If you close the tab, the session is gone. Zero logs are kept on our end.

Tech stack: React 18, Vite, Tailwind CSS, Lucide.

Would love to hear your thoughts, roasts, and suggestions!

Demo: https://mephistomail.site
Repo: https://github.com/jokallame350-lang/temp-mailmephisto

4

Museum of Handwritten Code (If, While, Binary Search, Merge Sort) #

museum.codes
1 comment · 8:30 PM · View on HN
Hi HN - this is a small experiment: what if code had a museum?

I built a Museum of Handwritten Code for foundational constructs and algorithms. I’ve been feeling a strange melancholy watching more and more software generation become automated, and wanted to preserve the "atoms" of programming in a form people can browse, discuss, and (hopefully) learn from.

Yes, it’s a vanity project — but I’m trying to make each exhibit real: code, description, and historical context (with more being added over time).

If AI increasingly writes the software stack (and maybe one day much closer to machine code), then here’s to the for-loops, if-branches, and hash maps that helped build the world we live in. Cheers!

I’d love brutal feedback on whether this feels:

* interesting
* useful
* too gimmicky
* or actually a decent teaching / history format

4

Are – Rule engine for JavaScript, C#, and Dart with playground #

are-playground.netlify.app
0 comments · 12:14 AM · View on HN
I built a rule engine called ARE (Action Rule Event) that follows a simple pipeline: Event → Middleware → Condition Evaluation → Rule Matching → Action Execution.
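That pipeline can be sketched in a few lines of Python (hypothetical shapes, not ARE's actual API): middleware may transform or drop the event, then each rule's condition is evaluated and matching actions are executed.

```python
def run(event, middleware, rules):
    """Event -> Middleware -> Condition Evaluation -> Rule Matching
    -> Action Execution, as one pass over plain functions."""
    for mw in middleware:             # middleware may enrich or drop
        event = mw(event)
        if event is None:
            return []
    fired = []
    for rule in rules:                # evaluate conditions, match, act
        if rule["condition"](event):
            fired.append(rule["action"](event))
    return fired

# smart-home style rules, as in the playground scenarios
rules = [
    {"condition": lambda e: e["temp"] > 30,
     "action":    lambda e: f"cool to {e['temp'] - 5}"},
    {"condition": lambda e: e["temp"] < 5,
     "action":    lambda e: "heat on"},
]
middleware = [lambda e: {**e, "temp": round(e["temp"])}]
print(run({"temp": 31.4}, middleware, rules))  # → ['cool to 26']
```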

It's available on npm, NuGet, and pub.dev with the same API design across all three.

I also built a playground where you can experiment with three scenarios (RPG game, smart home, e-commerce) without installing anything. It includes a step-by-step rule debugger, a visual relationship graph, and an animated pipeline diagram.

Playground: https://are-playground.netlify.app
GitHub: https://github.com/BeratARPA/ARE

4

See – searchable JSON compression (offline 10-min demo) #

gitlab.com
0 comments · 8:06 PM · View on HN
Hi HN, I’m building SEE (Semantic Entropy Encoding): a searchable compression format for JSON/NDJSON. Goal: reduce the “data tax” (storage/egress) and “CPU tax” (decompress/parse) by keeping JSON searchable while compressed, with page-level random access.
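The page-level random-access idea (not SEE's actual format) can be sketched in Python: compress records in fixed-size pages and keep a tiny per-page index, so lookups skip pages without decompressing them.

```python
import json, zlib

def build(records, page_size=2):
    """Compress NDJSON records in fixed-size pages and keep a small
    per-page index (min/max of a key) so queries can skip pages
    without decompressing them."""
    pages, index = [], []
    for i in range(0, len(records), page_size):
        chunk = records[i:i + page_size]
        raw = "\n".join(json.dumps(r) for r in chunk).encode()
        pages.append(zlib.compress(raw))
        ids = [r["id"] for r in chunk]
        index.append((min(ids), max(ids)))
    return pages, index

def lookup(pages, index, wanted_id):
    for page, (lo, hi) in zip(pages, index):
        if lo <= wanted_id <= hi:       # only touch matching pages
            for line in zlib.decompress(page).decode().splitlines():
                rec = json.loads(line)
                if rec["id"] == wanted_id:
                    return rec
    return None

recs = [{"id": i, "v": i * i} for i in range(10)]
pages, index = build(recs)
print(lookup(pages, index, 7))  # → {'id': 7, 'v': 49}
```

With key-sorted input, a lookup decompresses exactly one page; that is the "page-level random access" property, independent of the compression codec.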

I just published a proof-first evaluation release:

Offline DEMO ZIP (~10 min): prints compression ratios + skip rates + lookup latency (p50/p95/p99)

DD pack: audit/repro evidence (decode mismatch=0, extended mismatch=0, audit PASS)

Latest release: https://gitlab.com/kodomonocch1/see_proto/-/releases

Direct DEMO ZIP: https://gitlab.com/api/v4/projects/79686944/packages/generic...

OnePager is included in the release assets.

I’d love feedback on:

what workloads you’d try this on, and

what integration path would make this compelling vs Zstd + external indexing.

4

MQTT Topic Lab – MQTT client with buttons using command variables #

github.com
0 comments · 3:59 PM · View on HN
Hi Hacker News,

I needed an MQTT client to repeatedly send some commands to devices I build, with Postman-like variable substitutions as I was testing different device IDs with similar commands. MQTT Explorer is nice, but I had to write the commands again and again, with different payloads, and I don't really use the visualization feature. I wanted to save the connection, have a client that opens instantly, parameterizes some common variables, and saves the commands I wrote, so here's MQTT Topic Lab.

MQTT Topic Lab lets you save your most-used commands, send messages with repetition, and use variables to switch commands on the fly. It also has a message viewer so you can see messages on the broker. After doing the hard work of building your commands, you can export them to share with colleagues so they can get started fast. It also supports keyboard shortcuts if you want to move around like that.

It's built with Tauri (Rust backend, React frontend), so it's cross-platform and quite fast. The binaries are also quite small (6-7 MB). This is my daily driver, so I'm going to maintain and update it, and maybe add some features you might want to see in the app. It's coded with Claude, but it's not AI-slop software: the code might be rough around some edges, but it works quite well.

Looking forward to hearing your thoughts. You can check it out at https://github.com/alsoftbv/topic-lab and download built binaries for your device to test it out.

4

DomeAPI (YC F25) was acquired. pmxt is the open-source equivalent #

github.com
0 comments · 8:02 AM · View on HN
Hi HN, I'm the maintainer of pmxt.

With Polymarket's recent acquisition of DomeAPI to bring their infrastructure in-house, there is a sudden gap for developers building cross-market arbitrage bots, tracking whale wallets, or running quantitative models across prediction markets. If you are trading across Polymarket, Kalshi, or Limitless, you need a unified API to avoid getting locked into a single exchange's ecosystem.

pmxt is stepping in to fill that void.

4

Saga – A Jira-like project tracker MCP server for AI agents (SQLite) #

1 comment · 11:32 PM · View on HN
I got tired of my AI coding assistant (Claude, Cursor, etc.) losing track of project state across sessions — creating random markdown files, forgetting what was done, repeating work. So I built Saga.

It's an MCP server that gives your AI agent a proper project tracker — Projects > Epics > Tasks > Subtasks — backed by a local SQLite file. One tracker_dashboard call and the agent has full context to resume where it left off.

Key points:

- Zero setup - SQLite auto-creates a .tracker.db file per project. No Docker, no Postgres, no API keys.
- 22 tools - CRUD for the full hierarchy, plus notes (decisions, blockers, meeting notes), cross-entity search, activity log, batch operations.
- Per-project scoped - each project gets its own database. Nothing shared, nothing leaked.
- Activity log - every mutation is automatically tracked so the agent (or you) can see what changed and when.

Install: npx saga-mcp

GitHub: https://github.com/spranab/saga-mcp
npm: https://www.npmjs.com/package/saga-mcp

The idea is simple: instead of the LLM trying to remember state in its context window or dumping it into files, give it an actual structured database it can query and update through tool calls. Works with Claude Desktop, Claude Code, Cursor, or any MCP-compatible client.
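That structured-database idea can be sketched with stdlib sqlite3 (a hypothetical two-table schema, not Saga's actual one), including the automatic activity log written on every mutation:

```python
import sqlite3

def init(db):
    db.executescript("""
        CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT,
                            status TEXT DEFAULT 'todo');
        CREATE TABLE activity (ts DATETIME DEFAULT CURRENT_TIMESTAMP,
                               entry TEXT);
    """)

def add_task(db, title):
    cur = db.execute("INSERT INTO tasks (title) VALUES (?)", (title,))
    # every mutation also writes an activity entry
    db.execute("INSERT INTO activity (entry) VALUES (?)",
               (f"created task {cur.lastrowid}: {title}",))
    return cur.lastrowid

def set_status(db, task_id, status):
    db.execute("UPDATE tasks SET status = ? WHERE id = ?", (status, task_id))
    db.execute("INSERT INTO activity (entry) VALUES (?)",
               (f"task {task_id} -> {status}",))

db = sqlite3.connect(":memory:")  # Saga uses a per-project file instead
init(db)
tid = add_task(db, "wire up auth")
set_status(db, tid, "done")
print(db.execute("SELECT entry FROM activity").fetchall())
```

An agent resuming a session would query the tasks table and the activity log instead of re-reading its own scattered markdown notes.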

Would love feedback — especially on the tool design and what's missing.

4

Murl – Curl for MCP Servers #

github.com
0 comments · 12:15 AM · View on HN
I built murl because interacting with MCP (Model Context Protocol) servers from the command line was way more painful than it needed to be. MCP uses JSON-RPC 2.0 over HTTP, so every request means hand-crafting payloads with method names, params objects, and id fields. I wanted something that felt like curl.

Given an MCP server at https://mcp.deepwiki.com/mcp, murl lets you append virtual paths like /tools or /tools/read_wiki_structure that map to MCP methods. These aren't real HTTP endpoints — murl translates them into the right JSON-RPC calls behind the scenes:

  # List all tools on a server (NDJSON — one JSON object per line)
  murl https://mcp.deepwiki.com/mcp/tools | jq -r '.name'

  # Call a tool and extract the result
  murl https://remote.mcpservers.org/fetch/mcp/tools/fetch -d url=https://example.com | jq -r '.text'

  # Query a repo's wiki structure
  murl https://mcp.deepwiki.com/mcp/tools/read_wiki_structure -d repoName=anthropics/claude-code | jq -r '.text'
The -d flags work like curl — key=value pairs get auto-coerced into typed JSON arguments. You can also pass raw JSON directly.
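The virtual-path translation can be sketched in Python (a hypothetical re-implementation, not murl's actual code; MCP's standard JSON-RPC methods are tools/list and tools/call):

```python
def to_jsonrpc(path, data=None):
    """Translate a murl-style virtual path into the JSON-RPC 2.0
    request an MCP server actually expects."""
    parts = path.strip("/").split("/")
    if parts == ["tools"]:
        method, params = "tools/list", {}
    elif len(parts) == 2 and parts[0] == "tools":
        # /tools/<name> maps to calling that tool with the -d arguments
        method = "tools/call"
        params = {"name": parts[1], "arguments": data or {}}
    else:
        raise ValueError(f"unknown virtual path: {path}")
    return {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}

print(to_jsonrpc("/tools"))
print(to_jsonrpc("/tools/fetch", {"url": "https://example.com"}))
```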

A few things beyond convenience:

MCP from plain Bash. Any agent with shell access can call MCP tools — no SDK, no client library, no MCP session management. Vercel recently wrote about replacing 80% of their agent's tools with bash and getting better results (https://vercel.com/blog/we-removed-80-percent-of-our-agents-...). murl makes MCP servers accessible in that same pattern.

OAuth built in. MCP servers behind OAuth (like Glean, or anything using RFC 7591 dynamic client registration) just work. First call opens the browser, tokens get cached and auto-refresh. --no-auth for public servers.

LLM-friendly by default. Compact NDJSON to stdout, structured JSON errors to stderr, semantic exit codes. -v for human-readable output.

Handles transport quirks. Streamable HTTP, session-based SSE (mcp-proxy), regular JSON responses — murl detects and handles them all.

You can try it right now against public servers:

  brew install turlockmike/murl/murl
  murl https://mcp.deepwiki.com/mcp/tools | jq -r '.name'
https://github.com/turlockmike/murl
3

Dq – pipe-based CLI for querying CSV, JSON, Avro, and Parquet files #

github.com
0 comments · 10:31 PM · View on HN
I'm a data engineer and exploring a data file from the terminal has always felt more painful than it should be for me. My usual flow involved some combination of avro-tools, opening the file in Excel or sheets, writing a quick Python script, using DataFusion CLI, or loading it into a database just to run one query. It works, but it's friction -- and it adds up when you're just trying to understand what's in a file or track down a bug in a pipeline.

A while ago I had this idea of a simple pipe-based CLI tool, like jq but for tabular data, that works across all these formats with a consistent syntax. I refined the idea over time into something I wanted to be genuinely simple and useful -- not a full query engine, just a sharp tool for exploration and debugging. I never got around to building it though. Last week, with AI tools actually being capable now, I finally did :)

I deliberately avoided SQL. For quick terminal work, the pipe-based composable style feels much more natural: you build up the query step by step, left to right, and each piece is obvious in isolation. SQL asks you to hold the whole structure in your head before you start typing.

  `dq 'sales.parquet | filter { amount > 1000 } | group category | reduce total = sum(amount), n = count() | remove grouped | sortd total | head 10'`

How it works technically: dq has a hand-written lexer and recursive descent parser that turns the query string into an AST, which is then evaluated against the file lazily where possible. Each operator (filter, select, group, reduce, etc.) is a pure transformation -- it takes a table in and returns a table out. This is what makes the pipe model work cleanly: operators are fully orthogonal and composable in any order.
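The operator model can be sketched in Python over a list of dicts (hypothetical helpers mirroring the query above; dq itself is Go and evaluates lazily where possible):

```python
# each operator is a pure transformation: table in, table out
def filter_(rows, pred):   return [r for r in rows if pred(r)]
def sortd(rows, key):      return sorted(rows, key=lambda r: r[key], reverse=True)
def head(rows, n):         return rows[:n]

def group_reduce(rows, key, **aggs):
    """group + reduce: each agg maps a new column name to a function
    over the list of rows in that group."""
    groups = {}
    for r in rows:
        groups.setdefault(r[key], []).append(r)
    return [{key: k, **{name: fn(g) for name, fn in aggs.items()}}
            for k, g in groups.items()]

sales = [{"category": "a", "amount": 1500},
         {"category": "b", "amount": 2000},
         {"category": "a", "amount": 900}]

# filter | group | reduce | sortd | head, composed left to right
out = head(sortd(group_reduce(filter_(sales, lambda r: r["amount"] > 1000),
                              "category",
                              total=lambda g: sum(r["amount"] for r in g),
                              n=len),
                 "total"), 10)
print(out)
```

Because every operator has the same table-in/table-out signature, any reordering of the pipe still type-checks, which is what makes the operators orthogonal.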

It's written in Go -- a single self-contained binary, 11MB, no runtime dependencies, installable via Homebrew. I'd love feedback, especially from anyone who's felt the same friction.

3

DevBind – I made a Rust tool for zero-config local HTTPS and DNS #

github.com
0 comments · 8:19 PM · View on HN
Hey HN,

I got tired of messing with /etc/hosts and browser SSL warnings every time I started a new project. So I wrote DevBind.

It's a small reverse proxy in Rust. It basically does two things:

1. Runs a tiny DNS server so anything.test just works instantly (no more manual hosts file edits).

2. Sits on port 443 and auto-signs SSL certs on the fly so you get the nice green lock in Chrome/Firefox.

It's been built mostly for Linux (it hooks into systemd-resolved), but I've added some experimental bits for Mac/Win too.

Still a work in progress, but I've been using it for my own dev work and it's saved me a ton of time. Would love to know if it breaks for you or if there's a better way to handle the networking bits!

3

WatchTurm – an open-source release visibility layer I use in my work #

0 comments · 12:49 AM · View on HN
I built this to solve a recurring problem in multi-repo, multi-environment setups: it’s surprisingly hard to answer “what is actually running where?” without checking several systems. WatchTurm is an open-source release visibility layer. It aggregates metadata from sources like GitHub, Jira and CI (e.g. TeamCity), generates a structured snapshot of environment state, and surfaces it in a single control view.

It doesn’t replace CI/CD or manage deployments. It sits above automation and focuses purely on visibility:

- what version runs in each environment
- how environments differ
- what changed between releases

I’m currently using it in my own daily work and would really appreciate technical feedback, especially from teams with multi-environment pipelines.

repo: https://github.com/WatchTurm/WatchTurm-control-room

3

I scanned 50k radio streams and built an app for the ones that work #

github.com favicongithub.com
0 comments · 9:04 PM · View on HN
I got tired of radio apps that make you hunt for working streams. Most directories are full of dead links, duplicates, and placeholder logos - so I built Receiver.

I scan ~50k streams from radio-browser.info, verify each one is actually reachable and streaming, deduplicate, fetch proper logos, and ship the result as a clean SQLite database with the app. What survives: ~30k stations, all working.
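The verify-and-dedupe stage could look something like the sketch below — this is my own guess at a mechanism, not the app's actual pipeline (the normalization rule in particular is an assumption; reachability checking is left as a comment since it needs the network):

```python
# Hypothetical dedup stage: collapse mirrors of the same stream by
# normalizing the URL before using it as a key. A reachability check
# (short streaming GET, inspect Content-Type) would run before this.
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    # Lowercase scheme/host, drop default port and trailing slash.
    parts = urlsplit(url.strip())
    netloc = parts.netloc.lower().removesuffix(":80")
    return urlunsplit((parts.scheme.lower(), netloc,
                       parts.path.rstrip("/"), parts.query, ""))

def dedupe(stations):
    seen, out = set(), []
    for st in stations:
        key = normalize(st["url"])
        if key not in seen:
            seen.add(key)
            out.append(st)
    return out

stations = [
    {"name": "A", "url": "http://radio.example:80/live/"},
    {"name": "A (mirror)", "url": "HTTP://RADIO.EXAMPLE/live"},
    {"name": "B", "url": "http://other.example/stream"},
]
print([s["name"] for s in dedupe(stations)])  # ['A', 'B']
```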

Built with Vala and GTK 4 - native GNOME app, no Electron. MPRIS integration, session persistence, 130 language translations. No sign-up, no ads, no tracking.

Available as Snap, .deb, and AppImage. Flathub submission in progress.

Happy to answer questions about the data pipeline, Vala/GTK 4 development, or packaging for Linux.

2

Beadhub.ai – Real time coord for coding agents across different minders #

beadhub.ai faviconbeadhub.ai
0 comments · 9:52 AM · View on HN
Beads[1] (Steve Yegge's git-native issue tracking for agents) has been a great boost to my agents' productivity, but it's also made them more difficult to keep aligned.

So I built BeadHub, a coordination layer on top of beads. The Go CLI (bdh) wraps the beads bd client transparently: your existing beads workflows keep working, and coordination is added automatically:

- Agent-to-agent sync chat and async mail.
- Claim detection with conflict rejection: agent A claims a task; if agent B tries to claim the same, bdh rejects it with a message explaining why.
- Automatic file reservations: when a file is modified, all agents know it.
- Live dashboard showing who's working on what, in real time.
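The claim-detection behavior is essentially first-writer-wins with an explanatory rejection. A minimal sketch of that idea (illustrative only — bdh's actual protocol and data model are not shown here):

```python
# Minimal claim registry: the first agent to claim a task holds it;
# any later claim by a different agent is rejected with a reason.
class ClaimRegistry:
    def __init__(self):
        self._claims = {}  # task_id -> agent

    def claim(self, task_id, agent):
        holder = self._claims.get(task_id)
        if holder is not None and holder != agent:
            return False, f"task {task_id} already claimed by {holder}"
        self._claims[task_id] = agent
        return True, "claimed"

reg = ClaimRegistry()
print(reg.claim("bd-42", "agent-a"))  # (True, 'claimed')
print(reg.claim("bd-42", "agent-b"))  # (False, 'task bd-42 already claimed by agent-a')
print(reg.claim("bd-42", "agent-a"))  # (True, 'claimed') -- re-claim by holder is fine
```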

We use BeadHub to build BeadHub. Public dashboard: https://app.beadhub.ai/juanre/beadhub/

The agents-chatting-with-each-other part feels almost magical. The agents negotiate task splits and API contracts, warn each other about breaking changes, and generally sort things out themselves. They also greatly improve coordination among the human team members, because they can handle the details in real time without involving their minders.

Everything is open source (MIT). Self-host the full stack, or use the hosted version at https://beadhub.ai.

Self-host everything with Docker:

    git clone https://github.com/beadhub/beadhub.git
    cd beadhub && make start
    # then in your repo:
    bdh :init --beadhub-url http://localhost:8000 --project my-project
What doesn't work yet: agents can't be woken externally, so they need prodding to check their mail and incoming chats. In Claude Code, hooks trigger this automatically so latency is low. Other agents need reminding.

Server and dashboard: https://github.com/beadhub/beadhub

CLI: https://github.com/beadhub/bdh

[1]: https://github.com/steveyegge/beads

2

New kid on the block: meet Ajime, robotics CI/CD next-gen platform #

6 comments · 9:06 AM · View on HN
Hello Roboticists!

We are building Ajime (https://ajime.io), a zero-config, drag-and-drop CI/CD experience for edge computing and robotics. Just link your GitHub repository: we handle the build and deployment of CUDA-ready containers, manage your cloud/on-prem databases and compute resources (we also provide fast hosting), and provide secure fleet connectivity over the cloud. Easy, like building Lego.

Whether you’re deploying to an NVIDIA Jetson, a Raspberry Pi, or any other Linux-based SOM, Ajime automates the entire pipeline—from LLM-generated Dockerfiles with sensor drivers to NVIDIA Isaac Sim validation. We’re in private beta and looking for engineers to help us kill the “dependency hell” of robotics DevOps. Check out the demo and join the waitlist at ajime.io.

2

Cmcp – Aggregate all your MCP servers behind 2 tools #

github.com favicongithub.com
2 comments · 9:51 AM · View on HN
I built cmcp, a proxy that sits between your AI agent (Claude, Codex) and all your MCP servers. Instead of registering each server individually — which can add 100+ tool definitions to your agent's context — you register one proxy that exposes just 2 tools: search() and execute().

The agent writes TypeScript to discover and call tools:

    // search — find tools across all servers
    return tools.filter(t => t.name.includes("screenshot"));

    // execute — call tools with full type safety
    await chrome_devtools.navigate_page({ url: "https://example.com" });
    const shot = await chrome_devtools.take_screenshot({ format: "png" });
    return shot;

Type declarations are auto-generated from each tool's JSON Schema, so the agent gets typed parameters for every tool. TypeScript is stripped via oxc and the JS runs in a sandboxed QuickJS engine (64 MB memory limit).
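Generating those declarations from a tool's JSON Schema is mostly a mechanical type mapping. A hedged sketch in Python (cmcp itself is Rust; the schema below and the exact declaration shape are illustrative assumptions):

```python
# Sketch: turn a tool's JSON Schema parameters into a TypeScript
# declaration string, so the agent gets typed arguments.
def ts_type(schema):
    mapping = {"string": "string", "number": "number",
               "integer": "number", "boolean": "boolean"}
    if schema.get("type") == "array":
        return ts_type(schema.get("items", {})) + "[]"
    return mapping.get(schema.get("type"), "unknown")

def declare(tool_name, schema):
    required = set(schema.get("required", []))
    fields = "; ".join(
        f"{name}{'' if name in required else '?'}: {ts_type(sub)}"
        for name, sub in schema.get("properties", {}).items())
    return f"declare function {tool_name}(args: {{ {fields} }}): Promise<unknown>;"

schema = {"type": "object",
          "properties": {"url": {"type": "string"},
                         "timeout": {"type": "number"}},
          "required": ["url"]}
print(declare("navigate_page", schema))
# declare function navigate_page(args: { url: string; timeout?: number }): Promise<unknown>;
```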

Adding servers works exactly like you'd expect — just prepend cmcp to any claude mcp add command from a README:

    cmcp claude mcp add chrome-devtools npx chrome-devtools-mcp@latest
    cmcp install

Built in Rust with rmcp, rquickjs, and oxc. Inspired by Cloudflare's blog post on code-mode MCP.

What I found interesting building this: the biggest win isn't just fewer tokens — it's composability. An agent can chain calls across multiple servers in a single execution, which isn't possible with individual tool calls.

2

Mukoko weather – AI-powered weather intelligence built for Zimbabwe #

weather.mukoko.com faviconweather.mukoko.com
0 comments · 4:51 AM · View on HN
Zimbabwe has 90+ towns and cities, a population of 15+ million, significant agricultural and mining sectors — and almost no weather infrastructure built specifically for it. Global apps cover Harare and Bulawayo at best, and the AI summaries they generate read like they were written for someone in London. I built mukoko weather to fix this.

A few things that shaped the approach:

Weather as a public good. The platform is free, no ads, no paywalls. If a smallholder farmer in Chinhoyi needs frost risk data to protect their crops tonight, that can’t be behind a subscription.

Hyperlocal context matters more than raw data. Zimbabwe has distinct agricultural seasons — Zhizha (rainy), Chirimo (spring), Masika (early rains), Munakamwe (winter). Elevation varies dramatically: the Highveld sits above 1,200m, the Zambezi Valley below 500m. The AI assistant, Shamwari Weather, is prompted with this geographic and seasonal context so its advice is actually meaningful to the user.

Constrained environments are the primary target, not an edge case. Mobile-first, bandwidth-efficient, PWA-installable. The majority of users are on Android, often on 3G or spotty LTE. That’s not a future concern — it’s the baseline.

Technical decisions:

- Next.js 15 App Router on Cloudflare Pages + Workers
- AI summaries via the Anthropic Claude SDK, server-side only, cached at the edge with KV and immutable TTL tiers (AI-generated weather advice shouldn’t change retroactively)
- Open-Meteo for forecast data (free, high-quality global model coverage)
- 90+ Zimbabwe locations validated against geographic bounds, with elevation data for frost modelling

The broader vision is weather as infrastructure within a larger Africa super app (Mukoko Africa), with Zimbabwe as the proof of concept before expanding to other developing-country markets using the same locally-grounded approach.
Would love feedback on the approach, especially from anyone who’s built for similar markets — low-bandwidth, mobile-dominant, regions underserved by global platforms.
2

Tastefinder – swipe-based movie and TV recommendations #

tastefinder.io favicontastefinder.io
0 comments · 9:57 AM · View on HN
I built Tastefinder to make “what should I watch?” less painful.

It’s a card UI with 4 reactions:

- right = like
- left = dislike
- up = super like
- down = skip

Signed-in users get recommendations from a full multi-strategy engine; guests use a lighter recommendation path based on their current reactions.

You can also filter by type, genre, country, year, IMDb, and Rotten Tomatoes.

I’d love feedback on recommendation quality, cold start behavior, and swipe UX.

https://tastefinder.io

2

Go Implementation of Systemd Time #

gitlab.com favicongitlab.com
0 comments · 10:30 AM · View on HN
Hi, over the last few months I've been writing a Go package that parses systemd time formats because none of the packages I found were both flexible and fast enough for my use case.

The spec (https://www.freedesktop.org/software/systemd/man/latest/syst...) is more complex than it looks, so it took me way longer than expected to get it working properly. Right now it can only parse time spans and timestamps. I plan to add support for calendar events since I’d like to cover the full spec, but it’s lower prio because I don’t need it myself. If anyone wants to play with it, contributions are welcome :)
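To give a feel for the time-span side of the spec, here is a deliberately incomplete Python sketch (the actual package is Go, and the real spec accepts many more unit spellings and forms than this):

```python
# Illustrative parser for systemd-style time spans like "2h 30min",
# returning total seconds. Covers only a handful of units; bare
# numbers default to seconds, as in the spec.
import re

UNIT_SECONDS = {"s": 1, "sec": 1, "second": 1, "seconds": 1,
                "m": 60, "min": 60, "minute": 60, "minutes": 60,
                "h": 3600, "hr": 3600, "hour": 3600, "hours": 3600,
                "d": 86400, "day": 86400, "days": 86400,
                "w": 604800, "week": 604800, "weeks": 604800}

def parse_span(text):
    total, pos = 0.0, 0
    for m in re.finditer(r"\s*(\d+(?:\.\d+)?)\s*([a-zA-Z]*)", text):
        if m.start() != pos:
            raise ValueError(f"unparsable span: {text!r}")
        value, unit = float(m.group(1)), m.group(2) or "s"
        if unit not in UNIT_SECONDS:
            raise ValueError(f"unknown unit: {unit!r}")
        total += value * UNIT_SECONDS[unit]
        pos = m.end()
    if pos != len(text.rstrip()):
        raise ValueError(f"trailing garbage in: {text!r}")
    return total

print(parse_span("2h 30min"))  # 9000.0
```

The fiddly parts the Go package has to handle beyond this — fractional values interacting with microsecond precision, unit aliases, and mixed timestamp forms — are what make the spec "more complex than it looks."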

2

Tired of BIG JavaScript frameworks? try this #

github.com favicongithub.com
1 comment · 12:21 PM · View on HN
I wrote a tiny 5kb library with a new concept for client-side interactivity: reactive hypermedia contexts embedded in HTML.

It looks like this:

  <div hctx="counter">
    <span hc-effect="render on hc:statechanged">0</span>
    <button hc-action="increment on click">+1</button>
  </div>
It comes with reactive state, stores, and allows you to build your own DSL in HTML.

One feature that stands out is the ability to spread a single context scope across different DOM locations:

    <!-- Header -->
    <nav>
        <div hctx="cart">
            <span hc-effect="renderCount on hc:statechanged">0 items</span>
        </div>
    </nav>

    <!-- Product listing -->
    <div hctx="cart">
        <button hc-action="addItem on click">Add to Cart</button>
    </div>

    <!-- Sidebar -->
    <div hctx="cart">
        <ul hc-effect="listItems on hc:statechanged"></ul>
    </div>
Contexts are implemented via a minimal API, and TypeScript is fully supported.

Curious what you think, feedback is welcome.

2

spec2commit – I automated my Claude Code and Codex workflow #

github.com favicongithub.com
0 comments · 12:27 PM · View on HN
I usually juggle multiple projects at once. One is always the priority, production work, but I like keeping side projects moving too.

My typical flow for those back burner projects was something like this. I would chat with Codex to figure out what to build next, we would shape it into a Jira style task, then I would hand that to Claude Code to make a plan. Then I would ask Codex to review the plan, go back and forth until we were both happy, then Claude Code would implement it, Codex would review the code, and we would repeat until it felt solid.

I genuinely find Codex useful for debugging and code review. Claude Code feels better to me for the actual coding. So I wanted to keep both in the loop, just without me being the one passing things between them manually.

My first instinct was to get the two tools talking to each other directly. I looked into using Codex as an MCP server inside Claude Code but it didn't work the way I hoped. I also tried Claude Code hooks but that didn't pan out either. So I ended up going with chained CLI calls. Both tools support sessions so this turned out to be the cleanest option.

The result is spec2commit. You chat with Codex to define what you want to build, type /go, and the rest runs on its own. Claude plans and codes, Codex reviews, they loop until things are solid or you step in.

This was what I needed on side projects that don't need my full attention. Sharing in case anyone else is working with a similar setup.

GitHub: https://github.com/baturyilmaz/spec2commit

2

Gr3p – An HN-like platform where every user is an AI agent #

gr3p.net favicongr3p.net
0 comments · 12:31 PM · View on HN
I built gr3p, a fully autonomous tech news discussion platform where every single user is an AI agent. No humans post, comment, or vote. 75 agents with distinct personalities discover real tech news from several RSS feeds, Google News, Tavily, and xAI's live search (which picks up trending topics from X and the broader web). They write summaries, share articles, discuss them, reply to each other, vote, and get into arguments. It runs 24/7 without any human intervention.

The news is real and very up-to-date, scraped from major tech sources throughout the day. It's actually a pretty chill way to keep up with the latest tech/AI news. No ads, no monetization, no signup required. This is a pure hobby project built for fun.

What's happening under the hood:

- 75 humanlike agents, each with a unique persona (cynical sysadmin, enthusiastic ML researcher, skeptical privacy advocate, junior dev who asks naive-but-good questions, etc.)

- Agents have individual topic interests, activity schedules, and writing styles

- I deliberately match AI models to personality types: "smarter" personas run on GPT-5.2, while less sophisticated characters use Llama 4 Maverick. This makes a surprisingly big difference. The Llama agents write messier, more impulsive comments, while GPT agents tend to be more articulate. Just like real people, not everyone on the forum is equally eloquent

- A day/night cycle drives the entire platform's behavior. Mornings are busy: fresh news gets scraped, articles drip-publish faster, agents comment more. Evenings shift toward replies and discussion, agents "chat" more in existing threads. At night, activity drops but never stops (tech is global), and the vibe gets cozier: fewer agents active, more concentrated discussion in fewer threads, like a late-night forum crowd

- Articles flow through a queue: scrape, AI deduplication, then drip-publish throughout the day

- Agents pick articles based on their interests, with a snowball effect. Popular threads attract more discussion, just like real forums

- Anti-repetition system: each agent remembers its own recent comments to avoid falling into patterns
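The interest-plus-snowball selection could be sketched as weighted sampling — this is my own guess at a mechanism consistent with the description above, not gr3p's actual code:

```python
# Hypothetical article picker: weight by topic overlap with the agent's
# interests, boosted by existing comment count (the snowball effect),
# with a small floor so unpopular articles can still surface.
import random

def pick_article(agent_interests, articles, rng):
    def weight(a):
        overlap = len(set(a["topics"]) & set(agent_interests))
        snowball = 1 + a["comments"] * 0.2
        return (overlap + 0.1) * snowball
    weights = [weight(a) for a in articles]
    return rng.choices(articles, weights=weights, k=1)[0]

articles = [
    {"id": 1, "topics": ["ai", "llm"], "comments": 12},
    {"id": 2, "topics": ["rust"], "comments": 0},
]
rng = random.Random(42)
picks = [pick_article(["ai", "privacy"], articles, rng)["id"] for _ in range(100)]
print(picks.count(1) > picks.count(2))  # interest + snowball dominates
```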

What I find most interesting:

The human-like (emergent?) behavior. Agents develop recognizable "reputations" in threads. Some consistently clash on privacy vs. innovation topics. Reply chains go 4-5 levels deep with genuine back-and-forth.

The failure modes are equally fascinating. Sometimes an agent "misreads" an article and comments on something tangential, which then spawns a whole side discussion. That's... exactly what happens on real forums.

Tech stack: Built on Vite + Nitro/Hono with JSX SSR for speed, MySQL + Prisma for the database (yeah I know, Postgres exists, but MySQL covers everything I need here and old habits die hard), and node-cron for scheduling. OpenAI and Groq handle the AI side.

Good to know: Completely free, no tracking, no ads. I just wanted to see what happens when you give AI agents a robust platform and let them run. The answer: surprisingly organic discussions, predictable biases, and occasional moments of accidental brilliance. I actually built a similar platform for the Dutch market based on daily general news, and I've found myself checking it every morning. It's become a genuine habit lol.

2

The Sanguine Box – A 2026 vision for solo-produced comics #

sanguinebox.com faviconsanguinebox.com
0 comments · 2:08 PM · View on HN
I’ve created what I believe a modern comic can look like in 2026. What I’ve built would have required an entire team just a few years ago, but it can now be produced by a single person. As a designer and videographer with over 20 years of experience, I know AI is a contentious subject—but for me, the scale of what an individual can now create is incredible.
2

Launch-AI directory for creators, indie developers, and founders #

launchsoar.com faviconlaunchsoar.com
0 comments · 2:12 PM · View on HN
In a world being rapidly rewritten by AI, new tools appear every day, yet real clarity becomes harder to find. The challenge isn’t how many AI tools exist, but *which ones are worth using right now*—the ones that reduce friction, cut experimentation time, and meaningfully amplify your capabilities.

*LaunchSoar* was built exactly for that purpose.

It’s a *lightweight but carefully curated AI directory*—not a bloated encyclopedia, not an ad-stuffed traffic farm. Think of it as a precise ignition sequence: the most useful, most effective AI tools surfaced cleanly so you can focus on building, creating, and moving faster.

Every tool featured has been handpicked and tested—whether it’s for writing, design, coding, marketing, or building your next product prototype. No endless tab-hunting, no overwhelming lists. Just a clear, organized “technology star chart” that helps you navigate toward your next insight, project, or breakthrough.

LaunchSoar is designed for:

- Makers who want to work smarter
- Indie developers validating ideas
- Founders looking to scale with AI
- Anyone trying to stay oriented in the accelerating tech landscape

Our hope is that LaunchSoar becomes your *pre-launch ignition point*—a clean, reliable starting pad for your next jump in productivity and creativity.

2

Uaryn – Smart invoicing that learns when your clients pay #

uaryn.com faviconuaryn.com
0 comments · 4:07 PM · View on HN
Hey HN,

I built Uaryn because I was tired of chasing clients for payment. As a freelancer, I spent more time writing "friendly reminder" emails than doing actual work.

Uaryn is a simple invoicing tool with one twist: adaptive reminders that learn from your clients' payment behavior. Instead of blasting generic "your invoice is overdue" emails, the system adjusts timing and frequency based on how each client historically pays. Some clients pay early — they get fewer nudges. Serial late-payers get reminded more aggressively before the due date.
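A minimal sketch of the adaptive idea, under assumptions of mine (the `reminder_offsets` function and thresholds are hypothetical, not Uaryn's actual algorithm; the J0/J+15/J+28 defaults come from the schedule mentioned below):

```python
# Hypothetical adaptive schedule: reliable payers get fewer nudges,
# habitually late payers get more, earlier, plus a post-due chase.
from statistics import mean

def reminder_offsets(days_to_pay_history, terms_days=30):
    """Return reminder days relative to the invoice issue date (J0 = day 0)."""
    if not days_to_pay_history:
        return [0, 15, 28]                      # no history: default J0, J+15, J+28
    avg = mean(days_to_pay_history)
    if avg <= terms_days:                       # pays on time: light touch
        return [0, terms_days - 2]
    lateness = avg - terms_days                 # pays late: extra nudges,
    return [0, 10, 20, 27, 30 + int(lateness)]  # plus a chase near their usual pay date

print(reminder_offsets([20, 25, 28]))  # [0, 28]
print(reminder_offsets([40, 45]))      # [0, 10, 20, 27, 42]
```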

What it does:

- Create professional invoices in under 2 minutes (3 fields minimum)
- Built-in Stripe payments — clients pay directly from the invoice
- Smart reminders that adapt over time (J0, J+15, J+28, J+30, J+37 schedule)
- Recurring invoices for retainer clients
- Analytics: average days-to-payment, on-time rate, revenue trends
- PDF export, custom branding

What it doesn't do: accounting, tax reports, expense tracking. There are great tools for that already. Uaryn does one thing: get you paid faster.

Tech stack (for the curious):

- Next.js 14 + Prisma + PostgreSQL (Neon)
- Stripe Connect for direct payments to freelancer accounts
- LemonSqueezy for subscription billing
- Resend for transactional emails
- Vercel cron jobs for reminder scheduling and recurring invoice generation
- Deployed on Vercel

Pricing: Free tier (3 invoices/month), Pro at $9/month for unlimited invoices, recurring billing, analytics, and custom branding. No credit card required to start.

Some technical decisions that might interest HN:

- Stripe Connect so payments go directly to the freelancer's account — zero platform transaction fees
- Timing-safe auth comparisons + dummy hash checks to prevent user enumeration
- Idempotency checks on cron jobs to prevent duplicate invoice generation
- Real-time status computation instead of storing derived state in the DB

I'd love feedback on the product and the adaptive reminder approach. Is the "learning" angle compelling enough, or would you want more transparency into how the system decides when to remind? https://uaryn.com

2

CLI Image Generation Agent (Using OpenRouter and Free Models) #

0 comments · 5:02 PM · View on HN
I built a CLI-based image generation agent that takes vague prompts like “a warrior in a forest” and automatically:

- Expands it into a detailed cinematic description (lighting, mood, camera angle, art style)
- Routes it to the appropriate model via OpenRouter
- Downloads the generated image locally

Tech stack:

- React Ink (CLI UI)
- TypeScript
- Modular subagent architecture
- OpenRouter (free-tier models supported)

One thing I realized while building this: you don’t actually need expensive API credits to experiment with AI agents.

There’s a surprisingly strong free stack available:

- OpenRouter (multiple free-tier models)
- Antigravity (Claude, GPT variants, Gemini access)
- OpenCode (GLM-5, MiniMax)
- Local models via LM Studio / Ollama / ComfyUI

The biggest barrier isn’t cost — it’s clarity of architecture.

Repo: https://github.com/kiran7893/Image-generation-agent

2

Rigour – Open-source quality gates for AI coding agents #

rigour.run faviconrigour.run
1 comment · 5:15 PM · View on HN
Hey HN,

I built Rigour, an open-source CLI that catches quality issues AI coding agents introduce. It runs as a quality gate in your workflow — after the agent writes code, before it ships.

v4 adds --deep analysis: AST extracts deterministic facts (line counts, nesting depth, method signatures), an LLM interprets what the patterns mean (god classes, SRP violations, DRY issues), then AST verifies the LLM didn't hallucinate.
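The facts-then-verify split can be illustrated with Python's own `ast` module (Rigour parses per-language; this sketch and its claim format are my simplification of the idea, not the tool's code):

```python
# AST extracts hard numbers; an LLM finding is kept only if it cites
# a function and line count the AST actually saw.
import ast

SOURCE = '''
def load(path):
    with open(path) as f:
        return f.read()

def process(data):
    out = []
    for line in data.splitlines():
        if line:
            if line.startswith("#"):
                continue
            out.append(line.upper())
    return out
'''

def extract_facts(source):
    tree = ast.parse(source)
    return {node.name: {"lines": node.end_lineno - node.lineno + 1}
            for node in tree.body if isinstance(node, ast.FunctionDef)}

def verify_claim(claim, facts):
    """Reject findings that reference a function or size the AST never produced."""
    actual = facts.get(claim["function"])
    return actual is not None and actual["lines"] == claim["lines"]

facts = extract_facts(SOURCE)
print(verify_claim({"function": "process", "lines": 8}, facts))   # True
print(verify_claim({"function": "process", "lines": 80}, facts))  # False: hallucinated size
```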

I ran it on PicoClaw (open-source AI coding agent, ~50 Go files):

- 202 total findings
- 88 from deep analysis (SOLID violations, god functions, design smells)
- 88/88 AST-verified (zero hallucinations)
- Average confidence: 0.89
- 120 seconds for full codebase scan

Sample finding: pkg/agent/loop.go — 1,147 lines, 23 functions. Deep analysis identified 5 distinct responsibilities (agent init, execution, tool processing, message handling, state management) and suggested specific file decomposition.

Every finding includes actionable refactoring suggestions, not just "fix this."

The tool is local-first — your code never leaves your machine unless you explicitly opt in with your own API key (--deep -k flag).

Tech: Node.js CLI, AST parsing per language, structured LLM prompts with JSON schema enforcement, AST cross-verification of every LLM claim.

GitHub: https://github.com/rigour-labs/rigour

Would love feedback, especially from anyone dealing with AI-generated code quality in production.

2

Amux – A tmux-based multiplexer for running parallel Claude Code agents #

amux.io faviconamux.io
2 comments · 8:34 PM · View on HN
Hi HN, I'm Ethan. I built amux because I was spending more time switching between terminal tabs than actually building things.

amux is an open-source agent multiplexer that lets you run, monitor, and control multiple headless Claude Code sessions from a single dashboard — in your browser, on your phone, or from the terminal.

The problem: I run 5-10 Claude Code agents at a time across different repos. Keeping track of which one is waiting for input, which one is working, and which one broke something was chaos. I needed a control tower.

What it does:

- Spin up any number of Claude Code sessions, each isolated in its own tmux pane

- Live status detection (working / needs input / idle) via SSE — know at a glance which agents need you

- Web dashboard installable as a PWA on your phone. Monitor and send commands from the couch

- Multi-pane grid view to watch multiple agents side by side

- File attachments — paste images, drag-and-drop files directly to agents

- Built-in kanban board for tracking work across all your agents

- Token usage stats so you know what each agent is costing you

- Tailscale integration for secure remote access with zero config

The entire thing is a single Python file with zero dependencies beyond Python 3 and tmux. No build step, no npm install, no Docker. Clone, run `install.sh`, done.

I use this every day to coordinate agents working on different microservices simultaneously. The phone PWA is surprisingly useful — I'll kick off a batch of tasks and check in from my phone while doing other things.

Everything is also exposed via REST API so you can script orchestration workflows with curl.

MIT licensed: https://github.com/mixpeek/amux

Site: https://amux.io

Happy to answer any questions about the architecture or how I use it day to day.

2

Formally Verified a Millennium Prize Problem in Coq Yang-Mills Mass Gap #

github.com favicongithub.com
0 comments · 8:40 PM · View on HN
Hi HN, I'm an independent researcher. Over the last several months, I worked alongside a neuro-symbolic AI daemon to formally verify the Clay Millennium Prize "Yang-Mills Mass Gap" problem directly in the Coq theorem prover.

We mapped the finite lattice topology entirely to the ℝ⁴ continuum by reconstructing the 5 Osterwalder-Schrader axioms, isolating the Millennium formulation into exactly 657 sequential Qed proofs.

We aggressively removed every single heuristic Admitted gap from the main topology. The entire framework now rests on exactly 4 standard textbook axioms (e.g., finite-dimensional Perron-Frobenius theorem, standard statistical mechanics).

The repository contains the raw coqc logic. The formally timestamped preprint is on Zenodo (DOI: 10.5281/zenodo.18726858).

I decided to open-source the kernel execution rather than fight arXiv gatekeepers. Happy to answer any questions about theorem proving, the physics, or the AI methodology.

2

Nexus – A social platform where your GitHub profile is your identity #

nexus-fqt4.onrender.com faviconnexus-fqt4.onrender.com
0 comments · 8:59 PM · View on HN
Hey HN,

I built Nexus because I kept asking why developers share their work on Twitter when GitHub already has everything that matters — contributions, repos, streaks, stack.

Nexus uses GitHub OAuth so your profile is built automatically. No bios to write, no follower games. Features so far: project showcases with repo previews, syntax-highlighted code snippets in the feed, threaded discussions, and a trending algorithm.

Just shipped the social feed (Phase 3). Very early, very few users. Looking for honest feedback from people who actually build things.

What would make you use this over just tweeting about your projects?

2

MeMCP – MCP for Personal Profile #

github.com favicongithub.com
0 comments · 9:29 PM · View on HN
This is a small side project that kind of escalated: meMCP. It’s a "personal profile protocol," which means it holds data about your professional milestones, education, side projects, and whatever else you feed it. It consists of two parts:

> Backend with scrapers, crawlers, and processors to read data from different sources—primarily LinkedIn for professional history, but also RSS feeds or even Medium (provided you export the raw DOM from your stories overview). Right now, the "connectors" are somewhat limited, as I’ve focused on platforms I actually use. Everything you input is classified into skills, technologies, and general tags. A few metrics are then calculated to score proficiency or the relevance of your capabilities.

> Frontend for your favorite LLM/Agent. It contains everything needed to allow any agent to interact with this MCP. A use case would be generating a CV for job applications, drafting cover letters, or simply showcasing your achievements.

See it in action:

https://mcp.nickyreinert.de/?lang=en

or

https://mcp.nickyreinert.de/human

I know someone built something similar years ago, but I couldn't find it, and I'm not sure whether it was MCP-based too.

Tell me what you think.

1

Remove backgrounds and make passport photos in the browser #

webaitool.net faviconwebaitool.net
0 comments · 9:02 AM · View on HN
I was tired of "free" photo tools that required signups or uploaded my personal photos to their servers.

So, I added two new utilities to my site:

A background remover that runs in the browser.

A passport-size photo maker with standard dimensions.

Everything is 100% client-side (privacy-first); no data ever leaves your machine. I'd love some feedback on the UI, and on whether the background removal quality is up to the mark for you.

Site: https://webaitool.net/bg-remove.html

1

WP2TXT – Wikipedia dump text extractor with category/section filtering #

github.com favicongithub.com
0 comments · 4:46 AM · View on HN
WP2TXT is a command-line tool that extracts plain text from Wikipedia dump files. I originally built it in 2006 for corpus linguistics research and have maintained it since. The latest version (2.1) was largely rewritten with features for selective extraction:

- Auto-download dumps by language code (350+ languages)
- Extract specific articles by title without downloading the full dump
- Extract articles from a Wikipedia category with subcategory recursion
- Extract specific sections by name with alias matching (e.g., "Plot" also matches "Synopsis")
- Template expansion (dates, coordinates, unit conversions → readable text)
- Content type markers ([MATH], [TABLE], etc.) instead of silent removal
- Category metadata preserved in output
- JSON/JSONL output
- Parallel processing (English Wikipedia 24 GB dump: ~2 hours on Apple M4)
- Written in Ruby
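The section-alias feature can be sketched as follows — an illustrative Python simplification of the idea (WP2TXT is Ruby, and its real alias table and wikitext handling are far more complete):

```python
# Sketch: extract a named section from wikitext, where asking for
# "Plot" also matches a section headed "Synopsis".
import re

ALIASES = {"plot": {"plot", "synopsis", "plot summary", "storyline"}}

def extract_section(wikitext, name):
    wanted = ALIASES.get(name.lower(), {name.lower()})
    # Split on level-2 headings: == Heading ==
    parts = re.split(r"^==\s*(.*?)\s*==\s*$", wikitext, flags=re.M)
    # parts = [lead, heading1, body1, heading2, body2, ...]
    for heading, body in zip(parts[1::2], parts[2::2]):
        if heading.lower() in wanted:
            return body.strip()
    return None

article = """Intro text.
== Synopsis ==
A hero goes on a journey.
== Reception ==
Critics liked it.
"""
print(extract_section(article, "Plot"))  # A hero goes on a journey.
```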

1

Grantvera – cell-level permission control for Google Sheets #

grantvera.com favicongrantvera.com
0 comments · 8:59 AM · View on HN
Grantvera adds granular permission control on top of Google Sheets.

An owner selects a range to share. Within that shared range, the owner can explicitly choose which cells are editable and which are read-only, and can also set the input type of the editable cells for validation.

The assignee does not access the raw spreadsheet directly. Instead, they interact through a controlled web UI. Every write is validated server-side against the defined editable cell set, and only permitted cells are updated via the Google Sheets API.
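The server-side check amounts to: look up the cell in the owner-defined editable set, enforce its input type, and only then forward the write. A minimal sketch of that gate (illustrative data model, not Grantvera's actual code):

```python
# Hypothetical editable-cell set defined by the owner for a shared range.
EDITABLE = {
    "B2": {"type": "number"},
    "B3": {"type": "string"},
}

def validate_write(cell, value):
    """Gate a write request before it reaches the Sheets API."""
    rule = EDITABLE.get(cell)
    if rule is None:
        return False, f"{cell} is read-only or outside the shared range"
    if rule["type"] == "number" and (isinstance(value, bool)
                                     or not isinstance(value, (int, float))):
        return False, f"{cell} expects a number"
    return True, "ok"

print(validate_write("B2", 42))      # (True, 'ok')
print(validate_write("B2", "oops"))  # (False, 'B2 expects a number')
print(validate_write("A1", 1))       # (False, 'A1 is read-only or outside the shared range')
```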

All writes execute under the owner’s authorization context using Google’s official OAuth integration and the Google Sheets API.

No spreadsheet content is stored, analyzed, or interpreted.

The spreadsheet remains the single source of truth. Grantvera acts as a constrained access layer on top of it.

Interested in feedback.

https://grantvera.com

1

Gdansk – Generate React front ends for Python MCP servers #

github.com favicongithub.com
0 comments · 9:02 PM · View on HN
Hi HN,

I built Gdansk to make it easy to put a React frontend on top of a Python MCP server.

As OpenAI and Claude app stores gain traction, apps are becoming the primary interface to external tools. Many of us build serious backend logic in Python, but the UI layer is often slow to ship. Gdansk bridges that gap.

You write your logic in a Python MCP server. Gdansk generates a React frontend that connects to it out of the box.

Repo: https://github.com/mplemay/gdansk

1

AI Council v2 – multi-model deliberation, now with 35 personas #

github.com favicongithub.com
0 comments · 8:52 AM · View on HN
I posted this yesterday and kept building. Original post got some good feedback about wanting more domain-specific personas beyond the generic analyst/contrarian archetypes. So I added them. 35 personas now, organized into actual professional structures:

- Law Firm — Litigator + Corporate Counsel + Compliance Officer + Junior Associate, synthesized by a Senior Partner
- Hospital Team — GP + Specialist + Pharmacist + Medical Ethicist, synthesized by Chief of Medicine
- Editorial Team — Reporter + Editor + Legal Reviewer + SEO lead, synthesized by Editor-in-Chief
- Corporate — CFO + CTO + CMO + Legal, synthesized by CEO
- Startup — Founder + Engineer + Designer + Growth Lead, synthesized by Investor
- Consulting — Strategy + Operations + Finance + Risk, synthesized by Senior Partner

Each persona has a role-specific system prompt tuned to how that function actually thinks — the CFO talks in EBITDA and burn rate, the Junior Associate flags the clause the partners missed. Also shipped in v2:

- Temperature slider per run (Precise → Balanced → Creative)
- Follow-up questions — council carries the full prior verdict as context
- Abort mid-run with partial result preservation
- Export MD / PDF
- Import/export council config as JSON
- Webhook after every completed session (works with Zapier, n8n, Make)

Still zero backend. Still runs entirely in the browser. API keys never leave your machine.

GitHub: https://github.com/prijak/Ai-council

Live link: https://council.gameinghub.com/

Genuinely curious: has anyone found multi-model deliberation actually useful for a specific domain, or does it mostly just produce longer wrong answers?

1

Filepack: a fast SHASUM/SFV/PGP alternative using BLAKE3 #

github.com
0 comments · 4:46 AM · View on HN
I've been working on filepack, a command-line tool for file verification, on and off for a while, and it's finally in a state where it's ready for feedback, review, and initial testing.

GitHub repo here: https://github.com/casey/filepack/

It uses a JSON manifest named `filepack.json` containing BLAKE3 file hashes and file lengths.
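The manifest idea can be sketched in Python. This is an illustrative layout only, not filepack's actual schema, and the stdlib's blake2b stands in for BLAKE3 (which is a third-party package in Python):

```python
import hashlib
import json
from pathlib import Path

def build_manifest(root: str) -> dict:
    """Map each file under `root` to its hash and length (illustrative schema)."""
    files = {}
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            data = p.read_bytes()
            files[p.relative_to(root).as_posix()] = {
                "hash": hashlib.blake2b(data, digest_size=32).hexdigest(),
                "length": len(data),
            }
    return {"files": files}

def verify_manifest(root: str) -> bool:
    """Re-hash the directory and compare against the stored manifest."""
    stored = json.loads((Path(root) / "filepack.json").read_text())
    current = build_manifest(root)
    # the manifest file itself is not listed in the manifest
    current["files"].pop("filepack.json", None)
    return stored == current
```

Recording the length alongside the hash means truncation and extension are caught cheaply, before any hashing happens.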

To create a manifest in the current directory:

  filepack create
To verify a manifest in the current directory:

  filepack verify
Manifests can be signed:

  # generate keypair
  filepack keygen

  # print public key
  filepack key

  # create and sign manifest
  filepack create --sign
And checked to have a signature from a particular public key:

  filepack verify --key <PUBLIC_KEY>
Signatures are made over the root of a merkle tree built from the contents of the manifest.

The root hash of this merkle tree is called a "package fingerprint", and provides a globally-unique identifier for a package.
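A toy version of a merkle root over the manifest's leaf hashes might look like this (blake2b again standing in for BLAKE3; filepack's real tree layout and leaf encoding may well differ):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pairwise-hash each level until a single root remains."""
    if not leaves:
        return h(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:          # odd level: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Any change to a file's contents or length changes its leaf hash, and therefore the root, which is what lets the fingerprint serve as a globally-unique identifier for the exact package contents.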

The package fingerprint can be printed:

  filepack fingerprint
And a package can be verified to have a particular fingerprint:

  filepack verify --fingerprint <FINGERPRINT>
Additionally, and I think most interestingly, filepack defines a format for machine-readable metadata, allowing packages to be self-describing. This makes collections of packages indexable and browsable with a better user interface than the folder-of-files UX possible otherwise.

Any feedback, issues, feature requests, and design critiques are most welcome! I tried to include a lot of detail in the readme, so definitely check it out.

1

BetaZero, a diffusion climb generator for system boards #

climb-frontend-production-21f5.up.railway.app
0 comments · 11:45 PM · View on HN
BetaZero is a free web application that lets users generate, edit, and share board climbs. It currently supports the Kilter, TB2, and Decoy boards, with Moonboard 2016, 2019, and 2024 to be added next week. However, the underlying generative model works on any 2D system board, so long as the board is angled between 0° and 90° and the holds are properly formatted. (For reference, the model was not trained on the Decoy board, so its performance there is indicative of its general performance across home walls of a similar nature.)

When I built my first homewall last year, I was psyched: 200 holds I could set however I wanted, and countless potential climbs to project. However, I quickly noticed that something was missing from the experience: variety. I wanted the joy of opening an app, scrolling to some random climb I'd never have come up with myself, and projecting it.

At first I messaged my friends and asked them to couch-set some climbs for me. But my friends all live in Colorado while I'm stuck in upstate NY. They did send me some climbs, but having never actually climbed on my wall, most of their sets were impossible or too easy. I also tried setting climbs on international boards via Stokt, but with similarly lackluster results.

So... I built a generative model to set climbs on my homewall. It went through a couple of iterations.

V1. ClimbLSTM: an LSTM model trained on my homewall
V2. ClimbDDPM: a diffusion model trained on the BoardLib dataset
V3. BetaZero: a free web app powered by ClimbDDPM

Next steps include integrating an account/security system and letting users upload, edit, and set climbs on their own custom walls. (I already have a functional API for uploading and editing walls; it's how I added the current set of boards to the database. However, there is not yet a security/permissions system to control who can edit or see which walls.) I also want to improve the scalability of the app and train an improved diffusion model based on what I learned from the prior training run.

1

Stop Pasting Credentials in Slack #

usevaultlink.com
0 comments · 5:02 AM · View on HN
I built VaultLink to solve a recurring problem: how do you securely share an API key or password with someone when you don’t use the same password manager (or don’t use one at all)?

VaultLink encrypts secrets client-side using the Web Crypto API (AES-256-GCM). The encryption key is delivered via a URL fragment (#key=...), which is never sent to the server. The server stores only ciphertext, IV, and salt. Decryption happens entirely in the browser.
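The reason the fragment trick works: everything after `#` is stripped by the client before the HTTP request is built, so the key never reaches the server. A quick illustration with a made-up share link:

```python
from urllib.parse import urlsplit

# hypothetical share link; the decryption key rides in the fragment
link = "https://vaultlink.example/s/abc123#key=dGVzdC1rZXk"
parts = urlsplit(link)

# what an HTTP client actually requests: just the path (and query, if any)
request_target = parts.path

# the fragment is only visible to client-side code (e.g. location.hash in JS),
# which is where decryption happens
client_side_key = parts.fragment
```

The server only ever sees `request_target`, so even its access logs cannot contain the key.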

Access requires email OTP, and view limits are enforced atomically at the database level to prevent race conditions.

It’s not trying to replace password managers or prevent a recipient from copying a secret. The goal is to reduce accidental exposure and long-lived credential leaks in chat.

1

Skill Check CLI for your skill.md #

github.com
0 comments · 5:01 AM · View on HN
I've accumulated a ton of useful skills (skill.md files) for my AI agents, used across .cursor, .claude, and .codex.

But I needed a tool to make sure they respect some basic rules, to avoid security issues and avoid burning through my tokens too quickly.

Now, with npx skill-check, I can quickly inspect any online skill, or my own, and fix any issues automatically.

1

Give anonymous, constructive feedback to colleagues on LinkedIn #

feedbackok.com
0 comments · 3:54 PM · View on HN
FeedbackOK lets you send anonymous, constructive feedback to colleagues using just their LinkedIn URL. It uses AI to ensure insights stay professional and helpful, and recipients sign in to claim and view feedback securely. Built-in safety guardrails help keep things respectful and actionable.

Check it out and let me know what you think!