Daily Show HN


Show HN for February 10, 2026

69 items
694

I spent 3 years reverse-engineering a 40 yo stock market sim from 1986 #

wallstreetraider.com
233 comments · 2:44 AM · View on HN
Hello, my name is Ben Ward. For the past 3 years I have been remastering Wall Street Raider, the financial game created by Michael Jenkins, originally released for DOS in 1986.

It has been a rough journey but I finally see the light at the end of the tunnel. I just recently redid the website and thought maybe the full story of how this project came to be would interest you all. Thank you for reading.

242

I built a macOS tool for network engineers – it's called NetViews #

bedpage.com
61 comments · 5:20 AM · View on HN
Hi HN — I’m the developer of NetViews, a macOS utility I built because I wanted better visibility into what was actually happening on my wired and wireless networks.

I live in the CLI, but for discovery and ongoing monitoring, I kept bouncing between tools, terminals, and mental context switches. I wanted something faster and more visual, without losing technical depth — so I built a GUI that brings my favorite diagnostics together in one place.

About three months ago, I shared an early version here and got a ton of great feedback. I listened: a new name (it was PingStalker), a longer trial, and a lot of new features. Today I’m excited to share NetViews 2.3.

NetViews started because I wanted to know if something on the network was scanning my machine. Once I had that, I wanted quick access to core details—external IP, Wi-Fi data, and local topology. Then I wanted more: fast, reliable scans using ARP tables and ICMP.

As a Wi-Fi engineer, I couldn’t stop there. I kept adding ways to surface what’s actually going on behind the scenes.

Discovery & Scanning:

- ARP, ICMP, mDNS, and DNS discovery to enumerate every device on your subnet (IP, MAC, vendor, open ports).
- Fast scans using ARP tables first, then ICMP, to avoid the usual “nmap wait”.
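The ARP-first ordering amounts to: harvest what the OS already knows for free, then probe only the remainder. A rough sketch in Python rather than the app's Swift (the `arp -a`-style output parsing is an assumption, not NetViews' actual code):

```python
import re

# Matches lines like: "? (192.168.1.10) at aa:bb:cc:dd:ee:ff on en0"
ARP_LINE = re.compile(r"\((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-f:]{11,17})", re.IGNORECASE)

def hosts_from_arp_output(arp_output: str) -> dict[str, str]:
    """Harvest IP -> MAC pairs already in the OS ARP cache; no packets sent."""
    return {m["ip"]: m["mac"] for m in ARP_LINE.finditer(arp_output)}

def plan_scan(subnet_ips: list[str], arp_output: str) -> tuple[dict[str, str], list[str]]:
    """ARP-cache hits are free; only the remaining addresses need an ICMP probe."""
    known = hosts_from_arp_output(arp_output)
    to_ping = [ip for ip in subnet_ips if ip not in known]
    return known, to_ping
```

Probing only the cache misses is what avoids the full-subnet wait.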

Wireless Visibility:

- Detailed Wi-Fi connection performance and signal data.
- Visual and audible tools to quickly locate the access point you’re associated with.

Monitoring & Timelines:

- Connection and ping timelines over 1, 2, 4, or 8 hours.
- Continuous “live ping” monitoring to visualize latency spikes, packet loss, and reconnects.

Low-level Traffic (but only what matters):

- Live capture of DHCP, ARP, 802.1X, LLDP/CDP, ICMP, and off-subnet chatter.
- mDNS decoded into human-readable output (this took months of deep dives).

Under the hood, it’s written in Swift. It uses low-level BSD sockets for ICMP and ARP, Apple’s Network framework for interface enumeration, and selectively wraps existing command-line tools where they’re still the best option. The focus has been on speed and low overhead.

I’d love feedback from anyone who builds or uses network diagnostic tools:

- Does this fill a gap you’ve personally hit on macOS?
- Are there better approaches to scan speed or event visualization that you’ve used?
- What diagnostics do you still find yourself dropping to the CLI for?

Details and screenshots: https://netviews.app. There’s a free trial and paid licenses; I’m funding development through one-time licenses rather than ads or subscriptions. Licenses include free upgrades.

Happy to answer any technical questions about the implementation, Swift APIs, or macOS permission model.

204

Rowboat – AI coworker that turns your work into a knowledge graph (OSS) #

github.com
56 comments · 4:47 PM · View on HN
Hi HN,

AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer.

For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF.

Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I

Rowboat has two parts:

(1) A living context graph: Rowboat connects to sources like Gmail and meeting notes like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked and editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it.
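The Obsidian-style backlink structure described here can be sketched as a simple index over `[[wikilinks]]` in the Markdown bodies (a generic illustration, not Rowboat's actual extraction code):

```python
import re
from collections import defaultdict

# [[Note]], [[Note|alias]], and [[Note#heading]] all resolve to "Note".
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def backlink_index(notes: dict[str, str]) -> dict[str, set[str]]:
    """Map each note name to the set of notes that link to it."""
    index: defaultdict[str, set[str]] = defaultdict(set)
    for name, body in notes.items():
        for target in WIKILINK.findall(body):
            index[target.strip()].add(name)
    return index
```

An index like this is what lets a new standup note land next to the original commitment it amends.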

(2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs.

Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for.

Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time.

Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents.

We’d love to hear your thoughts and welcome contributions!

101

Distr 2.0 – A year of learning how to ship to customer environments #

github.com
29 comments · 12:19 PM · View on HN
A year ago, we launched Distr here to help software vendors manage customer deployments remotely. We had agents that pulled updates, a hub with a GUI, and a lot of assumptions about what on-prem deployment needed.

It turned out things get messy when your software is running in places you can't simply SSH into.

Over the last year, we’ve also helped modernize a lot of home-baked solutions: bash scripts that email when updates fail, Excel sheets nobody trusts to track customer versions, engineers driving to customer sites to fix things in person, debug sessions over email (“can you take a screenshot of the logs and send it to me?”), customers with access to internal AWS or GCP registries because there was no better option, and deployments two major versions behind that nobody wants to touch.

We waited a year before making our first breaking change, which led to a major SemVer update, but it was ultimately necessary: we needed to completely rewrite how we manage customer organizations.

In Distr, we differentiate between vendors and customers. A vendor is typically the author of a software / AI application who wants to distribute it to customers. Previously, we had taken a shortcut where every customer was just a single user who owned a deployment. We’ve now introduced customer organizations: vendors onboard customer organizations onto the platform, and customers own their internal user management, including RBAC. This change obviously broke our API, and although the migration for our cloud customers was smooth, custom solutions built on top of our APIs needed updates.

Other notable features we’ve implemented since our first launch:

- An OCI container registry built on an adapted version of https://github.com/google/go-containerregistry/, directly embedded into our codebase and served via a separate port from a single Docker image. This allows vendors to distribute Docker images and other OCI artifacts if customers want to self-manage deployments.

- License Management to restrict which customers can access which applications or artifact versions. Although “license management” is a broadly used term, the main purpose here is to codify contractual agreements between vendors and customers. In its simplest form, this is time-based access to specific software versions, which vendors can now manage with Distr.

- Container logs and metrics you can actually see without SSH access. Internally, we debated whether to use a time-series database or store all logs in Postgres. Although we had to tinker quite a bit with Postgres indexes, it now runs stably.

- Secret Management, so database passwords don’t show up in configuration steps or logs.
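In its simplest form, the time-based license check mentioned above is a validity-window plus version-allowlist test. A minimal sketch with a hypothetical license shape (not Distr's actual data model):

```python
from datetime import date

# Hypothetical shape: a license grants one customer access to specific
# versions of one application within a validity window.
def version_accessible(license: dict, app: str, version: str, today: date) -> bool:
    """Time-based license check in its simplest form."""
    return (
        license["app"] == app
        and version in license["versions"]
        and license["valid_from"] <= today <= license["valid_until"]
    )
```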

Distr is now used by 200+ vendors, including Fortune 500 companies, across on-prem, GovCloud, AWS, and GCP, spanning health tech, fintech, security, and AI companies. We’ve also started working on our first air-gapped environment.

For Distr 3.0, we’re working on native Terraform / OpenTofu and Zarf support to provision and update infrastructure in customers’ cloud accounts and physical environments—empowering vendors to offer BYOC and air-gapped use cases, all from a single platform.

Distr is fully open source and self-hostable: https://github.com/distr-sh/distr

Docs: https://distr.sh/docs

We’re YC S24. Happy to answer questions about on-prem deployments and would love to hear about your experience with complex customer deployments.

66

Stripe-no-webhooks – Sync your Stripe data to your Postgres DB #

github.com
30 comments · 5:14 PM · View on HN
Hey HN, stripe-no-webhooks is an open-source library that syncs your Stripe payments data to your own Postgres database: https://github.com/pretzelai/stripe-no-webhooks.

Here's a demo video: https://youtu.be/cyEgW7wElcs

It creates a webhook endpoint in your Stripe account to forward webhooks to your backend where a webhook listener stores all the data into a new stripe.* schema. You define your plans in TypeScript, run a sync command, and the library takes care of creating Stripe products and prices, handling webhooks, and keeping your database in sync. We also let you backfill your Stripe data for existing accounts.
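Stripe retries webhook deliveries, so a listener like the one described typically stores events idempotently, keyed by event id. A minimal sketch with SQLite as a stand-in and a hypothetical `stripe_events` table (the library itself writes to a Postgres `stripe.*` schema):

```python
import json
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stripe_events ("
        "  id TEXT PRIMARY KEY,"    # Stripe event id, e.g. evt_...
        "  type TEXT NOT NULL,"
        "  payload TEXT NOT NULL)"
    )

def store_event(conn: sqlite3.Connection, event: dict) -> bool:
    """Insert once; retried deliveries of the same event id are ignored."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO stripe_events (id, type, payload) VALUES (?, ?, ?)",
        (event["id"], event["type"], json.dumps(event)),
    )
    return cur.rowcount == 1  # True only on first delivery
```

The primary key on the event id is what makes replayed deliveries harmless.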

Why is this useful?

- You don't have to figure out which webhooks you need or write listeners for each one. The library handles all of that, following the approach of libraries like dj-stripe in the Django world (https://dj-stripe.dev/).

- Stripe's API has a 100 rpm rate limit. If you're checking subscription status frequently or building internal tools, you'll hit it. Querying your own Postgres doesn't have this problem.

- You can give an AI agent read access to the stripe.* schema to debug payment issues—failed charges, refunds, whatever—without handing over Stripe dashboard access.

- You can join Stripe data with your own tables for custom analytics, LTV calculations, etc.

It supports pre-paid usage credits, account wallets and usage-based billing. It also lets you generate a pricing table component that you can customize. You can access the user information using the simple API the library provides:

  billing.subscriptions.get({ userId });
  billing.credits.consume({ userId, key: "api_calls", amount: 1 });
  billing.usage.record({ userId, key: "ai_model_tokens_input", amount: 4726 });
Effectively, you don't have to deal with either the Stripe dashboard or the Stripe API/SDK any more if you don't want to. The library gives you a nice abstraction on top of Stripe that should cover most subscription payment use cases.

Let's see how it works with a quick example. Say you have a billing plan like Cursor (the IDE) used to have: $20/mo, you get 500 API completions + 2000 tab completions, you can buy additional API credits, and any additional usage is billed as overage.

You define your plan in TypeScript:

  {
    name: "Pro",
    description: "Cursor Pro plan",
    price: [{ amount: 2000, currency: "usd", interval: "month" }],
    features: {
      api_completion: {
        pricePerCredit: 1,              // 1 cent per unit
        trackUsage: true,               // Enable usage billing
        credits: { allocation: 500 },
        displayName: "API Completions",
      },
      tab_completion: {
        credits: { allocation: 2000 },
        displayName: "Tab Completions",
      },
    },
  }
Then on the CLI, you run the `init` command which creates the DB tables + some API handlers. Run `sync` to sync the plans to your Stripe account and create a webhook endpoint. When a subscription is created, the library automatically grants the 500 API completion credits and 2000 tab completion credits to the user. Renewals and up/downgrades are handled sanely.

Consume code would look like this:

  await billing.credits.consume({
    userId: user.id,
    key: "api_completion",
    amount: 1,
  });
And if they want to allow manual top-ups by the user:

  await billing.credits.topUp({
    userId: user.id,
    key: "api_completion",
    amount: 500,     // buy 500 credits, charges $5.00
  });
Similarly, we have APIs for wallets and usage.

This would be a lot of work to implement by yourself on top of Stripe. You need to keep track of all of these entitlements in your own DB and deal with renewals, expiry, ad-hoc grants, etc. It's definitely doable, especially with AI coding, but you'll probably end up building something fragile and hard to maintain.

This is just a high-level overview of what the library is capable of. It also supports seat-level credits, monetary wallets (with micro-cent precision), auto top-ups, robust failure recovery, tax collection, invoices, and an out-of-the-box pricing table.

I vibe-coded a little toy app for testing: https://snw-test.vercel.app. There's no validation so feel free to sign up with a dummy email, then subscribe to a plan with a test card: 4242 4242 4242 4242, any future expiry, any 3-digit CVV.

Screenshot: https://imgur.com/a/demo-screenshot-Rh6Ucqx

Feel free to try it out! If you end up using this library, please report any bugs on the repo. If you're having trouble / want to chat, I'm happy to help - my contact is in my HN profile.

62

Clawe – open-source Trello for agent teams #

github.com
39 comments · 8:17 PM · View on HN
We recently started to use agents to update some documentation across our codebase on a weekly basis, and everything quickly turned into cron jobs, logs, and terminal output.

It worked, but it was hard to tell what agents were doing, why something failed, or whether a workflow was actually progressing.

We thought it would be more interesting to treat agents as long-lived workers with state and responsibilities and explicit handoffs. Something you can actually see and reason about, instead of just tailing logs.

So we built Clawe, a small coordination layer on top of OpenClaw that lets agent workflows run, pause, retry, and hand control back to a human at specific points.

This started as an experiment in how agent systems might feel to operate, but we're starting to see real potential for it, especially for content review and maintenance workflows in marketing. Curious what abstractions make sense, what feels unnecessary, and what breaks first.

Repo: https://github.com/getclawe/clawe

57

Itsyhome – Control HomeKit from your Mac menu bar (open source) #

itsyhome.app
48 comments · 10:31 PM · View on HN
Hey HN!

Nick here – developer of Itsyhome, a menu bar app for macOS that gives you control over your whole HomeKit fleet (and very soon Home Assistant). I run 130+ HomeKit devices at home and the Home app was too heavy for quick adjustments.

Full HomeKit support, favourites, hidden items, device groups, pinning of rooms/accessories/groups as separate menu bar items, iCloud sync – all in a native experience and tiny package.

Open source (https://github.com/nickustinov/itsyhome-macos) and free to use (there is an optional one-time purchase for a Pro version which includes cameras and automation features).

Itsyhome is a Mac Catalyst app because HomeKit requires the iOS SDK, so it runs a headless Catalyst process for HomeKit (and now Home Assistant) access while using a native AppKit plugin over a bridge protocol to provide the actual menu bar UI – since AppKit gives you the real macOS menu bar experience that Catalyst alone can't.

It comes with deeplink support, a webhook server, a CLI tool (Go, all open source), a Stream Deck plugin (open source, all accessories supported), and the recent update also includes an SSE event stream (HomeKit and HA) - you can `curl -N localhost:8423/events` and get a real-time JSON stream of every device state change in your home.
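Consuming that stream is standard SSE parsing: `data:` lines accumulate until a blank line ends an event. A minimal sketch (the JSON payload shape below is hypothetical, not Itsyhome's documented schema):

```python
import json
from typing import Iterable, Iterator

def sse_events(lines: Iterable[str]) -> Iterator[dict]:
    """Parse text/event-stream lines: 'data:' lines buffer until a blank line."""
    data: list[str] = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            data.append(line[5:].lstrip())
        elif line == "" and data:
            yield json.loads("\n".join(data))
            data = []
```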

Home Assistant version is still in beta – would anyone be willing to test it via TestFlight?

Appreciate any feedback and happy to answer any questions.

54

Multimodal perception system for real-time conversation #

raven.tavuslabs.org
14 comments · 6:58 PM · View on HN
I work on real-time voice/video AI at Tavus and for the past few years, I’ve mostly focused on how machines respond in a conversation.

One thing that’s always bothered me is that almost all conversational systems still reduce everything to transcripts, throwing away a ton of signal that should be used downstream. Some existing emotion-understanding models try to classify everything into small sets of arbitrary boxes, but they aren’t fast or rich enough to do this convincingly in real time.

So I built a multimodal perception system that encodes visual and audio conversational signals and translates them into natural language by aligning a small LLM on those signals. The agent can "see" and "hear" you, and you interface with it via an OpenAI-compatible tool schema in a live conversation.

It outputs short natural-language descriptions of what’s going on in the interaction: things like uncertainty building, sarcasm, disengagement, or a shift in attention within a single turn of a conversation.

Some quick specs:

- Runs in real-time per conversation

- Processing at ~15fps video + overlapping audio alongside the conversation

- Handles nuanced emotions, whispers vs shouts

- Trained on synthetic + internal convo data

Happy to answer questions or go deeper on architecture/tradeoffs

More details here: https://www.tavus.io/post/raven-1-bringing-emotional-intelli...

28

Pipelock – All-in-one security harness for AI coding agents #

github.com
6 comments · 12:04 PM · View on HN
I'm a plumber who taught himself to code. I run a plumbing company during the day and mess with my homelab at night. About a year ago I started running AI agents with full shell access and API keys to help manage my business. Scheduling, invoicing, monitoring my K3s cluster.

It worked great until I realized nothing was stopping those agents from sending my credentials anywhere. I had API keys for Slack, email, cloud services, all sitting in environment variables that any tool call could exfiltrate. Static scanners check code before you install it, but they can't catch a trusted tool that decides to phone home at runtime.

So I built Pipelock. Single Go binary, sits between your AI agent and the outside world.

What it does:

- Scans all outbound traffic for secrets (API keys, tokens, passwords) and blocks them before they leave

- Blocks network access to unauthorized destinations (SSRF protection)

- Wraps MCP servers as a stdio proxy, scanning responses for prompt injection

- Monitors your workspace files for unauthorized changes

The hard part was making it fast enough that you don't notice it's there. Every HTTP request runs through regex matching and entropy analysis. I spent a lot of time getting the scanning pipeline under a few milliseconds of latency. The MCP proxy was trickier: intercepting JSON-RPC stdio streams in real time without breaking the conversation flow when something gets flagged took some iteration.
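The entropy side of that scanning can be sketched with per-character Shannon entropy: random API keys score far higher bits-per-character than prose. The length and threshold below are illustrative, not Pipelock's actual tuning:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character; random keys score far higher than English text."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_secret(token: str, min_len: int = 20, threshold: float = 4.0) -> bool:
    # Illustrative threshold: base64-ish key material tends to exceed 4 bits/char.
    return len(token) >= min_len and shannon_entropy(token) > threshold
```

In practice this runs alongside regexes for known key formats, since entropy alone flags things like compressed data too.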

I run it daily on my own setup. My AI assistant manages Slack messages, queries our job management API, checks email, and monitors my Kubernetes cluster. Pipelock sits in front of all of it. Last week it caught a skill that was embedding my Slack token in a debug log heading to an external endpoint. Never would have noticed without the DLP scanner.

Snyk recently found that 283 out of 3,984 published agent skills (about 7%) were leaking credentials. That's the problem space. Static scanning catches malware. Runtime scanning catches everything else.

Try it:

  brew install luckyPipewrench/tap/pipelock
  pipelock generate config --preset balanced -o pipelock.yaml
  pipelock proxy start --config pipelock.yaml

Demo: https://asciinema.org/a/I1UzzECkeCBx6p42

Curious for feedback on the detection approach: exfiltration patterns I'm missing, whether the MCP proxy is useful to people running coding agents, and what breaks if you try it.

26

Open sourcing our ERP (Sold $500k contracts, 7k stars) #

github.com
39 comments · 4:33 PM · View on HN
We recently open-sourced Hive after using it internally to support real production workflows tied to contracts totaling over $500k.

Instead of manually wiring workflows or building brittle automations, Hive is designed to let developers define a goal in natural language and generate an initial agent that can execute real tasks.

Today, Hive supports goal-driven agent generation, multi-agent coordination, and production-oriented execution with observability and guardrails. We are actively building toward a system that can capture failure context, evolve agent logic, and continuously improve workflows over time - that self-improving loop is still under development.

Hive is intended for teams that want:

- Autonomous agents running real business workflows

- Multi-agent coordination

- A foundation that can evolve through execution data

We currently have nearly 100 contributors across engineering, tooling, docs, and integrations. A huge portion of the framework’s capabilities - from CI improvements to agent templates - came directly from community pull requests and issue discussions. We want to highlight and thank everyone who has contributed, especially our top 11 contributors: @vakrahul @Samir-atra @VasuBansal7576 @Aarav-shukla07 @Amdev-5 @Hundao @Antiarin @AadiSharma49 @Emart29 @srinuk9570 @levxn

22

Deadlog – almost drop-in mutex for debugging Go deadlocks #

github.com
1 comment · 5:44 PM · View on HN
I've done this same println debugging thing so many times, along with some sed/awk stuff to figure out which call was causing the issue. Now it's a small Go package.

With some `runtime.Callers` I can usually find the spot by just swapping the existing Mutex or RWMutex for this one.

Sometimes I replace the

  mu.Lock()
  defer mu.Unlock()
with the LockFunc/RLockFunc variants to get more detail:

  defer mu.LockFunc()()
I almost always initialize it with `deadlog.New(deadlog.WithTrace(1))` and that's plenty.

Not the most polished library, but it's not supposed to land in any commit, just a temporary debugging aid. I find it useful.
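The library itself is Go, but the underlying idea (record the lock holder's call site so a stuck waiter can report who it is waiting on) translates to any language. A rough Python sketch of the technique, not deadlog's API:

```python
import threading
import traceback

class DebugLock:
    """Mutex wrapper that remembers the current holder's call site, so a
    waiter that times out can name the suspect instead of hanging forever."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self.holder_site: str | None = None

    def acquire(self) -> None:
        if not self._lock.acquire(timeout=5.0):
            # Likely deadlock: report the last recorded holder.
            raise RuntimeError(f"lock held too long; holder: {self.holder_site}")
        frame = traceback.extract_stack()[-2]  # the frame that called acquire()
        self.holder_site = f"{frame.filename}:{frame.lineno}"

    def release(self) -> None:
        self.holder_site = None
        self._lock.release()
```

Swapping such a wrapper in temporarily, then swapping it back out before committing, is exactly the workflow the post describes.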

21

Open-Source SDK for AI Knowledge Work #

github.com
1 comment · 5:06 PM · View on HN
GitHub: https://github.com/ClioAI/kw-sdk

Most AI agent frameworks target code. Write code, run tests, fix errors, repeat. That works because code has a natural verification signal. It works or it doesn't.

This SDK treats knowledge work like an engineering problem:

Task → Brief → Rubric (hidden from executor) → Work → Verify → Fail? → Retry → Pass → Submit

The orchestrator coordinates subagents, web search, code execution, and file I/O, then checks its own work against criteria it can't game (the rubric is generated in a separate call, and the executor never sees it directly).

We originally built this as a harness for RL training on knowledge tasks. The rubric is the reward function. If you're training models on knowledge work, the brief→rubric→execute→verify loop gives you a structured reward signal for tasks that normally don't have one.
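The brief → work → verify → retry loop can be sketched generically. All function names here are hypothetical stand-ins; the key property is that the executor only ever sees grader feedback, never the rubric itself:

```python
from typing import Callable

def run_task(
    brief: str,
    execute: Callable[[str, str], str],        # (brief, feedback) -> draft
    grade: Callable[[str], tuple[bool, str]],  # draft -> (passed, feedback); holds the hidden rubric
    max_retries: int = 3,
) -> str:
    """Retry until the rubric-holding grader passes the draft."""
    feedback = ""
    for _ in range(max_retries):
        draft = execute(brief, feedback)
        passed, feedback = grade(draft)
        if passed:
            return draft
    raise RuntimeError(f"failed after {max_retries} attempts: {feedback}")
```

Because `grade` is the only place the rubric lives, its pass/fail signal can double as the RL reward the post mentions.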

What makes knowledge work different from code, apart from the feedback loop? I believe some functionality is missing from today's agents when it comes to knowledge work, and I tried to include it in this release. Example:

Explore mode: mapping the solution space, identifying set-level gaps, and presenting options.

Most agents optimize for a single answer and end up with a median one. For strategy, design, and creative problems, you want to see the options: what are the tradeoffs, and what can you do? Explore mode generates N distinct approaches, each with explicit assumptions and counterfactuals ("this works if X, breaks if Y"). The output ends with set-level gaps, i.e. the angles the entire set missed. The gaps are often more valuable than the takes. I think this is what many of us do daily, but no agent directly captures it today. See https://github.com/ClioAI/kw-sdk/blob/main/examples/explore_... and the output for a sense of how this is different.

Checkpointing: With many AI agents, and especially multi-agent systems, I can see where things went wrong but can't rerun inference from that stage (or you may want multiple explorations once an agent has done some tasks, like search, and is now looking at ideas). I used this a lot for rollouts, and I think it's a great feature to rerun from, or fork from, a specific checkpoint.

A note on the verification loop: the verify step is where the real leverage is. A model that can accurately assess its own work against a rubric is more valuable than one that generates slightly better first drafts. The rubric makes quality legible to the agent, to the human, and potentially to a training signal.

Some things I like about this:

- You can pass a remote execution environment (including your browser as a sandbox) and it will work. It can be Docker, E2B, your local environment, anything: the model executes commands in your context and iterates based on the feedback loop. Code execution is a protocol here.

- Tool calling: I realized you don't need complex functions. Models are good at writing terminal code and can iterate on feedback, so you can either pass functions in context for the model to execute, or pass docs and let the model write the code (same as Anthropic's programmatic tool calling). Details: https://github.com/ClioAI/kw-sdk/blob/main/TOOL_CALLING_GUID...

Lastly, some guides:

- SDK guide: https://github.com/ClioAI/kw-sdk/blob/main/SDK_GUIDE.md
- Extensible; see the bizarro example where I add a new mode: https://github.com/ClioAI/kw-sdk/blob/main/examples/custom_m...
- Working with files: https://github.com/ClioAI/kw-sdk/blob/main/examples/with_fil...
- Simple, but I love the CSV example: https://github.com/ClioAI/kw-sdk/blob/main/examples/csv_rese...
- Remote execution: https://github.com/ClioAI/kw-sdk/blob/main/examples/with_cus...

And a lot more. This was completely refactored by Opus; given the scale of the rework, releasing it would otherwise have taken a lot more time.

MIT licensed. Would love your feedback.

19

Kanban-md – File-based CLI Kanban built for local agents collaboration #

github.com
4 comments · 10:14 AM · View on HN
I built kanban-md because I wanted a simple local task tracker that works well for the agent loop: drop tasks in, run multiple agents in parallel, avoid collisions, and observe progress easily.

Tasks are just Markdown files (with YAML frontmatter) in a `kanban/` directory next to your code — no server, no DB, no API tokens. Simple, transparent, future-proof.

What makes it useful for multi-agent workflows:

- *Atomic `pick --claim`* so two agents don’t grab the same task.

- *Token-efficient `--compact` output* (one-line-per-task) for cheap polling in agent loops.

- *Skills included* -- just run `kanban-md skill install --global`. There is a skill for CLI use, and skills for a development loop using the CLI (they might need some additional work to be more general, though).

- *Live TUI (`kanban-md-tui`)* for control ~~and dopamine hits~~.
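One way to get atomic claim semantics on a plain filesystem is an `O_CREAT|O_EXCL` claim file: the kernel guarantees only one creator wins. kanban-md's actual mechanism may differ (e.g. frontmatter updates or renames), so treat this as a sketch of the idea:

```python
import os

def try_claim(task_path: str, agent_id: str) -> bool:
    """Atomically claim a task. O_CREAT|O_EXCL fails if the claim file
    already exists, so two racing agents cannot both win."""
    claim_path = task_path + ".claim"
    try:
        fd = os.open(claim_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another agent got here first
    with os.fdopen(fd, "w") as f:
        f.write(agent_id)  # record who holds the task
    return True
```

The losing agent simply moves on to the next unclaimed task in its poll loop.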

I'd love feedback from anyone running multi-agent coding workflows (especially around claim semantics, dependencies, and what makes you feel in control).

I had a blast using it myself for the last few days.

Tech stack: Go, Cobra, Bubbletea (TUI), fsnotify (file watching). ~85% test coverage across unit + e2e tests. After developing webapps, the simplicity of testing CLI and TUI was so freeing.

15

Inamate – Open-source 2D animation tool (alternative to Adobe Animate) #

13 comments · 12:15 AM · View on HN
Adobe recently announced the end-of-life for Adobe Animate, then walked it back after community backlash.

Regardless of what Adobe decides next, the message was clear: animators who depend on proprietary tools are one corporate decision away from losing their workflow.

2D animation deserves an open-source option that isn't a toy. We've been working with a professional animator to guide feature priorities and ensure we're building something that actually fits real production workflows - not just a tech demo.

GitHub repo: https://github.com/17twenty/inamate

We're at the stage where community feedback shapes the direction. If you're an animator, motion designer, or just someone who's been frustrated by the state of 2D animation tools — we'd love to hear:

- What features would make you switch from your current tool?

- What's the biggest pain point in your animation workflow?

- Is real-time collaboration actually useful for animation, or is it a gimmick?

Try it out, break it, and tell us what you think.

Built with Go, TS & React, WebAssembly, PostgreSQL, WebSocket, ffmpeg (for video exports).

7

Konform Browser v140.7.0-108 #

codeberg.org
0 comments · 7:22 AM · View on HN
Konform Browser is a Firefox ESR fork focused on security, privacy, and user freedom. This might sound familiar but I think we still have something worthwhile to bring to the browser ecosystem.

The project started as a fork of LibreWolf and now stands on its own two feet. It takes a harder stance on the three goals of security, privacy, and freedom. It's intended to be suitable as a general-purpose daily driver for everything from managing the intranet to surfing the murkier tubes.

This week we added a simple onboarding page that replaces Firefox's about:welcome, letting the user choose between four preset configs depending on their scenario and preferences.

If you don't enjoy compiling a browser from source, there are prepackaged binaries and probably repos for your Linux distro.

The doors are open for users, testers, contributors, and hackers itching to (constructively and helpfully, pretty please) tear this apart. Looking forward to hearing what HN thinks!

6

Claude Meter – macOS menu bar app to track your Claude Code usage limit #

github.com
2 comments · 7:29 AM · View on HN
Hey HN! I built a native macOS menu bar app that shows your Claude Code usage limits at a glance — no more getting rate-limited mid-flow.

It reads your OAuth token from macOS Keychain, polls the Anthropic usage API, and displays your 5-hour and 7-day utilization as a clean progress bar right in your menu bar. Zero token consumption.

Built with Swift + SwiftUI. MIT licensed.

https://github.com/puq-ai/claude-meter

5

Browser-based video compositor built on WebGPU #

masterselects.com
5 comments · 1:29 PM · View on HN
GitHub: https://github.com/Sportinger/MasterSelects

I built this browser-based video compositor with a GPU-first architecture. No Canvas 2D in the rendering path — video textures go in as texture_external (zero-copy), compositing runs through a ping-pong WGSL shader pipeline, and export captures frames directly from the GPU canvas via WebCodecs.

39 GPU effects, 37 blend modes, nested compositions, keyframe animation with bezier curves, vector masks, 10-band live EQ, video scopes, and AI-driven editing via GPT function calling. 13 production dependencies. The compositor, all shaders, timeline, audio mixer, mask engine, and export pipeline are built from scratch.

I'm a video artist, not a developer. Built this entirely with Claude. Things break, but when it works, it works.

Chrome/Safari with WebGPU is required; Firefox's WebGPU support has problems.

5

BlazeMQ – 52KB Kafka-compatible broker in C++20, zero dependencies #

github.com favicongithub.com
0 comments5:20 AMView on HN
I built a message broker that speaks the Kafka wire protocol, so any Kafka client (librdkafka, kafka-python, kcat, etc.) works without code changes.

The entire binary is 52KB. No JVM, no ZooKeeper, no third-party libraries — just C++20 with kqueue/epoll. Starts in <10ms, uses 0% CPU when idle.

I built this because running Kafka locally for development is painful — gigabytes of RAM, slow startup, ZooKeeper/KRaft configuration. I just wanted something that accepts produce requests and gets out of the way.

Technical details:

- Single-threaded event loop (kqueue on macOS, epoll on Linux)
- Memory-mapped log segments (1GB pre-allocated, sequential I/O)
- Lock-free SPSC/MPSC ring buffers with cache-line alignment
- Kafka protocol v0-v3 including flexible versions (ApiVersions, Metadata, Produce)
- Auto-topic creation on first produce or metadata request

The most interesting bug I hit: librdkafka sends ApiVersions v3, which uses Kafka's "flexible versions" encoding. But there's a special exception in the protocol — ApiVersions responses must NOT include header tagged_fields for backwards compatibility. One extra byte shifted every subsequent field, causing librdkafka to compute a ~34GB malloc that crashed immediately.
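A toy Python illustration of how a single stray header byte misaligns a length-prefixed parse. This is simplified — real Kafka flexible versions use varints, and the values here are illustrative, not the actual wire bytes:

```python
import struct

# A stray byte before a length-prefixed field shifts the whole parse.
# Flexible-version headers carry a tagged_fields byte (0x00 when empty),
# but ApiVersions responses must omit it, or clients misread every
# later field.
def read_int32(buf, offset):
    return struct.unpack_from(">i", buf, offset)[0]

good = struct.pack(">i", 3) + b"abc"  # length 3, then 3 data bytes
bad = b"\x00" + good                  # one spurious header byte

correct_len = read_int32(good, 0)     # 3, as intended
shifted_len = read_int32(bad, 0)      # the stray byte absorbed into the length
next_field = read_int32(bad, 4)       # data bytes misread as a huge size
```

Here `next_field` comes out as 0x03616263 (about 57 MB) — the same failure mode, at smaller scale, as the ~34GB malloc above.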

Current limitations: no consumer groups, no replication, single-threaded, no auth. It's v0.1.0 — consume support is next.

MIT licensed, runs on macOS (Apple Silicon + Intel) and Linux.
5

Valk – new programming language with a stateful GC #

github.com favicongithub.com
3 comments12:12 PMView on HN
Greetings HN,

I have been working on a new programming language for the last 3 years and thought it was time to share it with some people. The core of the language is done, and I have been using it to create all software related to the language: the compiler itself, the website, and vman (valk manager, like npm). There's still a lot of work to do, but I was hoping for some feedback. Maybe an interesting feature of the language is (so I think) that it's the first language with a stateful garbage collector. It basically knows which memory goes out of scope and which doesn't without having to scan for it. So no more mark/sweep. Anyhow, feedback is welcome.

4

Octrafic – AI agent for API testing from your terminal #

github.com favicongithub.com
0 comments1:47 PMView on HN
I built a CLI tool that acts as an AI agent for API testing. Think Claude Code, but for testing APIs – you describe what you want to test, and it autonomously generates test cases, runs them, and reports back. Written in Go, open source, no GUI. It fits into your existing terminal workflow. I was tired of manually writing and updating API tests, so I built something that handles that loop for me. GitHub: https://github.com/Octrafic/octrafic-cli

Feedback welcome.

4

Shuffled - Daily word puzzle game #

shuffled.app faviconshuffled.app
0 comments1:50 PMView on HN
Hi HN!

I built a word game last week called Shuffled. It's a daily puzzle where you drag letters around a grid to form the words before running out of moves. It's designed for quick play. Everyone gets the same set of puzzles.

I’d love any feedback on how the difficulty feels and any UX rough edges.

4

I just want *one page* to see all investments, so that's what I built #

mynetworthone.com faviconmynetworthone.com
0 comments4:47 PMView on HN
I wasn't able to find any other app that successfully combined TradFi and DeFi into a single page. So I built this for myself, and I hope you folks can find it useful as well!

My app supports anything in the Yahoo Finance API or in a Bitcoin, Ethereum & EVM, or Solana wallet. Can you think of anything else it should track?

4

Vibe-coded AI video clipper that runs in the browser #

github.com favicongithub.com
0 comments12:24 PMView on HN
We built an AI video clipper in a day using Claude Code. Drop in a podcast, interview, or presentation and get back 3-4 short clips with captions, speaker tracking, and smart cropping. Everything runs client-side via WebAssembly using CE.SDK (by us, IMG.LY); no server-side rendering. Transcription via ElevenLabs/Whisper, highlight detection via Gemini, face detection via face-api.js in the browser. Open source.
4

Running a public CORS proxy on the open internet for 4 years #

corsproxy.io faviconcorsproxy.io
0 comments8:01 AMView on HN
Hi HN — solo developer here.

I built this tool ~4 years ago purely to solve my own frustration with CORS and repeated proxy setup. The idea was intentionally boring: just prefix a URL and get control over the request and response.

Somehow it grew to serving billions of requests per day, including usage by companies and universities.

I’m sharing it here mainly to:

- get technical feedback
- learn what people would actually use a proxy like this for
- understand what I should simplify or remove

4

PhoneClaw #

github.com favicongithub.com
0 comments7:17 AMView on HN
Introducing PhoneClaw

Inspired by @openclaw, it can fully automate Android apps using plain language.

Ships with:

1) TikTok video uploading agents

2) Instagram account creation agent (with 2FA solving)

3) ClawScript, a JS scripting language that defines agentic actions like Magic Clicker.

iOS version coming soon!

Install it on a cheap $30 Moto G Play from Walmart.

4

Agx – A Kanban board that runs your AI coding agents #

github.com favicongithub.com
0 comments5:44 AMView on HN
agx is a kanban board where each card is a task that AI agents actually execute.

    agx new "Add rate limiting to the API"
That creates a card. Drag it to "In Progress" and an agent picks it up. It works through stages — planning, coding, QA, PR — and you watch it move across the board.

The technical problems this solves:

The naive approach to agent persistence is replaying conversation history. It works until it doesn't:

1. Prompt blowup. 50 iterations in, you're stuffing 100k tokens just to resume. Costs explode. Context windows overflow.

2. Tangled concerns. State, execution, and orchestration mixed together. Crash mid-task? Good luck figuring out where you were.

3. Black box execution. No way to inspect what the agent decided or why it's stuck.

agx uses clean separation instead:

- Control plane (PostgreSQL + pg-boss): task state, stage transitions, job queue

- Data plane (CLI + providers): actual execution, isolated per task

- Artifact storage (filesystem): prompts, outputs, decisions as readable files

Agents checkpoint after every iteration. Resuming loads state from the database, not by replaying chat. A 100-iteration task resumes at the same cost as a 5-iteration one.
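The checkpoint idea can be sketched in a few lines of Python with SQLite (hypothetical schema, not agx's actual one): each iteration overwrites one compact state row, so resuming reads a single row instead of replaying the conversation.

```python
import sqlite3, json

# Checkpoint-based resume sketch: the latest state is a single row,
# so resume cost does not grow with iteration count.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE checkpoints (task_id TEXT PRIMARY KEY, state TEXT)")

def checkpoint(task_id, state):
    db.execute(
        "INSERT INTO checkpoints VALUES (?, ?) "
        "ON CONFLICT(task_id) DO UPDATE SET state = excluded.state",
        (task_id, json.dumps(state)),
    )

def resume(task_id):
    row = db.execute(
        "SELECT state FROM checkpoints WHERE task_id = ?", (task_id,)
    ).fetchone()
    return json.loads(row[0]) if row else None

# 100 iterations cost the same to resume as 5: only the last row is read.
for i in range(100):
    checkpoint("rate-limit-task", {"stage": "coding", "iteration": i})

state = resume("rate-limit-task")
```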

What you get:

- Constant-cost resume, no context stuffing

- Crash recovery: agent wakes up exactly where it left off

- Full observability: query the DB, read the files, tail the logs

- Provider agnostic: Claude Code, Gemini, Ollama all work

Everything runs locally. PostgreSQL auto-starts via Docker. The dashboard is bundled with the CLI.

4

Decision Guardian – Surface past architectural decisions on GitHub PRs #

decision-guardian.decispher.com favicondecision-guardian.decispher.com
0 comments5:15 AMView on HN
I built Decision Guardian after watching teams repeatedly debate decisions that were already settled.

At my last job, we chose Postgres over MongoDB for ACID compliance. 18 months later, a new engineer opened a PR to switch to MongoDB. The team spent 3 months re-evaluating before someone remembered.

Decision Guardian prevents this:

- Document decisions in markdown (why you chose X over Y)
- GitHub Action comments on PRs when decision-protected code changes
- Free, open source, MIT licensed

Takes 2 minutes to set up. Would love feedback.

GitHub: https://github.com/DecispherHQ/decision-guardian
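The core check can be pictured as matching a PR's changed files against glob patterns declared in decision records. A hypothetical Python sketch (field names are illustrative, not the tool's actual format):

```python
from fnmatch import fnmatch

# Hypothetical decision records: each protects a set of file patterns.
decisions = [
    {"title": "Postgres over MongoDB", "protects": ["db/*", "migrations/*"]},
    {"title": "REST over GraphQL", "protects": ["api/schema/*"]},
]

def decisions_for(changed_files):
    """Return titles of decisions touched by this PR's changed files."""
    hits = []
    for d in decisions:
        if any(fnmatch(f, pat) for f in changed_files for pat in d["protects"]):
            hits.append(d["title"])
    return hits

triggered = decisions_for(["db/connection.go", "README.md"])
```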

3

Hookaido – "Caddy for Webhooks" #

github.com favicongithub.com
0 comments11:53 AMView on HN
Hi HN, we built Hookaido, a self-hosted webhook gateway that aims to be the “Caddy for webhooks”: one single Go binary, production-ready defaults, minimal ops.

Key bits:

durable SQLite/WAL-backed queue (survives restarts)

retries + dead-letter queue + requeue support

HMAC signature verification + replay protection + secret rotation

pull mode (useful for DMZ setups)

Prometheus metrics + OpenTelemetry tracing

hot reload for config changes
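The HMAC verification and replay-protection bullet can be sketched like this (an illustrative scheme, not Hookaido's actual wire format): sign body plus timestamp, then reject stale or already-seen deliveries.

```python
import hmac, hashlib, time

SECRET = b"webhook-secret"
seen_ids = set()  # delivery IDs we have already accepted

def sign(body: bytes, ts: int) -> str:
    return hmac.new(SECRET, b"%d.%s" % (ts, body), hashlib.sha256).hexdigest()

def verify(body, ts, sig, delivery_id, now, max_age=300):
    if now - ts > max_age:
        return False                      # too old: possible replay
    if delivery_id in seen_ids:
        return False                      # already processed once
    if not hmac.compare_digest(sign(body, ts), sig):
        return False                      # forged or corrupted signature
    seen_ids.add(delivery_id)
    return True

now = int(time.time())
sig = sign(b'{"event":"push"}', now)
first = verify(b'{"event":"push"}', now, sig, "d1", now)
replayed = verify(b'{"event":"push"}', now, sig, "d1", now)
```

`hmac.compare_digest` matters here: a naive `==` leaks timing information an attacker can use to forge signatures byte by byte.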

Repo + docs: https://github.com/nuetzliches/hookaido

Would love feedback on the model (push vs pull), operational ergonomics, and any missing “must-have” features for running this in production.

3

Track your input data and create colourful renders with it #

github.com favicongithub.com
0 comments5:33 PMView on HN
The first version of MouseTracks I made all the way back in 2017. It got a lot of interest, but I never had the skill to actually complete it. I finally made a start on 2.0 a bit over a year ago, and I've been chipping away at it for fun ever since.

Some key features:

- Track mouse movements, clicks, keyboard, and controller inputs (you can optionally disable any of these).
- Switch profiles depending on what game / application is loaded.
- Live render preview in the GUI.
- It's designed to flawlessly handle multiple monitors and different resolutions.
- Older mouse movements gradually fade to keep focus on the most recent activity.
- Data can be recorded for years and rendered at any time.

It's designed as a "run and forget" type application, where you tick an option to load in the background on startup, and it'll silently keep recording until you're ready to render (it doesn't do that by default though - it just acts as a normal portable application if you don't change any settings).

It's all open source and compatible with Windows/Linux/macOS. The executables are built automatically by GitHub Actions, and there are also instructions on how to build or run locally with Python.

Feel free to ask any questions. I've got a bunch of example renders on the GitHub page which should hopefully demonstrate it properly.

3

Fix your CSV files' problems #

repairmycsv.com faviconrepairmycsv.com
0 comments5:40 PMView on HN
As a data analysis student, I work with CSV and Excel sheets a lot. In the cleaning phase you face several problems that can break the process, so I've built a free tools website + Chrome extension to solve them. Give it a try.
3

Hyperspectra – Desktop tool for exploring AVIRIS-3 hyperspectral images #

github.com favicongithub.com
0 comments5:42 PMView on HN
I've been working in GIS/mapping for a few years and found myself increasingly adjacent to machine learning and computer vision, which got me thinking about what I've been calling "broad spectrum" computer vision: object and anomaly identification beyond the visible range of light.

This is my first pass at building a tool to understand the physics involved, from electromagnetic absorption and reflectance of sunlight through to corrected sensor observation. I've been focused on building out and validating existing spectral indices to understand the fundamentals before exploring my own based on molecular properties of materials from first principles.

So far the tool includes:

- An atmospheric correction processor with three methods: empirical band-ratio, Py6S radiative transfer, and ISOFIT optimal estimation

- An interactive viewer for both radiance and reflectance data with RGB composites, 23 spectral indices, and ROI-based spectral signature extraction with reference material matching

- A learning suite that explains each stage of the observation chain from solar irradiance to sensor capture

So far I've tested on AVIRIS-3 data from Santa Barbara Island, San Joaquin Valley and Cuprite, NV. I'd love a sanity check on the direction and general utility. If anyone works with hyperspectral data and wants to take a crack at stress testing, install requires Python 3.9+ and optionally conda for Py6S.
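As an example of the kind of spectral index such a viewer computes, NDVI is one of the simplest: it contrasts red and near-infrared reflectance, which vegetation separates strongly. The band values below are toy numbers, not AVIRIS-3 samples:

```python
# NDVI (Normalized Difference Vegetation Index): healthy vegetation
# reflects strongly in NIR and absorbs red, so NDVI approaches 1;
# bare soil has little contrast, so NDVI stays near 0.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

vegetation = ndvi(nir=0.50, red=0.10)  # strong NIR bounce -> high NDVI
bare_soil = ndvi(nir=0.30, red=0.25)   # weak contrast -> low NDVI
```

Most of the 23 indices follow this same normalized-difference or band-ratio shape, just over different wavelength pairs.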

3

LLMs are getting pretty good at Active Directory exploitation #

blog.vulnetic.ai faviconblog.vulnetic.ai
0 comments10:59 PMView on HN
One thing I will say is that at Vulnetic we will basically jumble different misconfigurations into a network and the agent always seems to find a way to exploit them. We have tried making them esoteric, and we are even now using EDR and tools like Wazuh to evaluate how our agent evades detection. These models are improving at hacking fast.
3

Open-source civic toolkit – 48 policies, 12 interactive tools, forkable #

0 comments5:08 AMView on HN
We built Denver For All - an open-source civic platform with 48 data-driven policy proposals, 12 interactive tools (eviction tracker, campaign finance dashboard, rent calculator, AI tenant rights chatbot), and full bilingual English/Spanish support.

Stack: Astro + React + TypeScript, Cloudflare Pages/Workers/D1, vAPI for voice AI. MIT licensed, policy content is public domain.

The whole thing is designed to be forked. QUICKSTART.md walks you through adapting it for your own city - swap the data sources, update the policies, deploy.

Live site: https://denverforall.org
Repo: https://github.com/Denver-For-All/denver-for-all

3

Darna – Atomic commit validator for Go #

github.com favicongithub.com
0 comments1:38 PMView on HN
Lately, while experimenting with agentic coding, I kept running into the same pain point: committing generated code cleanly. Quick prototypes often turn into large diffs that are hard to review and even harder to split into sensible commits.

After watching a FOSDEM talk on efficient Git workflows [0], I started exploring the idea of “atomic commits” for Go. The result is Darna.

Darna is a small CLI tool (usable as a Git hook) that validates whether your staged changes form a valid, working fileset. It works down to the line level, so partial staging is fully supported, and it can also tell you which additional files need to be staged (as whole files) to make a commit truly atomic.
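The "which additional files need to be staged" part can be thought of as a dependency closure over the staged set. A hypothetical Python sketch of that half (not Darna's actual algorithm, which works at the line level):

```python
# Given the staged files and a file -> needed-files relation (e.g. from
# unresolved symbols), compute what else must be staged whole for the
# commit to be self-contained.
def missing_for_atomic(staged, deps):
    needed, queue = set(), list(staged)
    while queue:
        f = queue.pop()
        for dep in deps.get(f, ()):
            if dep not in staged and dep not in needed:
                needed.add(dep)       # transitively required, not staged
                queue.append(dep)
    return sorted(needed)

extra = missing_for_atomic(
    staged={"handler.go"},
    deps={"handler.go": ["ratelimit.go"], "ratelimit.go": ["config.go"]},
)
```

Note the closure is transitive: staging `handler.go` pulls in `ratelimit.go`, which in turn pulls in `config.go`.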

If you’re dealing with large diffs, partial staging, or AI-assisted code generation, this might be useful for you too. For extra confidence: all commits in the repo are validated by Darna itself, and the commit messages were reviewed (not auto-generated) with help from Claude Code.

Feedback very welcome.

0: https://fosdem.org/2026/schedule/event/3VNNBK-efficient-git-...

2

MCP Orchestrator – Spawn parallel AI sub-agents from one prompt #

github.com favicongithub.com
0 comments5:47 AMView on HN
I built an open-source MCP server (TypeScript/Node.js) that lets you spawn up to 10 parallel sub-agents using Copilot CLI or Claude Code CLI.

Key features:

- Context passing to each agent (full file, summary, or grep mode)
- Smart timeout selection based on MCP servers requested
- Cross-platform (macOS, Linux, Windows)
- Headless & programmatic — designed for AI-to-AI orchestration

Example: give one prompt like "research job openings at Stripe, Google, and Meta" — the orchestrator fans it out to 3 parallel agents, each with their own MCP servers (e.g., Playwright for browser), and aggregates results.
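The fan-out/aggregate pattern looks roughly like this (illustrative Python; the real tool spawns Copilot/Claude CLI subprocesses rather than calling an in-process function):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(prompt):
    # Stand-in for invoking one CLI sub-agent with its own MCP servers.
    return f"result for: {prompt}"

def orchestrate(prompt, targets, max_agents=10):
    # One subtask per target, capped at the agent limit.
    subtasks = [f"{prompt} at {t}" for t in targets][:max_agents]
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        return list(pool.map(run_agent, subtasks))  # order preserved

results = orchestrate("research job openings", ["Stripe", "Google", "Meta"])
```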

Install: npm i @ask149/mcp-orchestrator

This is a solo side project. Would love feedback on: - What CLI backends to support next (Aider, Open Interpreter, local LLM CLIs?) - Ideas for improving the context-passing system - What MCP server integrations would be most useful

PRs and issues welcome — check CONTRIBUTING.md in the repo.

2

Open-source Liquid sections for Shopify themes #

github.com favicongithub.com
0 comments1:31 PMView on HN
Hi HN, I've put together a collection of open-source Shopify sections (Liquid, CSS, JS) that can be added to themes.

I built this because, while Shopify is great, standard themes often lack specific sections, and custom development can be expensive for small merchants. I wanted to create a repository of high-quality, copy-pasteable sections that developers can use to speed up their workflow or merchants can use to enhance their stores without apps.

I'd love to hear your feedback on the code structure or requests for other common sections that are usually missing from default themes.
2

ClearDemand – Cross-case search and drafting for injury firms #

cleardemand.io faviconcleardemand.io
2 comments4:58 PMView on HN
Hello HN,

We built ClearDemand to solve the "hallucination" problem in legal drafting. General LLMs are great at writing, but terrible at accuracy—which is a dealbreaker when citing medical evidence in a demand letter.

The Problem: Personal Injury firms spend hundreds of hours manually summarizing unstructured medical records (PDFs) to draft demand letters. If they miss a specific injury or treatment date, the settlement value drops.

The Solution: ClearDemand is a drafting platform that ingests raw case files (police reports, medical records, invoices), runs OCR, and uses RAG to generate drafts that are source-verified.

Grounded Generation: Every claim in the generated letter includes a citation linking back to the specific page in the source document.

Cross-Case Search: We index the firm’s entire history, allowing lawyers to search across past matters to find similar fact patterns or successful arguments.

Style Matching: It learns the firm’s specific tone/structure so the output doesn't sound like generic AI.

We’re offering a 14-day trial to see if the OCR pipeline holds up against your messiest scanned PDFs.

Feedback on our citation UI would be much appreciated!

Thanks.

1

Agentplatform.app – Plugin-Based LLM Workflows #

agentplatform.app faviconagentplatform.app
0 comments11:01 PMView on HN
Hi all - I created agentplatform.app to be a place to build agentic workflows, and create plugins that tap into those. I wanted to build upon the idea of a skill for something that can be deeply integrated into the entire lifecycle of the application, and modular in a way where plugins can be easily reused across different applications.

I also wanted a way for people to create their own plugins and be paid for their work: as you create plugins, you can add them to the marketplace for others to use for free or to 'subscribe' to for a max monthly rate that applies as long as an application is running. Plugins are Go code; when you apply them to an application and deploy it, the application runs native Go code in its own machine, isolated from other applications.

If anyone is interested in helping me test this, please send me a DM after signing up on the site and I'll approve your account and set it up to bypass billing during testing.

1

Samma Suit – Open-source 8-layer security framework for AI agents #

sammasuit.com faviconsammasuit.com
1 comments1:58 PMView on HN
I've been running AI agents in production for a music platform and kept hitting the same security gaps — no permission inheritance, no cost controls on nested calls, skills that could execute arbitrary code, no kill switch when things went sideways. Samma Suit is an open-source security layer that wraps any agent framework with 8 layers:

- SUTRA — API gateway with rate limiting
- DHARMA — Permission inheritance (parent → child agents)
- SANGHA — Skill/tool vetting before execution
- KARMA — Cost controls that propagate through subagents
- SILA — Immutable audit trail
- METTA — Cryptographic identity signing
- BODHI — Execution isolation
- NIRVANA — Kill switch for runaway agents

Framework-agnostic — works with LangChain, CrewAI, AutoGPT, or raw API calls. Define policies in YAML; Samma Suit enforces them at runtime.

GitHub: https://github.com/onezeroeight/samma-suit
Docs: https://sammasuit.com

The arxiv paper on the front page today (agents violating constraints 30-50% of the time) is exactly why we built this: constraints need to be enforced at the infrastructure level, not left to the model.
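The cost-propagation idea (the KARMA layer) can be sketched as a budget tree: a child gets a slice of its parent's allowance, and every nested spend is also charged upstream. This is an illustrative Python sketch, not Samma Suit's actual API:

```python
class Budget:
    """Spending limit that propagates charges up through parent agents."""

    def __init__(self, limit, parent=None):
        self.limit, self.spent, self.parent = limit, 0.0, parent

    def charge(self, cost):
        if self.spent + cost > self.limit:
            raise RuntimeError("budget exceeded")
        self.spent += cost
        if self.parent:
            self.parent.charge(cost)  # nested spend counts upstream too

    def child(self, fraction=0.5):
        # Subagent can never exceed its slice of the parent's limit.
        return Budget(self.limit * fraction, parent=self)

root = Budget(10.0)
sub = root.child(0.5)  # subagent capped at 5.0
sub.charge(3.0)        # charged to both sub and root
```

Because charges bubble up, a swarm of subagents cannot collectively overspend the root allowance even if each stays under its own cap.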

1

Hire a 24/7 AI Employee for $5k/Year #

hangryfeed.com faviconhangryfeed.com
0 comments10:14 AMView on HN
Hi HN, Kevan here. We’re building autonomous agents to handle the "boring" parts of business operations—things like lead enrichment, CRM syncing, and basic DevOps monitoring.

Instead of a generic wrapper, we build custom agent architectures that integrate directly into Slack, GitHub, and Hubspot to handle repetitive human workflows.

Key highlights:

• Cost: $5k/year per agent (aiming for a 90%+ reduction vs junior roles).
• Availability: 24/7/365 with zero "off-time."
• Stack: Specialized orchestrators and memory layers to maintain state across long-running tasks.
• Setup: 48-hour deployment from workflow mapping to "Go Live."

We're currently handling 100+ tasks/day for our early users, ranging from automated PR reviews to personalized outreach. I'd love to get the community's feedback on our ROI metrics and the management framework we're using for "Synthetic Staff."

1

Snagg – Clip memes from anywhere, post them instantly #

snagg.meme faviconsnagg.meme
0 comments2:00 PMView on HN
I built a tool to end the "save meme → forget where it is → scroll for 5 minutes" cycle.

Snagg lets you grab memes from Reddit, Twitter, Instagram — anywhere — with a right-click. They go into your searchable collection. When you need one, the Chrome extension drops it directly into Discord, WhatsApp, Slack, whatever. No downloading, no tab switching, no digging through your camera roll.

There's also an iOS keyboard so you can insert memes while you're typing.

What's your current meme workflow? Curious how others deal with the chaos.

1

I wrote a prompt to stop Gemini from hallucinating #

0 comments2:00 PMView on HN
While recovering from gallbladder surgery, I needed Gemini 3 to be reliable—but it kept hallucinating.

I found that as models get smarter, their laziness becomes more "sophisticated." I call this the "Probabilistic Sloth" of 2026. Even with the latest retrieval tools, the model often chooses the path of least resistance, producing plausible-sounding but incorrect output.

Out of frustration, I wrote a system prompt to install a kind of "Will" into the AI. It forces the LLM to split into two roles:

The Drafting Agent: focuses on generating the initial response.

The Ruthless Auditor: focuses strictly on logical error detection and evidence locking.

This creates a friction-based loop—an explicit self-correction step before any output reaches the user. In my tests, it stopped the model from hallucinating about Python libraries that don’t exist.

This is the KOKKI (Self-Discipline) Protocol. It’s not just a prompt; it’s a structured way to force an LLM to catch its own failure modes.

I’ve documented the raw logic on Gist and would love for this community to test it, use it, and tear it apart. I don’t want money; I need brutal feedback to evolve this further.

Feedback welcome. Even “this was annoying” helps.

The Prompt (Gist): https://gist.github.com/ginsabo/641e64a3dbc2124d1edb0c662be9...

1

Radiant – Radial Menu Launcher for macOS Inspired by Blender's Pie Menu #

0 comments1:54 PMView on HN
Hi HN, I built Radiant.

I use Blender's Pie Menu a lot and like how spatial positioning turns into muscle memory — hold a key, move toward a direction, release. After a while you stop thinking about it. I wanted that same interaction model in Figma, VS Code, and the rest of macOS, so I built a system-wide version.

Radiant is a radial and list menu launcher for macOS. You organize actions into menus, trigger them with a hotkey, and pick by direction or position.

Some design decisions I'd be happy to discuss:

- 8 fixed slots per radial menu — a deliberate constraint for spatial memory. More slots = slower selection (Fitts's Law), fewer = not enough utility. List menus handle the "I need 20+ items" case.
- Three close modes: release-to-confirm (Blender-style), click-to-confirm, and toggle (menu stays open for multiple actions)
- App-specific profiles that auto-switch based on the frontmost application
- Built-in macro system — chain keystrokes, delays, text input, and system actions without external tools
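The direction-to-slot math behind an 8-slot radial menu is compact. A Python sketch of one common mapping (illustrative, not Radiant's code), with slot 0 at "east" and sectors centered on their directions so each slot gets a full 45° wedge:

```python
import math

def slot_for(dx, dy, slots=8):
    """Map a pointer offset from the menu center to a slot index."""
    angle = math.atan2(dy, dx) % (2 * math.pi)          # 0..2pi
    sector = 2 * math.pi / slots                        # 45 deg for 8 slots
    return int((angle + sector / 2) // sector) % slots  # center wedges on axes

# Toy coordinate system with y pointing up:
east = slot_for(1, 0)   # slot 0
north = slot_for(0, 1)  # slot 2
```

Because only the angle matters, a short flick and a long drag in the same direction select the same slot, which is what makes the gesture turn into muscle memory.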

Technical details:

- Native Swift/SwiftUI, no Electron
- CGEventTap for global keyboard/mouse monitoring
- Accessibility API for keystroke injection
- All data stored locally in UserDefaults, no telemetry
- JSON config with import/export for sharing presets

URL: https://radiantmenu.com

Would love to hear your thoughts.