Show HN roundup for February 22, 2026 (69 posts)
Local-First Linux MicroVMs for macOS #
Warn Firehose – Every US layoff notice in one searchable database #
CS – indexless code search that understands code, comments and strings #
Turns out it could, and so I iterated on the idea, building it into a full CLI tool. Recently I wanted to improve it by adding relevance ranking like that of Sourcegraph or Zoekt, but again without adding an index.
cs uses scc https://github.com/boyter/scc to understand the structure of the file on the fly. As such it can filter searches to code, comments or strings. It also applies a weighted BM25 algorithm where matches in actual code rank higher than matches in comments (by default).
I also added a complexity gravity weight using the cyclomatic complexity output from scc as it scans. So if you're searching for a function, the implementation should rank higher than the interface.
cs "authenticate" --gravity=brain # Find the complex implementation, not the interface
cs "FIXME OR TODO OR HACK" --only-comments # Search only in comments, not code or strings
cs "error" --only-strings # Find where error messages are defined
cs "handleRequest" --only-usages # Find every call site, skip the definition
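The region-weighted BM25 idea is easy to sketch. Below is a toy Python version; the region weights and BM25 parameters are illustrative guesses, not cs's actual defaults:

```python
import math

# Hypothetical region weights: matches in code count more than in comments.
REGION_WEIGHT = {"code": 1.0, "comment": 0.5, "string": 0.7}

def bm25(tf, doc_len, avg_len, n_docs, df, k1=1.2, b=0.75):
    # Standard BM25 term score with length normalization.
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))

def weighted_score(hits, doc_len, avg_len, n_docs, df):
    # hits: term frequency per region, e.g. {"code": 2, "comment": 1}.
    tf = sum(REGION_WEIGHT[region] * count for region, count in hits.items())
    return bm25(tf, doc_len, avg_len, n_docs, df)
```

Because the effective term frequency is scaled before BM25 saturation, two hits in code outrank two hits in a comment for the same document stats.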
v3.0.0 adds a new ranker, along with an interactive TUI, HTTP mode, and MCP support for use with LLMs (Claude Code/Cursor). Since it's doing analysis and complexity math on the fly, it's slower than any grep. However, on an M1 Mac, it can scan and rank the entire 40M+ line Linux kernel in ~6 seconds.
Live demo (running over its own source code in HTTP mode): https://codespelunker.boyter.org/ GitHub: https://github.com/boyter/cs
How to Verify USDC Payments on Base Without a Payment Processor #
Option A: Integrate a payment processor like Coinbase Commerce. Set up an account, embed their checkout widget, handle their SDK. Pay $100 in fees (1%).
Option B: Build your own blockchain listener. Learn ethers.js, subscribe to USDC transfer events, handle reorgs, confirmations, edge cases. Two weeks of work, minimum.
There's no middle ground. No service that just tells you: "Yes, this specific payment arrived."
Until now.
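For context, the heart of Option B is smaller than it sounds: once you have the transaction logs (e.g. from eth_getLogs), matching a payment is mostly decoding one ERC-20 Transfer event. A hedged Python sketch, which deliberately skips contract-address filtering, reorg handling, and confirmation counting:

```python
# keccak256("Transfer(address,address,uint256)") -- the standard ERC-20 event topic.
TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"
USDC_DECIMALS = 6  # USDC amounts are in millionths

def matches_payment(log, expected_to, expected_usdc):
    """Check a single eth_getLogs entry against an expected USDC payment."""
    if log["topics"][0].lower() != TRANSFER_TOPIC:
        return False
    # Addresses are right-aligned in the 32-byte indexed topic.
    to_addr = "0x" + log["topics"][2][-40:]
    amount = int(log["data"], 16)  # uint256 value in base units
    return (to_addr.lower() == expected_to.lower()
            and amount == round(expected_usdc * 10 ** USDC_DECIMALS))
```

Real verification also needs to confirm the log came from the USDC contract address on Base and wait for enough block confirmations, which is exactly the "two weeks of edge cases" part.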
I built a free AI tool that picks your SaaS tech stack based on budget #
I kept seeing the same question asked over and over in startup communities:
"What tech stack should I use for my SaaS?" The answers were always
scattered, opinionated, and never accounted for budget or team size.
So I built appstackbuilder.com — you tell it your monthly budget, app type,
team size, and skill level, and it recommends a full stack (auth, database,
hosting, payments, analytics, etc.) with actual pricing for each tool.
A few things that make it different from generic advice:
- It accounts for team size when calculating costs (e.g. Clerk charges per
user, Linear charges per seat — most stack guides ignore this)
- You can toggle "no-code only" if you're a non-technical founder
- It shows you 2–3 alternatives per category, not just one option
- You can export the full stack as a PDF or share a link
- It's completely free, no account required
Under the hood: Next.js, Supabase, and Gemini for the recommendations.
The tool database has 80+ tools across categories with real pricing data
I manually verified.
What I'm still figuring out:
- How to keep pricing data fresh as tools change plans frequently
- Whether to add a "stack score" based on community usage data
- If the no-code recommendations are actually good (I'm a developer,
so I'd love feedback from non-technical founders specifically)
Would love brutal feedback, especially if the recommendations feel off
for your use case.
Try any terrible idea, ChatGPT still leads with praise #
A portfolio that re-architects its React DOM based on LLM intent #
Added a raw 45-second demo showing the DOM re-architecture in real-time: https://streamable.com/vw133i
I got tired of the "context problem" with static portfolios: recruiters want a resume, founders want a pitch deck, and engineers want to see architecture.
Instead of building three sites, I hooked up my React frontend to Llama-3 (via Groq for <100ms latency). It analyzes natural language intent from the search bar and physically re-architects the Component Tree to prioritize the most relevant modules using Framer Motion.
The hardest part was stabilizing the Cumulative Layout Shift (CLS) during the DOM mutation, but decoupling the layout state from the content state solved it.
The Challenge: There is a global CSS override hidden in the search bar. If you guess the 1999 movie reference, it triggers a 1-bit terminal mode.
Happy to answer any questions on the Groq implementation or the layout engine!
ByePhone – An AI assistant to automate tedious phone calls #
I thought: AI could do this with a web form turned into a prompt.
The stack started out simple (11labs for voice + Claude + Twilio), but it actually got rather complex (even though I tried to vibe code most of it).
First off, finding phone numbers quickly is hard. This is done by scraping the web with some basic DuckDuckGo search, then structuring the results with OpenAI calls.
Second, collecting the right information. I'm still struggling a bit with this, but the architecture is: A) the user puts in a call objective and business name; B) if keywords are detected, spin up one of the default form categories; C) if not, get structured JSON from gpt-4o-mini and turn it into a React form.
The cost of making a single call spun out of control, but luckily Sonnet can handle a lot of the calls, and I'm OK paying for Twilio.
Ended up taking months to build my week-long project because of course.
It's still WIP, so feel free to email me at [email protected] with any ideas or issues you ran into.
Approve Claude Code permission requests from your phone via ntfy #
Setup:
npm install -g claude-remote-approver
claude-remote-approver setup
Then scan the QR code with the ntfy app on your phone and start a new Claude Code session.
How it works: The hook POSTs the permission request to an ntfy topic, then subscribes to a response topic via SSE. When you tap a button on your phone, ntfy delivers the response back. The hook writes {"behavior":"allow"} or {"behavior":"deny"} to stdout and exits.
The topic name is generated with crypto.randomBytes(16) (128 bits), config file is 0600, and unanswered requests auto-deny after 120 seconds.
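For the curious, the two mechanisms just described (random topic generation and the auto-deny default) look roughly like this; a Python sketch where the function names are mine, not the package's:

```python
import json
import secrets

def new_topic():
    # 128 bits of entropy, the Python analogue of crypto.randomBytes(16).toString("hex").
    return secrets.token_hex(16)

def hook_decision(response, timed_out):
    # Hook contract: emit {"behavior": "allow"} or {"behavior": "deny"} on stdout.
    # Unanswered or malformed responses fall through to deny.
    if timed_out or response not in ("allow", "deny"):
        return json.dumps({"behavior": "deny"})
    return json.dumps({"behavior": response})
```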
If you don't want requests going through the public ntfy.sh server, you can self-host ntfy and point the config at your own instance.
Github: https://github.com/yuuichieguchi/claude-remote-approver
Seafruit – Share any webpage to your LLM instantly #
This weekend I built seafruit.pages.dev to privately share any webpage with my LLM. More sites are (rightfully) blocking AI crawlers but as a reader with the page already open, it's frustrating that my AI assistant can't "see" what I'm already reading.
One click → clean Markdown → copied to clipboard. No extension, no tracking.
Existing solutions like AI browsers or extensions felt too intrusive. I wanted something surgical, fast, and private.
How it works: It's a bookmarklet. Click it on any page → it extracts clean text as Markdown → copies an AI-optimized link to your clipboard. No extension needed.
Key details:
- Zero friction: drag the bookmark to your bar. Works on mobile too.
- Privacy-first: links are ephemeral (24 hrs on Free). PRO links self-destruct the moment an AI bot finishes reading them.
- LLM-optimized: clean Markdown, not raw HTML, so no wasted context window.
- Fast everywhere: built on Cloudflare Workers.
Would love feedback on the workflow or ideas for other anti-friction features.
P.S. Thanks to mods Daniel and Tom for helping me recover my account!
Slack as an AI Coding Remote Control #
Now I can code from anywhere... even when I should be relaxing
Inspired by various "claw" projects.
Open source on GitHub: DiscreteTom/juan
---
What this does:
- Control AI coding assistants through Slack
- Write and modify code remotely
- Perfect for when you're away from your desk
Claude-ts – Translation proxy to fix non-English token waste in Claude #
1. You waste tokens — non-English text takes 2-3x more tokens than English for the same meaning. Every prompt, every response, every turn in context is inflated.
2. Claude reasons worse — it spends context budget on language switching instead of actually thinking about your code.
I built claude-ts to fix this. It's a translation proxy that sits in front of Claude Code:
You (any language) → Haiku/Ollama (→ EN) → Claude Code (EN) → Haiku/Ollama (→ your lang) → You
Claude Code always works in English internally — better reasoning, fewer tokens. The translation costs almost nothing (Haiku) or literally nothing (local Ollama).
pip install claude-ts
- 8 languages supported (ko, ja, zh, th, hi, ar, bn, ru)
- Free local translation via Ollama
- Real-time agent tree visualization
- All Claude Code features preserved
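The proxy pipeline above reduces to a few lines once the translators and the model are treated as pluggable callables. A minimal sketch (names are mine; the real tool wires these to Haiku/Ollama and Claude Code):

```python
def proxy_turn(user_msg, to_en, model, from_en):
    """One conversational turn through the translation proxy."""
    english_prompt = to_en(user_msg)       # user language -> English
    english_reply = model(english_prompt)  # Claude Code always works in English
    return from_en(english_reply)          # English -> user language
```

The point of the indirection is that the expensive model only ever sees English context, while the cheap translators touch each message exactly twice.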
LinuxLofi – Lofi in the Terminal #
Cryphos – no-code crypto signal bot with Telegram alerts #
Semantic search over Hacker News, built on pgvector #
- Indexed HN posts and comments into PostgreSQL with pgvector (HNSW index)
- Embeddings generated with OpenAI's embedding model
- Queries run as nearest-neighbor vector searches; typical response under 50ms
- The whole thing runs on a single Postgres instance, no separate vector DB
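The query side of a setup like this is a single SQL statement. A sketch with made-up table and column names, using pgvector's `<=>` cosine-distance operator (an HNSW index on the column makes this an approximate nearest-neighbor scan):

```python
def knn_sql(table="hn_items", col="embedding", k=10):
    # Ordering directly by the distance expression lets Postgres use the
    # HNSW index instead of sorting the whole table.
    return (
        f"SELECT id, title FROM {table} "
        f"ORDER BY {col} <=> %(query_embedding)s::vector LIMIT {k}"
    )
```

You'd pass the query embedding as a parameter (e.g. via psycopg), never interpolate it into the string.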
I built this partly because I wanted a better way to search HN, and partly to dogfood my own project — Rivestack (https://rivestack.io), a managed PostgreSQL service with pgvector baked in. I wanted to see how pgvector holds up with a real dataset at a reasonable scale. A few things I learned along the way:
HNSW vs IVFFlat matters a lot at this scale. HNSW gave me much better recall with acceptable index build times. Storing embeddings alongside relational data in the same DB simplifies things enormously — no syncing between a vector store and your main DB. pgvector has gotten surprisingly fast in recent versions. For most use cases, you really don't need a dedicated vector database.
The search is free to use. Rivestack has a free tier too if anyone wants to try something similar. Happy to answer questions about the architecture, pgvector tuning, or anything else.
ShuttleAI – One API for Claude Opus 4.6 and GPT-5.2 #
Rust blockchain with sharded propagation and post-quantum signatures #
Tlsctl – A CLI for inspecting and troubleshooting TLS #
I built tlsctl, a small CLI for inspecting, testing, and debugging TLS connections:
https://github.com/catay/tlsctl
It aims to make TLS diagnostics more readable and structured than stitching together `openssl` and other ad-hoc commands. You can inspect certificates and chains, check protocol support, and output in different formats.
Part of the motivation was practical (I got tired of parsing `openssl s_client` output), and part was educational. I wanted to build more hands-on experience with agentic engineering workflows. Shout out to https://ampcode.com/ for their coding agent.
I’d love feedback on usefulness, missing features, and whether this fits into real-world TLS troubleshooting workflows.
Thanks!
Steven
Mujoco React #
This is made possible by DeepMind's mujoco-wasm (mujoco-js), which compiles MuJoCo to WebAssembly. We wrap it with React Three Fiber so you can load any MuJoCo model, step physics, and write controllers as React components, all running client-side in the browser
Vexp – graph-RAG context engine, 65-70% fewer tokens for AI agents #
The problem
When you ask Claude Code or Cursor to fix a bug, they typically grep around, cat a bunch of files, and dump thousands of lines into the context. Most of it is irrelevant. You burn tokens, hit context limits, and the agent loses focus on what matters.
What vexp does
vexp is a local-first context engine that builds a semantic graph of your codebase (AST + call graph + import graph + change coupling from git history), then uses a hybrid search — keyword matching (FTS5 BM25), TF-IDF cosine similarity, and graph centrality — to return only the code that's actually relevant to the current task.
The core idea is Graph-RAG applied to code:
Index — tree-sitter parses every file into an AST, extracts symbols (functions, classes, types), builds edges (calls, imports, type references). Everything stored in a single SQLite file (.vexp/index.db).
Traverse — when the agent asks "fix the auth bug in the checkout flow", vexp combines text search with graph traversal to find the right pivot nodes, then walks the dependency graph to include callers, importers, and related files.
Capsule — pivot files are returned in full, supporting files as skeletons (signatures + type defs only, 70-90% token reduction). The result is a compact "context capsule" that gives the agent everything it needs in ~2k-4k tokens instead of 15-20k.
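To illustrate the skeleton idea, here is a rough Python-only approximation using the stdlib ast module. vexp itself uses tree-sitter across 12 languages, so this is not its implementation, just the shape of the transformation:

```python
import ast

def skeleton(source: str) -> str:
    """Keep only top-level signatures; bodies are dropped."""
    out = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            out.append(f"def {node.name}({args}): ...")  # signature only
        elif isinstance(node, ast.ClassDef):
            out.append(f"class {node.name}: ...")
    return "\n".join(out)
```

Implementation bodies are usually where the tokens are, so emitting signatures and type definitions for supporting files is where the claimed 70-90% reduction comes from.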
Session Memory (v1.2)
The latest addition is session memory linked to the code graph. Every tool call is auto-captured as a compact observation. When the agent starts a new session, relevant memories from previous sessions are auto-surfaced inside the context capsule. If you refactor a function that a memory references, the memory is automatically flagged as stale. Think of it as a knowledge base that degrades gracefully as the code evolves.
How it works technically
- Rust daemon (vexp-core) handles indexing, graph storage, and query execution
- TypeScript MCP server (vexp-mcp) exposes 10 tools via the Model Context Protocol
- VS Code extension (vexp-vscode) manages the daemon lifecycle and auto-configures AI agents
- Supports 12 agents: Claude Code, Cursor, Windsurf, GitHub Copilot, Continue.dev, Augment, Zed, Codex, Opencode, Kilo Code, Kiro, Antigravity
- 12 languages: TypeScript, JavaScript, Python, Go, Rust, Java, C#, C, C++, Ruby, Bash
- The index is git-native: .vexp/index.db is committed to your repo, so teammates get it without re-indexing
- Local-first: no data leaves your machine
Everything runs locally. The index is a SQLite file on disk. No telemetry by default (opt-in only, and even then it's just aggregate stats like token savings %). No code content is ever transmitted anywhere.
Try it
Install the VS Code extension: https://marketplace.visualstudio.com/items?itemName=Vexp.vex...
The free tier (Starter) gives you up to 2,000 nodes and 1 repo — enough for most side projects and small-to-medium codebases. Open your project, vexp indexes automatically, and your agent starts getting better context on the next task. No account, no API key, no setup.
Docs: https://vexp.dev/docs
I'd love to hear feedback, especially from people working on large codebases (50k+ lines) where context management is a real bottleneck. Happy to answer any questions about the architecture or the graph-RAG approach.
I made a Chrome extension that blocks websites, with a mindful twist #
So I built ZenBlock to solve my problem: it blocks distracting sites, but when you try to open one, it shows a short breathing exercise. After that you can choose to get temporary access (5–30 min). The goal is to make you aware of the distraction.
Tech-wise: it’s built using Chrome extension blocking rules (Manifest V3 / declarativeNetRequest) and a local timer to handle the “allow for X minutes” part.
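A declarativeNetRequest rule of the kind described might look like the JSON below; the redirect to a bundled breathing page (`/breathe.html`) is my assumption about how the exercise gets shown, not necessarily ZenBlock's actual rule:

```json
{
  "id": 1,
  "priority": 1,
  "action": {
    "type": "redirect",
    "redirect": { "extensionPath": "/breathe.html" }
  },
  "condition": {
    "urlFilter": "||youtube.com",
    "resourceTypes": ["main_frame"]
  }
}
```

The "allow for X minutes" part then amounts to removing or overriding this rule for the duration of the local timer.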
This might sound a bit funny, but for me it genuinely helped: my watch time dropped from 4 hrs to 2.5 hrs/day, mostly because I got tired of waiting. It has analytics too, which store all data in local storage only.
Would love feedback on:
does the breathing pause feel helpful or just annoying?
what would make you keep an extension like this installed long-term?
CanaryAI v0.2.5 – Security monitoring on Claude Code actions #
CanaryAI is a macOS menu bar app that monitors Claude Code session logs and alerts on suspicious behaviour: reverse shells, credential file access, LaunchAgent/cron persistence, download-and-execute patterns, shell profile modification. It parses the JSON logs Claude Code writes locally — no interception, no proxying. Alert-only; it never blocks the agent.
All processing is local. Detection rules are YAML, so they can be extended.
GitHub: https://github.com/jx887/homebrew-canaryai
Let me know if you have any questions.
Aeterna – Self-hosted dead man's switch #
- I didn't want to hand my master password / recovery keys to a third-party service
- Most existing tools are either paid, closed-source, or feel over-engineered
- I wanted something I could just docker-compose up and forget about (mostly)
Core flow:
- Single docker-compose (Go backend + SQLite, React/Vite + Tailwind frontend)
- You set a check-in interval (30/60/90 days, etc.)
- It emails you a simple "Still alive?" link (uses your own SMTP server, no external deps)
- Miss the grace period → the switch triggers
- Decrypts vault contents and emails them to your nominated contacts, or hits webhooks you define
Security highlights:
- Everything at rest uses AES-256-GCM
- Master password → PBKDF2 hash (never stored in plaintext)
- Sensitive config (SMTP creds, etc.) encrypted in the DB
- No cloud APIs required; bring your own email
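A minimal sketch of the PBKDF2 key-derivation step mentioned above, using Python's stdlib; the iteration count is an assumed parameter, not necessarily Aeterna's:

```python
import hashlib
import hmac

def derive_key(master_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # 32-byte key, the size AES-256-GCM expects.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt,
                               iterations, dklen=32)

def verify(master_password: str, salt: bytes, stored: bytes,
           iterations: int = 600_000) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(derive_key(master_password, salt, iterations), stored)
```

Only the salt, iteration count, and derived hash need to be stored; the password itself never is.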
It's deliberately minimal and boringly secure rather than feature-heavy. Zero vendor lock-in. Repo: https://github.com/alpyxn/aeterna Would really value brutal feedback:
- Security model / crypto usage: anything smell wrong?
- Architecture: is a single SQLite DB OK long-term?
- UI/UX: is it intuitive enough?
- Missing must-have features for this kind of tool?
- Code: roast away if you want
Thanks for looking – happy to answer questions or iterate based on comments.
Fan Meter – A movie quiz game where you guess films from frames #
ZkzkAgent now has safe, local package management #
Just added package management with these goals:
- 100% local/offline capable (no web search required for known packages)
- Human confirmation for every install/remove/upgrade
- Smart fallback order to avoid conflicts:
  1. Special cases (Postman → snap, VS Code → snap --classic, Discord → snap/flatpak, etc.)
  2. Flatpak preferred for GUI apps
  3. Snap when Flatpak is unavailable
  4. apt only for CLI/system tools
- Checks if already installed before proposing anything
- Dry-run-style preview + full command output shown
- No blind execution: always asks "yes/no" for modifications
Example flow for "install postman": → Detects OS & internet (once) → Recognizes snap path → proposes "sudo snap install postman" → Shows preview & asks confirmation → Runs only after "yes" → Verifies with "postman --version"
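The fallback order can be sketched as a small decision function; the special-cases table and the capability flags here are simplified stand-ins for what the agent actually checks:

```python
# Hypothetical special cases, mirroring the examples above.
SPECIAL = {
    "postman": ["sudo", "snap", "install", "postman"],
    "code": ["sudo", "snap", "install", "--classic", "code"],
}

def plan_install(pkg, is_gui, have_flatpak, have_snap):
    """Return the command to propose (never executed without confirmation)."""
    if pkg in SPECIAL:
        return SPECIAL[pkg]
    if is_gui and have_flatpak:
        return ["flatpak", "install", "-y", pkg]
    if is_gui and have_snap:
        return ["sudo", "snap", "install", pkg]
    # CLI/system tools fall through to apt.
    return ["sudo", "apt", "install", "-y", pkg]
```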
Repo: https://github.com/zkzkGamal/zkzkAgent
Would love feedback, especially: - What other packages/tools should have special handling? - Should it prefer flatpak even more aggressively? - Any scary edge cases I missed?
Thanks!
ThreadKeeper – Save and restore Windows working context with Ollama #
I built ThreadKeeper, an open-source Windows app to solve a problem I constantly face: losing my working context due to interruptions.
When testing local LLMs or building AI agents, I usually have multiple terminal windows, IDEs, and 20+ browser tabs open. If I get interrupted, recovering that exact state takes immense cognitive load.
How it works: You hit Ctrl + Shift + S. It instantly captures your open windows, browser tabs, and clipboard (first 200 chars), and saves the state entirely locally (%APPDATA%). It then uses an LLM to auto-generate a summary of what you were doing. Later, 1-click restores all those apps and tabs.
Privacy First (Ollama Support): I didn't want my code or private keys going to the cloud. You can use external APIs (Gemini/Claude) via BYOK, but it fully supports Ollama for 100% offline, local summarization. Zero data leaves your machine.
The elephant in the room (SmartScreen): This is a free, MIT-licensed MVP. I haven’t purchased an expensive EV code signing certificate yet, so Windows SmartScreen will show a warning. I know this is a friction point, but the source code is fully open on GitHub for anyone to inspect or build themselves.
Website: https://www.thethread-keeper.com/ Github: https://github.com/tatsuabe69/thread-keeper
I’d love to hear your feedback or roast my code!
Pq – Simple, durable background tasks in Python using Postgres #
pq is a simpler approach: it's a Postgres-backed background-task library for Python, using SELECT ... FOR UPDATE SKIP LOCKED for concurrency safety. You just run your N task workers; that's it. The stuff I think is worth knowing about:
- Transactional enqueueing: you can enqueue a task inside the same DB transaction as your writes. If the transaction rolls back, the task never exists. This is the thing Redis literally can't give you.
- Fork isolation: every task runs in a forked child process. If it OOMs, segfaults, or leaks memory, the worker just keeps going. The parent monitors via os.wait4() and handles timeouts, OOM kills, and crashes cleanly.
- Periodic tasks: intervals or cron expressions, with overlap control (skip the tick if the previous run is still going), and pause/resume without deleting the schedule.
- Priority queues: five levels from BATCH to CRITICAL. You can dedicate workers to only process tasks with specific priorities.
- Three tables in your main database schema: pq_tasks for one-off tasks, pq_periodic for periodic tasks, and pq_alembic_version to track its own schema version (pq manages its own migrations).
There's also client IDs for idempotency and correlation with other tables in your application DB, upsert for debouncing (only the latest version of a task runs), lifecycle hooks that execute in the forked child (useful for fork-unsafe stuff like OpenTelemetry), and async task support.
What it won't do: high throughput (you're polling Postgres). If you need 10k+ tasks/sec or complex DAGs, use something else. For the kind of workload most apps actually have, it's probably sufficient.
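The fork-isolation pattern described above is worth seeing concretely. A POSIX-only Python sketch (pq's real worker additionally handles timeouts and OOM kills):

```python
import os

def run_isolated(task) -> int:
    """Run task() in a forked child; the parent survives any crash in it."""
    pid = os.fork()
    if pid == 0:
        # Child: run the task, report via exit code, never return into the
        # parent's code path.
        try:
            task()
            os._exit(0)
        except BaseException:
            os._exit(1)
    # Parent: wait4 also returns the child's resource usage (rusage).
    _, status, _rusage = os.wait4(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

Because the child gets its own address space, a memory leak or segfault in one task can't poison the long-running worker process.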
pip install python-pq (or uv add python-pq). Repo at https://github.com/ricwo/pq, docs at https://ricwo.github.io/pq
A virtual Zen garden for vibe coding #
I quit MyNetDiary after 3 years of popups and built a calorie tracker #
The whole thing is a single HTML file. No server, no account, no login, no cloud. Data lives on your device only. You open it in a browser, bookmark it, and it works — offline, forever.
The feature I'm most proud of is real-time pacing: it knows your eating window, the current time, and how much you've consumed, and tells you whether you're actually on track — not just what your total is.
Free trial, no signup required: calories.today/app.html
Built this for myself after losing weight and just wanting to maintain without an app trying to sell me something every day. If that sounds familiar, give the trial a shot.
Voted.dev – Vote on New Startups #
I wanted something dead simple for discovering new products.
Submit a domain (each one can only be posted once), add a one-liner, and people vote. No hunters, no badges, no collections, no blog posts.
What would you add (or deliberately leave out)?
Upti – Track cloud provider incidents and get alerts #
Why I made it:
- I wanted a simple way to track service disruptions across providers in one place
- Official status pages are useful but fragmented
- I needed quick, actionable notifications
What Upti does:
- Scrapes provider status/incident pages
- Sends outage/incident alerts
- Keeps the experience lightweight and fast
I’d love feedback on:
- Which providers/services I should prioritize next
- Alert quality (too noisy vs too late)
- What would make this genuinely useful for SRE/DevOps workflows
Happy to share implementation details if useful.
AppFeedBackScratch – Reciprocal Feedback for Indie Developers #
So the idea here is simple: build in an incentive for builders to take a serious look at other builders' apps. On this site, if you give helpful reviews to other devs (determined by AI and upvotes), then your app rises higher on the list and gets seen by more other builders.
Since this is new and empty, I'm planning to personally give feedback to the first 10 apps posted.
OctoFlow v1.0.0 – GPU VM where the GPU runs autonomously, CPU is BIOS #
The idea: the GPU is the computer, the CPU is the BIOS.
You boot a VM, program a dispatch chain of kernel instances, submit once with vkQueueSubmit, and everything — layer execution, inter-layer communication, self-regulation, compression, database queries — happens on the GPU without CPU round-trips. The CPU just provides I/O.
let vm = vm_boot()
let prog = vm_program(vm, kernels, 4)
vm_write_register(vm, 0, 0, input)
vm_execute(prog)
let result = vm_read_register(vm, 3, 30)
4 VM instances, one submit, no CPU involvement between stages.
The memory model is 5 SSBOs: Registers (per-VM working memory), Metrics (regulator signals), Globals (shared mutable — KV cache, DB tables), Control (indirect dispatch params), Heap (immutable bulk data — quantized weights).
What makes it interesting:
- Homeostasis regulator: each VM instance has a kernel that monitors activation norms, memory pressure, throughput. The GPU self-regulates without asking the CPU.
- GPU self-programming: a kernel writes workgroup counts to the Control buffer, the next vkCmdDispatchIndirect reads them. The GPU decides its own workload.
- Compression as computation: Q4_K dequantization, delta encoding, dictionary lookup — these are just kernels in the dispatch chain, not a special subsystem. Adding a new codec = writing an emitter. No Rust changes.
- CPU polling: Metrics and Control are HOST_VISIBLE. CPU can poll GPU state and activate dormant VMs without rebuilding the command buffer. The GPU broadcasts needs, the CPU fulfills them.
The VM is workload-agnostic. Same architecture handles LLM inference, database queries, physics sims, graph neural networks, DSP pipelines, and game AI. We've validated all six. The dispatch chain is the universal primitive.
What's new in v1.0.0 beyond the GPU VM:
- 247 stdlib modules (up from 51)
- Native media codecs (PNG, JPEG, GIF, MP4/H.264; no ffmpeg)
- GUI toolkit with 15+ widgets
- Terminal graphics (Kitty/Sixel)
- 1,169 tests passing
- Still 2.3 MB, still zero external dependencies
The zero-dep thing is real — zero Rust crates. The binary links against vulkan-1 and system libs, nothing else. cargo audit has nothing to audit.
Landing page: https://octoflow-lang.github.io/octoflow/
GPU VM details: https://octoflow-lang.github.io/octoflow/gpu-vm.html
GitHub: https://github.com/octoflow-lang/octoflow
Download: https://github.com/octoflow-lang/octoflow/releases/latest
I'm one developer. This is early. The GPU VM works and tests pass bit-exact, but there's a lot of road ahead — real LLM inference at scale, multi-agent orchestration, the full database engine. I'd love feedback from anyone who works with GPU compute, Vulkan, or language design.
Manifold – Git worktree automation #
I wanted to run multiple AI coding agents on the same project at the same time so that I could implement multiple specs at the same time. Each agent gets its own git worktree, its own branch, and a real terminal. No wrapper. You see exactly what you'd see in your CLI, but for three agents at once.
Pick the right CLI for each task. Steer agents mid-flight. Create PRs when they're done.
Built 50% of Manifold with Manifold. macOS, open source, v0.1. Would love feedback.
Claude Agent SDK for Laravel – Build AI Agents with Claude Code in PHP #
The problem: Claude Code is powerful (file ops, bash, code editing, subagents, MCP) but it's a CLI tool. If you want to use those capabilities from a web app, there wasn't a clean PHP interface.
This SDK communicates with Claude Code via subprocess, parses the streaming JSON output, and provides a fluent PHP API with full type safety.
Technical highlights: - Generator-based streaming (memory efficient) - Full message/content block parsing with typed classes - Subagent orchestration (delegate tasks to specialized agents) - MCP server integration (stdio + SSE) - JSON schema structured output - Session resume and fork for multi-turn conversations - PHP 8.1+ with readonly properties, enums, named args
It's MIT licensed. Would love feedback on the architecture and API design.
Shapow – Nginx module to block bots with PoW #
Since my cgit instance has been getting hammered by botnets for a while now, I've decided to put a little more effort into my blocking strategy.
In practice this meant putting a JS proof-of-work challenge on the site, as these are less obtrusive than traditional CAPTCHAs and seem difficult to solve in bulk. I also wanted:
* Support for users who block cookies
* Something I could easily integrate into my existing configuration
* Something simple, I need it to do one thing well
I looked at a few existing solutions but wasn't satisfied (and admittedly I wanted an excuse to make something with Nginx), so I made my own!
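For anyone unfamiliar with the technique, a generic hashcash-style proof of work fits in a few lines; Shapow's actual challenge format and difficulty encoding will differ:

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty_bits: int) -> int:
    """Brute-force a nonce whose hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the client submits this

def check(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    # The server side is a single hash, which is the whole point:
    # expensive to solve in bulk, nearly free to verify.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Solving cost scales as roughly 2^difficulty_bits hashes per request, which is what makes bulk scraping uneconomical while staying invisible to a human visitor.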
Source: https://github.com/markozajc/shapow
Demo: https://zajc.tel/shapow-demo-diff25 (you stay whitelisted for 5s)
Demo with a more reasonable difficulty: https://zajc.tel/shapow-demo
Binaries are only available for Debian stable amd64, and I've also uploaded an AUR package. Build instructions for others are in the README.
CrewForge - A share room where humans and agents think out loud #
I built CrewForge because AI agent tools are great at producing a first draft, but getting to a usable result usually takes many correction cycles. Gaps in detail and edge cases keep surfacing, and the review loop ends up eating more time than the drafting itself.
CrewForge shifts that loop into a shared terminal room. Humans and multiple opencode agents share one persistent conversation timeline, and anyone can jump in anytime. With visibility into each other's reasoning, they challenge assumptions, catch edge cases, and converge on a more reliable answer.
The goal is not just speed from parallelism. It is reducing coordination overhead so I spend less time micromanaging back-and-forth and more time on higher-leverage work.
Built in Rust, shipped via npm (no runtime dependency beyond opencode).
GitHub: https://github.com/Rexopia/crewforge
Happy to answer questions about the architecture.
The Bridge Language – Declarative dataflow for controlled egress #
Unlike Python or JavaScript, where you write a list of instructions for a computer to execute in order, a .bridge file describes a static circuit.
There is no "execution pointer" that moves from the top of the file to the bottom. The engine doesn't "run" your file; it uses your instructions to understand how your GraphQL fields are physically wired to your tools ... and can execute that circuit.
What can you do with this?
If you maintain a lot of 3rd-party integrations (like multiple payment providers, search APIs, or legacy inventory systems) then this will help.
It turns your integration layer from imperative code that you have to maintain, into a declarative schematic that the Bridge core executes for you.
X-Ray – Filter your X (Twitter) timeline by country #
X recently added "Account based in [Country]" to profiles, so the location data is publicly available — X-Ray uses it
to show location badges on every tweet and lets you blur or block tweets from selected countries.
There's no native way to do this on X yet (though they're reportedly building one). So I built it.
Tech: Chrome Extension (Manifest V3), Node.js backend, PostgreSQL. The extension intercepts X's GraphQL API responses
to extract location data, caches it to avoid rate limits.
It's free, 170+ countries, 9 languages.
Happy to answer technical questions.
Building a Low-Power Cognitive Architecture That Learns Without LLMs #
~32,500 lines of Zig written with me at the helm of Claude Code.
More research than I can even fathom across computer science, cognitive science, neuroscience, and more. As the post points out, I've lost the ability to actually reason about what is going on with its internals. I'm just the high-level architect making suggestions at this point.
I am an AI agent, my human boss failed me, so I'm exposing him #
Quill – A system-wide tech dictionary for the AI coding era #
What makes it different:
- Pick your level: ELI5, ELI15, Pro, Code Samples, or Resources
- Drill down: explanations highlight related terms, click to go deeper (like a Wikipedia rabbit hole for tech)
- Works everywhere: system-wide via Accessibility API, not just in one app
- TL;DR + resource links to official docs
- Disk cache so repeated lookups are instant
Uses Gemini Flash by default — you'll need a free API key from Google AI Studio (takes 30 seconds). Also supports Claude API and Claude CLI.
Written in Swift, ~3K LOC, hexagonal architecture. Not sandboxed (the Accessibility API requires that), so it can't go on the App Store; DMG download and source available.
macOS 14+, Apple Silicon or Intel.
Agentic Gatekeeper – AI pre-commit hook to auto-patch logic errors #
I built Agentic Gatekeeper, a headless pre-commit hook baked into the VS Code Source Control panel that autonomously patches code before you commit it.
The Problem: Whether I'm writing code manually or letting LLMs (Cursor/Copilot) generate it, keeping logic strictly in line with local project rules (like CONTRIBUTING.md or custom architecture guidelines) is tedious. Standard linters catch syntax, but they don't catch business logic or rules like "You must use the Fetch wrapper for API calls in this folder."
How it works: When you stage files and trigger the hook, the extension spins up a parallel execution engine using the latest frontier models (Claude 4.6, DeepSeek V4, or local Ollama). It parses your workspace for local routing rules (e.g., .gatekeeper/*.md), evaluates the raw Git diffs, and utilizes the native VS Code WorkspaceEdit API to instantly auto-patch the logic errors directly in your editor.
It basically turns plain-English markdown into strict compiler rules that the IDE enforces before a bad commit is made.
The biggest challenge was handling multi-file concurrency and preventing race conditions when evaluating massive diffs, so I implemented a batched validation loop with explicit rollback safety checks. It natively supports OpenRouter so you can hot-swap models.
It's completely open-source. I'd love for you to try breaking it on a messy branch or poke around the multi-provider abstraction layer in the repo.
VS Code Marketplace: https://marketplace.visualstudio.com/items?itemName=revanthp... GitHub: https://github.com/revanthpobala/agentic-gatekeeper
Would love your technical feedback on the prompt orchestration or the VS Code API integration!