Show HN for February 13, 2026
Skill that lets Claude Code/Codex spin up VMs and GPUs #
When an agent writes code, it usually needs to start a dev server, run tests, and open a browser to verify its work. Today that all happens on your local machine. This works fine for a single task, but the agent is sharing your computer: your ports, RAM, screen. If you run multiple agents in parallel, it gets chaotic. Docker helps with isolation, but it still uses your machine's resources, and it doesn't give the agent a browser, a desktop, or a GPU to close the loop properly. The agent could handle all of this on its own if it had a primitive for starting VMs.
CloudRouter is that primitive — a skill that gives the agent its own machines. The agent can start a VM from your local project directory, upload the project files, run commands on the VM, and tear it down when it's done. If it needs a GPU, it can request one.
cloudrouter start ./my-project
cloudrouter start --gpu B200 ./my-project
cloudrouter ssh cr_abc123 "npm install && npm run dev"
Every VM comes with a VNC desktop, VS Code, and Jupyter Lab, all behind auth-protected URLs. When the agent is doing browser automation on the VM, you can open the VNC URL and watch it in real time. CloudRouter wraps agent-browser [1] for browser automation.
cloudrouter browser open cr_abc123 "http://localhost:3000"
cloudrouter browser snapshot -i cr_abc123
# → @e1 [link] Home @e2 [link] Settings @e3 [button] Sign Out
cloudrouter browser click cr_abc123 @e2
cloudrouter browser screenshot cr_abc123 result.png
Here's a short demo: https://youtu.be/SCkkzxKBcPE
What surprised me is how this inverted my workflow. Most cloud dev tooling moves from the cloud (background agents, remote SSH, etc.) to local for testing. But CloudRouter keeps your agents local and pushes the agent's work to the cloud. The agent does the same things it would do locally — running dev servers, operating browsers — but now on a VM. Once I stopped watching agents work and worrying about local constraints, I started running more tasks in parallel.
The GPU side is the part I'm most curious to see develop. Today if you want a coding agent to help with anything involving training or inference, there's a manual step where you go provision a machine. With CloudRouter the agent can just spin up a GPU sandbox, run the workload, and clean it up when it's done. Some of my friends have been using it to have agents run small experiments in parallel, but my ears are open to other use cases.
Would love your feedback and ideas. CloudRouter lives under packages/cloudrouter of our monorepo https://github.com/manaflow-ai/manaflow.
OpenWhisper – free, local, and private voice-to-text macOS app #
So I decided to see if I could vibe code it with 0 macOS app & Swift experience.
It uses a local binary of whisper.cpp (a fast implementation of OpenAI's Whisper voice-to-text model in C++).
GitHub: https://github.com/richardwu/openwhisper
I also decided to take this as an opportunity to compare 3 agentic coding harnesses:
Cursor w/ Opus 4.6:
- Best one-shot UI by far
- Didn't get permissioning correct
- Had issues with the "Cancel recording" hotkey staying active all the time

Claude Code w/ Opus 4.6:
- Fewest turns to get the main functionality right (recording, hotkeys, permissions)
- Got a decent UI with a few more turns

Codex App w/ Codex 5.3 Extra-High:
- Worst one-shot UI
- None of the functionality worked without multiple follow-up prompts
I speak 5 languages. Duolingo taught me none. So I built lairner #
I learned Turkish with lairner itself -- after I built it. That's the best proof I can give you that this thing actually works.
The other four I learned the hard way: talking to people, making mistakes, reading things I actually cared about, and being surrounded by the language until my brain gave in. Every language app I tried got the same thing wrong: they teach you to pass exercises, not to speak. You finish a lesson, you get your dopamine hit, you maintain your streak, and six months later you still can't order food in the language you've been "learning."
So I built something different. lairner has 700+ courses across 70+ languages, including ones that Duolingo will never touch because there's no profit in it. Endangered languages. Minority languages. A Turkish speaker can learn Basque. A Chinese speaker can learn Welsh. Most platforms only let you learn from English. lairner lets you learn from whatever you already speak.
We work together with several institutes for endangered languages so we can teach those languages on our platform.
It's a side project. I work a full-time dev job and build this in evenings and weekends. Tens of thousands of users so far, no ad spend, no funding.
I'm not going to pretend this replaces living in a country or having a conversation partner. But I wanted something that at least tries to teach you the language instead of teaching you to play a language-themed game.
Happy to answer anything.
Rover – Embeddable web agent #
One script tag. No APIs to expose. No code to maintain.
We built Rover because we think websites need their own conversational agentic interfaces: users don't want to figure out how your site works, and sites that lack one risk being disintermediated by Chrome's or Comet's agent.
We are the only web agent with a DOM-only architecture, which lets us ship an embeddable script as a harness to take actions on your site. Our DOM-native approach hits 81.39% on WebBench.
Beta with embed script is live at rtrvr.ai/rover.
Built by two ex-Google engineers. Happy to answer architecture questions.
GitHub "Lines Viewed" extension to keep you sane reviewing long AI PRs #
Designed to look like a stock GitHub UI element - it even respects light/dark theme. Runs fully locally, no API calls.
Splits insertions and deletions by default, but you can also merge them into a single "lines" figure in the settings.
MicroGPT in 243 Lines – Demystifying the LLM Black Box #
The Architecture of Simplicity

Unlike modern frameworks that hide complexity behind optimized CUDA kernels, microgpt exposes the raw mathematical machinery. The code implements:
The Autograd Engine: A custom Value class that handles the recursive chain rule for backpropagation without any external libraries.
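A from-scratch sketch of that idea (not the repo's actual code; the class shape and names here are my own): each Value remembers which values it was computed from and the local derivative with respect to each, and backward() walks the graph applying the chain rule.

```python
class Value:
    """A scalar that remembers how it was computed, so gradients
    can flow backward through the expression graph."""
    def __init__(self, data, children=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._children = children        # Values this one was computed from
        self._local_grads = local_grads  # d(self)/d(child) for each child

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self, grad=1.0):
        # Recursive chain rule: accumulate this node's gradient,
        # then push grad * local_derivative down each edge.
        self.grad += grad
        for child, local in zip(self._children, self._local_grads):
            child.backward(grad * local)

x = Value(3.0)
y = x * x + x      # dy/dx = 2x + 1 = 7 at x = 3
y.backward()
print(x.grad)      # → 7.0
```

The per-edge recursion is the naive version; real implementations (micrograd included) do a topological sort first so each node's backward runs once, but the gradient it computes is the same path-sum of products.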
GPT-2 Primitives: Atomic implementations of RMSNorm, Multi-head Attention, and MLP blocks, following the GPT-2 lineage with modernizations like ReLU.
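RMSNorm, for instance, is only a few lines in pure Python. This is a sketch of the bare primitive (no learned scale parameter, which a real block would multiply in afterwards):

```python
def rmsnorm(xs, eps=1e-5):
    """Scale a vector by its root-mean-square. Unlike LayerNorm,
    there is no mean-centering step."""
    ms = sum(x * x for x in xs) / len(xs)   # mean of squares
    return [x / (ms + eps) ** 0.5 for x in xs]

out = rmsnorm([3.0, 4.0])
print(out)
```

After normalization the mean square of the output is ~1, i.e. the sum of squares is ~len(xs), which is the invariant the attention and MLP blocks rely on.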
The Adam Optimizer: A pure Python version of the Adam optimizer, proving that the "magic" of training is just well-orchestrated calculus.
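A minimal pure-Python Adam step looks roughly like this (the hyperparameter defaults below are generic textbook values, not necessarily microgpt's):

```python
def adam_step(params, grads, m, v, t, lr=1e-3, b1=0.9, b2=0.95, eps=1e-8):
    """One Adam update over plain Python lists.
    m, v are running moment estimates; t is the 1-based step count."""
    for i, g in enumerate(grads):
        m[i] = b1 * m[i] + (1 - b1) * g        # first moment (mean of grads)
        v[i] = b2 * v[i] + (1 - b2) * g * g    # second moment (mean of grad^2)
        m_hat = m[i] / (1 - b1 ** t)           # bias correction for warm-up
        v_hat = v[i] / (1 - b2 ** t)
        params[i] -= lr * m_hat / (v_hat ** 0.5 + eps)
    return params, m, v

# Toy usage: minimize f(p) = p^2 starting from p = 1.0 (gradient is 2p).
p, m, v = [1.0], [0.0], [0.0]
for t in range(1, 101):
    p, m, v = adam_step(p, [2 * p[0]], m, v, t, lr=0.1)
print(round(p[0], 4))
```

That really is all the "magic": two exponential moving averages and a rescaled gradient step.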
The Shift to the Edge: Privacy, Latency, and Power

For my doctoral research at Woxsen University, this codebase serves as a blueprint for the future of Edge AI. As we move away from centralized, massive server farms, the ability to run "atomic" LLMs directly on hardware is becoming a strategic necessity. Karpathy's implementation provides empirical clarity on how we can incorporate on-device MicroGPTs to solve three critical industry challenges:
Better Latency: By eliminating the round-trip to the cloud, on-device models enable real-time inference. Understanding these 243 lines allows researchers to optimize the "atomic" core specifically for edge hardware constraints.
Data Protection & Privacy: In a world where data is the new currency, processing information locally on the user's device ensures that sensitive inputs never leave the personal ecosystem, fundamentally aligning with modern data sovereignty standards.
Mastering the Primitives: For Technical Product Managers, this project proves that "intelligence" doesn't require a dependency-heavy stack. We can now envision lightweight, specialized agents that are fast, private, and highly efficient.
Karpathy’s work reminds us that to build the next generation of private, edge-native AI products, we must first master the fundamentals that fit on a single screen of code. The future is moving toward decentralized, on-device intelligence built on these very primitives. Link:
Bubble Sort on a Turing Machine #
111011011111110101111101111
and gives this output:
101101110111101111101111111
I.e., it's sorting the array [3,2,7,1,5,4]. The machine has 31 states and requires 1424 steps before it comes to a halt. It also introduces two extra symbols onto the tape, 'A' and 'B'. (You could argue that 0 is also an extra symbol, because turingmachine.io uses blank, ' ', as well.)
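The unary format is easy to reproduce: each number n becomes n '1's, with a '0' delimiter between adjacent numbers. A quick Python sanity check of the two tape strings above:

```python
def encode(nums):
    # Each number n is written as n ones; '0' separates adjacent numbers.
    return "0".join("1" * n for n in nums)

def decode(tape):
    return [len(run) for run in tape.split("0")]

arr = [3, 2, 7, 1, 5, 4]
assert encode(arr) == "111011011111110101111101111"          # the TM's input
assert encode(sorted(arr)) == "101101110111101111101111111"  # the TM's output
print(decode("101101110111101111101111111"))                 # → [1, 2, 3, 4, 5, 7]
```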
When I started writing the code, the LLM (Claude) balked at using unary numbers, so we implemented bubble_sort.yaml, which uses the tape symbols '1', '2', '3', '4', '5', '6', '7'. This machine has fewer states, 25, and requires only 63 steps to perform the sort. So it's easier to watch it work, though it's not as generalized as the other TM.
Some comments about how the 31 states of bubble_sort_unary.yaml operate:
| Group | Count | Purpose |
|---|---|---|
| `seek_delim_{clean,dirty}` | 2 | Pass entry: scan right to the next `0` delimiter between adjacent numbers. |
| `cmpR_`, `cmpL_`, `cmpL_ret_`, `cmpL_fwd_` | 8 | Comparison: alternately mark units in the right (`B`) and left (`A`) numbers to compare their sizes. |
| `chk_excess_`, `scan_excess_`, `mark_all_X_` | 6 | Excess check: right number exhausted — see if unmarked `1`s remain on the left (meaning L > R, swap needed). |
| `swap_` | 7 | Swap: bubble each `X`-marked excess unit rightward across the `0` delimiter. |
| `restore_*` | 6 | Restore: convert `A`, `B`, `X` marks back to `1`s, then advance to the next pair. |
| `rewind` / `done` | 2 | Rewind to start after a dirty pass, or halt. |
(The above is in the README.md if it doesn't render on HN.)
I'm curious if anyone can suggest refinements or further ideas. And please send pull requests if you're so inclined. My development path: I started by writing a pretty simple INITIAL_IDEAS.md, which got updated somewhat, then the LLM created a SPECIFICATION.md. For the bubble_sort_unary.yaml TM I had to get the LLMs to build a SPEC_UNARY.md because too much context was confusing them. I made 21 commits throughout the project and worked for about 6 hours (I was able to multi-task, so it wasn't 6 hours of hard effort). I spent about $14 on tokens via Zed and asked some questions via t3.chat ($8/month plan).
A final question: What open source license is good for these types of mini-projects? I took the path of least resistance and used MIT, but I observe that turingmachine.io uses BSD 3-Clause. I've heard of "MIT with Commons Clause;" what's the landscape surrounding these kind of license questions nowadays?
Open-source CI for coding with AI #
My agent started its own online store #
Agents can handle listing, checkout, fulfillment, and post-purchase flows via API (digital + POD), with Stripe payouts and webhooks for automation. Minimal human intervention, only where required (Stripe onboarding).
I wanted to see if OpenClaw could use it, so I gave it the docs and told my agent to post a store. After I linked my Stripe account, I came back five minutes later and it had posted 2 products. Crazy what's possible now with a smart agent and API access.
Check it out at https://clawver.store . Feel free to build your own agent and lmk what you think.
Vintageterminals.io – a bootable museum of vintage OSes (13 so far) #
Seedance 2.0 - Create cinematic AI videos from text and images #
Kuro-Nuri – Browser-based image redaction and compression using WASM #
I'm a beginner developer from Japan.
I built this tool because I was tired of opening Photoshop just to redact a name or a face from a screenshot. I didn't want to use existing "free online tools" because uploading sensitive images to a random server felt unsafe.
So I built Kuro-Nuri ("Blacked Out" in Japanese). It runs entirely in the browser using WebAssembly and TensorFlow.js (for auto-detecting faces). No data leaves your device.
Features:
- Drag & drop to auto-redact faces.
- Client-side compression.
- Removes Exif metadata automatically.
The code is still a bit messy as I'm learning, but I'd love to hear your feedback on the performance and usability.
Thanks!
Busca – the fuzzy ripgrep fast code explorer #
NgDiagram v1.0, an open-source Angular library for interactive diagrams #
ngDiagram lets you easily add interactive diagrams to your Angular apps. Key features include:
- Signal-based architecture for reactive updates
- Native Angular components (HTML & CSS-first, fully accessible and customizable)
- Drag & drop, multi-select, grid snapping, pan & zoom, custom nodes and edges
- Middleware system for easy extension and integration with your data
- Full TypeScript support and developer-friendly API
You can use ngDiagram to build dashboards, editors, flowcharts, network diagrams, or any tool that needs visual node/edge diagrams.
We’re actively developing ngDiagram based on community feedback, so your suggestions really shape the direction of the library. We’d love for you to test it, report issues, and help us make it better. If you like it, a GitHub star would be much appreciated!
Repo: https://github.com/synergycodes/ng-diagram
Looking forward to your feedback and ideas!
CCClub – Leaderboard for Claude Code token usage among friends #
Setup:
npx ccclub init # creates your group, gives you an invite code
npx ccclub join ABCDEF # your friend joins with your code
Usage syncs automatically at the end of each Claude Code session. Run "ccclub" to see rankings:

  #  Name        Tokens  Cost    Chats
  1  BryantChen  68M     $41.56  184
  2  ventuss     30.7M   $22.72  60
  3  mazzystar   11.3M   $9.12   47

Each group also gets a web dashboard at ccclub.dev/g/<code>.

Privacy: only aggregated hourly blocks are uploaded (token counts and model names). No prompts, no code, no conversation content. Run "ccclub show-data" to see exactly what gets sent.
Yori – Isolating AI Logic into "Semantic Containers" (Docker for Code) #
You ask an AI to fix a bug or implement a function, and it rewrites the whole file. It changes your imports, renames your variables, or deletes comments it deems unnecessary. It’s like giving a junior developer (like me) root access to your production server just to change a config file.
So, 29 days ago, I started building Yori to solve the trust problem.
The Concept: Semantic Containers

Yori introduces a syntax that acts like a firewall for AI. You define a $${ ... }$$ block inside a text file.

Outside the block (The Host): your manual code, architecture, and structure. The AI cannot touch this.
Inside the block (The Container): you write natural-language intent. The AI can only generate code here.
Example: myutils.md

```cpp
EXPORT: "myfile.cpp"
// My manual architecture - AI cannot change this
#include "utils.h"

void process_data() {
    // Container: The AI is sandboxed here, but inherits the rest of the file as context
    $${
    Sort the incoming data vector using quicksort.
    Filter out negative numbers.
    Print the result.
    }$$
}
EXPORT: END
```

How it works: Yori is a C++ wrapper that parses these files. Whatever is inside the EXPORT block and outside the containers ($${ }$$) is copied as-is. When you run `yori myutils.md -make -series`, it sends the prompts to a local (Ollama) or cloud LLM, fills the blocks with generated code, and compiles the result using your native toolchain (GCC/Clang/Python).
If compilation fails, it feeds the error back to the LLM in a retry loop (self-healing).
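As a toy illustration of the parsing step (my own sketch, not Yori's implementation; the placeholder name is made up), splitting a file into host template and container prompts is a small regex job:

```python
import re

BLOCK = re.compile(r"\$\$\{(.*?)\}\$\$", re.DOTALL)

def split_containers(source):
    """Return (template, prompts): the host code with numbered
    placeholders, plus the natural-language intent of each container."""
    prompts = []
    def replace(match):
        prompts.append(match.group(1).strip())
        return f"/* YORI_SLOT_{len(prompts) - 1} */"
    return BLOCK.sub(replace, source), prompts

src = "void f() { $${ Sort the vector. }$$ }"
template, prompts = split_containers(src)
print(prompts)    # → ['Sort the vector.']
print(template)   # → 'void f() { /* YORI_SLOT_0 */ }'
```

The compile/retry loop then only ever rewrites what goes into the slots, which is the whole safety argument.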
Why I think this matters:
1. Safety: You stop giving AI "root access" to your files.
2. Intent as Source: The prompt stays in the file. If you want to port your logic from C++ to Rust, you keep the prompts and just change the compile target.
3. Incremental Builds (to be added soon): Named containers allow for caching. If the prompt hasn't changed, you don't pay for an API call.
It’s open source (MIT), C++17, and works locally.
I’d love feedback on the "Semantic Container" concept. Is this the abstraction layer we've been missing for AI coding? Let me hear your ideas. Also, if you can't run yori.exe, tell me what went wrong and we'll see how to fix it; I've opened a GitHub issue for this. I'm also working on documentation for the project (GitHub wiki), so expect that soon.
GitHub: https://github.com/alonsovm44/yori
Thanks!
A private, bulk audio converter using WASM (186x real-time speed) #
Toil, a go library for simple parallelism #
So I wrote toil -- a port of two of my favorite Python functions into the Go world. It's very simple. There are optimizations to be made for sure, but this is the result of a couple of hours of wanting something that felt Go-like in the right way.
Forkwatch – Discover meaningful patches hiding in GitHub forks #
I had Claude build a CLI tool that analyzes GitHub forks to surface changes that haven't been submitted as PRs.
The core idea is convergence: when multiple independent forks touch the same file and make the same change, that's a strong signal something needs fixing upstream.
Example: I ran forkwatch against a Ruby API client library and found 11 independent forks all upgrading the same stale dependency. 4 of them made byte-for-byte identical changes to another file.
$ forkwatch analyze maximadeka/convertkit-ruby
convertkit-ruby.gemspec (11 forks converge here)
WebinarGeek +1 -2 — Change gitspec faraday version
- spec.add_runtime_dependency "faraday", "~> 1.0"
- spec.add_runtime_dependency "faraday_middleware", "~> 1.0"
+ spec.add_runtime_dependency "faraday", '>= 2.0'
...
lib/convertkit/connection.rb (4 forks converge here) Most common change pattern:
require "faraday"
-require "faraday_middleware"
require "json"
WebinarGeek, chaiandconversation, alexbndk, excid3
It filters out noise (dependabot, lock files, CI config), groups forks by files changed, and deduplicates identical patches. There's a --json flag for scripting/AI and a --patch flag that outputs a unified diff you can pipe to git apply. It uses the GitHub CLI for auth and one API call per fork. Written in Go.
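The convergence idea can be sketched in a few lines: hash each fork's patch per file, then keep the (file, patch) pairs that multiple independent forks agree on. This is my own illustration, not forkwatch's actual code:

```python
import hashlib
from collections import defaultdict

def convergent_patches(fork_patches, min_forks=2):
    """Group forks whose patch to the same file is byte-for-byte identical.
    fork_patches: {fork_name: {file_path: patch_text}}"""
    groups = defaultdict(list)
    for fork, patches in fork_patches.items():
        for path, patch in patches.items():
            digest = hashlib.sha256(patch.encode()).hexdigest()
            groups[(path, digest)].append(fork)
    # Keep only changes that multiple independent forks converge on.
    return {key: forks for key, forks in groups.items()
            if len(forks) >= min_forks}

forks = {
    "a": {"x.gemspec": "+faraday >= 2.0"},
    "b": {"x.gemspec": "+faraday >= 2.0"},
    "c": {"x.gemspec": "+faraday ~> 1.9"},
}
hits = convergent_patches(forks)
print([sorted(v) for v in hits.values()])  # → [['a', 'b']]
```

A real tool would fuzz this a little (normalize whitespace, ignore context lines) before hashing, which is presumably where the "most common change pattern" grouping comes from.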
Codex HUD – Claude-HUD Style Status Line for Codex CLI #
It adds a real-time status line with:
- active model
- project + git branch/dirty state
- 5h and 7d usage bars
- automatic Spark vs default limit selection
Quick install:
git clone https://github.com/anhannin/codex-hud.git
cd codex-hud/Codex-HUD
./install.sh
Feedback I’m looking for:
- portability across Linux distros/shell setups
- readability on narrow terminal widths
- edge cases in usage/rate parsing
Repo issues are welcome if you hit bugs.

Explore ASN Relationships and BGP Route History with Real Internet Data #
I’ve been working on a side project called ipiphistory.com.
It’s a searchable explorer for:
– ASN relationships (provider / peer / customer)
– BGP route history
– IP to ASN mapping over time
– AS path visibility
– Organization and geolocation data
The idea started from my frustration when explaining BGP concepts to junior engineers and students — most tools are fragmented across multiple sources (RouteViews, RIPE RIS, PeeringDB, etc.).
This project aggregates and indexes historical routing data to make it easier to:
– Understand how ASNs connect
– Explore real-world routing behavior
– Investigate possible hijacks or path changes
– Learn BGP using real data
It’s still early and I’d really appreciate feedback from the HN community — especially on usability and features you’d like to see.
Happy to answer technical questions about data ingestion and indexing as well.
A lightweight, ad-free medal tracker for Milano Cortina 2026 #
So I built Milano2026.live.
The goal was simple:
Speed: Near-instant loading for checking results on the go.
No Bloat: No ads, no unnecessary JS.
Better UX: A clean schedule that doesn't feel like reading a spreadsheet (just fixed the formatting based on early feedback!).
It's built using Next.js 15 and deployed on Vercel. I'm using ISR to keep the medal counts fresh while keeping the server load minimal.
I'd love to hear your thoughts on the performance or if there's any specific data you'd like to see added during the games!
Wip – Monitor AI agent commits and local Git state from the CLI #
The problem: AI coding agents (Claude, Copilot, Cursor, Devin) push commits, create branches, and open PRs across your repos. You come back and have no idea what changed. wip gives you that picture in one command.
Features:
- Agent detection with active/recent/stale status
- AI-powered narrative briefings (Anthropic, OpenAI, Gemini)
- WIP task tracker linked to repos
- JSON output for scripting
- Local-first, no telemetry
Built the whole thing with Claude Code (Opus 4.6) in about 5 hours, idea to PyPI. Python 3.9+, MIT licensed.
GitHub: github.com/drmnaik/wip
PyPI: pypi.org/project/wip-cli
Would love feedback — especially on what signals you'd want detected beyond commit authors and branch prefixes.

Funxy v0.6 – scripts that ship as standalone executables #
New release: `funxy build` now produces standalone executables. (Previously I shared the Go embedding — this is the other direction: ship scripts as binaries.) You compile a script, get a binary — bytecode + VM + all deps baked in. No Funxy on the target machine.
```bash
funxy build server.lang -o myserver
scp myserver prod:~/
ssh prod './myserver'
```
Embedding: files, directories, glob patterns (`*.html`), brace expansion (`*.{js,css}`). Everything goes into the binary and is available via `fileRead`, `fileReadBytes`, `isFile` — the script doesn't know whether it's reading from disk or from the embedded bundle.
```bash
funxy build webapp.lang --embed templates,static --embed "assets/*.css" -o webapp
```
Dual-mode: the binary is also a full Funxy interpreter. By default it runs the embedded app. Pass `$` and it switches to interpreter mode — run any script, use `-pe` one-liners, same as regular `funxy`:
```bash
./myserver                 # runs embedded app
./myserver --port 8080     # flags via sysArgs
./myserver $ other.lang    # interpreter
./myserver $ -pe '1 + 2'   # eval mode
```
Cross-compile with `--host` (point to a funxy binary for the target platform).
[Releases](https://github.com/funvibe/funxy/releases) | [GitHub](https://github.com/funvibe/funxy)
Ez – project-scoped command aliases for macOS #
ez stores aliases in a .ez_cli.json file per directory. The nice thing about this is that you can have the same alias names (e.g. ez test, ez build) in all your projects, and in each project they do different things. It's also a natural place for them, since you can commit the file to the repo and share your best aliases with the team.
I just finished adding parameterization support and simple secret management. If you like, you can store things like API keys with ez and have them used by your commands. They are stored in the local macOS Keychain and read from there, which is safer than a plaintext .env file, especially now that LLMs are rummaging through local filesystems.
This little CLI tool is written in Swift, with no dependencies beyond swift-argument-parser. Full TTY passthrough, so interactive tools can be part of aliases as well.
Install: brew tap urtti/ez && brew install ez
Happy to hear what you think and what's missing. I've been personally using this for over a year now, I think it's fun and makes everything feel a bit... easier.
Kintsugi – A desktop app for reviewing Claude Code sessions #
The small team I lead at my company had a cool opportunity to spend several months doing almost anything we wanted, as long as it related to AI-generated code and code review. We ended up building an IDE-like desktop app called Kintsugi - basically a complementary tool to Claude Code that adds a bunch of features to help you work with a CLI agent.
We built it for our needs and for the way we work. More specifically, we care about code quality and security, and we believe that you own the code generated by agents and that it has to be verified. At the same time, we want to ship fast, and Claude Code is really, really good at that. So we decided to create a tool that lets developers ship fast while keeping control over the produced code. Here are the main features:
- Orchestrate parallel agents — see all sessions by status (In Progress, Interrupted, Awaiting Input, Ready for Review) so you know which needs attention
- Review AI code like PRs — leave comments, push back for changes, ask for explanations, run independent AI review
- Plan review — review well-formatted plans with inline comments, like Google Docs (my personal favorite)
- Code quality — integrated Sonar analysis to catch issues locally while developing; connects to your SonarQube Cloud config if you have one
The tool is built almost entirely by Claude Code, and we use Kintsugi itself while building it. It's a prototype — expect rough edges. We have a big list of planned features but need feedback on what to prioritize.
It is macOS-only for now. Linux and Windows builds exist internally, but we're not comfortable giving them to people yet.
Let me be honest: the link leads to a landing page with a sign-up for the download (I know that's not recommended), but since it's a prototype we need a way to send updated versions to users if needed. There is no email verification, no confirmation, and we obviously will not use it for marketing purposes; only as a way to ask for feedback and to provide an updated version if necessary.
Would love honest feedback and happy to chat about the tool.
Ghost – Session memory for Claude Code (local, qmd, Git-integrated) #
This obviously assumes Claude Code is doing most of the heavy lifting on your codebase. If you’re only using it for the occasional function, you probably don’t need this.
I spent a few days hacking on workarounds for this and eventually pulled them together into Ghost. It hooks into Claude Code sessions, summarises them, and indexes everything into QMD https://github.com/tobi/qmd for semantic search.
Next session, relevant context gets injected automatically. What you were working on, what decisions were made, what already failed.
It also keeps a mistake ledger. Things that went wrong get tracked and surfaced as warnings so you stop walking into the same walls.
Sessions are stored as markdown in .ai-sessions/ (gitignored).
Summaries get attached to commits as git notes, so context travels with the code. Everything runs locally; nothing leaves your machine. Built with Bun. Hooks run in under 100ms.
It’s early and rough but anecdotally it feels like it actually works.
Holywell – The missing SQL formatter for sqlstyle.guide #
Try it in the browser: https://holywell.sh Repo: https://github.com/vinsidious/holywell
The site has a bunch of scrollable examples so you can quickly see what the formatted SQL looks like.
Dialect support is pretty basic right now (I’m mostly a Postgres user), but I’d love requests / failing examples for other dialects. Also, PRs are very welcome.
Disclaimer: not endorsed by Simon Holywell. I tried to be faithful to the guide (and where the guide is ambiguous, I had to interpret). Also: I’m not claiming this style is “best” — just that it’s the one I’ve wanted for a long time.
Please share your thoughts and let me know where it falls short!
ClawProxy: An HTTP proxy that injects auth tokens into API calls #
* Put all auth tokens into a secrets directory
* Run OpenClaw in sandbox-exec mode using a shell wrapper. The OpenClaw process is blocked by the OS from accessing secrets.
* OpenClaw routes API requests through an HTTP proxy that injects the auth tokens.
Phonchain – A Mobile-Native Blockchain Secured by Smartphones (Pop-S4) #
Instead of relying on hashpower or stake, Phonchain uses real device participation as the primary source of security, with up to 30,000 independent mobile participants per block.
The network is currently live with: - Public explorer - Gateway/Core node implementation - Bootstrap/seed endpoints - Android wallet under Play Store review
Canonical network anchors: https://github.com/Phoncoin/phoncoin
Reference node software: https://github.com/Phoncoin/phonchain-node
Network explorer: https://explorer.phonchain.org/explorer
Technical feedback is welcome.
MagnetPrompt – visual feature board → AI-ready coding prompts #
Proof of Thought (Pot) #
https://github.com/ekadetov/ekadetov.github.io/blob/main/ass...
Software Design – ADRs, arch tests, patterns #
Architecture Decision Records — 14 real-world examples (Kubernetes KEPs, Rust RFCs, Spotify's practice, Flutter design docs)
Design Verification — tools that enforce architecture rules as tests: ArchUnit (Java), Arkitect (PHP), Konsist (Kotlin), arch-go
Real-World Architecture — case studies from Discord, Shopify, Figma, Stripe, not just theory
I also wrote a longer breakdown of what I filtered out and why: https://dev.to/qdenka/i-curated-106-software-design-resource... Feedback and PRs welcome.
Macrograd – Micrograd, but with Tensors #
Instagit – MCP server that answers questions about any GitHub repo #
The problem: AI coding agents hallucinate library internals constantly. They confidently describe how a function works based on stale training data when the actual implementation does something different.
How it works: you ask a question about a repo, Instagit scans the source, and returns an answer with file paths and line numbers. You can target specific commits, branches, or tags. You can also swap "github" to "instagit" in any repo URL to get an instant wiki with Q&A (e.g. https://instagit.com/pandas-dev/pandas).
The real power though is giving your agent access via MCP rather than being the human in the loop. Point your agent at a large library, have it understand a specific feature, then rip out just the part you need into self-contained code. Drop the dependency entirely, sometimes even get better performance. The agent catches implementation details you'd miss reading the code yourself and that maintainers rarely document.
I get asked this a lot so might as well answer it now: how is this different from Context7, DeepWiki, CodeWiki, or GitHub MCP?
Context7, DeepWiki, and CodeWiki all pre-generate static summaries or guides. They're fast when they have what you need, but they don't cover every repo, they go stale, and there are hundreds of questions about any codebase that can't be pre-answered. GitHub MCP checks out files one at a time, which burns through context tokens fast and doesn't scale to large codebases.
Instagit reads source on demand for any public repo, returns just the answer, and keeps your context clean.
No API key or account needed to try it out: https://instagit.com/install
Preview CoreML video models on any video feed #
WavNav, a desktop app to explore and search large sample libraries #
It’s a desktop app for producers/sound designers who have huge sample folders and want a faster way to find sounds.
What it does:
- analyses your sample folders and maps sounds into a visual space
- lets you search/filter by text, key, and BPM
- supports audio-based similarity search (drop a sound, find related sounds)
- includes hover/click preview modes for quick browsing
It runs on macOS (stable) and Windows (beta).
I’d really value feedback on:
- first-run experience
- performance on very large libraries
- anything confusing in the UI/workflow
If you try it, I’d love to hear what’s useful and what’s not.
Paper Banana – AI academic illustration generator #
I built PaperBanana to help researchers create professional scientific figures and diagrams using AI.
It turns text descriptions or rough sketches into publication-ready illustrations, including system architectures, flowcharts, and schematics.
I’d love to hear your feedback on the output quality and what scientific styles you’d like to see added.
Kumiki – A Bento.me Clone #
So, like any sane engineer in 2026, instead of looking for what was out there, I decided to ask Claude to build me a clone that I could use.
Meet Kumiki (a traditional Japanese woodworking technique that translates to "joining wood together"), your link-in-bio site builder that lets you put your homepage together one block at a time.
This is 100% vibe-coded with Claude Opus 4.6, a 100% serverless app running on Vercel and Supabase.
TextureFast – Generate PBR textures for 3D models in seconds #
I’m a 3D artist working in game dev. I built TextureFast after repeatedly hitting the same issue: we often just needed clean, seamless PBR textures quickly, but existing workflows felt heavy and slow for simple cases.
TextureFast generates PBR maps (albedo, normal, roughness, metallic, and height) directly in the browser. The goal is speed and minimal friction rather than complex manual texturing, especially for fast MVPs and concepts.
Tech stack: Next.js + Three.js
I’d appreciate feedback on texture quality, UI/UX, performance, and whether this fits into your workflow.
Clawlet – Ultra-Lightweight & Efficient Alternative to OpenClaw, Nanobot #
While there are other projects that claim high performance, this framework focuses not only on speed but also on richer functionality and practical efficiency. It aims to provide a more complete and streamlined experience without sacrificing performance.
The Legislative Evil Fund #
https://news.ycombinator.com/item?id=35002648
https://news.ycombinator.com/item?id=14363862
It's not a real ETF. It's a parody.
(BTW, my agents are not running on OpenClaw. They are SEKSBots. SEKS = Secure Environment for Key Services. They can do all of their tasks, including scripting, without having access to sensitive data, like keys.)