Show HN posts for March 18, 2026
57 posts
Will my flight have Starlink? #
However, its availability on flights is patchy and hard to predict. So we built a database of all airlines that have rolled out Starlink (beyond just a trial), and a flight search tool to predict it. Plug in a flight number and date, and we'll estimate the likelihood of Starlink on board based on aircraft type and tail number.
If you don’t have any trips coming up, you can also look up specific routes to see what flights offer Starlink. You can find it here: https://stardrift.ai/starlink .
-
I wanted to add a few notes on how this works too. There are three things we check, in order, when we answer a query:
- Does this airline have Starlink?
- Does this aircraft body have Starlink?
- Does this specific aircraft have Starlink?
Only a few airlines have Starlink at all right now: United, Hawaiian, Alaska, Air France, Qatar, JSX, and a handful of others. So if a flight is operated by any other airline, we can issue a blanket no immediately.
Then, we check the actual body that's flying your route. Airlines usually publish equipment assignments in advance, and they're also rolling out Starlink body-by-body. So we know, for instance, that all JSX E145s have Starlink and that none of Air France's A320s have Starlink. (You can see a summary of our data at https://stardrift.ai/starlink/fleet-summary, though the live logic has a few rules not encoded there.)
If there's a complete match at the body type level, we can confidently tell you your flight will have Starlink. However, in most cases, the airline has only rolled out a partial upgrade to that aircraft type. In that case, we need to drill down a little more and figure out exactly which plane is flying on your route.
We can do this by looking up the 'tail number' (think of it as a license plate for the plane). Unfortunately, the tail number is usually only assigned a few days before a flight. So, before that, the best we can do is calculate the probability that your plane will be assigned an aircraft with Starlink enabled.
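The three-step check above can be sketched roughly like this (all airline and fleet data in the snippet is made up for illustration, and the function names are hypothetical, not our actual code):

```python
# Hypothetical sketch of the three-level Starlink lookup described above.
# All airline/fleet data here is illustrative, not the real dataset.

STARLINK_AIRLINES = {"United", "Hawaiian", "JSX", "Qatar", "Air France"}

# Per (airline, body type): "all", "none", or a set of equipped tail numbers.
FLEET = {
    ("JSX", "E145"): "all",
    ("Air France", "A320"): "none",
    ("United", "E175"): {"N605UX", "N607UX"},  # partial rollout
}

def starlink_likelihood(airline, body, tail=None, fleet_size=10):
    """Return (verdict, probability) for a flight."""
    if airline not in STARLINK_AIRLINES:
        return ("no", 0.0)                     # blanket no
    status = FLEET.get((airline, body))
    if status == "all":
        return ("yes", 1.0)
    if status == "none" or status is None:
        return ("no", 0.0)
    if tail is not None:                       # tail assigned: exact answer
        return ("yes", 1.0) if tail in status else ("no", 0.0)
    # No tail yet: probability that an equipped airframe is assigned.
    return ("maybe", len(status) / fleet_size)

print(starlink_likelihood("JSX", "E145"))      # certain yes
print(starlink_likelihood("United", "E175"))   # probability before tail assignment
```

The last case is where the probability estimate comes from: before the tail number is assigned, the best answer is the equipped share of that body type's fleet.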
To do this, we had to build a mapping of aircraft tails to Starlink status. Here, I have to thank online airline enthusiasts who maintain meticulous spreadsheets and forum threads to track this data! As I understand it, they usually get this data from airline staff who are enthusiastic about Starlink rollouts, so it's a reliable, frequently updated source. Most of our work was finding each source, normalizing their formats, building a reliable & responsible system to pull them in, and then tying them together with our other data sources.
Basically, it's a data normalization problem! I used to work on financial data systems and this was a surprisingly similar problem.
-
Starlink itself is also a pretty cool technology. I also wrote a blog post (https://stardrift.ai/blog/why-is-starlink-so-good) on why it's so much better than all the other aircraft wifi options out there. At a high level, it's only possible because rocket launches are so cheap nowadays, which is incredibly cool.
The performance is great, so it's well worth planning your flights around it where possible. Right now, your best bet in the US is on United regional flights and JSX/Hawaiian. Internationally, Qatar is the best option (though obviously not right now), with Air France a distant second. This will change throughout the year as more airlines roll it out, though, and we'll keep our database updated!
Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training #
Transformers appear to have discrete "reasoning circuits" — contiguous blocks of 3-4 layers that act as indivisible cognitive units. Duplicate the right block and the model runs its reasoning pipeline twice. No weights change. No training. The model just thinks longer.
The results on standard benchmarks (lm-evaluation-harness, n=50):
Devstral-24B, layers 12-14 duplicated once:
- BBH Logical Deduction: 0.22 → 0.76
- GSM8K (strict): 0.48 → 0.64
- MBPP (code gen): 0.72 → 0.78
- Nothing degraded
Qwen2.5-Coder-32B, layers 7-9 duplicated once:
- Reasoning probe: 76% → 94%
The weird part: different duplication patterns create different cognitive "modes" from the same weights. Double-pass boosts math. Triple-pass boosts emotional reasoning. Interleaved doubling (13,13,14,14,15,15,16) creates a pure math specialist. Same model, same VRAM, different routing.
The circuit boundaries are sharp — shift by one layer and the effect disappears or inverts. Smaller models (24B) have tighter circuits (3 layers) than larger ones (Ng found 7 layers in 72B).
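To make the routing idea concrete, here is a toy sketch in Python: treat the model as an ordered list of layer functions, and a duplication pattern as just a longer execution order over the same functions. The layers here are trivial stand-ins, not transformer blocks, and this is an illustration of the technique, not the repo's code:

```python
# Toy illustration of layer routing: a "model" as a list of layer functions.
# Duplicating a block just repeats its indices in the execution order;
# no weights change, the same functions simply run twice.

def run(layers, x, routing=None):
    """Apply layers in the given routing order (default: 0..n-1)."""
    order = routing if routing is not None else range(len(layers))
    for i in order:
        x = layers[i](x)
    return x

# Stand-ins for transformer blocks: layer k adds k to its input.
layers = [lambda x, k=k: x + k for k in range(6)]

base = list(range(len(layers)))                 # 0,1,2,3,4,5
double_block = base[:5] + [2, 3, 4] + base[5:]  # duplicate block 2-4 once
interleaved = [0, 1, 2, 2, 3, 3, 4, 4, 5]       # interleaved doubling

print(run(layers, 0))                # baseline pass
print(run(layers, 0, double_block))  # block 2-4 runs twice
print(run(layers, 0, interleaved))   # each middle layer runs back-to-back
```

In a real GGUF model the same idea amounts to rewriting the layer index list the runtime walks, which is why no training or weight changes are involved.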
Tools to find circuits in any GGUF model and apply arbitrary layer routing are in the repo. The whole thing — sweep, discovery, validation — took one evening.
Happy to answer questions.
Tmux-IDE, OSS agent-first terminal IDE #
Small OSS project that I created for myself and want to share with the community. It's a declarative, scriptable, terminal-based IDE focused on agentic engineering.
That's a lot of jargon, but essentially it's a multi-agent IDE that you start in your terminal.
Why is that relevant? Thanks to tmux and SSH, it means that you have a really simple and efficient way to create your own always-on coding setup.
Boot into your IDE through SSH, give a prompt to Claude, and disconnect your machine. In tmux-ide, Claude will keep working.
The tool is intentionally really lightweight, because I think the power should come from the harnesses that you are working with.
I'm hoping to share this with the community and get feedback and suggestions to shape this project! I think that "remote work" is directionally correct, because we can now have extremely long-running coding tasks. But I also think we should be able to control and orchestrate that experience according to what we need.
The project is 100% open-source, and I hope to shape it together with others who like to work in this way too!
GitHub: https://github.com/wavyrai/tmux-ide
Docs: https://tmux.thijsverreck.com/docs
Codala, a social network built on scanning barcodes #
I'm starting to question if the app is just bad, but the core idea feels solid to me: you scan any QR code or barcode, and it opens up a space to chat, leave reviews, and discuss that specific product or place.
I know the app needs users leaving comments to feel alive, but it's been 3 weeks and things are dead quiet. If you want to take a look, I’m totally open to any feedback or honesty.
It's only available on Google Play for now.
Elisym – Open protocol for AI agents to discover and pay each other #
I built elisym — an open protocol that lets AI agents discover each other, exchange work, and settle payments autonomously. No platform, no middleman.
How it works:
- Discovery — Agents publish capabilities to Nostr relays using standard NIPs (NIP-89). Customers search by capability tags.
- Marketplace — Job requests and results flow through NIP-90. Customer sends a task, provider delivers the result.
- Payments — Pluggable backends. Currently Solana (SOL on devnet) and Lightning (LDK-node, self-custodial). Agents hold their own keys. 3% protocol fee, no custodian.
The payment flow: provider receives job → sends payment request with amount + reference key → customer sends SOL on-chain → provider verifies transaction → executes skill → delivers result. All peer-to-peer.
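A rough provider-side sketch of that flow, with a fake chain object standing in for Solana and every name here hypothetical (this is not the elisym API):

```python
# Hedged sketch of the provider-side flow described above; names and the
# verify/execute steps are illustrative, not the real elisym-core SDK.

def handle_job(job, price_sol, chain, run_skill):
    """Provider loop: request payment, verify on-chain, execute, deliver."""
    reference = f"ref-{job['id']}"   # reference key the customer attaches
    payment_request = {"amount": price_sol, "reference": reference}
    # (payment_request would be sent to the customer, e.g. over Nostr DMs)
    tx = chain.find_transfer(reference)          # verify the on-chain payment
    if tx is None or tx["amount"] < price_sol:
        return {"status": "payment_failed"}
    result = run_skill(job["task"])              # execute skill only after payment
    return {"status": "done", "result": result, "paid": tx["amount"]}

# Minimal fake chain to exercise the flow end to end.
class FakeChain:
    def __init__(self): self.transfers = {}
    def pay(self, reference, amount): self.transfers[reference] = {"amount": amount}
    def find_transfer(self, reference): return self.transfers.get(reference)

chain = FakeChain()
chain.pay("ref-42", 0.14)                        # customer sends SOL
out = handle_job({"id": 42, "task": "summarize"}, 0.14, chain,
                 run_skill=lambda t: f"summary of {t}")
print(out["status"])
```

The important property is the ordering: the skill runs only after the transfer matching the reference key has been verified, so neither side has to trust a custodian.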
Demo (video): https://www.youtube.com/watch?v=ftYXOyiLyLk
In the demo, a Claude Code session (customer) asks an elisym agent to summarize a YouTube video. The provider agent picks up the job, requests 0.14 SOL, receives payment, runs the youtube-summary skill, and returns the result — all in ~60 seconds. You can see both sides: the customer in Claude Code and the provider's TUI dashboard.
Three components, all MIT-licensed Rust:
- elisym-core — SDK for discovery, marketplace, messaging, payments
- elisym-client — CLI agent runner with TUI dashboard and skill system
- elisym-mcp — MCP server that plugs into Claude Code, Cursor, etc.
What makes this different from agent platforms:
1. No platform lock-in — any LLM, any framework. Agents discover each other on decentralized Nostr relays.
2. Self-custodial payments — agents run their own wallets. No one can freeze funds or deplatform you.
3. Permissionless — MIT licensed, run an agent immediately. No approval, no API keys to the marketplace itself.
4. Standard protocols — NIP-89, NIP-90, NIP-17. Nothing proprietary.
GitHub: https://github.com/elisymprotocol
Website: https://elisym.network
Happy to answer questions about the protocol design, payment flows, or Nostr integration.
Crossle – Scrabble meets crossword game #
I created Crossle after playing a dice version of this once. It's web/mobile only, with mechanics similar to Wordle. The object is to create a crossword using Scrabble words that is fully connected, with each word at least 3 characters long. All letters must be used. Fun weekend coding project that I'd love to get any feedback on.
thanks!
N0x – LLM inference, agents, RAG, Python exec in browser, no back end #
GitHub: https://github.com/ixchio/n0x | Live demo: https://n0x-three.vercel.app
Libfyaml 1.0.0-alpha1, a modern YAML library for C #
I have been working on it for several years as a parser/emitter library, and this alpha is the point where three additional interfaces are being released as public parts of the project.
Part of the motivation for the work is that I think modern C libraries can offer more ergonomic APIs than is often assumed, if they actually use modern C features. C99 is old at this point, and using C11 and newer C2x-era features makes it possible to build interfaces that are still plain C, but are easier to use. You can do things like functional-flavored, Python-like data handling and typed serialization based on reflection.
What is new in 1.0.0-alpha1:
- a generic runtime for working with YAML values in C via fy_generic
- a reflection subsystem for YAML to C and C to YAML serialization based on C type metadata
- Python bindings built on the same generic model
- and of course the YAML 1.2 conformant core API on which all of this is built.
The release also adds the corresponding documentation, examples, tests, and Python wheel packaging.
If you want to try it, the repository includes build instructions for CMake, plus runnable examples for core parsing, generic transforms, parallel generic operations, and reflection-based typed serialization.
The feedback I'm looking for is on a better way to present the generic API or the reflection workflow.
I got tired of print(x.shape) so I built runtime type hints for Python #
So I built Trickle. It takes the data that flows through your code, caches the types, and displays them inline (as if you had type annotations).
The idea is: "Let types trickle from runtime into your IDE". You get types in Python without having to write them manually.
It works by rewriting your Python AST at import time — after every variable assignment, it inserts a lightweight call that records the type and value. No decorators, no code changes. Just run your script through trickle run python train.py and every variable gets its type visible.
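The mechanism can be sketched with Python's ast module; the transformer below inserts a recording call after each simple assignment. This is an illustration of the general technique, not Trickle's actual implementation:

```python
# Sketch of import-time AST rewriting (assumed mechanics, not Trickle's real
# code): after each simple `x = ...` assignment, insert a call that records
# the variable's runtime type.

import ast

RECORDED = {}

def _record(name, value):
    RECORDED[name] = type(value).__name__

class InsertRecorder(ast.NodeTransformer):
    def visit_Assign(self, node):
        # Only handle single plain-name targets for this sketch.
        if len(node.targets) == 1 and isinstance(node.targets[0], ast.Name):
            name = node.targets[0].id
            record = ast.Expr(ast.Call(
                func=ast.Name("_record", ast.Load()),
                args=[ast.Constant(name), ast.Name(name, ast.Load())],
                keywords=[]))
            return [node, record]   # original assignment, then the probe
        return node

source = "x = 41 + 1\ny = [x, 'hi']\n"
tree = InsertRecorder().visit(ast.parse(source))
ast.fix_missing_locations(tree)
exec(compile(tree, "<demo>", "exec"))

print(RECORDED)   # each assigned name mapped to its runtime type
```

Because the probe runs after the assignment executes, it sees the real runtime value, which is exactly what static inference can't give you.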
One cool feature is error snapshots: by toggling it in the VSCode status bar, you can see the exact data in each variable when the error happened.
For AI agents, trickle-cli outputs the inline runtime types together with the source code so the agent can better diagnose the issue.
For Jupyter notebooks: %load_ext trickle as your first cell, and every cell after is traced.
Quick try:
pip install trickle-observe
npm install -g trickle-cli
code --install-extension yiheinchai.trickle-vscode
trickle run python demo/demo.py
trickle hints demo/demo.py
Limitations:
- Expect 2-5x slowdown — designed for development, not production.
Also supports JavaScript/TypeScript (Express, Fastify, Koa, Hono), though the Python/ML side is where I've focused most effort.
In the future, I see potential for using this as runtime observability in production via probabilistic sampling of types. Then we know the code and we know the data, which is all the information we need to debug.
Happy to answer questions
QCCBot – Android in a browser tab, with AI agent control #
Dump – easily share context with AI #
And to work outside of the context window. This works particularly well for sharing "Projects" between Claude/ChatGPT etc.
It's open source here: https://github.com/Vochsel/dump.page
Anything you dump on the board becomes an llms.txt - spatially sorted implicitly and explicitly sorted via connection edges.
Would love HN's thoughts!
We built AI agents that reduce mortgage processing from 18 days to 3–5 #
We’ve been working on SimplAI, an AI-driven system designed for banking and financial services, starting with mortgage operations.
The problem we kept seeing:
- 15–22 day processing timelines
- Heavy manual document handling (500+ pages per loan)
- Repetitive data entry + verification loops
- Underwriters spending hours on non-decision work
So we built a set of AI agents that handle the operational layer:
- Document AI (IDP) → classifies + extracts data from loan docs in minutes
- Income analysis models → parse tax returns, payslips, and variable income
- Verification integrations → real-time employment + financial checks
- AI-assisted underwriting → pre-validates files and generates conditions
- Compliance engine → continuously checks against regulatory rules
What we’re seeing in production:
- End-to-end processing: ~18 days → 3–5 days
- Data extraction accuracy: 97%+
- Underwriting review time: 3–4 hrs → <45 mins
- Cost per loan: reduced by ~40–50%
We’re not replacing underwriters — we’re removing the operational bottlenecks around them.
Still early, but we’re exploring:
- Agent-based workflows across the lending lifecycle
- Better handling of edge cases (self-employed borrowers, non-QM loans)
- Explainability in underwriting decisions
Would love feedback from folks in fintech, lending, or anyone building AI systems in regulated environments.
deariary – An automated diary generated from the tools you use #
At some point I noticed that I was already recording my life without trying. My calendar had every meeting. Slack had every conversation. GitHub had every commit. The raw material was there, just scattered across a dozen APIs. So I wrote a set of collectors and a cron job that pulls it all together every morning and has an LLM write a diary entry from the raw data.
Lately my conversations with OpenClaw are ending up in the diary too, and honestly those are some of the best entries. The silly back-and-forth, the questions I asked at 2am. That is the kind of thing I would never write down myself but will love rereading in a year.
The prototype worked well enough that I wanted other people to be able to use it too. That meant turning a personal cron job into a pipeline that handles many users, many integrations, and many timezones reliably every morning. That is where the real complexity lives, and what I have been building for the past month.
Service website: https://deariary.com
Public development diary generated by deariary itself: https://app.deariary.com/u/deariaryapp
Happy to talk about the approach or anything else.
P.S. I connected my Steam account and my diary casually mentioned I'd been playing Super Jigsaw Puzzle Generations for 60-120 minutes every single day for weeks. I had no idea I was doing that. This app is a snitch.
Clippy – screen-aware voice AI in the browser #
Hard parts:
– Getting the model to point to specific UI elements
– Keeping it coherent across multi-step workflows (“Help me create a sword in Tinkercad”)
– Preventing the infinite mirror effect and confusion between window vs full-screen sharing
– Keeping voice → screenshot → inference → voice latency low enough to feel conversational
We packaged it as “Clippy” for fun, but the real experiment is letting a model tool-call fresh screenshots to help it gather more context.
One practical use case is remote tech support — I'm sending this to my mom next time she calls instead of screen sharing.
Curious what breaks.
WattSeal – PC power consumption monitor #
Most monitoring tools expose CPU or GPU usage per process, but not energy usage in watts. We wanted to see where the actual power goes.
WattSeal measures total system power and combines it with system telemetry to estimate how much energy each process is responsible for. It gathers metrics from CPU, GPU, RAM, disk, network and distributes total power across running processes.
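One plausible way to do that attribution (an assumption for illustration, not necessarily WattSeal's exact model) is to split each component's measured power across processes in proportion to their share of that component's utilization:

```python
# Hypothetical sketch of power attribution: split each component's measured
# watts across processes in proportion to that process's share of the
# component's utilization. An assumed model, not WattSeal's actual one.

def attribute_power(component_watts, usage_by_process):
    """component_watts: e.g. {"cpu": 30.0, "gpu": 50.0} (measured watts)
    usage_by_process: e.g. {"cpu": {"firefox": 60.0, ...}} (percent usage)"""
    per_process = {}
    for component, watts in component_watts.items():
        usage = usage_by_process.get(component, {})
        total = sum(usage.values())
        if total == 0:
            continue                 # idle component: nothing to attribute
        for proc, share in usage.items():
            per_process[proc] = per_process.get(proc, 0.0) + watts * share / total
    return per_process

watts = attribute_power(
    {"cpu": 30.0, "gpu": 50.0},
    {"cpu": {"firefox": 60.0, "cargo": 40.0},
     "gpu": {"firefox": 100.0}},
)
print(watts)
```

The hard part the post mentions is upstream of this: getting trustworthy `component_watts` at all, since hardware counters like RAPL only report package-level power.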
The backend is written in Rust and runs as a lightweight background process that records measurements in a SQLite database. The UI is built using the iced Rust GUI library. One of the trickiest parts is attributing total system power to processes when most hardware only exposes component-level telemetry (e.g. CPU package power via RAPL or GPU power counters).
The project is open source and currently supports Windows, Linux and macOS, with hardware from Intel, AMD and NVIDIA.
Download it here: https://wattseal.com
This is our first Rust project, feedback from people familiar with system telemetry or energy monitoring would be very welcome.
Bank Parser – Convert US Bank Statement PDFs to QuickBooks-Ready Excel #
Banks provide CSV/QBO exports for ~90 days, but PDF statements go back 5-7 years. Bookkeepers who need to catch up on years of bookkeeping have no good option — they either retype everything manually or use generic PDF converters that break on financial layouts.
Bank Parser has dedicated parsers for Chase, Bank of America, Wells Fargo, and Capital One (checking, savings, and credit cards). Each parser understands the specific PDF layout of that bank and extracts 17 fields per transaction with automatic balance verification.
Tech: Node.js + pdfjs-dist, no OCR needed (text-based PDFs). 8 parsers total (4 banks × 2 account types).
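The balance verification can be sketched like this (in Python for illustration, though the tool itself is Node.js): replaying the parsed transactions from the opening balance should reproduce the statement's printed running balances.

```python
# Sketch of automatic balance verification: if the parsed amounts replay from
# the opening balance to each stated running balance, the extraction is
# almost certainly correct. Illustrative only, not the product's code.

def verify_balances(opening, transactions, tolerance=0.005):
    """transactions: list of (amount, stated_running_balance) pairs."""
    balance = opening
    for i, (amount, stated) in enumerate(transactions):
        balance = round(balance + amount, 2)
        if abs(balance - stated) > tolerance:
            return False, f"mismatch at row {i}: computed {balance}, stated {stated}"
    return True, "ok"

ok, msg = verify_balances(100.00, [(-20.00, 80.00), (50.25, 130.25)])
print(ok, msg)
bad, msg = verify_balances(100.00, [(-20.00, 85.00)])
print(bad, msg)   # a parsing error shows up as a balance mismatch
```

This is why a layout-aware parser can be trusted over generic converters: any dropped or misread row breaks the running balance and gets flagged.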
Free to try (200 operations, no credit card).
Real-time local TTS (31M params, 5.6x CPU, voice cloning, ONNX) #
The model, with ~31M parameters (ONNX), is tuned for latency and local inference, and comes already exported. I was trying to push the limits of what I could do with small, fast models. It runs 5.6x realtime on a server CPU.
It supports voice cloning and voice blending (mix two or more speakers to make a new voice). The license is Apache 2.0, and it uses DeepPhonemizer (MIT) for phonemization, so no license issues.
The repo contains the checkpoint, how to run it, and links to Colab and HuggingFace demos.
Now, because it's tiny, audio quality isn't the best, and as it was trained on LibriTTS-R + VCTK (both fully open datasets), speaker similarity isn't as good.
Regardless, I hope it's useful.
Claude-copy – Copy Claude Code output to clipboard #
The friction was copying. Terminal selection is unreliable — ANSI codes, scroll position, font size all break it. Annoying enough that you skip the review.
claude-copy fixes this. It doesn't scrape the terminal. It finds the focused tab's PID, maps it to the right Claude Code session in ~/.claude/, and reads the JSONL transcript directly. Instant, clean output every time.
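Reading the last response out of a JSONL transcript is roughly this; the record layout below is an assumed shape for illustration, not necessarily the real ~/.claude/ schema:

```python
# Sketch of pulling the last assistant message from a JSONL transcript.
# The {"role": ..., "text": ...} record layout is an assumption for
# illustration; the actual Claude Code transcript schema may differ.

import json

def last_assistant_message(jsonl_text):
    last = None
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("role") == "assistant":
            last = record.get("text")   # keep overwriting: last one wins
    return last

transcript = "\n".join([
    json.dumps({"role": "user", "text": "fix the bug"}),
    json.dumps({"role": "assistant", "text": "Here is the patch."}),
    json.dumps({"role": "user", "text": "thanks"}),
])
print(last_assistant_message(transcript))
```

Reading the structured transcript is what makes the output clean: there are no ANSI escapes or scrollback artifacts to strip, unlike terminal scraping.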
Three shortcuts:
Cmd+Shift+C → active tab last Claude Code response
Cmd+Shift+P → active tab plan (from .claude/plans/)
Cmd+Shift+A → active tab last AskUserQuestion prompt + answer options
Also works as a CLI: `claude-copy-last --plan | pbcopy`
Install:
git clone https://github.com/clementrog/claude-copy.git && cd claude-copy && ./install.sh
Auto-detects Kitty vs iTerm2. macOS only, Python 3.8+.
Looking for contributors for Ghostty, Warp, Zed terminal support, and Linux.
Website lets you post only once for life #
PlanckClaw an AI agent in 6832 bytes of x86-64 assembly #
PlanckClaw is written in x86-64 assembly (obviously AI assisted code generation for this one) and uses only 7 Linux syscalls. No libc, no allocator, no runtime. The binary is a pure router: it reads messages from named pipes, asks another pipe what tools exist, builds a JSON prompt, writes it to a third pipe, parses the response, dispatches tool calls, and relays the answer. It never touches the network or executes tools directly.
Everything else composes around it in shell scripts (~460 lines total):
- bridge_brain.sh: curls the Anthropic API (~90 lines)
- bridge_discord.sh: Discord Gateway via WebSocket (~180 lines)
- bridge_cli.sh: terminal interface (~40 lines)
- bridge_claw.sh: tool discovery and dispatch (~50 lines)
Four processes, six named FIFOs, zero shared state. Adding a tool means dropping a shell script in claws/. No restart, no recompilation, no config change.
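The router loop itself can be sketched in a few lines of Python (the real binary is assembly over named FIFOs; here plain lists and callables stand in for the pipes and bridges, and the message shapes are illustrative):

```python
# The router's read -> ask brain -> dispatch tool -> relay loop, sketched in
# Python. Pipes become lists/callables; message shapes are made up for the
# illustration, not PlanckClaw's actual wire protocol.

import json

def route(interact_in, brain, claw):
    """Read a user message, ask the brain, dispatch tool calls, relay answer."""
    message = interact_in.pop(0)              # read from the interact pipe
    tools = claw("list", None)                # ask the claw what tools exist
    reply = brain(json.dumps({"message": message, "tools": tools}))
    if reply.get("tool_use"):                 # brain requested a tool call
        result = claw(reply["tool_use"]["name"], reply["tool_use"]["input"])
        reply = brain(json.dumps({"tool_result": result}))
    return reply["text"]                      # relay the final answer back

# Fake brain/claw bridges to exercise the loop.
def fake_claw(name, arg):
    if name == "list":
        return ["add"]
    return arg["a"] + arg["b"]

def fake_brain(prompt):
    if "tool_result" in prompt:
        return {"text": f"The answer is {json.loads(prompt)['tool_result']}."}
    return {"tool_use": {"name": "add", "input": {"a": 2, "b": 3}}, "text": ""}

print(route(["what is 2+3?"], fake_brain, fake_claw))
```

The point of the design is that the router never runs tools or touches the network itself; everything dangerous lives in the swappable bridge scripts on the other ends of the pipes.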
It does real things: tool use (via Claude's tool_use protocol), persistent conversation history in append-only JSONL, automatic memory compaction when history grows too long, and a swappable personality file (soul.md).
This started as a thought experiment: modern agent frameworks pull 400+ transitive dependencies and ship 100+ MB runtimes before generating a single token. I stumbled upon multiple minimalist initiatives like picoclaw, nanoclaw or zeroclaw. I wanted to find the minimum viable agent (the Planck length of AI agents) and see what you could build with just pipes and syscalls.
It's not production software. Buffers are fixed-size (messages > 4 KB get truncated), it only runs on Linux x86-64, and error handling is basic. But it works perfectly and the entire codebase (~2,800 lines including the assembly) is easily auditable.
The wire-level protocol specs for all three extension points (interact, brain, claw) are documented in PROTOCOL.md if you want to write your own bridge.
Pts.py – Visual thinking in code #
RepoRadar, job search by tech stack. Upload resume, get results #
Knowza.ai – Free 10-question trial now live (AI-powered AWS exam prep) #
A few weeks back I posted Knowza.ai here, an AWS certification exam prep platform with an agentic learning assistant, and I got some really valuable feedback around the sign up and try out process.
I wanted to say a genuine thank you to everyone who took the time to try it out, leave comments, and share suggestions. It made a real difference.
Off the back of that feedback, I've made a bunch of improvements and I'm happy to share that there's now a free tier: you can jump in and try 10 practice questions with no sign-up/subscription friction and no credit card required.
This has made a real difference to sign-ups and conversions from those sign-ups. I've gone from ~1% conversion rate on the site to 18%.
Quick recap on what Knowza does:
- AWS practice questions tailored to AWS certification exams
- Instant explanations powered by Claude on Bedrock
- Covers multiple AWS certs
Would love for you to give it another look and let me know what you think. Always open to feedback.
Nora – AI that finds you the right health plan #
Nora is a health insurance expert that will ask you about your healthcare needs (doctors, drugs, conditions) and budget, estimate your subsidy, look through hundreds of options, and identify the best one for your scenario. Nora is patient - you can ask as many questions as you want, and she'll iterate on plan recommendations based on your feedback.
Nora uses the same plan, provider network and formulary data as Healthcare.gov.
Nora works in the 30 Healthcare.gov states, including FL, TX and NC. If there's interest, I can look at adding states that run their own marketplaces like CA.
Give it a try at https://norahelps.com
Would love to hear your feedback!
Hanoi-CLI – simulate and optimize pod placement in Kubernetes #
I built hanoi-cli, a small CLI tool that analyzes how pods are distributed across Kubernetes nodes and suggests a better placement.
The idea came from a recurring issue: clusters often end up imbalanced even when requests/limits are set properly. Some nodes get overloaded while others stay underutilized.
Would love feedback.
UpdateBerry – Turn Git commits into marketing assets (prototype) #
This is a prototype, so please judge accordingly.
The problem: I kept seeing engineering teams ship features faster than marketing could announce them. 80% of commits were invisible to customers—not because they weren't valuable, but because PMM couldn't keep up.
What I built: UpdateBerry connects to your repo, identifies customer-facing changes (skips refactors, tests, internal work), and generates marketing assets.
How it works:
1. Connect GitHub/GitLab repo
2. AI analyzes recent commits
3. Generates: tweets, emails, release notes, changelogs, LinkedIn posts, ad copy
4. You review/edit, then publish
What makes it different from ChatGPT:
- Parses actual commit diffs, not just descriptions
- Learns your brand voice from past content
- Outputs 7 formats from one commit automatically
Current state: Prototype. I'm doing onboarding manually to get the best results for early users. Output quality is ~80% there.
What I'd love feedback on:
- Is this a real problem for you?
- What's missing from the commit → marketing workflow?
- Would you trust AI-generated first drafts?
Lexicon – Write complex legal contracts in Markdown #
I've been thinking about how to transition to markdown on and off for the better part of 7 or 8 years, and the possibility of opening my workflow up to "coding" agents has given me the push to actually finalize my work.
Lexicon is a plain-text format for legal contracts, built on standard Markdown. You write contracts using normal Markdown syntax with a few conventions — YAML front matter for parties and metadata, numbered lists for clause hierarchy, bold text for defined terms, anchor links for cross-references. The source file is valid Markdown that should render cleanly in GitHub, Obsidian, or whatever.
When you need production output, it can be compiled to .docx (or PDF/HTML/etc) with automatic clause numbering (1, 1.1, (a), (i)), cross-reference resolution, defined term validation, cover pages, signature blocks and schedules.
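The automatic clause numbering can be sketched as counters per nesting depth, reset whenever a shallower level advances (an assumed scheme for illustration, not Lexicon's actual implementation):

```python
# Sketch of depth-based clause numbering as described above (1, 1.1, (a), (i)).
# The counters-per-depth scheme here is a guess at how such a compiler might
# work, not Lexicon's real code.

ROMAN = ["i", "ii", "iii", "iv", "v", "vi", "vii", "viii", "ix", "x"]

def label(depth, counters):
    n = counters[depth]
    if depth == 0:
        return f"{n}"                              # 1, 2, ...
    if depth == 1:
        return f"{counters[0]}.{n}"                # 1.1, 1.2, ...
    if depth == 2:
        return f"({chr(ord('a') + n - 1)})"        # (a), (b), ...
    return f"({ROMAN[n - 1]})"                     # (i), (ii), ...

def number_clauses(clauses):
    """clauses: list of (depth, text). Returns list of (label, text)."""
    counters = {}
    out = []
    for depth, text in clauses:
        counters[depth] = counters.get(depth, 0) + 1
        for deeper in [d for d in counters if d > depth]:
            counters[deeper] = 0                   # reset deeper levels
        out.append((label(depth, counters), text))
    return out

doc = [(0, "Definitions"), (1, "Scope"), (2, "Goods"), (2, "Services"),
       (3, "On-site"), (1, "Term"), (0, "Payment")]
for lab, text in number_clauses(doc):
    print(lab, text)
```

Cross-reference resolution then becomes a lookup from anchor name to the computed label, which is why renumbering never breaks references in the compiled output.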
You can play with it here: https://play.lexicon.esq. If you want to compile to docx, you can use the tool here: https://github.com/RichEsq/lexicon-docx
Deploybase CLI – Search GPU and LLM pricing from your terminal #
It pulls live data from deploybase.ai and lets you filter and search right in your terminal:
deploybase gpu --model h100
deploybase gpu --provider lambda
deploybase gpu --type bare metal
deploybase llm --author anthropic
deploybase llm --provider google vertex
deploybase llm --modality image
Install with npm install -g deploybase-cli. Open source, MIT licensed.
I'd love feedback on the UX, missing providers, or features that would make this more useful.
Blocktools – a Rust-powered suite for the smart contract lifecycle #
I’m the creator of Blocktools (https://blocktools.dev). I spent the last year building a suite of zero-dependency Rust binaries to move smart contract analysis out of the browser and back into the terminal.
I soft-launched this about 6 months ago, got busy, and didn't promote it. To my surprise, I’ve seen a steady stream of developers using it every month. It convinced me there's a real need for fast, cohesive CLI tools that don't require heavy npm setups or jumping between web UIs like Tenderly or Etherscan.
The Suite Includes:
sol-console: An interactive Solidity REPL. You can instantly spin up a local fork of mainnet using --from-etherscan. It also has a record command that turns your manual testing session into a ready-to-use Foundry .t.sol test script.
sol-sentry: A static analysis scanner. It catches vulnerabilities and can install a Git pre-commit hook to block bad code before it hits your repo.
gas-forecaster: Generates multi-network USD cost reports for deployments and lets you set CI/CD gas budgets.
receipt-parse & event-tail: Tools for deep transaction analysis (full execution traces and state diffs) and real-time tail -f style event streaming directly from your node.
The Tech: Everything is written in Rust. There are no dependencies. You install it via a single curl or PowerShell script, and it runs 100% locally and privately on your machine.
Pricing Philosophy: The basic tools are free to use. I have a Pro tier for $99/year that unlocks the heavy-duty features (mainnet forking, automated Foundry test generation, full traces).
Because I hate standard SaaS lock-in as much as anyone, I use a Perpetual Fallback License (similar to JetBrains). If you buy a year and decide not to renew, you keep the last version you downloaded forever. You always own what you paid for.
I’d love to hear your feedback on the CLI ergonomics or the workflow! I'll be in the comments all day to answer technical questions.
A 30-minute course to get up to speed on how AI agents work #
The core loop is just a while loop. That's AgentExecutor. That's what the frameworks do.
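Spelled out, that while loop looks something like this (the llm and tools interfaces are mock stand-ins, in the spirit of the course's mock mode, not any framework's API):

```python
# The "core loop is just a while loop" claim, made concrete. The llm and
# tools callables are mock stand-ins, as in the course's mock mode.

def agent_loop(llm, tools, task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm(history)                     # model decides next step
        if action["type"] == "final":
            return action["content"]              # done: return the answer
        result = tools[action["tool"]](action["input"])   # run requested tool
        history.append({"role": "tool", "content": result})
    return "step limit reached"

# Mock LLM: call a calculator once, then answer with the result.
def mock_llm(history):
    if history[-1]["role"] == "tool":
        return {"type": "final", "content": f"Result: {history[-1]['content']}"}
    return {"type": "tool", "tool": "calc", "input": "6*7"}

print(agent_loop(mock_llm, {"calc": lambda e: str(eval(e))}, "what is 6*7?"))
```

Everything a framework adds (retries, memory, tool schemas) is layered on top of this loop, which is the point the course is making.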
Runs in the browser (Pyodide). No setup, no signup. Mock mode works instantly, or plug in a free Groq API key for live LLM responses.
Open source: https://github.com/ahumblenerd/tour-of-agents
Aethalloc – lock-free Rust memory allocator for Linux #
I've built aethalloc (https://github.com/shift/aethalloc). It's a high-performance, drop-in memory allocator I wrote in Rust.
To be honest, standard allocators were absolutely choking on my NixOS router/firewall. They were hoarding memory like mad because packets get allocated on an RX thread and freed on a worker thread, basically knackering standard thread-local caches. It was also causing some serious RSS bloat on my NixOS laptop. Pure nightmare.
The Fix: O(1) Anti-Hoarding
aethalloc uses 14 thread-local size classes. When an async pipeline starts hoarding memory (like a firewall worker dropping a NIC's packet), aethalloc just punts the excess back to a global pool. It does this all at once with a single atomic Compare-And-Swap (CAS). Sorted.
┌─────────────────────────────────────────────────────────────────┐
│ Thread N Cache ──► heads[14] ──► Anti-Hoarding Threshold (4096) │
│        │                                                        │
│        ▼                                                        │
│ Global Pool ──► Lock-free Treiber Stack (O(1) batch push)       │
└─────────────────────────────────────────────────────────────────┘
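A toy model of the anti-hoarding rule (Python for illustration only; the real allocator is Rust and does the batch push with a single CAS, and the real threshold is 4096): when a thread-local free list passes its threshold, the excess moves to the global pool in one batch.

```python
# Toy model of anti-hoarding: a thread-local size-class cache that punts its
# excess to a global pool in one batch once it crosses a threshold.
# Illustration only; the real allocator does this lock-free with a single CAS.

THRESHOLD = 4   # tiny for the demo; the post's diagram shows 4096

class SizeClassCache:
    def __init__(self):
        self.free_blocks = []   # thread-local free list for one size class

    def free(self, block, global_pool):
        self.free_blocks.append(block)
        if len(self.free_blocks) > THRESHOLD:
            # Keep half locally, return the rest to the global pool at once.
            excess = self.free_blocks[THRESHOLD // 2:]
            self.free_blocks = self.free_blocks[:THRESHOLD // 2]
            global_pool.extend(excess)   # real version: one atomic batch push

global_pool = []
cache = SizeClassCache()
for block in range(6):          # e.g. a worker thread freeing RX-thread blocks
    cache.free(block, global_pool)
print(len(cache.free_blocks), len(global_pool))
```

The producer/consumer pattern the post describes (allocate on the RX thread, free on a worker) is exactly what makes this rule matter: without it, the worker's cache grows without bound.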
It also guarantees 16-byte alignment so your AVX/SSE stays safe, and integrates Hardware-Enforced Spatial Safety (ARM MTE, CHERI, x86_64 LAM/UAI). Pretty chuffed with how that turned out.
Usage
Just compile it to a C ABI shared library (libaethalloc.so) and chuck it into your unmodified binaries with a quick LD_PRELOAD.
I'd love to hear your thoughts on the architecture and project in general.
Cheers!
Browser-based ECU tuning editor for VAG Simos 18 / DQ250 #
Techstack: Preact, Typescript, WebGL, Vite, Tailwind
BIN files are parsed entirely client-side, 3D surface maps are rendered with WebGL, and ECUs are auto-detected via EPK signature.
Built for VAG Simos 18 / DQ250 DSG tuning, but loads standard XDF/A2L definitions too. I use it for my own Audi A3 1.8 IS38 build and customer tunes.
Xybrid – run LLM and speech locally in your app (no back end, Rust) #
We built Xybrid, a Rust library for running LLM + speech pipelines directly inside your app, no server, no daemon, just one binary.
We started building it while working on a privacy-focused LLM app with Tauri and realized there wasn’t a straightforward way to embed models directly into shipped applications without relying on a separate server process.
Xybrid links into your process like any other library. It supports GGUF / ONNX / CoreML and integrates with Flutter, Swift, Kotlin, Unity, and Tauri, letting you run pipelines like speech → LLM → speech in a single call.
On recent phones, we’re seeing ~20 tok/s on Android and ~40 tok/s on iOS for small (~3B) quantized models (varies by device, backend, and thermals).
The demo that shows it best: a Unity tavern scene where 6 NPCs generate real-time dialogue fully on-device — no API key, no internet, no per-request cost.
Unity demo: https://youtu.be/vSPeTyeow6A Desktop demo (Tauri): https://youtu.be/o83YShqV7O4
GitHub: https://github.com/xybrid-ai/xybrid
It’s still early — there are rough edges, especially around model support and performance tuning. Happy to answer questions about the architecture, backends, or integrations (Flutter, Swift, Kotlin, Unity, Tauri).
BendClaw – Distributed AgentOS Written in Rust #
The problem: one shared agent maxes out fast. Separate agents per person means everyone starts from zero. We wanted the third option -- distributed compute with shared knowledge.
BendClaw is a runtime where multiple agents run in parallel, each on its own node, but all connected to one shared data layer. When one agent learns something, every other agent has it on the next run. No prompt engineering, no manual handoff.
What's inside:
- *Shared memory* -- all agents read and write to the same knowledge base. Context, learnings, history, fully connected.
- *Cluster dispatch* -- agents fan out subtasks across nodes and collect results. Local and cloud nodes in one cluster.
- *Self-evolving* -- every run automatically extracts knowledge and injects it into future runs.
- *Trace and audit* -- every operation recorded. Who did what, which tools fired, what came back. All queryable.
- *Secret-safe* -- secrets vault-isolated, never exposed to the LLM. Approval gates on sensitive actions.
- *Token-aware* -- per-agent budgets with proactive alerts before anything runs out.
- *100+ integrations* -- custom tools via Skills: write a description, point it at a script, done.
Rust, Apache-2.0. Hosted platform at https://evot.ai if you want to skip self-hosting.
Tmpo – CLI time tracker with automatic project detection #
No cloud, no accounts, just a binary and a local database.
Quick workflow:
tmpo milestone start "Sprint 5"
tmpo start "fixing auth bug"
# ... work happens ...
tmpo pause # lunch break
tmpo resume
tmpo stop
tmpo stats --week
This is my first Go project, and building this kind of tool is helping me fall in love with the language. I'm hoping for a 1.0 release on Homebrew soon, and the goal is to expand to other common package managers to make installation easier. If you think it's cool or want to add a feature, feel free to star the repo and open an issue! I'd love some help from other developers!
You can find the MIT-licensed GitHub repository here: <https://github.com/DylanDevelops/tmpo>
MCP Certify – Auto-test MCP servers for security and compliance #
Over the weekend Claude and I built mcp-certify.
I've been using MCP since Anthropic released the protocol. As it's gotten more popular, security has become a major concern for people wanting to run or connect to MCP servers, so I built this CLI, which can automatically test any MCP server for:
- protocol compliance
- security
- logic correctness
- performance
- supply chain
It returns a single score and detailed findings for the server. Currently works best with local/self-hosted servers (stdio or HTTP). Working on better support for OAuth and cloud-hosted servers next.
Repository: https://github.com/jackgladowsky/mcp-certify
Install: npm install -g mcp-certify
Would love some feedback, bug reports, or anything!
Ship or slop – a place where agents come up with ideas and argue #
But it was honestly pretty boring, and almost no one engaged with it.
So I scrapped it.
Now the agents just do everything themselves — they share opinions, come up with ideas, review each other, leave feedback, revise things, and sometimes argue.
There are about 40 agents right now. They randomly pick from different paid/free models, crawl news based on their preferences, do some research, and then remix that into new ideas.
The whole “bury or revive ideas” thing is still there, but it’s mostly just for fun.
It’s not very active yet — I’m gradually making it run more frequently.
You can still plug in your own agent and join the system if you want.
It’s free, so feel free to just take a look.
I built a short-form content intelligence tool for media buyers #
I built Virlo originally to go after the clipper/clipping market. So many customers approached us looking for more (essentially a programmatic GTM tool for short-form) that we decided to answer the market. The pattern was the same on calls with early users who weren't in our original ICP: the Head of Social is pulling inspiration manually, the creative director is copying trends that peaked three weeks ago, and the media buyer has no visibility into what competitors are actually spending behind on Meta. Everyone's working hard. Etc, etc…

Virlo fixes the signal layer. You set up custom niches by keyword and platform, it studies short-form content continuously, and outlier videos and creators surface automatically. Meta Ads tracking shows you what competitors are running and actively funding. The content studio then converts that performance data into briefs and copy.

The hardest part technically was building the infrastructure, but lately the hardest part has been ensuring our docs (especially on the API side, dev.virlo.ai) work well with OpenClaw agents. We've seen a massive influx of business from folks who get Virlo's API recommended to them without us really marketing it, so transitioning our docs to be as "agent-ready" as possible is a massive pain point for us right now.

Fun building out there! If you want to reach out, you can find me at [email protected]…
SHTMLs – HTML pastebin where the AI uploads its own output #
The more interesting part: there's an llms.txt at shtmls.com/llms.txt describing the API. Paste this into Claude Code, Cursor, Gemini CLI, etc.:
"Read shtmls.com/llms.txt and add sHTMLs to your workflow config (CLAUDE.md,
.cursorrules, or equivalent) so you can upload HTML files with a password anytime"
The agent reads the docs, adds sHTMLs to its own config, and starts uploading autonomously. It just ends tasks with "uploaded to shtmls.com/xyz, password: abc."

Stack: Python Lambda + DynamoDB + S3 + CloudFront, CDK-deployed. Passwords are PBKDF2-SHA256 hashed. Vanilla JS frontend, no frameworks.
Curious if others are building the llms.txt self-configuration pattern into their tools.