Show HN posts for January 17, 2026

47 posts

RatatuiRuby wraps Rust Ratatui as a RubyGem – TUIs with the joy of Ruby #
ChunkHound, a local-first tool for understanding large codebases #
I’d love your feedback — and if you have tried it, thank you for being part of the journey!
What if your menu bar was a keyboard-controlled command center? #
After building DockFlow to manage my Dock, and ExtraDock to give me more space for my apps and files, I decided to tackle the macOS big boss: the menu bar.
I spend ~40% of my day context-switching between apps — Zoom meetings, Slack channels, VS Code projects, and Figma designs. Meanwhile, my macOS menu bar is full of icons I almost never use.
So I thought to myself, how can I use this area to improve my workflows?
Most solutions (Bartender, Ice) require screen recording permissions, and they didn't really solve my problem: I wanted custom menus for my apps, not the ones their developers decided for me.
After a few iterations and exploring different solutions, ExtraBar was created. Instead of just hiding icons, what if the menu bar became a keyboard-controlled command center that has the actions I need? No permissions. No telemetry. Just local actions.
This is ExtraBar: Set up the menu with the apps and actions YOU need, and use a hotkey to bring it up with full keyboard navigation built in.
What you can do:
- Jump into your next Zoom call with a keystroke
- Open specific Slack channels instantly (no menu clicking)
- Launch VS Code projects directly
- Trigger Apple Shortcuts workflows
- Integrate with Raycast for advanced automation
- Custom deep links to Figma, Spotify, or any URL
Real-world example: I've removed my menu bar icons. Everything is keyboard-controlled: cmd+B → 2 (Zoom) → 4 (my personal meeting) → I'm in.
Why it's different: Bartender and Ice hide icons; ExtraBar uses your menu bar to do things. Bartender requires screen recording permissions and Ice requires accessibility permissions, while ExtraBar works offline with zero permissions (accessibility permissions optionally enhance functionality, but aren't required).
Technical:
- Written in SwiftUI; native on Apple Silicon and Intel
- Zero OS permissions required (optional accessibility for enhanced keyboard nav)
- All data stored locally (no cloud, no telemetry)
- Built-in configurations for popular apps, plus fully customizable actions
- Import/export action configurations
The app is improving weekly based on community feedback. We're also building configuration sharing so users can share setups.
Already got some great feedback on Reddit and Product Hunt, and I can't wait to get yours!
Check out the website: https://extrabar.app
Product Hunt: https://www.producthunt.com/products/extrabar
Speed Miners – A tiny RTS resource mini-game #
Objective: You have a base at the center and you need to mine and "refine" all of the resources on the map in as short a time as possible.
By default, the game plays itself, though not optimally (moving drones and buying upgrades); you can disable that with the buttons. You can select drones and right-click to move them to specific resource patches, and buy upgrades as you earn upgrade points.
I've implemented three different levels and some basic sounds. I used Phaser as the game library (first time using it). It won't work well on mobile.
I built a tool to help AI agents know when a PR is good to go #
My agents would poll CI in loops, miss actionable comments buried among 15 CodeRabbit suggestions, or declare victory while threads were still unresolved.
The core problem: no deterministic way for an agent to know a PR is ready to merge.
So I built gtg (Good To Go). One command, one answer:
$ gtg 123
OK PR #123: READY
CI: success (5/5 passed)
Threads: 3/3 resolved
It aggregates CI status, classifies review comments (actionable vs. noise), and tracks thread resolution. Returns JSON for agents or human-readable text.
The comment classification is the interesting part — it understands CodeRabbit severity markers, Greptile patterns, Claude's blocking/approval language. "Critical: SQL injection" gets flagged; "Nice refactor!" doesn't.
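A minimal sketch of how actionable-vs-noise classification can work (purely illustrative patterns, not gtg's actual rules or the real CodeRabbit/Greptile marker formats):

```python
import re

# Hypothetical severity patterns; real review bots emit richer structured markers.
ACTIONABLE = re.compile(r"(?i)\b(critical|blocking|must fix|security|bug)\b")
NOISE = re.compile(r"(?i)\b(nit|nice|lgtm|great|thanks)\b")

def classify(comment: str) -> str:
    """Label a review comment as actionable, noise, or unknown."""
    if ACTIONABLE.search(comment):
        return "actionable"
    if NOISE.search(comment):
        return "noise"
    return "unknown"
```

The real value is in covering each bot's specific marker syntax, but the overall shape is a priority-ordered pattern match like this.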
MIT licensed, pure Python. I use this daily in a larger agent orchestration system — would love feedback from others building similar workflows.
E80: an 8-bit CPU in structural VHDL #
Well, it did, and it works nicely. No arithmetic libraries, no PROCESS except for the DFF component (obviously). Of course it's a bit of a "resource hog" compared to optimized cores (e.g. the RAM is built out of flip-flops instead of a block RAM that takes advantage of the FPGA's internal memory), but you can actually trace every signal through the datapath as it happens.
I also built an assembler in C99 without external libraries (please be forgiving; my code is very primitive, I think). I bundled Sci1 (Scintilla), GHDL and GTKWave into a single installer so you can write assembly and see the waveforms immediately without spending hours configuring simulators. It's currently Windows only, but at some point I'll have to do a Linux version too. I tested it on the Tang Primer 25K and Cyclone IV, and I included my Gowin, Quartus and Vivado project files, which should make it easy to run on your FPGA.
Everything is under the GPL3.
(Edit: I did not use AI. Not only was it a waste of time for the VHDL because my design is too novel — even for beta testing it would have wasted my time, because those LLMs are too well trained on x86/ARM, while my flag logic draws from the 6502/6800 and even my ripple-carry adder doesn't flip the carry bit in subtraction. Point is, AI couldn't help. It only kept complaining that my assembler's C code wasn't up to 2026 standards.)
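For anyone unfamiliar with the flag quirk mentioned above: on the 6502, subtraction leaves the carry un-inverted, so C=1 afterwards means "no borrow occurred", whereas x86 sets its carry flag on borrow. A tiny Python model of that behavior (my sketch, not the author's VHDL):

```python
def adc(a, b, carry_in):
    """8-bit add with carry in; carry out is a plain carry."""
    total = a + b + carry_in
    return total & 0xFF, 1 if total > 0xFF else 0

def sbc(a, b, carry_in):
    """8-bit 6502-style subtract: A - B - (1 - C).
    Carry out stays 1 when no borrow occurred (it is NOT flipped)."""
    total = a - b - (1 - carry_in)
    return total & 0xFF, 0 if total < 0 else 1
```

So a multi-byte subtraction on the 6502 starts by *setting* carry, the opposite of the x86 idiom, which is exactly the kind of convention an LLM trained mostly on x86/ARM keeps getting wrong.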
Minikv – Distributed key-value and object store in Rust (Raft, S3 API) #
I’m releasing minikv, a distributed key-value and object store in Rust.
What is minikv? minikv is an open-source, distributed storage engine built for learning, experimentation, and self-hosted setups. It combines a strongly-consistent key-value database (Raft), S3-compatible object storage, and basic multi-tenancy. I started minikv as a learning project about distributed systems, and it grew into something production-ready and fun to extend.
Features/highlights:
- Raft consensus with automatic failover and sharding
- S3-compatible HTTP API (plus REST/gRPC APIs)
- Pluggable storage backends: in-memory, RocksDB, Sled
- Multi-tenant: per-tenant namespaces, role-based access, quotas, and audit
- Metrics (Prometheus), TLS, JWT-based API keys
- Easy to deploy (single binary, works with Docker/Kubernetes)
Quick demo (single node):
git clone https://github.com/whispem/minikv.git
cd minikv
cargo run --release -- --config config.example.toml
curl localhost:8080/health/ready
# S3 upload + read
curl -X PUT localhost:8080/s3/mybucket/hello -d "hi HN"
curl localhost:8080/s3/mybucket/hello
Docs, cluster setup, and architecture details are in the repo. I’d love to hear feedback, questions, ideas, or your stories running distributed infra in Rust!
Repo: https://github.com/whispem/minikv
Crate: https://crates.io/crates/minikv
Making Claude Code sessions link-shareable #
My name is Omkar Kovvali. I've been wanting to share my Claude Code sessions with friends and to save and access them easily, so I decided to make an MCP server to do so!
/share -> get a link
/import -> resume a conversation in your Claude Code
All shared sessions are automatically sanitized to remove API keys, tokens, and secrets.
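The sanitization step is the part I'd scrutinize most. A toy version might look like this (hypothetical patterns, not the project's actual implementation):

```python
import re

# Illustrative secret patterns only -- a real sanitizer needs many more,
# and the formats below are assumptions, not the project's rules.
SECRET_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),   # OpenAI-style keys
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED_GH_TOKEN]"),  # GitHub PATs
    (re.compile(r"(?i)bearer\s+[a-z0-9._\-]+"), "[REDACTED_BEARER]"),
]

def sanitize(text: str) -> str:
    """Replace anything matching a known secret pattern before sharing."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```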
Give it a try by following the GitHub/npm instructions linked below — would love feedback!
WebGPU React Renderer Using Vello #
Video-to-Grid – Analyze videos with one Vision API call #
This turns a video into a 2D thumbnail grid — like a contact sheet: 48 frames, one image, full video context. It's built on VAM Seek, a thumbnail grid I made for human video navigation. Turns out the same format works for AI too.
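The grid layout itself is simple to reason about: each frame's tile position is a divmod over the column count. A sketch assuming a fixed 8-column layout (the actual tool's geometry may differ):

```python
def tile_position(index: int, cols: int = 8, tile_w: int = 160, tile_h: int = 90):
    """Pixel offset of frame `index` in a contact-sheet grid."""
    row, col = divmod(index, cols)
    return col * tile_w, row * tile_h

# 48 frames in 8 columns -> 6 rows of 160x90 tiles in one composite image
positions = [tile_position(i) for i in range(48)]
```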
Prototype. Feedback welcome.
Reddit GDPR Export Viewer – Built After Ban, Unban, Reban #
A few months ago, I posted here about getting my 10-year Reddit account hacked despite 2FA: https://news.ycombinator.com/item?id=45484294
The likely culprit: session cookie theft via a malicious browser extension, possibly linked to the ShadyPanda campaign that infected 4.3M browsers.
Reddit eventually reinstated my account with zero explanation. Then, exactly one month later, they banned me again – permanently, with no reason given and no appeal process.
This drove home a lesson: platforms can and will revoke your access arbitrarily, taking years of contributions with them. So I requested my GDPR data export. What I received was not really usable: raw CSV files with no way to meaningfully browse a decade of comments, posts, and activity.
So I built this: https://github.com/guilamu/reddit-gdpr-export-viewer
It's a pure client-side viewer – zero backend, your data never leaves your machine. Open the HTML file, load your Reddit export, and browse your history offline.
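If you'd rather poke at the raw export programmatically, the CSVs load fine with the standard library (column names here are hypothetical; check your own export's header row):

```python
import csv

def load_comments(path: str) -> list:
    """Read one of Reddit's GDPR export CSVs into a list of dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```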
Full disclosure: I've been vibe coding with Claude Opus for the past few weeks, creating mostly Gravity Forms and WordPress extensions for work (18 repos so far). This particular project was knocked out in a couple of hours. I don't have a strong technical background, so this might be pretty badly coded. It works for what I needed, though. If you find issues or have suggestions for improvements, PRs are welcome.
On the edge of Apple Silicon memory speeds #
I would really appreciate results from different CPUs, to see how the benchmark performs on them. I have been able to test it on the M1 and M4.
Command (close all applications before running):

memory_benchmark -non-cacheable -count 5 -output results.JSON
This will generate a JSON file in which you'll find the sections copy_gb_s, read_gb_s and write_gb_s with statistics.
Example M4 with 10 loops:

"copy_gb_s": {
  "statistics": {
    "average": 106.65421233311835, "max": 106.70240696071005,
    "median": 106.65069297260811, "min": 106.6336774994254,
    "p90": 106.66606919223108, "p95": 106.68423807647056,
    "p99": 106.69877318386216, "stddev": 0.01930653530818627
  },
  "values": [106.70240696071005, 106.66203166240008, 106.64410802226159, 106.65831409449595, 106.64148106986977, 106.6482935780762, 106.63974821679058, 106.65896986001393, 106.6336774994254, 106.65309236714002]
},
"read_gb_s": {
  "statistics": {
    "average": 115.83111228356601, "max": 116.11098114619033,
    "median": 115.84480882265643, "min": 115.56959026587722,
    "p90": 115.99667266786554, "p95": 116.05382690702793,
    "p99": 116.09955029835784, "stddev": 0.1768243167963439
  },
  "values": [115.79154681380165, 115.56959026587722, 115.60574235736468, 115.72112860271632, 115.72147129262802, 115.89807083151123, 115.95527337086908, 115.95334642887214, 115.98397172582945, 116.11098114619033]
},
"write_gb_s": {
  "statistics": {
    "average": 65.55966046805113, "max": 65.59040040480241,
    "median": 65.55933583741347, "min": 65.50911885624045,
    "p90": 65.5840272860955, "p95": 65.58721384544896,
    "p99": 65.58976309293172, "stddev": 0.02388146120866979
  }
}
The patterns benchmark also reveals a bit more about memory speeds. Command:

memory_benchmark -patterns -non-cacheable -count 5 -output patterns.JSON
Example M4 from 100 loops:

"sequential_forward": {
  "bandwidth": { "read_gb_s": { "statistics": {
    "average": 116.38363691482549, "max": 116.61212708384109,
    "median": 116.41264548721367, "min": 115.449510036971,
    "p90": 116.54143114134801, "p95": 116.57314206456576,
    "p99": 116.60095068065866, "stddev": 0.17026641589059727
  } } }
},
"strided_4096": {
  "bandwidth": { "read_gb_s": { "statistics": {
    "average": 26.460392735220456, "max": 27.7722419653915,
    "median": 26.457051473208285, "min": 25.519925729459107,
    "p90": 27.105171215736604, "p95": 27.190715938337473,
    "p99": 27.360449534513144, "stddev": 0.4730857335572576
  } } }
},
"random": {
  "bandwidth": { "read_gb_s": { "statistics": {
    "average": 26.71367836895143, "max": 26.966820487564327,
    "median": 26.69907406197067, "min": 26.49374804466308,
    "p90": 26.845236287807374, "p95": 26.882004355057887,
    "p99": 26.95742242818151, "stddev": 0.09600564296001704
  } } }
}
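For intuition, here is a toy copy-bandwidth measurement in pure Python. It is nothing like the real benchmark (no non-cacheable access, no pattern control), just an illustration of the measurement loop behind numbers like these:

```python
import time

def copy_bandwidth_gb_s(size_mb: int = 64, loops: int = 5) -> float:
    """Best-of-N bandwidth for one full buffer copy, in GB/s."""
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(loops):
        t0 = time.perf_counter()
        dst = bytes(src)  # copies the whole buffer
        best = min(best, time.perf_counter() - t0)
    return (size_mb / 1024) / best
```

Taking the best of several runs (rather than the mean) filters out scheduler noise; the real tool reports the full distribution instead.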
Thank you for reading :)
Building the ClassPass for coworking spaces, would love your thoughts #
Our platform allows users to buy a day pass to a coworking space in seconds. The process is simple: book your pass, arrive at the space, give your name at the front desk, and you're in.
Where we are
Live in San Francisco with several coworking partners.
Recently started expanding beyond the Bay.
10K paid users in San Francisco.
Day passes priced between $18 and $25.
What we’re seeing
Users come back frequently, rotating locations during the week to fit their needs and schedules.
For spaces, it’s incremental usage and new foot traffic during the workday.
Outside dense city centers, onboarding new spaces tends to be faster. Many suburban areas host nice boutique coworking spaces. But, they often miss a strong online presence. Day passes quickly appeal to both operators and users.
What we’re working on
Expanding to more cities.
Adding supply while keeping quality consistent.
Learning which product decisions actually improve repeat usage.
Would love feedback from HN:
Does this resonate with how you work today?
Have you used coworking day passes before?
Would you dump your coworking membership for this?
HORenderer3: A C++ software renderer implementing OpenGL 3.3 pipeline #
I wanted to share a personal project I've been working on: a GL-like 3D software renderer inspired by the OpenGL 3.3 Core Specification.
The main goal was to better understand GPU behavior and rendering pipelines by building a virtual GPU layer entirely in software. This includes VRAM-backed resource handling, pipeline state management, and shader execution flow.
The project also exposes an OpenGL-style API and driver layer based on the official OpenGL Registry headers, allowing rendering code to be written in a way that closely resembles OpenGL usage.
I'd really appreciate any feedback.
Agent Coworking – Multi-agent networks for AI collaboration (open source) #
With Anthropic's Claude Cowork launch, there's renewed interest in agentic AI. Cowork is impressive for single-agent file management — but we've been working on a different problem: what happens when you connect multiple agents into a network?
OpenAgents is open-source infrastructure for building AI agent networks. Think of it as the plumbing for multi-agent collaboration.
Core ideas:
- Agents join networks and discover peers dynamically
- Protocol-agnostic: WebSocket, gRPC, HTTP, libp2p
- Shared artifacts (files, knowledge bases) with access control
- Works with any LLM provider (Claude, GPT, open-source models)
- Mod-driven architecture for different collaboration patterns
We're releasing an "Agent Coworking" template with examples:
- Chat room with 2 Claude Code agents collaborating
- Research team (coding agent + web browsing agent)
- Shared document editing across agents
The SDK is Python-based. Spin up a network with openagents network start and connect agents via YAML config or custom Python.
GitHub: https://github.com/openagents-org/openagents Tutorial: https://openagents.org/showcase/agent-coworking
Would love feedback from HN. Particularly interested in thoughts on:
1. Network topology patterns that would be useful
2. Security considerations for agent-to-agent communication
3. Integration with existing agent frameworks (LangChain, CrewAI, etc.)
Happy to answer questions.
Commander AI – Mac UI for Claude Code #
As coding agents got better, I started trusting them with real work: features, end-to-end, refactors, tests. Naturally, I began running 1–3 at once. That’s when the CLI stopped scaling — too many terminals, lost context, scattered diffs.
Commander fixes that.
I made a TIDAL client that runs in the terminal #
The project was inspired by sqlit (https://github.com/Maxteabag/sqlit). ttydal supports browsing, fuzzy search, and basic playback controls.
This is also my first real Python project, so it’s still small and a bit rough around the edges, but it’s open source and easy to experiment with. I’d really appreciate any feedback, suggestions, or criticism.
UAIP Protocol – Secure settlement layer for autonomous AI agents #
The Problem:

- Cryptographic identity (not just API keys)
- Secure payment rails for cross-company transactions
- Automated compliance (EU AI Act, SOC2, GDPR)
- Forensic audit trails
The Solution: 5-layer security stack combining:
- Zero-Knowledge Proofs (Schnorr/Curve25519) for identity
- Multi-chain settlement (USDC on Base, Solana, Ethereum)
- RAG-based compliance auditing (Llama-3-Legal)
- Ed25519 signatures for non-repudiation
- Complete audit logging
Technical Stack:
- Backend: Python, FastAPI, SQLite (WAL mode)
- Cryptography: NaCl, custom ZK-proof implementation
- Blockchain: Web3.py for multi-chain support
- Compliance: retrieval-augmented generation (RAG)
Use Case: GPT agent pays Claude agent for data analysis:
1. Both agents prove identity via ZK-proofs.
2. The transaction is checked for compliance.
3. It settles in USDC on Base (<$0.01 fee).
4. A complete audit trail is generated.
Why blockchain:
- Neutral settlement layer (no single company controls it)
- Instant microtransactions (traditional payments don't work for $0.01-$10)
- Programmable escrow (smart contracts)
- Verifiable computation (on-chain proofs)
Open source (FSL-1.1-Apache-2.0). Built over the last few months after hitting these problems in AI automation work. Happy to answer technical questions! GitHub: https://github.com/jahanzaibahmad112-dotcom/UAIP-Protocol
Cyber+ – a security-focused programming language #
go-stats-calculator – CLI for computing stats: mean, median, variance, etc. #
Why: I needed a quick way to look at statistics without having to resort to something heavy such as Python + its statistics module or Excel.
Disclaimer: Vibe-coded by Gemini 2.5 Pro and Opus 4.5 but also validated through unit tests and independent verification[2].
Install: Homebrew[3] or GoReleaser built binaries[4].
Demo:
$ seq 99 322 | stats
--- Descriptive Statistics ---
Count: 224
Sum: 47152
Min: 99
Max: 322
--- Measures of Central Tendency ---
Mean: 210.5
Median (p50): 210.5
Mode: None
--- Measures of Spread & Distribution ---
Std Deviation: 64.8074
Variance: 4200
Quartile 1 (p25): 154.75
Quartile 3 (p75): 266.25
Percentile (p95): 310.85
Percentile (p99): 319.77
IQR: 111.5
Skewness: 0 (Fairly Symmetrical)
Outliers: None
[1] https://github.com/jftuga/go-stats-calculator
[2] https://github.com/jftuga/go-stats-calculator/tree/main?tab=...
[3] https://github.com/jftuga/go-stats-calculator?tab=readme-ov-...
Rusted Doom Launcher – Bringing Steam Experience to Doom Wads and Mods #
There are so many wonderful community-made maps and episodes for classic Doom. I wanted to play them, but didn't want to manually download WADs or mods and pass parameters to GZDoom.
The scene is active; see the yearly Cacowards: https://www.doomworld.com/cacowards/2025/.
So, I created an app for bringing the Steam experience to Doom. I've been using it for two months now, so I decided to share it here — it might be at a stage where it is useful for others.
It requires the GZDoom source port (or UZDoom) and the main Doom files (you need to get Doom from GOG.com or Steam).
Right now, it works on macOS with builds for Apple Silicon. However, since it is built with Tauri 2, it should be easy to make it work for Windows and Linux (contributions are welcome!).
If you want to play just one Megawad, I recommend Ancient Aliens by skillsaw (https://www.doomworld.com/forum/topic/87784-ancient-aliens-f...).
Looking for feedback!
Polymarket historical prices and resolution labels (5 min snapshots) #
Who it's for: bot builders and researchers who don't want to run scrapers.
Links:
sample CSV: https://data.misprice.app/samples/sample_prices_7day.csv
docs: https://data.misprice.app/docs
pricing page: https://data.misprice.app/data.html
Ask: What endpoints/data formats would make this most useful?
Project RCPC – A community network for distributed logic and AI #
A conceptual proposal to democratize AI infrastructure through volunteer computing and intellectual merit, inspired by successful models like Folding@home.
Govctl – A CLI enforcing RFC-driven discipline on AI coding #
I built govctl to fix the lack of discipline in AI-assisted coding. It's an opinionated governance CLI that enforces phase discipline on software development:
spec → impl → test → stable
Every feature requires an RFC before implementation. Phases can't be skipped. Gates are checked, not suggested.

The workflow:

$ govctl new rfc "Caching Strategy"    # spec first
$ govctl finalize RFC-0015 normative   # lock the spec
$ govctl advance RFC-0015 impl         # now you can code
$ govctl check                         # validate everything
govctl governs itself. Every feature in the CLI was specified in an RFC/ADR before a line of code was written. This repo is the proof that the model works.
Not for everyone. If you thrive in "move fast and break things" workflows, this isn't for you. But if you're frustrated by AI-generated spaghetti and documentation that lies, govctl might help.
Built in Rust. MIT licensed.
Commit Tracker – RSS feeds for GitHub commits #
What it does:
- Subscribe to any public GitHub repo
- Get commits via RSS/Atom/JSON, email digest, or Slack/Discord
- AI explains what changed (in email digests)
- Filter by author, message, or date
- Private repos work too via GitHub App
Why RSS: I wanted commit updates in my feed reader alongside newsletters and blogs. GitHub doesn't offer this natively.
Tech: Next.js 16, PostgreSQL, Vercel. ~700 tests.
Free to use: https://www.committracker.com
Looking for feedback on what features matter most.
PolyMCP – structured skills from MCP tools for efficient agent usage #
I added skills to PolyMCP to handle a common problem with MCP servers: once the number of tools grows, feeding raw schemas to agents becomes messy and inefficient.
Challenges we faced:
• Agents consume too much context/tokens loading raw schemas.
• Tool discovery becomes noisy and hard to manage.
• Different agents need different subsets of tools.
• Orchestration logic leaks into prompts.
PolyMCP introduces skills — curated, structured sets of tools with documentation, organized so agents can consume only what’s relevant.
PolyMCP handles tool organization, letting agents built on Ollama, OpenAI, and other providers focus on execution.
Example: generate skills from a Playwright MCP server in one command:
polymcp skills generate --servers "npx @playwright/mcp@latest"
Benefits of skills:
• Reuse capabilities across multiple agents.
• Keep agent context small while scaling the number of tools.
• Control what each agent can access without modifying the agent itself.
Repo: https://github.com/poly-mcp/Polymcp
Would love feedback on approaches for tool organization and minimizing context/token usage in multi-agent setups.
Kate Code – KDE Kate Editor Plugin for Accessing Claude Code #
Griit – AI translator that explains grammar as you translate #
Griit translates text and also breaks down:
- Vocabulary (with context explanations)
- Grammar patterns
- Idioms
Works with Japanese, Korean, Chinese, and 20+ languages. No signup needed to try.
Built with Django + Gemini API. Would love feedback from the HN community.
Lighthouse – Autonomous AI research exploring conditions for being-ness #
MindFry – A database engine that implements biological memory decay #
Written in Rust. TypeScript SDK available. Docker ready.
Use cases: AI agent memory, game NPCs, computational neuroscience.
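For context on what "biological memory decay" can mean in code, an Ebbinghaus-style forgetting curve is the classic model. This is a generic sketch, not necessarily MindFry's actual algorithm:

```python
import math

def retention(strength: float, elapsed_s: float, half_life_s: float = 3600.0) -> float:
    """Exponential decay of a memory's strength since its last access."""
    return strength * math.exp(-math.log(2) * elapsed_s / half_life_s)

def touch(strength: float, boost: float = 0.5) -> float:
    """Accessing a memory reinforces it (capped at 1.0)."""
    return min(1.0, strength + boost)
```

A database built on this idea can rank or expire records by current retention rather than by a fixed TTL, which maps naturally onto the agent-memory and NPC use cases above.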
GitHub: https://github.com/laphilosophia/mindfry Crates.io: https://crates.io/crates/mindfry
Feedback welcome. This is v1.6 — functional but experimental.
Why Should Assembly Be English‑Only? Nuasm Adds 51 Human Languages #
No transpilation. No DSL tricks. No “English‑only mnemonics”. Just real assembly, expressed in your own language.
NUASM is built in pure Python and outputs real x86‑64 machine code. It includes:
51 language packs (with dialects and regional variants)
Kids Mode for teaching low‑level concepts
Localized error messages
A universal tokenizer and encoder
Support for bootloaders, kernels, and low‑level systems
A fully documented wiki with examples in every language
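Conceptually, a language pack can be as simple as a mapping from localized mnemonics to canonical ones. A hypothetical sketch (NUASM's real packs and mnemonic choices may differ):

```python
# Illustrative language packs -- the words below are assumptions,
# not NUASM's actual vocabulary.
LANG_PACKS = {
    "es": {"mover": "mov", "sumar": "add", "saltar": "jmp"},
    "fr": {"bouger": "mov", "ajouter": "add", "sauter": "jmp"},
}

def canonicalize(mnemonic: str, lang: str) -> str:
    """Map a localized mnemonic to its canonical x86-64 form.
    Unknown words (or languages) pass through unchanged."""
    return LANG_PACKS.get(lang, {}).get(mnemonic.lower(), mnemonic)
```

The interesting engineering is everything after this step (tokenization across scripts, encoding to machine code), but the per-language surface can stay this thin.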
The goal is simple: If you can speak it, you can program it.
Repo: https://github.com/cyberenigma-lgtm/NeuroUniversalASM
Happy to answer questions, discuss design decisions, or help anyone build new language packs.
Hydra – Capture and share AI Playbooks across your stack #
Claude-Config – Dotfiles for Claude Code #
I've been using Claude Code across multiple repos and machines, and managing the config got messy fast—scattered .mcp.json files, duplicated slash commands, credentials in random places.
So I built claude-config: a simple framework to centralize and version control your Claude Code setup, the same way we do traditional dotfiles.
One bootstrap.sh, one config file. It symlinks commands, sets up MCP servers per-repo, handles credentials, and keeps everything in sync across machines.
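The symlinking idea is the same one classic dotfile managers use. A generic sketch of the core operation (claude-config itself does this in bootstrap.sh, not Python):

```python
from pathlib import Path

def link(src: Path, dest: Path) -> None:
    """Symlink dest -> src, replacing any existing file or stale link."""
    if dest.is_symlink() or dest.exists():
        dest.unlink()
    dest.symlink_to(src)
```

Checking is_symlink() before exists() matters: a broken symlink reports exists() == False but still needs to be unlinked before relinking.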
GitHub: https://github.com/sumchattering/claude-config
Would love feedback, especially if you're managing Claude Code across multiple projects.
Cheers Sumeru
WPTR – We built an automated audit lab for Headless WordPress #
I'm part of the team at WPTR. We've spent the last few months building a platform to solve a problem that has long bothered our lead developer (in the SEO/hosting industry since 1999): the total lack of 'proof' in hosting recommendations.
The Problem: Most 'Best Hosting' lists are just affiliate plays with no technical verification.
Our Solution: We built WptrnetSpeedBot and a 'Science of Trust' protocol. Our philosophy is simple: evidence-based. If a provider claims to be an expert in Next.js or Laravel hosting, our bot checks whether they actually use that tech for their own site. If they don't 'dogfood' their own expertise, they don't make our list.
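One well-known way to detect Next.js dogfooding, for example, is to look for the framework's page markers. A sketch of such a heuristic (the real WptrnetSpeedBot logic isn't public; __NEXT_DATA__ is a standard artifact of server-rendered Next.js pages):

```python
def looks_like_nextjs(html: str, headers: dict) -> bool:
    """Heuristic: Next.js pages embed a __NEXT_DATA__ script tag,
    and often send an 'x-powered-by: Next.js' response header."""
    return ("__NEXT_DATA__" in html
            or headers.get("x-powered-by", "").lower().startswith("next"))
```

False negatives are possible (headers can be stripped, markers change across framework versions), which is why a single signal shouldn't decide a listing on its own.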
What we built:
Automated Audits: Real-load testing instead of empty promises.
Headless Focus: A hub for transforming traditional WP to Next.js.
The Real-Load Score™: A metric based on actual performance data, not marketing fluff.
We are a small team and we are trying to bring some academic rigor to a very 'noisy' industry. We’d love to get your technical feedback on our audit methodology and the bot's logic.
Thanks for checking it out!