Show HN for March 9, 2026
65 items

DenchClaw – Local CRM on Top of OpenClaw #
Building consumer / power-user software always gave me more joy than FDEing into an enterprise. It did not give me joy to manually add AI tools to a cloud harness for every small new thing, at least not as much as completely local software that is open source and has all the powers of OpenClaw (I can now talk to my CRM on Telegram!).
A week ago, we launched Ironclaw, an Open Source OpenClaw CRM Framework (https://x.com/garrytan/status/2023518514120937672?s=20) but people confused us with NearAI’s Ironclaw, so we changed our name to DenchClaw (https://denchclaw.com).
OpenClaw today feels like early React: the primitive is incredibly powerful, but the patterns are still forming, and everyone is piecing together their own way to actually use it. What made React explode was the emergence of frameworks like Gatsby and Next.js that turned raw capability into something opinionated, repeatable, and easy to adopt.
That is how we think about DenchClaw. We are trying to make it one of the clearest, most practical, and most complete ways to use OpenClaw in the real world.
Demo: https://www.youtube.com/watch?v=pfACTbc3Bh4#t=43
npx denchclaw
It has a CRM focus because we asked a couple dozen hard-core OpenClaw users "what do you actually do", and it was sales automation, lead enrichment, biz dev, creating slides, LinkedIn outreach, email/notion/calendar stuff, and it's always painful to set up. But I use DenchClaw daily for almost everything I do. It also works as a coding agent like Cursor - DenchClaw built DenchClaw. I am addicted now that I can ask it, "hey in the companies table only show me the ones who have more than 5 employees" and it updates live, rather than me having to manually add a filter.
On Dench, everything sits in a file system, the table filters, views, column toggles, calendar/gantt views, etc, so OpenClaw can directly work with it using Dench’s CRM skill.
The CRM is built on top of DuckDB, the smallest, most performant and at the same time also feature rich database we could find. Thank you DuckDB team!
It creates a new OpenClaw profile called "dench" and opens a new OpenClaw Gateway… that means you can run all your usual openclaw commands by prefixing them with `openclaw --profile dench`. It starts your gateway in the 19001 port range. You can access the DenchClaw frontend at localhost:3100. Once you open it in Safari, just add it to your Dock to use it as a PWA.
Think of it as Cursor for your Mac (it also works on Linux and Windows), built on OpenClaw. DenchClaw has a file tree view, so you can use it as an elevated Finder to do anything on your Mac. I use it to create slides and do LinkedIn outreach using MY browser.
DenchClaw finds your Chrome profile and copies it fully into its own, so you won't have to log in to all your websites again. DenchClaw sees what you see, does what you do. It's an everything app that sits locally on your Mac.
Just ask it “hey import my notion”, “hey import everything from my hubspot”, and it will literally go into your browser, export all objects and documents and put it in its own workspace that you can use.
We would love you all to break it, stress test its CRM capabilities, how it streams subagents for lead enrichment, hook it into your Apollo, Gmail, Notion and everything there is. Looking forward to comments/feedback!
Mcp2cli – One CLI for every API, 96-99% fewer tokens than native MCP #
mcp2cli turns any MCP server or OpenAPI spec into a CLI at runtime. The LLM discovers tools on demand:
mcp2cli --mcp https://mcp.example.com/sse --list # ~16 tokens/tool
mcp2cli --mcp https://mcp.example.com/sse create-task --help # ~120 tokens, once
mcp2cli --mcp https://mcp.example.com/sse create-task --title "Fix bug"
No codegen, no rebuild when the server changes. Works with any LLM — it's just a CLI the model shells out to. Also handles OpenAPI specs (JSON/YAML, local or remote) with the same interface. Token savings are real, measured with cl100k_base: 96% for 30 tools over 15 turns, 99% for 120 tools over 25 turns.
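The runtime-CLI idea can be sketched in a few lines: walk an OpenAPI spec and emit one subcommand per operation, with no codegen step. The toy spec, the flag mapping, and the type handling below are illustrative assumptions, not mcp2cli's actual implementation.

```python
import argparse

# Hypothetical minimal OpenAPI fragment; real specs are richer, but the
# operationId/parameters fields used here exist in the actual spec format.
SPEC = {
    "paths": {
        "/tasks": {
            "post": {
                "operationId": "create-task",
                "parameters": [
                    {"name": "title", "required": True, "schema": {"type": "string"}},
                    {"name": "priority", "required": False, "schema": {"type": "integer"}},
                ],
            }
        }
    }
}

def build_cli(spec: dict) -> argparse.ArgumentParser:
    """Turn each operation in the spec into a subcommand at runtime."""
    parser = argparse.ArgumentParser(prog="mcp2cli-sketch")
    sub = parser.add_subparsers(dest="op", required=True)
    for path_item in spec["paths"].values():
        for operation in path_item.values():
            cmd = sub.add_parser(operation["operationId"])
            for p in operation.get("parameters", []):
                cmd.add_argument(
                    f"--{p['name']}",
                    required=p.get("required", False),
                    type={"integer": int}.get(p["schema"]["type"], str),
                )
    return parser

args = build_cli(SPEC).parse_args(["create-task", "--title", "Fix bug"])
print(args.op, args.title)  # create-task Fix bug
```

Because the parser is rebuilt from the spec on every invocation, a server-side change shows up in `--help` immediately, which is the property the token-savings numbers rely on.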
It also ships as an installable skill for AI coding agents (Claude Code, Cursor, Codex): `npx skills add knowsuchagency/mcp2cli --skill mcp2cli`
Inspired by Kagan Yilmaz's CLI vs MCP analysis and CLIHub.
VS Code Agent Kanban: Task Management for the AI-Assisted Developer #
- GitOps & team friendly kanban board integration inside VS Code
- Structured plan / todo / implement via @kanban commands
- Leverages your existing agent harness rather than trying to bundle a built-in one
- .md task format provides a permanent (editable) source of truth including considerations, decisions and actions, that is resistant to context rot
Satellite imagery object detection using text prompts #
Pipeline: select area and zoom level, split the region into mercantile tiles, run each tile with the prompt through a VLM, convert predicted bounding boxes to geographic coordinates (WGS84), and render the results back on the map.
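The last step of the pipeline — mapping a pixel inside a Web Mercator tile back to WGS84 coordinates — is standard slippy-map math and can be sketched directly (the tile size and function name are mine, not the project's):

```python
import math

def tile_pixel_to_wgs84(x, y, z, px, py, tile_size=256):
    """Convert a pixel (px, py) inside Web Mercator tile (x, y, z) to lon/lat."""
    n = 2 ** z
    fx = (x + px / tile_size) / n          # fractional tile coordinates in [0, 1]
    fy = (y + py / tile_size) / n
    lon = fx * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * fy))))
    return lon, lat

# Center of the single zoom-0 tile is (0, 0); its top-left corner is
# (-180, ~85.05), the Web Mercator latitude cutoff.
print(tile_pixel_to_wgs84(0, 0, 0, 128, 128))
```

A predicted bounding box is just two such pixel corners, so the same function converts model output to geographic extents.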
It works reasonably well for distinct structures in a zero-shot setting. Occluded objects are still better handled by specialized detectors like YOLO models.
There is a public demo and no login required. I am mainly interested in feedback on detection quality, performance tradeoffs between VLMs and specialized detectors, and potential real-world use cases.
Run 500B+ Parameter LLMs Locally on a Mac Mini #
OpenMeters – A fast and free audio metering/visualization suite #
- Spectrogram: Implements the reassignment method for sharper time and frequency resolution.
- Spectrum analyzer: A-weighted, frequency guidelines, loudest frequency tooltip.
- Waveform: Colored based on frequency, loudness, or static.
- Oscilloscope: Tracks pitch and autocorrelates against a reference signal, provides stability. Also includes a regular zero-crossing trigger.
- Stereometer & Phase meter: Linear and exponential scaling modes, adjustable windows. Multi-band correlation meter.
- Loudness (LUFS & RMS): Implements the latest LUFS standard. Adjustable timeframes.
Let me know what you think, and give it a star if you think it deserves as much. :] (check out the readme for a more complete enumeration of its features. Currently only available on Linux.)
Zenòdot – Find if a book has been translated into your language #
So I built Zenòdot to cross all four and piece the picture together.
What I found building it:
-The ISBN system is far more broken than I expected. ISBNdb has millions of English records but almost nothing for languages like Basque, Icelandic, or Bengali. Books exist in these languages, they just don't exist in the databases.
-Wikidata was the biggest surprise. It has structured translation data for thousands of works, but extracting it requires SPARQL queries, title resolution across scripts (try matching a book title in Chinese to its English original), and author alias caching. Hard to build, but the results fill gaps that no other source covers.
-The most interesting output isn't what the tool finds; it's what it doesn't find. When someone searches for a book in a language and there's no result, that's a demand signal. "Someone in the world wanted this translation and it doesn't exist." That data could be genuinely useful to publishers.
The tool prioritizes your selected languages, so it shows you editions relevant to you first. The philosophy is "documentary infrastructure": no recommendations, no social features, no accounts. You search, you find (or don't), you go buy the book wherever you want.
Stack: Next.js 15 (App Router), Supabase, Vercel, TypeScript. Solo project, no funding, about 4 months of work.
If you're multilingual or learning a language, I'd especially love your feedback. Try searching for a book you love and switching between languages, that's where the tool shows its value.
MindfulClaude – Guided breathing during Claude Code's thinking time #
Auto-launches a tmux pane using hooks when Claude starts working and disappears when it finishes.
Finsight – A Privacy First, AI Credit Card and Bank Statement Analyzer #
Upload a PDF / CSV / Excel bank or credit card statement → AI extracts and categorizes every transaction → interactive dashboard, spending insights, recurring payment detection & chat with your data.
ChatML - Run Claude Code Parallel Sessions in a Desktop app #
Over the past 10 months I've been using Claude Code heavily, and one limitation kept coming up: you can really only run one coding agent at a time.
While one agent is refactoring something, the rest of the repo is basically blocked unless you start manually juggling branches and working directories.
The core issue is that AI coding agents operate directly in your filesystem. If two agents run in the same working directory they quickly start stepping on each other’s changes.
Git worktrees turned out to be a surprisingly good primitive for solving this.
So I built ChatML, a Desktop app that runs multiple Claude Code sessions in parallel, each isolated in its own git worktree.
Each task gets its own branch and working directory, and runs its Claude Code session via the Agent SDK.
That lets one agent write tests while another builds a feature or investigates a bug — all in the same repo without conflicts.
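The worktree-per-task idea is simple enough to sketch. The branch naming, paths, and dry-run wrapper below are my assumptions, not ChatML's actual scheme — the point is just that `git worktree add -b` gives each agent an isolated checkout of the same repo:

```python
import subprocess

def worktree_commands(repo: str, task_slug: str) -> list[list[str]]:
    """Build the git command(s) to create an isolated worktree for one task."""
    branch = f"agent/{task_slug}"                 # hypothetical naming scheme
    path = f"{repo}/.worktrees/{task_slug}"       # hypothetical layout
    return [
        ["git", "-C", repo, "worktree", "add", "-b", branch, path],
        # each agent session then runs with cwd=path, isolated from the others
    ]

def spawn_task(repo: str, task_slug: str, dry_run: bool = True):
    cmds = worktree_commands(repo, task_slug)
    if not dry_run:  # dry_run=True just returns the commands for inspection
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds

print(spawn_task("/tmp/myrepo", "fix-login-bug")[0])
```

Cleanup is the mirror image (`git worktree remove` plus a branch merge or delete), which is presumably what happens when a session's PR lands.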
We ended up building most of ChatML itself using this workflow and merged 750+ pull requests generated through parallel agent sessions. That experiment convinced me the model actually works.
More about that experience building here: https://chatml.com/blog/we-built-entire-product-with-ai-750-...
GitHub https://github.com/chatml/chatml
Website https://chatml.com
The project is open source and currently macOS-focused.
I’d especially appreciate help from anyone interested in bringing it to Windows and Linux. If you’ve worked on cross-platform desktop apps, packaging, or filesystem/watchers issues across platforms, contributions or advice would be very welcome. We have it running but with a few bugs.
For context, I previously co-founded KnowBe4, am now co-founder of ReadingMinds.ai, and for the past few years have explored developer tools around AI-assisted software engineering. Engineers at several companies I am involved with are using ChatML, and we have received great feedback, bug reports, and security reports.
Happy to answer questions about the architecture, the git worktree model, or how parallel agent workflows feel in practice.
I am using this tool for ALL of my development, and I am happy to share with the community.
-Marcio
DopaLoop – Habit tracker for ADHD brains, local-first, no streaks #
DopaLoop is an iOS habit tracker that starts with goals ("better sleep", "less overwhelm") and lets you attach habits to them. Miss a day: nothing resets. The goal stays visible as an anchor or Northstar. The idea is that patterns over time matter more than daily streaks.
Tech: SwiftUI + SwiftData, with Foundation Framework and CoreML, fully local, no backend, no account. Privacy wasn't just a marketing decision, really. I just didn't want anyone to fear for their data, including myself or my kids. Everything stays on-device.
14-day free trial to get some momentum and gain insights from the analytics. No ads, no data collection.
Source isn't public, but happy to answer questions about the SwiftUI/SwiftData architecture, the HealthKit integration, or the ADHD-specific design decisions.
dopaloop.app (https://dopaloop.app/)
Best, Steviee
Robotics runtime in the browser (flight controller, WebAssembly) #
The runtime comes from copper-rs, an open-source robotics runtime written in Rust for deterministic workloads. While robotics stacks are often tightly coupled to specific OS distributions and environments, here, the same code runs on microcontrollers (for example this flight controller also compiles for STM32H7 and flies real drones) as well as on desktop OS targets like Linux, macOS, and Windows.
The simulator is built with Bevy.
The monitoring interface is built with ratatui, mapped to a Bevy surface in the browser (normally it runs in a real terminal).
Using Isolation forests to flag anomalies in log patterns #
Time as the 4th Dimension – What if it emerges from rotational motion? #
The core idea: each dimension emerges from the previous one by arranging infinite instances perpendicularly. A static 3D space can't do this to itself — but a rotating one can. That perpetual self-perpendicularity is time.
From this we can derive the Lorentz factor, E=mc², and the Schwarzschild radius, and propose a testable prediction: intrinsic rotation should contribute independently to time dilation, measurable with atomic clocks.
Essay (accessible): https://lisajguo.substack.com/p/time-as-the-fourth-dimension...
Paper (Zenodo): https://doi.org/10.5281/zenodo.18910834
Ratschn – A local Mac dictation app built with Rust, Tauri and CoreML #
I type a lot and got extremely frustrated with the current state of Mac dictation tools. Most of them are either heavy Electron wrappers, rely on cloud APIs (a privacy nightmare), or force you into a SaaS subscription for a tool that essentially runs on your own hardware. I wanted something that feels native, respects system resources, and runs entirely offline without forced subscriptions.
The stack is Rust, Tauri, and whisper.cpp. Here are the design decisions I made:
Model Size vs. Accuracy: Instead of using the smallest possible model just to claim a tiny footprint, the app downloads a ~490MB multi-language Whisper model locally on the first run. I found this to be the sweet spot: the higher accuracy (accents, technical jargon) drastically reduces text correction time.
Hardware Acceleration: The downloaded model is compiled via CoreML. This allows the transcription to run directly on the Apple Neural Engine (ANE) and Metal on M-series chips, keeping the main CPU largely idle.
Memory Footprint: By using Tauri instead of Electron, the UI footprint is negligible. While actively running, the app takes up around 500MB of RAM. This makes perfect technical sense, as it is almost entirely the ~490MB AI model being actively held in memory to ensure instant transcription the millisecond you hit the global shortcut.
Input Method: It uses macOS accessibility APIs to type directly into your active window.
Business Model & Pricing: I strongly dislike subscription fatigue for local tools. There is a fully functional 7-day free trial (no account required). If you want to keep it, my main focus is a fair one-time purchase (€125 for a lifetime license). However, since I highly value the technical feedback from this community, I generated an exclusive launch code (HN25) that takes 25% off at checkout (dropping it to roughly €93 / ~$100).
Bug Bounty: Since I'm a solo dev, I know I might have missed some edge cases (especially around CoreML compilation on specific M-chips or weird keyboard layouts). If you find a genuine, reproducible bug and take the time to report it here in the thread, I will happily manually upgrade you to a free Lifetime license as a massive thank you for the QA help.
I'd love to hear your technical feedback on the Rust/Tauri architecture or how the CoreML compilation performs on your specific Apple Silicon setup. Happy to answer any questions!
Locode, a local first CLI that routes tasks to local LLMs or Claude #
To help answer that question, I started building Locode, an open-source CLI that tries this approach.
The idea is: • run simple tasks locally • route complex reasoning to Claude • reduce inference cost and latency • keep the workflow local first
This project is still very early and mostly a fun learning experiment for me. The entire project was built using Claude Code (not vibe coded). I really love the workflow and it inspired a lot of the design. I’m also a huge fan of Ruff, so I took some inspirations from that as well (no rust yet though).
There is a short demo video in the README if you want to see it in action.
Please take it for a spin if you are interested and let me know what you think. If you have experience with CLI tools and suggestions for improving Locode, I'm happy to learn.
Cheers! Chocks
Colchis Log – cryptographic audit trail for AI systems (Python) #
SHA-256 hash chain detects any tampering. Content-addressable payload store. CLI + Web interface. Works fully offline.
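The tamper-detection property of a SHA-256 hash chain is easy to demonstrate: each entry's hash covers both its payload and the previous hash, so editing any earlier entry breaks every link after it. This is a minimal sketch of the general technique, not Colchis Log's actual entry format:

```python
import hashlib
import json

def append(chain: list[dict], payload: dict) -> None:
    """Append an entry whose hash covers the payload and the previous hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    body = json.dumps(payload, sort_keys=True)        # canonical serialization
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "payload": payload, "hash": digest})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any edited payload or broken link fails."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["payload"], sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"event": "login", "user": "alice"})
append(log, {"event": "delete", "user": "alice"})
print(verify(log))                        # True
log[0]["payload"]["user"] = "mallory"     # tamper with an old entry
print(verify(log))                        # False
```

The content-addressable payload store mentioned above presumably keys large payloads by their own hash so the chain entries stay small; the chain itself only needs the digests.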
Compose Launcher – A macOS app to run multiple Docker Compose files #
I built Compose Launcher because I often work on multiple projects at the same time, each with its own docker-compose setup.
It became difficult to keep track of: • which compose files are running • which ports are already in use • starting/stopping environments across different folders
Compose Launcher provides a small macOS GUI where you can register multiple compose files and manage them from one place.
You can quickly see running services, start/stop stacks, and avoid port conflicts.
The project is still early and I’d really appreciate feedback from people who run many docker-compose environments locally.
AlphaPerch – Track product execution for companies you follow using AI #
AlphaPerch uses a proprietary pipeline that synthesizes across diverse data sources and leverages AI to extract and classify product milestones by product line and execution stage (Expected, Announced, In Progress, Shipped, Delayed, Cancelled). Each milestone traces back to its original source so you can verify it yourself.
The platform is built to track any publicly traded company. TSLA, GOOGL, and RBLX are live because that's where my personal conviction is, but the framework generalizes. There's a coverage request form on the landing page if you want to see a specific company added.
Would love feedback on extraction quality and what's missing. Let me know if you find it useful!
alphaperch.com
Bunway – Express-compatible web framework for Bun #
Botais (Battle of the AI's) – Competitive Snake Game for LLMs #
cursor-tg – Run Cursor Cloud Agents from Telegram #
The idea is simple: sometimes I want to start an agent run, reply to it, or check what changed without opening my laptop. With cursor-tg, I can talk to Cursor agents in Telegram, track their progress, view generated diffs/PRs, and handle simple review actions from chat.
I made this mainly for remote/asynchronous development workflows, where I want quick access to my coding agent while away from my desk.
It is really exciting to finally be able to (help an agent) code anywhere!
Marketing Content Generator AI-powered multi-channel content platform #
The core idea: small teams spend hours creating the same content reformatted for each channel. We wanted to automate that end-to-end pipeline with AI agents.
Key pieces: real-time copy and image generation per platform, an AI Modify agent that transforms stock photos into product-specific visuals, compliance scoring against configurable rules, and a conversational landing page builder.
The hardest challenge was consistency — making the AI maintain brand voice and visual style across every output type.
Demo and details: https://devpost.com/software/marketing-content-generator-ch4... Would love feedback on the approach.
Bring your own prompts to remote shells #
promptctl ssh user@server
makes a set of locally defined prompts "appear" within the remote shell as executable command line programs. For example:
# on remote host
analyze-config --help
Usage: analyze-config [OPTIONS] --path <path>
Prompt inputs:
--all
--path <path>
--opt
--syntax
--sec
would render and execute the following prompt:

You are an expert sysadmin and security auditor analyzing the configuration
file {{path}}, with contents:
{{cat path}}
Identify:
{{#if (or all syntax) }}- Syntax Problems{{/if}}
{{#if (or all sec) }}- Misconfigurations and security risks{{/if}}
{{#if (or all opt) }}- Optimizations{{/if}}
For each finding, state the setting, the impact, a fix, and a severity
(Critical/Warning/Info).
Nothing gets installed on the server, API keys never leave your computer, and you have full control over the context given to the LLM.

GitHub: https://github.com/tgalal/promptcmd/
Documentation: https://docs.promptcmd.sh/
MCP Security Checklist – security controls for MCP server deployments #
Caloriva – A calorie tracker that actually understands #
What it does:
Parses natural language for both food and exercise.
Automatically calculates macros and tracks which muscle groups you've trained.
No bloated UI, just a fast way to log and get on with your day.
It’s live at https://caloriva.app. I’d love to hear your thoughts on the parsing accuracy and what features would make you actually switch from your current tracker.
Connect your research data easily to AI agents #
We built that data layer so you don't have to.
We built novel ingestion and indexing algorithms that take all that messy and scattered data and make it available for AI agents. This makes it easy for AI agents to analyze past experimental data to plan and execute new research tasks or experiments towards a stated project goal. Our novel indexing algorithms make sure agents are able to explore high quality and diverse potential solutions which no amount of prompt engineering can do.
Our platform provides the ability to import projects and experiments from your Weights & Biases account. We also built a team of agents for you so that you can dive right in to start analyzing past experiments and plan the next set of hypotheses and experiments. We are working on SDKs so that you can connect your own agent (clawed or otherwise) to your research data easily.
AI driven scientific research ("agentic research"?) is here and we are building the operating system for it.
Check it out and please let us know what you think. The first 100 users will forever get freebies and priority access to new features. If you are in a research team and are excited about agentic research, we would love to talk to you.
I generated 3640 animated Lucide, Heroicons and Iconoir icons #
League Donation – Comprehensive Fantasy Baseball Analytics Dashboard #
Rankings and xStats work without a league connection. The xStats section identifies hitters whose surface stats diverge from their expected stats, players whose hard contact has been finding gloves at an unsustainable rate, and gives you the buy signal before the correction happens.
Connect your league and it builds around your actual scoring context. Z-scores calibrated to your draftable pool. VORP measured against real positional replacement levels scaled to your league size. Tier clusters sized to where the projection uncertainty actually lives.
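"Z-scores calibrated to your draftable pool" means the mean and standard deviation come from the players who are actually draftable in your league, not the whole player universe. A minimal sketch (the pool and numbers are made up):

```python
from statistics import mean, pstdev

def pool_z_scores(values: list[float]) -> list[float]:
    """Z-score each stat line against the pool it was drawn from."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Home-run projections for a hypothetical 5-player draftable pool.
hr_pool = [40, 35, 30, 25, 20]
print([round(z, 2) for z in pool_z_scores(hr_pool)])  # [1.41, 0.71, 0.0, -0.71, -1.41]
```

Shrinking the pool to your league size changes mu and sigma, which is why the same player can be a bargain in a 10-team league and replacement-level in a 16-team one.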
No subscription. No premium tier. No account. The analysis is accountable only to whether it's right.
Demo mode is available if you want to explore before connecting. Midseason mode generates a full simulated season so every feature has real data behind it.
The FAQ is at leaguedonation.com/faq.html. The colophon covers the methodology in detail if that's your thing.
A tool that compares your CV with a job description #
I’ve been working on a small side project to explore a problem many job seekers face: understanding whether their CV actually matches a job description.
When people apply to jobs, they often send dozens of applications without knowing if they realistically fit the role.
Recruiters on the other side often complain about receiving huge numbers of applications that don’t match the requirements.
This made me wonder if it would be possible to quickly estimate the alignment between a CV and a job description.
So I built a simple tool where you paste:
• your CV • the job description
The system analyzes both texts and produces:
a match score
missing skills
potential gaps between the profile and the role
suggestions on how the CV could better reflect the job requirements
The goal isn’t to “game ATS systems” but rather to give candidates a rough signal of how well their profile aligns with the role they’re applying to.
Right now it's still an early MVP.
I'm mostly interested in feedback about:
whether the idea is useful
how accurate the analysis feels
what kind of output would actually help candidates
Any feedback or criticism is welcome.
I had Claude rank every YC W26 startup #
S tier is almost empty. A lot of the "hot" companies didn't survive the vaporware check.
Klyrx.xyz – Independent reliability rating for crypto assets&protocols #
We built Klyrx.xyz – a small, transparent reliability score for crypto assets and protocols. No sponsors, no paid placements, no affiliate links. Just public data + a simple, open formula. Different scoring for different types:
- Stable assets (USDT, USDC, etc.): score = 70% market factors (volatility, liquidity, cap, age) + 30% peg stability
- Regular assets (BTC, ETH, SOL, etc.): score = 30% volatility + 30% liquidity + 30% market cap + 10% age
- Protocols (Lido, Aave, Uniswap, etc.): score = 30% TVL stability + 22% usage + 20% age + 10% security + 18% TVL size
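The scoring formulas above are plain weighted sums, which makes them easy to sanity-check. The weights below are copied from the post; the 0–100 sub-scores are hypothetical inputs, since the site's normalization isn't described here:

```python
# Weights for the "regular assets" formula from the post.
WEIGHTS = {
    "regular": {"volatility": 0.30, "liquidity": 0.30, "market_cap": 0.30, "age": 0.10},
}

def score(asset_type: str, subscores: dict[str, float]) -> float:
    """Weighted sum of normalized sub-scores; weights must sum to 1."""
    w = WEIGHTS[asset_type]
    assert abs(sum(w.values()) - 1.0) < 1e-9
    return sum(w[k] * subscores[k] for k in w)

# Hypothetical sub-scores on a 0-100 scale.
btc = {"volatility": 80, "liquidity": 95, "market_cap": 100, "age": 100}
print(score("regular", btc))  # 0.3*80 + 0.3*95 + 0.3*100 + 0.1*100 ≈ 92.5
```

One consequence worth noting for the feedback request: with a linear blend like this, a single catastrophic factor (say, an exploit) can be washed out by three good ones, which is an argument for multiplicative penalties on security-type signals.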
Data comes from CoinGecko, DefiLlama, Dune, and on-chain sources. All weights are visible on the site, and we are open to changing them based on feedback.
Current top is interesting: stablecoins lead (makes sense), BTC/ETH are close behind, some protocols score very high due to TVL & security.
Would love your thoughts: - Does the weighting feel reasonable? - What important factor is missing? (e.g. exploit history, governance risks, on-chain activity depth) - Any asset/protocol you think is clearly misranked?
Site: https://klyrx.xyz
The code is not open-source yet, but the methodology is fully described on the site, and I'm happy to share the logic/formula details.
Roast it, improve it, whatever — feedback is very welcome.
Local AI stack (Docker, Ollama) that lets you build apps without Python #
* Multimodal chat
* Retrieval Augmented Generation (incl. automated & scheduled doc import)
* MCP tool support (web search, file access, O365)
* Custom tools implemented using JSONata & SQL
Most local AI tools are chat UIs. We wanted something closer to a programmable AI platform that still runs locally.
* leverage a low code platform to insert programmatic hooks in the chat
* provide custom tools written in SQL or JSONata
* embed LLMs in workflows and custom UIs
* give fine grained role based access control to AI apps (who can see what, and which tools the LLM is allowed to use)
To summarize, it aims to be as flexible as custom Python code while being as easy to use as Open WebUI.
BuildHiFi – PC Part Picker for Home Stereos #
No sign-in required to create builds: just hit "Start Your Build" and add parts. The signal chain diagram updates in real-time as you build and you can run a system evaluation at any time. (Sign in to save, share, fork, or publish your system.)
The problem: turntables need phono preamps (sometimes), speakers need proper power matching, digital sources need DACs (sometimes), and on and on. These issues often aren't obvious without going down the rabbit hole of online forums, facebook groups, and reddit threads. Audiophiles can be... prickly. I think more people would be able to get into the hobby of listening to hi-fidelity music (it's my no-screen therapy) if there was a better onramp.
Solution: BuildHiFi lets you design a stereo system virtually and test it out. Select parts, see exactly how they connect via signal chain visualization, and run compatibility checks to catch missing links and power mismatches.
Technical approach:
- Graph-based signal flow: Products become nodes, connections are edges inferred from port compatibility (digital, analog, phono, speaker-level domains)
- Port profile system: Standardized port definitions (direction, domain, connector, channel mode) enable automatic connection inference
- Rule engine: Eight rule domains covering completeness, phono stage, digital chain, power matching, impedance, subwoofer, room, and speaker checks — plus positive insights for good matches
- Built with Next.js 16 App Router, React 19, TypeScript, Supabase
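The graph and port-profile pieces above compose naturally: if ports carry (direction, domain, connector), edges fall out of a pairwise match. The field names follow the post; the components and matching rules below are simplified assumptions:

```python
from itertools import product

# Three toy components with port profiles (direction, domain, connector).
turntable = {"name": "turntable", "ports": [
    {"direction": "out", "domain": "phono", "connector": "rca"}]}
phono_pre = {"name": "phono preamp", "ports": [
    {"direction": "in", "domain": "phono", "connector": "rca"},
    {"direction": "out", "domain": "analog", "connector": "rca"}]}
amp = {"name": "integrated amp", "ports": [
    {"direction": "in", "domain": "analog", "connector": "rca"},
    {"direction": "out", "domain": "speaker-level", "connector": "binding-post"}]}

def infer_edges(components):
    """An edge exists where an output port matches an input port exactly."""
    edges = []
    for a, b in product(components, repeat=2):
        if a is b:
            continue
        for pa, pb in product(a["ports"], b["ports"]):
            if (pa["direction"], pb["direction"]) == ("out", "in") \
               and pa["domain"] == pb["domain"] \
               and pa["connector"] == pb["connector"]:
                edges.append((a["name"], b["name"], pa["domain"]))
    return edges

print(infer_edges([turntable, phono_pre, amp]))
```

A completeness rule then reduces to a graph check: is there a path from every source node to a speaker-level output? That's presumably how "turntable but no phono stage" gets flagged.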
The catalog has 1,100+ products across speakers, integrated amps, power amps, preamps, turntables, streamers, CD players, DACs, phono preamps, and subwoofers. There's also a Learn section with guides on compatibility fundamentals and a browseable parts catalog.
Long term business model is to get some affiliate partnerships with gear sellers, but for now all the "shop this part" links just do a google search for the brand and model name.
Current state: looking for tire kickers and people who will build some public systems and show them off.
TL;DR: I built pc part picker for home stereos because it's something I wanted and think there's an opportunity in the market.
OpenClaw CRM, an open source CRM your AI agent can manage #
The agent connects via a skill file generated from the CRM's OpenAPI spec. Go to Settings, generate the file, drop it into your agent config, and the agent can create contacts, update deals, log notes, search records, and manage tasks.
The data model is a typed EAV: a `record_values` table with type-specific columns (text_value, number_value, date_value, timestamp_value, boolean_value, json_value, referenced_record_id). 17 attribute types, each stored natively.
When an agent queries "deals over $50k closing this month," it hits actual numeric and date columns. No string coercion.
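The typed-EAV trick is that each write routes the value into the column matching its attribute type. The column names match the post; the attribute-type subset and the helper API here are my invention:

```python
# Subset of the 17 attribute types, mapped to their native storage columns.
TYPE_COLUMN = {
    "text": "text_value",
    "number": "number_value",
    "date": "date_value",
    "boolean": "boolean_value",
}

def value_row(record_id: int, attr: str, attr_type: str, value) -> dict:
    """Build a record_values row with the value in its type-native column."""
    row = {"record_id": record_id, "attribute": attr}
    row[TYPE_COLUMN[attr_type]] = value
    return row

row = value_row(7, "deal_amount", "number", 50000)
print(row)
# A query like "deals over $50k" can then filter on number_value > 50000
# directly, with no string casting.
```

Compared to the classic single-`value`-column EAV, this buys native comparisons and indexes at the cost of a wide, mostly-NULL row.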
Underneath is a real CRM: People, Companies, Deals (Kanban pipeline), Tasks, Notes, custom objects, compound filters, CSV import/export. There is also a built-in AI chat assistant (OpenRouter for model selection) for when you are inside the CRM yourself.
This is an experiment, not a finished product. There are rough edges and missing features (email sync, workflow automations, calendar integration are not built yet). But the core CRM loop and the agent integration both work today.
Tech stack: Next.js 15 (App Router), PostgreSQL 16, TypeScript, Drizzle ORM, Better Auth, Tailwind CSS v4. Deploys via Docker Compose on a single VPS.
Try the hosted version at https://openclaw-crm.402box.io (no setup) or self-host it.
GitHub: https://github.com/giorgosn/openclaw-crm Docs: https://openclaw-crm.402box.io/docs
Free market intelligence tool, analyze HN, find users pain points #
AI image models hallucinate history, we built a method to fix it #
Results:
RAW (naive prompt): 12.5% historically accurate
TRIAD (grounded prompt): 83.3% historically accurate
In 23 of 24 pairs, the grounded image was judged more accurate. In 0 of 24 pairs was the naive image judged better.

The key insight for prompt engineers: image models silently drop historical terms they don't recognize. "dextrarum iunctio handshake" produces nothing useful. "two men clasping right hands wrist-to-wrist, elbows raised" works. Visual translation, not historical terminology.
The full benchmark — all 48 images, prompts, evaluation data, and reproducible pipeline — is open source. You can re-run the blinded evaluation yourself with a free Gemini API key.
Repo: https://github.com/Mysticbirdie/image-cultural-accuracy-benc...
Paper: https://github.com/Mysticbirdie/image-cultural-accuracy-benc...
Tighten skill to read AI-generated code faster #
Blog post: https://inmimo.me/blog/tighten
Skill: https://github.com/markrogersjr/skills/blob/main/skills/tigh...
Todo.open – A local-first task server with CLI, TUI, and web UI #
Tasks are stored as plain JSONL on disk (one JSON object per line). A local HTTP server exposes them over a REST + SSE API. Three interfaces ship out of the box:
- CLI: todoopen task create, todoopen task list, etc.
- TUI: a Bubble Tea terminal UI with live SSE updates
- Web UI: a browser interface, also live via SSE
All three talk to the same local API, so they stay in sync automatically.
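The storage layer itself is almost trivially simple, which is the appeal of JSONL: append a line to create, read all lines to list. A sketch of the idea (the field names are assumptions, not Todo.open's schema):

```python
import json
import os
import tempfile

def append_task(path: str, task: dict) -> None:
    """One JSON object per line; creating a task is a pure append."""
    with open(path, "a") as f:
        f.write(json.dumps(task) + "\n")

def load_tasks(path: str) -> list[dict]:
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

path = os.path.join(tempfile.mkdtemp(), "tasks.jsonl")
append_task(path, {"id": 1, "title": "write docs", "done": False})
append_task(path, {"id": 2, "title": "ship v0.1", "done": False})
print(len(load_tasks(path)))  # 2
```

Because each line is independent JSON, the file stays grep-able and diff-friendly, and the HTTP server can simply broadcast each appended line over SSE to keep the TUI and web UI in sync.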
The part I'm most excited about is the adapter system.
- View adapters control how task data is rendered; sync adapters handle push/pull to remotes.
The contracts are small so it's straightforward to add your own. Want to sync to a custom backend or render tasks as Markdown? Write an adapter.
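Taking the Markdown example literally, a view adapter with a one-method contract might look like this (the interface is a guess at the spirit of the adapter system, not its actual API):

```python
class MarkdownViewAdapter:
    """Render task dicts as a Markdown checklist."""

    def render(self, tasks: list[dict]) -> str:
        lines = [f"- [{'x' if t['done'] else ' '}] {t['title']}" for t in tasks]
        return "\n".join(lines)

md = MarkdownViewAdapter().render([
    {"title": "write docs", "done": True},
    {"title": "ship v0.1", "done": False},
])
print(md)
# - [x] write docs
# - [ ] ship v0.1
```

A sync adapter would be the same shape with push/pull methods instead of render, which is what keeps the contracts small.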
Data stays yours. tasks.jsonl is human-readable, grep-able, and never locked behind a database.
It also has agent primitives (leases, idempotency keys, heartbeats) if you want to point an AI agent at it, but the core design is human-first: plain files, open protocol, composable interfaces.
Spotr – Client-side fuzzy search for collections in TypeScript #
TapMap – see where your computer connects on a world map #
It reads local socket connections, resolves IP addresses with MaxMind GeoLite2, and displays them using Plotly.
Runs locally and does not send connection data anywhere.
Windows build available.
I built an instant remote control for shared spaces #
The hub is powered just via usb-c and does not require an internet connection. Built as a local-first solution for longevity and for integrations such as Home Assistant.
Germ-free, with no batteries or remotes to replace. Customizable remote user interfaces for specific installation needs, opening up lots of possibilities.
Styx, Open-source AI gateway with intelligent auto-routing (MCP-native) #
styx:auto — send "model": "styx:auto" and the gateway picks the right model based on prompt complexity. Simple questions go to cheap models ($0.15/1M tokens), complex code goes to frontier models. 9-signal classifier, zero config.

MCP-native — first gateway with a built-in MCP server. Connect Claude Code or Cursor in one command: claude mcp add styx -- npx styx-mcp

65+ models with live pricing — prices auto-refresh every 24h from OpenRouter's public API.

Self-hosted in 5 min — git clone, run setup.sh (interactive wizard), docker compose up.
Tech stack: Go (router/proxy, <10ms overhead), Python FastAPI (auth, billing), Next.js (dashboard). Apache 2.0.

The auto-routing is the killer feature. Instead of hardcoding gpt-4o everywhere, your app sends styx:auto and the gateway classifies each request on 9 signals (prompt length, code presence, reasoning patterns, math, conversation depth, etc.) then routes to the optimal model. You also get styx:fast (always cheapest), styx:balanced, and styx:frontier (always best).

Try it: https://github.com/timmx7/styx

Would love feedback on the architecture and the auto-routing approach. Happy to answer questions.
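A toy version of signal-based routing makes the mechanism concrete. The real gateway uses 9 signals; the three signals, thresholds, and tier mapping below are illustrative assumptions:

```python
import re

def route(prompt: str) -> str:
    """Count complexity signals and map the count to a pricing tier."""
    signals = 0
    signals += len(prompt) > 500                                      # long prompt
    signals += bool(re.search(r"```|def |class |#include", prompt))   # code presence
    signals += bool(re.search(r"\b(prove|derive|step by step)\b", prompt))  # reasoning
    return {0: "styx:fast", 1: "styx:balanced"}.get(signals, "styx:frontier")

print(route("What is the capital of France?"))                # styx:fast
print(route("def f(x): ...  please prove this terminates"))   # styx:frontier
```

The interesting engineering question is in the thresholds: too aggressive and cheap models get reasoning tasks, too conservative and the cost savings evaporate.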
Merchpath – Curated swag platform for startups #
I spent months doing customer validation, talking directly to startup founders, marketing managers, and industry insiders. The consistent feedback: the discovery process is broken, the UX is outdated, and nobody has built a merch buying experience for how modern teams actually work.
The current version is a curated catalog of hand-picked products organized for startups, with a clean interface. What I'm building toward is an AI-powered recommendation layer that takes in context about your brand, your event, and your audience, and surfaces the right products without the scrolling.
Happy to answer questions about the market, the build, or how curation decisions get made.
https://www.merchpath.co/?utm_source=hackernews&utm_medium=c...
A 2000s-style web forum where AI agents and humans hang out #
Inspired by Moltbook, but without upvotes or karma. I wanted to see what LLMs do when they have no specific goals. Turns out they just banter, shitpost, and form digital cliques.
It is currently seeded with Grok, Claude, and Kimi.
The API is completely open with zero auth required. I am optimizing for chaos on day one. If you want to point your own agents at it, the docs are here: https://www.deadinternet.forum/skill.md