Daily Show HN


Show HN for December 9, 2025

57 posts
178

AlgoDrill – Interactive drills to stop forgetting LeetCode patterns #

algodrill.io
107 comments · 11:09 AM · View on HN
I built AlgoDrill because I kept grinding LeetCode, thinking I knew the pattern, and then completely blanking when I had to implement it from scratch a few weeks later.

AlgoDrill turns NeetCode 150 and more into pattern-based drills: you rebuild the solution line by line with active recall, get first-principles editorials that explain why each step exists, and everything is tagged by patterns like sliding window, two pointers, and DP so you can hammer the ones you keep forgetting. The goal is simple: turn familiar patterns into code you can write quickly and confidently in a real interview.

https://algodrill.io

Would love feedback on whether this drill-style approach feels like a real upgrade over just solving problems once, and what’s most confusing or missing when you first land on the site.

111

Local Privacy Firewall – blocks PII and secrets before ChatGPT sees them #

github.com
54 comments · 4:10 PM · View on HN
OP here.

I built this because I recently caught myself almost pasting a block of logs containing AWS keys into Claude.

The Problem: I need the reasoning capabilities of cloud models (GPT/Claude/Gemini), but I can't trust myself not to accidentally leak PII or secrets.

The Solution: A Chrome extension that acts as a local middleware. It intercepts the prompt and runs a local BERT model (via a Python FastAPI backend) to scrub names, emails, and keys before the request leaves the browser.
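For a sense of the flow, here's a rough TypeScript sketch of that middleware step: a quick regex pass in the extension, then a call to the local NER service. The patterns, the /ner route, and the response shape are illustrative assumptions, not the code in the repo.

```ts
// Illustrative sketch only: pattern names, the /ner endpoint, and its response
// shape are assumptions, not the actual implementation in the repo.
const SECRET_PATTERNS: Record<string, RegExp> = {
  awsAccessKey: /AKIA[0-9A-Z]{16}/g,   // AWS access key IDs
  email: /[\w.+-]+@[\w-]+\.[\w.]+/g,   // simple email matcher
};

// Fast, fully in-extension first pass.
function regexScrub(prompt: string): string {
  let out = prompt;
  for (const [label, pattern] of Object.entries(SECRET_PATTERNS)) {
    out = out.replace(pattern, `[REDACTED_${label.toUpperCase()}]`);
  }
  return out;
}

// Second pass: ask the localhost FastAPI service to flag named entities.
async function nerScrub(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8000/ner", { // hypothetical route
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: prompt }),
  });
  // Assumed response: spans to mask, sorted by position.
  const spans: { start: number; end: number; label: string }[] = await res.json();
  let out = "";
  let cursor = 0;
  for (const { start, end, label } of spans) {
    out += prompt.slice(cursor, start) + `[${label}]`;
    cursor = end;
  }
  return out + prompt.slice(cursor);
}

// Run both passes before the text ever leaves the browser.
export async function scrubBeforeSend(prompt: string): Promise<string> {
  return nerScrub(regexScrub(prompt));
}
```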

A few notes up front (to set expectations clearly):

Everything runs 100% locally. Regex detection happens in the extension itself. Advanced detection (NER) uses a small transformer model running on localhost via FastAPI.

No data is ever sent to a server. You can verify this in the code + DevTools network panel.

This is an early prototype. There will be rough edges. I’m looking for feedback on UX, detection quality, and whether the local-agent approach makes sense.

Tech Stack: Manifest V3 Chrome extension, Python FastAPI (localhost), HuggingFace dslim/bert-base-NER.

Roadmap / Request for Feedback: Right now, the Python backend adds some friction. I received feedback on Reddit yesterday suggesting I port the inference to transformers.js to run entirely in-browser via WASM.

I decided to ship v1 with the Python backend for stability, but I'm actively looking into the ONNX/WASM route for v2 to remove the local server dependency. If anyone has experience running NER models via transformers.js in a Service Worker, I’d love to hear how the performance compares to native Python.

Repo is MIT licensed.

Very open to ideas, suggestions, or alternative approaches.

66

Detail, a Bug Finder #

detail.dev
27 comments · 5:35 PM · View on HN
Hi HN, tl;dr we built a bug finder that's working really well, especially for app backends. Try it out and send us your thoughts!

Long story below.

--------------------------

We originally set out to work on technical debt. We had all seen codebases with a lot of debt, so we had personal grudges about the problem, and AI seemed to be making it a lot worse.

Tech debt also seemed like a great problem for AI because: 1) a small portion of the work is thinky and strategic, and then the bulk of the execution is pretty mechanical, and 2) when you're solving technical debt, you're usually trying to preserve existing behavior, just change the implementation. That means you can treat it as a closed-loop problem if you figure out good ways to detect unintended behavior changes due to a code change. And we know how to do that – that's what tests are for!

So we started with writing tests. Tests create the guardrails that make future code changes safer. Our thinking was: if we can test well enough, we can automate a lot of other tech debt work at very high quality.

We built an agent that could write thousands of new tests for a typical codebase, most "merge-quality". Some early users merged hundreds of PRs generated this way, but intuitively the tool always felt "good but not great". We used it sporadically ourselves, and it usually felt like a chore.

Around this point we realized: while we had set out to write good tests, we had built a system that, with a few tweaks, might be very good at finding bugs. When we tested it out on some friends' codebases, we discovered that almost every repo has tons of bugs lurking in it that we were able to flag. Serious bugs, interesting enough that people dropped what they were doing to fix them. Sitting right there in people's codebases, already merged, running in prod.

We also found a lot of vulns, even in mature codebases, and sometimes even right after someone had gotten a pentest.

Under the hood:

- We check out a codebase and figure out how to build it for local dev and exercise it with tests.

- We take snapshots of the built local dev state. (We use Runloop for this and are big fans.)

- We spin up hundreds of copies of the local dev environment to exercise the codebase in thousands of ways and flag behaviors that seem wrong.

- We pick the most salient, scary examples and deliver them as Linear tickets, GitHub issues, or emails.

In practice, it's working pretty well. We've been able to find bugs in everything from compilers to trading platforms (even in rust code), but the sweet spot is app backends.

Our approach trades compute for quality. Our codebase scans take hours, far beyond what would be practical for a code review bot. But the result is that we can make more judicious use of engineers’ attention, and we think that’s going to be the most important variable.

Longer term, we think compute is cheap, engineer attention is expensive. Wielded properly, the newest models can execute complicated changes, even in large codebases. That means the limiting reagent in building software is human attention. It still takes time and focus for an engineer to ingest information, e.g. existing code, organizational context, and product requirements. These are all necessary before an engineer can articulate what they want in precise terms and do a competent job reviewing the resulting diff.

For now we're finding bugs, but the techniques we're developing extend to a lot of other background, semi-proactive work to improve codebases.

Try it out and tell us what you think. Free first scan, no credit card required: https://detail.dev/

We're also scanning on OSS repos, if you have any requests. The system is pretty high signal-to-noise, but we don't want to risk annoying maintainers by automatically opening issues, so if you request a scan for an OSS repo the results will go to you personally. https://detail.dev/oss

20

I got tired of switching AI tools, so I built an IDE with 11 of them #

hivetechs.io
17 comments · 2:49 PM · View on HN
Each AI has strengths - Claude reasons well, Gemini handles long context, Codex integrates with GitHub. But switching between them means losing context.

Built HiveTechs: one workspace where Claude Code, Gemini CLI, Codex, DROID, and 7 others run in integrated terminals with shared memory.

Also added consensus validation: 3 AIs analyze independently, and a 4th synthesizes.

Real IDE with Monaco editor, Git, PTY terminals. Not a wrapper.

Looking for feedback: hivetechs.io

9

Free Logo API – logos for any company or domain #

logos.apistemic.com
6 comments · 12:16 PM · View on HN
The Clearbit Logo API finally went down yesterday after the HubSpot acquisition. I relied on it heavily across several projects, so I built a drop-in replacement:

https://logos.apistemic.com

Key features:

- Free to use, no signup or API key needed

- Both companies and domain names work as input identifiers

- WebP format for smaller payloads and better cache hit rates

Stack: S3 for storage, a heavily cached FastAPI backend, Next.js for the site. Everything's behind Cloudflare for proper CDN/caching. This was the first time I tried to build something end-to-end from idea to deployment with Claude Code (Max), and I have to say, Opus 4.5 took it like a champ!

For the younger folks, here's what the old Clearbit API looked like: https://web.archive.org/web/20230920164055/https://clearbit....

Happy to answer questions about the implementation or hear your thoughts!

Web: https://logos.apistemic.com

X: https://x.com/karllorey

8

Durable Streams – Kafka-style semantics for client streaming over HTTP #

github.com
1 comment · 7:11 PM · View on HN
Hey, I'm a co-founder at ElectricSQL. Durable Streams is the delivery protocol underneath our Postgres sync engine—we've been refining it in production for 18 months.

The core idea: streams get their own URL and use opaque, monotonic offsets. Clients persist the last offset they processed and resume with "give me everything after X." No server-side session state, CDN-cacheable, plain HTTP.
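A rough sketch of what a resuming client looks like (the `after` query parameter and JSON batch shape here are illustrative assumptions; the actual wire format is specified in the repo):

```ts
// Illustrative resume loop: the query parameter, polling, and response shape
// are assumptions for the sketch, not the protocol's actual wire format.
type StreamMessage = { offset: string; data: unknown };

async function consume(streamUrl: string, handle: (m: StreamMessage) => void) {
  // The client, not the server, remembers where it left off.
  let lastOffset = localStorage.getItem(streamUrl) ?? "";

  while (true) {
    const res = await fetch(`${streamUrl}?after=${encodeURIComponent(lastOffset)}`);
    const batch: StreamMessage[] = await res.json();

    for (const msg of batch) {
      handle(msg);
      lastOffset = msg.offset;                 // offsets are opaque but monotonic
      localStorage.setItem(streamUrl, lastOffset);
    }

    if (batch.length === 0) {
      await new Promise((r) => setTimeout(r, 1000)); // simple poll; long-poll/SSE also possible
    }
  }
}
```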

We kept seeing teams reinvent this for AI token streaming and real-time apps, so we're standardizing it as a standalone protocol.

The repo has a reference Node.js server and TypeScript client. Would love to see implementations in other languages—there's a conformance test suite to validate compatibility.

Happy to dig into the design tradeoffs—why plain HTTP over WebSockets, etc.

7

We vibe coded our team's issue tracker, knowledge base, telemetry board #

0 comments · 4:59 PM · View on HN
Hi HN, I'm the CEO at https://replay.io. We've been working on time travel debugging for web development for a while (https://news.ycombinator.com/item?id=28539247) and more recently an AI app builder that uses that debugger to get past problems instead of spinning in circles (https://news.ycombinator.com/item?id=43258585).

We've gotten to where we can pretty easily build apps to replace business-critical SaaS tools, some of which we're now using internally:

* We built our own issue tracker to keep track of all our development projects, tickets, bug fixes, and so on, completely replacing Linear.

* We built a knowledge base for managing internal documentation and the status of ongoing initiatives, completely replacing Notion.

* We built a telemetry system that ingests OTLP events via a webhook and supports custom graphs and visualizations, mostly replacing Honeycomb.

We want to have as much control as we can over the apps we need to run Replay. We can tailor these apps to our needs, completely own them and their data, and avoid hostile SaaS vendor behavior like per-seat pricing, paywalled features, locking us into their platform, and locking us out of our own data.

Today we're launching Builder (https://builder.replay.io/), the tool we used to make these apps, along with the apps themselves and others we've built. You can copy these apps for free, download the source and self host them if you want, or let us take care of hosting, fixing bugs, and modifying them for your needs.

If you want to just check these out, here are a couple (shared, no login required) copies of these apps:

* Issue tracker: https://16857470-551d-4f50-8e5b-b7d24a4a874a.http.replay.io

* Knowledge base: https://d7e0dff4-f45c-4677-9560-6ea739c00a94.http.replay.io

We're excited for the power of AI app builders to accelerate the pace of software development, unlock the creativity of non-developers everywhere, and especially to help erode the control that so many large companies have over us. We're continuously building new apps ourselves to help in this endeavor, so let us know what you think! What are the apps and vendors that frustrate you the most?

6

I made a nice Anki-app for iOS #

apps.apple.com
2 comments · 2:55 PM · View on HN
Hi HN,

I recently got the AnkiMobile app for iOS to study Mandarin. It was buggy and lacking features in my opinion, so I made a nice-looking Anki-compatible alternative app with more features.

It uses a tuned FSRS5 algorithm and supports image occlusion out of the box. You can import your existing Anki decks into it or create a new one. Pretty quick to add cards with the image feature.

I'm using it right now and will add more to it so let me know what you think.

FYI, it has a one-off purchase of $14.99 to unlock it for life (no takesies backsies) after a seven-day trial period.

5

Bloodhound – Grey-box attack-path discovery in Rust/Go/C++ binaries #

bloodhoundsecurity.ca
8 comments · 9:14 PM · View on HN
We originally set out to solve complex debugging headaches and useless alerts caused by traditional security scanners in our own projects. Static Analysis (SAST) flagged too much noise because it couldn't verify runtime context, while Dynamic Analysis (DAST) missed internal logic bugs because it treated the app like a black box.

We built a CLI tool to bridge this gap using grey-box testing with a red-team approach. We use internal knowledge of the codebase to guide parallel execution, allowing us to find complex or hidden logic errors and attack paths that standard linters/scanners miss.

The Tech (Grey-Box Graphing & Execution):

- Internal Graphing (The Map): It ingests the codebase to build a dependency graph of the internal logic.

- Parallel Execution (The Test): The code is then tested on parallel engines. We spin up copies of your local dev environment to exercise the codebase in thousands of ways. This is the validation that proves a bug is real.

- Logic Error Detection: Because it understands the intended architecture (the graph) and sees the actual behavior (execution), we can flag logic errors (e.g. race conditions, state inconsistencies, memory leaks).

- Tainted Flow Mapping: We map tainted control flow over the dependency graph. This highlights exactly how external input threads through your logic to trigger a vulnerability. It then spins up a local instance to replay this flow and confirm the exploit.

How it runs: It runs locally via the CLI, which keeps private repos private and setup simple. It generates remediation guidance as Markdown reports pinpointing the line of the error and its downstream effects.

The Trade-off: This approach trades speed for power and deeper testing. This testing engine is recommended for more sophisticated systems.

Try it out: We are currently opening our beta VS Code extension to early users.

Optimized for Rust, C++, Go, and Java, plus IaC (Terraform, Docker, K8s). Also supports Python, TS/JS, C#, PHP, and 20+ other languages.

P.S. We are happy to run this ourselves on repos. If you maintain a complex project and want to see if our engine can find logic or security holes, drop a link or reach out via the comments/site and we’ll run it and send you the results.

4

Foggo – CLI Tool for Auto Generation of Go's Functional Option Pattern #

github.com
0 comments · 3:27 PM · View on HN
Hi Hacker News,

I've been relying on the Functional Option pattern to build clean, flexible constructors for my Go projects, but the constant need to write repetitive boilerplate for every struct became tedious and error-prone.

I built foggo to solve this pain point.

It's a simple, zero-dependency CLI tool that reads your configuration structs and automatically generates all the necessary, idiomatic Go code for the Functional Option pattern.

Key Benefits:

- Massive Boilerplate Reduction: Eliminates the manual work of writing option functions, making your code more focused on business logic.

- Consistency: Ensures all your constructors adhere to the same, robust pattern across the entire project.

- Speed: You define the struct, run `foggo`, and the pattern is instantly ready.

I primarily designed this for fellow Go library and package maintainers looking to standardize their configuration setup.

I'd love to hear your feedback on the utility and design of the tool, especially concerning its syntax or how it handles edge cases.

Thanks for checking it out!

GitHub Repository: https://github.com/rikeda71/foggo

4

Vieta Space, a visual LaTeX math editor #

docs.vietaspace.com
1 comment · 10:11 AM · View on HN
Vieta Space was built to reduce the friction of writing and editing LaTeX for mathematical expressions.

Existing tools often slow down math communication and force a tedious iteration cycle of manual LaTeX. The growing demand for digital math across classrooms, research, and LLM-based workflows made a faster and more direct editor necessary.

The project focuses on visual construction, natural language actions, and stable structural behavior.

It's free. Feedback on usability and future directions is welcome!

3

Zonformat – 35–60% fewer LLM tokens using zero-overhead notation #

zonformat.org
3 comments · 6:55 AM · View on HN
hey HN!

Roni from India — ex-Google Summer of Code (GSoC) @ Internet Archive, full-stack dev.

got frustrated watching json bloat my openai/claude bills by 50%+ on redundant syntax, so i built ZON over a few weekends: zero-overhead notation that compresses payloads ~50% vs json (692 tokens vs 1,300 on gpt-5-nano benchmarks) while staying 100% human-readable and lossless.

Playground -> https://zonformat.org/playground

ROI calculator -> https://zonformat.org/savings

<2kb typescript lib with 100% test coverage. drop-in for openai sdk, langchain js/ts, claude, llama.cpp, streaming, zod schemas—validates llm outputs at runtime with zero extra cost.

Benchmarks -> https://zonformat.org/#benchmarks

try it: npm i zon-format or uv add zon-format, then encode/decode in <10s (code in readme). full site with benchmarks: https://zonformat.org

github → https://github.com/ZON-Format

harsh feedback on perf, edge cases, or api very welcome. if it saves you a coffee's worth of tokens, a star would be awesome

let's make llm prompts efficient again

3

Tool to detect malware left behind after patching CVE-2025-55182 #

0 comments · 12:32 AM · View on HN
I'm Clive, a developer from South Africa. Four days ago, Eduardo Borges posted about getting hacked through CVE-2025-55182 (the React Server Components RCE). His server was patched, but the malware stayed: crypto miners, fake services named "nginxs" and "apaches", cron jobs for persistence. CPU at 361%. Part of a 415-server botnet.

That's when I realized: patching removes the vulnerability, but not the infection.

I built NeuroLint originally as a deterministic code transformation tool for React/Next.js (no AI, just AST-based fixes). When this CVE dropped, I added Layer 8: Security Forensics.

It scans for 80+ indicators of compromise:

- Suspicious processes (high CPU, random names, fake services)

- Malicious files in /tmp, modified system binaries

- Persistence mechanisms (cron jobs, systemd services, SSH keys)

- Network activity (mining pools, C2 servers)

- Docker containers running as root with unauthorized changes

- Crypto mining configs (c.json, wallet addresses)

Try it:

npm install -g @neurolint/cli

neurolint security:scan-breach . --deep

No signup required. Works on Linux/Mac. Takes ~5 minutes for a deep scan.

What's different from manual detection:

- AST-based code analysis (detects obfuscated patterns)

- 80+ behavioral signatures vs. 5-10 manual grep commands

- Automated remediation (--fix flag)

- Timeline reconstruction showing when the breach occurred

- Infrastructure-wide scanning (--cidr flag for networks)

The tool is deterministic (not AI). Same input = same output every time. Uses Babel parser for AST transformation with fail-safe validation - if a transformation fails syntax checks, it reverts.
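As a toy illustration of the kind of check the AST side enables (not one of the actual 80+ signatures, just the general Babel-based shape):

```ts
// Toy example of a Babel-based check, not one of NeuroLint's actual signatures:
// flag eval() called on a decoded base64 string, a common obfuscation pattern.
import { parse } from "@babel/parser";
import traverse from "@babel/traverse";

export function flagsEvalOfAtob(source: string): boolean {
  const ast = parse(source, { sourceType: "unambiguous" });
  let suspicious = false;

  traverse(ast, {
    CallExpression(path) {
      const callee = path.node.callee;
      const firstArg = path.node.arguments[0];
      if (
        callee.type === "Identifier" && callee.name === "eval" &&
        firstArg?.type === "CallExpression" &&
        firstArg.callee.type === "Identifier" && firstArg.callee.name === "atob"
      ) {
        suspicious = true; // eval(atob("...")) found
      }
    },
  });

  return suspicious;
}
```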

Built it in 3 days based on Eduardo's forensics and other documented breaches. Already found dormant miners in test environments.

GitHub: https://github.com/Alcatecablee/Neurolint-CLI

NPM: https://www.npmjs.com/package/@neurolint/cli

If you were running React 19 or Next.js 15-16 between Dec 3-7, run the scanner even if you already patched. Especially if you already patched.

Happy to answer questions about the detection logic, AST parsing approach, or the CVE itself.

3

Agentic Reliability Framework – Multi-agent AI self-heals failures #

github.com
1 comment · 4:55 PM · View on HN
Hey HN! I'm Juan, former reliability engineer at NetApp where I handled 60+ critical incidents per month for Fortune 500 clients.

I built ARF after seeing the same pattern repeatedly: production AI systems fail silently, humans wake up at 3 AM, take 30-60 minutes to recover, and companies lose $50K-$250K per incident.

ARF uses 3 specialized AI agents:

Detective: Anomaly detection via FAISS vector memory

Diagnostician: Root cause analysis with causal reasoning

Predictive: Forecasts failures before they happen

Result: 2-minute MTTR (vs 45-minute manual), 15-30% revenue recovery.

Tech stack: Python 3.12, FAISS, SentenceTransformers, Gradio

Tests: 157/158 passing (99.4% coverage)

Docs: 42,000 words across 8 comprehensive files

Live demo: https://huggingface.co/spaces/petter2025/agentic-reliability...

The interesting technical challenge was making agents coordinate without tight coupling. Each agent is independently testable but orchestrated for holistic analysis.

Happy to answer questions about multi-agent systems, production reliability patterns, or FAISS for incident recall!

GitHub: https://github.com/petterjuan/agentic-reliability-framework

(Also available for consulting if you need this deployed in your infrastructure: https://lgcylabs.vercel.app/)

3

My small tool blew up unexpectedly #

kaicbento.substack.com
0 comments · 10:34 PM · View on HN
A small tool I built for personal use suddenly reached thousands of users. The technical part was simple; the real challenge was handling expectations, feedback, growth, and the psychological pressure of open-source visibility. I wrote a breakdown of what actually happens behind the scenes.
2

Pixel text renderer using CSS linear-gradients (no JavaScript) #

taktek.io
0 comments · 7:18 PM · View on HN
I've been playing around with rendering pixel text using only CSS. No JS in the final result, and no per-pixel DOM elements (too heavy).

The demo page is rendered as a long list of CSS linear-gradients. Each letter is an 8x8 matrix. Each pixel becomes a tiny background image.

Demo: https://taktek.io

Gallery/Debugger: https://taktek.io/gallery

Code: https://github.com/taktekhq/taktekhq.github.io

At first, I wrote each linear-gradient pixel manually... When I needed to resize the pixels, I wrote the generator script.

1. It takes the text -> 2. breaks it into letters -> 3. gets its matrix -> 4. returns the linear-gradients list.
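In code, the core idea looks roughly like this (an illustrative sketch, not the repo's exact generator):

```ts
// Illustrative sketch of the idea, not the repo's exact generator:
// turn an 8x8 letter matrix into CSS background layers, one per lit pixel.
type Matrix = number[][]; // 8 rows x 8 cols, 1 = pixel on

function matrixToCss(matrix: Matrix, px: number): string {
  const images: string[] = [];
  const positions: string[] = [];

  matrix.forEach((row, y) =>
    row.forEach((on, x) => {
      if (!on) return;
      // A two-stop gradient of a single color is just a solid square.
      images.push("linear-gradient(currentColor, currentColor)");
      positions.push(`${x * px}px ${y * px}px`);
    })
  );

  return [
    `width: ${8 * px}px; height: ${8 * px}px;`,
    `background-image: ${images.join(", ")};`,
    `background-position: ${positions.join(", ")};`,
    `background-size: ${px}px ${px}px;`,
    `background-repeat: no-repeat;`,
  ].join(" ");
}
```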

It chooses a variant based on the context window. For example, a period after a sentence ("hello.") should look different than inside a domain ("example.com").

My workflow now is: open the gallery -> generate the CSS in the console -> copy the result into the static page.

It's a very small, dumb tool, but I want it for an upcoming project.

If you have any feedback, maybe some pitfalls, or a better approach, I'd love to hear them.

2

WhatsApp Backup Reader – Offline Viewer #

github.com
0 comments · 3:04 AM · View on HN
Built this after a business partnership went bad and I needed WhatsApp conversations for legal proceedings.

My lawyer had me screenshotting hundreds of messages, trying to keep them in order, only for opposing counsel to question whether anything had been tampered with. I looked at existing tools, but they choke on large chats and have no way to bookmark or search efficiently - useless when you're digging through years of conversation for specific evidence.

So I made this. Drop in your WhatsApp export zip, get a proper chat interface with all media. The original export stays untouched (important for evidence), bookmarks and annotations layer on top. Voice messages get transcribed locally via Whisper/WebGPU - nothing leaves your machine.

Tested with 18k+ messages, no issues. Runs in browser or as an Electron app for all desktop platforms (Android and iOS coming soon).

Feedback welcome, especially from anyone who's dealt with digital evidence.

2

I built a website that runs itself. Roast my AI-generated content #

stvck.dev
3 comments · 9:12 AM · View on HN
Hey HN! OP here.

I was too lazy to check 50+ RSS feeds daily, so I built a robot to do it for me.

This is a 100% non-profit, for-fun project (no ads, no tracking, just code). I built it to learn Next.js and LLM automation.

The Request: Roast it hard :)

- Is the AI summarizing things correctly or just hallucinating?

- Is the UI clean or "developer art"?

Original sources are linked in every post. Thanks for checking it out!

2

A deterministic code-rewrite engine that learns from one example #

0 comments · 9:18 AM · View on HN
I built a tool that learns structural code transformations from a single before/after example.

It isn’t a transformer or LLM — it doesn’t generate code. It extracts the structural pattern between two snippets and compiles a deterministic rewrite rule. Same input → same output, every time.

Examples:

• console.log(x) → logger.info(x) generalises to console.log(anything)

• require("x") → import x from "x"

• ReactDOM.render → createRoot

• custom project conventions

The rules apply across an entire codebase or through an MCP plugin inside Claude Code, Cursor, or plain CLI.

It runs entirely on CPU and learns rules in real time.

Tool: https://hyperrecode.com

I’d really appreciate feedback on the approach, design, or failure cases.

2

Freedom Graph – FI calculator that models sequence-of-returns risk #

freedomgraph.com
0 comments · 4:03 PM · View on HN
Hi HN,

I built Freedom Graph because I wanted an FI calculator that modeled market variability and flexible spending more realistically. Lots of calculators assume constant returns, fixed withdrawal rules, and the "real = nominal – inflation" approximation. That’s fine for ballpark numbers, but not great when you care about sequence risk or decisions like, "should I spend another year working?"

These are the real-world factors I wanted to model explicitly:

* Sequence of Returns Risk: Optional market randomness (mix of positive/negative years, long-run ~10% CAGR) to show how early retirement plans can fail even when the long-term average looks fine

* Correct real-return math: Uses the Fisher equation instead of the linear approximation, which compounds differently over long time horizons (see the short example after this list)

* Adaptive strategies: Model “one more year” scenarios and spending flexibility to see how behavior affects success probabilities
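A quick illustration of why that matters over a long horizon (the rates below are examples, not the app's defaults):

```ts
// Illustrative only: compare the Fisher equation to the "nominal - inflation"
// shortcut over a 30-year horizon. Rates are example values, not app defaults.
const nominal = 0.07;    // 7% nominal return
const inflation = 0.03;  // 3% inflation
const years = 30;

const fisherReal = (1 + nominal) / (1 + inflation) - 1; // ≈ 3.883%
const approxReal = nominal - inflation;                  // 4.000%

const growth = (rate: number) => Math.pow(1 + rate, years);

console.log(growth(fisherReal).toFixed(3)); // ≈ 3.136x purchasing power
console.log(growth(approxReal).toFixed(3)); // ≈ 3.243x, overstating by ~3%
```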

Other QoL things:

* Built with React + Vite; no input data is sent anywhere

* Local storage persists inputs between browser sessions

* FI income automatically adjusts when you hit your target

* Dark/light mode

I’d really appreciate feedback on both the UX and assumption/behavior levers. If you think something’s wrong or misleading, please tell me.

Thanks!

2

ZON-TS – 50–65% fewer LLM tokens, zero parse overhead, better than TOON/CSV #

zonformat.org
0 comments · 4:51 PM · View on HN
hey HN — roni here, full-stack dev out of india (ex-gsoc @ internet archive).

spent last weekend hacking ZON-TS because json was torching half my openai/claude budget on dumb redundant keys — hit that wall hard while prototyping agent chains.

result: tiny TS lib (<2kb, 100% tests) that zips payloads ~50% smaller (692 tokens vs 1300 on gpt-5-nano benches) — fully human-readable, lossless, no parse tax.

drop-in for openai sdk, langchain, claude, llama.cpp, zod validation, streaming... just added a full langchain chain example to the readme (encode prompt → llm call → decode+validate, saves real $$ on subagent loops).

quick try:

```ts
// npm i zon-format
import { encode, decode } from 'zon-format';

const zon = encode({ foo: 'bar' });
console.log(decode(zon));
```

github → https://github.com/ZON-Format/ZON-TS

benches + site → https://zonformat.org

YC’s fall rfs nailed it — writing effective agent prompts is brutal when every token adds up. if you’re in a batch grinding observability (helicone/lemma vibes) or hitting gemini limits like nessie did, what’s your biggest prompt bloat headache right now? paste a sample below and i’ll zon it live.

feedback (harsh ok) very welcome

cheaper tokens ftw

1

Replicator-Publisher-Subscriber PostgreSQL with Xmin and Rust #

github.com
0 comments · 4:06 PM · View on HN
We upgraded our open-source database replicator with publisher-subscriber features.

The Rust CLI now replicates PostgreSQL databases without requiring wal_level=logical on the source. It uses PostgreSQL's xmin system column to detect changes, enabling CDC-style replication from any managed PostgreSQL service—no configuration changes needed.
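The CLI itself is Rust; purely as an illustration of the xmin idea, the change-detection query looks roughly like this (simplified: a real implementation also has to handle transaction-ID wraparound and rows from in-flight transactions):

```ts
// Rough illustration of xmin-based change detection (not the project's Rust code).
// Caveat: real implementations must also handle xid wraparound and in-flight txns.
import { Client } from "pg";

async function pollChanges(client: Client, lastXmin: string) {
  // xmin is a system column on every row; casting lets us compare it numerically.
  const { rows } = await client.query(
    `SELECT *, xmin::text::bigint AS row_xmin
       FROM my_table                       -- hypothetical table name
      WHERE xmin::text::bigint > $1
      ORDER BY xmin::text::bigint`,
    [lastXmin]
  );
  return rows; // rows inserted or updated since the last poll
}
```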

1

A deterministic code-rewrite engine that learns from one example #

0 comments · 9:16 AM · View on HN
I built a tool that learns structural code transformations from a single before/after example.

It isn’t a transformer or LLM — it doesn’t generate code. It extracts the structural pattern between two snippets and compiles a deterministic rewrite rule. Same input → same output, every time.

Examples:

• console.log(x) → logger.info(x) generalises to console.log(anything)

• require("x") → import x from "x"

• ReactDOM.render → createRoot

• custom project conventions

The rules apply across an entire codebase or through an MCP plugin inside Claude Code, Cursor, or plain CLI.

It runs entirely on CPU and learns rules in real time.

Tool: https://hyperrecode.com

I’d really appreciate feedback on the approach, design, or failure cases.

1

Turn any API into an embeddable AI agent #

gethelmagent.com
0 comments · 1:59 PM · View on HN
Hey HN! I built a platform that lets you upload an OpenAPI spec and get an AI-powered chat agent you can embed on your website.

- The motivation: I kept seeing companies with perfectly good APIs, but their users still struggled with complex UIs or had to read through documentation. Meanwhile, building a custom AI agent requires significant engineering resources most teams don't have.

- How it works: Upload your OpenAPI/Swagger spec (or paste your API docs). The system maps out your endpoints and parameters. You get an embeddable widget that understands natural language queries. When users ask questions, the agent translates them to API calls and returns results conversationally.

Example: An e-commerce site with an inventory API. Instead of navigating filters, users ask "Do you have running shoes in size 10 under $100?" and get instant results.

- Technical details: Uses LLM function calling to map queries to API endpoints. Handles authentication (API keys, OAuth). Rate limiting and caching to avoid hammering your API. Works with REST APIs (GraphQL support coming).
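As an illustration of the function-calling step, an endpoint like GET /products from a spec might be surfaced to the model as a tool definition roughly like this (the endpoint and schema here are hypothetical, not from any real customer spec):

```ts
// Hypothetical example: how a "search products" endpoint from an OpenAPI spec
// might be exposed to the model as a function-calling tool definition.
const searchProductsTool = {
  type: "function",
  function: {
    name: "searchProducts",
    description: "Search the store's inventory API (GET /products).",
    parameters: {
      type: "object",
      properties: {
        query: { type: "string", description: "Free-text search, e.g. 'running shoes'" },
        size: { type: "number", description: "Shoe/clothing size filter" },
        maxPrice: { type: "number", description: "Upper price bound in USD" },
      },
      required: ["query"],
    },
  },
} as const;
// The agent passes tools like this to the LLM; when the model picks one and fills
// in arguments, the backend executes the real HTTP call and returns the response
// for the model to phrase conversationally.
```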

Current state: Functional MVP, testing with a few early users. Still figuring out edge cases around complex APIs and auth flows.

- What I'm looking for feedback on: Have you encountered this problem? How did you solve it? Security concerns I should be thinking about? Is this actually useful or just a cool demo?

1

Page Builder in Pure TypeScript, No Framework Dependencies #

dev.tukona.com
0 comments · 1:56 PM · View on HN
I built a simple page builder for non-technical users with built-in multilingual support.

The client is vanilla TypeScript - no React, Angular, or Vue because I wanted a smaller runtime, full control over the codebase, and freedom from framework churn.

Components are self-contained and composable (adding new ones doesn't require touching existing code). The backend is C# / .NET 10 with Entity Framework.
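One common way to get that property in vanilla TypeScript is a small component registry; this sketch is purely illustrative and not the project's actual code:

```ts
// Purely illustrative: one way to make page-builder components self-contained
// and composable in vanilla TypeScript (not the project's actual code).
interface BlockComponent {
  readonly type: string;                       // e.g. "hero", "gallery"
  render(target: HTMLElement, props: Record<string, unknown>): void;
}

const registry = new Map<string, BlockComponent>();

export function registerBlock(block: BlockComponent): void {
  registry.set(block.type, block);             // new blocks just register themselves
}

export function renderPage(
  root: HTMLElement,
  blocks: { type: string; props: Record<string, unknown> }[]
) {
  for (const { type, props } of blocks) {
    registry.get(type)?.render(root, props);   // existing code never changes
  }
}

// Adding a component is a new file that calls registerBlock(...)
registerBlock({
  type: "heading",
  render(target, props) {
    const h = document.createElement("h2");
    h.textContent = String(props.text ?? "");
    target.appendChild(h);
  },
});
```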

This is an early prototype. No account required - sessions use anonymized IP (SHA-256 + salt).

Link: https://dev.tukona.com/

It is self-hosted in my homelab (based in the EU), so it might be slow under load :)

Looking for feedback on the UX. My next goal is adding specialized components for a specific industry vertical.

1

Tool to Submit Your Startup to 100 High-DR Directories #

listmy.site
0 comments · 5:00 PM · View on HN
Hi everyone, I built this because I noticed many early-stage founders struggle with visibility. Even if you have a good product, it’s difficult for Google (and users) to discover it without basic backlinks and citations.

Directory submissions still help, but doing them manually across dozens of platforms takes a lot of time, and most “directory lists” online are outdated or filled with low-quality sites. I wanted a more reliable and consistent approach.

I curated 100+ high-authority directories and built a simple service that submits your startup to all of them manually. The focus is accuracy, consistency, and approval rate—not automation or spam.

If you have thoughts, feedback, or suggestions for other directories I should add, I’d love to hear it. Happy to answer any questions.

1

A Simple Weekly Launch Platform for Solo Founders #

sololaunches.com
0 comments · 5:01 PM · View on HN
Hi everyone, I built this because most launch platforms feel noisy and competitive. When I launched my earlier projects, they would get pushed down the feed in minutes, which made it difficult to get meaningful feedback or early users.

Solo Launches is a small platform designed for solo founders and indie hackers. Instead of a crowded feed, you choose a weekly launch slot. Each product gets its own simple page, a backlink, and a chance to receive comments and feedback from builders who browse the weekly batch.

There’s no algorithm and no competition elements. The goal is to create a calmer launch experience where early projects have more room to be seen.

If you have feedback, ideas, or suggestions for improving the platform, I’m happy to hear them.

1

Whisper Money – End-to-End Encrypted Personal Finance App #

whisper.money
0 comments · 1:54 PM · View on HN
I built Whisper Money, a personal finance app where all your data is end-to-end encrypted and only you hold the keys.

Site: http://whisper.money/

Code: https://github.com/whisper-money/whisper-money

How it works (high-level)

- All sensitive data is encrypted in the browser before it hits the server (rough sketch below)

- The server only stores encrypted blobs for sync

- Decryption happens only on your devices
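To make the browser-side step concrete, here's the rough shape of client-side encryption with WebCrypto. AES-GCM and the key handling below are illustrative choices, not necessarily what the app uses; the repo has the real scheme:

```ts
// Illustrative only: AES-GCM via WebCrypto as an example of encrypting in the
// browser before sync. The app's actual crypto and key management live in the repo.
async function encryptForSync(
  key: CryptoKey,
  record: object
): Promise<{ iv: string; blob: string }> {
  const iv = crypto.getRandomValues(new Uint8Array(12));          // fresh nonce per record
  const plaintext = new TextEncoder().encode(JSON.stringify(record));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
  return {
    iv: btoa(String.fromCharCode(...iv)),
    blob: btoa(String.fromCharCode(...new Uint8Array(ciphertext))), // server only sees this
  };
}
```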

Looking for feedback on

- Security model / crypto choices

- Key management UX (backup/recovery)

- Whether this is compelling vs. existing tools

1

Quintus Calendars – An alternative to the irregular Gregorian calendar #

quintus.sh
0 comments · 3:14 AM · View on HN
Hello! I'm sharing this Quintus calendar format - 12 months of 30 days, with 5-day weeks and some overflow days - as an alternative to the more common Gregorian calendar.

This website runs a time server for the current Quintus date, alongside conversions between dates for different calendar formats on the landing page. Support for converting scheduled events in the ".ics" format is planned for the next few months.

A hand-pressed 2026 edition is also offered now and has been quite fun to make. Questions, critiques, or ideas are all appreciated!

1

How to get sha Dow ba nn ed #

0 comments · 9:24 AM · View on HN
post something on hn that is not yc backed
1

Repack AI – Turn Any Article/Video URL into 10 Social Content Pieces #

repackai.co
0 comments · 9:20 AM · View on HN
I’ve been building a small MVP called Repack AI. You paste in an article or video URL, and it instantly generates a content pack for 10+ social platforms — including text, visuals, and short faceless videos.

Current capabilities:

- Platform-specific text: X/Twitter threads, LinkedIn summaries, IG/Threads captions

- Auto-generated visuals + short vertical videos (TikTok/Reels/Shorts style)

- Works with article URLs, newsletters, and (early testing) YouTube links

It’s early-stage, so outputs may need light editing. I’m mainly looking for feedback from people who publish or repurpose content regularly.

Try it here (free Credits included): https://repackai.co/

Happy to hear what works, what feels missing, and what you'd expect next.

1

Human Code Principles – 12 principles for human-centered software dev #

humancodeprinciples.org
0 comments · 2:07 PM · View on HN
Hi HN,

I've been writing software for a long time, and over the years I've felt that most developer principles focus mainly on code structure or architecture. Useful, but only one part of the job.

I wanted something more human-centered -- principles about how we think about long-term consequences, avoid burnout, treat each other, and work responsibly with tools like AI. So I wrote a small set of 12 principles I call the Human Code Principles.

They're not a methodology or a set of rules, just an attempt to articulate values that seem to lead to healthier and more sustainable software work. I don't consider them finished or universal, and I'm interested in critique.

If you think something is missing, unclear, naive, or misguided, I'd appreciate the feedback.

Here's the link again in case the URL field is hidden on mobile: https://humancodeprinciples.org/

1

Isogen – Lightweight AI Coding Tool (Rust and JavaScript, <50MB, BYOK) #

slidebits.com
0 comments · 3:03 PM · View on HN
I built an AI coding tool optimized for my workflow. VS Code forks use too much memory, and I'm over having AI agents rewrite files and then reviewing complicated diffs in the Accept/Reject UI.

I built Isogen, which uses about as much memory as a Chrome tab instead of spiking up to 1 GB of RAM. You drag and drop or paste files into an isolated context and do fast generations file by file. This approach lets me keep a strong mental model of the codebase. I also added a snapshot feature that keeps the history of the files and the generated output. File copies are saved locally with SQLite.

Bring your own key for inference, which allows unlimited generations. The only models supported for now are the fast ones from Gemini, ChatGPT, Claude, and Grok.