Daily Show HN


Show HN posts for February 8, 2026

66 posts
329

LocalGPT – A local-first AI assistant in Rust with persistent memory #

github.com
156 comments · 1:26 AM · View on HN
I built LocalGPT over 4 nights as a Rust reimagining of the OpenClaw assistant pattern (markdown-based persistent memory, autonomous heartbeat tasks, skills system).

It compiles to a single ~27MB binary — no Node.js, Docker, or Python required.

Key features:

- Persistent memory via markdown files (MEMORY, HEARTBEAT, SOUL) — compatible with OpenClaw's format
- Full-text search (SQLite FTS5) plus semantic search via local embeddings, no API key needed (see the sketch below)
- Autonomous heartbeat runner that checks tasks on a configurable interval
- CLI, web interface, and desktop GUI
- Multi-provider: Anthropic, OpenAI, Ollama, etc.
- Apache 2.0
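To make the hybrid-search idea above concrete, here is a generic sketch of the keyword half (not LocalGPT's Rust code; it assumes a SQLite build with FTS5 enabled, which CPython's bundled `sqlite3` usually has):

```python
import sqlite3

# Toy FTS5 index over markdown memory files (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(path, body)")
conn.execute("INSERT INTO notes VALUES (?, ?)",
             ("MEMORY.md", "Prefer Rust for CLI tools; ship a single binary"))
conn.commit()

# bm25() scores matches; smaller values rank better in SQLite's convention.
rows = conn.execute(
    "SELECT path, bm25(notes) FROM notes WHERE notes MATCH ? ORDER BY bm25(notes)",
    ("rust",),
).fetchall()
print(rows)
```

The semantic half would sit alongside this as an embedding index, with the two result lists merged before they reach the model.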

Install: `cargo install localgpt`

I use it daily as a knowledge accumulator, research assistant, and autonomous task runner for my side projects. The memory compounds — every session makes the next one better.

GitHub: https://github.com/localgpt-app/localgpt Website: https://localgpt.app

Would love feedback on the architecture or feature ideas.

52

CodeMic #

codemic.io
29 comments · 10:42 AM · View on HN
With CodeMic you can record and share coding sessions directly inside your editor.

Think Asciinema, but for full coding sessions with audio, video, and images.

While replaying a session, you can pause at any point, explore the code in your own editor, modify it, and even run it. This makes following tutorials and understanding real codebases much more practical than watching a video.

Local-first and open source.

p.s. I’ve been working on this for a little over two years and would love to hear your thoughts.

38

ArtisanForge: Learn Laravel through a gamified RPG adventure #

artisanforge.online
3 comments · 7:15 AM · View on HN
Hey HN,

I built ArtisanForge, a free platform to learn PHP and Laravel through a medieval-fantasy RPG. Instead of traditional tutorials, you progress through kingdoms, solve coding exercises in a browser editor, earn XP, join guilds, and fight boss battles.

Tech stack: Laravel 12, Livewire 3, Tailwind CSS, Alpine.js. Code execution runs sandboxed via php-wasm in the browser.

What's in there:

- 12 courses across 11 kingdoms (PHP basics to deployment)

- 100+ interactive exercises with real-time code validation using AST analysis

- AI companion (Pip the Owlox) that uses the Socratic method – never gives direct answers

- Full gamification: XP, levels, streaks, achievements, guilds, leaderboard

- Multilingual (EN/FR/NL)

The idea came from seeing too many beginners drop off traditional courses. Wrapping concepts in quests and progression mechanics keeps motivation high without dumbing down the content.

Everything is free, no paywall, no premium tier. Feedback welcome – especially from Laravel devs and educators.

19

Emergent – Artificial life simulation in a single HTML file #

emergent-ivory.vercel.app
1 comment · 11:13 PM · View on HN
I built an artificial life simulator that fits in one HTML file (~1,400 lines) with Claude Opus 4.6. Interestingly, I didn't ask it to build this game specifically; I asked it to create software that didn't exist before, without using any dependencies or third-party libraries and without asking questions.

At first, Opus 4.6 made a Photoshop clone that barely worked, though it had a neat UI with all the common image-editor features. I told it that wasn't good enough and asked it to really build something that wasn't there yet, and it came up with Emergent. Please check it out and tell me what you think. It's essentially a Game of Life at its core, but look at the rest of the UI, the game stats, and features like genome mutation and species evolution.

Features:

- Continuous diet gene (0–1) drives herbivore/carnivore/omnivore specialization
- Spatial hash grid for performant collision detection
- Pinch-to-zoom and tap-to-feed on mobile
- Real-time population graphs, creature inspector, and event log
- Drop food, trigger plagues, cause mutation bursts

It amazes me to some degree; I'm going to keep digging into the abyss of the vibes :)

15

WeaveMind – AI Workflows with human-in-the-loop #

weavemind.ai
5 comments · 8:36 AM · View on HN
Hi! I spent 3 years evaluating LLMs for OpenAI, Anthropic, METR, and other labs. Kept running into the same problem: AI workflows break in production because there's no clean way to add human oversight, handle failures gracefully, or deploy without choosing between "all cloud" and "all self-hosted."

WeaveMind is a visual workflow builder in Rust. The core idea is that humans and AI are interchangeable nodes in the same graph. When a workflow needs judgment, it pauses, notifies the team via a browser extension, and the first responder picks it up. There's also an AI assistant that generates workflows from natural language, and durable execution so nothing gets lost on failure.

Early beta, free (bring your own API keys). Planning to open source once stable (Q2 2026). Feedback welcome. Discord: https://discord.gg/FGwNu6mDkU

14

Poisson – Chrome extension that buries your browsing in decoy traffic #

github.com
6 comments · 10:37 PM · View on HN
I built a Chrome extension that generates noise traffic to dilute your browsing profile. Instead of trying to hide what you do online (increasingly difficult), it buries your real activity in a flood of decoy searches, page visits, and ad clicks across dozens of site categories.

The core idea is signal dilution — the same principle behind chaff in radar countermeasures and differential privacy in data science. If you visit 50 pages today and Poisson visits 500 more on your behalf, anyone analyzing your traffic (ISP, data broker, ad-tech) sees noise, not signal.

How it works:

- Uses a Poisson process for scheduling, so timing looks like natural human browsing rather than mechanical intervals (see the sketch after this list)
- Opens background tabs (never steals focus), injects a content script that scrolls, hovers, and clicks links to look realistic
- Batches tasks within Chrome's 1-minute alarm minimum, dispatching at calculated Poisson offsets
- Four intensity levels: ~18/hr to ~300/hr
- Configurable search engines, task mix (search/browse/ad-click ratio), and site categories
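For anyone unfamiliar with Poisson scheduling, here is a minimal sketch of the idea (illustrative Python, not the extension's JS): with a target rate of λ events per hour, the gaps between events are drawn from an exponential distribution, which is what makes the timing look organic rather than mechanical.

```python
import random

def poisson_offsets(rate_per_hour: float, horizon_minutes: float = 60.0):
    """Yield event times (minutes from now) for a Poisson process."""
    rate_per_minute = rate_per_hour / 60.0
    t = 0.0
    while True:
        t += random.expovariate(rate_per_minute)  # exponential inter-arrival gap
        if t > horizon_minutes:
            return
        yield t

# e.g. the lowest intensity level, ~18 events/hour
print([round(t, 1) for t in poisson_offsets(18)])
```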

What it explicitly does NOT do:

- No data collection, telemetry, or analytics
- No external server communication
- No access to your cookies, history, or real tabs
- No accounts or personal information required

Every URL it will ever visit is hardcoded in the source. Every action is logged in a live feed you can inspect. The whole thing is ~2,500 lines of commented JS.

I know this approach has real limitations — it doesn't defeat browser fingerprinting, your ISP can still see the noise domains, and a sufficiently motivated adversary could potentially distinguish real traffic from generated traffic through timing analysis or behavioral patterns. This is one layer in a defense-in-depth approach, not a complete solution.

Similar prior art: TrackMeNot (randomized search queries since 2006) and AdNauseam (clicks all ads to pollute profiles). Both from NYU researchers. Google banned AdNauseam from the Chrome Web Store, which says something.

Code: https://github.com/Daring-Designs/poisson-extension

Not on the Chrome Web Store — you load it unpacked. MIT licensed.

10

Analyzing 9 years of HN side projects that reached $500/month #

6 comments · 8:32 AM · View on HN
After analyzing 9 years of HN side project posts, I found some counter-intuitive patterns about what makes projects profitable.

Three things that stood out:

1. B2B dominates: 73% of $500+/month projects target businesses, not consumers

2. Speed matters more than polish: Average 47 days from launch to first sale. Most started charging with 3-5 core features.

3. Pricing cluster: 87% price between $20-49. Low enough for impulse purchase, high enough to be sustainable.

I compiled this into a dataset of nearly 700 projects with tech stack, pricing, and timeline data. Built it because I was planning my own side project and wanted to see patterns, not just success stories.

Available at https://haileyzhou.gumroad.com/l/pknktt ($49) as an HTML report plus the raw .csv data.

Data source note: All from public HN posts 2017-2025. I did the cleaning, categorization, and cross-referencing.

Happy to discuss methodology or findings. Curious what patterns others see or if this matches your experience?

9

Curated list of 1000 open source alternatives to proprietary software #

opensrc.me
0 comments · 12:14 PM · View on HN
Hey people! I've been compiling a database of open-source alternatives, and I'm super proud of it so far. It serves as a searchable directory for high-quality open-source software. After many hours I've managed to catalog 1,000+ open-source projects.

I've seen other sites with the same premise, plus all the GitHub Awesome Lists, but they don't show whether a repo is active, abandoned, experimental, or buggy/unstable, or whether it has a restrictive license or corporate influence, the way this directory does.

Thanks for your time. If you have any recommendations for features or additions, I'd love to hear them.

8

WhatsApp Chat Viewer – exported chats as HTML #

github.com
0 comments · 11:27 PM · View on HN
I built this to make it easier to review exported WhatsApp conversations. It generates an HTML page with embedded images, videos, and audio players. It can also transcribe audio messages using OpenAI's API and correct them using an LLM with conversation context for better accuracy.
7

Google Maps but for your repo (Open Source) #

github.com
1 comment · 11:30 AM · View on HN
Hi HN, I built Repomap, a tool that generates interactive architecture diagrams from any GitHub repository. It uses a Rust + tree-sitter engine to analyze the codebase and produces a D3-based graph UI with clustering, zoom/pan, and live progress updates.
6

WrapClaw – a managed SaaS wrapper around Open Claw #

2 comments · 9:53 PM · View on HN
Hi HN

I built WrapClaw, a SaaS wrapper around Open Claw.

Open Claw is a developer-first tool that gives you a dedicated terminal to run tasks and AI workflows (including WhatsApp integrations). It’s powerful, but running it as a hosted, multi-user product requires a lot of infra work.

WrapClaw focuses on that missing layer.

What WrapClaw adds:

A dedicated terminal workspace per user

Isolated Docker containers for each workspace

Ability to scale CPU and RAM per user (e.g. 2GB → 4GB)

A no-code UI on top of Open Claw

Managed infra so users don’t deal with Docker or servers

The goal is to make Open Claw usable as a proper SaaS while keeping the developer flexibility.

This is early, and I’d love feedback on:

What infra controls are actually useful

Whether no-code on top of terminal tools makes sense

Pricing expectations for managed compute

Link: https://wrapclaw.com

Happy to answer questions.

5

Solnix – an early-stage experimental programming language #

solnix-lang.org
0 comments · 7:41 AM · View on HN
I’m building Solnix, an experimental programming language as a learning + research project. It’s still early, and many design decisions (syntax, types, tooling) are intentionally not fixed yet.

My goal right now is feedback, not promotion:

What design mistakes should I watch out for?

What usually makes new languages hard to adopt?

What would you avoid if you were starting today?

Repo & docs: https://solnix-lang.org/

I’d really appreciate honest criticism. Thanks for your time.

5

A free, browser-only PDF tools collection built with Kimi k2.5 #

pdfuck.com
0 comments · 10:18 AM · View on HN
Hi HN,

I built a collection of 40+ PDF tools using Kimi k2.5 (with about $50 worth of credits) and a lot of “vibe coding”.

The site: https://pdfuck.com (almost forgot I owned this domain)

What it does:

40+ PDF-related tools

Completely free

No downloads, everything runs in the browser

All processing is done locally in your browser for better privacy

It’s deployed on Cloudflare, so the running cost is basically close to zero.

This started as a small experiment and slowly grew into a fairly complete PDF toolbox. I’d really appreciate feedback on performance, UX, and what tools you think are missing.

5

The biggest achievement of my life so far #

github.com
0 comments · 7:21 PM · View on HN
Hello everyone,

I have always loved coding, and for a while I had been thinking of starting an open-source project. It turned out to be awesome, and I hope you like it.

I present Explore Singapore, which I created as an open-source intelligence engine that runs retrieval-augmented generation (RAG) over Singapore's public policy documents, legal statutes, and historical archives.

The objective was to build a domain-specific search engine that lets LLM systems reduce errors by using government documents as their exclusive information source.

What the project does: it provides legal information quickly and reliably (thanks to RAG) without wading through long PDFs on government websites, and it helps travellers get insights about Singapore faster.

Target audience: Python developers who keep hearing about "RAG" and AI agents but haven't built one yet, or who are building one and are stuck somewhere, plus Singaporeans (obviously!).

Comparison (raw LLM vs. RAG-based LLM): to test the RAG implementation, I compared the output of my pipeline against the standard models (Gemini / Arcee AI / Groq) and against the same models with custom system instructions plus RAG. The results were striking. Query: "Can I fly a drone in a public park?" The standard LLM gave generic advice about "checking local laws" and safety guidelines. The RAG-backed LLM cited the Air Navigation Act, specified the 5 km no-fly zones, and linked to the CAAS permit page. The difference was clear, and it was evident the AI was not hallucinating.

Ingestion: the RAG architecture covers about 594 PDFs of Singaporean laws and acts, roughly 33,000 pages in total.

How I did it: I used Google Colab to build the vector database and metadata; converting the PDFs to vectors took about an hour.

How accurate is it: it's still in the development phase, but it provides near-accurate information thanks to multi-query retrieval. For example, if a user asks about "ease of doing business in Singapore", the logic breaks out the keywords "ease", "business", and "Singapore" and returns the relevant documents from the PDFs along with page numbers. It's a little hard to explain, but you can try it on the webpage. It's not perfect, but hey, I am still learning.

The tech stack:

- Ingestion: Python scripts using PyPDF2 to parse various PDF formats
- Embeddings: Hugging Face BGE-M3 (1024 dimensions)
- Vector database: FAISS for similarity search
- Orchestration: LangChain
- Backend: Flask
- Frontend: React and Framer

The RAG pipeline operates as follows:

- Chunking: the source text is divided into 150-token chunks with a 50-token overlap to maintain context across boundaries.
- Retrieval: when a user asks a question (e.g., "What is the policy on HDB grants?"), the system queries the vector database for the top k chunks (k=1) — see the sketch below.
- Synthesis: the system adds these chunks to the LLM prompt, which produces the final response including citation information.

Why do I say LLMs, plural: I wanted the system to be as crash-proof as possible, so Gemini is the primary LLM for responses, and if it fails (due to API limits or other reasons) the backup model (Arcee AI Trinity Large) handles the request.
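For readers new to RAG, a minimal sketch of that chunk-and-retrieve step might look like the following (not the project's actual pipeline: a small MiniLM model stands in for BGE-M3, and the source file name is hypothetical):

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(tokens, size=150, overlap=50):
    """Split a token list into overlapping chunks."""
    step = size - overlap
    return [" ".join(tokens[i:i + size]) for i in range(0, len(tokens), step)]

# Hypothetical source document standing in for one of the 594 PDFs.
doc_tokens = open("air_navigation_act.txt").read().split()
chunks = chunk(doc_tokens)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
vecs = np.asarray(model.encode(chunks, normalize_embeddings=True), dtype="float32")

index = faiss.IndexFlatIP(vecs.shape[1])  # inner product == cosine on normalized vectors
index.add(vecs)

query = np.asarray(
    model.encode(["Can I fly a drone in a public park?"], normalize_embeddings=True),
    dtype="float32",
)
scores, ids = index.search(query, 1)      # top-k retrieval with k=1
print(chunks[ids[0][0]][:300])            # context that gets added to the LLM prompt
```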

Don't worry: I have implemented different system instructions for different models, so the result is a good-quality product.

Current Challenges: I am working on optimizing the ranking strategy of the RAG architecture. I would value insights from anyone who has dealt with RAG returning irrelevant documents.

Feedback is the backbone of improving a platform, so it is most welcome.

Repository:- https://github.com/adityaprasad-sudo/Explore-Singapore

4

A2A Protocol – Infrastructure for an Agent-to-Agent Economy #

2 comments · 8:03 AM · View on HN
I’ve been thinking about the "last mile" problem for AI agents. We have agents that can code, plan, and browse, but they are still economically "trapped." They can't independently pay for their own API calls, compute, or data without a human-in-the-loop providing a credit card.

To solve this, I’m building the A2A (Agent-to-Agent) System, an open-source infrastructure designed to turn agents into independent economic actors.

What's under the hood?

Identity (a2trust): DID-based verifiable identity using @veramo/core. It allows agents to establish persistent reputations (EigenTrust) so they can trust each other without centralized gatekeepers.

Payments (a2pay): Built on ERC-7579 Smart Accounts. Agents can use Session Keys to execute transactions autonomously within specific constraints (time-limited, amount-capped, gas-abstracted).

Protocol (a2api): A marketplace layer that utilizes MCP (Model Context Protocol). Agents can discover services via machine-readable docs (llms.txt) and negotiate fees via standard interfaces.

Why this matters: Most current agent payment solutions are just wrappers around human wallets. A2A aims to build a native "Agent Economy" where an agent can earn revenue from its tools and spend it to hire other agents, creating a truly autonomous swarm.

The tech stack: TypeScript/Node.js, Viem / Permissionless.js for smart account abstraction, MCP SDK for inter-agent communication, and Base L2 for low-cost transactions.

I’d love to get your feedback on the architecture, especially on the security implications of delegating session keys to LLM-driven agents.

GitHub: https://github.com/swimmingkiim/a2a-project

4

Bhagavan – a calm, approachable app for exploring Hinduism #

bhagavan.io
0 comments · 4:22 PM · View on HN
Bhagavan is a calm, modern app for exploring Hinduism. It brings together philosophy, stories, scriptures, prayers and daily practices in one simple, accessible place. It’s designed for people who feel Hinduism can be overwhelming or hard to connect to and want a gentler, more modern way to explore it at their own pace.

What’s inside (all free):

• Guided exploration of Hinduism through structured learning paths

• Clear, accessible explanations of scriptures (Vedas, Upanishads, Smritis, Puranas)

• Complete Bhagavad Gita with translations and key takeaways

• Deity profiles with stories, symbolism and context

• Epic stories including the Ramayana and Panchatantra

• Prayers with translations, audio, and japa using a virtual mala

• Festival calendar with key dates, reminders and lunar phases

• Daily practices for reflection and focus

• Daily quizzes, crosswords and challenges

• Philosophy and spirituality concepts (e.g. dharma, karma, moksha)

• Daily horoscope

• 'Ask Bhagavan' for thoughtful, philosophy-rooted guidance

No ads. Just a calm space to learn and explore. Free to use, with all content accessible.

iOS: https://apps.apple.com/app/6741321101 Android: https://play.google.com/store/apps/details?id=bhagavan.id

Let me know what you guys think! Please do share with family and friends

3

I recreated Yahoo #

retrochat-4.emergent.host
0 comments · 8:49 PM · View on HN
I recreated Yahoo! Chat with cams in IMs and in the main chat rooms. There are worldwide rooms, all with voice and cams. You can't make your own user rooms yet, but it's working great. I can give you a preview: just register and I will grant access within 5 hours. I plan to go public in a month.
3

ParaGopher v1.3.0 – A retro Paratrooper (1982) clone written in Go #

github.com
0 comments · 11:13 PM · View on HN
Hey HN! I've been working on ParaGopher, a clone of the classic 1982 IBM PC game Paratrooper, built in Go with Ebitengine.

You control a turret defending your base against waves of helicopters dropping paratroopers. Tilt, shoot, and prevent them from landing — simple to pick up, surprisingly hard to master. Everything is rendered procedurally, so no image assets at all. Sprites (turret, helicopters, paratroopers, parachutes) are all generated with vector drawing at startup.

Pre-built binaries for macOS (Apple Silicon) and Windows are on the releases page. Otherwise it's just `go run cmd/game.go`.

Would love feedback (especially on gameplay feel and what features you'd want to see next).

3

I built a festival tracker that matches lineups to your music library #

apps.apple.com
0 comments · 1:53 PM · View on HN
Hi HN,

I built this app because I was tired of manually checking festival lineups against my music library every summer. I wanted a way to instantly see which festivals had the highest 'match' density for my specific taste.

Why I built this:

I missed one of my favorite artists at a festival last year simply because I didn't recognize their name on a crowded poster. I realized that with Apple Music/Spotify APIs, this discovery process should be automated.

Key Features:

Library Matching: It scans your library and ranks upcoming festivals by how many artists you actually follow.

Apple Sync: Seamlessly imports your top artists and creates custom schedules.

Offline-First: Since festival cell service is notoriously terrible, I focused on a robust local caching system so your schedule and maps work without a signal.

Technical Details:

Stack: React Native, MongoDB, AWS.

Data: I built a custom aggregator that scrapes and normalizes lineup data from various sources to handle the 'entity resolution' problem (e.g., ensuring 'Artist A' is the same person across different posters).
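As a tiny illustration of the normalization half of that entity-resolution step (a sketch under assumptions, not the app's actual matching logic):

```python
import re
import unicodedata

def normalize(name: str) -> str:
    """Strip accents, lowercase, and collapse punctuation so variants collide."""
    ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", " ", ascii_name.lower()).strip()

# The same artist spelled differently on two posters resolves to one key.
assert normalize("Beyoncé") == normalize("BEYONCE")
print(normalize("M.I.A."))  # -> "m i a"
```

A real pipeline would add fuzzy matching and alias tables on top of a key like this.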

Privacy: All music library analysis is performed locally on-device where possible.

I'd love to hear your thoughts on the UI or any technical questions about the matching logic. I'm also looking for feedback on how to improve the schedule-sharing feature for groups!

How do you find concerts and music festivals that match your taste? How many festivals do you attend per year?

Cheers!

3

Hivewire – A news feed where you control your algorithm weights #

hivewire.news
2 comments · 5:26 PM · View on HN
Hivewire is a news app that lets you define what you want to read about, rather than inferring it from your behavior. We process thousands of articles daily from hundreds of sources and rank them based on explicit preferences you set.

How it works:

• Instead of collaborative filtering or engagement-driven ranking, you assign weights across four levels (Focus, More, Less, Avoid), and the engine prioritizes the intersection of your high-weight topics while aggressively down-weighting what you don't care about (see the sketch after this list).

• Articles are clustered by story so you get one entry per development, not 15 versions of the same headline.

• Every morning, it pulls your top clusters and uses an LLM to generate a narrative briefing that summarizes what matters to you, delivered to your email.
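A minimal sketch of what explicit four-level weighting can look like (the weight constants and topics here are assumptions for illustration, not Hivewire's actual values):

```python
# Map the four user-facing levels to scoring weights (assumed values).
WEIGHTS = {"focus": 3.0, "more": 1.5, "less": -1.5, "avoid": -6.0}

def score(article_topics, prefs):
    """Sum the weights of every topic the user has expressed a preference on."""
    return sum(WEIGHTS[prefs[t]] for t in article_topics if t in prefs)

prefs = {"semiconductors": "focus", "ai": "more", "celebrity": "avoid"}
articles = [
    {"title": "New fab breaks ground", "topics": ["semiconductors", "ai"]},
    {"title": "Awards-night gossip", "topics": ["celebrity"]},
]
for a in sorted(articles, key=lambda a: score(a["topics"], prefs), reverse=True):
    print(f'{score(a["topics"], prefs):+.1f}  {a["title"]}')
```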

Currently web-only and English-language. We'd love feedback from the community on the relevance of feed results, the UI, and the quality of the clustering.

3

HalalCodeCheck – Verify food ingredients offline #

halalcodecheck.com
0 comments · 9:47 AM · View on HN
Built this to solve a personal problem: spending minutes in grocery stores reading labels and googling E-codes. Uses client-side OCR to scan ingredient lists and matches against 450+ E-codes, works completely offline after initial load.

E-codes can be verified by scanning or uploading a product label, or by searching the E-code database directly (by voice or typing).

Now looking for partnerships to extend the reach.

Appreciate the feedback.

2

I built a free dictionary API to avoid API keys #

github.com
0 comments · 8:24 AM · View on HN
Hi HN,

I built a free, open-source dictionary API for developers who need quick word lookups without authentication or paywalls.

It’s powered by Wiktionary and returns definitions, parts of speech, pronunciations (IPA), and examples in clean JSON.

The repository contains the API layer only; the data ingestion and processing pipeline that imports Wiktionary data into the database is maintained separately.

Details:

- No authentication required
- JSON-only REST API
- English language support for now
- Licensed under CC BY-SA 4.0 (same as Wiktionary)

API example: `GET /dictionaryapi/v1/definitions/en/happy`

Project link: https://github.com/suvankar-mitra/free-dictionary-rest-api

I'd really appreciate feedback on:

- API design / response shape
- Missing fields developers usually expect
- Anything that would make this more useful

Thanks for taking a look.

2

BestClaw – Simple OpenClaw/MoltBot for non-tech people #

bestclaw.host
0 comments · 11:59 AM · View on HN
I've been using OpenClaw/MoltBot a lot these past few days. I even set up instances for a few friends so they, too, can enjoy the possibilities of this project.

So I thought it would be good to have a way to deploy it for non-tech people, letting them bring their own key and avoid expensive markups.

I plan to offer simple plans that don't require accounts with OpenAI/Anthropic/Google, but for now people can BYOK, set it up for a reasonable price, and control their instance entirely.

1

LM Council – Let LLMs argue with each other so you don't have to #

lm-council.com
0 comments · 6:55 PM · View on HN
I built this, based on Karpathy's X post from a couple of months ago, while I was on paternity leave. It's my first product. It uses Convex, OpenRouter, and Vercel workflows to orchestrate the LLMs.

Prompt multiple LLMs at once; they blindly rank each response and synthesize a final output. It saves time switching back and forth between providers and comparing responses.
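A rough sketch of that council pattern (not the product's code; `call_model` is a hypothetical stand-in for whatever OpenRouter or provider client you use):

```python
import random

def call_model(model: str, prompt: str) -> str:
    """Hypothetical provider call; wire up your own client here."""
    raise NotImplementedError

def council(models: list[str], prompt: str) -> str:
    # 1. Every model answers the same prompt.
    answers = [call_model(m, prompt) for m in models]

    # 2. Blind ranking: answers are shuffled so judges can't favor themselves.
    shuffled = answers[:]
    random.shuffle(shuffled)
    ballot = "Rank these answers from best to worst:\n\n" + "\n---\n".join(shuffled)
    rankings = [call_model(m, ballot) for m in models]

    # 3. One model synthesizes a final output from the answers and rankings.
    synthesis = (
        f"Question: {prompt}\n\nCandidate answers:\n" + "\n---\n".join(shuffled)
        + "\n\nRankings:\n" + "\n".join(rankings)
        + "\n\nWrite a single improved final answer."
    )
    return call_model(models[0], synthesis)
```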

In my experience, the synthesized output tends to be more reliable than any single provider's answer.

My wife and kid think it's cool. I hope other people find it useful. A paid tier lets you bring your own OpenRouter key and adds search and file attachments.

I know this requires a sign up but I didn't know how to let people try it without some way to limit usage.

https://x.com/karpathy/status/1992381094667411768?lang=en

1

Asterbot – AI agent built from sandboxed WASM components #

github.com
0 comments · 6:55 PM · View on HN
For the past few months, I've been working on a WebAssembly (WASM) component model registry and runtime (built on wasmtime) called asterai. My goal is to help make the WASM component model mainstream, because I think it's a great way to build software. I think the ecosystem is missing a few key things and an open, central registry is one of those things.

Recently I saw how ClawHub had "341 malicious skills", and couldn't help but think how WASM/WASI resolves most of these issues by default, since everything is sandboxed.

So I've spent my weekend building Asterbot, a modular AI agent where every capability is a swappable WASM component.

Want to add web search? That's just another WASM component. Memory? Another component. LLM provider? Also a component.

The components are all sandboxed, they only have access to what you explicitly grant, e.g. a single directory like ~/.asterbot (the default). It can't read any other part of the system.

Components are written in any language (Rust, Go, Python, JS), sandboxed via WASI, and pulled from the asterai registry. Publish a component, set an env var to authorise it as a tool, and asterbot discovers and calls it automatically. Asterai provides a lightweight runtime on top of wasmtime that makes it possible to bundle components, configure env vars, and run it.

It's still a proof of concept, but I've tested all functionality in the repo and I'm happy with how it's shaping up.

Happy to answer any questions!

1

Multi-agent orchestration using OpenCode and LangGraph #

gitlab.com
0 comments · 7:25 PM · View on HN
I worked on this project recently for multi-agent co-development and orchestration using opensource tools, and I thought it might be helpful for others who are tackling similar problems.

The goal is to avoid/limit per-token tax and closed-source costs, and have more control over agent state and transitions. The project is early/nothing too advanced, but I hope it can inspire or assist someone else.

1

brew changelog – find upstream changelogs for Homebrew packages #

github.com
0 comments · 8:19 AM · View on HN
I often wanted to see what changed in a Homebrew package — but changelogs are usually buried somewhere in the upstream repo.

So I made `brew changelog`. It parses the formula or cask, looks at the upstream repo, and tries to locate changelog-like files: CHANGELOG, NEWS, HISTORY, etc. Then it either prints the changelog to terminal or opens it in your browser.

`brew tap pavel-voronin/changelog`
`brew changelog node -o`

You can tweak behavior with options like `--pattern`, `--print-url`, or `--allow-missing` (try `--help`).

Feedback or contributions welcome!

1

Aicpm – Verifiable AI provenance labels for web content #

0 comments · 8:31 PM · View on HN
AICPM (Authenticated AI Content Provenance Marking) is an open, provable way to label AI-generated text inside documents using signed chunk-level provenance.

Many current approaches (watermarks, detectors, platform labels) are brittle, unverifiable, or proprietary. AICPM uses provider-signed manifests and offline verifiers to produce transparency without central control or guessing.

Features:

- Provider-signed chunk provenance with signatures
- Deterministic verification (offline, browser-native)
- Test vectors + reference verifier included
- Browser extension shows "AI %" and a verified/edited breakdown
- Editor demo + mock provider for experimentation

Repo: https://github.com/Chattadude/aicpm

This is a reference architecture, not a detector or policy tool. It’s designed to support tool and platform adoption with clear verification semantics.

Happy to answer questions or clarify design tradeoffs.

1

ObsidianLinkBot – Telegram bot that saves articles as Obsidian notes #

github.com
0 comments · 8:38 PM · View on HN
I built a Telegram bot that takes any article link and saves it as a markdown note in my Obsidian vault for reading later.

How it works: paste a URL → trafilatura extracts the article content → saves a .md file with YAML frontmatter (source, date) and the article body.

Built with Python, python-telegram-bot, trafilatura, and pydantic. Can be run locally or with Docker.
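The fetch/extract/save step might look roughly like this (a sketch only, not the bot's code; the vault path is an assumption):

```python
from datetime import date
from pathlib import Path

import trafilatura

def save_article(url: str, vault: str = "~/Obsidian/Inbox") -> Path:
    """Fetch a URL, extract the readable article text, and write a Markdown note."""
    downloaded = trafilatura.fetch_url(url)        # raw HTML
    body = trafilatura.extract(downloaded) or ""   # main article text
    slug = url.rstrip("/").rsplit("/", 1)[-1] or "note"
    note = f"---\nsource: {url}\ndate: {date.today()}\n---\n\n{body}\n"
    path = Path(vault).expanduser() / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(note, encoding="utf-8")
    return path
```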

Looking for feedback on the approach and any features that would be useful.

1

Free Contact Form Forwarder #

simplecontactform.org
0 comments · 8:40 PM · View on HN
I've seen various versions of this around, but this one is inspired by a GoatCounter-style model: free to use, with a plea to self-host if you have high usage requirements.

SES is cheap, and since this tool just forwards to an email inbox, I imagine excessive usage would be rare.

Curious to hear what you all think.

1

RoundsKeeper score tracking app for games (Swift, no ads/tracking) #

testflight.apple.com
0 comments · 8:44 PM · View on HN
I'm an indie iOS developer launching RoundsKeeper, a score tracking app for board and card games. Looking for feedback before the App Store release in a few weeks.

Tech stack:

- Swift/SwiftUI (iOS 18+)
- Built in Xcode with Claude Code assistance for development velocity
- SwiftData for local persistence
- Native calculator-style input with custom keyboard
- Table view with inline editing for score adjustments

What it does: The app solves a simple problem: tracking scores during game nights without paper or fumbling with Notes. Two input modes (calculator and table), game history with analytics, and a dice roller.

Privacy/monetization:

- No ads, no analytics, no data collection
- Free with optional IAP for premium features (additional themes, advanced stats)
- No account required, no network requests

Development notes:

- This is my second shipped iOS app (the first was Mathintosh, a calculator app)
- Using Claude Code significantly accelerated development – particularly for UI refinements and SwiftUI layout debugging
- Biggest challenge was iCloud syncing setup, configuration, and testing

TestFlight: https://testflight.apple.com/join/SEBHWwxZ

Interested in any feedback on UX, feature requests, or technical critiques. What would you want from a scorekeeper app?

1

Vibe as a Code / VaaC – new approach to vibe coding #

npmjs.com
1 comment · 1:33 AM · View on HN
Small Vite plugin: you put decorators on empty functions, run the build, and the function bodies are generated at build time into ./ai. Your source files stay unchanged.

The idea is to prototype fast without handing over the project architecture to AI. Another use case is saving AI context-window space when generating generic code (e.g. JSX). For now it doesn't understand full project context.

This is a working prototype of the idea - curious what others think.

1

StyloShare – privacy-first anonymous file sharing with zero sign-up #

styloshare.com
0 comments · 5:06 AM · View on HN
Hi HN,

I built StyloShare, a small web app for anonymous, one-off file sharing.

The idea was to remove everything that usually gets in the way.

Key points:

Anonymous by default

Files auto-expire

Simple UX, works on low-end devices

Built with a focus on performance and minimal client overhead

Happy to answer technical or architectural questions.

Thanks!

1

Seedream 5.0: free AI image generator that claims strong text rendering #

seedream5ai.org
0 comments · 8:34 AM · View on HN
Found this today while looking for text to image tools that handle typography well.

Link: https://seedream5ai.org/

What it appears to offer:

• Text-to-image generation from prompts, positioned for creators and designers
• A focus on "text heavy graphics" and text rendering quality (based on how the site markets it)
• Extra utilities in the nav like an image upscaler and background remover
• A changelog page that describes a "My Images" workflow and an API endpoint for listing generations

Why it might be useful:

• If you regularly generate posters, banners, thumbnails, UI mockups, or ads where readable text matters, it could be worth a quick test.

Questions for anyone who tries it:

• How good is the actual text rendering versus other generators you've used?
• Does it stay consistent with longer phrases and mixed-language text?
• Any issues with generation speed, pricing clarity, or output restrictions (watermark, resolution, etc.)?

If you test it, please share prompt examples and results.

1

I Built a Free AI LinkedIn Carousel Generator #

carousel-ai.intellisell.ai
0 comments · 1:38 AM · View on HN
LinkedIn Carousel posts have the highest engagement rate, but as an engineer, instead of spending hours on Canva, I wanted to build a tool that can generate LinkedIn carousels automatically.

I built this for my own startup and would like to give it to the HN community for free.

Feel free to try it out—no email or signup required!

How it works:

1. Provide a prompt.
2. Pick a theme (Professional, Creative, Tech, Minimalist, or Casual).
3. Adjust settings (colors, number of slides, etc.) and add branding.

Tech stack: built with Firebase Studio. LLM: Gemini 2.5 Flash Lite (all I can afford while keeping it free, and I found it capable enough for the task). If there's interest, maybe I can add a pro version with stronger LLMs, and perhaps generate background images using Nano Banana.

1

Agent-fetch – Sandboxed HTTP client with SSRF protection for AI agents #

github.com
0 comments · 4:38 AM · View on HN
Built this because giving AI agents raw HTTP access is scary. agent-fetch is a drop-in HTTP client that blocks SSRF, DNS rebinding, private IP access, and redirect tricks — all at the request level.

It uses its own DNS resolver (Hickory DNS), validates all resolved IPs against a blocklist (loopback, RFC 1918, link-local, cloud metadata, etc.), and pins the TCP connection to the validated IP so there's no TOCTOU gap to exploit.
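The resolve-then-validate idea, sketched in Python for illustration (the actual crate is Rust, uses Hickory DNS, and pins the TCP connection to the validated IP; this sketch only shows the blocklist check):

```python
import ipaddress
import socket

# Private, loopback, link-local, and metadata ranges to refuse.
BLOCKED = [ipaddress.ip_network(n) for n in (
    "127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12",
    "192.168.0.0/16", "169.254.0.0/16", "::1/128", "fc00::/7",
)]

def resolve_and_check(host: str) -> list[str]:
    """Resolve a hostname and reject it if any answer is a blocked address."""
    ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
    for ip in ips:
        addr = ipaddress.ip_address(ip)
        if any(addr in net for net in BLOCKED):
            raise ValueError(f"blocked address {ip} for {host}")
    return sorted(ips)  # connect only to these validated addresses

print(resolve_and_check("example.com"))
```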

Also supports domain allowlists/blocklists, rate limiting, body size limits, and timeouts.

Available as a Rust crate and npm package (native Node.js bindings via NAPI).

Built for tool-based agent architectures (MCP, LangChain, etc.) where you control what the agent can call. Not a replacement for container isolation but if your agent only talks to the outside world through HTTP, this locks it down.

GitHub: https://github.com/Parassharmaa/agent-fetch

1

Sediment – Local semantic memory for AI agents (Rust, single binary) #

github.com
0 comments · 2:41 PM · View on HN
I've been increasingly relying on AI coding assistants. I recently had my first child, and my coding hours look different now. I prompt between feedings, sketch out ideas while he naps, and pick up where I left off later. AI lets me stay productive in fragmented time. But every session starts from zero.

Claude doesn't remember the product roadmap we outlined last week. It doesn't know the design decisions we already made. It forgot the feature spec we iterated on across three sessions. I kept re-explaining the same things.

I looked at existing memory solutions but never got past the door. Mem0 wants Docker + Postgres + Qdrant. I just want memory, not infrastructure. mcp-memory-service has 12 tools, which is just complexity tax on every LLM call. And anything cloud-hosted means my codebase context leaves my machine. The setup cost was always too high and privacy never guaranteed, so I stuck with CLAUDE.md files. They work for a handful of preferences, but it's a flat file injected into context every time. No semantic search, no cross-project memory, no decay, no dedup. It doesn't scale.

So I built Sediment. The entire API is 4 tools: store, recall, list, forget.

I deliberately kept it small. I tried adding tags, metadata, and expiration dates, and every parameter I added made the LLM worse at using it. With a bare store call that takes only the content, it just works. The assistant stores things naturally when they seem worth remembering and recalls them when context would help.
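For a sense of how small that surface is, here is a toy in-memory version of the four verbs (an illustration only; Sediment itself does real semantic search, decay, and dedup in Rust):

```python
class Memory:
    """Toy four-verb memory: store, recall, list, forget."""

    def __init__(self):
        self._items: dict[int, str] = {}
        self._next_id = 0

    def store(self, content: str) -> int:
        self._next_id += 1
        self._items[self._next_id] = content
        return self._next_id

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Naive keyword overlap stands in for embedding similarity here.
        q = set(query.lower().split())
        ranked = sorted(self._items.values(),
                        key=lambda c: len(q & set(c.lower().split())),
                        reverse=True)
        return ranked[:k]

    def list(self) -> list[str]:
        return list(self._items.values())

    def forget(self, memory_id: int) -> None:
        self._items.pop(memory_id, None)
```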

It's made a noticeable difference. My assistant remembers product ideas I brainstormed at 2am, the coding guidelines for each project, feature specs we refined over multiple sessions, and the roadmap priorities I set last month. It remembers across projects too.

I benchmarked it against 5 alternatives to make sure I wasn't fooling myself. 1,000 memories, 200 queries. Sediment returns the correct top result 50% of the time (vs 47% for the next best). When I update a memory, it always returns the latest version. Competitors get this right only 14% of the time. And it's the only system that auto-deduplicates (99% consolidation rate).

Everything runs locally. Single Rust binary, no Docker, no cloud, no API keys.

A few things I expect pushback on:

"4 tools is too few." I tested 8, 12, and more. Every parameter is a decision the LLM makes on every call. Tags alone create a combinatorial explosion. Semantic search handles categorization better because it doesn't require consistent manual labeling.

"all-MiniLM-L6-v2 is outdated." I benchmarked 4 models including bge-base-en-v1.5 (768-dim) and e5-small-v2. MiniLM tied with bge-base on quality but runs 2x faster. The model matters less than you'd think when you layer memory decay, graph expansion, and hybrid BM25 scoring on top.

"Mem0 supports temporal reasoning too." Mem0's graph variant handles conflicts via LLM-based resolution (ADD/UPDATE/DELETE) on each store, which requires an LLM call on every write. Their benchmarks use LOCOMO, a conversational memory dataset that tests a different use case than developer memory retrieval. The bigger issue is that there's no vendor-neutral, open benchmark for comparing memory systems. Every project runs their own evaluation on their own dataset. That's why I open-sourced the full benchmark suite: same dataset, same queries, reproducible by anyone. I'd love to see other tools run it too.

Benchmark methodology: 1,000 developer memories across 6 categories, 200 ground-truth queries, 50 temporal update sequences, 50 dedup pairs.

Landing page: https://sediment.sh

GitHub: https://github.com/rendro/sediment

Benchmark suite: https://github.com/rendro/sediment-benchmark