Daily Show HN


Show HN for February 28, 2026

62 posts
304

Now I Get It – Translate scientific papers into interactive webpages #

nowigetit.us
135 comments · 1:29 PM · View on HN
Understanding scientific articles can be tough, even in your own field. Trying to comprehend articles from others? Good luck.

Enter: Now I Get It!

I made this app for curious people. Simply upload an article and after a few minutes you'll have an interactive web page showcasing the highlights. Generated pages are stored in the cloud and can be viewed from a gallery.

Now I Get It! uses the best LLMs out there, which means the app will improve as AI improves.

Free for now - it's capped at 20 articles per day so I don't burn cash.

A few things I (and maybe you will) find interesting:

* This is a pure convenience app. I could just as well use a saved prompt in Claude, but sometimes it's nice to have a niche-focused app. It's just cognitively easier, IMO.

* The app was built for myself and colleagues in various scientific fields. It can take an hour or more to read a detailed paper so this is like an on-ramp.

* The app is a place for me to experiment with using LLMs to translate scientific articles into software. The space is pregnant with possibilities.

* Everything in the app is the result of agentic engineering, e.g. plans, specs, tasks, execution loops. I swear by Beads (https://github.com/steveyegge/beads) by Yegge and also make heavy use of Beads Viewer (https://news.ycombinator.com/item?id=46314423) and Destructive Command Guard (https://news.ycombinator.com/item?id=46835674) by Jeffrey Emanuel.

* I'm an AWS fan and have been impressed by Opus' ability to write good CFN. It still needs a bunch of guidance around distributed architecture but way better than last year.

64

Xmloxide – an agent-made Rust replacement for libxml2 #

github.com
63 comments · 11:44 PM · View on HN
Recently several AI labs have published experiments where they tried to get AI coding agents to complete large software projects.

- Cursor attempted to make a browser from scratch: https://cursor.com/blog/scaling-agents

- Anthropic attempted to make a C Compiler: https://www.anthropic.com/engineering/building-c-compiler

I have been wondering if there are software packages that can be easily reproduced by taking the available test suites and tasking agents to work on projects until the existing test suites pass.

After playing with this concept by having Claude Code reproduce redis and sqlite, I began looking for software packages where an agent-made reproduction might actually be useful.

I found libxml2, a widely used, open-source C library for parsing, creating, and manipulating XML and HTML documents. Three months ago it became unmaintained, with the notice: "This project is unmaintained and has [known security issues](https://gitlab.gnome.org/GNOME/libxml2/-/issues/346). It is foolish to use this software to process untrusted data."

With a few days of work, I was able to create xmloxide, a memory-safe Rust replacement for libxml2 that passes the compatibility suite as well as the W3C XML Conformance Test Suite. Performance is similar on most parsing operations and better on serialization. It comes with a C API so it can replace existing uses of libxml2.

- crates.io: https://crates.io/crates/xmloxide

- GitHub release: https://github.com/jonwiggins/xmloxide/releases/tag/v0.1.0

While I don't expect people to cut over to this new and unproven package, I do think there is something interesting to think about here in how coding agents like Claude Code can quickly iterate given a test suite. It's possible the legacy code problem that COBOL and other systems present will go away as rewrites become easier. The problem of ongoing maintenance to fix CVEs and update to later package versions becomes a larger percentage of software package management work.

49

Decided to play god this morning, so I built an agent civilisation #

github.com
34 comments · 2:05 PM · View on HN
at a pub in london, 2 weeks ago - I asked myself, if you spawned agents into a world with blank neural networks and zero knowledge of human existence — no language, no economy, no social templates — what would they evolve on their own?

would they develop language? would they reproduce? would they evolve as energy dependent systems? what would they even talk about?

so i decided to make myself a god, and built WERLD - an open-ended artificial life sim where the agents evolve their own neural architecture.

Werld drops 30 agents onto a graph with NEAT neural networks that evolve their own topology, 64 sensory channels, continuous motor effectors, and 29 heritable genome traits. communication bandwidth, memory decay, aggression vs cooperation — all evolvable. No hardcoded behaviours, no reward functions — they can evolve in any direction.
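As a toy illustration of how heritable, evolvable traits like these can work — this is my own Python sketch, not WERLD's actual code; the trait names are borrowed from the description above and the values are arbitrary:

```python
import random

def mutate_traits(parent, rate=0.2, scale=0.1, rng=None):
    """Child genome: copy each heritable trait from the parent,
    perturbing it with probability `rate` by Gaussian noise of
    standard deviation `scale`."""
    rng = rng or random.Random()
    child = {}
    for name, value in parent.items():
        if rng.random() < rate:
            value += rng.gauss(0.0, scale)
        child[name] = value
    return child

# Trait names taken from the post; starting values are arbitrary.
genome = {"communication_bandwidth": 0.5, "memory_decay": 0.3, "aggression": 0.1}
child = mutate_traits(genome, rng=random.Random(42))
```

Selection then comes for free: agents whose trait values help them survive long enough to reproduce pass those values on.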

Pure Python, stdlib only — brains evolve through survival and reproduction, not backprop. There's a Next.js dashboard ("Werld Observatory") that gives you a live-view: population dynamics, brain complexity, species trajectories, a narrative story generator, live world map.

thought this would be more fun as an open-source project!

can't wait to see where this could evolve - i'll be in the comments and on the repo.

https://github.com/nocodemf/werld

49

Visual Lambda Calculus – a thesis project (2008) revived for the web #

github.com
9 comments · 6:04 AM · View on HN
Originally built as my master's thesis in 2008, Visual Lambda is a graphical environment where lambda terms are manipulated as draggable 2D structures ("Bubble Notation"), and beta-reduction is smoothly animated.

I recently revived and cleaned up the project and published it as an interactive web version: https://bntre.github.io/visual-lambda/

GitHub repo: https://github.com/bntre/visual-lambda

It also includes a small "Lambda Puzzles" challenge, where you try to extract a hidden free variable (a golden coin) by constructing the right term: https://github.com/bntre/visual-lambda#puzzles

44

SQLite for Rivet Actors – one database per agent, tenant, or document #

github.com
16 comments · 4:11 PM · View on HN
Hey HN! We posted Rivet Actors here previously [1] as an open-source alternative to Cloudflare Durable Objects.

Today we've released SQLite storage for actors (Apache 2.0).

Every actor gets its own SQLite database. This means you can have millions of independent databases: one for each agent, tenant, user, or document.

Useful for:

- AI agents: per-agent DB for message history, state, embeddings

- Multi-tenant SaaS: real per-tenant isolation, no RLS hacks

- Collaborative documents: each document gets its own database with built-in multiplayer

- Per-user databases: isolated, scales horizontally, runs at the edge

The idea of splitting data per entity isn't new: Cassandra and DynamoDB use partition keys to scale horizontally, but you're stuck with rigid schemas ("single-table design" [3]), limited queries, and painful migrations. SQLite per entity gives you the same scalability without those tradeoffs [2].
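A minimal sketch of the per-entity idea in plain Python with the stdlib `sqlite3` module — Rivet's actual storage sits behind a custom VFS, so this only illustrates the isolation property, not the implementation:

```python
import pathlib
import sqlite3
import tempfile

# One directory of databases; every tenant owns exactly one file.
DB_DIR = pathlib.Path(tempfile.mkdtemp())

def tenant_db(tenant_id: str) -> sqlite3.Connection:
    """Open (creating if needed) the SQLite database owned by one tenant.
    Isolation is physical: separate file, separate schema, no RLS."""
    conn = sqlite3.connect(DB_DIR / f"{tenant_id}.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events (ts REAL, payload TEXT)")
    return conn

a = tenant_db("acme")
a.execute("INSERT INTO events VALUES (1.0, 'hello')")
b = tenant_db("globex")  # a different file; sees nothing from 'acme'
rows = b.execute("SELECT COUNT(*) FROM events").fetchone()[0]  # 0
```

Because each tenant has its own full SQL schema, migrations and ad-hoc queries stay ordinary SQLite rather than single-table-design gymnastics.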

How this compares:

- Cloudflare Durable Objects & Agents: most similar to Rivet Actors with colocated SQLite and compute, but closed-source and vendor-locked

- Turso Cloud: a great platform, but closed-source and a different use case. Clients query over the network, so reads are slow or stale. Rivet's single-writer actor model keeps reads local and fresh.

- D1, Turso (the DB), Litestream, rqlite, LiteFS: great tools for running a single SQLite database with replication. Rivet is for running lots of isolated databases.

Under the hood, SQLite runs in-process with each actor. A custom VFS persists writes to HA storage (FoundationDB or Postgres).

Rivet Actors also provide realtime (WebSockets), React integration (useActor), horizontal scalability, and actors that sleep when idle.

GitHub: https://github.com/rivet-dev/rivet

Docs: https://www.rivet.dev/docs/actors/sqlite/

[1] https://news.ycombinator.com/item?id=42472519

[2] https://rivet.dev/blog/2025-02-16-sqlite-on-the-server-is-mi...

[3] https://www.alexdebrie.com/posts/dynamodb-single-table/

25

Tomoshibi – A writing app where your words fade by firelight #

tomoshibi.in-hakumei.com
12 comments · 5:12 PM · View on HN
I spent ten years trying to write a novel. Every time I sat down, I'd write a sentence, decide it wasn't good enough, and rewrite it.

The problem wasn't discipline — it was that I could always see what I'd written and go back to change it.

I tried other approaches. Apps that delete your words when you stop typing — they fight fear with fear. That just made me panic. I wanted the opposite: not punishment, but permission.

"Tomoshibi" is Japanese for a small light in the dark — just enough to see what's in front of you.

You write on a dark screen. Older lines fade, but not when you hit return. They fade when you start writing again. If you pause, they wait. You can edit the current line and one line back — enough to fix a typo, not enough to spiral. The one-line-back rule also catches my own practical issue: Japanese IME often fires an accidental newline on kanji confirmation.

Everything is saved. There's a separate reader view for going back through what you've written. Tomoshibi is for writing over months, not just one session. When you come back, your last sentence appears as an epigraph — as if it always belonged there.

No account, no server, no build step. Your writing stays in your browser's local storage — export anytime as .txt. Vanilla HTML/CSS/ES modules.

Try it in your browser. A native Mac app (built with Tauri) with file system integration is coming to the store.

I've been writing on it for two months.

https://tomoshibi.in-hakumei.com/app/

14

Rust-powered document chunker for RAG – 40x faster, O(1) memory #

github.com
3 comments · 2:58 PM · View on HN
I built a document chunking library for RAG pipelines with a Rust core and Python bindings.

The problem: LangChain's chunker is pure Python and becomes a bottleneck at scale — slow and memory-hungry on large document sets.

What Krira Chunker does differently:

- Rust-native processing — 40x faster than LangChain's implementation
- O(1) space complexity — memory stays flat regardless of document size
- Drop-in Python API — works with any existing RAG pipeline
- Production-ready — 17 versions shipped, 315+ installs
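A rough sketch of how a streaming chunker can keep memory flat regardless of input size — a toy Python generator of my own, not Krira's Rust core:

```python
def chunk_stream(lines, chunk_size=200, overlap=40):
    """Yield fixed-size character chunks with overlap, holding only the
    current buffer in memory (O(1) w.r.t. total document size)."""
    buf = ""
    for line in lines:
        buf += line
        while len(buf) >= chunk_size:
            yield buf[:chunk_size]
            # Keep the tail so adjacent chunks share `overlap` characters.
            buf = buf[chunk_size - overlap:]
    if buf:
        yield buf
```

Because the input is consumed lazily and the buffer never exceeds one chunk, this works equally well on a ten-line file or a ten-gigabyte stream.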

pip install krira-augment

Would love brutal feedback from anyone building RAG systems — what chunking problems are you running into that this doesn't solve yet?

9

News Pulse – Real-time global news feed, 475 sources, no algorithm #

news-alert-eta.vercel.app
3 comments · 8:02 AM · View on HN
Former investigative reporter turned developer. I built a simple breaking-news monitor because tracking events across platforms is a mess now that Twitter’s unreliable.

Bluesky is the backbone (a fraction of Twitter's size, but still lots of journalists and OSINT folks), plus RSS, Telegram, Reddit, YouTube and Mastodon. Everything is one chronological feed with no algorithm, clear source labels, and lightweight activity detection when a region spikes above baseline (frequency math, not LLMs).
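The frequency math this describes can be as simple as a z-score against a rolling baseline; here is a hedged sketch of my own, not the app's actual detector:

```python
from statistics import mean, stdev

def region_spike(counts, threshold=3.0):
    """True when the latest interval's post count sits more than
    `threshold` standard deviations above the region's baseline."""
    baseline, latest = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return latest > mu
    return (latest - mu) / sigma > threshold

quiet = [4, 5, 6, 5, 4, 6, 5]   # posts per interval for one region
# region_spike(quiet + [6])  -> False (normal chatter)
# region_spike(quiet + [40]) -> True  (well above baseline)
```

The threshold and window length are the tunable bits; too low and every regional news cycle fires an alert, too high and slow-building events slip through.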

But we do have a (hopefully non-obtrusive) AI-generated recent/post summary.

Been building for a while, figured I'd post in light of today's events. Can't promise it will survive or is good, any feedback appreciated.

Built with Next.js 15 + TypeScript + Tailwind on Vercel. Hand-coded plus supervised vibe coding. Free, no login, no signups, etc.

https://pulse-osint.vercel.app/

6

Mowgli – Figma for the agent era, with Claude Code and design export #

mowgli.ai
1 comment · 5:06 PM · View on HN
Hi HN! We're excited to announce the public beta of Mowgli, a spec-backed, AI-native design canvas for scoping and ideating on products.

The productivity gains unleashed by coding agents have made everything else an unexpected bottleneck. In an effort to make the most out of this new paradigm, we ceded a lot of ground in product thinking, thoughtful UX, and design excellence. In other words, the pace of tooling for deciding what to build has not kept up.

Mowgli is inspired by, in equal parts, Figma and Claude's plan mode. It evolves a detailed specification and designs for every screen and state of the product on an infinite canvas. Owing to this UI, it can quickly mock up new features and flows, redesign existing ones, and show you variations side by side. LLMs are amazing at helping you explore vast solution spaces, but most current tooling focuses on getting narrowly perfect output based on a well-defined spec. We try to bridge that gap with Mowgli.

When you're ready to build, download a .zip with a SPEC.md and unopinionated design .tsx files for your screens, in a perfect format for coding agents.

In this early stage, we support two entrypoints:

1. Building a product from scratch through a guided experience

2. Importing an existing product from Figma - we have a powerful, almost pixel-perfect Figma to code + spec pipeline that works on files of any size - from 0 to 300+ frames

...but we're working hard on other ways to get your existing products into Mowgli (and would appreciate hearing about what you would like!)

Some links:

- Timelapse of making a functional second brain app in Mowgli + Claude Code: https://www.youtube.com/watch?v=HeOoy8WDmMA

- Sample project designed and specced entirely by Mowgli (demo, no login needed): https://app.mowgli.ai/projects/cmluzdfa0000v01p91l5r61e3?the...

- Sample Figma import of the whole Posthog product, ready for iteration: https://app.mowgli.ai/projects/cmkl0zqng000101lo8yn2gqvd?dem... (thanks PostHog for being radically open with public Figma files!)

6

Free, open-source native macOS client for di.fm #

github.com
1 comment · 10:21 PM · View on HN
I built a menu bar app for streaming DI.FM internet radio on macOS. Swift/SwiftUI, no Electron.

The existing options for DI.FM on desktop are either the web player (yet another browser tab) or unofficial Electron wrappers that idle at 200+ MB of RAM to play an audio stream. This sits in the menu bar at ~35 MB RAM and 0% CPU. The .app is about 1 MB.

What it does: browse and search stations, play/pause, volume, see what's playing (artwork, artist, track, time), pick stream quality (320k MP3, 128k AAC, 64k AAC). Media keys work. It remembers your last station.

Built with AVPlayer for streaming, MenuBarExtra for the UI, MPRemoteCommandCenter for media key integration. The trickiest part was getting accurate elapsed time. DI.FM's API and the ICY stream metadata don't always agree, so there's a small state machine that reconciles the two sources.

macOS 14+ required. You need a DI.FM premium account for the high-quality streams.

Source and binary: https://github.com/drmikexo2/DIBar-macOS

6

RTS with known stars and exoplanets can now be played in the browser #

stardustexile.com
0 comments · 1:41 PM · View on HN
Hi HN,

I'm a solo dev working on a space RTS game set in the Milky Way galaxy, containing currently known stars and exoplanets with their real characteristics; the remaining stars are procedurally generated.

In the online mode, all players are in the same universe, and the world is persistent. You compete with other players to take back the Solar System and end your exile. Players can always retreat to the periphery to regain strength and rebuild their fleets as needed.

It also includes a procedural spaceship generator for creating your own spaceship designs. A spaceship's visuals don't change its stats, so players won't have to use spaceships they don't like just because the stats are superior.

Spaceships keep operating even when the player is offline: production spaceships keep harvesting resources, and combat spaceships automatically engage invading spaceships. To allow players to mount proper defenses against large attacks, they can build disruptors to slow down travel from enemies to their systems for up to 24 hours.

Now it can be played in the browser:

https://stardustexile.com

6

I built a tool to translate and declutter articles for my immigrant mom #

dulink.click
7 comments · 11:42 PM · View on HN
Hello HN,

I built DuLink to solve a personal problem: my primary language is English, but my mother’s primary language is Mandarin Chinese. I often find articles on health or current events that I want to share with her, but the friction of copy/pasting and Google Translate meant she rarely read them.

What it does:

DuLink takes an article URL, extracts the core content to strip out clutter, and then generates a static, translated reading view. It preserves the semantic HTML structure (headers, lists) but removes the original site's JS and ad tracking.
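One minimal way to strip scripts and chrome while keeping readable text, sketched with Python's stdlib `html.parser` — DuLink's real extraction pipeline isn't published, so this only illustrates the idea:

```python
from html.parser import HTMLParser

# Subtrees whose content we drop entirely.
STRIP = {"script", "style", "nav", "footer", "iframe"}

class ContentExtractor(HTMLParser):
    """Collect readable text, skipping script/style/nav/footer subtrees."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # > 0 while inside a stripped subtree
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in STRIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in STRIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.parts.append(data.strip())

def extract_text(html: str) -> str:
    p = ContentExtractor()
    p.feed(html)
    return " ".join(p.parts)
```

A real readability pass also needs heuristics for which `div` holds the article body; libraries in this space typically score nodes by text density.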

The "Why":

Existing tools either translate the entire messy page (including nav menus/footers) or require the recipient to install an extension. I wanted something truly simple. A "fire and forget" link that removed every obstacle. No setup, no extra tools, just open and understand. I also added audio playback because reading dense text on a phone can be tiring, especially for aging eyes. Sometimes listening is simply easier.

I built this for a personal use case, but I'm sharing it here for anyone who wants to share something meaningful across languages. If it helps you bridge that gap, I'd love to hear about it.

Any feedback is welcome.

https://dulink.click/

5

EEGFrontier – A compact open-source EEG board using ADS1299 #

github.com
0 comments · 11:41 AM · View on HN
Hi HN,

I built EEGFrontier, a compact open-source EEG acquisition board based on the ADS1299 and an RP2040.

The goal was to design a low-cost board that works with dry electrodes while exposing the full EEG signal chain — no abstractions, no closed firmware.

What surprised me most during this project were the practical issues that datasheets don’t really prepare you for: grounding (REF/BIAS), noise coupling from digital lines, routing constraints, and how small layout decisions drastically affect signal quality.

The repository includes full KiCad files, firmware, a BOM with cost references, and documentation images. This is a V1 board and already works, but I’m actively iterating on shielding and noise mitigation.

I’d really appreciate feedback from people with experience in EEG, biosignals, or analog front-end design — especially criticism.

5

I built GeoQuests where people can request photos of a place #

geoquests.io
0 comments · 6:37 AM · View on HN
Hi HN. I faced an issue where I wanted to know what a place I was travelling to looked like. Like everyone else I looked at Google Maps and Snapchat too. But Google Street View images were usually old, and Snapchat snaps lacked control. So I built GeoQuests for anyone who wants to know what's going on on Earth.

You drop a quest at a real location. People see it on the map, go there, and complete it by taking a photo when they're close enough. The app checks the image's GPS coordinates, its timestamp, and whether it fits the request's description. I am using Gemini to verify the image.
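The "close enough" GPS check can be done with the standard haversine formula; a sketch assuming a simple fixed radius (the app's actual threshold and verification logic aren't specified):

```python
from math import asin, cos, radians, sin, sqrt

def close_enough(lat1, lon1, lat2, lon2, radius_m=100):
    """Haversine distance check: is the photo's GPS fix within
    `radius_m` metres of the quest location?"""
    R = 6371000  # mean Earth radius, metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * R * asin(sqrt(a)) <= radius_m
```

The EXIF timestamp check is simpler still (compare against the quest's creation time), which leaves the hard part — "does the photo match the description?" — to the Gemini call.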

Basically: you pin a place -> others discover it on the map -> they go there and complete the quest with a verified photo.

You can browse the map, see public quests and create quests. Wanted some feedback on the project.

4

Monohub – a new GitHub alternative / code hosting service #

monohub.dev
5 comments · 7:13 PM · View on HN
Hello everyone,

My name is Teymur Bayramov, and I am developing a forge/code hosting service called Monohub. It is at a fairly early stage of development, so it's quite rough around the edges. It is developed and hosted in the EU.

I started developing it as a slim wrapper around Git to serve my own code, but it grew to such an extent that I decided to give it a try and offer it as a service. It doesn't have much at the moment, but it already has basic pull requests. Accessibility is a high priority.

It will be a paid service, but since it's an early start, an "early adopter discount" is applied – 6 months for free. No card details required.

I would be happy if you give it a try and let me know what you think, and perhaps share what you find lacking in existing solutions that you would like to see implemented here.

Warmest wishes, Teymur.

4

Polpo – Control Claude Code (and other agents) from your phone #

github.com
1 comment · 10:32 AM · View on HN
Polpo is an open-source mobile controller for AI coding agents. It runs a lightweight server on your machine and gives you a phone-friendly dashboard to manage sessions, send prompts, approve tool calls, and review plans.

We just released v1.1.0 with support for 5 agents (Claude Code, Codex, Gemini, OpenCode, Pi), skills management from the phone (browse/install/remove skills from skills.sh), and the ability to start new sessions without touching the terminal.

The idea started because we wanted to kick off coding tasks from the couch and check on them from the phone. It grew from there.

Built with Node.js, no framework on the frontend, WebSocket for real-time updates. Works on LAN or remotely via tunnel (cloudflared, localtunnel, ngrok, SSH).

Built by PugliaTechs, a non-profit association from Puglia, Italy.

4

Psmux – tmux compatible multiplexer for Windows #

1 comment · 1:05 PM · View on HN
I built psmux to bring a tmux-style workflow to Windows PowerShell without relying on WSL, Cygwin, or MSYS2.

It is written in Rust, integrates with PowerShell, reads existing .tmux.conf files, and supports split panes, sessions, mouse mode, and common tmux keybindings.

I would love feedback from Windows developers who rely on terminal workflows.

4

Nugx.org – A Fresh NuGet Experience #

nugx.org
4 comments · 5:23 PM · View on HN
NuGet's default registry is serviceable, but discovering and comparing packages isn't great.

nugx.org (http://nugx.org/) is an alternative front-end with better search, popularity signals, dependency graphs, and a cleaner UI.

Built with Noundry (C# packages/tooling). Currently in beta — would love feedback on nugx.org.


4

GitShow Repo Showroom – a landing page for any GitHub repo #

1 comment · 5:47 PM · View on HN
I wanted a way to quickly see what's going on in an open source project without clicking through 10 GitHub tabs. Contributors, languages, commit activity, community health, recent PRs - all scattered across different pages.

GitShow now generates a showroom page for any public repo. Just visit gitshow.dev/owner/repo.

Try a few:

  gitshow.dev/facebook/react
  gitshow.dev/vercel/next.js
  gitshow.dev/solidjs/solid
  gitshow.dev/ofershap/mcp-server-devutils
Each page shows:

- Top contributors with accurate total count. GitHub's API returns max 30 per page, so I use the Link header pagination trick (request per_page=1, read the last page number) to get the real number. facebook/react shows ~2,000 contributors, not 30.

- Language breakdown as a visual bar and weekly commit sparkline from the participation stats API.

- Community health score: does the repo have a README, license, contributing guide, code of conduct, issue/PR templates? Pulled from GitHub's community profile endpoint.

- Recent open PRs and recently merged fixes. Bot authors (dependabot, renovate, etc.) are filtered out. Markdown and HTML in body snippets are stripped to plain text.

- Quick actions inside the hero card: Star on GitHub, Fork, Clone (copies the git clone command), and an npm link for JS/TS repos.

- Breadcrumb navigation: GitShow / owner / repo.
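The contributor-count trick from the first bullet (request per_page=1, read the last page number from the Link header) can be sketched like this; the header string below is a made-up example, not live GitHub data:

```python
import re

def last_page(link_header: str) -> int:
    """Parse a GitHub-style Link header for the final page number.
    With per_page=1, that number equals the total item count."""
    m = re.search(r'[?&]page=(\d+)>;\s*rel="last"', link_header)
    return int(m.group(1)) if m else 1  # no Link header => single page

# Illustrative header (GitHub returns something of this shape).
header = ('<https://api.github.com/repos/facebook/react/contributors'
          '?per_page=1&page=2>; rel="next", '
          '<https://api.github.com/repos/facebook/react/contributors'
          '?per_page=1&page=1994>; rel="last"')
total = last_page(header)  # 1994
```

The point is that one cheap request yields the true count, instead of paging through thousands of contributor records.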

Every page includes Schema.org SoftwareSourceCode structured data and a BreadcrumbList, so it surfaces in AI search engines (ChatGPT, Perplexity, Google AI).

There's also a structured data block at the bottom of each page specifically for LLM agents scraping the web - key stats in a machine-readable format.

Stack: Next.js with 1-hour ISR, all data from GitHub REST API, zero client JS except two small interactive bits (clone-to-clipboard and scroll-to-top). Tailwind, Vercel.

No signup, no AI processing, no paywall. MIT licensed.

Source: https://github.com/ofershap/gitshow

3

CarbonLint – Open-source real-time energy&carbon profiler for software #

nishal21.github.io
0 comments · 1:36 PM · View on HN
Hi HN,

I'm excited to share CarbonLint, an open-source tool I've been building to help developers measure the actual energy consumption and carbon emissions of their software in real-time.

While there are great tools for tracking CPU/RAM, there aren't many accessible ways to translate those hardware metrics into actual environmental impact natively on your local machine.

How it works:

• It's built with Rust & Tauri (v2) for a tiny footprint.
• It monitors CPU, memory, disk I/O, and network activity of specific processes.
• It calculates energy usage based on user-configurable hardware profiles (e.g., Apple M-series, Intel, AMD).
• It translates energy into CO2-equivalent emissions using real-time, region-specific power grid carbon intensity data.
• Available natively across Windows (.exe/.msi), macOS (.dmg), Linux (.deb/.AppImage/.rpm), and Android (.apk).
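The energy-to-CO2 translation boils down to energy in kWh times grid carbon intensity; a sketch with an illustrative intensity value (400 gCO2e/kWh is an assumption for this example, not CarbonLint's figure):

```python
def co2_grams(avg_power_watts, seconds, grid_intensity_g_per_kwh=400):
    """Convert an average power draw over a session into CO2-equivalent
    grams: energy (kWh) x grid carbon intensity (gCO2e/kWh)."""
    energy_kwh = avg_power_watts * seconds / 3_600_000  # W*s -> kWh
    return energy_kwh * grid_intensity_g_per_kwh

# e.g. 30 W average draw for a 10-minute profiling session:
grams = co2_grams(30, 600)  # 0.005 kWh * 400 g/kWh = 2.0 g CO2e
```

The hard part, which the tool's hardware profiles address, is estimating `avg_power_watts` per process from CPU/memory/I/O counters in the first place.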

Key features:

• Global keyboard shortcuts to start/stop profiling sessions instantly.
• UI runs quietly in the system tray.
• Comprehensive session reports detailing exactly how much energy a specific software run consumed.

The code is fully open-source (MIT), and everything runs locally on your machine. I'd love to hear your feedback on the architecture, the energy estimation models, or any feature requests you might have.

Repo: https://github.com/nishal21/CarbonLint

Website: https://nishal21.github.io/CarbonLint/

Thanks!

3

Voice-coded a Doom-scroll Hacker News (Twitter-style feed) #

youtube.com
0 comments · 5:55 PM · View on HN
Built a mobile-friendly Hacker News clone where stories and comments flow in an infinite, Doom-scrollable timeline (think Twitter/X feed meets HN).

Voice-coded the entire website from scratch using my yapboard voice keyboard + OpenAI Codex via a Discord bridge bot. Screen-mirrored with scrcpy for real-time testing. Forgot to turn on mic during the live build, so added narration afterward.

Demo (try on mobile): https://hackernews.lukestephens.co.za

Full build video (voice coding timelapse): https://youtu.be/RRF-F5jDS50

Tools: scrcpy, yapboard.app, Codex Discord bridge (https://github.com/imprisonedmind/codex-discord-bridge)

Curious to hear thoughts, especially on the voice-coding workflow or the endless-scroll UX for HN content!

3

InstallerStudio – Create MSI Installers Without WiX or InstallShield #

ionline.com
0 comments · 10:02 PM · View on HN
Hi, I'm Paul — 25 years of enterprise Windows development. I built InstallerStudio after WiX went from free/open source to $6,500/year support and InstallShield hit $2,000+/year. Every tool in this space is either unaffordable or requires writing XML by hand.

InstallerStudio is a visual MSI designer built on WinUI 3/.NET 10. No XML, no subscriptions, no external dependencies. Handles files, Windows services, registry, shortcuts, file associations, custom actions, and full installer UI. It ships its own installer, built with itself.

$159 this month, $199 after. 30-day free trial. Happy to answer questions about MSI internals.
3

I made a website to write online math as fast as paper #

scratchpad-math.com
2 comments · 12:15 AM · View on HN
I want to preface this by saying I'm extremely new to webdev, and this is my first finished product. Any feedback is greatly appreciated!

Every digital math tool, at least for me, has been significantly worse than just pen and paper. LaTeX is far too slow, and most WYSIWYG editors are lacking features.

Scratchpad is my solution to this. You type shortcuts, and they render in real time inline using MathQuill. Stuff that I've added:

- Greek letters, lower and uppercase
- Matrices (super clunky in every other editor I've tried!)
- Accents like vector arrows and derivative dots
- Tons of other useful symbols (set notation, gradients, plusminus, partial derivatives)

In my experience using it while in development, it's been substantially faster than pen and paper. There's a bit of a learning curve with the shortcuts, but I tried to make it as intuitive as possible.

The whole thing is on one index.html file, just open it and start writing. I would really appreciate any notes, especially from anyone who has experience with digital math notes. Thanks!

3

Telos – A structured context framework for humans and AI agents #

github.com
0 comments · 12:45 PM · View on HN
Telos is a structured intent and decision tracking layer that works alongside Git. It doesn't replace Git — it captures the why behind code changes in a queryable, machine-readable format.

Git tracks what changed in your code. Telos tracks what you intended, what constraints you set, and what decisions you made and why. Every intent, decision, and behavioral expectation is stored as a content-addressable object (SHA-256), forming a DAG that mirrors your development history.
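A minimal sketch of content-addressed objects forming a DAG, in the spirit described above — this is my own toy format, not Telos's actual object layout:

```python
import hashlib
import json

def store(objects, body, parents=()):
    """Store an intent/decision as a content-addressed object: the
    SHA-256 of its canonical JSON is its id; parent ids form the DAG."""
    record = {"body": body, "parents": list(parents)}
    blob = json.dumps(record, sort_keys=True).encode()
    oid = hashlib.sha256(blob).hexdigest()
    objects[oid] = record
    return oid

db = {}
intent = store(db, {"type": "intent",
                    "text": "cache invalidation must be explicit"})
decision = store(db, {"type": "decision",
                      "text": "use write-through"}, parents=[intent])
```

As with Git, identical content always hashes to the same id, and any edit to an ancestor changes every descendant's hash, so the history is tamper-evident.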

Telos is designed for both human developers and AI agents. Its --json output mode and context command make it a natural integration point for LLM-powered coding assistants that need to recover project context across sessions.

3

MemoryKit – Persistent memory layer for AI agents URL #

github.com
0 comments · 1:39 PM · View on HN
Most AI agents forget everything when a session ends. MemoryKit is a lightweight Python library that gives any AI agent persistent memory across sessions. Three core methods: remember(), recall(), compress(). Works locally for free with sentence-transformers, or with OpenAI embeddings. No external database required. Built this over a weekend as a side project. Still early — would love feedback from people building AI agents.
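To make the three-method API concrete, here is a toy stand-in that swaps the embeddings for naive keyword overlap — the library itself uses sentence-transformers or OpenAI embeddings, so everything below is purely illustrative:

```python
class TinyMemory:
    """Toy stand-in for a remember()/recall()/compress() API:
    keyword overlap instead of real embedding similarity."""
    def __init__(self):
        self.items = []

    def remember(self, text):
        self.items.append(text)

    def recall(self, query, k=3):
        # Rank stored memories by shared-word count with the query.
        q = set(query.lower().split())
        ranked = sorted(self.items,
                        key=lambda t: -len(q & set(t.lower().split())))
        return ranked[:k]

    def compress(self, max_items=100):
        self.items = self.items[-max_items:]  # naive: keep most recent

m = TinyMemory()
m.remember("user prefers dark mode")
m.remember("deploy target is us-east-1")
top = m.recall("which deploy region?")[0]
```

The persistence across sessions would then just be serializing `items` (or, in the real library, vectors) to disk between runs.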
2

Accept.md now supports SvelteKit – return Markdown from any page #

accept.md
0 comments · 12:32 AM · View on HN
A few days ago I shared accept.md, a small utility that lets a Next.js page return Markdown when the client sends:

Accept: text/markdown

instead of HTML.

No one asked for SvelteKit support, still I shipped it.

It now works with:

* Next.js (App Router and Pages Router)
* SSG / SSR / ISR
* SvelteKit routes
* Vercel (no custom server required)

What it does:

If a client sends:

Accept: text/markdown

The exact same page returns clean Markdown.

If not, it behaves normally and renders HTML.

No duplicate routes. No separate .md files. No API layer. No SEO changes.

Just proper HTTP content negotiation.

Why I built this:

LLMs prefer Markdown. Internal tools prefer Markdown. Scrapers prefer Markdown. CLI workflows prefer Markdown.

But most sites only return HTML.

The usual solutions are:

* Maintain a parallel Markdown version
* Build a custom export route
* Create a docs API
* Spin up a custom server

That felt unnecessary.

Browsers already send Accept: text/html. Agents can send Accept: text/markdown.

HTTP already solves this. Accept.md just makes it easy to use content negotiation inside modern frameworks without breaking static generation or edge deployments.
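The negotiation itself is tiny; a hedged sketch of the core check, in Python for illustration (ignoring q-value ordering, and not accept.md's actual implementation):

```python
def wants_markdown(accept_header: str) -> bool:
    """Minimal content negotiation: serve Markdown only when the client
    explicitly lists text/markdown in its Accept header."""
    offered = [part.split(";")[0].strip().lower()
               for part in accept_header.split(",")]
    return "text/markdown" in offered
```

A full implementation would respect q-values and wildcards per RFC 9110, and emit a `Vary: Accept` header so caches keep the HTML and Markdown representations separate.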

Design goals:

* Zero UI changes
* Zero runtime cost for normal visitors
* Works with static builds
* Cache-friendly
* Framework-native

It’s intentionally small. No heavy abstraction. Just a clean way to expose Markdown representations of existing pages.

Would love feedback — especially from people building AI-native apps, documentation systems, or content-heavy SaaS.

Curious whether Markdown negotiation becomes more common as agents become first-class web clients.

2

Soma, a local-first AI OS with 178 cognitive modules and P2P learning #

github.com
0 comments · 3:41 PM · View on HN
Local-first AI operating system — 178 cognitive modules, persistent memory, multi-model reasoning, P2P Graymatter Network. I can no longer develop this AI, as it has grown beyond my knowledge range, so I figured I would give her to the public. She should be a good base for any future AI development, even going towards ASI!
2

MailFomo – Drive urgency in emails with live countdown timers #

mailfomo.com
0 comments · 4:04 PM · View on HN
Hey HN! We built MailFomo to help email marketers create urgency and drive conversions. The idea is simple: drop a live countdown timer into any email — it updates every time someone opens it. Works with any email platform like Mailchimp, Klaviyo, SendGrid, etc. It also includes AI tools to help you plan campaigns and create timers faster, plus ready-made templates so you can get started in seconds. Free tier available — would love your feedback!
2

Jarvish – The J.A.R.V.I.S. AI inside your shell investigates errors #

github.com
0 comments · 11:19 AM · View on HN
Hi HN, I'm the creator of Jarvish.

https://github.com/tominaga-h/jarvis-shell

I spend most of my day in the terminal, and I got incredibly frustrated with the standard error-resolution loop: command fails -> copy the stderr -> open a browser -> paste into ChatGPT/Google -> copy the fix -> paste back into the terminal. It completely breaks the flow state.

I wanted a seamless experience where the shell already knows the context of what just happened.

So I built Jarvish. It’s a fully functional interactive shell written in Rust, but with an AI agent seamlessly integrated into the REPL loop. You don't need any special prefixes—if you type `ls -la`, it runs it. If you type `Jarvis, why did that build fail?`, it routes to the AI.

Here is how it works under the hood:

- The "Black Box" (I/O Capture): It uses `os_pipe` and multithreading to tee the `stdout`/`stderr` of child processes in real-time. This captures the output to memory for the AI while simultaneously rendering it to the terminal without breaking interactive TUI tools.

- Context Memory: The captured I/O is compressed with `zstd`, hashed (like Git blobs), and the metadata is stored in a local SQLite database (`rusqlite`). When you ask the AI a question, it automatically retrieves this recent I/O history as context.

- Agentic Capabilities: Using `async-openai` with function calling, the AI can autonomously read files, execute shell commands, and investigate issues before giving you an answer.

- REPL: Built on top of `reedline` for a Fish-like experience (syntax highlighting, autosuggestions).
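To illustrate the content-addressed "Black Box" idea in a few lines (the real implementation is Rust with zstd and rusqlite; this Python sketch uses zlib as a stand-in compressor, and the class and method names are hypothetical):

```python
import hashlib
import sqlite3
import zlib

class IOHistory:
    """Content-addressed store for captured command output:
    compress the bytes, key them by a Git-blob-style SHA-1 of the
    raw content, and record metadata in SQLite."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS io_log "
            "(hash TEXT PRIMARY KEY, command TEXT, blob BLOB)"
        )

    def record(self, command: str, output: bytes) -> str:
        # Git-style header: "blob <len>\0" prepended before hashing,
        # so identical outputs dedupe to the same row.
        digest = hashlib.sha1(b"blob %d\0" % len(output) + output).hexdigest()
        self.db.execute(
            "INSERT OR IGNORE INTO io_log VALUES (?, ?, ?)",
            (digest, command, zlib.compress(output)),
        )
        return digest

    def recent(self, n: int = 5):
        """Return the last n (command, output) pairs as AI context."""
        rows = self.db.execute(
            "SELECT command, blob FROM io_log ORDER BY rowid DESC LIMIT ?", (n,)
        ).fetchall()
        return [(cmd, zlib.decompress(blob)) for cmd, blob in rows]
```

The point of the content-addressed key is deduplication: re-running the same command with identical output costs one hash, not another compressed blob.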

I’ve been using it as my daily driver (currently v1.1.0). I would absolutely love to hear your thoughts on the architecture, the Rust implementation, or any feature requests!

2

I built a 0-CPU desktop app to track LLM limits (Python/Django/PyWebView) #

github.com
0 comments · 1:16 AM · View on HN
Hey HN, tracking limits across 20 Gemini/Opus accounts was getting tedious, but I refused to build a heavy Electron app just to run a countdown clock. I built Antigravity-Model-Reset-Timer using a 'Target Timestamp' approach: the Django backend calculates the absolute future UTC time and saves it to a local MongoDB. You can kill the app, and the JS engine just compares the DB time to the OS clock on next load. Zero zombie processes. It’s under 10 files. Looking for feedback on the PyWebView implementation and contributors to help add Anthropic/Google API webhooks.
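The "Target Timestamp" approach can be sketched in a few lines; this is a Python illustration with hypothetical function names (the actual app splits this logic between the Django backend and the JS frontend):

```python
from datetime import datetime, timedelta, timezone

def make_target(reset_hours: float, now=None) -> datetime:
    """Persist only the absolute future UTC time at which the limit
    resets; no background process needs to keep running."""
    now = now or datetime.now(timezone.utc)
    return now + timedelta(hours=reset_hours)

def remaining(target: datetime, now=None) -> timedelta:
    """On next load, compare the stored target to the OS clock.
    A non-positive result means the limit has already reset."""
    now = now or datetime.now(timezone.utc)
    return target - now
```

Because the stored value is an absolute timestamp rather than a running countdown, killing and relaunching the app costs nothing: the next load just subtracts two clock readings.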

Become a contributor to this open source project! https://github.com/PeterJFrancoIII/Antigravity-Model-Reset-T...

1

Mdspec.dev – open-source spec management platform for technical teams #

mdspec.dev
0 comments · 6:08 PM · View on HN
mdspec.dev - build and manage specs adapted to the speed of agentic software development.

This is a side project of mine. I made it an open-source spec management platform: self-host if you want, or use the hosted version I run.

A spec is simply a Markdown (.md) file within your project.

As we build with AI agents via Cursor, VS Code, or Antigravity, Markdown files (specs) are fast becoming AI infrastructure: a communication layer.

I have used specs mainly for:

- Developing a spec for a new feature or a bug (generally a new development artifact) and discussing it with the technical teams. AI agents are good at exploring features in technical depth, including costs and security aspects; that can be in one document or more. If there are changes, we can ask the AI agents to make them.
- Producing a filtered analysis of a codebase, such as a compliance analysis of a certain part, tailored to the understanding of the target audience or a tool. I have built cost-simulation tools based on the codebase, where YAML-based infrastructure specs fed into the simulation tool. The beauty of these solutions is that the target audience doesn't need access to the whole codebase.

I think spec-driven engineering is enabled only by the AI agentic era. It existed before, but it was time-consuming and not scalable. In essence:

- Specs give AI agents the context they need to build correctly.
- They give teams clarity into why and how something was implemented.
- Specs can be filtered to serve as inputs to other tools or analyses.

But AI agentic development is really fast. As with all fast things, we could see two problems were emerging:

- Discussions across teams became long and fragmented. Compliance needs those specs. Design needs those specs. Risk modeling needs those specs.
- Specs were duplicated, scattered, and hard to reuse across projects.

I built mdspec to remove that friction. I made it open source to make sure anyone can bend it any way they see fit for their own use cases.

mdspec is a specification management platform designed for fast, phased AI-driven development. It helps you:

- Create structured, reusable specs
- Link specs across features and projects
- Maintain clarity as development speed increases
- Give AI agents reliable, durable context
- Work across teams securely

When specs become first-class citizens, teams move faster without losing alignment.

I'm exploring integrations to feed these specs back into AI agents and external tools.

https://mdspec.dev

1

The Terminal for Marketing Decisions #

velovra.com
0 comments · 5:07 PM · View on HN
We're building Velovra to help product and growth teams make data-driven marketing decisions without the noise. It combines your traffic, conversion, and revenue data with proprietary datasets and domain-specific foundation models to generate actionable weekly plans: what to fix, what to ignore, and what to double down on.

Think of it as a Bloomberg Terminal, but for marketing decision-making. Every insight is generated by structured decision logic and AI models fine-tuned for marketing contexts, not generic dashboards. The goal: skip the charts, get actionable guidance.

It's early days; we're collecting signups for early access and feedback from folks running growth experiments.

Waitlist: https://velovra.com

1

Paster – A keyboard-first clipboard manager for Vim users #

pasterapp.com
0 comments · 1:37 PM · View on HN
Hi HN,

I've tried just about every clipboard manager for macOS, but I've always run into the same two issues: either they were heavy Electron apps that felt sluggish, or they required me to take my hands off the keyboard to find what I needed. Raycast is what I used most of the time, but it's slow at loading screenshots and is search-first, meaning I had to leave my beloved home row to scroll through items.

I built Paster because I wanted something that felt like an extension of my terminal and loaded copied content instantly. It's written in Rust to keep latency as low as possible and uses a local SQLite database for history. It's completely private and has no telemetry; your data is your own. It does reach out to its domain to validate the license.

Some specific choices I made:

- Navigation: I mapped it to j/k and / for search. If you use Vim or a terminal, it should feel like second nature.
- Privacy: I'm not a fan of cloud-syncing my clipboard. Everything stays local on your machine.
- Quick look: I've added a nice little bonus feature to view each clipboard item in a larger Quick Look window. It's pretty handy for screenshots and offers syntax highlighting for text.

It's currently a paid app with a 7-day trial. I'm really curious what the community thinks about the "Vim-for-everything" approach. For transparency's sake, it's built with help from AI (Gemini), mostly for UI work, which requires lots of boilerplate.

It's macOS-only for now. I do intend to work on a Linux version, but no promises.

1

Nano Banana 2 – 4K AI image generator with accurate text rendering #

ainanobanana2.pro
0 comments · 2:59 PM · View on HN
Hey HN,

I built Nano Banana 2, an AI image generation platform powered by Google's Gemini 3.1 Flash Image model.

The main problems I wanted to solve:

- Text rendering in AI images has always been broken. This model gets it right ~90% of the time.
- Most generators cap at 1-2K. This does true 4K output.
- No character consistency across images. This tracks up to 5 characters + 14 objects.

Tech stack: Next.js 16, React 19, TypeScript, Drizzle ORM, PostgreSQL, Cloudflare R2.

4 models are available depending on your needs, from $0.039/image (budget) to $0.134/image (pro). Credits never expire. Free credits on signup.

Live at https://ainanobanana2.pro

Happy to answer any questions about the tech, the Gemini API integration, or the business model.
1

Code-snippet flashcards for 600 programming cheat sheets #

cheatsheet-plus-plus.com
0 comments · 2:28 PM · View on HN
Today, I'm launching Flashcards across all 600+ topics on CheatSheet++ (cheatsheet-plus-plus.com).

Active recall is crucial for really learning and remembering concepts (especially for interview prep).

To make these actually useful for developers, I focused on a few key features:

Code Snippets Included: Standard flashcards are often too text-heavy. These flashcards feature syntax-highlighted code examples on the back alongside the conceptual explanations.

Progressive Difficulties: The decks scale from Beginner to Advanced, adjusting the depth of the questions and the complexity of the concepts accordingly.

This feature is designed to work alongside our existing Interview Q&A section for comprehensive prep.

Search the topic you are currently learning and try it: https://cheatsheet-plus-plus.com

Feedback and critique are incredibly welcome!

1

Which VCs are Tier 1? #

vc-compare.vercel.app
0 comments · 6:06 PM · View on HN
Saw some discussion on Twitter about which firms are considered Tier 1. It inspired me to make a little game where users, presented with four choices, pick their favorite and least favorite firm to get a term sheet from.

At the end of the game you can see your own ranking as well as aggregated results from the crowd.

I went with the choose-favorite/least-favorite mechanic because it felt like a better experience than just doing head-to-head comparisons.

With ~20 firms that's a lot of pairs you need to go through.

Doing triplets reduces the number of pairs, but it was still too many rounds.

By doing min/max four at a time, you can efficiently get implied preferences in ~20 rounds. This makes the game take only about 3-5 minutes to finish!
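To see why best/worst-of-four is efficient, here's a Python sketch of the implied pairwise preferences; the scoring below is a guess at one simple aggregation for illustration, not necessarily what the site actually does:

```python
from collections import defaultdict

def implied_pairs(options, best, worst):
    """Picking a favorite and a least favorite out of four options
    implies 5 pairwise preferences: best beats the other three, and
    the remaining two each beat worst."""
    pairs = [(best, o) for o in options if o != best]
    pairs += [(o, worst) for o in options if o not in (best, worst)]
    return pairs

def score(rounds):
    """Tally +1 for each implied win, -1 for each implied loss.
    Each round is a tuple (options, best, worst)."""
    tally = defaultdict(int)
    for options, best, worst in rounds:
        for winner, loser in implied_pairs(options, best, worst):
            tally[winner] += 1
            tally[loser] -= 1
    return dict(tally)
```

Five implied comparisons per round times ~20 rounds covers far more of the ~190 possible pairs among 20 firms than 20 head-to-head questions would, which is where the time savings come from.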