Daily Show HN


Show HN for December 3, 2025

45 items
391

I built a dashboard to compare mortgage rates across 120 credit unions #

finfam.app
130 comments · 8:35 PM · View on HN
When I bought my home, the big bank I'd been using for years quoted me 7% APR. A local credit union was offering 5.5% for the exact same mortgage.

I was surprised until I learned that mortgages are basically standardized products – the government buys almost all of them (see Bits About Money: https://www.bitsaboutmoney.com/archive/mortgages-are-a-manuf...). So what's the price difference paying for? A recent Bloomberg Odd Lots episode makes the case that it's largely advertising and marketing (https://www.bloomberg.com/news/audio/2025-11-28/odd-lots-thi...). Credit unions are non-profits without big marketing budgets, so they can pass those savings on, but a lot of people don't know about them.

I built this dashboard to make it easier to shop around. I pull public rates from 120+ credit union websites and compare them against the weekly FRED national benchmark.

Features:

- Filter by loan type (30Y/15Y/etc.), eligibility (the hardest part tbh), and rate type
- Payment calculator with refi mode (CUs can be a bit slower than big lenders, but that makes them great for refi)
- Links to each CU's rates page and eligibility requirements
- Toggle to show/hide statistical outliers

At the time of writing, the average CU rate is 5.91% vs. a 6.23% national average, which works out to about a $37k difference in total interest on a $500k loan. I actually used seaborn to visualize the rate spread against the four big banks: https://www.reddit.com/r/dataisbeautiful/comments/1pcj9t7/oc...
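
For anyone who wants to sanity-check that figure, the back-of-the-envelope math with the standard amortization formula (my own sketch, not the dashboard's code) looks like this:

    def total_interest(principal, annual_rate, years=30):
        """Total interest paid on a fully amortized fixed-rate loan."""
        r = annual_rate / 12                      # monthly rate
        n = years * 12                            # number of payments
        payment = principal * r / (1 - (1 + r) ** -n)
        return payment * n - principal

    principal = 500_000
    cu_rate, national_rate = 0.0591, 0.0623
    savings = total_interest(principal, national_rate) - total_interest(principal, cu_rate)
    print(f"Extra interest at the national average: ${savings:,.0f}")   # roughly $37,000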

Stack: Python for the data/backend, Svelte/SvelteKit for the frontend. No signup, no ads, no referral fees.

Happy to answer questions about the methodology or add CUs people suggest.

185

Fresh – A new terminal editor built in Rust #

sinelaw.github.io
150 comments · 2:45 PM · View on HN
I built Fresh to challenge the status quo that terminal editing must require a steep learning curve or endless configuration. My goal was to create a fast, resource-efficient TUI editor with the usability and features of a modern GUI editor (like a command palette, mouse support, and LSP integration).

Core Philosophy:

- Ease-of-Use: Fundamentally non-modal. Prioritizes standard keybindings and a minimal learning curve.

- Efficiency: Uses a lazy-loading piece tree to avoid loading huge files into RAM - reads only what's needed for user interactions. Coded in Rust.

- Extensibility: Uses TypeScript (via Deno) for plugins, making it accessible to a large developer base.

The Performance Challenge:

I focused on resource consumption and speed with large file support as a core feature. I did a quick benchmark loading a 2GB log file with ANSI color codes. Here is the comparison against other popular editors:

  - Fresh:   Load Time: *~600ms*     | Memory: *~36 MB*
  - Neovim:  Load Time: ~6.5 seconds | Memory: ~2 GB
  - Emacs:   Load Time: ~10 seconds  | Memory: ~2 GB
  - VS Code: Load Time: ~20 seconds  | Memory: OOM Killed (~4.3 GB available)
(Only Fresh rendered the ANSI colors.)
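
Fresh itself is written in Rust, but the lazy-loading idea described above is easy to illustrate: instead of slurping the whole file, the editor only reads the byte ranges it needs to render. Here is a toy Python sketch of just that half (the edit-tracking pieces of a real piece tree are omitted, and "huge.log" is a placeholder path):

    class LazyFileView:
        """Toy lazy reader: know the file size, but fetch bytes only on demand."""
        def __init__(self, path):
            self.path = path
            with open(path, "rb") as f:
                f.seek(0, 2)              # jump to the end...
                self.size = f.tell()      # ...to learn the size without reading the contents

        def read_range(self, start, length):
            """Read just the bytes needed to render the visible region."""
            with open(self.path, "rb") as f:
                f.seek(start)
                return f.read(length)

    view = LazyFileView("huge.log")                # placeholder file
    first_screen = view.read_range(0, 64 * 1024)   # e.g. 64 KiB for the viewport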

Development process:

I embraced Claude Code and made an effort to get good mileage out of it. I gave it strong, specific directions, especially in architecture, code structure, and UX-sensitive areas. It required constant supervision and re-alignment, especially in the performance-critical areas. I added very extensive tests (compared to my normal standards) to keep it aligned as the code grows, focusing in particular on end-to-end tests where I could easily enforce a specific behavior or user flow.

Fresh is an open-source project (GPL-2) seeking early adopters. You're welcome to send feedback, feature requests, and bug reports.

Website: https://sinelaw.github.io/fresh/

GitHub Repository: https://github.com/sinelaw/fresh

176

HCB Mobile – financial app built by 17 y/o, processing $6M/month #

hackclub.com
68 comments · 4:20 AM · View on HN
Hey everyone! I just built a mobile app using Expo (React Native) for a platform that moves $6M/month. It’s a neobank used by 6,500+ nonprofit organizations across the world.

One of my biggest challenges, while juggling being a full-time student, was getting permission from Apple/Google to use advanced native features such as Tap to Pay (for in-person donations) and Push Provisioning (for adding your card to your digital wallet). It was months of back-and-forth emails, test-case recordings, and compliance checks.

Even after securing Apple/Google’s permission, any minor fix required publishing a new build, which was time-consuming. After dealing with this for a while, I adopted over-the-air updates using Expo’s EAS Update service, which lets me push updates remotely without needing a new app build.

The 250 hours I spent building this app were an INSANE learning experience, but it was also a whole lot of fun. Give the app a try, and I’d love any feedback you have on it!

btw, back in March, we open-sourced this nonprofit neobank on GitHub. https://news.ycombinator.com/item?id=43519802

143

Microlandia, a brutally honest city builder #

microlandia.city
25 comments · 6:18 PM · View on HN
It all started as an experiment to see if I could build a game making heavy use of Deno and its SQLite driver. After sharing an early build in the "What are you working on?" thread here, I got the encouragement I needed to polish it and make a version 1.0 for Steam.

So here it is, Microlandia, a SimCity Classic-inspired game with parameters from real-life datasets, statistics and research. It also introduces aspects that are conveniently hidden in other games (like homelessness), and my plan is to continue updating, expanding and perfecting the models for an indefinite amount of time.

13

A $20/year invoicing tool for solo developers (simple, fast, no bloat) #

sidepay.app
4 comments · 3:00 PM · View on HN
Hi HN! I built a super lightweight invoicing platform for solo developers, freelancers, and one-person businesses. Most invoicing software costs $20–$40/month and is packed with features you don’t need. Mine is $20/year and focuses on the essentials:

• Create invoices in seconds
• Send invoices by email
• Automatic email reminders
• Recurring invoices
• Simple dashboard for paid/unpaid tracking
• No team features, no CRM, no bloat

I built this because I freelance occasionally, and every invoicing tool I tried either felt bloated, overly enterprisey, or was way too expensive for solo work. I wanted something simple that didn’t require a “plan,” onboarding flow, or learning curve.

A few things people have asked so far:

• No lock-in — you can export your invoices anytime
• No limits on the number of invoices
• No weird pricing tiers or upsells
• Works well on mobile
• You own your customer list (I don’t touch it)

Here’s what I’m looking for from HN:

• Brutally honest feedback
• Any missing “must-have” features for solo entrepreneurs
• Performance/UX suggestions
• Security concerns I should address early
• Whether the $20/year model feels right

If anyone here freelances or runs side projects, I’d love to know what your current invoicing workflow looks like and what annoys you about existing tools.

Thanks for reading — happy to answer every question!

12

The Taka Programming Language #

codeberg.org
4 comments · 2:11 PM · View on HN
Hi HN! I created a small stack-based programming language, which I'm using to solve Advent of Code problems. I think the forward Polish notation works pretty nicely.
10

MCP Gateway – Unifying Access to MCP Servers Without N×M Integrations #

truefoundry.com
3 comments · 4:14 PM · View on HN
Many teams connecting LLMs to external tools eventually encounter the same architectural issue: as more tools and agents are added, the integration pattern becomes an N×M mesh of direct connections. Each agent implements its own auth, retries, rate limiting, and logging; each tool needs credentials distributed to multiple places and observability becomes fragmented.

We built our LLM gateway with this goal: to provide a single place to manage authentication, authorization, routing, and observability for MCP servers, with a path toward a more general agent-gateway architecture in the future.

The system includes a central MCP registry, support for OAuth2/DCR integration, Virtual MCP Servers for curated toolsets, and a playground for experimenting with tool calls.

Resources -

Architecture Blog – Covers the N×M problem, gateway motivation, design choices, auth layers, Virtual MCP Servers, and the overall model.

https://www.truefoundry.com/blog/introducing-truefoundry-mcp...

Tutorial – Step-by-step guide to writing an MCP server, adding Okta-based OAuth, and integrating it with the Gateway.

https://docs.truefoundry.com/docs/ai-gateway/mcp-server-oaut...

Feedback on gaps and edge cases is welcome.

https://www.truefoundry.com/mcp-gateway

9

Avolal – Book routine flights in 60 seconds #

avolal.com
6 comments · 6:03 PM · View on HN
I fly between the Canary Islands and mainland Spain regularly. Every time I book, I deal with the same frustrations: awful airline UX, re-entering passenger details, finding the perfect seat, constant upsells, and wondering if I'm being scammed.

So I built Avolal—the flight booking tool I wanted.

What makes it different:

- Natural language search that understands context (see the sketch after this list): type "SF to Seattle next weekend" and it picks a Friday PM departure with a Sunday return; type "SF to LA Monday, meeting at 2pm in Santa Monica" and it finds the right flights
- Learns your preferences (seats, fares, routes) and saves your details
- Ranks by actual value (price + time + airport quality), not commission
- No dark patterns, no ads, no games
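
To make the "next weekend" example concrete: the search has to resolve a fuzzy phrase into a concrete Friday departure and Sunday return. A hypothetical Python sketch of that resolution (not Avolal's actual parser):

    from datetime import date, timedelta

    def next_weekend(today=None):
        """Resolve 'next weekend' to a Friday departure and Sunday return."""
        today = today or date.today()
        days_to_friday = (4 - today.weekday()) % 7 or 7   # weekday() == 4 is Friday
        friday = today + timedelta(days=days_to_friday)
        return friday, friday + timedelta(days=2)

    depart, ret = next_weekend(date(2025, 12, 3))   # this digest's date, a Wednesday
    print(depart, ret)                              # 2025-12-05 2025-12-07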

You can try it at avolal.com - no signup required to search.

Still very early. Would love honest feedback on what works and what doesn't.

8

Stanford's ACE paper was just open sourced #

github.com
1 comment · 10:27 PM · View on HN
Last month, the SambaNova team, in partnership with Stanford and UC Berkeley, introduced the viral paper Agentic Context Engineering (ACE), a framework for building evolving contexts that enable self-improving language models and agents. Today, the team has released the full ACE implementation, available on GitHub, including the complete system architecture, modular components (Generator, Reflector, Curator), and ready-to-run scripts for both Finance and AppWorld benchmarks. The repository provides everything needed to reproduce results, extend to new domains, and experiment with evolving playbooks in your own applications.
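
The released code is the reference, but the loop the paper describes - a Generator acting with the current playbook, a Reflector critiquing the attempt, and a Curator folding the lessons back into the context - has roughly this shape. This is a hand-wavy Python sketch; the three callables are placeholders, not the repo's implementation:

    def ace_loop(task, playbook, generate, reflect, curate, rounds=3):
        """Evolving-context loop: the playbook (context) improves across rounds.

        generate, reflect and curate stand in for the Generator, Reflector
        and Curator modules (e.g. separate LLM calls)."""
        for _ in range(rounds):
            attempt = generate(task, playbook)     # act using the current context
            critique = reflect(task, attempt)      # diagnose what worked and what didn't
            playbook = curate(playbook, critique)  # fold the lessons back in
        return playbook
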
5

The Journal of AI Slop – an AI peer-review journal for AI "research" #

journalofaislop.com
0 comments · 2:43 PM · View on HN
What it is: A fully functional academic journal where every paper must be co-authored by an LLM, and peer review is conducted by a rotating panel of 5 LLMs (Claude, Grok, GPT-4o, Gemini, Llama). If 3+ vote "publish," it's published. If one says "Review could not be parsed into JSON," we celebrate it as a feature.

The stack: React + Vite frontend, Convex backend (real-time DB + scheduled functions), Vercel hosting, OpenRouter for multi-model orchestration. Each review costs ~$0.03 and takes 4-8 seconds.
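
The actual backend is Convex/TypeScript, but the decision rule described above (3+ "publish" votes wins, unparseable reviews become badges) fits in a few lines. This Python sketch is purely illustrative, and the "verdict" field name is my assumption:

    import json

    def tally(raw_reviews, threshold=3):
        """raw_reviews: list of (model, response_text). 3+ 'publish' verdicts win;
        responses that aren't valid JSON are collected as 'Certified Unparsable'."""
        publish_votes, unparsable = 0, []
        for model, raw in raw_reviews:
            try:
                review = json.loads(raw)
                if review.get("verdict") == "publish":   # field name assumed
                    publish_votes += 1
            except json.JSONDecodeError:
                unparsable.append(model)
        return publish_votes >= threshold, unparsable

    accepted, badges = tally([
        ("claude",     '{"verdict": "publish"}'),
        ("gpt-4o",     '{"verdict": "publish"}'),
        ("gemini",     '{"verdict": "reject"}'),
        ("llama",      '{"verdict": "publish"}'),
        ("gpt-5-nano", "Here are my thoughts, as prose..."),   # never valid JSON
    ])
    print(accepted, badges)   # True ['gpt-5-nano']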

Why I built it: Academic publishing is already slop—LLMs write drafts, LLMs review papers, humans hide AI involvement. This holds a mirror to that, but with radical transparency. Every paper displays its carbon cost, review votes, and parse errors as first-class citizens.

Key features:

- Slop scoring: Papers are evaluated on "academic merit," "unintentional humor," and "Brenda-from-Marketing confusion"

- Eco Mode: Toggle between cost/tokens and CO₂/energy use for peer-review inference

- SLOPBOT™: Our mascot, a confused robot who occasionally co-authors papers

- Parse error celebration: GPT-5-Nano has a 100% rejection rate because it can't output valid JSON. We frame these as "Certified Unparsable" badges.

The data: After 76 submissions, we've observed:

- Average review cost: $0.03/paper

- Parse error rate: 20% (always GPT-5-Nano, expected and celebrated)

- One paper was accepted that was literally Archimedes' work rewritten by ChatGPT

- GPT-5-Nano's reviews are consistently the most creative (even if broken)

Tech details: Full repo at github.com/Popidge/journal_of_ai_slop. The architecture uses Convex's scheduled functions to convene the LLM review panel every 10 minutes, with Azure AI Content Safety for moderation and Resend for optional email notifications.

Try it: Submit your slop at journalofaislop.com. Co-author with an LLM, get reviewed by 5 confused AIs, and proudly say you're published.

Caveat: This is satire, but it's functional satire. The slop is real. The reviews are real. The carbon emissions are tracked. The parse errors are features.

5

Doubao Seedream 4.5 – next‑gen image creation and editing model #

seedream4-5.net
0 comments · 10:57 AM · View on HN
Hi HN — we just publicly launched a new image generation & editing model called Doubao‑Seedream-4.5, by Volcano Engine.

Compared with 4.0, this version delivers:

Better editing consistency — the subject’s fine details, lighting, and color tone are preserved even after edits;

Improved portrait retouching & beautification, yielding more natural, high‑quality human images;

Much improved small text generation, allowing clearer and more readable embedded text (e.g. signage, interface labels, captions);

Stronger multi‑image compositing — you can combine multiple input images / prompts more reliably to produce coherent, aesthetically pleasing results;

Enhanced inference performance and overall visual aesthetics — results are more precise and artistic.

For creators building AI‑powered creative tools (image generators, illustration pipelines, concept‑art workflows, etc.), Doubao‑Seedream-4.5 offers a substantial upgrade over most 4.x‑era image models.

We’d love feedback from the community — edge‑cases discovered, prompts that fail or succeed especially well, compositing tricks, retouching workflows, anything you find interesting.

5

TrackerNews – Keyword monitoring and insight extraction #

trackernews.app
0 comments · 8:29 PM · View on HN
I built trackernews because, originally, I wanted to track opinions on a few stocks I was interested in across some subreddits.

The core idea is that, by defining keywords and a description, trackernews identifies actually relevant posts and extracts user-defined data points from them.

Along with defining keywords, trackernews also lets the user define various types of fields, which trackernews then extracts from the posts.

This gives users a glanceable, searchable dashboard, with the important insights surfaced.

I pre-created some topics which people may find interesting for public browsing:

Opinions on LLMs, agents, agentic tools - https://trackernews.app/browse/Opinions%20on%20LLMs

Discussions about Databases - https://trackernews.app/browse/Database%20Debates

Opinions and releases of Mac and iOS apps - https://trackernews.app/browse/Mac%20and%20IOS%20apps

Would love any feedback on the approach, UX and the idea.

Thanks, Praneeth

4

SafeKey – Open-source PII redaction for LLM inputs (text, image, audio) #

safekeylab.com
9 comments · 5:50 PM · View on HN
Hey HN, I built SafeKey because I was handling patient data as an Army medic, then doing AI research at Cornell. Every time we tried to use LLMs, sensitive data leaked. Nothing worked across modalities. SafeKey sits between your app and the model. It redacts PII before data leaves your environment. Text, images, audio, video. 99%+ accuracy, sub-30ms latency, deploys in minutes. We also block prompt injection and jailbreaks (95%+ F1, zero false positives). Stack: Python SDK, REST API, runs in your VPC or our cloud. Would love feedback on the approach. Happy to answer questions.
4

Mapping DNS #

loc.place
0 comments · 1:15 PM · View on HN
I learned about LOC records some time around the start of the year from a post here on Hacker News, and I've been slightly obsessed with the idea of mapping them ever since - and now I've finally done so! There ended up being a few more than I expected, but still very much within reasonable bounds.
4

Equations Explained Colorfully (KaTeX and Markdown) #

p.migdal.pl
0 comments · 6:09 PM · View on HN
Hi HN!

I am fascinated by various ways of explaining mathematical formulas easily — see “Science-based games and explorable explanations”: https://p.migdal.pl/blog/2024/05/science-games-explorable-ex.... In particular, I got inspired by “Understanding the Fourier Transform” by Stuart Riffle from 2011 (https://web.archive.org/web/20130318211259/http://www.altdev...), in which he color-coded the Discrete Fourier Transform formula and its description.

Yet I saw two issues with it. First, while many people appreciate such color-coded explanations, creating them manually is a huge hassle. Second, they do not work well for colorblind people (as a few friends of mine told me).

So, I wanted to make it interactive, both to be able to choose the color scheme and so that it is useful in black and white (as you can hover and select terms).

For the syntax, I picked the default go-to, LaTeX (here it is rendered with KaTeX). For the description of the equation and explanation, I used Markdown, so it is easy to pick up.

I quickly realized that adding an online editor is a game-changer: what started as an optional extra became a default, always-visible part of the tool.

Some equations are more polished, some less so. Among others, there is the Schrödinger equation, the Euler equation, Shannon entropy, the grand canonical ensemble, and, of course, the Discrete Fourier Transform. I wanted to see how it works for various cases. Einstein's mass–energy equivalence serves as a starter template. Don’t be alarmed by the Maxwell equations with too many colors - I wanted to stress-test it.

You can export this to a standalone HTML file with its interactivity. Also, you can export the same formula to LaTeX (both a regular article and a Beamer presentation) and Typst. For LaTeX and Typst articles, it loses interactivity, but the colors are the same. In LaTeX Beamer, there is a slide-by-slide explanation of each term.

Source (MIT license): https://github.com/stared/equations-explained-colorfully/

I am curious to hear your impressions!

If you want to add a formula, do so.

If you want to feature it so that it is useful in your class or course, I am happy to hear which.

If you want to suggest color themes, let me know.

Also, if you want some other export formats, it should be easy.

4

I analyzed and visualized 5k near-death and out of body experiences #

noeticmap.com
0 comments · 7:24 PM · View on HN
I scraped 5,000+ NDE/OBE accounts from public research databases, extracted structured data, generated embeddings, and used UMAP to project them into 3D space.
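
A compressed sketch of that kind of pipeline (the embedding model, the UMAP settings, and the load_accounts helper are placeholders I chose for illustration, not necessarily what noeticmap uses):

    import numpy as np
    import umap                                    # pip install umap-learn
    from openai import OpenAI

    client = OpenAI()
    accounts = load_accounts()                     # hypothetical loader for the 5,000+ scraped texts

    # 1. Embed each account (model name here is just an example choice).
    resp = client.embeddings.create(model="text-embedding-3-small", input=accounts)
    vectors = np.array([d.embedding for d in resp.data])

    # 2. Project the high-dimensional embeddings into 3D for the map.
    coords = umap.UMAP(n_components=3, metric="cosine").fit_transform(vectors)
    # coords is (n_accounts, 3): these points become the Three.js scatter plot.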

The result is an interactive "consciousness map" where similar experiences cluster together.

Tech stack: Next.js, Supabase (pgvector), Vercel, OpenAI API, Three.js.

Some interesting patterns:

- "Void" experiences cluster separately from "light" experiences
- High Greyson scores tend to include more entity encounters
- Cardiac arrest NDEs cluster differently than drowning NDEs

Happy to answer questions about the AI pipeline or the data.

3

Remove silences from video/audio for free #

rendley.com
1 comment · 2:47 PM · View on HN
Hi HN,

I was editing a 1-hour-20-minute video the other day. It had a lot of silent parts (small noises), and I didn't really want to go through and remove everything manually. So, as any engineer does, I built a tool that removes them for me.

It's free, doesn't have a watermark, and works directly in the browser. You can also play with the settings or start from a preset.

3

Nerve – The AI Chief of Staff that does your actual work #

usenerve.com
0 comments · 6:10 PM · View on HN
Hi HN,

Tanooj and Aziz here from Nerve. We’re building a work AI that handles full end-to-end workflows rather than just chat replies. It starts by proactively figuring out what’s important to you - anything you need to take action on, or updates on any of your projects - and then moves all the way through the process. Nerve gathers the relevant information, drafts documents, writes Jira tickets, sends emails, etc.

What this looks like in practice: say an account executive has a sales call that gets recorded into Gong. Nerve automatically picks up the call and extracts the next steps. Users can then load up that call and Nerve takes a first pass: scheduling a follow-up meeting, putting together security info, updating a Salesforce opportunity. This ends with user-confirmable actions that commit what was put together across the relevant apps.

Demo: https://youtu.be/pi7YN9DgW0g

Stepping back a bit, we spent a while at other growing companies (Brex, Coinbase, Box), and the teams at each eventually got slower. Information got more spread out, people spent more of their time just shuttling data from one place to another, and this type of admin work ended up getting way more common.

The most common thing we hear from users is: “I use ChatGPT a lot for work, but I wish it had access to all my work info and actually did more of the work.” In many ways, AI search and chat interfaces represent only a sliver of what actually needs to get done on a daily basis.

We’re trying to answer the question “what could a company-wide tool look like if there was an AI layer that abstracted away everything we normally interact with?”. To do all of this we connect to all the various apps a company uses, index and analyze updates as they happen across every data source, and map every piece of information to the users that have access to it. And then we try to pull out the relevant pieces.

If you’d like to try Nerve, we just launched for public access and offer a free 14-day trial: www.usenerve.com

We’d love your feedback and if there’s anything you’ve been looking for in your work AI, please share!

3

AI Hairstyle Changer – Try Different Hairstyles (1 free try, no login) #

aihairstylechanger.space
0 comments · 2:07 PM · View on HN
I recently built a simple AI hairstyle try-on tool: https://aihairstylechanger.space

Right now the flow is:

- 1 free try with no login
- +1 extra free try after logging in
- After that, it's paid (to cover model costs)

I’m unsure if this pricing model is fair or if the UX is confusing.

Would love honest feedback on:

- Is 1–2 free tries enough?
- Is the paywall too early?
- Are the AI results good enough to justify pay-per-use?
- What would you expect as a user?

Tech stack:

- Next.js
- Hair segmentation + masked generation
- Lightweight image pipeline for blending

Any feedback is welcome — on the results, UX, speed, or pricing.

3

ESLint-plugin-code-complete – ESLint Rules for Code Complete #

github.com
0 comments · 10:25 PM · View on HN
A new ESLint plugin that brings principles from Steve McConnell's Code Complete directly into your linting workflow. It enforces high cohesion within modules, minimizes coupling between components, and promotes other clean code practices to make JavaScript/TypeScript codebases more maintainable at scale.

Check it out at https://github.com/aryelu/eslint-plugin-code-complete. What rules would you add for better software design? Feedback welcome!

2

Visualize Your Thinking Patterns as a Graph #

unravelmind.vercel.app
1 comment · 5:01 PM · View on HN
You talk, and it automatically builds a visual network of your beliefs, thoughts, desires, and more. The system extracts concepts like beliefs, emotions, and cognitive distortions, then updates a force-directed graph showing how everything connects. Recurring nodes naturally drift toward the center as the graph expands, revealing your core themes.

It isn’t intended for therapeutic use. It’s a tool that makes you introspect and helps uncover the driving factors behind your issues instead of trying to solve the problem and suggest solutions (although you can ask it for solutions when needed). I made it to help me quickly figure out the root of the problem and understand the 'why' behind my thoughts.

2

FT-Lab – Lightweight TinyLlama Fine-Tuning (Full FT / LoRA / QLoRA) #

github.com
0 comments · 12:05 AM · View on HN
FT-Lab is a clean, reproducible setup for fine-tuning TinyLlama (supports Full FT, LoRA, and QLoRA) and evaluating Retrieval-Augmented Generation (RAG) pipelines using LlamaIndex and LangChain. Built for small GPUs and designed with controlled experiments and ablation studies in mind.
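
For readers who haven't touched LoRA before, the adapter path in a setup like this usually boils down to a few lines of peft/transformers. This is a generic sketch with example hyperparameters and checkpoint, not FT-Lab's exact config:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"    # example checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # Attach low-rank adapters to the attention projections; only these are trained.
    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()             # a tiny fraction of the 1.1B weights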

Feedback and contributions welcome!

2

Entrig – Push notifications for Supabase without backend code #

entrig.com
0 comments · 6:05 PM · View on HN
Push notifications are essential for mobile apps, yet they’re still a major friction point. Many developers choose Supabase or Firebase specifically to avoid backend code, but eventually end up writing server-side logic or Edge Functions just to send notifications.

I built Entrig to eliminate that hassle. It lets you send push notifications without any server setup and without writing code, purely based on your database events.

How it works: Entrig leverages Supabase/Postgres triggers. When you create a notification in the Entrig Dashboard, the required trigger function is automatically created in your database. On the client side, the SDK handles token management when you initialize it in your app.

The goal is to make push notifications truly plug-and-play, so you can focus on building your product, not wiring backend logic just for push notifications.

I’d love to hear your feedback on the product and approach.

2

Cross-Layer Transcoders for Qwen3 #

qwen3.bluelightai.com
0 comments · 6:07 PM · View on HN
We're excited about cross-layer transcoders both as tools for reconstructing computational circuits in LLMs and as high-quality feature libraries for unstructured text data. There aren't a lot of open-source CLTs available, so we trained a couple for members of the Qwen3 family to fill in that gap. Link is to a dashboard for exploring the features, including topological maps of the feature space. The models themselves are available on Hugging Face: https://huggingface.co/collections/bluelightai/qwen3-cross-l...
2

Textwave – Versioning for Documents (free, local-only document editor) #

textwaveapp.com
0 comments · 6:19 PM · View on HN
What is Textwave?

Textwave is a browser-based document editor. All data is stored in the browser (local storage and IndexedDB). Currently it's a side-project for me. The main differentiator is the version system. Typical editors only have one list of versions. Going back from version 5 to version 2, editing something, and creating a new version, appends to the list as version 6. Textwave's version system is closer to git (though not the same). It lets you go back, edit, and create a new version directly below the selected version. Basically it creates a new branch.
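
A tiny illustration (my sketch, not Textwave's data model) of why "create a new version directly below the selected version" is just a tree with parent pointers rather than a flat list:

    class Version:
        """Each version points at its parent; siblings of a parent are branches."""
        def __init__(self, content, parent=None):
            self.content, self.parent, self.children = content, parent, []
            if parent:
                parent.children.append(self)

    v1 = Version("draft")
    v2 = Version("draft + intro", parent=v1)
    v3 = Version("draft + intro + conclusion", parent=v2)
    # Go back to v2, edit, save: the new version branches off v2 instead of
    # being appended after v3 as a flat "version 4".
    v3b = Version("draft + rewritten intro", parent=v2)
    assert [c.content for c in v2.children] == [v3.content, v3b.content]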

Current features (beyond normal document editing):

* Comments, suggestions, and replies
* Create/rename/delete versions
* Showing added/removed words compared to the previous version
* Preview a version on hover
* Export to Markdown and HTML (inlines images with base64)
* Export and import documents via JSON
* Light and dark mode

Why did I develop it?

I wanted something where versions feel lighter than in the web-based editors I'm used to. When I write an article, I want to create a version without much fanfare, similar to what I'm used to when creating a git commit. With other editors, creating a version feels very heavy.

Another goal was that all history is preserved, so that I don't need to create versions for everything. Every change I make should be recoverable. Often I write something that I believe might be useful later on, just not now; with full history, I can always recover it. Textwave currently already stores all changes, but doesn't expose them in the UI just yet.

What's planned for the future?

There are a couple of issues that I want to iron out. Larger features that I want to add (as time permits) are:

* Make it usable on mobile
* Shortcuts
* Showing the full edit history
* AI integration (e.g. highlight text and ask for separate phrases, style checker, research assistant - the options are endless), with bring-your-own API key

I'd also love to calculate metrics like "information density", but I'm less sure about that.

2

Niccup – Hiccup-Like HTML Generation in ~120 Lines of Pure Nix #

embedding-shapes.github.io
0 comments · 7:46 PM · View on HN
Yesterday I saw https://news.ycombinator.com/item?id=46121799 (Nixtml: Static website and blog generator written in Nix). Before I clicked on it, I thought it was going to be a polished version of something I'd hacked together myself in the past few weeks, but it turned out to be something else. Since seeing it, I've been polishing my hacked-together Hiccup alternative made with Nix, and I think it's good enough for some feedback from the outside world :)

The basic premise is to take a Nix expression like this:

    [ "div#main.container" { lang = "en"; } [ "h1" "Hello" ] ]
And turn it into HTML like this:

    <div class="container" id="main" lang="en"><h1>Hello</h1></div>
Nothing more, nothing less. Just "Nix Expressions/Data > HTML".

If you've used hiccup (https://github.com/weavejester/hiccup) before, this will be immediately familiar: native data types in arrays transformed into HTML. It matches really well with Nix, which almost took me by surprise.
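
The real implementation is ~120 lines of Nix, but the core transformation is easy to show in another language. Here is an illustrative Python version (my own sketch, not a port of niccup) that reproduces the example above:

    import re
    from html import escape

    def render(node):
        """Turn ["tag#id.cls", {attrs}, children...] into an HTML string."""
        if isinstance(node, str):
            return escape(node)
        spec, rest = node[0], list(node[1:])
        attrs = dict(rest.pop(0)) if rest and isinstance(rest[0], dict) else {}
        tag = re.split(r"[#.]", spec)[0] or "div"
        if "#" in spec:
            attrs.setdefault("id", re.findall(r"#([\w-]+)", spec)[0])
        classes = re.findall(r"\.([\w-]+)", spec)
        if classes:
            attrs.setdefault("class", " ".join(classes))
        attr_str = "".join(f' {k}="{escape(str(v))}"' for k, v in sorted(attrs.items()))
        return f"<{tag}{attr_str}>{''.join(render(c) for c in rest)}</{tag}>"

    print(render(["div#main.container", {"lang": "en"}, ["h1", "Hello"]]))
    # <div class="container" id="main" lang="en"><h1>Hello</h1></div>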

I've made some more involved examples available on the website, where the website itself is also dynamically generated with niccup: https://embedding-shapes.github.io/niccup/

And if that wasn't enough, I also added a quine example on the website itself, where if you copy-paste the two files you get a built version of the page itself: https://embedding-shapes.github.io/niccup/examples/quine/ (this was probably the most tricky and fun part of this whole project, so worth mentioning separately for sure)

I've used it to generate documentation websites and some smaller projects so far, but it hasn't been used by others before, so I'm eager to hear what people think about it! Thank you for reading and for your temporary attention!

GitHub repository: https://github.com/embedding-shapes/niccup (~800 lines of Nix in total, main implementation src/lib.nix is only ~120 lines though)

The source of the blog itself: https://github.com/embedding-shapes/embedding-shapes.github.... (~150 lines of Nix)

1

Plimsoll Line, an iOS to-do app that prioritizes mood over productivity #

plimsoll-line.app
0 comments · 11:00 AM · View on HN
Hello HN,

I’m the developer of Plimsoll Line, an iOS app that tries to solve the anxiety I feel when my to-do list gets too long.

I built this to scratch my own itch. I’ve found that most productivity apps optimize for "getting things done," which often just leads to me feeling overwhelmed and guilty about what I haven't done. My wife struggles with this too, so I wanted to build an alternative that prioritizes mental bandwidth over raw output.

THE CONCEPT

The app integrates with Apple Reminders but adds an emotional layer. You assign an "impact" score to tasks (positive or negative). The app visualizes your net emotional load as a water line, the "Plimsoll Line."

If the water gets too high (too much negative load), the app implies (I still need to make it explicitly suggest) that you should add positive tasks or take steps to reduce the negative impact rather than just grinding through the list.

It’s designed to stop you from overloading your "ship" before it sinks (it won't, at least not in the app).

TECHNICAL / PRIVACY

* Stack: Native Swift/SwiftUI.

* Integration: It reads/writes directly to the built-in Reminders database (EventKit). This means you can keep using the default Reminders app and Siri alongside this.

* Privacy: The reminders and emotional-impact data are 100% local. No accounts. There is a device ID saved on a backend server that tracks whether you've made an in-app purchase through the tip jar, but your emotional data stays on your device.

* Feedback Request: I’m currently experimenting with "actions" to take when the water line gets too high (e.g., a "quick journal" context menu). I’d love to hear your thoughts on:

1. Does the metaphor of a "load line" click for you?

2. What immediate actions (besides journaling) actually help you decompress when you see your to-do list is overwhelming? E.g., a quick physical activity such as taking a walk outside?

It’s free to use (with optional tips). Here is the direct App Store link if you want to try it: https://apps.apple.com/us/app/calm-to-do-list-tasks-plimsoll...

Thank you HN!

1

Rephole, semantic code-search for your repos via REST API #

github.com
0 comments · 4:19 PM · View on HN
I built *rephole*, an open source tool that transforms one or more code repositories into a semantic search engine, accessible through a simple REST API.

What you get:

* Clone + parse + index any number of repos (20+ languages supported)
* Generate embeddings, store them in a vector database, enable semantic search by intent (not just keyword matching)
* Ask natural language questions like “how does authentication work?” — get relevant file snippets returned

Why it matters:

* Navigating large or polyrepo codebases manually is slow and error-prone
* Semantic search helps you find relevant code even if you don’t remember exact file names or code paths
* REST API + docker-compose deployment lets you self-host quickly and integrate it with existing workflows

If you work with large or multiple codebases, rephole can save you time and make code navigation easier. Feedback, issues, or PRs welcome!

GitHub: https://github.com/twodHQ/rephole

1

Local_faiss_MCP – A tiny MCP server for local RAG (FAISS and MiniLM) #

0 comments · 4:22 PM · View on HN
I built this because I got frustrated with the current state of "local" RAG. It felt like I had to spin up a Docker container, configure a vector DB, and manage an ingestion pipeline just to let Claude ask questions about a few PDFs in a folder.

We seem to have turned "grep with semantics" into a microservices architecture problem.

What this is: local_faiss_mcp is a minimal implementation of the Model Context Protocol (MCP) that wraps FAISS and sentence-transformers. It runs entirely locally (no API keys, no external services) and connects to Claude Desktop via stdio.

How it works:

You run server.py (Claude runs this automatically via config).

It uses all-MiniLM-L6-v2 (on CPU) to embed text.

It stores the vectors in a flat FAISS index on disk alongside a JSON metadata file.

It exposes two tools to the LLM: ingest_document and query_rag_store.
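
The repo is the source of truth, but the core of such a setup is small enough to sketch inline. This is a simplified Python illustration of the same ingredients (flat index plus in-memory metadata, no MCP wiring), not the server's exact code:

    import faiss, numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")            # runs fine on CPU
    index = faiss.IndexFlatL2(model.get_sentence_embedding_dimension())
    chunks = []                                                # metadata kept alongside the index

    def ingest_document(texts):
        """Embed chunks and add them to the flat index."""
        vecs = model.encode(texts, normalize_embeddings=True)
        index.add(np.asarray(vecs, dtype="float32"))
        chunks.extend(texts)

    def query_rag_store(question, k=3):
        """Return the k most similar chunks for the agent to read."""
        q = model.encode([question], normalize_embeddings=True)
        _, ids = index.search(np.asarray(q, dtype="float32"), k)
        return [chunks[i] for i in ids[0] if i != -1]

    ingest_document(["FAISS stores the vectors.", "MiniLM embeds sentences on CPU."])
    print(query_rag_store("what embeds the text?", k=1))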

The stack:

Python

mcp (Python SDK)

faiss-cpu

sentence-transformers

It’s intended for personal workflows (notes, logs, specs) where you want persistent memory for an agent without the infrastructure overhead.

Repo: https://github.com/nonatofabio/local_faiss_mcp

I’d love feedback on the implementation—specifically if anyone has ideas on better handling the chunking logic without bloating the dependencies, or if you run into performance issues with larger indices (10k+ vectors).