Daily Show HN


Show HN for April 20, 2026

37 posts
202

TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed #

github.com · 39 comments · 12:07 AM · View on HN
I ported Microsoft's TRELLIS.2 (4B-parameter image-to-3D model) to run on Apple Silicon via PyTorch MPS. The original requires CUDA with flash_attn, nvdiffrast, and custom sparse convolution kernels, none of which work on Mac.

I replaced the CUDA-specific ops with pure-PyTorch alternatives: a gather-scatter sparse 3D convolution, SDPA attention for sparse transformers, and a Python-based mesh extraction replacing CUDA hashmap operations. Total changes are a few hundred lines across 9 files.
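
For intuition, the gather-scatter idea can be sketched in a few lines. This is a NumPy toy, not the port's actual PyTorch code, and it assumes a submanifold-style convolution where output is defined only at the input's active voxels:

```python
import numpy as np

def sparse_conv3d(coords, feats, weights, offsets):
    """Toy gather-scatter sparse 3D convolution (submanifold style:
    output only at the input's active voxels).
    coords:  (N, 3) int voxel coordinates of active voxels
    feats:   (N, C_in) features at those voxels
    weights: (K, C_in, C_out), one weight matrix per kernel offset
    offsets: (K, 3) integer kernel offsets"""
    index = {tuple(c): i for i, c in enumerate(coords)}
    out = np.zeros((len(coords), weights.shape[2]))
    for k, off in enumerate(offsets):
        for i, c in enumerate(coords):
            j = index.get(tuple(c + off))
            if j is not None:                    # gather neighbor feature
                out[i] += feats[j] @ weights[k]  # scatter-add its contribution
    return out
```

A real implementation batches the per-offset gathers into vectorized index ops rather than Python loops, but the structure (gather neighbor features, apply per-offset weights, scatter-add) is the same.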

Generates ~400K vertex meshes from single photos in about 3.5 minutes on M4 Pro (24GB). Not as fast as H100 (where it takes seconds), but it works offline with no cloud dependency.

https://github.com/shivampkumar/trellis-mac

160

Mediator.ai – Using Nash bargaining and LLMs to systematize fairness #

mediator.ai · 74 comments · 3:07 PM · View on HN
Eight years ago, my then-fiancée and I decided to get a prenup, so we hired a local mediator. The meetings were useful, but I felt there was no systematic process to produce a final agreement. So I started to think about this problem, and after a bit of research, I discovered the Nash bargaining solution.

Yet if John Nash had solved negotiation in the 1950s, why did it seem like nobody was using it today? The issue was that Nash's solution required that each party to the negotiation provide a "utility function", which could take a set of deal terms and produce a utility number. But even experts have trouble producing such functions for non-trivial negotiations.

A few years passed and LLMs appeared, and about a year ago I realized that while LLMs aren’t good at directly producing utility estimates, they are good at doing comparisons, and this can be used to estimate utilities of draft agreements.
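
One standard way to turn pairwise comparisons into utility estimates is the Bradley-Terry model. The post doesn't say which method Mediator.ai actually uses, so this minimal MM-update sketch is only illustrative:

```python
def bradley_terry(n_items, comparisons, iters=200):
    """Estimate latent utilities from pairwise comparisons via
    Bradley-Terry MM updates. `comparisons` is a list of
    (winner, loser) index pairs; returns scores summing to 1."""
    w = [1.0] * n_items
    wins = [[0] * n_items for _ in range(n_items)]
    for winner, loser in comparisons:
        wins[winner][loser] += 1
    for _ in range(iters):
        new = []
        for i in range(n_items):
            num = sum(wins[i])  # total wins of item i
            den = sum((wins[i][j] + wins[j][i]) / (w[i] + w[j])
                      for j in range(n_items) if j != i)
            new.append(num / den if den > 0 else w[i])
        total = sum(new)
        w = [x / total for x in new]
    return w
```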

This is the basis for Mediator.ai, which I soft-launched over the weekend. Be interviewed by an LLM to capture your preferences and then invite the other party or parties to do the same. These preferences are then used as the fitness function for a genetic algorithm to find an agreement all parties are likely to agree to.
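
Once utilities exist, the Nash bargaining step itself is simple: among candidate agreements, pick the one that maximizes the product of each party's gain over their no-deal utility. A minimal sketch, where the candidate set, utility functions, and disagreement points are all placeholders:

```python
def nash_bargain(candidates, utilities, disagreement):
    """Pick the candidate maximizing the Nash product: the product of
    each party's utility gain over their disagreement (no-deal) point.
    Candidates where any party does worse than no deal are excluded."""
    best, best_score = None, -1.0
    for cand in candidates:
        gains = [u(cand) - disagreement[party]
                 for party, u in utilities.items()]
        if min(gains) <= 0:      # individually irrational: skip
            continue
        score = 1.0
        for g in gains:
            score *= g
        if score > best_score:
            best, best_score = cand, score
    return best
```

In Mediator.ai's framing, a score like this would serve as the fitness function the genetic algorithm optimizes over draft agreements.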

An article with more technical detail: https://mediator.ai/blog/ai-negotiation-nash-bargaining/

106

Alien – Self-hosting with remote management (written in Rust) #

48 comments · 3:18 PM · View on HN
Self-hosting is becoming very popular because it lets users keep their data private, local, and inside their own environment.

Unfortunately, self-hosting breaks down when someone starts paying for your software. Especially if it's an enterprise customer.

Customers usually don't actually know how to operate your software. They might change something small — Postgres version, environment variables, IAM, firewall rules — and things start failing. From their perspective, the product is broken. And even if the root cause is on their side, it doesn't matter... the customer is always right, you're still the one expected to fix it.

But you can't. You don't have access to their environment, you have no real visibility, and you can't run anything yourself. So you're stuck debugging a system you don't control, through screenshots and copy-pasted logs on a Zoom call, responsible for something you can't even touch.

I think there's a better model of paid self-hosting: the software runs in the customer's environment, but the developer can actually operate it. It's a win-win: for the customer, their data stays private and local, and the developer still has control over deployments, updates, and debugging.

Alien provides infrastructure to deploy and operate software inside your users' environments, while retaining centralized control over updates, monitoring, and lifecycle management. It currently supports AWS, GCP, and Azure targets.

GitHub: https://github.com/alienplatform/alien

Getting started: https://alien.dev/docs/quickstart

How it works: https://alien.dev/docs/how-alien-works

72

Ctx – a /resume that works across Claude Code and Codex #

github.com · 28 comments · 4:35 PM · View on HN
ctx is a local SQLite-backed skill for Claude Code and Codex that stores context as a persistent workstream that can be continued across agent sessions. Each workstream can contain multiple sessions, notes, decisions, todos, and resume packs. It essentially functions as a /resume that can work across coding agents.
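
As a rough sketch of what such a store could look like (this schema is a guess for illustration, not ctx's actual one):

```python
import sqlite3

def open_store(path=":memory:"):
    """Hypothetical ctx-style store: one SQLite file holds
    workstreams, the agent sessions bound to them, and their
    notes/decisions/todos, so either agent can pick up where
    the other left off."""
    conn = sqlite3.connect(path)
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS workstream (
        id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE IF NOT EXISTS session (
        id INTEGER PRIMARY KEY,
        workstream_id INTEGER NOT NULL REFERENCES workstream(id),
        agent TEXT NOT NULL,        -- e.g. 'claude-code' or 'codex'
        transcript_path TEXT);      -- stable binding to one transcript
    CREATE TABLE IF NOT EXISTS item (
        id INTEGER PRIMARY KEY,
        workstream_id INTEGER NOT NULL REFERENCES workstream(id),
        kind TEXT CHECK (kind IN ('note','decision','todo')),
        body TEXT NOT NULL);
    """)
    return conn
```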

Here is a video of how it works: https://www.loom.com/share/5e558204885e4264a34d2cf6bd488117

I initially built ctx because I wanted to start a workstream in Claude and continue it from Codex. Since then, I've added a few quality-of-life improvements, including the ability to search across previous workstreams, manually delete parts of the context, and branch off existing workstreams. I've started using ctx instead of the native '/resume' in Claude/Codex because I often have a lot of sessions going at once, and with the lists these apps currently give you, it's not always obvious which one is the right one to pick back up. ctx gives me a much clearer way to organize and return to the sessions that actually matter.

It's simple to install: after cloning the repo, run the one-line ./setup.sh, which adds the skill to both Claude Code and Codex. After that, you can use ctx directly in your agent as a skill with '/ctx [command]' in Claude and 'ctx [command]' in Codex.

A few things it does:

- Resume an existing workstream from either tool

- Pull existing context into a new workstream

- Keep stable transcript binding, so once a workstream is linked to a Claude or Codex conversation, it keeps following that exact session instead of drifting to whichever transcript file is newest

- Search for relevant workstreams

- Branch from existing context to explore different tasks in parallel

It’s intentionally local-first: SQLite, no API keys, and no hosted backend. I built it mainly for myself, but thought it would be cool to share with the HN community.

19

Git Push No-Mistakes #

github.com · 6 comments · 6:40 PM · View on HN
no-mistakes is how I kill AI slop. It puts a local git proxy in front of my real remote. I push to no-mistakes instead of origin, and it spins up a disposable worktree, runs my coding agent as a validation pipeline, forwards upstream only after every check passes, opens a clean PR automatically, and babysits the CI pipeline for me.
15

CyberWriter – a .md editor built on Apple's (barely-used) on-device AI #

cyberwriter.app · 6 comments · 1:07 PM · View on HN
Apple has quietly shipped a fairly complete on-device AI stack in macOS, with these features first getting API access in macOS 26. The foundation model has multiple components, and the skills it ships with actually make this ~3B-parameter model useful. The API for hitting the model is super easy, and almost no one is wiring these pieces together yet.

- Foundation Models (macOS 26): a ~3B-parameter LLM with an API. Streaming, structured output, tool use. No API key, no cloud call, no per-token cost.

- NLContextualEmbedding (Natural Language framework, macOS 14+): a BERT-style 512-dim text embedder. Exactly what OpenAI and Cohere sell, sitting in Apple's SDKs since iOS 17.

- SFSpeechRecognizer / SpeechAnalyzer: on-device speech-to-text, including live dictation. Solid accuracy on Apple Silicon.

I built cyberWriter, a Markdown editor, on top of all three, mostly as a test and showcase of what the stack can do. I had actually integrated local and cloud AI first; when Apple shipped the foundation model, it stacked on easily, and now users with no local-model or API knowledge can use it with a click or two. The other real reason: most Markdown editors need plugins that run with full system access, and I work with health data and can't have that.

Vault chat / semantic search. The app indexes your Markdown folder via NLContextualEmbedding (around 50 seconds for 1000 chunks on an M1). The search bar gets a "Related Ideas" section that matches by meaning - typing "orbital mechanics" surfaces notes about rockets and launch windows even when those exact words never appear. Ask the AI a question and it retrieves the top 5 chunks as context. Plain RAG, but the embedder, retrieval, chat model, and search all run locally.
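
The retrieval step is plain cosine similarity over precomputed chunk embeddings. A minimal sketch, with small vectors standing in for NLContextualEmbedding's 512-dim output:

```python
import numpy as np

def top_k(query_vec, chunk_vecs, k=5):
    """Return indices of the k chunks most similar to the query
    by cosine similarity. chunk_vecs: (N, D) precomputed embeddings;
    query_vec: (D,) embedding of the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    sims = c @ q                    # cosine similarity per chunk
    return np.argsort(-sims)[:k]    # best-first indices
```

The selected chunks then go into the chat model's context, which is the "plain RAG" loop the post describes.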

AI Workspace. Command+Shift+A opens a chat panel, Command+J triggers inline quick actions (rewrite, summarize, change tone, fix grammar, continue). Apple Intelligence is the default; Claude, OpenAI, Ollama, and LM Studio all work if you prefer. The same context layer - document selection, attached files, retrieved vault chunks - feeds every provider through the same system-message path. Because the vault context is file and filename aware, it can create backlinks to the referenced file if it writes or edits a doc for you.

Voice notes and dictation. Record a voice note directly into your doc, transcribe it with SpeechAnalyzer, or just dictate into the editor while you think. Audio never leaves the Mac.

The privacy story is straightforward because the primitives are already private. Vectors live in a `.vault.embeddings.json` file next to your vault, never sent anywhere. If you use Apple Intelligence, even the retrieved text stays on-device. For cloud models there is a clear toggle and an inline warning before any filenames or snippets leave the machine.

Honest limitations:

- 512-dim embeddings are solid mid-tier. A GPT-4-class embedder catches subtler relationships this will miss.

- 256-token chunks can split long paragraphs mid-argument.

- Foundation Models caps its context window around 6K characters, so vault context is budgeted to 3K with truncation markers on the rest.

- English-only right now. NLContextualEmbedding has Latin, Cyrillic, and CJK model variants; wiring the language detector across chunks is Phase 2.

The developer experience for these APIs is genuinely good. Foundation Models streams cleanly, NLContextualEmbedding downloads assets on demand and gives you mean-poolable token vectors in a handful of lines. Curious what others here are building on this stack - feels like low-hanging fruit that has been sitting there for a while.

https://imgur.com/a/HyhHLv2

The Apple AI embedding feature is going live today. I'm honestly surprised it even works out of the box.

9

MCPfinder – An MCP server that finds and installs other MCP servers #

mcpfinder.dev · 1 comment · 9:38 PM · View on HN
I’ve been building and using agents heavily lately. The Model Context Protocol ecosystem is growing insanely fast, but discovering and configuring new tools is still highly manual. Every time I needed to connect an agent to a new service, I had to browse registries, figure out the transport type, identify required env vars, and manually update "mcp.json" files.

So I built MCPfinder. It aggregates servers from the official MCP registry, Glama, and Smithery (around 25,000 combined entries) into a deduplicated, ranked catalog.
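
The aggregation step presumably boils down to merge-and-dedupe across registries. This sketch, with invented entry fields (`repo`, `rank`, `src`), shows the shape of it:

```python
def merge_catalogs(*catalogs):
    """Illustrative merge of server entries from several registries:
    dedupe by normalized repo URL and keep the highest-ranked copy
    of each, returning a best-first catalog. Field names are
    hypothetical, not MCPfinder's actual schema."""
    best = {}
    for catalog in catalogs:
        for entry in catalog:
            key = entry["repo"].rstrip("/").lower()  # normalize URL
            if key not in best or entry["rank"] > best[key]["rank"]:
                best[key] = entry
    return sorted(best.values(), key=lambda e: -e["rank"])
```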

But the real twist is the DX: MCPfinder is itself an MCP server :D

You only install it once as your "base capability" via standard stdio: npx -y @mcpfinder/server

From then on, when you tell your AI, "I need to query my PostgreSQL database," the magic happens autonomously.

It's completely free, AGPL-3.0 licensed, and built purely to optimize AI-tool surface discovery.

I'd love to hear your thoughts, feedback, or edge cases where JSON generation for specific platforms is acting up.

8

Themeable HN #

github.com · 1 comment · 3:54 PM · View on HN
Hi HN, I needed a distraction from the new and scary thing I was working on this weekend, so I grabbed HN's stylesheet, extracted CSS variables for every colour it defined, then re-applied them back onto HN using its own selectors (plus some extras for bits of HN which are styled inline, and some separation between header and content styling), allowing it to be themed with variables.

I used the variables to implement a dark mode (and a pure black variant for OLED), plugged everything into my existing HN browser extension, which already lets you apply custom CSS, and made it handle theme switching via attributes on <html>, so custom styles and theming are now manageable from the extension. There are a few examples in the release notes, with screenshots and copy-pasteable CSS.

If you just want to grab the stylesheet which sets up the CSS variables and application rules to use in your own thing, it's here [1], but not _everything_ is themeable without a bit more work - in particular, I had to replace the HN <img src="y18.svg"> logo with an inline version of the SVG, so its fills can be controlled by CSS.

[1] https://github.com/insin/comments-owl-for-hacker-news/blob/m...

7

Libredesk – self-hosted, single binary Intercom/Zendesk alternative #

libredesk.io · 4 comments · 1:16 PM · View on HN
Libredesk is a 100% free and open-source helpdesk, a Zendesk/Intercom alternative. Backend in Go, frontend in Vue + shadcn/ui. Unlike many "open-core" alternatives that lock essential features behind enterprise plans, Libredesk is fully open-source and will always stay that way.

Last year I posted v0.1.0 here: https://news.ycombinator.com/item?id=43158166

A year later, it's omni-channel. Alongside email, you can drop a live chat widget (beta) onto your website and handle both channels in the same agent UI. The chat widget, CSAT pages, and email templates are all customizable, and self-hosters can swap out the bundled HTML/JS/CSS assets for full white-labeling.

Genuinely, if you're paying per-agent SaaS pricing for a helpdesk today, I really think Libredesk can replace it. It covers most of what mainstream helpdesks do, and more lands with each release. I'd love to hear what would stop you from switching.

I originally built Libredesk for what we needed at work: we were on osTicket and wanted something cleaner. These days I work on Libredesk in evenings and weekends alongside a full-time job, so response times on issues aren't instant, but I read every one. The docs are a bit behind the code too, but catching up.

Agent dashboard demo: https://demo.libredesk.io/

Live chat widget demo: https://libredesk.io/ (bottom-right corner)

Github: https://github.com/abhinavxd/libredesk

7

Modular – drop AI features into your app with two function calls #

modular.run · 1 comment · 1:15 AM · View on HN
I kept hitting the same wall at work every time we needed to ship an AI feature. What looked like a week of work turned into picking a model, setting up a vector DB, managing embeddings, wiring up chat history, handling retries — none of it was the actual feature.

So I built Modular. You register a function that returns your app's data, then call ai.run() for one-shot features or ai.chat() for stateful conversation. Everything else — context management, embeddings, session history, model routing, retries — is handled. MCP-native from day one. Works with Claude, GPT-4o, and Gemini.

Still early — collecting feedback before building the full SDK. Would love to hear if others have hit this same wall, or if you think I'm solving the wrong problem.
5

fmsg – An open distributed messaging protocol #

markmnl.github.io · 0 comments · 2:06 PM · View on HN
fmsg is a message definition and protocol intended as an alternative to email and IM apps. Like email it's distributed – anyone can host a server for their domain. Unlike email, messages are binary, verifiable by all peers, and linked into a DAG using cryptographic hashes. Sender verification and message integrity are built into the protocol, so you get what email needs SPF/DKIM/DMARC for out of the box.
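
The hash-linking can be illustrated with a toy in JSON (real fmsg messages are binary, so this only shows the DAG idea, not the wire format):

```python
import hashlib
import json

def make_msg(body, parent_hashes):
    """Toy hash-linked message: each message embeds the hashes of the
    messages it replies to, so the reply graph forms a DAG and any
    peer can verify a parent wasn't altered after being referenced."""
    msg = {"body": body, "parents": sorted(parent_hashes)}
    digest = hashlib.sha256(
        json.dumps(msg, sort_keys=True).encode()).hexdigest()
    return msg, digest
```

Any change to a parent's content changes its hash, which breaks every link pointing at it; that is where the "verifiable by all peers" property comes from.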

The host implementation (fmsgd) is written in Go. There's a Docker compose setup to get a full stack running in minutes: https://github.com/markmnl/fmsg-docker

The spec is nearing v1.0: https://github.com/markmnl/fmsg — would love feedback. Send me a fmsg at @[email protected]

4

Open load forecasts that beat US grid operators on 6 of 7 RTOs #

surgeforecast.com · 0 comments · 3:14 PM · View on HN
Fine-tuned Chronos-2 on 7 years of EIA-930 demand + ASOS temperature for every US balancing authority that publishes a load series — 53 across the three interconnections.

On a 2025 hold-out (~61,000 hours), it beats the operators' own day-ahead submissions to EIA — the production forecasts they use to schedule generation — on 6 of 7 major RTOs. Macro MAE ~40% lower. The one loss is ISO-NE, whose forecasting is just very good (24h-ahead MASE 0.34). On the same window, CAISO and SPP operator submissions did worse than "same as yesterday."
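
For reference, MASE against a 24-hour seasonal-naive baseline ("same as yesterday") is straightforward to compute; values below 1 beat the naive forecast. A sketch (not the benchmark's actual code):

```python
import numpy as np

def mase_24h(actual, forecast):
    """Mean absolute scaled error with a 24-hour seasonal-naive
    baseline: forecast MAE divided by the MAE of predicting
    each hour with the same hour yesterday."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    mae = np.mean(np.abs(actual - forecast))
    naive_mae = np.mean(np.abs(actual[24:] - actual[:-24]))
    return mae / naive_mae
```

By this metric, an operator submission scoring above 1.0 did worse than "same as yesterday," which is the comparison the post makes for CAISO and SPP.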

The site plots the median + 80% PI band against the operator submission, with 48h of actuals running into the forecast.

Code, model on HF, and an operator-comparison benchmark that reproduces from one script:

- https://github.com/tylergibbs1/surge

- https://huggingface.co/Tylerbry1/surge-fm-v3

4

Einlang, a math-intuitive language with lots of good stuff #

github.com · 0 comments · 4:03 PM · View on HN
With Einlang, you can write code such as

let x = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[5.0, 6.0], [7.0, 8.0]]
];
let row_sums[..batch, i] = sum[j](x[..batch, i, j]);
let loss = sum[..batch, i](row_sums[..batch, i] * row_sums[..batch, i]);
let dloss_dx = @loss / @x;

Einlang also supports recurrence. You can write code like

let alpha = 0.25;
let x[0] = 8.0;
let x[k in 1..6] = {
    let prev = x[k - 1];
    let loss = prev * prev;
    let g = @loss / @prev;
    prev - alpha * g
};
4

Pwneye – discovering and accessing IP cameras (ONVIF/RTSP) #

github.com · 0 comments · 5:24 PM · View on HN
Hi HN,

I’ve been working on pwneye, a CLI tool for interacting with IP cameras exposing ONVIF and RTSP services.

During penetration tests and red team engagements, I kept running into the same friction: discovery, authentication testing, enumeration, and stream validation spread across different tools or quick one-off scripts.

pwneye was built to handle that workflow end-to-end, from discovery to actually accessing and validating streams.

Current features include:

- ONVIF discovery and authentication testing (wordlists, multithreading)

- Post-auth enumeration (device info, users, network config, media profiles)

- RTSP extraction via ONVIF

- RTSP port detection and basic vendor identification

- Vendor-aware RTSP bruteforce

- Stream validation, preview and recording

- ONVIF reboot support
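
For context on the discovery step above: ONVIF device discovery is WS-Discovery, a SOAP Probe sent over UDP multicast to 239.255.255.250:3702. A sketch of building such a probe (this is not pwneye's code, and the namespace details come from the WS-Discovery/ONVIF specs as I recall them, so verify before relying on it):

```python
import uuid

def build_ws_discovery_probe():
    """Build a WS-Discovery Probe for ONVIF NetworkVideoTransmitter
    devices. Cameras that implement ONVIF discovery answer this when
    it is sent via UDP multicast to 239.255.255.250:3702."""
    msg_id = f"uuid:{uuid.uuid4()}"
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>{msg_id}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body><d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe></e:Body>
</e:Envelope>"""
```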

It’s still early, but already usable in real-world engagements.

Would be interested in feedback, especially from people who have dealt with ONVIF/RTSP cameras or IoT security in general.

Repo: https://github.com/hackerest/pwneye

3

Auto-generated titles and colors for parallel Claude Code sessions #

github.com · 0 comments · 10:53 PM · View on HN
I run three to seven Claude Code sessions in parallel terminals, and I kept losing track of which was doing what, especially toward the end of the day when your mind is tired from the constant context-switching.

which-claude-code is a small plugin that adds a title and color to each Claude Code session, so you know right away which session is doing what.

1. On every prompt submit, a UserPromptSubmit hook forks a background claude -p --model haiku call that reads the last few transcript turns and writes a 3-6 word title to ~/.claude/which-claude-code/titles/<session_id>.txt. The hook returns in ~10ms so the prompt isn't delayed.

2. A statusline script prints ● <title> · <model> · <cwd> (<branch>) on every render. The dot and title are tinted from a 20-color palette hashed off the session ID.
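
The stable-tint trick is just hashing the session ID into a palette index. A sketch with a stand-in five-color palette instead of the plugin's 20:

```python
import hashlib

# Stand-in palette of ANSI 256-color escape codes (the plugin uses 20).
PALETTE = [f"\033[38;5;{c}m" for c in (31, 67, 95, 131, 167)]

def session_color(session_id, palette=PALETTE):
    """Hash the session ID into a stable palette index, so the same
    session always renders with the same color across refreshes."""
    h = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    return palette[h % len(palette)]
```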

Enjoy!

3

Tmux-bar – One-tap switching between windows in current tmux session #

github.com · 1 comment · 3:54 PM · View on HN
I revived an old idea of mine: a small native macOS menu bar app that shows your tmux windows as Touch Bar buttons, so switching windows is one tap away.

It runs quietly in the menu bar, watches which terminal is focused (Terminal, iTerm2, Ghostty), and refreshes the Touch Bar with your current tmux windows.

This was a fun “vibe coding” side project, but also a practical tool I wanted for my own workflow. Hope it can be useful to someone else.

3

GeoFastMapAPI – open-source Fast vector and raster server for mapmakers #

geofastmap.com · 0 comments · 8:29 PM · View on HN
Hello HN, I built this project to unify cutting-edge tech for fast vector and raster serving over the web into a single Docker Compose setup, using only open-source technologies.

The main language is Python, the DB is Postgres with PostGIS, vector tiles are built with tippecanoe, and satellite image search goes through STAC and is rendered with TiTiler.

The OGC API spec is the glue that makes it all integrable with existing software like QGIS.

The project code was written with AI assistance, following my own architecture decisions, gathered from building (and using) systems that serve large geospatial content.

Rather than a finished job, this is a start: a base for integrating more tools and enabling more analysis in the future.

Code on github: https://github.com/rupestre-campos/geofastmapAPI

3

Eris – desktop PGP workstation with simple GUI #

eris.sibexi.co · 0 comments · 10:19 PM · View on HN
I want to show a personal project that I made for myself and have been using for some time. It's a lightweight PGP workstation for working with text messages: a very simple interface, encrypted storage, and basic process protection from tampering (on Windows and Linux). I made it as a lightweight, simple alternative to Kleopatra, covering just the basic workflows: sign/verify/encrypt/decrypt text messages. I'd really appreciate any feedback on this project.

Direct link to the project at GitHub: https://github.com/sibexico/Eris

For Windows users, it's available through WinGet: winget install --id sibexico.Eris -e

2

MyKana, a Japanese learning app I built for my own study #

mykana.app · 0 comments · 9:42 AM · View on HN
I built this for myself while trying to get serious about learning Japanese again:

https://mykana.app/

I studied a little Japanese more than 10 years ago, but only for a short time, so most of it didn't stick. Since then, I've been to Japan many times because my girlfriend is Japanese and my older brother lives there. More recently, with me and my girlfriend traveling back and forth to Japan quite a lot and both of us having family in Tokyo, I wanted a better way to rebuild my foundation and keep practicing consistently.

The app started around kana practice, but grew into something larger:

- hiragana / katakana practice

- kanji and vocabulary study

- review-based learning

- AI chat for more natural practice

- some light gamification

The AI chat is free, but requires Google sign-in. That is mostly to prevent bot abuse, since it costs me money to run, and secondarily because it allows progress syncing across devices.

I'd appreciate honest feedback, especially on what would make this more genuinely useful for learning rather than just more feature-rich.

1

I built an app to practice public speaking for ESL learners #

orratio.com · 0 comments · 5:28 PM · View on HN
Hi HN,

I wanted an app to practice public speaking and get feedback on my English, since I’m a non-native speaker. I couldn’t find anything like that since most tools are AI tutors, which feel very repetitive and boring.

The idea is simple: you prepare and record your opinion on a random topic, then improve over time. Topics are varied, so you can practice defending your point, sharing your experiences, or talking about your plans.

I’m open to any feedback: orratio.com

1

Workingasync.io – A job board for asynchronous remote jobs #

workingasync.io · 0 comments · 1:41 PM · View on HN
After living in multiple countries around the world while working remotely, and getting frustrated with hiring policies, I discovered that the best working cultures are asynchronous ones: meetings are minimized, and employees can work when and where they please for maximum productivity.

That’s why I created a job board dedicated to posting exactly these types of jobs.

You can check out the site here: https://workingasync.io

Right now there are a limited number of jobs listed (including roles from companies like DuckDuckGo and Docker), but more will be added soon. I’m also planning to add new features over the coming weeks.

If you’re tired of “remote” jobs that still force you into 9-5 overlap and constant meetings, this is for you.

Feedback is very welcome!