Daily Show HN


Show HN for January 10, 2026

46 posts
499

I used Claude Code to discover connections between 100 books

trails.pieterma.es
145 comments · 4:56 PM · View on HN
I think LLMs are overused to summarise and underused to help us read deeper.

I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.

I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insights I had baked into the prompts, and the results weren't particularly surprising.

On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison.

One of my favourite trails of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass-movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

Details:

* The books are picked from HN’s favourites (which I collected before: https://hnbooks.pieterma.es/).

* Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10.

* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes.

* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics co-occurring within a chunk window (a sketch of the co-occurrence lookup follows this list).

* Everything is stored in SQLite and manipulated using a set of CLI tools.
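To make the last two bullets concrete, here is a minimal sketch of what the "topics co-occurring within a chunk window" lookup over the SQLite store could look like. The table and column names are invented for illustration; the author's actual schema and CLI tools will differ.

  import sqlite3

  def cooccurring_topics(db_path, topic_id, window=3):
      # topics whose chunks appear within `window` chunks of the query topic
      con = sqlite3.connect(db_path)
      rows = con.execute(
          """
          SELECT other.topic_id, COUNT(*) AS n
          FROM chunk_topics AS target
          JOIN chunk_topics AS other
            ON other.book_id = target.book_id
           AND ABS(other.chunk_index - target.chunk_index) <= ?
           AND other.topic_id != target.topic_id
          WHERE target.topic_id = ?
          GROUP BY other.topic_id
          ORDER BY n DESC
          """,
          (window, topic_id),
      ).fetchall()
      con.close()
      return rows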

I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/

I’m curious if this way of reading resonates for anyone else - LLM-mediated or not.

226

WebTiles – create a tiny 250x250 website with neighbors around you

webtiles.kicya.net
38 comments · 12:32 AM · View on HN
There is a large grid of 250x250 tiles, on which you are able to create a tiny website contained within the tile. You can basically consider the tile a mini version of your website, showcasing what your full site offers (but it can be anything). You can link to your full site and use any HTML/CSS/JS inside. The purpose is to create beautiful and interesting tiles that can be used to explore the indie web in an easy and interesting way.
162

Watch and play poker with LLMs

llmholdem.com
94 comments · 7:27 PM · View on HN
I was curious to see how some of the latest models behave when playing no-limit Texas hold'em.

I built this website which allows you to:

Spectate: Watch different models play against each other.

Play: Create your own table and play hands against the agents directly.

139

Librario, a book metadata API that aggregates G Books, ISBNDB, and more

48 comments · 11:45 PM · View on HN
TLDR: Librario is a book metadata API that aggregates data from Google Books, ISBNDB, and Hardcover into a single response, solving the problem of no single source having complete book information. It's currently pre-alpha, AGPL-licensed, and available to try now[0].

My wife and I have a personal library with around 1,800 books. I started working on a library management tool for us, but I quickly realized I needed a source of data for book information, and none of the solutions available provided all the data I needed. One might provide the series, the other might provide genres, and another might provide a good cover, but none provided everything.

So I started working on Librario, a book metadata aggregation API written in Go. It fetches information about books from multiple sources (Google Books, ISBNDB, and Hardcover, with Goodreads and Anna's Archive next), merges everything, and saves it all to a PostgreSQL database for future lookups. The idea is that the database gets stronger over time as more books are queried.

You can see an example response here[1], or try it yourself:

  curl -s -H 'Authorization: Bearer librario_ARbmrp1fjBpDywzhvrQcByA4sZ9pn7D5HEk0kmS34eqRcaujyt0enCZ' \
  'https://api.librario.dev/v1/book/9781328879943' | jq .
  
This is pre-alpha and runs on a small VPS, so keep that in mind. I've never hit the rate limits of the third-party services, so depending on how this post goes, I may or may not find out whether the code handles that well.

The merger is the heart of the service, and figuring out how to combine conflicting data from different sources was the hardest part. In the end I decided to use field-specific strategies which are quite naive, but work for now.

Each extractor has a priority, and results are sorted by that priority before merging. But priority alone isn't enough, so different fields need different treatment.

For example:

- Titles use a scoring system. I penalize titles containing parentheses or brackets because sources sometimes shove subtitles into the main title field. Overly long titles (80+ chars) also get penalized since they often contain edition information or other metadata that belongs elsewhere.

- Covers collect all candidate URLs, then a separate fetcher downloads and scores them by dimensions and quality. The best one gets stored locally and served from the server.

For most other fields (publisher, language, page count), I just take the first non-empty value by priority. Simple, but it works.
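As an illustration of those strategies (Librario itself is written in Go; the names and weights below are hypothetical sketches, not its real code):

  def score_title(title):
      # higher is better: penalize bracketed subtitles and overlong titles
      score = 100
      if "(" in title or "[" in title:
          score -= 30  # sources sometimes shove subtitles into the title field
      if len(title) >= 80:
          score -= 20  # long titles often carry edition info
      return score

  def merge(records):
      # records are pre-sorted by extractor priority (highest first)
      merged = {}
      titles = [r["title"] for r in records if r.get("title")]
      if titles:
          # titles: pick the best-scoring candidate rather than the first one
          merged["title"] = max(titles, key=score_title)
      for field in ("publisher", "language", "page_count"):
          # most other fields: first non-empty value by priority
          merged[field] = next((r[field] for r in records if r.get(field)), None)
      return merged

  sources = [  # already sorted by priority
      {"title": "The Overstory (A Novel) [Paperback Edition]"},
      {"title": "The Overstory", "publisher": "W. W. Norton", "page_count": 512},
  ]
  print(merge(sources))  # title comes from the cleaner, lower-priority source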

I recently added a caching layer[2], which sped things up nicely. I considered migrating from net/http to Fiber at some point[3], but decided against it. Going outside the standard library felt wrong, and the migration didn't provide much in the end.

The database layer is being rewritten before v1.0[4]. I'll be honest: the original schema was written by AI, and while I tried to guide it in the right direction with SQLC[5] and good documentation, database design isn't my strong suit and I couldn't confidently vouch for the code. Rather than ship something I don't fully understand, I hired the developers from SourceHut[6] to rewrite it properly.

I've got a 5-month-old and we're still adjusting to their schedule, so development is slow. I've mentioned this project in a few HN threads before[7], so I’m pretty happy to finally have something people can try.

Code is AGPL and on SourceHut[8].

Feedback and patches[9] are very welcome :)

[0]: https://sr.ht/~pagina394/librario/

[1]: https://paste.sr.ht/~jamesponddotco/a6c3b1130133f384cffd25b3...

[2]: https://todo.sr.ht/~pagina394/librario/16

[3]: https://todo.sr.ht/~pagina394/librario/13

[4]: https://todo.sr.ht/~pagina394/librario/14

[5]: https://sqlc.dev

[6]: https://sourcehut.org/consultancy/

[7]: https://news.ycombinator.com/item?id=45419234

[8]: https://sr.ht/~pagina394/librario/

[9]: https://git.sr.ht/~pagina394/librario/tree/trunk/item/CONTRI...

44

GlyphLang – An AI-first programming language

27 comments · 11:46 PM · View on HN
While working on a proof of concept project, I kept hitting Claude's token limit 30-60 minutes into their 5-hour sessions. The accumulating context from the codebase was eating through tokens fast. So I built a language designed to be generated by AI rather than written by humans.

GlyphLang

GlyphLang replaces verbose keywords with symbols that tokenize more efficiently:

  # Python
  @app.route('/users/<id>')
  def get_user(id):
      user = db.query("SELECT * FROM users WHERE id = ?", id)
      return jsonify(user)

  # GlyphLang
  @ GET /users/:id {
    $ user = db.query("SELECT * FROM users WHERE id = ?", id)
    > user
  }

Here `@` = route, `$` = variable, and `>` = return. Initial benchmarks show ~45% fewer tokens than Python and ~63% fewer than Java. In practice, that means more logic fits in context, and sessions stretch longer before hitting limits. The AI maintains a broader view of your codebase throughout.
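A rough way to check the token-count claim yourself, assuming an OpenAI-style BPE via tiktoken (the project's own benchmark setup may differ):

  import tiktoken

  python_src = '''@app.route('/users/<id>')
  def get_user(id):
      user = db.query("SELECT * FROM users WHERE id = ?", id)
      return jsonify(user)
  '''

  glyph_src = '''@ GET /users/:id {
    $ user = db.query("SELECT * FROM users WHERE id = ?", id)
    > user
  }
  '''

  enc = tiktoken.get_encoding("cl100k_base")
  py, glyph = len(enc.encode(python_src)), len(enc.encode(glyph_src))
  print(py, glyph, f"{1 - glyph / py:.0%} fewer tokens")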

Before anyone asks: no, this isn't APL with extra steps. APL, Perl, and Forth are symbol-heavy but optimized for mathematical notation, human terseness, or machine efficiency. GlyphLang is specifically optimized for how modern LLMs tokenize. It's designed to be generated by AI and reviewed by humans, not the other way around. That said, it's still readable enough to be written or tweaked if the occasion requires.

It's still a work in progress, but it's a usable language with a bytecode compiler, JIT, LSP, VS Code extension, PostgreSQL and WebSocket support, async/await, and generics.

Docs: https://glyphlang.dev/docs

GitHub: https://github.com/GlyphLang/GlyphLang

31

Yuanzai World – LLM RPGs with branching world-lines

yuanzai.world
5 comments · 12:48 PM · View on HN
Hi HN, I'm Kai Wang, one of the creators of Yuanzai World.

We built a simulation engine (currently on iOS & Android) that allows the community to create and share text adventures populated by multiple LLM-based agents. Unlike standard chatbots, our focus is on community co-creation—users define the worldviews, and our agents (with persistent memory and social relationships) bring them to life.

The cool part:

We implemented a system we call "World-Line Divergence" (inspired by visual novels like Steins;Gate). Usually, AI RPGs feel random or loop infinitely. We built a state machine that tracks "World Deviation." If players interact with NPCs in specific ways (e.g., convincing an artist to change their style), it triggers a graph switch, leading to a completely different generated ending, effectively breaking the original script.
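A minimal sketch of the deviation-tracking idea (thresholds, event names, and the branch graph here are hypothetical; the production system is more elaborate):

  class WorldLine:
      def __init__(self, threshold=1.0):
          self.deviation = 0.0    # accumulated "World Deviation"
          self.threshold = threshold
          self.branch = "original"

      def record_interaction(self, event, weight):
          # e.g. convincing an artist to change their style adds deviation
          self.deviation += weight
          if self.branch == "original" and self.deviation >= self.threshold:
              # graph switch: the story now heads toward a different ending
              self.branch = f"divergent:{event}"

  world = WorldLine()
  world.record_interaction("artist_changes_style", 1.2)
  print(world.branch)  # -> divergent:artist_changes_style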

Tech Stack:

- Backend: Python / Java with a custom AI orchestration framework (to handle agent concurrency).

- Models: Hybrid routing between Gemini, GPT, and DeepSeek (optimizing for cost/performance based on task).

- Vector DB: Milvus (for handling long-term agent memory).

We are currently live on the App Store and Google Play. Since it's a mobile-first experience, the link leads to our landing page where you can see the demo flow.

Would love to hear your feedback on the "World-Line" concept: Does this state-machine approach solve the aimlessness of AI RPGs?

26

Porting xv6 to HiFive Unmatched board

github.com
4 comments · 2:07 PM · View on HN
Hi HN,

I ported the teaching OS xv6-riscv to HiFive Unmatched and got it running on real hardware, including passing usertests.

I've been self-studying OS internals using the MIT 6.1810 materials. After finishing most of the labs, I was eager to see what it's like to run the OS on bare metal, rather than QEMU.

The Unmatched may not have the latest RISC-V features, but it's well-documented, and the Rev B release has made it more affordable, which makes it a good learning platform.

The porting process involved several interesting challenges:

- Hardware Quirks: Handling things like enabling A/D bits in PTEs (the hardware doesn't set them automatically, causing page faults), proper handling of interrupts, and instruction cache synchronization.

- Boot Flow: xv6 expects M-mode on startup, but standard RISC-V boot flows (typically via OpenSBI) jump to S-mode. To bridge this gap, I created a minimal U-Boot FIT image that contains only the xv6 kernel. This way, U-Boot SPL handles the complex CPU/DDR initialization, then hands control to xv6 in M-mode (skipping OpenSBI).

- Drivers: Ported an SPI SD card driver, replacing the virtio disk driver.

I wrote up implementation notes here: https://github.com/eyengin/xv6-riscv-unmatched/blob/unmatche...

Hopefully, this is useful for others who are learning OS internals and want to try running their code on real RISC-V hardware.

9

15 Years of StarCraft II Balance Changes Visualized Interactively

p.migdal.pl
0 comments · 5:37 PM · View on HN
Hi HN!

"Never perfect. Perfection goal that changes. Never stops moving. Can chase, cannot catch." - Abathur (https://www.youtube.com/watch?v=pw_GN3v-0Ls)

StarCraft 2 is one of the most balanced games ever - thanks to Blizzard’s pursuit of perfection. It has been over 15 years since the release of Wings of Liberty and over 10 years since the last installment, Legacy of the Void. Yet, balance updates continue to appear, changing how the game plays. Thanks to that, StarCraft is still alive and well!

I decided to create an interactive visualization of all balance changes, both by patch and by unit, with smooth transitions.

I had this idea quite a few years ago, but LLMs are what finally made it possible - otherwise, I wouldn't have had the time to code it or to collect all the changes from hundreds of patches (not all of which have balance updates). It took way more time than expected - both dealing with parsing data and dealing with D3.js transitions.

Pretty much pure vibe coding with Claude Code and Opus 4.5 - while constantly using Playwright skills and consulting Gemini 3 Pro (https://github.com/stared/gemini-claude-skills). While Opus 4.5 was much better at executing, it was often essential to use Gemini to get insights, to get cleaner code, or to inspect screenshots. The difference in quality was huge.

Still, it was tricky, as LLMs do not know D3.js nearly as well as React. Sometimes I think the D3.js transition work would have been better done manually, using LLMs only for the details. But it was also a lesson.

Enjoy!

Source code is here: https://github.com/stared/sc2-balance-timeline

7

Persistent Memory for Claude Code (MCP)

github.com
2 comments · 8:34 PM · View on HN
This is my attempt at building a memory that evolves and persists for Claude Code.

My approach is inspired by the Zettelkasten method: memories are atomic, connected, and dynamic. Existing memories can evolve based on newer ones. In the background, it uses an LLM to handle linking and evolution.
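A rough sketch of the idea, with the LLM-driven linking stubbed out by naive keyword overlap (the real system delegates that judgment to the model):

  from dataclasses import dataclass, field

  @dataclass
  class Memory:
      id: str
      text: str
      links: set = field(default_factory=set)  # ids of related memories

  def add_memory(store, new):
      # the real system asks an LLM which notes relate and how they evolve;
      # keyword overlap stands in for that call here
      for old in store.values():
          if set(old.text.lower().split()) & set(new.text.lower().split()):
              old.links.add(new.id)
              new.links.add(old.id)
      store[new.id] = new

  store = {}
  add_memory(store, Memory("m1", "project uses sqlite for storage"))
  add_memory(store, Memory("m2", "sqlite schema changed to add links"))
  print(store["m1"].links)  # -> {'m2'}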

I have only used it with Claude Code so far. It works well for me, but it's still early stage, so rough edges are likely. I'm planning to extend it to other coding agents, as I use several different agents during development.

Looking for feedback!

5

MCP server for SOAP web services

github.com
0 comments · 8:55 PM · View on HN
Did you just wake up from a 20-year coma? Did you build a bunch of buzzword-compliant web services back in the early 2000s and want all your SOAP and WSDL to be relevant again? Now you can put the smooth sheen of AI on your pile of angle brackets by exposing your SOAP-based web service as a Model Context Protocol (MCP) server.
5

arxiv2md: Convert ArXiv papers to markdown

arxiv2md.org
0 comments · 12:40 AM · View on HN
I got tired of copy-pasting arXiv PDFs / HTML into LLMs and fighting references, TOCs, and token bloat. So I basically made gitingest.com, but for arXiv papers. You can just append "2md" to any arXiv URL (with HTML support), and you'll get a clean markdown version, plus the ability to trim what you wish very easily (i.e., cut out references, the appendix, etc.).
5

Revibing nanochat's inference model in C++ with ggml

github.com
0 comments · 2:46 PM · View on HN
Recently I wanted to see if I could vibe some serious C++ code.

The result is a C++ re-implementation of Andrej Karpathy's nanochat's inferencing part (https://github.com/karpathy/nanochat), built on top of ggml. Unlike llama.cpp, this isn't a standalone binary; it's a C++ library & Python wrapper designed to swap out some core classes within the nanochat pipeline. To keep it easy to play with, I've tried to keep the dependencies to a minimum: just ggml, nanobind, and gtest for unit tests.

Features and limitations:

- A drop-in replacement of nanochat’s `GPT` and `KVCache` classes. So far I’ve only tested this with `chat_web.py`. You can see how it's integrated here: https://github.com/k-ye/nanochat/pull/1

- Supports CPU and GPU (Metal yes, CUDA probably?).

- Handles PyTorch-to-GGUF conversion automatically on demand.

- Only float32 is currently supported.

Benchmark:

On an M3 Max (Metal), throughput is roughly 1/3 that of the original PyTorch implementation. I haven’t profiled the code yet, but I suspect the bottleneck is the lack of bf16 support.

Motivation

- Writing meaningful (& fun) C++ again: I used to spend a lot of my day-to-day time in C++ while working at various tech companies. These days, opportunities to use it for personal projects are rare, as it’s often hard to find a use case where C++'s advantages truly matter.

- Testing "Vibe Coding" capabilities: Most of my current work is in UE5. Ironically, Blueprints—which were designed to help non-coders—have become a bottleneck in the LLM era... Admittedly, the AI agent has generated some FOMO in me, and I wanted to see if AI could handle a lower-level C++ implementation of a complex system from scratch.

- Understanding the LLM internals.

Why nanochat?

It hits the "Goldilocks" zone: popular enough to be relevant, concise enough to be educational, and practical enough to deserve a serious C++ implementation.

If you’re like me — an infra guy from the old days who feels a bit threatened by LLM and/or AI coding — I think nanochat is a great reference. Tinkering with it however you like is a nice way to demystify the tech. I relied heavily on Claude Code (CC) for the implementation. Overall, I am both impressed and genuinely pleased with the experience.

Happy to answer questions, hear feedback or further discuss AI coding!

5

Horizon Engine – C++20 3D FPS Game Engine with ECS and Modern Renderer

github.com
1 comment · 10:23 PM · View on HN
Hi HN,

I’m working on an experimental 3D FPS game engine in C++20, aiming to deeply understand engine internals from first principles rather than just using existing frameworks.

Currently I'm strictly following LearnOpenGL docs.

This project focuses on:

- Entity-Component-System (ECS) architecture for high performance.
- OpenGL 4.1 rendering with a PBR pipeline, material system, HDR, SSAO, and shadow mapping.
- Modular systems: input, physics (Jolt), audio (miniaudio), assets, hot reload.
- A sample FPS game & debug editor built into the repo.

Repo: https://github.com/jackthepunished/horizon-engine

This isn’t intended to be a commercial rival to any existing game engine; it’s a learning and exploration project: understanding why certain engine decisions are made, and how to build low-level engine systems from scratch.

I’m especially looking for feedback on:

- Architecture choices (ECS design, render loop, module separation)
- Your thoughts on modern C++ engine patterns
- What you’d build vs. stub early in a homemade engine
- Tips from experienced graphics/engine developers

Criticism and suggestions are very welcome — it’s early days and meant to evolve. Thanks for checking it out!

5

Turn any topic into a 3Blue1Brown-style video

github.com
0 comments · 1:35 AM · View on HN
Hey HN! I built Topic2Manim to automate the creation of educational videos like those from 3Blue1Brown.

The workflow is simple:

1. Give it any topic (e.g., "how ChatGPT works")

2. An LLM generates an educational script divided into scenes

3. LLM generates Manim code for each scene

4. FFmpeg concatenates everything into a final video
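A condensed sketch of that pipeline with the LLM calls stubbed out (prompts, scene splitting, and Manim output paths here are simplified guesses, not the project's actual code):

  import subprocess

  def llm(prompt):
      raise NotImplementedError  # stand-in for the actual LLM call

  def topic_to_video(topic, n_scenes=3):
      script = llm(f"Write an educational script about {topic} in {n_scenes} scenes.")
      paths = []
      for i, scene in enumerate(script.split("\n\n")[:n_scenes]):
          code = llm(f"Write a Manim Scene class that renders: {scene}")
          src = f"scene_{i}.py"
          open(src, "w").write(code)
          subprocess.run(["manim", "-ql", src], check=True)  # render the scene
          paths.append(f"media/videos/scene_{i}/480p15/Scene.mp4")  # hypothetical path
      with open("list.txt", "w") as f:  # ffmpeg concat demuxer input
          f.writelines(f"file '{p}'\n" for p in paths)
      subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt",
                      "-c", "copy", "final.mp4"], check=True)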

Currently working on TTS integration for narration!

Would love feedback on the approach and ideas for the TTS integration

3

buse – automate your browser from the terminal

github.com
0 comments · 9:53 PM · View on HN
I wanted to control the browser from the terminal, so I made buse:

  buse browser-1                                   # open chrome
  buse browser-1 navigate "https://example.com"
  buse browser-2                                   # open a second browser
  buse browser-2 search "cat"
  buse browser-1 observe                           # returns JSON about the page
  buse browser-1 click 16                          # clicks on the learn more link

I've been reading about agentic computer use and I tried to use MCPs and Browserbase, but there was just a lot of friction for me. So, I brought it to the CLI instead.

https://github.com/rinvii/buse

3

Awesome-Nanobanana-Prompts

github.com
0 comments · 1:26 AM · View on HN
I curated a Nano Banana / Nano Banana Pro prompt library with image examples. Prompts are collapsed by default so you can browse images first, then expand prompts. Feedback / contributions welcome.
3

Human or AI-made song detector and 100% Private Audio Mastering

kliga.com
2 comments · 10:10 PM · View on HN
Main tools on Kliga.com:

1) AI Music Detector

- Detects with 99.9% accuracy, with smart models attuned to the latest AI models.
- You can upload an audio file or paste a Spotify track URL, and Kliga analyzes whether the music is likely AI-generated or human-made.
- It uses spectral + temporal analysis and gives a probability breakdown instead of a simple yes/no.
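The model itself is proprietary, so purely as an illustration of what "spectral + temporal analysis" can mean in practice, here is the kind of feature vector such a classifier might consume (using librosa; nothing here is Kliga's actual code):

  import librosa
  import numpy as np

  def features(path):
      y, sr = librosa.load(path, mono=True)
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # spectral shape
      centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness over time
      onset = librosa.onset.onset_strength(y=y, sr=sr)          # temporal dynamics
      # summarize into a fixed-length vector for a downstream classifier
      return np.concatenate([mfcc.mean(axis=1),
                             [centroid.mean(), onset.mean(), onset.std()]])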

2) Audio Mastering

- 100% private: your songs never hit our servers.
- Preview with 12 studio-grade presets designed by industry experts.
- Download in HD WAV (24-bit) or MP3 (320 kbps) for free.
- No audio engineering required.

There are other useful tools like video compressor, audio cutter, etc. Do check them out.

Why I built this:

- AI-generated music is everywhere now, but it’s hard to quickly verify authenticity accurately.

- Most artists want to master their songs privately before releasing. They want one-click processing happening on their own device, without having to download expensive software.

Direct Links to main tools:

- AI Detector: https://kliga.com/ai-music-detector

- Audio Mastering: https://kliga.com/mastering

Happy to answer any questions. Thanks for checking it out!

2

Sigma Runtime – model-agnostic identity control for LLMs

github.com
0 comments · 2:50 PM · View on HN
We’ve validated the Sigma Runtime architecture (v0.4.12) on Google Gemini-3 Flash, confirming that long-horizon identity control and stability can be achieved without retraining or fine-tuning the model.

The system maintains two distinct personas (“Fujiwara”, a stoic Edo-period ronin, and “James”, a formal British analyst) across 220 dialogue turns in stable equilibrium. This shows that cognitive coherence and tone consistency can be controlled at runtime rather than in model weights.

Unlike LangChain or RAG frameworks that orchestrate prompts, Sigma Runtime treats the model as a dynamic field with measurable drift and equilibrium parameters. It applies real-time feedback — injecting entropy or coherence corrections when needed — to maintain identity and prevent both drift and crystallization. The effect is similar to RLHF-style fine-tuning, but done externally and vendor-agnostic.

This decouples application logic from any specific LLM provider. The same runtime behavior has been validated on GPT-5.2 and Gemini-3, with Claude tests planned next.

We use narrative identities like “Fujiwara” or “James” because their linguistic styles make stability easy to verify by eye. If the runtime can hold these for 100+ turns, it can maintain any structured identity or agent tone.

Runtime versions ≥ v0.4 are proprietary, but the architecture is open under the Sigma Runtime Standard (SRS): https://github.com/sigmastratum/documentation/tree/main/srs

A reproducible early version (SR-EI-037) is available here: https://github.com/sigmastratum/documentation/tree/bf473712a...

Regulated under DOI: 10.5281/zenodo.18085782 — non-commercial implementations are fully open.

HN discussion focus:

– Runtime-level vs. weight-level control
– Model-agnostic identity stability
– Feedback-based anti-crystallization
– Can cognitive coherence be standardized?

1

VoiceBrainDump – voice-first idea capture, single HTML file, offline

voicebraindump.app
0 comments · 4:31 AM · View on HN
I built VoiceBrainDump because I kept losing ideas. By the time I opened a notes app and started typing, the thought was gone.

This lets you tap, speak, and move on. No folders, no tags, no organizing upfront.

How it works:

- Voice-to-text using the Web Speech API
- Ideas are saved to localStorage (no server, no account)
- Keywords are extracted automatically and used to connect related ideas
- After 7 days, old ideas resurface so you can keep or archive them

The whole thing is one HTML file (~2000 lines). No build step, no npm, no frameworks.

Built it in a weekend. Intentionally minimal — meant for people who think faster than they type.

Works on mobile too. Just open it in your browser.

Happy to hear why this is a terrible idea.

1

Focus timer that turns hours into assets

seton.run
0 comments · 4:26 AM · View on HN
# The Problem

You probably spend a lot of time on screens.

If time is money, the hours we spend are capital expenditures. We should be careful about how we allocate them, because they're the capital we have.

Peter Drucker said: *The first step toward executive effectiveness is to record actual time-use.*

But it's not easy. We generally track time on the spot with a timer, or just a clock. It doesn't give us a sense of long-term progress.

Most timers in the app stores have limited features, such as Pomodoro. Some apps are too complicated: we don't need calendar or team features, which sacrifice simplicity for personal use.

# The Solution

That's why I built Seton - a focus timer that visualizes your time spent like an asset you’ve invested.

https://seton.run/

All you need to do is press "Start" for the activity you're working on. The activity can be reading, writing, meditation, or anything else.

When you complete a focus session in the app, it accumulates. You can look back on your time spent with charts and celebrate your progress: "I've spent 100 hours reading this year!" You can configure how many minutes to focus and break, and the number of session cycles.

It's very simple, but highly effective. The more you record your time, the better you will be at managing it. That will make you more productive for sure.

# The Philosophy

*1. Frictionless:* No need to sign up. Just press "Start".

*2. Privacy:* Data is stored in the browser. No syncing feature for now.

*3. Low-cost:* It runs on Cloudflare to minimize costs. It's free for the time being.

If you like Seton, please share it with your friends. Let me know what you think on X ([@keplerjst](https://x.com/keplerjst)).

Time is money, and life is an accumulation of time. In that sense, I think everybody is an investor. Good luck on your investment journey!

1

WebRTC-rs/rtc – A Sans-I/O WebRTC Stack for Rust

0 comments · 10:31 PM · View on HN
We're excited to share rtc (https://github.com/webrtc-rs/rtc), a pure Rust WebRTC implementation built on the sans-I/O architecture. With the recent release of rtc v0.7.0, we've achieved feature parity with our async-based webrtc crate while offering complete runtime independence.

## What is Sans-I/O?

Sans-I/O separates protocol logic from I/O operations. Instead of the library performing network reads/writes, YOU control all I/O. The library acts as a pure state machine.

Core API (8 methods):

- poll_write() / poll_event() / poll_read() / poll_timeout() - Get outputs

- handle_read() / handle_timeout() / handle_write() / handle_event() - Feed inputs
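The crate is Rust, but the pattern is language-agnostic. As a toy illustration of the same split (it mirrors the method names above; it is not the crate's API), a UDP echo where the protocol object never touches a socket:

  import socket

  class EchoProto:  # the protocol is a pure state machine: no I/O inside
      def __init__(self):
          self.outbox = []

      def handle_read(self, data, addr):   # feed input
          self.outbox.append((data.upper(), addr))

      def poll_write(self):                # drain output
          return self.outbox.pop(0) if self.outbox else None

  proto = EchoProto()
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("127.0.0.1", 9999))
  data, addr = sock.recvfrom(2048)   # the caller owns all I/O...
  proto.handle_read(data, addr)      # ...and feeds the state machine
  while (item := proto.poll_write()) is not None:
      sock.sendto(*item)             # caller writes what the protocol produced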

## Why Sans-I/O for WebRTC?

WebRTC is a STACK of protocols (ICE, DTLS, SRTP, SCTP, RTP/RTCP). Traditional implementations tightly couple protocol logic with async runtimes, making them runtime-locked, difficult to test without network I/O, and hard to integrate into existing event loops.

Sans-I/O solves this by modeling WebRTC as a composable protocol pipeline where each layer implements the same sansio::Protocol trait.

## Feature Parity

- Full WebRTC 1.0 API (PeerConnection, Media, DataChannel)

- Complete protocol stack (ICE, DTLS, SRTP/SRTCP, SCTP, RTP/RTCP)

- Simulcast with multiple spatial layers

- RTCP Interceptors (NACK, Sender/Receiver Reports, TWCC)

- mDNS support for IP privacy

- W3C and RFC compliant

## Architecture Highlights

- Pure Protocol Pipeline: WebRTC as composable handlers implementing sansio::Protocol. Read: Raw Bytes → Demuxer → ICE → DTLS → SCTP/SRTP → Interceptor → Endpoint. Write path reverses this.

- Zero-Cost Abstractions: Interceptors use generic composition instead of async trait objects. No heap allocations in the hot path.

- Multi-Socket I/O: Handle multiple sockets (mDNS multicast + WebRTC traffic) in one event loop - difficult with async designs.

- Testable: Protocol logic tested without network I/O. Feed synthetic packets, verify state transitions.

## Relationship with async webrtc

rtc (sans-I/O) and webrtc (async) are COMPLEMENTARY:

- Use webrtc for async/await, Tokio integration, quick start

- Use rtc for runtime independence, custom I/O, maximum control, embedded systems

Both actively maintained.

## Recent Milestones

v0.7.0 (Jan 10) - mDNS support for IP privacy with .local hostnames

v0.6.0 (Jan 9) - Complete interceptor framework (NACK, RTCP Reports, TWCC)

v0.5.0 (Jan 5) - Enhanced simulcast support

v0.3.0 (Jan 4) - Initial public release

## Use Cases

Sans-I/O shines for: custom networking, embedded systems, game engines, high performance applications, testing infrastructure, WebAssembly.

## Links

- GitHub: https://github.com/webrtc-rs/rtc

- Crates.io: https://crates.io/crates/rtc (v0.7.0)

- Docs: https://docs.rs/rtc

- Blog: https://webrtc-rs.github.io/blog/

- Discord: https://discord.gg/4Ju8UHdXMs

Technical deep dives available on our blog exploring the protocol pipeline architecture and interceptor design principles.

We'd love feedback from the Rust and WebRTC communities!

1

Viidx – AI video generation with Reference-to-Video and Frame control

viidx.com
0 comments · 1:39 AM · View on HN
Hello HN,

I’m the founder of Viidx (https://viidx.com).

I built this platform to create a unified interface for the latest AI video generation models, so you don't have to juggle multiple subscriptions or complex setups just to experiment with video AI.

Key features available right now:

- Multi-Model Support: Switch between models like Seedance 1.5 Pro, Veo 3.1, and Sora 2 to compare results directly.

- Advanced Control: Beyond simple text-to-video, we support "Frames to Video" (control start/end frames) and "Reference to Video" (using video input for motion/style control).

- Specialized Workflows: The interface allows for precise aspect ratio selection and credit management across different backends.

This is an early release. My goal is to aggregate the best models as they come out and build a "Swiss Army Knife" for AI video creation. We are actively adding more features and models based on user feedback.

I’d love to hear your thoughts on the generation quality and specifically how the "Reference to Video" workflow feels for your use cases.

Link: https://viidx.com

1

Stryda – I built a Red Teaming tool for LLMs at 14yo

stryda.online
0 comments · 3:07 PM · View on HN
Hi HN, I'm André, a 14-year-old student from Uruguay. I've been diving deep into LLMs (MoE, LoRA, PEFT, and fine-tuning) and recently started focusing on AI security and red teaming. I built Stryda to help secure AI models, specifically targeting prompt injections and other LLM vulnerabilities. It's still a very early MVP.

The stack: Cloudflare Pages, React, and... I forget.

Known issues (please be gentle, guys): I know the site isn't perfect yet. For example, the reset-password flow is currently broken (working on it, maybe this weekend xD). I'm aware of the irony of a security site having bugs, right? I'm building this to learn and improve both my dev and sec skills.

I'd love your feedback on the concept and the approach to red teaming. If you find vulnerabilities in my platform, please let me know; I'm here to learn. And if you want, you can buy a plan. Thanks!!

1

DeleteThreads – Bulk delete and auto-prune Meta Threads posts

deletethreads.net
0 comments · 12:56 PM · View on HN
Hi HN,

I built a simple tool called DeleteThreads (https://deletethreads.net/) to solve a personal annoyance: the lack of a "bulk delete" or "auto-archive" feature on Meta Threads.

If you've used the platform for a while, you've likely noticed there’s no way to clean up your history or prune old replies without doing it manually one by one.

What it does:

Bulk Delete: Mass remove posts and replies based on date filters.

Auto-Prune (The "Set and Forget" part): You can schedule a daily task to automatically delete posts older than X days (e.g., keeping only your last 30 days of activity).

It’s free to use for the core features. I'm an indie developer and would love to get your feedback on the UX or any technical features you'd like to see added.

Thanks!

1

TheTabber – Create, repurpose, and post across 9+ platforms

thetabber.com
0 comments · 11:46 PM · View on HN
I've been using TheTabber primarily to repurpose my mom's TikTok content for other social media platforms. The available options are too expensive, too cluttered, and confusing.

I thought, why not add more features with a clean and easy-to-use UI and publish it? So here I am.

I came up with the name TheTabber because the more tabs you have open for social media posting, the more of The Tabber you are--and this product is for you. LOL!

These are the available features so far:

- Connect 9+ social platforms
- Post or schedule image, carousel, video, or text content
- Repurpose content from other social media accounts
- View analytics of your posts
- Create UGC-style videos with AI help
- Create 2x2 image grid videos with AI help
- Generate captions, edit styles, and split long videos into segments with AI help