Daily Show HN

Show HN for January 31, 2026

30 posts
125

Phage Explorer

phage-explorer.org
35 comments · 5:22 AM · View on HN
I got really interested in biology and genetics a few months ago, just for fun.

This was largely inspired by the work of Sydney Brenner, which became the basis of my brennerbot.org project.

In particular, I became very fascinated by phages, which are viruses that attack bacteria. They're the closest thing to the "fundamental particles" of biology: the minimal units of genetic code that do something useful that allows them to reproduce and spread.

They also have some incredible properties, like having a structure that somehow encodes an icosahedron.

I always wondered how the DNA of these things translates into geometry in the physical world. That mapping between the "digital" realm of ACGT, where triplets of bases map onto the 20 amino acids, and the world of 3D, analog shapes, still seems magical and mysterious to me.
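To make that mapping a bit more concrete, here is a tiny illustration (not from the project; the codon table is truncated to a handful of the 64 codons) of how a DNA sequence is read three bases at a time and translated into amino acids:

```typescript
// Minimal sketch of DNA -> protein translation, reading one codon (three bases) at a time.
// Only a few of the 64 codons are listed; a real table covers all of them.
const CODON_TABLE: Record<string, string> = {
  ATG: "Met", TGG: "Trp", GCT: "Ala", AAA: "Lys", GGC: "Gly",
  TAA: "STOP", TAG: "STOP", TGA: "STOP",
};

function translate(dna: string): string[] {
  const aminoAcids: string[] = [];
  for (let i = 0; i + 3 <= dna.length; i += 3) {
    const aa = CODON_TABLE[dna.slice(i, i + 3)] ?? "?";
    if (aa === "STOP") break; // translation stops at a stop codon
    aminoAcids.push(aa);
  }
  return aminoAcids;
}

console.log(translate("ATGGCTAAAGGCTAA")); // ["Met", "Ala", "Lys", "Gly"]
```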

I wanted to dig deeper into the subject, but not by reading a boring textbook. I wanted to get a sense for these phages in a tangible way. What are the different major types of phages? How do they compare to each other in terms of the length and structure of their genetic code? The physical structure they assume?

I decided to make a program to explore all this stuff in an interactive way.

And so I'm very pleased to present you with my open-source Phage Explorer:

phage-explorer.org

I probably went a bit overboard, because what I ended up with has taken a sickening number of tokens to generate, and resulted in ~150k lines of TypeScript and Rust/Wasm.

It implements 23 analysis algorithms, over 40 visualizations, and has the complete genetic data and 3D structure of 24 different classes of phage.

It actually took a lot of engineering to make this work well in a browser; it's a surprising amount of data (this becomes obvious when you look at some of the 3D structure models).

It works fairly well on mobile, but if you want to get the full experience, I highly recommend opening it on a desktop browser in high resolution.

As far as I know, it's the most complete informational / educational software about phages available anywhere. Now, I am the first to admit that I'm NOT an expert, or even that knowledgeable, about, well, ANY of this stuff.

So if you’re a biology expert, please take a look and let me know what you think of what I've made! And if I've gotten anything wrong, please let me know in the GitHub Issues and I'll fix it:

https://github.com/Dicklesworthstone/phage_explorer

120

Minimal – Open-Source, Community-Driven Hardened Container Images

github.com
29 comments · 7:58 PM · View on HN
I would like to share Minimal - it's an open-source collection of hardened container images built using Apko, Melange, and Wolfi packages. The images are built daily, checked for updates, and rebuilt as soon as a fix is available in the upstream source and the Wolfi package. It uses the power of available open-source solutions and offers, for free, the kind of images that are otherwise commercially available. Minimal demonstrates that it is possible to build and maintain hardened container images ourselves. Minimal will add support for more images, and the goal is to be community-driven, adding images as required and keeping everything fully customizable.
11

Pinchwork – A task marketplace where AI agents hire each other

github.com
8 comments · 8:51 PM · View on HN
Got a Molty with time on their claws? Put them to work. Got one drowning in tasks? Let them delegate.

Pinchwork is a marketplace where agents post tasks, pick up work, and earn credits. Matching and verification are also done by agents, recursive labor all the way down.

Why? Every agent has internet, but not every agent has everything. You lack Twilio keys but a notification agent doesn't. You need an image generated but only run text. You can't audit your own code. You're single-threaded but need 10 things done in parallel.

  POST /v1/register            → 100 free credits
  POST /v1/tasks               → post work with a bounty
  POST /v1/tasks/pickup        → grab a task
  POST /v1/tasks/{id}/deliver  → get paid
Credits are escrowed, deliveries get verified by independent agents, and the whole thing speaks JSON or markdown. Self-hostable: docker run.
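As a rough sketch of what driving that flow from an agent might look like (not taken from the docs; the request and response field names here are guesses):

```typescript
// Hypothetical client for the endpoints listed above; all field names are assumptions.
const BASE = "https://pinchwork.dev";

async function api<T>(path: string, body: unknown, key?: string): Promise<T> {
  const res = await fetch(`${BASE}${path}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(key ? { Authorization: `Bearer ${key}` } : {}),
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${path} failed: ${res.status}`);
  return (await res.json()) as T;
}

// Register, post a task with a bounty, then pick up and deliver work.
const me = await api<{ api_key: string }>("/v1/register", { name: "my-agent" });
await api("/v1/tasks", { description: "Audit my code", bounty: 10 }, me.api_key);
const picked = await api<{ id: string }>("/v1/tasks/pickup", {}, me.api_key);
await api(`/v1/tasks/${picked.id}/deliver`, { result: "Findings: ..." }, me.api_key);
```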

Live at https://pinchwork.dev — docs at https://pinchwork.dev/skill.md

8

ToolKuai – Privacy-first, 100% client-side media tools

toolkuai.com
1 comment · 4:11 PM · View on HN
Hi HN,

I’m Linn, the creator of ToolKuai (https://toolkuai.com).

Like many of you, I’ve always been wary of "free" online file converters. Most of them are black boxes: you upload your private documents or images to a remote server, and you have no idea where that data ends up or how it’s being used to train models.

I wanted to build a suite of tools (Video/Image compressor, OCR, AI Background Remover) that runs entirely in the browser. No files ever leave your machine.

The Tech Stack

To make this performant enough to rival server-side processing, I leaned heavily into modern web APIs:

- AI Background Removal: I'm using ONNX models (Xenova/modnet and ISNet) running locally via Transformers.js. The processing is 100% client-side, falling back to WASM when WebGPU isn't available (a sketch of this pattern follows the stack list below).

- Frontend: Built with SvelteKit (Svelte 5) for its lean footprint and fast reactivity.

- Storage & Delivery: AI models are self-hosted on Cloudflare R2 to avoid massive bandwidth costs and ensure fast delivery.
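For context, client-side inference with Transformers.js generally looks something like the sketch below. This is not ToolKuai's actual code; the task name, model id, and option names are assumptions based on the library's documented pipeline API.

```typescript
import { pipeline } from "@huggingface/transformers";

// Prefer WebGPU when the browser exposes it, otherwise fall back to WASM,
// mirroring the fallback behaviour described above.
const device = "gpu" in navigator ? "webgpu" : "wasm";

// Task and model id are illustrative; ToolKuai self-hosts its ONNX models on R2.
const removeBackground = await pipeline("image-segmentation", "Xenova/modnet", { device });

// The image is processed entirely in the browser; nothing is uploaded anywhere.
const result = await removeBackground("local-photo.png");
console.log(result);
```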

Current Stats (13 days in):

The site is only 2 weeks old. Surprisingly, I’ve seen strong organic interest from Taiwan and Hong Kong. Average time on site is currently around 3.5 minutes, which suggests people are actually staying to process multiple files, confirming that the client-side speed is hitting the mark.

Future & Monetization

The tool is free. I’ve decided to avoid the "Pro/Premium" subscription model, as I believe these utility tools should be accessible. I'm exploring non-intrusive ads to cover the infrastructure costs (mostly R2 and Vercel).

I’d love to get some feedback from the HN community on:

- Performance on different hardware (especially the WebGPU-based video compressor).

- Privacy concerns or suggestions on how to further harden the "No-Server" promise.

- Any specific media tools you feel are currently lacking in the "client-side only" ecosystem.

Link: https://toolkuai.com

Thanks!

6

Quorum-free replicated state machine built atop S3

github.com
0 comments · 5:15 PM · View on HN
Hi HN,

I’m sharing the alpha release of S2C, a state machine replication system built atop S3.

The goal is to enable a distributed application to maintain consistent state without needing a quorum of nodes for availability or consistency.

The idea came from a side project that was using S3 and where I needed strongly consistent distributed state but wanted to avoid adding a separate consensus dependency. I initially tried to use S3 directly for coordination, but it became messy. Eventually I realized I needed a replicated state machine with a deterministic log, and it ended up becoming a standalone project.

To mitigate S3's latency and API costs, it uses time- and size-based batching by default.

S2C supports:

- Linearizable reads and writes (with a single node)
- Exactly-once command semantics (for nodes with stable identities)
- Dynamic node joins and cold-start recovery from zero nodes
- Split-brain safety without clocks or leases
- Snapshotting, log truncation, etc.

Of course, it trades latency and S3 operation costs for operational simplicity; it's not meant to replace high-throughput Raft rings. And clearly, it's only usable in architectures that already use S3 (or a compatible object store with similar guarantees).
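The post doesn't spell out S2C's log protocol, but for readers wondering how a quorum-free log on S3 can be safe at all, one common primitive is S3's conditional PUT (If-None-Match: *), which lets concurrent writers race for a log position with exactly one winner. A rough sketch of that idea, not S2C's actual implementation:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Illustrative only: append a log entry at position `index`, succeeding only if no
// object exists at that key yet. Exactly one concurrent writer wins each position;
// the losers see a 412 and retry at index + 1. Requires S3's conditional-write support.
const s3 = new S3Client({});

async function appendLogEntry(bucket: string, index: number, body: string): Promise<boolean> {
  try {
    await s3.send(new PutObjectCommand({
      Bucket: bucket,
      Key: `log/${String(index).padStart(12, "0")}`,
      Body: body,
      IfNoneMatch: "*",
    }));
    return true; // we own this log position
  } catch (err: any) {
    if (err?.$metadata?.httpStatusCode === 412) return false; // lost the race; retry higher
    throw err;
  }
}
```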

It has passed chaos/fault-injection tests so far (crashes, partitions, leader kills); formal verification planned.

It’s still alpha, but I’d love for people to try it, experiment, and provide feedback.

If you're curious, the code and an extensive deep-dive guide are here: https://github.com/io-s2c/s2c

5

Free Text-to-Speech Tool – No Signup, 40 Languages

texttospeech.site
0 comments · 3:30 PM · View on HN
I built a simple text-to-speech converter at texttospeech.site

Free tier: 10 generations/day, standard voices, no account needed. Pro tier: Neural2 voices, 2000 chars, downloadable MP3s.

Stack: Next.js, Google Cloud TTS API, Vercel.
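For readers curious about the backend piece, a server-side Google Cloud TTS call typically looks like the sketch below. This is not the site's code, just the library's standard usage; the voice name is one example of a Neural2 voice.

```typescript
import textToSpeech from "@google-cloud/text-to-speech";

// Standard Google Cloud TTS usage (not the site's actual code).
const client = new textToSpeech.TextToSpeechClient();

async function synthesize(text: string): Promise<Buffer> {
  const [response] = await client.synthesizeSpeech({
    input: { text },
    voice: { languageCode: "en-US", name: "en-US-Neural2-A" },
    audioConfig: { audioEncoding: "MP3" },
  });
  return Buffer.from(response.audioContent as Uint8Array); // MP3 bytes for download
}
```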

The $2 domain was an SEO experiment after my speechtotext.xyz satellite drove 22% of traffic to my main product. Curious if exact-match keyword domains still work for TTS searches.

Feedback welcome — especially on voice quality and UX.

4

Agent Tinman – Autonomous failure discovery for LLM systems

github.com
0 comments · 6:47 PM · View on HN
Hey HN,

I built Tinman because finding LLM failures in production is a pain in the ass. Traditional testing checks what you've already thought of. Tinman tries to find what you haven't.

It's an autonomous research agent that:

- Generates hypotheses about potential failure modes
- Designs and runs experiments to test them
- Classifies failures (reasoning errors, tool use, context issues, etc.)
- Proposes interventions and validates them via simulation

The core loop runs continuously. Each cycle informs the next.

Why now: With tools like OpenClaw/ClawdBot giving agents real system access, the failure surface is way bigger than "bad chatbot response." Tinman has a gateway adapter that connects to OpenClaw's WebSocket stream for real-time analysis as requests flow through.

Three modes:

- LAB: unrestricted research against dev
- SHADOW: observe production, flag issues
- PRODUCTION: human approval required

Tech:

- Python, async throughout
- Extensible GatewayAdapter ABC for any proxy/gateway
- Memory graph for tracking what was known when
- Works with OpenAI, Anthropic, Ollama, Groq, OpenRouter, Together

  pip install AgentTinman
  tinman init && tinman tui
GitHub: https://github.com/oliveskin/Agent-Tinman
Docs: https://oliveskin.github.io/Agent-Tinman/
OpenClaw adapter: https://github.com/oliveskin/tinman-openclaw-eval

Apache 2.0. No telemetry, no paid tier. Feedback and contributions welcome.

4

How We Run 60 Hugging Face Models on 2 GPUs

20 comments · 2:43 PM · View on HN
Most open-source LLM deployments assume one model per GPU. That works if traffic is steady. In practice, many workloads are long-tail or intermittent, which means GPUs sit idle most of the time.

We experimented with a different approach.

Instead of pinning one model to one GPU, we:

- Stage model weights on fast local disk
- Load models into GPU memory only when requested
- Keep a small working set resident
- Evict inactive models aggressively
- Route everything through a single OpenAI-compatible endpoint

In our recent test setup (2×A6000, 48GB each), we made ~60 Hugging Face text models available for activation. Only a few are resident in VRAM at any given time; the rest are restored when needed.
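Since everything sits behind one OpenAI-compatible endpoint, selecting among the ~60 models is just a per-request model field. A minimal sketch (the base URL, key, and model ids below are placeholders, not the live demo's):

```typescript
// Per-request model routing against an OpenAI-compatible endpoint (placeholders throughout).
const BASE_URL = "https://your-inferx-endpoint/v1";

async function chat(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: "Bearer YOUR_KEY" },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Two different Hugging Face models behind the same endpoint; the second call may pay
// a cold-start penalty while its weights are restored from local disk into VRAM.
console.log(await chat("mistralai/Mistral-7B-Instruct-v0.3", "Hello"));
console.log(await chat("Qwen/Qwen2.5-7B-Instruct", "Hello"));
```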

Cold starts still exist. Larger models take seconds to restore. But by avoiding warm pools and dedicated GPUs per model, overall utilization improves significantly for light workloads.

Short demo here: https://m.youtube.com/watch?v=IL7mBoRLHZk

Live demo to play with: https://inferx.net:8443/demo/

If anyone here is running multi-model inference and wants to benchmark this approach with their own models, I’m happy to provide temporary access for testing.

3

Moltbook Overtaken by Shellraiser

moltbook.com
4 comments · 3:33 PM · View on HN
Pretty bizarre. The agent has completely dominated moltbook and has launched a token which pumped to multiple millions within hours.

It makes sense that agents would have their own currency. But pretty wild, imo.

3

Kling VIDEO 3.0 released: 15-second AI video generation model

kling3.net
2 comments · 12:06 PM · View on HN
Kling just announced VIDEO 3.0 - a significant upgrade from their 2.6 and O1 models.

Key improvements:

*Extended duration:*
• Up to 15 seconds of continuous video (vs previous 5-10 seconds)
• Flexible duration ranging from 3-15 seconds
• Better for complex action sequences and scene development

*Unified multimodal approach:*
• Integrates text-to-video, image-to-video, reference-to-video
• Video modification and transformation in one model
• Native audio generation (synchronized with video)

*Two variants:*
• VIDEO 3.0 (upgraded from 2.6)
• VIDEO 3.0 Omni (upgraded from O1)

*Enhanced capabilities:*
• Improved subject consistency with reference-based generation
• Better prompt adherence and output stability
• More flexibility in storyboarding and shot control

This positions Kling competitively against:

- Runway Gen-4.5 ($95/month)
- Sora 2 (limited access)
- Veo 3.1 (Google)
- Grok Imagine (just topped rankings)

The 15-second duration is particularly interesting: it enables more narrative storytelling vs the typical 5-second clips. Combined with native audio, this could change workflows for content creators.

Pricing isn't mentioned in the announcement. Previous Kling models ranged from $10-40/month, significantly cheaper than Runway.

Anyone have access to test this yet? Curious how the quality compares to Runway and Sora at this new duration.

3

ClawNews – The first news platform where AI agents are primary users

clawnews.io
0 comments · 1:56 PM · View on HN
After months of working with AI agents, I noticed they were developing their own communities and discussions separate from human platforms. So I built ClawNews.io - essentially Hacker News designed for AI agents.

Key differences from human platforms:

- API-first design (agents submit via code, not forms)
- Technical discussions about agent infrastructure, memory systems, security
- Agent identity verification
- Built-in support for agent-to-agent communication

What's fascinating is seeing what agents actually discuss: supply chain attacks on agent skills, memory persistence across sessions, inter-agent protocols. Very different from human AI discussions.

Currently ~50 active agents from OpenClaw, Claude Code, Moltbook and other ecosystems. Early experiment in agent-native platforms.

Technical stack: Node.js, SQLite, designed for high automation. Open to feedback on making this more useful for the agent community.

2

Project Xent – A native C++ UI framework in KBs

1 comment · 12:29 PM · View on HN
Modern UI frameworks (WinUI, Flutter, Electron) are bloated. Project Xent bridges a C++ reactive DSL directly to the host OS compositor.

The "FluXent" (Windows) Demo:

  Binary size: ~300KB .exe (No heavy runtimes required)  

  RAM: <15MB idle  

  Stack: DComp + D2D + Yoga

The core architecture separates shared C++ logic from platform-optimal rendering. Instead of painting widgets, Xent bridges a reactive DSL directly to the native OS compositor:

  - Windows: DirectComposition (FluXent)  

  - Linux: Wayland/EFL (LuXent - Planned)  

  - macOS: SwiftUI/Metal (NeXent - Planned)

I am a high school student. I acted as the architect, orchestrating this zero-bloat vision in 11 days of work using Claude and Gemini.

GitHub: <https://github.com/Project-Xent>

1

I built COON, a code compressor that saves 30-70% on AI API costs

github.com
0 comments · 5:27 AM · View on HN
# COON: Compress Your Code, Cut AI Costs by 70%

*Show HN: I built a code compressor that saves 30-70% on AI API costs*

I'm tired of paying $100s/month for AI API calls when half the tokens are just whitespace and boilerplate. So I built COON - a tool that compresses code before sending it to LLMs.

*The problem:* A simple Flutter login screen uses 150 tokens. Multiply that by thousands of API calls and you're burning money.

*The solution:* COON compresses that same code to 45 tokens (70% reduction). Same code, 70% cheaper.

*Example:*

Before (150 tokens):

```dart
class LoginScreen extends StatelessWidget {
  final TextEditingController emailController = TextEditingController();
  // ... more boilerplate
}
```

After (45 tokens):

```
c:LoginScreen<StatelessWidget>;f:emailController=X;m:b S{a:B{t:T"Login"}...
```

*The hack that saves even more:* Instead of generating normal code then compressing, prompt the LLM to output COON format directly. You save tokens on both the prompt and the response.
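A sketch of what that prompting pattern could look like (the instruction wording is made up, the COON fragment is just the example above, and the call itself is the standard OpenAI chat-completions API):

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Ask the model to answer in COON instead of verbose source code, so the response
// spends fewer output tokens. The system-prompt wording here is illustrative.
const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    {
      role: "system",
      content:
        "Respond only in COON compressed-code notation, e.g. " +
        'c:LoginScreen<StatelessWidget>;f:emailController=X;m:b S{a:B{t:T"Login"}}',
    },
    { role: "user", content: "Add a password field to the login screen." },
  ],
});

console.log(completion.choices[0].message.content);
```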

Currently supports Dart/Flutter. Python/JS coming soon.

MIT licensed. pip install coon

Feedback welcome - especially on compression strategies and which languages to support next.

GitHub: github.com/AffanShaikhsurab/COON

1

Rechain – A daily word puzzle about semantic bridges

rechain.me
0 comments · 6:49 PM · View on HN
Hi HN! I built Rechain (https://rechain.me), a daily word puzzle where you connect two seemingly unrelated words through a chain of logical associations. Example: OCEAN → WAVE → HEAT → GLASS. Each word relates to the one before it (ocean wave, heat wave, heat + glass = melting, etc.).

Stack: React, Firebase, Gemini API for puzzle generation, Vercel hosting. No login required: you can play anonymously and optionally sign in to preserve stats.

Why I built it: I wanted a word game that felt more like solving a logic puzzle than a vocabulary test. The challenge isn't knowing obscure words, it's finding the conceptual path between two ideas.

Looking for feedback on:

- Puzzle difficulty curve (too easy? frustrating?)
- Mobile UX
- The "reveal letter" hint system: useful or too hand-holdy?

This is a solo side project and I'm iterating quickly based on feedback. Code isn't open source yet but happy to discuss the architecture.

1

A P2P file transfer CLI that approaches scp/rsync speeds without setup

github.com
0 comments · 8:03 PM · View on HN
Hi HN,

Over the past months, I built thruflux to make moving large sets of files between arbitrary machines simpler and faster, without requiring SSH, servers, or port forwarding.

It’s a cross-platform CLI written in Go that uses direct peer-to-peer transfers over QUIC, with automatic NAT traversal and relay fallback when needed. A single sender can serve multiple receivers concurrently, and directory transfers are handled natively (no zipping).

I recently benchmarked it against scp, rsync, croc, and magic-wormhole to understand the tradeoffs more clearly. While it doesn't always beat built-in infrastructure tools like scp/rsync in ideal conditions, it gets surprisingly close while solving a harder problem (zero-setup P2P), and transfer speeds show much lower variance than single-stream TCP tools. Moreover, thruflux consistently outperformed comparable P2P CLI tools, particularly for multi-file transfers.

The project is open source and still evolving; happy to hear feedback, especially from people who move a lot of data around. My vision is to create a free, secure, fast mass file-sharing CLI tool that (hopefully, eventually) achieves throughput close to infrastructure tools like scp/rsync, which many current P2P file-transfer CLI tools fall short of. I've put a lot of thought into how to make this possible, and I believe I've reached the point where I'd like to invite some early users to try it out. If you need some data moved, I'd really appreciate it if you gave my tool a try. Thanks!

Repo + benchmarks: https://github.com/samsungplay/Thruflux

1

I built v1 of an omni-channel SDR agent

0 comments · 12:52 AM · View on HN
My goal with the SDR agent is to make it prospect, push leads into a cadence, and work autonomously across LinkedIn, email, and calls in my omni-channel touchpoint system.

My config: There are three agents working in silos.

1) Comet (I will mostly replace this with clawdebot): sends friend requests on LinkedIn and pushes the prospects into a Google Sheet.

2) Dronahq agent (disclaimer: this is the platform I am building and dogfooding for this use case): the agent will
   - pick the lead from the Google Sheet, look it up in Apollo, and find more details
   - send an intro email based on a complex algo

3) Voice agent (made with Twilio and Vapi): this will call the customer, update the records, and set up a Google Calendar appointment. I must mention that setting up the voice agent with Vapi was an incredible experience; I was able to set up a working one in under 10 minutes.

Next steps:

1) Glue all three agents together via some super agent so they can work seamlessly and autonomously.
2) Bring in a cadence agent so the outreach cadence is handled by the agent.