Daily Show HN

Show HN for November 11, 2025

51 items
111

Gametje – A casual online gaming platform

gametje.com
40 comments · 2:36 PM
Hi all, I’ve been working on this project for a while but haven't shared it properly on Hacker News.

What is it?

It is a casual gaming platform focused on simple multiplayer games that can be played in person with a central screen (like a TV) or remotely via video chat. You can also play on Android-based smart TVs via the app: https://play.google.com/store/apps/details?id=com.gametje (it was just released recently, so it could be buggy). It is also available directly in Discord: https://discord.com/discovery/applications/12153230008666071... as an embedded activity.

It is playable in 9 languages and doesn’t require any downloads. Most games revolve around creativity in some shape or form. They can be played by just about anyone whether or not you consider yourself a “gamer”. If you can text, you can play these games.

Why did I create it?

Some of you may see the resemblance to Jackbox games. I have been a huge fan of them for 10+ years and enjoyed playing their games a lot. However, I found their support for other languages a bit lacking. While living in the Netherlands, I have encountered quite a few non-native English speakers and wanted to help them have a similar experience. Jackbox also has some fragmentation issues between app stores. I own their games on PC and PS4 but I can’t share a “license” between them. They also come out with a pack every year with 5 games. You never know if the game(s) will be fun, or if you should try to buy a previous pack with the one killer party game in it.

I designed Gametje with these issues in mind. It is playable in multiple languages with more being added regularly (feel free to request one). You can play it from any device with a web browser. There is no need to install it via Steam or a game console. All games are available in one place with no “packs” to buy.

What’s up with the name?

I have been living in the Netherlands for some years and part of my original motivation stems from wanting to give my friends here a game to play in their native language. It's way easier to be witty/funny in your mother tongue after all! Because of that, I wanted to incorporate something Dutch into the site's name. The suffix ‘-tje’ is one of the diminutive endings in Dutch and is meant to soften a word or make it "smaller". Game + tje = Gametje, or a little game. I have been informed by native Dutch speakers that it should have been ‘Gamepje’ to be "correct" but I liked the way Gametje sounded better.

Where can I try it?

Go here: https://gametje.com/

You can test it out as a guest without signing up in order to get a feel for the games. Clicking into each game gives a short explanation and a small example of the gameplay. When creating a game room, you can choose to host via a central screen, host and play from a single device (like a phone) or cast the main screen to a Chromecast. There is also an Android TV app available that was just recently released: https://play.google.com/store/apps/details?id=com.gametje

After creating a game room, you can join from another browser window or device. You can also add AI players if you want to try it out on your own, although it is a lot more fun with real people. I also created a Discord server: https://discord.gg/7jrftHuHp9 where you can find other users to play with. If you sign up for an account, you can opt in as an alpha tester and see new games as they are developed. It'll also keep track of all your previous games and make sure not to duplicate content. You can also review previous games and relish your past victories.

What am I looking for?

I am interested in feedback about the whole concept and also the gameplay. Is it fun? What could be improved? Interested in helping out? Let me know!

Happy to share more technical details as well for those who are interested. You can also read a bit about the platform and games on my blog:

https://blog.gametje.com/

Thanks!

54

Tusk Drift – Open-source tool for automating API tests

github.com
17 comments · 2:18 PM
Hey HN, I'm Marcel from Tusk. We’re launching Tusk Drift, an open source tool that generates a full API test suite by recording and replaying live traffic.

How it works:

1. Records traces from live traffic (what gets captured)

2. Replays traces as API tests with mocked responses (how replay works)

3. Detects deviations between actual vs. expected output (what you get)

Unlike traditional mocking libraries, which require you to manually emulate how dependencies behave, Tusk Drift automatically records how these dependencies respond based on actual user behavior and maintains the recordings over time. We built this because of painful past experiences with brittle API test suites and regressions that were only caught in prod.

Our SDK instruments your Node service, similar to OpenTelemetry. It captures all inbound requests and outbound calls like database queries, HTTP requests, and auth token generation. When Drift is triggered, it replays the inbound API call while intercepting outbound requests and serving them from recorded data. Drift’s tests are therefore idempotent, side-effect free, and fast (typically <100 ms per test). Think of it as a unit test but for your API.
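The record/replay idea can be sketched in a few lines (a toy illustration only, not Tusk's actual SDK API; all names here are made up): in record mode, outbound calls run for real and their responses are captured; in replay mode, the captured response is served instead of hitting the dependency.

```typescript
type Mode = "record" | "replay";

// Simple keyed store for recorded responses.
class TraceStore {
  private traces = new Map<string, unknown>();
  save(key: string, value: unknown) { this.traces.set(key, value); }
  load(key: string) { return this.traces.get(key); }
}

// Wrap an outbound dependency: record its responses, or replay them.
function instrument<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => R,
  mode: Mode,
  store: TraceStore,
): (...args: A) => R {
  return (...args: A): R => {
    const key = `${name}:${JSON.stringify(args)}`;
    if (mode === "replay") return store.load(key) as R; // serve recorded data
    const result = fn(...args);                          // live call
    store.save(key, result);                             // capture for later
    return result;
  };
}

// Usage: record against live traffic, then replay in CI without the DB.
const store = new TraceStore();
const liveQuery = (sql: string) => `rows for ${sql}`;
const recorded = instrument("db.query", liveQuery, "record", store);
recorded("SELECT 1"); // captures the live response
const replayed = instrument("db.query", liveQuery, "replay", store);
// replayed("SELECT 1") now returns the recorded value without a live call
```

Real instrumentation hooks the module loader rather than wrapping functions by hand, but the record/replay split is the same.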

Our Cloud platform does the following automatically:

- Updates the test suite of recorded traces to maintain freshness

- Matches relevant Drift tests to your PR’s changes when running tests in CI

- Surfaces unintended deviations, does root cause analysis, and suggests code fixes

We’re excited to see this use case finally unlocked. The release of Claude Sonnet 4.5 and similar coding models has made it possible to go from failing test to root cause reliably. Also, accurate test matching and deviation classification mean that running a tool like this in CI no longer hurts DevEx (imagine the time otherwise spent reviewing test results).

Limitations:

- You can specify PII redaction rules but there is no default mode for this at the moment. I recommend first enabling Drift on dev/staging, adding transforms (https://docs.usetusk.ai/api-tests/pii-redaction/basic-concep...), and monitoring for a week before enabling on prod.

- Expect a 1-2% throughput overhead. Transforms add about 1.0% tail latency when a small number are registered; the impact scales linearly with the number of transforms.

- Currently only supports Node backends. Python SDK is coming next.

- Instrumentation limited to the following packages (more to come): https://github.com/Use-Tusk/drift-node-sdk?tab=readme-ov-fil...

Let me know if you have questions or feedback.

Demo repo: https://github.com/Use-Tusk/drift-node-demo

36

Data Formulator 0.5 – interactive AI agents for data visualization

data-formulator.ai
11 comments · 5:44 PM
Hi everyone, we are excited to share our new release of Data Formulator. Starting from a dataset, you can communicate with AI agents using UI + natural language to explore data and create visualizations that surface new insights. Here's a demo video of the experience: https://github.com/microsoft/data-formulator/releases/tag/0.....

This builds on our release a year ago (https://news.ycombinator.com/item?id=41907719). We spent the year exploring how to blend agent mode with interaction so you can more easily "vibe" with your data while staying in control. We don't think the future of data analysis is just "an agent does everything for you from a high-level prompt" --- you should still be able to drive the open-ended exploration; but we also don't want you to do everything step by step. So we built this "interactive agent mode" for data analysis, with some UI innovations.

Our new demo features:

* We want to let you import (almost) any data easily to get started — whether it's a screenshot of a web table, an unnormalized Excel table, a table in a chunk of text, a CSV file, or a table in a database, you should be able to load it into the tool with a little bit of AI assistance.

* We want you to easily choose between agent mode (more automation) and interactive mode (more fine-grained control) as you explore data. We designed an interface of "data threads": both your and the agents' explorations are organized as threads, so you can jump to any point and decide how to follow up or revise, using UI + NL instructions for fine-grained control.

* The results should be easily interpretable. Data Formulator now presents the "concept" behind the code generated by AI agents alongside the code, explanation, and data. Plus, you can easily compose a report based on your visualizations to share insights.

We are sharing the online demo at https://data-formulator.ai/ for you to try! If you want more involvement and customization, check out our source code at https://github.com/microsoft/data-formulator and let's build something together as a community!

34

Gerbil – an open source desktop app for running LLMs locally

github.com
9 comments · 10:03 AM
Gerbil is an open source app that I've been working on for the last couple of months. Development is now largely done and I'm unlikely to add any more major features. Instead I'm focusing on bug fixes, small QoL features, and dependency upgrades.

Under the hood it runs llama.cpp (via koboldcpp) backends and integrates easily with popular modern frontends like Open WebUI, SillyTavern, ComfyUI, StableUI (built-in), and KoboldAI Lite (built-in).

Why did I create this? I wanted an all-in-one solution for simple text and image-gen local LLMs. I got fed up with needing to manage multiple tools for the various LLM backends and frontends. In addition, as a Linux Wayland user I needed something that would work and look great on my system.

31

Creavi Macropad – Built a wireless macropad with a display

creavi.tech
7 comments · 6:58 PM
Hey HN,

We built a wireless, low-profile macropad with a display called the Creavi Macropad. It lasts at least a month on a single charge. We also put together a browser-based tool that lets you update macros in real time and even push OTA updates over BLE. Since we're software engineers by day, we had to figure out the hardware, mechanics, and industrial design as we went (and somehow make it all work together). This post covers the build process and the final result.

Hope you enjoy!

21

Linnix – eBPF observability that predicts failures before they happen

github.com
6 comments · 1:00 PM
I kept missing incidents until it was too late. By the time my monitoring alerted me, servers/nodes were already unrecoverable.

So I built Linnix. It watches your Linux systems at the kernel level using eBPF and tries to catch problems before they cascade into outages.

The idea is simple: instead of alerting you after your server runs out of memory, it notices when memory allocation patterns look weird and tells you "hey, this looks bad."

It uses a local LLM to spot patterns. Not trying to build AGI here - just pattern matching on process behavior. Turns out LLMs are actually pretty good at this.

Example: it flagged higher memory consumption over a short period and alerted me before it was too late. Turned out to be a memory leak that would've killed the process.
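A toy illustration of the "trend, not threshold" idea behind that alert (Linnix does the real detection with eBPF data plus an LLM classifier; this sketch just shows the concept): flag a process whose memory is steadily climbing, regardless of its absolute usage.

```typescript
// Fit a least-squares slope to recent memory samples (in MB) and flag
// the process when the climb rate exceeds a threshold. Sample values
// and the threshold are illustrative.
function risingTrend(samplesMb: number[], minSlopeMbPerSample = 5): boolean {
  const n = samplesMb.length;
  const meanX = (n - 1) / 2;
  const meanY = samplesMb.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - meanX) * (samplesMb[i] - meanY);
    den += (i - meanX) ** 2;
  }
  return num / den >= minSlopeMbPerSample;
}

risingTrend([100, 140, 185, 240, 290]); // steady climb → true (likely leak)
risingTrend([100, 102, 99, 101, 100]);  // flat usage → false
```

The point is that a leak is visible in the allocation pattern long before any absolute limit is hit.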

Quick start if you want to try it:

  docker pull ghcr.io/linnix-os/cognitod:latest
  docker-compose up -d

Setup takes about 5 minutes. Everything runs locally - your data doesn't leave your machine.

The main difference from tools like Prometheus: most monitoring parses /proc files. This uses eBPF to get data directly from the kernel. More accurate, way less overhead.

Built it in Rust using the Aya framework. No libbpf, no C - pure Rust all the way down. Makes the kernel interactions less scary.

Current state:

- Works on any Linux 5.8+ with BTF
- Monitors Docker/Kubernetes containers
- Exports to Prometheus
- Apache 2.0 license

Still rough around the edges. Actively working on it.

Would love to know:

- What kinds of failures do you wish you could catch earlier?
- Does this seem useful for your setup?

GitHub: https://github.com/linnix-os/linnix

Happy to answer questions about how it works.

9

Lexical – How I learned 3 languages in 3 years

lexical.app
12 comments · 6:13 PM
Hey HN, I'm Taylor and today I'm launching Lexical, a language learning app that actually works by teaching the 5,000 most common words in the target language.

Spaced repetition is how I've learned Spanish, French, and Italian over the last 3 years, but I couldn't find tools that cover the languages I want to learn next, so I built Lexical.
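For context, most spaced-repetition tools descend from the classic SM-2 scheduling algorithm; a minimal sketch of its update rule (Lexical's actual scheduler may well differ):

```typescript
// SM-2 update: given the current interval, repetition count, ease factor,
// and a 0-5 self-graded recall quality, compute the next review interval.
function sm2(intervalDays: number, repetitions: number, ease: number, grade: number) {
  if (grade < 3) return { intervalDays: 1, repetitions: 0, ease }; // forgot: restart
  const nextEase = Math.max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02));
  const nextInterval =
    repetitions === 0 ? 1 : repetitions === 1 ? 6 : Math.round(intervalDays * nextEase);
  return { intervalDays: nextInterval, repetitions: repetitions + 1, ease: nextEase };
}

// A word recalled perfectly three times in a row spaces out quickly:
let card = { intervalDays: 0, repetitions: 0, ease: 2.5 };
card = sm2(card.intervalDays, card.repetitions, card.ease, 5); // → 1 day
card = sm2(card.intervalDays, card.repetitions, card.ease, 5); // → 6 days
card = sm2(card.intervalDays, card.repetitions, card.ease, 5); // → 17 days
```

The exponential spacing is what makes 5,000 words tractable: mature words surface rarely, so daily reviews stay small.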

The link goes to my white paper describing the language learning philosophy behind Lexical; if you just want to try it out, you can click "Get Started" at the top of the page. To really experience Lexical at its best you'll need an account (sorry guys), but I've tried to make it as easy as possible.

I'm a big fan of Hacker News and am looking forward to your comments and hopefully even some beta users.

Here are some old HN posts that inspired my app and white paper.

Learning is remembering https://news.ycombinator.com/item?id=32982513

Kanji only requires 777 words https://news.ycombinator.com/item?id=20721736

Most of language learning is temporary heuristics https://news.ycombinator.com/item?id=44907531

Duolingo sucks https://news.ycombinator.com/item?id=45425061

7

Kerns, an AI interface to understand anything, better than NotebookLM

kerns.ai
4 comments · 5:45 AM
I'm building Kerns, an AI environment for research. You can seed a space with a topic and multiple source documents, and complete your research entirely in one space. There are interactive mind maps for exploration, a podcast mode, source readers with whole-document and chapter-level summaries that let you zoom into the source on demand, a powerful chat agent that lets you control context and cite references, and AI-assisted note-taking.

My goal is one place to research any topic that minimizes manual context engineering and jumping around between chat, notes, and readers.

Please let me know if you try it and have feedback!

6

Project AELLA – Open LLMs for structuring 100M research papers

aella.inference.net
2 comments · 6:38 PM
We're releasing Project AELLA - an open-science initiative to make scientific knowledge more accessible through AI-generated structured summaries of research papers.

Blog: https://inference.net/blog/project-aella

Visualizer: https://aella.inference.net

Models: https://huggingface.co/inference-net/Aella-Qwen3-14B, https://huggingface.co/inference-net/Aella-Nemotron-12B

Highlights:

- Released 100K research paper summaries in standardized JSON format with an interactive visualization

- Fine-tuned open models (Qwen 3 14B & Nemotron 12B) that match GPT-5/Claude 4.5 performance at 98% lower cost (~$100K vs $5M to process 100M papers)

- Built on distributed "idle compute" infrastructure - think SETI@Home for LLM workloads

Goal: Process ~100M papers total, then link to OpenAlex metadata and convert to copyright-respecting "Knowledge Units"

The models are open, evaluation framework is transparent, and we're making the summaries publicly available. This builds on Project Alexandria's legal/technical foundation for extracting factual knowledge while respecting copyright.

Technical deep-dive in the post covers our training pipeline, dual evaluation methods (LLM-as-judge + QA dataset), and economic comparison showing 50x cost reduction vs closed models.

Happy to answer questions about the training approach, evaluation methodology, or infrastructure!

4

HelloTriangle – Python-based online 3D modeling and sharing platform

hellotriangle.io
1 comment · 3:50 PM
Hi HN,

I’d like to introduce HelloTriangle, an online platform for Python-based 3D modelling, mesh generation and analysis. You can write a few lines of code, generate or import a mesh, manipulate it, analyse it, and instantly share it via a link.

I built HelloTriangle because I often ran into barriers when doing advanced, mesh-based 3D modeling: steep learning curves, sometimes tricky installations, juggling multiple tools, or using expensive software.

Another frustration was sharing results: 3D insights often get lost when all you can send are 2D screenshots, since others don’t have the software to view your models, or because you don't want to share the underlying files.

You can try it here: https://www.hellotriangle.io

I’d really love your feedback — especially on use cases or missing features you’d want for a smoother Python-based 3D workflow.

3

Mapnitor – Lightweight server monitoring for Linux and Windows

mapnitor.com
1 comment · 8:42 AM
Hey HN,

I’ve been building Mapnitor, a lightweight server monitoring platform focused on speed, simplicity, and no bloat. It’s designed for small teams, sysadmins, and hosting providers who just want quick visibility — without setting up Zabbix, Grafana, or a huge stack.

What it does:

Uptime & latency checks (Ping, TCP, HTTP)

Clean dashboard with per-node performance view

Lightweight agent (optional) — or just add targets directly

Instant history and analytics view

What it doesn’t do:

No complex dashboards, plugins, or configs.

No heavy setup or long onboarding — just add IPs and go.

Why I built it: I manage multiple servers across clients and hosting environments — I was tired of setting up full-stack monitoring tools for small cases. Mapnitor started as a personal script, then evolved into a minimal SaaS that can handle 20+ devices from different companies in seconds.

It’s still in early beta, so I’d love feedback from sysadmins, hosting engineers, or anyone dealing with uptime monitoring.

https://mapnitor.com

Would love to hear:

What features matter most for small infra setups?

Should I focus next on alerting (email/Slack) or integrations?

3

CyBox Security – Your Virtual Security Team for the AI Era

cybox.ai
3 comments · 1:03 PM
Hey HN,

I’ve been building CyBox Security, a platform that acts like a virtual security team for developers.

It combines multiple security scanners into one unified dashboard, covering SAST, SCA, IaC, and Secrets in a single workflow.

The idea came after seeing how many small dev teams and startups don’t have dedicated security staff, yet still need continuous scanning and clear remediation guidance. CyBox integrates with GitHub and stores only scan results (no source code).

It’s still early (MVP stage), but it already works: you can sign up, connect a repo, and see scan results in minutes.

Would love your feedback, use cases, and what matters most for you when using security tools.

Thanks

3

Pytest-httpdbg – a simple way to include HTTP traces in Allure reports

github.com
0 comments · 2:39 PM
Hi HN,

I recently updated my pytest plugin based on httpdbg to include HTTP traces directly in Allure reports. As with httpdbg, the idea is that there is nothing more to do than add one argument to your command line: --httpdbg-allure.

For example:

pytest examples/pytest_demo.py --alluredir=./allure-results --httpdbg-allure

For each test, all HTTP requests will be recorded and saved in the Allure report under a step named httpdbg.

You can check the README in the repository to see how it looks: https://github.com/cle-b/pytest-httpdbg?tab=readme-ov-file#c... (the compact mode is quite simple, but the full mode is identical to the httpdbg UI).

I hope this will be helpful for some of you :) Any feedback is welcome.

If you enjoy using httpdbg, don’t hesitate to check out the Git repository to discover new features — and give it a star to help make it more visible.

httpdbg: https://github.com/cle-b/httpdbg -- https://pypi.org/project/httpdbg/

pytest-httpdbg: https://github.com/cle-b/pytest-httpdbg -- https://pypi.org/project/pytest-httpdbg/

documentation: https://httpdbg.readthedocs.io/en/latest/test/

a blog post: https://medium.com/@cle-b/python-rest-api-tests-enhance-your...

3

OgBlocks – Animated UI Library for CSS Haters

ogblocks.dev
0 comments · 6:09 PM
Hey HN,

I'm Karan, a frontend developer who loves creating UIs. I've found that many people don't like CSS, but they still want their website to look beautiful and polished, and what better way to enhance a website than with animations?

Animations in plain CSS are tricky, which is why I leaned towards Motion, a powerful animation library for React. I built ogBlocks using React, Motion, and Tailwind CSS.

I built it for three reasons:

1. Anyone can integrate beautiful animated components into their website without any CSS skills

2. Developers will save time creating UI components

3. 100% customization (tweak it to fit your brand's style, or even build something on top of it)

I also included a bonus 107-page ebook with 100 tips on HTML, CSS, and JavaScript.

Check out the website and tell me what you think about it (it's cool, though).

Karan

2

Visual GraphQL Query Builder

gqlvis.hadid.dev
0 comments · 10:17 AM
A small tool to explore and build GraphQL queries visually.

I built it for fun, but I think it has real utility for navigating complex, deeply nested GraphQL schemas like GitHub's, which can contain hundreds of interconnected object and interface types.

Under the hood, it works by recursively introspecting the schema based on user selections.

Everything runs locally in the browser.

Source code: https://github.com/mhadidg/gqlvis

2

mDNS name resolution for Docker container names

npmjs.com
0 comments · 10:27 PM
I always wanted this: an easy way to resolve a Docker container by its name -- e.g., to reach web servers running in Docker containers on my dev machine. Of course, I could export ports from all these containers, try to keep them out of each other's hair on the host, and then use http://localhost:PORT. But why go through all that trouble? These containers already expose their respective ports on their own IPs (e.g., 172.24.0.5:8123), so all I need is a convenient way to find them.

mdns-docker lets you do, e.g., "ping my-container.docker.local", where it will find the IP of a running container whose name fuzzily matches the host. It works by running a local mDNS service that listens for `*.docker.local` requests, finds a running container whose name contains the requested host (here: "my-container"), gets that container's local IP address, and responds to the mDNS query with that IP.

Example: start a ClickHouse service with `docker run --rm --name myclicky clickhouse:25.7`, then open `http://myclick.docker.local:8123` to see the built-in dashboard -- no port mapping required!
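The fuzzy-matching step can be sketched roughly like this (illustrative only, not the package's actual code): strip the `.docker.local` suffix and find a running container whose name contains the requested host.

```typescript
// Match an mDNS query name against running container names.
// Container names below are made up for illustration.
function matchContainer(query: string, containers: string[]): string | undefined {
  const host = query.replace(/\.docker\.local\.?$/i, "").toLowerCase();
  return containers.find((name) => name.toLowerCase().includes(host));
}

matchContainer("myclick.docker.local", ["myclicky", "web"]); // → "myclicky"
```

The responder then looks up the matched container's IP and answers the query with it.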

If you haven't played with mDNS yet, you've been missing out on a lot of fun. It's easy to use and the possibilities for making your life easier are endless. It's also what Spotify and Chromecast use for local device discovery.

2

Privacy Experiment – Rewriting HTTPS, TLS, and TCP/IP Packet Headers

1 comment · 1:27 AM
The README: https://github.com/un-nf/404/blob/main/README.md

Or the LP: https://404-nf/carrd.co

Or read on...

In a small enough group of people, your TLS handshake can be enough to identify you as a unique client. Around six months ago, I began learning about client fingerprinting. I understood it was getting better and more precise, but didn't realize how easily a server could fingerprint a user - after all, you're just giving up all the cookies! Fingerprinting has become almost a necessity for the modern internet experience.

It was concerning to me that servers began using the very features that we rely on for security to identify and fingerprint clients.

- JS - Collection of your JS property values
- Font - Collection of your downloaded fonts
- JA3/4 - TLS cipher-suite FP
- JA4T - TCP packet header FP (TTL, MSS, Window Size/Scale, TSval/ecr, etc.)
- HTTPS - HTTPS header FP (UA, sec-ch, etc.)
- Much more...
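To make the TLS item concrete: the JA3 scheme hashes a handful of ClientHello fields into a single fingerprint, roughly like this (field values below are illustrative, not a real handshake):

```typescript
import { createHash } from "node:crypto";

// JA3 joins the ClientHello's TLS version, cipher suites, extensions,
// elliptic curves, and point formats into one string, then MD5-hashes it.
function ja3(
  version: number,
  ciphers: number[],
  extensions: number[],
  curves: number[],
  pointFormats: number[],
): string {
  const fingerprint = [
    version,
    ciphers.join("-"),
    extensions.join("-"),
    curves.join("-"),
    pointFormats.join("-"),
  ].join(",");
  return createHash("md5").update(fingerprint).digest("hex");
}

// Two clients with identical TLS stacks produce the same hash --
// which is exactly what makes it usable as a fingerprint.
ja3(771, [4865, 4866], [0, 23, 65281], [29, 23], [0]);
```

Spoofing JA3 therefore means controlling which ciphers and extensions your client offers, and in what order.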

So, I built a tool to give me control of my fingerprint at multiple layers:

- Localhost mitmproxy handles HTTPS headers and TLS cipher-suite negotiation
- eBPF + Linux TC rewrites TCP packet headers (TTL, window size, etc.)
- Coordinated spoofing ensures all layers present a consistent, chosen fingerprint (not yet cohesive)

Current Status: This is a proof-of-concept that successfully spoofs JA3/JA4 (TLS), JA4T (TCP), and HTTP fingerprints. It's rough around the edges and requires some Linux knowledge to set up.

When so many telemetry points are collected from a single SYN/ACK interaction, the precision with which a server can identify a unique client becomes concerning. Certain individuals and organizations have noticed this and produced resources to help people understand the amount of data they leave behind on the internet: amiunique.org, browserleaks.com, and coveryourtracks.eff.org, to name a few.

This is the bare bones, but it's a fight against server-side passive surveillance. Tools like nmap and p0f have been exploiting these signals for the last two decades, and almost no tooling has been developed to fight back - with the viable options (Burp Suite) not being marketed for privacy.

Even beyond this, with all values comprehensively and cohesively spoofed, SSO tokens can still follow us around and reveal our identity. When the SDKs of the largest companies like Google are so deeply ingrained into development flows, this is a no-go. So, this project will evolve, I'm looking to add some sort of headless/headful swarm that pollutes your SSO history - legal hurdles be damned.

I haven't shared this in a substantial way, and really just finished polishing up a prerelease, barely working version about a week ago. I am not a computer science or cybersecurity engineer, just someone with a passion for privacy who is okay with computers. This is a proof of concept for a larger tool. Due to the nature of TCP/IP packet headers, if this software ran on a distributed mesh network, privacy could be distributed on a mixnet like the one Nym Technologies is trying to achieve.

All of the pieces are there, they just haven't been put together in the right way. I think I can almost see the whole puzzle...

2

Mac menu bar app to monitor your internet connection

twitter.com
0 comments · 5:06 AM
I was on a United flight with an especially bad Viasat connection and kept having to recheck it while trying to get some work done. Since I had Ollama and gpt-oss:120b on the machine, I asked it to write a small app for me. Interactions with computers like this will, I think, soon be the norm: most software will be requested by end users and built just in time by AI, much like this app was.

2

Echos – A lightweight multi-agent AI system with pre-built agents

github.com
0 comments · 1:52 AM
Hi all, I'm Dante, and I'm building Echos, a platform that gives you pre-built AI agents so you can stop rebuilding orchestrators, database agents, and retry logic every time.

What it does:

- Pre-built agents: Database queries, API calls, web search, data analysis, code generation.

- YAML-based workflows: Define your agent architecture without rebuilding orchestrators.

- Built-in guardrails: SQL injection protection, SSRF blocking, table/domain whitelisting.

- Visual traces: See what happened, where it failed, and how much it cost.

Why I built it:

Every time I build a multi-agent system, I spend 2-3 weeks creating the same infrastructure: orchestrators that route tasks, database agents with SQL guardrails, retry logic, loop limiting, and cost tracking. Then another week of debugging when things break. I wanted to ship features, not plumbing.

Most frameworks are bulky and complex. You just want pre-built components you can compose like AWS services.

What Echos gives you:

- Ship faster: Pre-built agents you compose in YAML.

- Debug in minutes: Visual traces show exactly what happened, where it failed, and how much it cost.

- Prevent disasters: Built-in guardrails (SQL injection protection, SSRF blocking, loop limiting) catch 80% of dangerous operations.

- Control costs: Per-agent spending limits prevent runaway bills.

Try it: Clone https://github.com/treadiehq/echos or go to https://echoshq.com

  import { EchosRuntime } from '@echoshq/runtime';

  const runtime = new EchosRuntime({
    apiKey: process.env.ECHOS_API_KEY,
    apiUrl: process.env.ECHOS_API_URL,
    workflow: './workflow.yaml' // Define agents and routes in YAML
  });

  // Simple usage
  await runtime.run({
    task: 'Analyze customer churn',
    memory: { year: 2024, region: 'north' }
  });

Tech:

- NestJS for the backend API: Needed structured DI and middleware for auth.

- Postgres for trace storage: JSON columns for flexible span logs, native SQL performance.

- Resend for magic link authentication: Reliable email delivery without managing SMTP.

- Nuxt 3 for the dashboard: SSR for fast initial load, client-side interactivity for live traces.

- Railway for deployment: Fast deploys. First time trying it. My previous default is Digital Ocean.

What I learned:

- Time saved is the real value: Teams don't want another framework, they want to ship faster.

- Debugging is 50% of the work: Visual traces that show the full execution path are essential.

- Simple guardrails work: Blocking DELETE/DROP and unknown domains catches most disasters.

- YAML > Code for config: Non-engineers can edit workflows without touching code.

Looking for feedback:

- Does this solve a real problem for you?

- Which agents would you use most? database, API calls, web search, data analysis, or code generation?

- Is YAML configuration expressive enough, or do you need more programmatic control?

- What agents should we add next? (GitHub, Slack, email, cloud APIs?)

- Would you use this for autonomous agents, or just one-off tasks?

- Would this save you time on your next multi-agent project?

- What's missing that would make this immediately useful?

Thank you!

2

TabPref – The platform for professionals and businesses in hospitality

1 comment · 3:23 PM
Hey HN, Aaron here, founder of TabPref.com.

A few months ago, I shared TabPref as a social network for the people who keep the hospitality world running: bartenders, servers, barbacks, vendors, and venue owners.

Since then, it has evolved far beyond networking.

TabPref is now an all-in-one platform for the service and hospitality industry that connects professionals, establishments, and vendors while powering real business operations.

It includes:

• Multi-profile switching for professionals, establishments, vendors, and consumers

• Business Hub with scheduling, time-clock, vendor catalogs, and POS integrations such as Toast, Square, and 7shifts

• Tab Chat for real-time team communication

• Groups and Events for community and discovery

• Jobs to help professionals find shifts and venues fill roles quickly

We are still in public beta, and user feedback continues to shape what comes next.

I recently submitted TabPref to Y Combinator to accelerate growth and bring more value to the people who keep hospitality alive.

https://tabpref.com

If you have ever worked behind the bar, managed a venue, or supplied one, I would appreciate your thoughts and feedback.

Cheers,
Aaron Horne
Founder, TabPref

2

A tool to estimate how long your script/speech will take to read aloud

wordtotime.org
0 comments · 12:47 PM
Hi: I built WordToTime.org, a free browser tool that converts word counts, character counts, or page counts into an estimated time for speaking, reading, voice-over, or audiobook delivery (all processed locally in your browser).

Why I built it

When working on talks, podcasts, or video scripts, I kept asking: “How many minutes will it take to read this aloud?” Simple word-to-time tools exist, but they often ignore realistic pause modelling, language differences, or character- and page-based inputs. This tool addresses those gaps.

Key features

Paste text or input count → estimate duration (HH:MM:SS) for various delivery modes

Select mode (speech / silent reading / voice-over / audiobook) + WPM

Local processing only: your text never leaves your device; there is no upload or server storage (see the privacy policy)

Useful for content creators, educators, presenters, authors, voice-over artists
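The core conversion (count → HH:MM:SS at a chosen pace) can be sketched in a few lines. This is a minimal illustration, not the site's actual engine; the real tool layers on pause modelling and language presets, and the `pause_seconds` parameter here is a placeholder for that kind of adjustment.

```python
def estimate_duration(word_count: int, wpm: int = 150, pause_seconds: float = 0.0) -> str:
    """Convert a word count into an HH:MM:SS estimate at a given pace (words per minute).

    `pause_seconds` is an illustrative extra allowance (breaths, slide
    transitions); the site's real pause model may differ.
    """
    total_seconds = word_count / wpm * 60 + pause_seconds
    hours, rem = divmod(round(total_seconds), 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

# A 1,500-word script at a typical 150 wpm speaking pace:
print(estimate_duration(1500, wpm=150))  # 00:10:00
```

Silent reading, voice-over, and audiobook modes would simply swap in different WPM baselines (silent reading is usually faster than speech; audiobook narration slower).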

Why you might find it interesting

Lightweight front-end engine with configurable pacing and language presets

Bridges the gap between raw word-count and actual delivery time

An embeddable widget/API is planned for script tools and editor integrations

Would love to hear your feedback:

What delivery modes do you use (live talk vs video vs voice-over)?

What WPM or character-per-minute benchmarks do you personally track?

Any features you wish such timing tools had?

Thanks for checking it out!

2

I Built Streamlit for Java #

github.com
0 comments · 10:22 AM · View on HN
Hey HN, I've been working on a Java alternative to Streamlit for the last 4 months: Javelit.

I've explored interactive programming and malleable code principles for a long time, and I've always felt that this kind of tool was missing in the Java ecosystem. I think this will be useful to the Java community for presentations, experimentation, talks, learning, data viz apps and small back office apps.

You can try a few apps in your browser in the playground: https://javelit.io/playground (note this is not compatible with Safari).

Under the hood the project is a bit different from Streamlit, as it is possible to integrate Javelit apps into existing Java systems. There are two main ways to run and build Javelit: "standalone", as a CLI with a great hot-reload experience and single-click deploys to Railway, or "embedded", as a library in an existing system. The API for building custom components is also simpler: there is no difference between official components and user components.

If you are curious about my build process, I maintain a dev log here: https://world.hey.com/cdecatheu/

2

FakerFill – a browser extension that fills web forms with fake data #

fakerfill.com
0 comments · 9:19 PM · View on HN
I built a browser extension called FakerFill to help developers and QA testers test web forms faster. It automatically detects input fields and fills them with realistic fake data (names, emails, addresses, etc.).

The extension focuses on simplicity — there’s no data collection, and everything runs locally in the browser. You can also create and reuse custom templates, choosing which fields to fill and how they’re populated.

Right now, it’s available for Chrome, Edge, and Firefox. I’d love to get feedback from other developers — both on usability and ideas for future improvements.

1

Scout QA – AI that finds small bugs people often overlook #

scoutqa.ai
0 comments · 2:39 PM · View on HN
We’ve been working on Scout QA, a tool that uses AI to catch subtle bugs that usually slip past manual testing and reviews.

Instead of focusing on test automation, Scout analyzes your app’s UI, flows, and logs to surface tiny inconsistencies — things like broken states, unclear error messages, or small regressions that humans might ignore because “it still works.”

The goal isn’t to replace testers but to act like an extra pair of curious eyes — pointing out issues that affect real users’ experience.

We’re exploring questions like:

- How can an AI develop an intuition for “something feels off”?

- What kind of feedback loop helps it learn from real user bugs?

Would love feedback from developers, QA engineers, and indie makers: what small-but-painful bugs do you wish tools could catch automatically?

1

Reframe Labs – We Build, Scale and Grow Your Startup #

reframelabs.co
0 comments · 4:56 PM · View on HN
Hey HN,

I’m Arete, a designer and frontend developer who recently launched Reframe Labs, a software design and development agency that helps startups and scale-ups build high-quality products, from MVPs to enterprise-grade applications.

At Reframe Labs, we specialize in:

Product Design – clean, modern UI/UX that turns ideas into intuitive experiences

Web Development – fast, scalable apps built with modern tools

AI Automations – integrating AI tools to streamline workflows and enhance productivity

Growth & Optimization – helping products scale and convert better post-launch

I’ve spent the past eight years working with startups and agencies across multiple industries, helping them go from rough concepts to polished, revenue-generating products. Now, we’re opening up a few new client slots this month and would love to connect with founders who want to move fast and build something great.

https://reframelabs.co

I’d love your feedback on the site, messaging, and focus areas, and happy to answer any questions about our stack, design process, or client workflows.

1

Vocaware – AI Voice Agents (Shadcn, Twilio, OpenAI, Supabase, Next.js) #

vocaware.com
0 comments · 6:42 PM · View on HN
Staying fresh with the "startup stack"... so I made Vocaware.com: an AI voice agent that picks up the phone, talks naturally, takes notes, and connects to your workflows.

It’s built with shadcn/ui + Next.js, Twilio for telephony, OpenAI for reasoning + speech, Stripe for payments, and Supabase + Railway for the backend. Basically a modern full-stack AI app — start to finish — that actually works in production.

Right now you can:

* Get a phone number instantly

* Have an AI answer calls 24/7

* Build agent conversation workflows

* View transcripts, summaries, and analytics in real time

I built it in a few weeks. It’s live and works surprisingly well — but now I’m trying to figure out what’s next:

* Should I pick a vertical (e.g. real estate, service businesses, golf courses, ...)?

* Offer free trials?

* Make it open-source and developer-first?

* Go onsite with customers and implement it?

Would love any feedback, advice, or collaboration ideas from other builders.

— Alex (x.com/NicitaAlex)

1

We built a node to use Hugging Face Spaces without writing API code #

0 comments · 2:41 PM · View on HN
I'm one of the creators of Datastripes, a visual data analysis builder. We've been building out our platform and kept running into a recurring pain point: we wanted to quickly prototype workflows that involved AI models from the Hugging Face ecosystem, but it always meant dropping out of our visual editor to write a Python script or a JS client to call the Gradio API. It killed the "flow" of building.

So we decided to solve it by building a "Hugging Face Space" node directly into our tool. You give it the URL of any public, Gradio-based Hugging Face Space (e.g., user/space-name), and the node does the rest.
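For context, the hand-written client code the node replaces looks roughly like the sketch below, which uses the real `gradio_client` Python library. The Space name and endpoint are placeholders, and the helper that normalizes a huggingface.co URL into the `user/space-name` form is my own illustration, not part of Datastripes.

```python
from urllib.parse import urlparse


def space_id(ref: str) -> str:
    """Normalize a Space reference (either "user/space-name" or a full
    huggingface.co URL) into the "user/space-name" form gradio_client expects."""
    if ref.startswith(("http://", "https://")):
        # URLs look like https://huggingface.co/spaces/user/space-name
        parts = urlparse(ref).path.strip("/").split("/")
        if parts and parts[0] == "spaces":
            parts = parts[1:]
        return "/".join(parts[:2])
    return ref


if __name__ == "__main__":
    # Requires `pip install gradio_client` and network access;
    # "user/space-name" and "/predict" are placeholders.
    from gradio_client import Client

    client = Client(space_id("https://huggingface.co/spaces/user/space-name"))
    client.view_api()  # print the Space's callable endpoints
    # result = client.predict("hello", api_name="/predict")
```

The node does this plumbing (plus input/output wiring) for you, so a Space can be dropped into a visual workflow without leaving the editor.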

If you want to try it: https://datastripes.com