Daily Show HN


Show HN for October 21, 2025

39 posts
124

Katakate – Dozens of VMs per node for safe code exec #

github.com
53 comments · 3:22 PM · View on HN
I've built this to make it easy to host your own infra for lightweight VMs at large scale.

Intended for executing AI-generated code, for CI/CD runners, or for off-chain AI DApps. Mainly to avoid the dangers and mess of Docker-in-Docker.

Super easy to use with the CLI / Python SDK, and friendly to AI engineers who usually don't like to mess with VM orchestration and networking too much.
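
To give a feel for that workflow, here is a purely hypothetical sketch of what running untrusted code in a disposable VM could look like from Python. The package, class, and method names (`katakate`, `Sandbox`, `run`, `destroy`) are placeholders for illustration, not Katakate's actual SDK; see the repo for the real API.

    # Hypothetical sketch only: placeholder names, not Katakate's real SDK.
    # Illustrates the "create VM, run untrusted code, tear down" workflow.
    import katakate  # assumed package name

    def run_untrusted(snippet: str) -> str:
        # Each sandbox is backed by a lightweight VM rather than a nested
        # Docker container, avoiding the Docker-in-Docker dangers above.
        sandbox = katakate.Sandbox(image="python:3.12", cpus=1, memory_mb=512)
        try:
            result = sandbox.run(["python", "-c", snippet], timeout=30)
            return result.stdout
        finally:
            sandbox.destroy()  # VMs are disposable; nothing lingers on the host

    print(run_untrusted("print(2 + 2)"))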

Defense-in-depth philosophy.

Would love to get feedback (and contributors: there's a clear and exciting roadmap!). Thanks!

68

I'm rewriting a web server written in Rust for speed and ease of use #

ferron.sh
92 comments · 7:26 AM · View on HN
Hello! I got quite a lot of feedback on a web server I'm building, so I'm rewriting the server to be faster and easier to use.

I (and maybe some other contributors?) have optimized the web server's performance, especially for static file serving and reverse proxying (the latter was optimized very recently).

I also picked a different configuration format and specification, which I believe is easier to write.

Automatic TLS is also enabled by default out of the box; you don't even need to enable it manually, as you did in the original server I was building.

Yesterday I released the first release candidate of the rewrite. I'm very excited about this. I've even seen some people serving websites with the rewritten server while it was still in beta.

Any feedback is welcome!

30

We tried to build a job board that isn't awful #

teeming.ai
51 comments · 3:36 PM · View on HN
Hi HN,

We’re definitely not the first to realise there’s something seriously wrong with how hiring and job-seeking works today. Zero-cost communication and LLMs have created so much noise that good candidates can’t get heard, and it becomes all too tempting to game the system with keywords and prompt-hacking.

In fact we discovered that 70% of early stage AI startups don't post their jobs on LinkedIn. Instead, many founders hire exclusively within their network, which works at the start but doesn’t scale.

We thought a lot about this problem, and pivoted through a few ideas including an AI voice agent recruiter. We even spent some time trying to be conventional tech recruiters to better understand the problem space.

And in the end we built...a job board.

But we think there are a few things that make ours different:

- We decided not to put barriers between the user and the data. You can search, filter, or browse however you like from the minute you sign up. Zero onboarding.

- We wanted to nail one niche, so we focused on surfacing opportunities at early-stage AI companies (over 30,000 jobs at 24,000 companies)

- You can navigate it using keyboard shortcuts!

- We built a voice agent, Nell, who conducts a technical recruiter call with you through your browser and immediately finds matches, the way a well-connected friend who knows you well would

- When you tell us you’re interested in a role, we make a best effort to connect you to founders directly, along with your profile, so no cover letters, no pointless forms

- We enriched the jobs data with investor-grade intelligence: you can look at the same data VCs use to decide whether a startup is worth joining or not

Give it a try and let us know what you think: https://teeming.ai

23

Django Keel – 10 Years of Django Best Practices in One Template #

github.com
27 comments · 1:00 PM · View on HN
After a decade of shipping Django to production, I got tired of solving the same setup problems on every new project.

Environment-first settings. Sensible auth defaults. Structured logging. CI from day zero. Pre-commit hooks. Docker. Security hardening. Every project meant two days of boilerplate before writing business logic.

So I built Django Keel: a production-ready Django starter that eliminates the yak-shaving. GitHub: https://github.com/CuriousLearner/django-keel

*What you get*:

- 12-factor config with environment-based secrets (see the sketch below)
- Production-hardened security defaults
- Pre-wired linting, formatting, testing, pre-commit hooks
- CI workflow ready to go
- Clear project structure that scales
- Documentation with real trade-offs explained
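
For a flavor of the environment-first pattern, here is a minimal settings sketch using django-environ; it shows the general 12-factor approach rather than Django Keel's exact settings layout, which may differ.

    # settings.py -- generic sketch of environment-first, 12-factor Django
    # settings with django-environ; Django Keel's actual layout may differ.
    import environ

    env = environ.Env(
        DEBUG=(bool, False),  # safe default: never debug in production
    )
    environ.Env.read_env()  # load a local .env file during development

    SECRET_KEY = env("SECRET_KEY")  # fails loudly if the variable is missing
    DEBUG = env("DEBUG")
    ALLOWED_HOSTS = env.list("ALLOWED_HOSTS", default=[])

    # e.g. DATABASE_URL=postgres://user:pass@host:5432/dbname
    DATABASES = {"default": env.db()}

    # Security hardening toggled on outside of local development
    if not DEBUG:
        SECURE_SSL_REDIRECT = True
        SESSION_COOKIE_SECURE = True
        CSRF_COOKIE_SECURE = True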

*Background*:

I maintained a popular cookiecutter template for years. Django Keel is what that should've been from the start—battle-tested patterns without the accumulated cruft.

*Who it's for*:

Teams and solo builders shipping Django to production who want a strong baseline without tech debt. Feedback welcome on what works, what doesn't, and what's missing. Issues and PRs appreciated.

19

Clink – Bring your own CLI Agents, Ship instantly #

clink.new
20 comments · 3:32 PM · View on HN
Clink lets you use the coding agents you already pay for (Claude Code, Codex CLI, Gemini CLI, Z.ai GLM) to build → live-preview → ship apps in an isolated container.

No token purchases, no extra cost for coding. Just link your existing Claude/OpenAI/Gemini account and start building and deploying instantly.

Why we built this:

Claude Code is our go-to for coding, but it lacked preview + deploy capabilities. We didn't want to pay Lovable again just for that.

Different agents excel at different tasks - Claude Code for versatility, Codex for complex work, GLM for speed. We needed one platform to leverage them all.

CLI agents offer more freedom than traditional web builders. We wanted to unlock their full potential with proper dev tooling.

What it does:

• Prompt → Build → Live → Deploy - The fastest path from idea to live website. Deploy for free.

• BYO Subscription - Use your existing plans efficiently (Claude Code $20 = 10x Lovable $25 usage, GLM $3 = 3x Claude Code $20)

DEV Mode (Beta):

• Multi-stack support - Build with Node, Python, Go, Rust and deploy containers to public URLs instantly

• Repo imports - Upgrade and deploy your existing projects across any stack

Links:

• Clink: https://clink.new

• OSS origin (Claudable, ~2.8k): https://github.com/opactorai/Claudable

We'd love any feedback, bug reports, or stack requests - we iterate fast and read every comment.

14

Docker/Model-Runner – Join Us in Revitalizing the DMR Community #

github.com
2 comments · 2:54 PM · View on HN
Hey Hacker News,

We're the maintainers of docker/model-runner and wanted to share some major updates we're excited about.

Link: https://github.com/docker/model-runner

We are revitalizing the community:

https://www.docker.com/blog/revitalizing-model-runner-commun...

At its core, model-runner is a simple, backend-agnostic tool for downloading and running local large language models. Think of it as a consistent interface to interact with different model backends. One of our main backends is llama.cpp, and we make it a point to contribute any improvements we make back upstream to their project. It also allows people to transport models via OCI registries like Docker Hub. Docker Hub hosts our curated local AI model collection, packaged as OCI Artifacts and ready to run. You can easily download, share, and upload models on Docker Hub, making it a central hub for both containerized applications and the next wave of generative AI.

We've been working hard on a few things recently:

- Vulkan and AMD Support: We've just merged support for Vulkan, which opens up local inference to a much wider range of GPUs, especially from AMD.

- Contributor Experience: We refactored the project into a monorepo. The main goal was to make the architecture clearer and dramatically lower the barrier for new contributors to get involved and understand the codebase.

- It's Fully Open Source: We know that a project from Docker might raise questions about its openness. To be clear, this is a 100% open-source, Apache 2.0 licensed project. We want to build a community around it and welcome all contributions, from documentation fixes to new model backends.

- DGX Spark day-0 support: we've got it!

Our goal is to grow the community. We'll be here all day to answer any questions you have. We'd love for you to check it out, give us a star if you like it, and let us know what you think.

Thanks!

13

Apicat – A Lightweight Offline Postman Alternative #

apicat.com
4 comments · 1:22 PM · View on HN
Apicat is an offline Postman alternative that stores your .http files locally. It's Git-friendly, open source, and highly compatible with Postman, and it's built for developers who need a reliable, free local API testing tool.
11

W++ – Garbage-Collected Threads #

6 comments · 2:00 PM · View on HN
Hey HN! I’ve been working on something that might change how we think about multithreading.

You might remember W++ — a chaotic scripting language I originally built on .NET. Well, I’ve rewritten it from scratch in Rust + LLVM, and accidentally invented something new: a garbage collector for threads.

Instead of leaving OS threads to manual management, W++ treats them as heap objects. They’re reference-counted, swept, and safely cleaned up — just like any other GC value.

Highlights:

• Threads are managed with `Arc` + `Weak` and collected by a background daemon
• `GcMutex` auto-unlocks if its owning thread dies
• Thread ancestry tracking prevents recursive spawns
• A background GC thread periodically joins finished threads
• All compiled to native LLVM IR — no VM required

The result? No zombie threads, no deadlocks that outlive their owner, and no manual joins.
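
For intuition only, here is a rough Python analogue of the idea: thread handles are tracked via weak references and a background reaper joins whatever has finished. This is not the W++ runtime (which does this in Rust with `Arc`/`Weak` and LLVM-compiled code); it just sketches the concept.

    # Conceptual sketch of "garbage-collected threads": handles are tracked
    # weakly and a daemon periodically joins any thread that has finished.
    import threading
    import time
    import weakref

    _registry = []  # weak references to managed thread handles
    _registry_lock = threading.Lock()

    class ManagedThread:
        def __init__(self, target, *args):
            self._thread = threading.Thread(target=target, args=args)
            with _registry_lock:
                _registry.append(weakref.ref(self))
            self._thread.start()

        def finished(self):
            return not self._thread.is_alive()

        def join(self):
            self._thread.join()

    def _reaper():
        # Background "GC" pass: join finished threads, drop dead references.
        while True:
            time.sleep(0.5)
            with _registry_lock:
                live = []
                for ref in _registry:
                    handle = ref()
                    if handle is None:
                        continue  # the handle itself was collected
                    if handle.finished():
                        handle.join()  # reclaim the OS thread
                    else:
                        live.append(ref)
                _registry[:] = live

    threading.Thread(target=_reaper, daemon=True).start()

    if __name__ == "__main__":
        ManagedThread(lambda: print("work done"))
        time.sleep(1)  # give the reaper a chance to collect the thread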

It’s experimental, not perfect — but it works. If you’ve built runtimes or GCs before, I’d love your thoughts.

GitHub: https://github.com/sinisterMage/WPlusPlus

Feedback, critique, or "you're insane but I love it" comments welcome!

8

LunaRoute – a high-performance local proxy for AI coding assistants #

github.com
1 comment · 11:23 PM · View on HN
LunaRoute is a high-performance local proxy for AI coding assistants like Claude Code, OpenAI Codex CLI, and OpenCode. Get complete visibility into every LLM interaction with zero-overhead passthrough, comprehensive session recording, and powerful debugging capabilities.

- See Everything Your AI Does: full logs (JSONL) and session summaries, including tokens used (input/output) as well as tool usage and success rates.

- Privacy & Compliance Built In: redact or tokenize any sensitive information (regex-based).

- Speaks OpenAI and Anthropic dialects, so you can route (and translate) between models and providers when needed.

- High performance: passthrough adds 0.1-0.2 ms of latency. Logging and summarization are offloaded to a secondary thread to keep throughput high.
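
To illustrate the regex-based redaction idea in general terms (this is a generic sketch, not LunaRoute's actual rules or configuration):

    # Generic sketch of regex-based redaction/tokenization of prompt text
    # before it is logged; LunaRoute's real patterns and config are its own.
    import hashlib
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
    }

    def tokenize(text: str) -> str:
        # Replace each match with a stable token so logs stay searchable
        # without exposing the underlying value.
        for label, pattern in PATTERNS.items():
            def repl(match, label=label):
                digest = hashlib.sha256(match.group(0).encode()).hexdigest()[:8]
                return f"<{label}:{digest}>"
            text = pattern.sub(repl, text)
        return text

    print(tokenize("Contact dev@example.com, key AKIAABCDEFGHIJKLMNOP"))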

Feedback is always appreciated as well as stars on the repo :)

7

I built TicketData – Ticket price trends and data for Sports & Concerts #

ticketdata.com
5 comments · 3:26 PM · View on HN
14 years ago, I launched my original version of TicketData via a Show HN. While the public site didn't get traction, the data proved valuable to those in the ticketing industry, and it became a B2B data product that's kept me busy ever since. I'm now relaunching the front-end site as a totally free and public tool for anyone to make smarter decisions when buying tickets for live events.

The re-launched site shows live resale price charts (using data from sites like StubHub, Vivid Seats, SeatGeek, etc) that update as often as every few minutes. You can see price history for the cheapest tickets ("Get In Price"), or for pre-defined zones (like Floor seats), or you can set up your own custom zone across any sections or rows (for example "Floor Center, Rows 1-10"). You can also set alerts for when prices go above or below whatever threshold you set.

Many of the larger events also include a forecast of where prices will go next. Those are based on years of historical data combined with the live data and other inputs (like sales data and a whole bunch of other variables…opponent, day of week, venue capacity, etc) passed through a model built in XGBoost. I'm slowly rolling out the forecast feature to more events as the models get refined and I'm confident in the accuracy for different event types/genres.
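
For readers curious what that kind of model looks like in code, here is a bare-bones sketch of fitting a gradient-boosted price regressor on a few toy event features with XGBoost. The real TicketData features, data, and tuning are of course far more involved; this only shows the shape of the approach.

    # Minimal XGBoost regression sketch on synthetic event features;
    # illustrative only -- not TicketData's actual model or feature set.
    import numpy as np
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    n = 500

    # Toy features: days until event, day of week, venue capacity, listings
    X = np.column_stack([
        rng.integers(0, 120, n),      # days_out
        rng.integers(0, 7, n),        # day_of_week
        rng.integers(500, 80000, n),  # venue_capacity
        rng.integers(10, 5000, n),    # active_listings
    ])
    # Toy target: prices drift upward as the event approaches, plus noise
    y = 200 - 0.8 * X[:, 0] + rng.normal(0, 15, n)

    model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(X[:400], y[:400])

    preds = model.predict(X[400:])
    print("mean abs error:", float(np.mean(np.abs(preds - y[400:]))))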

In addition to all the data for upcoming events, there are currently about 2.5 years of historical data live on the site. If there is sustained interest, I'll work on getting data all the way back to 2011 loaded in.

Here are a few examples:

[Upcoming Event] Paul McCartney next week: https://www.ticketdata.com/events/12435572

[Past Event] Kansas City Chiefs vs. Las Vegas Raiders (Monday Night) https://www.ticketdata.com/events/1174976

All 8 of Oasis's North American concerts this summer: https://www.ticketdata.com/events/compare?ids=1006457%2C1006...

Or to see your cheapest local shows happening this weekend, click the "Local" + "This Weekend" filters on the homepage: https://www.ticketdata.com/

Feedback appreciated!

7

GoSMig – minimal, type-safe SQL migrations written in Go #

github.com
0 comments · 9:35 PM · View on HN
I built GoSMig for my own projects and open-sourced it in case it helps others. It’s a tiny generic library (no external deps except golang.org/x/term) for writing SQL migrations in Go with compile-time checks.

It supports transactional and non-transactional migrations, rollback, status, version, and a small CLI handler so you can ship your own migration binary.

Why another migrator?

- Minimal API, no DSL or file layout to learn

- Type-safe via Go generics

- Works with database/sql and sqlx out of the box

- Should work with any db library (or wrapper) that implements some generic interfaces (see the "Core Types" section here https://github.com/padurean/gosmig?tab=readme-ov-file#core-t...)

- Tested with PostgreSQL, should work with any SQL RDBMS (MySQL, SQLite, MS SQL Server, ...)

Repo: https://github.com/padurean/gosmig

Docs & examples: README + examples branch https://github.com/padurean/gosmig/tree/examples

Would love feedback: ergonomics, missing guardrails, API rough edges, real-world gotchas, etc.

5

MTOR – A free, local-first PWA to automate workout progression #

mtor.club
1 comment · 6:34 PM · View on HN
Hi HN,

My motivation for this came from frustration with existing workout trackers. Most felt clunky, hid core features like performance graphs behind a paywall, or forced a native app download. A few people close to me who take their training seriously shared the same sentiment, so I decided to build my own.

I'm working on mTOR, a free, science-based workout tracker I built to automate progressive overload. It's a local-first PWA that works completely offline, syncs encrypted between your devices using passwordless passkeys, and allows for plan sharing via a simple link.

The core idea is to make progression easier to track and follow. After a workout, it analyzes your performance (weight, reps, and RIR), highlights new personal records (PRs), and generates specific targets for your next session. It also reviews your entire program to provide scientific analysis on weekly volume, frequency, and recovery for each muscle group. This gets displayed visually on an anatomy model to help you learn which muscles are involved, and you can track your gains over time with historical performance charts for each exercise.
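
As a toy illustration of what "generates specific targets for your next session" can mean, here is a simple double-progression heuristic in Python (add reps within a range, then add weight). mTOR's actual progression logic isn't published in this post and may well differ; the sketch only shows the general idea.

    # Toy double-progression heuristic deriving a next-session target from
    # weight, reps, and RIR. Illustrative only -- not mTOR's algorithm.
    def next_target(weight, reps, rir, rep_range=(10, 15), target_rir=2,
                    weight_step=2.5):
        low, high = rep_range
        if reps >= high and rir >= target_rir:
            # Top of the rep range with reps in reserve: add weight, reset reps.
            return {"weight": weight + weight_step, "reps": low}
        if rir > target_rir:
            # Plenty left in the tank: aim for one more rep at the same weight.
            return {"weight": weight, "reps": min(reps + 1, high)}
        # Hard set (at or below the target RIR): repeat and consolidate.
        return {"weight": weight, "reps": reps}

    print(next_target(weight=60.0, reps=15, rir=3))  # -> add weight
    print(next_target(weight=60.0, reps=12, rir=3))  # -> add a rep
    print(next_target(weight=60.0, reps=11, rir=1))  # -> repeat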

During a workout, you get a total session timer, an automatic rest timer, and can see your performance from the last session for a clear target to beat. It automatically advances to the next incomplete exercise, and when you need to swap an exercise, it provides context-aware alternatives targeting the same muscles.

It's also deeply customizable:

* The UI has a dark theme, supports multiple languages (English, Spanish, German), lets you adjust the UI scale, and toggle the visibility of detailed muscle names, exercise types, historical performance badges, and a full history card.

* You can set global defaults for weight units (kg/lbs), rest times, and plan targets, or enable/disable metrics like Reps in Reserve (RIR) and estimated 1-Rep Max. The exercise library can be filtered by your available equipment, you can create your own custom exercises with global notes, and there's a built-in weight plate calculator.

* The progression system lets you define default rep ranges and RIR targets, or create specific overrides for different lifts (e.g., a 3-5 rep range for strength, 10-15 for accessories).

* Editing is flexible: you can drag-and-drop to reorder days, exercises, and sets, duplicate workout days, track unilateral exercises (left/right side), and enter data with a quick wheel picker.

I'll be here all day to answer questions. I'm also thinking about making the project open-source down the line and would be curious to hear any thoughts on that. Thanks for checking it out!

4

RunOS – Run Your Own Cloud on Your Own Infra #

0 comments · 8:08 PM · View on HN
RunOS lets you run your application stack on any infrastructure - cloud VMs, bare metal, or on-premises - without vendor lock-in.

What it does:

- One-click deployment of 20+ production-ready services (PostgreSQL, Kafka, MinIO, etc.) with security hardening, backups, and HA by default.

- Git-based app deployment with automatic Docker builds, SSL certificates, and Kubernetes orchestration.

- Infrastructure portability - switch between providers without code changes.

- Automatic service discovery and integration between your apps and services.

Built on Kubernetes but hides the complexity. You get container orchestration benefits without the configuration overhead.

Looking for feedback on:

- Platform UX and onboarding experience
- Which VPS providers to prioritize
- Which services to add next

Try it: https://runos.com

4

OpenJobHub – A Free and Open Job Board #

github.com
0 comments · 5:33 PM · View on HN
Hey everyone,

I recently built OpenJobHub, a free and open source job board for engineers.

My goal is simple: in these tough times, I hope more people can find good jobs, and that job information can be more open and transparent.

Feel free to share it with any recruiters, HRs, or friends who are hiring.

Let's build a healthier job ecosystem together.

GitHub: https://github.com/junminhong/jobs

Website: https://jobs.wowkit.net

3

Realizing Karpathy's Prediction for Natural Language Programming #

2 comments · 3:45 PM · View on HN
Hi everyone,

Andrej Karpathy posted in early 2023 (https://x.com/karpathy/status/1617979122625712128):

> "The hottest new programming language is English"

I have worked tirelessly for years to realize this vision. I built a Natural Language Programming stack for building AI agents. I think it is the first true Software 3.0 stack.

The core idea: use LLMs as CPUs! You can finally step-debug through your prompts and get reliable, verifiable execution. The stack includes a new language, a compiler, and developer tooling such as a VS Code extension.

Programs are written as Markdown. H1 headings are agents; H2 headings are playbooks, either natural-language playbooks (i.e., functions) or Python playbooks. All playbooks in an agent run on the same call stack, and natural-language and Python playbooks can call each other.

Quick intro video: https://www.youtube.com/watch?v=ZX2L453km6s

Github: https://github.com/playbooks-ai/playbooks (MIT license)

Documentation: https://playbooks-ai.github.io/playbooks-docs/getting-starte...

Project website: runplaybooks.ai

Example Playbooks program -

    # Country facts agent
    This agent prints interesting facts about nearby countries

    ## Main
    ### Triggers
    - At the beginning
    ### Steps
    - Ask user what $country they are from
    - If user did not provide a country, engage in a conversation and gently nudge them to provide a country
    - List 5 $countries near $country
    - Tell the user the nearby $countries
    - Inform the user that you will now tell them some interesting facts about each of the countries
    - process_countries($countries)
    - End program

    ```python
    from typing import List

    @playbook
    async def process_countries(countries: List[str]):
        for country in countries:
            # Calls the natural language playbook 'GetCountryFact' for each country
            fact = await GetCountryFact(country)
            await Say("user", f"{country}: {fact}")
    ```

    ## GetCountryFact($country)
    ### Steps
    - Return an unusual historical fact about $country

There are a bunch of very interesting capabilities. A quick sample:

- "Queue calls to Extract table of contents for each candidate file" - Effortless calling MCP tools, multi-threading, artifact management, context management

- "Ask Accountant what the tax rate would be" is how you communicate with other agents

- You can mix procedural natural-language playbooks, ReAct playbooks, raw prompt playbooks, Python playbooks, and external playbooks like MCP tools seamlessly on the same call stack

- "Have a meeting with Chef, Marketing expert and the user to design a new menu" is how you can spawn multi-agent workflows, where each agent follows their own playbook for the meeting

- Coming soon: Observer agents (agents observing other agents - automated memory storage, verify/certify execution, steer observed agents), dynamic playbook generation for procedural memory, etc.

I hope this changes how we build AI agents going forward for the better. Looking forward to discussion! I'll be in the comments.

3

FocusStream – Learn from YouTube Without the Distractions #

0 comments · 9:11 AM · View on HN
I built FocusStream because I was frustrated with how hard it is to stay focused when trying to learn from YouTube.

You go there to watch one tutorial and twenty minutes later, you're deep into random recommendations that have nothing to do with what you came for.

FocusStream solves that. It strips away the noise and lets you use YouTube intentionally:

- You enter a topic you want to learn.
- It fetches only relevant educational videos.
- No autoplay, no sidebar recommendations, no algorithmic rabbit holes.

The goal is simple: help people actually learn from YouTube without getting pulled away by distractions.

It's free, minimal, and runs directly in the browser.

Try it: https://focusstream.media

For those who prefer to watch a video, here's the demo: https://drive.google.com/file/d/1fCvOJ6kRs9jn7O_hIGgP6wBuq4f...

Would love your feedback, especially from anyone who's struggled to stay focused while learning online.

3

FastQR – A Fast QRCode Generator Supporting Batch Processing #

2 comments · 3:59 PM · View on HN
I'd like to share FastQR (https://github.com/tranhuucanh/fastqr), a high-performance QR code generator written in C++. This is my first open-source project, and I'm excited (and a bit nervous!) to share it with you all.

What it is:

- A fast CLI tool and library for generating QR codes
- Written in C++ with bindings for Ruby, PHP, and Node.js
- Full UTF-8 support (works great with Vietnamese, Japanese, and other languages)
- Supports custom colors, logo embedding, and precise size control
- Pre-built binaries included – no need to install dependencies separately

Why I built this: I needed a QR code generator that was fast, supported UTF-8 properly (especially for Vietnamese text), and could be easily integrated into different languages. Most existing solutions were either slow or had poor Unicode support.

Performance:

- Generating 100 QR codes (500x500px): ~0.3 seconds
- Generating 1000 QR codes (500x500px): ~3 seconds
- Batch mode is 7x faster than individual generation

Tech stack:

- C++ core using libqrencode and libpng
- Language bindings for Ruby, PHP, and Node.js
- Precompiled binaries for easy installation

This is my very first open-source project, so I'm sure there are things that could be improved or bugs I haven't caught yet. I'd really appreciate it if you could try it out and share your feedback. If you find any issues or have suggestions, please open an issue on GitHub – I'll do my best to fix them quickly.

Any feedback, criticism, or suggestions would be greatly appreciated. Thanks for taking the time to check it out!

GitHub: https://github.com/tranhuucanh/fastqr

3

Asimov – Context Manager for Coding Agents #

2 comments · 10:01 PM · View on HN
Hey HN, I built https://asimov.mov, an MCP for coding agents to search the latest docs of APIs, repos, or basically anything.

It can index and retrieve docs and repos very quickly so your agent can search them, and it can do it on the go (e.g., something that launched yesterday can be indexed right away, from your agent).

The indexed docs are only available to you and are stored on your own little server (we manage that!).

Hope Asimov can make your coding agents hallucinate less (by not implementing a 1000-year-old Stripe API or Cloudflare) :)

Happy to chat about features and feedback, and to give out some Pro tiers (for free!) if anyone wants to try more!

2

Live weather radar wallpaper for Android #

play.google.com
0 comments · 3:32 PM · View on HN
My very small team has built an Android live wallpaper that displays current NEXRAD weather radar data as your phone's background.

The app automatically selects the nearest radar site using coarse location permissions, or you can manually choose a location. Everything is size optimized and only updates when the wallpaper is being viewed to minimize battery footprint.

Interesting technical bits:

  - Using the US National Weather Service public radar feed (via public S3 bucket)
  - Custom radar volume data decoder written in Go and running as AWS Lambdas (triggered by S3 bucket events)
  - Custom rendering API
  - Kotlin Android app is <10 MB (no cross-platform bloat)
Available on Google Play:

https://play.google.com/store/apps/details?id=app.radarlove....

Happy to answer questions about the radar data processing pipeline, infrastructure, or the live wallpaper implementation. Also open to interesting opportunities for collaboration...

2

UltraFaceSwap – Multi-face swap for photos, GIFs, and videos #

ultrafaceswap.com
0 comments · 7:59 AM · View on HN
Hi HN,

I just launched UltraFaceSwap (https://ultrafaceswap.com), an AI face swapping tool that handles photos, GIFs, and videos with single or multiple face swaps.

*What makes it different:*

- *Multi-person swapping*: Swap multiple faces in one image/video (most tools only do one)
- *Multi-face GIF support*: First tool I've seen that handles multi-person face swapping in animated GIFs
- *Video quality*: Maintains temporal consistency (no flickering between frames)
- *Comprehensive format support*: Works seamlessly across photos, GIFs, and videos

*Why I built it:*

A friend needed a face swap tool for their project, so I tried every option on the market. They all had limitations:

- Only single face swaps
- Poor video quality with flickering
- No proper GIF support, especially for multiple faces
- Clunky UX requiring multiple steps

So I built what I wished existed.

*Pricing model:*

- Free trial to test it out
- Daily free credits (casual users might never pay)
- Pay-as-you-go for volume (no subscription)

*Tech challenges I solved:*

- Face tracking across video frames
- Handling occlusions (when faces turn away)
- Multi-face detection and assignment in animated GIFs
- GIF color palette preservation while swapping multiple faces

*Current limitations:*

- Processing videos takes time (30s video = ~3-5 min processing)
- Works best with clear, well-lit faces
- Very fast motion can still cause artifacts

I'm a solo developer and spent 2 months getting the quality right. Would love feedback from HN:

- What use cases would you have for this?
- Is multi-face swapping something you've needed?
- Any concerns about the technology?

Try it out – I think you'll find the quality noticeably better than alternatives.

2

DocCraft AI – Generate professional business documents #

0 comments · 10:20 PM · View on HN
Hi HN,

I built DocCraft AI after spending way too many hours writing job descriptions, privacy policies, and marketing copy for my startup. What should take 5 minutes was taking 2+ hours of research, writing, and revisions.

DocCraft generates professional business documents using AI, tailored to your company's industry and voice. Just fill out your company profile once, then generate:

- Job descriptions with proper requirements/benefits
- GDPR-compliant privacy policies
- Marketing emails that don't sound generic
- Terms of service, press releases, etc.

Built with Next.js 15, TypeScript, and OpenAI. Free tier available.

Try it: https://doccraftai.vercel.app/

What documents do you spend the most time writing? Would love feedback from fellow founders/operators.

2

A tool to track your marketing consistency #

marketingmemory.io
0 comments · 9:17 AM · View on HN
Hey HN,

I'm a solopreneur and I ship apps like a madman: 7 projects in the last year.

I always felt like I was doing marketing: a tweet here, a LinkedIn post there. But when I looked back, I couldn't remember what I'd actually done.

So I built Marketing Memory for two reasons:

1. To make marketing visible and consistent, like tracking commits but for growth
2. To understand which actions actually drive traffic and impact

It connects your marketing activity logs with Google Analytics so you can finally see your effort and results in one place.

I hope this helps other makers who struggle to stay consistent with marketing. Would love your feedback, please!

Axel

2

Gnoke Station – A WebOS for Industrial and IoT Dashboards #

1 comment · 3:59 PM · View on HN
When you first open Gnoke Station, you're greeted by... well, almost nothing. A clean desktop, a terminal, an app store, desktop settings, and silence. We understand why our previous post led to comments about the minimalism.

Our mistake was not making the core architectural intent clear. The empty screen isn't a lack of features—it's the feature.

The Philosophy of an Empty Desktop

Gnoke Station is a browser-based Web Operating System—a lightweight runtime environment, not a desktop clone. We built it for those who need a modular, minimal canvas they can fully control, specifically in industrial, IoT, and dashboard contexts.

The emptiness is a design choice to minimize overhead and maximize extensibility. Instead of wrestling with a bundled, heavy general-purpose OS, the user (or manufacturer) starts with a near-zero state and only loads the components they need.

Why Gnoke Station is a WebOS for Makers

The core value lies in the architecture, not the initial icons:

* Modular for Integrators: The entire desktop is designed as a minimal shell that manages external web applications. Manufacturers can swap out the login manager, default apps, and even the taskbar with their own specialized components using a simple JSON manifest.

* Built for Thin Clients: It transforms any modern browser into a desktop environment. There are no downloads, no installations, and no virtual machine requirements. This is crucial for environments where resources are constrained.

* Offline-Ready: We leverage modern browser APIs (like Service Workers and IndexedDB) to ensure the environment remains resilient to network interruptions—a necessity for reliable field and industrial applications.

This project is an experiment in rethinking what an operating system should be in the browser age: a flexible framework that lets you build your own digital control panel.

Proof of Footprint

As a testament to its resource lightness, Gnoke Station was built entirely on an Infinix phone by myself (EdmundSparrow). Innovation doesn't have to depend on expensive hardware—only imagination.

If you are an engineer or developer working on embedded UIs, industrial Human-Machine Interfaces (HMI), or specialized web dashboards, I'd love for you to explore this concept and check out the API for building your own apps.

Live Demo: https://gnokestation.netlify.app

GitHub Repo: https://GitHub.com/edmundsparrow/gnokestation

Pitch/Design Rationale:

https://gnokepitch.netlify.app

(Note: GnokePitch supports phone and tablet viewing, which is a big advantage over traditional C++ HMI solutions.)

2

Tempus is a high accuracy time-series analysis project #

zarkoasen.com
0 comments · 3:29 PM · View on HN
Tempus is a project aimed at high-accuracy modeling of time-series data and regression problems. It implements an improved kernel regression algorithm based on the support vector machine theory of professors Vapnik, Chervonenkis, and Lerner, with several improvements such as nested kernels, multilayered weights, massive parallelization, and systematic parameter tuning.

Tempus provides several signal decomposition methods and automated feature engineering. This new implementation of SVM allows any statistical model, even Tempus itself, to be used as a kernel function. This is done by calculating the ideal kernel matrix, which is used as a reference for measuring the kernel function's fitness. So far, LightGBM, Torch, Path, RBF, and Global Alignment kernels have been implemented.
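
For readers unfamiliar with the baseline technique, here is a minimal support-vector (kernel) regression example with an RBF kernel using scikit-learn. Tempus layers its own extensions on top of this idea (nested kernels, multilayered weights, kernel-matrix-based fitness), which this sketch does not attempt to reproduce.

    # Plain epsilon-SVR with an RBF kernel -- the classical technique that
    # Tempus builds on. Baseline illustration only, not Tempus itself.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(42)
    t = np.linspace(0, 10, 200).reshape(-1, 1)
    y = np.sin(t).ravel() + rng.normal(0, 0.1, t.shape[0])  # noisy series

    model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=0.5)
    model.fit(t[:150], y[:150])    # fit on the first 150 points

    pred = model.predict(t[150:])  # predict the remaining stretch
    print("test MAE:", float(np.mean(np.abs(pred - y[150:]))))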

Tempus achieves significantly higher accuracy than other competing models even on the most complex data.

1

TBR Deal Finder – Track and compare deals on audiobooks and ebooks #

github.com
0 comments · 12:09 PM · View on HN
I've been working on this for a while and figured there were a lot of avid readers on here.

TBR Deal Finder is a desktop app to track and centralize pricing across digital book retailers. It integrates with Kindle, Audible, Chirp, and Libro.fm, plus book discovery platforms like Goodreads and StoryGraph. It tracks prices over time and shows you where to find the best deal.

It also handles Whispersync pricing—where you can add a Kindle Unlimited ebook to unlock deep discounts on the Audible version.

Built with Python and Flet. Happy to answer questions!

1

Pitched a VC for 30min before realizing they invested in a competitor #

getbriefing.io
0 comments · 3:12 PM · View on HN
Hey HN,

I pitched an investor for 30 minutes before realizing they'd already passed on a similar company in their portfolio.

That moment broke me. I'd wasted both our time because I didn't do basic research.

So I built Briefing AI: paste any calendar invite and get an intelligence-style dossier on the attendees in 30 seconds.

THE WORKFLOW:

You have a meeting in 10 minutes. You know nothing about the attendees.

Before:

• Open LinkedIn, search "Sarah Chen Sequoia"
• Skim profile, open 5 tabs
• Google her recent activity
• Try to remember talking points
• Show up flustered
• Total time: 15-20 minutes (if you even do it)

After:

• Paste calendar invite into Briefing AI
• Get: executive summary, attendee backgrounds, company context, talking points, red flags
• Walk in prepared
• Total time: 30 seconds

THE TECH:

  • Next.js 15 + Cloudflare Pages (serverless)
  • Brave Search API (real company/attendee data, not just AI hallucinations)
  • GPT-4 (parsing + generation)
  • Device fingerprinting (1 free briefing, then $49 LTD or $19/month)
Launched Friday. Made 2 sales. Learning fast.

"BUT CAN'T I JUST USE CHATGPT?"

Yes. But:

• ChatGPT = 8-10 prompts, 10 minutes ("extract attendees" → "who is Sarah?" → "what should I ask?")
• Briefing AI = 1 paste, 30 seconds
• ChatGPT = conversation disappears
• Briefing AI = saved dashboard, searchable history

I'm not competing with ChatGPT. I'm making one specific workflow 10x faster.

WHAT I LEARNED (2 sales in 3 days):

The problem is real (everyone wings meetings), but I have zero distribution. 80 website visits, 2.5% conversion. Product works. Distribution doesn't.

So I'm asking HN:

• Is this genuinely useful, or am I solving a fake problem?
• Would you pay $49 lifetime for this?
• What's the #1 feature I'm missing?

Try it free (no card required): getbriefing.io

I'm here all day to answer questions and iterate based on your feedback.

PS: If you've ever Googled someone 5 minutes before a Zoom call, this is for you.

1

I built a virtual tabletop for in person RPG games #

tableslayer.com
0 comments · 12:15 PM · View on HN
Hey all. I've spent the bulk of 2025 fiddling with Table Slayer, a virtual tabletop built specifically for in-person play with a television. Although there are plenty of options for playing games like Dungeons and Dragons online, there wasn't much tailored to running a game with friends at a table.

The source is open[0], and my stack is currently SQLite, SvelteKit, and Fly. I use Cloudflare for my media, plus Workers that handle the PartyKit real-time syncing (which additionally uses Yjs for conflict resolution and syncing).

I have a few dozen paying users at this point, but mostly this is a labor of love that scratches a specific itch I've had myself. Right now I'm working on touch screen controls to support some large-format touchscreens I'm trying to source.

[0]: https://github.com/Siege-Perilous/tableslayer

1

An Audible alternative for classic audiobooks #

soundreads.io
0 comments · 3:34 PM · View on HN
I spent the last 1yr+ building an audiobook streaming service that focuses on timeless classic works in the public domain.

One thing I'm especially proud of is the restoration I did on the "War of the Worlds" 1938 Radio broadcast. I'm really happy with how it turned out. I've made it temporarily free (no login required or anything untoward) to listen to [1] in case anyone is curious. You should compare it with the original [2] and let me know what you think. I find the actual broadcast super engaging and enjoyable.

I've done everything from building the app to the audio engineering. A lot of learning involved as my background is on the software side, not marketing, audio engineering, etc.

A lot of the initial catalog is stuff that I've sourced from Librivox [3] (legends), but I've done a lot of curation, clean up, editing, mastering, and packaging to make it more enjoyable. There's also some original audiobooks too, and more should be landing each month (they're expensive and time consuming to produce with real humans and AI isn't quite there yet IMHO).

It's at a point now where I think it's ready enough to release it to the public.

I'd love any constructive feedback (audio quality, UX, concept, etc) that anyone would be willing to provide. Finally, some of the things that are unique:

1) Subscribers vote on what content we produce next (new content monthly is the plan).
2) 3 referrals = a free subscription.
3) A "lifetime" option (possibly naive, but it's what I want as a consumer).

Anyways, if you even read to this point I appreciate it!

[1] https://app.soundreads.io/discover/item/war-of-the-worlds
[2] https://archive.org/details/WarOfTheWorlds1938RadioBroadcast...
[3] https://librivox.org/