Daily Show HN

Show HN for October 23, 2025

39 items
188

I built a tech news aggregator that works the way my brain does #

deadstack.net
97 comments · 5:48 PM · View on HN
An honest-to-god, non-algorithmic, reverse-chrono list of tech news that passes my signal-to-noise tests, updated hourly.

As lightweight a page design as I've been able to keep: simple, clean, fast. No commercial features or aspirations - this is a passion project, something I've been fooling around with on and off for decades.

There's a "Top" view too, with an LLM-edited front page & summary, and categorized views for a large number of topics - see the Directory. There are a few more buried features to explore, but the fundamental use case is pop in, scan, exit - fast and concise.

Your feedback would be appreciated!

143

Deta Surf – An open source and local-first AI notebook #

github.com
41 comments · 12:11 PM · View on HN
Hi HN!

We got frustrated with the fragmented experience of exploring & creating across our file manager, the web and document apps. Lots of manual searching, opening windows & tabs, scrolling, and ultimately copying & pasting into a document editor.

Surf is a desktop app meant for simultaneous research and thinking to minimize the grunt work. It’s made of two parts:

1) A multi-media library where you can save and organize files and webpages into collections called Notebooks.

2) An LLM-powered smart document that you can auto-generate using the context from any stored page, tab, or entire notebook. This document contains deep links back to the source material - like a page of a PDF or a timestamp in a YouTube video. Unlike Deep Research products (or NotebookLM's chat), the entire thing is editable, and the user stays in the loop.

With a technology like AI, context/data is proving to be king. We think it should stay under the user's control, with minimal lock-in: you can own & export your data, and plug & play with different models. That's why Surf is:

- Open Source on GitHub
- Open (& Local) Data: the data saved in Surf is stored on your local machine in open and accessible formats, and it mostly works offline
- Open Model Choice: you can choose which models you use with Surf, and can add custom & local LLMs

Early users include students & researchers who are learning and doing thematic research using Surf.

Github repo: https://github.com/deta/surf/

Website: https://deta.surf/

110

Tommy – Turn ESP32 devices into through-wall motion sensors #

tommysense.com
80 comments · 5:04 PM · View on HN
Hi HN! I would like to present my project called TOMMY, which turns ESP32 devices into motion sensors that work through walls and obstacles using Wi-Fi sensing.

TOMMY started as a project for my own use. I was frustrated with motion sensors that didn't detect stationary presence and left dead zones everywhere. Presence sensors existed but were expensive and needed one per room. I explored echo localization first, but microphones listening 24/7 felt too creepy. Then I discovered Wi-Fi sensing - a huge research topic but nothing production-ready yet. It ticked all the boxes: could theoretically detect stationary presence through breathing/micromovements and worked through walls and furniture so devices could be hidden away.

Two years and dozens of research papers later, TOMMY has evolved into software I'm honestly quite proud of. Although it doesn't have stationary presence detection yet (coming Q1 2026), it detects motion really well. It works as a Home Assistant Add-on or Docker container, supports a range of ESP32 devices, and can be flashed through the built-in tool or used alongside existing ESPHome setups.

I released the first version a couple of months ago on Home Assistant's subreddit and got a lot of interest and positive feedback. More than 200 people joined the Discord community and almost 2,000 downloaded it.

Right now TOMMY is in beta, and it's completely free for everyone to use. I'm also offering free lifetime licenses to every beta user who joins the Discord channel.

You can read more about the project on https://www.tommysense.com. Please join the Discord channel if you are interested in the project.

A note on open source: There's been a lot of interest in having TOMMY as an open source project, which I fully understand. I'm reluctant to open source before reaching sustainability, as I'd love to work on this full time. However, privacy is verifiable - it's 100% local with no data collection (easily confirmed via packet sniffing or network isolation). Happy to help anyone verify this.

106

Git for LLMs – a context management interface #

twigg.ai
37 comments · 3:12 PM · View on HN
Hi HN, we’re Jamie and Matti, co-founders of Twigg.

During our master's we continually found the same pain points cropping up when using LLMs. The linear nature of typical LLM interfaces - like ChatGPT and Claude - makes it really easy to get lost, with no easy way to visualise or navigate your project.

Worst of all, none of them are well suited for long-term projects. We found ourselves spending days using the same chat, only for it to eventually break. Transferring context from one chat to another is also cumbersome. We decided to build something more intuitive to the way humans think.

We started with two simple ideas: chat branching for exploring tangents, and an interactive tree diagram for easy visualisation and navigation of your project.

Twigg has developed into an interface for context management - like "Git for LLMs". We believe the input to a model - its context - is fundamental to its performance. To extract the maximum potential from an LLM, users need complete control over exactly what context is provided to the model, which you get through simple features like cut, copy, and delete to manipulate your tree.
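For illustration, here is one way to picture "context as a tree" (a minimal Python sketch, not Twigg's actual data model or API): the context sent to the model is simply the root-to-node path, so branching, cutting, and deleting are ordinary tree edits.

  # Hypothetical sketch: a branchable conversation tree.
  from dataclasses import dataclass, field

  @dataclass
  class Node:
      role: str                       # "user" or "assistant"
      text: str
      children: list = field(default_factory=list)
      parent: "Node | None" = None

  def add(parent: "Node", role: str, text: str) -> "Node":
      # Adding a second child to the same parent is what creates a branch.
      child = Node(role, text, parent=parent)
      parent.children.append(child)
      return child

  def context(node: "Node") -> list:
      # The prompt is just the path from the root to the active node;
      # pruning or switching branches changes exactly what the model sees.
      path = []
      while node is not None:
          path.append({"role": node.role, "content": node.text})
          node = node.parent
      return list(reversed(path))

Switching the active node is the "checkout" in the Git analogy: two branches share their common prefix and diverge only in the context that follows it.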

Through Twigg, you can access a variety of LLMs from all the major providers, like ChatGPT, Gemini, Claude, and Grok. Aside from a standard tiered subscription model (free, plus, pro), we also offer a Bring Your Own Key (BYOK) service, where you can plug and play with your own API keys.

Our target audience is technical users who use LLMs for large projects on a regular basis. If this sounds like you, please try out Twigg - you can sign up for free at https://twigg.ai/. We would love to get your feedback!

101

Nostr Web – decentralized website hosting on Nostr #

nweb.shugur.com
35 comments · 2:20 PM · View on HN
We built Nostr Web, a new way to publish and host websites that live entirely on the Nostr network instead of centralized servers.

Each website is a collection of signed, verifiable Nostr events distributed across relays—so it can’t be taken down, censored, or lost.
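For readers new to Nostr: an event is a small, signed JSON object, and a site here is just a set of such events. A rough sketch of what a page-carrying event could look like is below - the field names (id, pubkey, created_at, kind, tags, content, sig) are the standard NIP-01 event fields, but the kind number and tag names are invented for illustration; the actual event kinds are specified in the nw-nips repo linked below.

  # Hypothetical shape of a Nostr event carrying one page of a site.
  # Standard NIP-01 fields; the kind number and tag names are made up here.
  page_event = {
      "id": "<sha256 hash of the serialized event>",
      "pubkey": "<site owner's public key, hex>",
      "created_at": 1761235200,           # unix timestamp
      "kind": 34128,                      # placeholder "web page" kind
      "tags": [["d", "/index.html"],      # placeholder path tag
               ["title", "Example page"]],
      "content": "<!doctype html><html>...</html>",
      "sig": "<schnorr signature over the id, by pubkey>",
  }

Because every event is signed by the site owner's key, any relay (or reader) can verify a page without trusting the host that served it.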

It includes:

• DNS TXT records for domain-based discovery (_nweb.domain.com)
• CLI publisher tool for versioned deployments (nw-publisher)
• Browser extension (nw-extension) for native browsing experience
• Relay v1.3.5 support for Nostr Web event kinds

Try the live demo: https://nweb.shugur.com

Repos: https://github.com/Shugur-Network/relay | https://github.com/Shugur-Network/nw-nips | https://github.com/Shugur-Network/nw-publisher | https://github.com/Shugur-Network/nw-extention

Would love feedback from the HN community—on protocol design, relay performance, or UX ideas for improving decentralized web publishing.

77

OpenSnowcat – A fork of Snowplow to keep open analytics alive #

opensnowcat.io
18 comments · 7:24 PM · View on HN
I’ve been a long-time Snowplow user and unofficial evangelizer. I have deep respect for its founders, Alex and Yali, who I met a few times.

What made me fall in love with Snowplow was that it was unopinionated, gave access to raw event data, and was truly open source. Back in 2013, that changed everything for me. I couldn’t look at GA the same way again.

Over the years, analytics moved into SQL warehouses driven by cheaper CPU/storage, dbt, reproducibility, and transparency. I saw the need for a democratized Snowplow pipeline and launched a hosted version in 2019.

In January 2024, Snowplow changed its license (SLULA), effectively ending open-source Snowplow by restricting production use. When that happened, I realized the spirit of open data and open architecture was gone.

A week later, I forked it; I wanted to keep the idea alive.

OpenSnowcat keeps the original collector and enricher under Apache 2.0 and stays fully compatible with existing Snowplow pipelines. We maintain it with regular patches, performance optimizations, and integrations with modern tools like Warpstream Bento for event processing/routing.

The goal is simple: keep open analytics open.

Would love to hear how others in the community think we can preserve openness in data infrastructure as “open source” becomes increasingly commercialized.

That's it. I should have posted here earlier, but now felt right.

22

ScreenAsk – Free Screen Recording Links for Customer Support #

screenask.com
1 comment · 4:59 PM · View on HN
Hey HN,

My name is Brett and I'm excited to share ScreenAsk with you today!

ScreenAsk makes it easy to collect screen recordings from your customers by simply sending a link.

At my SaaS company, we spend hours every week teaching customers to record their screen to show us support issues.

They have to sign up for a new tool, download and install it, create the recording, and then upload it somewhere to send it over…

Most of the time spent on support wasn't fixing the issue but rather understanding it.

I built ScreenAsk to solve this exact problem, making it simple to see what your customers see:

- Send over your recording link

- They follow a few easy steps to record their screen

- You instantly see what they’re experiencing

No sign up, no installing extra software, and no uploading to another service just to share.

You can get notified via Email + Slack + Zapier + Webhooks when somebody records, and recordings include transcription + AI summaries for quick scanning.

We also offer a widget that can be embedded in your site and is fully customizable + controllable with javascript.

- Show / hide it when you want
- Change colors and language
- Listen for a recording and populate a form field with the viewing link
- Add metadata like name, email, ID to the recording
- Capture network and console

I’d be grateful if you gave it a spin! You get 10 free recordings per month and a personalized recording link: https://screenask.com

Launch tweet + demos + discussion: https://x.com/balindenberg/status/1981394673173205418

16

Story Keeper – AI agents with narrative continuity instead of memory #

github.com
2 comments · 6:50 PM · View on HN
Hi HN! Creator here. I built Story Keeper to solve a problem I kept hitting with AI agents: they remember everything but lose coherence over long conversations.

The Core Idea

Instead of storing chat history and retrieving chunks (RAG approach), Story Keeper maintains a living narrative:

- Characters: Who you are (evolving), who the agent is
- Arc: Where you started → where you're going
- Themes: What matters to you
- Context: The thread connecting everything

Think of it as the difference between reading meeting notes vs. being in the relationship.

Technical Approach

~200 lines of Python. Three primitives:

- Story State (not message list)
- Story Evolution (not appending)
- Story-Grounded Response (not retrieval)

Works with any LLM - tested with GPT-4, Claude, Llama 3.1, Mistral.

Why This Works

Traditional memory is about facts. Story Keeper is about continuity.

Example: Health coaching agent

- Normal: Generic advice each time
- Story Keeper: "This is the pattern we identified last month. You do better with 'good enough' than perfect."
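A minimal sketch of what such a story state might hold, using the four elements above (illustrative only - these are not the actual PACT-AX data structures, and the field values come from the health-coaching example):

  # Illustrative only; field names taken from the description above.
  story_state = {
      "characters": {"user": "perfectionist, burns out on rigid plans",
                     "agent": "pragmatic health coach"},
      "arc": "from all-or-nothing dieting toward sustainable habits",
      "themes": ["good enough beats perfect", "consistency over intensity"],
      "context": "third month of coaching; last relapse discussed in week 6",
  }
  # Each turn evolves this narrative rather than appending raw messages.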

The agent carries forward understanding, not just data.

Implementation

Part of PACT-AX (open source agent collaboration framework). MIT licensed.

Simple integration:

  from pact_ax.primitives.story_keeper import StoryKeeper

  keeper = StoryKeeper(agent_id="my-agent")
  response = keeper.process_turn(user_message)

Use Cases I'm Exploring

- Long-term coaching/mentorship
- Multi-session research assistants
- Customer support with relationship continuity
- Educational tutors that understand learning journeys

What I'd Love Feedback On

- Is this solving a real problem or am I overthinking it?
- Performance concerns at scale?
- Other approaches people have tried for this?
- Use cases I'm missing?

The full technical writeup is in the repo blog folder. Happy to answer questions!

13

Pg_textsearch – BM25 Ranking for Postgres #

docs.tigerdata.com
0 comments · 5:25 PM · View on HN
I built pg_textsearch, a Postgres extension that brings proper BM25 ranking to full-text search. It's designed for AI/RAG workloads where search quality directly impacts LLM output.

Postgres's native ts_rank lacks corpus-aware signals (no IDF, no TF saturation, no length normalization). This causes mediocre documents to rank above excellent matches, which matters when your LLM depends on retrieval quality.
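For orientation, the corpus-aware signals mentioned above are exactly what the standard Okapi BM25 score provides (shown for reference; the extension's exact parameter defaults aren't stated in this post):

  score(D, Q) = \sum_{q_i \in Q} IDF(q_i) \cdot \frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1 \left(1 - b + b \cdot \frac{|D|}{avgdl}\right)}

  IDF(q_i) = \ln\left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5} + 1\right)

Here f(q_i, D) is the frequency of term q_i in document D (saturated by k_1), b controls length normalization against the average document length avgdl, N is the number of documents, and n(q_i) is how many of them contain q_i - the IDF and length-normalization terms are precisely what ts_rank lacks.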

Quick example:

  CREATE EXTENSION pg_textsearch;
  CREATE INDEX articles_idx ON articles USING bm25(content);
  SELECT title, content <@> to_bm25query('database performance', 'articles_idx') AS score
  FROM articles
  ORDER BY score
  LIMIT 10;
Works seamlessly with pgvector or pgvectorscale for hybrid search. Fully transactional (no sync jobs). Preview release uses in-memory architecture (64MB default per index); disk-based segments coming soon.

I love ParadeDB's pg_search but wanted something available on our managed Postgres. You can try pg_textsearch free on Tiger Cloud: https://console.cloud.timescale.com

Blog: https://www.tigerdata.com/blog/introducing-pg_textsearch-tru...

Docs: https://docs.tigerdata.com/use-timescale/latest/extensions/p...

Feedback welcome, especially from folks building RAG systems or hybrid search applications.

7

A Visual No-Code Game Engine – 50x Easier Than Unity #

play-maker.io
1 comment · 9:47 AM · View on HN
I've spent 30 years making games as both a hobby and profession, working with Unity, Unreal, GameMaker, RPG Maker, Ren'Py, Godot, and many others.

A few years ago, I attended a game dev meetup and met dozens of people with incredible passion for making games – but they were struggling because they lacked technical knowledge. That's when I decided to build a tool that anyone could use to create games.

I started developing this as a side project, eventually raised ~$100K through crowdfunding in Korea, then secured ~$200K in angel investment. I assembled a team, and we've been building this together ever since. We're officially launching in November, and we'll also be releasing it on Steam.

What makes this different:

The editor has an incredibly simple structure and UX. I'm also deeply passionate about AI, and I've designed this engine to integrate AI seamlessly. Post-launch, I plan to add prompt-based game generation (think Google AI Studio, Vercel v0, or Lovable, but for games) – allowing anyone to create structured game content just by describing it.

Our mission is "democratizing creativity." We want to empower everyone to express their ideas through games. Current focus: Visual novels, point-and-click adventures, and 2D animations. But we're also building in a physics engine for more dynamic gameplay possibilities – and it's actually pretty solid.

We're offering early bird pricing until November launch – if you're interested, grabbing it now would really help us build momentum.

We're a small team based in South Korea (yes, the land of K-pop, Squid Game, and KPop Demon Hunters... okay that last one isn't Korean, but you get the vibe).

Would love to hear your thoughts and feedback!

7

OpenAI ChatGPT App starter DevXP feels like 2010, I built a better one #

github.com
2 comments · 4:44 PM · View on HN
Hi HN!

Two weeks ago at their Dev Days, OpenAI unveiled ChatGPT apps, interactive widgets that render inside ChatGPT using the Apps SDK and MCP. Building them using their template repository was surprisingly painful: every time you make a change on your React widget, you need to rerun the entire build pipeline to get fresh JS and CSS assets. The dev feedback loop was pretty terrible.

That’s why I built a TypeScript starter that gets you up and running in under a minute. Clone, npm install, npm run dev, paste your ngrok URL into ChatGPT, and you're iterating on widgets with Hot Module Reload in the ChatGPT interface, just like building a regular web app. When you're ready to ship, the production build pipeline deploys instantly to Alpic or any other PaaS.

What's included:

- Vite dev server with Hot Module Reload (HMR) piggy-backed on your MCP server's Express server
- Skybridge framework: an abstraction layer I built on top of OpenAI's skybridge runtime that maps MCP tool invocations to React widgets, eliminating manual iframe communication and component wiring
- Production build pipeline: one-click deploy to Alpic.ai (with bundling, hosting, auth, and MCP-specific analytics included) or elsewhere
- No lock-in: uses the official MCP SDK, works with OpenAI's examples

Quick start:

  git clone https://github.com/alpic-ai/apps-sdk-template
  cd apps-sdk-template
  pnpm install && pnpm run dev
  ngrok http 3000
  # Add https://your-url.ngrok-free.app/mcp to ChatGPT Settings → Connectors

Feedback welcome, especially if you hit rough edges or want specific features!

Starter Kit GitHub → https://github.com/alpic-ai/apps-sdk-template

Skybridge Github → https://github.com/alpic-ai/skybridge

7

Coyote – Wildly Real-Time AI #

getcoyote.app
11 comments · 4:38 PM · View on HN
Hey all, we just shipped Coyote. It's an AI assistant, but built different - everything runs async and feels way more natural. You text it, it handles work in the background, and you can keep talking to it. No more stop button. Instead of creating another app, we put it in WhatsApp (iMessage coming soon) so you can just text it for free and get the feeling.

The core idea: most AI assistants make you sit there waiting for an answer. Coyote's like texting a friend - you ask them to grab something for you, they say "on it," and you keep chatting while they're out getting it. No awkward silence, no being stuck.

We built it to handle real tasks - emails, calendar stuff, research, whatever - all non-blocking. Everything happens concurrently, so you're never left hanging. We've also worked hard to make it snappy and friendly.

We're still early, but it's live and working. Try it out - happy to answer questions, and we'd love some feedback! Thanks!
6

Onetone – A PHP full-stack framework with AI runtime, ORM, CLI #

2 comments · 12:16 AM · View on HN
Hey HN,

I built Onetone Framework as a full-stack PHP framework that pushes the boundaries of what PHP can do.

It’s fast, modular, and surprisingly capable — combining backend routing, ORM, CLI tooling, and frontend build support into a single developer-friendly experience.

Highlights:

- PHP 8.2+ with autowired routing and ActiveRecord-style ORM
- Built-in CLI, Docker setup, and .env support
- Frontend build pipeline with Vite, esbuild
- Growing test suite covering routing, query building, crypto, math, UUID, and more

It’s still experimental, but I’m excited about how far it’s come. Would love feedback on what feels over-engineered, what’s missing, and what could make it truly useful.

GitHub: https://github.com/onetoneframework/framework

6

I made Quantify AI – alternative to TradingView (2 months work) #

quantify-ai.co
2 comments · 6:11 AM · View on HN
How it works (tech stack):

- Built entirely with Lovabl.dev (no-code front-end + logic)
- ChatGPT / Claude for research and inspiration
- Powered by GPT-4 Vision to interpret charts visually
- Hosted on Supabase for performance & caching

It’s not meant to replace analysts — just to speed up how traders interpret data.

I’m a designer exploring AI tools, and this is my first attempt to turn an idea into a functional product.

Would love to know what you think.

6

Distil-NPC: a family of models for non-playable characters in games #

github.com
0 comments · 5:25 PM · View on HN
We finetuned Google's Gemma 270m (and 1b) small language models to specialize in having conversations as the non-playable characters (NPCs) found in various video games. Our goal is to enhance the experience of interacting with NPCs in games by enabling natural language as the means of communication (instead of single-choice dialog options).
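If the finetuned checkpoints are published in the usual Hugging Face format, usage would look roughly like this (a sketch only - the model id and prompt format below are placeholders, not the project's actual ones; see the linked repo for the real details):

  # Sketch: generating an NPC reply with a finetuned causal LM via transformers.
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "your-org/distil-npc-gemma-270m"   # placeholder model id
  tok = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id)

  prompt = "NPC: blacksmith in a medieval town\nPlayer: Can you repair my sword?\nNPC:"
  inputs = tok(prompt, return_tensors="pt")
  out = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.8)
  print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))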
6

FlowLens – MCP server for debugging with Claude Code #

magentic.ai
1 comment · 9:35 PM · View on HN
Hi HN,

We often run into this with coding agents like Claude Code: debugging turns into copy-pasting logs, writing long explanations, and sharing screenshots.

FlowLens is an MCP server plus a Chrome extension that captures browser context (video, console, network, user actions, storage) and makes it available to MCP-compatible agents like Claude Code.

You can try it here: https://magentic.ai/flowlens

Any feedback—good, bad, or brutal—is welcome.

4

hist: An overengineered solution to `sort|uniq -c` with 25x throughput #

github.com
4 comments · 4:26 PM · View on HN
Was sitting around in meetings yesterday and remembered an old shell script I had for counting the number of unique lines in a file. Gave it a shot in Rust, and with a little bit of (over-engineering)™ I managed to get 25x the throughput of the naive coreutils approach, as well as improve on some existing tools.

Some notes on the improvements:

1. Using csv (serde) for writing leads to some big gains.

2. Arena allocation of incoming keys + storing references in the hashmap instead of owned values heavily reduces the number of allocations and improves cache efficiency (I'm guessing, I did not measure).

There are some regex functionalities and some table filtering built in as well.

happy hacking

4

I built Kumi – a typed, array-oriented dataflow compiler in Ruby #

kumi-play-web.fly.dev
0 comments · 9:39 PM · View on HN
Hi HN,

I'm the author of Kumi, a project I've been working on and would love to get your feedback on.

The Original Problem:

The original idea for Kumi came from a complex IAM problem I faced at a previous job. Provisioning a single employee meant applying dozens of interdependent rules (based on role, location, etc.) for every target system. The problem was deeper: even the data abstractions were rule-based. For instance, 'roles' for one system might just be a specific interpretation of Active Directory groups.

This logic was also highly volatile; writing the rules down became a discovery process, and admins needed to change them live. This was all on top of the underlying challenge of synchronizing data between systems. My solution back then was to handle some of this logic in a component called "Blueprints" that interpreted declarative rules and exposed this logic to other workflows.

The Evolution:

That "Blueprints" component stuck in my mind. About a year later, I decided to tackle the problem more fundamentally with Kumi. My first attempts were brittle—first runtime lambdas, then a series of interpreters. I knew what an AST was, but had to discover concepts like compilers, IRs, and formal type/shape representation. Each iteration revealed deeper problems.

The core issue was my AST representation wasn't expressive enough, forcing me into unverifiable 'runtime magic'. I realized the solution was to iteratively build a more expressive intermediate representation (IR). This wasn't a single step: I spent two months building and throwing away ~5 different IRs, tens of thousands of lines of code. That painful process forced me to learn what it truly meant to compile, represent complex shapes, normalize the dataflow, and verify logic. This journey is what led to static type-checking as a necessary outcome, not just an initial goal.

This was coupled with the core challenge: business logic is often about complex, nested, and ragged data (arrays, order items, etc.). If the DSL couldn't natively handle loops over this data, it was pointless. This required an IR expressive enough for optimizations like inlining and loop fusion, which are notoriously hard to reason about with vectorized data.

You can try a web-based demo here: https://kumi-play-web.fly.dev/

And the repo is here: https://github.com/amuta/kumi

3

New Version 3.1.0 of Hmpl #

github.com
0 comments · 9:22 PM · View on HN
Hello HN! We're pleased to announce the release of a new version of our template language.

In it, we've improved server security and simplified the handling of asynchronous functions.

We're on track to deliver an ideal solution for HATEOAS applications, one that can replace many competitors in terms of functionality.

2

401K Traditional vs. Roth Calculator #

401k.pages.dev
1 comment · 4:21 PM · View on HN
Hi everyone! I built a 401(k) Traditional vs. Roth calculator in Cursor to give a quick estimate of how your investments might grow and which option could work better for you. I’d love your thoughts and suggestions on how to improve it and make it easier for more people to use.
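For anyone curious about the math being estimated, the core Traditional-vs-Roth comparison reduces to roughly the following (a simplified sketch with assumed numbers, not the site's actual model - it ignores employer match, contribution limits, and bracket-by-bracket taxation):

  # Simplified Traditional vs. Roth comparison (illustrative assumptions only).
  contribution = 10_000                 # one pre-tax deposit, for simplicity
  years, annual_return = 30, 0.07
  tax_now, tax_at_withdrawal = 0.24, 0.22

  growth = (1 + annual_return) ** years
  traditional = contribution * growth * (1 - tax_at_withdrawal)   # taxed on the way out
  roth = contribution * (1 - tax_now) * growth                    # taxed on the way in

  print(f"Traditional: {traditional:,.0f}   Roth: {roth:,.0f}")

With identical tax rates the two come out equal; the winner is decided by whether your tax rate today is higher or lower than your expected rate in retirement, which is what a calculator like this helps you play with.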
2

VT Code – LLM-agnostic coding agent with MCP/ACP and sandboxed tools #

github.com
0 comments · 4:44 PM · View on HN
VT Code is a Rust CLI/TUI coding agent that works semantically on code (Tree-sitter + ast-grep), routes across multiple LLMs (incl. local via Ollama), and integrates with editors via ACP and external tools via MCP. It focuses on safety (workspace boundaries, per-tool Allow/Prompt/Deny, sandboxed commands/timeouts) and reproducibility (TOML config, caching, summarization).

What's different:

- AST-aware search/refactors with preview-before-apply
- Provider-agnostic routing with failover and cost/latency-aware caching
- Policy-gated tools and strict workspace boundaries
- Terminal-first ergonomics you can script

Install:

  cargo install vtcode
  # or
  brew install vinhnx/tap/vtcode
  npm install -g vtcode
Quick start:

  export OPENAI_API_KEY=...   # or another provider key
  vtcode
  vtcode ask "Find md5 usages across the repo and propose SHA-256 replacements (show diff)"
  vtcode ask "Search for .unwrap() on Result and suggest safer handling"
Status: research preview (MIT). Feedback welcome on ergonomics, ACP/MCP DX, safety defaults, and real-world AST refactor recipes.
2

Software Fails – A book about complex system failures (sample chapter) #

0 comments · 4:02 PM · View on HN
Based on Richard Cook's research on complex system failures, I wrote a book exploring why disasters like Knight Capital's $440M loss happen when every component functions correctly.

Free sample chapters: https://leanpub.com/how-software-fails

The book explores patterns behind complex system failures - from cosmic rays flipping bits in voting machines to the Therac-25 radiation overdoses. Key insight: traditional "root cause analysis" fundamentally misunderstands how these systems actually fail; failures emerge from interactions between components that were individually functioning correctly.

Feel free to grab the sample chapter and give it a try.

2

BesiegeField – LLM Agents Learn to Build Machines in a Physics Sandbox #

besiegefield.github.io
0 comments · 7:28 PM · View on HN
Hi HN! We'd like to share something we built: BesiegeField — an environment where large language models design, test, and refine machines in the physics-based construction game Besiege.

Machine design is treated as a code generation problem: LLMs choose standard parts and specify how to connect them, making the task well-suited to language models.
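A sketch of what "parts plus connections as code" could look like (purely illustrative - the part names and schema below are invented, not BesiegeField's actual machine representation, which is defined in the repo):

  # Invented example of a machine spec an LLM might emit: parts and how they connect.
  machine = {
      "parts": [
          {"id": "core",   "type": "starting_block"},
          {"id": "arm",    "type": "wooden_block", "attach_to": "core",  "face": "top"},
          {"id": "hinge",  "type": "hinge",        "attach_to": "arm",   "face": "end"},
          {"id": "basket", "type": "grabber",      "attach_to": "hinge", "face": "end"},
      ],
  }
  # The environment assembles this in Besiege, runs the physics, and returns a score
  # (e.g. how far the stone flew), which the agent loop or RL training uses to refine the design.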

The environment includes tasks like throwing a stone far or navigating bumpy terrain, and supports custom goals and environments.

We used it to explore both agentic workflows (no finetuning) and reinforcement learning, trying tasks like building cars and catapults.

It runs 100+ parallel processes on Linux clusters for scalable RL training.

Feedback from the AI, RL, engineering, and game dev communities is welcome.

GitHub: https://github.com/Godheritage/BesiegeField
HuggingFace Demo (single-agent): https://huggingface.co/spaces/Godheritage/BesiegeField-Machi...

2

Emdash – OS UI for parallel coding agents and Linear tickets #

github.com
0 comments · 4:42 PM · View on HN
Hey HN — Emdash is an open-source UI for running multiple coding agents in parallel. It’s provider-agnostic (10+ CLIs). Each agent runs in its own Git worktree; after a run, compare diffs side-by-side and apply only what you want. Data stays local. Linear tickets can be handed off directly to agents.
1

Don't Build Things No One Wants – We Test Ideas for Founders #

artalabs.com
0 comments · 3:44 PM · View on HN
Many founders waste months building something only to discover no one wants it. We help you validate ideas first, so you can find out in weeks what others find out in months: "Is there any actual demand for my idea?"

Is your idea worth building? Let’s prove it.

The Validation Sprint Process:

① Landing Page

We design and launch a polished one-page site that clearly communicates your idea, building trust fast through strong visuals, benefits, and calls-to-action. Give your idea the best chance possible.

② Wait List

Interested users can sign up, giving you a tangible signal of demand. Beyond numbers, it’s a pool of potential early adopters or beta testers to engage with directly.

③ Deep User Insights

A waitlist shows interest, but not commitment. We gather deeper insights through short surveys and optional interviews, also helping you tailor your build for your potential user base. This way, if you decide to build, you build with confidence and clarity.

④ Targeted Promotion

We help you share your idea in the right places — from niche communities to launch platforms like Product Hunt. The goal: meaningful feedback from potential users, not vanity traffic.

Flexible Involvement

Choose the level of involvement that fits you — whether you want a quick setup handled by us, or a full sprint where we work together closely. We’ll tailor pricing around your level of involvement.

Interested? Get in touch today!

1

I Made InfoBeatLive – The Operating System for Startup Success #

infobeatlive.com
0 comments · 4:03 PM · View on HN
Hi HN!

I'm the founder of InfoBeatLive. I built this platform to help founders avoid early startup failure.

InfoBeatLive acts as an AI-powered operating system for startups - helping you build marketing strategies, analyze product-market fit, and track performance with real-time insights.

I created this because I noticed most startups fail not because of the idea, but because of a lack of clear strategy and validation. I’d love your feedback, especially from early-stage founders and builders.

https://infobeatlive.com

Thanks for checking it out!