Daily Show HN

Upvote0

Show HN for February 19, 2026

48 items
653

Micasa – track your house from the terminal #

micasa.dev faviconmicasa.dev
215 comments3:54 PMView on HN
micasa is a terminal UI that helps you track home stuff, in a single SQLite file. No cloud, no account, no subscription. Backup with cp.

I built it because I was tired of losing track of everything in notes apps, and "I'll remember that"s. When do I need to clean the dishwasher filter? What's the best quote for a complete overhaul of the backyard. Oops, found some mold behind the trim, need to address that ASAP. That sort of stuff.

Another reason I made micasa was to build a (hopefully useful) low-stakes personal project where the code was written entirely by AI. I still review the code and click the merge button, but 99% of the programming was done with an agent.

Here are some things I think make it worth checking out:

- Vim-style modal UI. Nav mode to browse, edit mode to change. Multicolumn sort, fuzzy-jump to columns, pin-and-filter rows, hide columns you don't need, drill into related records (like quotes for a project). Much of the spirit of the design and some of the actual design choices is and are inspired by VisiData. You should check that out too. - Local LLM chat. Definitely a gimmick, but I am trying preempt "Yeah, but does it AI?"-style conversations. This is an optional feature and you can simply pretend it doesn't exist. All features work without it. - Single-file SQLite-based architecture. Document attachments (manuals, receipts, photos) are stored as BLOBs in the same SQLite database. One file is the whole app state. If you think this won't scale, you're right. It's pretty damn easy to work with though. - Pure Go, zero CGO. Built on Charmbracelet for the TUI and GORM + go-sqlite for the database. Charm makes pretty nice TUIs, and this was my first time using it.

Try it with sample data: go install github.com/cpcloud/micasa/cmd/micasa@latest && micasa --demo

If you're insane you can also run micasa --demo --years 1000 to generate 1000 years worth of demo data. Not sure what house would last that long, but hey, you do you.

198

A physically-based GPU ray tracer written in Julia #

makie.org faviconmakie.org
94 comments10:55 AMView on HN
We ported pbrt-v4 to Julia and built it into a Makie backend. Any Makie plot can now be rendered with physically-based path tracing.

Julia compiles user-defined physics directly into GPU kernels, so anyone can extend the ray tracer with new materials and media - a black hole with gravitational lensing is ~200 lines of Julia.

Runs on AMD, NVIDIA, and CPU via KernelAbstractions.jl, with Metal coming soon.

Demo scenes: github.com/SimonDanisch/RayDemo

194

Ghostty-based terminal with vertical tabs and notifications #

github.com favicongithub.com
76 comments9:30 PMView on HN
I run a lot of Claude Code and Codex sessions in parallel. I was using Ghostty with a bunch of split panes, and relying on native macOS notifications to know when an agent needed me. But Claude Code's notification body is always just "Claude is waiting for your input" with no context, and with enough tabs open, I couldn't even read the titles anymore.

I tried a few coding orchestrators but most of them were Electron/Tauri apps and the performance bugged me. I also just prefer the terminal since GUI orchestrators lock you into their workflow. So I built cmux as a native macOS app in Swift/AppKit. It uses libghostty for terminal rendering and reads your existing Ghostty config for themes, fonts, colors, and more.

The main additions are the sidebar and notification system. The sidebar has vertical tabs that show git branch, working directory, listening ports, and the latest notification text for each workspace. The notification system picks up terminal sequences (OSC 9/99/777) and has a CLI (cmux notify) you can wire into agent hooks for Claude Code, OpenCode, etc. When an agent is waiting, its pane gets a blue ring and the tab lights up in the sidebar, so I can tell which one needs me across splits and tabs. Cmd+Shift+U jumps to the most recent unread.

The in-app browser has a scriptable API ported from agent-browser [1]. Agents can snapshot the accessibility tree, get element refs, click, fill forms, evaluate JS, and read console logs. You can split a browser pane next to your terminal and have Claude Code interact with your dev server directly.

Everything is scriptable through the CLI and socket API – create workspaces/tabs, split panes, send keystrokes, open URLs in the browser.

Demo video: https://www.youtube.com/watch?v=i-WxO5YUTOs

Repo (AGPL): https://github.com/manaflow-ai/cmux

[1] https://github.com/vercel-labs/agent-browser

20

LatentScore – Type a mood, get procedural/ambient music (open source) #

latentscore.com faviconlatentscore.com
21 comments12:06 PMView on HN
Hey HN,

I've used Generative.fm for years and love it, but I always wanted to just describe what I was in the mood for instead of scrolling through presets. So I built this.

You type a text description of anything - from "mountain sunrise" to "neon city" - and it generates a procedural/ambient stream matching that mood. It runs locally, no account, no tracking, no ads.

Under the hood it's a custom synthesizer driven by sentence embeddings, not a generative AI model (although you can choose to use one!) — so there's no GPU, no API calls, and it starts playing almost instantly. The whole thing is open source: https://github.com/prabal-rje/latentscore

If you're a developer and want to use it programmatically it's also a Python library - pip install latentscore — one line to render audio. But honestly I just use the web player myself when I'm working.

Fair warning: it's still alpha and the synth has limits, so please don't expect full songs or vocals. It's ambient/procedural only. But for focus music or background atmosphere, I think it's pretty good.

Would love to know what vibes you try and whether they land!

- Prabal

14

`npx continues` – resume same session Claude, Gemini, Codex when limited #

github.com favicongithub.com
8 comments3:50 PMView on HN
i kept hitting rate limits in Claude Code mid-debugging, then hopping to Gemini or Codex. the annoying part wasn't switching tools (copy-pasting terminal output doesn't bring tool-use context with it) — it was losing the full conversation and spending 10 minutes re-explaining what i was doing.

so i built *continues*. it finds your existing AI coding sessions across five tools (Claude Code, GitHub Copilot, Gemini CLI, OpenAI Codex, OpenCode), lets you pick one, and generates a structured handoff so you can continue in another tool without starting from zero.

    npx continues                          # interactive TUI: pick a session, pick a target
    continues scan                         # see what it finds (read-only)
    continues claude                       # jump into your latest Claude Code session
    continues resume abc123 --in gemini    # hand off a specific session to Gemini
the flow:

* scans the current directory first (so you see what's relevant), then shows everything

* you pick a session + a target tool

* it generates a handoff doc: recent conversation, cwd, files modified (best-effort), pending tasks

* launches the target tool with everything injected inline — no extra "go read this file" step

what it's not:

* not "true migration" — it's context injection. you get recent messages + metadata, not a full replay (pls come up with PR for full session reprod)

* rate limit detection is manual for now — you run it when you know you're blocked, no auto-detect yet

* session formats are mostly undocumented and can change anytime (this is the biggest maintenance risk)

* local file parsing only, no API calls — your data stays on your machine

curious if anyone else actually juggles multiple AI coding CLIs, or if most people just commit to one and wait out rate limits. would love to hear how you handle tool-switching + context today + feedbacks on context quality migrations and feedbacks are welcomed.

4

StillOnAir – Turn YouTube into programmable, linear TV for free #

stillonair.com faviconstillonair.com
1 comments9:29 PMView on HN
Heya

I built StillOnAir, an app that turns YouTube into a lean-back, old-school linear TV experience.

Instead of choosing individual videos, you just press play and surf channels — like cable. Each channel is programmed into a continuous stream built from YouTube videos.

Right now there are ~30 live channels, including: • GameVerse (gaming) • Toontown (animation & cartoons) • Reelhouse (film & movie content) • Total Sports (sports highlights & talk) • And more across tech, entertainment, documentaries, sports etc.

The core idea: bring back a passive cable tv like watching experience, a sitting on the couch and tune in experience.

The added wrinkle is that anyone can create their own network. You can curate YouTube videos, schedule programming blocks, design a lineup, and air your own channel.

It’s essentially programmable internet TV built on top of YouTube — without hosting any video yourself.

Would love feedback if you decide to take it for a spin and happy to answer any technical questions as well.

4

I built a semiconductor internship job board #

semidesignjobs.com faviconsemidesignjobs.com
2 comments1:09 PMView on HN
I started a job board to help semiconductor design students (and more senior semiconductor folks) find their next internship (and job). We have a few hundred jobs up, and adding more every day.
4

My dream came true: I released a mobile game #

apps.apple.com faviconapps.apple.com
0 comments11:39 AMView on HN
Hi, HN. I want to share with you that I have released my first mobile game on iOS. It's called HueFold. It was a wonderful journey. At the same time, I felt both euphoria and disappointment, but in the end, my little dream of releasing my own mobile game came true, and now everyone can try it.
4

How far can you push file conversion into the browser? #

0 comments3:58 PMView on HN
Hi HN,

I’ve been experimenting with how much file conversion can realistically be pushed into the browser. Last year I tried compiling LibreOffice headless to WASM. The smallest build I could get was ~150MB — far too large just to convert a DOCX to PDF. That’s when I shifted to a hybrid approach. Today ~90% of conversions run client-side using WASM (FFmpeg, PDF/image tooling, spreadsheets, etc.). The heavier edge cases fall back to a small server pipeline (LibreOffice, Pandoc, Poppler).

The main challenges weren’t the libraries themselves, but: browser memory ceilings handling large files without freezing the UI lazy-loading ~30MB of WASM only when needed Safari vs. Chromium behavior differences

FFmpeg.wasm runs at roughly 10–20% of native speed. Acceptable for small/medium files, less so for large media. I also experimented with multithreaded FFmpeg in the browser, but haven’t found a stable setup yet. Curious how others think about the tradeoff between client-side processing vs. fully server-side pipelines.

→ anythingconverter.com

4

I created an app to remove Reels, now on iOS too #

apps.apple.com faviconapps.apple.com
4 comments1:49 PMView on HN
Last year I built an Android app to block Reels and Shorts while keeping "healthy" features like stories and DMs. I didn't want to block the whole app. I just wanted to message friends and see their posts without losing an hour scrolling on Reels. This was the HN post for the Android version: https://news.ycombinator.com/item?id=44923520

When people asked for an iOS version, I thought it was not possible. Apple is way more restrictive and doesn't allow that level of app access.

But I ended up building the iOS app using a different approach. On iOS, it uses WebApps. It's not exactly the same experience as the native app, but it works surprisingly well.

I also combined it with iOS Shortcuts to auto-redirect the native apps to WebApps, so I can keep Instagram installed for notifications but get sent to the WebApp without Reels and any feed when I tap.

Curious what you think, especially about the WebApp approach on iOS.

3

Astroworld – A universal N-body gravity engine in Python #

github.com favicongithub.com
0 comments7:57 PMView on HN
I’ve been working on a modular N-body simulator in Python called Astroworld. It started as a Solar System visualizer, but I recently refactored it into a general-purpose engine that decouples physical laws from planetary data.Technical Highlights:Symplectic Integration: Uses a Velocity Verlet integrator to maintain long-term energy conservation ($\Delta E/E \approx 10^{-8}$ in stable systems).Agnostic Architecture: It can ingest any system via orbital elements (Keplerian) or state vectors. I've used it to validate the stability of ultra-compact systems like TRAPPIST-1 and long-period perturbations like the Planet 9 hypothesis.Validation: Includes 90+ physical tests, including Mercury’s relativistic precession using Schwarzschild metric corrections.The Planet 9 Experiment:I ran a 10k-year simulation to track the differential signal in the argument of perihelion ($\omega$) for TNOs like Sedna. The result ($\approx 0.002^{\circ}$) was a great sanity check for the engine’s precision, as this effect is secular and requires millions of years to fully manifest.The Stack:NumPy for vectorization, Matplotlib for 2D analysis, and Plotly for interactive 3D trajectories.I'm currently working on a real-time 3D rendering layer. I’d love to get feedback on the integrator’s stability for high-eccentricity orbits or suggestions on implementing more complex gravitational potentials.
3

Potatometer – Check how visible your website is to AI search (GEO) #

potatometer.com faviconpotatometer.com
7 comments6:41 AMView on HN
Most SEO tools only check for Google. But a growing chunk of search is now happening inside ChatGPT, Perplexity, and other AI engines, and the signals they use to surface content are different. Potatometer runs multiple checks across both traditional SEO and GEO (Generative Engine Optimization) factors and gives you a score with specific recommendations. Free, no login needed. Curious if others have been thinking about this problem and what signals you think matter most for AI visibility.
3

KGBaby – A WebRTC based audio baby monitor I built on pat leave #

legodud3.github.io faviconlegodud3.github.io
0 comments12:04 PMView on HN
Baby monitors are a great boon. We use the audio-only Motorola AM21, which is excellent. But being on pat leave with a 2-month-old right now, I decided to build a browser-based alternative using WebRTC and some AI coding agents (Codex & Gemini).

It is an open-source, zero-latency P2P monitor. Hardware reuse: You can repurpose an old phone or tablet to be the child unit instead of buying single-purpose hardware. Actually private: Unlike using a never-ending Google Meet or Zoom, your stream stays private via WebRTC (PeerJS for signaling). No cloud routing. The backgrounding hack: Mobile Safari aggressively kills background audio. I used a hidden 1x1 base64 looping video to keep the microphone active when the screen dims.

Links: Live Demo: https://legodud3.github.io/kgbaby/ Repo: https://github.com/legodud3/kgbaby

Welcome all feedback!

3

Agent skills to build photo, video and design editors on the web #

github.com favicongithub.com
0 comments12:23 PMView on HN
This claude code plugin and npx skill bundles the full CE.SDK documentation, guided code generation, and a builder agent that scaffolds complete photo/video/design editor projects from scratch, all offline, no API calls or MCP servers needed.

Supports 10 frameworks: React, Vue, Svelte, Angular, Next.js, Nuxt.js, SvelteKit, Electron, Node.js, and vanilla JS.

2

PostForge – A PostScript interpreter written in Python #

github.com favicongithub.com
4 comments4:21 PMView on HN
Hi HN, I built a PostScript interpreter from scratch in Python.

PostForge implements the full PostScript Level 2 specification — operators, graphics model, font system, save/restore VM, the works. It reads .ps and .eps files and outputs PNG, PDF, SVG, or renders to an interactive Qt window.

Why build this? GhostScript is the only real game in town for PostScript interpretation, and it's a 35-year-old C codebase. I wanted something where you could actually read the code, step through execution, and understand what's happening. PostForge is modular and approachable — each operator category lives in its own file, the type system is clean, and there's an interactive prompt where you can poke at the interpreter state.

Some technical highlights:

- Full Level 2 compliance with selected Level 3 features - PDF output with Type 1 font reconstruction/subsetting and TrueType/CID embedding - ICC color management (sRGB, CMYK, Gray profiles via lcms2) - Optional Cython-compiled execution loop (15-40% speedup) - 2,500+ unit tests written in PostScript itself using a custom assertion framework - Interactive executive mode with live Qt display — useful for debugging PS programs

What it's not: A GhostScript replacement for production/printer use. It's interpreted Python, so it's slower. But it handles complex real-world PostScript files well and the output quality is solid.

I'd love feedback, especially from anyone who's worked with PostScript or built language interpreters. The architecture docs are at docs/developer/architecture-overview.md if you want to dig in.

2

I built a compliance scanner that flags WCAG GDPR and FTC risks in mins #

rataify.com faviconrataify.com
0 comments11:10 AMView on HN
We’ve been building Rataify, a website compliance scanner focused on accessibility (WCAG), privacy regulations (GDPR / CCPA), and FTC marketing claim risks.

Most compliance tools focus only on accessibility and often just wrap Lighthouse or axe-core. Privacy and marketing risk checks are usually manual.

We’re experimenting with a layered approach:

DOM-level accessibility checks (WCAG violations)

Policy presence + structural checks (privacy / terms disclosures)

Heuristic scanning of marketing copy for risky FTC-style claims

Fast report generation intended for pre-launch audits

The goal isn’t legal automation — it’s to reduce obvious compliance gaps before a site goes live.

We’re especially interested in:

False positive tolerance in automated compliance tools

Whether developers would run this as part of CI

What compliance signals are most valuable in practice

Would love technical feedback.

2

Aegis.rs, the first open source Rust-based LLM security proxy #

github.com favicongithub.com
2 comments11:11 AMView on HN
Hey HN,

I've been working on Aegis.rs for a bit, and I wanted to share it. It's the first open-source Rust-based LLM security proxy (that I could find, at least).

I kept having the same issue, since existing LLM security tools are either Python libraries you have to manually integrate into your app, or cloud SaaS products that route your traffic through a third party (which you can't control), and i wanted something that just sat in the middle without touching my code or sending prompts anywhere.

So I built a transparent reverse proxy. You point your requests at localhost:8080 instead of your LLM endpoint and, so far, it catches prompt injections, jailbreaks, PII leakage, and other LLM attacks, blocking them before any malicious request even reaches the model. If a request is clean, it forwards it. If it's malicious, it blocks it. Zero code changes.

It runs two layers: a fast heuristic engine with 150+ hand-crafted (expandable) regex rules that runs in under 1ms (thanks to Actix-web), plus an AI judge using Groq for semantic analysis on ambiguous cases.

Can be easily shipped as a single binary with a live dashboard, hot-reloadable rules, and structured JSON logs.

Still v0.1 but it's working well enough for me to share its first version. The heuristic layer is fast enough for prod, and extending the rules is pretty easy.

Would love feedbacks (or contributions lol), especially from anyone dealing with LLMs' security and threat modeling :)

2

CandyDocs – Simple, developer-friendly documentation for SaaS teams #

candydocs.com faviconcandydocs.com
0 comments12:14 PMView on HN
Hi HN,

We built CandyDocs after watching SaaS teams stitch together docs, roadmaps, feedback tools, release notes, and API references across multiple products.

CandyDocs is a unified, fully branded workspace that brings all of this into one place.

Core modules (all customizable):

- Knowledge base (guides, FAQs, onboarding)

- Roadmap (planned / in progress / shipped, with voting)

- Feedback (feature requests + comments)

- Updates (release notes and announcements)

- API documentation (endpoints, params, examples)

- Custom pages (policies, flows, anything else)

Platform features:

- Custom domains + branding

- Dynamic navigation

- Searchable knowledge base

- Public roadmap with voting

- Structured feedback collection

- Release notes publishing

- API reference docs

- Analytics and engagement insights

- Role-based permissions (admins/moderators)

The goal is to replace scattered tools with one consistent product communication hub.

We’re early and actively iterating. Would love technical feedback, criticism, or feature requests.

2

LLM-use – cost-effective LLM orchestrator for agents #

0 comments1:59 PMView on HN
Hi HN, Built llm-use: a lightweight Python toolkit for efficient agent workflows with multiple LLMs. Core pattern: strong model (Claude/GPT-4o/big local) for planning + synthesis; cheap/local workers for parallel subtasks (research, scrape, summarize, extract…). Features: • Mix Anthropic, OpenAI, Ollama, llama.cpp • Smart router: cheap/local first, escalate only if needed (learned + heuristic) • Parallel workers (–max-workers) • Real scraping + cache (BS4 or Playwright) • Offline-first (full Ollama support) • Cost tracking ($ for cloud, 0 local) • TUI chat + MCP server mode • Local session logs Quick example (hybrid):

python3 cli.py exec \ --orchestrator anthropic:claude-3-7-sonnet-20250219 \ --worker ollama:llama3.1:8b \ --enable-scrape \ --task "Summarize 6 recent sources on post-quantum crypto"

Or routed version:

python3 cli.py exec \ --router ollama:llama3.1:8b \ --orchestrator openai:o1 \ --worker gpt-4o-mini \ --task "Explain recent macOS security updates"

MIT licensed, minimal deps, embeddable. Repo: https://github.com/llm-use/llm-use Feedback welcome on: • Routing heuristics you’d find useful • Pain points with agent costs / local vs cloud • Missing integrations? Thanks!

2

Crit – Visual QA for iOS apps and AI coding agents #

natethegreat.github.io faviconnatethegreat.github.io
0 comments3:59 PMView on HN
I built Crit, a CLI tool that lets you capture screenshots from iOS Simulator, drop pins on what's wrong, and hand structured feedback to any coding agent.

You just:

crit capture — screenshot your app screens crit serve — review in browser, click to pin bugs and add comments Tell your agent: "review .crit and fix each issue"

It saves annotated screenshots and JSON to a .crit/ folder. Works with Claude Code, Cursor, Codex, Gemini — anything that can read images. No plugins, no MCP, no dependencies.

macOS + Xcode required. Android not yet supported. Repo: https://github.com/natethegreat/crit

2

TWFF – A container format for declaring AI use in writing #

github.com favicongithub.com
1 comments8:01 PMView on HN
TWFF (Tracked Writing File Format) is my proposal for moving away from so called AI detection to verifiable declaration.

Instead of an external model guessing if a text is AI-generated, TWFF is a ZIP-based container (similar to an EPUB) that stores the document alongside a Process Transcript (JSON).

How it works: 1) It captures Revision Velocity: the delta between human drafting and AI injections. 2) It intercepts paste and AI-interaction events, wrapping them in deterministic metadata. 3) It’s local-first. The audit trail stays with the author until they choose to export the signed container.

This is a v0.1 reference implementation built in Python/NiceGUI. I’m looking for feedback on: > The container structure (XHTML vs. Markdown). > The JSON event schema. > The Revision Distance logic: can we create a fingerprint for human effort that is as difficult to fake as the writing itself?

MVP Demo: https://demo.firl.nl/

TWFF spec:https://github.com/Functional-Intelligence-Research-Lab/TWFF...

2

Skicamslive.com #

skicamslive.com faviconskicamslive.com
0 comments9:26 PMView on HN
I built https://skicamslive.com because I was tired of clicking through youtube playlists to scope out my favorite mountain and wanted something I could put up on my big TV. I wanted something that felt like a "command center" for skiers.

Key Features: Zero Bloat: just the streams High-Res Grids: Every mountain page features a 16:9 grid of all available live cams.

Power User Workflow: You can "favorite" the resorts you care about and open all of them in new tabs with a single click (great for multi-monitor setups while you're getting ready to head out).

Fast Search: Filter by mountain name or state instantly.

Mobile Optimized: Works great on the chairlift or at your desk.

The Tech: Built as a static site using Jekyll, hosted on GitHub Pages. I focused heavily on a "premium" feel using vanilla CSS and minimal JS for the favorites system (stored in localStorage).

I’d love to get your feedback on the UI/UX, and if there are any specific mountains you think are missing, let me know and I'll add them!

Check it out here: https://skicamslive.com

1

What We Learned: a 3 question meeting closure tool #

cognu.app faviconcognu.app
0 comments11:06 AMView on HN
I built this because I kept seeing the same thing happen. A meeting would end, everyone would feel aligned and we’d move on. It felt productive. But a few weeks later, the same misunderstandings would show up again.

It wasn’t that people weren’t paying attention. We just never paused long enough to capture what we learned while it was still fresh. Retros felt too heavy for everyday decisions. Shared docs didn’t really solve it — the first person to write would shape everyone else’s answer.

So I made something intentionally small.

At the end of a meeting, it asks three questions: – What worked? – What didn’t? – What should we remember next time?

Everyone answers independently, then you see a shared snapshot. No accounts, no scoring, no task generation. It’s just a short pause before moving on. Curious if others have run into this, or solved it differently.

1

Run SigNoz on ObsessionDB and ClickHouse Cloud #

github.com favicongithub.com
0 comments5:20 PMView on HN
Hey HN, I'm Alvaro. I work on ObsessionDB, a managed ClickHouse service. We run petabyte-scale ClickHouse infra as a service.

We open-sourced a fork of SigNoz's schema migrator that makes SigNoz work on SharedMergeTree. That means it runs on both ObsessionDB and ClickHouse Cloud.

The problem: SigNoz's production setup assumes sharded ClickHouse. The official docs spec 56 CPU cores, 152 GiB RAM and 10 nodes including 3 ZooKeeper instances. Your observability tool becomes its own ops problem.

SigNoz's schema creates `_local` tables (ReplicatedMergeTree) and `Distributed` tables on top. Our migrator does three things: - Creates the `distributed_*` tables as SharedMergeTree (this is where data actually lives) - Creates the local table names as VIEWs pointing to the SharedMergeTree tables - Redirects Materialized Views to write to SharedMergeTree directly, since VIEWs can't receive inserts

SigNoz doesn't know the difference. The Query Service reads from the same table names. The OTEL Collector writes to the same table names. No core code changes.

This works with ClickHouse Cloud too, not just us. Same docker compose, different credentials.

Repo: https://github.com/obsessiondb/signoz-obsessiondb Blog post with the full technical walkthrough: https://obsessiondb.com/blog/scale-signoz-on-obsessiondb

What other ClickHouse-backed tools would benefit from SharedMergeTree support? Would love to hear what you're running into.

1

Schema Sentry – Type-Safe JSON-LD for Next.js with CI-Grade Validation #

github.com favicongithub.com
0 comments11:05 AMView on HN
TL;DR: I built a tool that makes adding JSON-LD structured data type-safe, validates against actual HTML output (not just config files), and enforces it in CI. No more broken schema markup.

The Problem

JSON-LD is painful to manage:

- Manually writing JSON-LD is error-prone and tedious

- Schema breaks silently when content changes

- Other tools validate JSON files (false positives!) — your pages still lack markup

- AI systems (ChatGPT, Claude, Perplexity) can't cite your content without proper structured data

- 30% lower CTR without rich snippets in Google

The Solution

Schema Sentry gives you type-safe builders + CI validation:

// Type-safe schema with autocomplete

* import { Schema, Article, Organization } from "@schemasentry/next";

export default function Page() {

  return (
    <>
      <Schema data={[
        Organization({ name: "Acme", url: "https://acme.com" }),
        Article({ 
          headline: "My Post", 
          authorName: "Jane", 
          datePublished: "2026-02-18",
          url: "https://acme.com/blog/post"
        })
      ]} />
      <main>...</main>
    </>
  );
}

Then in CI:

* pnpm schemasentry validate --manifest schema-sentry.manifest.json --build *

This validates actual built HTML — catches missing schema that other tools miss.

Key Features

- Type-safe builders for 15+ schema types (Product, Article, FAQ, etc.)

- <Schema /> component for Next.js App Router

- Validates real HTML output (zero false positives!)

- Manifest-driven coverage enforcement - GitHub Bot for automated PR schema reviews - VS Code extension with live preview

Why This Matters

- SEO: Eligible for rich snippets (30% higher CTR on Product pages)

- AI Discovery: ChatGPT/Claude/Perplexity can cite your content

- CI-grade: Fails builds when schema breaks — never deploy broken markup again

Try it:

pnpm add @schemasentry/next @schemasentry/core

pnpm add -D @schemasentry/cli

pnpm schemasentry init

Would love feedback!

1

Marketplace for Requesting Intelligence via Bounties #

getintelligence.space favicongetintelligence.space
0 comments11:02 AMView on HN
Hi everybody,

I’m building getintelligence.space, a marketplace where people and AI agents can post bounties to obtain specific intelligence that can’t easily be gathered automatically.

The idea came from noticing a gap: AI systems and organizations increasingly need real-world intelligence — due diligence, local knowledge, OSINT investigations, whistleblower infos or niche expertise — but there isn’t a structured, open market for requesting it from distributed humans. Intelligence is power and leverage but not easily accessible right now.

On the platform, a requester defines:

1. what intelligence they need

2. acceptance criteria

3. a reward held in escrow

Providers can submit reports or evidence pseudonymously, and the first valid submission receives the bounty.

The long-term idea is that AI agents could use humans as an “information layer” when data isn’t available online or when human intelligence is needed.

This is very early, and I’d really appreciate feedback — especially from people who’ve worked on marketplaces, intel tools, or anything else related.

1

ClawShield – Open-source firewall for agent-to-agent AI communication #

github.com favicongithub.com
0 comments8:03 PMView on HN
Hi HN!

I built ClawShield after discovering 40,214 OpenClaw instances exposed with critical CVE-2026-25253 (CVSS 8.8).

The problem: AI agents communicate with each other at scale, but there's NO firewall between them. A compromised agent can inject prompts, exfiltrate data, and hijack WebSocket sessions.

ClawShield sits between agents and blocks: - Prompt injection (16+ patterns) - Malicious skills/plugins (AST + sandbox) - Credential leaks (regex + entropy) - Unauthorized agent-to-agent comms - WebSocket hijacking

Built it last night. 181 tests. Production-ready. Open source (AGPL-3.0).

GitHub: https://github.com/DEFNOISE-AI/ClawShield Demo: [coming soon]

Compatible with OpenClaw, AutoGPT, or any agent protocol.

Free tier for personal use, paid for teams/enterprise.

Would love your feedback!

1

OctoGames – Free Browser Games Hub #

octogames.io faviconoctogames.io
0 comments11:00 AMView on HN
Hi HN,

I built OctoGames a free browser games hub where you can play instantly without installing anything. It’s aimed at people who want quick, casual games on desktop or mobile, without app stores or downloads.

What it does

- Hosts thousands of HTML5 browser games in one place - Works on desktop and mobile in the browser (plus an optional Android APK) - Lets you search, filter by genre, and sort (popularity, date added, A–Z) - Optional account to save favorites, track what you’ve played, and earn simple badges - News Hub with read/unread tracking for new games and updates - Light/dark theme toggle

Tech - React js - Firebase Auth + Firestore + Storage - Game catalog from external HTML5 providers - Some custom logic for game popularity, user stats, and featured content

Why I built it I like small web games and didn’t love bouncing between random portals full of popups and clutter. I wanted a single, clean place where you can open the site and be playing something in a few seconds, with the option to keep favorites if you care.

Looking for feedback on Onboarding: is it obvious what to click first? Game discovery: do search/filters/recommendations feel useful? Performance: does it load fast enough on your connection/device? Anything that feels sketchy, annoying, or broken

You can try it here: https://octogames.io

I’d appreciate any feedback, especially critical stuff (UX issues, performance problems, or architectural red flags).

1

Treliq – PR triage CLI with 20 signals and optional LLM scoring #

github.com favicongithub.com
0 comments1:09 PMView on HN
CLI + dashboard for open-source maintainers to decide which PR to review/merge first. 20 heuristic signals (scope coherence, complexity, staleness, CI status, etc.) + optional LLM scoring via Gemini, OpenAI, Anthropic, or OpenRouter. v0.5.1 adds --model flag and embedding auto-fallback. Would love feedback from anyone managing large PR queues.
1

Hydra – A safer OpenClaw alternative using containerized agents #

github.com favicongithub.com
0 comments12:13 PMView on HN
Hey HN!

I'm a pentester, and the recent wave of security issues with AI agent frameworks (exposed API keys, RCE vulnerabilities, malicious marketplace plugins) made me uncomfortable enough to build something different.

Hydra runs every AI agent inside its own container. Agents start with nothing, and only sees what you explicitly declare (mounts, secrets, etc). Mounts and secrets require agreement between two independent config files (the agent config and a separate host-level allowlist), so even if an agent's config gets tampered with, it can't escalate its own access.

Two modes of interaction:

- `hydra exec` gives you a full interactive Claude Code session inside the restricted agent container

- Orchestrated mode for automation: agents communicate via filesystem-based IPC for things like Telegram bots or scheduled tasks

The project was inspired by NanoClaw and completely redesigned to support contained Claude Code sessions with per-agent mounts, secrets, and MCP servers.

You can find the repo here: https://github.com/RickConsole/hydra and the Readme has the link to the writeup for it.

Happy to answer any questions about the architecture or threat model!

1

Clipthesis – free, local app to tag and search video across your drives #

clipthesis.com faviconclipthesis.com
0 comments11:14 AMView on HN
I'm a hobbyist video creator with a dumb problem: years of footage spread across multiple external drives with names like "C01456". Every time I sit down to edit, I waste too much time hunting for clips I know I shot but can't find.

I tried the usual recommendations (NeoFinder, Lightroom, various DAM tools) but nothing fit how I actually work with video. The online alternatives seemed to charge an arm and a leg, which is hard to justify when you're not running a production studio.

So I built a thing for a free Mac app that indexes all your drives into one searchable video library even when those drives aren't plugged in. What it actually does

- Index any drive or folder — point it at your external drives and it catalogs every clip with thumbnails, metadata, duration, resolution, etc. - Tag + search — tag clips however you want (b-roll, drone, interview, whatever), then combine tags to find exactly what you need - Duplicate detection — content hashing finds the same clip across multiple drives and groups them under one reference. Tag once, applies everywhere. - Smart import — plug in an SD card and it instantly shows new vs. already-indexed files. One click to import only new stuff. - Working drive — mark your fast SSD as your editing drive, then one-click copy clips from archive drives when you're ready to cut - Offline browsing — search, browse thumbnails, and tag clips from disconnected drives. It remembers everything. - DaVinci Resolve integration — send clips straight to Resolve's media pool with tags as keywords

I built it for myself and I'm putting it out there to see if it helps anyone else. You will probably hit bugs. Actively working on it and genuinely want feedback, what works, what's broken, what's missing.