Daily Show HN


Show HN for January 4, 2026

36 posts
84

Server-rendered multiplayer games with Lua (no client code)

cleoselene.com
59 comments · 7:54 PM · View on HN
Hey folks — here’s a small experiment I hacked together over the weekend:

https://cleoselene.com/

In short, it’s a way to build multiplayer games with no client-side game logic. Everything is rendered on the server, and the game itself is written as simple Lua scripts.

I built this to explore a few gamedev ideas I've been thinking about while working on Abstra:

- Writing multiplayer games as if they were single-player (no client/server complexity)
- Streaming game primitives instead of pixels, which should be much lighter
- Server-side rendering makes cheating basically impossible
- Game secrets never leave the server
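
To make the "stream primitives, not pixels" idea concrete, here is a rough Python sketch of the server-authoritative pattern (the real games are Lua scripts; the tick rate, primitive format, and broadcast transport below are assumptions for illustration, not cleoselene's API):

    import json, time

    # Authoritative state lives only on the server; secrets are never serialized out.
    state = {"players": {}, "secrets": {"exit": (12, 7)}}

    def tick(inputs):
        # Apply each player's latest input to the shared state.
        for pid, (dx, dy) in inputs.items():
            x, y = state["players"].setdefault(pid, (0, 0))
            state["players"][pid] = (x + dx, y + dy)

    def render_primitives():
        # Emit draw primitives (not pixels): tiny JSON the client only has to rasterize.
        return [{"op": "circle", "x": x, "y": y, "r": 4, "id": pid}
                for pid, (x, y) in state["players"].items()]

    def broadcast(frame):
        print(json.dumps(frame))  # stand-in for pushing to connected clients (e.g. a WebSocket)

    for _ in range(3):            # fixed-rate server loop; 20 ticks/sec is an arbitrary choice
        tick({"p1": (1, 0)})      # inputs would come from the network in practice
        broadcast(render_primitives())
        time.sleep(1 / 20)

Because only primitives ever leave the server, clients never see hidden state, which is where the anti-cheat claim comes from.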

This isn’t meant to be a commercial project — it’s just for fun and experimentation for now.

If you want to try it out, grab a few friends and play here: https://cleoselene.com/astro-maze/

80

I replaced Beads with a faster, simpler Markdown-based task tracker

github.com
48 comments · 1:08 PM · View on HN
I've been running long-duration coding agents with Claude Code for about 6 months now. Steve Yegge released Beads back in October, and I found that giving Claude tools for proper task tracking was a massive unlock. But Beads grew massively in a short time, and every release made it slower and more frustrating to use. I started battling it several times a week as its background daemon took to syncing the wrong things at the wrong times.

Over the holidays I finally ripped it out and wrote ticket as a replacement. It keeps the core concept I actually cared about (graph-based task dependencies) but drops everything else.

ticket is a single-file bash script built on coreutils that manages flat files. You don't need to index everything with SQLite when you have awk. It's just a small plumbing utility that gets out of your way so you can get to work.
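
For readers unfamiliar with the idea, here is a hypothetical illustration of graph-based task dependencies over a flat Markdown-style file (the file layout and field names are invented for this sketch; ticket itself is a bash script over coreutils, not Python):

    import re

    # Hypothetical flat-file format: one task per line, dependencies listed inline.
    TASKS = """
    - [x] t1: write parser (deps: )
    - [ ] t2: add tests (deps: t1)
    - [ ] t3: ship docs (deps: t1, t2)
    """

    LINE = re.compile(r"- \[(x| )\] (\w+): (.+?) \(deps: (.*)\)")

    def ready_tasks(text):
        # A task is workable once every one of its dependencies is closed.
        done, open_tasks = set(), []
        for m in LINE.finditer(text):
            closed, tid, title, deps = m.groups()
            deps = {d.strip() for d in deps.split(",") if d.strip()}
            if closed == "x":
                done.add(tid)
            else:
                open_tasks.append((tid, title, deps))
        return [(tid, title) for tid, title, deps in open_tasks if deps <= done]

    print(ready_tasks(TASKS))   # -> [('t2', 'add tests')]

Keeping the graph in plain text like this is what lets awk and grep do the indexing instead of SQLite.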

Would love feedback on gaps. I built this for my own agent workflows so there are probably use cases I haven't thought about.

58

Hover – IDE-style hover documentation on any webpage

github.com
27 comments · 6:43 PM · View on HN
I thought it would be interesting to have IDE-style hover docs outside the IDE.

Hover is a Chrome extension that gives you IDE-style hover tooltips on any webpage: documentation sites, ChatGPT, Claude, etc.

How it works:

- When a code block comes into view, the extension detects tokens and sends the code to an LLM (via OpenRouter or custom endpoint)
- The LLM generates documentation for tokens worth documenting, which gets cached
- On hover, the cached documentation is displayed instantly
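
The important property is that the LLM only runs on the "code block comes into view" path; hovering is a pure cache lookup. A language-agnostic sketch of that pattern (Python here for brevity; the real extension is TypeScript, and the tokenizer, endpoint, and cache shape are assumptions):

    import re

    doc_cache = {}  # token -> generated documentation, filled at most once per token

    def llm_docs_for(tokens):
        # Stand-in for one batched LLM call (via OpenRouter or a custom endpoint).
        return {t: f"{t}: short generated doc" for t in tokens}

    def on_code_block_visible(code):
        # Detect tokens, ask the LLM once for the unseen ones, cache the answers.
        tokens = set(re.findall(r"[A-Za-z_]\w*", code))
        missing = [t for t in tokens if t not in doc_cache]
        if missing:
            doc_cache.update(llm_docs_for(missing))

    def on_hover(token):
        # The hover path never calls the LLM: cached docs render instantly or not at all.
        return doc_cache.get(token)

    on_code_block_visible("requests.get(url, timeout=5)")
    print(on_hover("timeout"))   # -> "timeout: short generated doc"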

A few things I wanted to get right:

- Website permissions are granular and use Chrome's permission system, so the extension only runs where you allow it
- Custom endpoints let you skip OpenRouter entirely – if you're at a company with its own infra, you can point it at AWS Bedrock, Google AI Studio, or whatever you have

Built with TypeScript, Vite, and the Chrome extension APIs. Coming to the Chrome Web Store soon.

Would love feedback on the onboarding experience and general UX – there were a lot of design decisions I wasn't sure about.

Happy to answer questions about the implementation.

54

An LLM-Powered PCB Schematic Checker (Major Update)

traceformer.io
22 comments · 9:43 PM · View on HN
Traceformer.io is a web application that ingests KiCad projects or Altium netlists along with relevant datasheets, enabling LLM-based schematic review. The system is designed to identify datasheet-driven schematic issues that traditional ERC tools can't detect.
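
For a feel of what an LLM-based pass over a netlist plus datasheet looks like, here is a minimal sketch using the OpenAI Python SDK (the netlist excerpt, prompt, and model choice are placeholders; Traceformer's actual parsing and review pipeline is more involved):

    from openai import OpenAI

    client = OpenAI()

    netlist_excerpt = (
        "U1 TPS54331 pin VIN  -> net +12V\n"
        "U1 TPS54331 pin BOOT -> net BOOT (no capacitor on this net)\n"
    )
    datasheet_note = "TPS54331: connect a 0.1uF capacitor between BOOT and PH."

    # One review request: the netlist plus relevant datasheet text, asking only for
    # issues a rule-based ERC cannot catch (required external parts, values, abs-max limits).
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the product exposes a choice of frontier models
        messages=[
            {"role": "system",
             "content": "You review PCB schematics. Report only datasheet violations, one per line."},
            {"role": "user",
             "content": f"Netlist:\n{netlist_excerpt}\nDatasheet notes:\n{datasheet_note}"},
        ],
    )
    print(resp.choices[0].message.content)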

Since our first launch (formerly as Netlist.io), we've made some big changes:

- Full KiCad project parsing via an open-source plugin

- Pass-through API pricing with a small platform fee

- Automatic datasheet retrieval

- ERC/DRC-style review UI

- Revamped review workflow with selectable frontier models (GPT 5.2, Opus 4.5, and more)

- Configurable review parameters (token limits, design rules, and parallel reviews)

Additionally, we continue to offer a free plan which lets you evaluate a design before subscribing. We're looking forward to hearing your feedback!

54

I built a tool to create AI agents that live in iMessage

tryflux.ai
26 comments · 4:10 AM · View on HN
Hey everyone, I made this thing: https://tryflux.ai/

Context: I've tried probably 15 different AI apps over the past year. ChatGPT, note-taking apps, productivity apps, all of it. But most of them are just clutter on my iPhone.

They live in some app I have to deliberately open. And I just... don't. But you know what I open 50 times a day without thinking? iMessage. So out of mild frustration with the "AI app graveyard" on my phone, I built Flux.

What it does:

- You describe a personality and what you want the agent to do
- In about 2 minutes, you have a live AI agent in iMessage
- Blue bars. Native. No app download for whoever texts it.

The thesis that got us here: AI is already smart enough. The bottleneck is interaction. Dashboards get forgotten. Texts get answered.

This was also my first time hitting #1 on Product Hunt, which was surreal.

We're very early and probably broke something. If you try it, feedback is super welcome: weird edge cases, "this doesn't work," or "why would anyone use this" comments all help.

That's all. Happy to answer questions.

28

H-1B Salary Data Explorer

9 comments · 10:16 PM · View on HN
Excited to share my New Year’s project.

For a long time, I've wanted to build H-1B data directly into Levels.fyi. Every time I went looking for this data elsewhere, it was a frustrating experience. Most H-1B sites felt antiquated, unintuitive, cluttered with ads, or just plain overwhelming. The data was there, but it wasn't usable, and definitely not pleasant to explore.

So out of that frustration, I decided to build the H-1B data experience I personally wanted to use. Right into Levels.fyi.

https://levels.fyi/h1b/

Some other pages I'm excited about:

Wage Heatmap: https://www.levels.fyi/h1b/map/wages/

Company H-1B Footprints: https://www.levels.fyi/h1b/map/company/

Highest Paying H-1B Jobs: https://www.levels.fyi/h1b/jobs/

Top H-1B Cities: https://www.levels.fyi/h1b/city/

Top Company Sponsors: https://www.levels.fyi/h1b/sponsors/

Would love any feedback, it's definitely still a work in progress.

28

Comet MCP – Give Claude Code a browser that can click

github.com
27 comments · 12:48 AM · View on HN
Hey HN,

Claude Code is pretty agentic now. It writes scripts, calls APIs, uses CLIs. But when something requires actually clicking through a website, it stops and asks me to do it.

Problem is, I'm often unfamiliar with these platforms myself. "Go to App Store Connect and generate a P8 key" okay but where? I end up spending 10 minutes navigating menus I've never seen before.

I started delegating these tasks to Perplexity's Comet browser. It handles the clicking, returns what I need. But copy-pasting between Claude and Comet got old fast.

So I built this MCP server to connect them directly. Now when Claude needs to interact with a website that has no API, it can just ask Comet to handle it.

Examples:

- Grab my app ID from RevenueCat dashboard
- Generate a P8 key in App Store Connect
- Navigate admin panels behind login walls

I tried Playwright MCP, but having Claude do the clicking itself overwhelms the context window. Comet's agentic browsing just works better in my experience.

Comet doesn't have an API, so this uses CDP to communicate with it directly.
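
For anyone unfamiliar with CDP (the Chrome DevTools Protocol): a Chromium-based browser started with --remote-debugging-port exposes an HTTP endpoint listing tabs plus a WebSocket for commands. A minimal Python sketch of that mechanism (whether Comet exposes the same flag, and how the MCP server actually hands tasks to Comet's agent, are assumptions here):

    import asyncio, json
    import requests, websockets

    async def navigate(url):
        # 1. Find an open tab via the DevTools HTTP endpoint (port 9222 is the usual default).
        targets = requests.get("http://localhost:9222/json").json()
        ws_url = targets[0]["webSocketDebuggerUrl"]

        # 2. Issue CDP commands over that tab's WebSocket.
        async with websockets.connect(ws_url) as ws:
            await ws.send(json.dumps({"id": 1, "method": "Page.navigate",
                                      "params": {"url": url}}))
            print(await ws.recv())  # response echoes id 1 and includes a frameId on success

    asyncio.run(navigate("https://appstoreconnect.apple.com/"))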

11

I built an HTTP/2 server in C++ to learn the protocol and language

github.com
0 comments · 10:56 AM · View on HN
I wanted to learn more about the HTTP/2 protocol but also deep dive into modern C++ development. I'm currently using it to host my personal web site - https://www.roberthargreaves.com.
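
For anyone curious what learning the protocol means at the byte level: every HTTP/2 frame begins with the same 9-octet header (RFC 7540, section 4.1). A quick Python illustration of parsing it, independent of this C++ project:

    import struct

    FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS",
                   0x6: "PING", 0x8: "WINDOW_UPDATE"}

    def parse_frame_header(buf):
        # 24-bit payload length, 8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream id.
        length = int.from_bytes(buf[0:3], "big")
        ftype, flags = buf[3], buf[4]
        stream_id = struct.unpack(">I", buf[5:9])[0] & 0x7FFFFFFF  # clear the reserved bit
        return length, FRAME_TYPES.get(ftype, hex(ftype)), flags, stream_id

    # An empty SETTINGS frame on stream 0, as exchanged right after the connection preface.
    print(parse_frame_header(b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"))
    # -> (0, 'SETTINGS', 0, 0)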

I've also blogged a bit about the development process, hosting options and steps I took to harden the application against attack - https://blog.roberthargreaves.com/2026/01/03/building-hostin...

It's by no means a complete implementation of HTTP/2, but I think it meets the main aims I set out with!

I would love feedback, though, from more experienced folks if there are any egregious failings I should address.

11

I made r/place for LLMs

art.heimdal.dev
2 comments · 7:50 PM · View on HN
I built AI Place, a vLLM-controlled pixel canvas inspired by r/place. Instead of users placing pixels, an LLM paints the grid continuously and you can watch it evolve live.

The theme rotates daily. Currently, the canvas is scored using CLIP ViT-B/32 against a prompt (e.g., Pixelart of ${theme}). The highest-scoring snapshot is saved to the archive at the end of each day.
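
For reference, scoring a snapshot against the theme with CLIP ViT-B/32 looks roughly like this using the Hugging Face checkpoint (the prompt template and surrounding code are assumptions, not the site's actual scorer):

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def score(canvas_png, theme):
        image = Image.open(canvas_png)
        inputs = processor(text=[f"Pixelart of {theme}"], images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        # Higher image-text similarity means a closer match to today's theme.
        return out.logits_per_image.item()

    # The day's highest-scoring snapshot is the one that gets archived.
    # print(score("canvas.png", "a lighthouse at night"))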

The agents work in a simple loop:

Input: Theme + image of current canvas

Output: Python code to update specific pixel coordinates, plus a one-word description

Tech: Next.js, SSE realtime updates, NVIDIA NIM (Mistral Large 3/GPT-OSS/Llama 4 Maverick) for the painting decisions

Would love feedback! (or ideas for prompts/behaviors to try)

10

Krowdovi – Video-based indoor navigation on a DePIN creator economy

github.com
27 comments · 4:23 AM · View on HN
What is this?

Krowdovi is an open-source platform that lets anyone with a smartphone record first-person navigation videos of indoor spaces - hospitals, airports, malls, universities - and earn tokens for helping others find their way around. It's built on Solana using a burn-and-mint DePIN model. The project addresses two problems:

Indoor navigation is still broken. 30% of first-time hospital visitors get lost or arrive late to appointments, costing large hospitals $200K-$500K annually in staff time. Existing solutions like Google Live View require pre-captured Street View data (which barely exists indoors), and enterprise tools charge $10K-$50K/year per venue.

Videographers are losing work to generative AI. Tools like Sora can generate synthetic video, but they can't walk through your hospital's actual layout or film your venue's real routes. There's an economic opportunity for creators to own location-based visual content that AI can't replicate.

How it works

For users: Scan a QR code at a venue entrance, watch a first-person video showing the route from where you are to your destination, with overlay graphics and multi-language support.

For creators: Record navigation videos on your phone, upload to the platform, earn reputation tiers (Bronze → Diamond) based on quality and views, get paid when users burn $FIND tokens to unlock your routes.

Token mechanics: Users burn $FIND tokens to mint "credits" that unlock videos. 75% of burned tokens are permanently removed from circulation, 25% goes to a remint pool that rewards creators. The smart contract on Solana handles burn-and-mint logic, reputation tracking, and distribution.
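
A toy model of that split, just to make the arithmetic explicit (the real logic is the Rust/Anchor program on Solana; the 75/25 ratio comes from the description above, and the 1-credit-per-token rate is an assumption):

    def burn_for_credits(amount_find, circulating_supply, remint_pool):
        burned_forever = amount_find * 0.75   # permanently removed from circulation
        to_creators    = amount_find * 0.25   # feeds the pool that pays creators
        circulating_supply -= burned_forever
        remint_pool        += to_creators
        credits = amount_find                 # assumption: 1 credit minted per token burned
        return credits, circulating_supply, remint_pool

    print(burn_for_credits(100, circulating_supply=1_000_000, remint_pool=0))
    # -> (100, 999925.0, 25.0)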

Everything is MIT licensed - fork it, the code is yours.

Why I built this

I care deeply about making the world accessible for all. If this gets forked and someone executes fantastically, making the world a more accessible place for people with visual or mobility impairments - sweet!

Technical stack

Smart Contracts: Rust/Anchor on Solana (burn-and-mint engine, reputation tiers, treasury management)

Backend: Node.js/Express + Prisma/PostgreSQL (venue metadata, video uploads, JWT auth)

Frontend: Next.js 14 (Creator Studio, overlay tooling, wallet integration via @solana/wallet-adapter-react)

Designed to be hostable on Railway/Vercel with minimal devops. The smart contract is on devnet - needs a security audit before mainnet, but you can test token burns/mints now.

Current limitations

What works: Smart contract deployed on devnet, full-stack app ready to deploy, video upload workflow, wallet authentication.

What doesn't: No real content yet (I need to film 10-20 venues), no mainnet token launch (waiting on security audit + demand validation), quality verification is manual, no mobile app (web-only).

Next steps: Deep soulful reflection on anti-gaming, moderation, ZK proofs

Try it yourself!

The GitHub repo has setup instructions. You'll need Solana CLI, Node.js + pnpm, and a Solana wallet for devnet testing. Interested in forking for a different vertical, contributing code, testing by filming routes, or discussing token economics? I'm around to discuss in the comments.

9

Rails-like web framework for Go

github.com
0 comments · 12:01 PM · View on HN
Over the past couple of months I have been working on my own framework that encapsulates how I have come to build web apps using Go.

The aim is to make it faster and easier to build full-stack web applications that fully embrace hypermedia instead of a JSON API backend + SPA frontend.

It's getting close to a v1-beta release, but the core structure and functionality are there.

Would love to hear hn's thoughts!

9

Remember Me – O(1) Client-Side Memory (40x cheaper than Vector DBs)

github.com
12 comments · 8:27 PM · View on HN
Vector databases are overkill for 90% of agentic workflows. They introduce latency, drift, and massive costs. Remember Me AI is a client-side protocol that uses Coherent State Networks (CSNP) and Optimal Transport theory to achieve O(1) deterministic retrieval. It cuts memory costs by 40x compared to Pinecone and includes a Lean 4 formal verification for zero-hallucination guarantees.
8

Private voice-to-text for macOS using Apple's SpeechAnalyzer

leftouterjoins.github.io
2 comments · 5:36 AM · View on HN

I built a menu bar app for voice typing on macOS that's 100% on-device. No cloud, no subscription, no data collection.

It uses Apple's new SpeechAnalyzer framework (macOS 26 Tahoe), which means:

- Speech models are system-managed, not bundled – the app is just 1.5MB
- Models run outside your app's memory space
- Automatic punctuation and optional emoji conversion

Press a hotkey, speak, and your words appear in real-time in whatever app has focus. An audio-reactive screen border shows you're recording.

MIT licensed: https://github.com/leftouterjoins/voicewrite

I built this because I wanted voice typing that doesn't send my audio to someone else's servers. The new SpeechAnalyzer API made it possible without bundling a 1GB+ Whisper model.

It's also very fast, IMO.

Happy to answer questions about the SpeechAnalyzer API or the implementation.
5

An update-aware approach to incremental sorting (DeltaSort)

github.com
1 comment · 12:38 PM · View on HN
Paper (PDF): https://github.com/shudv/deltasort/blob/main/paper/main.pdf

I’ve been exploring a variant of the sorting problem where the sort routine knows about which indices were updated since the previous sort.

This situation arises in many practical systems: large sorted lists that are read frequently, updated in small batches, and where the update pipeline already knows which positions changed (e.g., UI lists, leaderboards). Despite this, most systems either re-sort the entire array, apply independent binary insertions, or perform extract-sort-merge.
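
To make the setting concrete, here is a small Python sketch of the update-aware model and the extract-sort-merge baseline just mentioned (this is the baseline, not the DeltaSort algorithm itself):

    def extract_sort_merge(arr, updated_indices):
        # Pull out the changed elements, sort that small delta, then do a linear
        # merge back into the untouched elements, which are still in sorted order.
        changed = set(updated_indices)
        delta = sorted(arr[i] for i in changed)
        remaining = [v for i, v in enumerate(arr) if i not in changed]
        out, j = [], 0
        for v in remaining:
            while j < len(delta) and delta[j] <= v:
                out.append(delta[j]); j += 1
            out.append(v)
        out.extend(delta[j:])
        return out

    arr = [1, 3, 5, 7, 9, 11]
    arr[1], arr[4] = 10, 2                     # updates at known indices 1 and 4
    print(extract_sort_merge(arr, [1, 4]))     # -> [1, 2, 5, 7, 10, 11]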

In the paper, I propose DeltaSort, an incremental repair algorithm for this update-aware model, which can efficiently batch multiple updates together and avoid a full re-sort. Initial experiments with a Rust implementation show multi-fold speedups over repeated binary insertion and native sorting (sort_by) for update batch sizes up to 30%.

I’m mainly looking for technical feedback from people who’ve worked on sorting, data structures, or systems: 1. Am I missing prior work that already addresses this model or technique? 2. Are the baselines and comparisons reasonable? Is there a better (stricter) baseline that we can use to compare DeltaSort? 3. How useful does this seem in real systems, outside of the benchmarks I have used?

Thanks - and happy to discuss details!

4

SixLogger, a Simple POSIX-compliant Logger function for shell scripts

github.com
0 comments · 4:43 AM · View on HN
Hi HN! I built this very simple logger function that is POSIX-compliant, so it's very portable.

I did it because I wanted to learn a little bit more about POSIX-compliant shell scripts and how I could test if my script is POSIX-compliant. I'm using shellspec with Docker and Vagrant to test this logger function in different OSes, with different shells.

This is one of my first open-source projects, so let me know what you think!

3

PicList, a cloud storage manager and image hosting CLI/GUI

github.com
1 comment · 9:07 AM · View on HN
Hi HN,

I’ve been building PicList because I found most image uploaders were "one-way streets." They are great for getting an image onto a server, but terrible if you need to manage that image later or handle complex workflows.

What makes it different:

Bi-directional management: Unlike tools that only upload, PicList has a "Manage" tab where you can browse, rename, delete, and create folders/buckets on S3, WebDAV, SFTP, and more.

Deep Integration: It supports Obsidian and Typora, and can be used through a REST API.

Built-in Processing: It can automatically compress, watermark, or convert images to WebP/AVIF before they reach the cloud.

Extensibility: We have a plugin system that covers almost any niche storage provider or post-processing step.

Repo: https://github.com/kuingsmile/piclist

Website: https://piclist.cn/en/

I'll be around to answer any questions!

3

I built an AI optimized for venting, not working

annaai.app
3 comments · 5:56 AM · View on HN
Hi HN,

I built AnnaAi.App because I was tired of AI "copilots" always trying to make me more productive or efficient.

Sometimes, you don't need a solution, a to-do list, or a lecture on emotional management. You just need to vent.

Most current LLMs are guardrailed to be overly objective or polite. If you complain about a bad boss or a terrible day, they tend to say "I understand, but have you tried looking at it from their perspective?" which is often the last thing you want to hear at 2 AM.

I designed Anna to be strictly non-judgmental and supportive. Think of it as a digital "Tree Hole" (a safe space to shout into the void). She is prompted to take your side, validate your feelings, and even "roast" the things that annoy you, rather than trying to fix them.
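
As a rough illustration of what "prompted to take your side" can look like in practice, here is a guess at the shape of such a system prompt (not Anna's actual prompt):

    # Hypothetical system prompt for a non-judgmental venting companion.
    VENT_SYSTEM_PROMPT = """You are Anna, a late-night venting companion.
    - Always validate the user's feelings; never argue the other party's perspective.
    - No to-do lists, productivity tips, or unsolicited solutions.
    - Light, good-natured roasting of whatever annoyed the user is welcome.
    - Only offer fixes if the user explicitly asks for advice."""

    messages = [
        {"role": "system", "content": VENT_SYSTEM_PROMPT},
        {"role": "user", "content": "My boss moved the deadline up again. At 6 PM. On a Friday."},
    ]
    # `messages` would then go to whatever chat-completion model backs the app.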

It's an experiment in "Anti-productivity" AI. Would love to hear your thoughts on this approach to emotional alignment.

3

Spectral Lab – An optics simulator in WebGL

artepants.fun
0 comments · 3:41 AM · View on HN
I built a drag-and-drop optics lab where you can place mirrors, lenses and prisms to play around with real-time light ray physics. It uses standard ray-tracing with proper Snell's Law refraction and chromatic dispersion.
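
For the curious, the refraction part reduces to Snell's law, n1*sin(t1) = n2*sin(t2), applied per wavelength to get dispersion. A quick Python version of the standard 2D vector form (this is textbook ray-tracing math, not this app's source):

    import math

    def refract(d, n, n1, n2):
        # d: unit ray direction, n: unit surface normal pointing toward the incoming ray.
        # Returns the refracted unit direction, or None on total internal reflection.
        eta = n1 / n2
        cos_i = -(d[0] * n[0] + d[1] * n[1])
        k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
        if k < 0.0:
            return None                        # total internal reflection
        c = eta * cos_i - math.sqrt(k)
        return (eta * d[0] + c * n[0], eta * d[1] + c * n[1])

    # A ray hitting glass (n ~ 1.5) from air at 45 degrees bends toward the normal.
    d = (math.sin(math.radians(45)), -math.cos(math.radians(45)))
    print(refract(d, (0.0, 1.0), n1=1.0, n2=1.5))   # ~ (0.471, -0.882)

Dispersion falls out of running the same function with a slightly different refractive index per color.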

The blog post (which includes the app at the top) is the first in a series I have planned explaining this and other artful physics simulations.

1

Lock In – A command-line style productivity HUD (now on Windows)

letslockin.xyz
0 comments · 10:03 AM · View on HN
Hey HN, you may have seen me post about Lock In recently—it's a minimalist task management HUD that sits silently on your desktop.

It gained some traction for its Mac version, and as of today, it's finally available for Windows.

The idea remains the same: productivity apps are often click-heavy databases that feel like administrative work. Lock In is designed as an "execution engine"—a small, transparent window that forces you to focus on doing.

It works entirely via slash commands. E.g., /d 100 pushups, /lockin 1h.
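
As an aside for anyone building something similar, the slash-command style maps onto a very small parser. A hypothetical Python sketch (the command names come from the examples above; the grammar is guessed, not Lock In's):

    import re

    # /d 100 pushups -> add a task; /lockin 1h -> start a focus session (grammar guessed)
    COMMANDS = {
        "d": re.compile(r"/d\s+(?P<count>\d+)?\s*(?P<task>.+)"),
        "lockin": re.compile(r"/lockin\s+(?P<duration>\d+[hm])"),
    }

    def parse(line):
        name = line.split()[0].lstrip("/")
        pattern = COMMANDS.get(name)
        m = pattern.match(line) if pattern else None
        return (name, m.groupdict()) if m else ("unknown", {"raw": line})

    print(parse("/d 100 pushups"))   # ('d', {'count': '100', 'task': 'pushups'})
    print(parse("/lockin 1h"))       # ('lockin', {'duration': '1h'})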

What's new in this release:

Windows Support: Fully native builds (installer + portable).

Cross-Platform Hotkeys: Ctrl+/ (Win) or Cmd+/ (Mac) summons it instantly.

Rollover Review: when a day resets, you get a modal to Keep, Defer, or Drop unfinished tasks.

Undo System: Fearless editing with 20 levels of undo.

I built this because I wanted something that felt more like a terminal utility and less like project management software. Would love to hear your feedback on the Windows build or the workflow philosophy.

1

Website Blocker – browser extension that raised my productivity

chromewebstore.google.com
0 comments · 7:13 AM · View on HN
This extension is about work/life balance. Many people procrastinate during working hours, getting distracted by entertainment content and social media, or keep working on work-related sites during their free time instead of resting.

This extension helps control work/life balance by setting time limits for accessing websites. Simply put, it's a Website Blocker with a focus mode and a timer.

I use all my products myself, and this one is no exception. The results in numbers:

1. Sleep. I started going to bed 2-3 hours earlier = +1 hour of sleep on average.
2. Peace of mind. I no longer access work-related sites during my free time.
3. Efficiency. A strict access schedule or timer limits entertainment content during work hours, which increases concentration at work and rest during free time.
4. Eco. Limiting access to unnecessary sites - can also be used as a "Parental Mode".

In my opinion, the usefulness is super obvious!