Daily Show HN


Show HN for October 1, 2025

20 posts
778

Autism Simulator #

autism-simulator.vercel.app
856 comments · 2:48 PM · View on HN
Hey all, I built this. It’s not trying to capture every autistic experience (that’d be impossible). It’s based on my own lived experience as well as that of friends on the spectrum.

I'm trying to give people a feel for what masking, decision fatigue, and burnout can look like day-to-day. That’s hard to explain in words, but easier to show through choices and stats. I'm not trying to "define autism".

I’ve gotten good feedback here about resilience, meds, and difficulty tuning. I’ll keep tweaking it. If even a few people walk away thinking, "ah, maybe that’s why my coworker struggles in those situations," then it’s worth it.

Appreciate everyone who’s tried it and shared thoughts.

130

ChartDB Agent – Cursor for DB schema design #

app.chartdb.io
36 comments · 1:38 PM · View on HN
Last year we launched ChartDB OSS (https://news.ycombinator.com/item?id=44972238) - an open-source tool that generates ER diagrams from your database (via query/SQL/DBML) without needing direct DB access.

Now we’re launching the ChartDB Agent.

It helps you design databases from scratch or make schema changes with natural language.

You can:

- Generate schemas by simply describing them in plain English

- Brainstorm new tables, columns, and relationships with AI

- Iterate visually in a diagram (ERD)

- Deterministically export SQL scripts

Try it out here - https://chartdb.io/ai - no signup required.

Or sign up and use it on your own database.

Would love to get your feedback :)

75

Re-Implementing the macOS Spatial Finder #

github.com
37 comments · 8:58 PM · View on HN
Modern macOS versions open folders in seemingly random positions and sizes. This set of scripts restores the behaviour known to classic macOS, where:

- folders remember where they were on the screen

- folders remember how big they were

This enables you to utilise the brain's superb spatial memory for file management.
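The bookkeeping these scripts need can be sketched roughly as follows. This is a hypothetical Python illustration only: the actual repo likely drives Finder via AppleScript, and the state structure here is invented.

```python
import json
from pathlib import Path


def remember(state, folder, x, y, w, h):
    """Record a folder window's last position and size."""
    state[folder] = {"x": x, "y": y, "w": w, "h": h}
    return state


def recall(state, folder):
    """Return the saved geometry for a folder, or None if never seen."""
    return state.get(folder)


def save(state, path):
    """Persist the geometry map between sessions."""
    Path(path).write_text(json.dumps(state, indent=2))


def load(path):
    """Load the geometry map, starting empty on first run."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else {}
```

On window close you would call `remember` and `save`; on window open, `recall` either restores the stored frame or falls back to the system default.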

29

2D Spine Animation AI for Game #

godmodeai.co
12 comments · 2:42 PM · View on HN
I built this 2D Spine animation AI for games.

Upload a game character image and it directly generates a 2D Spine animation for your character, to which you can apply 2k+ ready-made animations.

What it does:

- Auto rigging, auto bone structure generation

- Apply 2k+ animations in a click

- Layered image output

- Export directly to Spine format for editing

- Export to all major game engines, like Unity and Godot

- 10x easier, 10x cheaper for game development

11

We built a modern research paper reader #

ontosyn.com
5 comments · 12:24 PM · View on HN
Over the past year, we’ve been frustrated with how clunky it is to manage research papers. So we built Ontosyn, a modern research paper reader with a clean UI, better in-paper navigation (bookmarks, jumping back up after clicking on a reference), an integrated AI chat with useful tools (such as recommending relevant papers that you can add directly to your library), and, of course, library functionality to keep everything organized.

We launched a closed alpha last week, and our first users are quite pleased with it. One user asked for author-based search for paper recommendations yesterday, and we shipped it within an hour.

Our goal is to make it easy to stay up to date with relevant research and enjoy engaging with it.

We have lots of cool features planned, we’re still very early, and we’d love to iterate with feedback from the HN community.

You can try it at ontosyn.com

6

Next.js-like Python web framework, built for Htmx with FastAPI #

volfpeter.github.io
6 comments · 12:02 PM · View on HN
It's very early days for the project, but I wanted to share it to see if there is interest.

It is the final piece of the FastAPI server-side rendering stack I started building with FastHX and htmy (the two dependencies of this project besides FastAPI).

Think of it as a more powerful and convenient alternative to tools like FastHTML, powered by FastAPI (without any modifications). I hope you'll like it.

5

Alloy Automation MCP – Connectivity for business-critical systems #

ai.runalloy.com
2 comments · 2:48 PM · View on HN
I'm Mike, Head of Eng/Product at Alloy Automation.

We built MCP by Alloy Automation to give your AI agents structured access to business-critical systems without the integration headache. We've built MCP servers covering thousands of tools across platforms like Quickbooks, Xero, Notion, HubSpot, and Salesforce. Pick the tools you need, provision a server, and ship faster.

Need more control? Our Connectivity API gives you programmatic access to all the same tools for custom integrations beyond MCP.

Everything runs with scoped auth utilizing our battle-tested credential management system that independently manages your secrets.

Login for free and try it out here: https://ai.runalloy.com/

We'd love your feedback: what would make this usable in your stack? Happy to dive into any of the details!

3

AI analyst agent – Seamlessly switch between chat and build mode #

fabi.ai
0 comments · 10:14 PM · View on HN
Hey, I'm Marc one of the co-founders of Fabi!

We've all seen the "chat with your data" or the "AI-as-an-assistant" when building reports, but we felt like this wasn't quite right.

So we decided to build a system that allows users to:

- Connect any data source to the AI

- Chat purely with the AI and their data (we show the output right in the chat before you even apply the changes)

- Convert that chat into a repeatable analysis with a data app or workflow

- Toggle back to chat mode without missing a beat

Basically this means that if you're a data person, you don't have to go through these loops of Ask AI -> Apply the code to your report -> Look at the results -> Revert or ask for updates. You can just stay in the chat interface and ask questions until you get the result you want, then "Save" it to your report.

Sharing here, because making this work required some pretty cool and difficult handling of variables between the AI chat and the report.

For example, if you have a dataframe "df1" in your report, you want the AI to be able to access that dataframe and create new ones, say "df2". But you don't want df2 available in your report until you've accepted the change from the AI; otherwise the AI would create tons of variables and cause a mess.
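The isolation described above can be sketched as a sandboxed namespace. This is a hypothetical illustration of the idea, not Fabi's actual implementation: the AI's code runs against a copy of the report's variables, and new names only reach the report once accepted.

```python
def run_in_chat_sandbox(code, report_vars):
    """Run AI-generated code against a copy of the report's variables.
    New variables (e.g. df2) stay pending until the user accepts them."""
    sandbox = dict(report_vars)          # AI can read df1, etc.
    exec(code, sandbox)
    # Keep only what the code newly defined or rebound
    pending = {k: v for k, v in sandbox.items()
               if k not in report_vars or report_vars[k] is not v}
    pending.pop("__builtins__", None)    # injected by exec(), not user data
    return pending


def accept(report_vars, pending, names):
    """Merge only the explicitly accepted variables back into the report."""
    report_vars.update({k: pending[k] for k in names})
```

The report's own namespace is never mutated during chat; "Save" is just the `accept` step.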

Happy to answer any questions!

2

Llmswap – Solving "Multiple Second Brains" with Per-Project AI Memory #

0 comments · 2:27 PM · View on HN
I kept seeing developers (including myself) struggle with the same problem: "I need multiple second brains for different aspects of my life, but AI keeps forgetting context between sessions."

So I built llmswap v5.1.0 with a workspace system that gives you persistent, per-project AI memory.

How it works:

  - cd ~/work/api-platform → AI loads enterprise patterns, team conventions

  - cd ~/learning/rust → AI loads your learning journey, where you struggled

  - cd ~/personal/side-project → AI loads personal preferences, experiments

Each workspace has independent memory (context.md, learnings.md, decisions.md) that persists across sessions. Your AI mentor actually remembers what you learned yesterday, last week, last month.
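This per-directory loading could work roughly like the sketch below. It is hypothetical (llmswap's actual implementation may differ, and the `.llmswap/` marker directory is invented), but the memory file names come from the post.

```python
from pathlib import Path

MEMORY_FILES = ("context.md", "learnings.md", "decisions.md")


def find_workspace(start):
    """Walk up from the current directory looking for a workspace root,
    marked here by a hypothetical .llmswap/ directory."""
    start = Path(start)
    for d in [start, *start.parents]:
        if (d / ".llmswap").is_dir():
            return d / ".llmswap"
    return None


def load_memory(workspace):
    """Concatenate the workspace's memory files into one context blob
    that can be prepended to the AI conversation."""
    parts = []
    for name in MEMORY_FILES:
        f = workspace / name
        if f.exists():
            parts.append(f"## {name}\n{f.read_text()}")
    return "\n\n".join(parts)
```

Because the lookup walks upward, `cd`-ing anywhere inside a project picks up that project's memory, which is what makes each directory feel like its own "second brain".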

Key features:

  • Auto-learning journals (AI extracts key learnings from every conversation)

  • 6 teaching personas (rotate between Guru, Socrates, Coach for different perspectives)

  • Works with ANY provider (Claude Sonnet 4.5, IBM Watsonx, GPT-4 o1, Gemini, Groq, Ollama)

  • Python SDK + CLI in one tool

  • Zero vendor lock-in

Think of it as "cURL for LLMs" - universal, simple, powerful.

The workspace system is what makes this different. As far as I know, no similar tool (Claude Code, Cursor, Continue.dev) has per-project persistent memory with auto-learning tracking.

Built for developers who:

  - Manage multiple projects and lose context switching

  - Are tired of re-explaining their tech stack every session

  - Want AI that builds on previous learnings, not starts from zero

  - Need different "modes" for work/learning/side projects

Open to feedback! Especially interested in:

  1. What other workspace features would be useful?

  2. How do you currently manage AI context across projects?

  3. Would you use auto-learning journals?


GitHub: https://github.com/sreenathmmenon/llmswap

PyPI: pip install llmswap==5.1.0

Docs: https://llmswap.org

2

Spaced Repetition for LeetCode – Stop Forgetting Problems You've Solved #

dsaprep.dev
0 comments · 12:34 PM · View on HN
Hi HN,

I built a spaced repetition system for LeetCode after realizing I'd "solved" 150+ problems but couldn't recall the patterns when faced with new questions in actual interviews.

The problem: Solving problems once creates the illusion of learning without actual retention. Your brain discards solutions as "one-time knowledge" unless you review them at spaced intervals.

What it does:

- Automatically schedules problem reviews (1 day → 3 days → 1 week → 2 weeks → 1 month, etc.)

- Adjusts intervals based on difficulty (easy problems reviewed less frequently)

- Highlights overdue problems that need attention

- Tracks your actual retention with notes/flashcards

Key insight: It's better to master 50 problems through 3-5 reviews each than solve 200 problems once and forget them all.
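The interval ladder described above can be sketched in a few lines. This is a hypothetical illustration, not the site's actual algorithm; the difficulty multipliers are invented, only the base ladder comes from the post.

```python
# Review ladder from the post: 1 day → 3 days → 1 week → 2 weeks → 1 month, ...
BASE_INTERVALS = [1, 3, 7, 14, 30, 60]


def next_review_in_days(review_count, difficulty="medium"):
    """Days until the next review; easier problems get stretched intervals,
    harder ones come back sooner."""
    idx = min(review_count, len(BASE_INTERVALS) - 1)
    factor = {"easy": 2.0, "medium": 1.0, "hard": 0.5}[difficulty]
    return max(1, round(BASE_INTERVALS[idx] * factor))
```

Each successful review moves a problem one rung up the ladder; overdue problems are simply those whose scheduled date has passed.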

Demo: https://dsaprep.dev/tracker

I've been using this for a couple of months and the difference in pattern recognition is significant. Problems that used to feel "new" now trigger automatic recognition of which approach to use.

Tech stack: React, Node.js, Express, MongoDB

Would love feedback on:

- The revision interval algorithm

- UI/UX for the review workflow

- Pricing strategy

- Whether this scratches an itch for others

Built this as a side project to solve my own problem. Happy to answer questions!

2

Spit Notes – A songwriting app that keeps lyrics and audio together #

getspitnotes.com
0 comments · 4:42 PM · View on HN
Any songwriter who uses the iOS Notes app to write their lyrics has a mess of New Recording 142 voice memos in their Voice Memos app. I made Spit Notes as basically the Notes app but with a built-in voice recorder that keeps your audio files neatly organized on the same line as your lyrics. Now you can quickly capture your song ideas while driving or when you wake up in the middle of the night without worrying about losing them in your pile of untitled voice memos.

While you can attach audio to notes with other apps, adding and recording audio there has a lot of friction, and often the layout of the audio elements in those apps is too pronounced to keep the text flowing seamlessly. This is not the case with Spit Notes.

I've wanted this app to exist for years but I put off making it myself because I knew it would take me a lot of time to build manually without knowing Swift. In recent years I've been writing AI-assisted code, but with AI coding agents getting better and better, 3 months ago I decided to see if I could vibe code a full product.

The code for this project was not AI-assisted but human-assisted: I provide the vision and feel of the app, the AI agent takes that and makes a pass at the code base, and then I QA it, letting the AI know what worked and what didn't, and iterating.

I started by paying for Cursor and using Opus 4, but after getting insanely good initial results and seeing my Cursor costs start to rise, I remembered this post https://news.ycombinator.com/item?id=44167412 and took the plunge with the Claude Code Max $200 plan. This turned out to be an incredible value because it let me use Claude basically without limit. However, Gemini still had the biggest context window, and as the project grew I had to use Gemini to create plans for big features and find deep bugs across all of the AI-generated modules.

Pro tip: create broad reference files for you and the AI, like an ARCHITECTURE.md where you keep a human-readable version of the technical big picture. You can then reference that for the AI so it stays aligned with your current progression.

Once the Codex CLI became available on Homebrew, it was a wrap. I cancelled my Claude Code Max plan and have been happily coding without ever hitting any rate limits (other than when they made that update that accidentally reduced rate limits instead of increasing them). Today I pay for ChatGPT Plus and the $20 Gemini plan and am able to clear most obstacles on the first or second prompt. I haven't tried Opus 4.5, but since I'm not really getting stuck with Codex, for now I'll stick with that.

1

Comexp RVS – First real-time video-to-video search, TV monitoring and more #

chromewebstore.google.com
0 comments · 10:46 AM · View on HN
I built Comexp RVS – a browser extension and platform for searching and analyzing video by video.

Links:

Extension: https://chromewebstore.google.com/detail/comexp-rvs/dnpncabg...

Widget/API: https://tape.comexp.net/tools

It solves four core video problems:

Find any movie or series: Paste or upload a short clip, get the original full movie/episode, even with noisy footage. No training, just real matching.

Live TV monitoring: Track when and where a story aired across hundreds of live TV channels and archives.

Smart no-repeat viewing: Filter out repeated content in compilations and playlists, so you don’t rewatch duplicate clips.

Video summarization: Watch just the most important parts—auto-summary generation for long videos.

What’s different: Unlike reverse image search or “AI” clip detectors, Comexp RVS matches dynamic video in real time using our own TAPe technology, not ML or neural nets. It works with millions of videos, runs on a single server, and doesn’t require training sets or GPU clusters.

Why I built it: Everyone consumes video, but search and analysis tools haven’t caught up. I wanted to solve “what movie is this?”, track TV facts, and let people watch smarter, not just more.

It works now in Chrome. Demo/explanation and tech docs are on the site. Happy to answer questions and discuss tech details. Thanks for checking it out!

1

OneTabMan #

chromewebstore.google.com
0 comments · 10:46 AM · View on HN
Hi HN,

I've made a Chrome extension that allows only a single tab in a browser window, in an attempt to regain my focus when browsing the web and, more generally, when sitting behind my computer (which is a lot).

What it does:

- Opening a new tab with the '+' button or keyboard shortcut will not work.

- Opening a link in a new tab will open the link in the current tab instead.

I have tested this on Chrome and Brave.

Writing browser extensions is not my forte; I made this using Claude Code. It will contain bugs, so feel free to submit a PR at https://github.com/MathieuBordere/onetabman

Try it out and let me know what you think!

1

Rclone UI – GUI Storage Manager #

github.com
0 comments · 2:30 PM · View on HN
I became a full-on data hoarder recently, but even before that I was juggling multiple buckets for backups, stock assets, etc.

I settled on rclone a long time ago, but I always forgot some flag, and while I do love a CLI for some endeavors, for this one I felt a UI would make me move faster (if done right).

Some of my favorite features:

- Remote Defaults: set default flags for a remote; they are applied as initial flags to every operation where that remote is selected, like a pre-fill

- Templates: a group of flags for an operation (e.g. Move) that you can apply regardless of remote; useful when you run the same operation more than once and don't want to remember all the flags

- Cron Jobs: templates that run periodically

- Multiple Configs: switch between different rclone config files

- Integrated Docs: hover over any flag to see what it does, without leaving the app
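The way Remote Defaults and Templates layer onto an operation could be sketched like this. This is a hypothetical illustration of the pre-fill idea, not the app's actual code; the flag values are just examples.

```python
def build_rclone_cmd(op, src, dst, remote_defaults, template_flags, extra_flags=None):
    """Compose an rclone invocation: remote defaults act as a pre-fill,
    template flags layer on top, and one-off flags win last."""
    flags = {}
    flags.update(remote_defaults)
    flags.update(template_flags)
    flags.update(extra_flags or {})
    cmd = ["rclone", op, src, dst]
    for name, value in sorted(flags.items()):
        # boolean flags have no value; others use --flag=value form
        cmd.append(name if value is True else f"{name}={value}")
    return cmd
```

Later layers override earlier ones, so a template can tighten or relax whatever a remote's defaults pre-filled.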

It has seen decent usage in recent months, both by companies and random dudes like me, and it has also been starred by NCW, the creator of rclone, which is an honor.

Here’s the link: https://github.com/rclone-ui/rclone-ui

Happy to share it here with everyone and take your feedback!