Show HN for December 1, 2025
35 items

An AI zettelkasten that extracts ideas from articles, videos, and PDFs #
Demo video: https://youtu.be/W7ejMqZ6EUQ
Repo: https://github.com/schoblaska/jargon
You can paste an article, PDF link, or YouTube video to parse, or ask questions directly and it'll find its own content. Sources get summarized, broken into insight cards, and embedded for semantic search. Similar ideas automatically cluster together. Each insight can spawn research threads - questions that trigger web searches to pull in related content, which flows through the same pipeline.
You can explore the graph of linked ideas directly, or ask questions and it'll RAG over your whole library plus fresh web results.
Jargon uses Rails + Hotwire with Falcon for async processing, pgvector for embeddings, Exa for neural web search, crawl4ai as a fallback scraper, and pdftotext for academic papers.
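The "embedded for semantic search" step can be illustrated with a toy nearest-neighbor lookup. This is a hedged sketch, not Jargon's actual code: the card texts and vectors below are invented, and real embeddings would come from an embedding model, with pgvector doing the ranking in SQL (roughly `ORDER BY embedding <=> $1 LIMIT k` for cosine distance).

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" for insight cards (real ones come from an embedding model).
cards = {
    "spaced repetition aids memory": [0.9, 0.1, 0.0],
    "flashcards improve recall":     [0.8, 0.2, 0.1],
    "GDP grew 2% last quarter":      [0.0, 0.1, 0.9],
}

def nearest(query_vec, k=2):
    # Rank cards by similarity to the query; clustering "similar ideas"
    # is the same operation applied card-to-card with a threshold.
    ranked = sorted(cards, key=lambda c: cosine_sim(query_vec, cards[c]),
                    reverse=True)
    return ranked[:k]

print(nearest([0.85, 0.15, 0.05]))
```

The same similarity score explains both features in the post: top-k retrieval powers the RAG question answering, and pairwise similarity above a threshold powers the automatic clustering.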
RFC Hub #
- Authors would change a spec after I started writing code
- It's hard to find what proposals would benefit from my review
- It's hard to find the right person to review my proposals
- It's not always obvious if a proposal has reached consensus (e.g. buried comments)
- I'm not notified if a proposal I approved is now ready to be worked on
And that's just scratching the surface. The most popular solutions (like Notion or Google Drive + Docs) mostly lack semantics. For example, it's easy for a human to see a table in a document where rows represent reviewers and a checkbox represents review acceptance, but it's hard to formally extract that meaning and prevent a document from being published when the criteria aren't met.
RFC Hub aims to solve these issues by building an easy-to-use interface around all the metadata associated with technical proposals, instead of burying it textually within the document itself.
The project is still under heavy development; I work on it most nights and weekends. The next big features I'm planning are proposal templates and the ability to refer to documents as something other than RFCs (Requests for Comments). E.g., a company might have a UIRFC for GUI work (User Interface RFC), a DBADR (Database Architecture Decision Record), etc. And while there's a built-in notification system, I'm still working on a Slack integration. Auth works by sending tokens via email, but of course RFC Hub needs Google auth.
Please let me know what you think!
Furnace – the ultimate chiptune music tracker #
I am still learning ImGui, and this is a masterpiece in my opinion.
Flowctl – Open-source self-service workflow automation platform #
This initial release includes:

- SSO with OIDC and RBAC
- Execution on remote nodes via SSH (fully agentless)
- Approvals
- Cron-based scheduling
- Flow editor UI
- Encrypted credentials and secrets store
- Docker and Script executors
- Namespaces
I built this because I needed a simple tool to manage my homelab while traveling, something that acts as a UI for scripts. At work, I was also looking for tools to turn repetitive ops/infra tasks into self-service offerings. I tried tools like Backstage and Rundeck, but they were either too complex, or the OSS versions lacked important features.
Flowctl can simply be described as a pipeline (like CI/CD systems) that people can trigger on-demand with custom inputs.
Would love to hear how you might use something like this!
Demo - https://demo.flowctl.net
Homepage - https://flowctl.net
GitHits – Code example engine for AI agents and devs (Private Beta) #
A while ago, I realized I kept giving the same advice to teammates and friends when they ran into a programming issue they couldn't easily solve: go to GitHub and look at how others solved it.
There is a huge pool of underused example material across open source. Most problems developers face are not that novel. With enough digging, someone has already solved the same issue in code or at least posted a workaround to an issue or discussion thread.
The trouble is that GitHub search is limited and works only when you already know the right keywords. You also need the time and patience to go through and read all the results, connect information across files, repositories, issues, discussions, and other metadata, and then turn that into a working solution. The same limitations apply to Stack Overflow and other search tools.
LLMs changed a lot, but they did not change this. They do not perform equally well across all programming languages, and their training data is always stale. They cannot reliably show how to combine multiple libraries in the way real projects do. For these and many other cases, they need a real, canonical code example rather than an outdated piece of documentation written for humans.
That is why I started building GitHits. It is designed to handle the work that humans and AI coding agents struggle with: finding real solutions in real repositories and connecting the dots across the open source ecosystem.
GitHits searches millions of open-source repositories at the code level, finds real code and surrounding metadata that match the intent of your blocker, and distills the patterns it finds into one example.
The initial product is in private beta, with MCP support to connect GitHits to your favorite coding agent IDE or CLI.
What makes it different from Context7 and other generic documentation search tools:
- It is built around unblocking, not general search
- It does not require manual indexing jobs
- It works for humans through the web UI and for agents through the MCP
- It clusters similar samples across repositories so you can see the common path real engineers took
- It ranks the sources using multiple signals for higher quality: the selected sources might be, for example, a combination of code files, issues, and docs
- It generates one token-efficient code example based on real sources
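The multi-signal ranking idea from the list above can be sketched generically. This is an invented illustration, not GitHits' actual scoring: the signal names (`match`, `stars_norm`, `recency`) and weights are assumptions, chosen only to show how several normalized signals combine into one rank.

```python
# Hypothetical multi-signal ranker: each candidate source carries several
# normalized signals (0..1), and a weighted sum orders them. The signal
# names and weights here are invented for illustration.
def score(source, weights):
    return sum(weights[k] * source.get(k, 0.0) for k in weights)

sources = [
    {"name": "repo_a/utils.py", "match": 0.9, "stars_norm": 0.2, "recency": 0.8},
    {"name": "repo_b/issue#12", "match": 0.7, "stars_norm": 0.9, "recency": 0.9},
]
weights = {"match": 0.6, "stars_norm": 0.2, "recency": 0.2}

ranked = sorted(sources, key=lambda s: score(s, weights), reverse=True)
print(ranked[0]["name"])
```

The interesting design question is exactly the one the post hints at: a source with a weaker keyword match but strong corroborating signals (popular repo, recent activity) can outrank a stale exact match.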
It is not perfect yet. Right now, GitHits supports only Python, JS, TS, C, C++, and Rust. More languages and deeper coverage are coming, and I would appreciate early feedback while the beta is still taking shape. If you have ever lost hours stuck on a blocker you knew someone else had solved already, I would love to hear what you think.
Online AI Image Quality Enhancer for Free #
CurioQuest – A simple web trivia/fun facts game #
I'd love to get feedback from actual web game enthusiasts before adding the next batch of categories and questions!
This is not a commercial product; it's a labor of love that's still in development. It doesn't have all the planned categories yet, but there's reasonable content: a little over 2,600 questions, 7 categories, 4 difficulty levels per category, and 2 languages (English and Portuguese). It's a PWA, so you can "install" it on your phone or desktop.
Cm-colors – I got tired of manually fixing WCAG contrast, so I made this #
Can you spot AI-generated content? (spoiler: probably not) #
This isn't about testing your English degree; it's more about showing how well AI can now forge canonical cultural references. We've hit a point where the fakes are genuinely convincing. Where AI still slips up:

- Over-explanation (real Shakespeare is more economical)
- Generic metaphors vs. specific imagery
- Too polished (humans are messier)
But these tells are fading fast! Built with React, ~50 questions (for now). The images are tough if you haven't seen them before. Curious what scores people get.
Photo app that does just one thing – no stories/reels/algorithm #
Downloadable Extensions for Postgres.app #
We often get requests to add specific extensions, but we can't include everything. So we've added a way to download additional extensions!
For now we have just a handful of extensions available for download. Is there anything else we're missing?
Generate a privacy policy for your app with one click in VS Code #
Cut multi-turn AI agent cost/latency by ~80–90% with one small change #
Two physics-based programming languages (WPE/TME and Crystalline) #
*WPE/TME* - A geometric calculus language for structural and temporal reasoning. Think: mathematical notation for encoding semantic relationships. Four parameters (domain, shell, phase, curvature) let you explicitly represent how components couple, influence each other hierarchically, and evolve over time.
*Crystalline* - A code synthesis language that generates provably optimal code through physics-guided evolution. Not template filling. It discovers novel optimizations (async I/O, streaming, parallelization, loop fusion) through energy minimization, achieving 3-4× performance improvement.
Both languages share the same geometric foundation from superconductor physics but serve completely different purposes. WPE/TME is for semantic reasoning (great for LLM scaffolding). Crystalline is for generating high-performance code.
Key differences from existing approaches:

- Deterministic (same input always produces the same output)
- Explainable (energy equations show WHY decisions were made)
- Novel code generation (genuinely discovers optimizations)
- Mathematical guarantees on performance
Crystalline has a Python implementation. WPE/TME has a Python reference implementation, but it's really a notation system (like how LaTeX is a language for typesetting math).
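The "energy minimization" framing can be illustrated generically. To be clear, this is not Crystalline's implementation: the candidate variants and their energy values below are invented, and the real system evolves candidates rather than enumerating a fixed set. The sketch only shows the core claim — a deterministic argmin over an explicit energy function, so the numbers themselves explain why a variant was chosen.

```python
# Hedged sketch: treat each candidate program variant as a state with an
# "energy" (e.g., an estimated cost/runtime) and select the minimum.
# Variant names and energies are invented for illustration.
candidates = {
    "naive_loop":        {"energy": 10.0},
    "loop_fusion":       {"energy": 6.5},
    "parallel_loop":     {"energy": 3.2},
    "parallel_streamed": {"energy": 2.8},
}

def minimize(candidates):
    # Deterministic: the same input always yields the same chosen variant,
    # and the energy table shows *why* it won.
    return min(candidates, key=lambda name: candidates[name]["energy"])

print(minimize(candidates))  # parallel_streamed
```

Even this toy version exhibits two of the listed properties (determinism and explainability); the hard part the project claims to solve is generating good candidates and an energy function that tracks real performance.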
GitHub: [will add link on launch day]
Papers: [will add ResearchGate links - 3 papers explaining theory]
I'd love feedback on:

1. The language design - does geometric encoding make sense?
2. For Crystalline: what benchmarks would convince you the synthesis works?
3. For WPE/TME: would explicit structure help your AI reasoning tasks?
Happy to answer questions about the physics, the languages, or the implementations!
ReferralLoop – Waitlist platform with viral referral mechanics #
I benchmarked read latency of AWS S3, S3Express, EBS and Instance store #
Superset – Run 10 parallel coding agents on your machine #
Superset aims to be a superset of all the best AI coding tools. We want to support and stay compatible with whatever CLI agents you already use - improving your workflow instead of replacing it.
How it works:

- One-click Git worktree creation with automatic environment setup
- Agents and terminal tabs are isolated per worktree, preventing conflicts
- Push notifications when agents are done or need your input
This lets you, for example, have Codex writing end-to-end tests in one worktree while Claude Code refactors a different module — no waiting, no lost context.
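Under the hood, the isolation comes from Git's built-in worktree feature: each agent gets its own checkout on its own branch, sharing one object store. A minimal sketch of what "one-click worktree creation" amounts to (using a throwaway repo so the commands are self-contained; Superset additionally runs environment setup per worktree):

```shell
set -e
# Throwaway repo standing in for your project:
REPO=$(mktemp -d)
git -C "$REPO" init -q
git -C "$REPO" -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m init

# Create an isolated worktree on a new branch for one agent.
# A second agent would get its own worktree/branch the same way,
# so parallel edits never collide.
git -C "$REPO" worktree add "$REPO-feature-x" -b feature-x
echo "worktree ready"
```

Because worktrees share the repository's object database, creating one is nearly free compared to a full clone, which is what makes spinning up ten of them practical.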
What’s next: We think there’s a big tooling + UX gap for orchestrating multiple agents. We’re experimenting with:

- GitHub-style diff viewer for quick in-app code review
- Merge agent to automatically generate a PR from a worktree
- Create and sync worktrees in cloud VMs for mobile/web access
- Automatic context passing between agents using a top-level agent (e.g. Codex plan -> Claude Code implementation -> Codex review)
We’ve been using Superset to build Superset, and it’s made our coding 2-3x faster. We’d love your feedback, feature requests, and workflows to support :)
My pushback against ANPR carparks in the UK #
Scrappy Free AI Code Assistant #
I'm sure my dev velocity will slow down now, but I'll keep at it. Please clone, fork, hack away!
I built a SaaS that does nothing (for $2.99) #
The loading screen shows messages like "Installing good vibes..." and "Calibrating happiness..."
For $2.99 you get the "Premium Reality Adjustment Service" - which does exactly the same thing as the free version.
For $4.99/month, we automatically make everything OK every Friday at 1pm.
Built with Node.js, Stripe, available in 5 languages.
It's the most honest product I've ever shipped - it promises nothing and delivers exactly that.

AI Agent for YC Startup School Content #
What it does: You can ask it complex, specific questions about anything covered in the curriculum, and it will give you a summarized, instant answer based only on the Startup School material.

Example questions: "What is Paul Graham's advice on picking co-founders?" or "Summarize the key takeaways on product-market fit metrics."

The goal: We wanted to create a simple, reliable shortcut for accessing the knowledge in one place.
Important notes:

Not official: This is not an official tool or service of Y Combinator or Startup School. It’s a side project built by a couple of founders (that's us!).

It's a "fun" project: It's still a work in progress and a fun experiment in AI knowledge retrieval. If you spot any weird answers, hallucinations, or bugs, please let us know in the comments!

We hope this will be useful. Keep building!
Open-Source AI CMS Editor for Magento/Adobe Commerce #
Here's a demo of what I built: https://www.youtube.com/watch?v=LcudrwsT_gk
Editor code: https://github.com/graycoreio/daffodil/tree/develop/libs/con...
I open sourced all of the code that I wrote (MIT License) and it comes in two pieces:
A pair of Angular components (and associated types/supporting infrastructure) called the `DaffAiEditorComponent` and `DaffContentSchemaRenderer` that allow you to drop in page schema and edit/visualize it. It can take a schema and produce a full page. This can be used as the foundation for building AI-driven content schema editors for any platform. Currently, the editor can only be imported if you build the @daffodil/content package locally (I’m working on releasing this shortly!).
You can find the editor code here: https://github.com/graycoreio/daffodil/tree/develop/libs/con...
You can find the frontend render here: https://github.com/graycoreio/daffodil/tree/develop/libs/con...
Magento CMS Plugin
A Magento/MageOS module that embeds the editor in the CMS, calls OpenAI for prompt-based schema generation, and exposes the resulting schema via GraphQL so Daffodil storefronts (or any headless frontend) can render it.
If you have a Magento store, you can install it with:
```
composer require graycore/magento2-cms-ai-builder
```
Repo: https://github.com/graycoreio/daffodil/tree/develop/plugins/...
I think the thing I’m most proud of is how I arrived at patch generation. My early attempts at having the model regenerate the full schema on each prompt became woefully slow within just a few conversation loops, so reducing the output tokens was a big win for UX and latency. Beyond performance, the full-schema approach also caused the model to subtly and randomly change schema in unrelated parts of the page, which was less than stellar.
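The patch-generation idea can be sketched in a few lines: instead of asking the model to emit the whole page schema every turn, ask for a small list of operations and apply them locally. The operation format below is invented for illustration (the plugin's real format may differ; JSON Patch, RFC 6902, is an established alternative shape for this).

```python
# Hedged sketch of "patch, don't regenerate": apply small targeted ops to a
# page schema, so untouched parts of the page cannot drift between turns.
def apply_patch(schema, ops):
    for op in ops:
        if op["op"] == "set":
            node = schema
            *path, last = op["path"]
            for key in path:
                node = node[key]   # walk to the parent of the target
            node[last] = op["value"]
    return schema

page = {"hero": {"title": "Welcome", "cta": "Buy now"}}
ops = [{"op": "set", "path": ["hero", "title"], "value": "Summer Sale"}]
apply_patch(page, ops)
print(page["hero"]["title"])  # Summer Sale
```

Besides emitting far fewer tokens, this makes the model's edits auditable: every change is an explicit op, so nothing elsewhere in the schema can change at random.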
There’s still a ton to do (I need to document all of the things and I need to make examples of rendering frontend apps with the admin content), but this was a huge milestone for me.
I plan to add streaming support to the Magento plugin along with the editor. I also want to spend some time making the extension points for "adding your own components" much simpler; it's a bit clunky today.
We Built a Small LLM Comparison Page and Accidentally a Platform #
So we built Fallom (homage to Asimov), a platform where you can compare how multiple models perform on your own evals or production data. You can easily see cost and performance differences and know if it’s worth switching models.
Would love feedback from anyone who’s built internal model testing pipelines. We learned a lot the hard way and are still learning.
AWAS – An open standard for AI-readable web actions #
Currently, AI agents and AI browsers essentially browse websites by mimicking a human: clicking around, entering data into search fields, etc. This complicates things and wastes computational resources, and on many websites it doesn’t even work that well. This is an effort to standardize AI agent access in a way that can be implemented rapidly and without disrupting user-driven browsing.
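To make the idea concrete, here is a purely hypothetical sketch of what an AI-readable action manifest could look like — the actual AWAS spec lives in the repo and may differ entirely; every field name below is an assumption for illustration.

```python
import json

# Hypothetical manifest a site might publish at a well-known URL so agents
# can discover actions instead of driving the UI. Field names are invented.
manifest = {
    "version": "0.1",
    "actions": [
        {
            "name": "search_products",
            "method": "GET",
            "endpoint": "/api/search",
            "params": {"q": {"type": "string", "required": True}},
        }
    ],
}

# An agent would fetch and parse this, then call the endpoint directly
# instead of typing into the site's search box like a human.
action = next(a for a in manifest["actions"]
              if a["name"] == "search_products")
print(action["method"], action["endpoint"])
```

The payoff is the one the post describes: one structured request replaces a whole simulated browsing session, and the site's human-facing UI is untouched.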
I’m looking for a few developers interested in AI agents, APIs, or web standards to help refine the spec, add examples, and test it on real sites.
Repo: https://github.com/TamTunnel/AWAS
I’d really appreciate feedback, issues, or small PRs from anyone building AI tools or modern web backends.