Show HN for April 13, 2026
34 posts

Ithihāsas – a character explorer for Hindu epics, built in a few hours #
I’ve always found it hard to explore the Mahābhārata and Rāmāyaṇa online. Most content is either long-form or scattered, and understanding a character like Karna or Bhishma usually means opening multiple tabs.
I built https://www.ithihasas.in/ to solve that. It is a simple character explorer that lets you navigate the epics through people and their relationships instead of reading everything linearly.
This was also an experiment with Claude CLI. I was able to put together the first version in a couple of hours. It helped a lot with generating structured content and speeding up development, but UX and data consistency still needed manual work.
Would love feedback on the UX and whether this way of exploring mythology works for you.
CodeBurn – Analyze Claude Code token usage by task #
Tools like ccusage give a cost breakdown per model and per day, but I wanted to understand usage at the task level.
CodeBurn reads the JSONL session transcripts that Claude Code stores locally (~/.claude/projects/) and classifies each turn into 13 categories based on tool usage patterns (no LLM calls involved).
One surprising result: about 56% of my spend was on conversation turns with no tool usage. Actual coding (edits/writes) was only ~21%.
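The classification step could look roughly like this. This is a simplified sketch, not CodeBurn's code: I'm assuming a JSONL shape where each line carries a `message` with a `content` list that may contain `tool_use` blocks, and I collapse the 13 categories down to four for brevity.

```python
import json

# Classify each transcript turn by its tool usage (no LLM calls).
# Assumed, simplified JSONL shape: one message per line, with an optional
# "content" list containing {"type": "tool_use", "name": ...} blocks.
def classify_turn(message: dict) -> str:
    blocks = message.get("content", [])
    tools = {b["name"] for b in blocks
             if isinstance(b, dict) and b.get("type") == "tool_use"}
    if not tools:
        return "conversation"   # turn with no tool usage at all
    if tools & {"Edit", "Write"}:
        return "coding"         # actual file edits/writes
    if tools & {"Read", "Grep", "Glob"}:
        return "exploration"    # reading/searching the codebase
    return "other-tools"

def classify_transcript(path: str) -> dict:
    counts: dict = {}
    with open(path) as f:
        for line in f:
            kind = classify_turn(json.loads(line).get("message", {}))
            counts[kind] = counts.get(kind, 0) + 1
    return counts
```

Summing cost per category instead of counting turns is what surfaces results like the 56%-conversation figure above.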
The interface is an interactive terminal UI built with Ink (React for terminals), with gradient bar charts, responsive panels, and keyboard navigation. There’s also a SwiftBar menu bar integration for macOS.
Happy to hear feedback or ideas.
Continual Learning with .md #
For retrieval, there is a semantic filesystem that makes it easy for LLMs to search using shell commands.
It is currently a scrappy v1, but it works better than anything I have tried.
Curious for any feedback!
Equirect – a Rust VR video player #
I get all the concerns, and I review all AI code at work and most AI code for personal projects. This one in particular, though, not so much. I get that that's frowned on, but this is a small, limited-scope, personal project. Not that I didn't pay attention: Claude did do some things in strange ways, and I asked it to fix them quite often. But, conversely, I have zero Rust experience, zero OpenXR experience, zero wgpu experience, and next to zero relevant Windows experience.
I'm guessing I spent about ~30 hours in total prompting Claude for each step. I started with "make a windows app that opens a window". Then I had it add wgpu and draw hello triangles. Then I had it add OpenXR and draw those triangles in VR. That actually took it some time as it tried to figure out how to connect a wgpu texture to the surface being drawn in OpenXR. It figured it out though, far far faster than I would have. I'd have tried to find a working example or given up.
I then sat on that for about a month, finally got back to it this weekend, and zoomed through getting Claude to make it work. The only parts I did myself were some programmer-art icons.
I can post the prompts in the repo if anyone is interested, assuming I can find them.
Also, in the last two weeks I've resurrected an old project that had bit-rotted. Claude got it all up to date, fixed a bunch of bugs, and checked off a bunch of features I'd always wanted to add. I also had Claude write two libraries, a zip library and a RAR decompression library, as well as refactor an existing zip decompression library to use some modern features. It's been really fun! For those I read the code much more than I did for this one. Still, "what a time to be alive!"
Mcptube – Karpathy's LLM Wiki idea applied to YouTube videos #
v1 re-searched raw chunks from scratch on every query, so I rebuilt it.
v2 (mcptube-vision) follows Karpathy's LLM Wiki pattern. At ingest time, it extracts transcripts, detects scene changes with ffmpeg, describes key frames via a vision model, and writes structured wiki pages. Knowledge compounds across videos rather than being re-discovered. FTS5 + a two-stage agent (narrow then reason) for retrieval.
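The FTS5 stage can be pictured with a minimal sketch (illustrative only; mcptube's actual schema is in the repo): wiki pages written at ingest time go into an FTS5 table, and stage one of the two-stage agent narrows candidates lexically before the reasoning stage takes over.

```python
import sqlite3

# Toy sketch of FTS5-backed wiki retrieval (not mcptube's real schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE wiki USING fts5(title, body)")
con.executemany(
    "INSERT INTO wiki VALUES (?, ?)",
    [
        ("attention", "self-attention lets tokens attend to each other"),
        ("tokenizer", "byte-pair encoding splits text into subword tokens"),
    ],
)
# Stage 1 ("narrow"): lexical match with BM25 ordering. Stage 2 ("reason")
# would hand these hits to the model for ranking and synthesis.
rows = con.execute(
    "SELECT title FROM wiki WHERE wiki MATCH ? ORDER BY rank", ("attention",)
).fetchall()
```

Because knowledge lives in wiki pages rather than raw chunks, each new video's ingest extends the same index instead of rediscovering everything at query time.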
MCPTube works both as CLI (BYOK) and MCP server. I tested MCPTube with Claude Code, Claude Desktop, VS Code Copilot, Cursor, and others. Zero API key needed server-side.
Coming soon: I'm also building a SaaS platform that supports playlist ingestion, team wikis, etc. I'd like to share the early-access signup: https://0xchamin.github.io/mcptube/
Happy to discuss architecture tradeoffs — FTS5 vs vectors, file-based wiki vs DB, scene-change vs fixed-interval sampling. Give it a try via `pip install mcptube`. Also, please do star the repo if you enjoy my contribution (https://github.com/0xchamin/mcptube)
IceGate – Observability data lake engine #
Observability costs are out of control. Teams pay per GB ingested, get locked into proprietary formats, and can't use their own data outside the vendor's UI. We built the first observability data lake engine to fix this.
This is our first release under Apache 2.0. We're building in the open and looking for contributors and early adopters who want observability without the markup.
Star us on GitHub: https://github.com/icegatetech/icegate
More info on https://icegate.tech
Documentation: https://docs.icegate.tech
OQP – A verification protocol for AI agents #
It's MCP-compatible and defines four core endpoints:
- GET /capabilities — what can this agent verify?
- GET /context/workflows — what are the business rules for this workflow?
- POST /verification/execute — run a verification workflow
- POST /verification/assess-risk — what is the risk of this change?
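For illustration, here is a client-side sketch of those four endpoints. The payload shape is invented for this example; the real OQP schema lives in the repo.

```python
import json

# Hypothetical OQP client helper: map endpoint names to (method, path)
# and build the wire request. Payload fields are illustrative only.
ENDPOINTS = {
    "capabilities": ("GET", "/capabilities"),
    "workflows": ("GET", "/context/workflows"),
    "execute": ("POST", "/verification/execute"),
    "assess_risk": ("POST", "/verification/assess-risk"),
}

def build_request(endpoint, payload=None):
    method, path = ENDPOINTS[endpoint]
    body = json.dumps(payload) if method == "POST" and payload else None
    return method, path, body

method, path, body = build_request("execute", {"workflow": "checkout-refund"})
```

The GET endpoints let an orchestrator discover what an agent can verify; the POST endpoints actually run a verification or risk assessment for a proposed change.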
The analogy we keep coming back to: what OpenAPI did for REST APIs, OQP does for agentic software verification.
Early contributors include Philip Lew (XBOSoft) and Benjamin Young (W3C JSON-LD Working Group). Looking for feedback from engineers building on top of MCP, agent orchestration frameworks, or anyone who has felt the pain of "the agent shipped something wrong and we had no way to catch it."
Repo: github.com/OranproAi/open-qa-protocol
Bloomberg Terminal for LLM ops – free and open source #
LLM engineers are trading blind.
Which provider is degraded right now? What does this model actually cost when you factor in overhead, not just token price? If traffic shifts between providers, what happens to cost and latency? Is your stack dangerously concentrated on one provider?
These are operational questions every production LLM system has. Nobody's built the tooling for them until now, so most teams either fly blind or patch together status pages, spreadsheets, and gut feel.
We built the LLM Ops Toolkit to fix that:
1. Provider uptime monitor across 18+ LLM providers, live status in one view
2. Cost calculator that includes overhead, not just raw token pricing
3. Routing simulator to model cost and latency impact before you shift traffic
4. Model diversity audit to surface concentration risk before it becomes an incident
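As a sketch of what a routing simulator models: blended cost and latency are traffic-weighted averages over providers, so shifting weights immediately shows the trade-off. The figures below are made up, not real pricing.

```python
# Toy routing simulation (illustrative numbers, not real provider pricing):
# what happens to blended cost and latency when traffic shifts.
providers = {
    # $ per 1M tokens, p50 latency in ms
    "provider_a": {"cost_per_mtok": 3.00, "latency_ms": 420},
    "provider_b": {"cost_per_mtok": 2.25, "latency_ms": 650},
}

def blended(weights: dict) -> tuple:
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    cost = sum(w * providers[p]["cost_per_mtok"] for p, w in weights.items())
    latency = sum(w * providers[p]["latency_ms"] for p, w in weights.items())
    return round(cost, 4), round(latency, 1)

before = blended({"provider_a": 0.9, "provider_b": 0.1})
after = blended({"provider_a": 0.5, "provider_b": 0.5})  # shift 40% of traffic
print(before, after)  # -> (2.925, 443.0) (2.625, 535.0)
```

Even this toy version makes the core tension visible: the cheaper provider here buys a 10% cost reduction at the price of ~90 ms of p50 latency.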
Free, open-source, no signup. Dashboard is at tools.lamatic.ai
The routing simulator is the most experimental piece and has the roughest edges. Genuinely curious how others think about provider concentration risk.
We've been treating it as dependency risk in software but that framing may not hold at scale.
Also live on Product Hunt today: producthunt.com/products/lamatic-ai
Dbg – One CLI debugger for every language (AI-agent ready) #
I built dbg to give agents a real debugger experience. Because it is backend-based, even the few backends I've implemented so far (still at a basic level) let it support 15+ languages with one simple CLI (some work is still needed, but it's functional as is):
LLDB, Delve, PDB, JDB, node inspect, rdbg, phpdbg, GHCi, etc. Profilers too (perf, pprof, cProfile, Valgrind…)
I also added GPU profiling via `gdbg` (CUDA, PyTorch, Triton kernels). It auto-dispatches and shares the same unified interface. (Planning to bring those advanced concepts back to the main dbg).
Works with Claude & Codex (it probably works with others, but I didn't try them).
Quick start:

```
curl -sSf https://raw.githubusercontent.com/redknightlois/dbg/main/ins... | sh
dbg --init claude   # for claude
```
Then just say: “use dbg to debug the crash in src/foo.rs”
Docs: https://redknightlois.github.io/dbg/
GitHub (MIT Licensed): https://github.com/redknightlois/dbg
Would love feedback from anyone building agents. What languages or features are you missing most? Ping me at @federicolois on X or open issues.
Turn your favorite YouTube channels into a streaming experience #
The Infinite Tolkien – endless narration of Middle-earth #
If you've read Tolkien, you know the man could spend an entire page describing a single tree. My friends and I always joked about it. When I saw the concept of an infinite AI conversation, the connection was immediate: what if Tolkien never stopped describing things?
It's a satirical/art project, not a serious product. The voice cloning is decent but clearly not perfect — which honestly fits the spirit of it.
Source: https://github.com/Jarlakxen/infinite-tolkien

Bad Apple (Oscilloscope-Like) – one stroke per frame #
Lythonic – Compose Python functions into data-flow pipelines #
An async framework: mix sync and async Python functions, compose them into DAGs, run them, schedule them, and either persist data between steps or let it flow purely in memory.
GitHub: https://github.com/walnutgeek/lythonic
Docs: https://walnutgeek.github.io/lythonic/
PyPI: pip install lythonic
It is dataflow, so in theory you can compose it from pure functions only. Lythonic requires annotations on params and returns to wire outputs up to inputs. All data is saved in SQLite as JSON for now, which works fine up to a moderate amount of data.
You may use it as a task flow by keeping params and returns empty and maintaining all data outside of the flow.
But practically you may do well with some middle ground: flow just enough metadata through to make your function calls reproducible, and keep a system of record you can query reliably.
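To make the annotation-driven wiring concrete, here is a tiny hand-rolled sketch of the idea (this is not Lythonic's actual API; see the docs for that): each step's return value is stored under the function's name, and later steps receive it through a parameter with a matching name.

```python
import inspect

# Minimal illustration of annotation/name-based wiring (NOT the Lythonic
# API): a step's result is stored under the function's name, and any later
# step with a parameter of that name receives it automatically.
def run_pipeline(steps):
    state = {}
    for fn in steps:
        kwargs = {
            name: state[name]
            for name in inspect.signature(fn).parameters
            if name in state
        }
        state[fn.__name__] = fn(**kwargs)
    return state

def fetch() -> list:
    return [1, 2, 3]

def total(fetch: list) -> int:  # parameter "fetch" wires to fetch()'s output
    return sum(fetch)

state = run_pipeline([fetch, total])
print(state["total"])  # -> 6
```

A real engine adds the parts sketched away here: topological ordering of a true DAG, persistence of `state` between runs, and scheduling.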
Anyway I will stop rambling ... soon.
Python 3.11+, MIT License. Minimal dependencies: Pydantic, PyYAML, Croniter.
Prepping for v0.1 and looking for feedback; v0.0.14 is out. Claude generated reasonable docs (honestly, I could not have done better myself). I'm also working on a web UI and a practical end-to-end example app.
Thank you. -Sergey
Real-Time, Streaming SQL Queries on Flight Data #
Crafto – AI carousel post generator for LinkedIn and Instagram #
I am a solo builder and wanted to share a tool I’ve been working on: Crafto. It turns text, webpages, documents, and images into carousel posts for platforms like LinkedIn, Instagram, and X.
The goal is to turn plain text into engaging visuals to help marketers, educators, and influencers boost their content engagement, grow audience, and save hours of work.
This idea came from my previous work on using AI to summarize documents, meetings, blogs, and articles. While exploring that direction, I found that pure text summaries still feel a bit plain, so I decided to add visual elements. I first tried infographics, but found them hard to read on mobile, especially when they have dense text and complex graphics. What I ended up with is a carousel generator that combines text with simple layout, color, and images, trying to balance text density and visual appeal; I'd noticed that well-designed carousels on LinkedIn and Instagram easily grab my attention.
Compared to static templates in tools like Canva, I wanted to automate the text-to-carousel generation process so a user doesn't need to do all the manual work of picking templates, turning their long text into a short summary, and filling in the text boxes. While a user can still edit the result in Crafto, I hope such editing stays minimal, as the majority of the work is automated.
Crafto currently uses a mix of AI and curated templates. I also want to experiment with fully AI-generated carousels (without templates) to see how they compare in quality and user preference.
Would love feedback if this resonates with you, and happy to answer any questions! You can also email at [email protected].
Hitoku Draft – context aware local macOS assistant #
It's context-aware; it reads your screen, documents, and active app to understand what you're working on. You can ask about PDFs, reply to emails, create calendar events, use web search, all by voice.
It supports Gemma 4 and Qwen 3.5 for text generation, plus multiple STT backends (Parakeet, Whisper, Qwen3-ASR).
Examples:
- Gemma4 in action, https://www.youtube.com/watch?v=OgfI-3YjEVU
- query a pdf document, https://www.youtube.com/watch?v=ggaDhut7FnU
- reply to email, https://www.youtube.com/watch?v=QFnHXMBp1gA
- and the usual voice dictation (with optional polishing)
I currently use it a lot with Claude Code, Obsidian, and Apple Notes, or just to read papers.
Code: https://github.com/Saladino93/hitokudraft/tree/litert
Download of binary: https://hitoku.me/draft/ (free with code HITOKUHN2026)
I am looking for feedback. My goal is to do AI research while interfacing with clients, and I thought this was a nice little experiment I could use to iterate/fail quickly.
P.S. (if anyone has tips about this)
Current Gemma4 implementation (with small models) has some problems:
- it's easy for it to hallucinate on long contexts, so I had to reset it often. I tuned some parameters, but still need to find a sweet spot.
- Gemma4 with LiteRT is currently fast compared to the MLX implementation of Qwen3.5 (about 3x faster on my machine when dealing with images), but at the price of memory spikes. I believe this is because LiteRT's WebGPU backend can allocate significantly more GPU memory than the model weights alone (I saw 38GB of memory taken for the ~4GB E4B model!). I guess we need to wait for Google on this.
- App size: because there is no official Swift package from Google yet, I have to bundle the LiteRT dylibs, which adds ~98 MB over the previous MLX-only version (the app goes from ~50 MB to ~150 MB total).
If any of this bothers you: use Qwen 3.5 instead (pure MLX), or wait for the upstream fixes from Google :)
Otherwise, in the mid-term I plan to switch to a potentially slower but safer MLX version for Gemma4 (hopefully this weekend).
Kyomu - 13 puzzles from math and physics that map your cognitive style #
Aeolus – a library for unified access to air quality sensor networks #
Air quality data is now very widely available, but managing access to multiple networks is challenging when they all have different access requirements, APIs and data formats. Some great solutions exist (like openair and openAQ) but these are limited in the data they cover.
Integrating new APIs could be a full-time job, but it's something AI can do very well given a pattern. It sometimes involves working through some "interesting" problems - for example, the EEA has a csv endpoint that actually returns a .zip with mimetype "text/html"...
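The fix for endpoints like that is to sniff the payload rather than trust the declared mimetype. A sketch of just the sniffing (this is not Aeolus's code, and the download plumbing is omitted); ZIP archives always start with the magic bytes `PK\x03\x04`:

```python
import io
import zipfile

# Don't trust the server's Content-Type: sniff the payload instead.
def read_csv_payload(data: bytes) -> str:
    if data[:4] == b"PK\x03\x04":  # ZIP local-file-header magic
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            # assume the first archive member is the CSV of interest
            return zf.read(zf.namelist()[0]).decode("utf-8")
    return data.decode("utf-8")

# Simulate a "CSV endpoint" response that is actually a zip:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("measurements.csv", "station,pm25\nXX1,12.5\n")
print(read_csv_payload(buf.getvalue()).splitlines()[0])  # -> station,pm25
```

The same function handles the honest case too: a plain CSV body falls through to the final `decode`.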
Beyond data access, Aeolus has basic analytics (for descriptive and regulatory stats) and graphing, as well as quality-of-life improvements like caching.
This is really for me as I build out my company working on turning air quality data into actionable information, but it's open source and freely available to all under GPLv3+. Let me know if you find it useful!
Lint-AI by RooAGI, a Rust CLI for AI Doc Retrieval #
As AI systems produce more task notes, traces, reports, and decisions, the problem is no longer just storing the documents. The harder part is finding the right pieces of evidence when the same concept is described in multiple places, often with different terminology or different framing.
Lint-AI is our current retrieval layer for that problem.
What it does today:
- indexes large documentation corpora
- extracts lightweight entities and important terms
- supports hybrid retrieval using lexical, entity, term, and graph-aware scoring
- returns chunk-level evidence with --llm-context for downstream reviewer or LLM use
- exports doc, chunk, and entity graphs
Example:
```
./lint-ai /path/to/docs --llm-context "where docs describe the same concept differently" --result-count 8 --simplified
```
That command does not decide whether documents are in contradiction. It retrieves the most relevant chunks so a reviewer layer can compare them.
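As a toy illustration of the hybrid-scoring idea (purely illustrative, not Lint-AI's actual ranking code): each chunk gets a weighted sum of a lexical term-overlap score and an entity-overlap score, so a chunk can rank well even when its wording differs, as long as its entities match.

```python
# Toy hybrid scorer: weighted sum of lexical overlap and entity overlap.
# Weights and scoring are invented for illustration.
def hybrid_score(query_terms, query_entities, chunk, w_lex=0.7, w_ent=0.3):
    terms = set(chunk["text"].lower().split())
    lex = len(set(query_terms) & terms) / max(len(query_terms), 1)
    ent = len(set(query_entities) & set(chunk["entities"])) / max(len(query_entities), 1)
    return w_lex * lex + w_ent * ent

chunks = [
    {"text": "retry with exponential backoff", "entities": ["RetryPolicy"]},
    {"text": "the retry budget is shared",     "entities": ["RetryBudget"]},
]
scores = [hybrid_score(["retry", "backoff"], ["RetryPolicy"], c) for c in chunks]
best = max(range(len(chunks)), key=scores.__getitem__)
print(best)  # -> 0
```

Real systems replace the toy lexical score with BM25 and add term- and graph-aware signals, but the shape of the combination is the same.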
Repo: https://github.com/RooAGI/Lint-AI
We’d appreciate feedback on:
- retrieval/ranking design for documentation corpora
- how to evaluate evidence retrieval quality for alignment workflows
- what kinds of entity/relationship modeling would actually be useful here
Deconflict – Open-source WiFi planner with physics-based walls #
Asthi – Damn good asset tracker #
I've tried a few different products like Mint (before it was killed), Empower, Google Finance, etc., but each had something missing that left me wanting more. After years of frustration with each solution, I spent a few weeks building my own.
Some design choices:
- Intentionally requires manual entry; I don't trust tracker apps with read (and definitely not write) access to my financial accounts. Certain assets like precious metals can't use automated feeds anyway and need to be manual.
- For tax-advantaged accounts I try to find proxy tickers, since the holdings are custom to the company or brokerage.
- Real estate requires manual entry as well, but I'd love to auto-pull estimated prices.
I haven't spent much time on user experience since I built it for myself so would love feedback! Please send all thoughts and comments, I can iterate quickly and address any issues.
GDL – I built an AI-powered invention engine #
Claude Code skills for network engineering and homelabs #
I'm currently taking a lot of enterprise network engineering courses where my professor's course layout is very much "figure it out yourselves": go through old forums and guides, and ask AI to help explain information or protocols you don't understand. In the last course I took, I used a lot of the popular LLMs out there, and they genuinely sucked at anything related to network engineering. I would ask something and just receive incorrect, false, or completely unrelated responses over and over, to the point where it wasn't even speeding up my learning or labs; it would still take me hours to troubleshoot.

I've been using Claude Code a lot recently, and I made these skills for my friends and me. So far I've been trying them out on my old labs and coursework, and they give much better, more insightful outputs. I made some homelab skills just for fun too, because I'm trying to get into that area to expand my learning with my Raspberry Pi at home. Anyway, I'm posting here because if you find these useful and cool, I'd really love to hear your feedback!
Skills cover BGP troubleshooting, Cisco IOS patterns, interface health, VLAN segmentation, Pi-hole, and WireGuard. Will definitely add more depending on what kind of feedback I get.
Instructions to add these skills are located in the README
Thank you for reading!