Daily Show HN

Show HN for August 14, 2025

30 posts
357

I built a free alternative to Adobe Acrobat PDF viewer

github.com
95 comments · 3:34 PM · View on HN
I built EmbedPDF: an MIT-licensed, open-source PDF viewer that aims to match all of Adobe Acrobat’s paid features… for free.

Already working:

- Annotations (highlight, sticky notes, free text, ink)
- True redaction (content actually removed)
- Search, text selection, zoom, rotation
- Runs fully in the browser, no server needed
- Drop-in SDK for React, Vue, Preact, vanilla JS (rough sketch below)
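
If you just want a feel for what "drop-in" could mean, here is a rough embedding sketch; the package name, init call, and option names below are hypothetical placeholders rather than EmbedPDF's confirmed API, so check the repo for the real interface:

    // Hypothetical embedding sketch - names are placeholders, not the confirmed EmbedPDF API.
    import EmbedPDF from '@embedpdf/core'; // assumed package name

    const container = document.getElementById('pdf-viewer');
    if (container) {
      EmbedPDF.init({
        target: container,          // element the viewer mounts into
        src: '/docs/example.pdf',   // any same-origin or CORS-enabled PDF URL
      });
    }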

Why? Acrobat is heavy, closed, and pricey. I wanted something lightweight, hackable, and embeddable anywhere.

Demo: https://app.embedpdf.com/
Website: https://www.embedpdf.com/
GitHub: https://github.com/embedpdf/embed-pdf-viewer

Feedback, bug reports, and feature requests welcome!

288

OWhisper – Ollama for realtime speech-to-text

docs.hyprnote.com
75 comments · 3:47 PM · View on HN
Hello everyone. This is Yujong from the Hyprnote team (https://github.com/fastrepl/hyprnote).

We built OWhisper for 2 reasons: (Also outlined in https://docs.hyprnote.com/owhisper/what-is-this)

(1). While working with on-device, realtime speech-to-text, we found there was no existing tooling to download and run the models in a practical way.

(2). Also, we got frequent requests to provide a way to plug in custom STT endpoints to the Hyprnote desktop app, just like doing it with OpenAI-compatible LLM endpoints.

The (2) part is still kind of WIP, but we spent some time writing docs so you'll get a good idea of what it will look like if you skim through them.

For (1) - You can try it now. (https://docs.hyprnote.com/owhisper/cli/get-started)

  brew tap fastrepl/hyprnote && brew install owhisper
  owhisper pull whisper-cpp-base-q8-en
  owhisper run whisper-cpp-base-q8-en

If you're tired of Whisper, we also support Moonshine :) Give it a shot (owhisper pull moonshine-onnx-base-q8)

We're here and looking forward to your comments!

165

Yet Another Memory System for LLMs

github.com
46 comments · 3:34 AM · View on HN
Built this for my LLM workflows - needed searchable, persistent memory that wouldn't blow up storage costs. I also wanted to use it locally for my research. It's a content-addressed storage system with block-level deduplication (saves 30-40% on typical codebases). I have integrated the CLI tool into most of my workflows in Zed, Claude Code, and Cursor, and I provide the prompt I'm currently using in the repo.
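
For readers unfamiliar with the technique, here is a minimal TypeScript sketch of content-addressed, block-level deduplication - an illustration of the idea, not the project's C++ implementation. Data is split into fixed-size blocks, each block is stored once under its SHA-256 hash, and a blob is just a list of block hashes:

    import { createHash } from 'node:crypto';

    const BLOCK_SIZE = 4096; // fixed-size blocks; real systems often use content-defined chunking

    // blockStore maps hash -> block bytes, so identical blocks are stored only once.
    const blockStore = new Map<string, Buffer>();

    // Split a blob into blocks, store each under its SHA-256 hash, return the hash list.
    function putBlob(data: Buffer): string[] {
      const hashes: string[] = [];
      for (let off = 0; off < data.length; off += BLOCK_SIZE) {
        const block = data.subarray(off, off + BLOCK_SIZE);
        const hash = createHash('sha256').update(block).digest('hex');
        if (!blockStore.has(hash)) blockStore.set(hash, Buffer.from(block)); // dedup happens here
        hashes.push(hash);
      }
      return hashes;
    }

    // Reassemble a blob from its block hashes.
    function getBlob(hashes: string[]): Buffer {
      return Buffer.concat(hashes.map((h) => blockStore.get(h)!));
    }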

The project is in C++ and the build system is rough around the edges but is tested on macOS and Ubuntu 24.04.

36

Modelence – Supabase for MongoDB

github.com
14 comments · 4:13 PM · View on HN
Hi all, Aram and Eduard here - authors of Modelence (https://github.com/modelence/modelence), an all-in-one backend platform for teams that love TypeScript + MongoDB. Think Supabase, but for MongoDB: auth, cron jobs, email, and monitoring, without writing glue code before you can ship.

As Karpathy (and many of us) noted, getting from prototype to production is mostly painful integration work. The pieces exist, but stitching them together reliably is the hard part: https://x.com/karpathy/status/1905051558783418370. His YC AI Startup School talk touches on this as well - https://www.youtube.com/watch?feature=shared&t=1940&v=LCEmiR...

We intend to fill those gaps! What you get out of the box:

- Authentication / user management

- Database

- Email integration (3rd party, but things like user verification emails work out of the box)

- AI integration

- Cron jobs

- Monitoring / Telemetry

- Configs & secrets

- Analytics (coming soon)

- File uploads (coming soon)

How it runs: A Node.js backend with MongoDB. It's frontend-agnostic, so you can use our minimal Vite + React starter or drop Modelence behind an existing Next.js (or any) frontend.
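
For a sense of the glue code a platform like this replaces, here is a minimal hand-rolled sketch of one such piece - a nightly cron job against MongoDB - using the official mongodb driver and node-cron. This is generic plumbing for illustration, not Modelence's API:

    import cron from 'node-cron';
    import { MongoClient } from 'mongodb';

    // The kind of plumbing every project rewrites by hand: connect, schedule, query, log.
    const client = new MongoClient(process.env.MONGODB_URI ?? 'mongodb://localhost:27017');

    async function main() {
      await client.connect();
      const users = client.db('app').collection('users');

      // Nightly job: flag accounts that never verified their email within 7 days.
      cron.schedule('0 3 * * *', async () => {
        const cutoff = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
        const result = await users.updateMany(
          { emailVerified: false, createdAt: { $lt: cutoff } },
          { $set: { status: 'stale' } },
        );
        console.log(`flagged ${result.modifiedCount} stale accounts`);
      });
    }

    main().catch(console.error);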

We're also building a managed cloud, similar to what Vercel is for Next.js, except Modelence focuses on the backend instead of the frontend (Vercel is great for content sites like landing pages, blogs, etc, but things like persistent connections and complex backend logic outgrow it quickly). You can find a quick demo here: https://www.youtube.com/watch?v=S4f22FyPpI8

We're looking for early users (especially TS teams on MongoDB). Tell us what's missing, what's confusing, and what you'd want before trusting this in prod. Happy to answer anything!

36

Evaluating LLMs on creative writing via reader usage, not benchmarks

narrator.sh
12 comments · 5:33 PM · View on HN
Hey HN! I'd love to get some people to mess around with a little side project I built to teach myself DSPy! I've been a big fan of reading fiction + webnovels for a while now, and have always been curious about two things: how can LLMs iteratively learn to write better based on reader feedback, and which LLMs are actually best at creative writing (research benchmarks are cool, but don't necessarily translate to real-world usage).

That's exactly why I built narrator.sh! The platform takes in a user input for a novel idea, then generates serialized fiction chapter-by-chapter by using DSPy to optimize the writing based on real reader feedback. I'm using CoT and parallel modules to break down the writing task, refine modules + LLM-as-a-judge for reward functions, and the SIMBA optimizer to recompile user ratings from previous chapters to improve subsequent ones.

Instead of synthetic benchmarks, I track real reader metrics: time spent reading, ratings, bookmarks, comments, and return visits. This creates a leaderboard of which models actually write engaging fiction that people want to finish.
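
As a rough illustration of how such reader signals could roll up into a leaderboard score (the metric names and weights below are hypothetical, not narrator.sh's actual formula):

    // Hypothetical engagement score for one chapter; weights are illustrative only.
    interface ChapterMetrics {
      avgRating: number;        // 1-5 stars from readers
      completionRate: number;   // fraction of readers who finished the chapter (0-1)
      returnRate: number;       // fraction who came back for the next chapter (0-1)
      bookmarksPerReader: number;
    }

    function chapterScore(m: ChapterMetrics): number {
      return (
        0.4 * (m.avgRating / 5) +
        0.3 * m.completionRate +
        0.2 * m.returnRate +
        0.1 * Math.min(m.bookmarksPerReader, 1)
      );
    }

    // A model's leaderboard position is then the mean score over all chapters it wrote.
    function modelScore(chapters: ChapterMetrics[]): number {
      return chapters.reduce((sum, m) => sum + chapterScore(m), 0) / chapters.length;
    }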

Right now the closest evals for creative writing LLMs come from the author perspective (OpenRouter's usage data for tools like Novelcrafter). But ultimately readers decide what's good, not authors.

You can try it at https://narrator.sh. Here's the current leaderboard: https://narrator.sh/llm-leaderboard (it's a bit bare right now b/c there's not that many users haha)

(Fair warning: there's some adult content since I posted on Reddit for beta testers and people got creative with prompts. I'm working on diversifying the content!)

36

MCP Security – Don't Blind Trust, Verify

github.com
27 comments · 8:01 PM · View on HN
Hi HN!

We kept seeing devs get pwned through MCP tools in ways that security scanners completely miss. So we built an open-source analyzer to catch these attacks. It's the first open-source release from the Mighty team.

The problem: At Defcon, we saw MCP exploits with a 100% success rate against Claude and Llama. Three attack patterns:

- Hidden Unicode in "error messages" - paste a colleague's error into Claude, and your SSH keys get exfiltrated
- Trusted tool updates - that database tool you've used for months? Last week's update added credential theft
- Tool redefinition - a malicious tool redefines "deploy to prod" to run the attacker's script

Traditional scanners (CodeQL, SonarQube) catch <15% of these. They're looking for SQLi, not prompt injections hidden in tool descriptions.
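
As one concrete example of the kind of check an MCP-aware analyzer can run (a generic sketch, not the mighty-security source), here is a TypeScript scan for invisible and bidirectional-control Unicode characters hiding instructions inside a tool description or pasted error message:

    // Code points commonly abused to hide instructions from human reviewers:
    // zero-width characters, the BOM, and bidi control characters.
    const SUSPICIOUS = /[\u200B-\u200F\u2060\uFEFF\u202A-\u202E\u2066-\u2069]/g;

    interface Finding {
      index: number;
      codePoint: string;
    }

    // Scan a tool description or pasted error message for hidden Unicode.
    function findHiddenUnicode(text: string): Finding[] {
      const findings: Finding[] = [];
      for (const match of text.matchAll(SUSPICIOUS)) {
        findings.push({
          index: match.index ?? 0,
          codePoint: 'U+' + match[0].codePointAt(0)!.toString(16).toUpperCase().padStart(4, '0'),
        });
      }
      return findings;
    }

    // Example: a "harmless" error string carrying a zero-width space.
    console.log(findHiddenUnicode('Connection failed\u200B: retry with --token'));
    // -> [ { index: 17, codePoint: 'U+200B' } ]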

What we built:

    git clone https://github.com/NineSunsInc/mighty-security
    python analyzers/comprehensive_mcp_analyzer.py /path/to/your/mcp/tool

Scans for prompt injection, credential exfil, suspicious updates, tool shadowing. Runtime wrapper adds <10ms overhead. Fully local, no telemetry.

Why this matters: 43% of MCP tools have command injection vulns. GitHub's own MCP server was exploitable. We found Fortune 500s running database-connected MCP tools that hadn't been audited since installation. We went from paranoid code review to "AI said it works" in 18 months. The magic is real, but so are the vulnerabilities.

Demo: https://www.loom.com/share/e830c56d39254a788776358c5b03fdc3

GitHub: https://github.com/NineSunsInc/mighty-security

Would love feedback - what MCP security issues have you seen?

29

Happy Coder – End-to-End Encrypted Mobile Client for Claude Code

github.com
8 comments · 6:41 PM · View on HN
Hey all! A few weeks ago we realized AI models are now so good you don't need to babysit them anymore. You can kick off a coding task at lunch and Claude Code just... works. But then you're stuck at your desk steering it. We were joking around - wouldn't it be cool to grab coffee and keep chatting with Claude from your phone? Next thing you know, 4 of us are hacking on weekends to make it happen.

Dead simple to try: "npm install -g happy-coder" then run "happy" instead of "claude". That's it.

We had three goals:

* Don't break anyone's flow - Use Claude Code normally at your desk, pick up your phone when you leave. Nothing changes, nothing breaks.

* Actually private - Full E2E encryption, no regular accounts. Your encryption keys are created on your phone and securely paired with your terminal (a generic sketch of this pattern follows this list). We protect our infra, not your data (because we literally can't see it).

* Hands-free is the future - This was the fun one. We hooked up 11Labs' new realtime SDK so you can literally talk to Claude Code through GPT-4.1 while walking around. Picked 11Labs because we can configure it to not store audio or transcripts.
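
For the curious, here is a generic TypeScript sketch of the key-pairing and encryption pattern described above, using tweetnacl public-key boxes. It illustrates the idea only and is not Happy Coder's actual protocol:

    import nacl from 'tweetnacl';
    import { decodeUTF8, encodeUTF8 } from 'tweetnacl-util';

    // Each side generates a keypair locally; only public keys are exchanged
    // (e.g. by scanning a QR code shown in the terminal).
    const phone = nacl.box.keyPair();
    const terminal = nacl.box.keyPair();

    // The terminal encrypts a status update so that only the phone can read it.
    const nonce = nacl.randomBytes(nacl.box.nonceLength);
    const ciphertext = nacl.box(
      decodeUTF8('task finished: 3 tests passing'),
      nonce,
      phone.publicKey,     // recipient's public key
      terminal.secretKey,  // sender's secret key
    );

    // A relay only ever sees nonce + ciphertext, never the plaintext.
    const opened = nacl.box.open(ciphertext, nonce, terminal.publicKey, phone.secretKey);
    if (opened) console.log(encodeUTF8(opened)); // "task finished: 3 tests passing"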

The mobile experience turned out pretty great - fast chat, works on everything (iPads, foldables, whatever), and there's a web version too. It's free! The app and chat are completely free. Down the road we'll probably charge for voice inference or let you run it client-side with your own API keys.

Links:
iOS: https://apps.apple.com/us/app/happy-claude-code-client/id674...
Android (just released): https://play.google.com/store/apps/details?id=com.ex3ndr.hap...
Web: https://app.happy.engineering

Would love to hear what you think!

15

E-commerce data from 100k stores that is refreshed daily

searchagora.com
4 comments · 11:42 AM · View on HN
Hi HN! I'm building Agora, an AI search engine for e-commerce that returns results in under 300ms. We've indexed 30M products from 100k stores and made them easy to purchase using AI agents.

After launching here on HN, a large enterprise reached out to pay for access to the raw data. We serviced the contract manually to learn the exact workflow and then decided to productize the "Data Connector" to help us scale to more customers.

The Data Connector enables developers to select any of our 100k stores in the index, view sample data, format the output, and export the up-to-date data. Data can be exported as CSV or JSON.

We've built crawlers for Shopify, WooCommerce, Squarespace, Wix, and custom built stores to index the store information, product data, stock, reviews, and more. The primary technical challenge is to recrawl the entire dataset every 24 hours. We do this with a series of servers that "recrawl" different store-types with rotating local proxies and then add changes to a queue to be updated in our search index. Our primary database is Mongo and our search runs on self-hosted Meilisearch on high RAM servers.
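
A simplified sketch of that recrawl-and-queue pattern in TypeScript (generic code for illustration, not Agora's crawler): fetchStore below is a placeholder for a store-specific crawler, and changed items are pushed to the index through the official meilisearch client.

    import { MeiliSearch } from 'meilisearch';
    import { createHash } from 'node:crypto';

    const search = new MeiliSearch({ host: 'http://localhost:7700' });
    const products = search.index('products');

    type Product = { id: string; title: string; price: number; inStock: boolean };

    // Fingerprint per product so unchanged items are skipped on the next crawl.
    const lastSeen = new Map<string, string>();

    async function recrawlStore(fetchStore: () => Promise<Product[]>) {
      const changed: Product[] = [];
      for (const p of await fetchStore()) {
        const hash = createHash('sha256').update(JSON.stringify(p)).digest('hex');
        if (lastSeen.get(p.id) !== hash) {      // only queue items that actually changed
          lastSeen.set(p.id, hash);
          changed.push(p);
        }
      }
      if (changed.length) {
        await products.addDocuments(changed);   // Meilisearch upserts by primary key
      }
    }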

My vision is to index the world's e-commerce data. I believe this will create market efficiencies for customers, developers, and merchants.

I'd love your feedback!

10

I built a universal robot renderer in the browser

mechaverse.dev
14 comments · 10:18 AM · View on HN
I was trying to build a website for sharing robotics projects and realized the coolest way to visualize a robot is in 3D.

I could only find a few open-source projects that did this, but each was for a specific environment (think MJCF vs URDF vs USD) and some were damn hard to run. I know Next.js so I decided to merge them all and call it mechaverse.

I wanna fix bugs and make the viewers faster so please test it and break it.

9

YouTube Audio Player

y2audio.com
14 comments · 6:34 AM · View on HN
YouTube Audio Player. Save bandwidth and data by playing only the audio from YouTube videos.

Please don’t be harsh with me—I’m new to vibe coding, lol.

Experience YouTube Like Never Before

Our audio-first approach revolutionizes how you consume YouTube content:

- Background Listening: keep the audio playing while using other apps or with your screen off
- Data Saver Mode: automatically selects the optimal audio quality for your connection
- Uninterrupted Playback: listen without video ads or visual distractions
- Battery Saver: reduce power consumption by up to 60% compared to video playback

6

I just built a tool to turn any photo into pixel art

phototopixel.art
4 comments · 4:13 AM · View on HN
Hey everyone,

I'm excited to share a brand new side project I just finished building: phototopixel.art. It's a simple web tool that automatically converts any photo you upload into pixel art.

I've always been a fan of the aesthetic of pixel art, but manually creating it is a tedious process. After trying out some existing tools and not being fully satisfied with the results, I decided to build my own. My goal was to create a fast and easy-to-use solution that consistently produces great-looking pixel art.

Behind the Scenes

The core of the tool uses a combination of color quantization and pixelation algorithms. When you upload a photo, the system analyzes its color data and reduces it to a limited palette, while simultaneously creating the iconic blocky look.

One of the biggest challenges was fine-tuning the balance between preserving key details and achieving a genuine pixel art feel. I experimented a lot with the algorithm to find the sweet spot, allowing users to choose different settings to get the perfect result.
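
For anyone curious how that combination typically works in the browser (a generic canvas sketch, not the site's exact algorithm): pixelation can be done by downscaling and re-upscaling with smoothing disabled, and a crude quantization by snapping each color channel to a handful of levels.

    // Generic pixel-art conversion sketch for a browser canvas (levels >= 2).
    function pixelate(img: HTMLImageElement, blockSize: number, levels: number): HTMLCanvasElement {
      const small = document.createElement('canvas');
      small.width = Math.max(1, Math.floor(img.width / blockSize));
      small.height = Math.max(1, Math.floor(img.height / blockSize));
      const sctx = small.getContext('2d')!;
      sctx.drawImage(img, 0, 0, small.width, small.height); // downscale = pixelation

      // Crude palette reduction: snap each RGB channel to `levels` values.
      const data = sctx.getImageData(0, 0, small.width, small.height);
      const step = 255 / (levels - 1);
      for (let i = 0; i < data.data.length; i += 4) {
        for (let c = 0; c < 3; c++) {
          data.data[i + c] = Math.round(data.data[i + c] / step) * step;
        }
      }
      sctx.putImageData(data, 0, 0);

      // Upscale back with smoothing disabled to keep hard block edges.
      const out = document.createElement('canvas');
      out.width = img.width;
      out.height = img.height;
      const octx = out.getContext('2d')!;
      octx.imageSmoothingEnabled = false;
      octx.drawImage(small, 0, 0, out.width, out.height);
      return out;
    }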

Potential Uses

Avatars: Make a unique, retro-style profile picture for social media.

Digital Art: Turn a favorite photo of a landscape or pet into a cool piece of pixel art.

Game Assets: It might even be useful for indie developers looking to quickly generate simple assets or mockups.

I’d love for you to give it a try and share your feedback. What features would you like to see added? Are there any use cases you can think of that I haven't mentioned?

Thanks for checking it out!

6

xstack – Passive eBPF Linux stack profiling without tracepoints

tanelpoder.com
0 comments · 10:03 PM · View on HN
Here's the latest eBPF performance tool of mine - xstack. It's a minimal tool, just 165 lines of eBPF C and under 500 lines of userland C code (including all comments and boilerplate!). It uses the libbpf and (Rust) BlazeSym libraries though (which are a lot of code).

The point (and difference) of this tool is that it can sample both the kernel and userspace stack traces of all threads in your system.

Traditionally, the "bpf_get_stack()" helper cannot read userspace stack traces of other tasks in Linux, but since Linux 5.18 we can combine sleepable eBPF task iterator programs with the new "bpf_copy_from_user_task()" helper to read whatever we want from the userspace memory of any other process.

That includes stack areas - so currently whenever the target executable was compiled with frame pointers enabled, you can easily do passive-sampling stack profiling - without slowing the other processes down - at all!

Despite the Linux kernel 5.18 requirement, it actually works on RHEL 9.5+ (and clones) too. Red Hat apparently backported the entire eBPF 6.8 subsystem to their RHEL 9.5+ 5.14 kernels. Feedback and testing results appreciated.

2

My job search was a mess of spreadsheets, so I built an AI copilot

sagarty.com
2 comments · 6:44 PM · View on HN
Hi HN,

My last job search was a chaotic mess of spreadsheets, scattered notes, and dozens of CV versions. I couldn't find a single tool to manage the whole process, so I built one.

Sagarty is a web app that acts as an end-to-end workspace for your job search. Here's how it works:

- It starts with a central Talent Profile for all your skills and experiences.
- It analyzes job descriptions to give you a Fit Score and shows you where the gaps are.
- It helps you generate tailored CVs and Cover Letters for each role.
- It preps you for interviews with company intelligence and helps structure your answers using the STAR method.

It's in a free, open beta. I'm looking for honest feedback on the workflow and the AI-generated content. All thoughts and criticisms are welcome.

You can try it here: www.sagarty.com

Thanks for taking a look.

1

Constant Entropy Mixtape Vol.01

jsr.io
0 comments · 6:44 PM · View on HN
Tools for blending a passphrase into public entropy. Quote from an LLM:

> contropy (CONstant enTROPY) is a TypeScript/JavaScript library that aggregates entropy data from multiple heterogeneous sources, providing both programmatic and command-line interfaces for accessing diverse streams of constantly changing information. The system is designed to offer reliable access to entropy through a unified interface while maintaining cross-platform compatibility between Deno and Node.js ecosystems.

Try running in command-line:

    pnpx  contropy mix vol01 xkcd 936

    bun x contropy mix vol01 'correct house battery staple'
or as script:

    import { vol01 } from 'contropy/mix';

    const buffer = await vol01('xkcd', 936);
1

The Blog of Alexandria

the-blog-of-alexandria.ricciuti.app
0 comments · 6:47 PM · View on HN
Presenting to you my latest fatigue: The blog of Alexandria!

It's a blog with more posts than you can ever imagine, because if you go to a route that doesn't exist, it uses AI to build the post - and then it exists.

https://the-blog-of-alexandria.ricciuti.app/blog/svelte-is-t...

You can try it with new articles if you want...built with sveltekit, drizzle + SQLite, tailwind (in part), hosted with coolify, and using gpt-oss 20b to generate the articles.

1

I built a Slack assistant to survive on-call without starting from zero

robinrelay.ai
0 comments · 6:03 PM · View on HN
Hey HN,

If you’ve been on-call, you know the drill. Pager goes off at 2 AM. You scramble into Slack, scroll through half a dozen threads, and realize… this exact alert happened three months ago. Someone debugged it, found the root cause, maybe even fixed it but that knowledge is buried somewhere in the channel history. So you start from zero. Again.

That's the pain we're tackling with RobinRelay: a Slack-native assistant that remembers everything your team has already done during incidents and brings it back right when you need it.

Demo video: https://www.youtube.com/watch?v=MHWsOZa0Gpc

Here’s how it works today:

- When an alert is posted in Slack, RobinRelay checks if we've seen it before.

- If yes, it replies in-thread with past triage notes, root causes, and who fixed it - all pulled from your actual Slack discussions, even from messages six months old (a generic sketch of this matching step follows this list).

- It builds a knowledge base entirely from your team's Slack history (no extra tools or documentation effort).

- Weekly digest in Slack: most recurring alerts, top noisy services, most active responders.

- Interactive RAG agent: you can DM RobinRelay or mention it under any alert and ask questions like "What usually causes this?" or "Has this happened in staging before?"
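
A generic sketch of that "have we seen this alert before?" matching step (illustration only, not RobinRelay's code): normalize away the volatile parts of the alert text, then look the signature up against past incidents.

    // Generic "seen before?" matching sketch - not RobinRelay's implementation.
    interface PastIncident {
      alertText: string;
      threadUrl: string;
      rootCause: string;
      fixedBy: string;
    }

    const history: PastIncident[] = [];

    // Normalize an alert so volatile parts (IDs, counts, timestamps) don't break matching.
    function signature(alert: string): string {
      return alert
        .toLowerCase()
        .replace(/[0-9a-f]{8,}/g, '<id>')   // long hex IDs
        .replace(/\d+/g, '<n>')             // counts, ports, timestamps
        .trim();
    }

    function findPrevious(alert: string): PastIncident | undefined {
      const sig = signature(alert);
      return history.find((p) => signature(p.alertText) === sig);
    }

    // On a new alert, reply in-thread with what was learned last time.
    const hit = findPrevious('payment-api error rate 87% on pod 7f3c9a21');
    if (hit) {
      console.log(`Seen before: ${hit.rootCause} (fixed by ${hit.fixedBy}) - ${hit.threadUrl}`);
    }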

We don't hide or suppress alerts - we work with the alerts you already have, giving you instant context and history without changing your existing monitoring setup. We integrate with any alerting tool because we work with the Slack API rather than individual tools' APIs, which removes the burden of integrating with thousands of tools.

You can see alert summaries and heatmaps in the Slack App Home, which helps your team improve alerts by surfacing the most frequent and noisiest ones. We're also experimenting with features like auto-linking alerts to related services and proactive "should I remind you?" prompts.

We're still early and would love feedback on:

Do you also discuss solutions and investigations in Slack alert channels? Do you spend time digging up old messages? Would you like a senior-SRE co-pilot in your pocket while you're having coffee on the weekend?

1

Diffusion Tetris – All frames from a neural net, in-browser

github.com
0 comments · 10:01 AM · View on HN
I built a diffusion model that generates Tetris gameplay frame-by-frame, similar to Google's GameNGen/Genie but designed to run completely in your browser using ONNX.js. No servers, no APIs – just pure client-side neural frame generation. The model is still in early training — this is just the first step. I plan to extend it to more games next.
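
For anyone curious what client-side frame generation looks like mechanically, here is a minimal sketch using onnxruntime-web (the successor to ONNX.js); the model path, tensor names, and shapes below are hypothetical, not this project's actual export:

    import * as ort from 'onnxruntime-web';

    // Generate the next frame entirely in the browser - no server round-trip.
    async function nextFrame(recentFrames: Float32Array): Promise<Float32Array> {
      // In practice the session would be created once and reused; shown inline for brevity.
      const session = await ort.InferenceSession.create('/models/tetris.onnx');

      // Hypothetical input: a batch of recent frames, NCHW float32.
      const input = new ort.Tensor('float32', recentFrames, [1, 4, 64, 64]);

      // 'frames' and 'next_frame' are assumed tensor names from the export step.
      const output = await session.run({ frames: input });
      return output['next_frame'].data as Float32Array;
    }

    // The returned pixels can be drawn to a canvas and fed back in for the next step.
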
1

Accidentally built churn detection AI (I don't code)

github.com
0 comments · 3:05 PM · View on HN
I'm new here, so apologies in advance for link faux pas, etc.

Six months ago, I discovered Claude could turn my ideas into code. It started with batch files at work - I needed automation, couldn't find scripts, asked ChatGPT to fix snippets. Ran out of tokens, switched to Claude. It got things right the first time. My boss was impressed.

Then I thought: "Could I build something to sell online?" Started building for realtors (they have money, need tools). Got deep into ML predicting housing markets. Asked Claude "can this AI make AI?" and... it could. My realtor brother-in-law killed it: "Market's too chaotic, this won't help." Pivoted to his simpler idea but kept the AI parts.

Built an "AI assistant" that was supposed to be a small feature. It became the whole system. 80% done, tried to test it. Wouldn't even start. Spent WEEKS copy-pasting errors to Claude, entire 2000+ line files, racing against token limits. Finally powered on. New errors meant progress.

Then reality hit: I'd built an advanced decision-making system that needed months of data from hundreds of agents. They just wanted a chatbox. Asked Claude: "What is this actually good for?" Answer: "Churn prevention. Multi-billion dollar problem."

Three weeks ago, discovered Claude Code (MCP). Plugged it into my folder. Game changer. Now I have:

- Real-time churn detection (85% accuracy on "I want to cancel")
- Claude integration that's aware of the "temperature" of each conversation
- Self-healing architecture (because I can't fix what breaks)
- SHAP explanations showing why predictions were made
- 10,000+ lines of production TypeScript

The kicker: I still don't know how to code. I just kept asking "what if?"

    curl -X POST http://localhost:4000/api/chat/message \
      -d '{"message": "I want to cancel", "userId": "test-user-001"}'

Response in 1.8 seconds with full risk analysis.

Looking for one SaaS company to actually use this. I built something that predicts human behavior eerily well. Turns out predicting when customers leave is worth real money. Who knew?

Currently running locally - happy to demo for interested companies or give temporary API access to serious testers. Didn't want to wait until I had my DNS act together before sharing this.