Daily Show HN

Show HN for December 18, 2025

42 posts
253

TinyPDF – 3KB PDF library (70x smaller than jsPDF)

github.com
32 comments · 6:59 PM
I needed to generate invoices in a Node.js app. jsPDF is 229KB. I only needed text, rectangles, lines, and JPEG images.

  So I wrote tinypdf: <400 lines of TypeScript, zero dependencies, 3.3KB minified+gzipped.

  What it does:
  - Text (Helvetica, colors, alignment)
  - Rectangles and lines
  - JPEG images
  - Multiple pages, custom sizes

  What it doesn't do:
  - Custom fonts, PNG/SVG, forms, encryption, HTML-to-PDF

  That's it. The 95% use case for invoices, receipts, reports, tickets, and labels.

  GitHub: https://github.com/Lulzx/tinypdf
  npm: npm install tinypdf
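
For a sense of what invoice-style output might look like, here is a minimal sketch; the `PDF` class and method names are assumptions for illustration, not tinypdf's documented API, so check the README for the real calls.

```typescript
// Hypothetical usage sketch: class and method names are assumed for
// illustration and may not match tinypdf's actual API.
import { writeFileSync } from "node:fs";
import { PDF } from "tinypdf";

const doc = new PDF({ pageSize: "A4" });

// Helvetica text with basic styling
doc.text("Invoice #1042", { x: 40, y: 40, size: 18 });
doc.text("Total: $128.00", { x: 40, y: 70, size: 12, color: "#333333" });

// A rule under the header and a box around the total
doc.line({ x1: 40, y1: 60, x2: 555, y2: 60 });
doc.rect({ x: 35, y: 60, width: 200, height: 20 });

// Serialize the document and write it to disk
writeFileSync("invoice.pdf", doc.build());
```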
73

Composify – Open-Source Visual Editor / Server-Driven UI for React

github.com
5 comments · 4:02 PM
Everyone's shipping AI tools right now, and here I am with a visual editor. Still, I think many teams are very familiar with the problem of "marketing wants to change the landing page again."

I've run into this for years. Campaign pages come in, engineers get pulled in, and tickets stack up. It's usually the same components, just rearranged.

A few years ago, at a startup I worked at, we built an internal tool to deal with this. You register your existing React components, they show up as drag-and-drop blocks, and the result is a JSX string. No schema to learn, no changes to your component code.
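
As a rough mental model of that flow (this is a generic sketch, not Composify's actual API), existing components go into a registry untouched and the editor's output is just a JSX string referencing them:

```tsx
// Generic sketch of the idea, not Composify's real API: register unchanged
// production components and persist the composed page as a JSX string.
import React from "react";

function Hero({ title }: { title: string }) {
  return <h1>{title}</h1>;
}

function Cta({ label }: { label: string }) {
  return <button>{label}</button>;
}

// "Registration": the components themselves need no schema or changes.
export const registry = { Hero, Cta };

// What a visual editor would store after marketing rearranges blocks.
export const page = `<Hero title="Holiday sale" /><Cta label="Shop now" />`;
```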

We used it in production, handling real traffic in a messy, legacy-heavy environment. It held up well. Over time, it powered roughly 60% of our traffic. Marketing shipped pages without filing tickets, and product teams ran layout-level A/B tests. That experience eventually led me to clean it up and open-source it.

Composify sits somewhere between a no-code page builder and a headless CMS. Page builders like Wix or Squarespace offer drag-and-drop, but lock you into their components. There are also solid tools like Builder.io, Puck, and Storyblok, but many require you to adapt your components to their model. Composify is intentionally minimal: it lets you use your actual production components as they are.

It's still early. The docs need work, and there are rough edges. But it's running in production and has solved a real problem for us. If you already have a component library and want non-devs to compose pages from it, it might be useful.

Homepage: https://composify.js.org

Happy to answer questions or hear feedback!

42

Spice Cayenne – SQL acceleration built on Vortex

spice.ai
4 comments · 7:00 PM
Hi HN, we’re Luke and Phillip, and we’re building Spice.ai OSS - a lightweight, portable data and AI engine powered by Apache DataFusion & Ballista for SQL query, hybrid search, and LLM inference across disaggregated storage, used by enterprises like Barracuda Networks and Twilio.

We first introduced Spice [1] on HN in 2021 and re-launched it on HN [2] in 2024, rebuilt from the ground up in Rust.

Spice includes the concept of a Data Accelerator [3], which is a way to materialize data from disparate sources, such as other databases, in embedded databases like SQLite and DuckDB.

Today we’re excited to announce a new DuckLake-inspired Data Accelerator built on Vortex [3], a highly performant, extensible columnar data format that claims 100x faster random access, 10-20x faster scans, and 5x faster writes with a similar compression ratio vs. Apache Parquet.

In our tests with Spice, Vortex performs faster than DuckDB with a third of the memory usage, and is much more scalable (multi-file). For real-world deployments, we see the DuckDB Data Accelerator often capping out around 1TB, but Spice Cayenne can do Petabyte-scale.

You can read about it at https://spice.ai/blog and in the Spice OSS release notes [4].

This is just the first version, and we’d love to get your feedback!

GitHub: https://github.com/spiceai/spiceai

[1] https://news.ycombinator.com/item?id=28448887 [2] https://news.ycombinator.com/item?id=39854584 [3] https://github.com/vortex-data/vortex [4] https://spiceai.org/blog/releases/v1.9.0

14

DocsRouter – The OpenRouter for OCR and Vision Models

docsrouter.com
0 comments · 7:55 AM
Most products that touch PDFs or images quietly rebuild the same thing: a hacked-together “router” that picks which OCR/vision API to call, normalizes the responses, and prays the bill is sane at the end of the month.

DocsRouter is that layer as a product: one stable API that talks to multiple OCR engines and vision LLMs, lets you route per document based on cost/quality/latency, and gives you normalized outputs (text, tables, fields) so your app doesn’t care which provider was used.
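
To make the routing idea concrete, a call might look roughly like the sketch below; the endpoint path and field names are assumptions for illustration, not DocsRouter's documented API.

```typescript
// Hypothetical request sketch: endpoint and field names are assumed, not
// taken from DocsRouter's docs. The point is the shape: one API, a routing
// policy, and normalized output regardless of which provider ran the OCR.
const res = await fetch("https://api.docsrouter.com/v1/parse", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.DOCSROUTER_API_KEY}`,
  },
  body: JSON.stringify({
    document_url: "https://example.com/invoice-2024-001.pdf",
    routing: { optimize: "cost", max_latency_ms: 8000 },
  }),
});

const { provider, text, tables, fields } = await res.json();
console.log(provider, fields);
```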

It’s meant for teams doing serious stuff with documents: invoices/receipts, contracts, payroll, medical/admin forms, logistics docs, etc., who are either stuck on “the OCR we picked years ago” or are overwhelmed by the churn of new vision models.

Right now you get a REST API, simple SDKs (coming soon), a few pluggable backends (classic OCR + newer vision models), some basic routing policies, and a playground where you can upload a doc and compare outputs side by side.

I’d love feedback from HN on two things:

1. If you already juggle multiple OCR/vision providers, what does your homegrown router look like, and what would you need to trust an external one?

2. Would you prefer this, or would you use the LLM/OCR providers directly, with the possibility of changing the provider every so often?

Demo and docs are here: https://docsrouter.com

13

Toad. A unified terminal UI for coding agents

github.com
0 comments · 4:49 PM
Hi HN,

Up to the middle of 2025, I was the CEO/CTO of a startup called Textualize. Somehow I had managed to land seed funding for my Python libraries for fancy terminal output. We wrapped up after three years because the funding ran dry.

I honestly thought I was sick of coding at that point. But it turns out I was sick of the stress and working all hours. A few weeks rest was all I needed.

It was about that time that coding agents exploded, and I could no longer ignore them. I wasn't impressed with the UI these tools offered. Having worked in the terminal for a few years, I knew you could get a better user experience. And so this project was born.

I had planned to create some kind of layer between the agent's SDK and the front-end. Fortunately, after I started building this, Zed Industries released the Agent Client Protocol (https://agentclientprotocol.com/), which was precisely what I needed.

I've just released the code (it was a private repo for a while). Toad (a play on Textual Code) can run a large number of AI agents, with a nicer terminal UI.

Think of it as a "bring your own agent" coding CLI. Use whatever agent you want. I'm not trying to sell you tokens.

Ask me anything. I'll be hanging around for a while, if this post takes off.

13

Paper2Any – Open tool to generate editable PPTs from research papers

github.com
2 comments · 4:44 PM
Hi HN, we are the OpenDCAI group from Peking University. We built Paper2Any, an open-source tool designed to automate the "Paper to Slides" workflow, based on our DataFlow-Agent framework.

The Problem: Writing papers is hard, but creating professional architecture diagrams and slides (PPTs) is often more tedious. Most AI tools just generate static images (PNGs) that are impossible to tweak for final publication.

The Solution: Paper2Any takes a PDF, text, or sketch as input, understands the research logic, and generates fully editable PPTX (PowerPoint) files and SVGs. We prioritize flexibility and fidelity, allowing you to specify page ranges, switch visual styles, and preserve original assets.

How it works:

1. Multimodal Reading: Extracts text and visual elements from the paper. You can now specify page ranges (e.g., Method section only) to focus the context and reduce token usage.

2. Content Understanding: Identifies core contributions and structural logic.

3. PPT Generation: Instead of generating one flat image, it generates independent elements (blocks, arrows, text) with selectable visual styles and organizes them into a slide layout.

Links:

- Demo: http://dcai-paper2any.cpolar.top/
- Code (DataFlow-Agent): https://github.com/OpenDCAI/DataFlow-Agent

We'd love to hear your feedback on the generation quality and the agent workflow!
9

It's Like Clay but in Google Sheets

getvurge.com
5 comments · 7:14 PM
Hey everyone! After struggling a lot with data enrichment for SMBs, I launched a Google Sheets add-on that gives you direct access to an AI-powered webscraper.

Vurge lets you get structured information from any website right inside your Google Sheet, eliminating the need to learn a new tool or add a new dependency for data enrichment.

Let me know what you think!

7

Open database tracking 77K public DNS servers every 10 minutes

dnsdirectory.com
5 comments · 3:21 PM
Hey HN! We built DNS Directory (https://dnsdirectory.com), a free, searchable database of public DNS servers with live monitoring every 10 minutes.

We needed to find an up-to-date list of DNS servers used by carriers around the world for a proxy fingerprinting / web-scraping project, but we were shocked to find that it didn’t exist, so we built it ourselves in an internal hackathon.

We’re adding more features but so far we:

- Test 77K+ servers every ~10 minutes
- Allow filtering by uptime, location, security features (ad blocking, malware protection, DNSSEC)
- Show info on IPv6 support, anycast, etc.
- Show all historical testing information
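
For anyone curious what a single probe in this kind of monitoring loop involves, here is a generic sketch using Node's built-in resolver; it is not the site's actual test harness.

```typescript
// Generic uptime/latency probe against one public resolver using Node's
// built-in dns module (illustrative only, not dnsdirectory.com's code).
import { Resolver } from "node:dns/promises";

async function probe(server: string, name = "example.com") {
  const resolver = new Resolver({ timeout: 2000, tries: 1 });
  resolver.setServers([server]);
  const start = performance.now();
  try {
    const addresses = await resolver.resolve4(name);
    return { server, up: true, ms: performance.now() - start, addresses };
  } catch {
    return { server, up: false, ms: performance.now() - start };
  }
}

console.log(await probe("8.8.8.8"));
```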

We have no plans to monetize the site and it will stay free so it can be used as a public resource.

I’d love to hear ways we can improve the site. It works, but certain things like content-filtering detection are rough around the edges, and we want to add test nodes in Asia + US for better coverage, as right now we only test from Amsterdam.

If you want a DNS server that isn’t already on the website then you can add them via the form and if you’re a large org that has a bunch to add then you can email me at [email protected] and we’ll ingest them.

Cheers!

6

I built a free to use logo generator using icons and colors

smollaunch.com
0 comments · 4:49 PM
I built a small tool for indie founders who just want a usable logo quickly without dealing with heavy AI generators, editors, or subscriptions.

I also added a lightweight “smart suggestions” feature where you can describe what your app does and it suggests icons and colors based on keywords. This part uses simple rules and mappings, no external AI services or image generation, so it’s fast and essentially free to run.
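
As a rough illustration of what rule-based suggestions can look like (this is not the tool's actual mapping), a small keyword table is enough:

```typescript
// Illustrative keyword-to-style rules, not SmolLaunch's actual mapping.
const rules: Record<string, { icon: string; color: string }> = {
  finance: { icon: "coins", color: "#166534" },
  chat: { icon: "message-circle", color: "#1d4ed8" },
  fitness: { icon: "dumbbell", color: "#b91c1c" },
};

function suggest(description: string) {
  const words = description.toLowerCase().split(/\W+/);
  return words.filter((w) => w in rules).map((w) => rules[w]);
}

console.log(suggest("A chat app for fitness coaches"));
// -> suggestions for "chat" and "fitness"
```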

No accounts, no watermark, no folders of assets, just a single logo file you can download and use.

I built this as part of SmolLaunch, a launchpad for small dev tools and indie projects, but the logo generator itself is completely free.

Would love feedback on whether this is useful compared to existing logo tools!

5

Git rewind – your Git year in review

gitrewind.dev
0 comments · 4:56 PM
I started an experiment to let a WASM-powered webapp interact with a local git repo and see how well this works. It turns out, it works pretty well!

I made it into a "git wrapped" tool that shows you when you committed the most, and what languages and files you most touched.

Despite the scary prompt when you use the filesystem API, everything happens locally and your code stays private. (You can of course also just try it on cloned public GitHub repos.)
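
For those unfamiliar with that prompt, the sketch below shows roughly what the File System Access API grants (reading files from a directory the user picks, entirely in the browser); it is not necessarily how gitrewind itself is implemented.

```typescript
// Sketch of reading a local repo in the browser via the File System Access
// API; illustrative only, not gitrewind's actual implementation.
async function readHead(): Promise<string> {
  // This call triggers the browser's directory-permission prompt.
  const dir = await (window as any).showDirectoryPicker();
  const gitDir = await dir.getDirectoryHandle(".git");
  const headFile = await (await gitDir.getFileHandle("HEAD")).getFile();
  return (await headFile.text()).trim(); // e.g. "ref: refs/heads/main"
}

readHead().then((head) => console.log("Current HEAD:", head));
```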

Let me know what you think!

4

HN Wrapped 2025 – let an LLM analyze your year on Hacker News

hn-wrapped.kadoa.com
0 comments · 2:27 PM
I was looking for some fun project to play around with the latest Gemini models and ended up building this :)

Enter your username and get:

- Generated roasts and stats based on your HN activity in 2025

- Your personalized HN front page from 2035 (inspired by a recent Show HN [0])

- An xkcd-style comic of your HN persona

It uses the latest gemini-3-flash and gemini-3-pro-image (nano banana pro) models, which deliver pretty impressive and funny results.

Give it a try and let me know what you think :)

As an example here is dang's HN Wrapped: https://hn-wrapped.kadoa.com/dang

[0] https://news.ycombinator.com/item?id=46205632

4

Local WYSIWYG Markdown, mockup, data model editor powered by Claude Code

nimbalyst.com
0 comments · 8:26 PM
We have been getting the best results with Claude Code when we iterate with it to build full context and then use and update that context as we work.

So, we built Nimbalyst to be the local WYSIWYG editor and session manager where you iterate with Claude Code on markdown docs, diagrams, mockups, data-models, sessions, and code. Nimbalyst lets you:

- Work in one integrated tool leveraging all your context.

- Use all the power of Claude Code in a UI

- Work with Claude Code to write and edit WYSIWYG markdown, see AI changes as red/green, approve them

- Iterate on HTML mockups with Claude Code, annotating them, and then use them as context both for people and for Claude Code while coding

- Build your data models based on your docs/code, iterate on them with Claude Code, export them in standard formats.

- Integrate mermaid diagrams, text, tables, mockups, data models, and images in standard markdown for human/AI context

- Tie sessions to documents, find and resume sessions, treat sessions as context, run parallel sessions

- Code with Claude Code with all of this context, use / commands, see git status.

Nimbalyst is in Beta, local, and free. We would love your feedback.

3

An MPSC Queue Optimizing for Non-Uniform Bursts and Bulk Operations

github.com
0 comments · 7:41 AM
Hi HN,

I’m a C++ student and I’ve spent the last few months obsessing over concurrent data structures. I’m sharing daking::MPSC_queue, a header-only, lock-free, and unbounded MPSC queue designed to bridge the gap between linked-list flexibility and array-based throughput.

1. FACING THE CHALLENGE: THE "LINKED-LIST BOTTLENECK"

In traditional MPSC linked-list queues, if multiple producers attempt to enqueue single elements at a high, uniform frequency, the resulting CAS contention causes severe cache-line bouncing. In these saturated uniform-load scenarios, throughput hits a physical "floor" that often underperforms pre-allocated ring buffers.

2. ARCHITECTURAL SOLUTIONS FOR REAL-WORLD SCENARIOS

Rather than focusing on average throughput under uniform load, this design targets two specific real-world challenges:

Scenario A: Non-Uniform Contention (Burst Resilience). In many systems, producers are mostly idle but burst occasionally. By facilitating a high-speed lifecycle where node chunks circulate from Consumer (Recycle) -> Global Stack -> Producer (Allocate) with strictly O(1) complexity, the queue can rapidly establish SPSC-like performance during a burst, peaking at ~161M/s.

Scenario B: Bulk Contention Reduction. The enqueue_bulk interface allows producers to pre-link an entire segment in private memory. This reduces the contention from N atomic operations down to a single atomic exchange. The larger the batch, the lower the contention.

3. IMPLICIT CHUNKING & RESOURCE LIFECYCLE

Instead of fragmented allocations, memory is managed via a Page -> Chunk -> Node hierarchy.

Implicit Composition: Unlike chunked arrays, nodes are not stored in contiguous arrays but are freely combined into logical "chunks." This maintains linked-list flexibility while gaining the management efficiency of blocks.

Zero-Cost Elasticity: The unbounded design eliminates backpressure stalls or data loss during traffic spikes, with heap allocation frequency reduced to log(N).

4. ENGINEERING RIGOR

- Safety: Fully audited with ThreadSanitizer (TSAN) and ASAN.
- Type Safety: Supports non-default-constructible types; noexcept is automatically deduced.
- Lightweight: Zero-dependency, header-only, and compatible with C++17/20.

A NOTE ON BENCHMARKS: In the interest of full transparency, I’ve benchmarked this against moodycamel::ConcurrentQueue. In highly uniform, small-grain contention scenarios, our implementation is slightly slower. However, daking::MPSC_queue provides a 3x-4x performance leap in non-uniform bursts and bulk-transfer scenarios where "Zero-Cost Elasticity" and contention reduction are the primary goals.

I’d love to hear your thoughts on this repository!

GitHub: https://github.com/dakingffo/MPSC_queue

3

DIY E-Ink Home Dashboard Without Headless Chrome (Python/Pillow)

tjoskar.dev
0 comments · 1:57 PM
Hi HN,

I wanted a dashboard for my home that didn't glow in the dark or look like a computer screen. I built this using a Raspberry Pi Zero 2 W and a Waveshare 7.5" E-Ink display.

Instead of the common approach of running a headless browser and taking screenshots (which is slow and resource-heavy), I render the image directly using Python and Pillow (PIL). This brings the refresh time down significantly and runs smoothly on the Pi Zero.

I wrote a blog post about the build process, the hardware I used, and the code structure.

I also ended up writing a short book/guide for those who want a complete step-by-step walkthrough to build the exact same setup, which is linked in the post.

Happy to answer any questions you might have about building your own display!

3

Narrativee – Make sense of your data in minutes

narrativee.com
0 comments · 7:39 PM
Hi HN, I’m Safoan, founder of Narrativee.

We built Narrativee because we realized how much it sucks to get a clear picture from raw spreadsheet files. I found myself spending hours staring at rows and columns, trying to extract context and manually turn that data into documents I could actually share with my team.

The Problem: Spreadsheets are great for calculation, but terrible for communication. To get the "story" behind the data, you usually have to screenshot charts or write long emails explaining what the numbers mean.

This is why we built Narrativee: to remove that friction in minutes.

- You upload your spreadsheet (CSV/XLS)
- Get a narrative document that explains the "what" and "why" behind your data
- Edit and share a readable report instead of raw numbers

We have a free plan, give it a try and let us know what we can improve!

2

A better interface for base model LLMs

github.com
1 comment · 5:00 PM
Ever since the GPT-2 days, I've always felt like base model LLMs were something special. It felt like an entirely new art form; every piece was a collage made of all the written works that came before it.

But, the issue is that all of the interfaces for them have sucked.

The original OpenAI playground interface was incredibly limited. Then, Loom came along and showed the world the possibilities of branching LLM interfaces. The concept was incredible, but the actual interface was buggy and felt incredibly confusing and alien.

Loomsidian was the first branching base model interface I liked. It was really easy to use but it felt painfully limited. Exoloom came after it, and again, the concept was great but the interface just kinda sucked.

So, after trying all these different interfaces, I decided to try to learn from their mistakes and make something better.

My new interface is called Tapestry Loom. It's still a work in progress, and there's a lot more I want to add to it, but I think it has finally reached a point where it's ready for other people to try.

Let me know what you think.

2

SourceMinder, a Context Aware Code Search for Solo Devs and Claude Code

ebcode.com
3 comments · 3:08 AM
Hello HN. Here's the TLDR from the linked blog post: After running into context window issues on my first two projects, I developed a tool for making Claude Code use fewer tokens by creating an indexer that provides context in the search results. Built with sqlite and tree-sitter, it currently supports the following languages: C, Go, PHP, Python, and TypeScript. Get the code here: https://github.com/ebcode/SourceMinder

Happy to answer any questions about it here, and open to critical feedback. Thanks!

2

A lightweight DLP browser extension to prevent data leaks in LLM tools

asturic.com
0 comments · 4:29 PM
Hi HN, I’m a student working at a small company. I noticed some of my colleagues were accidentally pasting sensitive data and internal docs into ChatGPT, Gemini, etc.

Enterprise DLPs were a bit too expensive and complex for us, so I built this browser extension. It scans prompts and docs in real-time, alerts the user, and can anonymize data before it’s sent.
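
As a rough sketch of the anonymization step (not the extension's actual code), pattern-based masking before the request leaves the page could look like this:

```typescript
// Illustrative masking pass, not the extension's real implementation:
// replace common sensitive patterns before a prompt is submitted.
const patterns: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g, "[EMAIL]"],   // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD_NUMBER]"],    // card-like digit runs
  [/\bAKIA[0-9A-Z]{16}\b/g, "[AWS_ACCESS_KEY]"],   // AWS access key IDs
];

function anonymize(prompt: string): { clean: string; hits: number } {
  let hits = 0;
  let clean = prompt;
  for (const [pattern, mask] of patterns) {
    clean = clean.replace(pattern, () => {
      hits += 1;
      return mask;
    });
  }
  return { clean, hits };
}

console.log(anonymize("Reach me at jane.doe@corp.com, card 4111 1111 1111 1111"));
```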

I’d love to hear your thoughts on the technical approach (browser-native vs network level) and if you find this useful.

2

BustAPI – Python web framework powered by Rust

github.com
0 comments · 4:06 PM
Hi HN,

I built BustAPI to solve performance issues I kept hitting with Python APIs. It uses a Rust backend (Actix) with PyO3 so Python devs get speed without writing Rust.

Key points:

- 5–10x faster than FastAPI in simple benchmarks
- Familiar Flask/FastAPI-style API
- Focused on admin backends and high-throughput APIs

I’d love feedback, especially on API design and real-world use cases.

2

GPT Clicker. An idle game about building an AI empire

gpt-clicker.pixdeo.com
0 comments · 4:10 PM
I built an idle/clicker game about the AI hype cycle, because I love following how AI develops and also love idle games. You start circa 2018 with a laptop and a transformer idea, training your first model. The game follows "real" AI history: Attention Is All You Need → GPT-1 → you get the idea.

Core loop:

- Train models (click or auto) to improve quality
- Serve users to earn money
- Buy hardware: from a used GTX 1060 to Dyson Sphere compute arrays
- Research to unlock bigger models
- Balance energy/compute resources

Built with SolidJS + TypeScript + Vite. ~90KB gzipped. No frameworks, no backend, runs entirely in the browser. Saves to localStorage. The progression takes ~2-3 hours to reach ASI if you're actively playing, longer if idle. Hardware scales from 1 compute/sec (laptop) to 100M compute/sec (Matrioshka Brain). I tried to capture the exponential feeling of the AI scaling laws - each era feels faster than the last, which mirrors the real industry, but I didn't want to get into politics, corp, and stuff like that at this moment.
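
For readers who don't play idle games, the loop above boils down to a timer like the illustrative sketch below (my simplification, not the game's actual code): hardware produces compute every tick, and serving users converts compute into money.

```typescript
// Illustrative idle-game tick, not GPT Clicker's real code.
type GameState = { compute: number; money: number; computePerSec: number };

function tick(s: GameState, dtSec: number): GameState {
  const produced = s.computePerSec * dtSec;
  const served = Math.min(s.compute + produced, 10 * dtSec); // demand cap
  return {
    compute: s.compute + produced - served,
    money: s.money + served * 0.01,
    computePerSec: s.computePerSec,
  };
}

let state: GameState = { compute: 0, money: 0, computePerSec: 1 }; // the 2018 laptop
setInterval(() => {
  state = tick(state, 0.1);
}, 100);
```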

Would love feedback on game balance and pacing. The late game might be too fast or too slow depending on your playstyle.

Hope nobody gets too mad at this; remember, it's a parody, just a clicker game. And yes, it was made with an agentic LLM (I refuse to say vibe coded).

2

I built an Animated UI Library with drag and drop components

ogblocks.dev
0 comments · 10:15 AM
Hello Everyone,

My name is Karan, and I'm a Frontend Developer, but I feel like I'm more of a Design Engineer because of my love for creating UIs.

When I started my development journey, I fell for frontend development and have been stuck with it ever since.

But I noticed that many of my friends hated writing CSS, because creating UIs is a very tedious and time-consuming process and you have to be pixel-perfect.

But at the same time, they also wanted their projects to look premium, with beautiful animations and a world-class user experience.

That's when I thought:

"What if anyone could integrate beautiful animated components into their website regardless of their CSS skills?"

And after six months of pain and restless nights, I finally built ogBlocks to solve this problem.

It is an Animated UI Library for React that contains all the cool animations that will make your project look premium and production-grade.

ogBlocks has navbars, modals, buttons, feature sections, text animations, carousels, and much more.

I know you'll love it.

Best,
Karan

2

MiraTTS, a 48kHz Open-Source TTS at 100x Real-Time Speed

github.com
0 comments · 4:24 PM
I’ve been working on MiraTTS, a fine-tune of Spark-TTS designed for high realism and stable text-to-speech. The goal was to create an incredibly fast but high quality model.

Most open TTS models are either computationally heavy or generate 16-24kHz audio. Mira achieves high fidelity and speed by combining two things:

FlashSR: For generating crisp and clearer 48kHz audio outputs.

LMDeploy: Heavily optimized inference allowing for 100x real-time speed and low latency (roughly 150ms).

I built this so local users have access to a high-quality local text-to-speech model that works for any use case. It's currently in its early stages, and I'm experimenting with multilingual and multi-speaker versions. Streaming is coming soon as well.

Repo: https://github.com/ysharma3501/MiraTTS

Model: https://huggingface.co/YatharthS/MiraTTS

I also wrote a breakdown on how these LLM based TTS models work: https://huggingface.co/blog/YatharthS/llm-tts-models

1

TimetoTest – AI agent runs UI/API tests

timetotest.tech
0 comments · 4:04 PM
I built TimetoTest because writing and maintaining test scripts is tedious. Instead of dealing with selectors, waits, and brittle automation code, you describe what you want to test in plain English.

How it works:

- You write: "Verify users can checkout with a promo code"
- The AI agent generates a test plan (UI + API steps)
- It executes in a real browser with human-like interactions
- You get a detailed report with screenshots and logs

It handles UI testing, API testing, E2E, and regression—no code required.

The goal was to make it feel like describing a test to a senior QA engineer who just goes and does it.

Would love feedback from the HN community. Happy to answer questions about the architecture or approach.

1

FastRAG Next.js RAG boilerplate with precise PDF citation highlighting

fastrag.live
0 comments · 2:41 PM
Hey HN,

I built this because I was tired of setting up the same RAG pipeline for every side project.

The Problem: Most RAG tutorials give you an answer but don't show where it came from. Hallucinations are a dealbreaker for real apps.

The Solution: I built a pipeline using Pinecone metadata to map vector chunks back to the original PDF page/text. The UI highlights the source snippet when the AI answers.
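
The citation trick can be sketched like this, assuming the current @pinecone-database/pinecone client and made-up metadata field names (FastRAG's real schema may differ):

```typescript
// Sketch of citation metadata (field names assumed, not FastRAG's schema):
// store page and character offsets with each chunk so answers can be
// highlighted back in the source PDF.
import { Pinecone } from "@pinecone-database/pinecone";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pc.index("docs");

const chunkEmbedding = new Array(1536).fill(0); // stand-in for a real embedding vector

await index.upsert([
  {
    id: "doc42-p3-c7",
    values: chunkEmbedding,
    metadata: { docId: "doc42", page: 3, start: 1180, end: 1420 },
  },
]);

// At query time, each match's metadata (page, start, end) drives the
// highlight of the exact snippet in the PDF viewer.
```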

Stack:

Next.js 14 (App Router)

Pinecone (Serverless)

LangChain (Streaming)

Supabase (Auth)

Demo: https://www.fastrag.live

Happy to answer questions about the chunking strategy or the citation logic.

1

Quercle – Web Fetch/Search API for AI Agents

quercle.dev
0 comments · 12:15 PM
Inspiration: While building LLM agents, I needed simple web fetch + search (like Claude Code has), but existing tools gave raw HTML, irrelevant markdown, or broke on JS sites.

Evolution: Started as part of another project - pivoted to standalone as it was more feasible and scoped.

Trade-off: Prioritized simplicity and LLM-ready outputs (via an LLM layer) over raw speed.

Now: Handles JS-heavy sites, easy integrations (LangChain, Vercel AI SDK, MCP).

Looking for early testers right now - DM me (@liran_yo on X) with your login email and I'll send you credits to try it out! Pricing shown isn't final yet - after gathering stats from users like you, I hope to lower it dramatically based on real usage patterns.

Does this solve your web data pains? Would you use it in production? What’s missing?

Thank you

1

Peachka – Protecting Videos from Stealing

peachka.net
0 comments · 4:59 PM
I've been learning Python and I decided to make something useful.

I created a SaaS that tries to protect videos from being downloaded in some common ways, like using browser extensions or direct video links. I think an attacker will have to spend some time to actually download a video from my website (at least I hope so).

You can always screen-record it, and there is no protection against that unless you use DRM.

Here is an example of such a video (it needs to be unlocked): https://peachka.net/v/1

I still don't know what to do with this solution, maybe offer it to creators or the like.