Daily Show HN


Show HN for October 5, 2025

19 posts
170

ut – Rust-based CLI utilities for devs and IT #

github.com
63 comments · 5:36 PM · View on HN
Hey HN,

I find myself reaching for tools like it-tools.tech or other random sites every now and then during development or debugging. So, I built a toolkit with a sane and simple CLI interface for most of those tools.

For the curious and lazy, at the moment ut has tools for:

- Encoding: base64 (encode, decode), url (encode, decode)

- Hashing: md5, sha1, sha224, sha256, sha384, sha512

- Data Generation: uuid (v1, v3, v4, v5), token, lorem, random

- Text Processing: case (lower, upper, camel, title, constant, header, sentence, snake), pretty-print, diff

- Development Tools: calc, json (builder), regex, datetime

- Web & Network: http (status), serve, qr

- Color & Design: color (convert)

- Reference: unicode

For full disclosure, parts of the toolkit were built with Claude Code (I wanted to use this as an opportunity to play with it more). Feel free to open feature requests and/or contribute.
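For context, the encoding and hashing subcommands map onto well-known transforms; a quick Python-stdlib sanity check of what base64/URL encoding and sha256 produce (reference behavior only, not ut's Rust code):

```python
import base64
import hashlib
import urllib.parse

# Stdlib equivalents of ut's encode/hash operations:
b64 = base64.b64encode(b"hello").decode()      # base64 encode
url = urllib.parse.quote("a b&c")              # url encode
digest = hashlib.sha256(b"hello").hexdigest()  # sha256

print(b64, url, digest[:12])  # aGVsbG8= a%20b%26c 2cf24dba5fb0
```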

136

Pyscn – Python code quality analyzer for vibe coders #

github.com
84 comments · 1:22 PM · View on HN
Hi HN! I built pyscn for Python developers in the vibe coding era. If you're using Cursor, Claude, or ChatGPT to ship Python code fast, you know the feeling: features work, tests pass, but the codebase feels... messy.

Common vibe coding artifacts:

• Code duplication (from copy-pasted snippets)

• Dead code from quick iterations

• Over-engineered solutions for simple problems

• Inconsistent patterns across modules

pyscn performs structural analysis:

• APTED tree edit distance + LSH

• Control-Flow Graph (CFG) analysis

• Coupling Between Objects (CBO)

• Cyclomatic Complexity

Try it without installation:

  uvx pyscn analyze .          # Using uv (fastest)
  pipx run pyscn analyze .     # Using pipx
  pip install pyscn            # Or install with pip

Built with Go + tree-sitter. Happy to dive into the implementation details!
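For a feel of one of these metrics: cyclomatic complexity is roughly 1 plus the number of decision points in a function. A rough Python sketch of that count (pyscn's real computation is CFG-based and written in Go; this is only an approximation):

```python
import ast

# Branching constructs that add a decision point.
DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of branches."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, DECISIONS) for n in ast.walk(tree))

src = """
def f(x):
    if x > 0:
        for i in range(x):
            print(i)
    return x
"""
print(complexity(src))  # one `if` + one `for` -> 3
```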
121

ASCII Drawing Board #

delopsu.com
46 comments · 3:36 PM · View on HN
I've made an ASCII drawing board. You can set any brush or canvas size and export your art as a text file.

I want to keep it as manual/analog as possible, hence there is no tool for converting image to ASCII.

Prompting LLMs to draw ASCII art turned out to be a difficult task. So instead I asked one to make a drawing board. Without a coding agent I wouldn't even have tried it myself; although it's an interesting task, it would have pulled me off course.

Basically it's just a drawing board with endless variations of textures and brushes: when you make a large canvas and zoom out, you no longer see ASCII but a textured drawing. E.g. this is a cat: https://x.com/delopsu_com/status/1971726204073136219, brush used: " ".

I'd appreciate it if you could give it a try and tell others about it if you think it might interest them. Also, please share any feedback here or on X.

It turns out there is already (or is emerging) an ASCII benchmark for LLMs.

51

DidMySettingsChange – A tool that checks changed Windows settings #

github.com
14 comments · 10:49 PM · View on HN
Microsoft has been under heavy scrutiny over how it manages Windows, particularly concerning privacy and telemetry settings. Many users find that, after disabling certain settings, these are mysteriously re-enabled after updates or for no apparent reason. DidMySettingsChange is a Python script designed to help users keep track of their Windows privacy and telemetry settings, so they stay in control of their privacy without the hassle of manually checking each setting.

Features

    Comprehensive Checks: Automatically scans all known Windows privacy and telemetry settings.
    Change Detection: Alerts users if any settings have been changed from their preferred state.
    Customizable Configuration: Allows users to specify which settings to monitor.
    Easy to Use: Simple command-line interface that provides clear and concise output.
    Logs and Reports: Generates detailed logs and reports for auditing and troubleshooting.
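At its core, change detection is a diff of current values against a preferred baseline. A platform-neutral sketch of that comparison (the real script reads values from the Windows registry; the setting names below are made up for illustration):

```python
def detect_changes(preferred: dict, current: dict) -> dict:
    """Return settings whose current value differs from the preferred one,
    mapped to a (preferred, current) pair."""
    return {name: (want, current.get(name))
            for name, want in preferred.items()
            if current.get(name) != want}

# Hypothetical setting names, not the script's real ones:
preferred = {"Telemetry": 0, "AdvertisingId": 0}
current = {"Telemetry": 1, "AdvertisingId": 0}

print(detect_changes(preferred, current))  # {'Telemetry': (0, 1)}
```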
14

A Vectorless LLM-Native Document Index Method #

github.com
3 comments · 4:39 PM · View on HN
The word "index" originally came from how humans retrieve info: book indexes and tables of contents that guide us to the right place in documents.

Computers later borrowed the term for data structures: e.g., B-trees, hash tables, and more recently, vector indexes. These are highly efficient for machines, but abstract and unnatural: not something a human, or an LLM, can understand and directly use as a reasoning aid. This creates a gap between how indexes work for computers and how they should work for models that reason like humans.

PageIndex is a new step that "looks back to move forward". It revives the original, human-oriented idea of an index and adapts it for LLMs. Now the index itself (PageIndex) lives inside the LLM's context window: the model sees a hierarchical table-of-contents tree and reasons its way down to the right span, much like a person would retrieve information using a book's index.
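To make the idea concrete, here is a toy illustration of a ToC tree and a greedy descent by title keywords (not PageIndex's actual data model; in practice the LLM itself does the reasoning rather than a scoring function):

```python
# Toy ToC tree: each node has a title and children; nodes carry page spans.
toc = {
    "title": "Handbook",
    "children": [
        {"title": "Installation", "pages": (1, 10), "children": []},
        {"title": "Settings and Privacy", "pages": (11, 30), "children": [
            {"title": "Telemetry settings", "pages": (20, 25), "children": []},
        ]},
    ],
}

def descend(node, query_words):
    """Greedily follow the child whose title shares the most query words."""
    best = max(node["children"],
               key=lambda c: len(query_words & set(c["title"].lower().split())),
               default=None)
    if best and query_words & set(best["title"].lower().split()):
        return descend(best, query_words)
    return node

hit = descend(toc, {"telemetry", "settings"})
print(hit["title"], hit["pages"])  # Telemetry settings (20, 25)
```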

PageIndex MCP shows how this works in practice: it runs as an MCP server, exposing a document's structure directly to LLMs/agents. This means platforms like Claude, Cursor, or any MCP-enabled agent or LLM can navigate the index themselves and reason their way through documents: not with vectors/chunking, but in a human-like, reasoning-based way.

7

I am relaunching the app I made to talk to my Danish girlfriend #

menerdu.com
8 comments · 1:10 PM · View on HN
It has been a while since I shared my app, which I made to practice my Danish while chatting with my girlfriend. I often used GPT to write as much as I could in Danish, replacing the words I didn't know with {English words in curly braces}. While this worked well, typing out the prompt every time got annoying, so I figured a GPT wrapper would work well here.

Last time when I launched, it went viral! I was super happy (and grateful!) for all the feedback and excitement — but it also meant my tokens disappeared fast, and many people ended up trying an app that no longer worked...

So… I took a step back, trimmed a few auxiliary features, and started porting the whole thing over to Supabase with proper rate limiting.

It’s still a work in progress, but the core functionality is up and running — you can write in your target language, add words that you do not know (or context) in {curly braces}, and get a corrected version with explanations.
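The {curly braces} convention is easy to parse mechanically; a minimal sketch of extracting the marked English spans (my guess at the mechanics, not the app's actual code):

```python
import re

msg = "Jeg vil gerne {order} en {cup of coffee}, tak."

# Unknown English words the learner marked in curly braces:
unknown = re.findall(r"\{([^}]*)\}", msg)
print(unknown)  # ['order', 'cup of coffee']

# The prompt to the LLM then asks for Danish replacements of each span,
# plus an explanation of the correction.
```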

I figured I’d share an update now instead of waiting until everything’s perfect — would love to build it in the open and keep you all in the loop this time.

6

Volant – spin up real microVMs in 10 seconds (Docker images or initramfs) #

github.com
0 comments · 10:54 PM · View on HN
I’ve been building Volant, a modular microVM orchestration engine that makes running microVMs feel as simple as Docker.

It supports cloud-init, GPU/VFIO passthrough (yes, you can run AI/ML workloads in isolated microVMs), booting Docker images via a plugin system, and Kubernetes-style deployments with replication, all from a single CLI (soon also a web UI; see below).

Coming soon: a built-in PaaS mode with snapshot-based cold start elimination, sort of like Dokploy, but designed for serverless workloads that boot from memory snapshots instead of containers.

Volant is intentionally a bit opinionated to make microVMs more accessible, but it’s fully extensible for power users.

Check out the README and the docs for more details.

It’s free and open source (under BSL), would love to hear feedback or thoughts from anyone!

tl;dr: 6-second GIF in the README shows the full flow: install → create VM → get HTTP 200.

3

I tried to remove dynamic watermarks from Sora2 videos using AI #

sora2video.us
0 comments · 3:18 AM · View on HN
Hi HN

I recently ran into a common issue with AI-generated videos — dynamic watermarks that move across frames. Instead of manually editing them out, I decided to experiment with AI-based cleanup and wrote a small tool called *Sora2 Video Cleaner*.

It uses frame-level detection and temporal consistency to automatically identify and smooth out watermark regions, restoring a cleaner video output. The goal isn’t to “crack” anything proprietary, but to help creators improve their visuals and repurpose their content more cleanly.
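One simple form of the temporal-consistency idea: since a moving watermark covers any given pixel in only a few frames, the per-pixel median across neighboring frames tends to recover the underlying content. A stripped-down pure-Python sketch (the actual tool presumably operates on OpenCV frames and a detected mask):

```python
from statistics import median

def temporal_fill(frames, y, x):
    """Estimate a clean value for pixel (y, x) as the median across frames.

    A moving watermark obscures a given pixel in only a few frames, so the
    per-pixel temporal median tends to reject the watermark outliers.
    """
    return median(f[y][x] for f in frames)

# Three toy 2x2 grayscale frames; the watermark (value 255) moves around.
frames = [
    [[10, 255], [30, 40]],
    [[10, 20], [255, 40]],
    [[10, 20], [30, 40]],
]
print(temporal_fill(frames, 0, 1))  # median(255, 20, 20) = 20
```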

Try it here: [https://sora2video.us](https://sora2video.us)

Would love feedback from the community, especially around:

- Efficient ways to handle temporal noise in video cleanup
- Integrating real-time processing in web pipelines

Built with Python + OpenCV + some lightweight diffusion-based filters. Happy to share more details if anyone’s curious!

2

TeXlyre: Local-first collaborative LaTeX/Typst editor #

texlyre.github.io
0 comments · 2:34 PM · View on HN
I built TeXlyre, an open-source collaborative editor for LaTeX and Typst that works entirely locally-first. No data leaves your device unless you explicitly share it.

The main pain point I wanted to solve: existing solutions like Overleaf require sending your documents to their servers, and local editors don't handle real-time collaboration well. TeXlyre uses CRDTs for conflict-free collaboration while keeping everything on your machine.
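For readers new to CRDTs: concurrent edits converge because the merge function is commutative, associative, and idempotent, so replicas agree no matter what order updates arrive in. TeXlyre presumably uses a sequence CRDT for text; the simplest illustrative CRDT is a last-writer-wins register:

```python
def lww_merge(a, b):
    """Last-writer-wins register merge over (timestamp, peer_id, value).

    max() on tuples is commutative, associative, and idempotent, so every
    replica converges; ties on timestamp are broken by peer_id.
    """
    return max(a, b)

alice = (5, "alice", "\\section{Intro}")
bob = (7, "bob", "\\section{Introduction}")

merged = lww_merge(alice, bob)
print(merged[2])  # bob's later edit wins on every replica
```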

Key features:

- Works offline by default
- Real-time collaboration via WebRTC (peer-to-peer)
- Support for both LaTeX and Typst
- Your data stays on your device
- Open source

The architecture uses local-first principles - documents are stored in your browser's storage and synced directly between peers when collaborating. No central server sees your content.

Still early but functional. Would love feedback from the HN community, especially around the collaboration UX and any edge cases I haven't considered.

GitHub: https://github.com/texlyre/texlyre
Demo: https://texlyre.github.io

1

TLD Wiki and Explorer #

dotcom.press
0 comments · 1:41 PM · View on HN
Hello! I made a top-level domain explorer and wiki with all 1,592 TLDs in IANA's root zone database.

Would appreciate any feedback or feature ideas. At the very least I'd like to add filter/sort.

Source code is public on my repo and val.town:

1. github.com/pmillspaugh/dotcom.press/pull/16
2. val.town/x/petermillspaugh/tld-wiki

1

Pluely v0.1.5 Released, Open Source Invisible AI Assistant #

pluely.com
0 comments · 11:37 PM · View on HN
Pluely is an open-source alternative to stealth AI products: a go-to AI companion that lives silently on your desktop as a translucent overlay, always on display and one click away. At just ~10 MB, it's ultra-lightweight (27x smaller than Cluely), uses 50% less compute, and launches in under 100 ms. Pluely stays on top of any window without intrusion, fits seamlessly into your workflow, and requires zero configuration: just add your API keys and you're ready to summon AI insights, summaries, translations, and more in an instant.

Power up with Pluely — compatible with millions of LLMs worldwide. 100% free when you bring your own API key.

So far, Pluely is better than all stealth desktop applications, with many features, custom LLMs, stealth features, and more. It's completely open source, 850+ stars on GitHub.

All of it with solo contributions, $0 funding, and endless nights.

This release enhances user interface fluidity, audio processing, AI-powered system prompts, markdown rendering, customizable keyboard shortcuts, and multi-monitor support. It fixes key bugs, adds Pro-exclusive features, and optimizes performance for a smoother experience.

User Interface & Experience

- Drag-and-drop application window
- Fixed window bounce on drag with popover
- Adjustable window transparency (0-100%)
- Custom invisible cursor with smooth animations
- Improved textarea with custom scrollbar
- Enhanced chat UI with persistent state

Keyboard Shortcuts

- Fully customizable global shortcuts
- Hotkey configuration with conflict detection
- Platform-specific default shortcuts (macOS, Windows, Linux)

Multi-Monitor Support

- Multi-monitor window positioning
- Automatic window repositioning at screen boundaries

Audio & Speech Recognition

- Option to disable VAD with manual recording
- Configurable audio recording duration (3-5 minutes)
- Improved VAD configuration panel
- Enhanced system audio capture with better error handling

AI & System Prompts

- AI-generated system prompts for Pro users
- System prompt profiles with CRUD operations

Markdown & Rendering

- LaTeX/KaTeX math rendering in markdown
- Enhanced code syntax highlighting with Shiki
- Improved markdown styling
- Support for inline and block math expressions

Bug Fixes

- Fixed floating tab movement
- Resolved search response overlap
- Fixed Perplexity API key integration
- Corrected Pluely visibility on macOS screen share
- Fixed Ollama fetch error
- Resolved system audio capture on Windows 11
- Added draggable floating UI
- Improved multi-monitor support


Downloads: https://pluely.com/downloads
Website: https://pluely.com
Repo: https://github.com/iamsrikanthnani/pluely

1

Write deep learning code on your laptop and run it instantly on GPUs #

aiengineering.academy
0 comments · 1:36 PM · View on HN
If you’re doing deep learning, you know the struggle: GPUs are expensive, setting up infrastructure is a pain, and most of the time they sit idle while you write or debug code. Multi-GPU setups make things even harder.

I’ve been using Modal for everything — training, fine-tuning, evaluation, and even serving models. It lets me write code locally and run it instantly on serverless GPUs — no Kubernetes, no VM headaches, no idle GPU bills. I can attach persistent volumes for datasets and model weights, scale from 1 to 8 GPUs, and only pay for what I use.

Over time, I learned how to use Modal effectively for real-world ML workflows, and I've put everything I learned into hands-on tutorials that detail the full process:

- Training nanoGPT from scratch by Andrej Karpathy
- Fine-tuning and deploying Gemma 3-4B with UnslothAI
- Multi-GPU Llama 8–70B training with axolotl_ai

1

DIY AI Dev Kit Assembly [video] #

youtube.com
0 comments · 11:57 PM · View on HN
I have been working on this modular open source design for a while and will start taking pre-orders next week on Kickstarter.

https://www.kickstarter.com/projects/ubopod/ubo-pod-hackable...

You can watch a more in-depth video of the functionality here:

https://www.youtube.com/watch?v=zBFd0aLzWAk

This is a fully open-source product (hardware & software). The software stack uses Pipecat under the hood for voice/vision AI orchestration.

The core uses a Redux-style, event-driven reactive design that communicates with various pluggable services. For more on the software side, check the repo here:

https://github.com/ubopod/ubo_app
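A "Redux-based event-driven" design means a single state object updated only by a pure reducer in response to dispatched events; a minimal Python sketch of the pattern (not ubo_app's actual store, state shape, or action names):

```python
def reducer(state, action):
    """Pure function: (old state, event) -> new state. Never mutates."""
    if action["type"] == "volume/set":
        return {**state, "volume": action["value"]}
    if action["type"] == "mic/toggle":
        return {**state, "mic_on": not state["mic_on"]}
    return state  # unknown events leave state unchanged

state = {"volume": 50, "mic_on": False}
for event in [{"type": "volume/set", "value": 80}, {"type": "mic/toggle"}]:
    state = reducer(state, event)  # services dispatch events; UI reacts

print(state)  # {'volume': 80, 'mic_on': True}
```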