Daily Show HN

Show HN posts for August 25, 2025

27 posts
692

Base, an SQLite database editor for macOS

menial.co.uk
180 comments · 2:17 PM · View on HN
I recently released v3 of Base, my SQLite editor for macOS.

The goal of this app is to provide a comfortable native GUI for SQLite, without it turning into a massive IDE-style app.

The coolest features are:

- It can handle full alteration of tables, which is quite finicky to do manually in SQLite.

- It has a more detailed display of column constraints than most editors. Each constraint is shown as an icon when active, with full details available on clicking the icon.
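For context, SQLite's ALTER TABLE supports only a few operations, so a full alteration means rebuilding the table. A minimal sketch of that manual dance (which the app automates), using Python's sqlite3 and a hypothetical `users` table where a column's type changes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age TEXT);
    INSERT INTO users VALUES (1, 'Ada', '36');
""")

# Changing a column's type means rebuilding the table:
# 1. create a new table with the desired schema,
conn.execute("CREATE TABLE users_new (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
# 2. copy the rows across, converting as needed,
conn.execute("INSERT INTO users_new SELECT id, name, CAST(age AS INTEGER) FROM users")
# 3. drop the old table and rename the new one into place.
conn.execute("DROP TABLE users")
conn.execute("ALTER TABLE users_new RENAME TO users")

print(conn.execute("SELECT * FROM users").fetchall())  # [(1, 'Ada', 36)]
```

The documented recipe has more steps (foreign keys, indexes, triggers); this shows only the core rebuild.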

This update also adds support for attaching databases, which is a bit fiddly with macOS sandboxing.

I'd love to hear any feedback or answer any questions.

107

Async – Claude Code and Linear and GitHub PRs in One Opinionated Tool

github.com
46 comments · 1:21 PM · View on HN
Hi, I’m Mikkel and I’m building Async, an open-source developer tool that combines AI coding with task management and code review.

What Async does:

  - Automatically researches coding tasks, asks clarifying questions, then executes code changes in the cloud
  - Breaks work into reviewable subtasks with stack diffs for easier code review
  - Handles the full workflow from issue to merged PR without leaving the app
Demo here: https://youtu.be/98k42b8GF4s?si=Azf3FIWAbpsXxk3_

I’ve been working as a developer for over a decade now. I’ve tried all sorts of AI tools out there including Cline, Cursor, Claude Code, Kiro and more. All are pretty amazing for bootstrapping new projects. But most of my work is iterating on existing codebases where I can't break things, and that's where the magic breaks down. None of these tools work well on mature codebases.

The problems I kept running into:

  - I'm lazy. My Claude Code workflow became: throw a vague prompt like "turn issues into tasks in Github webhook," let it run something wrong, then iterate until I realize I could've just coded it myself. Claude Code's docs say to plan first, but it's not enforced and I can't force myself to do it.
  - Context switching hell. I started using Claude Code asynchronously - give it edit permissions, let it run, alt-tab to work on something else, then come back later to review. But when I return, I need to reconcile what the task was about, context switch back, and iterate. The mental overhead kills any productivity gains.
  - Tracking sucks. I use Apple Notes with bullet points to track tasks, but it's messy. Just like many other developers, I hate PM tools but need some way to stay organized without the bloat.
  - Review bottleneck. I've never shipped Claude Code output without fixes, at minimum stylistic changes (why does it always add comments even when I tell it not to?). The review/test cycle caps me at maybe 3 concurrent tasks.
So I built Async:

  - Forces upfront planning, always asks clarifying questions and requires confirmation before executing
  - Simple task tracking that imports GitHub issues automatically (other integrations coming soon!)
  - Executes in the cloud, breaks work into subtasks, creates commits, opens PRs
  - Built-in code review with stacked diffs - comment and iterate without leaving the app
  - Works on desktop and mobile
It works by using a lightweight research agent to scope out tasks and come up with requirements and clarifying questions as needed (e.g., "fix the truncation issue" - "Would you like a tooltip on hover?"). After you confirm requirements, it executes the task by breaking it down into subtasks and then working commit by commit. It uses a mix of Gemini and Claude Code internally and runs all changes in the background in the cloud.

You've probably seen tools that do pieces of this, but I think it makes sense as one integrated workflow.

This isn't for vibe coders. I'm building a tool that I can use in my day-to-day work. Async is for experienced developers who know their codebases and products deeply. The goal is to make Async the last tool developers need to build something great. Still early and I'm iterating quickly. Would love to know what you think.

P.S. My cofounder loves light mode, I only use dark mode. I won the argument so our tool only supports dark mode. Thumbs up if you agree with me.

73

Gonzo – A Go-based TUI for log analysis (OpenTelemetry/OTLP support)

github.com
15 comments · 7:44 PM · View on HN
We built Gonzo to make log analysis faster and friendlier in the terminal. Think of it like k9s for logs — a TUI that can ingest JSON, text, or OpenTelemetry (OTLP) logs, highlight and boil up patterns, and even run AI models locally or via API to summarize logs. We’re still iterating, so ideas and contributions are welcome!
53

Timep – A next-gen profiler and flamegraph-generator for bash code

github.com
12 comments · 5:17 AM · View on HN
Note: this is an update to [this](https://news.ycombinator.com/item?id=44568529) "Show HN" post.

timep is a state-of-the-art [debug-]trap-based bash profiler that is efficient and extremely accurate. Unlike other profilers, timep records:

1. per-command wall-clock time

2. per-command CPU time, and

3. the hierarchy of parent function calls /subshells for each command

The wall-clock + CPU time combination allows you to determine whether a particular command is CPU-bound or IO-bound, and the hierarchical logging gives you a map of how the code actually executed.
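The CPU-bound vs. IO-bound inference comes down to comparing those two numbers. A minimal illustration in Python (not timep's implementation, which is pure bash plus a loadable builtin):

```python
import time

def profile(fn):
    """Measure a call's wall-clock and CPU seconds, the two numbers timep records per command."""
    w0, c0 = time.perf_counter(), time.process_time()
    fn()
    return time.perf_counter() - w0, time.process_time() - c0

# A blocking call burns wall-clock time but almost no CPU: IO-bound.
wall, cpu = profile(lambda: time.sleep(0.2))
print(cpu / wall < 0.5)  # True -> not CPU-bound
```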

The standout feature of timep is that it will take these records and automatically generate a bash-native flamegraph (that shows bash commands, not syscalls).

------------------------------------------------

USAGE

timep is extremely easy to use - just source the `timep.bash` file from the repo and add "timep" in front of whatever you want to profile. For example:

    . /path/to/timep.bash
    timep ./some_script
    echo "stdin" | timep some_function
ZERO changes need to be made to the code being profiled!

------------------------------------------------

EXAMPLES

[test code that will be profiled](https://github.com/jkool702/timep/blob/main/TESTS/timep.test...)

[output profile for that test code](https://github.com/jkool702/timep/blob/main/TESTS/OUTPUT/out...)

[flamegraph for that test code](https://github.com/jkool702/timep/blob/main/TESTS/OUTPUT/fla...)

[flamegraph from a "real world" test of "forkrun", a parallelization engine written in bash](https://github.com/jkool702/timep/blob/main/TESTS/FORKRUN/fl...)

In the "forkrun test", 13 different checksums were computed for ~670k small files on a ramdisk using 28 parallel workers. This was repeated twice. In total, this test ran around 67,000 individual bash commands. [This is its `perf stat` (without timep)](https://github.com/jkool702/timep/blob/main/TESTS/FORKRUN/pe...).

------------------------------------------------

EFFICIENCY AND ACCURACY

The forkrun test (see the "examples" section above) was about as demanding a workload as one can have in bash. It fully utilized 24.5 cores on a 14c/28t i9-7940x CPU, racking up >840 seconds of CPU time in ~34.5 seconds of wall-clock time. When profiling this group of 67,000 commands with timep:

1. the time it took for the code to run with the debug-trap instrumentation was ~38 seconds, an increase of just over 10%. CPU time had a similar increase.

2. the time profile was ready at +2 minutes (1 minute 15 seconds after the profiling run finished)

3. the flamegraphs were ready at +5 minutes (4 minutes 15 seconds after the profiling run finished)

Note that timep records both "start" and "stop" timestamps for every command, and the debug-trap instrumentation runs between one command's "stop" timestamp and the next command's "start" timestamp, meaning the error in the profile's timings is far less than the 10% overhead. Comparing the total (sys+user) CPU time that perf stat gave (without timep) against the CPU time timep gives (from summing the CPU time of all ~67,000 commands), the difference is virtually always less than 0.5%, and often less than 0.2%. I've seen as low as 0.04%, which is a third of a second on a run that took ~850 seconds of CPU time.

------------------------------------------------

MAJOR CHANGES SINCE THE LAST "SHOW HN" POST

1. CPU time is now recorded too (instead of just wall-clock time). This is done via a loadable builtin that calls `getrusage` and (if available) `clock_gettime` to efficiently and accurately determine the CPU time of the process and all its descendants.
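For illustration, the same accounting is reachable from Python's stdlib on Unix; `RUSAGE_CHILDREN` covers waited-for descendants, which is the idea behind the builtin (the actual builtin is C and also consults `clock_gettime`):

```python
import resource
import subprocess
import sys

# getrusage(RUSAGE_CHILDREN) reports CPU time of all waited-for
# descendants; timep's builtin does the same accounting from C.
before = resource.getrusage(resource.RUSAGE_CHILDREN)
subprocess.run([sys.executable, "-c", "sum(i * i for i in range(10**6))"])
after = resource.getrusage(resource.RUSAGE_CHILDREN)
child_cpu = (after.ru_utime + after.ru_stime) - (before.ru_utime + before.ru_stime)
print(f"child CPU time: {child_cpu:.3f}s")
```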

2. the .so file required by the loadable builtin mentioned in #1 is built directly into the script as an embedded compressed base64 sequence. I also developed the bash-native compression scheme that it uses. The .so files for x86_64, aarch64, ppc64le, and i686 are all included. I'm hoping to add armv7 soon as well. The flamegraph-generator Perl script is also embedded, making the script 100% fully self-contained. NOTE: these embedded base64 strings include both sha256 and md5 checksums of the resulting .so file, which are verified on extraction.
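The embed-with-checksum idea can be sketched as follows (a simplified stand-in for timep's actual scheme, which also compresses the payload and carries an md5 digest alongside the sha256):

```python
import base64
import hashlib

def embed(blob: bytes) -> str:
    # Prepend the payload's digest so extraction can verify integrity.
    return hashlib.sha256(blob).hexdigest() + ":" + base64.b64encode(blob).decode()

def extract(embedded: str) -> bytes:
    digest, b64 = embedded.split(":", 1)
    blob = base64.b64decode(b64)
    assert hashlib.sha256(blob).hexdigest() == digest, "corrupt payload"
    return blob

payload = b"\x7fELF... pretend this is a compiled .so"
assert extract(embed(payload)) == payload
```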

3. the flamegraph generation has been completely overhauled. The flamegraphs now a) are colored based on runtime (hot colors = longer runtime), b) desaturate colors for commands where CPU time << wall-clock time (e.g., blocking reads, sleep, wait, ...), and c) use a runtime-weighted CDF color mapping that ensures, regardless of the distribution of the underlying data, that the resulting flamegraph has a roughly equal amount of each color in the colorspace (where "equal" means "the same number of pixels show each color"). timep also combines multiple flamegraphs (wall-clock time vs. CPU time, and the full vs. folded set of traces) by vertically stacking them into a single SVG image, giving "dual stack" and "quad stack" flamegraphs.
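A runtime-weighted CDF mapping can be sketched like this (a simplified stand-in, not timep's actual Perl implementation): each frame's color bucket is its position in the cumulative runtime distribution, so each bucket ends up covering a roughly equal share of total runtime, i.e., pixels.

```python
def cdf_colors(runtimes, n_colors=5):
    """Assign each frame a color bucket via the runtime-weighted empirical CDF,
    so every bucket covers roughly the same share of total runtime (pixels)."""
    total = sum(runtimes)
    order = sorted(range(len(runtimes)), key=lambda i: runtimes[i])
    buckets, cum = [0] * len(runtimes), 0.0
    for i in order:
        cum += runtimes[i]
        buckets[i] = min(int(cum / total * n_colors), n_colors - 1)
    return buckets

# A heavily skewed distribution still uses the top of the palette:
print(cdf_colors([1, 1, 1, 2, 5, 90], n_colors=3))  # [0, 0, 0, 0, 0, 2]
```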

4. the post-processing workflow has been almost completely rewritten, making it more robust, easier to understand/maintain, and much faster. The forkrun test linked above (which ran 67,000 commands) previously took ~20 minutes to process. With the new version, you can get a profile in 2 minutes or a profile + flamegraph in 5 minutes - a 4x to 10x speedup!

42

stagewise (YC S25) – the frontend coding agent for real codebases

stagewise.io
18 comments · 4:47 PM · View on HN
Hey HN, we're Glenn and Julian, and we're building stagewise (https://stagewise.io), a frontend coding agent that lives inside your app’s dev mode and makes changes in your local codebase. We’re compatible with any framework and any component library. Think of it like a v0 or Lovable that works locally and with any existing codebase.

You can spawn the agent into locally running web apps in dev mode with `npx stagewise` from the project root. The agent then lets you click on HTML elements in your app, enter prompts like "increase the height here", and it will implement the changes in your source code.

Before stagewise, we were building a vertical SaaS for logistics from scratch and loved using prototyping tools like v0 or Lovable to get to the first version. But when switching from v0/Lovable to Cursor for local development, we felt like the frontend magic was gone. So we decided to build stagewise to bring that same magic to local development. The first version of stagewise just forwarded a prompt with browser context to existing IDEs and agents (Cursor, Cline, ...) and went viral on X after we open sourced it. However, the APIs of existing coding agents were very limiting, so we figured that building our own agent would unlock the full potential of stagewise.

Since our last Show HN, we launched a few very important features and changes: You now have a proprietary chat history with the agent, an undo button to revert changes, and we increased the amount of free credits AND reduced the pricing by 50%. Julian made a video about all these changes, showing you how stagewise works: https://x.com/goetzejulian/status/1959835222712955140/video/....

So far, we've seen great adoption from non-technical users who wanted to continue building their Lovable prototype locally. We personally use the agent almost daily to make changes to our landing page and to build the UI of new features on our console (https://console.stagewise.io). If you have an app running in dev mode, simply `cd` into the app directory and run `npx stagewise` - the agent should appear, ready to play with.

We're very excited to hear your feedback!

31

Spart – A Rust library for fast spatial search with Python bindings

0 comments · 1:33 PM · View on HN
Hi everyone,

I've made an open-source library for fast spatial search in Rust.

It's called Spart, and it currently provides the following features:

- Five tree implementations: Quadtree, Octree, Kd-tree, R-tree, and R*-tree

- Python bindings (`pyspart` on PyPI)

- Fast k-nearest neighbor (kNN) and radius search

- Bulk data loading for efficient tree construction
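For readers unfamiliar with the two query types, here is what kNN and radius search compute, as a brute-force Python sketch (Spart's trees answer the same queries without scanning every point; this is not pyspart's API):

```python
from math import dist

def knn(points, query, k):
    """Brute-force k-nearest-neighbour: the reference answer a Quadtree/Kd-tree accelerates."""
    return sorted(points, key=lambda p: dist(p, query))[:k]

def radius_search(points, query, r):
    """All points within Euclidean distance r of the query."""
    return [p for p in points if dist(p, query) <= r]

pts = [(0, 0), (1, 1), (5, 5), (2, 2)]
print(knn(pts, (0, 0), 2))            # [(0, 0), (1, 1)]
print(radius_search(pts, (0, 0), 3))  # [(0, 0), (1, 1), (2, 2)]
```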

Project's GitHub repo: https://github.com/habedi/spart

22

Spoon-Bending – a framework for analyzing GPT-5 alignment behavior

github.com
3 comments · 6:48 AM · View on HN
I put together a repo called Spoon-Bending. It is not a jailbreak or hack; it is a structured logical framework for studying how GPT-5 responds under different framings compared to earlier versions. The framework maps responses into zones of refusal, partial analysis, or free exploration, making alignment behavior more reproducible and easier to study systematically.

The idea is simple: by treating prompts and outputs as part of a logical schema, you can start to see objective patterns in how alignment shifts across versions. The README explains the schema and provides concrete tactics for testing it.

14

Bitcoin Challenge. Try to steal a plain text private key you can use

app.redactsure.com
13 comments · 4:16 PM · View on HN
Hi HN, I'm releasing my round one public demo of a new browser security system I've been developing.

There's a real Bitcoin private key (worth $20) in plaintext at app.redactsure.com. You can copy it, paste it, delete it, move it around - full control. But you can't see the actual characters or extract them.

The challenge: Break the protection and take the Bitcoin. First person wins, challenge ends.

Details:

- Requires email verification (prevents abuse, no account needed)
- 15-minute time limit per session
- Currently US only for the demo (latency)
- Verify the Bitcoin is real: https://redactsure.com/bitcoinchallenge

Technical approach:

- Cloud-hosted browser with a real-time NER model
- Webpages are unmodified
- Think of it as selective invisibility for sensitive data. You can interact with it normally, just can't see or extract it

Looking for feedback on edge cases in the hiding/protection algorithm. Happy to answer questions about the implementation.

14

SecretMemoryLocker – File Encryption Without Static Passwords

github.com
3 comments · 4:39 PM · View on HN
I built SecretMemoryLocker (https://secretmemorylocker.com), a file encryption tool that generates keys dynamically from your answers to personal questions instead of using a static master password. This makes offline brute-force attacks much more difficult. Think of it as a password manager meets mnemonic seed recovery, but without storing any sensitive keys on disk.

Why? I kept losing master passwords and wanted a solution that wasn't tied to a single point of failure. I also wanted to create a "digital legacy" that my family could access only under specific conditions. The core principle is knowledge-based encryption: the key only exists in memory when you provide the correct answers.

Status:

* MVP is ready for Windows (.exe).
* Linux and macOS support is planned.
* UI is available in English, Spanish, and Ukrainian.

Key Features:

* No Static Secrets: No master password or seed phrase is ever stored. The key is reconstructed on the fly.

* Knowledge-Based Key Generation: The final encryption key is derived from a combination of your personal answers and file metadata.

* Offline Brute-Force Resistance: Uses MirageLoop, a decoy system that activates when incorrect answers are entered. Instead of decrypting real data, it generates an endless sequence of AI-created questions from a secure local database, creating an illusion of progress while keeping your real data untouched.

* Offline AI Generation Mode: Optional offline Q&A generator (prototype).

How It Works (Simplified):

1) Files are packed into an AES-256 encrypted ZIP archive.

2) A JSON key file stores the questions in an encrypted chain. Each subsequent question is encrypted with a key derived from the previous correct answer and the file's hash. This forces you to answer them sequentially.

3) The final encryption key for the ZIP file is derived by combining the hashes of all your correct answers. The key derivation formula looks like this:

  K_final = SHA256(H(answer1+file_hash) + H(answer2+file_hash) + ...)
(Note: We are aware that a fast hash like SHA256 is not ideal for a KDF. We plan to migrate to Argon2 in a future release to further strengthen resistance against brute-force attacks.)
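A direct Python transcription of the stated formula (illustrative only; the sequential encryption of the question chain is not shown):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def derive_key(answers, file_hash: bytes) -> bytes:
    # K_final = SHA256(H(answer1 + file_hash) + H(answer2 + file_hash) + ...)
    return H(b"".join(H(a.encode() + file_hash) for a in answers))

fh = H(b"bytes of the encrypted archive")
k1 = derive_key(["blue", "paris"], fh)
k2 = derive_key(["blue", "london"], fh)
assert k1 != k2 and len(k1) == 32  # one wrong answer changes the whole key
```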

To encrypt, you provide a file. This creates two outputs: your_file.txt → your_file_SMLkey.json + your_file_SecretML.zip

To decrypt, you need both files and the correct answers.

Install & Quick Start: Download the EXE from GitHub Releases (no dependencies needed):

https://github.com/SecretML/SecretMemoryLocker/releases

Encrypt:

  SecretMemoryLocker.exe --encrypt "C:\docs\important.pdf"
Decrypt:

  SecretMemoryLocker.exe --decrypt "C:\docs\important_SMLkey.json"
I would love to get your feedback on the concept, the user experience, and any security assumptions I've made. Thanks!
9

RAG-Guard: Zero-Trust Document AI

github.com
1 comment · 9:42 PM · View on HN
Hey HN,

I wanted to share something I’ve been working on: *RAG-Guard*, a document AI that’s all about privacy. It’s an experiment in combining Retrieval-Augmented Generation (RAG) with AI-powered question answering, but with a twist — your data stays yours.

Here’s the idea: you can upload contracts, research papers, personal notes, or any other documents, and RAG-Guard processes everything locally in your browser. Nothing leaves your device unless you explicitly approve it.

### How It Works

- *Zero-Trust by Design*: Every step happens in your browser until you say otherwise.

- *Local Document Processing*: Files are parsed entirely on your device.

- *Local Embeddings*: We use [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v...) via Transformers.js to generate embeddings right in your browser.

- *Secure Storage*: Documents and embeddings are stored in your browser’s encrypted IndexedDB.

- *Client-Side Search*: Vector similarity search happens locally, so you can find relevant chunks without sending anything to a server.

- *Manual Approval*: Before anything is sent to an AI model, you get to review and approve the exact chunks of text.

- *AI Calls*: Only the text you approve is sent to the language model (e.g., Ollama).

No tracking. No analytics. No “training on your data.”
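The client-side search step is plain vector similarity. A language-agnostic sketch in Python (the app itself does this in the browser over Transformers.js embeddings):

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_chunks(query_vec, chunk_vecs, k=2):
    # Rank every stored chunk embedding against the query, entirely locally.
    ranked = sorted(range(len(chunk_vecs)),
                    key=lambda i: cosine(query_vec, chunk_vecs[i]),
                    reverse=True)
    return ranked[:k]

chunks = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
print(top_chunks([1.0, 0.1], chunks, k=2))  # [0, 1]
```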

### Why I Built This

I’ve been fascinated by the potential of RAG and AI-powered question answering, but I’ve always been uneasy about the privacy trade-offs. Most tools out there require you to upload sensitive documents to the cloud, where you lose control over what happens to your data.

With RAG-Guard, I wanted to see if it was possible to build something useful without compromising privacy. The goal was to create a tool that respects your data and puts you in control.

### Who It’s For

If you’re someone who works with sensitive documents — contracts, research, personal notes — and you want the power of AI without the risk of unauthorized access or misuse, this might be for you.

### What’s Next

This is still an experiment, and I’d love to hear your thoughts. Is this something you’d use? What features would make it better?

You can check it out here: https://mrorigo.github.io/rag-guard/

Looking forward to your feedback!

4

Citation Needed

citation-needed.org
4 comments · 2:37 PM · View on HN
Copy-paste a snippet from social media that refers to a study, and it finds the paper PDF (hopefully). It tries all sorts of APIs (Crossref, OpenAlex, Unpaywall), an LLM, and Google Scholar to find the most likely paper PDF (if it is open access). Please let me know if you have ideas to improve it! Yes, it can be slow.
4

RefForge – A WIP modern, lightweight reading list/reference manager

github.com
0 comments · 4:39 PM · View on HN
Hi HN! I built RefForge, a lightweight, desktop-first reading list and reference manager (WIP). It's a local-first app built with Next.js + Tauri and stores data in a small SQLite DB. I’m sharing it to get feedback on the UX, feature priorities, and architecture before I invest in more advanced features.

This is an experimental project where I am trying to build something from scratch using AI and see how far I can build it without writing a single line of code manually.

What does it offer?

- Manage your reading list and references in a simple, project-based UI
- Local SQLite storage (no cloud; your data stays on your machine)
- Add / edit / delete references, tag them, rate priority, group by project
- Built as a Tauri desktop app with a Next.js/React frontend

Why did I build it?

Existing reference managers can be heavy or opinionated. I wanted a small, fast, local-first tool focused on reading lists and quick citation exports that I can extend with features I need (PDF attachments, DOI lookup, BibTeX export, lightweight sync).

Current features

- Add / edit / delete references
- Tagging and project organization
- Priority and status fields
- Small, searchable local DB (WIP: full-text search planned)
- Ready-to-extend codebase (TypeScript + React + Tauri + SQLite)

2

Multi-hop WireGuard chaining, speed-tested and API-controlled

github.com
0 comments · 1:02 PM · View on HN
Hey HN, I built VPN-Chainer to make multi-hop WireGuard setups a bit easier for the average VPN user.

It chains multiple WireGuard endpoints together, can run speed tests with `--fastest`, installs as a systemd service (hooks included), and lets you rotate tunnels via a lightweight API. Imagine in the movies when they say the connection is bouncing around the world. Similar, but not exactly like that.

Grab it from the repo or install via the systemd-ready installer, no signup, landing pages, or cart required. Stack is shell (pre/post hooks) + Python + WireGuard.

It’s early-stage, so hook integrations and rotation policies could use polish, your feedback would help steer improvements.

Thanks, and take it easy!

2

InterceptSuite – MitM proxy that handles StartTLS upgrades

github.com
0 comments · 5:54 AM · View on HN
I built InterceptSuite to solve a problem I kept running into as a security engineer: there are virtually no good MITM proxies for non-HTTP protocols, and the few that exist completely break when protocols upgrade from plaintext to TLS.

The core technical challenge was implementing universal TLS upgrade detection. When PostgreSQL, MySQL, SMTP, or any other protocol issues a StartTLS command, most tools just give up. They'll show you the initial plaintext handshake, then nothing when TLS encryption kicks in.

InterceptSuite automatically detects when ANY protocol switches to TLS and handles the interception seamlessly. It's not just another HTTP proxy - it's designed specifically for database connections, email protocols, and desktop thick client application traffic that uses TLS or TLS upgrades.
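Detecting the upgrade moment is the crux. A simplified Python sketch of what such detection can look like (hypothetical heuristics, not InterceptSuite's actual logic):

```python
def is_tls_upgrade(first_bytes: bytes) -> bool:
    """Classify the moment a plaintext stream requests a switch to TLS."""
    # Textual upgrade commands (SMTP/IMAP-style STARTTLS, FTP "AUTH TLS")
    if first_bytes.strip().upper().startswith((b"STARTTLS", b"AUTH TLS")):
        return True
    # PostgreSQL SSLRequest: message length 8, magic request code 80877103
    if first_bytes[:8] == b"\x00\x00\x00\x08\x04\xd2\x16\x2f":
        return True
    return False

assert is_tls_upgrade(b"STARTTLS\r\n")
assert is_tls_upgrade(b"\x00\x00\x00\x08\x04\xd2\x16\x2f")
assert not is_tls_upgrade(b"SELECT 1;")
```

After such a trigger, a proxy would wrap both sides of the connection with TLS and keep relaying.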

Technical highlights:

- Automatic TLS upgrade handling for any protocol, including StartTLS flows like SMTP, PostgreSQL, etc.
- Cross-platform GUI (Windows, macOS, Linux)
- Real-time traffic interception and modification
- Project management for security assessments

The Standard Edition is free and open source. There's also a Pro version with additional features like PCAP export and project management.

Would love feedback.

1

Tool to optimize keyboard layout by counting characters

github.com
0 comments · 9:45 PM · View on HN
Hi, I created this CLI tool to help people decide which keys to remap where, based on what kinds of characters appear in your directory.

It counts characters and sequences of characters, so you can see what kinds of typing patterns appear in a given project or programming language. It outputs tables, JSON, and CSV, and has a nice TUI with lots of filtering options.
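Counting characters and character sequences is straightforward with the stdlib; a minimal Python sketch of the idea (not this tool's implementation, which is a separate CLI):

```python
from collections import Counter

def count_patterns(text: str):
    """Tally single characters and adjacent character pairs (bigrams)."""
    chars = Counter(text)
    bigrams = Counter(a + b for a, b in zip(text, text[1:]))
    return chars, bigrams

chars, bigrams = count_patterns("fn main() { println!(); }")
print(chars.most_common(3))
print(bigrams.most_common(2))
```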

1

5-Minute Timer That Runs in Background with Web Workers

5minutetimer.top
0 comments · 1:55 AM · View on HN
Hey HN! I built this simple 5-minute timer because I got frustrated with existing ones that either:

- Stop working when browser tabs go to the background
- Have ads and tracking
- Are overcomplicated for such a simple need

Key features:

- Uses Web Workers to keep timing accurate even when the tab is inactive
- Zero tracking, no ads, no registration
- Keyboard shortcuts (Space, R, F for fullscreen)
- Works offline once loaded
- Custom durations up to 24 hours

Tech stack: vanilla JavaScript + Web Workers + modern CSS. Built in ~2 weeks as a side project.

Perfect for Pomodoro technique, workout intervals, meditation, or just timing anything!

Any feedback or feature suggestions welcome!

1

Vibe Manager – Multi-model planning and fresh docs -> one plan (macOS)

vibemanager.app
0 comments · 1:22 PM · View on HN
Built Vibe Manager after getting tired of agents drifting in large repos. It finds the right files, pulls current docs, then runs multiple model providers in parallel and merges the plans into one so changes land in the correct places.

What’s different:

- Multi-model planning: run several LLMs in parallel; Vibe merges them into a single, reviewable implementation plan.
- File selection: surfaces the few files that matter (respects .gitignore, filters large/binary files).
- Fresh docs: pulls the latest vendor/framework docs into the plan.
- Local-first privacy: your repo stays on your machine; you choose what gets sent to providers.
- Quick capture: voice notes + optional screen recording to state intent fast.

Would love feedback on: accuracy of file selection on your repo; where the merged plan helps/fails; confusing bits in onboarding.

Try it: macOS download on the landing page; free starter credits. Windows next.

1

We are Ubik – A new AI workspace for turning ideas into citations

app.ubik.studio
0 comments · 4:43 PM · View on HN
Hi HN,

Ubik is built to strengthen and speed up the research process without overgenerating work. With powerful academic search and PDF annotation tools, Ubik's AI agents can find, cite, and highlight quotes in relevant academic papers. Our core belief (my friend's and mine) is that AI should help bring out the best in humans. What does this mean?

Reading, annotating, and citing real work (PDFs). Upload or search for papers and add them to your workspace. Ubik Agents can create notes that you can reference using @ when prompting, to increase accuracy and efficacy. Ubik helps turn ideas into high-level work with citations. If you aren't writing research papers or don't need to search for academic sources, Ubik is still a great general workspace with access to 20+ models, web search, a text editor (canvas), and unique PDF highlighting tools, all meant to help humans develop critical thinking skills while using GenAI.

We are Ubik.