Daily Show HN

Show HN for August 19, 2025

28 items
90

OpenAI/reflect – Physical AI Assistant that illuminates your life

github.com
52 comments · 7:48 PM
I have been working on making WebRTC + embedded devices easier for a few years. This is a hackathon project that pulled some of that together. I hope others build on it, or that it inspires them to play with hardware. I worked on it with two other people, and I had a lot of fun with some of the ideas that came out of it.

* Extendable/hackable - I tried to keep the code as simple as possible so others can fork/modify easily.

* Communicate with light. With function calling it changes the light bulb, so it can match your mood or feelings.

* Populate info from clients you control. I wanted to experiment with having it guide you through yesterday/today.

* Phone as control. Setting up new devices can be frustrating. I liked that this didn't require any WiFi setup; it just routed everything through your phone. It's also cool that the device doesn't actually hold any sensitive data.
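The function-calling bullet above can be sketched as a tool definition. This is a hypothetical schema in the OpenAI function-calling format; the actual tool name and parameters in the reflect repo may differ:

```python
# Hypothetical tool schema (OpenAI function-calling format); the real
# project's tool name and parameters may differ.
SET_LIGHT = {
    "type": "function",
    "function": {
        "name": "set_light_color",
        "description": "Change the bulb color to match the user's mood.",
        "parameters": {
            "type": "object",
            "properties": {
                "rgb": {
                    "type": "array",
                    "items": {"type": "integer", "minimum": 0, "maximum": 255},
                    "description": "Red, green, blue components.",
                },
            },
            "required": ["rgb"],
        },
    },
}
```

The model then emits `set_light_color` calls with an `rgb` argument, and the device-side code applies them to the bulb.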

21

Python file streaming 237MB/s on $8/M droplet in 507 lines of stdlib

bellone.com
16 comments · 2:02 PM
Quick Links:

- PyPI: https://pypi.org/project/axon-api/

- GitHub: https://github.com/b-is-for-build/axon-api

- Deployment Script: https://github.com/b-is-for-build/axon-api/blob/master/examp...

Axon is a 507-line, pure Python WSGI framework that achieves up to 237MB/s file streaming on $8/month hardware. The key feature is the dynamic bundling of multiple files into a single multipart stream while maintaining bounded memory (<225MB). The implementation saturates CPU before reaching I/O limits.

Technical highlights:

- Pure Python stdlib implementation (no external dependencies)

- HTTP range support for partial content delivery

- Generator-based streaming with constant memory usage

- Request batching via query parameters

- Match statement-based routing (eliminates traversal and probing)

- Built-in sanitization and structured logging

The benchmarking methodology uses fresh Digital Ocean droplets with reproducible wrk tests across different file sizes. All code and deployment scripts are included.
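A minimal sketch of the generator-based streaming idea (not Axon's actual code; the route and directory are hypothetical): a WSGI app returns a generator, so the server pulls fixed-size chunks and memory stays bounded regardless of file size.

```python
import os

CHUNK_SIZE = 64 * 1024  # stream in 64 KiB chunks to keep memory bounded

def stream_file(path):
    """Yield a file in fixed-size chunks so memory use stays constant
    regardless of file size."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            yield chunk

def app(environ, start_response):
    # Hypothetical route: serve one file from a fixed directory.
    path = os.path.join("/srv/files", environ["PATH_INFO"].lstrip("/"))
    size = os.path.getsize(path)
    start_response("200 OK", [
        ("Content-Type", "application/octet-stream"),
        ("Content-Length", str(size)),
    ])
    # Returning a generator lets the WSGI server pull chunks lazily.
    return stream_file(path)
```

Axon extends this pattern to bundle several files into one multipart stream while keeping the same bounded-memory property.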

14

Lemonade: Run LLMs Locally with GPU and NPU Acceleration

github.com
0 comments · 7:35 PM
Lemonade is an open-source SDK and local LLM server focused on making it easy to run and experiment with large language models (LLMs) on your own PC, with special acceleration paths for NPUs (Ryzen™ AI) and GPUs (Strix Halo and Radeon™).

Why?

There are three qualities needed in a local LLM serving stack, and none of the market leaders (Ollama, LM Studio, or using llama.cpp by itself) delivers all three:

1. Use the best backend for the user's hardware, even if it means integrating multiple inference engines (llama.cpp, ONNXRuntime, etc.) or custom builds (e.g., llama.cpp with ROCm betas).

2. Zero friction for both users and developers, from onboarding to app integration to high performance.

3. Commitment to open source principles and collaboration with the community.

Lemonade Overview:

Simple LLM serving: Lemonade is a drop-in local server that presents an OpenAI-compatible API, so any app or tool that talks to OpenAI's endpoints will "just work" with Lemonade's local models.

Performance focus: Powered by llama.cpp (Vulkan and ROCm for GPUs) and ONNXRuntime (Ryzen AI for NPUs and iGPUs), Lemonade squeezes the best out of your PC - no extra code or hacks needed.

Cross-platform: One-click installer for Windows (with GUI); pip/source install for Linux.

Bring your own models: Supports GGUF and ONNX. Use Gemma, Llama, Qwen, Phi, and others out of the box. Easily manage, pull, and swap models.

Complete SDK: Python API for LLM generation, plus a CLI for benchmarking/testing.

Open source: Apache 2.0 (core server and SDK), no feature gating, no enterprise "gotchas." All server/API logic and performance code is fully open; some software the NPU depends on is proprietary, but we strive for as much openness as possible (see our GitHub for details). Active collabs with GGML, Hugging Face, and ROCm/TheRock.

Get started:

Windows? Download the latest GUI installer from https://lemonade-server.ai/

Linux? Install with pip or from source (https://lemonade-server.ai/)

Docs: https://lemonade-server.ai/docs/

Discord for banter/support/feedback: https://discord.gg/5xXzkMu8Zk

How do you use it?

1. Click on lemonade-server from the start menu.

2. Open http://localhost:8000 in your browser for a web UI with chat, settings, and model management.

3. Point any OpenAI-compatible app (chatbots, coding assistants, GUIs, etc.) at http://localhost:8000/api/v1

4. Use the CLI to run/load/manage models, monitor usage, and tweak settings such as temperature, top-p, and top-k.

5. Integrate via the Python API for direct access in your own apps or research.
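To illustrate the OpenAI-compatible endpoint, here is a stdlib-only sketch that builds a chat-completions request against the local server. The model name is a placeholder (list your installed models via the CLI):

```python
import json
from urllib.request import Request

BASE_URL = "http://localhost:8000/api/v1"

def chat_request(prompt, model="example-local-model"):
    """Build a chat-completions request for the local OpenAI-compatible
    endpoint. The model name is a placeholder, not a guaranteed model id."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("Hello, local model!")
```

With the server running, send it via `urllib.request.urlopen(req)` and parse the JSON response; any OpenAI client library pointed at `BASE_URL` works the same way.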

Who is it for?

Developers: Integrate LLMs into your apps with standardized APIs and zero device-specific code, using popular tools and frameworks.

LLM enthusiasts: Plug and play with Morphik AI (contextual RAG/PDF Q&A), Open WebUI (modern local chat interfaces), Continue.dev (VS Code AI coding copilot), and many more integrations in progress!

Privacy-focused users: No cloud calls. Run everything locally, including advanced multi-modal models if your hardware supports it.

Why does this matter?

Every month, new on-device models (e.g., Qwen3 MoEs and Gemma 3) get closer to the capabilities of cloud LLMs. We predict a lot of LLM use will move local for cost reasons alone. Keeping your data and AI workflows on your own hardware is finally practical, fast, and private: no vendor lock-in, no ongoing API fees, and no sending your sensitive info to remote servers. Lemonade lowers the friction of running these next-gen models, whether you want to experiment, build, or deploy at the edge.

Would love your feedback! Are you running LLMs on AMD hardware? What's missing, what's broken, what would you like to see next? Any pain points from Ollama, LM Studio, or others you wish we solved? Share your stories, questions, or rant at us.

Links:

Download & Docs: https://lemonade-server.ai/

GitHub: https://github.com/lemonade-sdk/lemonade

Discord: https://discord.gg/5xXzkMu8Zk

Thanks HN!

13

Built a memory layer that stops AI agents from forgetting everything

github.com
0 comments · 4:29 PM
Tired of AI coding tools that forget everything between sessions? Every time I open a new chat with Claude or fire up Copilot, I'm back to square one explaining my codebase structure.

So I built something to fix this. It's called In Memoria. It's an MCP server that gives AI tools persistent memory. Instead of starting fresh every conversation, the AI remembers your coding patterns, architectural decisions, and all the context you've built up.

The setup is dead simple: `npx in-memoria server` then connect your AI tool. No accounts, no data leaves your machine.

Under the hood it's TypeScript + Rust with tree-sitter for parsing and vector storage for semantic search. Supports JavaScript/TypeScript, Python, and Rust so far.

It originally started as a documentation tool, but I had a realization: AI doesn't need better docs, it needs to remember stuff. I spent the last few months rebuilding it from scratch as this memory layer.

It's working pretty well for me but curious what others think, especially about the pattern learning part. What languages would you want supported next?

Code: https://github.com/pi22by7/In-Memoria

10

Rucat – Cat for Prompt Engineers

github.com
1 comment · 11:41 PM
Aloha HN - I'm redbeard, ex-CoreOS homey, RISC-V guy, and general free software wingnut.

Like many of us, I've increasingly found myself using AI for my work. One of the challenges for me is that I prefer to stay on the command line, and I'm often working on remote machines over SSH. This often means moving off the keyboard to use the mouse, or a complex chording of characters, to capture output from the buffer. Capturing a single file isn't too challenging on localhost (`cat | wl-copy`, `cat | pbcopy`, etc.), but this is still really focused on single files. If you're capturing multiple files, the content (of course) gets concatenated together.

"There's got to be a better way!"

After realizing that many terminals support the ANSI control code OSC 52, and that my preferred terminal, Kitty, supports OSC 5522, I put together "rucat", a cat-inspired tool for capturing multiple files with additional semantic data.

Rucat excels at capturing multiple files quickly and without worrying about serialization loss. It's written in Rust to intentionally avoid a number of memory safety and string parsing issues, as well as to provide a path to cross-platform support. The tool is currently available in binary and source form in the repository. Feedback is welcomed!
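For readers unfamiliar with the mechanism, here is a minimal sketch of how an OSC 52 copy works (illustrative only, not rucat's implementation):

```python
import base64
import sys

def osc52_copy(text):
    """Return the OSC 52 escape sequence that asks the terminal emulator
    to place `text` on the system clipboard ('c' = clipboard target)."""
    payload = base64.b64encode(text.encode()).decode()
    return f"\033]52;c;{payload}\007"

# Because the sequence is interpreted by the *local* terminal emulator,
# writing it to stdout copies to the local clipboard even over SSH.
sys.stdout.write(osc52_copy("hello from a remote host"))
```

This is why no mouse selection or `wl-copy`/`pbcopy` pipe on the remote side is needed: the terminal on your side of the SSH session does the copying.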

7

Hacker News London Meetup #4

0 comments · 10:56 AM
Hi HN, please join us for our second event in 2025.

This is a space where Hacker News readers can come along and discuss technology, science, and business.

Location: The Masque Haunt (Wetherspoon) 168–172 Old Street, London, UK

Time: Tuesday, August 26, 6pm - 9pm

Meetup (64 attendees so far): https://www.meetup.com/hackernewslondon/events/310296581

lu.ma (new): https://lu.ma/xb70gefx

This is an unofficial, community organized event from Hacker News readers for Hacker News readers.

6

I'm building a "work visa" API for AI agents

agentvisa.dev
2 comments · 11:03 AM
Hey HN,

I’m Chris, a solo dev in Melbourne AU. For the past month I've been spending my after work hours building AgentVisa. I'm both excited (and admittedly nervous) to be sharing it with you all today.

I've been spending a lot of time thinking about the future of AI agents and the more I experimented, the more I realized I was building on a fragile foundation. How do we build trust into these systems? How do we know what our agents are doing, and who gave them permission?

My long-term vision is to give developers an "Agent Atlas" - a clear map of their agentic workforce, showing where they're going and what they're authorized to do. The MVP I'm launching today is that first step.

The core idea is simple: stop giving agents a permanent "passport" (a static API key) and start giving them a temporary "work visa" for each specific task. AgentVisa is a simple API that issues secure, short-lived credentials, linking an agent's task back to a specific user and a set of permissions.

To make this more concrete, I've put together a demo you can run locally showing how an agentic customer service bot uses AgentVisa to access an internal API. You can see it here: https://github.com/AgentVisa/agentvisa-customer-support-demo

Under the hood it’s JWTs for now. But the product isn't the token - it's the simple, secure workflow for delegating authority. It's a pattern I needed for my own projects and I'm hoping it's useful to you too.
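To make the "short-lived credential" idea concrete, here is a stdlib-only sketch of minting and verifying an HS256 JWT with an expiry and scopes. This is not AgentVisa's API - just an illustration of the pattern, with made-up key and claim names:

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"  # hypothetical signing key

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_visa(user, scopes, ttl=300):
    """Mint a short-lived HS256 JWT tying an agent task to a user and scopes."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({
        "sub": user, "scopes": scopes, "exp": int(time.time()) + ttl,
    }).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{claims}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

def verify_visa(token):
    """Check the signature and expiry; return the claims or None."""
    header, claims, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{claims}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    payload = json.loads(base64.urlsafe_b64decode(claims + "=" * (-len(claims) % 4)))
    return payload if payload["exp"] > time.time() else None
```

The point of the "visa" framing is visible here: the token expires on its own and carries the delegating user and permitted scopes, unlike a static API key.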

I know there's a "two-sided problem" here - this is most useful when the server an agent connects to can also verify the agent's authenticity. Right now it's ideal for securing your own internal services, which is where I started. My hope is that over time this can be built into a standard that more services adopt.

I'm keen for feedback from fellow devs working with AI agents. Does this problem of agent identity and auditability resonate with you? Is the "visa vs. passport" concept clear? What would you want to see on that "Agent Atlas" I mentioned?

The Python SDK is open and on GitHub, and there's a generous free tier so you can build with it right away. I'll be here to answer as best I can any questions you have. Thanks for checking it out!

SDK: https://github.com/AgentVisa/agentvisa-python Demo: https://github.com/AgentVisa/agentvisa-customer-support-demo

Note: for us down under it’s getting late! So if I miss your comment while asleep, I’ll reply first thing in the morning AEST.

6

Skilfut – 138 UI components to help devs build faster and prettier

5 comments · 6:45 AM
Hi HN,

I’m César, a non-developer who started building prototypes using vibe coding (AI + prompts instead of code). While doing this, I realized a big issue: it’s incredibly hard to get a good design. Most sites end up looking the same.

So I built Skilfut — a SaaS that provides a library of 138 UI components, each with its associated prompt. You can copy/paste them into your no-code or AI-coding workflow and get functional, styled blocks right away.

- Already 138 components available

- New components added every week (+200 planned)

- Designed in collaboration with designers from companies like Uber

- Goal: help vibe coders / indie hackers ship faster while standing out visually

I soft-launched on Reddit and was surprised: over 100 people joined the waitlist in just 2 days. Now the V1 is live: https://skilfut.com

I'd love your feedback:

- Do you think UI libraries like this can really help vibe coders / AI devs?

- What would you want to see improved or added?

6

GiralNet – A Privacy Network for Your Team (Not the World)

github.com
2 comments · 3:12 PM
Hello! I've been developing this project for some time, and I'm happy that it can finally see the light of day. I love Tor, but its biggest caveat is that the nodes are run by strangers, which requires placing a certain level of trust in, well, complete strangers.

For this reason, I decided to build this private network inspired by the Onion router. Unlike other public networks, GiralNet is not for anonymous connections to strangers. It is built for small teams or groups who want privacy but also need a level of trust. It assumes that the people running the nodes in the network are known and verifiable. This provides a way for a group to create their own private and secure network, where the infrastructure is controlled and the people behind the nodes are accountable. The goal is to provide privacy without relying on a large, anonymous public network.

In terms of technical details, it is a SOCKS5 proxy that routes internet traffic through a series of other computers. It does this by wrapping your data in multiple layers of encryption, just like the onion router does it. Each computer in the path unwraps one layer to find the next destination, but never knows the full path. This makes it difficult for any single party to see both where the traffic came from and where it is going.
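The layered wrapping can be sketched as follows. The XOR "cipher" here is a toy for illustration only; a real implementation would use an authenticated cipher:

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream. Illustration only - a real
    onion router would use an authenticated cipher, not this."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def wrap(message: bytes, hop_keys):
    """Encrypt in reverse hop order, so the first node peels the
    outermost layer and never sees the inner ones."""
    for key in reversed(hop_keys):
        message = toy_cipher(key, message)
    return message

def unwrap_one(message: bytes, key: bytes) -> bytes:
    return toy_cipher(key, message)  # XOR stream cipher is its own inverse
```

Each node calls `unwrap_one` with only its own key, learning just the next hop; only after the last node's layer is removed does the plaintext emerge.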

I will gladly answer any questions you might have, thank you.

5

Sam Altman: "Scaling LLMs won't get us to AGI" – maybe we found a path

agigr.id
1 comment · 6:33 PM
Hi HN,

At the 2023 Hawking Fellowship at Cambridge Union, a student asked Sam Altman: “To get to AGI, can we just keep min-maxing language models, or is there another breakthrough that we haven’t really found yet?”

Altman’s response was telling: “We need another breakthrough… I don’t think that doing that will get us to AGI. If, for example, superintelligence can’t discover novel physics, I don’t think it’s a superintelligence. Teaching it to clone the behavior of humans and human text – I don’t think that’s going to get there.”

If Altman is right that scaling alone is a dead end, then this might be the beginning of the real race - not just to build bigger LLMs, but to invent the architectures that can discover the unknown.

This echoes a long-standing question in AI research: what lies beyond scaling language models in the pursuit of true general intelligence?

Did we finally stumble onto the next breakthrough Altman was hinting at?

We think so.

Collective AGI: The Civilizational Path to AGI

Rooted in History: Human intelligence didn’t become powerful in isolation. A single brain is limited. What made human intelligence general and world-changing was the civilizational process - networking, collaboration, culture, institutions, governance, commerce, law, ethics - compounding across generations.

A Pattern, a Fractal: Ecosystems evolve through interdependent species; human intelligence evolved through interdependent minds. Civilization was humanity’s first great outcome of collective intelligence, recursively amplifying the reach of individual cognition.

Applied to AGI: In the same way, AGI won’t emerge from a single artifact or mono-worldview. It must arise through plurality of AI forms, multi-agent networks, evolving institutions, and shared participation - the same mechanisms that scaled human intelligence, now applied to artificial intelligence.

If that's true, the next breakthrough isn't bigger LLMs. It's building civilizational ecosystems in which AI societies scale - systems that don't just predict text, but evolve new knowledge, new institutions, and new ways of thinking.

I'm Kanishka Nithin. I've been working on AI for over a decade, and today I'm excited (and a little nervous) to share something deeply personal and long in the making: AGI Grid - our open effort to build, together, what we call Collective AGI - the open, fastest, safest, and most efficient path to Artificial General Intelligence.

AGI Grid: an open, civilizational infrastructure ecosystem for Collective AGI.

Website: https://www.agigr.id

Vision paper: https://resources.agigr.id

The Collective AGI ecosystem consists of 12 open-source projects, listed in the docs: https://docs.agigr.id

What we'd love from HN:

- Feedback: Does this framing of Collective AGI resonate?

- Critique: Where do you see the holes?

- Collaboration: If you're working on agents, collective intelligence, or diverse cognitive architectures, let's talk.

This project grew out of both crisis and conviction, and we believe the next leap in AI won’t come from a single giant model, but from grids of diverse forms of intelligence, aligned and cooperating.

-Team AGI Grid: A Collective AGI

5

OnPair – String compression with fast random access (Rust, C++)

github.com
0 comments · 3:20 PM
I’ve been working on a compression algorithm for fast random access to individual strings in large collections.

The problem came up when working with large in-memory database columns (emails, URLs, product titles, etc.), where low-latency point queries are essential. With short strings, LZ77-based compressors don’t perform well. Block compression helps, but block size forces a trade-off between ratio and access speed.

Some existing options:

- BPE: good ratios, but slow and memory-heavy

- FSST (discussed here: https://news.ycombinator.com/item?id=41489047): very fast, but weaker compression

This solution provides an interesting balance (more details in the paper):

- Compression ratio: similar to BPE

- Compression speed: 100–200 MiB/s

- Decompression speed: 6–7 GiB/s
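To illustrate the random-access interface (not OnPair's algorithm - OnPair trains a shared dictionary for much better ratios on short strings; this sketch just compresses each string independently with zlib and keeps an offset index):

```python
import zlib

class CompressedColumn:
    """Illustrative sketch: compress each string independently and keep an
    offset index, so any element can be decompressed without touching its
    neighbors. Mirrors only the access pattern, not OnPair's compressor."""

    def __init__(self, strings):
        self.blob = bytearray()
        self.offsets = [0]
        for s in strings:
            self.blob += zlib.compress(s.encode())
            self.offsets.append(len(self.blob))

    def __getitem__(self, i):
        chunk = self.blob[self.offsets[i]:self.offsets[i + 1]]
        return zlib.decompress(bytes(chunk)).decode()
```

The point-query property comes from the offset index: `col[i]` touches only one compressed span, with no block to scan, which is what makes this layout attractive for in-memory database columns.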

I’d love to hear your thoughts — whether it’s workloads you think this could help with, ideas for API improvements, or just general discussion. Always happy to chat here on HN or by email.

---

Resources:

- Paper: https://arxiv.org/pdf/2508.02280

- Rust: https://github.com/gargiulofrancesco/onpair_rs

- C++: https://github.com/gargiulofrancesco/onpair_cpp

4

I built a simple restaurant bill splitting app with receipt reading

billchoppa.com
1 comment · 2:43 PM
Hi all, I made this simple app with no frills, just to calculate bills easily when splitting with friends. I fight with friends too often over how to split bills, so this app just calculates it and no one needs to choose a splitting method.

I hope it's useful for you too. Let me know if you like it / don't like it, enjoy!

4

Wake word detection with custom phrases without model training

github.com
0 comments · 8:19 PM
I was recently working on wake words detection and came up with a different approach to the problem, so I wanted to share what I have built.

I started working on a project for a smart assistant with MCP integration on a Raspberry Pi, and on the wake word part I found that the available open source solutions are somewhat limited. You either go with classical MFCC + DTW solutions, which don't provide good precision, or you use model-based solutions that require a pre-trained model and don't let users choose their own wake words.

So I combined the advantages of these two approaches and implemented my own solution. It uses Google's speech-embedding model to extract speech features from audio, which is much more resilient to noise and voice tone variations, and works across different speaker voices. Those features are then compared with DTW, which helps avoid temporal misalignment.

Benchmarking on the Qualcomm Keyword Speech Dataset shows 98.6% accuracy for same-speaker detection and 81.9% for cross-speaker (though it's not designed for that use case). Converting the model to ONNX reduced CPU usage on my Raspberry Pi down to 10%.
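The comparison step can be sketched with classic dynamic-programming DTW over per-frame embedding vectors (the frame format is assumed; this is not the project's exact code):

```python
def dtw_distance(a, b, dist):
    """Classic dynamic-programming DTW between two feature sequences.
    Allows frames to stretch/compress in time, so two utterances of the
    same phrase spoken at different speeds still align cheaply."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def euclidean(x, y):
    return sum((p - q) ** 2 for p, q in zip(x, y)) ** 0.5
```

Detection then reduces to thresholding `dtw_distance(enrolled_frames, live_frames, euclidean)`, where each frame is a speech-embedding vector rather than raw MFCCs.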

Surprisingly I haven't seen (at least yet) anyone else using this approach. So I wanted to share it and get your thoughts - has anyone tried something similar, or see any obvious issues I might have missed?

GitHub - https://github.com/st-matskevich/local-wake

3

Awaken – A nofap tracker with video checkins and community

awakenhub.org
2 comments · 7:38 AM

I built Awaken, a habit tracker focused on nofap.

Features include:

• Calendar-based daily logging (status, journal, video check-ins)

• Progress charts & challenges

• Community feed with comments/likes

• Optional member content: meditation, Taoist/TCM practices, daily audio

It's still an MVP, but fully usable. I'd love your feedback to improve it. awakenhub.org
3

Bells and Whistles – a new type of daily logic puzzle

puzzmallow.com
0 comments · 10:09 AM
This is a new variant on nonograms (or picross) I have designed. Like nonograms you fill in rows and columns based on the number clues that indicate how many cells to fill and their groupings. My original concept is that there are two possible icons to fill each cell with. The rows indicate bells and the columns whistles.

The puzzle is daily and is always logically solvable. There is also a 'mini' mode for a quicker, smaller puzzle.

I've named this nonogram variant a "kanogram".

3

Memeclip.ai – AI-powered meme maker that turns text into memes

memeclip.ai
2 comments · 2:46 AM
Hey HN!

I built MemeClip.ai - an AI meme maker that transforms any text into viral memes

Why I built this:

Most people want to express themselves with memes but face real barriers: they don't know what's trending, can't think of funny captions, or don't understand which templates work for different situations. I wanted to solve this by building an AI that understands both meme culture and context.

Technical highlights:

- Semantic template matching: text and image embeddings find the best template match for any concept.

- Vision-language model integration: the AI grasps visual context and meme structure to generate fitting captions.

What's different:

Traditional meme generators like Imgflip are just template galleries with text editors - you pick a template and write captions yourself. MemeClip reverses this: describe your situation and our AI finds the perfect template and generates the caption automatically.
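The semantic matching step can be sketched as a nearest-neighbor search by cosine similarity over embeddings (the vectors and template names below are hypothetical toy values):

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def best_template(text_vec, templates):
    """Pick the template whose embedding is most similar to the input
    text's embedding. `templates` maps name -> embedding vector."""
    return max(templates, key=lambda name: cosine(text_vec, templates[name]))
```

In a real pipeline, `text_vec` and the template embeddings would come from a text/image embedding model, and the winning template is then passed to a vision-language model for captioning.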

Would love to hear what HN thinks! Happy to answer any questions about the tech stack, AI approach, or meme philosophy

Thanks for trying it out!

3

Side Space – An Arc-like AI-powered Vertical tabs manager for Chrome

chromewebstore.google.com
0 comments · 3:35 AM
Side Space is an AI-powered browser extension for managing tabs in a vertical side panel. It is designed to help you organize and manage open tabs more efficiently, making it easier to categorize, group, and switch between tabs for work, life, hobbies, and more.

Key features include:

• Vertical Spaces: Organize tabs into separate spaces (like work, shopping, or school) for better focus.

• AI-Powered Grouping: Automatically group tabs using AI, or by domain, to reduce clutter.

• Cloud Sync: Sync your spaces and tabs across devices by logging into your account.

• Tab Management Tools: Pin tabs, search tabs, suspend tabs to save memory, de-duplicate tabs, and save/restore tab groups.

• Customization: Change the color palette of spaces and switch between light/dark modes.

• Autosave & Restore: All tabs are autosaved, so you can restore them anytime.

Side Space offers a free plan (up to 5 spaces and 1,000 URLs) and a one-time paid plan for unlimited spaces and URLs. It’s available for Chrome, Brave, and Edge browsers.

If you’re tired of messy, disorganized tabs, Side Space helps you keep everything neat and easy to find, all from a convenient sidebar.

3

AI-powered CLI that translates natural language to FFmpeg

0 comments · 6:02 PM
I got tired of spending 20 minutes Googling ffmpeg syntax every time I needed to process a video. So I built aiclip - an AI-powered CLI that translates plain English into perfect ffmpeg commands.

Instead of this: ffmpeg -i input.mp4 -vf "scale=1280:720" -c:v libx264 -c:a aac -b:v 2000k output.mp4

Just say this: aiclip "resize video.mp4 to 720p with good quality"

Key features:

- Safety first: Preview every command before execution

- Smart defaults: Sensible codec and quality settings

- Context aware: Scans your directory for input files

- Interactive mode: Iterate on commands naturally

- Well-tested: 87%+ test coverage with comprehensive error handling

What it can do:

- Convert video formats (mov to mp4, etc.)

- Resize and compress videos

- Extract audio from videos

- Trim and cut video segments

- Create thumbnails and extract frames

- Add watermarks and overlays

GitHub: https://github.com/d-k-patel/ai-ffmpeg-cli

PyPI: https://pypi.org/project/ai-ffmpeg-cli/

Install: pip install ai-ffmpeg-cli

I'd love feedback on the UX and any features you'd find useful. What video processing tasks do you find most frustrating?

2

I built a tool to make quitting your goals socially impossible

waitlist.makeitla.st
1 comment · 10:13 AM
Public Task Timer: Create, Track, and Share Your Goals with the World

Set public tasks with custom timers and personal links. Share your progress with friends, family, and the community. Get motivated by public accountability and inspire others with your achievements.

Track Progress: Monitor your journey with visual progress tracking

Watch your progress unfold with beautiful visual tracking and milestone celebrations. Set checkpoints, log daily activities, and see your advancement in real-time. Every step forward is celebrated, keeping you motivated and focused on your goal.

Stay Accountable: Share your journey publicly and let community support drive your success

Make your commitment public with your personal link. Share your progress with friends, family, and the community. Receive encouragement, celebrate milestones together, and let the power of public accountability push you toward your goals.

2

Cheatsheet++. Free Tech Interview Q&As (now with voting and comments)

cheatsheet-plus-plus.com
0 comments · 4:55 PM
I've been working on Cheatsheet++, a free site with 50k+ tech interview Q&As across multiple topics.

New features:

- Users can now upvote/downvote answers

- Add comments to share alternative approaches

The goal is to move from a static Q&A bank to a living, community-driven prep tool.

I'd love feedback from HN:

- Do you think community voting improves interview prep, or is it better to keep curated, single answers?

- Any pitfalls you see in opening answers up to comments?

1

Rocket Journal – voice-first AI journaling for your 2 AM thoughts

rocketjournal.app
0 comments · 1:23 PM
Hi HN! I just launched Rocket Journal, a voice-first journaling app designed to make reflection as easy as talking.

Key ideas:

- Rant mode: vent freely, no judgment

- Reflect mode: guided prompts to help dig deeper

- AI insights: mood tracking, patterns, helpful takeaways

- Stickers & scrapbook: journaling feels lighter and fun

- Privacy-first: your data isn't used to train models

I built this because journaling apps often feel like "work." I wanted something that felt like having a safe, responsive companion for those late-night thoughts or walks outside.

I'd love feedback from the HN community:

- Does this feel useful or different from other journaling apps you've tried?

- Any concerns about using voice for private reflection?

- What features would make you want to use this daily?

Thanks!
1

AI for Embedded Software Debugging

marketplace.visualstudio.com
0 comments · 1:23 PM
Hey! Johannes here, and I've built Enduin.

While AI is changing how software is built in many domains, it lacks adoption in embedded software.

If you look at what embedded engineers do with their development time, it's mostly debugging rather than writing code, and current AI dev tools don't help here. They are just not made for this domain.

That's why I built Enduin, which integrates with GDB through the VS Code debugger, executes code, sets breakpoints, and finds bugs that way. You can also upload documents to be used as context. Enduin currently doesn't change your codebase; it just helps find bugs. It focuses on embedded code, but basically any code that can be debugged with GDB works (C/C++, Rust, ...).

You can try it through the VS Code marketplace.

Is that helpful to you? What would you expect from such a tool?

Here's the link to the extension: https://marketplace.visualstudio.com/items?itemName=Enduin.e...

Here's also a demo on YouTube: https://youtu.be/RNaiwj85eyU

Looking forward to your feedback!