Daily Show HN


Show HN for April 22, 2026

29 posts
82

Broccoli, one shot coding agent on the cloud #

github.com
49 comments · 4:09 PM · View on HN
Hi HN — we built Broccoli, an open-source harness for taking coding tasks from Linear, running them in isolated cloud sandboxes, and opening PRs for a human to review.

We’re a small team, and our main company supplies voice data. But we kept running into the same problem with coding agents. We’d have a feature request, a refactor, a bug, and some internal tooling work all happening at once, and managing that through local agent sessions meant a lot of context switching, worktree juggling, and laptops left open just so tasks could keep running.

So we built Broccoli. Each task runs end to end in its own isolated cloud sandbox: Broccoli checks out the repo, uses the context in the ticket, works through an implementation, runs tests and review loops, and opens a PR for someone on the team to inspect.

Over the last four weeks, 100% of the PRs from non-developers have shipped via Broccoli, which is a safer and more efficient route. For developers on the team, the share is around 60%: more complicated features need more back-and-forth design with Codex / Claude Code and get shipped manually, using the same set of skills locally.

Our implementation uses:

1. Webhook deployment: GCP
2. Sandbox: GCP or Blaxel
3. Project management: Linear
4. Code hosting & CI/CD: GitHub

Repo: https://github.com/besimple-oss/broccoli

We believe you should invest in your own coding harness if coding is an essential part of your business. That's why we decided to open-source it as an alternative to all the cloud coding agents out there. Would love to hear your feedback on this!

13

Netlify for Agents #

netlify.ai
4 comments · 4:57 PM · View on HN
I launched Netlify with a Show HN more than 11 years ago, for humans.

Today we're launching our agent-first version of Netlify.

Super early days for this, but I expect it to become as important as our original launch over time.

It's as hard to perfect these flows as it was to perfect some of the initial human DX flows, since the agents are non-deterministic and keeps changing and evolving, and we'll have more to show soon on our eval tooling for this.

Try it out with an agent, and we would love feedback on what works and what doesn't as we keep iterating on making Netlify better for our new agent friends.

12

Everest Drive – a multiplayer spaceship crew simulator in the browser #

everestdrive.io
4 comments · 5:57 PM · View on HN
Hi HN! I'm working on an open-world multiplayer space sim with submarine-warfare-inspired combat. Crew a ship, haul cargo, run heists, hunt your foes with passive and active sensors. Browser-based, free, no install.

Some of its features:

  - Submarine-style passive sensors. Contacts start as a bearing line (direction, no distance), resolve into an uncertainty circle, then into a full track. You triangulate over time by moving.
  - Silent running. Cut your emissions and witnesses can't ID you.
  - Newtonian flight. No drag, no auto-brake. Flip 180° and burn to stop.                                                   
  - Boarding combat. Dock with another ship and fight through it room by room.
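The passive-sensor model above implies bearing-only triangulation: two bearings to the same contact, taken from different positions, pin it to the intersection of the two bearing lines. A minimal sketch (function names are illustrative, not Everest Drive's actual code):

```python
import math

def triangulate(o1, b1, o2, b2):
    """Intersect two bearing rays; bearings in radians, 0 = +x axis."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve o1 + t1*d1 == o2 + t2*d2 for t1 via Cramer's rule.
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        return None  # parallel bearings: move and take another fix
    rx, ry = o2[0] - o1[0], o2[1] - o1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (o1[0] + t1 * d1[0], o1[1] + t1 * d1[1])

# Contact at (10, 10): bearing 45 deg from the origin, 90 deg from (10, 0).
fix = triangulate((0, 0), math.radians(45), (10, 0), math.radians(90))
print(fix)  # approximately (10.0, 10.0)
```

This is why "you triangulate over time by moving": from a single position a bearing alone gives direction but no range.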
Architecture:

  - The server is a single Rust module compiled to WASM, running inside SpacetimeDB.
  - Clients subscribe to rows in the schema and get live deltas over websocket; writes go through reducers (transactional Rust functions). No REST, no custom netcode, no client-side authority.                                                              
  - Client is Svelte 5 + plain HTML5 canvas 2D. No game engine, no WebGL.
https://imgur.com/a/V8cHrdd

Very early, plenty of rough edges. Would love to hear what breaks for you:

https://everestdrive.io

11

Gemini Plugin for Claude Code #

github.com
7 comments · 4:41 AM · View on HN
I built a plugin that lets Claude Code delegate work to Gemini CLI.

I started this after finding myself reaching for Gemini more often for long-context repo work. I have especially been liking Gemini's codebase investigator for long context.

This is inspired by openai/codex-plugin-cc.

It supports code review and adversarial review. Under the hood, it's Gemini CLI over ACP.

Would love feedback from people using Claude Code, Gemini CLI, or ACP. I am especially curious whether this feels useful outside my own workflow.

It's a great combo with Opus 4.7 + Gemini 3.1 workflows.

9

ShellTalk brings deterministic text-to-bash #

barrasso.me
0 comments · 4:21 PM · View on HN
Hi HN! I built a CLI tool called ShellTalk for macOS, Linux, and web (WebAssembly) that maps English text to the corresponding Bash commands.

ShellTalk is written in Swift and available under the Apache 2.0 license on GitHub. I was inspired a few weeks ago after reading the Meta-Harness paper and seeing a tool called Hunch that did something similar using the Apple Foundation model. I often forget flag names and orders, but I wanted something that worked consistently. The 3B AFM worked surprisingly well with Hunch, but it felt slow and sometimes slight changes in what I wrote would result in very different outputs.

ShellTalk attempts to match the input to an intent category (Git, file I/O, etc.), then to a template, and finally to slot-fill and adapt to the specific command version and BSD vs. GNU syntax. It has a few other tricks, including using NSSpellChecker on macOS to auto-correct certain typos, and it scores the output on safety (i.e., whether the action is destructive or irreversible).
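The intent → template → slot-fill pipeline can be sketched with a rule table. Everything here (the categories, patterns, templates, and the crude safety check) is invented for illustration; ShellTalk's real matcher is richer and also adapts to BSD/GNU differences.

```python
import re

RULES = [
    # (intent, input pattern, bash template)
    ("git",     re.compile(r"^commit (?:with message )?(?P<msg>.+)$"),
                'git commit -m "{msg}"'),
    ("file_io", re.compile(r"^find files named (?P<name>\S+)$"),
                "find . -name '{name}'"),
]

DESTRUCTIVE = ("rm ", "mv ", "dd ", "> ")  # toy safety scoring

def to_bash(text: str):
    for intent, pattern, template in RULES:
        m = pattern.match(text.strip().lower())
        if m:
            cmd = template.format(**m.groupdict())
            safe = not any(tok in cmd for tok in DESTRUCTIVE)
            return {"intent": intent, "command": cmd, "safe": safe}
    return None  # no template matched: refuse rather than guess

print(to_bash("find files named *.log"))
# {'intent': 'file_io', 'command': "find . -name '*.log'", 'safe': True}
```

The key property, in contrast to an LLM, is that the same input always produces the same command, and unmatched input fails loudly instead of hallucinating flags.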

It's clearly far from perfect, but has very tight testing and validation cycles compared to using an LLM, is very portable, and might eventually work in other languages or environments like Windows. I'm curious to hear what others think.

8

Aide – A customizable Android assistant (voice, choose your provider) #

aideassistant.com
10 comments · 3:13 AM · View on HN
Aide is an Android app that replaces your default digital assistant. It can register as your default assistant, so corner-swipe and power-button-hold summon it instead of the Google assistant. I wanted to do something other than Google, but ChatGPT and Claude's integration couldn't do anything on device, so I built this.

Bring your own key for Claude, OpenAI, or any OpenAI-compatible endpoint (Ollama, LM Studio, vLLM). Keys are encrypted on-device; conversations go straight to your provider.

Free: chat on any provider, multi-provider switching, web search + URL fetching as built-in tools, custom system prompt, markdown.

Pro ($6.99 launch week, $9.99 after, one-time): voice input + streaming TTS, voice-first overlay, photo/file attachments, device actions (SMS/calendar/alarms, every intent confirmed), screen context, Home Assistant integration.

8

MemFactory: Unified Inference and Training Framework for Agent Memory #

arxiv.org
0 comments · 2:26 AM · View on HN
Memory-augmented Large Language Models (LLMs) are essential for developing capable, long-term AI agents. Recently, applying Reinforcement Learning (RL) to optimize memory operations, such as extraction, updating, and retrieval, has emerged as a highly promising research direction. However, existing implementations remain highly fragmented and task-specific, lacking a unified infrastructure to streamline the integration, training, and evaluation of these complex pipelines. To address this gap, we present MemFactory, the first unified, highly modular training and inference framework specifically designed for memory-augmented agents. Inspired by the success of unified fine-tuning frameworks like LLaMA-Factory, MemFactory abstracts the memory lifecycle into atomic, plug-and-play components, enabling researchers to seamlessly construct custom memory agents via a "Lego-like" architecture. Furthermore, the framework natively integrates Group Relative Policy Optimization (GRPO) to fine-tune internal memory management policies driven by multi-dimensional environmental rewards. MemFactory provides out-of-the-box support for recent cutting-edge paradigms, including Memory-R1, RMM, and MemAgent. We empirically validate MemFactory on the open-source MemAgent architecture using its publicly available training and evaluation data. Across the evaluation sets, MemFactory improves performance over the corresponding base models on average, with relative gains of up to 14.8%. By providing a standardized, extensible, and easy-to-use infrastructure, MemFactory significantly lowers the barrier to entry, paving the way for future innovations in memory-driven AI agents.
7

gcx – The Official Grafana Cloud CLI #

github.com
0 comments · 12:08 AM · View on HN
Hi HN,

We’re excited to share gcx, a new CLI we’ve been building for Grafana Cloud.

With the rise of agentic coding tools like Claude Code and Codex we're building faster than ever, but these agents are often blind to what’s actually happening in production.

gcx brings the full power of Grafana Cloud observability to your terminal. Query production. Investigate alerts. Let the Assistant root-cause issues. Ship fixes with observability built in, without leaving your editor. gcx also comes packaged with a skills bundle that allows agents to see and act on your production telemetry. You can ask an agent to root-cause a latency spike, and it can actually fetch the telemetry, analyze the spans, and suggest a fix, all while having the full context of your codebase.

Do check it out and give us feedback!

Github link: https://github.com/grafana/gcx

6

Irregular German Verbs – a simple app, no ads or tracking #

bacist.com
3 comments · 5:44 AM · View on HN
I built this based on requirements from my wife, who has been teaching German at a university for over 25 years. She genuinely needed a tool like this for this exact topic and started using it in her teaching right away.

The main idea was to make learning as comfortable and focused as possible. That’s why the app has:

– no ads
– no tracking
– no subscriptions
– A1–A2 verbs are available for free

5

Open Chronicle – Local Screen Memory for Claude Code and Codex CLI #

github.com
1 comment · 3:06 AM · View on HN
I built an open source version of OpenAI Chronicle.

Some design decisions I made:

1. Local first: OCR uses Apple Vision, and summarization supports local AI providers via the Vercel AI SDK. Nothing leaves your computer.
2. Multiple providers: exposes MCP so any coding agent can use it.
3. Swift menu bar app: efficient, low footprint.
4. Blocklisted apps: password managers, messaging apps (Slack, WhatsApp, Messenger), and mail clients are on the default blocklist.

Current limitations:

1. Mac only. Mac-first is a feature.
2. Small local models with weak structured-output support will fail on generateObject.
3. Retrieval is LIKE-query keyword search. FTS and optional embeddings are on the list.
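For concreteness, LIKE-query retrieval amounts to a plain substring scan over stored summaries: no ranking, no stemming, no phrase queries. A sketch against an in-memory SQLite table (the schema and column names here are illustrative, not Open Chronicle's actual ones):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE captures (ts TEXT, app TEXT, summary TEXT)")
db.executemany("INSERT INTO captures VALUES (?, ?, ?)", [
    ("2026-04-22T10:00", "Xcode",  "Debugging the OCR pipeline in Vision"),
    ("2026-04-22T10:05", "Safari", "Reading Apple Vision framework docs"),
])

def search(keyword: str):
    # Substring match only; this is what FTS / embeddings would improve on.
    return db.execute(
        "SELECT ts, app, summary FROM captures WHERE summary LIKE ?",
        (f"%{keyword}%",),
    ).fetchall()

print(search("OCR"))
```

Swapping the table for an FTS5 virtual table would keep the same query shape while adding tokenized matching and ranking.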

Demo video (6s): https://youtu.be/V75tnvIdovc

Curious what you think is the right balance between blocklists and allowlists. Happy to answer anything.

5

CatchAll – slowest web search API that outperforms everything on recall #

platform.newscatcherapi.com
1 comment · 2:09 PM · View on HN
Hey HN,

Artem and Maksym from NewsCatcher here.

Some of you know us: we started six years ago as two freshly graduated economics students who decided to build the best news API product.

We started NewsCatcher thinking the market for news APIs was so big that we could build a self-serve platform and get millions of $29 users.

Obviously, that assumption was wrong. We pivoted to serve enterprises and had success with it.

But we are hackers at heart, and we want to serve hackers.

We haven't used our Launch HN yet, so consider this our smoke test.

We're looking for feedback and power users rather than revenue. So, happy to provide enough credits for any HN user who finds CatchAll useful.

CatchAll is built for one thing: retrieving every matching event from the web. The use cases that fit it are ones where missing events have real consequences — funding and M&A monitoring, regulatory and compliance feeds (FDA approvals, SEC filings, policy changes), cybersecurity incident tracking, supply chain signals.

If your pipeline consumes structured records and the answer to your query is "find all of them," that's where it works. It's not the right tool for small, bounded queries that return 5 high-precision results.

The 15-minute job time is a direct consequence of the pipeline depth: analyze, fetch, cluster, validate, extract, deduplicate. You're not getting a ranked list of links; you're getting a verified record set.
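The deduplication stage at the end of that pipeline can be illustrated with a key-normalization pass: near-identical records (same event, different phrasing/punctuation) collapse onto one canonical record. This is a toy strategy invented for illustration, not CatchAll's actual dedup logic:

```python
import re

def norm_key(title: str) -> str:
    """Collapse case and punctuation so near-duplicate titles share a key."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def deduplicate(records):
    seen, out = set(), []
    for rec in records:
        key = norm_key(rec["title"])
        if key not in seen:
            seen.add(key)
            out.append(rec)  # keep the first record per event
    return out

records = [
    {"title": "Acme Corp raises $30M Series B",  "source": "wire"},
    {"title": "Acme Corp Raises $30M Series B!", "source": "blog"},  # duplicate
    {"title": "Acme Corp acquires Widget Inc",   "source": "filing"},
]
print(len(deduplicate(records)))  # 2
```

A recall-first system has to run clustering like this across every fetched page, which is part of why the jobs take minutes rather than milliseconds.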

Our latest benchmark run: https://newscatcherapi.com/blog-posts/web-search-api-benchma...

4

A free tool for non-technical folks to easily publish a website #

weejur.com
9 comments · 4:06 PM · View on HN
It's easier than ever for anyone to make a website, even without paying for a drag-and-drop builder like Squarespace. But there are still too many barriers for your average non-technical person to publish a site on the web.

I'd bet most people don't know there are free ways to host a website, and even if they find an explainer, technical platforms like Cloudflare and GitHub (let alone the command line) can be intimidating.

So I made weejur, which is basically a super simple UI front-end for GitHub Pages. You log in with OAuth, and then you can just paste HTML or upload files to publish a website. If you don't have a GitHub account, you can sign up right in the OAuth flow. It's completely free, and you can view the source here [1].
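Under the hood, "paste HTML → published site" maps onto the GitHub contents API, which takes base64-encoded file content via `PUT /repos/{owner}/{repo}/contents/{path}`. A sketch of building that request payload (weejur's actual implementation may differ; the repo name below is an assumption):

```python
import base64
import json

def pages_payload(html: str, message: str = "Publish site") -> dict:
    """Build the body for GitHub's create-file contents endpoint."""
    return {
        "message": message,
        "content": base64.b64encode(html.encode()).decode(),  # API requires base64
    }

payload = pages_payload("<h1>Hello, web!</h1>")
print(json.dumps(payload))

# Sending it would look roughly like (requires an OAuth token with repo scope):
#   PUT https://api.github.com/repos/USER/USER.github.io/contents/index.html
#   Authorization: Bearer <token>
```

Hiding exactly this kind of plumbing behind an OAuth login is the value for the non-technical user.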

My hope is this makes it easier for people who don't know anything about web hosting to create and share their own websites.

Feel free to try it out and please share any questions/ideas/feedback!

[1] https://github.com/weejur/weejur/

4

OpenDeck – DIY MIDI Platform Based on Zephyr RTOS #

github.com
3 comments · 8:44 PM · View on HN
This is OpenDeck, my own platform for building custom MIDI controllers. I've been working on it for more than 10 years, and now I've made the biggest change so far: I've rewritten the entire codebase to use Zephyr, an RTOS I've been working with professionally for a few years now. This allows me to support many newer boards and complex features, and to modernize the codebase in general. I'd been limited on all fronts before.

The platform allows for simple building and configuring of custom MIDI controllers, mainly because it requires no coding. Load the firmware onto the board, configure it via the web configurator, and you're good to go. The number of configurable features is also huge. I have extensive documentation covering usage, configuration, flashing of various boards, customization of your own boards, etc., all available on GitHub.

The platform supports a large number of boards: not only my own custom designs, which I sell, but also boards like the Raspberry Pi Pico 1 and 2, STM32F4 Discovery, Teensy 4 and 4.1, nRF52840DK, etc. Lots of choices. Before Zephyr, I rolled my own HAL for various platforms and YAML-based peripheral configuration, both of which are now replaced with Zephyr and its various subsystems and tools, primarily the device tree. I must admit, however, that I do not enjoy C at all, so most of the stuff I use is wrapped in an external C++ library (zlibs) used as a west module on which OpenDeck depends. The project itself is written in C++20. Currently I'm using Zephyr 4.4 and its MIDI 2.0 driver in MIDI 1 compatibility mode, as well as WebUSB for firmware updates, so this is a fairly modern stack.

3

Ohita – a tool to simplify API key management for AI agents #

ohita.tech
0 comments · 2:08 PM · View on HN
I have been trying out numerous AI agent setups to find out which one I would like to run as my personal assistant. One thing that kept bothering me was dealing with API keys, especially ones that need jumping through hoops to keep working. It was not uncommon for my agent to try to fetch some data or post to X/Twitter and get back an error because my API key had stopped working.

So I built a tool that you can give to your AI agent, and with one API key it can call all of the services. The tool acts as a central auth layer and handles each API's requirements: refreshing tokens, making sure rate limits are adhered to, sending the correct user agents, and everything else each API might require.

At first I wanted to spare users from setting up their own API keys at all, but that proved impossible: most API providers state in their ToS that proxying the API is prohibited. There was also the problem of identity: if an agent posts to Reddit or X, the post comes from the shared account. So I added a bring-your-own-key architecture where you can set up your own keys (if you want to!) while the tool still handles all the token refreshing, etc. Some generous services allow pretty lenient use of their API, so I included those ready out of the box; no config required to get started!
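The token-refresh part can be sketched as a small per-provider cache that renews the access token shortly before expiry. The class and the refresh callable here are illustrative, not Ohita's actual API:

```python
import time

class TokenManager:
    """Cache an access token and refresh it just before it expires."""

    def __init__(self, refresh_fn, skew: float = 60.0):
        self.refresh_fn = refresh_fn   # returns (token, expires_in_seconds)
        self.skew = skew               # refresh this many seconds early
        self.token, self.expires_at = None, 0.0

    def get(self) -> str:
        if self.token is None or time.time() >= self.expires_at - self.skew:
            token, ttl = self.refresh_fn()
            self.token, self.expires_at = token, time.time() + ttl
        return self.token

calls = []
def fake_refresh():
    calls.append(1)
    return f"tok-{len(calls)}", 3600

mgr = TokenManager(fake_refresh)
print(mgr.get(), mgr.get())  # tok-1 tok-1  (second call reuses the cache)
```

The agent only ever sees the single tool-side key; expiring provider tokens are renewed behind this kind of wrapper.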

Right now I am happy using this tool myself, but I wish more people used it so that I could keep improving it. Since I am a single dev there is a lot of work: I am adding new providers every day, fixing bugs, and all that. If anyone gave me their honest thoughts and tested the features, I could improve the tool even more. There is an option to pay for usage to cover some running costs, but the free tier is more than enough to get building.

1

Cosmo – Desktop agent with generated UI #

buildcosmo.com
0 comments · 8:33 PM · View on HN
Hey there! I’m Shiyuan, a recent college grad working on this.

Cosmo lets you interact with your computer through generative UI.

Examples:

- Type “github action status for my website” → desktop dashboard of all the action run statuses

- Speak “create a linear issue” → instant UI on desktop to select the assignee, labels, etc.

It is integrated with GitHub, Vercel, Jira, Supabase, Linear, Posthog, Google Workspace etc. so people can see charts, make edits, and complete tasks through these generated interfaces.

And everything happens on your desktop; it's not another app window you have to switch to. We hope to use generative UI to kill app windows :P

The concept is still early, and generation speed may be suboptimal (it's meant to be instant, faster than human navigation of traditional UIs). But I'd love to hear if it can boost productivity somewhere in your workflow. Feedback is greatly appreciated!

Try: https://www.buildcosmo.com/

1

Momentum – showcase what you're shipping #

app.heytangent.com
0 comments · 2:45 PM · View on HN
Momentum is a way to showcase what you're shipping. Things are moving faster than ever with Claude Code and Codex, and it can be hard to keep up or even share your own progress effectively. At the end of the day, the code is the progress and what runs in production, so a friend and I built this as a fun weekend project to explore the space.

Momentum grabs your PRs (backfilled up to 90 days), generates summaries of each one, calculates relevant statistics, and summarizes the narrative arcs of what's been shipped recently. We also use the project website to grab a logo, colors, and writing style (this can be updated in the settings). You can edit PR summaries and narrative arcs as well as add customer quotes. As new PRs get merged to main, it uses a webhook to keep things up to date, creating new summaries and trends.

Here's a few I created from open source projects that I forked:

* https://app.heytangent.com/momentum/pandas
* https://app.heytangent.com/momentum/shadcn-ui
* https://app.heytangent.com/momentum/kibana

Note that when you sign up, it requests access to your repos so it can access the PRs. It uses each PR to create a summary and only provides a link to the PR if the repo is set to public; otherwise, only the summary is shown. In the Settings page, you can delete everything and then revoke the OAuth token in GitHub should you choose to.

Let me know if you have feedback or questions!

1

I tried adding a folder upload feature to my website #

2 comments · 7:59 PM · View on HN
I built Openbeam.cloud, a website that lets you share up to 100 GB. Today I tried sending 45 files (images, videos, etc.) from my system, but each file type lived in its own folder: funny images, video clips, wedding pictures, and so on.

Selecting every file in Openbeam folder by folder was getting hectic.

So I thought of building a folder upload feature that also keeps the subfolder structure intact. I went online to research whether other websites offer this. I found plenty, but those sites don't preserve the folder structure: after upload, when you download, every file ends up in a single folder, which makes it very confusing to find anything.

So I implemented a folder upload system on my website. It's still in beta: after a folder is dropped, it keeps up to 3 levels of child folders and all the files inside each one.

I would appreciate it if you gave it a try and let me know about your experience. Thank you!

www.openbeam.cloud/?tab=folder
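The server-side half of structure-preserving upload can be sketched as follows: the browser supplies each file's relative path (e.g. from `webkitRelativePath`), and the server rebuilds the tree, capping nesting at 3 folder levels as described. This is an illustrative sketch, not Openbeam's actual code:

```python
from pathlib import PurePosixPath

MAX_DEPTH = 3  # keep at most 3 levels of child folders

def stored_path(relative_path: str) -> str:
    """Map an uploaded file's relative path to its stored location."""
    parts = PurePosixPath(relative_path).parts
    folders, filename = parts[:-1], parts[-1]
    kept = folders[:MAX_DEPTH]          # deeper levels get flattened
    return "/".join((*kept, filename))

print(stored_path("wedding/2024/ceremony/raw/extra/img01.jpg"))
# wedding/2024/ceremony/img01.jpg
```

Files up to 3 folders deep keep their exact paths; anything deeper collapses into the third level instead of dumping everything into one flat folder.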