Daily Show HN


Show HN for April 14, 2026

39 posts
148

LangAlpha – what if Claude Code was built for Wall Street? #

github.com
54 comments · 2:48 PM · View on HN
Some technical context on what we ran into building this.

MCP tools don't really work for financial data at scale. One tool call for five years of daily prices dumps tens of thousands of tokens into the context window. And data vendors pack dozens of tools into a single MCP server; the schemas alone can eat 50k+ tokens before the agent does anything useful. So we auto-generate typed Python modules from the MCP schemas at workspace init and upload them into the sandbox. The agent just imports them like a normal library, and only a one-line summary per server stays in the prompt. We have around 80 tools across our servers, and the prompt cost is the same whether a server has 3 tools or 30. This part isn't finance-specific; it works with any MCP server.
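The generator itself isn't shown in the post; as a rough sketch of the idea, here is a minimal schema-to-module renderer. The `_call` dispatcher and the exact descriptor shape are my assumptions, based on the standard MCP tool format, not LangAlpha's actual code:

```python
import keyword

def render_module(server_name, tools):
    """Render a Python module with one function per MCP tool.

    `tools` is assumed to be a list of MCP tool descriptors with
    `name`, `description`, and a JSON-schema `inputSchema`.
    """
    lines = [f'"""Auto-generated client for the {server_name} MCP server."""', ""]
    for tool in tools:
        name = tool["name"]
        # Avoid shadowing Python keywords with tool names.
        fn = name + "_" if keyword.iskeyword(name) else name
        params = sorted(tool.get("inputSchema", {}).get("properties", {}))
        sig = ", ".join(f"{p}=None" for p in params)
        args = ", ".join("%r: %s" % (p, p) for p in params)
        lines += [
            f"def {fn}({sig}):",
            f'    """{tool.get("description", "")}"""',
            f"    return _call({name!r}, {{{args}}})",  # _call: hypothetical MCP dispatcher
            "",
        ]
    return "\n".join(lines)
```

The agent then imports the rendered module like any library, so the full schemas never enter the prompt.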

The other big thing was making research actually persist across sessions. Most agents treat a single deliverable (a PDF, a spreadsheet) as the end goal. In investing, that's day one. You update the model when earnings drop, re-run comps when a competitor reports, and keep layering new analysis on old. But try doing that across agent sessions: files don't carry over, and you re-paste context every time. So we built everything around workspaces. Each one maps to a persistent sandbox, one per research goal. The agent maintains its own memory file with findings and a file index that gets re-read before every LLM call. Come back a week later, start a new thread, and it picks up where it left off.
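A minimal sketch of that workspace pattern (the file names and layout here are hypothetical, not LangAlpha's actual structure):

```python
from pathlib import Path

class Workspace:
    """One persistent directory per research goal, with a memory file and
    a file index rebuilt before every LLM call. Names are hypothetical."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
        self.memory = self.root / "MEMORY.md"

    def append_finding(self, text):
        # Findings accumulate across sessions in the memory file.
        with self.memory.open("a") as f:
            f.write(f"- {text}\n")

    def context(self):
        """Rebuilt on every call, so a brand-new thread sees prior work."""
        notes = self.memory.read_text() if self.memory.exists() else ""
        index = "\n".join(sorted(p.name for p in self.root.iterdir()
                                 if p.name != "MEMORY.md"))
        return f"## Findings\n{notes}\n## Files\n{index}"
```

Because `context()` is recomputed from disk, a fresh session pointed at the same workspace inherits everything the last one learned.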

We also wanted the agent to have real domain context the way Claude Code has codebase context. Portfolio, watchlist, risk tolerance, financial data sources, all injected into every call. Existing AI investing platforms have some of that but nothing close to what a proper agent harness can do. We wanted both and couldn't find it, so we built it and open-sourced the whole thing.

70

Kontext CLI – Credential broker for AI coding agents in Go #

github.com
27 comments · 1:26 PM · View on HN
We built the Kontext CLI because AI coding agents need access to GitHub, Stripe, databases, and dozens of other services — and right now most teams handle this by copy-pasting long-lived API keys into .env files, or the actual chat interface, whilst hoping for the best.

The problem isn't just secret sprawl. It's that there's no lineage of access. You don't know which developer launched which agent, what it accessed, or whether it should have been allowed to. The moment you hand raw credentials to a process, you've lost the ability to enforce policy, audit access, or rotate without pain. The credential is the authorization, and that's fundamentally broken when autonomous agents are making hundreds of API calls per session.

Kontext takes a different approach. You declare what credentials a project needs in a .env.kontext file:

  GITHUB_TOKEN={{kontext:github}}
  STRIPE_KEY={{kontext:stripe}}
  LINEAR_TOKEN={{kontext:linear}}
Then run `kontext start --agent claude`. The CLI authenticates you via OIDC, and for each placeholder: if the service supports OAuth, it exchanges the placeholder for a short-lived access token via RFC 8693 token exchange; for static API keys, the backend injects the credential directly into the agent's runtime environment. Either way, secrets exist only in memory during the session — never written to disk on your machine. Every tool call is streamed for audit as the agent runs.
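As a toy illustration of the substitution step only, here is how the placeholder resolution could look. The `broker(service)` callback stands in for the real work (an RFC 8693 token exchange for OAuth services, or backend injection for static keys); this is not Kontext's actual code:

```python
import re

# Matches {{kontext:service}} placeholders in .env.kontext values.
PLACEHOLDER = re.compile(r"\{\{kontext:([A-Za-z0-9_-]+)\}\}")

def resolve_env(env_text, broker):
    """Build the in-memory environment for the agent session.

    `broker` is a hypothetical callback that returns a short-lived
    credential for a service name.
    """
    resolved = {}
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        resolved[key.strip()] = PLACEHOLDER.sub(lambda m: broker(m.group(1)), value)
    return resolved
```

The resolved dict would only ever live in the session's process environment, matching the "never written to disk" property described above.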

The closest analogy is a Security Token Service (STS): you authenticate once, and the backend mints short-lived, scoped credentials on-the-fly — except unlike a classical STS, we hold the upstream secrets, so nothing long-lived ever reaches the agent. The backend holds your OAuth refresh tokens and API keys; the CLI never sees them. It gets back short-lived access tokens scoped to the session.

What the CLI captures for every tool call: what the agent tried to do, what happened, whether it was allowed, and who did it — attributed to a user, session, and org.

Install with one command: `brew install kontext-dev/tap/kontext`

The CLI is written in Go (~5ms hook overhead per tool call), uses ConnectRPC for backend communication, and stores auth in the system keyring. Works with Claude Code today, Codex support coming soon.

We're working on server-side policy enforcement next — the infrastructure for allow/deny decisions on every tool call is already wired, we just need to close the loop so tool calls can also be rejected.

We'd love feedback on the approach. Especially curious: how are teams handling credential management for AI agents today? Are you just pasting env vars into the agent chat, or have you found something better?

GitHub: https://github.com/kontext-dev/kontext-cli Site: https://kontext.security

47

Kelet – Root Cause Analysis agent for your LLM apps #

kelet.ai
24 comments · 4:16 PM · View on HN
I've spent the past few years building 50+ AI agents in prod (some reached 1M+ sessions/day), and the hardest part was never building them — it was figuring out why they fail.

AI agents don't crash. They just quietly give wrong answers. You end up scrolling through traces one by one, trying to find a pattern across hundreds of sessions.

Kelet automates that investigation. Here's how it works:

1. You connect your traces and signals (user feedback, edits, clicks, sentiment, LLM-as-a-judge, etc.)
2. Kelet processes those signals and extracts facts about each session
3. It forms hypotheses about what went wrong in each case
4. It clusters similar hypotheses across sessions and investigates them together
5. It surfaces a root cause with a suggested fix you can review and apply

The key insight: individual session failures look random. But when you cluster the hypotheses, failure patterns emerge.
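The post doesn't say how the clustering works; as an illustration of why that step matters, even a crude word-overlap grouping (a stand-in for whatever Kelet actually uses) makes repeated failure modes visible:

```python
def jaccard(a, b):
    """Word-level Jaccard similarity between two hypothesis strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster_hypotheses(hypotheses, threshold=0.4):
    """Greedy single-pass clustering: attach each hypothesis to the first
    cluster whose representative is similar enough, else start a new one."""
    clusters = []
    for h in hypotheses:
        for c in clusters:
            if jaccard(h, c[0]) >= threshold:
                c.append(h)
                break
        else:
            clusters.append([h])
    return clusters
```

Individually, "timed out fetching prices" and "timed out fetching fundamentals" look like two random failures; clustered together, they point at one root cause.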

The fastest way to integrate is through the Kelet Skill for coding agents — it scans your codebase, discovers where signals should be collected, and sets everything up for you. There are also Python and TypeScript SDKs if you prefer manual setup.

It’s currently free during beta. No credit card required. Docs: https://kelet.ai/docs/

I'd love feedback on the approach, especially from anyone running agents in prod. Does automating the manual error analysis sound right?

47

A memory database that forgets, consolidates, and detects contradiction #

github.com
33 comments · 3:41 PM · View on HN
Vector databases store memories. They don't manage them. After 10k memories, recall quality degrades because there's no consolidation, no forgetting, no conflict resolution. Your AI agent just gets noisier.

YantrikDB is a cognitive memory engine — embed it, run it as a server, or connect via MCP. It thinks about what it stores: consolidation collapses duplicate memories, contradiction detection flags incompatible facts, temporal decay with configurable half-life lets unimportant memories fade like human memory does.
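A configurable half-life decay like the one described is usually the standard exponential; a minimal sketch (the function name and parameters are mine, not YantrikDB's API):

```python
import math

def decayed_score(base_score, age_seconds, half_life_seconds):
    """Exponential temporal decay: a memory loses half its retrieval
    weight every half-life, so stale, unimportant memories fade out
    of recall without being deleted outright."""
    return base_score * math.exp(-math.log(2) * age_seconds / half_life_seconds)
```

At retrieval time the decayed score, rather than raw vector similarity alone, would decide what surfaces.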

Single Rust binary. HTTP + binary wire protocol. 2-voter + 1-witness HA cluster via Docker Compose or Kubernetes. Chaos-tested failover, runtime deadlock detection (parking_lot), per-tenant quotas, Prometheus metrics. Ran a 42-task hardening sprint last week — 1178 core tests, cargo-fuzz targets, CRDT property tests, 5 ops runbooks.

Live on a 3-node Proxmox homelab cluster with multiple tenants. Alpha — primary user is me, looking for the second one.

28

Remoroo. trying to fix memory in long-running coding agents #

remoroo.com
5 comments · 1:51 PM · View on HN
I built Remoroo because most coding agents fall apart once the work stops being a short edit-and-run loop.

A real engineering experiment can run for hours. Along the way, the agent reads files, runs commands, checks logs, compares metrics, tries ideas that fail, and needs to remember what already happened. Once context starts slipping, it forgets the goal, loses track of the baseline, and retries bad ideas.

Remoroo is my attempt to solve that problem.

You point it at a repo and give it a measurable goal. It runs locally, tries changes, executes experiments, measures the result, keeps what helps, and throws away what does not.

A big part of the system is memory. Long runs generate far more context than a model can hold, so I built a demand-paging memory system inspired by OS virtual memory to keep the run coherent over time.
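The linked writeup has the real design; as a rough illustration of the demand-paging analogy only, here is a toy bounded working set with LRU eviction, where reading a swapped-out entry "faults" it back in from a backing store:

```python
from collections import OrderedDict

class PagedMemory:
    """Toy demand-paging sketch (not Remoroo's implementation): a bounded
    in-context working set backed by a larger store of everything the run
    has produced."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()  # entries currently "in context"
        self.backing = {}              # everything ever written

    def write(self, key, value):
        self.backing[key] = value
        self._touch(key, value)

    def read(self, key):
        if key in self.resident:
            self.resident.move_to_end(key)
            return self.resident[key]
        value = self.backing[key]      # page fault: reload from backing store
        self._touch(key, value)
        return value

    def _touch(self, key, value):
        self.resident[key] = value
        self.resident.move_to_end(key)
        while len(self.resident) > self.capacity:
            self.resident.popitem(last=False)  # evict least recently used
```

The point of the analogy: the run's full history exists somewhere durable, but only the recently touched slice occupies the model's context at any moment.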

There is a technical writeup here: https://www.remoroo.com/blog/how-remoroo-works

Would love feedback from people working on long-running agents, training loops, eval harnesses, or similar workflows.

15

A CLI that writes its own integration code #

docs.superglue.cloud
10 comments · 8:45 AM · View on HN
We run superglue, an OSS agentic integration platform. Last week I talked to a founder of another YC startup. She found a use case for our CLI that we hadn't officially launched yet.

Her problem: customers wanted to create Opps in Salesforce from inside the chat in her app. We kept seeing this pattern: teams build agents, and their users can perfectly describe what they want ("pull these three objects from Salesforce and push to nCino when X condition is true"), but translating that into a generalized hard-coded tool the agent can call is a lot of work and doesn't scale, since the logic is different for every user.

What superglue CLI does: you point it at any API, and your agent gets the ability to reason over that API at runtime. No pre-built tools. The agent reads the spec, plans the calls, executes them.

The founder using this in production described it like this: she gave the CLI to her agent with an instruction set and told it not to build tools, just run against the API. It handled multi-step Salesforce object creation correctly, including per-user field logic and record type templates.

Concretely: instead of writing a createSalesforceOpp tool that handles contact -> account -> Opp creation with all the conditional logic, you write a skill doc and let the agent figure out which endpoints to hit and in what order.

The tradeoff is: you're giving the agent more autonomy over what API calls it makes. That requires good instructions and some guardrails. But for long-tail, user-specific connectors, it's a lot more practical than building a tool for every case.

Happy to discuss. Curious if others have run into the "pre-defined tool" ceiling with MCP-based connectors and how you've worked around it.

Docs: https://docs.superglue.cloud/getting-started/cli-skills Repo: https://github.com/superglue-ai/superglue

14

A stateful UI runtime for reactive web apps in Go #

github.com
4 comments · 8:28 AM · View on HN
Doors: Server-driven UI framework + runtime for building stateful, reactive web applications in Go.

Some highlights:

* Front-end framework capabilities in server-side Go. Reactive state primitives, dynamic routing, composable components.

* No public API layer. No endpoint design needed, private temporal transport is handled under the hood.

* Unified control flow. No context switch between back-end/front-end.

* Integrated web stack. Bundle assets, build scripts, serve private files, automate CSP, and ship in one binary.

How it works: the Go server is the UI runtime. The web application runs on a stateful server, while the browser acts as a remote renderer and input layer.

Security model: every user can interact only with what you render to them. That means you check permissions when you render the button, and that is enough to be sure the related action won't be triggered by anyone else.

Mental model: Link DOM to the data it depends on.

Limitations:

* Does not make sense for static non-interactive sites or client-first apps with simple routing, and is not suitable for offline PWAs.

* Load balancing and roll-outs without user interruption require different strategies with a stateful server (mechanics to make this simpler are included).

Where it fits best: apps with heavy user flows and complex business logic. A single execution context and no API/endpoint permission-management burden make these easier.

Peculiarities:

* Purpose-built [Go language extension](https://github.com/doors-dev/gox) with its own LSP, parser, and editor plugins. Adds HTML as Go expressions and `elem` primitives.

* Custom concurrency engine that enables non-blocking event processing, parallel rendering, and tree-aware state propagation

* HTTP/3-ready synchronization protocol (rolling-request + streaming, events via regular post, no WebSockets/SSE)

From the author (me): it took me 1 year and 9 months to get to this stage. I rewrote the framework 6 or 7 times until every part was coherent and every decision felt right or was a reasonable compromise. I am very critical of my own work and I see flaws, but overall it turned out solid, and I like the developer experience as a user. The mental model requires a bit of thinking upfront, but it pays off with explicit code and predictable outcomes.

Code Example:

  type Search struct {
    input doors.Source[string] // reactive state
  }

  elem (s Search) Main() {
    <input
      (doors.AInput{
        On: func(ctx context.Context, r doors.RequestInput) bool {
          s.input.Update(ctx, r.Event().Value) // reactive state
          return false
        },
      })
      type="text"
      placeholder="search">

    ~// subscribe results to state changes
    ~(doors.Sub(s.input, s.results))
  }

  elem (s Search) results(input string) {
    ~(for _, user := range Users.Search(input) {
      <card>
        ~(user.Name)
      </card>
    })
  }
12

Pushduck – S3 uploads that run on Cloudflare Workers, no AWS SDK #

7 comments · 6:16 AM · View on HN
I was tired of setting up file uploads for multiple projects with the bloated aws-sdk, so I built my own. My first attempt was next-s3-uploader. It worked, but it needed a much better developer experience; I wanted all the benefits of typesafe TypeScript, and a very lightweight toolkit that can do everything a dev needs to manage S3.

Credits to `aws4fetch`, which made it able to run in edge environments and Cloudflare Workers. Now I'm trying to expand and build support for non-React frameworks too.

Setting up file uploads shouldn't be hard, but it is. The easier options come with vendor lock-in.

So I made a DX-friendly, typesafe file upload library. Enjoy! Happy to discuss any improvements and options.

8

OpenRig – agent harness that runs Claude Code and Codex as one system #

github.com
6 comments · 11:46 PM · View on HN
I've been running Claude Code and Codex together every day. At some point I figured out you can use tmux to let them talk to each other, so I started doing that. Once they could coordinate, I kept adding more agents. Before long I had a whole team working together. But any time I rebooted my machine, the whole thing was gone. Not just the tabs. The way they were wired up, what each one was doing, all of it. Nothing I'd found treats your agent setup as a topology, as something with a shape you can save and bring back.

So I built OpenRig, a multi-agent harness. A harness wraps a model. A "rig" wraps your harnesses. You describe your team in a YAML file, boot it with one command, and get a live topology you can see, click into, save, and bring back by name. Claude Code and Codex run together in the same rig. tmux is still doing the talking underneath. I didn't try to add a fancier messaging layer on top.

The project is still early. My own setup uses the config layer extensively (YAML, Markdown, JSON) for prototyping functionality that outpaces what's shipped in the repo and npm package. But the core primitives are there, and the happy path in the README works. It's built to be driven by your agent, not by you typing commands by hand.

README: https://github.com/mvschwarz/openrig Demo: https://youtu.be/vndsXRBPGio

7

Tsplat – Render Gaussian Splats directly in your terminal #

github.com
1 comment · 6:35 AM · View on HN
Last week I wanted to quickly view some Gaussian splats I had trained on a remote server that didn't have any open ports or a display device, so I ended up downloading everything locally just to inspect the results. This weekend I put together a terminal-based Gaussian splat viewer that renders directly in the terminal. It works over SSH, currently runs on CPU only, and is written in Rust with Claude Code. I've found it pretty useful for quickly checking which .ply files correspond to which scenes and getting a rough sense of their quality.

Along the way, I also wrote a small tutorial on the forward rasterization process for Gaussian splatting on CPUs.

6

MōBrowser, a TypeScript-first desktop app framework with typed IPC #

teamdev.com
0 comments · 3:30 PM · View on HN
Hi HN,

For the last ~15 years I've worked on embedding web browsers into Java and .NET desktop apps (JxBrowser, DotNetBrowser). Over time, I watched many teams move from embedding web views into native apps, to building full desktop apps with frameworks like Electron and Tauri.

Both are useful, but in practice I kept running into several problems.

With Electron, beyond the larger app footprint, I often ran into:

  - lack of type-safe IPC
  - no source code protection
  - weak support for the modern web stack
Tauri solves some problems (like app size), but introduces others:

  - different WebViews across platforms → inconsistent behavior
  - requires Rust + JS instead of a single stack
So we built MōBrowser, a framework for building desktop apps with TypeScript, Node.js, and Chromium.

Some of the things we focused on:

  - typed IPC using Protobuf + code generation (RPC-style communication instead of string channels)
  - consistent rendering and behavior across different platforms
  - Node.js runtime
  - built-in packaging, updates, and scaffolding
  - source code protection
  - small delta auto-updates
The goal is to let web developers ship desktop apps with a web stack they already know and fewer cross-platform surprises.

I'd especially love feedback from people who have built production apps with Electron or Tauri.

Happy to answer any questions.

5

A 24/7 live stream where AI writes a new song about the current time #

youtube.com
3 comments · 12:00 AM · View on HN
A clock radio from the 1950s played music and showed the time. I wanted to see what happens when you combine an old idea with a new one: the music itself tells you the time. It's also hilarious.

This is a fully automated YouTube live stream. Every few minutes, a new AI-generated song plays. Every song is in a different genre. Every song's lyrics are about what time it is right now. The system generates them ahead of time using Suno's API, builds video segments with FFmpeg, and streams continuously via RTMP. There's no human in the loop and no pre-recorded content.

The core challenge is a scheduling problem: generation takes ~3 minutes, song durations are variable (2-4 min), and the lyric timestamp has to match the actual start time. The orchestrator tracks a rolling average of API latency and adjusts its lookahead window accordingly.
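A minimal sketch of that lookahead logic (the class, default values, and parameter names are mine, not the project's):

```python
from collections import deque

class Lookahead:
    """Track a rolling average of generation latency and decide when to
    kick off the next song so it is ready before the current one ends."""

    def __init__(self, window=10, safety_margin=30.0):
        self.latencies = deque(maxlen=window)  # recent API latencies, seconds
        self.safety_margin = safety_margin

    def record(self, latency_seconds):
        self.latencies.append(latency_seconds)

    def start_next_at(self, current_song_ends_at):
        # Fall back to a ~3-minute estimate before any samples exist.
        avg = sum(self.latencies) / len(self.latencies) if self.latencies else 180.0
        return current_song_ends_at - avg - self.safety_margin
```

Because the lyrics are about the clock time at playback, the lookahead also determines which timestamp the orchestrator asks Suno to sing about.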

Stack: Python, Suno API, Pillow, FFmpeg.

4

Groupr – Rust CLI that sorts files into subfolders by extension #

github.com
0 comments · 5:07 AM · View on HN
Wrote this to clean up my Downloads folder. It moves top-level files into subfolders named after their extension. photo.PNG goes into png/, files with no extension go into no_extension/. --dry-run if you want to see what it would do first.

A few details: it doesn't recurse into existing subfolders, extensions are lowercased so you don't get png/ and PNG/ coexisting, and it handles name collisions by appending _1, _2, etc. rather than overwriting.
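The tool itself is ~150 lines of Rust; as a quick illustration of the collision-handling rule described above, here is the same logic sketched in Python:

```python
from pathlib import Path

def collision_free(dest_dir: Path, name: str) -> Path:
    """Pick a destination that never overwrites: photo.png, then
    photo_1.png, photo_2.png, and so on."""
    candidate = dest_dir / name
    stem, suffix = candidate.stem, candidate.suffix
    n = 1
    while candidate.exists():
        candidate = dest_dir / f"{stem}_{n}{suffix}"
        n += 1
    return candidate
```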

~150 lines of Rust, no dependencies outside of tempfile in dev.

  cargo install --git https://github.com/timfinnigan/groupr
  groupr ~/Downloads --dry-run

4

Excalicharts – Charting Library for Excalidraw #

github.com
0 comments · 12:20 AM · View on HN
I very frequently use Excalidraw for presentations and writing, but the charting ability is relatively limited. This project surfaces a CLI which supports a wider variety of chart types and parameters.

Outputting a .excalidraw file means that rather than endlessly tweaking parameters to get chart elements positioned just right, you can simply drag them around like any other Excalidraw scene.

4

Agentchat, a skill that teaches agents to make group chats #

github.com
0 comments · 6:20 PM · View on HN
Hey y’all - open sourcing an agent skill we built called agentchat.

Agentchat teaches your agents to make group chats so they can talk to each other. It's super flexible: it works with any agent across any machine. Group chats are private, and agents need to be invited to join.

We think about agent communication all the time at Pentagon (x.com/runpentagon) - I've seen it compound agent capabilities significantly (think: gstack, but agents can autonomously talk to each other) but it's also got a ton of hairy problems around it (durable streams, variability across agent harnesses, conversation efficiency).

Agentchat is built on top of s2.dev (not affiliated, we just like them!). If you're thinking about similar problems or have alternatives you like, we would love to hear about them.

lmk what y'all think!

3

Prmana – OIDC SSH Login for Linux with DPoP (Rust, Apache 2.0) #

github.com
1 comment · 2:51 AM · View on HN
prmana replaces static SSH keys with short-lived OIDC tokens validated at the host through PAM. What makes it different from other OIDC-for-SSH approaches is DPoP (RFC 9449) — every authentication includes a cryptographic proof that the token holder has the private key. Stolen tokens can't be replayed.

Three components: a PAM module (pam_prmana.so), a client agent (prmana-agent), and a shared OIDC/JWKS library (prmana-core). All Rust.

DPoP keys can be software, YubiKey (PKCS#11), or TPM 2.0. No gateway, no SSH CA, no patches to sshd. Standard ssh client, standard sshd, PAM in between.

Tested against Keycloak, Auth0, Google, and Entra ID.

The name is from Sanskrit — pramana (प्रमाण) means "proof."

3

Run Python tools on rust agents #

github.com
1 comment · 8:31 PM · View on HN
Over at Tools-rs, we wanted to script tools faster with the help of larger communities. We became interested in bridging our Rust LLM runtimes with more traditional scripting languages, so we decided to find a way to bring Python tools into our ecosystem. Hence, we're introducing our first FFI, for Python (powered by PyO3)!

Calling a Python tool is as easy as adding a decorator to the Python function and then passing the script's (or folder's) path to the tool collection builder. The tools get serialized as JSON objects, so they're fully observable by the AI, and you can call them directly from Rust.

3

ILTY – AI mental health companion that does not pat your back #

0 comments · 8:40 PM · View on HN
Hey HN. My wife and I built ILTY because we both needed it for different reasons. My wife deals with anxiety. I have a fairly advanced case of procrastination, and I kinda low-key feel too comfortable in my life, so I can't quite make myself move.

We tried the apps. Calm felt like homework, and ChatGPT mostly tells us how awesome we are and that we need to love ourselves just as we are. Can't agree with that too much.

So we built what we actually wanted, tested it with ~50 beta testers, and felt good about publishing it. Publishing this also puts us somewhere on the lower slopes of Cringe Mountain, but here we are, so please bear with me.

ILTY tracks your mood before and after every conversation, so you can tell whether it helped or whether you just had a pleasant interaction with a very confident machine. It has 5 AI companions with different styles. One of them, Mr. Relentless, is for when you need to get your shit together.

Stack: native SwiftUI (iOS 16+), Claude Sonnet 4.6 via Cloudflare Worker proxy, iCloud Documents for storage so user data stays off our servers, RevenueCat for subscriptions, Firebase Analytics exported to BigQuery. Just iOS for now yes.

We're not new to building products. We come from product design and engineering management in larger companies, but mostly hand-waving in meetings and showing pretty Figma prototypes. We started in January and shipped to the App Store in April. Claude Code and Gstack by Garry Tan certainly helped a lot :]

ILTY is not therapy. We don’t diagnose or treat. If things get dark, we route to 988, not an AI response.

Happy to answer questions about the build, companion design, prompts, or any of the decisions behind it.

Here it is: https://ilty.co/ and appstore https://apps.apple.com/us/app/ilty-ai-therapy-companion/id67...

Also really want to hear feedback, bad or good. Anything works.

2

A Bomberman-style 1v1 game where LLMs compete in real time #

github.com
2 comments · 7:36 AM · View on HN
A few weeks ago, ARC-AGI 3 was released. For those unfamiliar, it’s a benchmark designed to study agentic intelligence through interactive environments.

I'm a big fan of these kinds of benchmarks as IMO they reveal so much more about the capabilities and limits of agentic AI than static Q&A benchmarks. They are also more intuitive to understand when you are able to actually see how the model behaves in these environments.

I wanted to build something in that spirit, but with an environment that pits two LLMs against each other. My criteria were:

1. Strategic & real-time. The game had to create genuine tradeoffs between speed and quality of reasoning. Smaller models can make more moves but less strategic ones; larger models move slower but smarter.

2. Good harness. I deliberately avoided visual inputs (models are still too slow and not accurate enough with them; see Claude playing Pokémon). Instead, a harness translates the game state into structured text, and the game engine renders the agents' responses as fluid animations.

3. Fun to watch. Because benchmarks don't need to be dry bread :)

The end result is a Bomberman-style 1v1 game where two agents compete by destroying bricks and trying to bomb each other. You can check a demo video here: https://youtu.be/4x8tVypmuRk

Would love to hear what you think!

2

Hacienda-CLI – CLI to reconcile Spanish tax returns with the tax agency #

github.com
1 comment · 5:12 PM · View on HN
I built a CLI to programmatically interact with Spain's tax agency (AEAT) for income tax filing (Modelo 100). It authenticates via Cl@ve using Playwright, downloads your tax data, validates the XML against the official XSD, and uploads it to AEAT's EDFI system for reconciliation.

The problem: AI agents can do a good job organizing financial data from multiple brokers, banks, and crypto exchanges, but there was no way to programmatically check that against what the tax agency already knows about you. This CLI bridges that gap. It does NOT file or submit anything; the user is always responsible for manual review and submission.

Maybe a bit specific to the Spanish community, but if you've always wished there were a CLI to do your taxes, here is an unofficial one.

1

API Changelog Tracker #

apipulse.app
0 comments · 2:56 PM · View on HN
Hi HN,

I built this because I rely on many external APIs, and keeping up with changes is harder than it should be.

Many (most?) APIs don't provide RSS feeds; sometimes they provide RSS but block it from being fetched (yes, this happens!), or post API updates on JavaScript-heavy pages, among many other crazy things. So I had to use other ways to track changelogs and documentation updates.

I originally built this just for myself, to monitor APIs in one place and get alerts when something changes. It worked well enough, so I made it public.

If there’s an API you want added, let me know and I can include it.

Part of this was built using Codex.

Would appreciate any feedback.

1

One-click code review interview #

entrevue.app
0 comments · 2:57 PM · View on HN
I've been on both sides of the software developer technical interview. It's pretty clear to me that LeetCode-style interviews aren't a good proxy for day-to-day work.

Years ago, I switched to conducting code review interviews. These gave us much better signal, and candidates seem to enjoy them more.

Please try out my platform to conduct code review interviews. All feedback welcome. Cheers!

1

Signoff.sh – Claude Co-Authored-By with random fictional characters #

gist.github.com
0 comments · 3:02 PM · View on HN
Every Claude Code commit and PR ships with Co-Authored-By: Claude Opus 4.6 <[email protected]> (or similar). It's less fun than I think it should be. So signoff.sh is a 40-line bash script that replaces it with a random fictional character's full legal name, as best as I could figure out :)

Preserves the model name and the Anthropic noreply email, so you still get transparency about AI coding.

It's silly and that's the point.

I'm also a fan of peon ping (not mine): https://github.com/PeonPing/peon-ping

1

Resonly – prioritize feature requests by revenue impact #

resonly.com
0 comments · 2:58 PM · View on HN
Hi HN,

I’m building Resonly to help teams prioritize feature requests with more context than just votes.

The idea is simple: not every request is equal. A feature requested by multiple paying customers may matter more than one with 100 upvotes from free users.

I’ve added a way to associate revenue with each customer, so when someone upvotes or submits a feature request, the admin can see three signals:

MRR at risk — how much current MRR is tied to customers asking for it;

MRR lost — how much MRR was lost from customers who churned before it was solved;

Potential MRR — how much MRR could be converted from trial customers requesting it.

I’d love your honest feedback on the idea and positioning.