Daily Show HN

Show HN for October 14, 2025

18 posts
59

Metorial (YC F25) – Vercel for MCP

github.com
25 comments · 2:49 PM · View on HN
Hey HN! We're Wen and Tobias, and we're building Metorial (https://metorial.com), an integration platform that connects AI agents to external tools and data using MCP.

The Problem: While MCP works great locally (e.g., Cursor or Claude Desktop), server-side deployments are painful. Running MCP servers means managing Docker configs, per-user OAuth flows, scaling concurrent sessions, and building observability from scratch. This infrastructure work turns simple integrations into weeks of setup.

Metorial handles all of this automatically. We maintain an open catalog of ~600 MCP servers (GitHub, Slack, Google Drive, Salesforce, databases, etc.) that you can deploy in three clicks. You can also bring your own MCP server or fork existing ones.

For OAuth, just provide your client ID and secret and we handle the entire flow, including token refresh. Each user then gets an isolated MCP server instance configured with their own OAuth credentials automatically.

What makes us different is that our serverless runtime hibernates idle MCP servers and resumes them with sub-second cold starts while preserving the state and connection. Our custom MCP engine is capable of managing thousands of concurrent connections, giving you a scalable service with per-user isolation. Other alternatives either run shared servers (security issues) or provision separate VMs per user (expensive and slow to scale).

Our Python and TypeScript SDKs let you connect LLMs to MCP tools in a single function call, abstracting away the protocol complexity. But if you want to dig deep, you can just use standard MCP and our REST API (https://metorial.com/api) to connect to our platform.

You can self-host (https://github.com/metorial/metorial) or use the managed version at https://metorial.com.

So far, we see enterprise teams use Metorial to have a central integration hub for tools like Salesforce, while startups use it to cut weeks of infra work on their side when building AI agents with integrations.

Our Repos:

* Metorial: https://github.com/metorial/metorial

* MCP Containers: https://github.com/metorial/mcp-containers

SDKs:

* Node/TypeScript: https://github.com/metorial/metorial-node

* Python: https://github.com/metorial/metorial-python

Demo video: https://www.youtube.com/watch?v=07StSRNmJZ8

We'd love to hear feedback from the HN community, especially if you've dealt with deploying MCP at scale!

31

Wispbit – Keep codebase standards alive

wispbit.com
14 comments · 7:52 PM · View on HN
Hey HN! Ilya and Nikita here. We're building wispbit (https://wispbit.com) - a tool that helps keep codebase standards alive.

With the help of AI coding tools, engineers are writing more code than ever. Code output has increased, but the tooling to manage this hasn't improved. Background agents still write bad code, and your IDE still writes slop without the right context.

So we built wispbit. It works by scanning your codebase for patterns you already use, and coming up with rules. Rules are kept up to date as standards change, and you can edit rules any time.

You can enforce these rules during code review, and because the rules live in one system, you can also run a CLI locally to review against them. Think of it as a portable rules file you can bring anywhere.

We put a lot of work into making a system that produces good rules and avoids slop. For repository crawling, we have an agent that dispatches subagents, similar to Anthropic's research agent. These subagents will go through and look for common patterns within modules and directories, and report back to the main agent, which synthesizes the results. We also do a historical scan on your pull request comments, determine which ones were addressed, filter out comments that wouldn't make a good rule, and use that to create or update rules.

Our early users are seeing 80%+ resolution rates, meaning that 80% of comments that wispbit makes are resolved.

Long-term, we see ourselves being a validation layer for AI-written code. With tools like Devin and Cursor, we find ourselves having to re-prompt the same solution many times. We still don't know the long-term implications on AI-assisted codebases, so we want to get in front of that as soon as possible.

We've opened up signups for free to HN folks at https://wispbit.com. We're also around to chat and answer questions!

27

I tracked the adoption of AI coding extensions in VS Code since 2022

bloomberry.com
11 comments · 1:12 PM · View on HN
For the past 4 years, I've been tracking the install counts of AI coding extensions in the Visual Studio Code marketplace (GitHub Copilot, Claude Code, OpenAI Codex, etc.)

Today, I built an interactive dashboard that lets you see daily install counts for any of them over time.

The chart shows GitHub Copilot by default, and you can overlay or swap in any of the other 20+ tools to compare. You can also see important dates in each tool's history (pricing changes, major releases, etc.) and how daily installs changed around those dates.

Important caveats:

1) This only tracks VS Code extension installs, not CLI usage or other IDEs like JetBrains.

2) Cursor isn't included since it's a standalone editor (VS Code fork), not a marketplace extension. I added a second chart showing Cursor discussion forum activity as a proxy for its growth.

Obviously not apples to apples, but I felt I needed to measure Cursor's growth somehow.

3) This tracks daily installs, NOT total installs. Otherwise the charts would be boring and always go up and to the right.

4) The dashboard was coded using an AI coding assistant too. I used regular Claude :)

18

docker/model-runner – an open-source tool for local LLMs

github.com
9 comments · 12:44 PM · View on HN
Hey Hacker News,

We're the maintainers of docker/model-runner and wanted to share some major updates we're excited about.

Link: https://github.com/docker/model-runner

We are rebooting the community:

https://www.docker.com/blog/rebooting-model-runner-community...

At its core, model-runner is a simple, backend-agnostic tool for downloading and running local large language models. Think of it as a consistent interface to interact with different model backends. One of our main backends is llama.cpp, and we make it a point to contribute any improvements we make back upstream to their project. It also allows people to transport models via OCI registries like Docker Hub. Docker Hub hosts our curated local AI model collection, packaged as OCI Artifacts and ready to run. You can easily download, share, and upload models on Docker Hub, making it a central hub for both containerized applications and the next wave of generative AI.
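To make the "consistent interface" point concrete: Model Runner exposes an OpenAI-compatible HTTP API, so existing OpenAI-style clients work against local models. A minimal sketch, assuming a local endpoint on the default port (the URL, path, and model name below are assumptions; check the repo's docs for the real values):

```python
import json
import urllib.request

# Assumed local endpoint for Model Runner's OpenAI-compatible API.
# The host, port, and path may differ depending on your setup.
URL = "http://localhost:12434/engines/v1/chat/completions"

def build_request(model: str, prompt: str) -> bytes:
    # Standard OpenAI-style chat payload, so existing clients work unchanged.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()

def chat(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the payload shape is the de facto OpenAI standard, swapping backends (llama.cpp or otherwise) shouldn't require client changes.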

We've been working hard on a few things recently:

- Vulkan and AMD Support: We've just merged support for Vulkan, which opens up local inference to a much wider range of GPUs, especially from AMD.

- Contributor Experience: We refactored the project into a monorepo. The main goal was to make the architecture clearer and dramatically lower the barrier for new contributors to get involved and understand the codebase.

- It's Fully Open Source: We know that a project from Docker might raise questions about its openness. To be clear, this is a 100% open-source, Apache 2.0 licensed project. We want to build a community around it and welcome all contributions, from documentation fixes to new model backends.

Our goal is to grow the community. We'll be here all day to answer any questions you have. We'd love for you to check it out, give us a star if you like it, and let us know what you think.

Thanks!

13

Free API to extract PDF data

0 comments · 4:10 PM · View on HN
Hi HN,

Like everyone, I'm working on a product that uses LLMs to extract data from photos and documents. Part of the processing pipeline is extracting data from PDFs as raw text or a raster image.

As part of our leadgen strategy, we've opened our REST API that lets you process pages of a PDF. The API is completely free to use anonymously, but is rate limited to 1 page per 30 seconds. Creating a free account removes this restriction.

The two endpoints are:

- https://extract.dev/api/pages/extract/raster - Rasterize a page of a PDF

- https://extract.dev/api/pages/extract/text - Extract text from a page of a PDF

Both have the same request format:

    {
        "file": "https://assets.extract-cdn.com/data/hd-receipt.pdf",
        "page": 1
    }
I've outlined more of the documentation here: https://extract.dev/docs

Under the hood, the API uses Poppler to extract text and rasterize pages. Note that text extraction returns the actual text encoded in the PDF; it does not run an OCR model. Give it a spin; I'd be interested to hear whether this is useful to you.
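Given the request format above, a minimal Python client might look like this (the endpoints and payload come from the post; how the response body is shaped is an assumption, so check the docs before relying on it):

```python
import json
import urllib.request

# Endpoints documented in the post; both take the same request body.
EXTRACT_TEXT_URL = "https://extract.dev/api/pages/extract/text"
EXTRACT_RASTER_URL = "https://extract.dev/api/pages/extract/raster"

def build_payload(file_url: str, page: int) -> bytes:
    # Shared request format: a PDF URL plus a 1-based page number.
    return json.dumps({"file": file_url, "page": page}).encode()

def extract_text(file_url: str, page: int = 1) -> str:
    req = urllib.request.Request(
        EXTRACT_TEXT_URL,
        data=build_payload(file_url, page),
        headers={"Content-Type": "application/json"},
    )
    # Anonymous use is rate limited to 1 page per 30 seconds.
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

Swapping in `EXTRACT_RASTER_URL` with the same payload should return the rasterized page instead of text.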

7

Relaya – Agent calls businesses for you

relaya.ai
0 comments · 9:02 PM · View on HN
HN - this is rish.

I ended up on a two-hour call with Chase, which led me to build Relaya to automate calls like that. For simple tasks (e.g. checking if something is in store, or making a reservation that isn't on OpenTable), it will just do the full task. No need to dial anything.

For more complex tasks e.g. applying Chase Sapphire credits to rebook an existing flight, it will wade through all the menus and holds and connect you with the agent directly.

At the very least, you only ever talk to humans, and an average 20-30 minute call is reduced to 2-3 minutes (20+ people have tried it so far).

Next, I imagine it will be able to automate a certain percentage of the more complex calls as well. Looking for feedback.

3

Ark v0.6.0 – Go ECS with new declarative event system

github.com
0 comments · 7:04 AM · View on HN
Ark is a high-performance Entity Component System (ECS) library for Go.

Ark v0.6.0 introduces a new event system built around lightweight, composable observers. These allow applications to react to ECS lifecycle changes, such as entity creation/removal, component updates, and relation changes, using declarative filters and callbacks. Observers follow the same patterns as Ark’s query system, making them easy to integrate and reason about.

Custom events are also supported. They can be emitted manually and observed with the same filtering logic, making them ideal for modeling domain-specific interactions such as input handling and other reactive game logic.

As a new performance-related feature, filters and queries are now concurrency-safe and can be executed in parallel.

This release also includes a load of performance improvements, from faster archetype switching to optimized query and table creation and faster bitmask operations. The new World.Shrink method helps reclaim unused memory in dynamic workloads.

Docs have been expanded with a full guide to the event system, examples for both built-in and custom events, and an Ebiten integration example. A cheat sheet for common operations has been added. Finally, Ark now has 100% test coverage.

Changelog: https://github.com/mlange-42/ark/blob/main/CHANGELOG.md

Repo: https://github.com/mlange-42/ark

Would love feedback from anyone building games, simulations, or ECS tooling in Go.

2

Pathwave.io – MCP and mobile app to manually approve AI actions

web.pathwave.io
0 comments · 8:31 PM · View on HN
Hey HN,

In my (very limited :D) spare time, I’ve been working on Pathwave.io, a tool that adds a manual approval step to AI automations.

The first version I’m releasing today is focused on that manual approval flow — no payments yet, just secure "approve / deny" operations that can be triggered from your AI or backend via our API. You can check out the docs here: https://web.pathwave.io/docs

How it works:

* Your AI or app sends an approval request through our API

* The user gets a push notification in the Pathwave mobile app

* They can approve or deny in real time

* Your system gets the result instantly
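The flow above could be driven from Python roughly like this. To be clear, the endpoint names and response fields below are hypothetical placeholders, not Pathwave's actual API; see the docs for the real routes:

```python
import json
import urllib.request

# Hypothetical base path, for illustration only.
BASE = "https://web.pathwave.io/api"

def request_approval(token: str, action: str, details: dict) -> dict:
    """Ask a human to approve `action` before the agent proceeds.

    The route and payload shape are illustrative, not Pathwave's real API.
    """
    body = json.dumps({"action": action, "details": details}).encode()
    req = urllib.request.Request(
        f"{BASE}/approvals",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_resolved(approval: dict) -> bool:
    # The mobile app pushes an approve/deny decision back;
    # any other status means the request is still pending.
    return approval.get("status") in ("approved", "denied")
```

The agent would block (or poll) on `is_resolved` before executing the gated action, which is the essence of the trust layer described here.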

The goal is to create a trust layer for AI actions — first for simple approvals, later for financial and high-risk operations.

I’d love feedback from the HN community, especially around:

* Use cases where you’d want AI to "ask first"

* Thoughts on developer experience from the docs

* Suggestions before expanding to payment integrations

Thanks for taking a look — you can try it out or read the quickstart here: https://web.pathwave.io/docs

— Felipe

2

Color – first smart AI aggregator

color.ag
2 comments · 12:46 PM · View on HN
Color.ag detects your question’s category and intent, matches it to the most relevant benchmark, and routes it to the top-performing AI models so you get the best possible answer. The models’ answers are then combined into a consensus summary for the smartest, most balanced response. No need to keep up with the latest models all the time! See the top three AI answers to your question, along with a consensus across them.
2

Get a PMF score for your website, based on simulated user data

semilattice.ai
0 comments · 4:53 PM · View on HN
Hey HN,

This B2B PMF Scorer takes a website URL and generates a PMF analysis with a score out of 100, specific recommendations, and simulated user research findings.

Note: it takes ~5 minutes to run! Sample: https://semilattice.ai/pmf-report-1760452680406.pdf

It's a demo of what you can do with Semilattice (simulated user research), but we hope it's useful in itself.

It parses the marketing messages from the website to match one of our audience models, then uses that model to predict answers to 12 PMF-related research questions about the product, the messaging, and the audience generally. With that simulated data, it generates the PDF (markdown is available too).

About Semilattice: we're building user insights as infrastructure. Our API can predict how specific target audiences answer arbitrary questions, i.e. near-instant surveys. We build audience models using real ground truth data and have eval tools so there's always an estimated accuracy (currently >87% on average). Docs here: https://docs.semilattice.ai

We want to be the API layer for better decisions and more personalised UX/content. Reach out if you’d like to build with us.