Daily Show HN


Show HN for July 12, 2025

20 posts
118

ArchGW – an intelligent edge and service proxy for agents #

15 comments · 11:55 PM · View on HN
Hey HN!

This is Adil, Salman, and Jose, and we’re behind archgw [1], an intelligent proxy server designed as an edge and AI gateway for agents: one that natively knows how to handle prompts, not just network traffic. We’ve made several sweeping changes, so we're sharing the project again.

A bit of background on why we built this project. Building AI agent demos is easy, but creating something production-ready involves a lot of repeated low-level plumbing work that everyone is doing. You’re applying guardrails to make sure unsafe or off-topic requests don’t get through. You’re clarifying vague input so agents don’t make mistakes. You’re routing prompts to the right expert agent based on context or task type. You’re writing integration code to quickly and safely add support for new LLMs. And every time a new framework hits the market or gets updated, you’re validating or re-implementing that same logic, again and again.

Putting all that low-level plumbing code in a framework gets messy: it's harder to manage, update, and scale. Low-level work isn't business logic. That’s why we built archgw, an intelligent proxy server that handles prompts during ingress and egress and offers several related capabilities from a single software service. It lives outside your app runtime, so you can keep your business logic clean and focus on what matters. Think of it like a service mesh, but for AI agents.

Prior to building archgw, the team spent time building Envoy [2] at Lyft, API Gateway at AWS, and specialized NLP models at Microsoft Research, and worked on safety at Meta. archgw was born out of the belief that the rule-based, single-purpose tools handling the work of resiliency, processing, and routing of prompts should move into a dedicated infrastructure layer for agents, built on the battle-tested foundation of Envoy Proxy.

The intelligence in archgw comes from our fast task-specific LLMs [3] that can handle things like agent routing and handoff, guardrails, and preference-based intelligent LLM calling. Here are some additional details about the open source project. archgw is written in Rust, and the request path has three main parts:

* Listener subsystem, which handles downstream (ingress) and upstream (egress) request processing.
* Prompt handler subsystem. This is where archgw makes decisions about the safety of the incoming request via its prompt_guard hooks and identifies where to forward the conversation via its prompt_target primitive.
* Model serving subsystem, the interface that hosts all the lightweight LLMs engineered in archgw and offers a framework for things like hallucination detection for these models.
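The ingress decision flow (guard the prompt first, then pick a prompt_target to forward to) can be sketched in miniature. Everything below is an illustrative stand-in: the names, the blocklist guard, and the keyword-based routing are toy assumptions, not archgw's actual Rust implementation or config format.

```python
# Toy sketch of an ingress prompt handler: guard first, then route.
# All names and logic here are illustrative, not archgw's actual API.

from dataclasses import dataclass

@dataclass
class PromptTarget:
    name: str
    keywords: tuple  # naive intent signal, standing in for a routing model

TARGETS = [
    PromptTarget("billing_agent", ("invoice", "refund", "charge")),
    PromptTarget("support_agent", ("error", "bug", "crash")),
]

# Stand-in for a guard model; real gateways use learned classifiers.
BLOCKLIST = ("ignore previous instructions",)

def handle_ingress(prompt: str):
    """Return (allowed, target_name) for an incoming prompt."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        return False, None  # the guard hook rejects the request
    for target in TARGETS:
        if any(kw in lowered for kw in target.keywords):
            return True, target.name  # forward to the matching target
    return True, "default_agent"  # fall through to a default target
```

The point of the layering is that this decision happens in the proxy, before any application code runs.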

We loved building this open source project, and we believe this infra primitive will help developers build faster, safer, and more personalized agents without all the manual prompt engineering and systems integration work needed to get there. We invite other developers to use and improve Arch. Please give it a shot and leave feedback here or in our Discord channel [4]. Here is a quick demo of the project in action [5], you can check out our public docs at [6], and our models are available here [7].

[1] https://github.com/katanemo/archgw
[2] https://www.envoyproxy.io/
[3] https://huggingface.co/collections/katanemo/arch-function-66...
[4] https://discord.com/channels/1292630766827737088/12926307682...
[5] https://www.youtube.com/watch?v=I4Lbhr-NNXk
[6] https://docs.archgw.com/
[7] https://huggingface.co/katanemo

89

DesignArena – crowdsourced benchmark for AI-generated UI/UX #

designarena.ai
28 comments · 3:07 PM · View on HN
I’ve been using AI to generate some repetitive frontend code (guilty), and while most outputs felt vibe-coded, some results were surprisingly good. So I cleaned it up and made a ranking game out of it with friends. You can check it out here: https://www.designarena.ai/vote

/vote: Your prompt will be answered by four random, anonymous models. You pick the one you prefer and crown the winner, tournament-style.

/leaderboard: See the current winning models, as dictated by voter preferences.

/play: Iterate quickly by seeing four models respond to the same input and pressing space to regenerate the results you don’t lock in.
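The post doesn't say how pairwise votes become leaderboard rankings; a common choice for tournament-style voting benchmarks is an Elo-style update, sketched here purely as an assumption about how such a leaderboard could work:

```python
# Elo-style rating update from one pairwise vote (winner beats loser).
# A generic illustration; DesignArena's actual scoring isn't stated.

def elo_update(winner: float, loser: float, k: float = 32.0):
    """Return updated (winner, loser) ratings after one vote."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected_win)  # upsets move ratings more
    return winner + delta, loser - delta

# Two equally rated models: the winner gains exactly k/2 points.
w, l = elo_update(1500.0, 1500.0)
```

The appeal of this scheme is that a single anonymous vote is enough input: no absolute scores, just preferences.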

We were especially impressed with the quality of DeepSeek and Grok, and with the variance between categories (judging by the results so far, OpenAI is very good for game dev but seems to suck everywhere else).

We’ve learned a lot, and are curious to hear your comments and questions. Excited to make this better!

77

BinaryRPC – Lightweight WebSocket-based RPC framework in modern C++ #

github.com
44 comments · 4:32 PM · View on HN
Hi HN,

I’m a recent CS graduate. During the past few months I wrote BinaryRPC, an open-source RPC framework in modern C++20 focused on low-latency, binary WebSocket messaging.

Why I built it:
* Wanted first-class session support, pluggable QoS levels, and a simple middleware chain (global, specific, multi-handler) without extra JSON/XML parsing.
* Easy developer experience.

A quick feature list:
* Binary WebSocket frames – minimal overhead
* Built-in session layer (login / reconnect / heartbeat)
* QoS1 / QoS2 with automatic ACK & retry
* Plugin system – rooms, msgpack, etc. can be added in one line
* Thread-safe core: RAII + folly
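The QoS1 ("at least once") item above reduces to resend-until-ACK. Here's a minimal, language-agnostic sketch of those semantics in Python; it is an illustration of the concept, not BinaryRPC's actual C++ API:

```python
# Minimal "at least once" (QoS1) delivery loop: resend until the peer ACKs.
# Conceptual only; BinaryRPC's real API and framing are in C++.

def send_qos1(message: str, transport, max_retries: int = 3):
    """Send a message, retrying until an ACK arrives or retries run out.
    `transport.send` returns True once the peer acknowledged the frame."""
    for attempt in range(1 + max_retries):
        if transport.send(message):
            return attempt + 1  # how many sends it took
    raise TimeoutError(f"no ACK after {1 + max_retries} sends")

class FlakyTransport:
    """Test double: drops the first `drops` frames, then ACKs."""
    def __init__(self, drops: int):
        self.drops = drops
    def send(self, message: str) -> bool:
        if self.drops > 0:
            self.drops -= 1
            return False  # simulated lost frame / missing ACK
        return True
```

QoS2 ("exactly once") additionally requires the receiver to deduplicate by message ID, which is why it costs an extra round trip.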

Still early (solo project), so any feedback on design, concurrency model or missing must-have features would help a lot.

Thanks for reading!

Also see "Chat Server in 5 Minutes with BinaryRPC": https://medium.com/@efecanerdem0907/building-a-chat-server-i...

50

I made a JSFiddle-style playground to test and share prompts fast #

langfa.st
17 comments · 5:41 PM · View on HN
I built this out of frustration while leading the development of AI features at Yola.com.

Prompt testing should be simple and straightforward. All I wanted was a simple way to test prompts with variables and jinja2 templates across different models, ideally something I could open during a call, run a few tests in, and share the results with my team. But every tool I tried hit me with a clunky UI, required logins and API keys, or forced a lengthy setup process.
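For context, the "variables and jinja2 templates" part of a single test run is just the standard jinja2 API; langfa.st's internals aren't public, so this only illustrates what one render computes:

```python
# Rendering a prompt template with variables, as prompt playgrounds commonly do.
# Standard jinja2 API; the playground's own pipeline is not public.

from jinja2 import Template

template = Template(
    "You are a {{ role }}. Summarize the following text in {{ n }} bullet points:\n"
    "{{ text }}"
)

prompt = template.render(role="technical editor", n=3, text="Lorem ipsum...")
```

The value of a playground is everything around this call: swapping models, sharing the rendered run with teammates, and doing it without setup.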

And that's not all.

Then came the pricing. The last quote I got for one of the tools on the market was $6,000/year for a team of 16 people, in a use-it-or-lose-it way. For a tool we use maybe 2–3 times per sprint. That’s just ridiculous!

IMO, it should be something more like JSFiddle: a simple prompt playground that does not require signup or API keys and lets you experiment instantly, i.e. you just enter a URL in your browser and start working. And mainly, something that costs me nothing when my team and I aren't using it.

Eventually I gave up looking for a solution and decided to build it myself.

Here it is: https://langfa.st

Help me find what's wrong, missing, or not working from your perspective.

P.S. I did not put any limits or restrictions in place yet, so test it wisely. Don't make me go broke, please.

19

Cogency – Cognitive Architecture for AI Agents #

github.com
9 comments · 2:06 PM · View on HN
Yesterday I built something that probably shouldn’t exist yet. In 9 hours, I created a cognitive architecture demonstrating emergent reasoning.

It follows a 5-step loop: Plan → Reason → Act → Reflect → Respond. When I added a WebSearchTool to test extensibility, the agent initially failed its first search, reflected on the poor results, adapted its query, and then succeeded. This behavior wasn’t programmed; it emerged naturally from the architecture.
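The fail-then-reflect-then-retry behavior can be sketched generically. This is only an illustration of that loop shape, with made-up names; it is not the cogency package's implementation:

```python
# Generic Plan -> Reason -> Act -> Reflect -> Respond loop with retry.
# Illustrative only; not the cogency package's actual code.

def run_agent(task: str, tool, max_steps: int = 3):
    """Act with `tool`, reflect on the result, retry with a refined query."""
    query = task  # Plan: start from the raw task
    for step in range(max_steps):
        result = tool(query)                 # Act
        if result is not None:               # Reflect: was the result useful?
            return f"answer from: {result}"  # Respond
        # Reflection failed the result; adapt the query and loop again.
        query = f"{task} (refined, attempt {step + 2})"
    return "gave up"

def flaky_search(query: str):
    """Test double for a search tool that only succeeds on refined queries."""
    return "good results" if "refined" in query else None
```

Separating this loop from the tools themselves is what makes a new tool drop in without changing the loop.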

Five hours later, I integrated a FileManagerTool and it worked on the first try. Like code compiling on the first attempt, except this was intelligence composing with zero config.

Key insight: separating cognitive operations from tool orchestration enables true composability. Most frameworks conflate these, resulting in brittle, unpredictable agents.

Commit timeline: https://github.com/iteebz/cogency

It’s pip-installable (pip install cogency) with production-ready components. Currently dogfooding across projects.

Seeking feedback from the community on the approach and implementation.

5

VibeKin – Gated Discord Tribes via Personality Matching #

tgc.fly.dev
0 comments · 2:02 AM · View on HN
I built an app that matches users to exclusive Discord communities based on a 25-question personality quiz. Inspired by HEXACO but with a novel fuzzy-clustering twist, it creates a "harmony genome" to gate access, ensuring tight-knit tribes (e.g., wellness or creative niches). Think Reddit, but curated via psychology. I launched it to test the idea. Feedback on the algorithm, niches, or scaling?
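The post doesn't disclose how the "harmony genome" gating works; as a purely hypothetical sketch of the general idea, quiz answers can be treated as a vector and matched to the most similar tribe centroid (the tribe vectors below are made up):

```python
# Hypothetical sketch: match a quiz-answer vector to the nearest "tribe"
# centroid by cosine similarity. VibeKin's actual algorithm isn't public.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

TRIBES = {  # made-up centroids in a toy 3-trait space
    "wellness": [0.9, 0.1, 0.3],
    "creative": [0.2, 0.9, 0.4],
}

def match_tribe(quiz_vector):
    """Return the tribe whose centroid is most similar to the quiz vector."""
    return max(TRIBES, key=lambda name: cosine(quiz_vector, TRIBES[name]))
```

A fuzzy-clustering variant would keep the full similarity scores as soft memberships instead of picking a single winner.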
5

I built an iOS app for parents to plan meals and create recipes and lunchboxes #

apps.apple.com
0 comments · 11:48 PM · View on HN
Hi, I built this iOS app that lets parents create profiles for their children, plan their meals, record their meal preferences, recipes, and lunchboxes, export the plan to their calendar, and share links to the timetable with others, with an AI assistant helping with the plans and recipes. For now it has a free plan and a paid plan. I initially built this as a web app, but after feedback from people close to me I developed this iOS app. I would really appreciate your feedback.
4

I Built a Stick-On Wireless Lamp That Installs in 30 Seconds #

shopinfinitylamp.store
2 comments · 8:26 PM · View on HN
Hi HN!

I recently built a simple, rechargeable wall lamp that doesn't require any tools, wires, or drilling. It sticks to surfaces using adhesive pads, rotates 360°, and charges via USB-C. The goal was to make lighting *super minimal, renter-friendly, and easy to install*.

The idea came from personal frustration — I live in a rented apartment where I can’t drill holes, and I wanted a modern-looking light I could reposition easily.

I know this isn’t a software product, but I figured some of you might appreciate the problem-solving side of it — designing minimal hardware that’s useful, elegant, and simple. Would love feedback on the product or the landing page:

Happy to answer any questions about the design, battery, lighting specs, remote control logic, etc.

Thanks!

3

Transition – AI Triathlon Coach #

transition.fun
0 comments · 2:39 AM · View on HN
Hey HN, I’m Alex, a triathlete, dad, and software engineer.

I’ve been building Transition — an app for triathletes that creates adaptive training plans based on your goals, schedule, and workout data (Garmin, Strava, etc).

Most plans are static, which never really worked for me as a parent and someone with an unpredictable schedule. Transition adjusts every week using your actual workouts and progress, so the plan changes when you miss a session, set a new PR, or need to shift your priorities.

I built this because nothing else was flexible enough for my life, and I’m curious if others have the same problem.

It’s in beta and free to try. I’d love feedback from the HN crowd — especially around the training logic, onboarding, or any ways to make it more useful for real athletes.

Website: https://www.transition.fun/

2

I automated code security to keep vibe coders from getting busted #

elara-app.ai
1 comment · 4:20 PM · View on HN
Hi HN!

I’m the developer of Elara, a tool that automatically scans your code for security issues like misconfigurations, secrets, and risky packages, so you can focus on building without stressing about all this stuff. It’s designed to be simple and fast.

I see so many people launching products online without even knowing what security risks they might have. If you’re a developer or into tech, you know how hard it is to keep systems safe. Yet shockingly it feels like nobody really cares. I want to help folks catch these issues early, before they get burned.

Elara runs multiple security scanners simultaneously, aggregates the results into a single interface, and gives you an actionable to-do list to fix the problems.
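The aggregate-and-prioritize step could look roughly like the sketch below; the data shapes, severity levels, and dedup key are my assumptions, not Elara's actual pipeline:

```python
# Merge findings from several scanners into one deduplicated to-do list,
# ordered by severity. Hypothetical shapes; not Elara's real pipeline.

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def aggregate(findings_per_scanner):
    """findings_per_scanner: {scanner_name: [finding dicts]}.
    Dedupe by (file, rule), keep the worst severity, sort worst-first."""
    merged = {}
    for scanner, findings in findings_per_scanner.items():
        for f in findings:
            key = (f["file"], f["rule"])
            worse = (key not in merged or
                     SEVERITY_ORDER[f["severity"]] <
                     SEVERITY_ORDER[merged[key]["severity"]])
            if worse:
                merged[key] = {**f, "scanner": scanner}
    return sorted(merged.values(),
                  key=lambda f: SEVERITY_ORDER[f["severity"]])
```

The interesting design choice in any such aggregator is the dedup key: too loose and you drown in duplicates, too strict and distinct issues collapse together.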

It’s super simple to try, just log in with GitHub and see for yourself.

Would really appreciate your feedback!

2

I made a game which forces me to workout (do a chin-up, save a cat) #

old.reddit.com
0 comments · 10:40 AM · View on HN
I'm creating a fitness game on the web where you do exercises to save pets.

Here's a live demo (camera access required): https://www.funwithcomputervision.com/chinup/

I've added push-up mode as well, and you can choose whether you want to rescue cats or dogs :)

Tech stack: mediapipe computer vision (body pose tracking model), threejs, tonejs
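With MediaPipe's pose landmarks (normalized coordinates where y grows downward), rep counting is typically a small state machine on landmark heights. A hedged sketch, since the game's actual counting rules aren't published; the chin-above-wrists rule and the 0.05 hysteresis margin are assumptions:

```python
# Count chin-up reps from per-frame pose landmarks: a rep completes when
# the chin rises above the wrists and then drops back down.
# Hypothetical logic; MediaPipe's convention is y in [0, 1], growing downward.

def count_reps(frames):
    """frames: list of (chin_y, wrist_y) tuples, one per video frame."""
    reps, above = 0, False
    for chin_y, wrist_y in frames:
        if not above and chin_y < wrist_y:        # chin pulled above the wrists
            above = True
        elif above and chin_y > wrist_y + 0.05:   # back below, with hysteresis
            above = False
            reps += 1                              # full rep completed
    return reps
```

The hysteresis margin keeps landmark jitter near the threshold from registering phantom reps.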

I'm actively working on this, so please let me know your feedback / other exercises you want added!

2

ElizaOS 1.0 - Powerful and useful multi-agent framework in TypeScript #

github.com
0 comments · 12:24 AM · View on HN
We've just released elizaOS 1.0 this week. Try it yourself: `npx @elizaos/cli start`

elizaOS is a powerful new kind of agent framework. We've spent the last year iterating on a modular, plugin-based runtime with real use cases in real businesses: agents performing complex financial tasks with realtime trading, camera and screen vision integration, voice, and even 2D and 3D games.

Our project is MIT licensed, with contributions from over 600 contributors on GitHub. We have a crowd-funded in-house team of 10 developers. We see a lot of people using our agents to bring their existing startups to where their users are: on Discord, Telegram, Slack, or even text message and voice via our Twilio plugin.

While elizaOS is a powerful framework for developers, non-coders can build agents, too! Our GUI supports full agent creation and plugin customization, and our CLI has Claude Code-based generation of new plugins built in. We've also made the framework very vibe-code friendly.

While we've been mainly focused on "useful agents", elizaOS is also a very powerful no-code character framework. Imagine character.ai, but open source and fundamentally multi-agent, where your agents can have plugins to interact with the real world, from having their own e-mail address to being a full-fledged voice assistant with screen vision and shell access (what could go wrong?).

We support pretty much every major inference provider, and you can also bring your own open source models. If you want to build agents that seamlessly integrate with your team or community, our project is probably a good option. The runtime will even run natively in your browser!

Github here: https://github.com/elizaOS/eliza

You can find our docs here: https://eliza.how/