Daily Show HN

Show HN for July 22, 2025

27 posts
124

Any-LLM – lightweight and open-source router to access any LLM provider

github.com
68 comments · 5:40 PM · View on HN
We built any-llm because we needed a lightweight router for LLM providers with minimal overhead. Switching between models is just a string change: update "openai/gpt-4" to "anthropic/claude-3" and you're done.
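As a rough illustration (the exact import and entry-point name below are assumptions, so check the repo's README), a Python sketch of that provider swap might look like this:

    # Hypothetical usage sketch: the completion() entry point is assumed, not confirmed.
    from any_llm import completion

    messages = [{"role": "user", "content": "Write a haiku about routers."}]

    # Same call shape for every provider; only the model string changes.
    gpt_reply = completion(model="openai/gpt-4", messages=messages)
    claude_reply = completion(model="anthropic/claude-3", messages=messages)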

It uses official provider SDKs when available, which helps since providers handle their own compatibility updates. No proxy or gateway service needed either, so getting started is pretty straightforward - just pip install and import.

Currently supports 20+ providers including OpenAI, Anthropic, Google, Mistral, and AWS Bedrock. Would love to hear what you think!

112

The Magic of Code – book about the wonders and weirdness of computation

themagicofcode.com
29 comments · 12:05 PM · View on HN
I recently published a book called “The Magic of Code” which is about the delights of the computational world, examining computing as a kind of “humanistic liberal art” that connects to so many topics, from art and biology to philosophy and language. The link I’ve shared is to a page on my book’s website where you can download a pdf of the introduction, to give HN readers a taste of what is inside.

Right now there is so much worry and concern around technology that I feel like some people—though not the folks here—have forgotten how much fun code and computation can also be. So I wanted to rekindle some of that sense of wonder.

But, as I’ve written elsewhere, this is also the kind of book I wish I had when I was younger and getting interested in computers. I’ve always enjoyed the kind of writing that talks about computing in the context of so many other big ideas, especially ones I’ve explored at various points in my own life, from evolution to simulation. And that’s what I tried to do.

But while “The Magic of Code” is certainly for a wide audience, and for people who are unfamiliar with programming and code, I’ve also (hopefully!) designed it to be of interest to those who are more expert in this realm, with lots of rabbit holes and strange ideas to pursue. And if there exists a genre of book to explain to outsiders why you love a topic, this is in that genre, for computing and code. I think the HN community will really enjoy it.

90

Phind.design – Image editor & design tool powered by 4o / custom models

phind.design
22 comments · 5:44 PM · View on HN
Hi HN,

Today we’re launching phind.design (https://phind.design), an image editor and design tool that uses 4o and custom models to allow users to generate and edit designs for anything from logos and advertisements to creative website and app designs.

4o is great at producing a first version of an image, but is not capable of editing it without messing up other parts of it. We fix this by running Flux Kontext alongside 4o image gen in the chat, as well as by introducing a precision editor powered by custom models where a user indicates an area to modify and we guarantee that only that area will be modified.

Our precision editor is state-of-the-art at image editing in our tests and allows inserting additional images into the existing image. The latter allows users to insert a logo, product, or face into an image without messing up other parts of the image, and even fix logos and faces that were messed up by 4o. Text editing with the precision edit model is still a work in progress, and we will fix it with the next iteration of that model. We recommend using the chat for editing text for now.

Example: Insert UT Austin logo into helicopter ad (https://phind.design/edit?chat=cmd27o2n10001l704h6865f3u)

We also always produce multiple variations for image generations and edits, as we think this variety is important for getting exactly what you asked for.

Example: Paul Graham in startup heaven (https://phind.design/edit?chat=cmd23h91c000jky04no5d92uy)

One thing we’re excited about is adding more variation into AI-generated websites, as many website builders use the same CSS libraries, so many websites end up looking the same. We hope to allow builders and creatives to make truly unique designs in 1/10th the time it currently takes with existing tools.

Example: Make me a popeyes landing page where the eyes are actually popping out (https://phind.design/edit?chat=cmd25imtm0001jr046nsag4lu)

Example: A train map with sandwich ingredients replacing subway stops (https://phind.design/edit?chat=cmd23i98c0001ie04l56npyj3)

As engineers who have been frustrated by the time commitment it takes to learn Figma or Photoshop, we hope that phind.design makes it incredibly easy to go from zero to one on your wildest creative ideas.

The editor is far from perfect, particularly when it comes to text. We are working on it and have a new custom precision editing model on the way. In the meantime, we’re excited to hear your comments and feedback!

82

A word of the day that doesn't suck

34 comments · 11:31 AM · View on HN
I’ve long thought that the Word of the Day was a wasted genre. The goal should be to give you words you can use; to enrich your understanding of words you already know; or at least to use words to tell you something neat about the world.

Instead, what you usually get is words that will never be used in conversation, held up as curios. Some examples from Dictionary.com’s daily email: thewless, balladmonger, vagility, contextomy. These words are... not useful.

I’ve always thought I could do better. My friend Ben recently created a daily puzzle game, called Bracket City, launched here on HN [1], which I like because it takes about the same amount of time as Wordle but has some of the variety and artistry of a good crossword.

Ben agreed to let me write a word of the day for the game’s audience. We’ve collected them all here: https://bracket.city/words. It’s such a joy to write -- every day, I pay homage to a word I love or use or have newly discovered. I find myself paying more attention to words I encounter, wondering whether they deserve a place.

It’s also fun for another reason. Many years ago I wrote a blog post, "You’re probably using the wrong dictionary" [2], that made the rounds and actually still finds new readers today. It was about how the modern-day dictionaries we find by default on our iPhones and web browsers are actually kind of bureaucratic and lifeless. Through a writer I love, John McPhee, I rediscovered Webster’s 1913 dictionary, which feels like it was written by a thinking person who loved words. I still consult it all the time. Writing a word of the day has reminded me just how delightful and useful Webster’s old dictionary is -- and reacquainted me with the OED, which I now look to every day, and which I discovered you can access with your library card.

Some of my favorite entries so far: sophisticated, twee, gravitas, blockbuster, meteorologist, send, bid. There are more than 175 now -- and more coming once a day, every day, for as long as Bracket City stands.

To sign up to see each word of the day as it’s published, go to https://bracket.city/words.

[1] https://news.ycombinator.com/item?id=43622719

[2] https://jsomers.net/blog/dictionary

12

Go Command-streaming lib for distributed systems (3x faster than gRPC)

github.com
4 comments · 2:57 PM · View on HN
I created cmd-stream-go, a high-performance client-server library based on the Command Pattern, where Commands are first-class citizens.

Why build around Commands? As serializable objects, they can be sent over the network and persisted. They also provide a clean way to model distributed transactions through composition, and naturally support features like Undo and Redo. These qualities make them a great fit for implementing consistency patterns like Saga in distributed systems.
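To make that concrete, here is a tiny Python sketch of the Command Pattern idea in general (not cmd-stream-go's actual Go API): a command is a plain serializable object that can be shipped over the wire, executed, and undone.

    import json
    from dataclasses import asdict, dataclass

    @dataclass
    class DepositCommand:
        account_id: str
        amount: int

        def execute(self, balances: dict) -> None:
            balances[self.account_id] = balances.get(self.account_id, 0) + self.amount

        def undo(self, balances: dict) -> None:
            balances[self.account_id] -= self.amount

    # Because the command is just data, it can be serialized, sent to a server,
    # appended to a log, and replayed or undone later.
    cmd = DepositCommand(account_id="acct-42", amount=100)
    wire = json.dumps({"type": "DepositCommand", **asdict(cmd)})

    balances: dict = {}
    cmd.execute(balances)  # {'acct-42': 100}
    cmd.undo(balances)     # {'acct-42': 0}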

On the performance side, sending a Command involves minimal overhead — only its type and data need to be transmitted. In benchmarks focused on raw throughput (measured using 1, 2, 4, 8, and 16 clients in a simple request/response scenario), cmd-stream with MUS serialization is about 3x faster than gRPC/Protobuf, and about 2.8x faster when cmd-stream uses Protobuf; MUS is a serialization format optimized for low byte usage. This kind of speedup can make a real difference in high-throughput systems or when you're trying to squeeze more out of limited resources.

By putting Commands at the transport layer, cmd-stream-go avoids the extra complexity of layering Command logic on top of generic RPC or REST.

The trade-offs: it’s currently Go-only and maintained by a single developer.

If you’re curious to explore more, you can check out the cmd-stream-go repository (https://github.com/cmd-stream/cmd-stream-go), see performance benchmarks (https://github.com/ymz-ncnk/go-client-server-benchmarks), or read the series of posts on Command Pattern and how it can be applied over the network (https://medium.com/p/f9e53442c85d).

I’d love to hear your thoughts — especially where you think this model could shine, any production concerns you’d have, or similar patterns and tools you’ve seen in practice.

Feel free to reach me as ymz-ncnk on the Gophers Slack or follow https://x.com/cmdstream_lib for project updates.

5

Code Mind Maps – A Fresh Perspective on Code Navigation

github.com
2 comments · 3:39 AM · View on HN
For years, I’ve been obsessed with mapping code visually — originally by copy-pasting snippets into FreeMind to untangle large codebases in big, complex projects. It worked, but it was clunky.

Now, I’ve built a VS Code/Visual Studio extension to do this natively: Code Mind Map. You can use it to add selected pieces of code to a mind map as nodes and then click to jump to the code from the map.

Developers say it’s especially useful for:

- Untangling legacy code
- Onboarding into large codebases
- Debugging tangled workflows

Please try it out and let me know what you think!

5

Giti – Natural Language to Git Commands with Local LLM

github.com
0 comments · 2:57 PM · View on HN
Hi HN,

I built Giti, a command-line tool that converts plain English into actual Git commands using a fast, local language model (Qwen2.5-Coder, ~1 GB).

Example:

Input: giti "undo last commit"

Output: git reset --soft HEAD~1

No internet required after setup. No API keys. You can also run it in an interactive shell to chain commands naturally.

Key features:
- Natural language to Git translation
- Local LLM powered by Qwen2.5-Coder in GGUF format
- Works fully offline after model download
- Dry-run mode to preview commands before running
- Interactive shell mode for session-based workflows
- Context file support to teach Giti your custom Git habits

Quick install:
- Clone the repo
- Install llama-cpp-python
- Add giti to your PATH
- Download the 1GB model from HuggingFace
- Run giti "your query"

You can also enhance its accuracy using context files in a simple Q&A format like:

USER: How to start new feature?
BOT: git checkout main && git pull && git checkout -b feature/<name>

This lets Giti learn your workflow and generate project-specific Git commands.
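For readers curious about the mechanics, here is a minimal sketch of the underlying idea using llama-cpp-python (an illustration only, not Giti's actual implementation; the model path and filename are assumptions):

    from llama_cpp import Llama

    # Path and filename of the downloaded GGUF model are assumptions; point this at your own copy.
    llm = Llama(model_path="models/qwen2.5-coder-q4.gguf", verbose=False)

    prompt = (
        "You translate plain English into a single git command.\n"
        "USER: undo last commit\nBOT: git reset --soft HEAD~1\n"
        "USER: How to start new feature?\nBOT: git checkout main && git pull && git checkout -b feature/<name>\n"
        "USER: delete the local branch old-ui\nBOT:"
    )

    out = llm(prompt, max_tokens=64, stop=["\n"])
    print(out["choices"][0]["text"].strip())  # e.g. git branch -d old-ui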

Thanks for checking it out.

4

How Claude Code Improved My Dev Workflow

1 comment · 9:27 PM · View on HN
I've been using Claude Code for the past month and it's transformed my productivity.

A New Era for Software Development: AI Pair Programming vs. Traditional Methods

Today, let's dive deep into a topic that's revolutionizing the software development landscape - AI Pair Programming. By shedding light on the specific problems it solves, we will explore a real-life case study, and provide actionable insights to help you enhance your workflow.

Firstly, let's understand the challenge. Traditional pair programming, while being highly beneficial, is often hindered by scheduling conflicts, differing skill levels, and varying coding styles. Here's where AI pair programming steps in. It leverages artificial intelligence to generate code suggestions, allowing developers to work more efficiently.

Consider the case of CodeStream, a software startup. They adopted AI pair programming using GitHub Copilot. The results? Their development speed increased by 30%, and the number of bugs decreased by 25%. They also witnessed a significant improvement in code quality. This case study clearly illustrates the transformative power of AI in software development.

So how can you incorporate this cutting-edge method into your workflow? Here are three actionable insights:

1. *Embrace AI tools:* Start by integrating AI-based coding assistants like GitHub Copilot or Kite into your development environment. These tools provide real-time, context-aware code suggestions, significantly reducing your coding time.

2. *Upskill:* AI pair programming is not about replacing human developers. It's about augmenting their abilities. Therefore, it's crucial to upskill and stay updated with AI advancements in your field.

3. *Continuously Test:* While AI tools can write code, they're not infallible. Regularly testing the AI-generated code ensures optimal performance.

Now, where does the Keychron K8 Pro Keyboard fit into this workflow? For a start, it's a tool that enhances programming efficiency. With its hot-swappable keys and customizable layouts, developers can create shortcuts for frequently used coding commands, thereby streamlining their workflow. Its Bluetooth 5.1 feature allows for smooth

What tools are you using to improve your coding workflow?

Disclosure: Post contains affiliate links.

3

A P2P Contribution-Based Protocol for Civic Trust

github.com
0 comments · 5:54 AM · View on HN
Most governance relies on centralized control and monetary incentives. This protocol enables decentralized civic trust using non-monetary contribution signals.

It tracks:
- contribution history
- accumulated prestige
- system activity

These are integrated using a harmonizing logic that evolves over time and converges toward π, representing systemic equilibrium.

Designed to support self-organizing communities, with concepts inspired by biology and open collaboration.

Full documentation and mathematical specification here: https://github.com/contribution-protocol/contribution-protoc...

3

Nightcrawler – A scanner that finds low-hanging fruit while you work

github.com
1 comment · 7:14 AM · View on HN
Hi HN,

I wanted to share a project I built in a strange but productive pair-programming "trip" with a large language model. The goal was to create my own automated "First Officer"—a tool that handles the tactical grunt work of finding common vulnerabilities while I focus on the strategic, human-led parts of a security assessment.

The result is Nightcrawler, an open-source CLI proxy and scanner built on Python & mitmproxy.

How it works: You run it and browse a target app through it. While you navigate, Nightcrawler passively finds insecure headers, outdated JS, and JWTs, while its active scanners autonomously test every discovered link and form for XSS, SQLi, Directory Traversal, and more.
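For a sense of how the passive side of such a tool works, here is a minimal mitmproxy addon sketch (illustrative only, not Nightcrawler's actual code) that flags responses missing common security headers while you browse through the proxy:

    from mitmproxy import http

    SECURITY_HEADERS = (
        "Strict-Transport-Security",
        "Content-Security-Policy",
        "X-Content-Type-Options",
    )

    class PassiveHeaderCheck:
        def response(self, flow: http.HTTPFlow) -> None:
            # Flag any response that lacks one of the headers above.
            missing = [h for h in SECURITY_HEADERS if h not in flow.response.headers]
            if missing:
                print(f"[passive] {flow.request.pretty_url} missing: {', '.join(missing)}")

    addons = [PassiveHeaderCheck()]

You would run a script like this with mitmproxy -s passive_check.py and browse the target app through the proxy.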

The development process felt exactly like Captain Picard directing Commander Riker. I'd give the strategic orders ("We need to detect Stored XSS"), and the LLM would execute the tactical implementation. It was incredibly fast, but also highlighted the current limits of AI—it required constant human oversight to fix the subtle bugs and "hallucinations" it introduced.

The tool is still in beta (pip install nightcrawler-mitm). I'd love to get your feedback, bug reports, or ideas on what to build next.

Thanks for checking it out!

2

Create your color palettes in context, not isolation

colorpal-sage.vercel.app
0 comments · 4:56 PM · View on HN
As a developer working with UI/UX teams, I’ve seen how much of a pain it still is to create accessible, well-balanced color palettes.

A colleague of mine (UI/UX designer) mentioned how frustrating it is to:

- Generate tints and shades from a brand color
- Check WCAG accessibility contrast
- Preview how those colors will actually look on buttons and components
- Then jump between 2–3 tools just to get something usable

So I built a tool to help fix that.

1. Choose a base color
2. Generate automatic tints/shades
3. Get WCAG contrast ratings live (against black/white backgrounds; a sketch of the underlying formula follows below)
4. See automatically suggested complementary colors
5. And drop your palette directly onto real UI components (buttons for now, more coming) to visualise how your palette actually looks in a design system.
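For reference, the live WCAG check in step 3 boils down to the standard WCAG 2.x contrast formula (shown here from the public spec, not ColorPal's own code); a minimal Python sketch:

    def _linearize(channel: int) -> float:
        # sRGB channel (0-255) to linear light, per WCAG 2.x.
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb: tuple) -> float:
        r, g, b = (_linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg: tuple, bg: tuple) -> float:
        l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (l1 + 0.05) / (l2 + 0.05)

    # Example: a mid-blue brand color against white and black backgrounds.
    brand = (59, 130, 246)
    print(round(contrast_ratio(brand, (255, 255, 255)), 2))  # WCAG AA body text needs >= 4.5
    print(round(contrast_ratio(brand, (0, 0, 0)), 2))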

You get to create your color palettes in context, not isolation.

Here’s the tool (free, no signup required to get started): https://colorpal-sage.vercel.app/

I'd appreciate feedback from this community on:
- Is the new UX clear or confusing?
- Is the “component playground” something you’d actually use?
- Anything that feels unnecessary or missing?
- Anything else?

I am genuinely grateful for any insights from designers or developers working with colour systems.

Thanks in advance!

2

SynSniff – Detect Minecraft Client OS via TCP/IP Fingerprinting

github.com
0 comments · 1:39 PM · View on HN
SynSniff uses passive TCP/IP fingerprinting to reveal details about a player's connection. Think of it as p0f for Minecraft.

On our server we've seen many banned players attempt to evade their ban by using alternative accounts. To combat this, we analyze secondary traits, most notably their IP address. But building an effective anti-evasion system requires multiple datapoints that are hard for clients to spoof. That's what motivated this project.

SynSniff uses pcap to inspect incoming TCP/IP packets and correlates them with player joins by comparing source IP and port.

It then attempts to identify the player's operating system by comparing TCP/IP characteristics (like TCP options, window size) to reference samples from Linux, Windows and macOS. While the current detection logic is relatively simple, the results so far have been surprisingly accurate.
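As a rough illustration of the technique in general (not SynSniff's actual implementation, which runs as a server-side plugin on pcap), a passive SYN fingerprint in Python with Scapy compares fields like TTL and window size against per-OS reference values:

    from scapy.all import IP, TCP, sniff  # requires root/admin privileges to capture

    # Very rough reference values; real fingerprinting uses many more signals (TCP options, MSS, etc.).
    REFERENCE = {
        "Linux":   {"ttl": 64,  "window": 64240},
        "Windows": {"ttl": 128, "window": 65535},
        "macOS":   {"ttl": 64,  "window": 65535},
    }

    def guess_os(pkt) -> str:
        ttl, window = pkt[IP].ttl, pkt[TCP].window
        for name, ref in REFERENCE.items():
            # TTL drops by one per hop, so compare against the nearest common initial value.
            if abs(ref["ttl"] - ttl) <= 32 and ref["window"] == window:
                return name
        return "unknown"

    def on_syn(pkt) -> None:
        print(f"{pkt[IP].src}:{pkt[TCP].sport} -> guess: {guess_os(pkt)}")

    # Capture only SYN packets headed for the default Minecraft port.
    sniff(filter="tcp[tcpflags] & tcp-syn != 0 and dst port 25565", prn=on_syn, store=False)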

There's also an API for plugin developers to access fingerprint and OS data and integrate it into their systems.

2

SandCrab – An AWS S3 GUI for macOS

sandcrab.io
2 comments · 2:26 PM · View on HN
Hello HN. SandCrab is an app I originally had the idea for and hacked together over a couple of weekends last October. More recently, I’ve been working on adding new features to it as I use it more as my daily tool to interact with my S3 buckets.

A few recent features I am excited about include the ability to search across all of your buckets using wildcards and regular expressions, as well as searching and filtering by metadata (e.g. content type, file size, date created, etc.). I’ve also added support for configuring and generating presigned URLs directly in the app, which is nice. The app can also copy, move and "rename" (which performs a copy-and-delete under the hood) objects within or across buckets.

I’ve also recently added more flexible auth options, so that in addition to providing the app with security credentials stored in your keychain, the app can also support role assumption as well as simply using your local .aws profiles without the need to actually give the app credentials directly.
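For context on the copy-and-delete "rename": S3 has no native rename operation, so tools implement it as a copy followed by a delete. A minimal boto3 sketch of that idea (an illustration of what happens under the hood, not SandCrab's code):

    import boto3

    s3 = boto3.client("s3")  # credentials come from your environment or ~/.aws profiles

    def rename_object(bucket: str, old_key: str, new_key: str) -> None:
        # "Rename" in S3 is a copy to the new key followed by a delete of the old key.
        s3.copy_object(Bucket=bucket, Key=new_key,
                       CopySource={"Bucket": bucket, "Key": old_key})
        s3.delete_object(Bucket=bucket, Key=old_key)

    rename_object("my-bucket", "reports/2024.csv", "reports/archive/2024.csv")
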
2

Stop buying expensive gear, improve your processing skills instead

youtube.com
0 comments · 8:04 AM · View on HN
Hey HN,

It's easy to get caught in the trap of thinking the next expensive piece of gear is what's holding your astrophotography back. I've found that the biggest gains often come from improving my processing skills, not the hardware. To test this idea, I took some of my old data going back four years and reprocessed it from scratch using my current workflow. The difference is night and day!

I made a video walking through the entire process to show how much potential is hiding in the data you probably already have. It covers my main steps in Siril, Photoshop and SETI Astro Suite from the initial background extraction and color calibration to using modern tools like Cosmic Clarity and Graxpert to really make the details pop.

My hope is to show that focusing on your post-processing technique is often the most effective upgrade you can make. Happy to answer any questions about the workflow!

2

BrightShot – AI photo enhancement and virtual staging for real estate

bright-shot.com
4 comments · 8:46 AM · View on HN
Hi HN, I'm Pau and I just launched Brightshot (https://bright-shot.com/) — a tool that uses AI to enhance and virtually stage real estate photos.

The idea started when I was looking to buy my own apartment in my hometown. I was constantly frustrated by poor-quality listing photos — dark rooms, cluttered spaces, and angles that didn’t show much. I couldn’t tell whether the place had natural light, or even what it really looked like. The only way to know was to visit in person, which I didn’t always have time for.

That’s when I thought: what if there was a way to improve those photos automatically? Not just by making them look a bit nicer, but by actually revealing the potential of the space — clearer lighting, better framing, less visual noise, and even virtual staging. That’s what Brightshot does.

You can upload any photo — a living room, kitchen, exterior, whatever — and Brightshot will improve it automatically. It enhances lighting and clarity, removes visual clutter, and can even virtually furnish an empty space. You can also generate different lighting versions of the same room (like how it looks in the morning vs. evening), or create a short video walkthrough based on a single image.

I built it using a set of AI models tailored to real estate photography. Right now I’m working on batch editing for agencies, more customizable staging styles, and integrations for property platforms.

Would love to hear your thoughts — especially if you’ve ever rented, sold, or photographed a place. Does this solve a real pain? What’s missing? Would you trust AI-enhanced images for your own listing?

Thanks for reading! — Pau

1

ModelMashup – Chat with Multiple LLMs Simultaneously

modelmashup.vercel.app
0 comments · 4:57 PM · View on HN
You can converse with LLMs from multiple providers at the same time and compare results. Looking for feedback on what customisation would be useful for you.

I reckon this could be useful for people who want to try coding with multiple LLMs and pick the best one.

I'll be looking into adding multimodal support soon.

1

Best AI Tool Finder

bestaitoolfinder.com
0 comments · 3:44 AM · View on HN
Are you overwhelmed by the explosion of new AI tools and struggling to find the right ones for your needs? Best AI Tool Finder is a comprehensive directory featuring over 1,500 handpicked AI tools, each thoroughly evaluated for usefulness, reliability, and real-world impact.

What sets us apart:

Expert-Curated Reports: Detailed reviews and “Deep Research Reports” assess tools across 13+ key criteria, including usability, data sources, transparency, and value—leveraging both expert and AI-led fact-checking.

Fresh & Trusted: Only tools with a proven track record and positive reviews across platforms are listed, with every addition vetted for at least 3 days after public release.

Search & Discovery: Effortlessly browse by category (Automation, Content Generation, Data Analysis, Marketing, Coding, and more), or jump straight to trending tags (AI Agent, AI Avatar, Workflow, etc.).

Cutting-Edge Trends: Stay updated on global AI news and the latest breakthroughs with our curated news summary and in-depth market analysis.

Open to Feedback: Correction and listing requests are welcomed to ensure accuracy and up-to-date information.

If you need to discover, compare, and select the best AI tools for productivity, automation, or business transformation—from startups to enterprises—check out Best AI Tool Finder!

User-centric, clean design, and completely free to use.

Website: https://bestaitoolfinder.com/

1

OLLS – A Vendor-Neutral Standard for LLM Inputs and Outputs

0 comments · 4:56 PM · View on HN
We have just launched the Open LLM Specification (OLLS) – a community-driven standard that unifies how developers interact with large language models (LLMs) across providers such as OpenAI, Anthropic, Google, and others.

Right now, every provider has different request/response formats, which makes integration painful:

- Parsing responses is inconsistent
- Switching models needs custom wrappers
- Error handling and metadata vary wildly

OLLS defines a simple, extensible JSON spec for both inputs (prompts, parameters, metadata) and outputs (content, reasoning, usage, errors). Think of it like OpenAPI for LLMs—portable, predictable, and provider-agnostic.
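To give a flavor of what such a shape could look like, here is a purely hypothetical Python sketch; the field names are assumptions drawn from the post's wording (content, reasoning, usage, errors), not the actual OLLS schema, which lives in the repo:

    # Hypothetical request/response shapes -- illustrative only, not the real OLLS spec.
    request = {
        "model": "anthropic/claude-3",  # provider-prefixed model id (assumed convention)
        "input": [{"role": "user", "content": "Explain OLLS in one sentence."}],
        "parameters": {"temperature": 0.2, "max_tokens": 128},
        "metadata": {"trace_id": "abc-123"},
    }

    response = {
        "output": {
            "content": "OLLS is a vendor-neutral spec for LLM inputs and outputs.",
            "reasoning": None,
        },
        "usage": {"input_tokens": 18, "output_tokens": 14},
        "error": None,  # would hold a normalized error object on failure
    }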

GitHub repo: https://github.com/julurisaichandu/open-llm-specification (example input/output formats, goals, and roadmap)

Looking for contributors, feedback, and real-world use cases!

Let’s build a unified LLM interface—contribute ideas or join the discussion