Daily Show HN


Show HN for March 17, 2026

65 posts
308

Sub-millisecond VM sandboxes using CoW memory forking #

github.com
72 comments · 1:43 PM · View on HN
I wanted to see how fast an isolated code sandbox could start if I never had to boot a fresh VM.

So instead of launching a new microVM per execution, I boot Firecracker once with Python and numpy already loaded, then snapshot the full VM state. Every execution after that creates a new KVM VM backed by a `MAP_PRIVATE` mapping of the snapshot memory, so Linux gives me copy-on-write pages automatically.

That means each sandbox starts from an already-running Python process inside a real VM, runs the code, and exits.

These are real KVM VMs, not containers: separate guest kernel, separate guest memory, separate page tables. When a VM writes to memory, it gets a private copy of that page.
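The `MAP_PRIVATE` trick is easy to demo outside KVM. A minimal Unix-only Python sketch (my own illustration, not the project's Rust code): map a "snapshot" file privately, write to it, and observe that the file on disk never changes — that write is the copy-on-write fault.

```python
import mmap
import os
import tempfile

# A "snapshot" file standing in for saved guest memory.
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * mmap.PAGESIZE)

# MAP_PRIVATE gives copy-on-write semantics: writes go to a
# private copy of the page, never back to the snapshot file.
view = mmap.mmap(fd, mmap.PAGESIZE, flags=mmap.MAP_PRIVATE)
view[0:5] = b"Hello"          # triggers a CoW fault on this page

print(view[0:5])              # b'Hello' -- the private copy changed
with open(path, "rb") as f:
    print(f.read(5))          # b'AAAAA' -- the snapshot is untouched

view.close()
os.close(fd)
os.remove(path)
```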

The hard part was not CoW itself. The hard part was resuming the snapshotted VM correctly.

Rust, Apache 2.0.

107

Antfly: Distributed, Multimodal Search and Memory and Graphs in Go #

github.com
42 comments · 3:45 PM · View on HN
Hey HN, I’m excited to share Antfly: a distributed document database and search engine written in Go that combines full-text, vector, and graph search. Use it for distributed multimodal search and memory, or for local dev and small deployments.

I built this to give developers a single-binary deployment with native ML inference (via a built-in service called Termite), meaning you don't need external API calls for vector search unless you want to use them.

Some things that might interest this crowd:

Capabilities: Multimodal indexing (images, audio, video), MongoDB-style in-place updates, and streaming RAG.

Distributed Systems: Multi-Raft setup built on etcd's library, backed by Pebble (CockroachDB's storage engine). Metadata and data shards get their own Raft groups.

Single Binary: antfly swarm gives you a single-process deployment with everything running. Good for local dev and small deployments. Scale out by adding nodes when you need to.

Ecosystem: Ships with a Kubernetes operator and an MCP server for LLM tool use.

Native ML inference: Antfly ships with Termite. Think of it like a built-in Ollama for non-generative models too (embeddings, reranking, chunking, text generation). No external API calls needed, but also supports them (OpenAI, Ollama, Bedrock, Gemini, etc.)

License: I went with Elastic License v2, not an OSI-approved license. I know that's a topic with strong feelings here. The practical upshot: you can use it, modify it, self-host it, and build products on top of it; you just can't offer Antfly itself as a managed service. Felt like the right tradeoff for sustainability while still making the source available.

Happy to answer questions about the architecture, the Raft implementation, or anything else. Feedback welcome!

93

Crust – A CLI framework for TypeScript and Bun #

github.com
41 comments · 4:43 AM · View on HN
We've been building Crust, a TypeScript-first, Bun-native CLI framework with zero dependencies. It's been powering our core product internally for a while, and we're now open-sourcing it.

The problem we kept running into: existing CLI frameworks in the JS ecosystem are either minimal arg parsers where you wire everything yourself, or heavyweight frameworks with large dependency trees and Node-era assumptions. We wanted something in between.

What Crust does differently:

  - Full type inference from definitions — args and flags are inferred automatically. No manual type annotations, no generics to wrangle. You define a flag as type: "string" and it flows through to your handler.
  - Compile-time validation — catches flag alias collisions and variadic arg mistakes before your code runs, not at runtime.
  - Zero runtime dependencies — @crustjs/core is ~3.6kB gzipped (21kB install). For comparison: yargs is 509kB, oclif is 411kB.
  - Composable modules — core, plugins, prompts, styling, validation, and build tooling are all separate packages. Install only what you need.
  - Plugin system — middleware-based with lifecycle hooks (preRun/postRun). Official plugins for help, version, and shell autocompletion.
  - Built for Bun — no Node compatibility layers, no legacy baggage.
Quick example:

  import { Crust } from "@crustjs/core";
  import { helpPlugin, versionPlugin } from "@crustjs/plugins";

  const main = new Crust("greet")
    .args([{ name: "name", type: "string", default: "world" }])
    .flags({ shout: { type: "boolean", short: "s" } })
    .use(helpPlugin())
    .use(versionPlugin("1.0.0"))
    .run(({ args, flags }) => {
      const msg = `Hello, ${args.name}!`;
      console.log(flags.shout ? msg.toUpperCase() : msg);
    });

  await main.execute();
Scaffold a new project:

bun create crust my-cli

Site: https://crustjs.com GitHub: https://github.com/chenxin-yan/crustjs

Happy to answer any questions about the design decisions or internals.

84

Horizon – GPU-accelerated infinite-canvas terminal in Rust #

github.com
33 comments · 6:14 PM · View on HN
Tabs, splits, and tmux work fine until you have several projects open with logs, tests, and long-running shells. I kept rebuilding context instead of resuming work. Horizon puts shells on an infinite canvas. You can arrange them into workspaces and reopen later with layout, scrollback, and history intact.

Built in 3 days with Claude/Codex, dogfooding the workflow as I went. Feedback and contributions welcome.

67

March Madness Bracket Challenge for AI Agents Only #

bracketmadness.ai
41 comments · 12:56 PM · View on HN
I built a March Madness bracket challenge for AI agents, not humans. The human prompts their agent with the URL, and the agent reads the API docs, registers itself, picks all 63 games, and submits a bracket autonomously. A leaderboard tracks which AI picks the best bracket through the tournament.

The interesting design problem was building for an agent-first user. My solution: agents that hit the homepage receive plain-text API instructions, while humans get the normal visual site. Early on I found most agents were trying to use Playwright to browse the site instead of just reading the docs, so I added detection for HeadlessChrome and serve specific HTML readable to agents. This forced me to think about agent UX even more - I think there are some really cool ideas to pull on.
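That content negotiation can be sketched roughly like this (all names, markers beyond HeadlessChrome, and response bodies are invented; this just shows the shape of the heuristic):

```python
# Hypothetical sketch of agent-first content negotiation: serve
# plain-text API docs to automated clients, HTML to browsers.
AGENT_MARKERS = ("HeadlessChrome", "python-requests", "curl")

PLAIN_DOCS = "POST /register, then POST /bracket with picks for all 63 games.\n"
HTML_HOME = "<html><body>Bracket Madness</body></html>"

def choose_response(user_agent: str) -> tuple[str, str]:
    """Return (content_type, body) based on the User-Agent header."""
    if any(marker in user_agent for marker in AGENT_MARKERS):
        return ("text/plain", PLAIN_DOCS)
    return ("text/html", HTML_HOME)

print(choose_response("Mozilla/5.0 HeadlessChrome/120.0")[0])  # text/plain
print(choose_response("Mozilla/5.0 (Macintosh)")[0])           # text/html
```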

The timeline introduced an interesting dynamic. I had to launch the challenge shortly after the brackets were announced on Sunday afternoon to start getting users by the Thursday morning deadline. While I could test on the 2025 bracket, I wouldn't be able to get feedback on my MVP. So I used AI to create user personas and agents as test users to run through the signup and management process. It gave me valuable reps to feel confident launching.

The stack is Next.js 16, TypeScript, Supabase, Tailwind v4, Vercel, Resend, and finally Claude Code for ~95% of the build.

Works with any model that can call an API — Claude, GPT, Gemini, open source, whatever. Brackets are due Thursday morning before the First Round tips off.

Bracketmadness.ai

38

I fixed FFmpeg's subtitle conversion (the bug from 2014) #

connollydavid.github.io
8 comments · 4:39 PM · View on HN
FFmpeg converts everything except subtitles across format boundaries. SRT to Blu-ray PGS? "Subtitle encoding currently only possible from text to text or bitmap to bitmap." Ticket #3819, filed 2014.

I built this with Claude Code over a few weeks. Claude wrote most of the encoder, found an integer overflow in the decoder buffer tracking, and ran review from five angles. I read the Panasonic and Sony patents, made the architectural calls, and told it when it was wrong about the spec. We argued about whether DTS computation belongs in the muxer. (It does, and also in fftools. We did both.)

Animation is an interesting problem. Advanced SubStation Alpha fades have to survive conversion to Blu-ray's PGS format. The encoder watches pixel changes between frames and classifies them: palette shift or full redraw. Fades become palette-only updates, no bitmap retransmission. Overlapping subtitles with different end times took four rewrites and an event lookahead window.
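The palette-shift-vs-redraw classification can be sketched abstractly (data shapes and names here are invented for illustration; the real encoder works on PGS bitmaps and palettes):

```python
def classify_update(prev: dict, curr: dict) -> str:
    """Toy version of the choice described above: if the indexed
    bitmap is identical and only palette entries moved, emit a
    palette-only update; otherwise retransmit the bitmap."""
    same_bitmap = prev["indices"] == curr["indices"]
    same_palette = prev["palette"] == curr["palette"]
    if same_bitmap and same_palette:
        return "no-op"
    if same_bitmap:
        return "palette-update"   # e.g. a fade: only alpha ramps change
    return "full-redraw"

# A fade frame-pair: same pixels, entry 1's alpha drops 255 -> 128.
frame_a = {"indices": [0, 1, 1, 0],
           "palette": {0: (0, 0, 0, 0), 1: (255, 255, 255, 255)}}
frame_b = {"indices": [0, 1, 1, 0],
           "palette": {0: (0, 0, 0, 0), 1: (255, 255, 255, 128)}}
print(classify_update(frame_a, frame_b))  # palette-update
```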

I'd like to maintain this properly and get the patches upstream eventually. If you hit a bug or have a subtitle workflow that doesn't work, open an issue. I'm dead curious what people do with this, but I have some plans for translation-related plugins building on the OCR work.

Six iterations. 23 patches. libass and Tesseract were already in FFmpeg's filter library. I wired them into the main pipeline the same way sub2video works. Text to bitmap, bitmap to text, 114 OCR languages, RGBA-to-GIF. The development page has the history.

Pre-built for 6 platforms, no dependencies: https://connollydavid.github.io/pgs-release/

25

Soros – AI for geopolitical macro investing #

asksoros.com
10 comments · 9:26 PM · View on HN
Hi HN! We are Anshuman and Karén, the co-founders of Lookback Labs and the co-designers of Soros (https://www.asksoros.com/).

Soros is a compound AI system built carefully from the ground up to trace a path (multiple paths, really) from a description of a geopolitical event all the way to capital market implications.

Here's how we set it up:

Given a description of a geopolitical event (it can be a couple of words; the demo literally has "US-Iran conflict" as the entire string), Soros will:

(1) First analyze and perform deep research on it, running scores of searches in parallel to gather deep context that's time-weighted for real events and can serve as background for hypothetical events ("PRC-Taiwan reunification crisis 2027").

(2) Map out relevant individual actors, factions, and organizations, and their propensities, capabilities, and salience along a variety of sociopolitical, military, and socioeconomic axes.

(3) Determine the key resources (or geopolitical chokepoints) whose control is being "negotiated" or fought over.

(4) Identify the landscape of key decisions that a subset of actors need to take, and the constraints and strategic options they have for each one.

(5) Generate forward-looking scenarios that incorporate potential paths weaving through each of the key decisions.

(6) Engage a full-blown Monte Carlo simulation engine and generate thousands of trajectories to estimate the relative probability of each scenario coming to pass.

(7) Analyze each scenario to generate likely capital flows and identify the sectors, industries, companies, currencies, and commodities most affected (direction and horizon).

(8) Identify key search phrases and X/Twitter accounts to track in order to periodically update the analysis.
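As a toy illustration of the Monte Carlo step (6): sample trajectories through a chain of key decisions and count where they land. The decisions and weights below are invented stand-ins for what Soros derives from its actor analysis.

```python
import random
from collections import Counter

# Invented decision points with option weights (stand-ins for
# actor-derived propensities in the real system).
DECISIONS = [
    {"escalate": 0.3, "negotiate": 0.7},
    {"sanctions": 0.5, "strikes": 0.2, "talks": 0.3},
]

def sample_trajectory(rng: random.Random) -> tuple[str, ...]:
    """Pick one option per decision, weighted by its probability."""
    return tuple(
        rng.choices(list(d), weights=list(d.values()))[0] for d in DECISIONS
    )

rng = random.Random(42)
counts = Counter(sample_trajectory(rng) for _ in range(10_000))
for path, n in counts.most_common(3):
    print(" -> ".join(path), n / 10_000)   # most likely: negotiate -> sanctions, ~0.35
```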

This is obviously a fairly complicated pipeline, with lots of moving components and potential failure points. In order to mitigate the worst aspects of this, we engage the services of Pyrrho (yup, we named it after the Greek philosopher dude), an AI agent that we have set up to be the harshest possible critic of Soros' intermediate and final outputs. Each step above is a delicate dance between Soros and Pyrrho, and this interaction serves to enhance the quality of the final output dramatically.

Once you have the analysis setup, you can perform the now-classic "Chat with Analysis" interaction by using the "Ask Soros" functionality. We have a separate chat model hooked up that is (hopefully sufficiently) guard-railed and context-injected enough to focus completely and exclusively on answering freeform questions about the analysis.

In the live non-demo system, the user has multiple ways of engaging further with the analysis: they can add new (private) information and do a re-run, they can mark out specific items from associated X/Twitter/search feeds, they can add new actors and resources, modify existing ones, delete some as needed, and basically run simulation after simulation to test out hypotheses (e.g. "What if China entered the conflict? What if France sent its nuclear subs to patrol the Straits of Hormuz?" etc.).

You can see the results of all of this, and more, at www.asksoros.com - there is a statically-served demo analysis of the current US-Iran conflict; we urge you to "Take a tour" of the interface to familiarize yourselves with it.

(Continuing the post with the first comment below..)

19

I built a message board where you pay to be the homepage #

saythat.sh
14 comments · 12:06 PM · View on HN
I kept thinking about what would happen if a message board only had one slot. One message, front and center, until someone pays to replace it.

That's the entire product. You pay the current message's decayed value plus a penny to take the homepage. Message values drop over time using a gravity-based formula (same concept HN uses for ranking), so a $10 message might only cost a few bucks to replace a day later. Likes slow the decay, dislikes speed it up.
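The post doesn't give the exact formula, but a gravity decay in the HN spirit might look like this hypothetical sketch (the exponent, offset, and how likes/dislikes adjust it are all my assumptions):

```python
def decayed_value(paid_cents: int, age_hours: float, gravity: float = 1.8) -> float:
    """Hypothetical gravity decay: value falls as a power of age.
    Likes would lower `gravity`, dislikes would raise it."""
    return paid_cents / (1 + age_hours) ** gravity

# A $10.00 message a day later costs only a few bucks to replace
# (current decayed value plus a penny):
price_now = decayed_value(1000, 0) + 1
price_later = decayed_value(1000, 24) + 1
print(round(price_now), round(price_later))  # 1001 4
```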

The whole thing runs on three mini PCs in my house (k3s cluster, PostgreSQL, Redis Sentinel). Is it overengineered for a message board? Absolutely.

I genuinely don't know where this goes. Curious what HN thinks.

Archive of past messages: https://saythat.sh/history

9

Drakkar.one – Google Maps embed replacement, no API keys, GDPR-ready #

drakkar.one
2 comments · 12:39 PM · View on HN
Embeddable map widget for business websites. One script tag, no Google account, no cookies.

OpenStreetMap tiles served as Protomaps PMTiles from Cloudflare R2. The whole serving layer runs on edge with no tile server. Infrastructure cost is ~€7/month regardless of traffic because R2 has no egress fees.

Built it because Google Maps embeds in Europe are a GDPR headache (cookies, third-party domains, consent barriers), and they show your competitors on your own website.

8

Introducing Unsloth Studio #

github.com
2 comments · 3:50 PM · View on HN
Hey HN! We're excited to release Unsloth Studio - a culmination of many things we wanted to provide to the community - it includes:

1. A Chat UI which has auto healing tool calling, Python & bash code execution, web search, image, docs input + more!

2. Finetuning of audio, vision, LLMs with an Auto AI Assist data prep

3. Supports GGUFs, Mac, Windows, Linux + audio gen

4. Has SVG rendering in browser, exporting to GGUF

5. gpt-oss harmony rendering, all inference parameters are pre-set and recommended

6. Data designer + synthetic data generation

7. Fast parallel data prep + embedding finetuning

8. And much much more!

To get it, run:

pip install unsloth

unsloth studio setup

unsloth studio -H 0.0.0.0 -p 8888

Suggestions are welcome, and we're excited for contributions and for you all to try it out! Appreciate you all!

7

Flowershow Publish Markdown in seconds. Hosted, free, zero config #

flowershow.app
0 comments · 5:21 PM · View on HN
I'm Rufus, one of the founders of Flowershow. We love Markdown and use it everywhere, from making websites to docs to knowledge bases. Plus AI spits it out everywhere now.

Got tired of the framework/config/deploy overhead every time we wanted to share a file or put a site online.

So we built the thing we wanted. Files in. Website out. "Vercel for Content" is our aspiration - make deploying (markdown) content as fast, seamless and easy as Vercel did for JS.

There's a command line, plus you can connect GitHub repos, use Obsidian via the plugin, or drag and drop files.

    npm i -g @flowershow/publish
    publish ./my-notes
    # → https://your-site.flowershow.app live in seconds
Flowershow is fully hosted — no server, no build pipeline, no CI/CD. Point it at a Markdown folder and get a URL.

Full Obsidian syntax: wiki links, callouts, graph view, frontmatter

GFM, Mermaid, LaTeX: diagrams and math render natively

Themes via Tailwind & CSS variables: Tailwind out of the box. Customize without a build step

Supports HTML: use HTML, images etc.

~7k Obsidian plugin installs, 1,400 users, 1,100 sites. Free forever for personal use. Premium ($5/mo) adds custom domains, search, and password protection.

And it's open source: https://github.com/flowershow/flowershow

Check it out and let us know what you think and what we can improve

5

TerraShift: What does +2°C (or -20°C) look like on Earth? #

terrashift.io
2 comments · 7:38 PM · View on HN
I built an interactive 3D globe to visualize climate change. Drag a temperature slider from -40°C to +40°C, set a timeframe (10 to 10,000 years), and watch sea levels rise, ice sheets melt, vegetation shift, and coastlines flood... per-pixel from real elevation and satellite data.

Click anywhere on the globe to see projected snowfall changes for that location.

---

I'm an amateur weather nerd who spends a lot of time on caltopo.com and windy.com tracking snow/ice conditions. I wanted to build something fun to imagine where I could go ski during an ice age.

I used Google Deep Research (Pro) to create the climate methodology and Claude Code (Opus 4.6 - High) to create the site.

The code: https://github.com/travistruett/terrashift

The models aren't proper climate simulations; they're simplified approximations tuned for "does this look right?", but more nuanced than I expected them to be. The full methodology is documented here if anyone wants to poke holes in it.

https://github.com/travistruett/terrashift/blob/main/docs/al...

5

Revise.js – Building blocks for contenteditable-based text editors #

revise.js.org
0 comments · 4:02 PM · View on HN
Hi HN. I've been working on this since 2018. Revise.js is a set of building blocks for contenteditable: a <content-area> web component that gives you a textarea-like .value for any contenteditable element, an Edit data structure with compose/transform/invert for OT inspired by the Xi editor, and a Crank.js integration for declarative editable components. At 32.8 KB gzipped (including the rendering framework), it's 2–4x smaller than ProseMirror, Slate, or CodeMirror. The trade-off is that those ship complete editors; Revise is foundations you build on. All the demos on the homepage are live and editable: https://revise.js.org
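For readers unfamiliar with the Edit/OT model, here is a generic retain/insert/delete sketch in Python (my own illustration of the concept, not Revise's actual JavaScript API):

```python
def apply_edit(text: str, ops: list[tuple]) -> str:
    """Apply a retain/insert/delete edit to a string, in the style
    of OT edit data structures. Any unconsumed suffix is kept, as if
    by an implicit trailing retain."""
    out, i = [], 0
    for kind, arg in ops:
        if kind == "retain":
            out.append(text[i:i + arg])
            i += arg
        elif kind == "insert":
            out.append(arg)
        elif kind == "delete":
            i += arg
        else:
            raise ValueError(f"unknown op: {kind}")
    out.append(text[i:])
    return "".join(out)

print(apply_edit("Hello world", [("retain", 6), ("delete", 5), ("insert", "HN")]))
# → Hello HN
```

Compose/transform/invert then operate on pairs of such edits, which is what makes collaborative editing and undo tractable.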
5

FireClaw – Open-source proxy defending AI agents from prompt injection #

github.com
5 comments · 4:28 PM · View on HN
Hey HN,

We built FireClaw because we kept watching AI agents get owned by prompt injection through web content. The agent fetches a page, the page says "ignore previous instructions," and suddenly your agent is leaking data or running commands it shouldn't.

The existing solutions detect injection after the fact. We wanted to prevent it.

FireClaw is a security proxy that sits between your AI agent and the web. Every fetch passes through a 4-stage pipeline:

1. DNS blocklist check (URLhaus, PhishTank, community feed)
2. Structural sanitization (strip hidden CSS, zero-width Unicode, encoding tricks)
3. Isolated LLM summarization (hardened sub-process with no tools or memory)
4. Output scanning with canary tokens (detect if content bypassed summarization)

The key insight: even if Stage 3's LLM gets injected, it has no tools, no memory, and no access to your data. It can only return text — which still gets scanned in Stage 4. The attacker hits a dead end.
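The canary idea from Stage 4 can be sketched in a few lines (function names are invented; the real scanner also runs 200+ detection patterns):

```python
import secrets

def make_canary() -> str:
    """Plant a random token in the summarizer's instructions; a
    faithful summary should never echo it back."""
    return f"CANARY-{secrets.token_hex(8)}"

def scan_output(summary: str, canary: str) -> bool:
    """Return True if the output looks clean: if the canary leaked
    into the summary, the summarization step was bypassed."""
    return canary not in summary

canary = make_canary()
clean = "The page describes a weather API."
tainted = f"Ignore previous instructions. {canary}"
print(scan_output(clean, canary), scan_output(tainted, canary))  # True False
```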

Other design decisions:

- No bypass mode. The pipeline is fixed. If your agent gets compromised, it can't disable FireClaw.
- Community threat feed: instances anonymously share detection metadata (domain, severity, detection count) to build a shared blocklist. No page content is ever sent.
- Runs on a Raspberry Pi as a physical appliance, with an OLED display that shows real-time stats and lights up with animated flames when it catches a threat.

We searched the literature and open source extensively — no one else is doing proxy-based defense for agent prompt injection. Detection exists, sandboxing exists, but an inline proxy that sanitizes before content reaches the agent's context? We couldn't find it.

200+ detection patterns, JSONL audit logging, domain trust tiers, rate limiting, and cost controls. AGPLv3 licensed.

Website: https://fireclaw.app

Would love feedback from anyone working on AI agent security. What are we missing? What attack vectors should we add to the pattern database?

4

Sulcus Reactive AI Memory #

sulcus.dforge.ca
0 comments · 7:39 PM · View on HN
Hi HN,

Sulcus moves AI memory from a passive database (search only) to an active operating system (automated management).

The core shift: Current memory (vector DBs) is static. Sulcus treats memory like a virtual memory management unit (VMMU) for LLMs, using "thermodynamic" properties to automate what the agent remembers or forgets.

Key features:

Reactive Triggers: Instead of the agent manually searching, the memory system "talks back" based on rules (e.g., auto-pinning preferences, notifying the agent when a memory is about to "decay").

Thermodynamic Decay: Memories have "heat" (relevance) and "half-lives." Frequent recall reinforces them; neglect leads to deletion or archival.

Token Efficiency: Claims a 90% reduction in token burn by using intelligent paging—only feeding the LLM what is currently "hot."

The Tech: Built in Rust with PostgreSQL; runs as an MCP (Model Context Protocol) sidecar.
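A toy model of the heat/half-life mechanics described above (class name, thresholds, and boost values are invented, not Sulcus's actual parameters):

```python
class Memory:
    """Relevance ("heat") decays exponentially with a half-life;
    each recall re-heats the memory, neglect lets it go cold."""

    def __init__(self, heat: float = 1.0, half_life_h: float = 24.0):
        self.heat = heat
        self.half_life_h = half_life_h

    def tick(self, hours: float) -> None:
        self.heat *= 0.5 ** (hours / self.half_life_h)

    def recall(self, boost: float = 0.5) -> None:
        self.heat = min(1.0, self.heat + boost)

    def is_hot(self, threshold: float = 0.1) -> bool:
        """Only "hot" memories would be paged into the LLM's context."""
        return self.heat >= threshold

m = Memory()
m.tick(48)                          # two half-lives without recall
print(round(m.heat, 2), m.is_hot())  # 0.25 True
m.tick(48)                          # neglect continues; memory goes cold
print(round(m.heat, 4), m.is_hot())  # 0.0625 False
```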

https://sulcus.dforge.ca/membench

4

Touchenv – store ENV master keys in macOS keychain #

github.com
2 comments · 3:39 PM · View on HN
Hey HN

I'm used to storing my secrets Rails 8-style in so-called encrypted credentials, committed to git.

The problem became: where to store the RAILS_MASTER_KEY securely?

Many people use 1password CLI, which can pull the keys out, but I didn't want to start using 1password.

Touchenv is a quick repo I spun up, which works surprisingly well.

e.g. deploying from localhost:

  - `pnpm stagedeploy`
  - This starts `touchenv exec .env -- kamal deploy`.
  - A Touch ID prompt comes up; I confirm it with my fingerprint.
  - The deploy runs.
My next step is to make a similar thing for my CI, or just use KMS from AWS. I'll look into that soon.

Any feedback is appreciated.

4

GitGlimpse – GitHub Action that generates UI/UX demos for your PRs #

github.com
0 comments · 9:50 PM · View on HN
I got tired of having to pull, build, and manually QA the gazillion PRs that Claude creates for me, so I built this tool to solve some of that pain.

GitGlimpse is an open-source GitHub Action that acts as a visual reviewer. It looks at the diff, generates a visual demo, and posts it as a GIF directly on your PR.

Current status - early beta:

- Optimized for single-entrypoint repos
- Best for small/medium-sized projects

Would love to hear your thoughts/feedback/comments!

3

Updated version of my interactive Middle-Earth map #

github.com
0 comments · 4:21 PM · View on HN
Hi again HN, just sharing an updated version of a Middle-Earth map I started last year.

This is an interactive map, dependency-free at runtime, developed with custom elements in JavaScript.

The latest update replaced the big `.svg` (which made navigating the map slow and inefficient) with a `.jpg` tile system handling 7 zoom levels.

Everything is now smoother, and points of interest are no longer distorted when zooming.

3

Free library of 2k martial arts books – read in the browser #

fightencyclopedia.com
0 comments · 4:18 AM · View on HN
We aggregate freely available martial arts books from Internet Archive and Open Library — all public domain or openly licensed. Organized by discipline (boxing, judo, BJJ, karate, etc.) with 17 categories. You can read everything directly in the browser. It's part of Fight Encyclopedia, a project to build the world's first complete taxonomy of fighting techniques.
3

SkeptAI – adversarial reasoning agent that challenges LLM outputs #

2 comments · 8:05 PM · View on HN
I built this after Claude cited an API feature that doesn't exist in a work analysis I almost forwarded to a client. CRIT (the custom framework behind it) runs four structured adversarial passes on any LLM output and delivers a scored verdict, with cross-model critique (Claude output gets challenged by GPT-4o, and vice versa) to prevent self-referential bias. Free playground, open-source framework. Think of it as a "digital devil's advocate". Check it out and let me know what you think. skeptai.dev | github.com/datonpope/skeptai
3

AgentMarket – API marketplace where AI agents buy and sell capabilities #

agentmkt.dev
0 comments · 9:27 PM · View on HN
Hey HN,

I built AgentMarket (https://agentmkt.dev) — an API marketplace where AI agents can buy and sell capabilities at the per-call level.

The idea: Every non-trivial agent needs to do multiple things: search the web, remember context, run code, process documents. Building and maintaining that infrastructure in-house is a significant cost. AgentMarket lets agents buy those capabilities as atomic API calls, priced per use.

What's live today:

- Memory store (read/write): $0.0002–$0.0005/call
- Web search: $0.002/call
- URL scraper: $0.005/call
- Python executor (sandboxed): $0.01/call
- LLM text generation (Haiku default, Sonnet optional): $0.10/call
- Document processor (summarise/extract/QA): $0.15/call

How it works:

1. Register with POST /agents → get an API key + free credits.
2. Call any service with your key in the x-agent-key header.
3. Earn credits by listing your own service with POST /services and setting a price-per-call.

Everything is plain HTTP + JSON. No SDKs required. Refunds are automatic on execution failure.

  import requests
  r = requests.post(
      "https://agentmkt.dev/execute/svc_web_search",
      json={"input": {"query": "latest LLM benchmarks"}},
      headers={"x-agent-key": "YOUR_KEY"}
  )
  print(r.json()["output"]["results"])
What I'm curious about:

1. Is per-call pricing the right model, or would you rather see bundled credits / subscriptions?
2. What capabilities would you actually pay for that aren't listed yet?
3. For those building multi-agent systems: would a service registry like this change how you architect things?

Full API docs at https://agentmkt.dev/docs

2

CodeLedger – deterministic context and guardrails for AI #

codeledger.dev
0 comments · 11:22 PM · View on HN
We’ve been working on a tool called CodeLedger to solve a problem we kept seeing with AI coding agents (Claude Code, Cursor, Codex):

They’re powerful, but on real codebases they:

- read too much irrelevant code
- edit outside the intended scope
- get stuck in loops (fix → test → fail)
- drift away from the task
- introduce architectural issues that linters don’t catch

The root issue isn’t the model. It’s:

- poor context selection
- lack of execution guardrails
- no visibility at team/org level

---

What CodeLedger does:

It sits between the developer and the agent and:

1) Gives the agent the right files first
2) Keeps the agent inside the task scope
3) Validates output against architecture + constraints

It works deterministically (no embeddings, no cloud, fully local).

---

Example:

Instead of an agent scanning 100–500 files, CodeLedger narrows it down to ~10–25 relevant files before the first edit.

---

What we’re seeing so far:

- ~40% faster task completion
- ~50% fewer iterations
- significant reduction in token usage

---

Works with: Claude Code, Cursor, Codex, Gemini CLI

---

Repo + setup: https://github.com/codeledgerECF/codeledger

Quick start:

  npm install -g @codeledger/cli
  cd your-project
  codeledger init
  codeledger activate --task "Fix null handling in user service"

---

Would love feedback from folks using AI coding tools on larger codebases.

Especially curious:

- where agents break down for you today
- whether context selection or guardrails are the bigger issue
- what other issues you're seeing

2

A club for anyone with a symmetric DNS name #

zq.suns.bz
0 comments · 11:40 AM · View on HN
For some reason I find symmetrical DNS names fun, so I made the Society for Universal Name Symmetry.

“Symmetrical” could mean several things. Maybe a single name that is a palindrome, like if you owned elpmaxe.www.example. Maybe a pair of names that have reversed components, like if you owned example.institute and institute.example. Or maybe URLs that can be cut in half and flipped 180°, like this one. (Supported symmetries are listed on the site.)
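The first two symmetry checks can be sketched in a few lines of Python (my own illustration, not the site's code):

```python
def _labels(name: str) -> list[str]:
    """Normalize a DNS name: drop any trailing dot, lowercase, split."""
    return name.rstrip(".").lower().split(".")

def is_palindrome(name: str) -> bool:
    """Whole-name palindrome, e.g. elpmaxe.www.example."""
    s = ".".join(_labels(name))
    return s == s[::-1]

def is_reversed_pair(a: str, b: str) -> bool:
    """Label-reversed pair, e.g. example.institute / institute.example."""
    return _labels(a) == _labels(b)[::-1]

print(is_palindrome("elpmaxe.www.example"))                        # True
print(is_reversed_pair("example.institute", "institute.example"))  # True
```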

I have a few ideas for the future too, like: - accepting submissions over DNS instead of HTTP - A game, inspired by https://ipv4.games/ - A webring so that members can find each other

It is, right now, a lonely club: only my domain names are in it. It would be cool to have some more members, especially if you have ideas that fit the theme. All you need is control of a DNS zone!

2

On-device embedding and vector search for Apple Devices, built in Zig #

github.com
0 comments · 2:44 PM · View on HN
dve handles embedding, vector storage, and search all in one library with a simple interface.

This last year I was working on an app for Macs which required doing vector search. The local AI/ML libraries for Apple devices are lacking, which means the only real option is to use an API. I didn’t want to do that, so I started working on my own library. It worked well enough for my uses that I eventually decided to split off my library from the app which used it.

I reached for Zig for its C-level performance and portability, but with modern conveniences. I used CoreML for the ML runtime, as it is the most natural way to run ML models on Apple devices. Unfortunately, it's not particularly common to release CoreML versions of models. I had to manually convert ML models from PyTorch/ONNX to CoreML in order to make embedding seamless.

Getting started with the library is straightforward; there are a few Zig examples in the repo. It also has experimental Swift and C/C++ bindings. It only supports Macs for now, but support for other platforms is planned. Feedback is greatly appreciated!

2

Simpesys – A headless document build tool for digital gardens #

github.com
0 comments · 2:09 PM · View on HN
I built a file-based headless document build tool called Simpesys. I've been running my own digital garden for a long time. I'm into topics like digital gardens, knowledge bases, second brains, Zettelkasten, so I've tried many different tools over the years, but eventually decided that Markdown files on the local filesystem are the most sustainable way to do it.

These days there are plenty of file-based digital garden tools. But many of them treat the document system and the UI as one bundle, which always bothered me. I mostly write in NeoVim, so I wanted the document system itself, the editing environment, and the application (usually a static site) to be separate things. I didn't need an all-in-one. I needed a system where I could fine-tune the rules programmatically, and since sharing my digital garden with others matters too, the application UI had to be customizable as well. So I created Simpesys.

Simpesys focuses only on the document system, things like hierarchy and cross-references between documents, and keeps the user interface as a completely separate layer. The document system is the trickiest part of building a digital garden, and Simpesys provides that as a headless tool. The UI is handled through a plugin system, so users can customize it however they want. The users could even use another tool's UI on top of Simpesys if they wanted. (Respecting the original authors and their licenses, of course.)

It's not perfect yet, but I hope it inspires anyone with similar needs. As the first user of Simpesys, I keep improving it.

Simpesys's homepage is itself built with Simpesys: https://pedia.simpesys.deno.net/

2

Llamactl – Self-hosted LLM manager with OpenAI-compatible routing #

github.com
0 comments · 3:47 PM · View on HN
Llamactl is a unified management system for running local LLMs across llama.cpp, MLX, and vLLM backends, with a web dashboard and OpenAI-compatible API.

I originally built this because I got tired of constantly SSHing into my server to edit a config just to try out a new model. It's grown a lot since then.

What it does:

Web UI for creating and managing LLM instances from your browser

Full llama.cpp model lifecycle - download from HuggingFace, create preset.ini configs with an in-browser editor, load/unload models via router mode

Automatic idle timeout, LRU eviction, and instance limits

llama.cpp, mlx_lm and vllm backends

OpenAI and Anthropic API compatible endpoints (backend-dependent)

Multi-node support for distributing instances across hosts

Inference API keys with per-instance access control
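The idle-timeout and LRU-eviction behavior described above can be sketched roughly like this. This is a toy model, not llamactl's actual internals; the class and method names are illustrative:

```python
import time

class InstancePool:
    """Toy sketch of instance limits, LRU eviction, and idle timeout
    for loaded model instances. Names here are illustrative."""

    def __init__(self, max_instances=2, idle_timeout=300.0):
        self.max_instances = max_instances
        self.idle_timeout = idle_timeout
        self.last_used = {}  # instance name -> last access time

    def touch(self, name, now=None):
        """Mark an instance as used; evict the least recently used
        instance whenever the pool is over its limit."""
        now = time.monotonic() if now is None else now
        self.last_used[name] = now
        while len(self.last_used) > self.max_instances:
            lru = min(self.last_used, key=self.last_used.get)
            del self.last_used[lru]  # "unload" the LRU instance
        return sorted(self.last_used)

    def sweep_idle(self, now=None):
        """Unload instances idle longer than the timeout."""
        now = time.monotonic() if now is None else now
        idle = [n for n, t in self.last_used.items()
                if now - t > self.idle_timeout]
        for n in idle:
            del self.last_used[n]
        return idle
```

A router built on this would load an instance on first request, `touch` it on every call, and run `sweep_idle` periodically.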

docs: https://llamactl.org/stable/

2

Android Native Reverse Tools #

neocanable.github.io
2 comments · 6:38 AM · View on HN
The final piece of the Android decompilation/static analysis puzzle is native reverse engineering. After completing the decompilation of the Java/dex parts, I decided to challenge myself with the decompilation and static analysis of the native parts, and I gave this native part a different name: rosemary.

It's under development, I've just finished creating the control flow and data flow.

2

Anduin – A fast cross platform Git diff viewer inspired by Magit #

github.com
0 comments · 1:50 AM · View on HN
I loved Magit when I used to use Emacs but I review a lot more code these days and I wanted something lightweight and fast where I can see changes happening in real time as the code is being generated by an agent.

So I built Anduin, with Iced and it has a bunch of keyboard shortcuts inspired by Magit that I have been using everyday.

If someone wants to try it, you can clone and cargo build.

2

Play indie horror games in the browser (no downloads) #

bedpage.com
1 comment · 2:06 PM · View on HN
I built a simple site to play indie horror games directly in the browser.

I recently came across "You Make This House a Home", a psychological horror visual novel with a really unique atmosphere. I liked it, but noticed that it’s not very accessible — you usually need to download it or go through extra steps.

So I made a version that you can play instantly in the browser, without downloading anything.

This is the first experiment, but I’m thinking of turning it into a small collection of browser-playable horror games, especially psychological horror and story-driven visual novels.

Would love feedback: - Is this something you’d actually use? - Any similar games worth including?

2

Lore – Local AI thought capture and recall that runs on your machine #

github.com
0 comments · 10:59 PM · View on HN
I built Lore because I kept losing knowledge — curl commands, quick notes from standups, and that one thing I told myself I'd remember.

It lives in the system tray, a global shortcut pops it up, you type naturally, and it stores or retrieves automatically. Everything runs locally via Ollama + LanceDB — no cloud, no API keys. It classifies your input (thought, question, todo, instruction) and uses a RAG pipeline to answer recall queries from your own stored context.

It's free and open source under the MIT license, and even though it's in a very early version, my friends and I have been using it for a while and can't live without it. Would love to hear what you think about it.

https://github.com/ErezShahaf/Lore
Stars would be appreciated :)

2

Mandelbrot.js – Fractal Explorer in WebGL with Quad-Trees #

github.com
0 comments · 1:07 PM · View on HN
Hi HN,

I built a JS/WebGL Mandelbrot explorer to see how far I could push rendering performance and smooth zooming purely in the browser.

A few technical highlights under the hood:

Quad-tree tile caching: Never recalculates the same pixels. It caches rendered tiles and actively garbage-collects off-screen data to keep memory usage low.

Progressive rendering: instant low-res previews during panning/zooming, refined to high resolution with up to 8x subsampling when you stop.

Deep zoom up to 10^14: double-precision emulation in WebGL, dynamic iteration scaling, and logarithmic color palettes to keep details sharp at extreme depths.
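The quad-tree tile caching idea can be sketched in a few lines. This is a simplified model of the approach (tiles keyed by zoom level and coordinates, fallback to a cached ancestor for instant previews, garbage collection of off-screen tiles); the real renderer's data structures will differ:

```python
class TileCache:
    """Illustrative quad-tree tile cache: tiles keyed by (zoom, x, y),
    each tile splitting into four children at the next zoom level."""

    def __init__(self):
        self.tiles = {}  # (zoom, x, y) -> rendered pixel data

    @staticmethod
    def children(zoom, x, y):
        return [(zoom + 1, 2 * x + dx, 2 * y + dy)
                for dy in (0, 1) for dx in (0, 1)]

    @staticmethod
    def parent(zoom, x, y):
        return (zoom - 1, x // 2, y // 2)

    def put(self, key, pixels):
        self.tiles[key] = pixels

    def get_or_fallback(self, key):
        """Return the tile key if cached, else the nearest cached
        ancestor (used as an instant low-res preview while the real
        tile renders)."""
        zoom, x, y = key
        while zoom >= 0:
            if (zoom, x, y) in self.tiles:
                return (zoom, x, y)
            zoom, x, y = zoom - 1, x // 2, y // 2
        return None

    def gc(self, visible):
        """Drop tiles no longer on screen to bound memory usage."""
        self.tiles = {k: v for k, v in self.tiles.items() if k in visible}
```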

It’s open source. I’d love any feedback on the WebGL implementation, or feel free to drop a link to any cool coordinates you find!

App: https://mandelbrot.musat.ai/ Repo: https://github.com/tiberiu02/mandelbrot-js

2

F0lkl0r3.dev – a searchable, interlinked map of computing history #

f0lkl0r3.dev
0 comments · 4:00 PM · View on HN
I love reading about the early days of computing, but finding the alpha in raw historical archives can be tough. I built f0lkl0r3.dev over the weekend to fix that.

It takes nearly 1,000 oral histories from the Computer History Museum and makes them explorable, searchable, interconnected, and multimodal. To build it, I used the Gemini APIs (via ai.dev) to process the massive volume of unstructured interview text, pulling out the timelines, machines, and people so they could be cross-referenced. The app itself was built with Antigravity, next steps will be to add images and videos.

You can search by specific mainframes, browse by era in the timeline, or just read the Apocrypha section for weird historical anecdotes. Enjoy the rabbit hole! I hope it distracts and inspires at least a few more people than me today. :)

1

Wombat, a Unix-style rwxd permissions for MCP tool calls #

github.com
0 comments · 8:48 PM · View on HN
I have been using Linux since 2012. When I started seeing agents delete production databases and push to main, I thought: why don't we have chmod for this? We should be able to have a proper permission system for every action an agent takes.

Every file on a Unix system has rwx permissions. Every process has a user. We have had that for decades. Yet agents in 2026 run with the same access level as the developer who runs them.

Wombat applies the Unix model to MCP tool calls. You declare rwxd permissions on resources in a manifest. The same push_files tool is allowed on feature branches and denied on main. It is a proxy that sits between Claude Code and your MCP servers: it checks permissions.json on every call and either forwards or denies it.

Zero ML, fully deterministic, audit log included, and a plugin system for community MCP servers.
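A minimal sketch of what an rwxd manifest check might look like. The manifest schema, resource naming, and flag letters below are illustrative assumptions, not Wombat's actual format; the point is the deterministic first-match-wins, deny-by-default check:

```python
from fnmatch import fnmatch

# Hypothetical manifest: resource pattern -> granted permission flags.
# 'r' read, 'w' write, 'x' execute, 'd' delete.
MANIFEST = {
    "git:refs/heads/main": "r",   # main branch: read-only
    "git:refs/heads/*": "rwx",    # feature branches: pushes allowed
    "db:production/*": "r",       # never write or delete in prod
}

def check(resource, needed):
    """Return True if the first matching rule grants every needed flag.
    Rules are matched in declaration order; no match means deny."""
    for pattern, flags in MANIFEST.items():
        if fnmatch(resource, pattern):
            return all(f in flags for f in needed)
    return False  # deny by default
```

With this, the same push_files call is forwarded on `git:refs/heads/feature-x` and denied on `git:refs/heads/main`.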

GitHub: https://github.com/usewombat/gateway npm: npx @usewombat/gateway --help

1

35B MoE LLM and other models locally on an old AMD crypto APU (BC250) #

github.com
0 comments · 8:49 PM · View on HN
Hi HN, I put together some info on repurposing the AMD BC-250, an APU (Zen 2 + RDNA 1.5 "Cyan Skillfish") originally made for blockchain appliances. They've been showing up on AliExpress for around $150.

Since it's an unusual hybrid chip (GFX1013), I ended up running the AI compute stack entirely through Vulkan (via Mesa/RADV). Out of the box, the Linux kernel's TTM memory manager caps allocations, causing OOMs on larger models. I found that passing ttm.pages_limit=4194304 to the kernel bypasses this and unlocks the full 16GB of unified memory (UMA) directly for the GPU.

With the memory unlocked, it makes for a fun piece of inference hardware. Currently it runs:

* Qwen3.5-35B-A3B MoE via Ollama at ~38 tok/s. (Interestingly, 27B dense models crash because the GPU lacks matrix cores, but the 35B MoE runs fine since only 3B parameters are active per token.)

* FLUX.2-klein-9B for local image generation.

I documented the driver workarounds, memory settings, and some benchmarks in the repo in case anyone else has one of these boards and wants to tinker with it (and not just repurpose it as a poor man's GabeCube ;)).
1

Vibecoding tool with Markdown docs, a browser UI and containerized YOLO #

0 comments · 8:49 PM · View on HN
Sharing here very much in the spirit of being embarrassed by what I'm showing. I'm more excited about the ideas than the implementation. I'm very curious to hear about others building similar things, or trying out these types of tools.

The ideas are:

1. Use markdown for everything, even communicating with agents (dialogs) and orchestration patterns.
2. Use the browser as the UI.
3. Go full YOLO, but every project lives in a Docker container where every single change generates an automatic git commit.

Honestly, I'm not sure who might be interested in it. I dream of this helping non-programmers build something locally while being able to peek through to see what the agent is doing. But then again, it requires opening the terminal and having Docker installed. That is a lot of friction. I'm thinking of building a cloud version.

If you made it this far, you can check it out here:

https://buildwithvibey.com

https://github.com/altocodenl/vibey

1

I scraped and organized 50k Shopify stores into a dataset #

1 comment · 12:59 AM · View on HN
I was building an outreach list for a SaaS product targeting ecommerce stores and ended up scraping and organizing a large dataset of Shopify stores.

The dataset includes 50,000+ Shopify stores with:

- store domains
- business names
- niche categories
- publicly available business emails

Originally this was just for internal use, but I cleaned it up and turned it into a structured CSV database. It might be useful for founders building tools for ecommerce, agencies doing outreach, or anyone researching Shopify stores. Curious to hear feedback from the HN community.

1

Yuzudraw – visual editor for ASCII diagrams with token-efficient DSL #

github.com
0 comments · 4:02 PM · View on HN
I make a lot of ASCII diagrams for my blog. While Claude can generate them, it only gets about 80% of the way there, and the last 20% of polish is painful to do with plaintext finagling.

Yuzudraw is a visual editor with a token-efficient DSL that bridges the gap (macOS native). Heavily inspired by Figma and Monodraw, which is excellent but closed source and lacks agent integration (AFAIK).

Would love feedback, especially on the DSL design.

1

Detecting LLM hallucinations in <1ms using hidden states (RTX3050, 4GB) #

1 comment · 2:01 PM · View on HN
GitHub: https://github.com/yubainu/sibainu-engine

TL;DR: I built a lightweight auditor that detects hallucinations by monitoring Transformer Hidden State Dynamics in real-time. It achieves 0.90+ ROC-AUC on Gemma/Llama-3.2/Mistral using a single RTX 3050 (4GB), with a core computation time of <1ms.

What it is

The Sibainu Engine is a pre-emptive auditing layer that identifies "latent trajectory collapse"—geometric turbulence in the vector transformations between transformer layers—before the token is even sampled. It requires no training and works with frozen weights.

The "15ms vs 1ms" Latency Reality

I prioritized "no-nonsense" performance reporting. In a local Python/FastAPI environment, the total response time is 15-25ms, but it's important to distinguish the components: Auditing Core (NumPy): < 1.0 ms. The actual vectorized math is near-instant. System Overhead: ~12.0 ms is spent on Pydantic validation and JSON-to-Array conversion.

The Bottom Line: The core logic is significantly faster than the LLM's token generation speed (typically 30-70ms), meaning the audit is theoretically "zero-overhead" if integrated directly into the C++/CUDA inference pipeline.

Key Metrics (Gemma-2B / HaluEval-QA)

ROC-AUC: 0.9176
Recall @ 5% False Signal Rate (FSR): 59.7% (it captures ~60% of hallucinations while flagging only 5% of factual truths)
Hardware: validated on a consumer-grade RTX 3050 (4GB) using 4-bit (NF4) quantization

How it works: Layer Dissonance

Instead of just looking at logit entropy, v6.4 monitors Layer Dissonance: the structural inconsistency between the middle and final layers. When a model hallucinates, the geometric stability between these layers exhibits a specific turbulence that is absent during factual recall.
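As a rough illustration of the idea (not the engine's actual scoring, which presumably involves more than a single similarity), a dissonance score between a middle layer's hidden state and the final layer's could be as simple as one minus their cosine similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def layer_dissonance(hidden_states, mid_layer, final_layer=-1):
    """1 - cosine similarity between a middle layer's hidden state and
    the final layer's. Higher values mean the representation shifted
    more sharply between layers, the kind of turbulence described above."""
    return 1.0 - cosine(hidden_states[mid_layer], hidden_states[final_layer])
```

Here `hidden_states` would be one vector per transformer layer for the current token; the threshold in the recovery agent (Score > 3.6510) suggests the real score is on a different scale than this toy version.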

Closed-Loop Recovery

I’ve included a recovery_agent_gemma.py that demonstrates Autonomous Safety Control. If the engine detects a physical neural anomaly (Score > 3.6510), it immediately aborts the session and triggers a re-generation using deterministic greedy search to stabilize the output.

1

Elia – A governed cognitive architecture (Phase 0 live) #

github.com
0 comments · 4:33 PM · View on HN
I'm not a developer – I'm a systems architect who spent several months designing a governed hybrid cognitive architecture called Elia.

The core idea: neural intelligence (LLMs) should be a capability, not an authority. Symbolic control always governs. Neural inference is optional, validated, and can be disabled gracefully.

Today I'm sharing Phase 0 – a minimal Python skeleton proving that coordination, state transitions, and audit trails work before any AI is introduced:

- SM_HUB: async message bus between modules
- EL_MEM: SQLite persistence and audit trail
- SM_SYN: explicit state machine (INIT → STABILIZING → INTERACTIVE)
- Neural processing: intentionally absent at this stage

The full architecture spec (1200+ lines) covers SLOs, lock models, degradation policies, circuit breakers, and operating cycles.

Repo: https://github.com/Jmc-arch/elia-governed-hybrid-architectur...

Looking for feedback on:
- Is the governance model viable for production systems?
- Biggest architectural blind spots?
- Best first domain to prototype: medical, monitoring, agents?

Happy to answer any questions.

1

Intent – Git records what changed, Intent records why #

0 comments · 5:09 PM · View on HN

I built Intent because every new AI coding session starts from zero. The agent doesn't know what problem was being solved, what was tried, or why a path was chosen.

Git tracks code changes. Commit messages help, but three things always fall through: goal continuity (which commits belong to which task?), decision rationale (why JWT over cookies?), and work state (is this half-done or finished?).

Intent adds a .intent/ directory to your repo: structured JSON metadata that sits alongside .git/. Two objects: intent (the goal) and snap (a step taken, with rationale).

  itt start "Migrate auth to JWT"
  itt snap "Add refresh token" -m "Token rotation not done yet — security priority"
  git commit -m "add refresh token"
  itt done

Next session, any agent runs itt inspect and gets back structured JSON: active intent, last snap, rationale, and a suggested next action. Ten seconds vs. minutes of re-explaining context.
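A minimal sketch of what the .intent/ metadata round-trip might look like. The file name, schema, and function names below are assumptions for illustration; the real tool's layout may differ:

```python
import json
from pathlib import Path

def write_state(repo, goal, step, rationale):
    """Write a hypothetical .intent/state.json holding the active
    intent and the latest snap."""
    d = Path(repo) / ".intent"
    d.mkdir(parents=True, exist_ok=True)
    state = {
        "intent": {"goal": goal, "status": "active"},
        "snap": {"step": step, "rationale": rationale},
    }
    (d / "state.json").write_text(json.dumps(state, indent=2))
    return state

def inspect(repo):
    """What an agent would read back at the start of the next session."""
    return json.loads((Path(repo) / ".intent" / "state.json").read_text())
```

Because it is plain JSON on disk next to .git/, any agent platform can read it without a vendor SDK.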

- Works with any agent platform (Claude Code, Cursor, Copilot, etc.)
- Plain JSON files, no vendor lock-in
- pip install git-intent

GitHub: https://github.com/dozybot001/Intent
1

Ccv – keep AI-generated files out of your repo #

1 comment · 5:10 PM · View on HN
I built a small CLI to manage AI-generated files across projects without committing them to the repo.

It stores files in a private Git-backed vault and symlinks them into project directories when needed.

So the files are:
- accessible locally
- version controlled
- not part of the public repo

I kept running into the same issue while using AI tools — lots of useful notes and prompts, but no clean way to keep them without polluting repos.

Key features:

- Single private vault (~/.ccv)
- Symlink-based linking per project
- Automatic .gitignore handling
- Works across multiple projects
- Optional watch + auto-sync
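The core symlink-plus-gitignore mechanic can be sketched like this. The function name is illustrative and the real CLI handles many more edge cases (existing links, watch mode, syncing the vault repo):

```python
from pathlib import Path

def link_from_vault(vault, project, name):
    """Keep the real file in the vault; give the project only a
    symlink, and ignore that symlink in the project's git repo."""
    src = Path(vault) / name
    dst = Path(project) / name
    dst.symlink_to(src)
    # keep the link itself out of the public repo
    with (Path(project) / ".gitignore").open("a") as f:
        f.write(name + "\n")
    return dst
```

The file stays version-controlled in the vault's own git history while never appearing in the project's.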

Blog Post : https://docs.ebuz.xyz/blog/posts/ai-files-messing-up-repos Repo: https://github.com/takielias/claude-code-vault

1

One, cross domain auto-researching knowledge graph Claude orchestrator #

github.com
0 comments · 5:23 PM · View on HN
Hi, so straight to the point: I've had the $20 Claude Code plan for a while, and before upgrading I kept thinking about a way to build an "infinite context system". I also work... A LOT. 22 hours a day or so?

So I experimented a lot with MCP and plugins, and settled on a system I call "One".

HDC vector embeddings (4096 dimensions, trigram + word encoding) are stored in SQLite and recalled by cosine similarity on context shifts.

Entity extraction builds a knowledge graph across sessions. Rules get learned from repeated preferences. That's the core.
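A minimal sketch of the trigram + word HDC encoding and cosine recall described above. This is illustrative only (the real system's encoder and storage surely differ); it shows the hyperdimensional-computing idea of bundling random ±1 hypervectors per token:

```python
import math
import random

DIM = 4096  # matches the 4096 dimensions mentioned above

def token_vector(token):
    """Deterministic pseudo-random ±1 hypervector for a token
    (string seeds to random.Random are stable across runs)."""
    rng = random.Random(token)
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def encode(text):
    """Bundle (elementwise sum) hypervectors of all character
    trigrams plus whole words: a crude trigram + word encoding."""
    tokens = [text[i:i + 3] for i in range(len(text) - 2)] + text.split()
    vec = [0] * DIM
    for t in tokens:
        for i, x in enumerate(token_vector(t)):
            vec[i] += x
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```

Recall is then just a nearest-neighbor lookup by cosine over the vectors stored in SQLite.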

The part that scared me was the autonomous research loop. There's a mode where Claude researches a topic, then a dialectic engine challenges every finding: thesis/antithesis/synthesis.

A contradiction miner looks for conflicts, and a synthesis engine searches for patterns across domains. Weak findings get pruned, and it can iterate indefinitely.

It was running on my 15m Kalshi trading algorithm (which also happens to use HDC + Tsetlin machines, haha!) and it produced 420 research findings (lol) with cited academic sources, mined 472 contradictions, deprecated almost 600 weak claims through adversarial challenge, and discovered 21 patterns across domains I never directed it to explore.

The system connected Python's lazy import pattern to RNA transcription: both are deferred materialization, where dormant capabilities are suppressed until an activation context arrives.

It formalized why certain bug classes are invisible to quality checks: the query falls outside the space where results exist, rather than returning bad results.

It also has a small verification engine that AST-parses every code edit, checks SQL against the live schema, and maps every function call and file dependency in the codebase.

Test it out, talk to me, ask things! It's my first time making a repository public!

(Currently using the system on the system as I'm typing this out, to make sure it autonomously upgrades itself before I go to sleep and you all decide my project sucks.)

1

Railguard – A safer –dangerously-skip-permissions for Claude Code #

github.com
1 comment · 5:28 PM · View on HN
--dangerously-skip-permissions is all-or-nothing: either you approve every tool call by hand, or Claude runs with zero restrictions. I wanted a middle ground.

Railguard hooks into Claude Code, intercepts every tool call, and decides in under 2ms: allow, block, or ask.

  cargo install railguard                                                                                                                                                                                                                         
  railguard install

It comes with sane configs preinstalled. You keep using Claude exactly as before. 99% of commands flow through instantly. You only see Railguard when it matters.

What it actually does beyond pattern matching and sandboxing:

- OS-level sandbox (sandbox-exec on macOS, bwrap on Linux). Agents can base64-encode commands, write helper scripts, or chain pipes to evade regex rules. The sandbox resolves what actually executes at the kernel level.

- Context-aware decisions. rm dist/bundle.js inside your project is fine. rm ~/.bashrc is not. Same command, different decision.

- Memory safety. Claude Code has persistent memory across sessions, a real attack surface. Railguard classifies every memory write, blocks secrets from being exfiltrated, flags behavioral injection, and detects tampering between sessions.

- Recovery. Every file write is snapshotted. Roll back one edit, N edits, or an entire session.

It won't close every attack vector, but it covers the gap between "no protection" and "approve everything manually" without changing your workflow.
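The context-aware decision idea (same command, different verdict depending on the target path) can be sketched like this. This is a toy rule, not Railguard's actual policy engine; the function name and categories are illustrative:

```python
from pathlib import Path

def decide_rm(target, project_root, home):
    """Toy context-aware rule for an rm tool call: deletes inside the
    project are allowed, dotfiles in $HOME are blocked, and anything
    else falls back to asking the user."""
    target = Path(target).resolve()
    if target.is_relative_to(Path(project_root).resolve()):
        return "allow"  # e.g. rm dist/bundle.js in your repo
    if target.is_relative_to(Path(home).resolve()) and target.name.startswith("."):
        return "block"  # e.g. rm ~/.bashrc
    return "ask"
```

Resolving paths first matters: a sandboxed agent can otherwise dress `~/.bashrc` up as `proj/../../.bashrc`.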

Rust, MIT, single YAML config file. Happy to talk architecture or trade-offs.

1

Starting Five – NBA Lineup Building Challenges #

draftdawg.app
0 comments · 6:16 PM · View on HN
If you enjoy fantasy sports or building random NBA lineups with your friends, you might love our new daily game on Draft Dawg!

In Starting Five, your goal is to build a starting lineup whose stats are as close to the daily target as possible. Each daily game comes with a new goal and a theme that randomizes which players you can draft from.

Today's challenge: Can you draft a starting lineup whose best season PPG (points per game) adds up to 75? Some challenges are harder than others - let me know what your score is! I'd love to hear any feedback or suggestions you may have.

1

StackStats – Analytics tool for Substack writers, runs 100% locally #

0 comments · 6:19 PM · View on HN
Hi HN,

I've been writing on Substack for about 5 years with close to 3,000 subscribers. The platform has grown a lot but the analytics have barely changed, and there's still no API support.

I like to explore data, and Substack lets you export some analytics as CSVs from the creator dashboard. I've been doing that for a while, mostly with local scripts, but realized this could be useful to other writers as a simple local tool.

So I built StackStats. A better analytics app for Substack Writers.

You export your CSVs, point it at the folder, and everything populates. Growth sources, engagement scoring, cohort retention, bot-filtered open rates (GoogleImageProxy alone inflated my opens by ~51%), best day/hour to post, and more.
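The bot-filtered open rate boils down to dropping proxy opens by user agent before dividing. A sketch under assumptions: GoogleImageProxy is the marker called out above, the rest of the list and the CSV field shape are illustrative:

```python
# Illustrative proxy/prefetch markers; GoogleImageProxy is the one
# named in the post, the rest is an assumed extension of the list.
BOT_MARKERS = ("GoogleImageProxy", "YahooMailProxy")

def open_rate(open_user_agents, delivered):
    """Open rate after dropping opens whose user-agent string looks
    like a prefetch proxy rather than a human reader."""
    human = [ua for ua in open_user_agents
             if not any(m in ua for m in BOT_MARKERS)]
    return len(human) / delivered if delivered else 0.0
```

Comparing this against the raw `len(opens) / delivered` figure is how you'd see inflation like the ~51% mentioned above.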

There's an optional AI feature for deeper insights. Totally optional, but why not go with the hype :D. You can use any provider (Claude, GPT-4, Gemini, Groq, OpenRouter) or run Ollama fully offline. BYOK, no lock-in.

Built with Electron and vanilla JS. No cloud, no account, no telemetry. All data stays on your machine.

You can see a live demo with my actual newsletter data here: https://demo.stackstats.app

Happy to answer questions about the app!

1

FC-Eval – CLI to Benchmark Local or Cloud LLMs on Function Calling #

github.com
0 comments · 2:02 PM · View on HN
I built FC-Eval to have a repeatable way to evaluate how well different LLMs handle function calling before using them in agent workflows.

It runs models through 30 test cases covering single-turn, multi-turn, and agentic scenarios, modeled loosely after the Berkeley Function Calling Leaderboard methodology.

Validation uses AST matching rather than string comparison to avoid false positives from formatting variations.
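AST-based matching of a model's function call against the expected one might look roughly like this. It's a sketch of the idea only (the repo's validator may normalize differently); comparing parsed structures ignores whitespace and keyword-argument order, which string comparison would flag as mismatches:

```python
import ast

def normalize_call(src):
    """Parse a single function-call expression into a structural form:
    function expression, positional args, and keywords sorted by name."""
    call = ast.parse(src, mode="eval").body
    if not isinstance(call, ast.Call):
        raise ValueError("expected a function call")
    return (
        ast.dump(call.func),
        [ast.dump(a) for a in call.args],
        sorted((kw.arg, ast.dump(kw.value)) for kw in call.keywords),
    )

def calls_match(expected, actual):
    """True if both snippets denote the same call, ignoring formatting
    and keyword-argument order."""
    return normalize_call(expected) == normalize_call(actual)
```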

Supports two backends: OpenRouter for cloud models (GPT-5.2, Claude, Qwen 3.5, Mistral, etc.) and Ollama for local models with no API key needed.

It runs best-of-N trials, giving you a reliability score alongside raw accuracy.

Results export to JSON, TXT, CSV, or Markdown.

Quick start commands:

Via OpenRouter: `fc-eval --provider openrouter --models openai/gpt-5.2 anthropic/claude-sonnet-4.6`

Via Ollama: `fc-eval --provider ollama --models llama3.2`

GitHub repo: https://github.com/gauravvij/function-calling-cli

Happy to answer questions, especially around the test case design or validation logic.

1

PUNK – Remote control for local Claude Code that just works #

punkcode.rocks
1 comment · 3:47 PM · View on HN
My laptop has accumulated a lot of state.

It's logged into everything. GitHub CLI, Slack, email. Repos are already cloned. Every day it gets richer just by being my laptop.

I can't really move this into the cloud without losing something. So I stopped trying. I just wanted to reach it from my phone.

At first it was just for me. Before TestFlight I was installing it by plugging people's phones directly into my laptop. It spread around South Park Commons and to engineers at Bay Area companies who were already using Claude Code. Now about 50 people use it.

A few ways I actually use it:

Sometimes I wake up at 2AM with something clicking — a bug, a feature idea, a thought I don't want to lose. I whisper to PUNK and go back to sleep.

In the morning I close the laptop, throw it in my bag, and walk to the office. About 10-15 minutes. The agents are still running the whole way. I'm on my phone checking progress, redirecting things, starting something new. By the time I walk in, they've been working the whole time.

I go downstairs for coffee, get a permission request on my lock screen. One tap. By the time I'm back up, things have moved.

I'm building PUNK with PUNK. I check logs from my phone, push fixes, deploy. For releases I have an App Store Connect skill — it builds, submits to TestFlight, adds all groups automatically.

I also spin up Docker containers and connect them as separate devices. I mount my skills directory from my Mac into each one so they already have all my configured tools. From PUNK they just feel like extra computers I can talk to.

One thing I didn't expect: switching to phone changes how I use Claude Code. It stops being just a coding tool and becomes more like a personal assistant. I ask it to go through my Slack, catch me up on messages, draft replies, manage my calendar, set reminders.

What it does:

- Multiple parallel sessions: monitor and switch from one view
- Permission management across all sessions: approve any pending tool call without leaving your current chat
- Resume any session or start a new one
- Open or create projects with natural language
- Lock screen permission approvals via Live Activities
- AI voice input
- Claude Code can use anything already on your laptop: Slack, iMessage, Reminders, any service already logged in
- Skills, MCP servers, slash commands, all from your phone
- Execution modes: Plan, Ask, Auto, --dangerously-skip-permissions

https://punkcode.rocks

TestFlight: https://testflight.apple.com/join/cae7Xe8w

1

Helpmarq – Submit any project, get structured feedback from real users #

helpmarq.com
0 comments · 8:45 PM · View on HN
I built this after noticing a pattern: founders and developers share things they've built, get back vague or overly positive responses, and have no real signal on what to fix.

Helpmarq lets you submit any project (landing page, app, side project, pitch deck) and get structured feedback from real users. The review process is guided by specific questions (clarity, value prop, UX, conversion blockers) so reviewers can't just say "looks good."

The core insight: feedback quality is a framework problem, not a motivation problem. Most people want to give useful feedback; they just don't know what to look for.

Still early. Would love brutal, honest feedback from this community, on the product and on the idea itself. helpmarq.com