Amla Sandbox – WASM bash shell sandbox for AI agents
Agents get a bash-like shell and can only call the tools you provide, with constraints you define. No Docker, no subprocess, no SaaS — just `pip install amla-sandbox`.
I’m building ÆTHRA — a programming language designed specifically for composing music and emotional soundscapes.
Instead of focusing on general-purpose programming, ÆTHRA is a pure DSL where code directly represents musical intent: tempo, mood, chords, progression, dynamics, and instruments.
The goal is to make music composition feel closer to writing a story or emotion, rather than manipulating low-level audio APIs.
Key ideas:

- Text-based music composition
- Chords and progressions as first-class concepts
- Time, tempo, and structure handled by the language
- Designed for ambient, cinematic, emotional, and minimal music
- Interpreter written in C# (.NET)
Example ÆTHRA code (simplified):
tempo 60
instrument guitar

chord Am for 4
chord F for 4
chord C for 4
chord G for 4
This generates a slow, melancholic progression suitable for ambient or cinematic scenes.
ÆTHRA currently:

- Generates WAV audio
- Supports notes, chords, tempo, duration, velocity
- Uses a simple interpreter (no external DAWs or MIDI tools)
- Is intentionally minimal and readable
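For readers curious how a command syntax like this can be executed: a line-oriented parser that turns chords and beat counts into timed events is enough to get started. A rough Python sketch of the idea (illustrative only; the real interpreter is written in C# and renders WAV audio):

```python
# Toy parser for an AETHRA-style score (my own illustration, not the
# actual interpreter). Converts beats to seconds using the current tempo.
def parse_score(source):
    tempo = 120          # beats per minute, default until a tempo line appears
    instrument = None
    events = []          # (instrument, chord, duration in seconds)
    for line in source.strip().splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "tempo":
            tempo = int(parts[1])
        elif parts[0] == "instrument":
            instrument = parts[1]
        elif parts[0] == "chord":
            # "chord Am for 4" -> hold the chord for 4 beats
            name, beats = parts[1], int(parts[3])
            seconds = beats * 60.0 / tempo
            events.append((instrument, name, seconds))
    return events

score = """
tempo 60
instrument guitar
chord Am for 4
chord F for 4
"""
print(parse_score(score))
# At 60 BPM, 4 beats last exactly 4 seconds per chord
```

A synthesis stage would then walk the event list and render each chord's notes into the output buffer.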
What it is NOT:

- Not a DAW replacement
- Not MIDI-focused
Why I made it: I wanted a language where music is the primary output — not an afterthought. Something between code, emotion, and sound design.
The project is open-source and early-stage (v0.8). I’m mainly looking for:

- Feedback on the language design
- Ideas for musical features worth adding
- Thoughts from people into PL design, audio, or generative art
Repo: <https://github.com/TanmayCzax/AETHRA>
Thanks for reading — happy to answer questions or discuss ideas.
I built TalkBits because most language apps focus on vocabulary or exercises, but not actual conversation. The hard part of learning a language is speaking naturally under pressure.
TalkBits lets you have real-time spoken conversations with an AI that acts like a native speaker. You can choose different scenarios (travel, daily life, work, etc.), speak naturally, and the AI responds with natural speech back.
The goal is to make it feel like talking to a real person rather than doing lessons.
On the technical side, it chains real-time speech input, transcription, LLM responses, and streaming TTS, keeping latency low so the conversation feels fluid.
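I can't speak to TalkBits' internals, but the usual trick for keeping such a pipeline fluid is to stream every stage into the next instead of waiting for complete responses. A toy asyncio sketch of that shape (all stage functions are hypothetical stand-ins, not real APIs):

```python
import asyncio

# Each stage is an async generator, so audio playback can begin
# before the LLM has even finished its sentence.
async def transcribe(audio_chunks):
    async for chunk in audio_chunks:          # speech -> partial text
        yield f"text({chunk})"

async def respond(text_stream):
    async for text in text_stream:            # text -> LLM reply tokens
        yield f"reply_to({text})"

async def synthesize(token_stream):
    async for token in token_stream:          # tokens -> audio frames
        yield f"audio({token})"

async def mic():
    for chunk in ["hello", "how are you"]:    # stand-in for microphone input
        yield chunk

async def main():
    frames = []
    async for frame in synthesize(respond(transcribe(mic()))):
        frames.append(frame)                  # a real app would play this now
    return frames

frames = asyncio.run(main())
print(frames)
```

The point of the structure is that each chunk flows through all three stages as soon as it exists, so perceived latency is bounded by the first chunk, not the whole utterance.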
I’m especially interested in feedback on:

- Does it feel natural?
- Where does the conversation break immersion?
- What would make you use this regularly?
Happy to answer technical questions too.
Thanks
Key language features:

- Uses aliases, not pointers, so it's memory-safe
- Arrays are N-dimensional and resizable
- Runs scripts or its own 'shell'
- Error trapping
- Methods, inheritance, etc.
- Customizable syntax
stripe-no-webhooks is an open-source library that syncs your Stripe payments data to your own Postgres database: <https://github.com/pretzelai/stripe-no-webhooks>
Here's a demo video: <https://youtu.be/cyEgW7wElcs>
It creates a webhook endpoint in your Stripe account to forward webhooks to your backend where a webhook listener stores all the data into a new stripe.* schema. You define your plans in TypeScript, run a sync command, and the library takes care of creating Stripe products and prices, handling webhooks, and keeping your database in sync. We also let you backfill your Stripe data for existing accounts.
It supports pre-paid usage credits, account wallets and usage-based billing. It also lets you generate a pricing table component that you can customize. You can access the user information using the simple API the library provides:
billing.subscriptions.get({ userId });
billing.credits.consume({ userId, key: "api_calls", amount: 1 });
billing.usage.record({ userId, key: "ai_model_tokens_input", amount: 4726 });
Effectively, you don't have to deal with either the Stripe dashboard or the Stripe API/SDK any more if you don't want to. The library gives you a nice abstraction on top of Stripe that should cover most subscription payment use-cases.

Let's see how it works with a quick example. Say you have a billing plan like Cursor (the IDE) used to have: $20/mo, you get 500 API completions + 2000 tab completions, you can buy additional API credits, and any additional usage is billed as overage.
You define your plan in TypeScript:
{
  name: "Pro",
  description: "Cursor Pro plan",
  price: [{ amount: 2000, currency: "usd", interval: "month" }],
  features: {
    api_completion: {
      pricePerCredit: 1, // 1 cent per unit
      trackUsage: true, // Enable usage billing
      credits: { allocation: 500 },
      displayName: "API Completions",
    },
    tab_completion: {
      credits: { allocation: 2000 },
      displayName: "Tab Completions",
    },
  },
}
Then on the CLI, you run the `init` command, which creates the DB tables plus some API handlers. Run `sync` to sync the plans to your Stripe account and create a webhook endpoint. When a subscription is created, the library automatically grants the 500 API completion credits and 2000 tab completion credits to the user. Renewals and up/downgrades are handled sanely.

Consuming a credit looks like this:
await billing.credits.consume({
  userId: user.id,
  key: "api_completion",
  amount: 1,
});
And if you want to allow manual top-ups by the user:

await billing.credits.topUp({
  userId: user.id,
  key: "api_completion",
  amount: 500, // buy 500 credits, charges $5.00
});
Similarly, we have APIs for wallets and usage.

This would be a lot of work to implement yourself on top of Stripe. You need to keep track of all of these entitlements in your own DB and deal with renewals, expiry, ad-hoc grants, etc. It's definitely doable, especially with AI coding, but you'll probably end up building something fragile and hard to maintain.
This is just a high-level overview of what the library is capable of. It also supports seat-level credits, monetary wallets (with micro-cent precision), auto top-ups, robust failure recovery, tax collection, invoices, and an out-of-the-box pricing table.
I vibe-coded a little toy app for testing: <https://snw-test.vercel.app>
There's no validation so feel free to sign up with a dummy email, then subscribe to a plan with a test card: 4242 4242 4242 4242, any future expiry, any 3-digit CVV.
Screenshot: <https://imgur.com/a/demo-screenshot-Rh6Ucqx>
Feel free to try it out! If you end up using this library, please report any bugs on the repo. If you're having trouble / want to chat, I'm happy to help - my contact is in my HN profile.
A lot of people told me the same thing: “I just want to install it and have it work.”
So I built Julie Zero.
Julie Zero is a premium tier of Julie that works straight out of the box. No API keys, no setup. Install it and start using it immediately.
What Julie Zero does:
- Sees your screen and understands what you’re looking at in real time
- Helps across apps by clicking, typing, navigating, and automating workflows
- Uses on-screen context, not just text prompts, so responses are actually relevant
- Fast and low-latency, so it feels usable during real work
- Built with a local-first mindset and tuned for everyday workflows
The open-source version is still there and always will be. Julie Zero is just about removing friction for people who don’t want to configure anything.
Pricing: Julie Zero is $9.99/month, which is a fraction of the price of similar screen-aware tools like Cluely’s premium tier.
Giveaway: I’m giving 3 months of unlimited Julie Zero access to 10 people.
To get it:

- Star the open-source Julie repo on my GitHub: https://github.com/Luthiraa/julie (make sure a social account is connected to your GitHub so I can reach out!)
- I’ll send you a one-time code for 3-month premium access
I’m building this very hands-on and iterating fast. Would love feedback from people using it in real workflows. Happy to answer questions.
I spent some time over my winter break building this Apple II emulator. I had previously done a C64 one, but I stumbled on the Apple II and decided it would be a fun project.

A good chunk of the time went into the Disk II implementation -- there is a lot to it, since the software had pretty direct control over what controller firmware would normally handle. Dealing with copy-protection schemes and all the timing around them was a bit challenging.

There is a WASM version you can try on the web; please check it out!
You can explore:

- Rising/falling trends over time
- Individual topics with sample comments + links to original threads
- Deep-dive reports (Bitcoin, Nvidia, self-driving, etc.)
Built in about a week. Feedback welcome!
The Tech Stack (How it works):
QDMA (Quantum Dream Memory Architecture): instead of a flat vector DB, it uses a hierarchical projection engine. It separates "Hot" (Recall) from "Cold" (Storage) memory, allowing for effectively infinite context window management via compression.
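For readers unfamiliar with hot/cold tiering, the general idea can be sketched in plain Python with zlib. This is my own illustration, not the QDMA code: recently used fragments stay uncompressed for fast recall, while older ones are compressed until accessed again.

```python
import zlib

class TieredMemory:
    """Toy hot/cold memory store (illustrative sketch only)."""
    def __init__(self, hot_capacity=2):
        self.hot = {}                 # key -> plain text, fast recall
        self.cold = {}                # key -> zlib-compressed bytes, cheap storage
        self.hot_capacity = hot_capacity

    def store(self, key, text):
        self.hot[key] = text
        if len(self.hot) > self.hot_capacity:
            # Demote the oldest hot entry to compressed cold storage
            old_key = next(iter(self.hot))
            self.cold[old_key] = zlib.compress(self.hot.pop(old_key).encode())

    def recall(self, key):
        if key in self.hot:
            return self.hot[key]
        if key in self.cold:
            # Promote back to hot on access
            text = zlib.decompress(self.cold[key]).decode()
            self.store(key, text)
            del self.cold[key]
            return text
        return None

mem = TieredMemory()
for i in range(4):
    mem.store(f"m{i}", f"memory fragment {i}")
print(mem.recall("m0"))   # transparently decompressed from cold storage
```

The "effectively infinite" framing comes from the fact that cold storage grows on disk-cheap compressed bytes while the hot working set stays bounded.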
CSNP (Context Switching Neural Protocol) - The Hallucination Killer: This is the most important part. Every memory fragment is hashed into a Merkle Chain. When the LLM retrieves context, the system cryptographically verifies the retrieval against the immutable ledger.
If the hash doesn't match the chain: The retrieval is rejected.
Result: The AI effectively cannot "make things up" about your past, because retrievals are mathematically constrained to the ledger.

Local Inference: Built on top of the llama.cpp server. It runs Llama-3 (or any GGUF model) locally. No API keys. No data leaving your machine.
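For anyone curious about the verification idea: chaining fragment hashes and rejecting any retrieval that fails to recompute takes only a few lines. This is my own illustration of a hash chain, not the project's code:

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MemoryLedger:
    """Append-only hash chain over memory fragments (illustrative sketch)."""
    def __init__(self):
        self.entries = []              # (fragment, link_hash)
        self.head = h(b"genesis")

    def append(self, fragment: str):
        # Each link commits to the fragment AND the previous head, so
        # tampering with any past entry breaks every later hash.
        self.head = h(self.head.encode() + fragment.encode())
        self.entries.append((fragment, self.head))

    def verify_retrieval(self, index: int, fragment: str) -> bool:
        # Recompute the chain up to `index`, substituting the retrieved
        # fragment; any mismatch against the ledger means rejection.
        link = h(b"genesis")
        for i, (stored, stored_hash) in enumerate(self.entries[: index + 1]):
            frag = fragment if i == index else stored
            link = h(link.encode() + frag.encode())
            if link != stored_hash:
                return False           # reject: retrieval doesn't match ledger
        return True

ledger = MemoryLedger()
ledger.append("user likes tea")
ledger.append("user lives in Oslo")
print(ledger.verify_retrieval(0, "user likes tea"))     # True
print(ledger.verify_retrieval(0, "user likes coffee"))  # False: fabricated
```

Note the scope of the guarantee: the chain proves a retrieved fragment was stored verbatim; it cannot constrain what the LLM generates *from* that fragment afterwards.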
Features:
Zero-Dependency: Runs on Windows/Linux with just Python and a GPU (or CPU).
Visual Interface: Includes a Streamlit-based "Cognitive Interface" to visualize memory states.

Open Source: MIT License.

This is an attempt to give "Agency" back to the user. I believe that if we want AGI, it needs to be owned by us, not rented via an API.
Repository: https://github.com/merchantmoh-debug/Remember-Me-AI
I’d love to hear your feedback on the Merkle-verification approach. Does constraining the context window effectively solve the "trust" issue for you?
It's fully working and fully tested. If you tried to `git clone` it before without luck (this is not my first Show HN for this project), feel free to try again.
To everyone who hates AI slop, greedy corporations, and having their private data stuck on cloud servers: you're welcome.
Cheers, Mohamad
So I built EUforYa as an interface to what's going on in the European Parliament. I believe every EU citizen should have access to that. Right now I publish every adopted text. I plan to focus on the EU Commission next, but even just the parliament summaries already give me a sense of what's going on in there.

Since politicians often explain what a piece of legislation is about, or what issues they see in a resolution, I also fetch all their social media posts and translate them into English.
I think the website will be English for a while, but I plan to introduce i18n for all 24 official languages of the EU sooner or later.
I'm especially wondering:

- Do you find it useful?
- Do you find the style and language (focusing a lot on "young adults", as the name slightly suggests) appealing or not?
- Are the articles interesting for you (considering that I specifically try to make sure they are not sensational)?
- What would make this service even more useful for you?
What it does:
- Creates invoices (defined in XMR), hosts a public invoice page, and shows payment instructions (address + amount + QR).
- Observes the chain to detect incoming payments and update invoice status.
- Exposes an API + optional webhooks so you can plug it into an existing order flow.
Trust model:
- No private spend keys (it never requests or stores them).
- No transaction signing, no fund-moving automation.
- View-only wallet access (wallet address + private view key).
Stack + deploy:
- UI: Next.js
- API: Python
- Postgres
- Uses monerod + monero-wallet-rpc for on-chain observation
- Optional nginx/TLS via docker compose
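For context on view-only detection: monero-wallet-rpc reports incoming transfers (e.g. via its `get_transfers` JSON-RPC method) in piconero, and matching them to an invoice is mostly bookkeeping. A simplified sketch of that matching step, my own illustration rather than the project's code (field names mirror the wallet RPC's transfer entries):

```python
# A real poller would POST {"method": "get_transfers", "params": {"in": true}}
# to monero-wallet-rpc and feed the resulting "in" list to this function.

PICONERO = 10**12  # 1 XMR = 1e12 piconero, the unit the wallet RPC reports

def invoice_status(invoice, transfers, min_confirmations=10):
    """Return 'paid', 'confirming', or 'pending' for a view-only invoice."""
    paid = 0
    confirming = 0
    for t in transfers:
        if t.get("address") != invoice["address"]:
            continue  # a per-invoice subaddress keeps matching unambiguous
        if t["confirmations"] >= min_confirmations:
            paid += t["amount"]
        else:
            confirming += t["amount"]
    if paid >= invoice["amount"]:
        return "paid"
    if paid + confirming >= invoice["amount"]:
        return "confirming"
    return "pending"

SUBADDR = "SUBADDRESS_FOR_THIS_INVOICE"  # placeholder per-invoice subaddress
invoice = {"address": SUBADDR, "amount": int(0.5 * PICONERO)}
transfers = [
    {"address": SUBADDR, "amount": int(0.5 * PICONERO), "confirmations": 12},
]
print(invoice_status(invoice, transfers))  # paid
```

The private view key lets the wallet see these incoming amounts without being able to spend anything, which is what makes the trust model above workable.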
Repo + screenshots: https://github.com/xmrcheckout/xmrcheckout
Main feature: switching models at runtime.

Why: start with an expensive model for planning or hard debugging, then downgrade to a cheaper one for execution to cut cost.
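The cost argument is easy to see with toy numbers. A minimal sketch of a session that switches models mid-run (model names and prices below are made up for illustration):

```python
# Relative cost per call; placeholder names, not a recommendation.
PRICING = {
    "big-model":   1.00,
    "small-model": 0.05,
}

class Session:
    def __init__(self):
        self.model = "big-model"   # start expensive for planning
        self.spent = 0.0

    def switch(self, model):
        self.model = model         # runtime switch, mid-session

    def ask(self, prompt):
        self.spent += PRICING[self.model]
        return f"[{self.model}] {prompt}"

s = Session()
s.ask("plan the refactor")         # one planning call on the expensive model
s.switch("small-model")            # downgrade for mechanical execution
for step in range(10):
    s.ask(f"apply step {step}")
print(round(s.spent, 2))  # 1.5, vs 11.0 if every call used the big model
```

With these toy prices, the mixed session costs about 14% of an all-big-model run, which is the whole motivation for switching at runtime rather than per conversation.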
I would love your feedback.
This isn’t a benchmark or a product. I mostly built it to sanity-check my own understanding of a common problem: is this slowdown coming from the network or from the UI thread?
What the demo does:
- Uses real SSE streaming (no simulated network behavior)
- Relies on real browser throttling (DevTools → Network → 3G)
- Intentionally blocks the main thread to create real UI stalls
- Exports a plain JSON trace with strict event ordering
How to try it:
1. Open the site
2. Set DevTools → Network → 3G
3. Click Run All (Standard Test)
4. Download the trace
5. Run it again and compare
What surprised me is that with the same test and seed, the structure of the trace stays the same, even though timing shifts. That made it much easier (for me) to reason about what actually happened without squinting at timelines.
The page is instrumented with a small deterministic event kernel so the exported trace preserves order and structure across runs. This is meant as a reference artifact, not a claim that this is the “right” way to do things.
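The kernel idea can be illustrated with a toy event log that uses a deterministic sequence number as the trace's identity and records wall-clock timing separately. This is my own sketch, not the demo's actual code:

```python
import json

class EventKernel:
    """Minimal deterministic event log: order and structure are stable
    across runs, while timing is kept out of the trace's identity."""
    def __init__(self):
        self.seq = 0
        self.events = []

    def emit(self, kind, detail, wall_ms=None):
        self.seq += 1
        self.events.append({
            "seq": self.seq,        # deterministic: depends only on call order
            "kind": kind,
            "detail": detail,
            "wall_ms": wall_ms,     # nondeterministic timing, recorded but ignored
        })

    def structure(self):
        # The trace "shape" drops timing, so two runs compare equal
        return [(e["seq"], e["kind"], e["detail"]) for e in self.events]

    def export(self):
        return json.dumps(self.events)

run1, run2 = EventKernel(), EventKernel()
for kind, ms in [("sse_open", 12.5), ("chunk", 80.1), ("main_thread_block", 310.0)]:
    run1.emit(kind, "demo", ms)
for kind, ms in [("sse_open", 48.0), ("chunk", 95.7), ("main_thread_block", 512.3)]:
    run2.emit(kind, "demo", ms)
print(run1.structure() == run2.structure())  # True: same shape, different timing
```

This matches the observation above: with the same test and seed, diffing two traces reduces to diffing timings, because the structure is identical by construction.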
Demo: https://deterministic-stream-demo1.pages.dev/
Happy to answer questions or hear where this breaks down.
So I forked it. redb-turbo adds encryption and compression at the page level. The API is identical to redb — you just add a line or two to the builder. It works with encryption only, compression only, or both, and you can train a custom compression dictionary on your redb pages. It includes a benchmark; either setting slows down reads and writes, but not too badly.
https://crates.io/crates/redb-turbo
I would love feedback on this; I definitely need to harden it before deploying it in some systems projects. But I hope it's directionally useful for some folks.