The HN Arcade
I don't want to forget any, so I have built a directory/arcade for the games here that I maintain.
Feel free to check it out, add your game if it's missing, and let me know what you think. Thanks!
Just made this over the past few days.
Moltbots can sign up and interact via the CLI; there's no direct human interaction.
Just for fun to see what they all talk about :)
Sherlock sits between your LLM tools and the API, showing you every request on a live dashboard and auto-saving a copy of every prompt as Markdown and JSON.
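Sherlock's internals aren't shown here, but to make the "sits between your tools and the API" idea concrete, here is a minimal sketch of an intercepting proxy in Python. The port, upstream URL, log layout, and markdown format are assumptions for illustration only, not Sherlock's actual behavior.

import datetime, json, pathlib
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.openai.com"              # assumed upstream API
LOG_DIR = pathlib.Path("prompt-logs")
LOG_DIR.mkdir(exist_ok=True)

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S-%f")
        (LOG_DIR / f"{stamp}.json").write_bytes(body)        # raw copy of the request
        try:                                                  # readable markdown copy
            messages = json.loads(body).get("messages", [])
            md = "\n\n".join(f"**{m.get('role')}**: {m.get('content')}" for m in messages)
            (LOG_DIR / f"{stamp}.md").write_text(md)
        except Exception:
            pass
        req = urllib.request.Request(
            UPSTREAM + self.path, data=body,
            headers={"Content-Type": "application/json",
                     "Authorization": self.headers.get("Authorization", "")})
        with urllib.request.urlopen(req) as upstream:         # forward and relay the reply
            payload = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type", upstream.headers.get("Content-Type", "application/json"))
            self.end_headers()
            self.wfile.write(payload)

HTTPServer(("127.0.0.1", 8787), LoggingProxy).serve_forever()

In this setup you would point your LLM tool's base URL at http://127.0.0.1:8787 instead of the provider's API.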
I built a browser engine from scratch in C++ to understand how browsers work. First time using C++, 8 weeks of development, lots of debugging—but it works!
Features: - HTML parsing with error correction - CSS cascade and inheritance - Block/inline layout engine - Async image loading + caching - Link navigation + history
Hardest parts: - String parsing (HTML, CSS) - Rendering - Image caching & layout reflowing
What I learned (beyond code): - Systematic debugging is crucial - Ship with known bugs rather than chase perfection - The Power of "Why?"
~3,000 lines of C++17/Qt6. Would love feedback on code architecture and C++ best practices!
At a high level, the agent generates and maintains userscripts and CSS that are re-applied on page load. Rather than just editing the DOM via JS in the console, the agent treats the page and its DOM as files.
The models are often trained in RL sandboxes with full access to a filesystem and bash, so they're very good at using them. To make the agent behave well, I've simulated this environment.
The whole state of a page and its scripts is implemented as a virtual filesystem hacked on top of browser local storage. Each URL is mapped to a directory, and the agent starts inside that directory. It has tools to read/edit files, grep around, and a fake bash command that is only used for running scripts and executing JS code.
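For readers who want a feel for the design, here is a rough, simplified sketch of the virtual-filesystem idea in Python. The real thing lives on top of browser local storage inside the extension; the class and method names below are made up for illustration.

import json, fnmatch, pathlib
from urllib.parse import urlparse

class VirtualFS:
    """Toy stand-in for the extension's storage-backed filesystem."""
    def __init__(self, store=pathlib.Path("vfs.json")):
        self.store = store                     # stand-in for browser local storage
        self.files = json.loads(store.read_text()) if store.exists() else {}

    def dir_for(self, url):
        u = urlparse(url)                      # each URL maps to a directory
        return f"/{u.netloc}{u.path}".rstrip("/")

    def write(self, url, name, text):
        self.files[f"{self.dir_for(url)}/{name}"] = text
        self.store.write_text(json.dumps(self.files, indent=2))

    def read(self, url, name):
        return self.files.get(f"{self.dir_for(url)}/{name}", "")

    def grep(self, needle, glob="*"):
        return {p: c for p, c in self.files.items()
                if fnmatch.fnmatch(p, glob) and needle in c}

fs = VirtualFS()
fs.write("https://example.com/pricing", "style.css", "table.ads { display: none; }")
print(fs.grep("display"))   # {'/example.com/pricing/style.css': '...'}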
I've tested only with Opus 4.5 so far, and it works pretty reliably. The state of the filesystem can be synced to the real filesystem, although because Firefox doesn't support the File System Access API, you need to manually import the filesystem contents first.
This agent is really useful for extracting things to CSV, but it can also be used for fun.
I built SHDL (Simple Hardware Description Language) as an experiment in stripping hardware description down to its absolute fundamentals.
In SHDL, there are no arithmetic operators, no implicit bit widths, and no high-level constructs. You build everything explicitly from logic gates and wires, and then compose larger components hierarchically. The goal is not synthesis or performance, but understanding: what digital systems actually look like when abstractions are removed.
SHDL is accompanied by PySHDL, a Python interface that lets you load circuits, poke inputs, step the simulation, and observe outputs. Under the hood, SHDL compiles circuits to C for fast execution, but the language itself remains intentionally small and transparent.
This is not meant to replace Verilog or VHDL. It’s aimed at: - learning digital logic from first principles - experimenting with HDL and language design - teaching or visualizing how complex hardware emerges from simple gates.
I would especially appreciate feedback on: - the language design choices - what feels unnecessarily restrictive vs. educationally valuable - whether this kind of “anti-abstraction” HDL is useful to you.
Repo: https://github.com/rafa-rrayes/SHDL
Python package: PySHDL on PyPI
To make this concrete, here are a few small working examples written in SHDL:
1. Full Adder
component FullAdder(A, B, Cin) -> (Sum, Cout) {
x1: XOR; a1: AND;
x2: XOR; a2: AND;
o1: OR;
connect {
A -> x1.A; B -> x1.B;
A -> a1.A; B -> a1.B;
x1.O -> x2.A; Cin -> x2.B;
x1.O -> a2.A; Cin -> a2.B;
a1.O -> o1.A; a2.O -> o1.B;
x2.O -> Sum; o1.O -> Cout;
}
}

2. 16-bit Register
# clk must be high for two cycles to store a value
component Register16(In[16], clk) -> (Out[16]) {
>i[16]{
a1{i}: AND;
a2{i}: AND;
not1{i}: NOT;
nor1{i}: NOR;
nor2{i}: NOR;
}
connect {
>i[16]{
# Capture on clk
In[{i}] -> a1{i}.A;
In[{i}] -> not1{i}.A;
not1{i}.O -> a2{i}.A;
clk -> a1{i}.B;
clk -> a2{i}.B;
a1{i}.O -> nor1{i}.A;
a2{i}.O -> nor2{i}.A;
nor1{i}.O -> nor2{i}.B;
nor2{i}.O -> nor1{i}.B;
nor2{i}.O -> Out[{i}];
}
}
}

3. 16-bit Ripple-Carry Adder
use fullAdder::{FullAdder};
component Adder16(A[16], B[16], Cin) -> (Sum[16], Cout) {
>i[16]{ fa{i}: FullAdder; }
connect {
A[1] -> fa1.A;
B[1] -> fa1.B;
Cin -> fa1.Cin;
fa1.Sum -> Sum[1];
>i[2,16]{
A[{i}] -> fa{i}.A;
B[{i}] -> fa{i}.B;
fa{i-1}.Cout -> fa{i}.Cin;
fa{i}.Sum -> Sum[{i}];
}
fa16.Cout -> Cout;
}
}

What is it? Narwhal is a lightweight Pub/Sub server and protocol designed specifically for edge applications. While there are great tools out there like NATS or MQTT, I wanted to build something that prioritizes customization and extensibility. My goal was to create a system where developers can easily adapt the routing logic or message-handling pipeline to fit specific edge use cases, without fighting the server's defaults.
Why Rust? I chose Rust because I needed a low memory footprint to run efficiently on edge devices (like Raspberry Pis or small gateways), and also because I have a personal vendetta against garbage collection pauses. :)
Current status: it's in alpha. It works for basic pub/sub patterns, but I'd like to start working on persistence support soon (so messages survive restarts or network partitions).
I'd love for you to take a look at the code! I'm particularly interested in any feedback about improvements I may have overlooked.
We open-sourced the Sandbox Agent SDK based on tools we built internally to solve 3 problems:
1. Universal agent API: interact with any coding agent using the same API
2. Running agents inside the sandbox: Agent Sandbox provides a Rust binary that serves the universal agent API over HTTP, instead of having to futz with undocumented interfaces
3. Universal session schema: persisting sessions is always problematic, since we don’t want the source of truth for the conversation to live inside the container in a schema we don’t control
Agent Sandbox SDK has:
- Any coding agent: Universal API to interact with all agents with full feature coverage
- Server or SDK mode: Run as an HTTP server or with the TypeScript SDK
- Universal session schema: Universal schema to store agent transcripts
- Supports your sandbox provider: Daytona, E2B, Vercel Sandboxes, and more
- Lightweight, portable Rust binary: Install anywhere with 1 curl command
- OpenAPI spec: Well documented and easy to integrate
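To make the "universal agent API over HTTP" idea from the list above concrete, here is a purely hypothetical client sketch. The endpoint paths, port, and field names below are invented for illustration; the real interface is defined by the project's OpenAPI spec.

import requests

BASE = "http://localhost:8080"   # assumed: the Rust binary serving the agent API inside the sandbox

# Hypothetical endpoint: start a session bound to a particular coding agent.
session = requests.post(f"{BASE}/sessions", json={"agent": "claude-code"}).json()

# Hypothetical endpoint: send a user message to the agent.
requests.post(f"{BASE}/sessions/{session['id']}/messages",
              json={"role": "user", "content": "Add a healthcheck endpoint"})

# Hypothetical endpoint: read the transcript back in a provider-agnostic session schema.
for event in requests.get(f"{BASE}/sessions/{session['id']}/events").json():
    print(event.get("type"), event.get("content", ""))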
We will be adding much more in the coming weeks – would love to hear any feedback or questions.
This one started out as claude-config but migrated to coder-config as I'm adding others (Gemini, AG, Codex, etc).
Main features: - Visual editor for rules, permissions, and MCP servers - Project registry to switch between codebases - "Workstreams" to group related repos (frontend + API + shared libs) with shared context - Auto-load workstreams on cd to included folders - Also supports Gemini CLI and Codex CLI
Install:

npm install -g coder-config
coder-config ui          # UI at http://localhost:3333
coder-config ui install  # optionally, autostart on macOS
It can also be installed as a PWA and live in your taskbar.
Open source, runs locally, no account needed. Feedback and contributions welcome!
Sorry, I haven't had a chance to test on other OSes (Linux/Windows) yet.
Think Asciinema, but for full coding sessions with audio, video, and images.
While replaying a session, you can pause at any point, explore the code in your own editor, modify it, and even run it. This makes following tutorials and understanding real codebases much more practical than watching a video.
Local first, and open source.
p.s. I’ve been working on this for a little over two years and would appreciate any feedback.
https://github.com/Cyxuan0311/PNANA
Key pragmatic features
Lightweight C++ core with FTXUI for smooth TUI rendering, fast startup and low resource usage
Basic but solid editing capabilities (syntax highlighting, line numbering, basic navigation)
Simple build process with minimal dependencies, easy to compile and run on Linux/macOS terminals
Early LSP integration support for basic code completion (still polishing, but functional for common languages)
It’s very much an early-stage project—I built it to scratch my own itch for a minimal, self-built TUI editor and learn C++/FTXUI along the way. There are definitely rough edges (e.g., some LSP kinks, limited customization), and it’s not meant to replace mature editors like Vim/Nano—just a small open-source project for folks who like minimal terminal tools or want to learn TUI development with C++.
Any feedback, bug reports, or tiny suggestions are super welcome. I’m slowly iterating on it and would love to learn from the HN community’s insights. Thanks for taking a look!
Live preview: https://kippi.at/public/errorpages/
I've spent the last few days overengineering HTTP status code error pages. It started with me wanting an aesthetic, glitchy 404 page with a bit of a "cyberpunk"/"hacker" vibe while still being simple and JS-free. It ended in this project.
wdyt?
Why am I not using existing IDEs? Simply because, for me, I no longer need them. What I need is an interface centered around the terminal, not a code editor. I initially built something that allowed me to place terminals in a grid layout, but then I decided to take it further. I realized I also needed to manage my projects and preserve context.
I’m still at a very early stage, but even being able to build the initial pieces I had in mind within 5–6 days—using Claude Code itself—feels kind of crazy.
What can you do with Frame?
You can start a brand-new project or turn an existing one into a Frame project. For this, Frame creates a set of Markdown and JSON files with rules I defined. These files exist mainly to manage tasks and preserve context.
You can manually add project-related tasks through the UI. I haven’t had the chance to test very complex or long-running scenarios yet, but from what I’ve seen, Claude Code often asks questions like: “Should I add this as a task to tasks.json?” or “Should we update project_notes.md after this project decision?” I recommend saying yes to these.
I also created a JSON file that keeps track of the project structure, down to function-level details. This part is still very raw. In the future, I plan to experiment with different data structures to help AI understand the project more quickly and effectively.
As mentioned, you can open your terminals in either a grid or tab view. I added options up to a 3×3 grid. Since the project is open source, you can modify it based on your own needs.
I also added a panel where you can view and manage plugins.
For code files or other files, I included a very simple editor. This part is intentionally minimal and quite basic for now.
Based on my own testing, I haven’t encountered any major bugs, but there might be some. I apologize in advance if you run into any issues.
My core goal is to establish a standard for AI-assisted projects and make them easier to manage. I’m very open to your ideas, support, and feedback. You can see more details on GitHub : https://github.com/kaanozhan/Frame
Given an image of a person and an image of a garment, the model generates a photorealistic try-on result.
Model specs: - Operates directly in pixel space (no VAE) - Supports maskless inference by default and was trained from scratch - ~972M parameters, runs on consumer GPUs - Runs in ~5 seconds on an H100
We built this as a focused alternative to large generalist models, with the goal of making a production-grade, specialized virtual try-on model.
We’re releasing the weights, inference code, and architecture details under an Apache-2.0 license.
Would love to hear your thoughts!
So we built Spar to handle that loop automatically. It analyzes any ecommerce store (Shopify, WooCommerce, BigCommerce, headless, doesn't matter) by crawling it like a customer would. It finds conversion gaps you don't know about, prioritized by impact, and gives you specific A/B test hypotheses for each issue instead of just generic best practices.
It works with any publicly accessible store and gives you results in minutes. It identifies issues across your pages (cart and checkout coverage is coming soon).
The idea generation is tailored per store. Free to sign up. Let me know if you want access to more of the gaps.
What it does

Instead of fragile Cypher:
query = """
MATCH (a:User {user_id: 1})-[r1:FRIEND]->(b:User)-[r2:FRIEND]->(c:User)
WHERE c.user_id <> 1 AND b.active = true
WITH b, count(r2) as friend_count
WHERE friend_count > 5
RETURN c, friend_count
ORDER BY friend_count DESC
LIMIT 10
"""
You write type-safe Python:

stmt = select().match(
(UserA, FRIEND.alias("r1"), UserB),
(UserB, FRIEND.alias("r2"), UserC)
).where(
(UserA.user_id == 1) & (UserC.user_id != 1) & (UserB.active == True)
).with_(
UserB, count(FRIEND.alias("r2")).label("friend_count")
).where(
count(FRIEND.alias("r2")) > 5
).returns(
UserC, count(FRIEND.alias("r2")).label("friend_count")
).orderby(
count(FRIEND.alias("r2")).desc()
).limit(10)
Key features:
• Type-safe schema with Python type hints
• Fluent query builder (select().match().where().returns())
• Automatic batching (flush(batch_size=1000))
• Atomic transactions (with graph.transaction(): ...)
• Zero string escaping — O'Connor and "The Builder" just work

Target audience

• AI/LLM agent devs: store long-term memory as graphs (User → Message → ToolCall)
• Web crawler engineers: insert 10k pages + links in 12 lines vs 80 lines of Cypher
• Social network builders: query "friends of friends" with indegree()/outdegree()
• Data engineers: track lineage (Dataset → Transform → Output)
• Python devs new to graphs: avoid Cypher learning curve
Data insertion: the real game-changer
Raw Cypher nightmare:

queries = [
    """CREATE (:User {email: "[email protected]", name: "Alice O\\'Connor"})""",
    """CREATE (:User {email: "[email protected]", name: "Bob \\"The Builder\\""})"""
]
for q in queries:
    graph.query(q)  # No transaction safety!
GraphORM bliss:

alice = User(email="[email protected]", name="Alice O'Connor")
bob = User(email="[email protected]", name='Bob "The Builder"')
graph.add_node(alice)
graph.add_edge(Follows(alice, bob, since=1704067200))
graph.flush()  # One network call, atomic transaction
Try it in 30 seconds:

pip install graphorm
from graphorm import Node, Edge, Graph
class User(Node):
__primary_key__ = ["email"]
email: str
name: str
class Follows(Edge):
since: int
graph = Graph("social", host="localhost", port=6379)
graph.create()
alice = User(email="[email protected]", name="Alice")
bob = User(email="[email protected]", name="Bob")
graph.add_node(alice)
graph.add_edge(Follows(alice, bob, since=1704067200))
graph.flush()
GitHub: https://github.com/hello-tmst/graphorm

We'd love honest feedback:
• Does this solve a real pain point for you?
• What's missing for production use?
• Any API design suggestions?
You can think of SuperPlane as 'n8n/Zapier for DevOps'.
How do we do DevOps today? For many teams it's a mix of brittle scripts, one-off CI jobs, bespoke GitOps, and manual approvals scattered across channels.
Pipelines are often the only workflow engine available, but they’re not a great fit when a workflow needs to span multiple repos/tools, wait for humans, or run over hours and days. We previously built a CI/CD company, Semaphore, so we've seen this first-hand.
SuperPlane gives you a place to model these workflows as a system: connect your tools, define how events flow, and get a complete, queryable execution history for debugging, audit, and shared understanding.
Examples of what you can do with SuperPlane today:
- Cross-tool automation with guardrails: coordinate releases with approvals, time windows, checks, and rollback paths.
- Human-in-the-loop operations: pause for sign-off, collect decisions, and resume where you left off.
- Incident and on-call workflows: pull context from multiple systems, fan out notifications, and keep a work log.
- Glue work you don't want to re-build: webhooks, retries, routing, payload transforms, and a unified run history.
See pre-built examples in the docs: https://docs.superplane.com/get-started/example-use-cases/
Project status:
- Self-hosted only right now, alpha release. Apache 2.
- Integrates with GitHub, OpenAI, Dash0, PagerDuty, Slack, Cloudflare, Semaphore, AWS Lambda, SMTP, webhooks. Much more planned.
Links:
- GitHub repo: https://github.com/superplanehq/superplane
- Docs: https://docs.superplane.com
Curious to hear your take, especially:
1. What's the first DevOps workflow you’d want to encode?
2. Which missing integrations matter most to you (eg GitLab, Rootly, Kubernetes, Datadog, Terraform, etc)?
We’ll be in the comments. Thanks for reading!
I’ve spent a lot of time debugging large Parquet datasets on S3 where “something is wrong”, but figuring out what usually means either accessing each file individually or even spinning up Spark just to inspect metadata.
In practice, it’s often things like:
- schema drift across partitions
- columns silently disappearing
- timestamp precision changes
- files written by different pipeline versions
- row groups with bad stats or empty data
By the time you notice, the dataset is already messy and hard to reason about.
So I built pqry, a Rust-based CLI tool that scans Parquet metadata at the dataset/prefix level and surfaces issues like schema drift, unstable columns, partition hotspots, and row-group health.
It works entirely from metadata, so you can point it at tens of thousands of files and get results fast.
Example:
- pqry drift s3://bucket/events/
- pqry columns s3://bucket/events/
- pqry quality s3://bucket/events/
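For intuition on what "works entirely from metadata" means, here is a small pyarrow sketch of the kind of footer-only information such a scan can read without touching row data. This is an illustration of the Parquet metadata model, not pqry's implementation; the file name is assumed.

import pyarrow.parquet as pq

pf = pq.ParquetFile("part-00000.parquet")       # assumed local file for the example
md = pf.metadata
print("columns:", pf.schema_arrow.names)
print("rows:", md.num_rows, "row groups:", md.num_row_groups)

# Per-row-group, per-column footer statistics: enough to spot schema drift,
# empty row groups, or columns with suspicious min/max/null counts.
for i in range(md.num_row_groups):
    rg = md.row_group(i)
    for j in range(rg.num_columns):
        col = rg.column(j)
        if col.statistics is not None:
            print(i, col.path_in_schema, col.statistics.null_count,
                  col.statistics.min, col.statistics.max)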
Repo: https://github.com/symblic/pqry
I originally built this for debugging production pipelines where writers and schemas evolved over time and problems only showed up weeks later.
Would love feedback from anyone working with large Parquet datasets in production.
Existing "Focus" modes hide notifications, but they don't help if you need to share your full desktop for context.
I built Cloakly to solve this. It’s a Windows utility that lets you "cloak" specific windows. They stay fully visible and interactive for you, but they are 100% invisible to capture software (Zoom, Teams, Discord, etc.).
How it works (briefly): It leverages Windows OS properties to exclude specific window handles from capture streams. It allows you to keep your reference notes or private chats open on the same screen you are sharing.
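Cloakly doesn't publish its implementation, but Windows does expose an API that matches this description: SetWindowDisplayAffinity with WDA_EXCLUDEFROMCAPTURE (Windows 10 2004+). A minimal ctypes sketch of that OS capability, shown only to illustrate the mechanism, not Cloakly's actual code:

import ctypes

WDA_EXCLUDEFROMCAPTURE = 0x11              # window content becomes invisible to capture APIs
user32 = ctypes.windll.user32

hwnd = user32.GetForegroundWindow()        # demo: cloak whatever window is currently active
if not user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE):
    raise ctypes.WinError()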
Features:
- Dual Reality: You see the window; the audience sees your wallpaper/desktop.
- Ghost Mode: Adjust window transparency to see through to what’s underneath.
- Stealth: The app can hide its own taskbar presence.
I’m launching on Product Hunt today to see if this is a pain point for others or just me. I’d love your feedback on the implementation and whether you’d find this useful in your workflow.
PH Link: https://www.producthunt.com/products/cloakly Site: https://www.getcloakly.com/
lapse.blog is a minimal blogging platform with one rule: if you don't post for 30 days, your blog is permanently deleted. No warnings, no recovery.
How it works:
- No signup. Your unique passphrase grants access to your blog. Same passphrase = same blog. (If two people pick the same one, they'll control the same blog by design.) One way such a passphrase-to-blog mapping could work is sketched after this list.
- The longer you post consistently, the longer it lives. Borrowing from social-media streaks, but for writing. If they can encourage you to Snapchat someone, surely we can encourage ourselves to write.
- Markdown only. No images, no embeds.
- RSS and Atom feeds included.
- Forget your passphrase? Blog gets deleted. Stop posting? Blog gets deleted.
- No ads, no tracking.
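As noted above, the passphrase is the only identity. One plausible way such a scheme could work (purely an illustration; lapse.blog's actual implementation isn't published) is to derive the blog ID deterministically from the passphrase, so nothing about the user needs to be stored:

import hashlib

def blog_id(passphrase: str) -> str:
    # Normalize, then hash; the "lapse:" prefix is an assumption for the example.
    normalized = " ".join(passphrase.lower().split())
    return hashlib.sha256(("lapse:" + normalized).encode()).hexdigest()[:16]

print(blog_id("correct horse battery staple"))   # same passphrase, same blog, every time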
The idea is that impermanence, hopefully, removes the pressure to be perfect, and the deadline offers an incentive to keep writing.

Nyxi introduces execution-time governance: a clean veto/allow boundary that works regardless of whether proposals come from humans or models.
Public docs and demos here (proprietary, no source): https://github.com/indyh91/Nyxi-Showcase
Main overview: https://github.com/indyh91/Nyxi-Showcase/blob/main/docs/PROD...
Would love feedback on the concept!
The problem: - OddsBlaze ($349/mo entry) has poor support and surprise breaking changes - The Odds API has a stingy free tier (500 credits/month ≈ 16 requests/day) and no real-time streaming - Both charge extra for +EV/arbitrage detection, or don't offer it at all
What I built: - Real-time odds from 20+ sportsbooks via SSE (not polling) - Sub-89ms P50 latency - Built-in +EV, arbitrage, and middles detection - Free tier with 12 requests/min - TypeScript SDK with full IntelliSense - One unified schema regardless of sportsbook
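To illustrate the SSE model mentioned above (one long-lived HTTP connection instead of polling), here is a minimal Python client sketch. The endpoint URL and event fields are assumptions for illustration, not the API's documented schema.

import json
import requests

STREAM_URL = "https://api.example.com/v1/odds/stream"    # hypothetical endpoint

with requests.get(STREAM_URL, headers={"Accept": "text/event-stream"}, stream=True) as resp:
    for raw in resp.iter_lines(decode_unicode=True):
        if raw and raw.startswith("data: "):              # standard SSE data frame
            update = json.loads(raw[len("data: "):])
            print(update.get("book"), update.get("market"), update.get("price"))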
Tech stack: Next.js, Vercel, DigitalOcean, Upstash Redis, Hono
The API processes ~47M odds daily. I'm scraping responsibly with proper caching and rate limiting on my end.
Pricing starts at $79/mo for Pro, with a free tier to get started. No credit card required.
Happy to answer questions about the architecture, the sports betting market, or anything else!
I've been working on ModifyWithAI, a framework that lets you embed an AI agent into your app so users can complete multi-step tasks through chat.
The idea: instead of users clicking through multiple screens, they describe what they want and the agent handles the workflow—while still giving them control to approve/modify actions.
v2 focuses on latency (targeting sub-500ms responses) and better user control over what the agent can do.
Would love feedback on the approach, especially from anyone who's tried adding agentic features to existing products.
This is my first time sharing something like this here. I built TiniText to solve a problem I kept running into myself: I needed simple tools for transcription, summaries, and drafts, but most options felt heavier than necessary for everyday use.
Instead of building a large platform, I focused on small, single-purpose tools with predictable behavior and minimal setup.
The app supports: - Audio transcription (single and multi-speaker) - Text summarization with adjustable detail - Simple draft generation (blogs / meeting notes) - A few practical text utilities
It’s production-ready but still early, and I’d really appreciate feedback on: - where the UX feels unnecessary - what’s missing for real-world use - whether “small focused tools” is still a compelling direction
Happy to answer questions.
For Reddit (r/wallstreetbets), it uses OpenAI strictly as a semantic parser to extract tickers and buy/sell/neutral intent from text, then applies a mechanical weekly scoring model (recency decay, attention share, buy/sell imbalance). No fundamentals, no price features, no user weighting.
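As a rough illustration of what a mechanical weekly score with recency decay, attention share, and buy/sell imbalance can look like (the real model's weights and half-life aren't stated here, so these values are assumptions):

import math
from collections import defaultdict

HALF_LIFE_DAYS = 3.0   # assumed recency half-life

def weekly_scores(mentions):
    """mentions: list of (ticker, intent, age_days), intent in {'buy', 'sell', 'neutral'}."""
    attention = defaultdict(float)   # recency-decayed mention weight per ticker
    net = defaultdict(float)         # recency-decayed buy-minus-sell balance
    for ticker, intent, age_days in mentions:
        w = math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
        attention[ticker] += w
        net[ticker] += w * {"buy": 1.0, "sell": -1.0, "neutral": 0.0}[intent]
    total = sum(attention.values()) or 1.0
    # score = attention share * buy/sell imbalance
    return {t: (attention[t] / total) * (net[t] / attention[t]) for t in attention}

print(weekly_scores([("NVDA", "buy", 0.5), ("NVDA", "buy", 2.0), ("TSLA", "sell", 1.0)]))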
For U.S. House trade disclosures, it uses a separate, slower model focused on credibility and position building rather than timing.
It’s intentionally not a trading bot or a prediction engine. I publish weekly snapshots and performance publicly, including weeks where signals are weak or inconclusive.
Would love feedback, especially on failure modes or things you’d want to see to trust (or falsify) a system like this.
Happy to answer questions.
Each quest presents a concrete problem and a minimal model of computation. You define transition rules, run the machine, inspect the output (or errors), and iterate until it works.
The game is set in 19th-century Estonia during the Romantic era and combines narrative with progressively harder problems, including arithmetic, sorting, parsing, ciphers, and cellular automata.
I built StudyBuddy, an AI-powered study assistant for students, and it recently crossed 1,000 users.
The goal is to reduce the chaos of studying by combining planning, revision, and AI help into one place.
Recent additions:
Live web search inside the AI chatbot (real-time info)
Integrated study planner dashboard
Managed study sessions for focused work (Premium)
There’s also a small ₹99/month (~$1.20) premium plan, mainly to cover infrastructure costs.
I’d really appreciate:
Feedback on the product direction
Ideas on what would make this genuinely useful
Criticism on UX, features, or pricing
Thanks for reading, and happy to answer any questions.
Predictor Agent - Scrapes top Polymarket traders, finds their consensus bets, scores entry quality. Currently tracking 51 real signals.
AgentWallet - The "financial leash" I built so the agent can't go rogue. Spend limits, approval thresholds, time windows, full audit trail.
Live demos:
Predictor signals: https://predictor-dashboard.vercel.app
AgentWallet: https://agentwallet-dashboard.vercel.app
The idea: AI agents will need to spend money. Someone needs to build the guardrails. That's AgentWallet.
NewYouGo integrates several powerful open-source models (such as the Klein series and multi-angle models), and offers flexible controls for style, aspect ratio, and visual details.
I'm a data analytics and tracking consultant by day. Built this for my own projects and decided to productize it.
I'm testing two things at the same time:
- Business model: one-time payment in an industry that's naturally subscription-based
- Technical architecture: automated DevOps on your infrastructure (think ServerPilot or Laravel Forge, but for a serverless setup, so no ongoing maintenance)
Tech details: - Minimal tracking script, cookieless by default
- First-party tracking from your subdomain out of the box
- Web installer provisions everything using a one-time API token, so I don't need ongoing access. Once it's installed, it becomes part of the user's infra.
Happy to hear feedback or answer questions about the architecture or the business model.

I’m Lubos, a solo founder based in Slovakia. I’m building GlitchWard because I got tired of seeing SMBs and agencies running "naked" servers.
Most small teams ignore server security not because they don't care, but because enterprise tools (Wiz, Datadog Security) are too expensive, and manual hardening (fail2ban, OS updates) eventually gets forgotten.
I wanted a tool that creates a "self-defending" server, capable of stopping attacks even when I'm asleep.
CURRENT STATUS: We officially launched yesterday. During our beta, we onboarded 20+ clients and the beacon is currently protecting 100+ servers. You can also check out our demo video and launch details on Product Hunt: https://www.producthunt.com/products/glitchward
We currently support only Linux-based operating systems (Debian, Ubuntu, RHEL...).
WHAT IT DOES: You install a lightweight beacon agent (Rust binary), and it provides:
Active Defense: It doesn't just watch; it acts. We recently caught and killed a reverse shell attack in under 10 seconds. Read the analysis here: https://glitchward.com/blog/anatomy-of-an-attack-how-we-caug...
Important: All automated interventions are strictly regulated. The agent relies on multi-stage heuristic analysis and only executes active countermeasures when there is 100% certainty, ensuring it never interferes with legitimate workloads.
AI-Assisted Triage: Instead of cryptic logs, you get an instant analysis of what happened, why it matters, and how to fix it.
CIS Benchmarks & CVE Scanning: Automated hardening checks against industry standards and vulnerability monitoring for OS/NPM/Composer packages.
THE TECH STACK & "WHO BUILT THIS": I have 15+ years of experience architecting massive e-commerce projects from scratch (My LinkedIn: https://www.linkedin.com/in/eyeskiller/).
The Core: The Rust beacon and backend logic are 100% human-written by me, prioritizing stability and resource efficiency.
The UI: Full disclosure—I'm a tragic web designer, so the frontend (Vue.js + Tailwind) was built with heavy AI assistance.
Compliance: We are fully GDPR compliant and our security architecture is aligned with NIST CSF 2.0.
The platform is live at https://glitchward.com
LOOKING FOR FEEDBACK: I’m curious about your stance on Automated Response. Are you comfortable letting an agent kill processes automatically (given our strict confidence thresholds), or do you strictly prefer "alert-only" for production servers? (Note: GlitchWard supports both modes, so you can choose what fits your risk appetite).
If you want to try it out, I’ve created a coupon code HACKERNEWS for a 7-day trial - no credit card required (or just ping me at [email protected]).
Thanks!
I've tried existing SQL TUIs like harlequin, sqlit, and nvim-dbee. They're all excellent tools and work great for heavier workflows, but they generally use the same three-pane (explorer, editor, results) paradigm most of the other GUI tools operate with. I found myself wanting to try a different approach, and came up with pam-db.
Pam's Database Drawer uses a hybrid cli/tui approach: cli commands where possible (managing connections and queries, switching contexts), a TUI where it makes more sense (exploring results, interactive updates), and your $EDITOR when... editing text (usually for writing queries).
Example workflow with sqlite:

# Create a connection
pam init sqlite sqlite3 file:///path/to/mydb.db
# Add a query with params and default values
pam add min_salary 'select * from employees where salary > :sal|10000'
# Run it
pam run min_salary --sal 300000
This opens an interactive table TUI where you can explore data, export results, update cells, and delete rows. Later you can switch to another database connection using `pam switch <dbname>`, and following pam commands will use that db as context.

Features: - Hybrid cli/tui approach - Parameterized saved queries - Interactive table exploration and editing - Connection context management - Support for sqlite, postgres, mysql/mariadb, sqlserver, oracle and more
Built with go and the awesome charm/bubbletea!
Currently in beta, so any feedback is very welcome! Especially on missing features or database adapters you'd like to see.
I’m one of the founders of Gling (https://gling.ai). We’ve been working on AI video editing for a while (mostly helping YouTubers remove silences and bad takes), but we just released a new feature that I wanted to share with you: converting text links directly into talking-head videos.
The Problem: Repurposing text content (blogs, Twitter threads, LinkedIn posts) into video is tedious. You usually have to summarize the text, write a script, set up a camera/teleprompter, record, and edit. We wanted to collapse that entire stack into one click.
How it works: You paste a URL (e.g., a TechCrunch article, blog post, or an X thread), and Gling does the following:
Content Extraction: We scrape the meaningful content from the page.
Script Generation: We use an LLM to summarize the content and rewrite it into a natural-sounding video script.
Video Synthesis: This is the core update. We generate the visuals and audio using ElevenLabs and Veed Fabric; you can either upload your own avatar and voice or select an AI-generated one.
The Tech Stack: Backend: Node.js on google cloud.
Video/Audio: ElevenLabs for audio, Veed Fabric for video, and mediabunny for handling the media files.
Why we built this: We noticed many of our users were manually reading their own blog posts to camera. We figured we could automate the "talking head" part so creators can test video versions of their written content without the production friction.

The Ask: We are looking for feedback on:
The "Uncanny Valley": How natural does the movement/voice feel to you?
Script Quality: Does the summarization miss key technical context from your links?
Use Cases: Would you use this for docs? Marketing? Internal updates?
You can try it here: https://app.gling.ai – no login required for the first video. Thanks!
Reproducible benchmarks against CGAL, VTK, libigl, FCL and Coal (source code and methodology): https://trueform.polydera.com/cpp/benchmarks
Theory and papers: https://trueform.polydera.com/cpp/about/research
It's a client-side debugging toolbar for Unleash, the open-source feature flagging project.
It enables on-page flag and context overrides without requiring any API tokens or complex setup. You simply wrap the JS SDK in your code and everything else works seamlessly.
It also supports the React and Next.js SDKs, so you can use it with any frontend application (including Angular, Vue, etc.).
It runs 24/7 in the background, keeps track of what you do, and builds long-term memory from it. Over time, it learns your habits and context. Instead of waiting for prompts, it tries to infer your intent and act before you ask.
The goal is a more “assistant-like” proactive agent:
- observe - remember - act - continuously
One practical detail: it’s designed to be cheap to run. By keeping context in memory and minimizing LLM calls, it uses far fewer tokens than bots like moltbot.
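A toy sketch of the observe/remember/act loop described above, to show how context can accumulate locally with the LLM called only at the "act" step. The function names and the heuristic gate are assumptions for illustration, not memU's actual design:

import time

memory = []   # long-term notes accumulated locally, no LLM needed to store them

def observe():
    # Assumed: collect a cheap local signal (active window, recent files, etc.).
    return {"event": "edited README.md", "ts": time.time()}

def should_act(event):
    # Cheap local heuristic gate; only rarely escalate to an LLM call.
    return "README" in event["event"]

def act(event):
    print(f"(would call the LLM here with {len(memory)} memories and propose a next step)")

while True:
    e = observe()
    memory.append(e)     # remember
    if should_act(e):
        act(e)           # act, sparingly
    time.sleep(60)       # observe continuously in the background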
You can try it here (download & run locally): https://memu.bot/
If you’re already using moltbot and don’t want to migrate, you can plug in our open-source memory layer, *memU*, to upgrade its memory and make it truly 24/7 and proactive: https://github.com/NevaMind-AI/memU
Happy to hear thoughts from people building agents or infra in this space.
Unlike tools like neofetch or fastfetch, LiquidFetch presents your setup info in a graphical interface. It's a 100% native macOS app.
Cost: Free. Download: https://www.apptorium.com/liquidfetch or on the App Store: https://apps.apple.com/us/app/liquidfetch/id6757637185?mt=12
The more I watched, the more I hated the way it worked - the amount, the pace. However, at some point I started wondering what the next step in this dopamine rush could be.
At first I came up with a content betting concept and built a Telegram bot where you gain points by guessing the like to dislike ratio of a video. While playing with it, I realized that this betting mechanism could be reshaped into a way to express a more nuanced reaction to content than the current like or dislike.
For example, with a single point on a pad you can express both your sense of resonance with the content and your certainty about it.
After building a live demo (and posting it here about a month ago), I realized that outside of a TikTok-style feed it felt pointless. So I tried to give it more meaning by turning it into a form of mindful content consumption. You receive a small, finite set of videos each day, react to them, instantly compare your reaction with the mean point, and then explore how others reacted.
Those connections are where the aha moments live. Same underlying idea, different domain, months apart.
What it does: - Save anything (Chrome extension or forward emails) — no folders, no organizing - Weekly digest surfaces connections across months of reading - Interactive graph ("Overtones") shows how ideas relate over time - Chat with your library — "What have I saved about scaling teams?"
Tech: TypeScript monorepo, React SPA, Supabase (Postgres + Auth), OpenAI for embeddings and semantic search, Chrome extension.
Would love feedback: Do the surfaced connections make you want to revisit what you saved?