Made a little Artemis II tracker
https://artemis-ii-tracker.com/
For those of us who apparently need a dedicated place to monitor this mission instead of behaving like well-adjusted people.
It is a small web app. Currently only Unit 01 is available; I will add the rest of the units over time as I use the app myself.
Some of the features include:
- Slow and fast audio for every single word and sentence in the app. You can play them with the click of a button, with no need to rewind a tape back and forth.
- Flashcards with keyboard controls to quickly go through the material and drill it.
You can access the website at https://detawk.com/ . There is a demo video on the landing page. Give it a look before signing up.
If you have any questions or feedback for me, let me know. I hope you like the app.
I have been working on a desktop P2P messenger called Kiyeovo for the last ~8 months, and I just published its beta version.
Quick backstory: It started out as a CLI application for my Graduate Thesis, where I tried to make the most secure and private messenger application possible. Then, I transformed it into a desktop application, gave it "clearnet" support and added a bunch of features.
Short summary:
The app runs in 2 completely isolated modes:
- fast mode: relay/DCUtR -> lower latency, calls support
- anonymous mode: Tor message routing -> slower, anonymous
These modes use different protocol IDs, DHT namespaces, pubsub topics and storage scopes so there’s no data crossover between them.
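The post doesn't include implementation details, but since DCUtR, the DHT, and GossipSub suggest a libp2p stack, here is a minimal go-libp2p sketch of how that kind of mode isolation can be expressed: every network identifier (protocol ID, DHT prefix, pubsub topic) is derived from the mode name, so the two networks never share records or topics. The `/kiyeovo/...` strings and function names are illustrative only, not the app's actual namespaces.

```go
package modes

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/protocol"
)

// Mode is either "fast" or "anon"; every network identifier is derived
// from it so the two stacks share no protocol IDs, DHT keys, or topics.
type Mode string

func chatProtocol(m Mode) protocol.ID {
	return protocol.ID(fmt.Sprintf("/kiyeovo/%s/chat/1.0.0", m))
}

// newStack wires a host, a DHT, and a pubsub router that are all scoped
// to a single mode. A real implementation would also scope on-disk storage.
func newStack(ctx context.Context, m Mode) (host.Host, *dht.IpfsDHT, *pubsub.PubSub, error) {
	h, err := libp2p.New() // relay/DCUtR and Tor transport options omitted for brevity
	if err != nil {
		return nil, nil, nil, err
	}

	// Separate DHT namespace per mode: record keys and queries never overlap.
	kad, err := dht.New(ctx, h, dht.ProtocolPrefix(protocol.ID("/kiyeovo/"+string(m))))
	if err != nil {
		return nil, nil, nil, err
	}

	// GossipSub router; topics joined through it are also prefixed with the mode.
	ps, err := pubsub.NewGossipSub(ctx, h)
	if err != nil {
		return nil, nil, nil, err
	}

	// Direct-message streams use a mode-specific protocol ID.
	h.SetStreamHandler(chatProtocol(m), func(s network.Stream) { /* decrypt, ACK */ })

	return h, kad, ps, nil
}
```

With topics joined as, say, `ps.Join("/kiyeovo/" + string(m) + "/group/" + groupID)`, even identical group IDs in the two modes resolve to distinct pubsub topics.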
Messaging works peer-to-peer when both parties are online, but falls back to DHT "offline buckets" when one of them is not. To ensure robustness, messages are ACK-ed and deleted after being read.
Group chats use GossipSub for realtime messaging. Group messages are also saved to offline buckets in order for offline users to be able to read them upon logging in. Kick/Join/Leave events are also propagated using the DHT. Group metadata and all offline data is of course encrypted.
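Again under the libp2p assumption, the store-and-forward "offline bucket" flow could be sketched with plain DHT records: the sender writes an encrypted blob under a recipient-scoped key, the recipient fetches it on login, ACKs, and the record is tombstoned. Key layout and helper names below are my own illustration, not Kiyeovo's actual scheme.

```go
package offline

import (
	"context"
	"fmt"

	dht "github.com/libp2p/go-libp2p-kad-dht"
)

// bucketKey builds a recipient-scoped DHT key. A real scheme would also
// include the mode prefix and a per-message sequence number.
func bucketKey(recipient, msgID string) string {
	return fmt.Sprintf("/kiyeovo/offline/%s/%s", recipient, msgID)
}

// StoreOffline parks the ciphertext in the DHT when the peer is unreachable.
// Note: PutValue on a custom namespace requires registering a record
// Validator on the DHT; that setup is omitted here.
func StoreOffline(ctx context.Context, kad *dht.IpfsDHT, recipient, msgID string, ciphertext []byte) error {
	return kad.PutValue(ctx, bucketKey(recipient, msgID), ciphertext)
}

// FetchAndAck runs on login: read the buffered message, hand it to the chat
// layer, then overwrite it once the read is acknowledged.
func FetchAndAck(ctx context.Context, kad *dht.IpfsDHT, recipient, msgID string) ([]byte, error) {
	ct, err := kad.GetValue(ctx, bucketKey(recipient, msgID))
	if err != nil {
		return nil, err // nothing buffered yet, or not yet replicated ("eventually consistent")
	}
	// The DHT has no true delete; in practice the record is replaced with an
	// ACK/tombstone marker and allowed to expire.
	_ = kad.PutValue(ctx, bucketKey(recipient, msgID), []byte("ack"))
	return ct, nil
}
```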
Other features: chats are E2E encrypted, file sharing is supported, and 1:1 audio/video calls are supported (fast mode only, via WebRTC).
Tradeoffs: Tor adds noticeable latency; offline delivery is not immediately guaranteed, but rather "eventually consistent"; the beta version does not have group calls yet.
I'd appreciate feedback; that's why I posted this as a beta version.
In one sentence: we built a free product that allows patients and providers to upload medical charts in PDFs and automatically identify issues before they turn into treatment mistakes or denied insurance claims.
Create a free account:
Step-by-step interactive demo for patients:
https://app.arcade.software/share/snXezdUhG0zGh5JxZNeB
Step-by-step interactive demo for providers:
https://app.arcade.software/share/bxuXt2mwsz2UabvghFFK
A year ago, we initially launched our demo on HN (https://news.ycombinator.com/item?id=44063000), and since then, we've developed a full audit product that integrates with Electronic Medical Records (EMR) systems via API across multiple clinics and hospitals.
Why is this important?
Medical records are crucial, but they are also susceptible to mistakes. As the Office of the National Coordinator for Health Information Technology notes, reviewing your own records is vital because "you may have forgotten to tell your healthcare provider something or they may have forgotten to write it down. The staff in your provider’s office are busy people who make mistakes just like everyone else."

For Patients: Reviewing your records helps ensure their accuracy, which can prevent potential issues—for example, an empty "Allergies" field could be disastrous in an emergency room visit. The Health Insurance Portability and Accountability Act (HIPAA) guarantees your right to review and request corrections for errors you find. Our free tool allows anyone, regardless of medical knowledge, to flag potential discrepancies (like a mismatch in date of birth, medications, or allergies) to discuss with their provider. (See the official government guide: https://healthit.gov/get-it-check-it-use-it/check-it/ and this article: https://californiahealthline.org/news/check-your-medical-rec...). As an example, I (Dmitry) ran the analysis on 800 pages of my own 5-year medical history and identified three issues to discuss with my primary care physician: mismatched medications, allergies, and outdated vaccinations.

For Healthcare Providers: The product is a significant time-saver, automating the quality and compliance review that often takes dozens of hours of manual "scrubbing" each month. The free version includes two specialized rule libraries for providers: acute care and Substance Use Disorder treatment.
OUR ASK:
We value community feedback on what we're building, especially from healthcare providers. We want to demonstrate how efficient and time-saving automated review is compared to manual chart scrubbing.
To help the doctors and nurses in your network save valuable time that they could be spending with patients, please share the link to our provider-specific interactive demo with them.
https://app.arcade.software/share/bxuXt2mwsz2UabvghFFK
Thank you.
Feedback welcome, especially from legal researchers or comparative law folks.
Source and pipeline: github.com/joaoli13/constitutional-map-ai
The basic problem: quantum hardware is here and already competitive on certain optimization problems, but for most people, there's no way to access it. The machines cost millions and the hardware and research are gated by the companies who own them.
Also, quantum providers regularly have machines sitting idle because demand isn't consistent, and that's a problem because many architectures need to be cooled near absolute zero and can't just be turned off. There's currently no equivalent of spinning up an on-demand cloud instance for quantum compute.
So we're building one. Quip.Network is a spot clearinghouse and marketplace where quantum providers contribute excess capacity, developers deploy their best solvers to an open library, and anyone can submit a workload and get a result without needing to own or understand the hardware. Classical operators (CPUs, GPUs, TPUs) can also participate in solving and verifying.
The first quantum subnet was built in close collaboration with D-Wave, the world's leading quantum computing company. It focuses on optimization problems, the kind that appear across finance, logistics, and manufacturing. It runs on annealing QPUs and has demonstrated competitive performance on solution quality, speed, and energy cost relative to classical computing approaches. The mining protocol is designed around these benchmarks, so participants compete to find better solutions.
We had about 13,000 signups before launch. The codebase is fully open source because we think quantum advantage should be a verifiable result, not a marketing claim. We want people running nodes, challenging our implementations, and submitting proofs of work optimized for their own hardware.
Unlike GPU clusters, where one more processor is a linear improvement, the value of adding just one more QPU to your cluster is exponential. Being another centralized provider like AWS, GCP, or IBM won't be enough: to solve the toughest problems, we'll want to connect together every processor on Earth and have them operate as one giant quantum system. That's why we think a distributed system is the right approach, and why our mission is to build the worldwide quantum computer.
Happy to answer anything!
Docs: quip.gitbook.io/docs | GitHub: github.com/quipnetwork
Rewrote it from scratch in Go. The entire thing is a single binary with no external dependencies:
1. Certificate generation uses Go's crypto/x509 (no OpenSSL)
2. Certificates are generated in memory and streamed directly — nothing is stored on the server
3. RSA 2048/4096 and ECDSA P-256/P-384
4. Subject Alternative Names (required by browsers since Chrome 58)
5. ZIP (PEM files) or PFX/PKCS#12 output
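This isn't the tool's actual source, but as a rough illustration of items 1 to 4, here is a minimal stdlib-only sketch of generating a self-signed ECDSA P-256 certificate with SANs entirely in memory, ready to be streamed to the client as PEM:

```go
// Illustrative sketch only, not the tool's code: self-signed cert with SANs,
// generated in memory with crypto/x509 and never written to disk.
package certsketch

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"time"
)

// GeneratePEM returns PEM-encoded certificate and private key bytes.
func GeneratePEM(hosts []string) (certPEM, keyPEM []byte, err error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber:          big.NewInt(time.Now().UnixNano()),
		Subject:               pkix.Name{CommonName: hosts[0]},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		BasicConstraintsValid: true,
		IsCA:                  true,
		DNSNames:              hosts, // Subject Alternative Names (required by modern browsers)
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	keyDER, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
	return certPEM, keyPEM, nil
}
```

Swapping the key type for RSA 2048/4096 or ECDSA P-384, or packaging the result as PKCS#12 instead of PEM, changes only the key generation and the final encoding step.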
Your comments / suggestions / bug reports are very welcome. Thanks.
As a senior SWE at Twenty.com (an open-source CRM), I ran into this pain quite often.
Every time I needed to check something in Postgres, I had to wait 30 seconds for DBeaver to load or fight pgAdmin's UI. So I built Paul. Granted, our database configuration has too many schemas (around 3,000) for those clients, but that was never Postgres's fault, only the client's for not handling it.
Paul is a native macOS app, lightweight (<20MB), and it opens in 2 seconds. You can imagine how fast that feels compared to the 5 minutes I had to wait when opening DBeaver...
I did not go very deep in the DBA features, nor in the UI. I kept Paul simple: you can browse tables, filter them, and sort them.
A few distinctions:
- Paul is read-only by default: you have to explicitly switch to edit mode in the settings to allow INSERT, UPDATE, or DELETE. This makes it safe to point at production.
- I added a read-only agent mode to interact with the database faster, without SQL knowledge. Nothing fancy: it is basically a wrapper around the OpenAI and Anthropic SDKs. Still useful for the SQL constructs I don't use often.
It's free, of course, with no account required. Works offline.
I'd love feedback on what's missing or what could be better. This is a solo project and I'm building it for me first, but I'm open to adding features if anyone feels they could use it.
The result: open-agent-sdk — a drop-in replacement for claude-agent-sdk that's fully open source and doesn't spawn a CLI subprocess.
Why this matters if you've built with claude-agent-sdk:
claude-agent-sdk is just a thin wrapper around the Claude Code binary. It works, but it's a black box — when something breaks, you're stuck.
Every query creates a new Claude Code process. That's fine on a laptop, not fine when you're running thousands of concurrent agents in the cloud.
What open-agent-sdk does differently:
- Pure function calls, no CLI process spawning — cloud-native from day one
- Fully compatible interface with claude-agent-sdk — swap the package name, done
- MIT licensed — fork it, patch it, make it yours
I've made a Dis virtual machine and Limbo programming language compiler (called RiceVM) in Rust. It can run Dis bytecode (for example, Inferno OS applications), compile Limbo programs, and includes a fairly complete runtime with garbage collection, concurrency features, and many of the standard modules from Inferno OS's original implementation.
The project is still in an early stage, but if you're interested in learning more about RiceVM or trying it out, you can check out the links below:
Project's GitHub repo: https://github.com/habedi/ricevm
RiceVM documentation: https://habedi.github.io/ricevm/
I built a SvelteKit + Tauri writing app that treats revision as a first-class concept. You can fork any sentence or passage into branches, switch between versions inline, and keep everything in one document instead of maintaining separate draft files.
It's different from Git in that you can try different combinations of these branches, and the branches can be nested infinitely.
The editor component is built with CodeMirror, and I put a lot of effort into robust data persistence, backed by SQLite.
There's also optional AI feedback if you bring your own API key, but it's not the point of the app (enable it in settings).
Still in beta and rough around the edges. Would love feedback!
A pattern we keep seeing: products add AI agents that write, edit, and approve things. Human actions get logged. Agent actions don't. Same workflow, different accountability.
We shipped Activity Logs to fix this.
Same record for humans and AI agents. Immutable by default. Auto-captures collaboration events, plus createActivity() for your own: https://velt.dev/activity-logs
Curious how others are handling this.
I've always loved ergonomic split keyboards, but I hated the desk clutter. Most splits on the market either require that annoying TRRS bridge cable between the two halves, or if they are wireless, they are often housed in bulky 3D-printed or plastic cases.
I wanted something that felt premium like a custom mechanical keyboard but was light enough to throw in a backpack. So, I built the Elytra.
A few technical details:
Truly Wireless: No cables between the halves or to the computer.
Material: CNC-machined aluminum chassis. We used a biomimetic design on the underside to shave off weight without losing structural integrity.
Weight: It weighs only 420g (0.93 lbs), which is weirdly light for a full-metal board.
Firmware: Powered by ZMK.
I would love to hear your feedback on the layout, the industrial design, or any questions about the manufacturing process. Happy to answer anything!
Generous free tier (happy to give more credits, lmk) + super low pricing. Lmk any thoughts or improvement ideas :)
But what if a device could be defined just by answering questions?
Instead of writing code, you describe:
- what the device should do
- under what conditions it is allowed
- what should never happen
From those answers, a real Matter device is generated (ESP32-C6).
An action-first system where devices are defined, validated, and then executed.
It’s not just no-code.
Devices are not programmed — they are defined.
GitHub: https://github.com/anna-soft/Nemo-Anna
If you want to try the flow directly: https://anna.software
I’d really appreciate feedback, especially on: - whether this model makes sense - where it breaks in real-world use - how it could work with AI controlling physical devices
I'm looking for a technical audit of my memory decay logic in core/dream.ts. With 100 nodes already active, I want to ensure the consolidation won't bottleneck at scale.
I (ironically, depending on your penchant) used an AI judge to score every top HN post and its top comments going all the way back to 2010 (over 300,000 individual judgments!), and created a nice interface with historic charts.
The scoring is based on how much disgruntled AI pessimism the post and top comments show. You can use that score to either show only (in Doom Only mode) or hide (in Calm mode) all the doom static.
This is part joke and part an attempt to satisfy my own curiosity about whether my feeling is real that, over the past year, posts and comments have grown increasingly negative about AI and fixated on its worst outcomes.
The trend is stark, and the analysis reinforced my intuition: disgruntled pessimism about AI is at an all-time high in the last few weeks and on track to keep doubling annually, as it has since ChatGPT was released.
Historic charts are at bottom of the page.
Most extractors are either fast but lose structure (markitdown, pymupdf4llm) or accurate but slow (docling). Ours ties with docling on accuracy but is orders of magnitude faster.
https://github.com/pspdfkit/pdf-to-markdown
We'd love feedback on it, and ofc send us files that break it.
I built a simple tool to download Instagram Reels without login.
I noticed many tools are full of ads, so I made a cleaner version.
Would love feedback: https://downreels.com
Thanks!
I saw where ChatGPT Checkout was heading, another proprietary marketplace where they control which sellers get access and priority. More gatekeeping, more rent-seeking. It was clear Shopify stores would get preferential treatment.
So I built "the / marketplace," a fully open-source alternative with a flexible adapter system. It already supports Openfront (the Shopify alternative I’ve been developing), but it’s designed to work with any e-commerce platform, not just the ones OpenAI decides to favor. You can bring your own OpenRouter key or connect to Ollama locally, so your shopping data stays private and under your control.
Right now it has 2 demo shops, Impossible Tees and Nimbus Gallery.
As we launch more Openfronts for restaurants, gyms, and other verticals, those should be able to plug into the chat.
Demo: https://marketplace.openship.org
Repo: https://github.com/openshiporg/marketplace
Learn more: https://openship.org/products/marketplace
That said, we built the tool without knowing anything about SEO; we just wanted to experiment and see what could be done.
Digging more into the SEO space now, I can't decide whether to:
- accept that SEO is dying, abandon ship, and focus on other priorities, or
- double down and empower the people still doing SEO reports to save time.
Personally, I enjoyed making the tool, but I don't think I want to be an SEO consultant.
This kind of tool plays into the edge-case-heavy work that Marc Andreessen recently tweeted about: AI excels at scaling across the messy long tail of problems that humans fatigue on. By analyzing far more data, it goes well beyond what a human could do in the same amount of time.
What do you think? Any feedback is greatly appreciated!
www.easyseo.online