The current sky at your approximate location, as a CSS gradient
Source code and additional information is available on GitHub: https://github.com/dnlzro/horizon
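The core idea, in a minimal sketch (not the actual horizon code; the palettes and blending below are illustrative assumptions): estimate the sun's elevation for your location and time, then blend gradient stops between night, twilight, and day palettes.

```typescript
// Minimal sketch: pick sky colors from sun elevation, emit a CSS gradient.
// NOT the actual horizon code; palettes and blending are illustrative only.

type RGB = [number, number, number];

// Assumed palettes: [zenith, horizon] colors for night, twilight, day.
const palettes: Record<"night" | "twilight" | "day", [RGB, RGB]> = {
  night:    [[5, 10, 30],    [20, 30, 60]],
  twilight: [[40, 50, 100],  [240, 120, 80]],
  day:      [[80, 150, 255], [200, 230, 255]],
};

const lerp = (a: RGB, b: RGB, t: number): RGB =>
  a.map((v, i) => Math.round(v + (b[i] - v) * t)) as RGB;

// elevationDeg: the sun's altitude above the horizon, e.g. from a
// solar-position library or your own astronomy math.
function skyGradient(elevationDeg: number): string {
  // Map elevation to a blend position: -12 deg is full night,
  // 0 deg is twilight, +12 deg is full day.
  const t = Math.max(-1, Math.min(1, elevationDeg / 12));
  const [from, to] = t < 0
    ? [palettes.night, palettes.twilight]
    : [palettes.twilight, palettes.day];
  const f = t < 0 ? 1 + t : t; // blend factor between the two palettes
  const zenith = lerp(from[0], to[0], f);
  const horizon = lerp(from[1], to[1], f);
  return `linear-gradient(to bottom, rgb(${zenith.join(",")}), rgb(${horizon.join(",")}))`;
}

// Usage: document.body.style.background = skyGradient(-6); // civil twilight
```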
This is an alpha release with a basic UI as we focus on testing core functionality. Try it out, share your link, and experience raw, honest, and clean anonymous messaging like never before.
To test the moderation, you can send messages to me at https://subrosa.vercel.app/martianmanhunter
Relevant links:
https://subrosa.vercel.app/ : Homepage
https://subrosa.vercel.app/signup : Create an account
https://subrosa.vercel.app/login : Log in
https://subrosa.vercel.app/dashboard : Where you can see the messages you received
https://subrosa.vercel.app/[username] : Your personal link that you can post on your socials etc. to attract messages
P.S. Please don't share personal or sensitive information.
So I built a free site, Findeaze, that connects people headed to the same city (often for school or work) so they can plan the move, housing, and commute together rather than having to do all of it alone.
It’s still early, so the community is small. If you try it now, you might not instantly find a match. But every post helps the network grow and makes it easier for the next person to connect.
If you try it, please let me know what works well and what I could improve.
I made a tool to practice System Design for technical interviews.
I am currently preparing for System Design interviews, and honestly this is not my forte. Since I am also trying to keep my programming skills sharp and add more products to my portfolio, I thought I could build a tool to help other people practice this kind of interview as well.
It seems most people preparing for such interviews are using:
* videos of recorded practice interviews
* books (especially Designing Data-Intensive Applications)
* mock interviews with FAANG engineers ($$)
The goal of this tool is to give the user a whiteboard, then use AI (Gemini) to assess the submission against metadata about the problem that guides the LLM on what a good solution should look like; a rough sketch of that metadata is below.
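For illustration, the per-problem metadata might look something like this (a simplified, hypothetical shape, not the tool's exact schema):

```typescript
// Hypothetical shape of the per-problem metadata used to steer the LLM.
// Field names are illustrative, not the tool's actual schema.

interface ProblemMetadata {
  title: string;                 // e.g. "Design a URL shortener"
  functionalRequirements: string[];
  expectedComponents: string[];  // e.g. ["load balancer", "cache", "key-value store"]
  commonPitfalls: string[];      // things a weak design tends to miss
  scaleHints: string;            // e.g. "100M writes/day, 10:1 read/write ratio"
}

// The whiteboard submission plus metadata become one grading prompt.
function buildPrompt(meta: ProblemMetadata, submission: string): string {
  return [
    `You are grading a system design submission for: ${meta.title}`,
    `Requirements: ${meta.functionalRequirements.join("; ")}`,
    `A good solution usually includes: ${meta.expectedComponents.join(", ")}`,
    `Watch for these pitfalls: ${meta.commonPitfalls.join("; ")}`,
    `Scale assumptions: ${meta.scaleHints}`,
    `Submission (serialized whiteboard):\n${submission}`,
    `Return strengths, gaps, and a score out of 10.`,
  ].join("\n\n");
}
```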
Not selling anything: there is no paywall, no login, no email collection, etc. For now, I'm just trying to see if someone else finds this useful and if this idea has legs. If it doesn't, I'll open-source it (the code needs some cleaning).
Hope someone finds this useful: https://system-design-6m8.pages.dev/
Any feedback is appreciated!
PS: There are a few bugs that I am working on; notably, the anchoring of edges can be buggy at times, but this is only a display issue.
PPS: Right now, it's using Gemini free tier.
Links
- README: https://github.com/runtime-org/runtime/blob/main/README.md
- Skills guide: https://github.com/runtime-org/runtime/blob/main/SKILLS.md
Why did I build it?
I was using browser automation for my own work, but it got slow and expensive because it pushed huge chunks of a page to the model. I also saw agent systems like browser-use that try to stream the live or processed DOM and “guess” the next click. It looked cool, but it felt heavy and flaky.
I asked a few friends what they really wanted from a browser that could do some of their work, like repetitive tasks. All three said: “I want to teach my browser, or just explain to it, how to do my tasks.” Also: “Please don’t make me switch browsers; I already have my extensions, theme, and setup.” That’s where Runtime came from: keep your browser, keep control, make automation predictable.
Runtime takes a task in chat (I’m open to rethinking the user experience of conversing with Runtime), then runs a short plan made of skills. A skill is a set of functions: it has inputs and an expected output. Examples: “search a site,” “open a result,” “extract product fields,” “click a button,” “submit a form.” Because plans use skills (not whole pages), prompts stay tiny and the process stays deterministic and fast. A sketch of what a skill might look like is below.
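To make that concrete, a skill might be declared something like this (a simplified illustration, not the real Runtime format; SKILLS.md has that):

```typescript
// Illustrative sketch of a typed skill: explicit inputs, explicit output.
// This is NOT the actual Runtime skill format; see SKILLS.md for that.

interface Skill<In, Out> {
  name: string;
  description: string; // what the planner reads when picking this skill
  run: (input: In) => Promise<Out>;
}

// Hypothetical "search a site" skill: the plan passes compact, typed
// arguments instead of shipping the whole page to the model.
const searchSite: Skill<{ query: string }, { resultUrls: string[] }> = {
  name: "search_site",
  description: "Search the current site and return result links",
  run: async ({ query }) => {
    const box = document.querySelector<HTMLInputElement>("input[type=search]");
    if (!box) throw new Error("search box not found"); // fail loudly and deterministically
    box.value = query;
    box.form?.submit();
    // A real skill would wait for navigation here; this sketch just
    // reads whatever result links are already present.
    const links = [...document.querySelectorAll<HTMLAnchorElement>("a.result")];
    return { resultUrls: links.map((a) => a.href) };
  },
};
```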
What’s different
- Uses your browser (Chrome/Edge, soon Brave). No new browser to install.
- Deterministic by design. Skills are explicit and typed; runs are auditable.
- Low token use. We pass compact actions, not the full DOM. Most importantly, we don’t take screenshots at all; we believe screenshots are unnecessary when selectors drive navigation.
- Human-in-the-loop. You can watch the steps and stop/retry anytime.
Who it's for
People who do research/ops on the web: pull structured info, file forms, move data between tools, or run repeatable flows without writing a full RPA script or using any API. It’s just “runtime run at runtime.”
Try this first (5–10 minutes)
1. Clone the repo and follow the quickstart in the README.
2. Run a sample flow: search → open → extract fields.
3. Read `SKILLS.md`, then make one tiny skill for a site you use daily.
What’s not perfect yet
Sites change, and skills have to change with them; we will post about how we plan to address this.
I’d love to hear where it breaks.
Feedback I’m asking for
- Is the skills format clear? Does being declarative help?
- Where does the planner over- or under-specify steps?
- Which sites should we ship skills for first?
Happy to answer everything in the comments, and would love a teardown. Thanks!
To make it more engaging, we added features like interactive 3D achievement badges that you earn as you participate. We also integrated AI that lets you create a unique profile picture just by typing a description of what you want. Behind the scenes, AI also helps keep the discussions constructive.
The platform is live, and as it's open-source, we are constantly working on improving it. We would love to hear your feedback!
Live: https://www.goat.uz
Repository: https://github.com/umaarov/goat-dev
I figured using a web tool for markdown to epub would save me a click.
Perhaps you'll find it useful.
(Code was written by GPT-5.)
I’ve been working on SentiCall, an AI-powered call assistant that helps handle phone calls more effectively. It’s designed for people who deal with frequent meetings, international calls, or need real-time note-taking.
What it does:
1. Real-time transcription of the other person’s speech
2. Instant translation for cross-language conversations
3. AI-generated smart reply suggestions you can send or have spoken for you
4. Post-call summaries to capture key points
Why I built it: I often found myself struggling to both listen and take notes during calls, especially when dealing with different languages. Existing solutions were either too slow, too manual, or didn’t integrate well with real-time conversation.
Stack: Flutter for cross-platform UI, Rust for backend. OpenAI for LLM, Google Cloud for STT/TTS.
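Conceptually, the per-utterance loop ties these services together. A sketch (in TypeScript for brevity, with hypothetical function names; the real backend is Rust):

```typescript
// Conceptual per-utterance pipeline; all names here are hypothetical
// stand-ins, not SentiCall's actual API. The real backend is Rust.

// Stubs standing in for the real service calls:
declare function speechToText(audio: ArrayBuffer): Promise<string>;      // Google Cloud STT
declare function translate(text: string, lang: string): Promise<string>; // cross-language calls
declare function suggestReplies(text: string): Promise<string[]>;        // OpenAI LLM

async function handleUtterance(audio: ArrayBuffer, targetLang: string) {
  const transcript = await speechToText(audio);               // 1. real-time transcription
  const translated = await translate(transcript, targetLang); // 2. instant translation
  const replies = await suggestReplies(translated);           // 3. smart reply suggestions
  return { transcript, translated, replies };                 // streamed back to the Flutter UI
}
```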
Currently on the App Store. Would love feedback on:
- Features you think are missing for productivity-focused calls
- Any privacy/security concerns you’d like to see addressed