Daily Show HN


Show HN for November 2, 2025

14 items
26 points

Auto-Adjust Keyboard and LCD Brightness via Ambient Light Sensor [Linux] #

github.com
2 comments · 11:03 AM
I use Linux day to day as my OS, so I have always wanted it to have cool features. One feature I had long wanted to implement properly is automatically adjusting the keyboard and LCD backlights using data from the ambient light sensor.

I enjoy low-level programming a lot, so I wrote the program in C. It came out well and works seamlessly on my device. Currently it only controls the keyboard backlight, but I designed it so that LCD support can be added seamlessly later.

In the real world, though, people have all kinds of devices, so I made sure to follow the kernel's IIO interface as exposed through sysfs. I would like feedback. :)
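
For context, the sysfs plumbing involved looks roughly like the sketch below. It's Python rather than the actual C implementation, and the attribute and device names are assumptions that vary by hardware: the sensor may expose in_illuminance_input instead of in_illuminance_raw, and the LED name depends on the vendor driver.

```python
# Rough sketch of the IIO/sysfs interface the tool builds on (not the C code itself).
# Needs root or appropriate udev rules to write the brightness file.
import glob

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

# Ambient light sensor exposed by the kernel IIO subsystem
als = glob.glob("/sys/bus/iio/devices/iio:device*/in_illuminance_raw")[0]
# Keyboard backlight LED (e.g. tpacpi::kbd_backlight, asus::kbd_backlight)
led = glob.glob("/sys/class/leds/*kbd_backlight*")[0]

lux = read_int(als)
max_level = read_int(led + "/max_brightness")

# Darker room -> brighter keyboard backlight; the 400 lux cutoff is arbitrary here.
level = max_level - min(max_level, lux * max_level // 400)

with open(led + "/brightness", "w") as f:
    f.write(str(level))
```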

20 points

I built a smart blocker after destroying my dopamine baseline #

chromewebstore.google.com
6 comments · 5:47 PM
I'm a solo dev. A few years ago, I got addicted to Reddit. Spent months in that loop.

Being a programmer, I thought I'd be clever. Redirected Reddit's domain to nowhere in my DNS file. Worked great until I'd just... open the file and undo it 20 minutes later.

So I made it irreversible. Locked the DNS file so it can't be edited unless I boot my Mac in safe mode. And if I do that, there's a script that instantly locks it again. Haven't used Reddit since last year.

Problem solved, right?

Wrong. I just replaced Reddit with Twitch and YouTube. Started keeping streams running in the background while I coded. This went on for almost a year.

It killed my ability to focus. If you know about dopamine, you know your brain releases it when it wants you to repeat an activity. The constant background streams destroyed my dopamine baseline. When I tried to code without anything running one day, it felt genuinely weird. Hard problems that used to be interesting just felt like grinding.

I tried blocking Twitch and YouTube the same way I blocked Reddit. But I actually need YouTube for learning, and there are programmers on Twitch I learn from. I couldn't just nuke them entirely.

So I built something smarter.

The first version was terrible. Blocked things it shouldn't, let through things it should've blocked. Really buggy and annoying.

Then I added AI. I tell it what I'm working on, and it blocks anything unrelated to that task. This was the breakthrough. I need YouTube for tutorials, but I don't need 3-hour video essay rabbit holes. The extension knows the difference now.
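
As a sketch of that task-relevance idea (this is not the extension's actual code; it uses the OpenAI chat API as a stand-in for whatever model the extension calls, and the model name is only an example):

```python
# Hypothetical illustration of "block anything unrelated to the current task".
from openai import OpenAI

client = OpenAI()

def is_relevant(task, page_title, url):
    """Ask a model whether a page is related to the user's stated task."""
    prompt = (
        f"Current task: {task}\n"
        f"Page title: {page_title}\n"
        f"URL: {url}\n"
        "Answer with exactly one word: ALLOW if the page is relevant to the task, "
        "BLOCK otherwise."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("ALLOW")

# e.g. is_relevant("debugging a React rendering bug",
#                  "3-hour video essay on speedrunning history",
#                  "https://www.youtube.com/watch?v=...")
```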

It reminds me in the moment. Not after I've already wasted an hour. Right when I'm about to click into the distraction, it stops me and makes me think: "Is this what I'm supposed to be doing right now?"

The result: I actually enjoy hard problems again. Turns out I wasn't burned out, I'd just wrecked my brain's reward system.

Then I had to market this thing, so I started using Twitter. And oh boy, Twitter is addictive. You post something and wait for the notification to light up. I had to install my own extension on my Twitter Chrome profile. It's wild how effective it is when something reminds you "you're here to market, not scroll" right as you're about to fall into the feed.

It's still hard sometimes. Your brain will try to disable it. But having something that catches you in the moment before you lose an hour makes all the difference.

It's a Chrome extension, currently at beta v1.0.43: https://chromewebstore.google.com/detail/memento-mori/fhpkan...

It's free, no signup, no payment. Just install it.

Fair warning: it's still in beta. There will be bugs. But it works well enough that I use it daily, and it's helped me get my focus back.

Built this to fix my own problem. Figured other devs might be in the same boat.

Question for HN: Anyone else dealt with this? The programming with streams thing destroyed my focus for almost a year before I realized what was happening. What worked for you?

14 points

Give your coding agents the ability to message each other #

github.com
2 comments · 9:39 PM
I submitted this earlier but it didn’t get any traction. But it’s blowing up on Twitter, so I figured I would give it another shot here.

The system is quick and easy to set up and works surprisingly well. And it's not just a fun gimmick; it's now a core part of my workflow.

7 points

Open data reveals “100% renewable” UK energy isn’t really 100% #

matched.energy
1 comment · 11:52 AM
UK suppliers claim “100% renewable” even when selling fossil power at night.

Newly launched not-for-profit Matched Clean Power Index [1] shows, using open data, each supplier’s true renewable share hour by hour.

Built by a small team of engineers and energy analysts — including a former Tesla engineer — it combines half-hourly data from Elexon (demand), National Grid ESO (generation), and Ofgem (REGOs) to calculate the real renewable fraction for every major UK supplier. It's the first open dataset of its kind [2].

The data exposes a £1 billion-a-year distortion: consumers pay for “green” certificates that don’t align clean supply with demand. Redirecting that could fund storage and flexibility instead.

The best suppliers match 69–88% of their demand with real-time renewables — far better than today’s “100%” claims.
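
For a rough sense of what hour-by-hour matching means, here is a sketch of a common formulation of the calculation; the methodology link in [2] has the index's actual definition.

```python
# Toy illustration of time-matched renewable fraction, per half-hour settlement period.
def matched_fraction(renewable_supply, demand):
    """renewable_supply, demand: MWh per half-hour for one supplier's customers."""
    matched = sum(min(s, d) for s, d in zip(renewable_supply, demand))
    return matched / sum(demand)

# A supplier can be "100% renewable" on an annual-average basis (22 MWh of renewable
# supply against 20 MWh of demand) yet match only half of demand in real time:
print(matched_fraction([10, 0, 0, 12], [5, 5, 5, 5]))  # 0.5
```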

We’d love your thoughts on:

- Next features/datasets: storage, nuclear, or CO₂ intensity?

- API design: what endpoints or update cadence would be useful?

- Visualisation: how would you show renewable matching over time?

[1] https://matched.energy/clean-power-index

[2] https://matched.energy/methodology/v1

7 points

Carrie, for what Calendly can't do #

getcarrie.com
0 comments · 2:40 PM
Hey everyone,

Throughout my career, I've spent too many hours and too much mental energy on busywork like scheduling and following up on people's availability.

So, I built Carrie. You simply cc her into your emails, and she sorts out meeting times across time zones, finds what works best for everyone, confirms the meeting, and sends the invite. She handles scenarios beyond what Calendly can, and she's been freeing me from the back-and-forth of juggling different meeting requests.

I’ve been testing this with a beta group of users and am now looking to expand the user pool (please feel free to join the waitlist if you're interested).

Would also love feedback on whether this seems useful and what seems to be missing to make this part of your workflow. Thanks!

4 points

Chatolia – create, train and deploy your own AI agents #

chatolia.com
1 comment · 9:08 PM
Hi everyone,

I've built Chatolia, a platform that lets you create your own AI chatbots, train them with your own data, and deploy them to your website.

It is super simple to get started:

- Create your agent

- Train it with your data

- Deploy it anywhere

You can start for free; the free tier includes 1 agent and 500 message credits per month.

Would love to hear your thoughts,

https://www.chatolia.com

3 points

I made a YouTube thumbnail always display the latest comment [video] #

youtube.com
0 comments · 6:08 AM
Behind the scenes

1. I pull the latest comments using the YouTube data API.

2. A Next.js app using ImageResponse from next/og generates a new thumbnail image by overlaying the latest comment text on the original thumbnail.

3. I use the YouTube API to update the thumbnail for that video.

The text is also truncated to 65 characters, and I have comment filtering for the video set to "strict".

It all runs as a cron job every 15 minutes. I wanted it to run more frequently, but each thumbnail update costs ~50 credits, and YouTube only gives 10k API credits per day.

There's also an undocumented limit to the number of times you can change your thumbnail in a 24h period, which I hit the first day when running every 10 minutes. I slowed it down and also added a cache so it only updates if the comment has changed since the last run.
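
For reference, the YouTube Data API calls in steps 1 and 3 look roughly like the sketch below. It's Python with google-api-python-client rather than the actual Next.js code; OAuth setup is omitted, and the generated thumbnail is assumed to already exist on disk.

```python
# Sketch of steps 1 and 3; not the app's actual code.
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

VIDEO_ID = "HbUgsprjNVY"
# thumbnails.set needs OAuth credentials, not just an API key:
# youtube = build("youtube", "v3", credentials=creds)

def latest_comment(youtube):
    resp = youtube.commentThreads().list(
        part="snippet", videoId=VIDEO_ID, order="time", maxResults=1
    ).execute()
    text = resp["items"][0]["snippet"]["topLevelComment"]["snippet"]["textOriginal"]
    return text[:65]  # truncate to 65 characters, as described above

def set_thumbnail(youtube, image_path):
    # ~50 quota units per call, so only do this when the comment has changed
    youtube.thumbnails().set(
        videoId=VIDEO_ID,
        media_body=MediaFileUpload(image_path, mimetype="image/png"),
    ).execute()
```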

Link to video: https://www.youtube.com/watch?v=HbUgsprjNVY

Link to view thumbnail: https://www.youtube.com/@SmithOffGrid/search?query=HbUgsprjN...

=-=-=

It was pretty fun to build this out. Not the first to do something like this though.

Others I could find:

Tom Scott: https://www.youtube.com/watch?v=BxV14h0kFs0

Sean Hodgins: https://www.youtube.com/watch?v=FV2OqOJcQRc

MrBeast: https://www.youtube.com/watch?v=YSoJPA8-oHc

Hyperplexed: https://www.youtube.com/watch?v=6fAQ_y-1SxI

3 points

Jv 0.1 – Zero-runtime Java sugar language for Java 25 #

github.com
1 comment · 1:59 PM
I’ve just shipped the first public release of jv — a Kotlin-inspired sugar layer that transpiles directly to readable Java 25 (with Java 21 fallback) and depends on no runtime shim.

The toolchain is implemented entirely in Rust, focusing on performance and developer experience. Its UX is inspired by Python’s modern package manager uv, aiming for fast, intuitive, and clean CLI workflows.

The CLI ships as a cross-platform bundle with the stdlib baked in, auto-detects local JDK toolchains, and lets you override entrypoints for custom workflows.

On the language side, I’ve added generic function signatures, record component access, optional parentheses on zero-arg calls, richer string interpolation, and a smarter sequence pipeline that preserves element types.

Under the hood, a new Rowan-based front-end drives improved lowering so that when/switch expressions, range patterns, and inferred signatures compile cleanly to Java.

Feedback and questions welcome. More details → https://project-jvlang.github.io/en/ and https://github.com/project-jvlang/jv-lang

2 points

I built a tool to version control datasets (like Git, but for data) #

shodata.com
2 comments · 1:26 PM
Hey everyone,

As a founder, I've been frustrated for years with how my team manages datasets for ML. It always ends up as data_final_v3_fixed.csv in an S3 bucket or a massive Git LFS file that nobody understands.

So, I built Shodata. It’s an open platform (like GitHub) but built specifically for dataset workflows.

The core idea is simple: you upload a file, and a new version (v2, v3, etc.) is automatically created whenever you upload a new file with the same name. Every dataset gets a discussion board, a complete history, and clean previews and statistics for every version.
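
As a toy sketch of that name-based versioning convention (not Shodata's implementation; the hashing and in-memory store are just for illustration):

```python
# Same filename -> new version; different filename -> new dataset.
import hashlib
from collections import defaultdict

datasets = defaultdict(list)  # filename -> list of version records

def upload(filename, content):
    versions = datasets[filename]
    record = {
        "version": f"v{len(versions) + 1}",
        "sha256": hashlib.sha256(content).hexdigest(),
        "size": len(content),
    }
    versions.append(record)
    return record

print(upload("llm-hallucinations.csv", b"model,prompt,output\n"))     # v1
print(upload("llm-hallucinations.csv", b"model,prompt,output\n..."))  # v2
```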

To show how it works, I seeded it with a dataset I'm tracking: a log of LLM hallucinations. When I find new ones, I just upload the new file and it versions the dataset.

The platform is an MVP. It has a generous free tier (includes 3 personal private datasets & 10GB storage) and a single Pro plan that unlocks team/organization features (like Org creation and shared private datasets).

I’m looking for feedback from fellow engineers and ML folks on the workflow. Is this useful? What’s missing?

You can check out the platform here: https://shodata.com

And the LLM log dataset: https://shodata.com/shodata/llm-hallucinations

2 points

CommoWatch – Alerts for Commodity Prices (Gold, Oil, Wheat, etc.) #

4 comments · 1:23 PM
Hey HN

I’m a solo developer building CommoWatch, a minimal web app to track commodity prices and get alerts when they hit your target.

The idea is simple:

- You pick the commodities you care about (gold, oil, wheat, gas...)

- You set the prices you want to be notified at

- You get an email or SMS when it happens.

It’s meant to be compact, fast, and useful for traders, investors, or even small business owners who follow material costs.

I’m starting small — just a few commodities, hourly updates, and email alerts first — to validate if people actually find it useful.
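
To make the alert logic concrete, here is a minimal sketch of an hourly check (hypothetical helper names; not the app's code):

```python
# Each watch: {"email": ..., "commodity": "gold", "target": 2500.0,
#              "direction": "above", "fired": False}
def check_alerts(watches, fetch_price, send_email):
    for w in watches:
        if w["fired"]:
            continue  # notify once per target, not on every hourly run
        price = fetch_price(w["commodity"])
        hit = price >= w["target"] if w["direction"] == "above" else price <= w["target"]
        if hit:
            send_email(w["email"], f'{w["commodity"]} hit {price} (target {w["target"]})')
            w["fired"] = True
```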

If this sounds interesting, you can join the waitlist here: https://getwaitlist.com/waitlist/31756

I’d love to hear your thoughts: What feature would make this genuinely useful to you? Or what do you think most people following commodity prices actually need?

Thanks!

1

Polym – App for knowledge retention and recall. Remember what you learn #

polymapp.com
0 comments · 6:14 AM
Polym is a mobile app to improve retention and recall of foundational knowledge, such as logic, math, psychology, economics, computer science, philosophy, history, and more.

For those always learning, I think the comparison below articulates how Polym can help:

Current Lifelong Learner → Aspiring Polymath:

- Re-reads and reviews hastily compiled notes → Reviews expert-crafted notes and active learning sets

- Reviews niche and/or soon-to-be-outdated concepts → Masters foundational knowledge, such as logic, economics, math, and psychology

- Spends hours taking notes and creating flashcards → Focuses on learning, since notes and cards are taken care of

- Engages in infrequent text-based active learning → Engages in multimodal retrieval practice using spaced repetition