Daily Show HN


Show HN for November 9, 2025

16 items
57

Geofenced chat communities anyone can create

vicinity.social
61 comments · 2:59 AM · View on HN
Hi HN

I built a location-based chatroom with Discord-like servers. This started as a portfolio project to learn WebSockets but has spiraled into something else entirely.

How it works.

There are two features right now:

Drops - Single chatrooms that are visible only within a specified radius and last for a user-chosen duration of up to 48 hours.

Hubs - Geofenced servers modeled after Discord. These are not time-restricted. Anyone can create one, and the creator becomes the admin, able to add channels and set rules. When a user enters the location’s area, they can join the hub and continue seeing messages even after leaving. Hubs cannot overlap, so once one exists in an area, another cannot be created on top of it. A hub persists as long as it is actively used; if unused for two weeks, it is deleted. (I’m still implementing deletion, so it isn’t mentioned on the landing page yet.)
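The geofencing above reduces to a point-in-radius check on the server. A minimal sketch in Python (illustrative only; the post doesn't say how the backend implements it):

```python
import math

def within_radius(lat1, lon1, lat2, lon2, radius_m):
    """True if two points are within radius_m meters (haversine distance)."""
    R = 6371000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a)) <= radius_m

# A user ~500 m away can see a drop with a 1 km radius, but not one with 100 m.
print(within_radius(40.7128, -74.0060, 40.7173, -74.0060, 1000))  # True
print(within_radius(40.7128, -74.0060, 40.7173, -74.0060, 100))   # False
```

The hub-overlap rule could reuse the same check: reject a new hub if its circle intersects any existing hub's circle.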

Why I built this.

I do not like the feel of most social media anymore, but I really like my university’s Discord server. I wanted something more general that provided similar interactions, and I thought a social app tied to location might work.

I think if it is done right it can recreate the atmosphere that I liked. I thought a lot about what that atmosphere is. I think for social media to feel natural it needs a “third thing”: a shared interest or object that creates a connection between two people, or a neutral ground for communication.

Having something in common just makes the interactions better and more useful. I think location can serve as a general thing in common, especially if the servers are curated by locals. It could also be a good way for people to immediately connect in a new place.

Right now, I’m just having fun building this thing. I would honestly use it myself if other people were on it… and if it were built better, as a proper app.

Feedback

I’m looking for any feedback: what’s a good idea and what’s a bad idea? This is really just a prototype, so there are some rough edges, and I am actively working on it. If you find any bugs and feel like reporting them, please do. You can reach me at [email protected]

52

Pipeflow-PHP – Automate anything with pipelines even non-devs can edit

github.com
9 comments · 1:40 PM · View on HN
Hello everyone,

I’ve been building [Pipeflow-php](https://github.com/marcosiino/pipeflow-php), a PHP pipeline engine that automates anything, from content generation to backend and business-logic workflows, using modular core stages plus custom stages (which can do anything). The key feature: pipeline logic is defined in easy-to-read XML that anyone in a company, even non-developers, can understand, maintain, and edit.

It’s a *headless engine*: no UI is included, but it's designed to be easily wired into any backend interface (e.g. WordPress admin, CMS dashboard, custom panels), so *even non-developers can edit or configure the logic*.

It still needs improvements, more core stages, and more features, but I’m already using it on two websites I’ve developed.

In the future I plan to port it to other languages too.

Feedback (and even contributions) are appreciated :)

---

Why I built it

I run a site which every day, via a cron job:

- automatically generates and publishes coloring pages using complex logic and generative AI,

- picks categories and prompts based on logic defined in a pipeline,

- creates and publishes WordPress posts automatically, every day, without any human intervention.

All the logic is defined in an XML pipeline that’s editable via the WordPress admin panel (using a WordPress plugin I’ve developed, which also adds some WordPress-related custom stages to Pipeflow). A non-dev (like a content manager) can adjust this automatic content-generation logic, for example by improving it or by changing the themes/categories during holidays, without touching PHP.

---

What Pipeflow does

- Define pipelines in *fluent PHP* or *simple XML that is easy to understand (even for non-developers), directly from your web app’s admin pages*

- Use control-flow stages like `If`, `ForEach`, `For`

- Execute pipelines manually, via cron, or from any backend trigger that fits your business logic

- Build your own UI or editor on top (from a simple text editor to a node-based editor that outputs Pipeflow-compatible XML)

- Reuse modular “stages” (core and custom ones) across different pipelines
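To make the XML idea concrete, here is a hypothetical pipeline sketch. All stage names and attributes below are invented for illustration; see the repo for Pipeflow's actual core stages and schema:

```xml
<!-- Hypothetical pipeline: stage names are illustrative, not Pipeflow's real ones -->
<pipeline>
  <stage name="FetchCategories" source="wordpress"/>
  <foreach item="category" in="categories">
    <if condition="category.enabled">
      <stage name="GenerateColoringPage" prompt="A coloring page about {category.name}"/>
      <stage name="PublishPost" status="publish"/>
    </if>
  </foreach>
</pipeline>
```

The point is that a content manager can reorder stages or tweak attributes here without reading any PHP.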

19

Trilogy Studio – open-source browser-based SQL editor and visualizer

trilogydata.dev
6 comments · 11:26 PM · View on HN
SQL-first analytics IDE, similar to Redash/Metabase. It aims to solve reuse/composability at the code layer with a modified syntax, Trilogy, that builds a semantic layer directly into the SQL-like language.

Status: experiment; feedback and contributions welcome!

Built to solve 3 problems I have with SQL as my primary iterative analysis language:

1. Adjusting queries/analysis takes a lot of boilerplate. Solved with queries that operate on the semantic layer, not tables. This also eliminates the need for CTEs.

2. Sources of truth change all the time. I hate updating reports to reference new tables. Also solved by the semantic layer, since data bindings can be updated without changing dashboards or queries.

3. Getting from SQL to visuals is too much work in many tools, so make it as streamlined as possible. Surprise: also solved with the semantic layer. More expressive typing gives better chart defaults, and the layer wires up automatic drilldowns/cross-filtering.

Supports: BigQuery, DuckDB, Snowflake.

Links [1] https://trilogydata.dev/ (language info)

Git links: [Frontend] https://github.com/trilogy-data/trilogy-studio-core [Language] https://github.com/trilogy-data/pytrilogy

Previously: https://news.ycombinator.com/item?id=44106070 (significant UX/feature reworks since) https://news.ycombinator.com/item?id=42231325

6

Valid8r – Functional validation for Python CLIs using Maybe monads

github.com
0 comments · 10:29 PM · View on HN
I built Valid8r because I got tired of writing the same input validation code for every CLI tool. You know the pattern: parse a string, check if it's valid, print an error if not, ask again. Repeat for every argument.

The library uses Maybe monads (Success/Failure instead of exceptions) so you can chain parsers and validators:

  # Try it: pip install valid8r
  from valid8r.core import parsers, validators
  
  # Parse and validate in one pipeline
  result = (
      parsers.parse_int(user_input)
      .bind(validators.minimum(1))
      .bind(validators.maximum(65535))
  )
  
  # Success and Failure are valid8r's Maybe variants; import them per the docs
  match result:
      case Success(port): print(f"Using port {port}")
      case Failure(error): print(f"Invalid: {error}")
I built integrations for argparse, Click, and Typer so you can drop valid8r parsers directly into your existing CLIs without refactoring everything.

The interesting technical bit: it's 4-300x faster than Pydantic for simple parsing (ints, emails, UUIDs) because it doesn't build schemas or do runtime type checking. It just parses strings and returns Maybe[T]. For complex nested validation, Pydantic is still better. I benchmarked both and documented where each one wins.

I'm not trying to replace Pydantic. If you're building a FastAPI service, use Pydantic. But if you're building CLI tools or parsing network configs, Maybe monads compose really nicely and keep your code functional.

The docs are at https://valid8r.readthedocs.io/ and the benchmarks are in the repo. It's MIT licensed.

Would love feedback on the API design. Is the Maybe monad pattern too weird for Python, or does it make validation code cleaner?

---

Here are a few more examples showing different syntax options for the same port validation:

  from valid8r.core import parsers, validators

  # Option 1: Combine validators with & operator
  validator = validators.minimum(1) & validators.maximum(65535)
  result = parsers.parse_int(user_input).bind(validator)

  # Option 2: Use parse_int_with_validation (built-in)
  result = parsers.parse_int_with_validation(
      user_input,
      validators.minimum(1) & validators.maximum(65535)
  )

  # Option 3: Interactive prompting (keeps asking until valid)
  from valid8r.prompt import ask

  port = ask(
      "Enter port number (1-65535): ",
      parser=lambda s: parsers.parse_int(s).bind(
          validators.minimum(1) & validators.maximum(65535)
      )
  )
  # port is guaranteed valid here, no match needed

  # Option 4: Create a reusable parser function
  def parse_port(text):
      return parsers.parse_int(text).bind(
          validators.minimum(1) & validators.maximum(65535)
      )

  result = parse_port(user_input)
The & operator is probably the cleanest for combining validators. And the interactive prompt is nice because you don't need to match Success/Failure, it just keeps looping until the user gives you valid input.
3

Every-few-days satellite timeline for any spot, Sentinel-2 SR

mzoom.space
3 comments · 2:20 PM · View on HN
I built anicha.earth because I kept needing a fast, no-frills way to see how a place changes over time — not once a year, but every week or so.

Recently I worked on super-resolution for Sentinel-2 (about 8–10× upscaling) for an agriculture project. Along the way I realized two things: (1) this could be useful beyond ag, and (2) I couldn’t find a tool that lets you pick any area and quickly scrub through years of imagery. So I made one that’s as simple and as fast as I can make it.

Under the hood it uses Copernicus Sentinel-2 L2A (10 m/pixel). With S2A+B the nominal revisit is ~5 days (depending on clouds; with Sentinel-2C the cadence tightens further). For any area you select, the app gathers all available scenes since 2018 and shows them on the map and in a time strip for easy scrubbing.

There’s an AI-Enhanced view: a super-resolution model that pushes the imagery toward ~1–2 m. The model was trained on millions of satellite/aerial images, primarily open NAIP data.

This is an early beta and a bit rough.

What I’m most curious about now: where is this actually useful?

https://anicha.earth

2

DeepShot – NBA game predictor with 70% accuracy using ML and stats

github.com
1 comment · 12:19 AM · View on HN
I built DeepShot, a machine learning model that predicts NBA games using rolling statistics, historical performance, and recent momentum, all visualized in a clean, interactive web app.

Unlike simple averages or betting odds, DeepShot uses Exponentially Weighted Moving Averages (EWMA) to capture recent form and momentum, highlighting the key statistical differences between teams so you can see why the model favors one side.

It’s powered by Python, XGBoost, Pandas, Scikit-learn, and NiceGUI, runs locally on any OS, and relies only on free, public data from Basketball Reference. If you’re into sports analytics, machine learning, or just curious whether an algorithm can outsmart Vegas, check it out and let me know what you think: https://github.com/saccofrancesco/deepshot
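The EWMA mentioned above weights recent games more heavily than old ones. A minimal sketch of the recurrence (illustrative; not DeepShot's actual code, and the sample numbers are made up):

```python
def ewma(values, alpha=0.3):
    """Exponentially weighted moving average: recent values dominate.
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}
    """
    smoothed = values[0]
    out = [smoothed]
    for v in values[1:]:
        smoothed = alpha * v + (1 - alpha) * smoothed
        out.append(smoothed)
    return out

# Points scored over five games; the final value tracks the recent surge.
recent_form = ewma([102, 98, 110, 120, 125])
print(round(recent_form[-1], 1))  # 113.4
```

A simple rolling mean of the same series would lag further behind the last two games; the alpha parameter controls how quickly old games are forgotten.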
2

PyNIFE – 400-900× speedup for embedding-based retrieval pipelines

github.com
0 comments · 4:50 AM · View on HN
Hey HN,

I've been playing around with ways to make retrieval pipelines faster, and ended up building something I'm calling PyNIFE (Nearly Inference-Free Embeddings).

The idea is simple: train a static embedding model that's fully aligned with a bigger "teacher" model, so you can skip expensive inference almost entirely. In practice, that means 400-900× faster embedding generation on CPU, while still working with the same vector index and staying compatible with your existing setup.
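To picture why static embeddings are so fast: inference becomes a table lookup plus pooling instead of a transformer forward pass. A toy sketch of that inference side (illustrative; not PyNIFE's API, and the vectors here are made up):

```python
# Toy static embedding table; real ones are distilled to align with a teacher model.
STATIC_VECTORS = {
    "fast":   [0.9, 0.1, 0.0],
    "cheap":  [0.8, 0.2, 0.1],
    "search": [0.1, 0.9, 0.3],
}
DIM = 3

def embed(text):
    """Mean-pool per-token static vectors: no model inference at all."""
    vecs = [STATIC_VECTORS.get(tok, [0.0] * DIM) for tok in text.lower().split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

print(embed("fast search"))  # [0.5, 0.5, 0.15]
```

Because the output lives in (approximately) the teacher's vector space, the same ANN index can serve both the fast and the accurate model.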

You can even mix and match: use the original model for accuracy when you need it, and PyNIFE for ultra-fast lookups or agent loops.

It's still early, and I'd love feedback, especially on where this might break, what kinds of workloads you'd test it on, and any ideas for better evaluation or visualization.

Repo: https://github.com/stephantul/pynife

2

React Source Lens – Jump from UI components to source code in one click

npmjs.com
0 comments · 3:06 PM · View on HN
React Source Lens is a development tool that lets you inspect React component source code directly in the browser.

How it works:

1. Add useReactSourceLens() to your React app
2. Hover over any component in your app
3. Press Cmd+Shift+O (Mac) or Ctrl+Shift+O (Windows/Linux)
4. A modal appears with the file path and line number
5. Click "Open in VS Code" to jump directly to the source code

Key features:

- Visual overlay shows which component you're hovering over
- Works with React's debug information (development mode)
- Optional Babel plugin for enhanced source detection
- VS Code integration via the vscode:// protocol
- Toggle the overlay with Cmd+Shift+L

Installation:

  npm install react-source-lens

Usage:

  import { useReactSourceLens } from 'react-source-lens';

  // Basic usage
  useReactSourceLens();

  // With VS Code integration
  useReactSourceLens({ projectRoot: '/path/to/your/project' });

Why it matters: When working on large or unfamiliar React codebases, finding component definitions can be time-consuming. This tool eliminates that friction by providing instant source code navigation.

Built with modern React; works with Vite and Create React App.

2

Alignmenter – Measure brand voice and consistency across model versions

alignmenter.com
2 comments · 11:53 PM · View on HN
I built a framework for measuring persona alignment in conversational AI systems.

*Problem:* When you ship an AI copilot, you need it to maintain a consistent brand voice across model versions. But "sounds right" is subjective. How do you make it measurable?

*Approach:* Alignmenter scores three dimensions:

1. *Authenticity*: Style similarity (embeddings) + trait patterns (logistic regression) + lexicon compliance + optional LLM Judge

2. *Safety*: Keyword rules + offline classifier (distilroberta) + optional LLM judge

3. *Stability*: Cosine variance across response distributions
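The stability dimension can be read as the variance of pairwise cosine similarity across a model's responses. A toy sketch of that idea (illustrative; not Alignmenter's implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def stability(embeddings):
    """Variance of pairwise cosine similarity; lower = more consistent voice."""
    sims = [
        cosine(embeddings[i], embeddings[j])
        for i in range(len(embeddings))
        for j in range(i + 1, len(embeddings))
    ]
    mean = sum(sims) / len(sims)
    return sum((s - mean) ** 2 for s in sims) / len(sims)

# A tight cluster of response embeddings gives near-zero variance.
print(stability([[1.0, 0.0], [0.99, 0.01], [0.98, 0.02]]))
```

A model that drifts between registers produces a wider spread of pairwise similarities and therefore a higher variance score.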

The interesting part is calibration: you can train persona-specific models on labeled data. Grid search over component weights, estimate normalization bounds, and optimize for ROC-AUC.

*Validation:* We published a full case study using Wendy's Twitter voice:

- Dataset: 235 turns, 64 on-brand / 72 off-brand (balanced)

- Baseline (uncalibrated): 0.733 ROC-AUC

- Calibrated: 1.0 ROC-AUC, 1.0 F1

- Learned: Style > traits > lexicon (0.5/0.4/0.1 weights)

Full methodology: https://docs.alignmenter.com/case-studies/wendys-twitter/

There's a full walkthrough so you can reproduce the results yourself.

*Practical use:*

  pip install alignmenter[safety]

  alignmenter run --model openai:gpt-4o --dataset my_data.jsonl

It's Apache 2.0, works offline, and designed for CI/CD integration.

GitHub: https://github.com/justinGrosvenor/alignmenter

Interested in feedback on the calibration methodology and whether this problem resonates with others.

1

MainyDB – an embedded MongoDB-style database for Python

github.com
0 comments · 2:27 PM · View on HN
MainyDB is an embedded, file-based database for Python that brings MongoDB-style document storage and querying into a single .mdb file.

It’s lightweight, requires no external server, and works completely offline. You can use it either with its own Pythonic syntax or in PyMongo compatibility mode, meaning you can often switch from MongoDB to MainyDB by simply changing the import.

What It Does

MainyDB stores data in a single .mdb file and allows querying using Mongo-style operators ($gt, $in, $set, etc.) as well as aggregation pipelines ($match, $group, $lookup, ...). It supports async writes, thread-safe access, and can automatically handle binary data like images or videos through base64 encoding.
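To illustrate the Mongo-style operator semantics mentioned above, here is a from-scratch toy matcher for $gt, $in, and plain equality (an illustration of the query model, not MainyDB's code or API):

```python
def matches(doc, query):
    """Toy Mongo-style matcher: supports $gt, $in, and equality."""
    for field, cond in query.items():
        value = doc.get(field)
        if isinstance(cond, dict):  # operator form, e.g. {"$gt": 18}
            for op, arg in cond.items():
                if op == "$gt" and not (value is not None and value > arg):
                    return False
                elif op == "$in" and value not in arg:
                    return False
        elif value != cond:  # plain equality, e.g. {"name": "Ada"}
            return False
    return True

docs = [{"name": "Ada", "age": 36}, {"name": "Bob", "age": 17}]
print([d["name"] for d in docs if matches(d, {"age": {"$gt": 18}})])  # ['Ada']
```

In MainyDB the same query shapes run against collections stored in the .mdb file, and in PyMongo compatibility mode the call sites look like ordinary PyMongo code.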

PyPI: https://pypi.org/project/MainyDB

GitHub: https://github.com/dddevid/MainyDB

Key Features

Single-file storage (no server process)

Two syntax modes:
- Its own Pythonic syntax
- PyMongo compatibility (drop-in style)

Aggregation pipeline support ($match, $group, $lookup, etc.)

Thread-safe with asynchronous file writes

Built-in binary/media storage (auto-encoded)

Works entirely offline

Target Audience

MainyDB is designed for:

Developers building local or embedded Python apps

Prototyping tools, AI experiments, and desktop applications

Automation scripts that need persistent storage

Students and indie developers who want Mongo-style queries without setup overhead

Not intended (yet) for large-scale production systems; the focus is on simplicity, portability, and fast local development.

Comparison

| Feature | MainyDB | MongoDB | TinyDB | SQLite |
|---|---|---|---|---|
| Server required | No | Yes | No | No |
| Mongo syntax | Yes | Yes | No | No |
| Aggregation pipeline | Yes | Yes | No | No |
| Binary/media support | Built-in | Manual | No | No |
| File-based | Single .mdb | No | Yes | Yes |
| Thread-safe + async | Yes | Yes | Partial | Depends |

MainyDB aims to bridge the gap between MongoDB’s expressive document model and TinyDB’s simplicity, giving Python developers a true embedded NoSQL option.

Feedback

It’s an early-stage project. I’d love feedback, benchmarks, feature ideas, and critiques. Is this something you’d use for small apps or prototypes?

Repo: https://github.com/dddevid/MainyDB

PyPI: https://pypi.org/project/MainyDB

1

Spotify Live Banner – Real-Time 'Now Playing' Widget for GitHub READMEs

0 comments · 2:34 PM · View on HN
Hello HN,

I built and open-sourced Spotify-Live-Banner, a project designed to embed a dynamic, real-time image of the user's currently playing Spotify song into their online profiles (primarily GitHub READMEs).

*Technical Details:*

- Built as a minimalist web service using *Python (Flask)*.
- Authenticates via the Spotify API to pull live track data.
- Dynamically renders a customizable SVG image based on that data.
- Configured for easy, free deployment on Vercel.
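The core trick is rendering an SVG string from live track data and serving it with an image/svg+xml content type so GitHub embeds it as a picture. A minimal sketch of the rendering half (illustrative; function names and layout are assumptions, not the repo's actual code):

```python
def render_banner(track, artist, width=400, height=100):
    """Build a minimal 'now playing' SVG from live track data."""
    return f"""<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">
  <rect width="100%" height="100%" rx="8" fill="#1DB954"/>
  <text x="16" y="40" font-family="sans-serif" font-size="16" fill="white">{track}</text>
  <text x="16" y="68" font-family="sans-serif" font-size="12" fill="#e8ffe8">{artist}</text>
</svg>"""

# In Flask this would be returned with mimetype="image/svg+xml" and no-cache
# headers, so GitHub's image proxy re-fetches the current track.
svg = render_banner("Bohemian Rhapsody", "Queen")
print(svg.startswith("<svg"))  # True
```

Note that real track titles would need XML-escaping before interpolation; this sketch omits that for brevity.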

It's intended as a polished, customizable utility for developers looking to add dynamic content to their profiles.

I'd appreciate any technical feedback on the Flask implementation or the SVG rendering approach.

*Live Demo:* [https://spotify-live-banner.vercel.app](https://spotify-live-banner.vercel.app)

*GitHub Repo:* [https://github.com/SahooShuvranshu/Spotify-Live-Banner](https://github.com/SahooShuvranshu/Spotify-Live-Banner)