Posted by david927 11/9/2025
Ask HN: What Are You Working On? (Nov 2025)
Merton Probabilities of Default for historical public companies.
A project just for fun; I still have a couple of things to finish.
I plan to make the datasets public (everything except some raw market data, since vendors don't allow that), and I'm also about to add an explanation of what Merton PD is.
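For anyone unfamiliar with the model, the core calculation is small. Below is a minimal sketch of the textbook Merton distance-to-default / PD formula, not the project's code; the variable names and the one-year horizon are my own choices, and in practice asset value and asset volatility have to be backed out from equity data first.

    # Minimal sketch of the textbook Merton PD calculation (not the project's code).
    # It assumes asset value V and asset volatility sigma_V are already known; in
    # practice they are backed out from equity value and equity volatility by
    # solving the Merton equations iteratively.
    from math import log, sqrt
    from scipy.stats import norm

    def merton_pd(V, D, mu, sigma_V, T=1.0):
        """Probability of default over horizon T (in years).

        V       -- market value of the firm's assets
        D       -- face value of debt due at T (the default barrier)
        mu      -- expected asset drift (use the risk-free rate for risk-neutral PD)
        sigma_V -- annualised asset volatility
        """
        d2 = (log(V / D) + (mu - 0.5 * sigma_V ** 2) * T) / (sigma_V * sqrt(T))
        return norm.cdf(-d2)  # distance to default is d2; PD = N(-d2)

    # Example: assets worth 120 against 100 of debt, 5% drift, 25% volatility
    print(merton_pd(120, 100, 0.05, 0.25))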
https://reformeuropa.net/raea.html
It's currently at about 90% completion, but there are some subtleties that probably need to be worked out a bit more. The PDF linked from that page explains all the details (although just peeking at the charts on pages 4, 5, and 7 should be enough to start reading it). Both Alice in Wonderland and Dr Jekyll are fully transcribed into the reform if someone wants to jump in and see it in action. I'm certainly interested in thoughts and complaints about the system.
I'm also looking to play around with an improved SI unit system sometime soon, so if anyone has new ideas there too, I'd be very interested.
Examples of things to be touched on:
- Make g (not kg) the base mass unit, making 1 m^3 of water = 1 g.
- Make the comma the universal decimal separator.
State manager isn't there yet, but it's coming.
- I will not consider it feature-complete
- It might be a waste of time if the performance isn't what I imagined
Basically, I have a pain point with pytest being a bit slow. Nim and Rust (and other languages) have tools for transpiling Python code into them. I know of some Rust tools that run tests, but they have some differences from pytest.
My idea is to have a runner that transpiles the code to either Nim or Rust, compiles it, and runs the compiled tests. Test discovery would certainly improve, but I have no idea whether the compile + run time would be smaller than just running pytest normally. There are a lot of challenges in this project, so I'll probably use it to learn another language and some new skills, rather than building something aimed at being usable out there.
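To make the idea concrete, here's a rough sketch of just the discovery half, my own illustration rather than anything from the project: collect pytest-style test functions with the ast module and hand them to a hypothetical compiled binary that the transpilation step would emit.

    # Rough sketch of the discovery half of such a runner (my own illustration,
    # not the project's code). It collects pytest-style test functions with the
    # ast module and hands them to a hypothetical compiled binary that the
    # Nim/Rust transpilation step would emit.
    import ast
    import pathlib
    import subprocess

    def discover_tests(root: str) -> list[str]:
        """Return 'path::test_name' ids for every top-level test_* function."""
        ids = []
        for path in pathlib.Path(root).rglob("test_*.py"):
            tree = ast.parse(path.read_text())
            for node in tree.body:
                if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
                    ids.append(f"{path}::{node.name}")
        return ids

    if __name__ == "__main__":
        tests = discover_tests("tests")
        # "./compiled_tests" is a placeholder for the binary the transpiler would produce.
        subprocess.run(["./compiled_tests", *tests], check=True)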
I'm exploring building a weekly curation service for professionals who want to write on LinkedIn but struggle with "what's worth writing about."
The thesis: In the AI era, execution (writing) is commoditized. The real bottleneck is editorial judgment... knowing what topics matter before they're obvious.
The concept: Weekly email with 5-7 curated topics (tech trends, policy shifts, market movements). Each topic comes with sources, multiple angles, and context. Choose your perspective, and AI drafts a polished article.
Why I think this could work: I've been manually doing this for myself for years. Pattern recognition at scale is hard to automate, but pairing human curation with AI execution might work.
Target market: ~30M professionals who should be building thought leadership but don't have time to spend on research.
Current status: Validating demand before building. The hard part isn't the AI, it's systematizing the trend-spotting and curation process without losing signal quality.
My point is to help you build your own MentalOS that works for you, so you can live a smoother life without huge ups and downs.
This involves making Buckaroo lazy for polars, allowing it to read arbitrarily large files without loading the entire dataframe into memory. When a large dataframe first displays, no summary stats are available; they are computed in the background in groups of columns, and the results are cached per column. To accomplish this I wrote a polars plugin in Rust that computes hashes of columns. Dealing with large data like this is tricky: operations sometimes crash, sometimes take all available memory, and sometimes just run for a very long time. I have also been building an execution framework for Buckaroo that uses multiprocessing-based timeouts and the caching to execute summary stats in the background.
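As a hand-wavy illustration of the caching idea (not Buckaroo's actual code, which hashes columns with the Rust plugin and runs the work in background processes with timeouts), the flow looks roughly like this in plain polars, with Expr.hash standing in for the real column hash:

    # Hand-wavy sketch of the per-column caching idea, not Buckaroo's actual code.
    # Expr.hash + sum is a cheap stand-in for the Rust plugin's content hash.
    import polars as pl

    _stats_cache: dict = {}

    def column_stats(lf: pl.LazyFrame, col: str) -> dict:
        # Cheap stand-in for a content hash: hash every value, then sum.
        key = lf.select(pl.col(col).hash(seed=0).sum()).collect().item()
        if key not in _stats_cache:
            _stats_cache[key] = (
                lf.select(
                    pl.col(col).min().alias("min"),
                    pl.col(col).max().alias("max"),
                    pl.col(col).mean().alias("mean"),
                    pl.col(col).null_count().alias("nulls"),
                )
                .collect()
                .to_dicts()[0]
            )
        return _stats_cache[key]

    lf = pl.scan_parquet("big_file.parquet")  # placeholder path
    print(column_stats(lf, "price"))          # placeholder column name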
Being able to control the execution and recover from timeouts, crashes, and memory exhaustion opens up some interesting debugging tools. I have written methods that take arbitrary groups of polars expressions and produce a minimal reproduction test case through a git-bisect-like process.
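A toy version of that reduction, assuming the failure can be pinned on a single expression (the real tooling handles messier combinations), looks something like:

    # Toy version of the bisect idea (my illustration, not the Buckaroo
    # implementation). It assumes one expression alone reproduces the failure.
    import polars as pl

    def _fails(df: pl.DataFrame, exprs: list) -> bool:
        try:
            df.select(exprs)
            return False
        except Exception:
            return True

    def bisect_failing_expr(df: pl.DataFrame, exprs: list) -> pl.Expr:
        assert _fails(df, exprs), "the full batch should reproduce the failure"
        while len(exprs) > 1:
            mid = len(exprs) // 2
            left, right = exprs[:mid], exprs[mid:]
            exprs = left if _fails(df, left) else right
        return exprs[0]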
All of this ensures that if the individual columns of a dataframe fit into memory, summary stats will be computed for the entire dataframe in the background. And because the results are cached, the next time you open the same dataframe the stats display instantly. When exploring data I do this manually in an ad hoc way (splitting up a dataframe by columns and rows), but it's error prone. This should all be automatic.
I will be presenting this at PyData Boston in December.
The Column's the limit: interactive exploration of larger than memory data sets in a notebook with Polars and Buckaroo
- https://salespark.app/apps/discount-spark: A Shopify app that lets merchants create more powerful discount codes so they can make stronger offers to their customers.
What I recently built but that didn't find product-market fit:
- https://wordazzle.com: A word game designed to expand your vocabulary with exceptional words.
- https://spicychess.com: Chess, meet boxing! Imagine playing chess BUT you can also smack your opponent. Now, if you smack 'em enough times to drain their health completely (yes, you have a health bar), you can steal their turn. It's fun, a little evil, but after thousands of $ spent on marketing, it never found critical mass.
https://metro.scopecreeplabs.com
https://metro.scopecreeplabs.com/abc
https://metro.scopecreeplabs.com/video
Offline-first: everything is saved to localStorage. Sharing is also purely serverless: the ABC text and the video chapter definitions are shared via query parameters after compression and base64 encoding.
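The app itself is TypeScript/React, but the sharing scheme is simple enough to sketch in a few lines of Python; the "tune" query parameter name here is a placeholder, not the app's real key.

    # Language-agnostic sketch of the compress + base64 query-parameter sharing
    # described above, written in Python for brevity (the app is TypeScript/React).
    import base64
    import zlib
    from urllib.parse import quote

    def encode_share_param(abc_text: str) -> str:
        compressed = zlib.compress(abc_text.encode("utf-8"), level=9)
        return base64.urlsafe_b64encode(compressed).decode("ascii")

    def decode_share_param(param: str) -> str:
        return zlib.decompress(base64.urlsafe_b64decode(param)).decode("utf-8")

    tune = "X:1\nT:Example\nK:D\n|:DEFG ABcd:|"
    param = encode_share_param(tune)
    share_url = "https://metro.scopecreeplabs.com/abc?tune=" + quote(param)
    assert decode_share_param(param) == tune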
Tech: React, Material UI, ToneJS, ABCJS, CloudFlare (Pages, Workers, D1 Storage)
Next: Add an AI assistant to the ABC editor - editing ABC text gets painful fast.