
Posted by david927 11 hours ago

Ask HN: What Are You Working On? (March 2026)

What are you working on? Any new ideas that you're thinking about?
152 points | 515 comments | page 2
ehnto 1 hour ago|
An immobiliser for my car. I had trouble finding devices that would cover the specific attack vectors my car is susceptible to. Checked my insurance: no specific clauses around immobilisers. Checked the relevant road laws: no issues there.

I have a fairly novel approach to operating it, and in the case of one-time theft prevention, security through obscurity is actually a great approach. The assailant only has a short time to pull the car apart and solve the puzzle; couple that with genuine security techniques and a physical component, and it should be pretty foolproof.

It can still be towed away, etc.; not much to be done there except brute-force physical blocks. Most cars here get stolen to commit crimes that same night, so towing is less common.

Havoc 9 minutes ago|
I’ve seen implementations that need a magnet pressed to the door plastic in a specific place. Security through obscurity, as you say, but clever anyway - you basically need both a magnet on hand and to know where to hold it.

Or time to pull the car apart

ptak_dev 1 hour ago||
JetSet AI (https://bit.ly/4besn7l) — flight search in plain English instead of the usual date-picker maze.

Type "cheapest flight from London to Tokyo, flexible on dates in April" and it returns live results with real pricing. I compared a few against Google Flights and they matched. Not mocked data.

The part I found interesting: it runs on a dedicated VM so it keeps context across the conversation. If you say "actually make that business class" or "what about flying into Osaka instead" it knows what you were looking at. Most chat-based search tools lose that between messages.

I didn't build it from scratch — it's a pre-built app in the SuperNinja App Store that I deployed and have been extending. The deploy itself took about 60 seconds. The extending part is what I've been spending time on: describing changes in plain text and watching them go live without touching a repo.

Still figuring out what the right UX is for flexible-date search. Curious if anyone has opinions on that.

popupeyecare 8 hours ago||
I'm building https://trypixie.com to legally employ my 7-year-old child, save on taxes, and contribute to her Roth IRA.

I'm also building https://www.keepfiled.com, a micro-SaaS to save emails (or email attachments) to Google Drive.

I almost forgot, I also built https://statphone.com - One emergency number that rings your whole family and breaks through DND.

I love building. I built all these for myself. Unfortunately I suck at marketing, so I barely have customers.

sunnybeetroot 5 hours ago||
Amazing landing page for Statphone. Mind if I ask whether it's using any sort of UI library?
popupeyecare 12 minutes ago||
Nope. Just iterated slowly. Started by looking at templates I liked on Dribbble.
sudokatsu 6 hours ago|||
Statphone is such a genius idea - very cool.
Reforest8973 7 hours ago|||
TryPixie is a great idea
samename 4 hours ago||
these are all great ideas!
mikeayles 2 hours ago||
Rewriting the backend of Bitwise Cloud, my semantic-search Claude Code plugin for embedded systems docs, from Python to Go.

The problem was the ML dependencies. The backend uses BGE-small-en-v1.5 for embeddings and FAISS for vector search. Both are C++/Python. Using them from Go means CGO, which means a C toolchain in your build, platform-specific binaries, and the end of go get && go build.

So I wrote both from scratch in pure Go.

goformer (https://www.mikeayles.com/blog/goformer/) loads HuggingFace safetensors directly and runs BERT inference. No ONNX export step, no Python in the build pipeline. It produces embeddings that match the Python reference to cosine similarity > 0.9999. It's 10-50x slower than ONNX Runtime, but for my workload (embed one short query at search time, batch ingest at deploy time) 154ms per embedding is noise.

goformersearch (https://www.mikeayles.com/blog/goformersearch/) is the vector index. Brute-force and HNSW, same interface, swap with one line. I couldn't justify pulling in FAISS for the index sizes I'm dealing with (10k-50k vectors), and the pure Go HNSW searches in under 0.5ms at 50k vectors. Had to settle for HNSW over FAISS's IVF-PQ, but at this scale the recall tradeoff is fine.

The interesting bit was finding the crossover point where HNSW beats brute force. At 384 dimensions it's around 2,400 vectors. Below that, just scan everything; the graph overhead isn't worth it. I wrote it up with benchmarks against FAISS for reference.
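For intuition (this is not the goformersearch API, just a minimal NumPy sketch of the brute-force side), a flat scan is a single matmul, which is why it's hard to beat at small index sizes:

```python
import numpy as np

def brute_force_search(index: np.ndarray, query: np.ndarray, k: int = 5):
    """Exact nearest-neighbour search by scanning every vector."""
    # Normalise rows so a plain dot product equals cosine similarity.
    index_n = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    sims = index_n @ query_n          # one matmul: O(n * d), no graph to maintain
    top = np.argsort(-sims)[:k]       # indices of the k most similar vectors
    return top, sims[top]

rng = np.random.default_rng(0)
vectors = rng.normal(size=(2000, 384)).astype(np.float32)  # below the ~2,400 crossover
ids, scores = brute_force_search(vectors, vectors[42], k=3)
# The query vector itself comes back first, with cosine similarity ~1.0.
```

Past the crossover, the O(n*d) scan loses to HNSW's roughly logarithmic graph walk, which matches the ~2,400-vector figure quoted above.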

Together they're a zero-dependency semantic search stack. go get both libraries, download a model from HuggingFace, and you have embedding generation + vector search in a single static binary. No Python, no Docker, no CGO.

Is it better than ONNX/FAISS? Heck no. I just did it because I wanted to try out Go.

goformer: https://github.com/MichaelAyles/goformer

goformersearch: https://github.com/MichaelAyles/goformersearch

dataviz1000 10 hours ago||
I'm using TimescaleDB to manage 450GB of stocks and options data from Massive (what used to be polygon.io), and I've been getting LLM agents to iterate over academic research to see if anything works to improve trading with backtesting.

It's an addictive slot machine where I pull the lever and the dials spin as I hope for the sound of a jackpot. 999 out of 1000 winning models win because of look-ahead bias, which makes them look great even though they're actually bad models. For example, one didn't convert the time zone from UTC to EST, so five hours of future knowledge got baked into the model. Another used `SELECT DISTINCT`, which chose a value at random within a 0–5 hour window, meaning 0–5 hours of future knowledge got baked in. That one was somehow related to Timescale hypertables.
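The timezone flavour of look-ahead bias reduces to a simple guard. A stdlib-only sketch (the function name is made up, not from the poster's code): a bar may only feed the model once its exchange-local timestamp is at or before the decision time.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def bar_visible_at(bar_ts_utc: datetime, decision_ts_eastern: datetime) -> bool:
    """True if a data bar was already observable when the trading
    decision was made. Converting to the exchange's zone first is the
    whole fix: comparing a naive UTC clock against Eastern wall time
    silently hands the model up to five hours of the future."""
    bar_eastern = bar_ts_utc.astimezone(ZoneInfo("America/New_York"))
    return bar_eastern <= decision_ts_eastern

# 14:30 UTC on 2024-03-01 is 09:30 Eastern (EST, UTC-5): visible at the open.
bar = datetime(2024, 3, 1, 14, 30, tzinfo=timezone.utc)
decision = datetime(2024, 3, 1, 9, 30, tzinfo=ZoneInfo("America/New_York"))
```

The same check, pushed down into the backtest's SQL as a `WHERE bar_ts <= decision_ts` on zone-converted columns, is what catches both bugs described above.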

Now I'm applying the VIX formula to TSLA options trades to see if I can take research papers about trading with VIX and apply them to TSLA.

Whatever the case, I've learned a lot about working with LLM agents and time-series data, and very little about actually trading equities and derivatives.

(I did 100% beat SPY with a train/out-of-sample test, though not by much. I'll likely share it here in a couple weeks. It automates trading on Robinhood, which is pretty cool.)

mmaunder 9 hours ago||
Nice. I played with this a bit. Agents are very good at Rust and CUDA, so massive parallelization of compute for things like options chains may give you an edge. Also, you may find it hard to get a connection with latency low enough that, after factoring in the other delays, you still have an edge. So one approach is to acknowledge that as a hobbyist you can't compete on lowest latency, and instead compete on two other fronts: the most effective algorithm, and the ability to massively parallelize on consumer GPUs what would take others longer to calculate.

Best of luck. Super fun!

PS: Just a follow-up. There was a post here a few days ago about a research breakthrough where they literally just had the agent iterate on a single planning doc over and over. I think pushing chain of thought for SOTA foundational models is fertile ground. That may lead to an algorithmic breakthrough if you start with some solid academic research.

dzink 9 hours ago|||
Fun fact - some of it may be a subset of all data, with outlying points trimmed, so stop-loss conditions you set get tripped in the real world but not by your dataset. Get data from multiple sources.
happiness0067 9 hours ago|||
Relatable. If I had a dollar for every time I ran into time-zone issues, that would be a profitable strategy in and of itself.
arthurcolle 9 hours ago|||
Did RH open up an API for trading?
dataviz1000 9 hours ago||
I developed a Claude skill that interacts with a website and presses every button, intercepting every request/response to build a TypeScript API. I only have $10 in that account, so there isn't much damage it can do. It'll probably get me banned, but I don't use Robinhood for real trading.
tgrowazay 9 hours ago|||
I tried exactly this - loading polygon.io data into TimescaleDB, and it was very inefficient.

Ended up using ClickHouse - much smaller on disk, and much faster on all metrics.

dataviz1000 8 hours ago||
Interesting. I'm not familiar with ClickHouse. I've been manually triggering compression, and continuous aggregates have been a huge boon; the database has been the least of my concerns. Can you tell me more about it?
gregleon 4 hours ago||
You can take a look at this: https://github.com/ClickHouse/stockhouse. The database schema is optimised for stock data.
sourishkundu23 7 hours ago|||
[dead]
huflungdung 9 hours ago||
[dead]
spaceman3 1 hour ago||
Been working on a solution to my meeting fatigue. I sit in way too many of them where I'm only there "just in case someone has a question" and realized I needed a way to safely not care about my meetings.

The idea is: you join a meeting, hit start in the app, minimize it, and go do actual work (or go make a coffee). When someone says your name or any keywords you set, you get a native macOS notification with enough context to jump back in without looking lost. It uses Whisper, is 100% local, and doesn't leave traces; it's also very OE-friendly.

https://pingmebud.com
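The keyword-spotting core is easy to sketch. This is hypothetical code, not pingmebud's implementation; in the real app something like it would run over rolling local Whisper transcripts and fire a native macOS notification on a hit.

```python
import re

def find_mentions(transcript: str, keywords: list[str], context: int = 60):
    """Scan a transcript chunk for any watched keyword and return enough
    surrounding text to show in a notification, so you can jump back in
    without looking lost. Matching is case-insensitive."""
    hits = []
    for kw in keywords:
        for m in re.finditer(re.escape(kw), transcript, re.IGNORECASE):
            start = max(0, m.start() - context)
            snippet = transcript[start:m.end() + context].strip()
            hits.append((kw, snippet))
    return hits

# In a real pipeline: transcribe a few seconds of audio locally, call
# find_mentions, and if hits is non-empty post a notification
# (e.g. via osascript on macOS). Nothing leaves the machine.
```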

Would love to hear what you think, especially if you're drowning in meetings too.

John23832 1 hour ago|
How is this different than @‘ing someone in the chat apps we all already use, and that you’ll actually be responding to the ping in?
unlimit 1 hour ago||
Building a boring POS (1) using various AI tools, just to check what I can do with them. I've used Claude and Gemini, and am now using Antigravity. I have not made a single edit manually.

I got it all done in probably an hour or two, but in 10–15 minute blocks over many days.

(1) https://pos.unlimit.in

arvida 2 hours ago||
Been working on https://localhero.ai, my service to automate on-brand translations for product teams. I've been doing outreach to Swedish companies and people, and I'm getting good interest from a few that want to automate their localization workflow but don't want the work of maintaining their own solutions. Even though you can build a working version with coding agents these days, there is a lot of stuff around it to make it work well over time in a product org.

On the tech side, one thing I've been working on is how Localhero learns from manual edits. When a PM or designer tweaks copy in the Localhero UI, those edits now feed back into a translation memory and influence future translations. It's a self-learning loop, and it turns out to be a nice combo of old-school techniques and offloading some work to LLMs.

Also been spending some time on my old side project https://infrabase.ai, a directory of AI-infra-related tools. Redesigned the landscape page (https://infrabase.ai/landscape), going through product submissions and content, and optimizing a bit for SEO/GEO.

rahimnathwani 8 hours ago||
When GPT-4.5 came out, I used it to write a couple of novels for my son. I had some free API credits, and used a naive workflow:

while word_count < x: write_next_chapter(outline, summary_so_far, previous_chapter_text)

It worked well enough that the novels were better than the median novel aimed at my son's age group, but I'm pretty sure we can do better.
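For concreteness, the naive loop above fleshed out as runnable Python. The `llm` callable is a stand-in for whatever model API you use; none of these names are from the poster's actual workflow.

```python
def generate_novel(outline: str, target_words: int, llm) -> list[str]:
    """Chapter-by-chapter generation: each call sees the outline, a
    running summary of everything so far, and the previous chapter's
    full text. `llm` is any callable mapping a prompt string to text."""
    chapters: list[str] = []
    summary, prev = "", ""
    while sum(len(c.split()) for c in chapters) < target_words:
        prompt = (f"Outline:\n{outline}\n\n"
                  f"Story so far:\n{summary}\n\n"
                  f"Previous chapter:\n{prev}\n\n"
                  f"Write the next chapter.")
        prev = llm(prompt)
        chapters.append(prev)
        # Compress the new chapter into the running summary so later
        # chapters don't need the whole book in context.
        summary += llm(f"Summarize in two sentences:\n{prev}") + "\n"
    return chapters
```

The weakness of this workflow is exactly what the rest of the comment addresses: the summary is lossy, so world, character, and plot details drift unless something more structured tracks them.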

There are web-based tools to help fiction authors to keep their stories straight: they use some data structures to store details about the world, the characters, the plot, the subplots etc., and how they change during each chapter.

I am trying to make an agent skill that has three parts:

- the SKILL.md that defines the goal (what criteria the novel must satisfy to be complete and good) and the general method

- some other md files that describe different roles (planner, author, editor, lore keeper, plot consistency checker etc.)

- a python file which the agent uses as the interface into the data structure (I want it to have a strong structure, and I don't like the idea of the agent just editing a bunch of json files directly)
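A sketch of what that strong interface could look like (all names hypothetical): the agent calls methods instead of editing JSON files, so invariants like "a character can't die twice" are enforced in code rather than left to the model's memory.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    traits: list[str] = field(default_factory=list)
    alive: bool = True

@dataclass
class StoryState:
    """The agent's only way to touch the story bible: every mutation
    goes through a method that can validate it first."""
    characters: dict[str, Character] = field(default_factory=dict)
    chapter_events: list[tuple[int, str]] = field(default_factory=list)

    def add_character(self, name: str, *traits: str) -> None:
        if name in self.characters:
            raise ValueError(f"{name} already exists")
        self.characters[name] = Character(name, list(traits))

    def record_event(self, chapter: int, event: str) -> None:
        self.chapter_events.append((chapter, event))

    def kill(self, name: str, chapter: int) -> None:
        c = self.characters[name]      # KeyError if the agent invents someone
        if not c.alive:
            raise ValueError(f"{name} is already dead")
        c.alive = False
        self.record_event(chapter, f"{name} dies")
```

The plot-consistency-checker role then becomes mostly mechanical: query the state, compare it against the new chapter's text, and flag contradictions.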

For the first few iterations, I'm using cheap models (Gemini Flash ones) to generate the stories, and Opus 4.6 to provide feedback. Once I think the skill is described sufficiently well, I'll use a more powerful model for generation and read the resulting novel myself.

rond2911 5 hours ago||
This is fascinating. I would like to try this as a side project as well.

"some other md files that describe different roles (planner, author, editor, lore keeper, plot consistency checker etc.)"

What are these meant to be, exactly? Are these sub-agents in the workflow, or am I completely misunderstanding?

andsoitis 4 hours ago||
Do you mind posting these novels?
VadimPR 1 hour ago|
https://telephone.health, which shows how well LLMs can take narrative medical text, convert it to a structured form (FHIR R4, for application consumption), and then convert it back to narrative text for human consumption.

Interesting findings include Mistral doing better than Gemini 3 Pro in certain use cases, and cross-LLM setups working better than a single LLM end to end. Oh, and the cost of all of this. So, so expensive.
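A toy of the structured-to-narrative direction, using a minimal FHIR R4 Condition resource (real resources carry far more: codings, onset, verification status; the rendering function here is made up):

```python
def condition_to_narrative(resource: dict) -> str:
    """Render a minimal FHIR R4 Condition back into a sentence.
    FHIR stores the human-readable label in code.text and the clinical
    status as a coded CodeableConcept."""
    assert resource.get("resourceType") == "Condition"
    label = resource["code"]["text"]
    status = (resource.get("clinicalStatus", {})
              .get("coding", [{}])[0]
              .get("code", "unknown"))
    return f"Patient has {status} {label}."

condition = {
    "resourceType": "Condition",
    "code": {"text": "type 2 diabetes mellitus"},
    "clinicalStatus": {"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/condition-clinical",
        "code": "active",
    }]},
}
```

The hard (and expensive) part the site measures is the other direction: getting an LLM to emit valid, correctly coded resources like this from free-text notes, then checking what survives the round trip.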
