
Posted by david927 1 day ago

Ask HN: What are you working on? (May 2026)

What are you working on? Any new ideas that you're thinking about?
265 points | 984 comments
chaoxu 1 day ago|
I’m really interested in AI4MATH, as I believe it will eventually replace me.

I'm working on a mathematical knowledge base software.

It's kind of like a local GitHub for math. In fact, the backend is actually a Forgejo instance; I'm building a frontend for humans and also a harness for agents that automatically consume the knowledge base and expand on it. I realized the issue/PR/review workflow works well for maintaining a knowledge base too.

The motivation is actually to help mathematicians (and me) TODAY to be able to do math together with humans and AI.

The knowledge base keeps mathematical writing as plain Markdown, but adds stable IDs, backlinks, search, draft changes, review, approvals, and merge. The agent side can read the same pages, follow the same references, propose edits, and go through the same review process as a human.
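A minimal sketch of the backlink side of this, assuming one possible convention (an `id:` line in each page's front matter plus wiki-style `[[id]]` references; both are my guesses, not necessarily what the actual project uses):

```python
import re
from pathlib import Path

LINK = re.compile(r"\[\[([\w.-]+)\]\]")     # wiki-style reference to a stable ID
ID = re.compile(r"^id:\s*([\w.-]+)", re.M)  # stable ID declared in the page

def backlink_index(root: str) -> dict[str, set[str]]:
    """Map each stable ID to the set of page IDs that reference it."""
    index: dict[str, set[str]] = {}
    for page in Path(root).rglob("*.md"):
        text = page.read_text(encoding="utf-8")
        m = ID.search(text)
        if not m:
            continue  # skip pages without a stable ID
        src = m.group(1)
        for target in LINK.findall(text):
            index.setdefault(target, set()).add(src)
    return index
```

Because the IDs are stable, the index survives file renames, which is what makes review/merge of plain-Markdown math pages tractable.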

I’m not using formalization here. Everything is still natural-language proofs. The practical reason is that many areas I care about are not easy to formalize yet because they aren't in mathlib.

I see this as a transition project: useful before autoformalization really works well, and maybe still useful afterward as the place where humans and agents organize exploration.

And1 17 hours ago||
I've started watching soccer more seriously in prep for the World Cup (I'm in Canada), and watching live games is never going to happen, so I only watch replays.

There are so many games played per week that I want to find the best/most exciting games to watch, without spoilers. I built a little model to classify games and give me control over the level of spoilers shown, so I can watch the best games of the week.

https://nospoilersclub.com
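The spoiler-level idea could be sketched roughly like this (the feature names and weights are entirely hypothetical; the real classifier is surely richer):

```python
from dataclasses import dataclass

@dataclass
class Game:
    home: str
    away: str
    goals: int
    lead_changes: int
    late_drama: bool  # e.g. a goal after the 85th minute

def excitement(g: Game) -> float:
    """Toy heuristic: more goals, swings, and late goals -> higher score."""
    return g.goals * 1.0 + g.lead_changes * 1.5 + (2.0 if g.late_drama else 0.0)

def listing(g: Game, spoiler_level: int) -> str:
    """0 = teams only, 1 = add an excitement rating, 2 = full result."""
    line = f"{g.home} vs {g.away}"
    if spoiler_level >= 1:
        line += f" | excitement {excitement(g):.1f}"
    if spoiler_level >= 2:
        line += f" | {g.goals} goals"
    return line
```

The key design point is that the rating itself is a mild spoiler (a high score hints at goals), which is why exposing the level as a user-controlled dial makes sense.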

ryanczak 13 hours ago||
Refactoring memory management in https://github.com/ryanczak/daemoneye to better support continuous operation over long time horizons, where the state of monitored services/things drifts and knowledge becomes stale over time.
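One common shape for this problem is a time-to-live cache: forget facts about a monitored service that haven't been refreshed recently, rather than acting on drifted state. An illustrative sketch (not daemoneye's actual design):

```python
import time

class StalenessCache:
    """Evict knowledge that hasn't been refreshed within a TTL, so a
    long-running monitor can't act on state that has drifted from reality."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._data: dict[str, tuple[float, object]] = {}

    def put(self, key: str, value: object) -> None:
        # Record the observation with a monotonic timestamp (immune to
        # wall-clock adjustments during long uptimes).
        self._data[key] = (time.monotonic(), value)

    def get(self, key: str):
        entry = self._data.get(key)
        if entry is None:
            return None
        seen_at, value = entry
        if time.monotonic() - seen_at > self.ttl:
            del self._data[key]  # stale: forget rather than return drifted state
            return None
        return value
```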
edumucelli 9 hours ago||
Fast, customizable, and extensible dock application -- fully written in Python, with built-in applets and full Linux desktop integration.

https://docking.cc

thekevan 1 day ago||
A desktop client for Repomix. Repomix is a CLI that summarizes all the code in a repo into one txt or md file so you can in turn feed it to an AI model for analysis. It absolutely gets the job done in its current state, but it is a personal project, so there may be a few rough edges.

https://github.com/KevanMacGee/Repomix-Desktop

It's open source and has no official connection to Repomix. But the developer, yamadashy on Github, knows about it and seemed to like it enough to add it to the Repomix website under the community projects.

I like being able to paste all the code into a browser window and have lengthy discussions with ChatGPT, Gemini, and GLM. Doing so in the browser saves tokens over doing it in Cursor or Codex. I like using the Projects feature in ChatGPT in the browser and Notebooks with Gemini, because that gives the model context and history on whatever I am working on. It was one part scratching my own itch, one part learning about Python and CustomTkinter.

It's made specifically for when you just want to get the code and paste it, no muss or fuss. It doesn't have support for flags (yet?) like the CLI because again it is built for speed. Besides, when I want flags, I like using the CLI instead to get granular. Repomix Desktop is for "just give me the code."

I'm a self taught coder so I'm very open to feedback.

arvida 1 day ago||
Mainly working on https://localhero.ai, automating i18n translations for product teams. It basically runs as a GitHub Action, translating new strings on PRs to match your brand voice and glossary. Got our first fully self-serve customer a few weeks back (found us through the docs). Interesting work lately has been improving how the system learns from manual edits: when someone tweaks a translation in the UI, it feeds back into translation memory and influences future translations in a smart way. Also did things like improving our agent skill, so coding agents get glossary/style-guide context automatically and can write source copy that better matches the brand.
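The core of that feedback loop can be sketched as a translation memory that prefers remembered human corrections over fresh machine output (a hypothetical exact-match sketch; the real system presumably does fuzzier matching and brand-voice conditioning):

```python
class TranslationMemory:
    """Remember human corrections keyed by (source string, locale)
    so future translation runs reuse them."""

    def __init__(self):
        self._memory: dict[tuple[str, str], str] = {}

    def record_edit(self, source: str, locale: str, corrected: str) -> None:
        # A manual tweak in the UI feeds back into memory.
        self._memory[(source, locale)] = corrected

    def translate(self, source: str, locale: str, machine_translate) -> str:
        hit = self._memory.get((source, locale))
        if hit is not None:
            return hit  # reuse the human correction verbatim
        return machine_translate(source, locale)
```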

Been pushing some new stuff on https://infrabase.ai as well, my AI infrastructure tools directory. Traffic is growing steadily from comparison and alternatives pages. An interesting finding is that blog posts rank better but get fewer clicks now because of AI Overviews; interactive comparison pages still earn clicks. ChatGPT has also started citing the site more as a source. Adding new content and polishing existing parts of it; added a page focusing on EU-based services at https://infrabase.ai/european.

nicoinstrument 1 day ago||
I'm learning about inference by running vLLM on a k8s cluster (EKS), building a gateway to keep a <2s TTFT SLO.

Most recent aha moment: I kept wondering if it was normal that my cluster was only able to process 4 requests per second per vLLM engine (it just seemed really low to me).

I realized a better metric is in-flight requests: each engine is processing ~70 requests at any given time, streaming tokens for over 30 s each.
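Little's Law ties the two metrics together: average in-flight requests L = λ · W, where λ is the arrival rate and W is time in system. The rough figures above don't pin W down exactly, but the relation explains why a low requests-per-second number can coexist with high concurrency when responses stream for a long time:

```python
def in_flight(arrival_rate_rps: float, avg_duration_s: float) -> float:
    """Little's Law: average concurrent requests L = lambda * W."""
    return arrival_rate_rps * avg_duration_s

# At 4 req/s, 70 concurrent requests implies ~17.5 s average time in system;
# 30 s per request at the same rate would imply ~120 in flight.
```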

Code: https://github.com/Nicolas-Richard/vllm-on-eks

korbonits 8 hours ago||
Have you considered using vLLM on top of Ray Serve (on EKS with KubeRay)? KubeRay makes Ray cluster-aware and there could be some optimizations you could make e.g. keeping that GPU fully utilized all the time :)
nicoinstrument 1 hour ago||
Thanks for the suggestion! Have you found that Ray Serve’s built-in autoscaling plays nicely with custom SLO-based concurrency limits, or do you usually let Ray handle the load balancing entirely?
iugtmkbdfil834 1 day ago||
Deeper dives into those uncover interesting limitations that don't seem to be documented anywhere. On the other hand, it is through those reverse shibboleths that I am now able to tell that my boss's boss has no idea what he is talking about, LLM-wise.
chandureddyvari 1 day ago||
For a long time I wondered how SV startups got such pretty landing pages (here’s a comment I left 2 years back: https://news.ycombinator.com/item?id=37421273). I wanted one for my side projects but couldn’t afford an agency, and the templates online were boring. Creating the page was only half the problem. I also needed somewhere to collect emails for the waitlist.

After AI happened, I built an app (promptfunnels) to scratch my own itch and generate funnels (fancy name for landing pages with a purpose).

Then came the harder part: marketing it. Coming from a tech background, I knew nothing about marketing, so I started reading and came across the $100M Leads book. I realized codifying those principles together with funnels and marketing automation had a real market. My family, friends, and acquaintances became the first customers. A friend joined me as cofounder and we both quit our jobs to do this full time.

As we talked to other startup founders, they kept describing a tangential problem they called GTM. At the core it was the same thing we were solving: marketing for non-marketers. So we pivoted to RevMozi (https://revmozi.com/), which helps non-marketers do both inbound and outbound GTM.

We’re dogfooding the product and coming out of beta next month.

Wish us luck.

nottorp 1 day ago|
> how SV startups got such pretty landing pages

Umm where? They are indistinguishable from each other. Not pretty.

chandureddyvari 1 day ago||
Some of them are nonexistent today. Check the parent thread: some good recommendations (for 2023) on both functional websites and pretty websites. At the time, if I recall, the Linear landing page was all the rage, and there were many copycats.
vinayak-shukla 1 day ago|
While I was using Claude Code, I was playing some lofi music in the background while it was 'Combobulating', and I thought: what if it could auto-play lofi beats while working and stop when it has finished running? So I built a Claude Code plugin I call vibe-coding. You can check out/add the repo as a marketplace and plugin from here: https://github.com/Vinayak-Shukla/vibe-coding