Posted by david927 1 day ago
Ask HN: What are you working on? (May 2026)
The basic idea is "music with source code." Instead of prompting for finished audio files, you use an LLM to help write and revise a SuperCollider-based system that runs in the browser via WebAssembly [1]. The result is executable music: inspectable, editable, versionable, and controllable at runtime.
I’m especially interested in adaptive sound for software: games, creative tools, meditation apps, AI agents, interactive art. Places where a static audio file feels too dead, but hiring a composer/sound designer for every variation is unrealistic.
It’s early, but the thesis is that LLMs make algorithmic music much more approachable because code becomes a conversational medium. I wrote a longer piece about the idea here: https://x.com/osetinsky/status/2053674503801028944?s=20
You can check it out here: https://underscore.audio
[1] Shout-outs to:
- Sam Aaron for building SuperSonic, which brings SuperCollider to the browser as an AudioWorklet: https://sonic-pi.net/supersonic/demo.html. Earlier, pre-LLM versions of Underscore relied on low-latency WebRTC streaming of SC synths running on servers to browsers in real time.
- James McCartney, creator of SuperCollider: https://supercollider.github.io/
I’ve split the experience into two parts: a mobile-friendly app at https://app.orcamarka.com for bookmarking websites, text snippets, or images into a pure text format, and a reader part at https://m.orcamarka.com optimized specifically for the limited browsers on devices like the Kindle (the site will automatically redirect you to app if it detects a more capable browser). To bypass the pain of typing URLs on E-ink, the reader part displays a QR code that you scan with the app to instantly sync and load your text.
I’ve been using this personally for a month and it has significantly shifted my long-form reading from my phone to my Kindle. Since it’s a web app, there’s no installation required and it's completely free.
I’ve tried to design it to be intuitive enough to use without instructions, but I’m looking for beta testers to try it out and let me know where I can improve the workflow!
It's not far along, but I'm trying to expand on the ideas of Lisp in a new programming language I call Grasp. If Lisp is a list-processing language, Grasp is a graph-processing language.
The app has a lot of UX details that I've really enjoyed working on. I wrote up some notes about it here: https://www.freshcardsapp.com/3/
Separately, also working on a Zettelkasten notes app that pushes you to make small, atomic notes that you can organize in "collections" to provide structure beyond just hyperlinking in the note text: https://understory.ussherpress.com/ This has been a lot of fun iterating on. I started with a Miller Columns UI, like Finder, to visualize the graph of connections between notes, but I found that it was too overwhelming to use, so I scaled back and went with a more Notational Velocity-like quick search bar with note addressing. The app UI mimics a browser because I found that it works really well for something like this. I need to polish it a bit more and want to find people who will give it a beta test to help me iterate on the ideas some more.
Just posted a first early demo and sample orchestrator system prompt yesterday: https://x.com/Westoncb/status/2053429329233895857
You initialize the system with an objective and a number of rounds to run for, and it loads the current config (orchestrator + specialist prompts and LLM configs) and begins working on it. You can manually step one round at a time or just let it run.
Rather than accumulating a single long work log/context, at each round specialists apply patches to a number of named 'artifacts' with different roles (e.g. uncertainties, dead ends, findings), which are injected into prompts during subsequent rounds.
The engine is written in Rust, and there's a web UI (and CLI). You can use the built-in config editor to define specialists (and their prompts), the artifact set, orchestrator prompting, etc.
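The round/artifact design above can be sketched in miniature. This is a hypothetical illustration, not the actual engine (which is in Rust); `call_llm`, `build_prompt`, and `run` are stand-in names, and a real implementation would apply structured patches per artifact rather than appending to one.

```python
# Minimal sketch of round-based orchestration with named artifacts.
# All names here are hypothetical stand-ins, not the real engine's API.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a mock patch."""
    return f"[patch derived from prompt of {len(prompt)} chars]"

def build_prompt(role: str, objective: str, artifacts: dict) -> str:
    # Artifacts (findings, uncertainties, dead ends) are injected into
    # every specialist prompt instead of one ever-growing work log.
    context = "\n".join(f"## {name}\n{text}" for name, text in artifacts.items())
    return f"Objective: {objective}\nRole: {role}\n{context}"

def run(objective: str, rounds: int, specialists: list[str]) -> dict:
    artifacts = {"findings": "", "uncertainties": "", "dead_ends": ""}
    for rnd in range(rounds):
        for role in specialists:
            patch = call_llm(build_prompt(role, objective, artifacts))
            # Each specialist patches a named artifact rather than
            # appending to a shared transcript.
            artifacts["findings"] += f"\nround {rnd}/{role}: {patch}"
    return artifacts

state = run("summarize design space", rounds=2, specialists=["researcher", "critic"])
```

The point of the sketch is the data flow: bounded, role-specific artifacts replace a monolithic context, so later rounds see distilled state rather than the full history.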
There are so many games played per week that I want a way to find the best, most exciting games to watch, without spoilers. I built a little model to classify games and give me control over the level of spoilers shown, so I can watch the best games of the week.
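The idea of rating excitement while withholding spoilers can be shown with a toy example. The scoring features and thresholds below are invented for illustration; the poster's actual model is their own.

```python
# Toy sketch: rank games by an excitement score computed from
# outcome-free-ish signals, and reveal only what the chosen spoiler
# level allows. Features and weights here are hypothetical.

def excitement_score(game: dict) -> float:
    # E.g. many lead changes and a close final margin => exciting.
    return 0.6 * game["lead_changes"] / 10 + 0.4 * (1 - game["final_margin"] / 30)

def listing(game: dict, spoiler_level: str) -> str:
    score = excitement_score(game)
    if spoiler_level == "none":
        # Show the matchup and how exciting it was, but not the result.
        return f"{game['matchup']}: excitement {score:.2f}"
    return f"{game['matchup']} ({game['score']}): excitement {score:.2f}"

g = {"matchup": "A vs B", "score": "101-99", "lead_changes": 8, "final_margin": 2}
```

The key design point is that the spoiler level gates which fields reach the UI, while the ranking itself can still use the full game data server-side.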
Currently it covers 6 regions and 250+ subscription services across 30+ categories, recognizing 850+ billing-name patterns. It even has built-in smart alerts for different services and region-specific considerations (the FTC's Adobe settlement, the Hola VPN danger, UK price-hike exit rights, the CLOUD Act warning, etc.).
It adds up monthly and annual spend, and identifies savings opportunities and more ethical alternatives.
I have plans to add additional regions but that will take extra research to understand the realities of those markets and the providers within them. I also don't speak any other languages, so this may also be a bit of a hurdle.
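Recognizing billing-name patterns and totaling spend can be sketched like this. The patterns and service database below are made up for illustration; the real tool's 850+ patterns and region logic are its own.

```python
# Hypothetical sketch: match bank-statement descriptors against known
# billing-name patterns, then total recognized subscription spend.
import re

PATTERNS = {
    r"NETFLIX(\.COM)?": ("Netflix", "Streaming"),
    r"SPOTIFY\s*P?\w*": ("Spotify", "Music"),
    r"ADOBE\s*\*?CREATIVE": ("Adobe Creative Cloud", "Software"),
}

def classify(descriptor: str):
    """Return (service, category) for a recognized descriptor, else None."""
    for pattern, (service, category) in PATTERNS.items():
        if re.search(pattern, descriptor.upper()):
            return service, category
    return None

def monthly_spend(transactions) -> float:
    total = 0.0
    for descriptor, amount in transactions:
        if classify(descriptor):  # skip non-subscription charges
            total += amount
    return round(total, 2)

txns = [("NETFLIX.COM 866-579-7172", 15.49),
        ("SPOTIFY P2B4C8", 11.99),
        ("LOCAL COFFEE SHOP", 4.50)]
```

Real statement descriptors are messier than this (processor prefixes, truncation, locale-specific names), which is presumably why the pattern list is in the hundreds.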
- Integrated with lots of open-source and commercial simulators and models for chemistry, materials science, and biology, as well as connections to service labs and robot labs to easily run physical experiments.
- An autoresearch / AlphaEvolve-like optimization loop following the scientific method: observation, hypothesis, experiment, theory. Combined with a long-term self-learning memory like Karpathy's Wiki.
You can work with it interactively, like a coding agent, to research and execute experiments efficiently. You can also treat it like a graduate student: give it long-term research goals, have it work 24/7 and make smart decisions about where to spend your limited resource budget, and check in with it periodically as a supervisor to guide its direction.
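The observation → hypothesis → experiment → theory loop can be sketched as a budgeted iteration over a growing memory. Every function name below is a hypothetical stand-in, not the product's API; in the real system the experiment step would dispatch to a simulator or a service/robot lab.

```python
# Hedged sketch of a scientific-method research loop with a
# long-term memory. All names are hypothetical placeholders.

def observe(memory: list) -> str:
    # Start from the most recent result, or from scratch.
    return memory[-1] if memory else "no prior data"

def hypothesize(observation: str) -> str:
    return f"hypothesis based on: {observation}"

def experiment(hypothesis: str) -> str:
    # Real system: run a simulation or a physical experiment here.
    return f"result of testing '{hypothesis}'"

def research_loop(goal: str, budget: int) -> list:
    memory = []  # long-term self-learning memory, wiki-style
    for _ in range(budget):  # each round spends one unit of budget
        obs = observe(memory)
        hyp = hypothesize(obs)
        memory.append(experiment(hyp))  # refine theory via memory
    return memory

log = research_loop("maximize catalyst yield", budget=3)
```

The budget parameter is where the "smart decisions about limited resources" live: a real agent would choose which hypothesis to test next given the cost of each experiment.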
Not all of this is shipped yet, but we’ve been online for a while and it should be plenty useful to any scientist/engineer already.
The app helps product managers, sales reps, and architects quickly understand enterprise software APIs. An LLM turns the raw documentation into clear process flows, sequence diagrams, and integration requirements.
Hope to launch soon ;)