Posted by david927 1 day ago
Ask HN: What are you working on? (May 2026)
I quit Figma about 4mo ago to start working on this, and the gpt-image-2 drop really legitimized the bet. I recently released Brands for diffui, which lets you establish a design system and generate consistently with it. I made a Brand out of the recent UFO files release, which allows for some really fun designs:
https://diffui.ai/brand/2ff1b00a-d698-43ea-a42e-7c4a2e670c04 (no account required to generate with this if you want to try)
[Error] Failed to load resource: the server responded with a status of 504 (Gateway Time-out) (generate, line 0)
My "prompt" was, uh, simple: "Turns out, you don't need water to live."
These embed a remote browser in an iframe to give you "embed anything" browser-view custom elements. The demos focus on retro desktops to emphasize the browser: that common web trope, the retro desktop, can never actually ship a real browser without something like bbx.
https://browserbox.io https://github.com/BrowserBox/BrowserBox
Hyper-Frame is supposed to be the "developer" demo, the one that helps engineers understand what they can do with it. I think it succeeds at that. I'm glad you found it useful.
The desktops are more a labor of love: nostalgic, imaginative. I grew up in that time. They complete the "art" of web desktops by giving them internet access, which all the others omit. I don't care that they bury it seamlessly rather than making it obvious. I like that it's integrated as it would be in an OS; that's part of it. Your point is accurate that they do not surface bbx obviously.
So these desktops and glitch are meant more to spark imagination, maybe prompt product ideas for people who could be inspired by that. It's supposed to work subliminally, I suppose, by letting you play around with it in an immersive setting. It's honey for a different buyer profile or purchase stage: not the "give me what I want now" seeker, but the more playful, relaxed, idea-sparking stage/persona. It's meant as an art gallery :)
You probably got annoyed doing it - that's okay, it's probably not really for you.
I feel the set of demos, taken together, covers the things I wanted to express about this. I'm very happy with them, both individually and all together.
Thanks for looking - and for your great compliment - yes, Windows is all HTML; notice it says Windows 98-and-a-half! :) They are also really just meant to be fun, and I had fun creating them. And meant as a show-off lol :) - I like it when people enjoy a beautiful time playing around with them.
It works on macOS, built with Swift and Metal. My goal is to make a super fast, and free, focus stacking program. I provided a notarized macOS DMG for the initial release, but if you build it yourself, it will run on an M4/M5-series iPad Pro as well.
The core ability I wanted was support for RAW files as inputs, with DNG files as outputs. This is done using either LibRaw or Adobe DNG Converter (runtime options).
I have been really into macro photography for the last couple of years, and have been slowly working on building my own program to handle the focus stacking.
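The core focus-stacking idea can be sketched in a few lines of Python with NumPy (the actual app is Swift/Metal; this is only an illustration of the technique, not the project's code): for each pixel, pick the source frame where local contrast, e.g. Laplacian magnitude, is highest.

```python
import numpy as np

def laplacian(img: np.ndarray) -> np.ndarray:
    """Local contrast via a discrete Laplacian; in-focus regions score high."""
    pad = np.pad(img, 1, mode="edge")
    return np.abs(
        pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
        - 4 * pad[1:-1, 1:-1]
    )

def focus_stack(frames: list[np.ndarray]) -> np.ndarray:
    """Merge aligned grayscale frames by taking, per pixel, the sharpest source."""
    sharpness = np.stack([laplacian(f) for f in frames])  # (n, h, w)
    best = np.argmax(sharpness, axis=0)                   # index of sharpest frame
    stacked = np.stack(frames)
    return np.take_along_axis(stacked, best[None], axis=0)[0]
```

A real pipeline also needs frame alignment and smoothing of the selection map to avoid seams; this only shows the per-pixel selection step.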
It's very rough, but it uses clevis and a custom tang server to unlock servers with a tap on your phone instead of a password or a traditional tang network unlock. I like it because it means that even if someone steals your hardware, they can't unlock it without you approving the unlock. Eager for feedback.
It's a durable orchestration system for AI code generation. It solves the problem of not being able to trust LLMs to complete long-running (and high-quality) implementations without babysitting them and monitoring the process, which I think is the most exhausting part of coding with AI.
You start with a spec or programmatic task list and the engine runs the whole workflow: implementation, verification, review, fixes, and finalization.
It treats agentic coding like a durable CI-style process, with state, retries, reviewer feedback, commits, and auditability built in. It's externally orchestrated, meaning the agent isn't running the loop; agents are simply used as tools and spawned in the loop as needed, without awareness of the loop itself.
It's going to be open-sourced soon, and it's not meant to replace your IDE or agentic harness of choice. You keep using codex/claude code/open code/cursor/pi, whatever you want, and simply delegate the actual implementation to the engine through MCP/CLI and other integration points.
It supports any LLM provider, so you can have GPT 5.5 implementing and a mix of Opus 4.7 / Deepseek v4 Pro / GPT 5.5 reviewing at every phase, for example.
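The "externally orchestrated" shape described above can be sketched roughly like this (all names are hypothetical, not the engine's actual API; the agents are plain callables standing in for LLM calls). The point is that the loop owns the durable state and the retries, and each agent is spawned as a stateless tool:

```python
from dataclasses import dataclass, field

@dataclass
class TaskState:
    """Durable state owned by the orchestrator, never by the agents."""
    task: str
    attempts: int = 0
    history: list = field(default_factory=list)

def run_task(state: TaskState, implementer, reviewer, max_attempts: int = 3):
    """Implement -> review -> fix loop; agents have no awareness of the loop."""
    while state.attempts < max_attempts:
        state.attempts += 1
        patch = implementer(state.task, state.history)  # spawn an agent as a tool
        verdict = reviewer(state.task, patch)           # independent reviewer model
        state.history.append((patch, verdict))          # auditable record
        if verdict == "approve":
            return patch                                # commit + finalize would go here
    raise RuntimeError(f"gave up after {max_attempts} attempts")
```

Because the state object is plain data, it can be persisted between attempts, which is what makes the process durable rather than tied to one agent session.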
Sign up on the website or follow us on https://x.com/enginedotbuild or me personally on https://x.com/aljosa , desperately need more followers :D
Tinder meets Discord and, somehow, they have their way with Uber/Calendly.
It's live if you want to test it: https://jynx.app/
Let me know what you think of it. The main goals I want to achieve are: 1. help with social isolation, 2. help e-sports teams with sourcing and organizing.
Working with Apple was also challenging because I had to purchase an Apple Watch or iPhone (the data is stored locally only, with no server or API to call, which is great from a privacy perspective) and then deploy specific code on the device.
I’m not sure if this helps your use case, but I was planning to make the API public and create a CLI (similar to Sentry or Grafana’s gcx) to access it. But if you want a local-first option, it's not the best solution.
Device based strength tracking is still so weird to me.
Then you have friends and family who don't have the same devices as you and are nice enough to want to try your app.
I think this is a perfect example... somewhere out there a genius and a grug are happily exercising together for the simple joy of doing so and feeling good in their bodies, and nearby is a midwit with the GDP of a small village worth of wearable electronics wondering where the joy has gone as he laments the 0.1% of VO2MAX he's dropped since his last gadget-run.
The setup is done via one prompt ('Use https://skills.superlog.sh to install Superlog in this project'), and everything on the platform is usable via MCP so that you don't have to spend time configuring yet another UI.
Do one thing and do it right.
Where I could see this succeeding is if you embrace the monitoring agent role. Customers can expose their coding agents, setup however they like, as an MCP server that your monitoring agent can plug into. If something goes wrong, your monitoring agent gives their coding agent the best context it can, and steps out of the way.
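The monitoring-agent role suggested above might look roughly like the following (purely a hypothetical sketch; the function names and the "status" protocol are invented here, not any real MCP client API): watch the customer's coding agent, hand it context once when it stalls, then step back.

```python
import time

def monitor(check_health, get_context, nudge_agent, poll_secs: float = 30.0):
    """Watchdog loop: when the watched coding agent reports it is stuck,
    give it the best available context, then get out of the way."""
    while True:
        status = check_health()         # e.g. polled via the agent's exposed MCP server
        if status == "stuck":
            nudge_agent(get_context())  # hand over context; don't take over the task
        elif status == "done":
            return
        time.sleep(poll_secs)
```

The design choice matching the comment: the monitor never drives the coding agent's loop, it only injects context at failure points.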
Recently I have had trouble with Sentry. I have a site with a lot of data coming in (2M page views per month), and Sentry starts being unusable for a solo developer. And on the other hand, I have several Django projects where I want a common way to handle bugs.
I am feeling Sentry UI is too complex for my use cases, and on the other hand, I would like to automate the process as much as possible -- and the idea of automatic bug fixing is neat!
I am experimenting with Bugsink. Running Bugsink internally but building some tooling around it for automatic bug detection and fixing would actually be a sweet spot for me.
- https://github.com/rumca-js/Internet-Places-Database - Internet places / YouTube channels
- https://github.com/rumca-js/awesome-database-feeds - feeds / RSS locations
- https://github.com/rumca-js/awesome-database-top - smaller database from above
- https://github.com/rumca-js/awesome-database-awesomelists - links from 'awesome lists'
- https://github.com/rumca-js/RSS-Link-Database-2026 - 2026 year link metadata
- https://github.com/rumca-js/RSS-Link-Database-2025 - 2025 year link metadata
- https://github.com/rumca-js/crawler-buddy - crawler engine