Posted by embedding-shape 1/27/2026
https://tinyapps.org/network.html
Of course, "AI-generated browser is 1MB" is neither here nor there.
Neat collection of apps nonetheless, some really impressive stuff in there.
Most of the time when I develop professionally, I restart the session after each successful change. For this project, I initially tried to let one session go as long as possible, but eventually I reverted to my old habit of restarting from zero after each successful change.
For knowing which files it should read/write, it uses `ls`, `tree`, and `ag` most commonly. There is no out-of-band indexing or anything, just a Unix shell controlled by an LLM via tool calls.
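For what it's worth, that "just a shell via tool calls" setup needs very little plumbing. Here's a minimal sketch in Rust of what a tool-call handler on the harness side could look like; `run_tool` is a name I made up, and this is a guess at the shape, not the actual harness:

```rust
use std::process::Command;

/// Run one shell command on behalf of the agent and capture its stdout,
/// the way a tool-call handler might (hypothetical sketch, not the real harness).
fn run_tool(cmd: &str) -> String {
    let out = Command::new("sh")
        .arg("-c")
        .arg(cmd)
        .output()
        .expect("failed to spawn shell");
    String::from_utf8_lossy(&out.stdout).into_owned()
}

fn main() {
    // The model would pick commands like `ls`, `tree src/`, or `ag "fn render"`;
    // here we just hard-code one to show the plumbing.
    let listing = run_tool("ls");
    println!("{listing}");
}
```

The model only ever sees command strings in and text out; everything else (indexing, search, navigation) falls out of existing Unix tools.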
> since getting the agent to self-select the right scope is usually the main bottleneck
I haven't found this to ever be the bottleneck. What agent and model are you using?
- Clear code structure and good architecture (a modular approach reminiscent of Blitz, but not as radical; Blitz-lite).
- Very easy to follow the code and understand how the main render loop works:
- For Mac: main loop is at https://github.com/embedding-shapes/one-agent-one-browser/blob/master/src/platform/macos/windowed.rs#L74
- You can see clearly how UI events are passed to the App to handle.
- App::tick allows the app to handle internal events (Servoshell does something similar with `spin_event_loop` at https://github.com/servo/servo/blob/611f3ef1625f4972337c247521f3a1d65040bd56/components/servo/servo.rs#L176)
- If a redraw is needed, the main render logic is at https://github.com/embedding-shapes/one-agent-one-browser/blob/master/src/platform/macos/windowed.rs#L221 and calls into `render` of App, which computes a display list (layout) and then translates it into commands to the generic painter, which internally turns those into platform-specific graphics operations.
- It's interesting how the painter for Mac uses Cocoa for graphics; very different from Servo, which uses WebRender, or Blitz, which (in some paths) uses Vello (itself using wgpu). I'd say using Cocoa like that might be closer to what React Native does (expert to confirm this, please?). Btw, this kind of platform-specific binding is a strength of AI coding (and a real pain to do by hand).
- Nice modularity between the platform and browser app parts, achieved with the App and Painter traits.
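To make the App/Painter modularity concrete, here's a minimal sketch of how that kind of trait split can look. The trait names match the post, but every signature here is my own guess, not the repo's actual API:

```rust
// Hypothetical sketch of the App/Painter split; not the repo's real types.

/// Platform-agnostic drawing commands produced by the browser app.
#[derive(Debug, Clone, PartialEq)]
enum PaintCmd {
    Rect { x: f32, y: f32, w: f32, h: f32 },
    Text { x: f32, y: f32, s: String },
}

/// Implemented per platform (e.g. Cocoa on macOS, something else elsewhere).
trait Painter {
    fn paint(&mut self, cmd: &PaintCmd);
}

/// The browser app: computes a display list, then feeds it to any Painter.
trait App {
    fn render(&mut self, painter: &mut dyn Painter);
}

struct Browser;

impl App for Browser {
    fn render(&mut self, painter: &mut dyn Painter) {
        // "Layout": build a display list, then translate it into painter commands.
        let display_list = vec![
            PaintCmd::Rect { x: 0.0, y: 0.0, w: 800.0, h: 600.0 },
            PaintCmd::Text { x: 10.0, y: 20.0, s: "hello".into() },
        ];
        for cmd in &display_list {
            painter.paint(cmd);
        }
    }
}

/// A painter that just records commands instead of drawing; handy for tests.
struct RecordingPainter {
    cmds: Vec<PaintCmd>,
}

impl Painter for RecordingPainter {
    fn paint(&mut self, cmd: &PaintCmd) {
        self.cmds.push(cmd.clone());
    }
}

fn main() {
    let mut app = Browser;
    let mut painter = RecordingPainter { cmds: Vec::new() };
    app.render(&mut painter);
    println!("recorded {} paint commands", painter.cmds.len());
}
```

The nice property is that the browser side never touches Cocoa directly, and a recording painter like the one above lets you test rendering without any platform at all.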
How to improve it further? I'd say try to map how the architecture corresponds to Web standards, such as https://html.spec.whatwg.org/multipage/webappapis.html#event...
Wouldn't have to be precise and comprehensive, but for example parts of App::tick could be documented as an initial attempt to implement a part of the web event-loop and `render` as an attempt at implementing the update-the-rendering task.
You could also split the web engine part from the app embedding it in a similar way to the current split between platform and app.
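As a toy illustration of that tick/render mapping (names and shapes here are entirely mine, not the repo's), the spec's "run queued tasks, then update the rendering" structure looks roughly like:

```rust
use std::collections::VecDeque;

/// Hypothetical sketch: a task queue drained by tick (cf. the spec's event
/// loop processing model), then a conditional "update the rendering" step.
struct EventLoop {
    tasks: VecDeque<Box<dyn FnMut(&mut bool)>>,
    needs_redraw: bool,
}

impl EventLoop {
    fn new() -> Self {
        Self { tasks: VecDeque::new(), needs_redraw: false }
    }

    /// Roughly what App::tick could document itself as: run queued tasks,
    /// each of which may mark the page dirty.
    fn tick(&mut self) {
        while let Some(mut task) = self.tasks.pop_front() {
            task(&mut self.needs_redraw);
        }
    }

    /// Roughly the "update the rendering" task: only paint when dirty.
    fn maybe_render(&mut self) -> bool {
        let did = self.needs_redraw;
        if did {
            // ...compute the display list and hand it to the painter here...
            self.needs_redraw = false;
        }
        did
    }
}

fn main() {
    let mut el = EventLoop::new();
    // A queued task (e.g. a UI event handler) marks the page dirty.
    el.tasks.push_back(Box::new(|dirty: &mut bool| *dirty = true));
    el.tick();
    println!("rendered: {}", el.maybe_render());
}
```

Documenting App::tick and `render` in those terms wouldn't need to be spec-complete; it just gives future work a skeleton to grow into.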
Far superior, and more cost-effective, than the attempt at scaling autonomous agent coding pursued by Fastrender. Shows how the important part isn't how many agents you can run in parallel, but rather how good of an idea the human overseeing the project has (or rather: develops).
Agree with your conclusion :)
Would be interesting to see how the architecture/design would change if I had focused the agent on making as modular and reusable code as possible, as currently I had some constraints about it, but not too strict. You bring up lots of interesting points, thanks again!
It implies that the agents could only do this because they could regurgitate previous browsers from their training data.
Anyone who's watched a coding agent work will see why that's unlikely to be what's happening. If that's all they were doing, why did it take three days and thousands of changes and tool calls to get to a working result?
I also know that AI labs treat regurgitation of training data as a bug and invest a lot of effort into making it unlikely to happen.
I recommend avoiding the temptation to look at things like this and say "yeah, that's not impressive, it saw that in the training data already". It's not a useful mental model to hold.
But yes, with enough prodding they will eventually build you something that's been built before. Don't see why that's particularly impressive. It's in the training data.
But if even the AI agent seems to struggle, you may be doing something unprecedented.
They're equally useful for novel tasks because they don't work by copying large scale patterns from their training data - the recent models can break down virtually any programming task to a bunch of functions and components and cobble together working code.
If you can clearly define the task, they can work towards a solution with you.
The main benefit of concepts already in the training data is that it lets you slack off on clearly defining the task. At that point it's not the model "cheating", it's you.
You need to see the big picture and visions of the future state in order to ensure what is being built will be able to grow and breathe into that. This requires an engineer. An agent doesn’t think much about the future, they think about right now.
This browser toy built by the agent, it has NO future. Once it has written the code, the story is over.
I'd find it very interesting to see some compelling examples along those lines.
That transcript viewer itself is a pretty fun novel piece of software, see https://github.com/simonw/claude-code-transcripts
Denobox https://github.com/simonw/denobox is another recent agent project which I consider novel: https://orphanhost.github.io/?simonw/denobox/transcripts/ses...
Agent engineering seems to be (from the outside!) converging on quality lived experience. Compared to Stone Age manual coding it’s less about technical arguments and more about intuition.
Vibes in short.
You can’t explain sex to someone who has not had sex.
Any interaction with tools is partly about intuition. It’s a difference of degree.
That said, I think some credit is due. This is still a nice weekend project as far as LLMs go, and I respect that you had a specific goal in mind (showing a better approach than Cursor's nonsense, that gets better results in less time with less cost) and achieved it quickly and decisively. It has not really changed my priors on LLMs in any way, though. If anything it just confirms them, particularly that the "agent swarm" stuff is a complete non-starter and demonstrates how ridiculous that avenue of hype is.
Yeah, that's obviously a lot harder, but doable. I've built it for clients, since they pay me, but haven't launched/made public something of my own where I could share the code. I guess that might be a useful next project now.
> This is just, yet another, proof-of-concept.
It's not even a PoC, it's a demonstration of how far off the mark Cursor are with their "experiment", where they were amazed by what "hundreds of agents" built over week(s).
> there's no telling how closely the code mirrors existing open-source implementations if you aren't versed on the subject
This is absolutely true, I tried to get some better answers on how one could even figure that out here: https://news.ycombinator.com/item?id=46784990
this is a feature
It's great to see him make this. I didn't know that he had a blog, but it looks good to me. Bookmarked now.
I feel like although Cursor burned $5 million, we saw that, and now we have embedding-shape's takeaway:
If one person with one agent can produce equal or better results than "hundreds of agents for weeks", then the question "Can we scale autonomous coding by throwing more agents at a problem?" probably has a more pessimistic answer than some expected.
Effectively, to me this feels like answering the question of what happens if we have thousands of AI agents building a complex project autonomously with no human. That idea seems dead now. Humans being in the loop will give much higher productivity and a better end result.
I feel like the lure behind the Cursor project was to find out if it's able to replace humans completely in an extremely large project, and the answer right now is no (and I have a feeling [bias?] that the answer's gonna stay that way).
Emsh, I have a question tho: can you tell me about your background if possible? Have you been involved in browser development or any related endeavours, or was this a first for you? From what I can feel/have talked with you, I do feel like the answer's yes, that you have worked in the browser space, but I am still curious to know the answer.
A question coming to my mind is how big the difference would be between 1 expert human + 1 agent, 1 (non-expert) junior dev + 1 agent, and 1 completely non-expert (a normal, less techie person) + 1 agent.
What are you guys' predictions on it?
How would the economics of becoming an "expert", or becoming a jack of all trades (junior dev), in a field fare with this new technology/toy that we got?
How much productivity gain could there be from non-expert -> junior dev, and the same question for junior -> senior dev, in this particular context?
[0] Cursor Is Lying To Developers… : https://www.youtube.com/watch?v=U7s_CaI93Mo
(If it was that's bad news for them as a company that sells tools to human developers!)
It was about scaling coding agents up to much larger projects by coordinating and running them in parallel. They chose a web browser for that not because they wanted to build a web browser, but because it seemed like the ideal example of a well specified but enormous (million line+) project which multiple parallel agents could take on where a single agent wouldn't be able to make progress.
embedding-shape's project here disproves that last bit - that you need parallel agents to build a competent web renderer - by achieving a more impressive result with just one Codex agent in a few days.
I think how I saw things was that Cursor was/is still targeted very heavily at vibe coding, in a similar fashion to bolt.dev or lovable. I even saw some vibe coder YouTubers try to see the difference, and honestly, in the end Cursor had preferable pricing compared to the other two, and that's how I felt about Cursor.
Of course Cursor's for the more techie person as well, but I feel as if they would shift more and more towards Claude Code or similar, which are subsidized by the provider (Anthropic) itself, something not possible for Cursor to do without burning big B's, which it already has done.
So Cursor's growth was definitely towards the more vibe coders side.
Now coming to my main point: I had the feeling that what Cursor was trying to achieve wasn't replacing humans entirely but removing humans from the loop, aka vibe coding. If the Cursor experiment had been successful, the idea (which people felt instantly when it was first released) was that engineering itself would've been dead, and instead the jobs would've turned into management from a bird's-eye view (not managing agents individually, or being aware of what they did, or being in the loop in any capacity).
I feel like this might've been the reason they were willing to burn $5 million.
If you could've convinced engineers (considering browsers are taken as the holy grail of hardness) that they are better off being managers, then a vibe coding product like Cursor would be really lucrative.
At least that's my understanding. I can be wrong, I usually am, and I don't have anything against Cursor (I actually used to use Cursor earlier).
But the embedding-shape project shows that engineering is very much still alive and a net benefit. He produced a better result with very minimal costs, versus a project with $5 million in inference costs.
> embedding-shape's project here disproves that last bit - that you need parallel agents to build a competent web renderer - by achieving a more impressive result with just one Codex agent in a few days.
Simon, I think browsers got picked for this autonomous-agents idea partially because of your really famous post about how independent tests can make ports via agents easier. Browsers have a lot of independent tests.
So Simon, perhaps I may have over-generalized, but do you know of any areas where massively parallel agents are actually good, now that browsers are off the table? Personally, after this project, I can't really think of any. When the Cursor thing first launched, or when I first heard of it recently, I thought browsers did make sense for some reason, but now that that's out of the window, I am not sure if there are any other projects where massively parallel agents might even be a net positive over 1 human + 1 agent like emsh.
The reason I got excited about the Cursor FastRender example was that it seemed like the first genuine example of thousands of agents achieving something that couldn't be achieved in another way... and then embedding-shapes went and undermined it with 20,000 lines of single-agent Rust!
I kind of left the agents to do what they wanted just asking for a port.
Your website does look rotated and the image is the only thing visible in my golang port.
Let me open source it & I will probably try to hammer it some more after I wake up to see how good Kimi is in real world tasks.
https://github.com/SerJaimeLannister/golang-browser
I must admit that it's not working right now, and I'm even unable to replicate what your website first displayed: earlier it rendered something, even though really glitchy with the image zoomed, and now it's only white. Also, oops, looks like I forgot the "i" in your name and wrote "willson" instead of "willison", as I wasn't wearing specs. Sorry about that.
Now let me see... yeah, now it's displaying something, which is extremely glitchy.
https://github.com/SerJaimeLannister/golang-browser/blob/mai...
I have a file to show how glitchy it is. If anything, I just want someone to tinker around with whether a Golang project can reasonably be made out of this Rust project.
Simon, I see that you were also interested in Go vibe coding haha, and this project has independent tests too! Perhaps you can try this out as well and see how it goes! It would be interesting to see!
Alright time for me to sleep now, good night!
https://bsky.app/profile/emsh.cat/post/3mdgobfq4as2p
But basically I got curious, and you can see from my other comments to you how much I love Golang, so I decided to port the project from Rust to Golang, and emsh predicts the project's codebase could even shrink to 10k!
(One point tho: I don't have CC. I am trying this out on the recently released Kimi K2.5 model and their code tool, because I decided to use that to see the real-world use of an open-source model as well!)
Edit: I had written this comment just 2 minutes before you wrote but then I decided to write the golang project
I mean, I think I ate through all of my 200 queries in Kimi Code, and it now does display me a (browser?). I had a shell script to load your website as the test, but it only opens up blank.
I am gonna go sleep so that the 5 hour limits can get recharged again and I will continue this project.
I think it will be really interesting to see this project in Golang; there must be a good reason for emsh to say the project can be ~10k lines in Golang.
Oh no, don't read too much into my wild guesses! Very hunch-based, and I'm only human after all.
>one agent
>one browser
>one million nvidia gpu
A more real answer: Read the first 6 words of the submission article.
All, just to have fun.
Very sad.
This is exactly what's wrong with Cursor's approach, and why we need better tools for collaborating, so we don't lose the engineering part of software development.
I, just like you, am fucking tired of all the slop being constantly pushed as something great. This + my previous blog entries are all about pushing back on the slop.
They fail to see where the value really is. They try to cheat the system and get the admiration of others, but without putting in any value.