Posted by embedding-shape 9 hours ago

Show HN: One Human + One Agent = One Browser From Scratch in 20K LOC (emsh.cat)
Related: https://simonwillison.net/2026/Jan/27/one-human-one-agent-on...
93 points | 54 comments | page 2
storystarling 2 hours ago|
How did you handle the context window for 20k lines? I assume you aren't feeding the whole codebase in every time given the API costs. I've struggled to keep agents coherent on larger projects without blowing the budget, so I'm curious if you used a specific scoping strategy here.
embedding-shape 41 minutes ago||
I didn't; Codex (tui/cli) did, all by itself. I have one REQUIREMENTS.md which is specific to the project and an AGENTS.md that I reuse across most projects. Then I give Codex (gpt-5.2 with reasoning effort set to xhigh) a prompt + screenshot, tell it to get it working somewhat similarly, wait until it completes, review that it worked, then continue.

Most of the time when I develop professionally, I restart the session after each successful change. For this project, I initially tried to let one session go as long as possible, but eventually I reverted to my old behavior of restarting from zero after successful changes.

For knowing which files it should read/write, it uses `ls`, `tree` and `ag` most commonly; there is no out-of-band indexing or anything, just a unix shell controlled by an LLM via tool calls.
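For the curious, that discovery loop is nothing fancier than ordinary shell commands issued as tool calls. A minimal self-contained sketch (the repo path, file, and symbol name here are made up for demonstration; `ag` is shown via plain `grep -rn`, which works the same way):

```shell
# Sketch of grep-style retrieval, self-contained for demonstration.
# In practice the agent runs these against the real repo via tool calls;
# no index, no embeddings: just the filesystem and text search.
mkdir -p /tmp/demo-repo/src/layout
printf 'fn layout_block() {}\n' > /tmp/demo-repo/src/layout/block.rs

# 1. Get the lay of the land (agents often use `tree` or `ls -R`).
ls -R /tmp/demo-repo/src

# 2. Locate the symbol before editing it (`ag "fn layout_block"` is equivalent).
grep -rn "fn layout_block" /tmp/demo-repo/src

# 3. Read only the matching file into context, not the whole codebase.
sed -n '1,40p' /tmp/demo-repo/src/layout/block.rs
```

The point is that only the files surfaced by steps 1-2 ever enter the context window, which is why a 400K-token limit doesn't cap the size of codebase the agent can work on.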

simonw 2 hours ago|||
GPT-5.2 has a 400,000 token context window. Claude Opus 4.5 is just 200,000 tokens. To my surprise this doesn't seem to limit their ability to work with much larger codebases - the coding agent harnesses have got really good at grepping for just the code that they need to have in-context, similar to how a human engineer can make changes to a million lines of code without having to hold it all in their head at once.
storystarling 49 minutes ago||
That explains the coherence, but I'm curious about the mechanics of the retrieval. Is it AST-based to map dependencies or are you just using vector search? I assume you still have to filter pretty aggressively to keep the token costs viable for a commercial tool.
simonw 2 minutes ago||
No vector search, just grep.
nurettin 2 hours ago||
You don't load the entire project into the context. You let the agent work on a few 600-800 line files one feature at a time.
storystarling 45 minutes ago||
Right, but how does it know which files to pick? I'm curious if you're using a dependency graph or embeddings for that discovery step, since getting the agent to self-select the right scope is usually the main bottleneck.
embedding-shape 38 minutes ago||
I gave you a more complete answer here: https://news.ycombinator.com/item?id=46787781

> since getting the agent to self-select the right scope is usually the main bottleneck

I haven't found this to ever be the bottleneck. What agent and model are you using?

rahimnathwani 6 hours ago||
This is awesome. Would you be willing to share more about your prompts? I'm particularly interested in how you prompted it to get the first few things working.
embedding-shape 5 hours ago|
Yes, I'm currently putting it all together and will make it public via the blog post. Just need to go through all of it first to ensure nothing secret/private leaks, will update once I've made it public.
jacquesm 5 hours ago||
This post is far more interesting than many others on the same subject, not because of what is built but because of how it is built. There is a ton of noise on this subject and most of it seems to focus on the thing - or even on the author - rather than on the process, the constraints and the outcome.
embedding-shape 5 hours ago|
Thanks, means a lot. As the author of one such article (that might even have been the catalyst), I'm guilty of this myself, and the deeper I dove into understanding what Cursor actually built, and what they think the "success" was, the less sense everything made to me.

That's why taking a step back and looking at what's actually hard in the process and bad in the output felt like it made more sense to chase than anything else.

jacquesm 4 hours ago||
I think the Cursor example is as bad as it gets and this is as good as it gets.

FWIW I ran your binary and was pleasantly surprised, but my low expectations probably helped ;)

embedding-shape 4 hours ago||
I'm glad I could take people on a journey from highlighting what absolutely sucks to presenting something that people seem pleasantly surprised by! Can't ask for more, really :)
jacquesm 4 hours ago||
What is interesting is that yours is the first example of what this tech can do that resonates with me. The things I've seen posted so far do not pass the test for excitement; it's just slop, and it tries to impress by being a large amount of slop. I've done some local experiments but the results were underwhelming (to put it mildly), even for tiny problems.

The next challenge, I think, would be to prove that no reference implementation code leaked into the produced code. And finally, this being the work product of an AI process, you can't claim copyright, but someone else could claim infringement, so beware of that little loophole.

embedding-shape 3 hours ago||
Knowing you browse HN quite a lot (not that I'm not guilty of that too), that's some high praise! Thank you :)

I think the focus with LLM-assisted coding for me has been just that, assisted coding, not trying to replace whole people. It's still me and my ideas driving (and my "Good Taste", explained here: https://emsh.cat/good-taste/); the LLM does all the things I find more boring.

> prove that no reference implementation code leaked into the produced code

Hmm, yeah, I'm not 100% sure how to approach this; open to ideas. Basic text comparison feels like it'd be too dumb; using an LLM for it might work, letting it reference the other codebase perhaps. Honestly, I don't know how I'd do that.

> And finally, this being the work product of an AI process you can't claim copyright, but someone else could claim infringement so beware of that little loophole.

Good point to be aware of, and I guess by instinct I didn't actually add any license to this project. I thought of adding MIT as I usually do, but since I didn't actually make any of this, I ended up not assigning any license. Worst case, I guess most jurisdictions would deem either no copyright, or that I (implicitly) hold copyright. Guess we'll deal with that if we get there :)

rvz 3 hours ago||
> I'm going to upgrade my prediction for 2029: I think we're going to get a production-grade web browser built by a small team using AI assistance by then.

That is Ladybird Browser if that was not already obvious.

dewey 3 hours ago||
For the curious, they have a reasonable AI policy:

https://github.com/LadybirdBrowser/ladybird/blob/master/CONT...

simonw 3 hours ago||
Ladybird (a project I deeply respect) had a several year head start.
Imustaskforhelp 2 hours ago||
I feel like I have talked to embedding-shape on Hacker News so much that I recognize him. So it was a proud moment when I saw his Hacker News & GitHub comments in a YouTube video [0] about the recent Cursor thing.

It's great to see him make this. I didn't know that he had a blog, but it looks good to me. Bookmarked now.

I feel like although Cursor burned $5 million, we saw that, and now embedding-shape's takeaway:

If one person with one agent can produce equal or better results than "hundreds of agents for weeks", then the answer to the question: "Can we scale autonomous coding by throwing more agents at a problem?", probably has a more pessimistic answer than some expected.

Effectively, to me this feels like it answers the question: what if we have thousands of AI agents that can build a complex project autonomously, with no human? That idea seems dead now. Humans in the loop will yield much higher productivity and a better end result.

I feel like the lure behind the Cursor project was to find out if it's able to replace humans completely in an extremely large project, and the answer right now is no (and I have a feeling [bias?] that the answer's gonna stay that way).

Emsh, I have a question though: can you tell me about your background, if possible? Have you been involved in browser development or any related endeavours, or was this a first for you? From what I can tell, having talked with you, I do feel like the answer is yes, that you have worked in the browser space, but I am still curious to know the answer.

A question coming to my mind is how big the difference would be between 1 expert human + 1 agent, 1 non-expert (say junior dev) human + 1 agent, and 1 completely non-expert (say a normal, less techie person) + 1 agent.

What are your predictions on it?

How would the economics of becoming an "expert", or becoming a jack of all trades (junior dev), in a field fare with this new technology/toy that we've got?

How much productivity gain could there be from 1 non-expert -> junior dev, and the same question for junior -> senior dev, in this particular context?

[0] Cursor Is Lying To Developers… : https://www.youtube.com/watch?v=U7s_CaI93Mo

simonw 2 hours ago|
I don't think the Cursor thing was about replacing humans entirely.

(If it was that's bad news for them as a company that sells tools to human developers!)

It was about scaling coding agents up to much larger projects by coordinating and running them in parallel. They chose a web browser for that not because they wanted to build a web browser, but because it seemed like the ideal example of a well specified but enormous (million line+) project which multiple parallel agents could take on where a single agent wouldn't be able to make progress.

embedding-shape's project here disproves that last bit - that you need parallel agents to build a competent web renderer - by achieving a more impressive result with just one Codex agent in a few days.

Imustaskforhelp 1 hour ago||
> I don't think the Cursor thing was about replacing humans entirely.

I think how I saw things was that Cursor was/is still targeted very heavily at vibe coding, in a similar fashion to bolt.dev or lovable, and I even saw some vibe-coder YouTubers try to compare them; honestly, in the end Cursor had preferable pricing to the other two, and that's how I felt about Cursor.

Of course Cursor is for the more techie person as well, but I feel as if they would shift more and more towards Claude Code or similar, which are subsidized by the provider (Anthropic) itself; something not possible for Cursor to do without burning big B's, which it already has done.

So Cursor's growth was definitely towards the vibe-coder side.

Now coming to my main point: I had the feeling that what Cursor was trying to achieve wasn't replacing humans entirely but removing humans from the loop, aka vibe coding. If the Cursor experiment had been successful, the idea (which people felt instantly when it was first released) was that instead of having engineers, engineering itself would be dead, and the jobs would turn into management from a bird's-eye view (not managing agents individually, or being aware of what they did, or being in the loop in any capacity).

I feel like this might've been the reason they were willing to burn $5 million.

If you could convince engineers (considering browsers are taken as the holy grail of hardness) that they are better off being managers, then a vibe coding product like Cursor would be really lucrative.

At least that's my understanding; I can be wrong, I usually am, and I don't have anything against Cursor. (I actually used to use Cursor earlier.)

But embedding-shape's project shows that engineering is very much still alive and a net benefit. He produced a better result, at very minimal cost, than the $5 million inference-cost project.

> embedding-shape's project here disproves that last bit - that you need parallel agents to build a competent web renderer - by achieving a more impressive result with just one Codex agent in a few days.

Simon, I think that browsers got picked for this autonomous-agents idea partially because of your really famous post about how independent tests can lead to easier ports via agents. Browsers have a lot of independent tests.

So Simon, perhaps I may have over-generalized, but do you know of any areas where the idea of parallel agents is actually good, now that browsers are off? Personally, after this project, I can't really think of any. When the Cursor thing first launched, or when I first heard of it recently, I thought that browsers did make sense for some reason, but now that that's out of the window, I am not sure if there are any other projects where massively parallel agents might be even net positive over 1 human + 1 agent like Emsh.

simonw 1 hour ago||
No, I'm still waiting to see concrete evidence that the "swarms of parallel agents" thing is worthwhile. I use sub-agents in Claude Code occasionally - for problems that are easily divided - and that works fine as a speed-up, but I'm still holding out for an example of a swarm of agents that's really compelling.

The reason I got excited about the Cursor FastRender example was that it seemed like the first genuine example of thousands of agents achieving something that couldn't be achieved in another way... and then embedding-shapes went and undermined it with 20,000 lines of single-agent Rust!

Imustaskforhelp 35 minutes ago|||
Edit 2: looks like the project took literally the last token I had to create a big buggy implementation in golang haha!

I kind of left the agents to do what they wanted just asking for a port.

Your website does look rotated and the image is the only thing visible in my golang port.

Let me open source it & I will probably try to hammer it some more after I wake up to see how good Kimi is in real world tasks.

https://github.com/SerJaimeLannister/golang-browser

I must admit that it's not working right now. I am even unable to replicate your website: at first it displayed (though really glitchy, with the image zoomed in), and now it's only white. Although, oops, looks like I forgot the i in your name and wrote willson instead of willison, as I wasn't wearing specs. Sorry about that.

Now let me see... yeah, now it's displaying something, which is extremely glitchy.

https://github.com/SerJaimeLannister/golang-browser/blob/mai...

I have a file to show how glitchy it is. If anything, I just want someone to tinker around with whether a golang project can reasonably be made out of this rust project.

Simon, I see that you were also interested in go vibe coding, haha; this project has independent tests too! Perhaps you can try this out as well and see how it goes! It would be interesting to see!

Alright time for me to sleep now, good night!

Imustaskforhelp 54 minutes ago|||
Haha yeah, emsh and I were actually talking about it on Bluesky (which I saw after seeing your Bluesky; I didn't know both you and emsh were on bsky, haha).

https://bsky.app/profile/emsh.cat/post/3mdgobfq4as2p

But basically I got curious, and you can see from my other comments to you how much I love golang, so I decided to port the project from rust to golang, and emsh predicts that the project's codebase can even shrink to 10k!

(One point though: I don't have CC, so I am trying it out on the recently released Kimi k2.5 model and their coding tool, which I decided to use to see the real-world usefulness of an open-source model as well!)

Edit: I had written this comment just 2 minutes before you wrote yours, but then I decided to work on the golang project.

I mean, I think I ate through all of my 200 queries in Kimi Code, and it now does display me a (browser?). I had a shell script to test against your website, but it only opens up blank.

I am gonna go sleep so that the 5 hour limits can get recharged again and I will continue this project.

I think it will be really interesting to see this project in golang, there must be good reason for emsh to say the project can be ~10k in golang.

embedding-shape 35 minutes ago||
> I think it will be really interesting to see this project in golang, there must be good reason for emsh to say the project can be ~10k in golang.

Oh no, don't read too much into my wild guesses! Very hunch-based, and I'm only human after all.

augusteo 2 hours ago|
[flagged]
simonw 2 hours ago||
(This relates to my note at the end of https://simonwillison.net/2026/Jan/27/one-human-one-agent-on... )

The things that make me think this is still a huge project include:

1. JavaScript and the DOM. There's a LOT there, especially making sure that when the DOM is updated the page layout reflows promptly and correctly.

2. Security. Browsers are an incredibly high-risk environment, especially once you start implementing JavaScript. There are a ton of complex specs involved here too, like CORS and CSP and iframe sandbox and so on. I want these to be airtight and I want solid demonstrations of how airtight they are.

3. WebAssembly in its various flavors, including WebGPU and WebGL

4. It has to be able to render the real Web - starting with huge and complex existing applications like Google Maps and Google Docs and then working through that long tail of weird old buggy websites that the other browsers have all managed to render.

I expect that will keep people pretty busy for a while yet, no matter how many agents they throw at it.

augusteo 1 hour ago||
simonw replied to me! Achievement unlocked! Big fan!

And yes there's definitely still a lot to do. Security is def a big one.

Very exciting time to be alive.

Yoric 2 hours ago|||
Erm... security?
croisillon 2 hours ago||
are all your comments written by AI? jfc
layer8 1 hour ago|||
From his profile, this is scary:

> I lead AI & Engineering at Boon AI (Startup building AI for Construction).

augusteo 1 hour ago||||
I work in AI and write code with AI every day. The robots haven't replaced me yet, but I'll let you know :)
penic 2 hours ago|||
dude these people are deranged it's unreal