Posted by samwillis 1/14/2026

Scaling long-running autonomous coding (cursor.com)
290 points | 197 comments
simonw 1/14/2026|
"To test this system, we pointed it at an ambitious goal: building a web browser from scratch."

I shared my LLM predictions last week, and one of them was that by 2029 "Someone will build a new browser using mainly AI-assisted coding and it won’t even be a surprise" https://simonwillison.net/2026/Jan/8/llm-predictions-for-202... and https://www.youtube.com/watch?v=lVDhQMiAbR8&t=3913s

This project from Cursor is the second attempt I've seen at this now! The other is this one: https://www.reddit.com/r/Anthropic/comments/1q4xfm0/over_chr...

snowmobile 1/16/2026||
Well, it doesn't surprise me that this project is just a non-compiling clone of an existing browser. Says a lot about AI in general, don't you think? https://news.ycombinator.com/item?id=46649046
mrefish 1/14/2026|||
Time to raise the bar. By 2029 someone will build a new browser using mainly AI-assisted coding and the surprise is that it was designed to be used by pelicans.
embedding-shape 1/15/2026||
> Time to raise the bar

Let's have someone pass the one we have first; this experiment didn't seem to yield a functioning browser, so why would we raise the bar?

mrefish 1/15/2026||
> why would we raise the bar?

The web needs to be more p5n friendly.

scott_waddell 1/27/2026|||
That timeline feels aggressive but not impossible. The tooling has gotten scary good - I've seen so many people (including myself) prototype complex UIs in hours that would've taken weeks before. Browser engines are a real challenge though.
jcfrei 1/15/2026|||
Surely a smart implementation would just find the Chromium source on GitHub, do some cosmetic rewrites, and strip out all non-essential features?
simonw 1/15/2026||
You'd be able to see it doing that by looking at the transcript. You could then tell it not to!
snowmobile 1/16/2026||
I suppose Cursor forgot to tell their AI that before claiming it built everything "from scratch".
afishhh 1/15/2026|||
> The other is this one: https://www.reddit.com/r/Anthropic/comments/1q4xfm0/over_chr...

I took a 5-minute look at the layout crate here and... it doesn't look great:

1. Line height calculation is suspicious, and the structure of the implementation also suggests inline spans aren't handled remotely correctly.

2. Uhm... where is the bidi? Directionality has far-reaching implications for an inline layout engine's design. This is not it.

3. It doesn't even consider itself a real engine:

        // Estimate text width (rough approximation: 0.6 * font_size * char_count)
        // In a real implementation, this would use font metrics
        let char_count = text.chars().count() as f32;
        let avg_char_width = font_size * 0.5; // Approximate average character width
        let text_width = char_count * avg_char_width;
I won't even begin talking about how this particular aspect that it "approximates" also has far-reaching implications for your design...
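
For reference, real measurement sums scaled per-glyph advances from font metrics. A minimal sketch of the idea, where `FontMetrics` is a made-up stand-in for real font data:

    // Hypothetical stand-in for real font metrics (a real engine gets
    // advances from a shaping pass, e.g. HarfBuzz).
    trait FontMetrics {
        fn units_per_em(&self) -> u16;
        fn advance_width(&self, ch: char) -> u16; // in font units
    }

    // Width is the sum of scaled glyph advances, not char_count * constant.
    fn measure_text(font: &impl FontMetrics, font_size: f32, text: &str) -> f32 {
        let scale = font_size / font.units_per_em() as f32;
        text.chars()
            .map(|ch| font.advance_width(ch) as f32 * scale)
            .sum()
    }
And even that is generous: correct measurement operates on shaped glyph runs, not individual chars.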

I could probably go on in perpetuity about the things wrong with this, even test it myself or something. But that's a waste of time I'm not undertaking.

Making a "browser" that renders a few particular web pages "correctly" is an order of magnitude easier than a browser that also actually cares about standards.

If this is how "A Browser for the modern age." looks then I want a time machine.

PaulHoule 1/15/2026|||
I saw a "web browser" that was AI generated in maybe 2k lines of python based on tkinter that tried to support CSS and probably was able to render some test cases but didn't at all have the shape of a real web browser.

It reminds me of having AI write me an MUI component the other day that implemented the "sx" prop [1] with code handling each individual property used by the component in that particular application. It might have been correct, and the component itself was successful and well coded... but MUI provides a styled() function and a <Box> component, either of which could have been used to make this component handle all the properties that "sx" is supposed to handle with as little as one line of code. I asked the agent "how would I do this using the tools that MUI provides to support sx", had a great conversation, and got a complete and clear understanding of the right way to do it. But on the first try it wrote something crazy overcomplicated to handle the specific case rather than a general-purpose solution that was radically simple. That "web browser" was all like that.

[1] you can write something like sx={width: 4} and MUI multiplies 4 by the application scale and emits, say, a width: 20px style

logicallee 1/16/2026||
Thank you for the detailed feedback, though we would prefer for you to comment on the announcement threads where you see it. We really appreciate the feedback.

You're referring to State of Utopia's[1] web browser, currently available here:

https://taonexus.com/publicfiles/jan2026/172toy-browser.py.t... (turn the volume down if you play the included easter egg mini-game as it's very loud.)

10-minute livestream demonstration:

https://www.youtube.com/watch?v=4xdIMmrLMLo&t=45s

That livestream demonstration is side-by-side with Chrome, rendering very simple pages.

It compiles, renders simple web pages and is able to post.

The differences between cursor's browser and our browser:

    - Cursor's long-running autonomously coded browser: over a million lines of code and a trillion tokens, which is computationally intensive and has a high cost.
    - State of Utopia's browser: under 3000 lines of code.

    - Cursor's browser: does not compile at present.  There's no way to use it.
    - State of Utopia's browser: compiles in every version.  You can use it right away, and it includes a fun easter-egg game.

    - Cursor's browser: can't make form submissions
    - State of Utopia's browser: can make form submissions.
I'm submitting this using that browser. (I don't know if it will really post or not.)

We are taking feature requests!! Submit your requested feature here:

https://pollunit.com/polls/ahysed74t8gaktvqno100g

We are happy to put any feature you want into the web browser.

[1] will be available at https://stateofutopia.com or https://stofut.com for short (St. of Ut.)

bouk 1/15/2026|||
Sure, but getting this far would have been inconceivable just half a year ago. It will only get better as time passes.
cube00 1/15/2026|||
On Jan 1 2026

> Given how badly my 2025 predictions aged I'm probably going to sit that one out! [1]

Seven days later you appear on the same podcast you appeared on in 2025 to share your LLM predictions for 2026.

What changed?

[1]: https://news.ycombinator.com/item?id=46450269

marcellus23 1/15/2026|||
He changed his mind? The comment you're citing seems partly tongue-in-cheek anyway, but even if it wasn't, how is this some kind of gotcha?
simonw 1/15/2026|||
Bryan got in touch and said "you're being too hard on yourself, those predictions were actually pretty good".
leptons 1/15/2026|||
Great, they can call it "artificial Internet Explorer", or aIE for short.
carlesonielfa 1/15/2026|||
It's impressive, but how sure are we that the code for the current open source browsers isn't part of the model's training data?
simonw 1/15/2026||
It turns out the Cursor one is stitching together a ton of open source components already.

That said, I don't really find the critique that models have browser source code in their training data particularly interesting.

If they spat out a full, working implementation in response to a single prompt then sure, I'd be suspicious they were just regurgitating their training data.

But if you watch the transcripts for these kinds of projects you'll see them make thousands of independent changes, reacting to test failures and iterating towards an implementation that matches the overall goals of the project.

The fact that Firefox and Chrome and WebKit are likely buried in the training data somewhere might help them a bit, but it still looks to me more like an independent implementation that's influenced by those and many other sources.

troupo 1/16/2026||
> The fact that Firefox and Chrome and WebKit are likely buried in the training data somewhere might help them a bit, but it still looks to me more like an independent implementation that's influenced by those and many other sources.

They generate a statistically appropriate token based on a very small context window. And they are slightly nerfed not to reproduce everything verbatim because that would bring all sorts of lawsuits.

Of course they are not reproducing Webkit or Blink or Firefox verbatim. However, it's not an "independent implementation". That's why it's "stringing together a bunch of open-source components": https://news.ycombinator.com/item?id=46649586

Edit: also, this "independent implementation" cannot be compiled by their own CI and doesn't work, apparently.

bob1029 1/14/2026|||
The goal I am currently using for long horizon coding experiments is implementation of a PDF rasterizer given an ISO32000 specification document.
llothar68 1/28/2026|||
I am currently using AI to try to improve pdfium: making it multithreaded, plus a few more features like partial network loading.
xenni 1/15/2026|||
We're almost there; I've been working on something similar using a markdown'd version of the ISO32000 spec.
hahahahhaah 1/15/2026|||
A web browser should be easy, as the source exists. Fix all the SVG bugs in my browser, tho...
viraptor 1/15/2026||
There are 3.5 serious open codebases of web browsers currently. Only two are full featured. It's not nothing, but it's very far from "source exists so it's easy to copy what they do".
machiaweliczny 1/15/2026|||
But detailed specs exist for both HTML and JS, tests exist too, and there's an unlimited amount of test data. You can just try running a webpage or program, and there are reference implementations, which is much easier for agents to understand. Also, they know HTML super well from scraping the whole internet. Still impressive, though.
llothar68 1/28/2026|||
Ladybird and Servo, and quite a few older ones, are also worth considering.
cheevly 1/14/2026|||
2029? I have no idea why you would think this is so far off. More like Q2 2026.
xmprt 1/14/2026|||
You're either overestimating the capabilities of current AI models or underestimating the complexity of building a web browser. There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.
tocsa 1/25/2026|||
Even though several people have seconded the complexity of a browser, I must add one more take and bring up one of my all-time favorite blog posts, from back in 2000 (I am old), when browsers were already complex: Joel Spolsky's Joel on Software episode "Things You Should Never Do, Part I" https://www.joelonsoftware.com/2000/04/06/things-you-should-... His first example was the Netscape browser v6.0: why there wasn't a v5.0 after v4.0, and why it took three years: "They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch." I think this blog post is very relevant here.
torginus 1/15/2026||||
Even if AI will not achieve the ability to perform at this level on its own, it clearly is going to be an enormous force multiplier, allowing highly skilled devs to tackle huge projects more or less on their own.
thesz 1/15/2026||
Skilled devs compress, not generate (expand).

https://www.youtube.com/watch?v=8kUQWuK1L4w

The "discoverer" of APL tried to express as many problems as he could with his notation. First he found that notation expands and after some more expansion he found that it began shrinking.

The same goes for Forth, which provides the means for a Sequitur-compressed [1] representation of a program.

[1] https://en.wikipedia.org/wiki/Sequitur_algorithm

Myself, I always strive to delete some code or replace some code with a shorter version. First, to better understand it; second, so there is less to read when I come back.

rvz 1/15/2026||||
It's most likely both.

> There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.

Firstly, the CI is completely broken on every commit and all tests have failed; looking closely at the code, it is exactly what you'd expect from unmaintainable slop.

Having more lines of code is not a good measure of robust software, especially if it does not work.

rlt 1/15/2026|||
Not only edge cases and standards, but also tons of performance optimizations.
gordonhart 1/14/2026||||
Web browsers are insanely hard to get right, that’s why there are only ~3 decent implementations out there currently.
qingcharles 1/15/2026||
The one nice thing about web browsers is that they have a reasonably formalized specification set and a huge array of tests that can be used. So this makes them a fairly unique proposition ideally suited to AI construction.
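
Concretely, that test array is web-platform-tests (https://github.com/web-platform-tests/wpt), with cross-engine results tracked at https://wpt.fyi. Assuming its usual CLI, you can point it at an engine with something like:

    ./wpt run firefox css/css-flexbox/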
pleurotus 1/15/2026||
As far as I've read in Ladybird's blog updates, the issue is less the formalised specs and more that other browsers break the specs, so websites adjust, so you need to take that non-compliance into account in your design.
johnfn 1/15/2026||||
You should make your own predictions, and then we can do a retrospective on who was right.
mkoubaa 1/14/2026||||
Yeah, if you let them index Chromium I'm sure it could do it next week. It just won't be original or interesting.
geeunits 1/14/2026|||
[flagged]
dang 1/15/2026||
Please don't cross into personal attack on HN.

https://news.ycombinator.com/showhn.html

keepamovin 1/15/2026||
That makes a lot of sense for massive-scale efforts like a browser, using coordinated agents to push toward a huge, well defined target with existing benchmarks and tests.

My angle has been a bit different: scaling autonomous coding for individual developers, and in a much simpler way. I love CLI agents, but I found myself wasting time babysitting terminals while waiting for turns to finish. At some point it clicked: what if I could just email them?

Email sounds backward, but that’s the feature. It’s universal, async, already collaborative. The agent sends me a focused update, I reply with guidance, and it keeps working on a server somewhere, or my laptop, while I’m not glued to my desk. There’s still a human in the loop, just without micromanagement.

It’s been surprisingly joyful and productive, and it feels closer to how real organizations already work. I’ve put together a small, usable tool around this and shared it here if anyone wants to try it or kick the tires: https://news.ycombinator.com/item?id=46629191

embedding-shape 1/14/2026||
Did anyone manage to run the tests from the repository itself? The code seems filled with errors and warnings, and as far as I can tell none of them are because of the platform I'm on (Linux). I went and looked at the Actions workflow history for some pages, and it seems CI has been failing for a while; PRs have also all been failing CI but were merged anyway. How exactly was this verified to be something that could be used as a successful example, or am I misunderstanding the point they are trying to make? They mention a screenshot, but they never actually say whether their goal was successfully met, do they?

I'm not sure the approach of "completely autonomous coding" is the right way to go. I feel like maybe we'll be able to use these agents more effectively if we think of them as something to be used by a human to accomplish something, and lean into letting the human drive, because quality spirals out of control so quickly.

snek_case 1/15/2026||
I found the codebase very hard to navigate. Hundreds (over a thousand?) of tiny files of less than 200 lines each, in deeply nested subdirectories. I wanted to find where the JavaScript engine was and where the DOM implementation was located, and I couldn't easily find them, even using the GitHub search feature. I'm not exactly sure what this browser implements, or how.

Even their README is kind of crappy. Ideally you want installation instructions right near the top, but it's broken into multiple files. The README link that says "running + architecture" (but points at a file actually called browser_ui.md???) is hard to follow. There is no explicit list of dependencies, and again no explanation of how JavaScript execution works, or how rendering works, really.

It's impressive that they got such a big project to be built by agents and to compile, but this codebase... Feels like AI slop, and you couldn't pay me to maintain it. You could try to get AI agents to maintain it, but my prediction is that past some scale, they would have a hard time figuring out their own mess. You would just be left with permanent bugs you can't easily fix.

bonesss 1/15/2026|||
So the chain of events here is: copy existing tutorials and public/available code, train the model to spit it out-ish when asked, hand it a mature-ish specification, and now they jitter and jumble towards a facsimile of a junior copy-paste outsourcing nightmare they can't maintain (creating exciting liabilities for all parties involved).

I can't shake the feeling that simply being shameless about copy-paste (i.e. copyright infringement) would let existing tools do much the same faster and more efficiently. Download Chromium, search-replace 'Google' with 'ME!', run Make... if I put that in a small app someone would explain that it's actually solvable as a bash one-liner.

There's a lot of utility in better search and natural language interactions. But the siren call of feedback loops plays with our sense of time and might be clouding our sense of progress and utility.

kungfuscious 1/15/2026||
You raise a good point, which is that autonomous coding needs to be benchmarked on designs/challenges where the exact thing being built isn't part of the model's training set.
NitpickLawyer 1/15/2026||
swe-REbench does this. They gather real issues from GitHub repos on a roughly monthly basis and test the models. On their leaderboard you can use a slider to select issues created after a model was released and see the stats. It works for open models; a bit uncertain for closed models. Not perfect, but it's the best we have for this idea.
datsci_est_2015 1/15/2026||||
To steelman the vibecoders’ perspective, I think the point is that the code is not meant for you to read.

Anyone who has looked at AI art, read AI stories, listened to AI music, or really interacted with AI in any meaningfully critical way would recognize that this was the only predictable result given the current state of AI generated “content”. It’s extremely brittle, and collapses at the smallest bit of scrutiny.

But I guess (to continue steelmanning) the paradigm has shifted entirely. Why do we even need an entire browser for the whole internet? Why can’t we just vibe code a “browser” on demand for each web page we interact with?

I feel gross after writing this.

embedding-shape 1/15/2026|||
If it's not meant to be read, and not meant to be run since it doesn't compile (and doesn't seem to have been able to for quite some time), what is this meant to demonstrate?

That agents can write a bunch of code by themselves? We already knew that, and what's even the point of that if the code doesn't work?

I feel like I'm still missing what this entire project and blogpost is about. Is it supposed to be all theoretical or what's the deal?

datsci_est_2015 1/15/2026||
You and me both, bud. I often feel these days that humanity has never had a more fractured reality, and worse, those fractures are very binary and tribal. I cope by trying to find fundamental truths that are supported by overwhelming evidence rather than focus on speculation.

I guess the fundamental truth that I’m working towards for generative AI is that it appears to have asymptotic performance with respect to recreating whatever it’s trying to recreate. That is, you can throw unlimited computing power and unlimited time at trying to recreate something, but there will still be a missing essence that separates the recreation from the creation. In very small snippets, and for very large compute, there may be reasonable results, but it will never be able to completely replace what can be created in meatspace by meatpeople.

Whether the economics of the tradeoff between “nearly recreated” and “properly created” is net positive is what I think this constant ongoing debate is about. I don’t think it’s ever going to be “it always makes sense to generate content instead of hire someone for this”, but rather a more dirty, “in this case, we should generate content”.

embedding-shape 1/15/2026||
No, but this blogpost is on a whole other level. Usually the stuff they showcase at least does something, not shovelware that doesn't compile.
snek_case 1/15/2026|||
I've had AI write some very nice, readable code, but I make it go one function at a time.
datsci_est_2015 1/15/2026||
Writing code one function at a time is not the 100x speed-up being hyped all over HN. I also write my code one function at a time, often assisted by various tools, some of them considered "AI".

Writing code one function at a time is the furthest thing from what is being showcased in TFA.

embedding-shape 1/15/2026|||
> It's impressive that they got such a big project to be built by agents and to compile

But that's the thing: it doesn't compile, has a ton of errors, and CI seems to have been broken for a long time... What exactly is supposed to be impressive here, that it managed to generate a bunch of code that doesn't even compile?

What in the holy hackers is this even about? Am I missing something obvious here? How is this news?

underdeserver 1/15/2026|||
Looks like it doesn't compile for at least one other guy (I myself haven't tried): https://github.com/wilsonzlin/fastrender/issues/98

Yeah, answers need to be given.

snek_case 1/15/2026||
Cursor is in the business of selling you more tokens, so it makes sense that they would exaggerate the capabilities of their models, and even advertise it being used to produce lots of code over weeks. This would probably cost you thousands in API usage fees.
askl 1/15/2026|||
> What in the holy hackers is this even about? Am I missing something obvious here?

It's about hyping up cursor and writing a blog post. You're not supposed to look at or use the code, obviously.

idopmstuff 1/15/2026|||
> I'm not sure the approach of "completely autonomous coding" is the right way to go.

I suspect the author of the post would agree. This feels much more like an experiment to push the limits of LLMs than anything they're looking to seriously use as a product (or even the basis of a product).

I think the more interesting question is when the approach of completely autonomous coding will be the right way to go. LLMs are definitely progressing along a spectrum of: Can't do it -> Can do it with help -> Can do it alone but code isn't great -> Can do it alone with good code. Right now I'd say they're only in that final step for very small projects (e.g. simple Python scripts), but it seems like an inevitability that they will get there for increasingly large ones.

csomar 1/15/2026|||
You can stop reading the article from here:

> Today's agents work well for focused tasks, but are slow for complex projects.

What does slow mean? Slower than humans? Need faster GPUs? What does it even imply? Too slow to produce the next token? Too slow in attempts to be usable? Need human intervention?

This piece is made and written to keep the bubble inflating further.

seanc 1/15/2026||
Code filled with errors and warnings? PR's merged with failing CI?

So I guess they've achieved human parity then?

(I'll see myself out)

trjordan 1/14/2026||
This is going to sound sarcastic, but I mean this fully: why haven't they merged that PR.

The implied future here is _unreal cool_. Swarms of coding agents that can build anything, with little oversight. Long-running projects that converge on high-quality, complex projects.

But the examples feel thin. Web browsers, Excel, and Windows 7 exist, and they specifically exist in the LLM's training sets. The closest to real code is what they've done with Cursor's codebase... but it's not merged yet.

I don't want to say, call me when it's merged. But I'm not worried about agents ability to produce millions of lines of code. I'm worried about their ability to intersect with the humans in the real world, both as users of that code and developers who want to build on top of it.

dust42 1/15/2026||
> This is going to sound sarcastic, but I mean this fully: why haven't they merged that PR.

I would go even further: why have they not created at least one less complex project that is working and ready to be checked out? To me it sounds like dangling a carrot in front of VC investors: 'Look, we are almost able to replace legions of software developers! Imagine the market size and potential cost reductions for companies.'

LLMs are definitely an exciting new tool and they are going to change a lot. But is everything stamped 'AI' worth billions? The future will tell. Looking back, the dotcom boom hype felt exactly the same.

The difference with the dotcom boom is that back then there was a lot more optimism about building a better future. The AI gold rush seems focused on getting giga-rich while fscking over the bigger part of humanity.

risyachka 1/14/2026|||
>> why haven't they merged that PR.

because it is absolutely impossible to review that code, and there are a gazillion issues in there.

The only way it can get merged is to YOLO it and then fix issues in prod for months, which kinda defeats the purpose and brings the gains close to zero.

mkoubaa 1/15/2026||
On the other hand, finding and fixing issues for months is still training data
orlp 1/15/2026|||
> Long-running projects that converge on high-quality, complex projects

In my experience agents don't converge on anything. They diverge into low-quality monstrosities which at some point become entirely unusable.

embedding-shape 1/15/2026||
Yeah, I don't think they're built for that either; you need a human to steer the convergence, otherwise they indeed end up building monstrosities.
viraptor 1/15/2026|||
> Web browsers, Excel, and Windows 7 exist, and they specifically exist in the LLM's training sets.

There are just a bit over 3 browsers, 1 serious Excel-like, and a small part of the Windows user side. That's really not enough training data for replicating those specific tasks.

energy123 1/15/2026|||
> Long-running projects that converge

This is how I think about it. I care about asymptotics. Which initial conditions (model(s) x workflow/harness x input text artefacts) cause convergence to the best steady state? The number of lines of code doesn't have to grow; it could also shrink. It's about the best output.

dist-epoch 1/14/2026||
Pretty much everything exists in the training sets. All non-research software is just a mishmash of various standard modules and algorithms.
galaxyLogic 1/14/2026||
Not everything, only code-bases of existing (open-source?) applications.

But what would be the point of re-creating existing applications? It would be useful if you could produce a better version of those applications. But the point of this experiment was to produce something "from scratch", I think. Impressive, yes, but is it useful?

A more practically useful task would be for the Mozilla Foundation and others to ask AI to fix all the bugs in their application(s). And perhaps they are trying to do that; let's wait and see.

mkoubaa 1/15/2026|||
You have to be careful which codebase you try this on. I have a feeling that if someone unleashed agents on the Linux kernel to fix bugs, it'd lead to a ban on agents there.
conradev 1/15/2026|||
Re-creating closed source applications as open source would have a clear benefit because people could use those applications in a bunch of new ways. (implied: same quality bar)
ZitchDog 1/14/2026||
I used similar techniques to build tjs [1] - the world's fastest and most accurate JSON schema validator, with magical TypeScript types. I learned a lot about autonomous programming. I found a similar "planner/delegate" pattern to work really well, with the use of git subtrees to fan out work [2].

I think any large piece of software with well established standards and test suites will be able to be quickly rewritten and optimized by coding agents.

[1] https://github.com/sberan/tjs

[2] /spawn-perf-agents claude command: https://github.com/sberan/tjs/blob/main/.claude/commands/spa...

micimize 1/14/2026||
> While it might seem like a simple screenshot, building a browser from scratch is extremely difficult.

> Another experiment was doing an in-place migration of Solid to React in the Cursor codebase. It took over 3 weeks with +266K/-193K edits. As we've started to test the changes, we do believe it's possible to merge this change.

In my view, this post does not go into sufficient detail or nuance to warrant any serious discussion, and the sparseness of info mostly implies failure, especially in the browser case.

It _is_ impressive that the browser repo can do _anything at all_, but if there were anything more noteworthy than that, I feel they'd go into more detail than volume metrics like 30K commits and 1M LoC. For instance, the entire capability on display could be constrained to a handful of lines that delegate to other libs.

And, it "is possible" to merge any change that avoids regressions, but the majority of our craft asks the question "Is it possible to merge _the next_ change? And the next, and the 100th?"

If they merge the MR they're walking the walk.

If they present more analysis of the browser, it's worth the talk (it's not that useful a test if they didn't scrutinize it beyond "it renders").

Until then, it's a mountain of inscrutable agent output that manages to compile, and that contains an execution pathway which can screenshot apple.com by some undiscovered mechanism.

meander_water 1/15/2026||
The lowest bar in agentic coding is the ability to create something which compiles successfully. Then something which runs successfully in the happy path. Then something which handles all the obvious edge cases.

By far the most useful metric is to have a live system running for a year with widespread usage that produces a lower number of bugs than that of a codebase created by humans.

Until that happens, my skeptic hat will remain firmly on my head.

embedding-shape 1/14/2026||
> it's a mountain of inscrutable agent output that manages to compile

But is this actually true? They don't say that as far as I can tell, and it also doesn't compile for me, nor for their own CI it seems.

sashank_1509 1/15/2026|||
Oh, it doesn't compile? That's very revealing.
rvz 1/15/2026||
Some people just believe anything said on X these days. No timeline from start to finish, just "trust me bro".

If you can't reproduce or compile the experiment then it really doesn't work at all, and it's nothing but a hype piece.

micimize 1/15/2026|||
Hah, I don't know actually! I was assuming it must, given they were able to get that screenshot video.
Snuggly73 1/15/2026||
error: could not compile `fastrender` (lib) due to 34 previous errors; 94 warnings emitted

I guess at some point something compiled, but cba to try to find that commit. They should've left it in a better state before doing that blog post.

jaggederest 1/15/2026||
I find it very interesting the degree to which coding agents completely ignore warnings. When I program I generally target warning-free code, and even with significant effort in prompting, I haven't found a model that treats warnings as errors; they almost all prefer "ignore this warning" pragmas or comments over actually fixing them.
ianbutler 1/15/2026|||
Yeah, I've had problems with this recently. "Oh, those are just warnings." Yes, but leaving them will turn this codebase to shit in short order.

I do use AI heavily so I resorted to actually turning on warnings as errors in the rust codebases I work in.
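
For anyone wanting to do the same, recent Cargo supports it as plain config (the classic alternative is RUSTFLAGS="-Dwarnings" in CI):

    # In Cargo.toml
    [lints.rust]
    warnings = "deny"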

micimize 1/15/2026||
Easiest to have different agents or turns that set aside the top-level goal via hooks/skills/manual prompt/etc. Heuristically, a human will likely ignore a lot of warnings until they've wired up the core logic, then go back and re-evaluate, but we still have to apply steering to get that kind of higher-order cognitive pattern.

Product is still fairly beta, but in Sculptor[^1] we have an MCP that provides agent & human with suggestions along the lines of "the agent didn't actually integrate the new module" or "the agent didn't actually run the tests after writing them." It leads to some interesting observations & challenges - the agents still really like ignoring tool calls compared to human messages b/c they "know better" (and sometimes they do).

[^1]: https://imbue.com/sculptor/

conception 1/15/2026||||
You can use hooks to keep them from being able to do this btw
jaggederest 1/15/2026||
I generally think of needing hooks as being a model training issue - I've had to use them less as the models have gotten smarter, hopefully we'll reach the point where they're a nice bonus instead of needed to prevent pathological model behavior.
suriya-ganesh 1/15/2026|||
Unfortunately this is not the most common practice. I've worked on Rust codebases with 10K+ warnings. And Rust was supposed to help you.

It is also close to impossible to run anything in the node ecosystem without getting a wall of warnings.

You are an extreme outlier for putting in the work to fix all warnings.

embedding-shape 1/15/2026|||
> It is also close to impossible to run anything in the node ecosystem without getting a wall of warnings.

Haven't found that myself; are you talking about TypeScript warnings perhaps? Because I'm mostly using plain JavaScript and try to steer clear of TypeScript projects, and AFAIK neither JavaScript the language nor its runtimes really have warnings, except for deprecations. Are those the ones you're talking about?
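
For the deprecation ones, node does at least ship flags to surface or escalate them (app.js is just a stand-in for your entry point):

    node --trace-deprecation app.js   # print stack traces for deprecation warnings
    node --throw-deprecation app.js   # turn them into errors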

jaggederest 1/15/2026|||
`cargo clippy` is also very happy with my code. I agree, and I think it's kind of a tragedy; for production work warnings are very important. Certainly, even if you have a large number of warnings and `clippy` issues, that number should ideally go down over time, rather than up.
Snuggly73 1/15/2026||
And then there is the cost. The blog post says that they've spent trillions (plural!) of tokens on that experiment.

Looking at OAI API pricing, 5.2 Codex is $14 per 1 million output tokens, which makes a cool $14M for 1 trillion tokens (multiplied by whatever the plural is). For something that "kind of works".

It's a nice ad for OAI and Anysphere, but maybe next time just donate the money to a browser team?

tehsauce 1/15/2026||
I was excited to try it out, so I downloaded the repo and ran the build. However, there were 100+ compilation errors. So I checked the commit history on GitHub and saw that, for at least several pages back, all recent commits had failed in CI. It was not clear which commit I should pick to get the semi-working version advertised.

I started looking at the Cargo.toml to at least get an idea of how the project was constructed. I saw there that, rather than being built from scratch as the post seemed to imply, almost every core component was simply pulled in from an open source library: the quickjs engine, wgpu graphics, winit windowing & input, egui for UI, HTML parsing, the list goes on. On twitter their CEO explicitly stated that it uses a "custom js vm" which seemed particularly misleading / untrue to me.

Integrating all of these existing components is still super impressive for these models to do autonomously, so I'm just at a loss for how to feel when they do something impressive but then feel the need to misrepresent so much. I guess I just have a lot less respect and trust for the Cursor leadership, but maybe a little relief knowing that soon I may just generate my own custom cursor!

jkelleyrtp 1/15/2026||
WGPU for rendering, winit for windowing, the Servo CSS engine, Taffy for layout: that sounds eerily similar to our existing open source Rust browser, blitz.

https://github.com/dioxuslabs/blitz

Maybe we ended up in the training data!

satvikpendem 1/15/2026||
I follow Dioxus, and particularly blitz / #native on your Discord, and I noticed the exact same thing. There was a comment in a readme in Cursor's browser repo mentioning Taffy and I thought: hang on, it's definitely not from scratch as they advertise. People really do believe everything they read on Twitter.

Great work by the way; blitz seems to be coming along nicely, and I see you guys even created a proto-browser yourselves, which is pretty cool, and actually functional unlike Cursor's.

whatever1 1/15/2026|||
You are doing it wrong.

Take a screenshot and take it to your manager / investor and make a presentation “Imagine what is now possible for our business”.

Get promoted / exit, move to other pastures and let them figure it out.

eeL3bo1mohn7pee 1/15/2026|||
Of 63295 workflow runs, apparently only 1426 have been successful.

It's hard to avoid the impression that this is an unverified pile of slop that may have actually never worked.

The CI process certainly hasn't succeeded for the vast majority of commits.

Baffling, really.

alfalfasprout 1/16/2026||
You should see the code. It's true slop. The organization makes no sense.
wilsonzlin 1/16/2026|||
Thanks for the feedback. There were some build errors which have now been resolved; the CI test that was failing was not a standard check CI, and it's now been updated. Let me know if you have any further issues.

> On twitter their CEO explicitly stated that it uses a "custom js vm" which seemed particularly misleading / untrue to me.

The JS engine uses a custom JS VM being developed in vendor/ecma-rs as part of the browser, which is a copy of my personal JS parser project, vendored to make it easier to commit to.

I agree that for some core engine components, it should not be simply pulling in dependencies. I've begun the process of removing many of these and co-developing them within the repo alongside the browser. A reasonable goal for "from scratch" may be "if other major browsers use a dependency, it's fine to do so too". For example: OpenSSL, libpng, HarfBuzz, Skia. The current project can be moved more towards this direction, although I think using libraries for general infra that most software use (e.g. windowing) can be compatible with that goal.

I'd push back on the idea that all the agents did was wire up dependencies — the JS VM, DOM, paint systems, chrome, text pipeline, are all being developed as part of this project, and there are real complex systems being engineered towards the goal of a browser engine, even if not there yet.

polyglotfacto 1/21/2026|||
> there are real complex systems being engineered towards the goal of a browser engine, even if not there yet.

In various comments in https://news.ycombinator.com/item?id=46624541 I have explained at length why your fleet of autonomous agents failed miserably at building something that could be seen as a valid POC.

One example: your rendering loop does not follow the web specs and makes no sense.

https://github.com/wilsonzlin/fastrender/blob/19bf1036105d4e...

The above design document is simply nonsense; typical AI hallucinated BS. Detailed critique at https://news.ycombinator.com/item?id=46705625

The actual code is worse; I can only describe it as a tangle of spaghetti. As a Browser expert I can't make much, if anything, out of it. In comparison, when I look at code in Ladybird, a project I am not involved in, I can instantly find my way around the code because I know the web specs.

So I agree this isn't just wiring up of dependencies, and neither is it copied from existing implementations: it's a uniquely bad design that could never support anything resembling a real-world web engine.

Now don't get me wrong, I do think AI could be leveraged to build a web engine, but not by unleashing autonomous agents. You need humans in the loop at all levels of abstractions; the agents should only be used to bang out features re-using patterns established or vetted by human experts.

If you want to do this the right way, get in touch: https://github.com/gterzian

fwip 1/16/2026|||
When you say "have now been resolved" - did the AI agent resolve it autonomously, did you direct it to, or did a human do it?
neuronexmachina 1/16/2026||
Looks like Cursor Agent was at least somewhat involved: https://github.com/wilsonzlin/fastrender/commit/4cc2cb3cf0bd...
embedding-shape 1/16/2026||
Looks like a bunch of different users (including Google's Jules, with one commit) have been contributing to the codebase, and the recent "fixes" include switching between various git users. https://gist.github.com/embedding-shapes/d09225180ea3236f180...

This to me seems to raise more questions than it answers.

mjmas 1/17/2026||
The ones at *.ec2.internal generally mean that the git config was never set up and it defaults to $(id -un)@$(hostname)
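
So the cleanup on those boxes is just the usual first-run setup before committing (name and email here are placeholders):

    git config --global user.name "fastrender-agent"
    git config --global user.email "agent@example.com"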
embedding-shape 1/17/2026||
Indeed. Extra observant people will notice that the "Ubuntu" username was used only twice though, compared to "root", which was used 3700+ times. And observant people who've dealt with infrastructure before might recognize that username as the default for interactive EC2 instances :)
handfuloflight 1/15/2026||
Let us all generate our own custom cursors.
jphelan 1/14/2026||
This looks like extremely brittle code to my eyes. Look at https://github.com/wilsonzlin/fastrender/blob/main/crates/fa...

What is `FrameState::render_placeholder`?

    pub fn render_placeholder(&self, frame_id: FrameId) -> Result<FrameBuffer, String> {
      let (width, height) = self.viewport_css;
      let len = (width as usize)
        .checked_mul(height as usize)
        .and_then(|px| px.checked_mul(4))
        .ok_or_else(|| "viewport size overflow".to_string())?;

      if len > MAX_FRAME_BYTES {
        return Err(format!(
          "requested frame buffer too large: {width}x{height} => {len} bytes"
        ));
      }

      // Deterministic per-frame fill color to help catch cross-talk in tests/debugging.
      let id = frame_id.0;
      let url_hash = match self.navigation.as_ref() {
        Some(IframeNavigation::Url(url)) => Self::url_hash(url),
        Some(IframeNavigation::AboutBlank) => Self::url_hash("about:blank"),
        Some(IframeNavigation::Srcdoc { content_hash }) => {
          let folded = (*content_hash as u32) ^ ((*content_hash >> 32) as u32);
          Self::url_hash("about:srcdoc") ^ folded
        }
        None => 0,
      };
      let r = (id as u8) ^ (url_hash as u8);
      let g = ((id >> 8) as u8) ^ ((url_hash >> 8) as u8);
      let b = ((id >> 16) as u8) ^ ((url_hash >> 16) as u8);
      let a = 0xFF;

      let mut rgba8 = vec![0u8; len];
      for px in rgba8.chunks_exact_mut(4) {
        px[0] = r;
        px[1] = g;
        px[2] = b;
        px[3] = a;
      }

      Ok(FrameBuffer {
        width,
        height,
        rgba8,
      })
    }

What is it doing in these diffs?

https://github.com/wilsonzlin/fastrender/commit/f4a0974594e3...

I'd be really curious to see the amount of work/rework over time, and the token/time cost for each additional actual completed test case.

blibble 1/14/2026||
this is certainly an interesting way to pull out an attribute from a tag: https://github.com/wilsonzlin/fastrender/blob/main/crates/fa...
blamestross 1/14/2026||
I suppose brittle code is fine if you have cursor to update and fix it. Ideal really, keeps you dependent.
xmprt 1/15/2026|||
To be fair, that was always the case when working with external contractors. And if agentic AI companies can capture that market, then that's still a pretty massive opportunity.
janstice 1/16/2026||
At least AI (unlike many contract dev shops) is keen to write unit tests…
torginus 1/15/2026||
Personally, now that I think about it, what I don't like about this is that they didn't scale up gradually. Say there's a ladder of complexity in software: starting at a simple React CRUD app, going on to something more complex, such as a Paint clone, then something more complex still, like a file manager, and ending at one of the most complex pieces of software ever made, a web browser.

I'd want to see a system that 100%s the first task (saturates it), does a great job on the next, makes a valiant effort on the third, then finally produces something promising but as yet unusable on the last.

That way we could see that scaling up difficulty results in a gradual decline in quality, and we'd have a decent measurement of where we are and where we are going.

mk599 1/14/2026|
Define "from scratch" in "building a web browser from scratch". This thing has over 100 crates as dependencies... To implement css layouting, it uses Taffy, a crate used by existing browser implementations...
rvz 1/15/2026||
When I see hundreds of crates being used in a project, I have to just scratch my head and ask: what the f___?

If one vulnerability exists in those crates, well, that's that.

qingcharles 1/15/2026||
And it's not necessarily a bad move to use all those dependencies, but you're right that it makes the claim shady.

I can create a web browser in under a minute in Copilot if I ask it to build a WinForms project that embeds the WebView2 "Edge" component and just adds an address bar and a back button.
