Posted by samwillis 1/14/2026
It's a broken mess that probably implements 0.00001% of Excel. And it's 1.2M lines of code.
With codebases developed this way, either someone needs to figure out how agents are going to maintain them (in which case SWE as we know it is dead - it will be limited to those who can spend trillions of tokens), or they are going to remain weird demos.
https://taonexus.com/publicfiles/jan2026/172toy-browser.py.t... (turn the sound down, it's a bit loud if you interact with the built-in Tetris clone.)
You can run it after installing the packages, "pip install requests pillow urllib3 numpy simpleaudio"
I livestreamed the latest version here two weeks ago; it's a ten-minute video:
https://www.youtube.com/watch?v=4xdIMmrLMLo&t=45s
I'm posting from that web browser. As an easter egg, mine has a cool Tetris clone (called Pentrix) based on pieces with 5 segments, the button for this is at the upper-right.
If you have any feature suggestions for what you want in a browser, please make them here.
* A small statically generated Hugo website, but with some clever linking/taxonomy stuff. This was a fairly self-contained project that is now 'finished' but wouldn't have taken me more than a few days to code up from scratch.
* A scientific simulation package, to try to do a clean refresh of an existing one which I can point at for implementation details but which has some technical problems I would like to reduce/remove.
Claude Code absolutely smashed the first one - no issues at all. With the second, no matter what I tried, it just made lots of mistakes, even when I told it to simply copy the problematic parts and transpose them into the new structure. It got to a point where it wasn't correct, couldn't get out of a bit of a 'doom loop', and required manual intervention, no matter how much prompting and how many hints I gave it.
I did sign up for Claude Code myself this week too, given the $10/month promo. I have experience with AI from using AWS Kiro at work and directly prompting Claude Opus for conversations. After just 2 days and ~5-6 vibe-coding sessions in total, I got a working Life-OS app built for my needs:
- A clone of Todoist with the features I actually use/want: projects, tags, due dates, quick-adding with a Todoist-like text-aware input (e.g. !p1, Today, etc.; see the sketch after this list)
- A Fantastical-like calendar. Again, roughly 80% of the features I used from Fantastical
- A Habit Tracker
- A Goal Tracker (Quarterly / Yearly)
- A dashboard page showing today's summary with single-click edit/complete marking
- User authentication and sharing of various features (e.g. tasks)
- Docker deployment which will eventually run on my NAS
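For illustration, here is a minimal sketch of what that Todoist-style quick-add parsing could look like. This is hypothetical code, not the actual app's; it only handles a "!p1"-style priority token and the words Today/Tomorrow.

    import re
    from datetime import date, timedelta

    # Minimal, hypothetical quick-add parser: pulls a priority token like
    # "!p1" and simple date words like "Today"/"Tomorrow" out of the raw
    # text and returns a task dict; the rest of the text becomes the title.
    def parse_quick_add(text: str) -> dict:
        task = {"title": text, "priority": None, "due": None}

        # Priority: !p1 .. !p4
        m = re.search(r"!p([1-4])\b", text, re.IGNORECASE)
        if m:
            task["priority"] = int(m.group(1))
            text = text.replace(m.group(0), "")

        # Very small date vocabulary; a real parser would handle far more.
        lowered = text.lower()
        if "today" in lowered:
            task["due"] = date.today()
            text = re.sub(r"\btoday\b", "", text, flags=re.IGNORECASE)
        elif "tomorrow" in lowered:
            task["due"] = date.today() + timedelta(days=1)
            text = re.sub(r"\btomorrow\b", "", text, flags=re.IGNORECASE)

        task["title"] = " ".join(text.split())
        return task

    # Example: parse_quick_add("Pay rent !p1 Today")
    # -> {"title": "Pay rent", "priority": 1, "due": <today's date>}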
I'm going to add a few more things and cancel quite a few subscriptions. It one-shots all tasks within minutes. It's wild. I can code but didn't bother looking at the code myself, because ... why.
Even though I do not earn US tech money, I am tempted to buy the Max subscription for a month or two, although the price is still hard to swallow.
Claude and vibe coding are wild. If I can clone Todoist within a few vibe-coding sessions and then implement any additional/new feature I want within minutes, instead of proposing, praying, and then waiting for months, why would I pay $$$...
I'm very bullish on LLMs building software, but this doesn't mean the death of software products any more than 3D printers meant the death of factories.
The hype may be similar - if that's your point then I agree - but the weakness of 3D printing is the range of materials and the conditions needed to work with them (titanium is merely extremely difficult, but no sane government will let the general public buy tetrafluoroethylene as a feedstock), while the weakness of machine learning (even more broadly than LLMs) is the number of examples it requires in order to learn anything.
It kinda blows my mind that this is possible, to build a browser engine that approximates a somewhat working website renderer.
Even if we take the most pessimistic interpretation of events (heavy human steering, reliance on existing libraries, sloppy code quality in places, not all versions compiling, etc.), it still blows my mind.
The positive views are mostly from people who point out that what matters in the end is what the code does, not what it looks like, e.g. users don't see the code, nor do they care about the code, and that even for businesses who do care, LLMs may be the ones who have to pay down any technical debt that builds up.
* Anyone in a field where mistakes are expensive. In one project, I asked the LLM to code-review itself and it found security vulnerabilities in its own solutions. It's probably still got more I don't know about.
** In the original sense of just letting the LLM do whatever it wanted in response to the prompt, never reading or code reviewing the result myself until the end.
That is true in a way, although even for agents readability matters.
But the code here does not actually do the right thing, and the way it is written also means it never could.
Web devs do care whether the engine runs their code according to Web standards (otherwise it's early IE all over again), and end users do care that websites work as their devs intended.
Current state is throw-away level quality.
I've critiqued it at length in the other post, see https://news.ycombinator.com/item?id=46705625
A well-architected POC built in a week with a clear path to scaling it to a full implementation down the line would be impressive, but that's not what this is.
The current code output is basically throwaway-quality, AI-hallucinated BS.
It turns out to matter a whole lot less than you would expect. Coding agents are really good at using grep and writing out plans to files, which means they can operate successfully against way more code than fits in their context at any one time.
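For what it's worth, that grep-plus-plan-file loop is simple enough to sketch. Something like the following (hypothetical helper names, assuming GNU grep is available): the agent records search hits in a plan file it can re-read later, instead of keeping the whole repo in context.

    import subprocess
    from pathlib import Path

    PLAN_FILE = Path("PLAN.md")

    def grep_repo(pattern: str, repo: str = ".") -> str:
        """Run grep over the repo and return matching lines (file:line:text)."""
        result = subprocess.run(
            ["grep", "-rn", "--include=*.py", pattern, repo],
            capture_output=True, text=True,
        )
        return result.stdout

    def note_findings(step: str, pattern: str) -> None:
        """Record a planning step plus the grep hits that motivated it."""
        hits = grep_repo(pattern)
        with PLAN_FILE.open("a") as f:
            f.write(f"## {step}\n")
            f.write(f"grep '{pattern}' found {len(hits.splitlines())} hits:\n")
            f.write(hits[:2000] + "\n\n")  # truncate so the plan stays small

    # Example: note_findings("Find all callers of render_page", "render_page(")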
Interestingly, recently it seems to me like codex is actually compressing early and often so that it stays in the smarter-feeling reasoning zone of the first 1/3rd of the window, which is a neat solution for this, albeit with the caveat of post-compression behavior differences cropping up more often.
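A rough sketch of what that kind of early compression could look like (all assumptions on my part: a crude stand-in token counter and a placeholder summarize step where a real agent would call the model) - older messages get folded into a summary once the transcript passes roughly a third of the window:

    # Assumed numbers for illustration only.
    CONTEXT_WINDOW = 200_000
    COMPRESS_AT = CONTEXT_WINDOW // 3  # stay inside the first third

    def count_tokens(messages: list[str]) -> int:
        # Crude stand-in: roughly 4 characters per token.
        return sum(len(m) for m in messages) // 4

    def summarize(messages: list[str]) -> str:
        # Placeholder: a real agent would ask the model for a summary here.
        return "SUMMARY OF %d EARLIER MESSAGES" % len(messages)

    def maybe_compress(messages: list[str], keep_recent: int = 10) -> list[str]:
        """If the transcript has grown past the budget, fold the older
        messages into a single summary message and keep the recent ones."""
        if count_tokens(messages) < COMPRESS_AT or len(messages) <= keep_recent:
            return messages
        older, recent = messages[:-keep_recent], messages[-keep_recent:]
        return [summarize(older)] + recent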
Presumably the security and validation of code still needs work, I haven't read anything that indicates those are solved yet, so people still need to read and understand the code, but we're at the "can do massive projects that work" stage.
Division of labor and planning and hierarchy are all rapidly advancing, the orchestration and coordination capabilities are going to explode in '26.
Who created those agents, and who gives them the tasks to work on? Who created the tests? AI, or humans?
If AI could reach the point where we actually trusted the output, then we might stop checking it.
It's a very real issue; people just tend to assume their code is wrong rather than the compiler. I've personally reported 12 GCC bugs over the last 2 years, and there are currently 1,239 open wrong-code bugs.
Here's an example of a simple one in the C frontend that has existed since GCC 4.7: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105180
All code interactions happen through agents.
I suppose the question is whether the agents only produce Swiss-cheese solutions at scale, with no way to fill in those gaps (at scale). If so, then yeah, fully agentic coding is probably a pipe dream.
On the other hand, if you can stand up a code-generation machine where it's watts + GPUs + time => software products, then, well... it's only a matter of time until app stores entirely disappear or get really weird. It's hard to fathom the change that's coming to our profession in that world.
AI coding agents are still a huge force-multiplier if you take this approach, though.
It would be walking the motorcycle.
Interesting times ahead.
The boys are trying to single-shot a browser when a moderately complex task can derail a repo. There's not much info, which might be deliberate, but from what I can pick up, their value-add was "distributed computing and organisational design", and even that they simplified. I agree that simplicity is always the first option, but a flat filesystem structure without standards will not work. Period.