Posted by SergeAx 6 hours ago
Yet here we are, with what looks like a massive undertaking for vibe coding.
Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.
It's probably a bit of both.
Anyone can hack up a quick PoC, even without LLMs; the hard part is writing code that is correct and maintainable.
Bold of you to assume they have the expertise.
[0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
The patch would have been rejected either way because it was out of date and conflicted with other work going on.
(Though I don't know if this particular patch series would get accepted on its own merits.)
split into a bunch of much smaller changes?
There's no reason to assume my generic statement was talking about the ugly version rather than the nicely organized version.
Sucks for people who were invested in contributing to Bun and don't like working with AI tools to be sure, but I think the writing was on the wall for them pretty much immediately post-acquisition. You must admit, it's hard to predict that 100% of source lines will be written by AI if you're not walking the walk!
That is, if you use something like C, C++, Java, .NET, or Go. With JavaScript and Python I don't think knowing assembly would make any difference, because it's hard to optimize code in those languages for how the CPU and memory work.
The same applies to vibe coding: the best "vibe coder" will paradoxically be the person with enough knowledge and curiosity to understand programming, how computers work, and the subject at hand - one who could write the whole thing from scratch, so they have enough judgement to review generated code.
Of course the vast majority will be mediocre vibe coders, and even worse programmers; at least that's the direction we're going.
- the scale of how much and how fast you can generate code with AI vs. how fast you can write code for the compiler by hand
- the mental model of what is being generated and how much the contributor understands and owns the generated code
So it is not, by your own admission, "exactly, literally the same".
Vibe-coders often don't read, let alone understand, the code they submit in PRs.
Zig, as a programming language, has a multiplier codebase: a bug may affect a significantly larger portion of users than most libraries or binaries would, as it's a fundamental building block of everything that uses Zig. That alone could be worth the extra scrutiny on every individual commit.
There's also the usual arguments: copyright ethics, environmental ethics and maintainer burden.
Couldn't you say exactly the same about bun?
I guess there are two philosophies in software development: move fast and break things, or move at a pace that guarantees everything is rock solid.
Most commercial software, Anthropic included, is taking the former path, while most infrastructure teams are taking the latter.
I guess Linux and FreeBSD kernels are also not accepting LLM based contributions yet.
PostgreSQL, a famously slow and rock-solid project, accepts LLM-based contributions. But they are held to the same high standard: if you cannot explain the patch you submitted, it will likely get rejected.
Both appear to be[1][2]. FreeBSD doesn't have a formal policy yet, but they appear to be leaning towards admitting some degree of LLM contribution.
[1]: https://docs.kernel.org/process/coding-assistants.html
[2]: https://forums.freebsd.org/threads/will-freebsd-adopt-a-no-a...
Zig is famous for taking the former path! Anyone using Zig for a few years knows every release breaks things, and they are still making huge changes which I would classify as “moving fast”, like the recent IO changes!
You can be against a particular technology without being "anti-technology".
See DRM/surveillance/bad self driving implementations.
So the next step will be that Bun is rewritten directly from scratch at every iteration, and the repository will only contain the specs for the LLMs.
Caching the generated code locally will be authorized for some transition period, but as it's obviously very dangerous to let people tweak what exactly computers are doing, forbidding such a practice via a mandatory secure-boot mode is already planned. Only nazi pedophiles would do otherwise anyway, thus the enactment of the companion law is an obvious go-to.
The emitted AST has a lower defect rate since it incorporates strong types and built-in error handling. Other pros include native code and portability, but the downside is the compile time.
People say the same about Go as well - that its type system and limited feature set make it the most AI-friendly language - but there, too, it seems like a hunch rather than a proven fact.
Let me elaborate further - it's like the proficiency of LLMs in writing English vs. writing Swahili or Kurdish.
The types of a program are like Swahili or Kurdish, or even worse: those languages at least have a sizeable chunk of the Internet and digital archives behind them, while the types of a program are very specific to it.
Programming languages, in contrast, are constructed and vary much more in their designs. They are formal languages, making them closer to math than spoken language. LLMs being able to describe concepts more thoroughly and precisely through more expressive semantics obviously makes some languages more suitable than others.
The type system of a language is just one aspect of it that allows the language to provide guarantees to the LLM (and the user) about correctness of the code it's writing.
I am not speaking about specific types in specific programs. I am talking about the ability to describe complex constraints that LLMs (and humans) end up using to make writing correct code easier and more productive. Some programming languages absolutely are more effective at this than others, and that's always been true even before LLMs.
The last time I had a go with Haskell, the errors reminded me so much of hellish compilers from the 80s and 90s that I quickly gave up. Been there, not doing that again.
As for the downside, the compile time is somewhat offset once you're using agents (and especially parallel agents) anyway: since all of your edits cost a round-trip API call to a third-party server, you can accept a slightly slower compile step.
Lock the syntax/api together for a couple of years. Allow AI code in Zag.
Review after a few years, see which is better.
And will the Rust team accept their vibe-coded patches?
fwiw, I suspect it's less of an undertaking than you may think. I've been playing with AI to rewrite Postgres in Rust[0] over the past couple of weeks, and I found the AI to be exceptional at doing rewrites. Having an existing codebase you can reference prevents a lot of the problems you have with vibecoding: you have an existing architecture that works well, and a test suite that you can test against.
Over the course of a month I've gone from nothing to passing over 95% of the Postgres test suite. Given Jarred built Bun, I bet he'll be able to go much faster.
That's because it's not vibe coding - stingraycharles doesn't seem to understand what vibe coding is. Vibe coding was defined here https://x.com/karpathy/status/1886192184808149383
> There's a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
This is very far from Anthropic's migration plans.
My benchmark is basically, "are you letting the AI drive."
In this case, an AI appears to have written the migration guide...
And then that leaks outside their social and age groups, because other people hear the incorrect usage, get confused, and incorporate that confusion into their own use of the term.
with superpowers, I see a lot of: specs -> impl plan -> execute plan
They recently proposed some of their internal tools as the official Rust implementation[0] of Connect RPC[1]. The protobuf-based library set includes a new Rust-based protobuf compiler, Buffa[2].
[0]: https://github.com/orgs/connectrpc/discussions/7#discussionc...
Claude has absolutely no idea what it's doing with bleeding edge zig unless you feed it source and guide it closely (in which case it's useful for focused work) - I'm building a game engine & tcp/udp servers with it and it requires a hands-on approach and actually understanding what's being built.
I imagine these are not really concerns with rust at this point.
In my ideal world the team behind bun would be putting in the work to keep up with modern zig, but it's starting to look like they are running mostly on vibes in which case rust might be a better choice.
I think this is true regardless of what language you’re using.
I’ve built a lot in Zig and there’s no difference between vibing stuff in it versus TypeScript/React. Claude can “one-shot” them both, and will mimic existing code or grep the standard library to figure everything out.
Virtually all crates are still at version 0.x and introduce constant breaking changes: https://00f.net/2025/10/17/state-of-the-rust-ecosystem/
If you don’t want to use obsolete versions of dependencies, you need to explicitly tell the model that. Then you have to hope it can adopt new APIs it wasn’t trained on, rewrite existing code to handle the breaking changes, and keep your fingers crossed that nothing else breaks in the process.
LLMs perform much better with Go, not only because of the lack of hidden control flow (LLMs can deal with that, but it costs a lot of tokens) but mainly because both the language and its dependencies introduce very few breaking changes.
Which isn't particularly difficult - the language docs and std source come with the installation, so all you need to do is tell Claude where those directories are in your skill/plugin/CLAUDE.md.
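For example, a hypothetical CLAUDE.md snippet along those lines (the std path matches a Homebrew Zig 0.15.2 install; every detail here is illustrative, not a real Bun or Anthropic config):

    # CLAUDE.md (hypothetical snippet)
    ## Zig reference material
    - Zig std source: /opt/homebrew/Cellar/zig/0.15.2/lib/zig/std/
    - Target Zig 0.15.x; grep the std source before guessing at an API.
    - std.Io.Writer format strings changed: use {f} to call a format method, {any} to skip it.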
> and guide it closely (in which case it's useful for focused work)
It does struggle sometimes with writing code that compiles and uses the APIs correctly. My approach to that so far has been to write test blocks describing the desired interface + semantics, and to ask Claude to run `zig test` and fix errors in a loop until all the tests pass.
Here, I just did a quick test with claude.
1. "make a simple tcp echo server that uses rust"
compiles and runs - took a few seconds to generate (see the sketch below the list).
2. "make a simple tcp echo server that uses zig"
result: compile error, took literal minutes of spinning and thinking to generate
response: "ziglang.org isn't in the allowed domains. Let me check if there's another way, or just verify the code compiles conceptually and present it clean."
    /opt/homebrew/Cellar/zig/0.15.2/lib/zig/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
        @compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3. "make a simple tcp echo server that uses zig 0.16"
result: compile error:
    zig build-exe main.zig
    main.zig:30:21: error: no field named 'io' in struct 'process.Init.Minimal'
        const io = init.io;
                   ^~
4. "make a simple tcp echo server that uses zig 0.15"
result: compile error
    zig build-exe main.zig
    /nix/store/as1zlvrrwwh69ii56xg6yd7f6xyjx8mv-zig-0.15.2/lib/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
        @compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
Rust took seconds and just works. Zig examples took minutes and don't work out of the box. The DX & velocity isn't even close.
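For reference, a sketch of the kind of code step 1 produces - the textbook std-only echo server, one thread per connection (my reconstruction, not Claude's verbatim output):

    use std::io::{Read, Write};
    use std::net::TcpListener;
    use std::thread;

    fn main() -> std::io::Result<()> {
        // Listen locally; hand each connection its own thread.
        let listener = TcpListener::bind("127.0.0.1:7878")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            thread::spawn(move || {
                let mut buf = [0u8; 1024];
                loop {
                    match stream.read(&mut buf) {
                        // 0 bytes read means the peer closed the connection.
                        Ok(0) | Err(_) => break,
                        // Echo back exactly what was received.
                        Ok(n) => {
                            if stream.write_all(&buf[..n]).is_err() {
                                break;
                            }
                        }
                    }
                }
            });
        }
        Ok(())
    }

Nothing here is outside std, which is presumably part of why the Rust prompt one-shots: the training data is saturated with this exact program.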
1. the language and stdlib are written by people who know what they're doing
2. packages in the ecosystem, at the barest level, are written by those who didn't leave after a few compile errors they couldn't reason about
I think the changes are improvements, but there's a real cost to language churn, and every time it happens, the graveyard of projects grows just that little bit larger.
Zig is a great language and I want to see it succeed, but this is a prudent move for Bun.
Sometimes it is worth it, but it may also kill projects. A risky move. And AI doesn't help its cause. AI can save a lot of time when making ports, it is one of the things it does best, but it doesn't protect from regressions.
I am not using Bun in production, but if I was, I would consider it a risk. Not because of Rust vs Zig, but for changing things that work.
How is it an incorrect interpretation? Jarred is indeed pitching/suggesting/predicting that human contribution will not be allowed in the near future, i.e. banned.
The person upthread should have said "predicting".
Among them:
- much easier to iterate on (due to the language being simpler and compilation much faster)
- native C/C++ interop (Zig can compile C and C++ and mix them with Zig), which is crucial for a node-replacement runtime that runs an open source JS engine
- fewer dependencies and trivial static linking
I guess that now that they've been acquired by Anthropic there's this combination of having both in-house Rust talent, AI which does better on Rust, and the funding and resources necessary to undertake such a migration.
I think there are even longer term plays that Anthropic should be looking at, in this space, but it seems like they've decided rust is the right thing, so fair play. I would be (am!) thinking about making an LLM optimized high level language that you can generate / train on intensively because you control the language spec.
Claude struggling at Zig: the above, plus memory-safety issues if you run in “fast” mode.
It is generally true that Rust code tends to be written in a way that lets the compiler catch issues at compile time. The same is not as true for Zig, Python, or JS.
So the difference is not in writing new stuff but in maintaining the existing codebase. Rust's rigidity makes it potentially harder to break stuff compared to Zig's general flexibility. As a project grows and matures, different types of contributors naturally come in and it's unreasonable to expect everyone to learn about historical footguns that may have accumulated.
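A minimal illustration of the kind of mistake Rust rejects at compile time, where Zig, Python, or JS would only misbehave at runtime - this snippet deliberately does not compile:

    fn main() {
        let dangling;
        {
            let s = String::from("scoped");
            dangling = &s; // error[E0597]: `s` does not live long enough
        } // `s` is dropped here, so the reference would be left dangling
        println!("{dangling}");
    }

An agent gets this feedback immediately from `cargo build`, rather than from a segfault or silent corruption later.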
something JS-adjacent could certainly be more known than an obscure language but are that many people using drop-in node replacements?
But I can’t reconcile the reasoning about “strong, thorough compiler” with the fact that LLMs are also fantastic at Ruby.
They also write really great posix shell (including very sophisticated scripts) and python.
Something more subtle is going on.
Has anyone made any cross language benchmarks for LLMs? I wonder if rust's conceptual complexity makes it harder for LLMs to write? If all you care about is working software, which language is best for LLMs? Python, because there's more example code? Go or Java, because they're simpler languages? Ruby because its terse? Rust because of the compiler? I'd love to see a comparison!
I believe we now have all of them, but we fail at choosing.
It doesn’t look like that at all. Do you think that all use of AI is vibe coding?
https://github.com/oven-sh/bun/compare/claude/phase-a-port
This single commit is 65k lines of additions
https://github.com/oven-sh/bun/commit/ffa6ce211a0267161ae48b...
There's a decent article by Simon Willison that talks about this: https://simonwillison.net/2025/Mar/19/vibe-coding/
> I’m seeing people apply the term “vibe coding” to all forms of code written with the assistance of AI. I think that both dilutes the term and gives a false impression of what’s possible with responsible AI-assisted programming.
> (programming, neologism) A method of programming in which a developer generates code by repeatedly prompting a large language model.
But pointing your AI at an entire codebase to transpile pretty much entirely by itself? Yeah vibe coding is a fitting term.
Even if you wrote it a small essay on how to Rust. That improves the situation but doesn't change the core autonomy/hope of the task.
As much as I find the word "vibe" generally annoying (in all contexts), I actually really like "vibe coding" as "LLM did everything and I didn't even look at it". It's a succinct, useful way to describe that mode of doing things. Diluting it down to "LLM-assisted coding" makes it useless.
You're absolutely right.
"+27,939Lines changed: 27939 additions & 0 deletions"
of new rust code
This is obviously very different from that, but the way the commit looks doesn't make it so.
Like maybe you get the LLM to try _really hard_ to churn through everything, but this feels like a big case of "perils of the lack of laziness".
Of course if you have a good idea for how to deal with allocations etc "idiomatically" already maybe that works out well. And to the credit of the port guide writer bun seems to have its explicit allocations that are already mapping pretty well to Rust.
My only experience with ports so far is Python to Go, and it's been near flawless (just enough stupid shit to make me feel justified to be in the loop).
Especially for memory management, the right and wrong abstractions in Rust can mean a factor of 5 or 10 difference in difficulty. The right memory-management abstraction and your code can be a straight-line port (or even cleaner!); the wrong one and you're just going to spend a lot of tokens having a machine spin around in circles trying to untie itself.
GC'd languages don't have this problem, though obviously you can still generate a stupid amount of pain for yourself by doing something wrong.
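To make that concrete, a hedged sketch (all names illustrative): pointer-heavy structures from manually-managed code often port more cleanly to indices into a Vec - a crude arena - than to borrowed references, which is where the borrow-checker fights usually start:

    // A singly linked list stored in a Vec "arena". Nodes refer to each
    // other by index rather than by reference, so there are no lifetimes
    // to fight - structurally much closer to the pointer-based original.
    struct Node {
        value: u32,
        next: Option<usize>, // index into `nodes`, not a borrowed pointer
    }

    struct List {
        nodes: Vec<Node>,
        head: Option<usize>,
    }

    impl List {
        fn push_front(&mut self, value: u32) {
            self.nodes.push(Node { value, next: self.head });
            self.head = Some(self.nodes.len() - 1);
        }

        fn sum(&self) -> u32 {
            let mut total = 0;
            let mut cur = self.head;
            while let Some(i) = cur {
                total += self.nodes[i].value;
                cur = self.nodes[i].next;
            }
            total
        }
    }

The reference-based equivalent is where an LLM (or a human) can start burning tokens on ownership errors.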
The slides: https://go.dev/talks/2015/gogo.slide#3
An interesting similarity:
>We had our own C compiler just to compile the runtime.
The Bun team maintains their own fork of Zig, too.
The LLM is non-deterministic. You could have it independently do the conversion 10 times, and you'd get 10 different results, and some of them might even be wildly different. There's no way to validate that without reviewing it fully, in its entirety, each time.
That's not to say the human-written deterministic conversion tool is going to be perfect or infallible. But you can certainly build much more confidence with it than you can with the LLM.
So, Anthropic acquires the Bun team because claude-code uses Bun. They port Bun from Zig to Rust, presumably because Rust "is better" (imagine big air quotes here). Again presumably, they want to make claude-code "better". Why make it so complicated? With all the power of the LLMs they have, surely they could make claude-code the best possible by writing it in Rust directly.
It's easy to just see Bun as a marketing stunt, as well.
[0]: https://github.com/oven-sh/bun/compare/claude/phase-a-port
    -------------------------------------------------
    Language      files      blank    comment       code
    -------------------------------------------------
    Zig            1298      79693      60320     571814
    TypeScript     2600      67434     115281     471122
    JavaScript     4344      36947      37653     290873
    C++             583      27129      19117     215531
    C               111      21577      83914     199576
    -------------------------------------------------

Whether or not they can clean it up is an interesting question.
but also, telling an LLM to do a line-by-line translation and giving it a file _is guaranteed to never truly be a line-by-line translation_, due to how LLMs work. But that's fine: you don't ask for line-by-line because you expect it to actually work line by line, but to "convince" it not to do the opposite (like shuffling things around wholesale, or completely rewriting components based on its "guess" at what they're supposed to do). In other words, it makes the result more likely to be behavior-compatible (logic bugs included) even though it doesn't literally go line by line. And that then allows you to fuzz the behavior for discrepancies in the initial step, before doing any larger refactoring which may include bug fixes.
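A hedged sketch of what that discrepancy-fuzzing could look like; the binary names and inputs are made up, and a real harness would pull cases from a fuzzer corpus and compare more than stdout:

    // Hypothetical differential harness: run the original and the
    // transpiled binary on the same input and flag any divergence.
    use std::io::Write;
    use std::process::{Command, Stdio};

    fn run(bin: &str, input: &[u8]) -> Vec<u8> {
        let mut child = Command::new(bin)
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()
            .expect("failed to spawn");
        // Write the case, then drop stdin so the child sees EOF.
        child.stdin.take().unwrap().write_all(input).unwrap();
        child.wait_with_output().unwrap().stdout
    }

    fn main() {
        for case in [&b"hello"[..], &b""[..], &b"\xff\x00 edge"[..]] {
            let old = run("./app-zig", case); // hypothetical binaries
            let new = run("./app-rust", case);
            assert_eq!(old, new, "behavioral divergence on input {case:?}");
        }
    }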
Though tbh I would prefer if any Zig -> terrible-Rust pass were done with a deterministic, reproducible, debuggable program instead of an LLM. The LLM can then be used to support incremental refactoring. But the initial "bad" transpilation is so much code that using an LLM there sounds like a horror story wrt. subtle hallucinations and the like.
(it would teach me a little about Zig, about which I know zero)
which is needed, as making things safe often requires refactoring that isn't localized to a single function/code block, and doing that while transpiling isn't the best idea. In general I would recommend a non-LLM-based transpilation (if possible), then using an LLM to do bit-by-bit, as-localized-as-possible, bottom-up refactoring to get rid of unsafe code (potentially at some runtime performance cost), followed by another top-down refactoring pass to make things nice and fast. Plus human supervision to spot parts where paradigms clash so hard that you have to make some larger changes already during the bottom-up step.
anyway, that means segfaults would likely stay segfaults in the initial transpiled version
I think people here are reading too much into it.