I attempted to validate this: you'd need >75 TFlop/s to get into the top 50 of the TOP500[0] rankings in 2009. An M4 Max review says 18.4 TFlop/s at FP32, but TOP500 uses LINPACK, which runs at FP64 precision.
An M2 benchmark gives a 1:4 ratio for double precision, so you'd get maybe 4.6 TFlop/s at FP64. That wouldn't make it into the TOP500 in 2009.
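Back-of-envelope, with the numbers above (the 1:4 ratio is an assumption carried over from one M2 benchmark, not a measured M4 Max figure):

```typescript
// Rough sanity check using the figures quoted above.
const fp32TFlops = 18.4;            // M4 Max FP32, per the review cited
const fp64TFlops = fp32TFlops / 4;  // assumed 1:4 FP64:FP32 ratio ≈ 4.6 TFlop/s
const top50Cutoff2009 = 75;         // TFlop/s needed for TOP500 top 50 in 2009
console.log(fp64TFlops, fp64TFlops >= top50Cutoff2009); // 4.6 false
```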
Well, no. The particular thread of execution might have been spending 95% of its time waiting for I/O, but a server (the machine serving the thousands of connections) would easily run at 70%-80% CPU utilization (above that, tail latency starts to suffer badly). If your server sat at 5% CPU utilization under full load, you weren't running enough parallel processes, or didn't install enough RAM to do so.
Well, it's a technicality, but the post is devoted to technicalities, and such small blunders erode trust in the rest of the post. (I'm saying this as a fan of Bun.)
> The package managers we benchmarked weren't built wrong, they were solutions designed for the constraints of their time.
> Bun's approach wasn't revolutionary, it was just willing to look at what actually slows things down today.
> Installing packages 25x faster isn't "magic": it's what happens when tools are built for the hardware we actually have.
Some more conversation I had a week or so ago:
Also: I love that super passionate people still exist, and are willing to challenge the status quo by attacking really hard things - things I don't have the brain to even think about. It's not normal that we get better computers each month and slower software. If only everyone (myself included) were better at writing more efficient code.
Amazing to see it being used in a practical way in production.
There'll probably be a strategy (AEO?) for this in the future for newcomers and the underrepresented: for instance, endless examples posted by a sane AI to their docs and GitHub so they get picked up by training sets, or live via tool-calling web searches.
I do almost all of my development in vanilla JS despite loathing the Node ecosystem, so I really should have checked it out sooner.
Much better than Node.
However...!
I always managed to hit a road block with Bun and had to go back to Node.
First it was the crypto module that wasn't compatible with the Node.js signatures (now fixed), then Playwright refused to work with Bun (via Crawlee).
Does it work if I have packages with Node.js C++ addons?
They're still missing niche things and they tend to target the things that most devs (and their dependencies) are actually using.
But I can see they have it in their compat layer now, and it looks like it's working in the REPL locally: https://docs.deno.com/api/node/dgram/
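For example, a minimal smoke test of the `node:dgram` compat (just a sketch; bind to an ephemeral port and see that it loads):

```typescript
// Minimal node:dgram smoke test under Deno's Node compat layer.
import dgram from "node:dgram";

const socket = dgram.createSocket("udp4");
socket.bind(0, () => {
  // An ephemeral port being assigned means the module loaded and bound.
  console.log("bound to port", socket.address().port);
  socket.close();
});
```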
A single static Zig executable isn’t the same as a pipeline of package management dependencies susceptible to supply chain attacks and the worst bitrot we’ve had since the DOS era.
Zero.
I'm guessing you're looking at the `devDependencies` in its package.json, but those are only used by the people building the project, not by people merely consuming it.
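To illustrate with a hypothetical package.json (names and versions made up): consumers who `npm install example-lib` pull in `dependencies` only; `devDependencies` get installed only when working on the repo itself.

```json
{
  "name": "example-lib",
  "dependencies": {
    "left-pad": "^1.3.0"
  },
  "devDependencies": {
    "typescript": "^5.6.0",
    "vitest": "^2.0.0"
  }
}
```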
Lydia is very good at presenting complex ideas simply and well. I've read and watched most of her work or videos. She really goes to great lengths in her work to make it come to life. Highly recommend her articles and YouTube videos.
Though she's been writing less, I think due to her current job.
This leads them to the incorrect conclusion that Bun's fresh installs are faster than npm's cached installs, which doesn't seem to be the case.
I wonder why that is? Is it because it is a runtime, and getting compatibility there is harder than just for a straight package manager?
Can someone who tried bun and didn't adopt it personally or at work chime in and say why?
[0] https://aleyan.com/blog/2025-task-runners-census/#javascript...
Last big issue I had with Bun was streams closing early:
https://github.com/oven-sh/bun/issues/16037
Last big issue I had with Deno was a memory leak:
https://github.com/denoland/deno/issues/24674
At this point I feel like the Node ecosystem will probably adopt the good parts of Bun/Deno before Bun/Deno really take off.
https://github.com/oven-sh/bun/commit/b474e3a1f63972979845a6...
But the language hasn't even reached 1.0 yet. A lot of the strategies for writing safe Zig aren't fully developed.
Yet, TigerBeetle is written in Zig and is an extremely robust piece of software.
I think the focus of Bun is probably more on feature parity in the short term.
Bun is a bumpy road that sees very low traffic, so you are likely to hit some bumps.
I actually think Bun is so good that it will still net save you time, even with these annoyances. The headaches it resolves around transpilation, modules, workspaces etc, are just amazing. But I can understand why it hasn't gotten closer to npm yet.
Sure, they have some nice stuff that should also be added in Node, but nothing compelling enough to deal with ecosystem change and breakage.
It's a cool project, and I like that they're not using V8 and trying something different, but I think it's very difficult to sell a change on such incremental improvements.
That said, for many work projects I need to access MS-SQL, and the way it does socket connections isn't supported by the Deno runtime, or some such. That limits what I can do at work. I suspect there are a few similar sticking points with Bun for other modules/tools people use.
It's also very hard to break away from entropy. Node+npm had over a decade and a lot of effort to build that ecosystem that people aren't willing to just abandon wholesale.
I really like Deno for shell scripting because I can use a shebang, reference dependencies and the runtime just handles them. I don't have the "npm install" step I need to run separately, it doesn't pollute my ~/bin/ directory with a bunch of potentially conflicting node_modules/ either, they're used from a shared (configurable) location. I suspect bun works in a similar fashion.
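Something like this (a minimal sketch; the `@std/cli` import is just an example dependency):

```typescript
#!/usr/bin/env -S deno run
// Dependencies are declared inline and fetched to a shared cache on first
// run: no separate install step, no local node_modules/ next to the script.
import { parseArgs } from "jsr:@std/cli/parse-args";

const args = parseArgs(Deno.args);
console.log("Hello,", args.name ?? "world");
```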
That said, with work I have systems I need to work with that are already in place or otherwise chosen for me. You can't always just replace technology on a whim.
https://dev.to/hamzakhan/rust-vs-go-vs-bun-vs-nodejs-the-ult...
2x in specific microbenchmarks doesn’t translate to big savings in practice. We don’t serve a static string with an application server in prod.
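For context, these microbenchmarks are essentially hello-world servers, something like this Bun sketch:

```typescript
// The shape of a typical "static string" microbenchmark server in Bun.
// Real production handlers do auth, DB queries, serialization, etc.,
// which quickly dwarf the runtime's per-request overhead.
Bun.serve({
  port: 3000,
  fetch() {
    return new Response("Hello, world!");
  },
});
```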
I write a lot of one off scripts for stuff in node/ts and I tried to use Bun pretty early on when it was gaining some hype. There were too many incompatibilities with the ecosystem though, and I haven't tried since.
LLMs default to npm
Repo: https://github.com/carthage-software/mago
Announcement 9 months ago:
https://www.reddit.com/r/PHP/comments/1h9zh83/announcing_mag...
For now it has three main features: formatting, linting, and fixing lint issues.
I hope they add package management to do what composer does.
That's closer to how pnpm achieves its speedup, though. I know there's 'rv' recently, but I haven't tried it.
>Brought to you by Spinel
>Spinel.coop is a collective of Ruby open source maintainers building next-generation developer tooling, like rv, and offering flat-rate, unlimited access to maintainers who come from the core teams of Rails, Hotwire, Bundler, RubyGems, rbenv, and more.
> Bun takes a different approach by buffering the entire tarball before decompressing.
But it seems to sidestep _how_ it does this any differently from the "bad" snippet the section opened with (presumably it checks the Content-Length header when fetching the tarball, and can assume the size it reports is correct). All it says about this is:
> Once Bun has the complete tarball in memory it can read the last 4 bytes of the gzip format.
Then it explains how it can pre-allocate a buffer for the decompressed data, but we never saw how this buffer allocation happens in the "bad" example!
> These bytes are special since they store the uncompressed size of the file! Instead of having to guess how large the uncompressed file will be, Bun can pre-allocate memory to eliminate buffer resizing entirely
Presumably the saving is in the slow package managers having to grow _both_ of the buffers involved, while Bun preallocates at least one of them?
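For reference, here's roughly what reading that size field looks like (a sketch, not Bun's actual Zig code). The gzip trailer stores the uncompressed size mod 2^32 as a little-endian uint32 in its last 4 bytes:

```typescript
// Read gzip's ISIZE trailer. With the whole tarball buffered in memory,
// this tells you how big to pre-allocate the output buffer before
// decompressing, instead of growing it repeatedly.
function gzipUncompressedSize(tarball: Uint8Array): number {
  const view = new DataView(
    tarball.buffer,
    tarball.byteOffset + tarball.byteLength - 4,
    4,
  );
  return view.getUint32(0, true); // true = little-endian
}
```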
A few things:
- I feel like this post, repurposed, could be a great explanation of why io_uring is so important.
- I wonder if Zig's recent I/O updates in v0.15 bring any perf improvement to Bun beyond its already-fast baseline.