I attempted to validate this: you'd need >75 TFlop/s to get into the top 50 of the TOP500[0] rankings in 2009. An M4 Max review says 18.4 TFlop/s at FP32, but TOP500 uses LINPACK, which runs at FP64 precision.
An M2 benchmark gives a 1:4 ratio for double precision, so you'd get maybe 4.6 TFlop/s at FP64? That wouldn't make it into the TOP500 in 2009.
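Back-of-envelope version of that estimate (the 1:4 ratio is an assumption carried over from one M2 benchmark and may not hold on an M4 Max):

    // Rough estimate only: the 1:4 FP32->FP64 ratio is assumed from an
    // M2 benchmark; the M4 Max may behave differently.
    const fp32Tflops = 18.4;            // M4 Max FP32 figure from the review
    const fp64Tflops = fp32Tflops / 4;  // ≈ 4.6 TFlop/s at FP64
    console.log(fp64Tflops);            // 4.6 — nowhere near the ~75 TFlop/s
                                        // needed for a 2009 top-50 slot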
Well, no. The particular thread of execution might have been spending 95% of its time waiting for I/O, but a server (the machine serving the thousands of connections) would easily run at 70-80% CPU utilization (above that, tail latency starts to suffer badly). If your server had 5% CPU utilization under full load, you were not running enough parallel processes, or did not install enough RAM to do so.
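A quick sketch of that reasoning (the core count and utilization target below are made-up numbers, purely for illustration):

    // If each request spends 95% of its time blocked on I/O, one core needs
    // ~20 requests in flight just to stay busy: 1 / (1 - 0.95) = 20.
    const ioWaitFraction = 0.95;
    const perCoreConcurrency = 1 / (1 - ioWaitFraction); // 20

    // Hypothetical box: 8 cores, targeting 75% utilization to protect
    // tail latency.
    const cores = 8;
    const targetUtilization = 0.75;
    console.log(cores * targetUtilization * perCoreConcurrency); // 120 in-flight requests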
Well, it's a technicality, but the post is devoted to technicalities, and such small blunders erode trust in the rest of the post. (I'm saying this as a fan of Bun.)
> The package managers we benchmarked weren't built wrong, they were solutions designed for the constraints of their time.
> Bun's approach wasn't revolutionary, it was just willing to look at what actually slows things down today.
> Installing packages 25x faster isn't "magic": it's what happens when tools are built for the hardware we actually have.
Some more conversation I had a week or so ago:
That's even less accurate. By two orders of magnitude. High-end servers in 2009 had way more than 4GB. The (not even high-end) HP ProLiant I installed for a small business in 2008, already bought used at the time, had 128GB of RAM.
I understand why one would want to make an article entertaining, but that seriously makes me doubt the rest of the article when it dives into a topic I don't know as well.
Also: I love that super passionate people still exist, and are willing to challenge the status quo by attacking really hard things - things I don't have the brain to even think about. It's not normal that we get better computers each month and slower software. If only everyone (myself included) were better at writing more efficient code.
Amazing to see it being used in a practical way in production.
There'll probably be a strategy (AEO?) for this in the future for newcomers and the underrepresented: for instance, endless examples posted by a sane AI to their docs and GitHub, so they get picked up by training sets or by live, tool-calling web searches.
For future languages, maybe it's better to already have a dev name and a release name from the get-go.
I do almost all of my development in vanilla JS despite loathing the Node ecosystem, so I really should have checked it out sooner.
Much better than Node.
However...!
I always managed to hit a road block with Bun and had to go back to Node.
First it was the crypto module that wasn't compatible with the Node.js signatures (now fixed); next, Playwright refused to work with Bun (via Crawlee).
Does it work if I have packages with Node.js C++ addons?
They're still missing niche things and they tend to target the things that most devs (and their dependencies) are actually using.
But I can see they have it in their compat stuff now and it looks like it's working in the repl locally: https://docs.deno.com/api/node/dgram/
A single static Zig executable isn't the same as a pipeline of package management dependencies susceptible to supply chain attacks and the worst bitrot we've had since the DOS era.
Zero.
I'm guessing you're looking at the `devDependencies` in its package.json, but those are only used by the people building the project, not by people merely consuming it.
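A made-up package.json to illustrate the split (the package names here are just examples): running "npm install some-lib" as a consumer pulls in only "dependencies"; the "devDependencies" get installed only when someone runs "npm install" inside the project's own repo to build or test it.

    {
      "name": "some-lib",
      "dependencies": {
        "left-pad": "^1.3.0"
      },
      "devDependencies": {
        "typescript": "^5.5.0",
        "vitest": "^2.0.0"
      }
    }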
I wonder why that is? Is it because it is a runtime, and getting compatibility there is harder than just for a straight package manager?
Can someone who tried bun and didn't adopt it personally or at work chime in and say why?
[0] https://aleyan.com/blog/2025-task-runners-census/#javascript...
Considering how many people rely on a tailwind watcher to be running on all of their CSS updates, you may find that bun is used daily by millions.
We use Bun for one of our servers. We are small, but we are not goofing around. I would not recommend it yet for anything except where it has a clear advantage - but there are areas where it is noticeably faster or easier to set up.
Last big issue I had with Bun was streams closing early:
https://github.com/oven-sh/bun/issues/16037
Last big issue I had with Deno was a memory leak:
https://github.com/denoland/deno/issues/24674
At this point I feel like the Node ecosystem will probably adopt the good parts of Bun/Deno before Bun/Deno really take off.
https://github.com/oven-sh/bun/commit/b474e3a1f63972979845a6...
I actually think Bun is so good that it will still net save you time, even with these annoyances. The headaches it resolves around transpilation, modules, workspaces etc, are just amazing. But I can understand why it hasn't gotten closer to npm yet.
bun is a bumpy road that sees very low traffic. So you are likely to hit some bumps.
But the language hasn't even reached 1.0 yet. A lot of the strategies for doing safe Zig aren't fully developed.
Yet, TigerBeetle is written in Zig and is an extremely robust piece of software.
I think the focus of Bun is probably more on feature parity in the short term.
Sure, they have some nice stuff that should also be added in Node, but nothing compelling enough to deal with ecosystem change and breakage.
It's a cool project, and I like that they're not using V8 and trying something different, but I think it's very difficult to sell a change on such incremental improvements.
It was better than npm with useful features, but then npm just added all of those features after a few years and now nobody uses it.
You can spend hours every few years migrating to the latest and greatest, or you can just stick with npm/node and get the same benefits eventually.
In the interim, I am very glad we haven't waited.
Also, we switched to Postgres early, when my friends were telling me that eventually MySQL would catch up. Which in many ways it did, but I still appreciate that we moved.
I can think of other choices we made - we try to assess the options and choose the best tool for the job, even if it is young.
Sometimes it pays off in spades. Sometimes it causes double the work and five times the headache.
That said, for many work projects I need to access MS-SQL, and the way it does socket connections isn't supported by the Deno runtime, or some such. Which limits what I can do at work. I suspect there are a few similar sticking points with Bun for other modules/tools people use.
It's also very hard to break away from inertia. Node+npm had over a decade and a lot of effort to build an ecosystem that people aren't willing to just abandon wholesale.
I really like Deno for shell scripting because I can use a shebang, reference dependencies, and the runtime just handles them. I don't have an "npm install" step I need to run separately, and it doesn't pollute my ~/bin/ directory with a bunch of potentially conflicting node_modules/ either; dependencies are used from a shared (configurable) location. I suspect bun works in a similar fashion.
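Something like this (a hypothetical one-off script; Deno resolves and caches the jsr: import itself on first run):

    #!/usr/bin/env -S deno run
    // Dependency declared inline: Deno fetches it on first run and caches
    // it in a shared (configurable, $DENO_DIR) location -- no separate
    // "npm install", no local node_modules/.
    import { parseArgs } from "jsr:@std/cli/parse-args";

    const args = parseArgs(Deno.args, { string: ["name"] });
    console.log(`hello ${args.name ?? "world"}`);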
That said, with work I have systems I need to work with that are already in place or otherwise chosen for me. You can't always just replace technology on a whim.
https://dev.to/hamzakhan/rust-vs-go-vs-bun-vs-nodejs-the-ult...
2x in specific microbenchmarks doesn’t translate to big savings in practice. We don’t serve a static string with an application server in prod.
I write a lot of one off scripts for stuff in node/ts and I tried to use Bun pretty early on when it was gaining some hype. There were too many incompatibilities with the ecosystem though, and I haven't tried since.
LLMs default to npm
This leads them to the incorrect conclusion that Bun's fresh runs are faster than npm's cached ones, which doesn't seem to be the case.
Lydia is very good at presenting complex ideas simply and well. I've read or watched most of her articles and videos. She really goes to great lengths in her work to make it come to life. Highly recommend her articles and YouTube videos.
Though she's been writing less, I think due to her current job.
Vitamins/supplements? Sleep? Exercise? Vacations?
I have sprints of great productivity, but it's hard to sustain them for long.
Repo: https://github.com/carthage-software/mago
Announcement 9 months ago:
https://www.reddit.com/r/PHP/comments/1h9zh83/announcing_mag...
For now it has three main features: formatting, linting, and fixing lint issues.
I hope they add package management so it can do what Composer does.
That's closer to how pnpm achieves its speedup, though. I know 'rv' came out recently, but I haven't tried it.
>Brought to you by Spinel
>Spinel.coop is a collective of Ruby open source maintainers building next-generation developer tooling, like rv, and offering flat-rate, unlimited access to maintainers who come from the core teams of Rails, Hotwire, Bundler, RubyGems, rbenv, and more.
A few things:
- I feel like this post, repurposed, could be a great explanation of why io_uring is so important.
- I wonder if Zig's recent I/O updates in v0.15 bring any perf improvements to Bun on top of its already fast performance.
So many of these concepts (Big O, temporal and spatial locality, algorithmic complexity, lower level user space/kernel space concepts, filesystems, copy on write), are ALL the kinds of things you cover in a good CS program. And in this and similar lower level packages, you use all of them to great effect.
CS is the study of computation and its theory (programming languages, algorithms, cryptography, machine learning, etc.).
SE is the application of engineering principles to building scalable and reliable software.