Now, even though their parent company engages in some shitty practices with their other software (Claude Code), it's a stretch to assume that will also translate into making Bun worse. Being worried makes sense, but I remain optimistic about Bun.
Especially given how different the two contexts are: Claude Code is Anthropic's gem, experiencing extreme growth, where any change can result in billing issues.
Bun is a JS runtime and, regardless of its growth, can focus on being the best runtime possible. It doesn't impact billing or Anthropic's bottom line, so they don't have to rush out patches in response to abuse, unlike CC.
It's unclear how it will pan out over the next few years, and it's still very early in the acquisition to see if anything will change, but I'm not concerned just yet.
These labs play the game of trying to kill competition in the harness game (because third-party harnesses risk commoditizing the underlying LLMs once they are all good enough), while playing a game of chicken with each other over how long they can burn money that way before they have to give up.
At some point they have to price their product fairly, and the only hope they have is to have killed all competition by then, which is of course a game that they seem to be losing. Useful models are getting smaller and cheaper to run every year, and we've hit a threshold at which we will see continued development of third-party harnesses even without the userbase of subscription users.
Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed. The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail. They will have to compete on merit alone, and that is much less profitable.
Lots of businesses have subscription programs in which a small number of users are money losers, but which in aggregate make money.
It's not even obvious that the labs are losing a lot of money on even a minority of users; the usage caps are fairly aggressive at Anthropic, and a cursory analysis of the likely actual cost of serving tokens suggests these are high-margin products at the API level and unlikely to be unprofitable within the usage constraints given to subscribers.
I do think subscription models make commercial sense because users want predictable costs, and it's a club good in which marginal token cost is zero which helps consolidate their customers' purchasing volume to one provider. But that's a different claim than them serving it unprofitably to kill competition.
Also, they (Anthropic) are transitioning many of their enterprise customers to API consumption billing anyway.
We gave up on subscriptions long ago. They're rinky dink and get you a paltry amount of utilization before they run out.
The per day per seat costs can exceed $1000. This is already normal for studios, and it's already producing positive ROI.
There's simply no way to price video any other way than by usage. I suspect the same will come for everything.
I don't think there's any way for all of the current AI models to work except as a usage model. The question is whether or not people are willing to pay for it that way in the long-term.
It sounds like it is producing positive ROI for your side, but I’m curious what the bean counters at the studios think of the bill when the budgets tighten.
It's already here in a big way. You just won't be told about it until the public lightens up on the "AI hate".
If the argument is that AI is being used in the background or for some VFX, sure, I’ll buy that. It’s just another tool, then. If it is being used to generate entire scenes, there’s no evidence of this, unless something like that atrocious holiday Coca-Cola commercial is a herald of our future.
As written, your claim is just handwavy. I get you might not be able to cite anything concrete due to NDAs or whatnot but, you also have to understand why a lot of people find this kinda unpersuasive.
The former you suggested. Background plates and the like. The lack of actual creative direction tools, trite visual style, lack of consistency/repeatability and complete inability to be edited or adjusted easily make it a non-starter for most tasks. Compositors are fast, LLMs are slow at that scale. There are tools like ComfyUI that sit in the “we’re running experiments/useful sometimes” category.
Loads of ML tools are in use and incredibly handy, but they fit into that tool category; actual wholesale video/image generation is not that prevalent, no.
They're using AI for plates, edits, pickup shots, previz, and in some cases the primary footage itself.
They're super hush-hush about this.
I thought the prime bet was that the winning lab who reaches takeoff through recursive self improvement will make a galactic superintelligence. Not saying I believe this but the people running the labs do. Under this scenario if you are a few months behind at the pivotal time you might as well not exist at all.
and that's under the assumption that you can create a superintelligence that will continue to slavishly serve your agenda rather than establishing and following its own goals.
Edit: Meant to say AGI (superintelligence didn't make sense). Superintelligence is undefinable at the moment, so even considering whether it's possible is more of a philosophical thing/sci-fi thought experiment than anything else.
"The brain is so mysterious and unique, that we should abandon all attempts to even try to apply results like the general approximation theorem to it and discard all signs that some approximation is happening."
Why don't we see signs of intelligence in the universe? The simplest self-replicator requires accidental synthesis of a sequence of 200 (or so) RNA nucleobases.
BTW, your argument could have been applied word-for-word to powered flight in 1899. In short, argumentum ad ignorantiam.
Arguably it’s already here. ChatGPT knows more than any human who has ever lived. It can carry out millions of conversations at once. And it has better working memory (“context”) than humans. And it can speak and write code much faster than humans.
Humans still have some advantages: Specialists are smarter than chatgpt in most domains. We’re better at using imagination. We understand the physical world better. But it seems like we’re watching the gap close in real time. A few years ago chatgpt could barely program. Now you can give it complex prompts and it can write large, complex programs which mostly work. If you extrapolate forward, is there any good reason to think humans will retain a lead?
You're anthropomorphizing it; this isn't what it's doing. It's being fed a series of text and predicting what comes next. The box has no context about the other "conversations" it's having and doesn't remember them.
Ultimately our current model is extremely unlikely to perform better than the sum of current human knowledge. Godlike super-intelligence is a pipe dream with the current LLM based approaches.
Natural selection doesn't care why something replicated a lot.
We’ll all be bblbrvkxn46?/4!gfbxf’mgv5fhxtgcsgjcucz to buvtcibycuvinovrYdyvuctYcrzuvhxh gcuch7…:!
I look at superintelligence this way: software engineering used to be considered among the most mentally demanding jobs one can have. And in this field, more and more people give up large parts of their job and become approximately product managers, letting the machine do the engineering part. So we are about there. Who cares that there are some puzzles in some "synthetic" benchmark in which humans outsmart AIs?
But that's all just sci-fi worldbuilding.
What if the competitor's architecture is able to produce tokens twice as fast? What if the competitor secures a 1-month exclusivity deal on Nvidia's next generation?
A genuine superintelligence is a very, very scary thing to have under the control of one person or organisation.
Bostrom's Superintelligence (2014) is a bit of a dreary read, and I didn't finish it, but it pulls no punches about the leverage that a superintelligence might have in our highly-connected world.
For the concrete problem we're discussing, that hypothetical belongs in a Marvel movie, not reality. In the real world, you can't 'hack your competitors out of existence', and you'll be going to prison very quickly for trying this sort of thing.
> especially if you're willing to break the law / normal operating decorum
That caveat was in my original post. If you have a superintelligence, you have something that can find and take advantage of every exploitation vector in parallel - technical, social, bureaucratic - and use that to destroy a company from the inside. A superintelligence that is subservient to its operator is an informational superweapon.
I agree that this sounds fanciful, but you can see what existing cyberattacks can do to organisations; it does not take that much imagination to gauge how much worse it could be when the process can be automated and scaled.
The five dollar wrench attack will put an end to that operator's use of an informational superweapon.
> I agree that this sounds fanciful, but you can see what existing cyberattacks can do to organisations
What can it do? Generally, a minor disruption to operations.
It consistently does a lot less than what law enforcement can do to you if you start messing with other rich people's money, while having enough of a presence to own a superintelligence and a trillion-dollar data center.
Conventional hackers are limited by the serial nature of their work - finding breaches, exploiting them, conducting further exploration of the network, trying not to get detected - in ways that a superintelligence would not be. The latter could be a hundred times as effective, a hundred times as fast, and a hundred times more parallel.
I agree that this is unlikely to happen because the societal bill would come due in time, but my point is that a month's lead is enough to do significant and lasting damage.
The assumption would be that, in the lead time it has, the superintelligence at least takes a small lead and undermines any paths a later-arriving superintelligence could take to interfere with its goals, which naturally includes stopping competing SIs from becoming powerful enough to undermine it.
So, assuming the superintelligence has goals and works towards them, it will initially try to solidify its own power; iterating on that small lead, assuming it's the smartest superintelligence [1], should be enough to win. The scary part is that, assuming no guardrails [2], it's going to be as ruthless as possible in achieving those goals. That does not necessarily mean it will appear ruthless in achieving those goals, just as ruthless as it judges optimal.
1. Which, being so smart, one of its chores would have been reinvesting in making itself smarter than the competition; and being smarter than its makers, it has a good chance of actuating those self-improvements.
2. In the internal balancing of goals sense not the don't feed the mogwai after midnight sense.
The viewpoint is baked into those assumptions and boils down to the power of exponentials and poor application of game theory.
I don't think this is "understood" or "known" to anyone except Ed Zitron. Subscription plans like Claude Code also have rolling usage limits; it could be profitable. Inference is very cheap, and unless you're using OpenClaw no one is actually maxing out the usage window at all times. I'm sure in aggregate the subs are not money furnaces.
I think there were reasons to doubt that heavy subscription users are unprofitable before they did that. OpenClaw was just the tip of the iceberg.
Why don't they make token pricing dynamic if that's the case? It would let heavy users get even more for their money than the current subscription model, where they can't adjust to current infra availability.
It may be that "in aggregate" sub users are not (yet) a losing business. But in all fairness, the more useful AI gets, the more it will be used. And the more it is used, the harder it will be to make subs cheaper than token pricing. The only counterweight is new light users, but those will also become heavy users over time, the more useful it gets for them. And at some point it will be hard to onboard light users in the first place, because the laggards will require even more intelligence and value to win them over.
They're trying to capture the market! Can't do that if you have to stop onboarding users because NVIDIA are struggling to manufacture enough GPUs for you.
"profit" is a weird concept in the software business. it might be true that there is an opportunity cost to these users, either because they displace other potential users by using up capacity, or because they would be willing to pay more if forced. but I don't believe that anyone is losing money on inference costs on any of their plans.
> At some point they have to price their product fairly
they are competing in a market. if most of their costs were inference then this would be a good thing, because everyone would have roughly the same prices, so as long as they had the best model they would win. in fact model development costs eclipse the cost of inference, and are something that non-frontier labs get much cheaper by distilling from the frontier companies.
> They will have to compete on merit alone, and that is much less profitable.
that's not really true. google won search on merit alone, and were massively successful as a result. the trick is that everyone from the poorest shmuck to the richest businessman uses google, so they win through scale. in ai, google and openai are making a bet that they can do the same thing. there's only really room for one winner at this game, even two is stretching it, so anthropic has to win by being the smartest model that only high end businesses use. that's a very risky bet.
As of May 2026, how much money do I need to spend to buy hardware to have a local model that is 80% as good as SOTA services for assisting me in writing code?
As for that 80%, how many minutes per LOC will I be waiting, and how many attempts per query will I be wasting while I wait for it to come up with something sensible?
https://llm-stats.com/benchmarks/swe-bench-verified
SOTA (public proprietary models) would be Opus 4.7 at 0.876
80% of that would be around 0.7.
These models qualify, and are upwards of 90% as good in benchmarks:
DeepSeek-V4-Pro-Max - 1.6T (HuggingFace shows 862B, huh) - 0.806
Kimi K2.6 - 1.1T - 0.802
MiniMax M2.5 - 229B - 0.802
DeepSeek-V4-Flash-Max - 284B (HuggingFace shows 158B as well) - 0.790
These are 80-90% as good, which is also where you see the smaller ones:
GLM-5 - 754B - 0.778
Qwen3.6-27B - 27B - 0.772
Kimi K2.5 - 1.1T - 0.768
Qwen3.5-397B-A17B - 397B - 0.764
Step-3.5-Flash - 199B - 0.744
GLM-4.7 - 358B - 0.738
MiMo-V2-Flash - 310B - 0.734
Qwen3.6-35B-A3B - 35B - 0.734
DeepSeek-V3.2 - 685B - 0.731
DeepSeek-V3.2-Speciale - 685B - 0.731
DeepSeek-V3.2 (Thinking) - 685B - 0.731
Qwen3.5-27B - 27B - 0.724
Qwen3.5-122B-A10B - 125B - 0.720
Kimi K2-Thinking-0905 - 1T - 0.713
LongCat-Flash-Thinking-2601 - 562B - 0.700
Out of those, the most modest one you could get is Qwen3.6-35B-A3B, because the MoE nature makes it faster across more varied hardware. I currently run the Unsloth 8-bit quants on-prem (on a bunch of Nvidia L4 GPUs, since low TDP, long story); some people swear by more quantized versions, but with the small models the impact is felt more: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF
So essentially you need up to 39 GB for the model itself (35B parameters at 8 bits/parameter is roughly 35 GB, plus overhead), and then some for the KV cache and whatever context size you want. Ideally I'd aim for 64 GB of memory for that, though if really pressed for resources you could get a heavily quantized version within 32 GB (but with very little memory left for context, and kinda shit).
Personally, I think you need about 45-60 tokens/second for decent usability - even comparatively modest hardware (including those L4s) can run the model, though on the lower-end options you will not be running parallel sub-agents etc.
Some random results for when you don't want a traditional multi-GPU setup:
Mac Mini - about 1999 USD, gets you somewhere upwards of 30 tokens/second (depends on quantization and how you run it)
Framework Desktop - about 2500 USD, gets you somewhere upwards of 25 tokens/second https://community.frame.work/t/framework-desktop-for-local-ai/80880/5
DGX Spark - about 3500 USD, gets you somewhere upwards of 50 tokens/second https://forums.developer.nvidia.com/t/qwen-qwen3-6-35b-a3b-and-fp8-has-landed/366822/27
Some random results from pulling up random shops and approx. benchmarks, for dual-GPU setups (not necessarily NVLink etc.):
2x Intel Arc Pro B70 - about 1900 USD, gets you around 36 tokens/second, borderline usable, I blame their software stack
2x Radeon AI PRO R9700 - about 3000 USD, gets you somewhere upwards of 60 tokens/second, usable
2x Radeon PRO W7800 - about 5400 USD, same as above
2x NVIDIA RTX 5090 - about 7600 USD, same as above
2x NVIDIA RTX 5000 Ada - about 9200 USD, same as above
Of course, for those models, some of those cards are way overkill, but you definitely can get something for running local models without too many compromises involved. That said, you will definitely get a worse experience than SOTA cloud models at that 80% mark, and will often have to rework stuff quite a bit, as my own experience with the Qwen model shows - okay for simple tasks, breaks down on complex stuff. For that, you'd want at least some of the 90% category models and would probably need to consider how much memory you can realistically get. At least it's not hopeless!
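If it helps, here's roughly what wiring a tool to such a box looks like - a minimal sketch assuming you expose the model through an OpenAI-compatible server (llama.cpp's llama-server, vLLM, etc.); the port and model id below are placeholders for whatever you actually run:

```ts
// Any OpenAI-compatible local server exposes /v1/chat/completions;
// adjust base URL and model name to your setup.
const res = await fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen3.6-35b-a3b", // placeholder model id
    messages: [{ role: "user", content: "Refactor this function to be iterative." }],
    temperature: 0.2, // low temperature tends to help for code
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```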
Honestly, I don't think it's that cut and dry. Their bet is that the marginal utility of having a smarter model more than makes up for the cost of the additional high-end hardware.
And honestly, if you look at their frankly insane revenue growth since Opus 4.5 released, they were right.
>The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail.
I think we're already past this point, honestly. They lowered usage limits, blocked OpenClaw, then tried to remove Claude Code from the $20/mo plan. They have always had low market share in the consumer chatbot market and don't seem to care about catching up to OpenAI there.
Anthropic and Google are arguably playing that game. OpenAI's Codex CLI is open source and entirely optional for use of the GPT Codex models.
I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point". It is also bizarre that some people are still hopeful despite it being acquired by one of the most enormously unprofitable companies in the most enormously unprofitable sectors of our industry.
To me, the obvious comparison seems to be Docker. Their tooling revolutionized software development and made cgroups and containerization accessible to the masses. Yet they generally seem to have failed to extract payment from users, even with managed service opportunities.
It seems to me that there are substantial obstacles to monetizing a project licensed with even a weaker OSS license like MIT. I think this is especially true for projects that don’t have managed service / “open core” potential.
Any gratis project you rely on runs the risk that it will no longer be provided gratis. That alone is not a strong basis for making decisions.
Maybe we should stop trying to build so many billion dollar/year businesses and work on more sustainable models.
The ones that were first to market all went bankrupt, or were acquired by others that came later onto the scene.
2. "But for a beautiful moment in time we created a lot of value for shareholders."
At least you could go out with the accomplishment of having out-competed some other species along the way.
Why? What's the risk? It's open source. Also, speaking of open source, we are happy to commit to open source projects that have no monetization, nor any plans to ever monetize.
It's not great that the search for profit will usually corrupt projects, but the other most common option is that the projects don't exist at all. It's very rare (or it used to be before this year) that someone can do something like this on their own with no compensation. So now at least Bun exists.
All valid points though, I'm pessimistic about Anthropic still actively diverting resources to these side quests when tough times hit (which might be in a week for all we know).
There are way too many ways companies arrange to pay themselves and never be profitable to avoid taxes.
Tl;dr: I think they don't care about what will happen to the company in the medium or long term.
---
Are any of those companies looking for stability or sustainability?
I have the impression they are completely aware of the diminishing-returns effects, and they will exploit the moment to the fullest of their capabilities, promising even more absurd things as the results get smaller.
I do agree there is considerable improvement compared to a year ago, but definitely not as ground-shaking as the jump from the year before to last year.
Many of the promises turn out to be empty, or at least come with a huge number of asterisks.
I think there are flags everywhere. From minor things, such as everyone using different benchmarks, or plotting performance differences with weird choices of axes and ordering.
Other mild things, such as promoting that the "system" created a compiler from scratch, when that compiler can't even run a hello world, while claiming its output binaries run 300x faster than the counterparts.
(I am aware the agentic benchmark was misused to build a compiler, but there was an active choice in how to tell the story. Given other moves, I am not quite sure I believe it was an accident.)
There are other red flags such as people rolling back to previous versions of models because they can't get the new one to work properly.
Other situations, such as the claims that they have such a "dangerous" model, which seems to be more of a benchmark trick than a real result, with <100B models able to replicate the benchmark results just by changing the methodology.
I don't think we are yet at the turning point where everything collapses, but my feeling is that we are going in that direction unless something comes along that makes these models much more intelligent AND efficient.
It makes sense not to hire a person when you can have a machine do the same job for the same price. But AI prices are rising faster than the returns, so the margins that would make it a sensible choice are getting smaller.
That all said, I say again that I think they are completely aware of this effect. Not because they understand the technology, but because this happens more often than not. Because of this, I don't think they care about being sustainable. All of them smell that they will take the money and leave the ship to sink.
Some teams have a push now to go all in on AI; don't even look at the code. I've seen this in action and the results are probably what you'd expect. Works great at some level, but as complexity accumulates (especially across a team with different "technical vocabularies"), the end result is compounding complexity and mistakes and no person or team knows how the software actually works.
No human testing of software or QA; unit + integration + give the AI control over the browser/tool. Yes, this is how some teams are moving forward now. So some of this may be that Anthropic's culture will end up causing shifts in how the Bun team operates and thinks.
If this type of culture and mindset becomes the norm, I think either the models have to get a lot better or the software quality is going to decline.
Matt Pocock has a great talk here: https://youtu.be/v4F1gFy-hqg
"Code is not cheap. Bad code is the most expensive it's ever been. Because if you have a codebase that's hard to change, you're not able to take advantage of all of the bounty that AI can offer. Because AI in a good codebase actually does really, really well."
Once bad code starts to compound on itself, it's going to be really hard to break out of it. I consider this a hard rule, like ad-blocking (and this is exactly that: blocking ads, as each talk is an ad, or an ad in disguise).
To be fair to Matt Pocock, I know he worked for Vercel and Stately for a while before doing content full time. I can't say anything about his AI content, but I did some of his free lessons when I was learning TypeScript. They included interactive editor lessons and such, so it wasn't just empty videos and fluff like some of the influencers.
99% of the times that's not learning, but productivity porn.
That bill is gonna come due at some point for "developers" leaning heavily on agents.
Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code, not for the benefit of the JavaScript community at large. Sounds obvious, but I guess that has to be pointed out. Outcomes will follow incentives in the long run.
A good example is React. Facebook's interest is that React be performant (website performance is correlated with time spent on said website), reliable (also correlated to time spent), quick to build on (features ship faster) and popular (helps new recruits hit the ground running). That's fairly well aligned with what developers outside of Facebook want too.
Sure, since Facebook's server is written in Hack, it means we'll never get a truly full-stack React, and instead we'll need third parties for the back-end (Next.js, TanStack Start, etc). But Facebook building React also means it will always be someone's job to make sure this framework works well in codebases with millions of modules.
This is all independent of any shitty practices with their other software. And it has been this way for decades at this point.
Doesn't that just make it even worse? If Anthropic can't even afford to spend the engineering effort on making sure their core product functions properly, why should we assume that they'll be investing serious resource into what is essentially some upper manager's loss-leader pet project?
If Anthropic is financially hurting, why shouldn't they put Bun on the bare minimum of life support?
Building developers sell you the apartment, not the elevator room, the electrical room, mechanical room, etc. They will make all sorts of controversial decisions with the apartments; odd layouts, ugly flooring, weird pricing, tacky finishes, etc. The "core product" is the money-maker, that's where the egos clash, priorities change, and where they try to charge as much as possible while they cut costs as much they can.
No one is buying the electrical room though. It just has to work. Yes, you'll make it as cheaply as possible; no flooring, no paint on the walls, no interior designer meetings to argue about the right tone of beige for the walls. But it'll do what it needs to do. It'll keep the lights on. Otherwise you can't sell any of the apartments.
Same thing with Facebook; there's active incentive to introduce all sorts of dark patterns over their app, to ignore certain bugs, to unnecessarily change things, etc. But none of those incentives are present with React. The incentive is to keep React reliable and performant, and to keep the team lean. I'm sure it's similar with Bun in Anthropic.
And to be clear, Anthropic definitely spends most of its engineering effort making sure their core product "functions properly". That "functions properly" is just different for us as clients vs. them as a corporation. There is high overlap, since they need to keep us clients happy. But a well-functioning product at a company is one that leads to money. I'm sure there are very capable engineers pushing the OKRs they care about.
I’m unclear about this. What’s the business case? I use Gemini CLI a lot, which runs on Node, and I can’t see anything that would be improved by using a different JS runtime. It’s not something you notice as a user. Node is mature, stable, and perfectly fit for the purpose.
If Anthropic were public and if these decisions were comprehensible to the average investor, an acquisition like this ought to cause the stock to plummet. Luckily for the people involved, there are no constraints like that in the current market.
One favorable way to phrase it for Anthropic is that they acquired Bun because CC and other internal tooling depended on it so heavily, and they questioned its future as a purely OSS project.
It remains to be seen how things will actually unfold.
However, these engineers, too, now start to vibe-code with reckless abandon https://x.com/jarredsumner/status/2048434628248359284 and https://x.com/jarredsumner/status/2049780223311548729
For me it's far from a stretch, in fact it matches closely a pattern that I've seen repeated many times over at this point.
Can you point to any examples of a company with shitty practices buying one without shitty practices that didn't end up with the shitty practices diffusing through the newly-acquired company within a couple of years?
If you start seeing the people that created bun leaving Anthropic, then I'd probably start to worry. And I haven't seen any sign of that yet.
Especially true if they leave before they're fully vested.
I don't have any direct context, though I have run an open-source business (Zulip) for the last decade wearing both the CEO and technical lead hats.
But my simulation is that the Bun leadership team might well be spending 2x as much of their time working on the technology as they reasonably could have as an independent venture-funded company, just because they don't have to do all that other stuff anymore. (There's of course probably a significant bias in that focus towards whatever Anthropic needs from Bun, only some of which other users may care about.)
So I agree. Personally, I would not be concerned unless you see the tell-tale signs of the team being reassigned to other priorities at the buyer, which tends to be obvious, because, say, the GitHub project activity falls off a cliff.
Incidentally, Anthropic needs to figure out how to monetize at some point too.
Regardless of what else is going on, the kernel is a separate team, and has very strong incentives to remain competent and sane.
They released more major features and breaking changes in their last patch release than most software sees in two major versions.
I've been using it just as a script runner and npm package manager, basically, and it's incredible the amount of work you have to do to find "good" versions. We've had patch versions suddenly freeze on install more than once; we couldn't upgrade for quite a while due to this. I think they broke postinstall scripts with trustedDependencies entirely two minor versions ago - not a mention in the release notes, and somehow no one reporting it in GH issues. In 1.1 or so you could get Bun to run trustedDependencies builds in postinstall, and after that you couldn't. I looked around the release notes and saw nothing mentioned. It's been broken for months.
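For anyone who hasn't hit this: `trustedDependencies` is Bun's allowlist for lifecycle scripts - Bun refuses to run a dependency's postinstall unless the package is listed there (or on Bun's built-in default allowlist). A sketch, with `sharp` as just an example package:

```json
{
  "name": "my-app",
  "dependencies": {
    "sharp": "^0.33.0"
  },
  "trustedDependencies": ["sharp"]
}
```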
EDIT: Actually, I just remembered I delivered a small ERP tool to a business a while back, and I opted to use Bun for that because it had the most robust tools to wrap a project into an `*.exe`; that was definitely a better experience than Node. Though since that was dependency-less JS, I did the whole thing using Node and then just shipped it with Bun.
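(For reference, the single-binary bit is Bun's `--compile` flag; per Bun's docs it's something like:)

```
bun build ./index.ts --compile --outfile mytool.exe
```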
There are still things I dislike about Deno, but it really does make package development a lot simpler. JSR is a great upgrade from NPM, and Deno makes it so simple to publish to both NPM and JSR. Strict IO permission system and WebGPU support are also nice to have.
> wrap a project into an `*.exe`
Deno makes this simple too, though that's where its bundling features stop. Honestly I am okay with that; I'd rather use Rolldown or Vite for web or library bundling.
1) You need to retest again, mainly because Bun's own native tools should be faster than Node's.
2) My experience is the opposite: For the niche uses I'm on, the rendering process is done 2-3x faster with only a few changes to use Bun's tools.
I've reduced my dependencies 5-10x. Got full TS and JSX/TSX support with zero setup. Watch mode is instant. You can deploy a single binary.
I kept waiting for all the breaking issues people complain online but my experience has been nothing but positive.
Bun has a really nice REPL, can recommend https://bun.com/docs/runtime/repl
And looking at the docs, it seems Node still only has partial support: https://nodejs.org/api/typescript.html
> To use TypeScript with full support for all TypeScript features, including tsconfig.json, you can use a third-party package. These instructions use tsx as an example but there are many other similar libraries available.
They even added sql template string queries like recent popular libraries in v24.
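For reference, the core `node:sqlite` API (still marked experimental) looks like this; the tagged-template helper is layered on top of it, per the docs linked downthread:

```ts
import { DatabaseSync } from "node:sqlite";

const db = new DatabaseSync(":memory:");
db.exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)");

// Prepared statements with positional parameters
const insert = db.prepare("INSERT INTO users (name) VALUES (?)");
insert.run("Ada");

console.log(db.prepare("SELECT * FROM users").all()); // [ { id: 1, name: 'Ada' } ]
```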
I just built a project using it.
I find Deno's permission system amazing! (although I didn't stick with it until v2)
Everything is closed by default but you're able to write code like normal.
Whenever it needs a permission, the code pauses (like `debugger;`) and the terminal asks you "hey, should this script have access to this file/folder?"
- You say yes and the code continues (no need for exceptions).
- You say no and the code stops.
Then after your program has run, you put only the answers you said yes to in a deno.json file and it never has to ask again.
---------------------------------------
I'm currently working on a project that takes in a heap of files from one set of devs, processes them with a heap of files from another set of devs, then compiles and outputs the final product.
The file structure goes like this:
1. Group one devs
2. Group two devs
3. Build output
4. Compiler
So group one only works in their folder, and group two only works in their folder, but needs to see group one's folder.
With Deno it's stupidly easy to do stuff like:
- Scripts in group one only have file read access to group one.
- Scripts in group two only have file read access to group one and two.
- Scripts in the compiler only have file read access to group one and two's folders, only have file write access to build-output folder, and can read the env file in the project's root directory.
- One specific file is only allowed to access a specific URL and port
- Another specific file is only allowed to use the FFI to access a specific shared object.
I don't need to worry about a dev's script accidentally using the wrong file because they messed up the path.
I don't need to worry about a dev accidentally overwriting a file and losing data.
I don't need to worry about a dev blindly going down the wrong road because an LLM convinced them to.
I don't need to worry about a dev using LLMs agents that are trying to make the project do something it's not supposed to do.
I don't need to worry about a dev including a dependency that's doing what it shouldn't be doing.
I don't need to worry about the equivalent of `rm -rf ./$BUILD-OUTPUT` but the env file wasn't set up correctly and $BUILD-OUTPUT is empty/undefined evaluating to `rm -rf ./` and nuking the project's root.
I don't need to worry about supply-chain attacks.
I don't need to worry about namesquatting attacks.
There's so many things I don't need to worry about.
It's such a breath of fresh air.
It's just: you guys read from here, other guys read from here, the compiler writes to here.
Whenever something doesn't fit, the program stops and tells you what file is trying to access what permission.
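Concretely, that setup boils down to permission flags on each entry point. A sketch using deno.json tasks, with placeholder paths and file names standing in for the real layout:

```json
{
  "tasks": {
    "group-one": "deno run --allow-read=./group-one ./group-one/main.ts",
    "group-two": "deno run --allow-read=./group-one,./group-two ./group-two/main.ts",
    "compile": "deno run --allow-read=./group-one,./group-two,./.env --allow-write=./build-output ./compiler/main.ts"
  }
}
```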
---------------------------------------
aside: Node added a permission system, but it's completely broken by design. Everything's open, and you have to manually close each permission yourself. Oh, you don't want this project to have file write permissions? Let's just turn off the file write permissions (and forget to also turn off the subprocess permission to spawn a shell which rm -rf's the wrong folder).
otherwise, bun has a big "batteries included" thing going on.
For instance,
- Bun.$ to run shell commands
- an entire redis client at Bun.redis
There are dozens of other examples like this
For rapid prototyping, complex glue scripts, etc. it's an absolute joy to work with. There is often no reason to pull in any dependencies to accomplish what you want.
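A taste of what that looks like in practice (a sketch assuming a recent Bun and a Redis instance reachable via REDIS_URL; APIs as documented):

```ts
import { $, redis } from "bun";

// Bun.$ - shell commands as tagged templates, output captured in-process
const branch = (await $`git rev-parse --abbrev-ref HEAD`.text()).trim();

// Bun.redis - the built-in Redis client, no dependency needed
await redis.set("last-branch", branch);
console.log(await redis.get("last-branch"));
```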
Here are some things shipping in the next version of Bun:
- 17 MB smaller Windows x64 binaries [0]
- 8 MB smaller Linux binaries [1]
- `--no-orphans` CLI flag to recursively kill any lingering processes spawned [3]
- SSL context caching for client TCP & unix sockets, which significantly reduces memory usage for database clients like Mongoose/MongoDB [4]
- Experimental HTTP/3 & HTTP/2 client in fetch [5]
- Experimental HTTP/3 support in Bun.serve() [6]
- Bun.Image, a builtin image processing library [7]
(Along with several reliability improvements to node:fs, Worker, BroadcastChannel, and MessagePort)
The Anthropic acquisition also means Bun no longer needs to become a revenue-generating business. We are very incentivized to make Bun better because Claude Code depends on it, and so many software engineers depend on Claude Code to help get their work done.
[0]: https://github.com/oven-sh/bun/pull/30219
[1]: https://github.com/oven-sh/bun/pull/30098
[2]: https://github.com/oven-sh/WebKit/pull/211
[3]: https://github.com/oven-sh/bun/pull/29930
[4]: https://github.com/oven-sh/bun/pull/29932
Perhaps Bun will be the exception, but you can't say that the concern is unfounded.
The CEO of Anthropic has a habit of making outlandish predictions about how AI is so very close to replacing human programmers. Anthropic has been applying this belief to Claude Code and it has become a giant heap of unmaintainable spaghetti.
Has development velocity increased because you are merging large quantities of unreviewed LLM generated code? If so, I would be very worried about future stability if I used Bun.
I’ve been a Bun maximalist since the beginning. Thank you Jarred!!!
Can you shed a little light on the recent giant Rust-based commits, though? Are you moving away from Zig? These kinds of big, curious moves and the spectre of giant LLM-based commits are not exactly confidence-inspiring.
- Querying sqlite with tagged template literals
- Bun.password.verify being argon2 is a better default
- HTML imports
- JSX transpilation
- Auto loading .env file
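The password one in particular is pleasantly small - a sketch per Bun's documented API:

```ts
const hash = await Bun.password.hash("hunter2"); // argon2id by default
console.log(await Bun.password.verify("hunter2", hash)); // true
```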
https://burlyburr.com, which hits https://backend.burlyburr.com
https://nodejs.org/api/sqlite.html#databasecreatetagstoremax...
We live in a vastly different world than before, where people are more conscious of ethical concerns and willing to stand their ground to avoid repeating past mistakes.
It might be premature from a tech standpoint, but it makes sense from an ethical standpoint. I don't think misconduct is as easily walked back as it was before, and preemptive measures are needed to avoid the large impact of those decisions.
Would be interested to hear what makes you say that. I don't see anyone being conscious of ethical concerns more than they were before. I can see slightly more BDS people, for example, but outside of that not much.
For example, I'd been following this issue https://github.com/oven-sh/bun/issues/14102 and eventually all the libraries shipped "if bun, do x" workarounds, which is the opposite of compatibility.
Other than a bundler, Node already has all of these. Different test-runner syntax maybe, but otherwise TS "just works" out of the box, and the built-in test runner is totally capable. Not sure I see the need for such a lament over Bun.
Additionally, Bun's push for covering as much of the Node API as possible has pushed Deno towards the same level of compatibility, and now most code is basically runtime agnostic. I'm not sure if I'll ever actually use Bun in production, but I'm glad it exists because the JavaScript ecosystem has been much improved simply due to its existence.
Node didn't have all of these features when I initially went down the path of choosing Bun, so I have a number of existing projects that have Bun baked into them.
`node --experimental-transform-types example.ts`
As for whether this matches your definition of "native support" or not...
Now that we have `satisfies` and `as const`, there's really no reason to ever use an enum. In my opinion, TypeScript is best when it is simply used as Language Server, and it should never have had runtime implications in the first place.
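For anyone who hasn't seen the pattern, the type-only replacement looks like this - it erases to a plain object literal, with no special runtime emit:

```ts
const Status = {
  Active: "active",
  Disabled: "disabled",
} as const satisfies Record<string, string>;

// The union of the values: "active" | "disabled"
type Status = (typeof Status)[keyof typeof Status];

function setStatus(s: Status) { /* ... */ }
setStatus(Status.Active); // ok
// setStatus("archived"); // type error
```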
Is there anything else that doesn't run as valid JS if you strip the types (and maybe some other extra keywords) out?
Genuine question, in my head there's not much, but TS has a few weird corners I maybe haven't used
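Off the top of my head, the usual suspects that are not plain "erasable" syntax (i.e. they emit runtime code, which is why Node's flag is called transform-types rather than strip-types):

```ts
enum Direction { Up, Down }              // emits a runtime lookup object

namespace Legacy { export const x = 1; } // emits an IIFE

class Point {
  constructor(public x: number) {}       // parameter property emits an assignment
}
// (legacy `import x = require(...)` and experimental decorators also count)
```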
I'm using it in my projects with no issues.
Outside of that I've barely seen them used in TypeScript; they're not really idiomatic in React projects.
IIRC they "almost" recommend against using them (the last part, I haven't researched again now).
But the usage of many features has reached a sort of point of no return, so I hope Node will go the route of making the experimental transpilation the default for TS files at some point.
Goes to show how strong the appeal of syntax is, especially enums.
To people coming from languages with enum support, it just looks so much more organized to use them, compared to union types, despite all of the (many) drawbacks.
For their first year or two of existence, Bun tried to do npm, but better. For the first year or two of their existence, Deno tried to reinvent npm.
The key result is that after that first year or two, Deno had to walk back their decisions to create a Node-ecosystem-compatible tool... and as a result, they're now significantly behind Bun (at least by all the metrics I've seen).
:vomit:
I have limited time, and the little feedback that guy provided turned out to be perfectly well answered by AI. So sorry, but either you actually criticize something actionable or just shut up; I don't have the time to debate this if the simple few lines don't get addressed.
If you would like more insight, just say the word.