But that's missing the point of the writing tools example: The models can unilaterally do this, and if prompted properly they will do it today. Writing an agent loop that encourages the agent to build up its own library of tools and prefer that library first, using a small (possibly local) model to select from it, is literally no more than a couple of hundred lines of code.
I wrote a toy coding agent a while back, and I started it with just the ability to run shell commands. Every other tool the model wrote itself. At some point it even tried to restart itself to get access to the tool it had just written.
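Concretely, here is a minimal sketch of what that kind of loop could look like. All the names are illustrative: select_library_tool stands in for the small (possibly local) selection model, call_main_model for the big model's API, and the only built-in tool is a shell runner - none of this is any real framework's API.

    import subprocess

    # The agent's own tool library, grown at runtime: name -> Python source
    # that defines a run(task: str) -> str function.
    TOOL_LIBRARY: dict[str, str] = {}

    def run_shell(command: str) -> str:
        """The single built-in tool: run a shell command, return its output."""
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr

    def register_tool(name: str, source: str) -> str:
        """Let the model add a tool it has just written to its own library."""
        TOOL_LIBRARY[name] = source
        return f"registered {name}"

    def select_library_tool(task: str) -> str | None:
        """Stub for the small (possibly local) model that picks an existing
        library tool for the task, or None if nothing fits."""
        return None

    def call_main_model(task: str) -> dict:
        """Stub for the call to the big model, which is prompted to return
        either a shell action, a new tool to register, or a final answer."""
        return {"type": "answer", "answer": f"(model response for: {task})"}

    def agent_step(task: str) -> str:
        # Prefer the agent's own library first.
        name = select_library_tool(task)
        if name is not None:
            namespace: dict = {}
            exec(TOOL_LIBRARY[name], namespace)  # a real version needs sandboxing
            return namespace["run"](task)
        # Otherwise ask the big model, which can run shell commands directly
        # or write and register a brand-new tool for next time.
        action = call_main_model(task)
        if action["type"] == "shell":
            return run_shell(action["command"])
        if action["type"] == "new_tool":
            return register_tool(action["name"], action["source"])
        return action["answer"]

Wire real model calls into those two stubs and that's essentially the whole loop; the rest of the "couple of hundred lines" mostly goes to prompting, JSON parsing, and sandboxing.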
The providers only control a minuscule proportion of tooling, and the models themselves are capable enough that the providers will never control most of it.
As a result, a lot of effort is going into optimising this to reduce token use.
But what we're seeing is that the main driver for this is not to spend less but to do more: we get enough value out of these models that every cost-saving measure makes it possible to, at a minimum, spend the same and do more, and often to spend even more.
I'm doing projects now that would have been financially unviable just six months ago.
Today the same argument is rehashed - it's outrageous that VS Code uses 1 GB of RAM, when Sublime Text works perfectly in a tiny 128 MB.
But notice that today's tiny, optimized, well-behaved figure, 128 MB, is 30 times larger than the outrageously decadent amount from Wirth's time.
If you told Wirth "hold my bear, my text editor needs 128 MB", he would simply not comprehend such a concept; it would seem like you had no idea what numbers mean in programming.
I can't wait for the day when programmers 20 years from now will talk about the amazingly optimized editors of today - VS Code, which lived in a tiny 1 GB of RAM.
Both compute and memory are getting closer to fundamental physical limits, and it is unlikely that the next 60 years will be in any way like the last 60.
While the argument is relatively simple for compute, it is a bit harder to see for memory. We are not near any limit on the total size of our memory; the limiting factor is how much storage we can bring how close to our compute units.
Now, there is still progress to be made and low-hanging fruit to pick, but I think we will eventually see a renaissance of appreciation for efficient programs in our lifetimes.
In theory, yes. But I bet that the forces of enshittification will be stronger. All software will be built to show ads, and since there is no limit to greed, the ad storage and surveillance requirements will expand to include every last byte of your appliance's storage and memory. Interaction speed will be barely enough to not impact ad watching performance too severely. Linux will not be an out, since the megacorps will buy legislation to require "approved" devices and OSs to interact with indispensable services.
You'll get bitten. I think you mean "hold my beer."
I find this amusing as my Czech girlfriend -- well, now my wife -- used to consistently make the same substitution. :-)
This, and the fact that a huge amount of application size comes from features and libraries that provide compatibility with all kinds of different formats. Being able to open and convert almost everything is a boon, while also being a security nightmare.
Lastly, I bet a lot of these applications could be much smaller if the UI prettiness were stripped from them.
Maybe one day that will change.
This isn't a different question from "Why is Microsoft Office everywhere?"
And "Why are all the popular things gigantic?"
Features, not speed, are what gets people to adopt software in the vast majority of cases.