Posted by signa11 2 days ago

Wirth's Revenge (jmoiron.net)
201 points | 97 comments | page 2
nemo1618 1 day ago|
Just wait. In a few years we'll have computer-use agents that are good enough that people will stop making APIs. Why bother duplicating that effort, when people can just direct their agent to click around inside the app? Trillions of matmuls to accomplish the same result as one HTTP request.
abeppu 1 day ago||
I would love to see an LLM system oriented around trying to "jit" out fast/cheap combinations of tool invocations from calls. The case where a heartbeat calls Opus, where the transformer model itself doesn't know the time but has access to some tools, should be solvable if Opus has said the equivalent of "it's still night, and to determine this I checked the time using the `date` utility to get the epoch time; if it had responded with something greater than $X I would have said it's morning". I.e. the smart model can often tell a less capable framework how it can answer the same question cheaply in the future. But incentives aren't really aligned for that at present.
vidarh 1 day ago|
Well, you can have the less capable system ask the more capable system to give it a tool to solve the problems it doesn't know how to solve, instead of solving it. If the problem really needs the more advanced system, that tool might call a model.
abeppu 1 day ago||
Or tell you how to use the tools you already have, but sure, whatever. The point is, "LLMs that actually do stuff" end up partly looking like expensive logic to decide what to do (like an interpreter) and actually doing it, in terms of some preexisting set of primitives. The expensive logic does get you a high degree of flexibility for less programming cost, but you pay a lot for interpretive overhead. I really do think something that looks like a tracing JIT (and a standard set of tools or primitives) could find "straight line" paths for a reasonable fraction of calls. But if the interpreter is a service that you pay for by the token, and which is already losing money, why should it be written to do so?
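A minimal sketch of that tracing idea (all names hypothetical; `ask_expensive_model` stands in for a call to a real large model, and the "trace" it returns is hard-coded here): the first call pays for the smart model once and records the cheap recipe it described, i.e. one `date` invocation plus a threshold rule. Later calls replay the recipe with zero tokens.

```python
import subprocess

trace_cache = {}  # question -> traced recipe (tool command + decision rule)

def ask_expensive_model(question):
    # Stand-in for the big model. Pretend it answered AND explained how it
    # decided: "I ran `date +%s` and compared against noon (UTC)."
    return {
        "tool": ["date", "+%s"],
        "rule": lambda epoch: "morning" if int(epoch) % 86400 < 43200 else "not morning",
    }

def answer(question):
    if question not in trace_cache:
        trace_cache[question] = ask_expensive_model(question)  # expensive, once
    recipe = trace_cache[question]
    out = subprocess.run(recipe["tool"], capture_output=True, text=True).stdout.strip()
    return recipe["rule"](out)  # cheap replay: one tool call, zero tokens

print(answer("is it morning?"))
print(answer("is it morning?"))  # second call is served from the traced path
```

The cache key here is just the literal question string; a real system would also need the smart model to emit the conditions under which the traced path stops being valid.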
vidarh 1 day ago||
> Or tell you how to use the tools you already have, but sure whatever.

But that's missing the point of the writing-tools example: The models can unilaterally do this, and if prompted properly they will do this today. Writing an agent loop that will encourage the agent to build up its own library of tools and prefer that library first, using a small (possibly local) model to select, is literally no more than a couple of hundred lines of code.

I wrote a toy coding agent a while back, and I started just with the ability for it to run shell commands. Every other tool the model wrote itself. At some point it even tried to restart itself to get access to the tool it had just written.
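As a sketch of that kind of loop (with the model stubbed out, so every name here, `fake_model` included, is hypothetical): the only built-in tool is `shell`, and anything the "model" writes into a tool directory becomes a callable tool on the next turn.

```python
import os
import subprocess

TOOL_DIR = "agent_tools"  # the agent's self-written tool library
os.makedirs(TOOL_DIR, exist_ok=True)

def fake_model(prompt, tools):
    # Stand-in for a real completion API, scripted for two turns:
    # first write a word-count tool, then call it.
    if "word_count" not in tools:
        return ("shell", f"printf '%s' 'wc -w' > {TOOL_DIR}/word_count.sh")
    return ("word_count", "hello brave new world")

def run_tool(name, arg):
    if name == "shell":  # the only built-in tool
        return subprocess.run(arg, shell=True, capture_output=True, text=True).stdout
    # Self-written tools: shell snippets that read their input on stdin.
    cmd = open(os.path.join(TOOL_DIR, name + ".sh")).read()
    return subprocess.run(cmd, shell=True, input=arg, capture_output=True, text=True).stdout

for _ in range(2):  # the agent loop
    tools = {f.removesuffix(".sh") for f in os.listdir(TOOL_DIR)}
    name, arg = fake_model("count the words", tools)
    result = run_tool(name, arg)

print(result.strip())  # "4": the model's own tool counted the words
```

A real version replaces `fake_model` with an actual model call and adds the small-model selection step mentioned above, but the shape of the loop is the same.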

The providers only control a minuscule proportion of tooling, and the models themselves are capable enough that they will never control most of the tooling.

As a result, a lot of effort is going into optimising this to reduce token use.

But what we're seeing is that the main driver for this is not to spend less, but to do more, because we get enough value out of these models that every cost-saving measure makes it possible, at a minimum, to spend the same and do more, and often to spend even more.

I'm doing projects now that would be financially unviable just 6 months ago.

1970-01-01 1 day ago||
He was wrong up until we found the end of Moore's law. Now that hardware cannot get exponentially faster, we are forced to finally write good code. The kind that isn't afraid to touch bare metal. We don't need another level of abstraction. Abstraction does not help anyone. I would love to point to some good examples but it's still too early for this to be seen globally. Your 1000 container swarm for a calculator app is still state of the art infrastructure.
firmretention 1 day ago||
The Reiser footnote was on point. I couldn't resist clicking it to find out if it was the same Reiser I was thinking about.
hpdigidrifter 22 hours ago||
Enjoyed the article, but a note to the author: it's not nice to read on mobile (Firefox Android).
dist-epoch 1 day ago||
Wirth was complaining about the bloated text editors of the time which used unfathomable amounts of memory - 4 MB.

Today the same argument is rehashed - it's outrageous that VS Code uses 1 GB of RAM, when Sublime Text works perfectly in a tiny 128 MB.

But notice that the tiny/optimized/good-behaviour of today, 128 MB, is 30 times larger than the outrageous decadent amount from Wirth's time.

If you told Wirth "hold my bear, my text editor needs 128 MB", he would just not comprehend such a concept; it would seem like you had no idea what numbers mean in programming.

I can't wait for the day when programmers 20 years from now will talk about the amazingly optimized editors of today - VS Code, which lived in a tiny 1 GB of RAM.

weinzierl 1 day ago||
This will probably not happen, because of physics.

Both compute and memory are getting closer to fundamental physical limits, and it is unlikely that the next 60 years will be in any way like the last 60 years.

While the argument for compute is relatively simple, it is a bit harder to understand for memory. We are not near any limit on the total size of our memory; the limiting factor is how much storage we can bring how close to our computing units.

Now, there is still headway to make and low-hanging fruit to pick, but I think we will eventually see a renaissance of appreciation for effective programs in our lifetimes.

zombot 1 day ago||
> I think we will eventually see a renaissance of appreciation for effective programs in our lifetimes.

In theory, yes. But I bet that the forces of enshittification will be stronger. All software will be built to show ads, and since there is no limit to greed, the ad storage and surveillance requirements will expand to include every last byte of your appliance's storage and memory. Interaction speed will be barely enough to not impact ad watching performance too severely. Linux will not be an out, since the megacorps will buy legislation to require "approved" devices and OSs to interact with indispensable services.

pjmlp 1 day ago|||
Hence why I'm actually happy about RAM prices getting back to how they used to be; maybe new generations will rediscover how to do much with little.
lproven 15 hours ago|||
> "hold my bear"

You'll get bitten. I think you mean "hold my beer."

I find this amusing as my Czech girlfriend -- well, now my wife -- used to consistently make the same substitution. :-)

pixl97 1 day ago||
With all that said, the applications being talked about back in the day were ridiculously easy to crash, and in the early MacOS and Win3.1 days that meant crashing the entire computer, to the point of hitting the reset switch.

Also, a huge amount of application size is features and libraries that allow compatibility between all kinds of different formats. Being able to open and convert almost everything is a boon, while also being a security nightmare.

Lastly I bet a lot of these applications could be way smaller if a lot of the UI prettiness was stripped from them.

jokoon 1 day ago||
Hardware is cheaper than programmers

Maybe one day that will change

srean 1 day ago||
Thanks to AI-driven scarcity of hardware, it's already coming true.
tgv 1 day ago||
Idk. What are these programmers doing afterwards? Build more shoddy code? Perhaps it's a better idea to focus on what's necessary and not run from feature to feature at top speed. This might require some rethinking in the finance department, though.
pixl97 1 day ago||
If writing thin code doesn't pay the bills then programmers will write thick code to eat tomorrow.
Lapsa 1 day ago||
"an interactive text editor could be designed with as little as 8,000 bytes of storage" - meanwhile Microsoft adds copilot integration to Notepad
pixl97 1 day ago|
I mean, it could be that small... but why isn't it?

This isn't a different question from "Why is Microsoft Office everywhere?"

And "Why are all the popular things gigantic?"

Features, not speed, are what gets people to adopt software in the vast majority of cases.

user____name 1 day ago||
I don't buy the features argument; there's plenty of bloated software today that ships with fewer features than a precursor and is a hundred times larger.
gostsamo 1 day ago|
I suspect that the next generation of agentically trained LLMs will have a mode where they first consider solving the problem by writing a program before doing stuff by hand. At least, it would be interesting if in a few months the LLM greets me with "Keep in mind that I run best on Ubuntu with uv already installed!".