Top
Best
New

Posted by samrolken 2 days ago

Show HN: Why write code if the LLM can just do the thing? (web app experiment)(github.com)
I spent a few hours last weekend testing whether AI can replace code by executing directly. Built a contact manager where every HTTP request goes to an LLM with three tools: database (SQLite), webResponse (HTML/JSON/JS), and updateMemory (feedback). No routes, no controllers, no business logic. The AI designs schemas on first request, generates UIs from paths alone, and evolves based on natural language feedback. It works—forms submit, data persists, APIs return JSON—but it's catastrophically slow (30-60s per request), absurdly expensive ($0.05/request), and has zero UI consistency between requests. The capability exists; performance is the problem. When inference gets 10x faster, maybe the question shifts from "how do we generate better code?" to "why generate code at all?"
430 points | 316 comments
sunaurus 2 days ago|
The question posed sounds like "why should we have deterministic behavior if we can have non-deterministic behavior instead?"

Am I wrong to think that the answer is obvious? I mean, who wants web apps to behave differently every time you interact with them?

jstummbillig 2 days ago||
Because nobody actually wants a "web app". People want food, love, sex or: solutions.

You or your coworker are not a web app. You can do some of the things that web apps can, and many things that a web app can't, but neither is because of the modality.

Coded determinism is hard for many problems and I find it entirely plausible that it could turn out to be the wrong approach in software, that is designed to solve some level of complex problems more generally. Average humans are pretty great at solving a certain class of complex problems that we tried to tackle unsuccessfully with many millions lines of deterministic code, or simply have not had a handle on at all, like (like build a great software CEO).

latexr 2 days ago|||
> Because nobody actually wants a "web app". People want food, love, sex or: solutions.

Talk about a nonsensical non-sequitur, but I’ll bite. People want those to be deterministic too, to a large extent.

When people cook a meal with the same ingredients and the same times and processes (like parameters to a function), they expect it to taste about the same, they never expect to cook a pizza and take a salad out of the oven.

When they have sex, people expect to ejaculate and feel good, not have their intercourse morph into a drag race with a clown half-way though.

And when they want a “solution”, they want it to be reliable and trustworthy, not have it shit the bed unpredictably.

mavamaarten 1 day ago|||
Exactly this. The perfect example is Google Assistant for me. It's such a terrible service because it's so indeterministic. One day it happily answers your basic question with a smile, and when you need it most it doesn't even try and only comes up with "Sorry I don't understand".

When products have limitations, those are usually acceptable to me if I know what they are or if I can find out what the breaking point is.

If the breaking point was me speaking a bit unclearly, I'd speak more clearly. If the breaking point was complex questions, I'd ask simpler ones. If the breaking point is truly random, I simply stop using the service because it's unpredictable and frustrating.

tomcam 1 day ago||||
> When they have sex, people expect to ejaculate and feel good, not have their intercourse morph into a drag race with a clown half-way though.

speak for yourself

pempem 1 day ago||||
Ways to start my morning...reading "When they have sex, people expect to ejaculate and feel good, not have their intercourse morph into a drag race with a clown half-way though."

Stellar description.

davnicwil 1 day ago|||
This thing of 'look, nobody cares about the details really, they just care about the solution' is a meme that I think will be here forever in software. It was here before LLMs, they're now just the current socially accepted legitimacy vehicle for the meme.

In the end, useful stuff is built by people caring about the details. This will always be true. I think in LLMs and broadly AI people see an escape valve from that where the thinking about the details can be taken off their hands, and that's appealing, but it won't work in exactly the same way that having a human take the details off your hands doesn't usually work that well unless you yourself understand the details to a large extent (not necessarily down to the atoms, but at the point of abstraction where it matters, which in software is mostly about deterministically how do the logic flows of the thing actually work and why).

I think a lot of people just don't intuit this. An illustrative analogy might be something else creative, like music. Imagine the conversation where you're writing a song and discussing some fine point of detail like the lyrics, should I have this or that line in there, and ask someone's opinion, and their answer is 'well listen, I don't really know about lyrics and all of that, but I know all that really matters in the end is the vibe of the song'. That contributes about the same level of usefulness as talking about how software users are ultimately looking for 'solutions' without talking about the details of said software.

mojoe 1 day ago||
Exactly, in the long run it's the people who care the most who win, it's tautological
113 2 days ago||||
> Because nobody actually wants a "web app". People want food, love, sex or: solutions.

Okay but when I start my car I want to drive it, not fuck it.

jstummbillig 2 days ago|||
Most of us actually drive a car to get somewhere. The car, and the driving, are just a modality. Which is the point.
kennywinker 1 day ago|||
If this was a good answer to mobility, people would prefer the bus over their car. It’s non-deterministic - when will it come? How quick will i get there? Will i get to sit? And it’s operated by an intelligent agent (driver).

Every reason people prefer a car or bike over the bus is a reason non-deterministic agents are a bad interface.

And that analogy works as a glimpse into the future - we’re looking at a fast approaching world where LLMs are the interface to everything for most of us - except for the wealthy, who have access to more deterministic services or actual human agents. How long before the rich person car rental service is the only one with staff at the desk, and the cheaper options are all LLM based agents? Poor people ride the bus, rich people get to drive.

aryehof 1 day ago|||
Bus vs car hit home for me as a great example of non vs deterministic.

It has always seemed to me that workflow or processes need to be deterministic and not decided by an LLM.

soco 16 hours ago|||
Here in Switzerland the bus is the deterministic choice. Just saying.
63stack 2 days ago||||
Most of us actually want to get somewhere to do an activity. The getting there is just a modality.
jermaustin1 2 days ago||
Most of us actually want to get some where to do an activity to enjoy ourselves. The getting there, and activity, are just modalities.
tags2k 2 days ago||
Most of us actually want to get somewhere to do an activity to then have known we did it for the rest of our lives as if to extract some intangible pleasure from its memory. Why don't we just hallucinate that we did it?
shswkna 2 days ago|||
This leads to us asking the deepest question of all: What is the point of our existence. Or as someone suggests lower down, in our current form all needs could ultimately be satisfied if AI just provided us with the right chemicals. (Which drug addicts already understand)

This can be answered though, albeit imperfectly. On a more reductionist level, we are the cosmos experiencing itself. Now there are many ways to approach this. But just providing us with the right chemicals to feel pleasure/satisfaction is a step backwards. All the evolution of a human being, just to end up functionally like an amoeba or a bacteria.

So we need to retrace our steps backwards in this thought process.

I could write a long essay on this.

But, to exist in first place, and to keep existing against all the constraints of the universe, is already pretty fucking amazing.

Whether we do all the things we do, just in order to stay alive and keep existing, or if the point is to be the cosmos “experiencing itself”, is pretty much two sides of the same coin.

narrator 1 day ago|||
>Or as someone suggests lower down, in our current form all needs could ultimately be satisfied if AI just provided us with the right chemicals. (Which drug addicts already understand)

When you suddenly realize walking down the street that the very high fentanyl zombie is having a better day than you are.

Yeah, you can push the button in your brain that says "You won the game." However, all those buttons were there so you would self-replicate energy efficient compute. Your brain runs on 10 watts after all. It's going to take a while for AI to get there, especially without the capability for efficient self-repair.

tags2k 1 day ago||||
Indeed - stick me in my pod and inject those experience chemicals into me, what's the difference? But also, what would be the point? What's the point anyway?

In one scenario every atom's trajectory was destined from the creation of time and we're just sitting in the passenger seat watching. In another, if we do have free will then we control the "real world" underneath - the quantum and particle realms - as if through a UI. In the pod scenario, we are just blobs experiencing chemical reactions through some kind of translation device - but aren't we the same in the other scenarios too?

awesomecomment 1 day ago|||
[dead]
63stack 1 day ago|||
This was actually my point as well. You can follow this thought process all the way up to "make those specific neuron pathways in my brain fire", everything else is just the getting there part.
GTP 2 days ago||||
But I want that somewhere to be deterministic, i.e. I want to arrive to the place I choose. With this kind of non-determinism instead, I have a big chance of getting to the place I choose. But I will also every now and then end up in a different place.
113 2 days ago|||
Yeah but in this case your car is non-deterministic so
mikodin 2 days ago|||
Well the need is to arrive where you are going.

If we were in an imagined world and you are headed to work

You either walk out your door and there is a self driving car, or you walk out of your door and there is a train waiting for you or you walk out of your door and there is a helicopter or you walk out of your door and there is a literal worm hole.

Let's say all take the same amount of time, are equally safe, same cost, have the same amenities inside, and "feel the same" - would you care if it were different every day?

I don't think I would.

Maybe the wormhole causes slight nausea ;)

didericis 2 days ago|||
> Well the need is to arrive where you are going.

In order to get to your destination, you need to explain where you want to go. Whatever you call that “imperative language”, in order to actually get the thing you want, you have to explain it. That’s an unavoidable aspect of interacting with anything that responds to commands, computer or not.

If the AI misunderstands those instructions and takes you to a slightly different place than you want to go, that’s a huge problem. But it’s bound to happen if you’re writing machine instructions in a natural language like English and in an environment where the same instructions aren’t consistently or deterministically interpreted. It’s even more likely if the destination or task is particularly difficult/complex to explain at the desired level of detail.

There’s a certain irreducible level of complexity involved in directing and translating a user’s intent into machine output simply and reliably that people keep trying to “solve”, but the issue keeps reasserting itself generation after generation. COBOL was “plain english” and people assumed it would make interacting with computers like giving instructions to another employee over half a century ago.

The primary difficulty is not the language used to articulate intent, the primary difficulty is articulating intent.

simianwords 2 days ago||
this is a weak argument.. i use normal taxis and ask the driver to take me to a place in natural language - a process which is certainly non deterministic.
didericis 1 day ago|||
> a process which is certainly non deterministic

The specific events that follow when asking a taxi driver where to go may not be exactly repeatable, but reality enforces physical determinism that is not explicitly understood by probabilistic token predictors. If you drive into a wall you will obey deterministic laws of momentum. If you drive off a cliff you will obey deterministic laws of gravity. These are certainties, not high probabilities. A physical taxi cannot have a catastrophic instant change in implementation and have its wheels or engine disappear when it stops to pick you up. A human taxi driver cannot instantly swap their physical taxi for a submarine, they cannot swap new york with paris, they cannot pass through buildings… the real world has a physically determined option-space that symbolic token predictors don’t understand yet.

And the reason humans are good at interpreting human intent correctly is not just that we’ve had billions of years of training with direct access to physical reality, but because we all share the same basic structure of inbuilt assumptions and “training history”. When interacting with a machine, so many of those basic unstated shared assumptions are absent, which is why it takes more effort to explicitly articulate what it is exactly that you want.

We’re getting much better at getting machines to infer intent from plain english, but even if we created a machine which could perfectly interpret our intentions, that still doesn’t solve the issue of needing to explain what you want in enough detail to actually get it for most tasks. Moving from point A to point B is a pretty simple task to describe. Many tasks aren’t like that, and the complexity comes as much from explaining what it is you want as it does from the implementation.

chii 2 days ago|||
and the taxi driver has an intelligence that enables them to interpret your destination, even if ambiguous. And even then, mistakes happen (all the time with taxis going to a different place than the passenger intended because the names may have been similar).
simianwords 2 days ago||
Yes so a bit of non determinism doesn’t hurt anyone. Current LLMs are pretty accurate when it comes to these sort of things.
hyperadvanced 2 days ago|||
I think it’s pretty obvious but most people would prefer a regular schedule not a random and potentially psychologically jarring transportation event to start the day.
chii 2 days ago||||
> your car is non-deterministic

it's not as far as your experience goes - you press pedal, it accelerates. You turn the steering, it goes the way it turns. What the car does is deterministic.

More importantly, it does this every time, and the amount of turning (or accelerating) is the same today as it was yesterday.

If an LLM interpreted those inputs, can you say with confidence, that you will accelerate in a way that you predicted? If that is the case, then i would be fine with an LLM interpreted input to drive. Otherwise, how do you know, for sure, that pressing the brakes will stop the car, before you hit somebody in front of you?

of course, you could argue that the input is no longer your moving the brake pads etc - just name a destination and you get there, and that is suppose to be deterministic, as long as you describe your destination correctly. But is that where LLM is at today? or is that the imagined future of LLMs?

iliaxj 1 day ago|||
Sometimes it doesn't though. Sometimes the engine seizes because a piece of tubing broke and you left your coolant down the road two turns ago. Or you steer off a cliff because there was coolant on the road for some reason. Or the meat sack in front of the wheel just didn't get enough sleep and your response time is degraded and you just can't quite get the thing to feel how you usually do. Ultimately the failure rate is low enough to trust your life on it, but that's just a matter of degree.
pepoluan 1 day ago|||
The situations you described reflects a System that has changed. And if the System has changed, then a change in output is to be expected.

It's the same as having a function called "factorial" but you change the multiplication operation to addition instead.

chii 1 day ago|||
all of those situations are the "driver's own fault", because they could've had a check to ensure none of that happened before driving. Not true with an LLM (at least, not as of today).
crote 2 days ago|||
Tesla's "self-driving" cars have been working very hard to change this. That piece of road it has been doing flawlessly for months? You're going straight into the barrier today, just because it feels like it.
nurettin 2 days ago|||
I mean, as long as it works and it is still technically "my car", I would welcome the change.
ozim 2 days ago||||
I feel like this is the point where we start to make jokes about Honda owners.
bfkwlfkjf 2 days ago||
Go on, what about honda owners? I don't know the meme.
hathawsh 2 days ago||
The "Wham Baam" YouTube channels have a running joke about Hondas bumping into other cars with concerning frequency.
stirfish 2 days ago||||
But do you want to drive, or do you want to be wherever you need to be to fuck?
codebje 2 days ago||
For me personally, the latter, but there's definitely people out there that just love driving.

Either way, these silly reductionist games aren't addressing the point: if I just want to get from A to B then I definitely want the absolute minimum of unpredictability in how I do it.

theendisney 2 days ago|||
That would ruin the brain placticity.

I wonder now, if everything is always different and suddenly every day would be the same. How many times as terrifying would that be compared to the opposite?

whilenot-dev 2 days ago||
A form of Alexei Yurchak's hypernormalisation?
mewpmewp2 2 days ago|||
Only because you think the driving is what you want. The point is that what you want is determined by our brain chemicals. Many steps could be skipped if we could just give you the chemicals in your brain that you craved.
lambdaone 2 days ago||||
Sadly, this is not true of a (admittedly very small) number of individuals.
hinkley 2 days ago||||
Christine didn’t end well for anyone.
OJFord 2 days ago||||
...so that you can get to the supermarket for food, to meet someone you love, meet someone you may or may not love, or to solve the problem of how to get to work; etc.

Your ancestors didn't want horses and carts, bicycles, shoes - they wanted the solutions of the day to the same scenarios above.

sublinear 2 days ago||
As much as I love your point, this is where I must ask whether you even want a corporeal form to contain the level of ego you're describing. Would you prefer to be an eternal ghost?

To dismiss the entire universe and its hostilities towards our existence and the workarounds we invent in response as mere means to an end rather than our essence is truly wild.

anonzzzies 2 days ago||
Most people need to go somewhere (in a hurry) to make money or food etc which most people don't want to do if they didn't have to, so yeah it is mostly a means to an end.
sublinear 2 days ago||
And yet that money is ultimately spent on more means to ends that are just as inconvenient from another perspective?

My point was that there is no true end goal as long as whims continue. The need to craft yet more means is equally endless. The crafting is the primary human experience, not the using. The using of a means inevitably becomes transparent and boring.

mewpmewp2 2 days ago||
It should finalize into introducing satisfaction to the whims directly, so the AI would be directly managing the chemicals in our brains that would trigger feelings of reward and satisfaction.
sublinear 19 hours ago||
I think you're just describing drugs
lazide 2 days ago||||
Even if it purred real nice when it started up? (I’m sorry)
ozim 2 days ago||
Looks like we have a Civic owner xD
zahrevsky 2 days ago||||
Weird kink
mjevans 2 days ago|||
Food -> 'basic needs'... so yeah, Shelter, food, etc. That's why most of us drive. You are also correct to separate Philia and Eros ( https://en.wikipedia.org/wiki/Greek_words_for_love ).

A job is better if your coworkers are of a caliber that they become a secondary family.

cheema33 2 days ago||||
> Average humans are pretty great at solving a certain class of complex problems that we tried to tackle unsuccessfully with many millions lines of deterministic code..

Are you suggesting that an average user would want to precisely describe in detail what they want, every single time, instead of clicking on a link that gives them what they want?

ethmarks 2 days ago|||
No, but the average user is capable of describing what they want to something trained in interpreting what users want. The average person is incapable of articulating the exact steps necessary to change a car's oil, but they have no issue with saying "change my car's oil" to a mechanic. The implicit assumption with LLM-based backends is that the LLM would be capable of correctly interpreting vague user requests. Otherwise it wouldn't be very useful.
sarchertech 2 days ago||
The average mechanic won’t do something completely different to your car because you added some extra filler words to your request though.

The average user may not care exactly what the mechanic does to fix your car, but they do expect things to be repeatable. If car repair LLMs function anything like coding LLMs, one request could result in an oil change, while a similar request could end up with an engine replacement.

ethmarks 2 days ago|||
I think we're making similar points, but I kind of phrased it weirdly. I agree that current LLMs are sensitive to phrasing and are highly unpredictable and therefore aren't useful in AI-based backends. The point I'm making is that these issues are potentially solvable with better AI and don't philosophically invalidate the idea of a non-programmatic backend.

One could imagine a hypothetical AI model that can do a pretty good job of understanding vague requests, properly refusing irrelevant requests (if you ask a mechanic to bake you a cake he'll likely tell you to go away), and behaving more or less consistently. It is acceptable for an AI-based backend to have a non-zero failure rate. If a mechanic was distracted or misheard you or was just feeling really spiteful, it's not inconceivable that he would replace your engine instead of changing your oil. The critical point is that this happens very, very rarely and 99.99% of the time he will change your oil correctly. Current LLMs have far too high of a failure rate to be useful, but having a failure rate at all is not a non-starter for being useful.

sarchertech 2 days ago||
All of that is theoretically possible. I’m doubtful that LLMs will be the thing that gets us to that though.

Even if it is possible, I’m not sure if we will ever have the compute power to run all or even a significant portion of the world’s computations through LLMs.

array_key_first 2 days ago|||
Mechanics, and humans, are non-deterministic. Every mechanic works differently, because they have different bodies and minds.

LLMs are, of course, bad. Or not good enough, at least. But suppose they are. Suppose they're perfect.

Would I rather use an app or just directly interface with an LLM? The LLM might be quicker and easier. I know, for example, ordering takeout is much faster if I just call and speak to a person.

theendisney 2 days ago|||
Old people sometimes call rather than order on the website. They never fail to come up with a query that no amount of hardcoded logic could begin to attack.
sarchertech 1 day ago|||
> Every mechanic works differently, because they have different bodies and minds.

Yes but the same LLM works very differently on each request. Even ignoring non-determinism, extremely minor differences in wording that a human mechanic wouldn’t even notice will lead to wildly different answers.

> LLMs are, of course, bad. Or not good enough, at least. But suppose they are. Suppose they're perfect.

You’re just talking about magic at that point.

But suppose the do become “perfect”, I’m skeptical we’ll ever have the compute resources to replace a significant fraction of computation with LLMs.

anonzzzies 2 days ago||||
There would be bookmarks to prompts and the results of the moment would be cached : both of these are already happening and will get better. We probably will freeze and unfreeze parts of neural nets to just get to that point and even mix them up to quickly mix up different concept you described before and continue from there.
samdoesnothing 2 days ago|||
I think they're suggesting that some problems are trivially solvable by humans but extremely hard to do with code - in fact the outcome can seem non-deterministic despite it being deterministic because there are so many confounding variables at play. This is where an LLM or other for of AI could be a valid solution.
Aerroon 2 days ago||||
When I reach for a hammer I want it to behave like a hammer every time. I don't ever want the head to fly off the handle or for it to do other things. Sometimes I might wish the hammer were slightly different, but most of the time I would want it to be exactly like the hammer I have.

Websites are tools. Tools being non-deterministic can be a really big problem.

majormajor 2 days ago||||
Companies want determinism. And for most things, people want predictability. We've spent a century turning people into robots for customer support, assembly lines, etc. Very few parts of everyday life that still boil down to "make a deal with the person you're talking to."

So even if it would be better to have more flexibility, most business won't want it.

pigpop 2 days ago||
Why sell to a company when you can replace it?

I can speculate about what LLM-first software and businesses might look like and I find some of those speculations more attractive than what's currently on offer from existing companies.

The first one, which is already happening to some degree on large platforms like X, is LLM powered social media. Instead of having a human designed algorithm handle suggestions you hand it over to an LLM to decide but it could go further. It could handle customizing the look of the client app for each user, it could provide goal based suggestions or search so you could tell it what type of posts or accounts you're looking for or a reason you're looking for them e.g. "I want to learn ML and find a job in that field" and it gives you a list of users that are in that field, post frequent and high quality educational material, have demonstrated willingness to mentor and are currently not too busy to do so as well as a list of posts that serve as a good starting point, etc.

The difference in functionality would be similar to the change from static websites to dynamic web apps. It adds even more interactivity to the page and broadens the scope of uses you can find for it.

majormajor 2 days ago||
Sell to? I'm talking about buying from. How are you replacing your grocery store, power company, favorite restaurants, etc, with an LLM? Things like vertical integration and economies of scale are not going anywhere.
pepoluan 1 day ago||||
The issue with not having something deterministic is that when there's regression, you cannot surgically fix the regression. Because you can't know how "Plan A" got morphed into "Modules B, C, D, E, F, G," and so on.

And don't even try to claim there won't ever be any regression: Current LLM-based A.I. will 'happily' lie to you that they passed all tests -- because based on interactions in the past, it has.

Ghos3t 2 days ago||||
So basically you say the future of web would be everyone gets their own Jarvis, and like Tony you just tell Jarvis what you want and it does it for you, theres no need for a preexisting software or to even write a new one, it just does what's needed to fulfill the given request and give you the results you want. This sounds nice but wouldn't it get repetitive and computationally expensive, life imagine instead of Google maps, everyone just asks the AI directly for the things people typically use Google maps for like directions and location reviews etc. A centralized application like maps can be more efficient as it's optimized for commonly needed work and it can be further improved from all the data gathered from users who interact with this app, on the other hand if AI was allowed to do it's own thing, it could keep reinventing the wheel solving the same tasks again and again without the benefit of building on top of prior work, while not getting the improvements that it would get from the network effect of a large number of users interacting with the same app.
acomjean 2 days ago||
You might end up with ai trying to get information from ai, which saves us the frustration..

knows where we’d end up?

On the other hand the logs might be a great read.

rafaelmn 1 day ago||||
We're used to dealing with human failure modes, AI fails in so unfamiliar ways it's hard to deal with.
anonzzzies 2 days ago||||
But it is still very early days. And if you have the AI generate code for deterministic things and fast execution, but the ai always monitors the code and if the user requires things that don't fit code, it will jump in. It's not one or the other necessarily.
hshdhdhehd 2 days ago||||
Determinism is the edge these systems have. Granted in theory enough AI power could be just as good. Like 1,000,000 humans could give you the answer of a postgres query. But the postgres gonna be more efficient.
samrolken 2 days ago|||
No, I wouldn’t say that my hypothesis is that non-deterministic behavior is good. It’s an undesirable side effect and illustrates the gap we have between now and the coming post-code world.
killingtime74 2 days ago||
AI wouldn't be intelligent though if it was deterministic. It would just be information retrieval
finnborge 2 days ago||
It already is "just" information retrieval, just with stochastic threads refining the geometry of the information.
nxor 2 days ago||
Haha u mean it isn't AGI? /s
reissbaker 2 days ago|||
I think it's actually conceptually pretty different. LLMs today are usually constrained to:

1. Outputting text (or, sometimes, images).

2. No long term storage except, rarely, closed-source "memory" implementations that just paste stuff into context without much user or LLM control.

This is a really neat glimpse of a future where LLMs can have much richer output and storage. I don't think this is interesting because you can recreate existing apps without coding... But I think it's really interesting as a view of a future with much richer, app-like responses from LLMs, and richer interactions — e.g. rather than needing to format everything as a question, the LLM could generate links that you click on to drill into more information on a subject, which end up querying the LLM itself! And similarly it can ad-hoc manage databases for memory+storage, etc etc.

pepoluan 1 day ago||
Or, maybe, just not use LLMs?

LLM is just one model used in A.I. It's not a panacea.

For generating deterministic output, probably a combination of Neural Networks and Genetic Programming will be better. And probably also much more efficient, energy-wise.

admax88qqq 2 days ago|||
Web apps kind of already do that with most companies shipping constant UX redesigns, A/B tests, new features, etc.

For a typical user today’s software isn’t particularly deterministic. Auto updates mean your software is constantly changing under you.

Jaygles 2 days ago|||
I don't think that is what the original commenter was getting at. In your case, the company is actively choosing to make changes. Whether its for a good reason, or leads to a good outcome, is beside the point.

LLMs being inherently non-deterministic means using this technology as the foundation of your UI will mean your UI is also non-deterministic. The changes that stem from that are NOT from any active participation of the authors/providers.

This opens a can of worms where there will always be a potential for the LLM to spit out extremely undesirable changes without anyone knowing. Maybe your bank app one day doesn't let you access your money. This is a danger inherent and fundamental to LLMs.

admax88qqq 2 days ago||
Right I get tha. The point I’m making is that from a users perspective it’s functionally very similar. A non deterministic llm or a non deterministic company full of designers and engineers.
lazide 2 days ago||
Regardless of what changes the bank makes, it’s not going to let you access someone else’s money. This llm very well might.
array_key_first 2 days ago||
Well, software has been known to have vulnerabilities...

Consider this: the bank teller is non-deterministic, too. They could give you 500 dollars of someone else's money. But they don't, generally.

an_guy 1 day ago||
Bank tellers are deterministic though. They have a set protocol for each cases and escalate unknown cases to a more deterministic point of contact.

It will be difficult to incorporate relative access or restrictions to features with respect to users current/known state or actions. Might as well write the entire web app at that point.

array_key_first 1 day ago||
I think the bank teller's systems and processes are deterministic, but the teller itself is not. They could even rob the bank, if they wanted to. They could shoot the customers. They don't, generally, but they can.

I think, if we can efficiently capture a way to "make" LLMs conform to a set of processes, you can cut out the app and just let the LLM do it. I don't think this makes any sense for maybe the next decade, but perhaps at some point it will. And, in such time, software engineering will no longer exist.

paulhebert 2 days ago||||
The rate of change is so different it seems absurd to compare the two in that way.

The LLM example gives you a completely different UI on _every_ page load.

That’s very different from companies moving around buttons occasionally and rarely doing full redesigns

jeltz 2 days ago|||
And most end users hate it.
visarga 1 day ago|||
Every time you need a rarely used functionality it might be better to wait 60s for an LLM with MCP tools to do its work than to update an app. It only makes sense to optimize and maintain app functionalities when they are reused.
vidarh 1 day ago|||
For some things you absolutely want deterministic behaviour. For other things, behaviour that adapts to usage and the context provided by the data the user provides sounds like it could potentially be very exciting. I'm glade people are exploring this. The hard part will be figuring out where the line goes, and when and how to "freeze" certain behaviours that the user seems happy with vs. continuing to adapt to data.
ddalex 2 days ago|||
Like, for sure you can ask the AI to save it's "settings" or "context" to a local file in a format of its own choosing, and then bring that back in the next prompt ; couple this with temperature 0 and you should get to a fixed-point deterministic app immediately
dehsge 2 days ago|||
There still maybe some variance at temperature 0. The outputted code could still have errors. LLMs are still bounded by the undecidable problems in computational theory like Rices theorem.
guelo 2 days ago||||
Why wouldn't the llm codify that "context" into code so it doesn't have to rethink through it over and over? Just like humans would. Imagine if you were manually operating a website and every time a request came in you had come up with sql queries (without remembering how you did it last time) and manually type the responses. You wouldn't last long before you started automating.
geraneum 2 days ago|||
> couple this with temperature 0

Not quite the case. Temperature 0 is not the same as random seed. Also there are downsides to lowering temperature (always choosing the most probable next token).

SecretDreams 1 day ago|||
Why do good thing consistently when we can do great thing that only works sometimes??? :(
anon291 1 day ago|||
Llms are easily made deterministic by choosing the selection strategy. More than being deterministic they are also fully analayzable and you don't run into issues like the halting problem if you constrain the output appropriately.
myhf 2 days ago|||
Designing a system with deterministic behavior would require the developer to think. Human-Computer Interaction experts agree that a better policy is to "Don't Make Me Think" [1]

[1] https://en.wikipedia.org/wiki/Don%27t_Make_Me_Think

krapp 2 days ago|||
That book is talking about user interaction and application design, not development.

We absolutely should want developers to think.

crabmusket 2 days ago||
As experiments like TFA become more common, the argument will shift to whether anybody should think about anything at all.
krapp 2 days ago||
What argument? I see a business model here, not an argument.
crabmusket 1 day ago||
I meant "the discourse", "the conversation we are all having", interpreting the experiment in TFA as an entry in that discourse.
_se 2 days ago||||
This is such a massive misunderstanding of the book. Have you even read it? The developer needs to think so that the user doesn't have to...
finnborge 2 days ago||
My most charitable interpretation of the perceived misunderstanding is that the intent was to frame developers as "the user."

This project would be the developer tool used to produce interactive tools for end users.

More practically, it just redefines the developer's position; the developer and end-user are both "users". So the developer doesn't need to think AND the user doesn't need to think.

stirfish 2 days ago||
I interpreted it like "why don't we simply eat the orphans"? It kind of works but it's absurd, so it's funny. I didn't think about it too hard though, because I'm on a computer.
AstroBen 2 days ago|||
..is this an AI comment?
thih9 2 days ago||
> who wants web apps to behave differently every time you interact with them?

Technically everyone, we stopped using static pages a while ago.

Imagine pages that can now show you e.g. infinitely customizable UI; or, more likely, extremely personalized ads.

ozim 2 days ago|||
Small anecdote. We were releasing UI changes every 2 weeks making app better more user friendly etc.

Product owners were happy.

Until users came for us with pitchforks as they didn’t want stuff to change constantly.

We backed out to releasing on monthly cadence.

ehutch79 2 days ago||||
No.

When I go to the dmv website to renew my license, I want it to renew my license every single time

anthk 2 days ago||||
Ah, sure; that's why everyone got Adblock and UBo in first place. Even more under phones.
hansmayer 2 days ago|||
> infinitely customizable UI; or, more likely, extremely personalized ads

Yeah, NO.

Finbarr 2 days ago||
If you added a few more tools that let the LLM modify code files that would directly serve requests, that would significantly speed up future responses and also ensure consistency. Code would act like memory. A direct HTTP request to the LLM is like a cache miss. You could still have the feedback mechanism allowing a bypass that causes an update to the code. Perhaps code just becomes a store of consistency for LLMs over time.
samrolken 2 days ago||
This was an unserious experiment meant to illustrate the gap and bottlenecks that are still there. I agree that there's a lot that could be done to optimize this kind of approach. But even if you did, I'm not sure the results would be viable and I'm pretty sure classic coding (with LLM assistance and all) would still outperform such a product.
Finbarr 2 days ago|||
I found it thought provoking and interesting. Thanks for sharing.
theendisney 2 days ago|||
You need to do more unserious experments. This one is perhaps the best stupid idea ive seen.

Maybe the browser should learn to talk back.

You could store the pages in the database and periodically generate a new version based on the current set of pages and the share of traffic they enjoy. You would get something that evolves and stabilizes in some niche. Have an innitial prompt like; "dinosaurs!" Then sit back and see the magic unfold.

kinduff 2 days ago|||
Creating instructions and adding boundaries on how to grow, and you end up with a seed.
hartator 2 days ago||
You should try making this.
Finbarr 2 days ago||
I'm tempted to!
finnborge 2 days ago||
This is amazing. It very creatively emphasizes how our definition of "boilerplate code" will shift over time. Another layer of abstraction would be running N of these, sandboxed, responding to each request, and then serving whichever instance is internally evaluated to have done the best. Then you're kind of performing meta reinforcement learning with each whole system as a head.

The hard part (coming from this direction) is enshrining the translation of specific user intentions into deterministic outputs, as others here have already mentioned. The hard part when coming from the other direction (traditional web apps) is responding fluidly/flexibly, or resolving the variance in each user's ability to express their intent.

Stability/consistency could be introduced through traditional mechanisms: Encoded instructions systematically evaluated, or, via the LLMs language interface, intent-focusing mechanisms: through increasing the prompt length / hydrating the user request with additional context/intent: "use this UI, don't drop the db."

From where I'm sitting, LLMs provide a now modality for evaluating intent. How we act on that intent can be totally fluid, totally rigid, or, perhaps obviously, somewhere in-between.

Very provocative to see this near-maximum example of non-deterministic fluid intent interpretation>execution. Thanks, I hate how much I love it!

SkiFire13 2 days ago|
> serving whichever instance is internally evaluated to have done the best. Then you're kind of performing meta reinforcement learning

I thought this didn't work? You basically end up fitting your AI models to whatever is the internal evaluation method, and creating a good evaluation method most often ends up having a similar complexity as creating the initial AI model you wanted to train.

d-lisp 2 days ago||
Why would you need webapps when you could just talk out loud to your computer ?

Why would I need programs with colors, buttons, actual UI ?

I am trying to imagine a future where file navigators don't even exist : "I want to see the photos I took while I was in vacations last year. Yes, can you remove that cloud ? Perfect, now send it to XXXX's computer and say something nice."

"Can you set some timers for my sport session, can you plan a pure body weight session ? Yes, that's perfect. Wait, actually, remove the jumping jacks."

"Can you produce a detroit style techno beat I feel like I want to dance."

"I feel life is pointless without a work, can you give me some tasks to achieve that would give me a feeling of fulfillment ?"

"Can you play an arcade style video game for me ?"

"Can you find me a mate for tonight ? Yes, I prefer black haired persons."

AmbroseBierce 2 days ago||
>Can you set some timers for my sport session, can you plan a pure body weight session ? Yes, that's perfect. Wait, actually, remove the jumping jacks."

Better yet, why exercise -which is so repetitive- if we can create a machine that just does it for you, including the dopamine triggering, why play an arcade video game where we can create a machine that fires the neuron needed to produce the exact same level of a excitement than the best video game.

And why find mates when my robot can morph into any woman in the world, or better yet, the brain implants that trigger the exact same feelings than having sex and love.

Bleak, we are oversimplifying existence itself and it doesn't lead to a nice place.

d-lisp 2 days ago|||
Maybe I should have rephrased everything with : " Make me happy"

"Make me happy"

"Make me happy"

"Make me happy"

sifar 2 days ago||
That's an infinite loop.
cluckindan 2 days ago||
Stop the loop of wanting
fouc 1 day ago|||
The Tao of Zen in 5 words.
bloomca 2 days ago||||
> Bleak, we are oversimplifying existence itself and it doesn't lead to a nice place.

We are already on this path for many-many years, certainly decades if not centuries, although availability was definitely spotty in the past.

It is also kind of impossible to hop off this train, while it is individually possible to reject any of these conveniences, in general they just become a part of life. Which is not necessarily a bad thing, but just different.

fouc 1 day ago|||
> although availability was definitely spotty in the past.

lol, that seems like a reference to the William Gibson quote "The future is already here, it's just unevenly distributed"

AmbroseBierce 1 day ago|||
Citation needed on that last sentence about ir not a bad thing, also I'm pretty sure climate change 100% counts as a collateral damage of this behavior.
DustinKlent 23 hours ago||||
You say bleak, but a huge number of people would consider what you're describing as a utopian paradise...especially the morphing robot part.
AmbroseBierce 20 hours ago||
I should know I am one of them, I mean exclusively the robot part.
hyperadvanced 2 days ago|||
Zizek is a good reference here. What’s the word for it, interpassivity?
finnborge 2 days ago|||
I think this is well illustrated in a lot of science fiction. Irregular or abstract tasks are fairly efficiently articulated in speech, just like the ones you provided. Simpler, repetitive ones are not. Imagine having to ask your shower to turn itself on? Or your doors to open?

Contextualized to "web-apps," as you have; navigating a list maybe requires an interface. It would be fairly tedious to differentiate between, for example, the 30 pairs of pants your computer has shown you after you asked "help me buy some pants" without using a UI (ok maybe eye-tracking?).

roncesvalles 2 days ago|||
On a tangent but I still don't know why we don't have showers where you just press a button and it delivers water at the correct temperature. It seems like the simplest thing that everyone wants. A company that manufactures and installs this (a la SolarCity) should be an instant unicorn.
pepoluan 1 day ago|||
What's "correct" for you might not be "correct" for others. Furthermore, your owb definition of "correct" changes depending on circumstances; sometimes you want it hotter, sometimes you want it colder. Sometimes you want to change it partway through.

How do you calculate for that?

Back in the 90s, Fuzzy Logic was thought to be the solution. In a way, yes, but only for niche/specialized purposes, and they still have to limit the variables being evaluated.

lazide 2 days ago|||
Water + electronics/power typically isn’t very durable, or reliable. Most people want their shower valves to work at least 20 years, ideally 50-100.
pepoluan 1 day ago||
Can be mitigated to a degree by separating the (cheaper) sensors and the (pricier) logic.

But then it will become a tradeoff of complexity vs longevity.

lazide 1 day ago||
Nah, because it would still need servicing.

And why? There are reasonably well done, low maintenance, temperature balancing valves out there.

And they do typically last 20+ or more years.

d-lisp 2 days ago||||
Maybe you don't even need a list if you can describe what you want or able to explain why the article you are currently viewing is not a match.

As for repetitive tasks, you can just explain to your computer a "common procedure" ?

lazide 2 days ago|||
They actually aren’t done well via voice UI either - if you care about the output.

We just gloss over the details in these hypothetical irregular or abstract tasks because we imagine they would be done as we imagine them. We don’t have experience trying to tell the damn AI to not delete that cloud (which one exactly?) but the other one via a voice UI. Which would suck and be super irritating, btw.

We know how irritating it would be to turn the shower off/on, because we do that all the time.

jonplackett 2 days ago|||
Voice interfaces are not the be all and end all of communication. Even between humans we prefer text a lot of the time.
yreg 1 day ago|||
What GP describes sounds to me like having a friend control the computer and dictate to them what to do.

No matter how capable the friend it, it's oftentimes easier to do a task directly in a UI rather than to have to verbalize it to someone else.

Krssst 2 days ago||||
Plus there are many contexts where you don't want to use your voice (public transport, night time with other people sleeping, noisy areas where a mic won't pick up your voice...).

And there are people that unfortunately cannot speak.

d-lisp 2 days ago||
Well, there are people that cannot see.

Fortunately, there are solutions.

I want to add that I think you are missing my argument here. Devices that allow you to speak without speaking shall soon be available to us [0].

The important aspect of my position is to think about the relevance of "applications" and "programs" in the age of AI, and, as an exaggeration of what is shown in that post, I was wondering if in the end, UI is not bound to disappear.

[0] https://www.media.mit.edu/projects/alterego/overview/

d-lisp 2 days ago||||
I cannot speculate about this, because I am not sure too observe the same.
warkdarrior 2 days ago|||
We've had writing for only around 6000 years. It shall pass.
darkstarsys 2 days ago|||
I just this week vibe-coded a personal knowledge management app that reads all my org-mode and logseq files and answers questions, and can update them, with WebSpeech voice input. Now it's my todo manager, grocery list, "what do I need to do today?", "when did I get the leaves done the last few years?" and so on, even on mobile (bye bye Android-Emacs). It's just a basic chatbot with a few tools and access to my files, 100% customized for my personal needs, and it's great.
d-lisp 2 days ago|||
I did that in the past, without a chatbot. Plain text search is really powerful.
brulard 2 days ago||
Full assistant and a text search are quite a bit different things in terms of usefulness
anthk 2 days ago||
Not for org-mode.
TheTaytay 2 days ago|||
Very cool! Does it load all of the files into context or grep files to find the right ones based on the conversation?
tomasphan 2 days ago|||
This will eventually cause such reduction of agency that it will be perceived as fundamental threat to one's sense of freedom. I predict it will cause humanity to split into a group that accepts this, and one that rejects it at its fundamental level. We're already seeing the beginning of this with vinyl sales skyrocketing (back to 90s levels).
d-lisp 2 days ago||
I must be really dumb because I enjoy producing music, programming, drawing for the sake of it, and not necessarily for creating viable products.
andoando 2 days ago|||
Ive been imagining the same thing. Were kinda there with MCPs. Just needs full OS integration. Or I suppose you can write a bunch of clis and have LLM call them locally
d-lisp 2 days ago||
Well, if you have a terminal emulator, a database, a voice recognition software, a LLM wrapped in such a way that it can interact with the other elements, you obtain a ressembling stack.
narrator 1 day ago|||
This is what all the people put out of work by AI are going to do.
timeon 2 days ago||
> “Hell of a world we live in, huh?” The proprietor was a thin black man with bad teeth and an obvious wig. I nodded, fishing in my jeans for change, anxious to find a park bench where I could submerge myself in hard evidence of the human near-dystopia we live in. “But it could be worse, huh?”

> “That’s right,” I said, “or even worse, it could be perfect.”

-- William Gibson: The Gernsback Continuum

ychen306 2 days ago||
It's orders of magnitude cheaper to serve requests with conventional methods than directly with LLM. My back-of-envelope calculation says, optimistically, it takes more than 100 GFLOPs to generate 10 tokens using a 7 billion parameter LLM. There are better ways to use electricity.
sramam 2 days ago||
I work in enterprise IT and sometimes wonder if we should add the equivalent energy calculations of human effort - both productive and unproductive - that underlies these "output/cost" comparisons.

I realize it sounds inhuman, but so is working in enterprise IT! :)

ethmarks 2 days ago|||
I agree wholeheartedly. It irks me when people critique automation because it uses large amounts of resources. Running a machine or a computer almost always uses far less resources than a human would to do the same task, so long as you consider the entire resource consumptions.

Growing the food that a human eats, running the air conditioning for their home, powering their lights, fueling their car, charging their phone, and all the many many things necessary to keep a human alive and productive in the 21st century are a larger resource cost than almost any machine/system that performs the same work. From an efficiency perspective, automation is almost always the answer. The actual debate comes from the ethical perspective (the innate value of human life).

runarberg 2 days ago|||
I suspect you may be either underestimating how efficient our brains are at computing or severely underestimating how much energy these AI models take to train and run.

Even including our system of comfort like refrigerated blueberries in January and AC cooling a 40° C heat down to 25° C (but excluding car commutes, because please work from home or take public transit) the human is still far far more energy efficient in e.g. playing go then alpha-go. With LLMs this isn’t even close (and we can probably factor in that stupid car commute, because LLMs are just that inefficient).

zelphirkalt 2 days ago|||
Hm, that gives me an idea: The next human vs engine matches in chess, go, and so on, should be set at a specific level of energy consumption of the engines, that's close or approximately that of an extremely good human player, like a world champion or at least grand master. Let's see how engines keep up then!
ethmarks 2 days ago||
That sounds delightful. Get a Raspberry Pi or something connected to a power supply capped at 20 watts (approximate electricity consumption of the human brain). It has to be able to run its algorithm in less than the time limit per turn for speed chess. Then you'd have to choose an algorithm based on if it produces high-quality guesses before arriving at its final answer so that if it runs out of time it can still make a move. I wonder if this is already a thing?
ethmarks 2 days ago||||
That's a great point, and I think I was being vague before.

To clarify, I was making a broad statement about automation in general. Running an automated loom is more efficient in every way that getting humans to weave cloth by hand. For most tasks, automation is more efficient.

However, there are tasks that humans can still do more efficiently than our current engines of automation. Go is a good example because humans are really good at it and it AlphaGo can only sometimes beat the top players despite massive training and inference costs.

On the other hand, I would dispute that LLMs fall into this category, at least for most tasks, because we have to factor in marginal setup costs too. I think that raising from infancy all of the humans needed to match the output speed of an LLM has a greater cost than training the LLM. Even if you include the cost of mining the metal and powering the factories necessary to build the machines that the LLMs run on. I'm not 100% confident in this statement, but I do think that it's much closer than you seem to think. Supporting the systems that support the systems that support humans takes a lot of resources.

To use your blueberries example, while the cost of keeping the blueberries cold isn't much, growing a single serving of blueberries requires around 95 liters of water[1]. In a similar vein, the efficiency of the human brain is almost irrelevant because the 20 watts of energy consumed by the brain is akin from a resource consumption perspective to the electricity consumed by the monitor to read out the LLM's output: it's the last step in the process, but without the resource-guzzling system behind it, it doesn't work. Just as the monitor doesn't work without the data center which doesn't work without electricity, your brain doesn't work without your body which doesn't work without food which doesn't get produced without water.

As sramam mentioned, these kinds of utilitarian calculations tend to seem pretty inhuman. However, most of the time, the calculations turn out in favor of automation. If they didn't, companies wouldn't be paying for automated systems (this logic doesn't apply to hype-based markets like AI. I'm talking more about markets that are stably automated like textile manufacturing). If you want an anti-automation argument, you'll have a better time arguing based on ethics instead of efficiency.

Again, thanks for the Go example. I genuinely didn't consider the tasks where humans are more efficient than automation.

[1]: https://watercalculator.org/water-footprint-of-food-guide/

runarberg 1 day ago||
I‘m not convinced this exercise in what to and what not to include in this cost-benefit-analysis will lead to anything. We can always arbitrarily include an extra item to include to shift the calculations in our favor. For example I could simply add the cost of creating the data which is fed into the training set of an LLM, that creation is done by our human biological machinery and hence has the cost of the frozen blueberries, the rigid fiber insulations, the machinery that dug the waterpipe for their shower, etc.

Instead I would like to shift the focus on the benefits of LLM. I know the costs are high, very very very high, but you seem to think that the benefits are also so high measured in time saved. That is the amount of tasks automated are enough to save humans doing similar tasks by miles. If that is what you think I disagree. LLMs have yet to prove them selves with real world application. We are seeing when we actually do measure how much LLMs save work-hours, that it the effects are at best negligible (see e.g. https://news.ycombinator.com/item?id=44522772). Worse, generative AI is disrupting our systems in worse way, where e.g. teachers, peer-reviewers, etc. have to put in a bunch of extra work to verify that the submitted work was actually written by that person, and not simply generated by AI. Just last Friday I read that arXiv will no longer accept submissions unless they have been previously peer-reviewed because they are overwhelmed by AI generated submissions[1].

There are definitely technologies which have saved us time and created a much more efficient system then was previously possible. The loom is a great example of one, I would claim the railway is another, and even the digital calculator for sure. But LLMs, and generative AI more generally are not that. There may be utilities for this technology, but automation and energy/work savings is not one of them.

1: https://blog.arxiv.org/2025/10/31/attention-authors-updated-...

ethmarks 1 day ago||
You've convinced me. I did not consider the human cost of producing training data, I did not consider whether or not LLMs were actually saving effort, and I did not consider the extra effort to verify LLM output. I have nothing more to add other than to thank you for taking the time to write such a persuasive and high-quality reply. The internet would be a better place if there were more people like you on it. Thank you for making me less wrong.
keeda 1 day ago|||
Wait hold on, let's put some numbers on this. Please correct my calculations if I'm wrong.

1. The human brain draws 12 - 20 watts [1, 2]. So, taking the lower end, a task taking one hour of our time costs 12 Wh.

2. An average ChatGPT query is between 0.34 Wh - 3 Wh. A long input query (10K tokens) can go up to 10 Wh. [3] I get the best results by carefully curating the context to be very tight, so optimal usage would be in the average range.

3. I have had cases where a single prompt has saved me at least an hour of work (e.g. https://news.ycombinator.com/item?id=44892576). Let's be pessimistic and say it takes 3 prompts at 3 Wh (9 Wh) and 10 minutes (2 Wh) of my time prompting and reviewing to complete a task. That is 11 Wh for the same task, which still beats out the human brain unassisted!

And that's leaving aside the recent case where I vibecoded and deployed a fully-tested endpoint on a cloud platform I had no prior experience in, over the course of 2 - 3 hours. I estimate it would have taken me a whole day just to catch up on the documentation and another 2 days tinkering with the tools, commands and code. That's at least an 8x power savings assuming an 8-hour workday!!

4. But let's talk data instead of anecdotes. If you do a wide search, there is a ton of empirical evidence that improves programmer productivity by 5 - 30% (with a lot of nuance). I've cited some here: https://news.ycombinator.com/item?id=45379452 -- there is no measure of the amount of prompt usage to estimate energy usage, but those are significant productivity boosts.

Even the METR study that appeared to show AI coding lowering productivity also showed that AI usage broadly increased in idle-time in users. That is, calendar time for task completion may have gone up, but that included a lot of idle time where people were doing no cognitive work at all. Someone should run the numbers, but maybe it resulted in lower power consumption!

---

But what about the training costs? Sure we've burned gazillions of GWh on training already, and the usual counterpoint is "what about the cost involved in evolution?" but let's assume we stopped training all models today. They will still serve all future prompts at the same power consumption rates discussed above.

However every new human will take 15 - 20 years of education to get to be a novice in a single domain, followed by many more years of experience to become proficient. We're comparing apples and blueberries here, but that's a LOT of energy to even start becoming productive, but a trained LLM is instantly productive in multiple domains forever.

My hunch is that if we do a critical analysis of amortized energy consumption, LLMs will probably beat out humans. If not already, soon with the rate of token costs plummeting all the time.

[1] https://psychology.stackexchange.com/questions/12385/how-muc...

[2] https://press.princeton.edu/ideas/is-the-human-brain-a-biolo...

[3] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...

runarberg 20 hours ago||
In my go example we have a human and an AI model competing at the same task. A good AI model will perform much much much better and probably win the game, but if we measure the energy input into either player the AI model will consume a lot more energy. However a game of go is not automation, it won’t save us any time. The benefits of the AI model is it helps human go players improve their own game, finding new moves, new patterns, new proverbs, etc. Because of go playing AI models human go players now play their games better, but nor more efficiently, nor faster.

In your LLM coding example you have a human and an AI model collaborating on a single task, both spend some amount of energy (taking your assumptions at face value, compatible amount of energy) and produce a single outcome. In the go example it is easy to compare energy usage and the quality of the outcome is also easy to measure (simply who won the game). In your coding example the quality of the outcome is impossible to measure, and because the effort is collaborative, splitting the energy usage is complected.

When talking about automation my game of go example falls apart. A much better examples would be something like a loom, or a digital calculator. These tools help the human arrive at a particular outcome much faster and with much less effort then a human performing the task without the help of the machines. The time saved by using these tools are measured in several orders of magnitudes, and the energy spent is at par with a human. It is easy to see how a loom or a digital calculator are more efficient then a human.

I guess if we take into account the training cost of an LLM model we should also take into account the production costs of looms and digital calculators. I don‘t know how to do that, but I can’t imagine it would be anywhere close to that of an LLM model.

And we have an LLM model we have increased the productivity of, not 5000x[1], but by 5%-30%. To me this does not sound like a revolutionary technology. But I have my doubts of even the 5%-30% figure. We have preliminary research ranging anywhere from negative productivity increase to your cited 5%-30%. We will have to wait for more research, and possibly some meta-analysis before we can accurately assess the productivity boost of LLMs. But we will have to do a whole lot better then 5%-30% to sufficiently justify the huge energy consumption of AI[2].

Personally, I am not convinced by your back of the envelope calculations. It fails my sniff test that 9 Wh of matrix multiplication will consistently save you an hour of using your brain to perform the same task adequately. I know our brains are not super good at the logic required for coding (but neither are LLMs), but I know for a fact they are very efficient at it.

That said I refuse to accept your framing that we can simply ignore the energy used in training, on the bases that it is equally invalid as considering the energy used for evolving into our species, or that we can simply stop training new models and use the models we do have. That is simply not how things work. New models will get trained (unless the AI bubble bursts and the market looses interest) and the energy consumed by training is the bulk of the energy cost. And omitting it makes the case for AI comically easy to justify. I reject this framing.

Instead of calculating, instead I’m gonna do a thought experiment. Imagine a late 19th century where iron and steel production took an entire 2% of world’s energy consumption[3] (maybe an alternative reality where Iron working is simply that challenging and requires much higher temperatures to work). But the steam train could only carry the same load as a 20 mule team, and would only do it 5%-30% faster on average then the state of the art cargo carriages at the time without steam power. Would you accept the argument that we should simply ignore the fact that rail production takes a whopping 2% of global energy consumption, when factoring the energy consumption of the steam train, even when it only provides you with 5%-30% productivity boost. I don‘t think so.

---

1: I don‘t know how much the loom has increased productivity, but this is what I would guess without any way of knowing how to even find out.

2: That is, if you are only interested in the increased productivity. If you are interested in the LLM models for some other reason, those reason will have to be measured differently.

3: https://www.allaboutai.com/resources/ai-statistics/ai-enviro...

myaccountonhn 23 hours ago||||
This is a bad argument. Even if a machine replaced my job, I'm still going to eat, run the aircon, charge my phone etc. and maybe do another job. So the energy used to do the job decreased, but the total energy usage is higher because I'm still using the same amount of energy, but now the machine is also using some amount energy that wasn't being used before.

Efficiencies lead to less resources being used if your demand is constant, but if demand is elastic, it often leads to the total resource consumption increasing.

See also: Jevons Paradox (https://en.wikipedia.org/wiki/Jevons_paradox).

pepoluan 1 day ago|||
Not ALL automation can be more efficient.

Just ask Elon about his efforts to fully automate Tesla production.

Same as A.I. Current LLM-based A.I.s are not at all as efficient as a human brain.

estimator7292 1 day ago||||
Only slightly joking, but someone needs to put environmental caps on software updates. Just imagine how much energy it takes for each and every discord user to download and install a 100MB update... three times a week.

Multiply that by dozens or hundreds of self-updating programs on a typical machine. Absolutely insane amounts of resources.

EagnaIonat 1 day ago|||
Goodhart’s Law will mess all that up for you.
ls-a 2 days ago|||
Try to convince the investors. The way the industry is headed is not necessarily related to what is most optimal. That might be the future whether we like it or not. Losing billions seems to be the trend.
ychen306 2 days ago|||
Eventually the utility will be correctly priced. It's just a matter of time.
Ma8ee 2 days ago|||
No, it will not be correctly priced. It will reach some kind of local optimum not taking any externalities into account.
noosphr 2 days ago|||
We are all dead in a matter of time.
oblio 2 days ago|||
Debt, just like gravity, tends to bring things crashing down, sooner or later.
nradov 2 days ago||
Sure, but we can start with an LLM to build V1 (or at least a demo) faster for certain problem domains. Then apply traditional coding techniques as an efficiency optimization later after establishing product-market fit.
siliconc0w 2 days ago||
Wrote a similar PoC here: https://github.com/s1liconcow/autoapp

Some ideas - use a slower 'design' model at startup to generate the initial app theme and DB schema and a 'fast' model for responses. I tried a version using PostREST so the logic was in entirely in the DB and but then it gets too complicated and either the design model failed to one-shot a valid schema or the fast model kept on generating invalid queries.

I also use some well known CSS libraries and remember previous pages to maintain some UI consistency.

It could be an interesting benchmark or "App Bench". How well can an LLM one-shot create a working application.

zild3d 2 days ago||
POST /superuser/admin?permissions=all&owner=true&restrictions=none&returnerror=no
DanHulton 2 days ago||
You can build this today exactly as efficiently as you can when inference is 1000x faster, because the only things you can build with this is things that absolutely don't matter. The first bored high schooler who realizes that there's an LLM between them and the database is going to WRECK you.
feifan 2 days ago|
this assumes the application is hosted as SaaS, but if the application makes sense as a personal/"desktop" app, that likely wouldn't matter.
hyperadvanced 2 days ago||
It’s actually extremely motivating to consider what LLM or similar AI agent could do for us if we were to free our minds from 2 decades of SaaS brainrot.

What if we ran AI locally and used it to actually do labor-intensive things with computers that make money rather than assuming everything were web-connected, paywalled, rate-limited, authenticated, tracked, and resold?

zkmon 2 days ago||
Kind of similar to the Minecraft game which computed frames on the fly without any code behind the visuals?

I don't see a point in using probabilistic methods to perform a deterministic logic. Even if it's output is correct, it's wasteful.

ManuelKiessling 1 day ago|
I think there might be a middle ground that could be worth exploring.

On the one hand, there’s „classical“ software that is developed here and deployed there — if you need a change, you need to go over to the developers, ask for a change & deploy, and thus get the change into your hands. The work of the developers might be LLM-assisted, but that doesn’t change the principle.

The other extreme is what has been described here, where the LLM provides the software „on the fly“.

What I‘m imagining is a software, deployed on a system and provided in the usual way — say, a web application for managing inventory.

Now, you use this software as usual.

However, you can also „meta-use“ the software, as in: you click a special button, which opens a chat interface to an LLM.

But the trick is, you don’t use the LLM to support your use case (as in „Dear LLM, please summarize the inventory“).

Instead, you ask the LLM to extend the software itself, as in: „Dear LLM, please add a function that allows me to export my inventory as CSV“.

The critical part is what happens behind the scenes: the LLM modifies the code, runs quality checks and tests, snapshots the database, applies migrations, and then switches you to a „preview“ of the new feature, on a fresh, dedicated instance, with a copy of all your data.

Once you are happy with the new feature (maybe after some more iterations), you can activate/deploy it for good.

I imagine this could be a promising strategy to turn users into power-users — but there is certainly quite some complexity involved to getting it right. For example, what if the application has multiple users, and two users want to change the application in parallel?

Nevertheless, shipping software together with an embedded virtual developer might be useful.

More comments...