
Posted by samrolken 11/1/2025

Show HN: Why write code if the LLM can just do the thing? (web app experiment) (github.com)
I spent a few hours last weekend testing whether AI can replace code by executing directly. Built a contact manager where every HTTP request goes to an LLM with three tools: database (SQLite), webResponse (HTML/JSON/JS), and updateMemory (feedback). No routes, no controllers, no business logic. The AI designs schemas on first request, generates UIs from paths alone, and evolves based on natural language feedback. It works—forms submit, data persists, APIs return JSON—but it's catastrophically slow (30-60s per request), absurdly expensive ($0.05/request), and has zero UI consistency between requests. The capability exists; performance is the problem. When inference gets 10x faster, maybe the question shifts from "how do we generate better code?" to "why generate code at all?"
436 points | 324 comments
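For illustration, here is a minimal sketch of the architecture the post describes: every request goes to a model that can call three tools (database, webResponse, updateMemory) in a loop until it emits a response. The model itself is stubbed out as `fake_llm`, and the dispatch logic and function names (beyond the three tool names taken from the post) are assumptions, not the actual implementation.

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
memory_notes = []  # natural-language feedback the model can read on later requests

def tool_database(sql):
    """Let the model run arbitrary SQL, including designing schemas on the fly."""
    cur = db.execute(sql)
    db.commit()
    return cur.fetchall()

def tool_web_response(body, content_type="text/html"):
    """Terminal tool: whatever the model passes here goes back to the client."""
    return {"content_type": content_type, "body": body}

def tool_update_memory(note):
    """Persist feedback so subsequent requests can evolve."""
    memory_notes.append(note)
    return "ok"

TOOLS = {
    "database": tool_database,
    "webResponse": tool_web_response,
    "updateMemory": tool_update_memory,
}

def handle_request(path, method, body, llm):
    """Run the model in a tool loop until it produces a webResponse."""
    history = [{"path": path, "method": method, "body": body, "memory": memory_notes}]
    while True:
        call = llm(history)  # -> {"tool": name, "args": [...]}
        result = TOOLS[call["tool"]](*call["args"])
        if call["tool"] == "webResponse":
            return result
        history.append({"tool": call["tool"], "result": result})

# A canned stand-in for the model: design a schema on first contact, then answer.
def fake_llm(history):
    if len(history) == 1:
        return {"tool": "database",
                "args": ["CREATE TABLE IF NOT EXISTS contacts (name TEXT)"]}
    return {"tool": "webResponse",
            "args": [json.dumps({"contacts": []}), "application/json"]}
```

With a real model in place of `fake_llm`, every request pays one or more inference round trips, which is where the 30-60s latency and per-request cost come from.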
drbojingle 11/2/2025|
I think what you're missing, bud, is that "writing the code" is caching for the LLM. Do you think caching is going away?
daxfohl 11/1/2025||
What happens when you separate the client and the server into their own LLMs? Because obviously we need another JS framework.
johnrob 11/2/2025||
This definitely has that “toy” feel to it that a lot of eventually mainstream ideas have. It can’t work! But… could it?
martini333 11/1/2025||
> ANTHROPIC_MODEL=claude-3-haiku-20240307

Why?

cheema33 11/1/2025|
> ANTHROPIC_MODEL=claude-3-haiku-20240307

> Why?

Probably because of cost and speed. Imagine asking a tool to get a list of your Amazon orders. This experiment shows it might code a solution and execute it and come back to you in 60 seconds. You cannot rely on the results because LLMs are non-deterministic. If you use a thinking model like GPT-5, the same might take 10 minutes to execute and you still cannot rely on the results.

steve1977 11/2/2025||
You could also ask why use AI when writing the code is trivial?
unbehagen 11/1/2025||
Amazing! Very similar approach, would love to hear what you think: https://github.com/gerkensm/vaporvibe
cadamsdotcom 11/1/2025||
Everything in engineering is a tradeoff.

Here you’re paying for decreased upfront effort with per-request cost and response time (which will go down in future for sure). Eventually the cost and response time will both be low enough that it’s not worth the upfront effort of coding the solution. Just another amazing outcome of technology being on a continual path of improvement.

But “truly no-code” can never be deterministic - even though it’ll get close enough in future to be indistinguishable. And it’ll always be an order of magnitude less efficient than code.

This is why we have LLMs write code for us: they’re codifying the deterministic outcome we desire.

Maybe the best solution is a hybrid: after a few requests the LLM should just write code it can use to respond every time from then on.

sixdimensional 11/1/2025|
I think your last comment hints at the possibility: runtime-generated and persisted code... e.g. the first time you call a function that doesn't exist, it gets generated and persists if it fulfills the requirement... and so the next time you just call the materialized function.

Of course the generated code might not work in all cases or scenarios, or may have to be generated multiple times, and yes it would be slower the first time.. but subsequent invocation would just be the code that was generated.

I'm trying to imagine what this looks like practically.. it's a system that writes itself as you use it? I feel like there is a thread to tug on there actually.

daxfohl 11/1/2025||
So basically we need a JIT compiler for LLMs.
ls-a 11/1/2025||
You just justified the mass layoffs for me
daxfohl 11/1/2025|
"What hardware giveth, software taketh away." IOW this is exactly how things will work once we get that array of nuclear powered GPU datacenters.