Posted by samrolken 3 days ago

Show HN: Why write code if the LLM can just do the thing? (web app experiment) (github.com)
I spent a few hours last weekend testing whether AI can replace code by executing directly. Built a contact manager where every HTTP request goes to an LLM with three tools: database (SQLite), webResponse (HTML/JSON/JS), and updateMemory (feedback). No routes, no controllers, no business logic. The AI designs schemas on first request, generates UIs from paths alone, and evolves based on natural language feedback. It works—forms submit, data persists, APIs return JSON—but it's catastrophically slow (30-60s per request), absurdly expensive ($0.05/request), and has zero UI consistency between requests. The capability exists; performance is the problem. When inference gets 10x faster, maybe the question shifts from "how do we generate better code?" to "why generate code at all?"
434 points | 318 comments
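For concreteness, here is a minimal sketch of the loop the post describes, assuming the Anthropic Messages API with tool use and the better-sqlite3 package. The tool names come from the post; the schemas, the prompt, and the dispatch loop are illustrative guesses rather than the repo's actual code, and the updateMemory tool is omitted for brevity:

```typescript
import http from "node:http";
import Anthropic from "@anthropic-ai/sdk";
import Database from "better-sqlite3";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
const db = new Database("app.db");

// Tool names from the post; the schemas are guesses.
const tools: Anthropic.Tool[] = [
  {
    name: "database",
    description: "Execute a SQL statement against the app's SQLite database.",
    input_schema: {
      type: "object",
      properties: { sql: { type: "string" } },
      required: ["sql"],
    },
  },
  {
    name: "webResponse",
    description: "Send the final HTTP response (HTML, JSON, or JS).",
    input_schema: {
      type: "object",
      properties: {
        contentType: { type: "string" },
        body: { type: "string" },
      },
      required: ["contentType", "body"],
    },
  },
];

http
  .createServer(async (req, res) => {
    const messages: Anthropic.MessageParam[] = [
      { role: "user", content: `Handle this HTTP request: ${req.method} ${req.url}` },
    ];
    // Keep calling the model until it emits a webResponse tool call.
    for (let turn = 0; turn < 10; turn++) {
      const msg = await anthropic.messages.create({
        model: process.env.ANTHROPIC_MODEL ?? "claude-3-haiku-20240307",
        max_tokens: 4096,
        tools,
        messages,
      });
      const uses = msg.content.filter(
        (b): b is Anthropic.ToolUseBlock => b.type === "tool_use",
      );
      if (uses.length === 0) break; // model stopped without responding
      messages.push({ role: "assistant", content: msg.content });
      const results: Anthropic.ToolResultBlockParam[] = [];
      for (const use of uses) {
        if (use.name === "webResponse") {
          const { contentType, body } = use.input as { contentType: string; body: string };
          res.writeHead(200, { "Content-Type": contentType }).end(body);
          return;
        }
        // "database": run the SQL and hand any rows back to the model.
        const { sql } = use.input as { sql: string };
        let out: unknown;
        try {
          out = db.prepare(sql).all(); // SELECTs return rows
        } catch {
          db.exec(sql); // DDL/DML: just execute
          out = "ok";
        }
        results.push({
          type: "tool_result",
          tool_use_id: use.id,
          content: JSON.stringify(out),
        });
      }
      messages.push({ role: "user", content: results });
    }
    res.writeHead(500).end("model never produced a webResponse");
  })
  .listen(3000);
```

Every request re-enters this loop from scratch, which is where the 30-60s latency and the per-request cost quoted in the post come from.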
socketcluster 3 days ago|
If anyone is interested in a CRUD serverless backend, I built https://saasufy.com/

I'm looking for users who want to be co-owners of the platform. It supports pretty much any feature you may need to build complex applications, including views/filtering, indexing (incl. support for compound keys), JWT auth, access control, and efficient real-time updates. It's been battle-tested with apps with relatively advanced search requirements.

johnrob 3 days ago||
This definitely has that “toy” feel to it that a lot of eventually mainstream ideas have. It can’t work! But… could it?
daxfohl 3 days ago||
What happens when you separate the client and the server into their own LLMs? Because obviously we need another JS framework.
steve1977 2 days ago||
You could also ask: why use AI when writing the code is trivial?
martini333 3 days ago||
> ANTHROPIC_MODEL=claude-3-haiku-20240307

Why?

cheema33 3 days ago|
> ANTHROPIC_MODEL=claude-3-haiku-20240307
> Why?

Probably because of cost and speed. Imagine asking a tool to get a list of your Amazon orders. This experiment shows it might code a solution, execute it, and come back to you in 60 seconds, and you cannot rely on the results because LLMs are non-deterministic. If you use a thinking model like GPT-5, the same task might take 10 minutes to execute, and you still cannot rely on the results.

unbehagen 3 days ago||
Amazing! Very similar approach; would love to hear what you think: https://github.com/gerkensm/vaporvibe
sameerds 2 days ago||
Everyone seems to be missing the point. Using an LLM to perform bookkeeping like this is akin to a business in the dot-com era hiring a programmer to help them go online. But since it's an LLM, the next step would be different. The LLM might initially do all the actions itself, but eventually it should train optimized pathways just for this purpose. It would become an app that isn't actually written out in code. Or alternatively, the LLM might actually dump its optimized logic into a program that it runs as a tool.
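A hypothetical sketch of that last step: give the model a meta-tool that persists the code it writes, then expose the persisted module as an ordinary tool on later requests. The materializeTool name and the whole layout are illustrative, not anything from the projects in this thread:

```typescript
import { mkdirSync, writeFileSync } from "node:fs";
import { pathToFileURL } from "node:url";

const toolDir = "./generated-tools";
mkdirSync(toolDir, { recursive: true });

// Tools the model has "dumped out" so far; later requests call these
// directly instead of reasoning through the task again.
const dynamicTools = new Map<string, (input: unknown) => unknown>();

// Handler for a hypothetical materializeTool meta-tool: the model passes a
// name and an ES-module source string that default-exports a function.
async function materializeTool(name: string, source: string): Promise<void> {
  const file = `${toolDir}/${name}.mjs`;
  writeFileSync(file, source); // persist the optimized logic
  const mod = await import(pathToFileURL(file).href);
  dynamicTools.set(name, mod.default); // now callable with no inference at all
}
```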
cadamsdotcom 3 days ago||
Everything in engineering is a tradeoff.

Here you’re paying for decreased upfront effort with per-request cost and response time (which will go down in future for sure). Eventually the cost and response time will both be low enough that it’s not worth the upfront effort of coding the solution. Just another amazing outcome of technology being on a continual path of improvement.

But “truly no-code” can never be deterministic, even though it’ll get close enough in future to be indistinguishable. And it’ll always be an order of magnitude less efficient than code.

This is why we have LLMs write code for us: they’re codifying the deterministic outcome we desire.

Maybe the best solution is a hybrid: after a few requests the LLM should just write code it can use to respond every time from then on.

sixdimensional 3 days ago|
I think your last comment hints at the possibility: runtime-generated and persisted code... e.g. the first time you call a function that doesn't exist, it gets generated, and it persists if it fulfills the requirement... and so the next time you just call the materialized function.

Of course the generated code might not work in all cases or scenarios, or may have to be generated multiple times, and yes, it would be slower the first time... but subsequent invocations would just run the code that was generated.

I'm trying to imagine what this looks like practically... it's a system that writes itself as you use it? I feel like there is a thread to tug on there, actually.
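A sketch of what that might look like: a per-route cache of generated handler modules, where generateHandlerSource() stands in for the LLM call. Everything here is illustrative:

```typescript
import http from "node:http";
import { existsSync, mkdirSync, writeFileSync } from "node:fs";
import { pathToFileURL } from "node:url";

mkdirSync("./handlers", { recursive: true });

// Stand-in for the real LLM call that would prompt the model for a
// self-contained handler module for this route.
async function generateHandlerSource(method: string, url: string): Promise<string> {
  return `export default (req, res) => res.end("generated handler for ${method} ${url}");`;
}

http
  .createServer(async (req, res) => {
    const method = req.method ?? "GET";
    const url = req.url ?? "/";
    const key = Buffer.from(`${method} ${url}`).toString("base64url");
    const file = `./handlers/${key}.mjs`;
    if (!existsSync(file)) {
      // Slow path, first hit only: pay for inference once, persist the result.
      writeFileSync(file, await generateHandlerSource(method, url));
    }
    // Fast path: every later hit is plain code execution, no model involved.
    const { default: handler } = await import(pathToFileURL(file).href);
    await handler(req, res);
  })
  .listen(3000);
```

Keying on method and path alone is obviously too coarse (request bodies differ, requirements evolve), so a real version would need invalidation, which is where the JIT analogy in the next comment fits.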

daxfohl 3 days ago||
So basically we need a JIT compiler for LLMs.
ls-a 3 days ago|
You just justified the mass layoffs for me