Top
Best
New

Posted by samrolken 3 days ago

Show HN: Why write code if the LLM can just do the thing? (web app experiment)(github.com)
I spent a few hours last weekend testing whether AI can replace code by executing directly. Built a contact manager where every HTTP request goes to an LLM with three tools: database (SQLite), webResponse (HTML/JSON/JS), and updateMemory (feedback). No routes, no controllers, no business logic. The AI designs schemas on first request, generates UIs from paths alone, and evolves based on natural language feedback. It works—forms submit, data persists, APIs return JSON—but it's catastrophically slow (30-60s per request), absurdly expensive ($0.05/request), and has zero UI consistency between requests. The capability exists; performance is the problem. When inference gets 10x faster, maybe the question shifts from "how do we generate better code?" to "why generate code at all?"
433 points | 317 commentspage 4
kesor 2 days ago|
This is just like vibe coding. In vibe coding, you snapshot the results of the LLM's implementation into files that you reuse later.

This project could use something like that. Perhaps ask the LLM to implement a way to store/cache the snapshots of its previous answers. That way, the more you use it, the faster it becomes.

samrolken 3 days ago||
Wow, thanks everyone. First HN post ever and it’s this intentionally terrible experiment that I thought was the dumbest weekend project I ever did, and it hit the front page. Perfect.

I’ve been reading through all the comments and the range of responses is really great and I'm so thankful for everyone to take the time to comment... from from “this is completely impractical” to “but what if we cached the generated code?” to “why would anyone want non-deterministic behavior?” All valid! Though I think some folks are critiquing this as if I was trying to build something production-ready, when really I was trying to build something that would break in instructive ways.

Like, the whole point was to eliminate ALL the normal architectural layers... routes, controllers, business logic, everything, and see what happens. What happens is: it’s slow, expensive, and inconsistent. But it also works, which is the weird part. The LLM designed reasonable database schemas on first request, generated working forms from nothing but URL paths, returned proper JSON from API endpoints. It just took forever to do it. I kept the implementation pure on purpose because I wanted to see the raw capabilities and limitations without any optimizations hiding the problems.

And honestly? I came away thinking this is closer to viable than it should be. Not viable TODAY. Today it’s ridiculous. But the trajectory is interesting. I think we’re going to look back at this moment and realize we were closer to a real shift than we thought. Or maybe not! Maybe code wins forever. Either way, it was a fun weekend. If anyone wants to discuss this or work on projects that respond faster than 30 seconds per request, I’m available for full stack staff engineer or tech co-founder work: sam@samrolken.org or x.com/samrolken

indigodaddy 3 days ago||
This is absolutely awesome. I had some ideas in my head that were very muddy and fuzzy re how to implement, eg like have the LLM just on demand/dynamically create/serve some 90s retro style html/website from a single entry field/form (to describe the website), etc, but I just couldn't begin to figure out how to go about it or where to start. But I love your idea about just putting the description in the route-- makes a lot of sense (I think I saw something else in the last few months on HN front page that was similar with putting whatever in a URI/domain route, but I think it was more of "redirect to whatever external website/page is most appropriate/relevant to the described route"- so a little similar but you've taken this to the next level).

I guess there are many of us out there with these same thoughts/ideas and you've done an awesome job articulating and implementing it, congrats!

jasonthorsness 3 days ago||
I tried this as well at https://github.com/jasonthorsness/ginprov (hosted at https://ginprov.com). After a while it sort of starts to all look the same though.
brokensegue 3 days ago||
Generating code will always be more performant and reliable than this. Just consider the security implications of this design...
samrolken 3 days ago|
Exactly. It even includes built-in prompt injection as a "feedback form".
CoderLim110 2 days ago||
I’ve been thinking about similar questions myself:

1、If code generation eventually works without human intervention, and every Google search could theoretically produce a real-time, custom-generated page, does that mean we no longer need people to build websites at all? At that point, “web development” becomes more like intent-shaping rather than coding.

2、I’m also not convinced that chat is the ideal interface for users. Natural language feels flexible, but it can also be slow, ambiguous, and cognitively heavier than clicking a button. Maybe LLM-driven systems will need new UI models that blend conversation with more structured interaction, instead of assuming chat = the future.

Curious how others here think about those two points.

_heimdall 2 days ago|
If (1) is true, there is no use for the web at all really.

The only value of an LLM generating a realistic HTML page as an answer is to make it appear as though the answer was found on a preexisting page, lending the answer some level of validity.

If users really are fine with the LLM just generating the answer on the fly, doing so in HTML is completely unnecessary. Just give the user answers in text form.

CoderLim110 2 days ago||
True. Most users just want their problem solved — they don’t care how it’s solved.
hoppp 3 days ago||
Because LLMs have a big chance to screw things up. They can't take responsibility. A person can take responsibility for code, but can they do the same for tool calling? Not really, because it's probabilistic. A webs service shouldn't be probabilistic
crazygringo 3 days ago||
This is incredibly interesting.

Now what if you ask it to optimize itself? Instead of just:

  prompt: `Handle this HTTP request: ${method} ${path}`,
Append some simple generic instructions to the prompt that it should create a code path for the request if it doesn't already exist, and list all existing functions it's already created along with the total number of times each one has been called, or something like that.

Even better, have it create HTTP routings automatically to bypass the LLM entirely once they exist. Or, do exponential backoff -- the first few times an HTTP request is called where a routing exists, still have the LLM verify that the results are correct, but decrease the frequency as long as verifications continue to pass.

I think something like this would allow you to create a version that might then be performant after a while...?

sixdimensional 3 days ago|
This brings a whole new meaning to "memoizing", if we just let the LLM be a function.

In fact, this thought has been percolating in the back of my mind but I don't know how to process it:

If LLMs were perfectly deterministic - e.g. for the same input we get the same output - and we actually started memoizing results for input sets by materializing them - what would that start to resemble?

I feel as though such a thing might start to resemble the source information the model was trained on. The fact that the model compresses all the possibilities into a limited space is exactly what makes it more valuable - instead of having to store every input, function body and outputs by memoizing that an LLM could generate, it just stores the model.

But this blows my mind somehow because if we DID store all the "working" pathways, what would that knowledgebase effectively represent and how would intellectual property work anymore in that case?

Thinking about functional programming, to me the potential to think of the LLM as the "anything" function, where a deterministic seed and input always produces the same output, with a knowledgebase of pregenererated outputs to use to speed up the retrieval of acceptable results for a given seed and set of inputs.... I can't put my finger on it.. is it a basically just a search engine then?

Let me try another way...

If I have a ask an LLM to generate a function for "what color is the fruit @fruit?", where fruit is the variable, and I memoize that @fruit = banana + seed 3 is "yellow", then the set of the prompt, input "@fruit", seed = 3, output = "yellow"... then this is now a fact that I could just memoize.

Would that be faster to retrieve the memoized result than calculating the result via the LLM?

And, what do we do with the thought that that set of information is "always true" with regards to intellectual property?

I honestly don't know yet.

isuckatcoding 2 days ago||
Security concerns aside, this might be good for quick API or UI design exploration. Almost like an OpenAPI spec or Figma doc that gets produced at the end of this.

So guess kind of like v0

conartist6 2 days ago|
Stuff like this just makes me embarrassed. There are so many real problems in the world, but people still want to spend their time trying to build a perpetual motion machine.
batch12 2 days ago||
Lighten up. People spend their time doing lots of things they enjoy regardless of the value others place on their efforts. Instead of projecting embarrassment, go save the world if that makes you happy.
cindyllm 2 days ago||
[dead]
More comments...