
Posted by samrolken 11/1/2025

Show HN: Why write code if the LLM can just do the thing? (web app experiment)(github.com)
I spent a few hours last weekend testing whether AI can replace code by executing directly. Built a contact manager where every HTTP request goes to an LLM with three tools: database (SQLite), webResponse (HTML/JSON/JS), and updateMemory (feedback). No routes, no controllers, no business logic. The AI designs schemas on first request, generates UIs from paths alone, and evolves based on natural language feedback. It works—forms submit, data persists, APIs return JSON—but it's catastrophically slow (30-60s per request), absurdly expensive ($0.05/request), and has zero UI consistency between requests. The capability exists; performance is the problem. When inference gets 10x faster, maybe the question shifts from "how do we generate better code?" to "why generate code at all?"
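The architecture described above can be sketched in a few lines. This is a hedged reconstruction, not the repo's actual code: the function and tool names (`handle_request`, `run_llm`, `tool_database`, etc.) are assumptions based on the description of the three tools.

```python
import json
import sqlite3

# Hypothetical sketch of the experiment's loop: every HTTP request is
# handed to a model that can call three tools. No routes, no controllers,
# no business logic on our side.

db = sqlite3.connect(":memory:")

def tool_database(sql: str):
    # The model designs its own schema on first request.
    cur = db.execute(sql)
    db.commit()
    return cur.fetchall()

def tool_web_response(content_type: str, body: str):
    # The model decides whether to emit HTML, JSON, or JS.
    return {"content_type": content_type, "body": body}

def tool_update_memory(note: str, memory: list):
    # Natural-language feedback accumulates across requests.
    memory.append(note)
    return "ok"

def handle_request(method, path, payload, run_llm, memory):
    """run_llm(prompt, tools) is a stand-in for the real model call."""
    prompt = f"{method} {path}\n{json.dumps(payload)}\nmemory: {memory}"
    tools = {
        "database": tool_database,
        "webResponse": tool_web_response,
        "updateMemory": lambda note: tool_update_memory(note, memory),
    }
    # The model picks which tools to call and what the final response is.
    return run_llm(prompt, tools)
```

The 30-60s latency and $0.05 cost per request both live inside `run_llm`: the entire request/response cycle is one (or several) model inferences.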
436 points | 324 comments
Yumako 11/1/2025|
Honestly if you ask yourself this you need to understand better why clients pay us.
julianlam 11/1/2025||
I can't wait to build against an API whose outputs can radically change by the second!

Usually I have to wait for the company running the API to push breaking changes without warning.

finnborge 11/1/2025||
In N years, the idea of requiring a rigid API contract between systems may be as ridiculous as a panda being unable to understand that bamboo is food unless it is planted in the ground.

Abstractly, who cares what format the information is shared in? If it is complete, the rigidity of the schema *could* be irrelevant (in a future paradigm). Determinism is extremely helpful (and maybe vitally necessary) but, as I think this intends to demonstrate, *could* just be articulated as a form of optimization.

Fluid interpretation of API results would already be useful but is impossibly problematic. How many of us already spend meaningful amounts of time "cleaning" data?

samrolken 11/1/2025||
As an unserious experiment, I deliberately left this undefined for maximum hallucination chaos. But in practice you could easily add the schemata for stuff in the application-defining prompt. Not that I’m saying that makes this approach any more practical…
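Embedding a schema in the application-defining prompt and rejecting non-conforming responses is straightforward to sketch. Everything here is invented for illustration (the schema, the field names, the prompt wording); it only shows the shape of the idea, not the repo's implementation.

```python
import json

# Hypothetical: a JSON-Schema-style contract pasted into the
# application-defining prompt, plus a minimal conformance check
# applied to whatever the model returns.

CONTACT_SCHEMA = {
    "type": "object",
    "required": ["name", "email"],
}

SYSTEM_PROMPT = (
    "You are the contact-manager backend. Every JSON response for "
    "/contacts must match this schema:\n" + json.dumps(CONTACT_SCHEMA)
)

def matches_schema(obj, schema):
    # Minimal check: right type and all required keys present.
    # A real setup would use a proper JSON Schema validator.
    if schema.get("type") == "object":
        return isinstance(obj, dict) and all(
            key in obj for key in schema.get("required", [])
        )
    return True
```

A non-conforming response could then trigger a retry instead of reaching the client, trading even more latency for a stable contract.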
dboreham 11/1/2025||
Another version of this question: why have high level languages if AI writes the code and tests it?
taotau 11/2/2025||
Because high level languages are where the libraries that do all of the heavy lifting exist. Libraries provide a suite of tools for abstracting away all of the complexities of creating a 'simple' web app. I think a lot of newer devs don't realise how many shoulders of giants they are standing on, and all the complexities involved in performing a simple fetch request.

Sure, an LLM could write its own libraries and abstractions in a low level language, and I'm sure there are some assembler or C level web API wrappers, but they would be nowhere near as comprehensive or battle tested as the ones available for high level languages.

This could definitely change in the future. I think we need a coding platform that is designed for optimised LLM use, but that still allows humans to understand and write it. Kind of a markdown for code. Sort of like what OP is trying to do, but with the built in benefit of having a common shared suite of tools for interoperability.

rererereferred 11/2/2025||
Yes, height of the language aside, why add a dependency on leftpad when the LLM can build the code for you every time? Extrapolate this to ORMs: why use the ORM when the LLM can build a custom query and map it to objects? And this will probably be more performant. Then extrapolate to the whole web framework. Where should we draw the line?
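The leftpad in question is trivial enough to regenerate on demand, which is exactly the point: a sketch of the whole "dependency" the LLM would be replacing.

```python
def left_pad(s: str, width: int, fill: str = " ") -> str:
    # The kind of function an LLM would regenerate per-request instead
    # of importing: trivial, but every regenerated copy can differ.
    if len(s) >= width:
        return s
    return fill * (width - len(s)) + s
```

(Python already ships this as `str.rjust`, which is the counterargument in miniature: the line between "regenerate it" and "depend on it" was drawn into the standard library long ago.)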
samrolken 11/1/2025||
Most of today’s top models do a decent job with assembly language!
sonicvroooom 11/3/2025||
bus factor.
tekbruh9000 11/1/2025|
You're still operating with layers of lexical abstraction and indirection. Models full of dated syntactic and semantic concepts about software that waste cycles.

Ultimately they are useless layers of state that inevitably complicate the very goal you set out to test.

In chip design land we're focused on streamlining the stack to drawing geometry. Drawing will be faster when the machine doesn't also lose cycles to state management built on decades of programmer opinions.

When there are no decisions but extend or delete a bit of geometry we will eliminate more (still not all) hallucinations and false positives than we get trying to organize syntax which has subtly different importance to everyone (misunderstanding fosters hallucinations).

Most software out there is developer tools and frameworks; they need to do a job.

Most users just want something like an automated Blender that handles 80% of an ask ("look like a word processor" or "a video game"), that they can then customize, and that has a "play" mode that switches out of edit mode. That’s the future machine and model we intend to ship. Fonts are just geometric coordinates. Memory matrix and pixels are just geometric coordinates. The system state is just geometric coordinates[1].

Text-driven software engineering, modeled on 1960s-1970s job routines and layering indirection on math states in the machine, is not high tech in 2025 and beyond. If programmers were car people they would all insist on a Model T being the only real car.

Copy-paste quote about never getting one to understand something when their paycheck depends on them not understanding it.

Intelligence gave rise to language, language does not give rise to intelligence. Memorization and a vain sense of accomplishment that follows is all there is to language.

[1]https://iopscience.iop.org/article/10.1088/1742-6596/2987/1/...

finnborge 11/1/2025|
I'm not sure I follow this entirely, but if the assertion is that "everything is math" then yeah, I totally agree. Where I think language operates here is as the medium best situated to assign objects to locations in vector space. We get to borrow hundreds of millions of encodings/relationships. How can you plot MAN against FATHER against GRAPEFRUIT using math without circumnavigating the human experience?
tekbruh9000 11/5/2025||
When I write to an unknown audience, unable to know in advance what terms they rely on, I tend to circumlocute to build emotional subtext. They might only get some percent but it may be familiar enough terms to act as middleware to the rest.

The words man, father, and grapefruit aren't essential to the existence of man, father, grapefruit. All existed before language.

What you mean by "human experience" is "bird song my culture uses to describe shared space". Leave meaning to be debated in meat space and include the current geometry of the language in the model. Just make it mutable.

The machine can just focus on rendering geometry to the pixel limit of the machine using electrical theory; it doesn't need to care internally if it's text with meaning. It's only represented like that on the screen anyway. Compress the information required to just geometric representation and don't anthropomorphize machine state manipulation.