Posted by nreece 10/27/2025

AI can code, but it can't build software (bytesauna.com)
262 points | 177 comments
johnnienaked 10/28/2025|
Quit saying AI can code. AI can't do anything that wasn't done by actual humans before. AI is a plagiarism machine.
cdelsolar 10/28/2025||
I definitely disagree. I'm a software engineer, but I have been heavily using AI for the last few months and have gotten multiple apps to production since then. I have to guide the LLM along, yes, but it's perfectly capable of doing everything needed, up to and including building the CloudFormation templates for Fargate or whatever.
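
For illustration, here is a minimal sketch of the kind of Fargate setup being described, written with AWS CDK in Python (which synthesizes a CloudFormation template) rather than as raw CloudFormation; the stack name, service id, and container image are placeholders:

    # Minimal AWS CDK v2 (Python) sketch: one construct that synthesizes
    # a CloudFormation template for a load-balanced Fargate service.
    from aws_cdk import App, Stack
    from aws_cdk import aws_ecs as ecs
    from aws_cdk import aws_ecs_patterns as ecs_patterns

    app = App()
    stack = Stack(app, "DemoStack")  # placeholder stack name

    # Expands into a VPC, cluster, task definition, load balancer, and
    # service in the synthesized template.
    ecs_patterns.ApplicationLoadBalancedFargateService(
        stack,
        "DemoService",  # placeholder service id
        cpu=256,
        memory_limit_mib=512,
        task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
            image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
        ),
    )

    app.synth()  # `cdk synth` emits the CloudFormation template
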
aurintex 10/28/2025||
This is a great read and something I've been grappling with myself.

I've found it takes significant time to find the right "mode" of working with AI. It's a constant balance between maintaining a high-level overview (the 'engineering' part) while still getting that velocity boost from the AI (the 'coding' part).

The real trap I've seen (and fallen into) is letting the AI just generate code at me. The "engineering" skill now seems to be more about ruthless pruning and knowing exactly what to ask, rather than just knowing how to write the boilerplate.

orionblastar 10/27/2025||
I see so many people on the Internet who claim they can fix AI vibe code. Nothing new; I've been super-debugging crappy code for 30 years to make it work.
jongjong 10/28/2025||
>> hey, I have this vibe-coded app, would you like to make it production-ready

This makes me cringe because it's a lot harder to get LLMs to generate good code when you start with a crappy codebase. If you start with a good codebase, it's like the codebase is coding itself. The former approach, trying to get the LLM to write clean code on top of a mess, is akin to mental torture; the latter is highly pleasant.

preommr 10/27/2025||
These discussions are so tiring.

Yes, they're bad now, but they'll get better in a year.

If the generative ability is good enough for small snippets of code, it's good enough for larger, better-organized software. Maybe the models don't have enough of the right kind of training data, or the agents don't have the right reasoning algorithms. But the underlying capability is there.

CivBase 10/28/2025||
Problem is, as the author points out, designing software solutions is a lot more complicated than writing code. AI might get better in a year, but when will it be good enough? Does our current approach to AI even produce an economical solution to this problem, even if it's technically possible?
phyzome 10/28/2025|||
I've been hearing "they'll be better in a few months/years" for a few years now.
Esophagus4 10/28/2025||
But hasn’t the ecosystem as a whole been getting better? Maybe or maybe not the models specifically, but ChatGPT came out and it could do some simple coding stuff. Then came Claude, which could do some more coding stuff. Then Cursor and Cline, then reasoning models, then Claude Code, then MCPs, then agents, then…

If we’re simply measuring model benchmarks, I don’t know if they’re much better than a few years ago… but if we’re looking at how applicable the tools are, I would say we’re leaps and bounds beyond where we were.

gitaarik 10/28/2025||
So what's your point exactly? That LLMs *can* write software, just not yet?
thegrim33 10/28/2025||
And here I am, having used AI twice within the last 12 hours to ask it two questions about an extremely well-used, extremely well-documented physics library, and both times having it return sample code that calls library methods which don't exist. When I tell it this, I get the "Oh, you're so right to point that out!" response and new code that still just blatantly doesn't work.
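
A cheap defense against this failure mode is to check, before running generated sample code, that the names it calls actually exist in the installed library. A minimal Python sketch; verify_api_calls is a hypothetical helper, and the math module merely stands in for the physics library:

    import importlib

    def verify_api_calls(module_name, dotted_names):
        """Check whether dotted attribute paths exist in an installed module.

        A quick sanity check on LLM-generated sample code before running it.
        """
        module = importlib.import_module(module_name)
        results = {}
        for dotted in dotted_names:
            obj, found = module, True
            for part in dotted.split("."):
                try:
                    obj = getattr(obj, part)
                except AttributeError:
                    found = False
                    break
            results[dotted] = found
        return results

    # Example: two real names and one plausible-sounding hallucination.
    print(verify_api_calls("math", ["sqrt", "hypot", "fast_inverse_sqrt"]))
    # {'sqrt': True, 'hypot': True, 'fast_inverse_sqrt': False}
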
theshrike79 10/28/2025||
Someone had a blog post that said if an LLM hallucinates a method in your library, that means your library should statistically have a method like that. LLMs work on probabilities, and if the math says something should be there, who are you to argue =)

Also, use MCPs like Context7 and agentic LLMs for more interactivity instead of just relying on a raw model.

drcxd 10/28/2025||
Hello, have you ever tried using coding agents?

For example, you can pull the library code into your working environment and install the coding agent there as well. Then you can ask it to read specific files, or even all of the files in the library. In my personal experience, this significantly decreases the likelihood of hallucination.
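
A concrete (hypothetical) version of that workflow: dump the library's public API surface, with signatures, into a plain-text file and point the agent at it as context. A rough Python sketch; dump_public_api is an invented name, and json merely stands in for the library in question:

    import importlib
    import inspect

    def dump_public_api(module_name, out_path):
        """Write a module's public functions and classes, with signatures
        and first docstring lines, to a file an agent can be told to read."""
        module = importlib.import_module(module_name)
        lines = []
        for name, obj in inspect.getmembers(module):
            if name.startswith("_"):
                continue  # skip private names
            if inspect.isfunction(obj) or inspect.isclass(obj):
                try:
                    sig = str(inspect.signature(obj))
                except (ValueError, TypeError):
                    sig = "(...)"  # some builtins expose no signature
                doc = (inspect.getdoc(obj) or "").splitlines()
                summary = doc[0] if doc else ""
                lines.append(f"{name}{sig}  # {summary}")
        with open(out_path, "w") as f:
            f.write("\n".join(lines))

    dump_public_api("json", "json_api.txt")  # e.g. dump/dumps/load/loads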

gherkinnn 10/28/2025||
It is only a matter of years before all the idea guys in my org realise this.

"But AI can build this in 30min"

zeckalpha 10/28/2025|
I think this can be extended (though not necessarily fully mitigated) by having non-SWE agents interact with the same codebase. Drafting product requirements, assessing business opportunities, etc. can be done by LLMs.