Posted by dbalatero 9/3/2025

Where's the shovelware? Why AI coding claims don't add up (mikelovesrobots.substack.com)
762 points | 482 comments | page 4
goalieca 9/3/2025|
There's a relatively monotonous task in software engineering that pretty much everyone working on a legacy C/C++ code base has had to face: static analysis and compiler warnings. That seems about as boring and routine an operation as exists. As simple as can be. I've seen this task farmed out to interns paid barely anything just to get it done.

My question to HN is... can LLMs do this? Can they convert all the unsafe C-string invocations to safe ones? Can they replace system calls with POSIX calls? Can they wrap everything in a smart pointer and make sure that mutex locks are added where needed?
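To make the kind of change concrete, here's a minimal before/after sketch of what I mean (all the names and context are invented for illustration):

    // A minimal before/after sketch of the kind of mechanical change in question.
    // All names here are invented for illustration.
    #include <cstring>
    #include <memory>
    #include <mutex>
    #include <string>

    namespace legacy {
        char g_name[64];

        void set_name(const char* s) {
            std::strcpy(g_name, s);  // unbounded copy: the classic analyzer warning
        }
    }

    namespace modern {
        struct Config { std::string name; };

        std::mutex g_mutex;
        std::unique_ptr<Config> g_config = std::make_unique<Config>();

        void set_name(const std::string& s) {
            std::lock_guard<std::mutex> lock(g_mutex);  // lock added where shared state is written
            g_config->name = s;                         // std::string replaces the fixed buffer
        }
    }

    int main() {
        legacy::set_name("fine while short");
        modern::set_name("safe regardless of length");
    }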

jes5199 9/3/2025|
If you have a static analysis tool that gives a list of problems to fix, and something like a unit test suite that makes sure nothing got badly broken due to a sloppy edit, then yes. If you don’t have these things, you’ll accumulate mistakes.
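As a toy illustration of that safety net (everything here is made up for the example), a characterization test pinned against the current behavior catches most sloppy edits:

    // A toy characterization test: pin the current behavior before letting an
    // LLM (or anyone else) rewrite the implementation. Names are invented.
    #include <cassert>
    #include <string>

    // Imagine this is the legacy function slated for cleanup.
    std::string join_path(const std::string& dir, const std::string& file) {
        return dir + "/" + file;
    }

    int main() {
        // If a refactor changes any of these, the sloppy edit gets caught.
        assert(join_path("etc", "hosts") == "etc/hosts");
        assert(join_path("", "hosts") == "/hosts");
        assert(join_path("etc", "") == "etc/");
        return 0;
    }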
goalieca 9/4/2025||
We’re talking legacy code bases. Automated test coverage is generally poor.
thewarrior 9/3/2025||
While I agree with the points he’s raising, let me play devil's advocate.

There’s a lot more code being written now that’s not counted in these statistics. A friend of mine vibe coded a writing tool for himself entirely using Gemini Canvas.

I regularly vibe code little analyses or scripts in ChatGPT that previously would have required writing code by hand.

None of these are counted in these statistics.

And yes, AI isn’t quite good enough to supercharge app creation end to end. Claude has only been good for a few months. That’s hardly enough time for adoption!

This would be like analysing the impact of languages like Perl or Python on software 3 months after their release.

philipwhiuk 9/4/2025|
But that isn't the argument presented. The argument is that it's a 10x boost to developer productivity, not just 'the odd script'.

And if it's not that, then the silly money valuations don't make any sense.

mysterydip 9/3/2025||
Good article, gave me some points I hadn't considered before. I know there are some AI-generated games out there, but maybe the same people were using asset flips before?

I'd also be curious how the numbers look for AI-generated videos/images, because social media and YouTube seem absolutely flooded with the stuff. Maybe it's because the output doesn't have to "function" like code does?

Grammatical nit: the phrase is "neck and neck", as when two racehorses are running very close together in a race.

lowbloodsugar 9/3/2025||
Turns out AI can’t help script kiddies write production-ready applications. Also turns out that AI is good for some things and not others, and a coin toss isn’t a good method to decide which tasks to do using AI. I read that JavaScript is by far the most popular language: still not using it for the mission-critical software I write. So it doesn’t bother me that 90% of HN is “AI sucks!” stories. I find it extremely effective when used appropriately. YMMV.
timdiller 9/3/2025||
I haven't found ChatGPT helpful in speeding up my coding because I don't want to give up understanding the code. If I let ChatGPT do it, there are inevitable mistakes, and it sometimes hallucinates libraries, etc. I have found it very useful in guiding me through the dev-ops side of setting up and configuring AWS instances for a blog server, a git server, etc. As a small business owner, that has been a big time saver.
flyinglizard 9/3/2025||
I get excellent productivity gains from AI. Not everywhere, and not linearly. It makes the bad stuff about the work (boilerplate, dealing with things outside my specialties) tolerable and the good stuff a bit better. It makes me want to create more. Business guys missing some visualization? Hell, why not; a few minutes in Aider and it's there. Let's improve our test suites. And let's migrate away from that legacy framework or runtime!

But my workflow is anything but "let her rip". It's very calculated, orderly, just like mastering any other tool. I'm always in the loop. I can't imagine someone without serious experience getting good results, and when things go bad, oh boy, you're bringing a firehose of crap into your org.

I have a junior programmer who's a bright kid but lacking a lot of depth. Got him a Cursor subscription; I'm tracking his code closely via PRs and calling out the BS, but we're getting serious work done.

I just can't see how this new situation calls for fewer programmers. It will just bring about more software, and surely more capable software, after everyone adjusts.

degamad 9/3/2025|
> It will just bring about more software

But according to the graphs in the article, after three years of LLM chatbots and coding assistants, we're seeing exactly the same rate of new software...

scotty79 9/3/2025||
How widely is AI adopted in the wider IT industry anyway? I imagine a $200-per-month subscription isn't that popular with people who refuse to pay for their IDEs and go with free alternatives instead. And a month's worth of an AI agent's free tier can be burned through in two intense evenings.

So who pays for AI for developers? Mostly corpos. And the speed of an individual developer was never the limiting factor at corpos. Average corporate development was always 10 times slower than indie. So even doubling it won't make any impression.

I don't know if I'm faster with AI at a specific task, but I know that I'm doing things I otherwise wouldn't touch because I hate the tedium. And I'm doing them while cooking and eating dinner and thinking about the wider context and what comes next. So for me it feels worth it.

I think it might be something like cars and safety. Any car safety improvement is going to be offset by drivers driving faster and more recklessly. So maybe any speed improvement that AI brings to a project is nullified by developers doing things they would otherwise just skip.

ares623 9/4/2025||
This is the part that really grinds my gears. Careless People.

> The impact on human lives is incredible. People are being fired because they’re not adopting these tools fast enough. People are sitting in jobs they don’t like because they’re afraid if they go somewhere else it’ll be worse. People are spending all this time trying to get good at prompting and feeling bad because they’re failing.

bastawhiz 9/3/2025||
The amount of shovelware is not a reliable signal. You know what's almost empty for the first time in almost a decade? My backlog. Where AI tools shine is taking an existing codebase and instructions, and going to town. It's not dreaming up whole games from scratch. All the engineers out there didn't quit their jobs to build new stuff; they picked up new tools to do their existing jobs better (or at least, to hate their jobs less).

The shovelware was always there. And it always will be. But that doesn't mean it's spurting out faster, because that's not what AI does. Hell, if anything I expect there's less visible shovelware, because when it does get created, it's less obvious (and perhaps higher quality).

At some point, the quality of uninspired projects will be lifted up by the baseline of quality that mainstream AI allows. At what point is that "high enough that we can't tell what's garbage"? We've perhaps found ourselves at or around that point.

ketozhang 9/4/2025|
The data is surprising. However, I do wish this article had looked carefully into barriers to entry, as they could explain the lack of increase in the data.

For example, on Steam, it costs $100 to release a game. You may extend your game with what's called DLC, and that costs $0 to release. If I were to build shovelware, especially with AI-generated content, I'd be more keen to make a single game with a bunch of DLC.

For game development, integration of AI into engines is another barrier. There aren't that many engines that give AI an interface to work with. The obvious interface is games that can be built entirely with code (e.g., pygame; even Godot is a big stretch).
