Posted by dbalatero 9/3/2025

Where's the shovelware? Why AI coding claims don't add up (mikelovesrobots.substack.com)
762 points | 482 comments
neilv 9/3/2025|
There's also the questionable copyright/IP angle.

As an analogy, can you imagine being a startup that hired a developer, and months later finding out the bulk of the new Web app they "coded" for you was actually copy&pasted open source code, loosely obfuscated, which they were passing off as something they developed, and to which the company had IP rights?

You'd immediately convene the cofounders and a lawyer, about how to make this have never happened.

First you realize that you need to hand the lawyer the evidence (against the employee), and otherwise remove all traces of that code and activity from the company.

Simultaneously, you need to get real developers started rushing to rewrite everything without obvious IP taint.

Then one of you will delicately ask whether firing and legal action against the employee is sufficient, or whether the employee needs to sleep with the fishes to keep them quiet.

The lawyer will say this kind of situation isn't within the scope of their practice, but here's the number of a person they refer to only as 'the specialist'.

Soon, not only are you losing the startup, and the LLC is being pierced to go after your personal assets, but you're also personally going to prison. Because you were also too cheap to pay the professional fee for 'the specialist', and you asked ChatGPT to make the employee have a freak industrial shredder accident.

All this because you tried to cheap out, and spend $20 or $200 on getting some kind of code to appear in your repo, while pretending you didn't know where it came from.

falcor84 9/3/2025||
That's a fantastic piece of short fiction, but it is fiction. In practice, I've seen so many copy&pasted unsourced open source snippets in proprietary code that I've lost all ability to be surprised by it, and I can't think of a single time where the company was sued over it, let alone anyone facing personal repercussions, not even the junior devs. And if anything, by being "lossy encyclopedias" rather than copy-pasters, LLMs significantly reduce this ostensible legal liability.

Oh, and then you have the actual tech giants offering legal commitment to protect you against any copyright claims:

https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot...

https://cloud.google.com/blog/products/ai-machine-learning/p...

neilv 9/3/2025|||
The festival of pillaging open source is suddenly so ubiquitous, and protected by deep-pocketed exploiters selling pillaging shovels, that everyone else is just going to try to get their share of the loot?

You might be right, but the point needs to be made.

neilv 9/4/2025||||
Though one recent related scandal was:

https://techcrunch.com/2024/09/30/y-combinator-is-being-crit...

InCom-0 9/4/2025|||
You can copy-paste unsourced open source snippets just fine, ain't nothing wrong with that (usually). It is another story whether anyone should do that for other reasons having nothing to do with open source or licensing.
Ekaros 9/4/2025||
On the other side, I wonder how long until we get the first IP theft case. And in discovery all the logs with all the chatbots are requested. And the end result is that, well, it was mostly AI-produced, so no copyright protection, so no damages...
neilv 9/4/2025||
Interesting. I wonder whether investors and M&A care. (I'm thinking "data room" due diligence over whether you own the IP.)

Maybe investors will care, but for now they stand to make more money from "AI" gold rush startups, and don't want to be a wet blanket on "AI" at all by bringing up concerns.

ethin 9/4/2025||
I agree quite strongly with this article. I've used AI for some things, but when it comes to productivity I don't use it in big codebases I contribute to, or in code I want to put into production. I've mainly only used it to build little concept demos/prototypes, and even then I build on top of a framework I wrote by hand like last year or so. And I only use AI to get familiar enough with the general patterns for a library I'm not familiar with (mainly because I'd like to avoid diving into tests to learn how the library works). But even then, I always have the docs open, and API docs, and I very carefully review and thoroughly test on my own system, with what I'm really trying to do, before I even consider it something I'd give to others. Even so, I wouldn't say I've gotten a productivity increase, because (1) I don't measure or really care about productivity with these kinds of things, and (2) I'm the one who already knows what I want to accomplish, and I just need a bit of help working towards that goal.
smjburton 9/3/2025||
I generally agree with the sentiment of the article, but the OP should also be looking at product launch websites like ProductHunt, where there are tens to hundreds of vibe coded SaaS apps listed daily.

From my experience, it's much easier to get an LLM to generate code for a React/Tailwind CSS web app than a mobile app, and that's why we're seeing so many of these apps showing up in the SaaS space.

mattmanser 9/3/2025|
I actually just looked, and if anything the PH data supports his theory, assuming the website I found is scraping the data accurately.

In fact, it looks like there were fewer products launched on PH last month than in the same period a year ago.

https://hunted.space/monthly-overview

It's a bit hard to tell as they're not summing by month, but it looks like fewer to me, quickly scanning it.

And as Claude Code has only really been out 3-4 months, you'd expect launches to be shooting up week-by-week right about now, as all the vibe products get finished.

They're not, see the 8 week graph:

https://hunted.space/stats

smjburton 9/4/2025||
I'm surprised, but you're right... Thanks for sharing this site, it'll be interesting to dig into the data.
elzbardico 9/3/2025||
Got lots of data in my own work. The mission: demonstrate the gains of AI to the C-level.

Well... no significant effects showed, except for a few projects. It was really hard torturing the data to reach my manager's desired conclusion.

protocolture 9/3/2025||
AI has made me a 10x hobby engineer. I.e., if I need skills I don't have to do work that's just for me. It's great.

It's sometimes helpful when writing an email, but otherwise it has not touched any of my productive work.

momiforgot 9/4/2025||
LLM-powered shovelware sits in the same box as coke-induced business ideas. Both give you the dopamine rush of being “on top of it” until the magic wears off and you’re scrubbing your apartment floor with a toothbrush at 4 AM, or stuck debugging a DB migration that Claude Code has been mangling for five hours straight.
tezza 9/4/2025||
“Where is the shovelware?”… It’s Coming!

Shifting domains to writing, images, and video, you can see LinkedIn is awash with everyone generating everything with LLMs. The posting cadence has quickened too, as people shout louder to raise their AI-assisted voice over other people's.

We’ve all seen and heard the AI images and video tsunami

So why not software (yet, but soon)?

Firstly, software often has a function, and AI tool creations can't reliably make that function work. Lovable/Bolt etc. are too flaky to live up to their text-to-app promises. A shedload of horror debugging or a lottery win of luck is required to fashion an app out of that. This will improve over time, but the big question is: by enough?

And secondly, as with LinkedIn above: perhaps the standards of the users will drop? LinkedIn readers now tolerate the LLM posts; it is not a mark of shame. Will the same reduction in standards among software users open the door to good-enough shovelware?

RachelF 9/4/2025||
Software standards are already falling, sadly. I look at the recent problems with Microsoft Windows, Teams and Outlook and despair.

How much of it is to be blamed on AI, and how much on a culture of making users test their products, I do not know.

ares623 9/4/2025||
Yeah, users' expectations of their software have definitely been declining.

Everything has been enshittified so much that nothing fazes them anymore.

rsynnott 9/4/2025||
I mean, LinkedIn, even before the advent of LLMs, has been the worst and most bullshit-heavy of the social networks. There's a reason that r/LinkedInLunatics exists. "It can write a LinkedIn post" is not necessarily good evidence that it can do anything useful.
bergie 9/4/2025||
Exactly what I wanted to say. LinkedIn was slop before there was AI slop. So that's probably where LLM generated stuff fits the best. That, and maybe Medium.
rsynnott 9/4/2025||
Even on Medium, you'll sometimes see people who can write properly. LinkedIn is kind of fascinating in that, even before LLMs, everything highly rated on LinkedIn was in that grotesque, almost content-free style beloved by LLMs.
quantum2022 9/4/2025||
You're missing the forest for the trees. It speeds up people who don't know how to program 100%. We could see a flourishing of ideas and programs coming out of 'regular' people. The kind of people that approach programmers with the 'I have an idea' and get ignored. Maybe the programs will be basic, but they'll be a template for something better, which then a programmer might say 'I see the value in that idea' and help develop it.

It'll increase incremental developments manyfold. A non-programmer spending a few hours on AI to make their workflow better and easier and faster. This is what everyone here keeps missing. It's not the programmers that should be using AI; it's 'regular' people.

LEDThereBeLight 3 days ago||
Great point, one that I think gets missed when you write code for a living and are comfortable with the ecosystem. But I remember when I was getting started making projects - even figuring out what I needed to google to get unblocked could feel impossible. I think AI will give a lot of people who are "on the border" between building something with code and giving up on the idea the boost over that line.
aranelsurion 9/4/2025|||
I think that was the point they made.

If AI enables regular folks to make programs, even the worst-quality shovelware, there should've been an explosion in quantity. All the programs that people couldn't make before, they would have started making in the past two years.

Gormo 9/4/2025||
> It speeds up people who don't know how to program 100%.

I'm not sure how that challenges the point of the article, which is that metrics of the total volume of code being publicly released is not increasing. If LLMs are opening the door to software development for many people whose existing skills aren't yet sufficient to publish working code, then we'd expect to see a vast expansion in the code output by such people. If that's not happening, why not?

Kiro 9/3/2025||
> If so many developers are so extraordinarily productive using these tools, where is the flood of shovelware?

On my computer. Once I've built something I often realize the problems with the idea and abandon the project, so I'm never shipping it.

daxfohl 9/3/2025|
So is a flood of unshippable code now an indicator of increased productivity?
Kiro 9/4/2025||
That's what the author argues, yes. It would work fine as shovelware but I have no interest in shipping that.

In fact, being able to throw it out like this is a big time saver in itself. I've always built a lot of projects but when you've spent weeks or months on something you get invested in it, so you ship it even though you no longer believe in it. Now when it only takes a couple of days to build something you don't have the same emotional attachment to it.

codeulike 9/4/2025|
It's really interesting to bring graphs of 'new iOS releases per month' or 'total domain name registrations' into the argument - that's a good way of keeping the argument tied to the real world
caro_kann 9/4/2025|
Especially the Steam games graph. Most non-game developers have at least once wanted to release their own game, and this feels like a good time to do that.