Posted by speckx 10/27/2025

It's insulting to read AI-generated blog posts (blog.pabloecortez.com)
1300 points | 540 comments
bluSCALE4 10/27/2025|
This is how I feel about some LinkedIn folks that are going all in w/ AI.
saltysalt 10/27/2025||
I'm pretty certain that the only thing reading my blog these days is AI.
holdenc137 10/27/2025||
I assume this is a double-bluff and the blog post WAS written by an AI o_O ?
tdiff 10/27/2025||
> Here is a secret: most people want to help you succeed.

Most people don't care.

invisibleink 10/29/2025||
YES, and also let's not use the printing press, photography, word processors, spell checkers, the internet, and search engines because they lack human touch, make us lazy, prevent deep thinking, blah blah...
philipwhiuk 10/29/2025|
Just because all those other inventions didn't wreck humanity doesn't mean this one won't.
invisibleink 10/29/2025||
The point is, every new technology attracts its share of romantic skeptics, and every time they fail, they retreat to the same tired line:

"Just because all those other inventions didn't wreck humanity doesn't mean this one won't"

But that’s not an argument, it’s an evasion.

Given that past inventions didn't destroy us despite similar concerns, the burden is on you to show why this one is fundamentally different and uniquely catastrophic.

iMax00 10/27/2025||
I read anything as long as there is new and useful information
__alexander 10/27/2025||
I feel the same way about AI-generated README.md files on GitHub.
K0balt 10/28/2025||
It wouldn’t be so bad if it wasn’t unbearable to read.
latexr 10/27/2025||
This assumes the person using LLMs to put out a blog post gives a single shit about their readers, pride, or “being human”. They don’t. They care about the view, so you load the ad that makes them a fraction of a cent, or the share, so they get popular and can eventually extract money or reputation from it.

I agree with you that AI slop blog posts are a bad thing, but there are about zero people who use LLMs to spit out blog posts which will change their mind after reading your arguments. You’re not speaking their language, they don’t care about anything you do. They are selfish. The point is themselves, not the reader.

> Everyone wants to help each other.

No, they very much do not. There are a lot of scammers and shitty entitled people out there, and LLMs make it easier than ever to become one of them or increase the reach of those who already are.

babblingfish 10/27/2025||
If someone puts an LLM-generated post on their personal blog, then their goal isn't to improve their writing or learn about a new topic. Rather, they're hoping to "build a following" because some conman on Twitter told them it was easy. What's especially hilarious is how difficult it is to make money with a blog. There's little incentive to chase monetization in this medium, and yet people do it anyway.
JohnFen 10/27/2025|||
> They are selfish. The point is themselves, not the reader.

True!

But when I encounter a web site/article/video that has obviously been touched by genAI, I add that source to a blacklist and will never see anything from it again. If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.

latexr 10/27/2025||
> I add that source to a blacklist

Please do tell more. Do you make it like a rule in your adblocker or something else?

> If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.

I’m not convinced. The effort on their part is so low that even the lost audience (which will be far from everyone) is still probably worth it.

JohnFen 10/27/2025|||
I was using "blacklist" in a much more general sense, but here's how it actually plays out. Most of my general purpose website reading is done through an RSS aggregator. If one of those feeds starts using genAI, then I just drop it out of the aggregator. If it's a website that I found through web search, then I use Kagi's search refinement settings to ensure that site won't come up again in my search results. If it's a YouTube channel I subscribe to, I unsubscribe. If it's one that YouTube recommended to me, I tell YouTube to no longer recommend anything from that channel.

Otherwise, I just remember that particular source as being untrustworthy.

robin_reala 10/28/2025|||
I use Kagi for this: you can block domains from appearing in your search results. https://kagi.com/settings/user_ranked
YurgenJurgensen 10/27/2025||
Don’t most ad platforms and search engines track bounce rate? If too many users see that generic opening paragraph, bullet list and scattering of emoji, and immediately hit back or close, they lose revenue.
latexr 10/27/2025||
Assuming most people can detect LLM writing quickly. I don’t think that’s true. In this very submission we see people referencing cases where colleagues couldn’t tell that something was written by an LLM even after reading everything.
voidhorse 10/27/2025|
If you struggle with communication, using AI is fine. What matters is caring about the result. You cannot just throw it over the fence.

AI content in itself isn't insulting, but as TFA hits upon, pushing sloppy work you didn't bother to read or check at all yourself is incredibly insulting and just communicates to others that you don't think their time is valuable. This holds for non-AI generated work as well, but the bar is higher by default since you at least had to generate that content yourself and thus at least engage with it on a basic level. AI content is also needlessly verbose, employs trite and stupid analogies constantly, and in general has the nauseating, bland, soulless corporate professional communication style that anyone with even a mote of decent literary taste detests.
