Posted by speckx 10/27/2025

It's insulting to read AI-generated blog posts (blog.pabloecortez.com)
1300 points | 540 comments
jayers 10/27/2025|
I think it is important to make the distinction between a "blog post" and other kinds of published writing. It literally does not matter whether your blog post has perfectly correct grammar or a few misspellings (though you should do a one-pass revision for clarity of thought). Blog posts are best for articulating unfinished thoughts. To that end, you are cheating yourself, the writer, if you use AI to help you write a blog post. It is through the act of writing that you begin to grok the idea.

But you can bet that I'm going to use AI to correct my grammar and spelling for the important proposal I'm about to send. No sense in losing credibility over something that can be corrected algorithmically.

olooney 10/27/2025||
I don't see the objection to using LLMs to check for grammatical mistakes and spelling errors. That strikes me as a reactionary and dogmatic position, not a rational one.

Anyone who has done any serious writing knows that a good editor will always find a dozen or more errors in any essay of reasonable length, and very few people are willing to pay for professional proofreading services on blog posts. On the other side of the coin, readers will wince and stumble over such errors; they will not wonder at the artisanal authenticity of your post, but merely be annoyed. Wabi-sabi is an aesthetic best reserved for decor, not prose.

keiferski 10/27/2025||
Yes, I agree. There's nothing wrong with using an LLM or a spell-checker to improve your writing. But I do think it's important to have the LLM point out the errors rather than rewrite the text directly. This lets you discover the errors yourself while avoiding AI-speak.
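A minimal sketch of that "point out, don't rewrite" workflow, assuming the OpenAI Python client; the model name is just a placeholder, and the proofreading prompt is my own wording:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    draft = "Blog posts are best for articulateing unfinished thoughts."

    proofread_prompt = (
        "You are a proofreader. List each spelling or grammar error as "
        "'original -> suggested fix' with a one-line reason. "
        "Do NOT rewrite or rephrase the text."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[
            {"role": "system", "content": proofread_prompt},
            {"role": "user", "content": draft},
        ],
    )

    # Prints a list of suggested corrections; you apply them by hand,
    # so the final wording stays yours.
    print(response.choices[0].message.content)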
CuriouslyC 10/27/2025|||
The fact that you were downvoted into dark grey for this post on this forum makes me very sad. I hope it's just that this article is attracting a certain segment of the community.
olooney 10/27/2025|||
I'm pretty sure my mistake was assuming people had read the article and knew the author veered wildly halfway through into also advocating against using LLMs for proofreading, saying you should "just let your mistakes stand." Obviously no one reads the article, just the headline, so they assumed I was disagreeing with that (which I was not). Other comments that expressed the same sentiment as mine but also quoted that part did manage to get upvoted.

This is an emotionally charged subject for many, so they're operating in Hurrah/Boo mode[1]. After all, how can we defend the value of careful human thought if we don't rush blindly to the defense of every low-effort blog post with a headline that signals agreement with our side?

[1]: https://en.wikipedia.org/wiki/Emotivism

philipwhiuk 10/29/2025|||
No, it's because he introduced an obscure term out of nowhere, which is both poor communication and indicative of AI.
ryanmcbride 10/27/2025||
You thought we wouldn't notice that you used AI on this comment but you were wrong.
olooney 10/27/2025||
Here is a piece I wrote recently on that very subject. Why don't you read that to see if I'm a human writer?

https://www.oranlooney.com/post/em-dash/

philipwhiuk 10/29/2025||
[flagged]
ryanmcbride 10/30/2025||
That wasn't actually why I posted that; I was just guessing and thought it'd be funny if I was right.
akshatjiwan 10/27/2025||
I don't know. Content matters more to me. Many of the articles that I read have so little information density that I find it hard to justify spending time on them. I often use AI to summarise the text for me and then look up particular topics in detail if I like.

Skimming was pretty common before AI too. People used to read and share notes instead of entire texts. AI has just made it easier.

Reading long texts is not a problem for me if they're engaging. But often I find they just go on and on without getting to the point. News articles especially; they are the worst.

mirzap 10/27/2025||
This post could easily have been generated by AI; there's no way to tell for sure. I'm more insulted if the title or blog thumbnail is misleading, or if the post is full of obvious nonsense, etc.

If a post contains valuable information that I learn from it, I don't really care if AI wrote it or not. AI is just a tool, like any other tool humans invented.

I'm pretty sure people had the same reaction 50 years ago, when the first PCs started appearing: "It's insulting to see your calculations done by a personal electronic device."

Frotag 10/27/2025||
The way I view it is that the author is trying to explain their mental model, but there's only so much you can fit into prose. It's my responsibility to fill in the missing assumptions and understand why X implies Y. All the little things, like consistent word choice, tone, and even the mistakes, help with this. But mix in LLMs and now there's another layer, a slightly different mental model I have to isolate, digest, and merge with the author's.
RIMR 10/27/2025||
>No, don't use it to fix your grammar, or for translations

Okay, I can even understand drawing the line at grammar correction, in that not all "correct" grammar is desirable or personal enough to convey certain ideas.

But not for translation? AI translation, in my experience, has proven to be more reliable than other forms of machine translation, and personally learning a new language every time I need to read something non-native to me isn't reasonable.

jexe 10/27/2025|
Reading an AI-generated blog post (or Reddit post, etc.) just signals that the author doesn't actually care that much about the subject, which makes me care less too.