
Posted by speckx 2 days ago

It's insulting to read AI-generated blog posts (blog.pabloecortez.com)
1056 points | 477 comments
throwawa14223 2 days ago|
I should never spend more effort reading something than the author spent writing it. With AI-generated texts the author effort approaches zero.
elif 2 days ago||
I feel like this has to be AI generated satire as art
thire 2 days ago|
Yes, I was almost hoping for a "this was AI-generated" disclaimer at the end!
braza 2 days ago||
> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!

For essays, honestly, I do not feel so bad, because I can see that other than some spaces like HN the quality of the average online writer has dropped so much that I prefer to have some machine-assisted text that can deliver the content.

However, my problem is with AI-generated code.

In most cases, for trivial apps, I think AI-generated code will be OK to good; however, the issue I'm seeing as a code reviewer is that folks whose code design style you know are so heavily reliant on AI-generated code that you are sure they did not write, and do not understand, the code.

One example: working with some data scientists and researchers, most of them used to write things in Pandas, some trivial for loops, and some primitive imperative programming. Now, especially after Claude Code, most of the code is vectorized, with heavily compressed variable naming. Sometimes folks use Cython in data pipeline tasks, or even take functional programming to an extreme.
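To illustrate the shift described above, here is a minimal hypothetical before/after: the same computation written as a row-by-row loop (the imperative style the commenter says people used to write) and as a single vectorized pandas expression. The DataFrame and column names are invented for illustration.

```python
import pandas as pd

df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

# Imperative style: iterate row by row and accumulate results in a list.
totals = []
for _, row in df.iterrows():
    totals.append(row["price"] * row["qty"])
df["total_loop"] = totals

# Vectorized style: one expression applied to whole columns at once.
df["total_vec"] = df["price"] * df["qty"]

# Both produce the same values; the vectorized form is far faster at scale.
assert df["total_loop"].equals(df["total_vec"])
```

The vectorized form is both shorter and dramatically faster on large frames, which supports the commenter's point that the generated code is often objectively better, while being harder to debug for someone who only ever wrote the loop version.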

Good performance is great, and leveling up the quality of the codebase is a net positive; however, I wonder, in a scenario where things go south and/or Claude Code is not available, whether those folks will be able to fix it.

rphv 16 hours ago|
You don't need to understand code for it to be useful, any more than you need to know assembly to write Python.
rootedbox 2 days ago||
I fixed it.

It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer. When you rely on automation instead of your own creativity, you deny both of us the richness of genuine human expression.

Isn’t there pride in creating something that is authentically yours? In writing, even imperfectly, and knowing the result carries your voice? That pride is irreplaceable.

Please, do not use artificial systems merely to correct your grammar, translate your ideas, or “improve” what you believe you cannot. Make errors. Feel discomfort. Learn from those experiences. That is, in essence, the human condition. Human beings are inherently empathetic. We want to help one another. But when you interpose a sterile, mechanized intermediary between yourself and your readers, you block that natural empathy.

Here’s something to remember: most people genuinely want you to succeed. Fear often stops you from seeking help, convincing you that competence means solitude. It doesn’t. Intelligent people know when to ask, when to listen, and when to contribute. They build meaningful, reciprocal relationships. So, from one human to another—from one consciousness of love, fear, humor, and curiosity to another—I ask: if you must use AI, keep it to the quantitative, to the mundane. Let your thoughts meet the world unfiltered. Let them be challenged, shaped, and strengthened by experience.

After all, the truest ideas are not the ones perfectly written. They’re the ones that have been felt.

tasuki 2 days ago|
Heh, nice. I suppose that was AI-generated? Your beginning:

> It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer.

I like that beginning better than the original:

> It seems so rude and careless to make me, a person with thoughts, ideas, humor, contradictions and life experience to read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.

No one's making anyone read anything (I hope). And yes, it might be inconsiderate or perhaps even dismissive to present a human with something written by AI. The AI was able to phrase this much better than the human! Thank you for presenting me with that, I guess?

iamwil 2 days ago||
Lately, I've been writing more on my blog, and it's been helpful to change the way that I do it.

Now, I take a cue from school, and write the outline first. With an outline, I can use a prompt for the LLM to play the role of a development editor to help me critique the throughline. This is helpful because I tend to meander, if I'm thinking at the level of words and sentences, rather than at the level of an outline.

Once I've edited the outline for a compelling throughline, I can then type out the full essay in my own voice. I've found it much easier to separate the process into these two stages.

Before outline critiquing: https://interjectedfuture.com/destroyed-at-the-boundary/

After outline critiquing: https://interjectedfuture.com/the-best-way-to-learn-might-be...

I'm still tweaking the development editor. I find that it can be too much of a stickler about the form of the throughline.

whshdjsk 1 day ago|
And yet, Will, with all due respect, I can’t hear your voice in any of the 10 articles I skimmed. It’s the same rhetorical structure found in every other LLM blog.

I suppose if it makes you feel like it's better (even if it isn't), and you enjoy it, go ahead. But know this: we can tell.

iamwil 1 day ago||
The essays go back a couple years. How did I use LLMs to write in 2021 and 2022?

If you're talking about something more recent, there are only two essays I wrote with the outlining and throughline method I described above. And for all of the essays, I wrote every word you read on the page with my fingers tapping on the keyboard.

Hence, I'm not actually sure you can tell. I believe you think I'm just one-shotting these essays by rambling to an LLM. I can tell you for sure that the results from doing that are pretty bad.

All of them have the same rhetorical structure... probably because that's what I write like without an LLM, and it's what I prompted the LLM, playing the role of a development editor critiquing outlines, to do! So if you're saying that I'm a bad writer (fair), that's one thing! But I'm definitely writing these myself. shrug

foxfired 2 days ago||
Earlier this year, I used AI to help improve some of the writing on my blog. It just has a better way of phrasing ideas than I do. But when I came back to read those same blog posts a couple of months later, after I'd encountered a lot more blog posts that I didn't know were AI-generated at the time, I saw the pattern. It sounds like the exact same author, plus or minus some degree of obligatory humor, writing all over the web with the same voice.

I've found a better approach to using AI for writing. First, if I don't bother writing it, why should you bother reading it? LLMs can be great sounding boards. Treat them as teachers, not assistants. Your teacher is not going to write your essay for you, but he will teach you how to write, and spot the parts that need clarification. I will share my process in the coming days; hopefully it will get some traction.

jdnordy 2 days ago||
Anyone else suspicious this might be satire ironically written by an LLM?
carimura 2 days ago||
I feel like sometimes I write like an LLM, complete with [bad] self-deprecating humor, overly-explained points because I like first principles, random soliloquies, etc. Makes me worry that I'll try and change my style.

That said, when I do try to get LLMs to write something, I can't stand it, and feel like the OP here.

masly 2 days ago||
In a related problem:

I recently interviewed a person for a role as senior platform architect. The person was already working for a semi-reputable company. In the first interview, the conversation was okay, but my gut just told me something was strange about this person.

We gave the candidate a case to solve, with a few diagrams, and asked them to prepare a couple of slides to discuss the architecture.

The person came back with 12 diagrams, all AI generated, littered with obvious AI “spelling”/generation mistakes.

And when we questioned the person about why they thought this obvious AI-generated content would gain our trust and confidence, they even became aggressive.

Needless to say it didn’t end well.

The core problem is really how much recruiting time is now being wasted on people who "cheat" softly or cheat outright.

We have had to design questions to counter AI cheating, and strategies to avoid wasting time.

invisibleink 18 hours ago|
YES, and also let's not use the printing press, photography, word processors, spell checkers, the internet, and search engines, because they lack human touch, make us lazy, prevent deep thinking, blah blah...
philipwhiuk 18 hours ago|
Just because all those other inventions didn't wreck humanity doesn't mean this one won't.
invisibleink 11 hours ago||
The point is, every new technology attracts its share of romantic skeptics, and every time their predictions fail, they retreat to the same tired line:

"Just because all those other inventions didn't wreck humanity doesn't mean this one won't"

But that’s not an argument, it’s an evasion.

Given that past inventions didn't destroy us despite similar concerns, the burden is on you to show why this one is fundamentally different and uniquely catastrophic.
