Posted by speckx 3 days ago
That said, when I do try to get LLMs to write something, I can't stand it, and feel like the OP here.
I recently interviewed a person for a senior platform architect role. The person was already working for a semi-reputable company. In the first interview, the conversation was okay, but my gut told me something was strange about this person.
We gave the candidate a case to solve with a few diagrams, and asked them to prepare a couple of slides to discuss the architecture.
The person came back with 12 diagrams, all AI generated, littered with obvious AI “spelling”/generation mistakes.
And when we questioned the person about why they thought this obviously AI-generated content would earn our trust and confidence, they even became aggressive.
Needless to say it didn’t end well.
The core problem is really how much recruiting time is now wasted on people who cut corners or outright cheat.
We have had to design questions to counter AI cheating, and strategies to avoid wasting time.
But you bet that I'm going to use AI to correct my grammar and spelling for the important proposal I'm about to send. No sense in losing credibility over something that can be corrected algorithmically.
Anyone who has done any serious writing knows that a good editor will always find a dozen or more errors in any essay of reasonable length, and very few people are willing to pay for professional proofreading services on blog posts. On the other side of the coin, readers will wince and stumble over such errors; they will not wonder at the artisanal authenticity of your post, but merely be annoyed. Wabi-sabi is an aesthetic best reserved for decor, not prose.
This is an emotionally charged subject for many, so they're operating in Hurrah/Boo mode[1]. After all, how can we defend the value of careful human thought if we don't rush blindly to the defense of every low-effort blog post with a headline that signals agreement with our side?
Skimming was pretty common before AI too. People used to read and share summaries instead of entire texts. AI has just made it easier.
Reading long texts is not a problem for me if they're engaging. But often I find they just go on and on without getting to the point. News articles especially; they are the worst.
If a post contains valuable information that I can learn from, I don't really care whether AI wrote it or not. AI is just a tool, like any other tool humans have invented.
I'm pretty sure people had the same reaction 50 years ago, when the first PCs started appearing: "It's insulting to see your calculations made by personal electronic devices."