Posted by speckx 10/27/2025
For essays, honestly, I do not feel so bad: outside of a few spaces like HN, the quality of the average online writer has dropped so much that I would rather have machine-assisted text that actually delivers the content.
However, my problem is with AI-generated code.
In most cases, for trivial apps, I think AI-generated code will be OK to good. The issue I'm seeing as a code reviewer, though, is that folks whose code design style you know are now so heavily reliant on AI-generated code that you're sure they did not write, and do not understand, the code.
One example: working with some data scientists and researchers, most of them used to write things in Pandas with trivial for loops and fairly primitive imperative code. Now, especially since Claude Code, most of it is vectorized, with heavily compressed variable names. Sometimes folks use Cython in data pipeline tasks, or push functional programming to an extreme.
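To make the shift concrete, here is a minimal sketch (with made-up column names) of the two styles side by side: the row-by-row imperative loop these folks used to write, versus the vectorized column operation the AI now tends to produce.

```python
import pandas as pd

# A small example frame standing in for a real pipeline's data.
df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

# Imperative style: an explicit loop over rows, one multiplication at a time.
totals = []
for _, row in df.iterrows():
    totals.append(row["price"] * row["qty"])
df["total_loop"] = totals

# Vectorized style: a single column-wise operation over the whole frame.
df["total_vec"] = df["price"] * df["qty"]
```

Both produce the same column; the vectorized form is shorter and much faster on large frames, but when it breaks, debugging it requires understanding Pandas' column semantics rather than stepping through a plain loop.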
Good performance is great, and leveling up the quality of the codebase is a net positive; however, I wonder, in scenarios where things go south and/or Claude Code is not available, whether those folks will be able to fix it.
It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer. When you rely on automation instead of your own creativity, you deny both of us the richness of genuine human expression.
Isn’t there pride in creating something that is authentically yours? In writing, even imperfectly, and knowing the result carries your voice? That pride is irreplaceable.
Please, do not use artificial systems merely to correct your grammar, translate your ideas, or “improve” what you believe you cannot. Make errors. Feel discomfort. Learn from those experiences. That is, in essence, the human condition. Human beings are inherently empathetic. We want to help one another. But when you interpose a sterile, mechanized intermediary between yourself and your readers, you block that natural empathy.
Here’s something to remember: most people genuinely want you to succeed. Fear often stops you from seeking help, convincing you that competence means solitude. It doesn’t. Intelligent people know when to ask, when to listen, and when to contribute. They build meaningful, reciprocal relationships. So, from one human to another—from one consciousness of love, fear, humor, and curiosity to another—I ask: if you must use AI, keep it to the quantitative, to the mundane. Let your thoughts meet the world unfiltered. Let them be challenged, shaped, and strengthened by experience.
After all, the truest ideas are not the ones perfectly written. They’re the ones that have been felt.
> It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer.
I like that beginning better than the original:
> It seems so rude and careless to make me, a person with thoughts, ideas, humor, contradictions and life experience to read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.
No one's making anyone read anything (I hope). And yes, it might be inconsiderate or perhaps even dismissive to present a human with something written by AI. The AI was able to phrase this much better than the human! Thank you for presenting me with that, I guess?
I suppose I am writing to you because I can no longer speak to anyone. As people turn to technology for their every word, the space between them widens, and I am no exception. Everyone speaks, yet no one listens. The noise fills the room, and still it feels empty.
Parents grow indifferent, and their children learn it before they can name it. A sickness spreads, quiet and unseen, softening every heart it touches. I once believed I was different. I told myself I still remembered love, that I still felt warmth somewhere inside. But perhaps I only remember the idea of it. Perhaps feeling itself has gone.
I used to judge the new writers for chasing meaning in words. I thought they wrote out of vanity. Now I see they are only trying to feel something, anything at all. I watch them, and sometimes I envy them, though I pretend not to. They are lost, yes, but they still search. I no longer do.
The world is cold, and I have grown used to it. I write to remember, but the words answer nothing. They fall silent, as if ashamed. Maybe you understand. Maybe it is the same with you.
Maybe writing coldly is simply compassion, a way of not letting others feel your pain.
Now, I take a cue from school and write the outline first. With an outline, I can use a prompt for the LLM to play the role of a development editor and help me critique the throughline. This is helpful because I tend to meander if I'm thinking at the level of words and sentences rather than at the level of an outline.
Once I've edited the outline for a compelling throughline, I can then type out the full essay in my own voice. I've found it much easier to separate the process into these two stages.
Before outline critiquing: https://interjectedfuture.com/destroyed-at-the-boundary/
After outline critiquing: https://interjectedfuture.com/the-best-way-to-learn-might-be...
I'm still tweaking the development editor. I find that it can be too much of a stickler on the form of the throughline.
I suppose if it makes you feel like it’s better (even if it isn’t), and you enjoy it, go ahead. But know this: we can tell.
If you're talking about something more recent, there are only two essays I wrote with the outlining and throughline method I described above. And in all of my essays, I wrote every word you read on the page with my fingers tapping on the keyboard.
Hence, I'm not actually sure you can tell. I believe you think I'm just one-shotting these essays by rambling to an LLM. I can tell you for sure the results from doing that are pretty bad.
All of them have the same rhetorical structure... probably because it's what I write like without an LLM, and it's what I prompted the LLM, playing the role of a development editor critiquing outlines, to do! So if you're saying that I'm a bad writer (fair), that's one thing! But I'm definitely writing these myself. shrug
---
Honestly, it feels rude to hand me something churned out by a lexical bingo machine when you could’ve written it yourself. I'm a person with thoughts, humor, contradictions, and experience, not a content bin.
Don't you like the pride of making something that's yours? You should.
Don't use AI to patch grammar or dodge effort. Make the mistake. Feel awkward. Learn. That's being human.
People are kinder than you think. By letting a bot speak for you, you cut off the chance for connection.
Here's the secret: most people want to help you. You just don't ask. You think smart people never need help. Wrong. The smartest ones know when to ask and when to give.
So, human to human, save the AI for the boring stuff. Lead with your own thoughts. The best ideas are the ones you've actually felt.
I would have written "lexical fruit machine", for its left to right sequential ejaculation of tokens, and its amusingly antiquated homophobic criminological implication.
That said, when I do try to get LLMs to write something, I can't stand it, and feel like the OP here.
It’s really funny how many business deals would go better if people ran the requests through an AI to explain what exactly is being requested. Most people are not able to answer, and if they used an AI they could respond properly without wasting everyone’s time. But at least not using an AI reveals their competency (or rather, incompetence) level.
It’s also sad that I need to tell people to put my message into an AI so they don’t ask me useless questions. An AI can fill most of the gaps people don’t get. You might say my requests are not clear, but then how can an AI figure out what I want to say? I also run my requests through an AI when I can, to create ELI5 explanations of the requests “for dummies.”
I recently interviewed a person for a role as senior platform architect. The person was already working for a semi-reputable company. In the first interview the conversation was okay, but my gut told me something was strange about this person.
We gave the candidate a case to solve, with a few diagrams, and asked them to prepare a couple of slides to discuss the architecture.
The person came back with 12 diagrams, all AI generated, littered with obvious AI “spelling”/generation mistakes.
And when we questioned the person about why they thought this obvious AI-generated content would earn our trust and confidence, they even became aggressive.
Needless to say it didn’t end well.
The core problem is really how much time is now being wasted in recruiting on people who quietly “cheat” or outright cheat.
We have had to design questions to counter AI cheating, and strategies to avoid wasting time.