Posted by theletterf 1/15/2026
The technology is improving rapidly, and even now, with proper context, AI can write technical documentation extremely well. It can include clear examples (something only a small fraction of technical writers manage to do well), and it can anticipate and explain potential errors.
If you want to see how well you understand your program or system, try to write about it and teach someone how it works. Nature will show you how sloppy your thinking is.
Someone has to turn off their brain completely and just follow the instructions as written, then log every place where the documentation wasn't clear enough or assumed knowledge it never provided.
It’s obviously not AI generated, but I’m speaking more to the tonality of the latest GPT. It’s now extremely hard to tell the difference.
I believe you, but that’s just a gut feeling. I guess the best way to put this is that anyone could write what you wrote with AI and claim it wasn’t written by AI.
The decision to stop hiring technical writers usually feels reasonable at the moment it’s made. It does not feel reckless. It feels modern. Words have become cheap, and documentation looks like words. Faced with new tools that can produce fluent text on demand, it is easy to conclude that documentation is finally solved, or at least solved well enough.
That conclusion rests on a misunderstanding so basic it’s hard to see once you’ve stepped over it.
Documentation is not writing. Writing is what remains after something more difficult has already happened. Documentation is the act of deciding what a system actually is, where it breaks, and what a user is allowed to rely on. It is not about describing software at its best, but about constraining the damage it can do at its worst.
This is why generated documentation feels impressive and unsatisfying at the same time. It speaks with confidence, but never with caution. It fills gaps that should remain visible. It smooths over uncertainty instead of marking it. The result reads well and fails quietly.
Technical writers exist to make that failure loud early rather than silent later. Their job is not to explain what engineers already know, but to notice what engineers have stopped seeing. They sit at the fault line between intention and behavior, between what the system was designed to do and what it actually does once released into the world. They ask the kinds of questions that slow teams down and prevent larger failures later.
When that role disappears, nothing dramatic happens. The documentation still exists. In fact, it often looks better than before. But it slowly detaches from reality. Examples become promises. Workarounds become features. Caveats evaporate. Not because anyone chose to remove them, but because no one was responsible for keeping them.
What replaces responsibility is process. Prompts are refined. Review checklists are added. Output is skimmed rather than owned. And because the text sounds finished, it stops being interrogated. Fluency becomes a substitute for truth.
Over time, this produces something more dangerous than bad documentation: believable documentation. The kind that invites trust without earning it. The kind that teaches users how the system ought to work, not how it actually does. By the time the mismatch surfaces, it no longer looks like a documentation problem. It looks like a user problem. Or a support problem. Or a legal problem.
There is a deeper irony here. The organizations that rely most heavily on AI are also the ones that depend most on high-quality documentation. Retrieval pipelines, curated knowledge bases, semantic structure, instruction hierarchies: these systems do not replace technical writing. They consume it. When writers are removed, the context degrades, and the AI built on top of it begins to hallucinate with confidence. This failure is often blamed on the model, but it is really a failure of stewardship.
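To make the dependency concrete, here is a minimal sketch of a retrieval-style pipeline in Python. The three-file corpus, the file names, and the toy word-overlap ranking are illustrative assumptions standing in for a real documentation set and a real embedding model; the point is only that the context the model answers from is nothing but the documentation it is handed.

    # Toy retrieval-augmented pipeline: the "context" is nothing but the docs.
    # Corpus contents, file names, and scoring are illustrative, not a real API.
    DOC_CORPUS = {
        "auth.md": "Tokens expire after 60 minutes. Refresh before expiry or the API returns 401.",
        "limits.md": "Batch uploads are capped at 500 rows. Larger payloads are rejected, not truncated.",
        "errors.md": "A 429 response means you are rate limited. Retry with exponential backoff.",
    }

    def retrieve(question: str, corpus: dict[str, str], k: int = 2) -> list[str]:
        """Rank documents by naive word overlap with the question."""
        q_words = set(question.lower().split())
        ranked = sorted(
            corpus.values(),
            key=lambda text: len(q_words & set(text.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def build_prompt(question: str, corpus: dict[str, str]) -> str:
        """Whatever the docs say, right or wrong, becomes the model's ground truth."""
        context = "\n".join(retrieve(question, corpus))
        return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    print(build_prompt("Why did my 2000-row upload fail?", DOC_CORPUS))

Remove the people who keep that corpus honest, and every layer built on top of it inherits the drift.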
Responsibility, meanwhile, does not dissolve. When documentation causes harm, the model will not answer for it. The process will not stand trial. Someone will be asked why no one caught it. At that point, “the AI wrote it” will sound less like innovation and more like abdication.
Documentation has always been where software becomes accountable. Interfaces can imply. Marketing can persuade. Documentation must commit. It must say what happens when things go wrong, not just when they go right. That commitment requires judgment, and judgment requires the ability to care about consequences.
This is why the future that works is not one where technical writers are replaced, but one where they are amplified. AI removes the mechanical cost of drafting. It does not remove the need for someone to decide what should be said, what must be warned, and what should remain uncertain. When writers are given tools instead of ultimatums, they move faster not because they write more, but because they spend their time where it matters: deciding what users are allowed to trust.
Technical writers are not a luxury. They are the last line of defense between a system and the stories it tells about itself. Without them, products do not fall silent. They speak freely, confidently, and incorrectly.
Language is now abundant.
Truth is not.
That difference still matters.

We looked at documentation and thought, "Ah yes. Words." And then we looked at AI and thought, "Oh wow. It makes words." And then we did what humans always do when two things look vaguely similar: we declared victory and went to lunch.
That’s it. That’s the whole mistake.
Documentation looks like writing the same way a police report looks like justice. The writing is the part you can see. The job is everything that happens before someone dares to put a sentence down and say, “Yes. That. That’s what this thing really does.”
AI can write sentences all day. That’s not the problem. The problem is that documentation is where software stops flirting and starts making promises. And promises are where the lawsuits live.
Here’s the thing nobody wants to admit: technical writers are not paid to write. They are paid to be annoying in very specific, very expensive ways. They ask questions nobody likes. They slow things down. They keep pointing at edge cases like a toddler pointing at a dead bug going, “This too? This too?”
Yes. Especially this too.
When you replaced them with AI, nothing broke. Which is why you think this worked. The docs still shipped. They even looked better. Cleaner. Confident. Calm. That soothing corporate voice that says, “Everything is fine. You are holding it wrong.”
And that’s when the rot set in.
Because AI does not experience dread. It does not wake up at 3 a.m. thinking, “If this sentence is wrong, someone is going to lose a week of their life.” It does not feel that tightening in the chest that tells a human writer, This paragraph is lying by omission.
So it smooths. It resolves. It fills in gaps that should stay jagged. It confidently explains things no one actually understands yet. It does what bad managers do: it mistakes silence for agreement.
Over time, your documentation stops describing reality and starts describing a slightly nicer alternate universe where the product behaves itself and nobody does anything weird.
This is how you get users “misusing” your product in ways your own docs taught them.
Then comes my favorite part.
You notice the AI is hallucinating. So you add tooling. Retrieval. Semantic layers. Prompt rules. Context hygiene. You hire someone with “AI” in their title to fix the hallucinations.
What you are rebuilding, piece by piece, is technical writing. Only now it’s worse, because it’s invisible, fragmented, and no one knows who’s responsible for it.
Context curation is documentation. Instruction hierarchies are documentation. If your AI is dumb, it’s because you fired the people who knew what the truth was supposed to look like.
And don’t worry, accountability did not get automated away while you weren’t looking. When the docs cause real damage, the model will not be present. You cannot subpoena a neural net. You cannot fire a prompt. You will be standing there explaining that “the system generated it,” and everyone will hear exactly what that means.
It means nobody was in charge.
Documentation is where software admits the truth. Not the aspirational truth. The annoying truth. The truth about what breaks, what’s undefined, what’s still half-baked and kind of scary. Marketing can lie. Interfaces can hint. Documentation has to commit.
Commitment requires judgment. Judgment requires caring. Caring is still not in beta.
This is not an anti-AI argument. AI is great. It writes faster than any human alive. It just doesn’t know when to hesitate, when to warn, or when to say, “We don’t actually know yet.” Those are the moments that keep users from getting hurt.
The future that works is painfully obvious. Writers with AI are dangerous in the good way. AI without writers is dangerous in the other way. One produces clarity. The other produces confidence without consent.
Technical writers are not a luxury. They are the people who stop your product from gaslighting its users.
AI can generate language forever.
Truth still needs a human with a little fear in their heart and a pen they’re willing to hesitate with.
Hire them back.
AI can’t generate insights far beyond what it’s trained on.
Their writing will be a different moat.
What if the next version of the AI model gets trained on their work?
Google returns the best result based on both its own ranking calculations and the click history of which results were most successful for a given search.
LLMs don't really have that same feedback loop, partly because their strength is writing one sentence many different ways. Being able to write a sentence many different ways doesn't mean any of them is the best way. Whether they can write deep sentences while keeping a coherent, connected arc across sentences and stories is another question.
LLMs also generally return the "best" answer as the most "common" one, without giving weight to the outliers that might be the most true, or the best.
The definition of what is "good" and "correct" can also vary quite a bit, especially with writing.
AI can be configured to look for patterns humans might not see, but we also know humans can see things and scenarios that LLMs aren't trained on and will miss.
As we can tell with AI copy, it all starts to sound the same even when it's new. Real writing ages differently; it can be much more of a fingerprint. This is an area I'm hoping to learn more about from the talented writers in my life - it seems the better the writer, the more they can see the holes in LLM output, and the better they can use LLMs as power users, thanks to a superior ability with words, whether they realize it or not.