
Posted by theletterf 6 hours ago

A letter to those who fired tech writers because of AI (passo.uno)
211 points | 130 comments
nicbou 2 hours ago|
I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This decides what I write about, and how to write about it. This sort of curation can only come from a thinking, feeling human being.

I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.

Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone bothered to write down, but I actually go out into the real world and ask questions.

I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.

People who think that AI can do these things have an almost insulting understanding of the jobs they are trying to replace.

Nextgrid 2 hours ago||
The problem is that so many things have been monopolized or oligopolized by equally mediocre actors that quality ultimately no longer matters, because it's not like people have any options.

You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer, however, has an immediate and quantifiable effect on the budget.

Apply the same to software (have you seen how bad tech is lately?) or basically any vertical with a nontrivial barrier to entry, where someone can't just say "this sucks and I'm gonna build a better one in a weekend".

nicbou 2 hours ago|||
You are right. We are seeing a transition from the user as a customer to the user as a resource. It's almost like a cartel of shitty treatment.

I don't work for the public transit company; I introduce immigrants to Berlin's public transit. To answer the broader question: good documentation is one of the many little things that affect how you feel about a company. The BVG clearly cares about that, because their marketing department is famously competent. Good documentation also means that fewer people will queue at their service centre and waste an employee's time. Documentation is the cheaper form of customer service.

Besides, how people feel about the public transit company does matter, because their funding is partly a political question. No one will come to defend a much-hated, customer-hostile service.

theptip 17 minutes ago|||
Counterpoint - I think it’s going to become much easier for hobbyists and motivated small companies to make bigger projects. I expect to see more OSS, more competition, and eventually better quality-per-price (probably even better absolute quality at the “$0 / sell your data” tier).

Sure, the megacorps may start rotting from the inside out, but we already see a retrenchment to smaller private communities, and if more of the benefits of the big platforms trickle down, why wouldn’t that continue?

Nicbou, do you see AI as increasing your personal output? If it lets enthusiastic individuals get more leverage on good causes then I still have hope.

samiv 11 minutes ago||
When it became cheaper to publish text did the quality go up?

When it became cheaper to make games did the quality go up?

When it became cheaper to mass produce X (sneakers, tshirts, anything really) did the quality go up?

It's a world made of an abundance of trash. The volume of low-quality production saturates the market and drowns out whatever high-quality things still remain. In such a world you're just better off reallocating your resources from production quality towards the shouting match of marketing, and trying to win by finding ways to be more visible than the others (SEO hacking and similar shenanigans).

When you drive down the cost of doing something to zero, you also effectively destroy the economy based around that thing. Look at online publishing: basically nobody can make a living by focusing on publishing news or articles; alternative revenue streams (ads) are needed. Same for games.

tempodox 22 minutes ago||||
> Documentation is the cheaper form of customer service.

Thank you so much for saying this. Trying to convince anyone of the importance of documentation feels like an uphill battle. Glad to see that I'm not completely crazy.

rkomorn 45 minutes ago||||
> We are seeing a transition from the user as a customer to the user as a resource.

I'd argue that this started 30 years ago when automated phone trees started replacing the first line of workers and making users figure out how to navigate where they needed to in order to get the service they needed.

I can't remember if chat bots or "knowledge bases" came first, but that was the next step in the "figure it out yourself" attitude corporations adopted (under the guise of empowering users to "self help").

Then we started letting corporations use the "we're just too big to actually have humans deal with things" excuse (eg online moderation, or paid services with basically no support).

And all these companies look at each other to see who can lower the bar next and jump on the bandwagon.

It's one of my "favorite" rants, I guess.

The way I see this next era going is that it's basically going to become exclusively the users' responsibility to figure out how to talk to the bots to solve any issue they have.

apercu 2 hours ago|||
“It's almost like a cartel of shitty treatment.”

Thank you. I love it when someone poetically captures a feeling I’ve been having so succinctly.

randmeerkat 1 hour ago|||
> Thank you. I love it when someone poetically captures a feeling I’ve been having so succinctly.

It’s almost like they’re a professional writer…

bell-cot 1 hour ago||||
Enshittificartelization?
alameenpd 51 minutes ago|||
It’s almost like they are a professional writer
publicdebates 44 minutes ago||
Word for word, 53 minutes later? Why?

I have exactly 1 guess but am waiting to say it.

fainpul 14 minutes ago||
His other comment in this thread is also a clone of someone else's comment.
FeteCommuniste 1 hour ago||||
> You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer, however, has an immediate and quantifiable effect on the budget.

Exactly. If the AI-made documentation is only 50% of the quality but can be produced for 10% of the price, well, we all know what the "smart" business move is.

marcosdumay 22 minutes ago||
> If the AI-made documentation is only 50% of the quality

AI-made documentation has 0% of the quality.

As the OP pointed out, AI can only document things that somebody already wrote down. That's no documentation at all.

psychoslave 15 minutes ago|||
>it's not like people have any options.

That’s one way to frame it. Another is that sometimes people are stuck in a situation where all the options that come to mind have repulsive consequences.

As always, some consequences are deemed more immediate and others seem more remote. And often the incentives are quite at odds between short-term and long-term expectations.

>this sucks and I'm gonna build a better one in a weekend

Hey, this is me looking at the world this morning. Bear with me, the bright new harmonious world should be there on Monday. ;)

GuB-42 35 minutes ago|||
And that's exactly the same for coding!

Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots: they do exactly as you say, with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer; that is the programmer's actual job.

Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.

boilerupnc 34 minutes ago|||
Well said. I try to capture and express this same sentiment to others through the following expression:

“Technology needs soul”

I suppose this can be generalized to “__ needs soul”. E.g. technical writing needs soul, user interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.

ChrisMarshallNY 2 hours ago|||
Thanks so much for this!

Nicely written (which, I guess, is sort of the point).

TimByte 34 minutes ago|||
The hard part is the slow, human work of noticing confusion, earning trust, asking the right follow-up questions, and realizing that what users say they need and what they actually struggle with are often different things
gausswho 1 hour ago|||
I like the cut o' your jib. The local public transit guide you write, is that for work or for your own knowledge base? I'm curious how you're organizing this while keeping the human touch.

I'm exploring ways to organize my Obsidian vault such that it can be shared with friends, but not with the whole Internet (and its bots). I'm extracting value out of the curation I've done, and I'd like to share it with others.

DeepSeaTortoise 49 minutes ago|||
Why shouldn't AI be able to sufficiently model all of this in the not-so-far future? Why shouldn't it, or at least the system that feeds it, have sufficient access to new data and sensors to collect information on its own?

Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might soon align, too.

IMO there is an entirely different problem, one that's just about never going to go away, but that could easily be solved right now. And whatever AI company solves it first instantly wipes out all competition:

Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.

You know, just like the human it'd replace.

rsynnott 45 minutes ago||
> Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.

That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.

ajuc 16 minutes ago|||
Replacement will be 80% worse, that's fine. As long as it's 90% cheaper.

See Duolingo :)

sevensor 34 minutes ago|||
See also: librarians, archivists, historians, film critics, doctors, lawyers, docents. The déformation professionnelle of our industry is to see the world in terms of information storage, processing, and retrieval. For these fields and many others, this is like confusing a nailgun for a roofer. It misses the essence of the work.
rasmus-kirk 37 minutes ago|||
Spot on! I think LLMs can help greatly in quickly putting that knowledge into writing, including reviewing written materials for hidden prerequisite assumptions that readers might not be aware of. They can also help newer hires learn to write more clearly. LLMs are clearly useful for increasing productivity, but management who think they are even close to ready to replace large sections of practically any workforce are delusional.
rtgfhyuj 1 hour ago|||
Sounds like a bunch of agents can do a good amount of this. A high horse isn’t necessary.
lillecarl 51 minutes ago||
A good amount != this. AI being able to do the easy parts of something doesn't replace the hard ones.
chiefalchemist 2 hours ago|||
I don't write for a living, but I do consider communication / communicating a hobby of sorts. My observations - that perhaps you can confirm or refute - are:

- Most people don't communicate as thoroughly and completely - written and verbal - as they think they do. Very often there is what I call "assumptive communication": the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - it's done all the time - but not always. The resolution doesn't change the fact that there was ambiguity at the root.

Next time you're communicating, listen carefully. Make note of how often the other person sends something that could be interpreted differently, and how often you resolve it by defaulting to "what they likely meant was..."

- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI is hardly perfect, but most people don't have the skills and/or willingness to communicate at the level AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.

p.s. One of my fave comms' heuristics is from Frank Luntz*:

"It's not what you say, it's what they hear." (<< edit was changing to "say" from "said".)

One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that, and be accountable, especially when assumptive communication is a viable shortcut.

* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.

LtWorf 24 minutes ago||
LLMs are good at writing long pages of meaningless words. If you have a number of pages to turn in for your writing assignment and you've only written 3 sentences, they will help you produce a low-quality result that passes the requirements.
chiefalchemist 14 minutes ago||
Low quality is relative. LLMs' low quality is most people's above average. The fact that the copy - either way - is likely to go through some sort of copy-by-committee process makes the case for LLMs even stronger (i.e., why waste your time). Not always, but quite often.
PlatoIsADisease 2 hours ago|||
>insulting

As a writer, you know this makes it seem emotional rather than factual?

Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which end up being inspiration rather than copy-paste).

nicbou 2 hours ago||
My whole comment was about the need for a thinking, feeling human being. Is it surprising that I am emotional about it?
PlatoIsADisease 55 minutes ago|||
Emotion takes away from the idea. Instead of thinking: "Oh this is a great point. There is immense economic value here."

It becomes: This person is fearful of their job and used feeling to justify their belief.

speed_spread 1 hour ago|||
Funnily, in your whole comment, the only word I objected to was the one right before "insulting": "almost". Thinking that LLMs can replace humans outright expresses hubris and disdain in a way that I find particularly aggravating.
block_dagger 1 hour ago||
…says every charlatan who wanted to keep their position. I’m not saying you’re a charlatan but you are likely overestimating your own contributions at work. Your comment about feeding on data - AI can read faster than you can by orders of magnitude. You cannot compete.
pachorizons 1 hour ago|||
"you are likely overestimating your own contributions at work"

Based on what? Your own zero-evidence speculation? How is this anything other than arrogant punting? For sure we know that the point was something other than how fast the author reads compared to an AI, so what are we left with here?

g947o 26 minutes ago||||
I think the article is clear enough in defeating every one of your arguments.
Larrikin 37 minutes ago||||
This kind of low-effort, little-thought comment is what AI is competing with at scale, not OP.
Croak 1 hour ago|||
AI doesn't read, it guesses.
elzbardico 30 minutes ago||
A somewhat related anecdote:

Two years ago, I asked ChatGPT to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.

You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context may be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.

Just an example: it may be able to generate text that recognizes that a PhD is a step above a Master’s degree, but sometimes it won't be able to translate this fact (as opposed to the description of this fact) into the subtle differences in attention and emphasis we use in our written text to reflect those real-world hierarchies of value. It can repeat the fact to you, can even kind of generalize it, but it won't take a decision based on it.

It can, even more now, produce a very close simulation of this, because the relative importance of things has been semantically captured, and it is very good at capturing those subtle semantic relationships. But, in linguistic terms, it absolutely sucks at pragmatics.

An example: let's say in one of your experiences you improved a model that detected malignancy in a certain kind of tumor image, improving its false-negative rate to something like 0.001%, and in the same experience you casually mention that you once tied the CEO's toddler's tennis shoes. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant shoe-tying activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.

So in the end, you end up with some bizarre stuff that looks like:

"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"

squigz 19 minutes ago|
You had an LLM rewrite your resume, and then sent it to employers... without proofreading it? That was certainly a choice.
drob518 2 hours ago||
The best tech writers I have worked with don’t merely document the product. They act as stand-ins for actual users and will flag all sorts of usability problems. They are invaluable. The best also know how to start with almost no engineering docs and to extract what they need from 1-1 sit down interviews with engineering SMEs. I don’t see AI doing either of those things well.
TimByte 33 minutes ago||
In my experience, great tech writers quietly function as a kind of usability radar. They're often the first people to notice that a workflow is confusing
throwaw12 1 hour ago|||
> They act as stand-ins for actual users and will flag all sorts of usability problems

True, but it raises another question: what were your Product Managers doing in the first place if the tech writer is the one finding usability problems?

dxdm 17 minutes ago||
Realistically, PMs' incentives are often aligned elsewhere.

But even if a PM cares about UX, they are often not in a good position to spot problems with designs and flows they are closely involved in and intimately familiar with.

Having someone else with a special perspective can be very useful, even if their job provides other beneficial functions, too. Using this "resource" is the job of the PM.

falcor84 2 hours ago||
> I don’t see AI doing either of those things well.

I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.

Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or might never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feeling that this wording is a bit too magical to be useful.

drob518 2 hours ago|||
I don’t know that it can be well-defined. It might be asking something akin to “What makes something human?” For usability, one needs a sense of what defines “user pain” and what defines “reasonableness.” No product is perfect. They all have usability problems at some level. The best usability experts, and tech writers who do this well, have an intuition for user priorities and an ability to identify and differentiate large usability problems from small ones.
falcor84 1 hour ago||
Thinking about this some more now, I can imagine a future in which we'll see more and more software for which AI agents are the main users.

For tech documentation, I suppose that AI agents would mainly benefit from Skills files managed as part of the tool's repo, and I absolutely do imagine future AI agents being set up (e.g. as part of their AGENTS.md) to propose PRs to these Skills as they use the tools. And I'm wondering whether AI agents might end up with different usability concerns and pain-points from those that we have.
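
A purely illustrative sketch of the kind of AGENTS.md instruction I have in mind (the paths, file names, and wording are all made up):

    ## Improving the skills files
    After completing a task that uses this tool, re-read the
    relevant file under skills/ (e.g. skills/deploy.md). If you
    hit a gap, an outdated step, or an error the skill did not
    cover, open a PR against that file describing what confused
    you and what you had to work out on your own.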

TimByte 31 minutes ago||||
A good tech writer knows why something matters in context: who is using this under time pressure, what they're afraid of breaking, what happens if they get it wrong
CuriouslyC 41 minutes ago||||
Current AI writing is slightly incoherent. It's subtle, but the high-level flow/direction of the writing meanders, so things will sometimes seem a bit non-sequitur or contradictory.
richardw 45 minutes ago||||
It has no sense of truth or value. You need to check what it wrote and you need to tell it what’s important to a human. It’ll give you the average, but misses the insight.
SecretDreams 1 hour ago|||
> but can't quite put my finger on what exactly it's missing.

We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser.

tlogan 4 minutes ago||
However, the writing is on the wall: AI will completely replace technical writers.

The technology is improving rapidly, and even now, with proper context, AI can write technical documentation extremely well. It can include clear examples (and only a very small number of technical writers know how to do that properly), and it can also anticipate and explain potential errors.

DeborahWrites 2 hours ago||
Yeah. AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement. The companies with the best docs will absolutely still have tech writers, just with some AI assistance.

Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no. If you'll excuse the slight self-promotion, it saves me repeating myself: https://deborahwrites.com/blog/nobody-can-write/)

In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.

I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.

Nextgrid 2 hours ago||
> it won't be a GOOD replacement

See my other comment - I'm afraid quality only matters if there is healthy competition which isn't the case for many verticals: https://news.ycombinator.com/item?id=46631038

FeteCommuniste 1 hour ago||
> AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement.

[insert Pawn Stars meme]: "GOOD docs? Sorry, best I can do is 'slightly better than useless.'"

topaz0 52 minutes ago||
with the occasional "much worse than useless" thrown in as a bonus
sehugg 3 hours ago||
The best tech writers I've known have been more like anthropologists, bridging communication between product management, engineers, and users. With this perspective they often give feedback that makes the product better.
TimByte 28 minutes ago|
AI can help with synthesis once those insights exist, but it doesn't naturally occupy that liminal space between groups, or sense the cultural and organizational gaps
jillesvangurp 13 minutes ago||
I'm currently in the middle of restructuring our website. 95% of the work is being done by codex. That includes content writing, design work, implementation work, etc. It's still a lot of work for me, because I am critical about things like wording/phrasing and about not hallucinating things we don't actually do. But it's editorial work, not writing or programming work, and it's doing a pretty great job. Having a static website with a site generator means I can make lots of changes quickly via agentic coding.

My advice to tech writers would be to get really good at directing and orchestrating AI tools to do the heavy lifting of producing documentation. If you are stuck using content management systems or word processors, consider adopting a more code-centric workflow; the AI tools can work with that a lot better. You can't afford to keep doing manually what an AI does faster and better. Your value is in making sure the right documentation gets written and produced correctly, and in correcting what needs correcting or perfecting. It's not in doing everything manually; you need to cherry-pick where your skills still add value.
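
To make that concrete: a docs-as-code setup can be as small as a folder of Markdown files plus a generator config. A minimal sketch, assuming a generator like MkDocs (the file names are just an example):

    docs/
      index.md
      getting-started.md
      troubleshooting.md
    mkdocs.yml    # site config: nav, theme, plugins

    mkdocs serve  # live preview while you edit
    mkdocs build  # emit the static site for deployment

With that in place, the AI tools can edit the Markdown directly and you review diffs like any other change.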

Another bit of insight is that a lot of technical documentation now has AIs as its main consumer. A friend of mine who runs a small SaaS has been complaining that nobody actually reads his documentation (which is pretty decent); instead they rely on LLMs to read it for them. The more documentation you have, the fewer people will read all of it. Or any of it.

But you still need documentation. It's easier than ever to produce it. The quality standards for that documentation are high and increasing. There are very few excuses for not having great documentation.

TimByte 43 minutes ago||
The failure mode isn't just hallucinations, it's the absence of judgment: what not to document, what to warn about, what's still unstable, what users will actually misunderstand
ainiriand 2 hours ago||
And here I am in 2026, and one of my goals for this year is to learn to write better, communicate more fluently, and convey my ideas in a more attractive way.

I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.

In my humble opinion that is what we will be losing from people: the upskilling will be lost for sure, but the human growth that comes with it is the real loss.

jraph 2 hours ago|
> but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.

Yep, and your writing will feel less boring to read.

The uniform style of LLMs gets old fast and I wouldn't be surprised if it were a fundamental flaw due to how they work.

And it's not even clear that the speed gains from using LLMs make up for the skill loss in the long term.

duskdozer 2 hours ago|||
Seriously. It wasn't always this way, but now, as soon as I notice the LLM-isms in a chunk of text, I can feel my brain shut off.
jraph 2 hours ago|||
You're absolutely right! It's not just the brain shutoffs—it's the feeling of death inside.
duskdozer 1 hour ago||
Sure! Here are some ways you can deal with the feeling of death inside, otherwise known as "brain shutoffs":

<list of emoji-labeled bold headers of numbered lists in format <<bolded category> - description>>

Is there anything else I can help you with?

ainiriand 54 minutes ago||
Damn, you are terrible!
elcapitan 2 hours ago|||
Scanning for LLM garbage is now one of the first things I do when reading a larger piece of text that has been published post ChatGPT.
ainiriand 51 minutes ago|||
It is such a challenge! As English is not my first language, I have to do some mental gymnastics to really convey my thoughts. 'On Writing Well' is on my reading list; it is supposed to help.
osigurdson 7 minutes ago|
>> liability doesn’t vanish just because AI wrote it

I think this is going to be a defining theme this year.
