Posted by theletterf 6 hours ago
I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.
Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone could be bothered to write down, but I actually go out in the real world and ask questions.
I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.
People who think that AI can do this and other things have an almost insulting understanding of the jobs they are trying to replace.
You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer, however, has an immediate and quantifiable effect on the budget.
Apply the same to software (have you seen how bad tech is lately?) or basically any kind of vertical with a nontrivial barrier to entry, where someone can't just say "this sucks and I'm gonna build a better one in a weekend".
I don't work for the public transit company; I introduce immigrants to Berlin's public transit. To answer the broader question, good documentation is one of the many little things that affect how you feel about a company. The BVG clearly cares about that, because their marketing department is famously competent. Good documentation also means that fewer people will queue at their service centre and waste an employee's time. Documentation is the cheaper form of customer service.
Besides, how people feel about the public transit company does matter, because their funding is partly a political question. No one will come to defend a much-hated, customer-hostile service.
Sure, the megacorps may start rotting from the inside out, but we already see a retrenchment to smaller private communities, and if more of the benefits of the big platforms trickle down, why wouldn’t that continue?
Nicbou, do you see AI as increasing your personal output? If it lets enthusiastic individuals get more leverage on good causes then I still have hope.
When it became cheaper to make games did the quality go up?
When it became cheaper to mass produce X (sneakers, tshirts, anything really) did the quality go up?
It's a world that is made of an abundance of trash. The volume of low-quality production saturates the market and drowns out whatever high-quality things still remain. In such a world you're just better off reallocating your resources from production quality towards the shouting match of marketing, and trying to win by finding ways to be more visible than the others (SEO hacking and similar shenanigans).
When you drive down the cost of doing something to zero, you also effectively destroy the economy based around that thing. Like online print: basically nobody can make a living by focusing on publishing news or articles; alternative revenue streams (ads) are needed. Same for games.
Thank you so much for saying this. Trying to convince anyone of the importance of documentation feels like an uphill battle. Glad to see that I'm not completely crazy.
I'd argue that this started 30 years ago when automated phone trees started replacing the first line of workers and making users figure out how to navigate where they needed to in order to get the service they needed.
I can't remember if chat bots or "knowledge bases" came first, but that was the next step in the "figure it out yourself" attitude corporations adopted (under the guise of empowering users to "self help").
Then we started letting corporations use the "we're just too big to actually have humans deal with things" excuse (eg online moderation, or paid services with basically no support).
And all these companies look at each other to see who can lower the bar next and jump on the bandwagon.
It's one of my "favorite" rants, I guess.
The way I see this next era going is that it's basically going to become exclusively the users' responsibility to figure out how to talk to the bots to solve any issue they have.
Thank you. I love it when someone poetically captures a feeling I’ve been having so succinctly.
It’s almost like they’re a professional writer…
I have exactly 1 guess but am waiting to say it.
Exactly. If the AI-made documentation is only 50% of the quality but can be produced for 10% of the price, well, we all know what the "smart" business move is.
AI-made documentation has 0% of the quality.
As the OP pointed out, AI can only document things that somebody already wrote down. That's no documentation at all.
That’s one way to frame it. Another is that sometimes people are stuck in a situation where every option that comes to mind has repulsive consequences.
As always, some consequences are deemed more immediate and others seem remoter, and the incentives can often be quite at odds between short-term and long-term expectations.
>this sucks and I'm gonna build a better one in a weekend
Hey, this is me looking at the world this morning. Bear with me, the bright new harmonious world should be there on Monday. ;)
Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer, which is his actual job.
Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.
“Technology needs soul”
I suppose this can be generalized to “__ needs soul”. Eg. Technical writing needs soul, User interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.
Nicely written (which, I guess, is sort of the point).
I'm exploring ways to organize my Obsidian vault such that it can be shared with friends, but not the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share it with others.
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.
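For illustration, one possible shape of this - a sketch, not my actual setup: export the vault to a folder of static files, serve it behind HTTP basic auth so only friends with the shared password get in, and answer robots.txt with a blanket disallow for well-behaved crawlers. The folder name and credentials below are placeholders.

    import base64
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    VAULT_DIR = "vault-export"           # hypothetical folder of exported notes
    USER, PASSWORD = "friend", "s3cret"  # placeholder credentials
    TOKEN = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()

    class AuthHandler(SimpleHTTPRequestHandler):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, directory=VAULT_DIR, **kwargs)

        def do_GET(self):
            # Ask well-behaved crawlers to stay out.
            if self.path == "/robots.txt":
                body = b"User-agent: *\nDisallow: /\n"
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
            # Everyone else needs the shared password.
            if self.headers.get("Authorization") != f"Basic {TOKEN}":
                self.send_response(401)
                self.send_header("WWW-Authenticate", 'Basic realm="vault"')
                self.end_headers()
                return
            super().do_GET()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), AuthHandler).serve_forever()

Malicious scrapers ignore robots.txt, of course, which is why the auth layer does the real work here.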
IMO there is an entirely different problem that's just about never going to go away, but could easily be solved right now. And whatever AI company does so first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed-upon quality.
You know, just like the human it'd replace.
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.
See Duolingo :)
- Most people don't communicate as thoroughly and completely - in writing and verbally - as they think they do. Very often there is what I call "assumptive communication". That is, the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - as it's done all the time - but not always. The resolution doesn't change the fact that there was ambiguity at the root.
Next time you're communicating, listen carefully. Make note of how often the other person says something that could be interpreted differently, and how often you resolve it by defaulting to "what they likely meant was..."
- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills and/or willingness to communicate at a level AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.
p.s. One of my fave comms heuristics is from Frank Luntz*:
"It's not what you say, it's what they hear." (<< edit was changing to "say" from "said".)
One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that and be accountable, especially when assumptive communication is a viable shortcut.
* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but I read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.
As a writer, you know this makes it seem emotional rather than factual?
Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which end up being inspiration rather than copy-paste).
It becomes: this person is fearful for their job and used feelings to justify their belief.
Based on what? Your own zero-evidence speculation? How is this anything other than arrogant punting? For sure we know that the point was something other than how fast the author reads compared to an AI, so what are we left with here?
Two years ago, I asked ChatGPT to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.
You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context may be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.
Just an example: it may be able to generate text that recognizes that a PhD is a step above a Master's degree, but sometimes it won't be able to translate this fact (as opposed to the description of this fact) into the subtle differences in attention and emphasis we use in our written text to reflect those real-world hierarchies of value. It can repeat the fact to you, can even kind of generalize it, but it won't make a decision based on it.
It can, even more so now, get a very close simulation of this, because the relative importance of things will have been semantically captured, and it is very good at capturing those subtle semantic relationships. But, in linguistic terms, it absolutely sucks at pragmatics.
An example: let's say in one of your experiences you improved a model that detected malignancy in a certain kind of tumor image, improving its false-negative rate to something like 0.001%, and then in the same experience you casually mention that you once tied the CEO's toddler's tennis shoes. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant lace-tying activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.
So in the end, you end up with some bizarre stuff that looks like:
"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"
True, but it raises another question: what were your product managers doing in the first place if the tech writer is the one finding the usability problems?
But even if a PM cares about UX, they are often not in a good position to spot problems with designs and flows they are closely involved in and intimately familiar with.
Having someone else with a special perspective can be very useful, even if their job provides other beneficial functions, too. Using this "resource" is the job of the PM.
I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as it goes), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.
Would you be willing to take a stab at the competencies a future AI agent would require to be excellent at this (or might possibly never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feeling that this wording is a bit too magical to be useful.
For tech documentation, I suppose that AI agents would mainly benefit from Skills files managed as part of the tool's repo, and I absolutely do imagine future AI agents being set up (e.g. as part of their AGENTS.md) to propose PRs to these Skills as they use the tools. And I'm wondering whether AI agents might end up with different usability concerns and pain-points from those that we have.
We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser.
The technology is improving rapidly, and even now, with proper context, AI can write technical documentation extremely well. It can include clear examples (and only a very small number of technical writers know how to do that properly), and it can also anticipate and explain potential errors.
Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no; if you'll excuse the slight self-promotion, it saves me repeating myself: https://deborahwrites.com/blog/nobody-can-write/)
In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.
I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.
See my other comment - I'm afraid quality only matters if there is healthy competition which isn't the case for many verticals: https://news.ycombinator.com/item?id=46631038
[insert Pawn Stars meme]: "GOOD docs? Sorry, best I can do is 'slightly better than useless.'"
My advice to tech writers would be to get really good at directing and orchestrating AI tools to do the heavy lifting of producing documentation. If you are stuck using content management systems or word processors, consider adopting a more code-centric workflow; the AI tools can work with that a lot better. And you can't afford to be doing things manually that an AI does faster and better. Your value is in making sure the right documentation gets written and produced correctly, and in correcting things that need correcting or perfecting. It's not in doing everything manually; you need to cherry-pick where your skills still add value (a rough sketch of that kind of workflow follows below).
Another bit of insight is that a lot of technical documentation now has AIs as its main consumer. A friend of mine who runs a small SaaS has been complaining that nobody actually reads his documentation (which is pretty decent); instead they rely on LLMs to read it for them. The more documentation you have, the less people will read all of it. Or any of it.
But you still need documentation. It's easier than ever to produce it. The quality standards for that documentation are high and increasing. There are very few excuses for not having great documentation.
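As a rough sketch of what "letting the AI do the heavy lifting" can look like in a code-centric workflow - purely illustrative; the model name, prompt, and file paths are placeholders, and it assumes the OpenAI Python SDK with an API key in the environment:

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_reference_page(source_file: str, out_file: str) -> None:
        """Generate a first-pass doc draft for a human tech writer to edit."""
        source = Path(source_file).read_text()
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Draft concise reference documentation in Markdown. "
                            "Mark anything you are unsure about with TODO."},
                {"role": "user", "content": source},
            ],
        )
        draft = response.choices[0].message.content
        # The draft is only a starting point; a writer reviews, corrects,
        # and perfects it before it ships.
        Path(out_file).write_text(draft)

    if __name__ == "__main__":
        draft_reference_page("src/payments_api.py", "docs/drafts/payments_api.md")

The point is where the writer's time goes: into reviewing and correcting the draft, not into typing it from scratch. And because everything is plain files in a repo, the draft arrives as an ordinary pull request.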
I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.
In my humble opinion we will be losing that in people: the building of skills will be lost for sure, but the human growth that comes with it is the real loss.
Yep, and reading your writing will feel less boring.
The uniform style of LLMs gets old fast and I wouldn't be surprised if it were a fundamental flaw due to how they work.
And it's not even certain that the speed gains from using LLMs make up for the skill loss in the long term.
<list of emoji-labeled bold headers of numbered lists in format <<bolded category> - description>>
Is there anything else I can help you with?
I think this is going to be a defining theme this year.