Posted by theletterf 1/15/2026
But when we use the "AI" acronym, our brains still register the "intelligence" attribute and tend to perceive LLMs as more powerful than they actually are.
Current models are like trained parrots that can draw colored blocks and insert them into the appropriate slots. Sure, much faster and with incomparably more data. But they're still parrots.
This story and the discussions remind me of reports and articles about the first computers. People were so impressed by the speed of their mathematical calculations that they called them "electronic brains" and considered, even feared, "robot intelligence."
Now we're so impressed by the speed of pattern matching that we call them "artificial intelligence," and we're back where we started.
If the business can no longer justify 5 engineers, then they might only have 1.
I've always said that we won't need fewer software developers with AI. It's just that each company will require fewer developers but there will be more companies.
I.e.:
2022: 100 companies employ 10,000 engineers
2026: 1000 companies employ 10,000 engineers
The net result is the same for employment. But because AI makes things that much more efficient, many businesses that weren't financially viable when they needed 100 engineers might become viable with 10 engineers + AI.
Do you not see the logic?
It’s not controversial economics that lower prices drive more demand.
Au contraire. It's not very often that the cost of labor actually drops to anywhere close to zero, but we have some examples. The elevator operator is a prime example. When it was costly to hire an operator we could only hire a few of them. Nowadays anyone who is willing to operate an elevator just has to show up and they automatically get the job.
If 1,000 engineers are worth having around, why not an infinite number of them, just like those working as elevator operators? Again, there is no cost in this hypothetical scenario.
> Cost is not the only driver to demand.
Technically true, but we're not talking about garbage here. Humans are always valuable to some degree, just not necessarily valuable enough when there is a cost to balance. But, again, we're talking about zero cost. I expect you are getting caught up in thinking about scenarios where labor still has a cost, perhaps confusing zero cost with zero payroll?
Five engineers could be turned into maybe two, but probably not less.
It's the 'bus factor' at play. If you still want human approvals on pull requests, then if one of those engineers goes on vacation or leaves the company, you're stuck with one engineer for a while.
If both leave then you're screwed.
If you're a small startup, then sure there are no rules and it's the wild west. One dev can run the world.
Peak productivity has always been somewhere between 1-3 people, though if any one of those people can't or won't continue working for one reason or another, it's generally game over for the project. So you hire more.
This is why small software startups time and time again manage to run circles around organizations with much larger budgets. A 10-person game studio like Team Cherry can release smash hit after smash hit, while Ubisoft, with 170,000% the personnel count, visibly flounders. Imagine doing that in hardware, like grabbing some buddies and starting a business that successfully competes with TSMC out of your garage. That's clearly not possible. But in software, it actually is.
Is the tech writers' backlog also seemingly infinite, like every tech backlog I've ever seen?
I am not even quite sure I know how to manage a team of more than two programmers right now. Opus 4.5, in the hands of someone who knows what they are doing, can develop software almost as fast as I can write specs and review code. And it's just plain better at writing code than 60% of my graduating class was back in the day. I have banned at least one person from ever writing a commit message or pull request again, because Claude will explain it better.
Now, most people don't know how to squeeze that much productivity out of it, most corporate procurement would take 9 months to buy a bucket if it was raining money outside, and it's possible to turn your code into unmaintainable slop at warp speed. And Claude is better at writing code than it is at almost anything else, so the rest of y'all are safe for a while.
But if you think that tech writers, or translators, or software developers are the only people who are going to get hit by waves of downsizing, then you're not paying attention.
Even if the underlying AI tech stalls out hard and permanently in 2026, there's a wave of change coming, and we are not ready. Nothing in our society, economy or politics is ready to deal with what's coming. And that scares me a bit these days.
Only because it has access to a vast amount of sample code to draw on and recombine parts from. Have you ever considered emerging technologies, like new languages or frameworks that may be much better suited to your area but are new, so there is no codebase for the LLM to draw from?
I'm starting to think about a risk of technological stagnation in many areas.
Try it. The pattern matching these things do is unlike anything seen before.
I'm writing a compiler for a language I designed, and LLMs have no trouble writing examples and tests. This is a language with syntax and semantics that does not exist in any training set because I made it up. And here it is, a machine is reading and writing code in this language with little difficulty.
Caveat emptor: it is far from perfect. But so are humans, which is where the training set originated.
> I'm starting to think about a risk of technological stagnation in many areas.
That just does not follow for me. We're in an era where advancements in technology continue to be roughly quadratic [1]. The implication you're giving is that the advancements are a step function that will soon hit its final step (or already has).
This suggests that you are unfamiliar or unappreciative of how anything progresses, in any domain. Creativity is a function of taking what existed before and making it your own. "Standing on the shoulders of giants", "pulling oneself up by the bootstraps", and all that. None of that is changing just because some parts of it can now be automated.
Stagnation is the very last thing I would bet on. In part because it means a "full reset" and loss of everything, like most apocalyptic story lines. And in part because I choose to remain cautiously optimistic.
I suspect a lot of folks are asking ChatGPT to summarize it…
I can’t imagine just letting an LLM write an app, server, or documentation package, wholesale and unsupervised, but have found them to be extremely helpful in editing and writing portions of a whole.
The one thing that could be a light in the darkness is that publishers have already fired all their editors (nothing to do with AI), and the writing out there shows it. This means there's the possibility that AI could bring back editing.
i wrote a 5 page essay in November. the AI editor had sixty-something recommendations, and i accepted exactly one of them. it was a suggestion to hyphenate the adjectival phrase "25-year-old". i doubt that it had any measurable impact on the effectiveness of the essay.
thing is, i know all the elements of style. i know proper grammar and accepted orthographic conventions. i have read and followed many different style guides. i could best any English teacher at that game. when i violate the principles (and i do it often), i do so deliberately and intentionally. i spent a lot of time going through suggestions that would only genericize my writing. it was a huge waste of my time.
i asked a friend to read it and got some very excellent suggestions: remove a digressive paragraph, rephrase a few things for persuasive effect, and clarify a sentence. i took all of these suggestions, and the essay was markedly improved. i'm skeptical that an LLM will ever have such a grasp of the emotional and persuasive strength of a text to make recommendations like that.
That makes a lot of sense, but right now, the editing seems to be completely absent, and, I suspect, most writers aren’t at your level (I am sure that I’m not).
It may be better than nothing.
I was a terrible writer, but we had to write good docs and make it easy for our customers to integrate with our products. So I prepared the context for our tech writers and they created nice documentation pages.
The cycle was (it reasonably took a week, depending on tech writer workload):
1. prepare context
2. create a ticket for the tech writers, wait until they respond
3. discuss messaging over the call
4. a couple of days later I get the first draft
5. iterate on draft, then finally publish it
Today it's different:
1. I prepare all the context and the style guide, then feed them into an LLM (rough sketch below).
1.1. The context is extracted directly from the code by coding agents.
2. I proofread it and in 97% of cases accept it, because it follows the style guide and mostly transforms my context correctly into customer-consumable content.
3. Done. Less than 20 minutes.
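To make step 1 concrete, the "feed them into an LLM" part is roughly a single API call. This is a minimal sketch, assuming the OpenAI Python client; the file names, model, and prompt wording are placeholders rather than my actual setup:

    # Sketch: feed extracted context + style guide to an LLM and
    # write out a documentation draft. File names, model, and prompts
    # are placeholders, not a real production setup.
    from pathlib import Path
    from openai import OpenAI

    context = Path("context.md").read_text()          # notes extracted from the code by a coding agent
    style_guide = Path("style_guide.md").read_text()  # in-house writing rules

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a technical writer. Follow this style guide:\n" + style_guide},
            {"role": "user",
             "content": "Turn these engineering notes into a customer-facing integration guide:\n" + context},
        ],
    )

    Path("draft.md").write_text(response.choices[0].message.content)

The style guide goes into the system prompt so every draft comes out in the same voice; the proofreading in step 2 stays manual.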
Tech writers were doing an amazing job of course, but I can get 90-95% of the quality in 1% of the time spent on that work.

People boast about the gains with LLMs all the damn time and I'm sceptical of it all unless I see their inputs.
Technical writing is part of the job of software engineering. Just like “tester” or “DBA”, it was always going to go the way of the dodo.
If you’re a technical writer, now’s the time to reinvent yourself.
You're going to get some text out of a typical engineer, but the writing quality, flow, and fit for the given purpose are not going to come close to someone who does it every day.
Where I work we have professional technical writers, and the quality versus your typical SW engineer is night and day. Maybe you got lucky with the rare SW engineer who can write technical documentation.
Two years ago, I asked ChatGPT to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.
You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context may be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.
Just an example: it may be able to generate text that recognizes that a PhD is a step above a Master's degree, but sometimes it won't be able to translate this fact (rather than the description of this fact) into the subtle differences in attention and emphasis we apply in our writing to reflect those real-world hierarchies of value. It can repeat the fact to you, can even kind of generalize it, but it won't make a decision based on it.
It can, even more so now, produce a very close simulation of this, because the relative importance of things has been semantically captured, and it is very good at capturing those subtle semantic relationships. But, in linguistic terms, it absolutely sucks at pragmatics.
An example: let's say in one of your experiences you improved a model that detected malignancy in a certain kind of tumor image, improving its false-negative rate to something like 0.001%, and then, under the same experience, you casually mention that you tied the CEO's toddler's tennis shoes once. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant shoe-lacing activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.
So in the end, you end up with some bizarre stuff that looks like:
"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"
After all, if he didn't feel foolish for it, he wouldn't've held it in his memory, and thus wouldn't've shared it with us.
Who among us hasn't written an angry email, re-(re-)read it, smugly hit send, slept on it, then regretted the sending?
Always had to contract external people to get stuff done really well. One was a bored CS university professor, another was a CTO in a struggling tiny startup who needed cash.
Kudos to all the technical writers who made my job as a software engineer easier.
Obviously we still need people to oil the machine, but... a person who deeply understands the product, can communicate shortcomings in process or user flows, can quickly and effectively organize their thoughts and communicate them, can navigate up and down abstraction levels and dive into details when necessary - these are the skills that working with LLMs requires.