Posted by smartmic 6/26/2025
I don't genuinely expect the author of a blogpost who titles their writing "AI is Dehumanization Technology" to be particularly receptive to a counterargument, but hear me out.
I can think of few things as intellectually miserable as the #help channels of the many open source projects on Discord, for example. If I were to hazard a guess: had these projects all integrated LLMs into their chatbots for just a few bucks and let them take on the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care, or to nurture social connections. This extends beyond these noncommercial contexts of course, to stuff like customer service for example.
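For what it's worth, the integration itself is trivial; the "few bucks" part is not an exaggeration. A rough sketch of what I have in mind (assuming discord.py and the OpenAI Python SDK; the model name, channel name, and system prompt are placeholders of my own, not anything any particular project actually uses):

    import asyncio
    import os

    import discord
    from openai import OpenAI

    oai = OpenAI()  # reads OPENAI_API_KEY from the environment
    SYSTEM_PROMPT = ("You are the first-line helper for <project>. "
                     "Answer concisely and say 'I don't know' when unsure.")

    intents = discord.Intents.default()
    intents.message_content = True
    client = discord.Client(intents=intents)

    def ask_llm(question: str) -> str:
        # One-shot completion; no memory, no tools, just first-line triage.
        resp = oai.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    @client.event
    async def on_message(message: discord.Message):
        # Only answer humans in the #help channel.
        if message.author.bot or getattr(message.channel, "name", None) != "help":
            return
        # Keep the blocking API call off the event loop.
        reply = await asyncio.to_thread(ask_llm, message.content)
        await message.reply(reply[:2000])  # Discord caps messages at 2000 chars

    client.run(os.environ["DISCORD_BOT_TOKEN"])

Whether handing first-line support to something like this actually frees up anyone's capacity for empathy is the part worth arguing about; the plumbing isn't.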
It is sad to me that the skills required to navigate everyday life are being delegated to technology. Pretty soon it won't matter what you think or feel about your neighbors because you will only ever know their tech-mediated facade.
Isn’t this basically what technology does? I suppose there is also technology to do things that weren’t possible at all before, but the application is often automation of something in someone’s everyday life that is considered burdensome.
Also, efficiency.
I think everyone in tech consulting can tell you that inserting another party (outsourcing) in a previously two-party transaction rarely produces better outcomes.
Human-agent-agent-human communication doesn't fill me with hope, beyond basic well-defined use cases.
Edit: one point I forgot to make is that it has already become absurd how different someone's online persona or confidence level is when they are AFK; it's as if they've been reduced to an infantile state.
It's an interesting thing I've noticed as well. At least in some cases it's due to constraints. Written communication invariably affords the opportunity to step back and think before sending. Many social contexts do not, making them almost entirely different skill sets.
This would be a good counter if this were all that this technology is being used for.
They can of course still argue that it's majority-bad for whatever list of reasons, but that's not what bugs me. What bugs me is the absolute tunnel vision of "for principled reasons I must find this thing completely and utterly bad, no matter what". Because this is what the title, and the tone, and just about everything else in this article comes across as to me, and I find it equal parts terrifying and disagreeable.
AI companies also only sell the public on the upside of these technologies. Behind closed doors they are investing heavily in this with the hope of reducing or eliminating their labor costs, with no regard for any damage to society.
Do you hold the position that a thing is bad because it is possible to do harm, or that it is predominantly causing harm?
Most criticisms cite examples demonstrating the existence of harm because proving existence requires a single example. Calculating the sum of an effect is much harder.
Even if the current impact of a field is predominantly harmful, it does not follow that the problem is with what is being attempted. Consider healthcare: a few hundred years ago, much of healthcare did more harm than good, and charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?
So I think your argument is kind of misleading.
I am advocating adopting methods of improvement rather than abandoning the pursuit of beneficial results.
I think science was just a part of the solution to healthcare; much of the advance was also in what was considered allowable or ethical. There remain a great many harmful medical practices in use today in places where regulation is weak.
Science has done little to stop those harms. The advances that led to the requirement for a scientific backing were social. That those practices persist in some places is not a scientific issue but a social one.
That ultimately enabled "doctors" to be quite useful. But the fact that the "profession" existed earlier is not what allowed it to bloom.
Mere moments later...
> Even if the current impact of a field is predominantly harmful
So let's just skip the first part then, you conceded it's predominantly harmful. On this we agree.
> it does not follow that the problem is with what is being attempted.
Well, it's not a logical 1-to-1, no. But I would say if the current impact of a field is predominantly harmful, then revisiting what is being attempted isn't the worst idea.
> Consider healthcare: a few hundred years ago, much of healthcare did more harm than good, and charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?
If OpenAI and company were still pure research projects, this would hold some amount of water, even if I would still disagree with it. However, that omits the context that OpenAI is actively (and under threat of financial ruin) turning itself into a for-profit business, and is actively selling its products, as are its competitors, to firms in the market with the explicit notion of reducing headcount for the same productivity. This doesn't need a citation; look at any AI product marketing and you'll see that a consistent theme is the removal of human labor and/or interaction.
>So let's just skip the first part then, you conceded it's predominantly harmful. On this we agree.
I'm afraid if you interpret that statement as a concession of a fact, I don't think we can have a productive conversation.
Sure, if you want to make sure you don't get any more contributors, you can try to replace that with a chatbot that will always reply immediately, but might just be wrong like 40% of the time, is not actually working on the project, and will certainly not help in building social interactions between the project and its users.
Most interactions start with users being vague. This can already result in some helpers getting triggered, and starting to be vaguely snarky, but usually this is resolved by using prepared bot commands... which these users sometimes just won't read.
Then the misunderstandings start. Or the misplaced expectations. Or the lies. Or maybe the given helper has been having a bad day, but due to their long time presence in the project, they won't be moderated out properly. And so on. It's just not a good experience.
Ever since I left, I got screencaps of various kinds of conversations. In some cases, the user was being objectively insufferable - I don't think it's fair to expect a human to put up with that. Other times, the helper was being unnecessarily mean - they did not appreciate my feedback on that. Neither happens with LLMs. People don't grow resentful of the never ending horde of what feels like increasingly clueless users, and innocent folk don't get randomly chewed out for not living up to the optimality expectations of those who tend to 1000s of cases similar to theirs every week.
While direct human support is invaluable in many cases, I find it really hard to believe that our industry has completely forgotten the value of public support forums. Here are some pure advantages over Discord/Slack/<insert private chat platform of your liking>:
- Much, much better search functionality out of the box, because you can leverage existing search engines.
- From the above it follows that high value contributors do not need to spend their valuable time repeatedly answering the same basic questions over and over.
- Your high value contributors don't have to be employees of the company, as many enthusiastic power users often participate and contribute in such places.
- Conversations are _much_ easier to follow without having to resort to hidden threads and forums posts on Discord that no one will ever read or search.
- Over time you build a living library of supporting documentation, instead of useful information being strewn across many tiny conversations over months.
- No user expectation to be helped immediately. A forum sets the expectation that this is an async method of communication, so you're less likely to see entitled aggravating behavior (though you won't see many users giving you good questions with relevant information attached even on forums).
If we make information more accessible, support volume will drop. Currently there's a tendency for domain experts to hoard all the relevant information in their heads and dole it out at their discretion in various chat forums. Forums whose existence is often not widely known to begin with (not to mention gated behind making accounts in certain apps the users may or may not care about or want).
So my point is: instead of trying to automate a decidedly bad solution to make it scalable and treating that as a selling point of AI, we could instead make the information more accessible in the first place?
This meant you had a fairly low and consistent ceiling for messages. What you'd also observe over the years was a gradual decline in question quality. According to every helper, that is. How come?
Admittedly we'll never really know, so this is speculation on my part, but I think it was exactly because of the better availability of information. During these years, we tried cultivating other resources and implementing features with the specific goal of improving UX. It worked. So the only people still "needing" assistance were those who failed to navigate even this better UX. Hence, worse questions, yet never ending.
Another issue with this idea is that navigating the sheer volume of information can become challenging. AWS has pretty decent documentation, for example, but if you don't already know your way around the given service's docs, it's a chore to find anything. Keyword search won't be super helpful either, because it's a lot of prose and not a lot of structure. Compare this to the autogenerated docs of the AWS CLI, and you'll find a stark difference.
Finding things, especially among a lot of faff, is tiring. Asking a natural language question is trivial. The rest is on people to believe that AI isn't the literal devil, unlike what blogposts like the OP would like one to believe.
I’m not sure there is a solution to help people who don’t come to the table willing to put in the effort required to get help. This seems like a deep problem present in all kinds of ways in society, and I don’t think smarter chatbots are the solution. I’d love to be wrong.
If such a dataset exists, I don't have it. The most I have is the anecdotal experience of not having to be afraid of asking LLMs silly questions, and learning things I could then cross-validate as correct without tiring anyone.
No matter your position on AI helpfulness, asking volunteers to not only spend time supporting a free software project but to also pony up money is just doubling down on the burden free software maintainers face, as was highlighted in the recent libxml2 discussion.
But then one could also just argue that this is something the individual projects can decide for themselves. Not really for either of us to make this call. You can consider what I said as just an example you disagree with in that case.
Also, try to come up with a less esoteric example than Discord Help channels. In fact, this is the issue with most defenses of LLMs. The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in.
Should be fairly obvious, but I disagree. Also I think you mean asocial, not antisocial. What's uniquely draconian about automated systems though? They're even susceptible to the same social engineering attacks humans are (it's just referred to as jailbreaking instead).
> Also, try to come up with a less esoteric example than Discord Help channels.
No.
> The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in
Great. This is already significantly more intellectually honest than the entire blogpost.
“I’m not a tech worker…” they like to tinker with code and local Linux servers.
They have not seen how robotic the job has become and felt how much pressure there is to act like a copy-paste/git pull assembly line.
As a dev I have to be already quite desperate if I engage with a chat bot.
Discord is better suited for developers working together, not for publishing the results to an audience.
Maybe. But only if the LLMs are correct. Which they too frequently aren't.
So the result is that the tech industry has figured out how to not only automate making people angry and frustrated, they've managed to do it at scale.
Yay.
The former is an admittedly frustrating aspect of our transactional relationships with companies, while the others are the foundations of a functioning society throughout our civilization. Conflating business interactions with societal needs is a familiar trope on HN, IMO.
Often you give what you get.
If you're nice to the customer service people on the phone, frequently they loosen up and are nice right back at you. Some kind of crazy "human" thing, I guess.
The simple fact of the matter is, there is a sharp gap between what an AI can do, and what a human does in any role involving communications, especially customer service.
Worse, there are psychological responses that naturally occur when you do any number of a few specific things that escalate conflict if you leave this to an AI. A qualified CSR person is taught how to de-escalate, defuse, and calm the person who has been wound up to the point of irrationality. They are the front-line punching bags.
AI can't differentiate between what's acceptable and what's not, because the tokens it uses to identify these contexts have two contradictory states in the same underlying tokens. This goes to core classical computer science problems like halting, among other things.
The companies that were ahead of the curve on this invested a lot into it almost a decade and a half ago, and they found that in most cases these types of systems compounded the issues: once customers did finally get to a person, they took it out on that person irrationally, because they were the representative of the company that put them through what amounts to torture.
One example of behavior that causes these types of responses is being manipulated in a way that you know is manipulation: it causes stress through perceptual blind spots, producing an inconsistent internal mental state that results in confusion. When that happens, it causes a psychological reversal, often into irrational anger. An infinite or byzantine loop designed to run people around in circular hamster wheels is one such structure.
If you've ever been in a social interaction where you offer an olive branch and they seem to accept it, but at the last minute throw it back in your face, you've experienced this. The smart individual doesn't ever do this, because they know they will make an enemy for life who will always remember.
This is also how, through communication, you can impose coercive costs on people, and companies have done this for years where antitrust and FTC rules weren't being enforced. These triggers are inherent, to a lesser or greater degree, in all of us, every person alive.
The imposition of personal cost through this and other psychological blindspots is how torturous and vexatious processes are created.
Empathy and care are a two way street. It requires both entities to be acting in good faith through reflective appraisal. When this is distorted, it drives people crazy, and there is a critical saturation point where assumptions change because the environment has changed. If people show the indicators that they are acting in bad faith, others will treat them automatically as acting in bad faith. Eventually, the environment dictates that those people must prove they are acting in good faith (somehow), but proving this is quite hard. The environment switches from innocent benefit of the doubt to guilty until proven innocent.
These erosions of the social contract while subtle, dictate social behavior. Can you imagine a world where something bad happens to you, and everyone just turns their backs, or prevents you from helping yourself?
It's the slippery slope of society falling back into violence. Few of those commenting on things like this today have actually read the material published by the greats on the social contract, and don't know how society arose from the chaos of violence.
I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there. And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers. I don't believe that is the case.
This sounds sort of like a "God of the gaps" argument.
Yes, we could say that humanity is left to express itself in the margins between the things machines have automated away. As automation increases its capabilities, we just wander around looking for some untouched back-alley or dark corner the robots haven't swept through yet and do our dancing and poetry slams there until the machines arrive forcing us to again scurry away.
But at that point, who is the master, us or the machines?
It would but I don't think that's what they're saying. The agent of dehumanization isn't the technology, but the selection of what the technology is applied to. Or like the quip "we made an AI that creates, freeing up more time for you to work."
Wherever human value, however you define that, exists or is created by people, what does it look like to apply this technology such that human value increases? Does that look like how we're applying it? The article seems to me to be much more focused on how this is actually being used right now rather than how it could be.
Resilience and strength in our civilisation come from confidence in our competence,
not from sanctifying patterns so we don't have to think.
We need to encourage and support fluidity; domain knowledge is commoditised, and the future is fluid composition.
Maybe there would be merit to this notion if society provided the necessary safety net for this person to start over.
I don't think we should assume most people are capable of what you describe. Assigning "should" to this assumes what you're describing is psychologically tenable across a large population.
> too much rigidity or militant clinging to ideas is insecurity or attempts at absolving personal responsibility.
Or maybe some people have a singular focus in life and that's ok. And maybe we should be talking about the responsibility of the companies exploiting everyone's content to create these models, or the responsibility of government to provide relief and transition planning for people impacted, etc.
To frame this as a personal responsibility issue seems fairly disconnected from the reality that most people face. For most people, AI is something that is happening to them, not something they are responsible for.
And to whatever extent we each do have personal responsibility for our careers, this does not negate the incoming harms currently unfolding.
Strong disagree; that's not OK, it's fragile.
People can self-assign any value whatsoever… that doesn't change.
If they expect external validation then that’s obviously dependent on multiple other parties.
People tend to talk about any AI-related topic by comparing it to industrial shifts that happened in the past.
But it's much, Much, MUCH bigger this time. Mostly because AI can make itself better; it will be better, and it is better with every passing month.
It's a matter of years until it can completely replace humans in any form of intellectual work.
And those are not my words but those of the smartest people in the world, like the so-called godfather of AI.
We humans think we are special. That there won't be something better than us. But we are in the middle of the process of creating something better.
It will be better. Smarter. Not tired. Won't be sick. Won't ever complain.
And it IS ALREADY replacing, and WILL replace, a lot of jobs, and it will not create new ones, purely due to efficiency gains and the lack of brainpower in the majority of people who will be laid off.
Not everyone is a Nobel Prize winner. And soon we will need only such people to advance AI.
Can it? I'm pretty sure current AI (not just LLMs, but neural nets more generally) requires human feedback to prevent overfitting, which fundamentally undercuts any fear or hope of the singularity as predicted.
AI can not make itself better because it can not meaningfully define what better means.
It's only the beginning. AI agents are able to simulate tasks, get better at them, and make themselves better.
At this point it's silly to say otherwise.
This is sensationalism. There’s no evidence in favor of it. LLMs are useful in small, specific contexts with many guardrails and heavy supervision. Without human-generated prior art for that context they’re effectively useless. There’s no reason to believe that the current technical path will lead to much better than this.
Automation has costs and imagining what LLMs do now as the start of the self-improving, human replacing machine intelligence is pure fantasy.
A demo is one thing. Being deployed in the real world is something else.
The only thing I've seen humanoid robots doing is dancing and occasionally a backflip or two. And even most of that is with human control.
The only menial task I ever saw a humanoid robot do so far is to take bags off of a conveyor belt, flatten them out and put them on another belt. It did it at about 1/10th the speed of a human, and some still ended up on the floor. This was about a month ago, so the state of the art is still in the demo stage.
The risk raised in the article is that AI is being promoted beyond its scope (pattern recognition/creation) to legal/moral choice determination.
The techno-optimists will claim that legal/moral choices may be nothing more than the sum of various pattern-recognition mechanisms...
My take on the article is that this is missing a deep point: AI cannot have a human-centered morality/legality because it can never be human. It can only ever amplify the existing biases in its training environments.
By decoupling the gears of moral choice from human interaction, whether by choice or by inertia, humanity is being removed from the mechanisms that amplify moral and legal action (or, in some perverse cases, that amplify the biases intentionally).
Even further, AI has only learned from what we've articulated and recorded, so its inherent biases are only those of our recordings. I'm not sure how that sways the model, but I'm sure that it does.
The reductio on this is the hollowing out of the hard-for-humans problem domain, leaving us to fight for the scraps of the easy-for-humans domain. At first glance this sounds like a win. Who wouldn't want something else to solve the hard problems? The big issue with this is easy-for-human problems are often dull, devoid of meaning, and low-wage. Paradoxically, the hardest problems have always been the ones that make work meaningful.
We stand at a crossroads where one path leads to an existence with a poverty of meaning, where, although humans create and play by their own rules, we feel powerless to change them. What the hell are we doing?
[1] I know it's a bit hard to define, but I'd vaguely say that it's significantly better in the majority of intelligence areas than the vast majority of the population. Also it should be scalable. If we can make it slightly better than human by burning the entire Earth's energy, then it doesn't make much sense.
Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.
Whether it's people captured by film, animations in Blender or AI slop, what matters is the outcome. Is it good? Do people like it?
I do the infrastructure at a department of my Uni as sort of a side-gig. I would have never had the time to learn Ansible, borg, FreeIPA, wireguard, and everything else I have configured now and would have probably resorted to a bunch of messy shell scripts that don't work half the time like the people before me.
But everything I was able to set up I was able to set up in days, because of AI.
Sure, it's really satisfying because I also have a deep understanding of the fundamentals, and I can debug problems when AI fails, and then I ask it "how does this work" as a faster Google/wiki.
I've tried Windsurf but gave up, because when the AI does something that doesn't work, I can give it the prompts to find a solution (and think for myself) much faster than it can figure it out itself (and probably at the cost of far fewer tokens).
But the fact that I enjoy the process doesn't matter. And the moment I can click a button and make a webapp, I have so many ideas in my drawer for how I could improve the network at Uni.
I think the problem people have is that they work corporate jobs where they have no freedom to choose their own outcomes so they are basically just doing homework all their life. And AI can do homework better than them.
> Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.
This especially falls apart when it comes to art, which is one of the most “end-goal” processes. People make movies because they enjoy making movies, they want movies to be enjoyed by others because they want to share their art, and they want it to be commercially successful so that they can keep making movies. For the “enjoying a movie” process, do you truly believe that you’d be happy watching only AI-generated movies (and music, podcasts, games, etc.) created on demand with little to no human input for the rest of your life? The human element is truly meaningless to you, it is only about the pixels on the screen? If it is, that’s not wrong - I just think that few people actually feel this way.
This isn’t an “AI bad” take. I just think that some people are losing sight of the role of technology. We can use AI to enable more people than ever before to spend time doing the things they want to do, or we can use it to optimize away the fun parts of life and turn people even further into replaceable meat-bots in a great machine run by and for the elites at the top.
Reducing the human experience to a means to an end is the core idea of dehumanization. Kant addressed this in the "humanity formula" of the categorical imperative:
"Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means."
I'm curious how you feel about the phrase "the real treasure was the friends we made along the way." What does it mean to you?
This explanation might've been passable four years ago, but it's woefully out of date now. "Mostly plausible-sounding word salad"?
It is word salad, unless you’re a young, underpaid contractor from a country previously colonised by the British or the United States.
How could you possibly judge such a diverse set of outputs? There are thousands of models, each of which can be steered/programmed with prompts and a lot of parameter-twiddling; it's all but impossible to say "the chat bots" and give some sort of one-size-fits-all judgement of all LLMs. I think your reply shows a bit of ignorance if that's all you've seen.
Oxford Dictionaries says "word salad" is "a confused or unintelligible mixture of seemingly random words and phrases", and true, I'm no native speaker, but that's not commonly the output I get from LLMs. Sometimes though, some people’s opinions on the internet feel like word salad, but I guess it's hard to distinguish from bait too.
I remain an optimist. I believe AI can actually give us more time to care for people, because the computers will be able to do more themselves and between each other. Unproven thesis, but so is the case laid out in this article.
There are some jobs that humans really shouldn't be doing. And now, we're at the point where we can start offloading that to machines.
I can't really say I've seen that. The article seems to be about adoption of AI in the Canadian public sector, not something I'm really familiar with as a Brit. The government here hopes to boost the economy with it and Hassabis at Deepmind hopes to advance science and cure diseases.
I think AI may well make the world more humane by dealing with a variety of our problems.
If you cut off a bird's wings, it can't bird in any real meaningful sense. If you cut off humans from others, I don't think we can really be human either.
Also the anti-technology stance is good for humanity since it fundamentally introduces opposition to progress and questions the norm, ultimately killing the weak/inefficient parts of progress.
It's just that greed took over, and it took over big time.
Several shitty decisions in a row: scaling it too much, stealing data, marketing it before it can deliver, government use. The list goes on and on.
This idea that there's something inherent about the technology that's dehumanizing is an old trick. The issue lies in whoever is making those shitty decisions, not the tech itself.
There's obviously a fog surrounding every single discussion about this stuff. Probably the outcome of another remarkably shitty decision by someone (so many people parroting marketing ideas, it's dumb).
We'll be ok as humans, trust me on this one. It's too big to not fail.