Posted by smartmic 6/26/2025

AI Is Dehumanization Technology (thedabbler.patatas.ca)
157 points | 172 comments
perching_aix 6/26/2025|
> Rather than enhancing our human qualities, these systems degrade our social relations, and undermine our capacity for empathy and care.

I don't genuinely expect the author of a blogpost who titles their writing "AI is Dehumanization Technology" to be particularly receptive to a counterargument, but hear me out.

I can think of few things as intellectually miserable as the #help channels of the many open source projects on Discord, for example. If I were to wager a guess, if these projects all integrated LLMs into their chatbots for just a few bucks and let them take on the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care, or to nurture social connections. This extends beyond these noncommercial contexts, of course, to stuff like customer service.

reval 6/26/2025||
The counter-counter-argument is that the messy part of human interaction is necessary for social cohesion. I’ve already witnessed this erosion prior to LLMs in the rise of SMS over phone calls (personal) and automated menu systems for customer service (institutional).

It is sad to me that the skills required to navigate everyday life are being delegated to technology. Pretty soon it won’t matter what you think or feel about your neighbors, because you will only ever know their tech-mediated facade.

tobr 6/26/2025|||
> It is sad to me that the skills required to navigate everyday life are being delegated to technology.

Isn’t this basically what technology does? I suppose there is also technology to do things that weren’t possible at all before, but the application is often automation of something in someone’s everyday life that is considered burdensome.

reval 7 days ago||
That’s right - technology does replace pieces of our everyday life. I fear what happens when social ties are what get replaced. For example, technology allows us to travel long distances while keeping in touch with our loved ones. What happens when the keeping-in-touch part is what gets replaced?
perching_aix 6/26/2025||||
I'm not sure I'd agree with characterizing heavily asymmetric social interactions, such as customer service folks assisting tens or hundreds of people on the same issues every week and similar, as a "necessarily messy part of human interaction for social cohesion".
svieira 6/26/2025||
It is well noted that it is very hard to get in contact with a human at Google when you have a problem. And then we wonder why Google never seems to understand its user base.
perching_aix 6/26/2025|||
I don't think these two are actually related, and the automated contact options Google and other megacorporations provide were significantly behind on these developments the last time I tried interacting with them. Namely, e.g. Meta has basically no support line. There was even a thread here a few days ago chronicling that.
fc417fc802 7 days ago|||
Talking to a human doesn't imply that management necessarily cares about you or your use case. Automated help used to be categorically bad due to lack of technology. Now it has the potential to be good. The ability of the tech and the alignment of the process are entirely orthogonal.
ethbr1 6/27/2025||||
> the messy part of human interaction is necessary for social cohesion

Also, efficiency.

I think everyone in tech consulting can tell you that inserting another party (outsourcing) in a previously two-party transaction rarely produces better outcomes.

Human-agent-agent-human communication doesn't fill me with hope, beyond basic well-defined use cases.

jofla_net 6/26/2025|||
It is, in fact, all insulation. The technology, that is. It cuts out face-to-face, vid-to-vid, voice-to-voice, and even direct text, as in SMS or email. To the point that agents will be advocating for users instead of people even typing back to one another. Until and unless it affects the reproduction cycle, and I think it already has, people will fail to socialize, since there is also zero customary expectation to do so (that was the surprisingly good thing about old-world customs), so only the overtly gregarious will end up doing it. Kind of a long-tailed hyperbolic endgame but, well, there it is.

Edit: one point I forgot to make is that it has already become absurd how different someone's online persona or confidence level is compared to when they are AFK; it's as if they've been reduced to an infantile state.

fc417fc802 7 days ago||
> how different someone's online persona or confidence level is compared to when they are AFK

It's an interesting thing I've noticed as well. At least in some cases it's due to constraints. Written communication invariably affords the opportunity to step back and think before sending. Many social contexts do not, making them almost entirely different skill sets.

tines 6/26/2025|||
> I can think of few things as intellectually miserable as the #help channels of the many open source projects on Discord, for example. If I were to wager a guess, if these projects all integrated LLMs into their chatbots for just a few bucks and let them take on the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care, or to nurture social connections. This extends beyond these noncommercial contexts, of course, to stuff like customer service.

This would be a good counter if this were all that this technology is being used for.

perching_aix 6/26/2025|||
I don't think it's necessary for me to counter everything they're saying. They're making a universal judgement - as long as I can demonstrate a good counter for one part of it, the universal judgement will fail to hold.

They can of course still argue that it's majority-bad for whatever list of reasons, but that's not what bugs me. What bugs me is the absolute tunnel vision of "for principled reasons I must find this thing completely and utterly bad, no matter what". Because this is what the title, and the tone, and just about everything else in this article comes across as to me, and I find it equal parts terrifying and disagreeable.

asciimov 6/26/2025||
> What bugs me is the absolute tunnel vision of "for principled reasons I must find this thing completely and utterly bad, no matter what".

AI companies also only sell the public on the upside of these technologies. Behind closed doors, they are investing heavily in them with the hope of reducing or eliminating their labor costs, with no regard for any damage to society.

perching_aix 6/26/2025||
I don't think fighting misrepresentation with misrepresentation is a winning strategy.
Lerc 6/26/2025|||
>This would be a good counter if this were all that this technology is being used for.

Do you hold the position that a thing is bad because it is possible to do harm, or that it is predominantly causing harm?

Most criticisms cite examples demonstrating the existence of harm because proving existence requires a single example. Calculating the sum of an effect is much harder.

Even if the current impact of a field is predominantly harmful, it does not follow that the problem is with what is being attempted. Consider healthcare: a few hundred years ago, much of healthcare did more harm than good, and charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?

adamc 6/26/2025|||
I don't think the path was fixed by healthcare, per se. It was fixed by adopting scientific investigation.

So I think your argument is kind of misleading.

Lerc 6/26/2025||
Why misleading?

I am advocating adopting methods of improvement rather than abandoning the pursuit of beneficial results.

I think science was just a part of the solution to healthcare; much of the advance was also in what was considered allowable or ethical. There remain a great many harmful medical practices in use today in places where regulation is weak.

Science has done little to stop those harms. The advances that led to the requirement for a scientific backing were social. That those practices persist in some places is not a scientific issue but a social one.

adamc 6/26/2025||
Because having "doctors", for example, isn't really what made for better healthcare. We had doctors for centuries (arguably millennia) who were useful in very limited cases, and probably harmful most of the rest of the time. What made for better healthcare was changing the way we investigated problems.

That ultimately enabled "doctors" to be quite useful. But the fact that the "profession" existed earlier is not what allowed it to bloom.

ToucanLoucan 6/26/2025|||
> Do you hold the position that a thing is bad because it is possible to do harm, or that it is predominantly causing harm?

Mere moments later...

> Even if the current impact of a field is predominantly harmful

So let's just skip the first part then, you conceded it's predominantly harmful. On this we agree.

> it does not follow that the problem is with what is being attempted.

Well, it's not a logical 1-to-1, no. But I would say if the current impact of a field is predominantly harmful, then revisiting what is being attempted isn't the worst idea.

> Consider healthcare: a few hundred years ago, much of healthcare did more harm than good, and charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?

If OpenAI and company were still pure research projects, this would hold some amount of water, even if I would still disagree with it. However, that ignores the context that OpenAI is actively (and under threat of financial ruin) turning itself into a for-profit business, and is actively selling its products, as are its competitors, to firms in the market with the explicit notion of reducing headcount for the same productivity. This doesn't need a citation; look at any AI product marketing and you see a consistent theme is the removal of human labor and/or interaction.

Lerc 6/26/2025||
>> Even if the current impact of a field is predominantly harmful

>So let's just skip the first part then, you conceded it's predominantly harmful. On this we agree.

I'm afraid if you interpret that statement as a concession of a fact, I don't think we can have a productive conversation.

m4rtink 6/26/2025|||
I am not sure about your experience, but these types of channels seem to mostly have the issue of people being too busy to reply; but when they do, there is often an interesting interaction, and this is how users often become contributors to the project over time.

Sure, if you want to make sure you don't get any more contributors, you can try to replace that with a chatbot that will always reply immediately, but might just be wrong like 40% of the time, is not actually working on the project, and will certainly not help in building social interactions between the project and its users.

perching_aix 6/26/2025||
I have participated in such channels for multiple years on the assisting side, and have been keeping in touch with some of the folks I knew from there still doing it. Also note that the projects I helped around with were more end-user focused.

Most interactions start with users being vague. This can already result in some helpers getting triggered, and starting to be vaguely snarky, but usually this is resolved by using prepared bot commands... which these users sometimes just won't read.

Then the misunderstandings start. Or the misplaced expectations. Or the lies. Or maybe the given helper has been having a bad day, but due to their long time presence in the project, they won't be moderated out properly. And so on. It's just not a good experience.

Since I left, I've gotten screencaps of various kinds of conversations. In some cases, the user was being objectively insufferable - I don't think it's fair to expect a human to put up with that. Other times, the helper was being unnecessarily mean - they did not appreciate my feedback on that. Neither happens with LLMs. People don't grow resentful of the never-ending horde of what feels like increasingly clueless users, and innocent folk don't get randomly chewed out for not living up to the optimality expectations of those who tend to 1000s of cases similar to theirs every week.

whatevertrevor 6/26/2025|||
I think the solution is neither AI nor human in this case.

While direct human support is invaluable in many cases, I find it really hard to believe how our industry has completely forgotten the value of public support forums. Here are some pure advantages over Discord/Slack/<Insert private chat platform of your liking>

- Much much better search functionality out of the box, because you can leverage existing search engines.

- From the above it follows that high value contributors do not need to spend their valuable time repeatedly answering the same basic questions over and over.

- Your high value contributors don't have to be employees of the company, as many enthusiastic power users often participate and contribute in such places.

- Conversations are _much_ easier to follow without having to resort to hidden threads and forums posts on Discord that no one will ever read or search.

- Over time you build a living library of supporting documentation instead of useful information being strewn in many tiny conversations over months.

- No user expectation to be helped immediately. A forum sets the expectation that this is an async method of communication, so you're less likely to see entitled aggravating behavior (though you won't see many users giving you good questions with relevant information attached even on forums).

perching_aix 6/26/2025||
I think you're forgetting about how e.g. StackOverflow, a Q&A forum, exhibited basically the exact same issues I just ran through. In general, the history of both the unnecessary hostility of helpers and the near-insulting cluelessness and laziness of users on public forums is a very long and extensive one. It's not a format issue, I don't think.
whatevertrevor 6/26/2025||
I'm surprised you read my post and thought I was trying to say that using more public forums and fewer private chats will solve the so-called "human issue". My argument is not about making customer support more pleasant, or users less hostile. It's about making information more accessible so people can help themselves.

If we make information more accessible, support will reduce in volume. Currently there's a tendency for domain experts to hoard all relevant information in their heads, and dole it out at their discretion in various chat forums. Forums whose existence is often not widely known to begin with (not to mention gated behind making accounts in certain apps the users may or may not care about/want to).

So my point is: instead of trying to automate a decidedly bad solution to make it scalable and treating that as a selling point of AI, we could instead make the information more accessible in the first place?

perching_aix 6/26/2025||
The number of messages in the #help channels I participated in was limited not by the number of participants on either side, but by the speed of the chat. If it moved too quickly, people would hold off from posting.

This meant you had a fairly low and consistent ceiling for messages. What you'd also observe over the years is a gradual decline in question quality. According to every helper, that is. How come?

Admittedly we'll never really know, so this is speculation on my part, but I think it was exactly because of the better availability of information. During these years, we tried cultivating other resources and implementing features with the specific goal of improving UX. It worked. So the only people still "needing" assistance were those who failed to navigate even this better UX. Hence, worse questions, yet never ending.

Another issue with this idea is that navigating through the sheer volume of information can become challenging. AWS has pretty decent documentation, for example, but if you don't already know your way around the given service's docs, it's a chore to find anything. Keyword search won't be super helpful either. This is because it's a lot of prose, and not a lot of structure. Compare this to the autogenerated docs of the AWS CLI, and you'll find a stark difference.

Finding things, especially among a lot of faff, is tiring. Asking a natural language question is trivial. The rest is on people to accept that AI isn't the literal devil, unlike what blogposts like the OP would have one believe.

Vegenoid 6/26/2025|||
Do we have examples of LLMs being used successfully in these scenarios? I’m skeptical that the insufferable users will actually be satisfied and able to be helped by an LLM, unless the LLM is actually presented as a human, which seems unethical. It also hinges on an LLM being able to get the user to provide the required information accurately, without lying or simply getting frustrated, angry, and unwilling to cooperate.

I’m not sure there is a solution to help people who don’t come to the table willing to put in the effort required to get help. This seems like a deep problem present in all kinds of ways in society, and I don’t think smarter chatbots are the solution. I’d love to be wrong.

perching_aix 6/26/2025||
> Do we have examples of LLMs being used successfully in these scenarios?

If such a dataset exists, I don't have it. The most I have is the anecdotal experience of not having to be afraid of asking LLMs silly questions, and learning things I could then cross-validate as correct without tiring anyone.

eikenberry 6/26/2025|||
> [..] if these projects all integrated LLMs into their chatbots for just a few bucks [..]

No matter your position on AI's helpfulness, asking volunteers to not only spend time helping support a free software project but to also pony up money is just doubling down on the burden free software maintainers face, as was highlighted in the recent libxml2 discussion.

perching_aix 6/26/2025||
A lot of the projects that maintain Discord servers in my experience will receive plenty enough donations to make up for the $5 it'd take to serve the traffic that hits their Discord for help with AI. Yes I did run the numbers. It's so (intentionally) cheap, this is a non-issue.
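For a rough sense of scale, here's the shape of that math with assumed figures (illustrative placeholders only, not the exact numbers I ran):

    # Back-of-envelope cost of AI-handled #help traffic.
    # Every figure below is an assumption, purely illustrative.
    questions_per_month = 1_000   # help messages hitting the Discord
    tokens_per_exchange = 2_000   # prompt + completion, per question
    usd_per_million_tokens = 1.0  # ballpark budget-tier model pricing

    monthly_cost = (questions_per_month * tokens_per_exchange
                    / 1_000_000 * usd_per_million_tokens)
    print(f"${monthly_cost:.2f}/month")  # $2.00 with these assumptions

Even with much more generous assumptions, you stay in the single digits per month.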

But then one could also just argue that this is something the individual projects can decide for themselves. Not really for either of us to make this call. You can consider what I said as just an example you disagree with in that case.

passwordoops 6/26/2025|||
Counter-counter: there is nothing as intellectually miserable and antisocial as large corporations and institutions replacing Helpdesks with draconian automated systems.

Also, try to come up with a less esoteric example than Discord Help channels. In fact, this is the issue with most defenses of LLMs. The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in.

perching_aix 6/26/2025||
> there is nothing as intellectually miserable and antisocial as large corporations and institutions replacing Helpdesks with (...) automated systems

Should be fairly obvious, but I disagree. Also I think you mean asocial, not antisocial. What's uniquely draconian about automated systems though? They're even susceptible to the same social engineering attacks humans are (it's just referred to as jailbreaking instead).

> Also, try to come up with a less esoteric example than Discord Help channels.

No.

> The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in

Great. This is already significantly more intellectually honest than the entire blogpost.

butundstand 6/26/2025|||
In their bio:

“I’m not a tech worker…” they like to tinker with code and local Linux servers.

They have not seen how robotic the job has become, nor felt how much pressure there is to act like a copy-paste/git-pull assembly line.

raxxorraxor 6/27/2025|||
Better yet, don't use Discord to build a community: if you don't do anything else, your knowledge base will be lacking, which hinders adoption of your project.

As a dev, I'd have to be quite desperate already to engage with a chatbot.

Discord is better suited for developers working together, not for publishing the results to an audience.

MarcelOlsz 6/26/2025|||
You forgot one key thing: I don't want to talk to an AI.
reaperducer 6/26/2025|||
if these projects all integrated LLMs into their chatbots for just a few bucks and let them take on the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care

Maybe. But only if the LLMs are correct. Which they too frequently aren't.

So the result is that the tech industry has figured out how to not only automate making people angry and frustrated, they've managed to do it at scale.

Yay.

mac-attack 6/26/2025|||
With all due respect, I don't think customer service interactions are a meaningful rebuttal when discussing the decline in social relations, empathy and care.

The former is an admittedly frustrating aspect of our transactional relationships with companies, while the others are the foundations of a functioning society throughout our civilization. Conflating business interactions with societal needs is a familiar trope on HN, IMO.

reaperducer 6/26/2025|||
I don't think customer service interactions are a meaningful rebuttal when discussing the decline in social relations, empathy and care.

Often you give what you get.

If you're nice to the customer service people on the phone, frequently they loosen up and are nice right back at you. Some kind of crazy "human" thing, I guess.

perching_aix 6/26/2025|||
I literally led with a noncommercial example.
trod1234 6/26/2025||
This assumes the conclusion that AI would solve the resource-starved interaction issue. Unfortunately, the data has been in on this for quite a long time (longer than LLM chatbots have been in existence); if you've worked in IT at a large carrier provider or for large call centers, you know about this.

The simple fact of the matter is, there is a sharp gap between what an AI can do, and what a human does in any role involving communications, especially customer service.

Worse, there are psychological responses that naturally occur when you do any number of a few specific things that escalate conflict if you leave this to an AI. A qualified CSR person is taught how to de-escalate, defuse, and calm the person who has been wound up to the point of irrationality. They are the front-line punching bags.

AI can't differentiate between what's acceptable and what's not, because the tokens it uses to identify these contexts hold two contradictory states in the same underlying tokens. This goes back to core classical computer science problems, such as halting.

The companies that were ahead of the curve on this invested a lot into it almost a decade and a half ago, and they found that in most cases these types of systems compounded the issues by the time people finally did get to a person, and they took it out on that person irrationally because they were the representative of the company that put them through what amounts to torture.

An example of the behavior that causes these types of responses: when you are being manipulated in a way that you know is manipulation, it causes stress through perceptual blind spots, producing an inconsistent internal mental state that results in confusion. When that happens, it causes a psychological reversal, often into irrational anger. An infinite or byzantine loop designed to run people in circular hamster wheels is one such structure.

If you've ever been in a social interaction where you offered an olive branch and they seemed to accept it, but at the last minute threw it back in your face, you've experienced this. The smart individual doesn't ever do this, because they know they will make an enemy for life who will always remember.

This is also how, through communication, you can impose coercive costs on people, and companies have done this for years where antitrust and FTC rules weren't being enforced. These triggers are inherent, to a lesser or greater degree, in all of us, every person alive.

The imposition of personal cost through this and other psychological blind spots is how torturous and vexatious processes are created.

Empathy and care are a two way street. It requires both entities to be acting in good faith through reflective appraisal. When this is distorted, it drives people crazy, and there is a critical saturation point where assumptions change because the environment has changed. If people show the indicators that they are acting in bad faith, others will treat them automatically as acting in bad faith. Eventually, the environment dictates that those people must prove they are acting in good faith (somehow) but proving this is quite hard. The environment switches from innocent benefit of the doubt to, guilty until proven innocent.

These erosions of the social contract while subtle, dictate social behavior. Can you imagine a world where something bad happens to you, and everyone just turns their backs, or prevents you from helping yourself?

It's the slippery slope of society falling back into violence. Few of those commenting on things like this today have actually read the material published by the greats on the social contract, and don't know how society arose from the chaos of violence.

rolha-capoeira 6/26/2025||
This presupposes that human value only exists in the things current AI tech can replace—pattern recognition/creation. I'd wager the same argument was made when hand-crafted things were being replaced with industrialized products.

I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there. And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers. I don't believe that is the case.

munificent 6/26/2025||
> I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there.

This sounds sort of like a "God of the gaps" argument.

Yes, we could say that humanity is left to express itself in the margins between the things machines have automated away. As automation increases its capabilities, we just wander around looking for some untouched back-alley or dark corner the robots haven't swept through yet, and do our dancing and poetry slams there until the machines arrive, forcing us to scurry away again.

But at that point, who is the master, us or the machines?

rolha-capoeira 6/26/2025|||
What we still get paid to do is different than what we're still able to do. I'm still able to knit a sweater if I find it enjoyable. Some folks can even do it for a living (but maybe not a living wage).
mattgreenrocks 6/26/2025|||
If this came to pass, the population would be stripped of dignity pretty much en masse. We need to feel competent, useful, and connected to people. If people feel they have nothing left, then their response will be extremely ugly.
giraffe_lady 6/26/2025|||
> And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers.

It would but I don't think that's what they're saying. The agent of dehumanization isn't the technology, but the selection of what the technology is applied to. Or like the quip "we made an AI that creates, freeing up more time for you to work."

Wherever human value, however you define that, exists or is created by people, what does it look like to apply this technology such that human value increases? Does that look like how we're applying it? The article seems to me to be much more focused on how this is actually being used right now rather than how it could be.

danielbln 6/26/2025|||
It kind of makes sense if following a particular pattern is your purpose and life, and maybe your identity.
malux85 6/26/2025|||
We should actively encourage fluidity in purpose; too much rigidity or militant clinging to ideas is insecurity, or an attempt at absolving oneself of personal responsibility.

Resilience and strength in our civilisation come from confidence in our competence, not from sanctifying patterns so we don’t have to think.

We need to encourage and support fluidity; domain knowledge is commoditised; the future is fluid composition.

asciimov 6/26/2025|||
Great - tell someone who spent years honing their skills that it's too bad the rug was pulled out from beneath them, time to start over from the bottom again.

Maybe there would be merit to this notion if society provided the necessary safety net for this person to start over.

malux85 6/26/2025|||
Agreed, I think there should be much more of a safety net for people to start over and be more fluid. I definitely think the weird "Full time employed or homeless" thing has to change.
danielbln 6/27/2025|||
"Protect the person, not the job" is what we should be aiming for. I don't think we will, but we should.
haswell 6/26/2025||||
> We should actively encourage fluidity in purpose

I don't think we should assume most people are capable of what you describe. Assigning "should" to this assumes what you're describing is psychologically tenable across a large population.

> too much rigidity or militant clinging to ideas is insecurity, or an attempt at absolving oneself of personal responsibility.

Or maybe some people have a singular focus in life and that's ok. And maybe we should be talking about the responsibility of the companies exploiting everyone's content to create these models, or the responsibility of government to provide relief and transition planning for people impacted, etc.

To frame this as a personal responsibility issue seems fairly disconnected from the reality that most people face. For most people, AI is something that is happening to them, not something they are responsible for.

And to whatever extent we each do have personal responsibility for our careers, this does not negate the incoming harms currently unfolding.

malux85 6/26/2025||
“Some people have a singular purpose in life and that’s OK”

Strong disagree, that’s not OK, it’s fragile

haswell 6/26/2025||
Much of society is fragile. The point is that we need to approach this from the perspective of what is, not from what we wish things could be.
adamc 6/26/2025||||
People come with all sorts of preferences. Telling people who love mastery that they have to be "fluid" isn't going to lead to happy outcomes.
danielbln 6/26/2025|||
Absolutely, I agree with that.
MichaelZuo 6/26/2025|||
How would this matter?

People can self-assign any value whatsoever… that doesn’t change.

If they expect external validation then that’s obviously dependent on multiple other parties.

pojzon 6/26/2025|||
Due to how AI works, it's only a matter of time till it's better at pretty much everything humans do besides "living".

People tend to talk about any AI-related topic by comparing it to the industrial shifts that happened in the past.

But it's much, Much, MUCH bigger this time. Mostly because AI can make itself better: it will be better, and it is better with every passing month.

It's a matter of years until it can completely replace humans in any form of intellectual work.

And those are not my words but those of the smartest people in the world, like the godfather of AI.

We humans think we are special. That there won't be something better than us. But we are in the middle of the process of creating something better.

It will be better. Smarter. Not tired. Won't be sick. Won't ever complain.

And it IS ALREADY replacing and WILL replace a lot of jobs, and it will not create new ones, purely due to efficiency gains and the lack of brainpower in the majority of people who will be laid off.

Not everyone is a Nobel prize winner. And soon we will need only such people to advance AI.

serbuvlad 6/26/2025|||
> because AI can make itself better

Can it? I'm pretty sure current AI (not just LLMs, but neural nets more generally) requires human feedback to prevent overfitting, fundamentally eschewing any fear or hope of the singularity as predicted.

AI can not make itself better because it can not meaningfully define what better means.

pojzon 6/26/2025||
AlphaEvolve reviewed how it's trained and found a way to improve the process.

It's only the beginning. AI agents are able to simulate tasks, get better at them, and make themselves better.

At this point it's silly to say otherwise.

sonofhans 6/26/2025||||
> It's a matter of years until it can completely replace humans in any form of intellectual work.

This is sensationalism. There’s no evidence in favor of it. LLMs are useful in small, specific contexts with many guardrails and heavy supervision. Without human-generated prior art for that context they’re effectively useless. There’s no reason to believe that the current technical path will lead to much better than this.

z0r 6/26/2025|||
Call me when 'AI' cooks meals in our kitchens, repairs the plumbing in our homes, and removes the trash from the curb.

Automation has costs, and imagining what LLMs do now as the start of the self-improving, human-replacing machine intelligence is pure fantasy.

NitpickLawyer 6/26/2025||
To say that this is pure fantasy when there are more and more demos of humanoid robots doing menial tasks, and the costs of those robots are coming down, is... well, something. Anger, denial (you are here)...
reaperducer 6/26/2025|||
To say that this is pure fantasy when there are more and more demos of humanoid robots doing menial tasks

A demo is one thing. Being deployed in the real world is something else.

The only thing I've seen humanoid robots doing is dancing and occasionally a backflip or two. And even most of that is with human control.

The only menial task I've ever seen a humanoid robot do is take bags off of a conveyor belt, flatten them out, and put them on another belt. It did it at about 1/10th the speed of a human, and some still ended up on the floor. This was about a month ago, so the state of the art is still in the demo stage.

z0r 6/27/2025|||
I'm waiting. You're talking to someone who believed that self-driving vehicles would put truckers out of work in a decade right around 2012. I didn't think that one through. The world is very complicated and human beings are the cheapest and most effective way to get physical things done.
unsui 6/26/2025||
not entirely.

The risk raised in the article is that AI is being promoted beyond its scope (pattern recognition/creation) to legal/moral choice determination.

The techno-optimists will claim that legal/moral choices may be nothing more than the sum of various pattern-recognition mechanisms...

My take on the article is that this is missing a deep point: AI cannot have a human-centered morality/legality because it can never be human. It can only ever amplify the existing biases in its training environments.

By decoupling the gears of moral choice from human interaction, whether by choice or by inertia, humanity is being removed from the mechanisms that amplify moral and legal action (or, in some perverse cases, amplify the biases intentionally)

rolha-capoeira 6/27/2025||
To build on your point: we only need to look at another type of entity that has a binary reward system and is inherently amoral - the corporation. Though it has many of the same rights as a human (in the US), the corporation itself is amoral, and we rely upon the humans within to retain a moral compass, to their own detriment, which is a foolish endeavor.

Even further, AI has only learned through what we've articulated and recorded, so its inherent biases are only those of our recordings. I'm not sure how that sways the model, but I'm sure that it does.

kelseyfrog 6/26/2025||
Whether we like it or not, AI sits at the intersection of both Moravec's paradox and Jevons paradox. Just as more efficient engines lead to increased gas usage, as AI gets increasingly better at problems difficult for humans, we see even greater proliferation within that domain.

The reductio on this is the hollowing out of the hard-for-humans problem domain, leaving us to fight for the scraps of the easy-for-humans domain. At first glance this sounds like a win. Who wouldn't want something else to solve the hard problems? The big issue is that easy-for-humans problems are often dull, devoid of meaning, and low-wage. Paradoxically, the hardest problems have always been the ones that make work meaningful.

We stand at a crossroads where one path leads to an existence with a poverty of meaning, where although humans create and play by their own rules, we feel powerless to change them. What the hell are we doing?

ololobus 6/26/2025||
Interesting point of view; I didn't know about Jevons paradox before. To me, the outcome still depends on whether AI can get superhuman [1] (and beyond) at some point. If it can, then, well, we will likely indeed see the suitable-for-humans areas of intellectual labor shrinking. If it cannot, then it becomes an even more philosophical question, similar to beliefs about agnosticism. Is the universe completely knowable? Because if it's not, then we might as well have infinitely more hard problems, and AI just raises the bar for what we can achieve by pairing a human with AI compared to a human alone.

[1] I know it's a bit hard to define, but I'd vaguely say that it's significantly better in the majority of intelligence areas than the vast majority of the population. Also, it should be scalable. If we can make it slightly better than a human only by burning the entire Earth's energy, then it doesn't make much sense.

serbuvlad 6/26/2025||
Prioritize goals over process, and what AIs can do doesn't matter.

Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.

Whether it's people captured by film, animations in Blender or AI slop, what matters is the outcome. Is it good? Do people like it?

I do the infrastructure at a department of my Uni as sort of a side gig. I would never have had the time to learn Ansible, borg, FreeIPA, wireguard, and everything else I have configured now, and would probably have resorted to a bunch of messy shell scripts that don't work half the time, like the people before me.

But everything I was able to set up I was able to set up in days, because of AI.

Sure, it's really satisfying because I also have a deep understanding of the fundamentals, and I can debug problems when AI fails, and then I ask it "how does this work" as a faster Google/wiki.

I've tried Windsurf but gave up, because when the AI does something that doesn't work, I can give it the prompts to find a solution (+ think for myself) much faster than it can figure it out itself (and probably at the cost of far fewer tokens).

But the fact that I enjoy the process doesn't matter. And the moment I can click a button and make a webapp, I have so many ideas in my drawer for how I could improve the network at Uni.

I think the problem people have is that they work corporate jobs where they have no freedom to choose their own outcomes, so they are basically just doing homework all their life. And AI can do homework better than them.

Vegenoid 6/26/2025|||
Take this too far and you run into a major existential crisis. What is the goal of life? Most people would say something along the lines of bringing joy to others, experiencing joy yourself, accomplishing things that you are proud of, and continuing the existence of life by having children, so that they can experience joy. The joy of life is in doing things, joy comes from process. Goals are useful in that they enable the doing of some process that you want to be doing, or in the joy of achieving the goal (in which case the joy is usually derived from the challenge in the process of achieving the goal).

> Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.

This especially falls apart when it comes to art, which is one of the most “end-goal” processes. People make movies because they enjoy making movies, they want movies to be enjoyed by others because they want to share their art, and they want it to be commercially successful so that they can keep making movies. For the “enjoying a movie” process, do you truly believe that you’d be happy watching only AI-generated movies (and music, podcasts, games, etc.) created on demand with little to no human input for the rest of your life? The human element is truly meaningless to you, it is only about the pixels on the screen? If it is, that’s not wrong - I just think that few people actually feel this way.

This isn’t an “AI bad” take. I just think that some people are losing sight of the role of technology. We can use AI to enable more people than ever before to spend time doing the things they want to do, or we can use it to optimize away the fun parts of life and turn people even further into replaceable meat-bots in a great machine run by and for the elites at the top.

kelseyfrog 6/26/2025||||
When all we care about is the final product, we miss the entire internal arc: the struggle, the bruised ego, the chance of failure, and the reward of feeling "holy shit, I did it!" that comprises the essence of being human.

Reducing the human experience to a means to an end is the core idea of dehumanization. Kant addressed this in the "humanity formula" of the categorical imperative:

    "Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means."
I'm curious how you feel about the phrase "the real treasure was the friends we made along the way." What does it mean to you?
skuxxlife 6/26/2025|||
But the process _does_ matter. That is the whole point of life. Why else are we even here, if not to enjoy the process of making? It’s why people get into woodworking or knitting as hobbies. If it was just about the end result, they could just go to a store and buy something that would be way cheaper and easier. But that’s not the point - it’s something that _you_ made with your own hands, as imperfect as they are, and the experience of making something.
rafram 6/26/2025||
> For example, to create an LLM such as ChatGPT, you'd start with an enormous quantity of text, then do a lot of computationally-intense statistical analysis to map out which words and phrases are most likely to appear near to one another. Crunch the numbers long enough, and you end up with something similar to the next-word prediction tool in your phone's text messaging app, except that this tool can generate whole paragraphs of mostly plausible-sounding word salad.

This explanation might've been passable four years ago, but it's woefully out of date now. "Mostly plausible-sounding word salad"?
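What that passage describes is closer to a classic n-gram model than to a modern LLM. For concreteness, here's a toy bigram "next-word predictor" in that spirit (purely illustrative; the corpus is made up, and real LLMs share only the next-token training objective with this, not the mechanism):

    import random
    from collections import Counter, defaultdict

    # Tiny made-up corpus; count which word follows which.
    corpus = "the cat sat on the mat and the cat slept on the rug".split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(word, length=8):
        out = [word]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            # Sample the next word in proportion to observed frequency.
            word = random.choices(list(options), weights=list(options.values()))[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat slept on the mat and the cat"

Something like this really does produce "mostly plausible-sounding word salad"; a trained transformer with billions of parameters does not.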

mv4 6/26/2025|
What would be a better explanation today?
diggan 6/26/2025||
I think "mostly plausible-sounding" is, albeit simplified, OK for an analogy I guess. But the "word salad" part gives the impression it doesn't even look like real human text, which it kind of does at the surface. I think it's mostly "word salad" that makes it sound far off from the truth.
djhn 6/26/2025|||
Over the past year the chat bots have improved in many ways, but their written output has regressed to the mean: the average RLHF-driven preference.

It is word salad, unless you’re a young, underpaid contractor from a country previously colonised by the British or the United States.

diggan 6/26/2025||
> Over the past year the chat bots have improved in many ways, but their written output has regressed to the mean: the average RLHF-driven preference.

How could you possibly judge such a diverse set of outputs? There are thousands of models that can each be steered/programmed with prompts and a lot of parameter-twiddling; it's basically impossible to say "the chat bots" and give some sort of one-size-fits-all judgement of all LLMs. I think your reply shows a bit of ignorance if that's all you've seen.

Oxford Dictionaries says "word salad" is "a confused or unintelligible mixture of seemingly random words and phrases", and true, I'm no native speaker, but that's not commonly the output I get from LLMs. Sometimes though, some people’s opinions on the internet feel like word salad, but I guess it's hard to distinguish from bait too.

djhn 6/27/2025||
I meant what I said: chat bots, not models or APIs. Give it a try if you don’t believe me. Try using the leading chat interfaces logged out, from a clean browser and new IP.
diggan 6/27/2025||
Sure, I don't doubt that, but still, what do you think these chat bots are using? Or are you talking about ELIZA? If so, what you say now makes a lot of sense.
relaxing 6/26/2025|||
Word salad refers to human writing with poor diction and/or syntax.
mrcwinn 6/26/2025||
Yes, before AI, society was doing fantastically well on "social relations, empathy, and care." XD

I remain an optimist. I believe AI can actually give us more time to care for people, because the computers will be able to do more themselves and between each other. Unproven thesis, but so is the case laid out in this article.

ACCount36 6/26/2025|
Anyone who thinks that AI is bad for "empathy and care" should be forced to work for a year in a tech support call center, first line.

There are some jobs that humans really shouldn't be doing. And now, we're at the point where we can start offloading that to machines.

tim333 6/26/2025||
>The push to adopt AI is, at its core, a political project of dehumanization

I can't really say I've seen that. The article seems to be about adoption of AI in the Canadian public sector, not something I'm really familiar with as a Brit. The government here hopes to boost the economy with it and Hassabis at Deepmind hopes to advance science and cure diseases.

I think AI may well make the world more humane by dealing with a variety of our problems.

old_man_cato 6/26/2025||
Dehumanization might be the wrong word. It's certainly anti social technology, though, and that's bad enough.
munificent 6/26/2025|
I believe that our socializing is the absolute most fundamentally human aspect of us as a species.

If you cut off a bird's wings, it can't bird in any real meaningful sense. If you cut off humans from others, I don't think we can really be human either.

ACCount36 6/26/2025|||
There are a lot of incredibly offended kiwi birds out there now.
old_man_cato 6/26/2025|||
And I think a lot of people would agree with you.
PolyBaker 6/26/2025||
I think there is a fundamental misunderstanding of power present in the post. By that I mean that the author doesn't appreciate that technology (or any tool, for that matter) gives power and control to the user. This is used to further our understanding of the world with the intent of creating more technology (a recursive process). The normies just support those at the forefront who actually change society. The argument in the post is fundamentally anti-technology. Follow this argument and you end up at a place where we live in caves rather than buildings.

Also the anti-technology stance is good for humanity since it fundamentally introduces opposition to progress and questions the norm, ultimately killing the weak/inefficient parts of progress.

alganet 6/26/2025||
I think AI could have been great.

It's just that greed took over, and it took over big time.

Several shitty decisions in a row: scaling it too much, stealing data, marketing it before it can deliver, government use. The list goes on and on.

This idea that there's something inherent about the technology that's dehumanizing is an old trick. The issue lies in whoever is making those shitty decisions, not the tech itself.

There's obviously a fog surrounding every single discussion about this stuff. Probably the outcome of another remarkably shitty decision by someone (so many people parroting marketing ideas, it's dumb).

We'll be ok as humans, trust me on this one. It's too big to not fail.

tptacek 6/26/2025|
The problem this author has is with the technology industry, not AI in particular, which really is just a (surprisingly powerful) cohering of forces tech has unleashed over the last 25 years.