
Posted by smartmic 6/26/2025

AI Is Dehumanization Technology (thedabbler.patatas.ca)
158 points | 172 comments
suthakamal 6/26/2025|
[flagged]
yupitsme123 6/26/2025||
Do you really feel that skipping LLMs would be like skipping the industrial revolution, electricity, or the internet? Twenty years from now where do you see societies that "embrace" this technology vs. ones that don't?

It's obvious what electricity and mass production can do to improve the prosperity and happiness of a given society. It's not so obvious to me what benefits we'd be missing out on if we just canceled LLMs at this point.

suthakamal 6/26/2025|||
LLMs aren’t the end all be all of anything. But they’re clearly a step towards augmenting human cognition and toward giving machines the ability to perform cognitive tasks. And when Google says a quarter of its code is being written by LLMs, and DeepMind is making tremendous progress on protein folding and DNA understanding with fundamentally the same technology, it seems pretty clear that we’d miss out on a lot without this.

Full disclosure: I think protein folding and DNA prediction could quite possibly be the biggest advancement in medicine, ever. And still, all the critiques of LLMs being janky and not nearly sufficient to be generally intelligent are true.

So yes, I think it’s absolutely on the scale of electrification.

yupitsme123 6/26/2025||
When I look at the problems in my life, in my country, or in the world around me, not once has it occurred to me that they were due to a lack of advanced pattern recognition or DNA prediction.

When people were dying of hunger then being able to create more food was obviously a huge win. Likewise for creating light where people used to live in darkness.

But contemporary technologies solve non-problems and take us closer to a future no one asked for, when all we want is cheaper rent, cheaper healthcare, and less corruption.

suthakamal 6/26/2025||
You don’t think protein folding and dna prediction will yield better healthcare?
yupitsme123 6/26/2025||
I said cheaper, not better. What difference does it make if it's better when only a few people can afford it? I also don't accept longer lifespans as something that is always worth pursuing.

You also didn't address my point that those technologies do nothing to solve the real problems that real people want solved. There's a strong possibility that they'll just exacerbate them.

suthakamal 6/27/2025||
I guess your argument could be leveled against any transformational technology, from the industrial age through to the internet (which many doubted would have any meaningful economic impact, and clearly didn't solve many of the most pressing problems of the day for humanity).
mm263 6/26/2025|||
By "Luddite," you mean "resist progress, therefore bad." Progress is not inherently bad. Luddites didn't say it is; this blog post doesn't say it is either. We are currently rushing forward with implementing AI everywhere, as much as possible, and what these posts (thinking about Xe Iaso) urge you to think about is how this new revolutionary technology affects us, society, the people who will be displaced by it. If it will yield a disproportionate amount of misery, then we should oppose it on moral grounds. There's no guarantee of ASI heaven or hell, so it's merely prudent to think about the repercussions. We didn't think - damn, we couldn't even approach imagining - all of the repercussions of replacing traditional agriculture with industrial agriculture, of the industrial revolution, of the internet, so maybe, with technology this powerful, it would be sensible to think about the repercussions before we upend the social order once again.
giraffe_lady 6/26/2025|||
> The idea that we could just reject the technology feels kind of like a Luddite reaction to it.

The luddites were a labor movement protesting how a new technology was used by mill owners to attack collective worker power in exchange for producing a worse product. Their movement failed but they were right to fight it. The lesson we should take from them isn't to give up in the face of destabilizing technological change.

shadowgovt 6/26/2025|||
> Their movement failed but they were right to fight it. The lesson we should take from them isn't to give up in the face of destabilizing technological change.

Hard to say. They sort of represented the specialist class being undermined by technology de-specializing their skillset. This is in contrast to labor strikes and riots which were executed by unskilled labor finding solidarity to tell machine owners "your machine is neat but unless you meet our demands, you'll be running around trying to operate it alone." Luddites weren't unions; they were guilds.

One was an attempt to maintain a status quo that was comfortable for some and kept products expensive, the other was a demand that given the convenience afforded by automation, the fruits of that convenience be diffused through everyone involved, not consolidated in the hands of the machine owners.

suthakamal 6/26/2025||||
They were wrong to believe that technological progress could be stopped. The viable path is policy that ensures the gains are fairly distributed, not trying to break the machines. That tactic has never worked and never will.
xg15 6/26/2025|||
> The viable path is policy which ensures the gains are fairly distributed, not try to break the machines.

This was exactly what the historical Luddite movement was trying to achieve. The industrialists responded with "lol no". Then came the breaking of machines.

NegativeLatency 6/26/2025||||
I don't want to start a snippy argument, so sorry if this sounds combative, but when you realize that there isn't a "policy which ensures the gains are fairly distributed", then what would you suggest?

Unionization and collective action does work, it's why we have things like the concept of the weekend. It's also generally useful when advocating change to have a more extreme faction.

suthakamal 6/26/2025||
Ranked choice voting is a good start. The New York mayoral primary is a hopeful sign.
zorked 6/26/2025||||

  But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”
https://www.smithsonianmag.com/history/what-the-luddites-rea...
DrillShopper 6/26/2025||||
> The viable path is policy which ensures the gains are fairly distributed

Okay, so where are those? Where are even the proposals for those?

What would you propose? What do you think is fair distribution of these gains?

suthakamal 6/27/2025||
High taxes, and ranked choice voting. New York's mayoral primary is a hopeful sign to me.
jrflowers 6/26/2025|||
This is a good point. When has breaking stuff and disrupting productivity as a form of protest ever worked? It’s not like battles are fought with violence. They are fought through people doing Policy in their heads, which sort of just naturally becomes Policy out in the world on its own.
suthakamal 6/27/2025||
I'm not saying protest doesn't work. I'm saying rejecting technology never has.
jrflowers 6/27/2025||
That isn’t what luddites did though. I wrote my earlier post quite a while after several other folks clearly and eloquently explained that in response to your post. I figured you would’ve been up to speed about tactics vs goals vis a vis the luddites by the time you got to that post
suthakamal 6/27/2025||
That their attempts to change policy didn't work doesn't mean that changing policy is the wrong objective, merely that their tactics failed.
mouse_ 6/26/2025|||
Preach!
mouse_ 6/26/2025||
> Any information processing technology can be argued to be a surveillance technology.

The telemetric enclosure movement and its consequences have been a disaster for humanity, and advancements in technology are now doing more harm than good. Life expectancy is dropping for the first time in ages, and the generational gains in life expectancy had a lot of inertia behind them. That's all gone now.

danielbln 6/26/2025||
Any sources to back that up? All I can find is rising life expectancy across the board globally, with a dip during the pandemic that almost all countries have recovered from. The US has been a bit sluggish there, but still.
suthakamal 6/26/2025||
Yes. There has been a regression in these metrics for white folks in the US. This is the first generation of whites in America who can expect to earn less and live shorter lives than their parents. However, that doesn’t generalize to the rest of the population, or world, and in America the reasons are policy: healthcare and education. Not because AI or tech broadly is particularly pernicious.
RS-232 6/26/2025||
Were water mills, spinning jennies, and printing presses dehumanizing too?
GeoAtreides 6/26/2025||
In a way; water mills and spinning jennies led to the dickensian horrors of the textile mills: https://www.hartfordstage.org/stagenotes/acc15/child-labor

Industrialisation itself, although it increased material output, decimated the lives and spirits of those who worked in factories.

And the printing press led to the Reformation and the thirty years war, one of the most devastating wars ever.

mchusma 6/26/2025||
...and led to our current time of maximal abundance, free time, leisure, freedom to work in more ways, and peace.
relaxing 6/26/2025|||
Yes, there are many books written about the dehumanizing aspects of the industrial revolution.

Consider we still place particular value on products which are “artisanal” or “hand crafted.”

ACCount36 6/26/2025|||
Of course!

There were people whose entire identities were tied to being able to manually copy a book.

Just imagine how much they seethed as the printing press was popularized.

card_zero 6/26/2025||
Is that how it went, I wonder?

https://academic.oup.com/book/27066?login=false

Seems the scribes kept going for a good hundred years or so, doing all the premium and arty publications.

micromacrofoot 6/26/2025|||
Kinda? https://en.wikipedia.org/wiki/Luddite

> The Luddite movement began in Nottingham, England, and spread to the North West and Yorkshire between 1811 and 1816.[4] Mill and factory owners took to shooting protesters and eventually the movement was suppressed by legal and military force, which included execution and penal transportation of accused and convicted Luddites.

stirfish 6/26/2025|||
I think there are quite a few dehumanizing aspects of the industrial revolution. It wasn't just the water mills, but rather the lengths we put people through to keep them running.
bilbo0s 6/26/2025|||
We don't even need to go back that far.

All these arguments could be made for, say, news media, or social media.

AI being singled out is a bit disingenuous.

If it is dehumanizing, it is because our collective labor, culture, and knowledge base have concerted to make it so.

I guess, people should really think of it this way: A database is garbage in, garbage out, but you shouldn't blame the database for the data.

relaxing 6/26/2025||
All those arguments have been and are still being made for MSM and social media. AI is not being singled out.
tines 6/26/2025||
No, because they aren't the same. Those things are tools that reallocate cognitive burden. LLMs destroy cognitive burden. LLMs cause cognitive decline, a spinning jenny doesn't.
bilbo0s 6/26/2025||
I don't know man?

Gonna have to disagree there. A lot of models are being used to reallocate cognitive burden.

A phd level biologist with access to the models we can envision in the future will probably be exponentially more valuable than entire bio startups are today. This is because s/he will be using the model to reallocate cognitive burden.

At the same time, I'm not naive. I know that there will be many, many non-PhD-level biologist wannabes who attempt to use models to remove cognitive burden entirely. But what they will discover is that they are unable to hold a candle to the domain expert reallocating cognitive burden.

Models don't cause cognitive decline. They make cognitive labor exponentially more valuable than it is today. With the problem being that it creates an even more extreme "winner take all" economic environment that a growing population has to live in. What happens when a startup really only needs a few business types and a small team of domain experts? Today, a successful startup might be hundreds of jobs. What happens when it's just a couple dozen? Or not even a dozen? (Other than the founders and investors capturing even more wealth than they do presently.)

ololobus 6/26/2025|||
I'd totally agree with this point if we assume that efficiency/performance growth will flatten at some point. For example, if it becomes logarithmic soon, then progress will come slowly over the next decades. And then, yes, it will likely turn out that current software developers, engineers, scientists, etc., just got an enormously powerful tool, one which knows many languages almost perfectly and _briefly_ knows the entire internet.

Yet, if we trust all these VC-backed AI startups and assume that it will keep growing rapidly, e.g., at least linearly, over the next few years, I'm afraid it may indeed reach a superhuman _intelligence_ level (let's say p99 or maybe even p999 of the population) in most areas. And then why do you need this top-notch smart-ass human biologist if you can just buy a few racks of TPUs?

bilbo0s 6/27/2025||
Because only the biologist knows what assays to ask the super human intelligence for. And how the results affect the biomolecular process you want to look at.

If you can’t ask the right questions, like everyone without a phd in biology, you’re kind of out of luck. The superhuman intelligence will just spin forever trying to figure out what you’re talking about.

tines 6/26/2025|||
It doesn't really matter what something can be used for, it matters what it will be used for most of the time. Television can be used for reading books, but people mostly don't use it that way. Smartphones can be used for creation, but people mostly don't use them that way. You've got Satya Nadella on a stage saying AI makes you a better friend because it can reply to messages from your friends for you. We are creating, and to a large extent have created, a world that we will not want to live in, as evidenced by skyrocketing depression and the loneliness epidemic.

Read Neil Postman or Daniel Boorstin or Marshall McLuhan or Sherry Turkle. The medium is the message.

wagwang 6/26/2025|
This blog post basically reads as, AI doesn't always adhere to my leftist values.
DowsingSpoon 6/26/2025|
Murder also doesn’t adhere to my leftist values, which is to say, your statement is useless without being specific about which values AI doesn’t adhere to, and why you think that’s not a problem at all. The article explicitly calls out the “deeply-held values of justice, fairness, and duty toward one another.” Are these the specific leftist values you’re so dismissive of?
Terr_ 6/26/2025|||
Or to invert it, what "non-leftist values" does grandparent poster believe are lacking? (Hopefully the answer isn't "...You know the ones.")
charcircuit 6/26/2025|||
For example:

>What happens to people's monthly premiums when a US health insurance company's AI finds a correlation between high asthma rates and home addresses in a certain Memphis zip code? In the tradition of skull-measuring eugenicists, AI provides a way to naturalize and reinforce existing social hierarchies, and automates their reproduction.

This sentence is about how AI may be able to more effectively apply the current values of society as opposed to the author's own values. It also fails to recognize that for things like insurance there are incentives to reduce bias to avoid mispricing policies.

>The logical answer is that they want an excuse to fire workers, and don't care about the quality of work being done.

This sentence shows that the author perceives that AI may harm workers. Harming workers appears to be against her values.

>This doesn't inescapably lead to a technological totalitarianism. But adopting these systems clearly hands a lot of power to whoever builds, controls, and maintains them. For the most part, that means handing power to a handful of tech oligarchs. To at least some degree, this represents a seizure of the 'means of production' from public sector workers, as well as a reduction in democratic oversight.

>Lastly, it may come as no surprise that so far, AI systems have found their best product-market fit in police and military applications, where short-circuiting people's critical thinking and decision-making processes is incredibly useful, at least for those who want to turn people into unhesitatingly brutal and lethal instruments of authority.

These sentences show that the author values people being able to break the law.

Terr_ 6/27/2025|||
> > high asthma rates and home addresses in a certain Memphis zip code [...] naturalize and reinforce existing social hierarchies

> This sentence is about how AI may be able to more effectively apply the current values of society

*whoooosh*

No, it's about how poor people growing up in polluted regions can be kept poor by the damage being inflicted upon them.

Keeping a permanent, poor, hereditary underclass is not a "current value" of society at large.

DrillShopper 6/26/2025|||
This post reads like a really bad LLM
charcircuit 6/26/2025||
I tried to keep my explanation simple since it appeared the other commenter had trouble understanding the author's views on AI, which were pretty clear when I read over it. The other commenter called out a set of values that were quoted from a discussion unrelated to AI.