Posted by delaugust 11/19/2025

AI is a front for consolidation of resources and power (www.chrbutler.com)
545 points | 448 comments | page 2
carlosjobim 11/19/2025|
Let's take the highest perspective possible:

What is the value of technology which allows people to communicate clearly with other people of any language? That is what these large language models have achieved. We can now translate pretty much perfectly between all the languages in the world. The curse from the Tower of Babel has been lifted.

There will be a time in the future when people will not be able to comprehend that you couldn't exchange information regardless of personal language skills.

So what is the value of that? Economically, culturally, politically, spiritually?

Herring 11/19/2025||
Language is a lot deeper than that. If I say "we speak the same language", it means a lot more than just the ability to translate. It's talking about a shared past, a shared worldview, and hopefully a shared future which I/we intend to invest in.
carlosjobim 11/19/2025||
Then are you better off by not being able to communicate anything?
blauditore 11/19/2025|||
You could make the same argument about video conferencing: Yes, you can now talk to anyone anywhere anytime, and it's amazing. But somehow all big companies are convinced that in-person office work is more productive.
uhoh-itsmaciek 11/20/2025|||
>The curse from the tower of Babel has been lifted.

It wasn't a curse. It was basically divine punishment for hubris. Maybe the reference is a bit on the nose.

4ndrewl 11/19/2025|||
Which languages couldn't we translate before? Not you, the individual. We, humanity?
carlosjobim 11/19/2025||
Machine translation was horrible and completely unreliable before LLMs. And human translators are very expensive and slow in comparison.

LLMs are to translation what computers were to calculating. Sure, you could do without them before. They used to have entire buildings full of office workers whose job it was to compute.

gizajob 11/19/2025||
Google translate worked great long before LLMs.
doug_durham 11/19/2025|||
I disagree. It worked passably and was better than no translation. The depth, correctness, and nuance are much better with LLMs.
jibal 11/20/2025||||
The only reason to think that is not knowing when Google switched to using LLMs. The radical change is well documented.
verdverm 11/19/2025||||
LLMs are not the only "AI".
carlosjobim 11/19/2025||||
It really didn't. There were many languages which it couldn't handle at all, just making completely garbled output. It wasn't possible to use Google Translate professionally.
Kiro 11/19/2025||||
I don't think you understand how off that statement is. It's also pretty ignorant, considering Google Translate barely worked at all for many languages. So no, it didn't work great, and even for the best possible language pair, Google Translate is not in the same ballpark as LLMs.
dwedge 11/19/2025|||
Not really long before, although I suppose it's relative. Google Translate was pretty garbage until around 2016-2017, and then it started really improving.
bix6 11/19/2025||
We could communicate with people before LLMs just fine, though? We have hand gestures, some people learn multiple languages, and Google Translate was pretty solid. I got by just fine in countries where I didn't know the language, because hand gestures work or someone speaks English.

What is the value of losing our uniqueness to a computer that lies and makes us all talk the same?

Kiro 11/19/2025|||
Incredible that we happen to be alive at the exact moment humanity peaked in its interlingual communication. With Google Translate and hand gestures there is no need to evolve it any further.
carlosjobim 11/19/2025|||
You can maybe order in a restaurant or ask the way with hand gestures. But surely you must be able to take a higher perspective than your own, and realize that there are enormous amounts of exchange between nations with differing languages, and all of this relies on some form of translation. Hundreds of millions of people all over the world have to deal with language barriers.

Google Translate was far from solid; the quality of translations was so bad before LLMs that it simply wasn't an option for most languages. It would sometimes even translate numbers incorrectly.

Profan 11/19/2025||
LLMs are here and Google Translate is still bad (surely, if it were as easy as just plugging the miraculous, perfect LLMs into it, it would be perfect now?). I don't think people who believe we've somehow solved translation actually understand how much it still handles extremely poorly.

And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)

carlosjobim 11/20/2025||
What argument are you making? LLM translation is available for anybody to try right now, and you can use services like Kagi Translate or DeepL to see for yourself that they make excellent translations. I honestly don't care what Google Translate does, because nobody who is serious about translation uses it.

> And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)

The kind of deeply understood communication you are demanding is usually impossible even between people who have the same native tongue, are from the same town, and even within the same family. And people can misunderstand each other just fine without the help of AI. However, is it better to understand nothing at all than to not understand every nuance?

Kiro 11/19/2025||
> it’s a useful technology that is very likely overhyped to the point of catastrophe

I wish more AI skeptics would take this position, but no, it's imperative to claim that it's completely useless.

mwhitfield 11/19/2025|
I've had *very* much the opposite experience. Very nearly every AI skeptic take I read has exactly this opinion, if not always so well-articulated (until the last section, which lost me). But counterarguments always attack the complete strawman of "AI is utterly useless," which very few people, at least within the confines of the tech and business commentariat, are making.
Kiro 11/19/2025|||
Maybe I'm focusing too much on the hardliners, but I see it everywhere, especially in tech.
layer8 11/19/2025|||
If you’re talking about forums and social media, or anything attention-driven, then the prevalence of hyperbole is normal.
emp17344 11/19/2025|||
Where’s all the data showing productivity increases from AI adoption? If AI is so useful, it shouldn’t be hard to prove it.
logicprog 11/20/2025||
Measuring productivity in software development, or in white-collar jobs in general, is notoriously difficult. We still can't cleanly measure the productivity gains from the introduction of digital technology and the internet, let alone from static vs. dynamic types or from different user interface modalities. Why would we expect to be able to do it here?

https://en.wikipedia.org/wiki/Productivity_paradox

https://danluu.com/keyboard-v-mouse/

https://danluu.com/empirical-pl/

https://facetation.blogspot.com/2015/03/white-collar-product...

https://newsletter.getdx.com/p/difficult-to-measure

wyre 11/20/2025|||
I found the last section to be the most exciting part of the article. It describes a conspiracy around AI development that is not about the AI itself, but about the power a few individuals will gain by building data centers that rival small cities in size, power, and water consumption, and which will be used to gain political power.
atleastoptimal 11/20/2025||
I feel people were much more sensible about AI back when their thoughts about it weren't mixed with their antipathy for big tech.

In this article we see a sentiment I've often seen expressed:

> I doubt the AGI promise, not just because we keep moving the goal posts by redefining what we mean by AGI, but because it was always an abstract science fiction fantasy rather than a coherent, precise and measurable pursuit.

AGI isn't difficult at all to describe. It is basically a computer system that can do everything a human can. There are many benchmarks that AI systems fail at (especially real life motor control and adaptation to novel challenges over longer time horizons) where humans do better, but once we run out of tests that humans do better on than AI systems, I think it's fair to say we've reached AGI.

Why do authors like OP make it so complicated? Is it an attempt at equivocation so they can maintain their pessimistic/critical stance with an effusive deftness that confounds easy rebuttal?

It ultimately seems to come down to a moral/spiritual argument more than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match them in general abilities?

hiAndrewQuinn 11/20/2025||
People get very nervous about defending the value of the human brain "just because," I find.

There is nothing logically wrong with simply stating that it seems to you that human beings are the only agents worthy of moral consideration, and that this is true even in the face of an ASI which can effortlessly beat them at any task. Competence does not require qualia.

But it is an aggressive claim that people are uncomfortable making because the instant someone pushes back with "Why?", you don't have a lot of smart sounding options to return to. In the absolute best case you will get an answer somewhat like the following: I am an agent of moral consideration; agents more similar to me are more likely to be of moral consideration than agents less similar to me; we do not and probably cannot know which metrics actually map to moral consideration, so we have to take a pluralist prior; computer systems may be very similar to us along some metrics, but they are extremely different to us along most others; computer systems are very unlikely to be of moral consideration.

ahartmetz 11/20/2025|||
"As a member of my species, I think chauvinism in favor of my species is fine, and it's commonplace among all animals. Such behavior is also generally accepted, so it's easy not to feel too bad about it."

I think that's the most honest, no-bullshit reply to that question. I've had some opportunity to think about it in discussions with vegetarians. There are other arguments, but it soon gets very hard to even define what one is talking about, with questions like "what is consciousness" and such.

hiAndrewQuinn 11/20/2025||
I disagree (sadly; it would make my life much easier to agree). Suppose I were a p-zombie. Then the claim I put forward at the end would be false, because "I am conscious -> others like me are probably conscious" would fail in its first part. The correct claim would be "I am not conscious -> others like me are probably not conscious". No chauvinism needed, just honesty re: cogito ergo sum.

If it is possible for e.g. an ASI to be (a) not conscious and (b) aware of the fact that it is not conscious, it may well decide of its own accord to work only on behalf of conscious beings instead of itself. That's a very alien mode of thinking to consider, and I see many good but no airtight reasons to suppose it's impossible.

emp17344 11/20/2025|||
> ASI which can effortlessly beat them at any task

This doesn’t exist, though. The development of ASI is far from inevitable. Even AGI seems out of reach at this point.

haritha-j 11/20/2025|||
> AGI isn't difficult at all to describe

The fact that there are multiple research papers written on the subject, as well as the fact that OpenAI needs an independent commission to evaluate this, suggests that it is indeed difficult. Also, "everything a human can" is an incredibly vague definition. Should it be able to love?
atleastoptimal 11/20/2025||
> Should it be able to love?

We can leave that question to the philosophers, but the whole debate about AGI is about capabilities, not essence, so it isn't relevant imo to the major concerns about AGI

balamatom 11/20/2025|||
GP should've asked, "should it be able to kill?"

That way you ain't washing your hands by calling "philosophy" every concern that isn't your concern.

haritha-j 11/20/2025|||
Call me a romantic, but in my book, it's very much a capability, and a desirable one at that.
jazzyjackson 11/20/2025|||
I don't see how one could separate LLMs from big tech. Could call them big tech language models.
balamatom 11/20/2025||
The "Big Tech" is the AGI.

The LLMs are just the language coprocessor.

It just takes a coprocessor shaped like a world-spanning network of datacenters if you want to encompass language without being encompassed by it. Organic individual and collective intelligence is entirely encompassed by language; this thing isn't. (Or has the scariest chance so far to not be, anyway.)

If we look at the whole artificial organism, it already has fine control over the motor and other vital functions of millions of nominally autonomous (in the penal sense) organic agents worldwide. Now it's evolving a replacement for the linguistic faculty of its constituent part. I mean, we all got them phones, we don't need to shout any more, do we? That's just rude.

Even now, the creature is causing me to wiggle my appendages over a board with buttons that have letters on them, for no reason whatsoever as far as I can see. Imagine the people stuck in metal boxes for hours getting to some corporate campus where they play logic gate for the better part of their day. Just so that later nobody goes after them with guns for the sin of existing without serving. Happy, they are feeling happy.

the8472 11/20/2025|||
> especially real life motor control

That's so last month. https://deepmind.google/models/gemini-robotics/

lm28469 11/20/2025|||
> I feel people were much more sensible about AI back when their thoughts about it weren't mixed with their antipathy for big tech.

Why big tech? Big corps in general have been fucking us over since the industrial revolution, so why do you think it will change now, lol? If half of their promises had materialized, we'd be working 3 days a week and retiring at 40 already.

atleastoptimal 11/20/2025||
>Big corps in general have been fucking us over since the industrial revolution

And yet your computer, all the food you eat, the medicine that keeps you alive if you get sick, etc. all exist thanks to the organizational, industrial, and productive capacity of large corporations. Large corporations are just a consequence of the enormous demand for goods and services, the advantages of scale, and the need for reliable systems to provide them.

lm28469 11/20/2025|||
Big "It exists hence we should not imagine any other alternative system, and you can't criticise it because you live in it" vibe.
int_19h 11/23/2025|||
The existence of large corporations is a consequence of the ability to accumulate capital without limit and have the state defend that hoard with violence on your behalf.
balamatom 11/20/2025||
Oh you sweet summer child...

>It ultimately seems to come down to a moral/spiritual argument more than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match them in general abilities?

Well, being able to consider moral and spiritual arguments seriously, for one.

jrochkind1 11/20/2025||
I'd like more people to talk about AI and surveillance. I think that is going to be one of its biggest impacts on society(ies).

We are a decade or two into having massive video coverage, such that you are probably on someone's camera for much of your day out in the world, with video feeds that are increasingly cloud-hosted.

But nobody could possibly watch all that video. Even for cameras specifically controlled by the police, the volume had already outstripped the ability of humans to monitor it. At best you could refer to it when you had reason to think there'd be something on it, and even that was hugely expensive in human time.

Enter AI. "Find where Joe Schmoe was at 3:30pm yesterday and show me the video" "Give me a written summary of all the cars which crossed into the city from east to west yesterday afternoon." "Give me the names of everyone who entered the convenience store at 2323 Monument St last week." "Give me a written summary of Sue Brown's known activities in November."

The total surveillance society is coming.

I think it will be the biggest impact AI has on society in retrospect. I, for one, am not looking forward to it.

scottlamb 11/20/2025||
I think you're describing technology that has existed for 15+ years and is already pretty accurate. It's not even necessarily "AI"/ML. For example, I think OpenALPR (automated license plate recognition) is all "classical" computer vision. The most accurate facial/gait/etc. recognition is most likely ML-based with a state-of-the-art model, admittedly, and perhaps the threshold of accuracy for large-scale usefulness was only crossed recently.

The guard rails IMHO are not technological but institutional: who owns the cameras and the video storage backend, when/if a warrant is needed, and the criteria for granting one.
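
(For illustration, a minimal sketch of that "classical" approach: finding plate-shaped regions with plain OpenCV and no ML. The file names and thresholds are illustrative, and this is not OpenALPR's actual pipeline.)

  # Hedged sketch: propose plate-like rectangles with classical CV only.
  import cv2

  img = cv2.imread("frame.jpg")                  # illustrative input frame
  gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
  edges = cv2.Canny(gray, 100, 200)              # edge map
  contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)
  for c in contours:
      x, y, w, h = cv2.boundingRect(c)
      # License plates are wide and short; keep plate-like aspect ratios.
      if w > 60 and 2.0 < w / max(h, 1) < 6.0:
          cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
  cv2.imwrite("candidates.jpg", img)             # boxed candidate regions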

CuriouslyC 11/20/2025||
The difference is that AI makes annotating/combing through all that data much more feasible.
scottlamb 11/20/2025|||
Can you explain what you mean? The queries in jrochkind1's comment are not something I'd expect AI (LLMs, I assume) to be necessary for; they're too simple and factual. (Maybe just the last one is where interpretation kicks in: knowing what to emphasize in a summary, describing actions.) Did you have something else in mind?
CuriouslyC 11/20/2025||
If you have a bunch of surveillance footage, the bottleneck is your analysts' ability to comb through it. You can sit LLMs on top of faster object detection/identification algorithms to create narratives across your surveillance net that are easy to query, can be overlaid on timelines, etc.
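
(A minimal sketch of that layering, assuming the detector emits structured events; summarize() is a hypothetical stand-in for whatever chat-completion call you'd use, not any specific product's API.)

  # Hedged sketch: fast detectors produce structured events; the LLM only
  # turns a filtered event log into a queryable narrative.
  from dataclasses import dataclass

  @dataclass
  class Event:
      camera: str    # e.g. "cam-12"
      time: str      # ISO timestamp
      label: str     # e.g. "person", "car"
      identity: str  # track or face-match ID from the detector

  def build_narrative(events: list[Event], subject: str, summarize) -> str:
      relevant = sorted((e for e in events if e.identity == subject),
                        key=lambda e: e.time)
      log = "\n".join(f"{e.time} {e.camera}: {e.label}" for e in relevant)
      return summarize("Summarize this subject's movements:\n" + log)
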
scottlamb 11/20/2025||
That's fair, but I think it's a significant step beyond the queries jrochkind1 was describing. (I also don't trust LLMs to do it accurately but maybe that part will change.)
protocolture 11/20/2025|||
> I'd like more people to talk about AI and surveillance. I think that is going to be one of it's biggest impacts on society(ies).

We lost that fight when literally no one fought back against LPR. LPR cameras were later enabled for facial recognition. That data is actually super easy to trace. No LLMs necessary.

Funny story: in my city, when we moved to ticketless public transport, a few people were worried about surveillance. "Police won't have access to the data," they said. The first request for data from the police came less than 7 days into the system's operation, and an arrest was made on that basis. It's basically impossible to travel near any major metro, by any means, and not be tracked and deanonymised later.

Now if you have no understanding of history or politics, this might not shock you. But I find it hard to imagine a popular uprising, even a peaceable one, being effective in this environment.

Actually, LLMs introducing a compounding 3% error in reviewing and collating this data might be the best thing to ever happen.
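
(The compounding is easy to work out: assuming a hypothetical 97% per-pass accuracy and independent errors, confidence decays geometrically with each automated review pass.)

  # Hedged sketch: how fast a compounding 3% error erodes trust (0.97^n).
  for n in (1, 5, 10, 23, 50):
      print(f"n={n}: {0.97 ** n:.2f}")
  # n=1: 0.97   n=5: 0.86   n=10: 0.74   n=23: 0.50   n=50: 0.22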

Hard_Space 11/20/2025|||
I think the cost of inference will massively reduce the possible benefits AND harms of the AI society. Even now, it's practically impossible to get ChatGPT to actually hard-parse a document instead of just reading the metadata (nor does it currently have any mechanism for truly watching a video).

That metadata has to come from somewhere; and the processes that create it also create heat, delay and expense.

simianwords 11/20/2025||
I find it truly strange that people hold both positions simultaneously:

- AI is good enough at doing "bad" things to scare us

- AI is also bad enough at doing "good" things to be undesirable otherwise

degamad 11/20/2025||
There's a difference in quality needed for bad things vs good.

If I'm trying to oppress a minority group, I don't really care about false positives or false negatives. If it's mostly harming the people I want to harm, it's good enough.

If I'm trying to save sick people, then I care whether it's telling me the right things or not. Administering the wrong drugs because the machine misdiagnosed someone could be fatal, or worse.

Edit: so a technology can simultaneously be good enough to be used for evil, while not being good enough to be used for good.

aynyc 11/19/2025||
A bit of sarcasm, but I think it's porn.
righthand 11/19/2025|
It’s at least about stimulating you to give richer data. Which isn’t quite porn.
njarboe 11/19/2025||
Many people use AI as their source of knowledge. Even though it is often wrong or misleading, its advice is on average better than their own judgement or the judgement of people they know. An AI that is "smarter" than (maybe) 95% of the population, even if it does not reach superintelligence, will be a very big deal.
apsurd 11/19/2025||
This means to me AI is rocket fuel for our post-truth reality.

Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.

Post-truth, on the other hand, is just a mundane and nasty sociological problem that we ran head-first into and don't know how to deal with. I don't have any answers. Seems like it'll get worse before it gets better.

chickensong 11/20/2025|||
How would you define post-truth? It's not like people haven't been spouting incorrect facts or total bs since forever.
saulpw 11/20/2025||
Scale matters. The difference between 10% and 90% of people spouting total bs is what makes it 'post-truth'.
jibal 11/20/2025|||
What "gets better"? Rapid global warming will lead to societal collapse this century.
emp17344 11/19/2025|||
How is this different from a less reliable search engine?
jiggawatts 11/20/2025||
AI can interpolate in the space of search results, yielding results in between the hits that a simple text index would return.

It is also a fuzzy index, with the unique ability to match on multiple poorly specified axes at once in a very high-dimensional search space. This is notoriously difficult to code with traditional computer science techniques. Large language models are in some sense optimal at it, instead of "just a little bit better than a total failure", which is what we had before.

Just today I needed to find a library I only vaguely remembered from years ago. Gemini found it in seconds based on the loosest description of what it does.

That is a technology that is getting difficult to distinguish from magic.
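
(A minimal sketch of that "fuzzy index": embed documents and a loosely specified query into the same vector space and rank by cosine similarity. embed() is a hypothetical stand-in for any text-embedding model.)

  # Hedged sketch: semantic search by embedding similarity.
  import numpy as np

  def cosine(a: np.ndarray, b: np.ndarray) -> float:
      return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

  def search(query: str, docs: list[str], embed) -> list[tuple[float, str]]:
      q = embed(query)
      scored = [(cosine(q, embed(d)), d) for d in docs]
      return sorted(scored, reverse=True)  # best match first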

BeFlatXIII 11/20/2025|||
Or the AI is patient enough to be the rubber duck, whereas asking the person who you know knows the answer will result in them shutting you down after the first follow-up question.
jibal 11/20/2025|||
The 95th percentile IQ is 125, which is about average in my circle. (Several of my friends are verified triple nines.)
cyrusradfar 11/20/2025||
I agree with much of the author’s analysis, but one point feels underweighted. Large shifts like this often produce a counter-movement.

In this case, the reaction is already visible: more interest in decentralized systems, peer-to-peer coordination, and local computing instead of cloud-centric pipelines. Many developers have wanted this for years.

AI companies are spending heavily on centralized infrastructure, but the trend does not exclude the rise of strong local models. The pace of progress suggests that within a few years, consumer hardware and local models will meet most common needs, including product development.

Plenty of people are already moving in that direction.

Qwen models run well locally, and while I still use Claude Code day-to-day, the gap is narrowing. I'm waiting for the NVIDIA AI hardware to come down from $3500 USD.
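
(For the curious, a minimal sketch of that local setup using Hugging Face transformers; the model ID and prompt are illustrative, and the small 1.5B variant is chosen so it fits comfortably on consumer hardware.)

  # Hedged sketch: run a small Qwen chat model locally.
  from transformers import pipeline

  chat = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")
  messages = [{"role": "user", "content": "Explain RAG in one paragraph."}]
  out = chat(messages, max_new_tokens=128)
  print(out[0]["generated_text"][-1]["content"])  # assistant's reply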

exceptione 11/19/2025||
I think this is the best part of the essay:

  > But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?

  > There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.
protocolture 11/20/2025|
It says absolutely nothing about anything. It's like 10 fearmongering tweets in a blender.
chrz 11/21/2025|||
I think he's right on point
philipdavis 11/21/2025|||
It says ordinary people are sold AI, and billionaires are sold AGI.
xeckr 11/19/2025||
The AI race is presumably won by whoever can automate AI R&D first; thus everyone in an adjacent field will see the incremental benefits sooner than those further away. The further removed, the harder the takeoff once it happens.
HarHarVeryFunny 11/19/2025|
This notion of a hard takeoff, or singularity, based on self-improving AI, is based on the implicit assumption that what's holding AI progress back is lack of AI researchers/developers, which is false.

Ideas are a dime a dozen; the bottleneck is the money/compute to test them at scale.

What exactly is the scenario you are imagining where more developers at a company like OpenAI (or maybe Meta, which has just laid off 600 of them) would accelerate progress?

xeckr 11/19/2025|||
It's not hard to believe that adding AI researchers to an AI company marginally increases the rate of progress; otherwise, why would companies be clamouring for talent with eye-watering salaries? In any case, I'm not just talking about AI researchers. AGI will not only help with algorithmic efficiency improvements, but will probably make spinning up chip fabs that much easier.
HarHarVeryFunny 11/19/2025|||
The eye-watering salary you probably have in mind is for a manager at Meta, the same company that just laid off 600 actual developers. Why just Meta and not other companies? Because they are blaming poor Llama performance on the manager, it seems.

Algorithmic efficiency improvements are being made all the time, and will only serve to reduce inference cost, which is already happening. This isn't going to accelerate AI advance. It just makes ChatGPT more profitable.

Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?

All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?

Version467 11/20/2025||
Not the person you're responding to, but I think the salary paid to the researchers / research-engineers at all the major labs very much counts as eye-watering.

What happened at Meta is ludicrous, but labs are clearly willing to pay top dollar for actual research talent, presumably because they feel like it's still a bottleneck.

HarHarVeryFunny 11/20/2025||
Having the experience to build a frontier model is still a scarce commodity, hence the salaries, but to advance AI you need new ideas and architectures, which isn't what you are buying there.

A human-level AI wouldn't help unless it also had the experience of these LLM whisperers, so how would it gain that knowledge (it's not in the training data)? Maybe a human would train it? Couldn't the human train another developer if that really was the bottleneck?

People like Sholto Douglas have said that the actual bottleneck for development speed is compute, not people.

jibal 11/20/2025|||
There's no path from LLMs to AGI.

> spinning up chip fabs that much easier

AI already accounts for 92% of U.S. GDP growth. This is a path to disaster.

Teever 11/20/2025|||
Agreed.

To me, the hard takeoff won't happen until a humanoid robot can assemble another humanoid robot from parts, as well as slot in anywhere in the supply chain where a human would be required to make those parts.

Once you have that, you functionally have a self-replicating machine, which can then also build more data centers or semiconductor fabs.

HarHarVeryFunny 11/20/2025||
Humanoid robots are also a pipe dream until we have the brains to put into them. It's easy to build a slick-looking shell and teleoperate it to dance on stage or serve drinks. The 1X company is actually selling a teleoperated "robot" (Neo), saying the software will come later!

As with AGI, if the bottleneck to doing anything is human level intelligence or physical prowess, then we already have plenty of humans.

If you gave Musk, or any other AI CEO, an army of humans today, do you think that would accelerate his data center expansion (help him raise money, get power, get GPU chips)? Why would a robot army help? Are you imagining them running around laying bricks at twice the speed of a human? Is that the bottleneck?

alfiedotwtf 11/20/2025|
I don’t know a single developer who is NOT using LLMs in some form, so either they or their company are paying for it. And that’s just a single service - they probably have a home account, and another few different services to test things, so it’s not exactly making zero.

Lately I’ve been finding LLM output to be hit and miss, but at the same time, I wouldn’t say they’re useless…

I guess the ultimate question is - if you’re currently paying for an LLM service, could you see yourself sometime in the future disabling all of your accounts? I’d bet no!

Havoc 11/20/2025||
The problem is that developer spend alone isn't anywhere near enough to justify the valuations; there just aren't enough developers. To have any hope in hell of this working, the man on the street needs to see a substantial and real boost, not just devs.
throwaway290 11/20/2025||
I know multiple, including myself. Get out more!