Posted by delaugust 11/19/2025
What is the value of a technology that allows people to communicate clearly with speakers of any language? That is what these large language models have achieved. We can now translate pretty much perfectly between all the languages in the world. The curse of the Tower of Babel has been lifted.
There will be a time in the future when people will not be able to comprehend that you once couldn't exchange information regardless of your personal language skills.
So what is the value of that? Economically, culturally, politically, spiritually?
It wasn't a curse. It was basically divine punishment for hubris. Maybe the reference is a bit on the nose.
LLMs are to translation what computers were to calculating. Sure, you could do without them before. They used to have entire buildings full of office workers whose job it was to compute.
What is the value of losing our uniqueness to a computer that lies and makes us all talk the same?
Google Translate was far from solid; the quality of its translations before LLMs was so bad that it simply wasn't an option for most languages. It would sometimes even translate numbers incorrectly.
And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)
> And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)
The kind of deeply understood communication you are demanding is usually impossible even between people who share the same native tongue, come from the same town, or belong to the same family. And people can misunderstand each other just fine without the help of AI. However, is it really better to understand nothing at all than to not catch every nuance?
I wish more AI skeptics would take this position, but no, it's imperative to claim that it's completely useless.
https://en.wikipedia.org/wiki/Productivity_paradox
https://danluu.com/keyboard-v-mouse/
https://danluu.com/empirical-pl/
https://facetation.blogspot.com/2015/03/white-collar-product...
In this article we see a sentiment I've often seen expressed:
> I doubt the AGI promise, not just because we keep moving the goal posts by redefining what we mean by AGI, but because it was always an abstract science fiction fantasy rather than a coherent, precise and measurable pursuit.
AGI isn't difficult at all to describe. It is basically a computer system that can do everything a human can. There are still many benchmarks (especially real-life motor control and adaptation to novel challenges over longer time horizons) where humans beat AI systems, but once we run out of tests that humans do better on, I think it's fair to say we've reached AGI.
Why do authors like OP make it so complicated? Is it an attempt at equivocation, so they can maintain their pessimistic/critical stance with an effusive deftness that confounds easy rebuttal?
It ultimately seems to come to a more moral/spiritual argument than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match it in general abilities?
There is nothing logically wrong with simply stating that it seems to you that human beings are the only agents worthy of moral consideration, and that this is true even in the face of an ASI which can effortlessly beat them at any task. Competence does not require qualia.
But it is an aggressive claim that people are uncomfortable making, because the instant someone pushes back with "Why?", you don't have a lot of smart-sounding options to fall back on. In the absolute best case you will get an answer somewhat like the following: I am an agent worthy of moral consideration; agents more similar to me are more likely to be worthy of moral consideration than agents less similar to me; we do not, and probably cannot, know which metrics actually map to moral consideration, so we have to take a pluralist prior; computer systems may be very similar to us along some metrics, but they are extremely different along most others; therefore computer systems are very unlikely to be worthy of moral consideration.
I think that's the most honest, no-bullshit reply to that question. I've had some opportunity to think about it in discussions with vegetarians. There are other arguments, but it soon gets very hard to even define what one is talking about once you reach questions like "what is consciousness" and such.
If it is possible for e.g. an ASI to be (a) not conscious and (b) aware of the fact that it is not conscious, it may well decide of its own accord to work only on behalf of conscious beings instead of itself. That's a very alien mode of thinking to consider, and I see many good but no airtight reasons to suppose it's impossible.
This doesn’t exist, though. The development of ASI is far from inevitable. Even AGI seems out of reach at this point.
We can leave that question to the philosophers, but the whole debate about AGI is about capabilities, not essence, so it isn't relevant, IMO, to the major concerns about AGI.
That's just washing your hands of it by calling "philosophy" every concern that isn't your own.
The LLMs are just the language coprocessor.
It just takes a coprocessor shaped like a world-spanning network of datacenters if you want to encompass language without being encompassed by it. Organic individual and collective intelligence is entirely encompassed by language; this thing isn't. (Or it has the scariest chance so far of not being, anyway.)
If we look at the whole artificial organism, it already has fine control over the motor and other vital functions of millions of nominally autonomous (in the penal sense) organic agents worldwide. Now it's evolving a replacement for the linguistic faculty of its constituent part. I mean, we all got them phones, we don't need to shout any more, do we? That's just rude.
Even now, the creature is causing me to wiggle my appendages over a board with buttons that have letters on them, for no reason whatsoever as far as I can see. Imagine the people stuck in metal boxes for hours getting to some corporate campus where they play logic gate for the better part of their day. Just so that later nobody goes after them with guns for the sin of existing without serving. Happy, they are feeling happy.
That's so last month. https://deepmind.google/models/gemini-robotics/
Why big tech? Big corps in general have been fucking us over since the industrial revolution; why do you think it will change now, lol? If half of their promises had materialized, we'd already be working three days a week and retiring at 40.
And yet your computer, all the food you eat, the medicine that keeps you alive when you get sick, and so on are all due to the organizational, industrial, and productive capacity of large corporations. Large corporations exist because of the enormous demand for goods and services and the advantages of scale in providing them reliably.
>It ultimately seems to come to a more moral/spiritual argument than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match it in general abilities?
Well, being able to consider moral and spiritual arguments seriously, for one.
We are a decade or two into having massive video coverage, such that out in the world you are probably on someone's camera for much of your day, and the video feeds are increasingly cloud hosted.
But nobody could possibly watch all that video. Even the cameras controlled directly by the police had already outstripped our ability to have humans monitoring them. At best you could refer to the footage when you had reason to think there would be something on it, and even that was hugely expensive in human time.
Enter AI. "Find where Joe Schmoe was at 3:30pm yesterday and show me the video" "Give me a written summary of all the cars which crossed into the city from east to west yesterday afternoon." "Give me the names of everyone who entered the convenience store at 2323 Monument St last week." "Give me a written summary of Sue Brown's known activities in November."
The total surveillance society is coming.
I think it will be the biggest impact AI has on society in retrospect. I, for one, am not looking forward to it.
The guard rails, IMHO, are not technological: they are who owns the cameras and the video storage backend, when (if ever) a warrant is needed, and the criteria for granting one.
We lost that fight when literally no one fought back against license plate recognition (LPR). LPR cameras were later enabled for facial recognition. That data is actually super easy to trace. No LLMs necessary.
Funny story: in my city, when we moved to ticketless public transport, a few people were worried about surveillance. "Police won't have access to the data," they said. The first police request for data came less than 7 days into the system's operation, and an arrest was made on that basis. It's basically impossible to travel near any major metro, by any means, and not be tracked and deanonymised later.
Now if you have no understanding of history or politics, this might not shock you. But I find it hard to imagine a popular uprising, even a peaceable one, being effective in this environment.
Actually LLMs introducing a compounding 3% error in reviewing and collating this data might be the best thing to ever happen.
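To make "compounding" concrete, here's a minimal sketch (the 97% per-step accuracy is just an assumption taken from the 3% figure above) of how fast such an error rate eats through a chain of automated review steps:

```python
# Rough sketch: how an assumed 3% per-step error compounds across a pipeline
# of automated review/collation steps.
per_step_accuracy = 0.97  # assumed from the "3% error" above

for steps in (1, 5, 10, 20, 50):
    remaining = per_step_accuracy ** steps
    print(f"{steps:>2} steps: ~{remaining:.0%} of records still error-free")
```

After 20 automated passes, only about half the records are untouched by error, which is either a disaster or, per the comment above, a saving grace.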
That metadata has to come from somewhere; and the processes that create it also create heat, delay and expense.
- AI is good enough at "bad" things to scare us
- AI is also bad enough at "good" things to be undesirable otherwise
If I'm trying to oppress a minority group, I don't really care about false positives or false negatives. If it's mostly harming the people I want to harm, it's good enough.
If I'm trying to save sick people, then I care whether it's telling me the right things or not: administering the wrong drugs because the machine misdiagnosed someone could be fatal, or worse.
Edit: so a technology can simultaneously be good enough to be used for evil, while not being good enough to be used for good.
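A back-of-the-envelope sketch of that asymmetry (all numbers are hypothetical, not anyone's published error rates): with a rare condition or a small target group, even a fairly accurate classifier is wrong most of the times it flags someone, which matters enormously for medicine and barely at all for a dragnet.

```python
# Hypothetical numbers illustrating the base-rate problem.
base_rate = 0.01            # 1% of people actually are the "target"
sensitivity = 0.97          # P(flagged | target)
false_positive_rate = 0.03  # P(flagged | not target)

p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_target_given_flagged = (sensitivity * base_rate) / p_flagged

print(f"P(actually a target | flagged) = {p_target_given_flagged:.0%}")  # roughly 25%
```

Three out of four flags are wrong here; an oppressor can shrug at that, a doctor can't.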
Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.
Post-truth, on the other hand, is just a mundane and nasty sociological problem that we ran head-first into, and we don't know how to deal with it. I don't have any answers. Seems like it'll get worse before it gets better.
It is also a fuzzy index with the unique ability to match on multiple poorly specified axes at once in a very high-dimensional search space. This is notoriously difficult to code with traditional computer science techniques. Large language models are in some sense optimal at it, instead of "just a little bit better than a total failure", which is what we had before.
Just today I needed to find a library I only vaguely remembered from years ago. Gemini found it in seconds based on the loosest description of what it does.
That is a technology that is getting difficult to distinguish from magic.
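For anyone curious what that "fuzzy index" looks like in miniature, here's a hedged sketch using sentence embeddings and cosine similarity (the sentence-transformers model and the toy library blurbs are assumptions for illustration, not a claim about how Gemini actually does it):

```python
# Toy "fuzzy index": match a vague description against library blurbs
# in a high-dimensional embedding space. Illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed, commonly used model

catalogue = {
    "requests":       "simple HTTP client for humans",
    "beautifulsoup4": "parse and navigate messy HTML documents",
    "watchdog":       "watch the filesystem and react to file changes",
}

query = "that library that notices when something on disk is modified and lets me run code in response"

names = list(catalogue)
doc_emb = model.encode(list(catalogue.values()), convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]
best = max(range(len(names)), key=lambda i: float(scores[i]))
print(names[best])  # expected: "watchdog"
```

An LLM does this kind of matching across far more axes at once, and can explain why the match fits.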
In this case, the reaction is already visible: more interest in decentralized systems, peer-to-peer coordination, and local computing instead of cloud-centric pipelines. Many developers have wanted this for years.
AI companies are spending heavily on centralized infrastructure, but the trend does not exclude the rise of strong local models. The pace of progress suggests that within a few years, consumer hardware and local models will meet most common needs, including product development.
Plenty of people are already moving in that direction.
Qwen models run well locally, and while I still use Claude Code day-to-day, the gap is narrowing. I'm waiting on the NVIDIA AI hardware to come down from $3500 USD.
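For anyone wanting to try Qwen locally in the meantime, a minimal sketch with Hugging Face transformers (the model id and settings are just one plausible choice; a 7B model still wants a decent GPU or a quantized variant):

```python
# Minimal local-inference sketch. "Qwen/Qwen2.5-7B-Instruct" is one plausible
# model id; swap in a smaller or quantized variant to fit your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what a mutex does in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```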
> But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.

Ideas are a dime a dozen - the bottleneck is the money/compute to test them at scale.
What exactly is the scenario you are imagining where more developers at a company like OpenAI (or maybe Meta, which has just laid off 600 of them) would accelerate progress?
Algorithmic efficiency improvements are being made all the time, and will only serve to reduce inference cost, which is already happening. This isn't going to accelerate AI advance. It just makes ChatGPT more profitable.
Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?
All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?
What happened at Meta is ludicrous, but labs are clearly willing to pay top dollar for actual research talent, presumably because they feel like it's still a bottleneck.
A human-level AI wouldn't help unless it also had the experience of these LLM whisperers, so how would it gain that knowledge (it's not in the training data)? Maybe a human would train it? Couldn't that human train another developer instead, if that really was the bottleneck?
People like Sholto Douglas have said that the actual bottleneck for development speed is compute, not people.
> spinning up chip fabs that much easier
AI already accounts for 92% of U.S. GDP growth. This is a path to disaster.
To me the hard take off won't happen until a humanoid robot can assemble another humanoid robot from parts, as well as slot in anywhere in the supply chain where a human would be required to make those parts.
Once you have that you functionally have a self-replicating machine which can then also build more data centers or semi fabs.
As with AGI, if the bottleneck to doing anything is human level intelligence or physical prowess, then we already have plenty of humans.
If you gave Musk, or any other AI CEO, an army of humans today, do you think that would accelerate his data center expansion (help him raise money, get power, get GPU chips)? Why would a robot army help? Are you imagining them running around laying bricks at twice the speed of a human? Is that the bottleneck?
Lately I’ve been finding LLM output to be hit and miss, but at the same time, I wouldn’t say they’re useless…
I guess the ultimate question is - if you’re currently paying for an LLM service, could you see yourself sometime in the future disabling all of your accounts? I’d bet no!