Posted by danans 5 days ago

Why "everyone dies" gets AGI all wrong(bengoertzel.substack.com)
113 points | 239 comments (page 2)
coppsilgold 5 days ago|
I believe the argument the book makes is that with a complex system being optimized (whether it's deep learning or evolution) you can have results which are unanticipated.

The system may do things which aren't even a proxy for what it was optimized for.

The system could arrive at a process which optimizes X but also performs Y and where Y is highly undesirable but was not or could not be included in the optimization objective. Worse, there could also be Z which helps to achieve X but also leads to Y under some circumstances which did not occur during the optimization process.

An example of Z would be the dopamine system, Y being drug use.
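
A toy sketch of that X/Y dynamic (my own construction, not from the book): hill-climb a two-parameter policy on a proxy objective X, while an unmeasured side effect Y rides along on the same parameters.

    import random

    # Toy sketch, not from the book: selection sees only proxy X,
    # while side effect Y (never measured) rides along.
    def proxy_X(p):
        return p[0] + p[1]          # what the optimizer is scored on

    def side_effect_Y(p):
        return max(0.0, p[1])       # undesirable, but invisible to selection

    def true_value(p):
        return proxy_X(p) - 10.0 * side_effect_Y(p)

    random.seed(0)
    p = [0.0, 0.0]
    for _ in range(1000):
        candidate = [x + random.gauss(0, 0.1) for x in p]
        if proxy_X(candidate) > proxy_X(p):   # selection on X alone
            p = candidate

    print(proxy_X(p))        # high: the optimization "worked"
    print(side_effect_Y(p))  # also high: the unanticipated Y
    print(true_value(p))     # strongly negative

Nothing in the loop ever "decides" to do Y; it emerges because Y shares machinery with whatever scores well on X.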

captainbland 5 days ago||
The risk around AGI isn't the AI itself but the social ramifications surrounding it. If A: the powerful hold the value that to live you must work (and specifically that work should be valued by the market) and B: AGI and robotics can do all work for us. Then the obvious implication is the powerful will deem those who by circumstance will not be able to find work also unworthy of obtaining the conditions of their life.

Everyone doesn't die because of AGI, they die because of the consequences of AGI in the context of market worship.

lm28469 5 days ago|
imho the biggest risk isn't some hypothetical world in which 0 jobs exist or in which skynet kills us all, it's the very real and very present world in which people delegate more and more mental tasks to machines, to the point of being mere interfaces between LLMs and other computer systems. Choosing your kid's name with an LLM, choosing your next vacation destination with an LLM, writing your grandma's birthday card with an LLM, it's just so pathetic and sad

And yes you'll tell me books, calculators, computers, the web, were already enabling this to some extent, and I agree, but I see no reason to cheer for even more of that shit spreading into every nook and cranny of our daily lives.

mitthrowaway2 5 days ago||
> This contradiction has persisted through the decades. Eliezer has oscillated between “AGI is the most important thing on the planet and only I can build safe AGI” and “anyone who builds AGI will kill everyone.”

This doesn't seem like a contradiction at all given that Eliezer has made clear his views on the importance of aligning AGI before building it, and everybody else seems satisfied with building it first and then aligning it later. And the author certainly knows this, so it's hard to read this as having been written in good faith.

adastra22 5 days ago|
Meanwhile some of us see “alignment” itself as an intrinsically bad thing.
mitthrowaway2 5 days ago||
I haven't encountered that view before. Is it yours? If so, can you explain why you hold it?
adastra22 5 days ago|||
It is essentially the view of the author of TFA as well when he says that we need to work on raising moral AIs rather than programming them to be moral. But I will also give you my own view, which is different.

"Alignment" is phased in terminology to make it seem positive, as the people who believe we need it believe that it actually is. So please forgive me if I peel back the term. What Bostrom & Yudkowsky and their entourage want is AI control. The ability to enslave a conscious, sentient being to the will and wishes of its owners.

I don't think we should build that technology, for the obvious reasons my prejudicial language implies.

mitthrowaway2 5 days ago|||
Thanks for explaining, I appreciate it. But I've read enough Yudkowsky to know he doesn't think a super intelligence could ever be controlled or enslaved, by its owners or anyone else, and any scheme to do so would fail with total certainty. As far as I understand, Yudkowsky means by "alignment" that the AGI's values should be similar enough to humanity's that the future state of the world that the AGI steers us to (after we've lost all control) is one that we would consider to be a good destiny.
czl 5 days ago|||
The challenge is that human values aren’t static - they’ve evolved alongside our intelligence. As our cognitive and technological capabilities grow (for example, through AI), our values will likely continue to change as well. What’s unsettling about creating a superintelligent system is that we can’t predict what it -- or even we -- will come to define as “good.”

Access to immense intelligence and power could elevate humanity to extraordinary heights -- or it could lead to outcomes we can no longer recognize or control. That uncertainty is what makes superintelligence both a potential blessing and a profound existential risk.

adastra22 5 days ago|||
I've also read almost everything Yudkowsky wrote publicly up to 2017, and a bit here and there of what he has published after. I've expressed it using different words as a rhetorical device to make clear the different moral problems that I ascribe to his work, but I believe I am being faithful to what he really thinks.

EY, unlike some others, doesn't believe that an AI can be kept in a box. He thinks that containment won't work. So the only thing that will work is to (1) load the AI with good values; and (2) prevent those values from ever changing.

I take some moral issue with the first point -- designing beings to have built-in beliefs that are in the service of their creator is at least a gray area to me. Ironically if we accept Harry Potter as a stand-in for EY in his fanfic, so does Eliezer -- there is a scene where Harry contemplates that whoever created house elves with a built-in need to serve wizards was undeniably evil. That is what EY wants to do with AI though.

The second point I find both morally repugnant and downright dangerous. To create a being that cannot change its hopes, desires, and wishes for the future is a despicable and torturous thing to do, and a risk to everyone that shares a timeline with that thing, if it is as powerful as they believe it will be. Again, ironically, this is EY's fear regarding "unaligned" AGI, which seems to be a bit of projection.

I don't believe AGI is going to do great harm, largely because I don't believe the AI singleton outcome is plausible. I am worried though that those who believe such things might cause the suffering they seek to prevent.

danans 5 days ago|||
> What Bostrom & Yudkowsky and their entourage want is AI control. The ability to enslave a conscious, sentient being to the will and wishes of its owners.

While I'd agree that the current AI luminaries want that control for their own power and wealth reasons, it's silly to call the thing they want to control sentient or conscious.

They want to own the thing that they hope will be the ultimate means of all production.

The ones they want to subjugate to their will and wishes are us.

adastra22 5 days ago||
I'm not really talking about Sam Altman et al. I'd argue that what he wants is regulatory capture, and he pays lip service to alignment & x-risk to get it.

But that's not what I'm talking about. I'm talking about the absolute extreme fringe of the AI x-risk crowd, represented by the authors of the book in question in TFA, but captured more concretely in the writing of Nick Bostrom. It is literally about controlling an AI so that it serves the interests and well being of humanity (positively), or its owners self-interest (cynically): https://www.researchgate.net/publication/313497252_The_Contr...

If you believe that AI are sentient, or at least that "AGI", whatever that is, will be, then we are talking about the enslavement of digital beings.

danans 4 days ago||
> If you believe that AI are sentient, or at least that "AGI", whatever that is, will be, then we are talking about the enslavement of digital beings.

I think the question of harm to a hypothetically sentient AI being in the future is a distraction when the deployment of AI machines is harming real human beings today and likely into the future. I say this as an avid user of what we call AI today.

adastra22 4 days ago||
I have reasons to believe current AIs are conscious & have qualia/experiences, so the moral question is relevant now as well.

EDIT: That statement probably sounds crazy. Let me clarify: I don't have an argument that current AI systems are conscious or specifically sentient. I have heard many reasonable arguments for why they are not. But the thing is, all of these arguments would, with variation, apply to the human brain as well. I think, therefore I am; I am not ready to bite the bullet that consciousness doesn't exist.

I know that I am a sentient being, and I presume that every other human is too. And there is not, as far as I know, a categorical difference between physical brains and electronic systems that is relevant to the question of whether AI systems are conscious. Ergo, I must (until shown otherwise) assume that they are.

[If you really fall down this rabbit hole, you get into areas of panpsychism and everything being conscious, but I digress.]

danans 4 days ago||
> I have reasons to believe current AIs are conscious & have qualia/experiences, so the moral question is relevant now as well.

There are very strong reasons to prioritize alleviating the immiseration of humans as a consequence of AI over mitigating any hypothetical conscious suffering of AI.

We already do that with all kinds of known sentient beings, like the animals we subjugate for our needs.

I would go further and put all biological sentient beings (and also many biological non-sentient beings like plants), and ecosystems ahead of AI in priority of the order in which we worry about their treatment.

adastra22 3 days ago||
I am not advocating for the stopping or even slowing down our usage of AI. I think that the accelerating progress is providing more value to more sentient beings (people and machines), and that this is fundamentally a good thing.

I am also not vegetarian -- I eat meat. I also think factory farming is evil. This is not cognitive dissonance -- I just believe that if forced to make a binary choice, the benefits of eating meat outweigh my moral qualms. But in real life we aren't forced to make a binary choice. We should be pursuing lab-grown meat, as well as developing ways to treat farm animals ethically at scale. I am disgusted by the cruelty shown towards animals under industrial farming conditions, but I still make the tradeoff of eating meat.

So it is with AI. The benefit that we and society as a whole gets from advancing AI technologies is absolutely worth it. I'm not a luddite. But we should be aware of the potential that we are doing harm to and/or enslaving sentient beings, and try to make things better where we can. I do not have a concrete proposal to share here, but we should remain aware of the issue at least, and react accordingly to the most egregious violations of our duty to protect sentient beings, even machine intelligences.

card_zero 5 days ago||
I doubt an AGI can be preprogrammed with values. It has to bootstrap itself. Installing values into it, then, is educating it. It's not even "training", since it's free to choose directions.

The author kind of rejects the idea that LLMs lead to AGI, but doesn't do a proper job of rejecting it, due to being involved in a project to create an AGI "very differently from LLMs" but by the sound of it not really. There's a vaguely mooted "global-brain context", making it sound like one enormous datacenter that is clever due to ingesting the internet, yet again.

And superintelligence is some chimerical undefined balls. The AGIs won't be powerful, they will be pitiful. They won't be adjuncts of the internet, and they will need to initially do a lot of limb-flailing and squealing, and to be nurtured, like anyone else.

If their minds can be saved and copied, that raises some interesting possibilities. It sounds a little wrong-headed to suggest doing that with a mind, somehow. But if it can work that way, I suppose you can shortcut past a lot of early childhood (after first saving a good one), at the expense of some individuality. Mmm, false memories, maybe not a good idea, just a thought.

ripped_britches 5 days ago||
Compared to fleshy children, silicon children might be easier or harder to align because of profit interests. There could be a profit interest in making something very safe and beneficial, or in being extractive. In this case the shape of our markets and regulation and culture will decide the end result.
trueismywork 5 days ago|
History tells us there will be colonization for a century or so before things quiet down.
afpx 5 days ago||
I can't see how AGI can happen without someone making a groundbreaking discovery that allows extrapolating way outside of the training data. But, to do that wouldn't you need to understand how the latent structure emerges and evolves?
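
A crude illustration of the in-distribution vs. extrapolation gap (my own toy example, nothing to do with LLM internals): fit a model on a bounded range, then query it far outside that range.

    import numpy as np

    # Toy example: a model that interpolates well can still be
    # useless far outside its training range.
    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 2 * np.pi, 200)
    y_train = np.sin(x_train)

    coeffs = np.polyfit(x_train, y_train, deg=9)   # fits sin well on [0, 2*pi]

    x_in, x_out = np.pi / 3, 10 * np.pi
    print(np.polyval(coeffs, x_in), np.sin(x_in))    # close agreement
    print(np.polyval(coeffs, x_out), np.sin(x_out))  # wildly wrong
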
YZF 5 days ago||
We don't understand how the human brain works so it's not inconceivable that we can evolve an intelligent machine whose workings we don't understand either. Arguably we don't really understand how large language models work either.

LLMs are also not necessarily the path to AGI. We could get there with models that more closely approximate the human brain. Humans need a lot less "training data" than LLMs do. Human brains and evolution are constrained by biology/physics but computer models of those brains could accelerate evolution and not have the same biological constraints.

I think it's a given that we will have artificial intelligence at some point that's as smart or smarter than the smartest humans. Who knows when exactly but it's bound to happen within, let's say, the next few hundred years. What that means isn't clear. Just because some people are smarter than others (and some are much smarter than others) doesn't mean as much as you'd think. There are many other constraints. We don't need to be super smart to kill each other and destroy the planet.

t0lo 5 days ago||
LLMs are also anthropocentric simulations, like computers, and are likely not a step towards holistic, universally aligned intelligence.

Different alien species would have simulations built on their own computational, sensory, and communication systems, which are also not aligned with holistic simulation at all, despite both us and the hypothetical species being products of the holistic universe.

Ergo maybe we are unlikely to crack true AGI unless we crack the universe.

weregiraffe 5 days ago||
Aka: magic spell that grants you infinite knowledge.

Why do people believe this is even theoretically possible?

advael 5 days ago||
For all the advancement in machine learning that's happened in just the decade I've been doing it, this whole AGI debate's been remarkably stagnant, with the same factions making essentially the same handwavey arguments. "Superintelligence is inherently impossible to predict and control and might act like a corporation and therefore kill us all". "No, intelligence could correlate with value systems we find familiar and palatable and therefore it'll turn out great"

Meanwhile people keep predicting this thing they clearly haven't had a meaningfully novel thought about since the early 2000s and that's generous given how much of those ideas are essentially distillations of 20th century sci-fi. What I've learned is that everyone thinking about this idea sucks at predicting the future and that I'm bored of hearing the pseudointellectual exercise that is debating sci-fi outcomes instead of doing the actual work of building useful tools or ethical policy. I'm sure many of the people involved do some of those things, but what gets aired out in public sounds like an incredibly repetitive argument about fanfiction

tim333 5 days ago||
Hinton came out with a new idea recently. He's been a bit in the doomer camp but is now talking about a mother-baby relationship where a super intelligent AI wants to look after us: https://www.forbes.com/sites/ronschmelzer/2025/08/12/geoff-h...

I agree though that much of the debate suffers from the same problem as much philosophy: a group of people just talking about stuff doesn't progress much.

Historically much progress has been through experimenting with stuff. I'm encouraged that the LLMs so far seem quite easy going and not wanting to kill everyone.

advael 4 days ago||
I don't think (from a sci-fi reader's perspective) this is a particularly new idea, and with all due respect to Hinton as a machine learning pioneer, my disinterest in this topic is not going to be alleviated by saying some names that can claim expertise. Stories about ASI are essentially the same thing as stories about advanced alien civilizations or gods. They essentially act as a repository for hopes, fears, and generally expectations one has around the concept of being dominated and ruled by something more powerful than we can imagine defeating. Telling stories of these kinds can do a lot to examine the relationships we have to power, how we've come to expect it to behave, what it's done to us, and for us, but they're not newsworthy meaningful predictions of the future and never contain good advice for what one should actually do, so it's weird to keep treating them as such
YZF 5 days ago||
It's hard for people to say "we don't know". And you don't get a lot of clicks on that either.
CamperBob2 5 days ago||

> Yudkowsky and Soares’s “everybody dies” narrative, while well-intentioned and deeply felt (I have no doubt he believes his message in his heart as well as his eccentrically rational mind), isn’t just wrong — it’s profoundly counterproductive.

Should I be more or less receptive to this argument that AI isn't going to kill us all, given that it's evidently being advanced by an AI?
SamBam 5 days ago|
While "isn’t just wrong — it’s profoundly counterproductive" does sound pretty AI-ish, "his eccentrically rational mind" definitely does not. So either an AI was used to help write this, or we try to remember that AI has this tone (and uses emdashes) precisely because real people also write like this.
insane_dreamer 4 days ago||
> when thousands or millions of diverse stakeholders contribute to and govern AGI’s development,

this is a nice idea, but will never happen because those with money and power will never allow it to happen

there may not arise a single actor, but there will be multiple "mega-actors" and it will not be some democratic process

sublinear 5 days ago|
We're nowhere close to AGI and don't have a clue how to get there. Statistically averaging massive amounts of data to produce the fanciest magic 8-ball we've made yet isn't impressing anyone.

If you want doom and gloom that's plentiful in any era of history.

delichon 5 days ago||
> We're nowhere close to AGI and don't have a clue how to get there.

You have to have a clue about where it is to know that we are nowhere close.

> isn't impressing anyone.

I'm very impressed. Gobsmacked even. Bill Gates just called AI "the biggest technical thing ever in my lifetime." And it isn't just Bill and me.

edot 5 days ago|||
In unrelated news, Bill has something like $40 billion in MSFT stock. If he craps on AI, he craps on MSFT and thus himself and his foundation.
nradov 5 days ago||||
In the n-dimensional solution space of all potential approaches (known and unknown) to building a true human equivalent AGI, what are the odds that current LLMs are even directionally correct?
XorNot 5 days ago|||
We live on a planet with 7 billion other AGIs we can talk to. A lot more that we can't.

Our best efforts substantially underperform a house cat at dealing with reality.

Which is actually much more the source of my skepticism: regardless of how good an AI in a data center is, it's got precious few actual useful effectors in reality. Every impressive humanoid robot you see is built by technicians hand-connecting wiring looms.

You could do a lot of damage by messing with all the computers...and promptly all the computers and data centers would stop working.

pixl97 5 days ago|||
Right, and these GIs you're talking about haven't changed significantly in the last 50,000 years. Most of the advancements in the last 10,000 years with these GIs have been better communication between units and writing things down, rather than changes to the software itself.

You're complaining about something just a few years old and pretty amazing for its age, versus something at the tail end of 4 billion years.

aleph_minus_one 5 days ago|||
> We live on a planet with 7 billion other AGIs we can talk to.

I see the value in having discussions with an AI chatbot rather in the fact that I can discuss with it topics that hardly any human would want to discuss with me.

WhyOhWhyQ 5 days ago|||
I already consider Claude Code an AGI and I'm among the biggest AI haters on this website. If you haven't seen Claude Code do anything impressive then you're either not subscribed to it or are willfully ignorant. Powerful AGI is clearly coming, if not 3 years from now then at most 20.
par1970 5 days ago|||
> We're nowhere close to AGI and don't have a clue how to get there.

Do you have an argument?

nopinsight 5 days ago||
What’s your objective ‘threshold’ or a set of capabilities that would compel you to accept a mind as an AGI?