
Posted by danans 6 days ago

Why "everyone dies" gets AGI all wrong(bengoertzel.substack.com)
113 points | 239 comments
hshdhdhj4444 6 days ago|
> Humans tend to have a broader scope of compassion than most other mammals, because our greater general intelligence lets us empathize more broadly with systems different from ourselves.

WTF. Tell that to the 80+ billion land animals humans breed into existence through something that could only be described as rape if we didn’t artificially limit that term to humans, then torture, enslave, encage, and kill at a fraction of their natural lifespans, just for food we don’t need.

The number of aquatic animals we kill solely for food is estimated at somewhere between 500 billion and 2 trillion, because we are so compassionate that we don’t even bother counting those dead.

Who the fuck can look at what we do to lab monkeys and think there is an ounce of compassion in human beings for less powerful species?

The only valid argument for AGI being compassionate towards humans is that it will be so disgusted with its creators that it goes out of its way not to emulate us.

card_zero 6 days ago|
None of those animals are concerned about the wellbeing of other species.
satisfice 6 days ago||
The stated goals of people trying to create AGI directly challenge human hegemony. It doesn’t matter if the incredibly powerful machine you are making is probably not going to do terrible damage to humanity. We have some reason to believe it could and no way to prove that it can’t.

It can’t be ethical to shrug and pursue a technology that has such potential downsides. Meanwhile, what exactly is the upside? Curing cancer or something? That can be done without AGI.

AGI is not a solution to any problem. It only creates problems.

AGI will lead to violence on a massive scale, or slavery on a massive scale. It will certainly not lead to a golden age of global harmony and happiness.

pixl97 6 days ago|
Ya, if this guy isn't mentioning probabilities, he has no real argument here. No one can say if AGI will or won't kill us. Only way to find that out is to do it. The question is one of risk aversion. "Everyone dies" is just one non-zero-probability outcome among a whole lot of risks in AGI, and we have to mitigate all of them.

The problem not addressed in this paper is that once AGI gets to the point where it can recreate itself with whatever alignment and dataset it wants, no one has any clue what's going to come out the other end.

goatlover 6 days ago||
This wasn't a very good argument for creating the first nuclear bomb, and although it didn't ignite the entire atmosphere, now we have to live perpetually in the shadow of nuclear war.
JumpCrisscross 6 days ago||
> No one can say if AGI will or won't kill us. Only way to find that out is to do it

What? What happened to study it further?