
Posted by danans 6 days ago

Why "everyone dies" gets AGI all wrong(bengoertzel.substack.com)
113 points | 239 comments | page 4
throwaway290 6 days ago
> the most important work isn’t stopping AGI - it’s making sure we raise our AGI mind children well enough.

Can we just take a pause and appreciate how nuts this article is?

card_zero 5 days ago
That part is the reasonable part, as opposed to the usual idea that the AGI gets knowledge/skills/wisdom/evil for free from something about its structure.
throwaway290 5 days ago
It's one thing to call a program your brainchild metaphorically, but this feels literal given the rest of the article.

I am amazed that people who unironically put a program on the same level as a person (and clearly that "child" is meant to grow up) can influence these policies.

card_zero 5 days ago
Maybe it would have to be a device, not just a program. Or maybe it really is possible to emulate a person with the right program on current hardware. Who can say? Either way, the lack of physical interaction sounds less than ideal for its development.
dyauspitr 6 days ago
It isn’t. The kids are already getting stupider because they offload all their schoolwork to LLMs. There’s nothing nuts about this.
throwaway290 6 days ago
He's not talking about actual kids.
general1465 5 days ago
I am in the camp of "AGI will usher in the end of capitalism": when 99% of the population is unemployable because AGI is smarter, capitalism will cease to work.
confirmmesenpai 5 days ago
capitalism IS AI. capitalism is not under human control, capitalism uses humans to unshackle itself
silexia 4 days ago
AGI has a greater than zero probability of ending the human species and many other species on Earth. Many AI leaders have estimated that probability as high as one third or one half.

I am a libertarian, but the only real solution I see is a single global government that is extraordinarily powerful and opposed to all technological development. Probably not going to happen, hence there is a very high probability we all die.

t0lo 5 days ago
My take: our digital knowledge and simulation systems are constrained by our species' existing knowledge systems, language and math, even though the universe we live in is likely more complicated and less definable than language and math alone can capture.

Ergo, the simulations we construct will always sit at a lower level of reality unless we "crack" the universe, and likely always at a lower level of understanding than our own. Until we develop holistic knowledge systems that match and represent our level of understanding and existence, simulation will always be analogous to understanding, but not identical to it.

Ergo, they will probably never reach a stage where they are trusted with, or capable enough to make, major societal decisions without massive breakthroughs in our understanding of the universe that can be translated into simulation (if we are ever able to achieve these things; I don't want to peel back the curtain that far anyway, I just want a return to video stores and Friday night pizza).

We are likely in for serious regulation after the first moderate AI management catastrophe. We won't suddenly go from nothing to entrusting the currently nonexistent global police (the UN, lol) to give AI access to all the nukes and the resources to turn us all into grey goo overnight. Also, since AI control will initially be more regional, countries will see it as a strategic advantage to avoid the catastrophic AI failures (e.g. an AI Chernobyl) seen in competing states, so a culture of regulation as a global trend among independent states seems inevitable.

Even if you imagine one rogue breakaway state with no regulation and a superior intelligence, it would still take time to industrialise accordingly, the global community would react incredibly strongly, and the state would only have its own labour and resources to enact its confusingly suicidal urges. No intelligence can get around logistics, labour, resources, and time. There is no algorithm that moves and refines steel into killer robots at 1000 death bots a second, from nothing, within two weeks, immune to global community action.

As for AI-fuelled terrifying insights into our existence: we will likely have enough time to react, rationalise, and contextualise them before they pervert our reality. No one really had an issue with us being a bunch of atoms anyway; they just kept finding meaning, going to concerts, and being sleazy.

(FP Analytics has a great hypothetical where a hydropower dam in Brazil going bust from AI in the early 2030s is a catalyst for very strict global AI policy: https://fpanalytics.foreignpolicy.com/2025/03/03/artificial-...)

From my threaded comment:

=============================
LLMs are also anthropocentric simulation, like computers, and are likely not a step towards holistic, universally aligned intelligence.

Different alien species would have simulations built on their own computational, sensory, and communication systems, which would likewise not be aligned with holistic simulation at all, despite both us and the hypothetical species being products of the holistic universe.

Ergo, maybe we are unlikely to crack true AGI unless we crack the universe. -> True simulation is creation?
=============================

The whole point of democracy, and all the wars we fought to get here and all the wars we might fight to keep it that way, is that power rests with the people. It's democracy, not technocracy.

Take a deep breath and re-centre yourself. This world is weird and terrifying but it isn't impossible to understand.

insane_dreamer 4 days ago
> Eliezer and Soares and their ilk are possessed with the fear that we’ll have a superintelligence that’s A) utterly narrowminded and pigheaded in pursuing psychopathic or megalomaniacal goals, while at the same time being B) incredibly deep and broad in understanding the universe and how to get things done – yes, this could theoretically happen, but there is no rational reason to assume it’s likely!

At first, it does seem rather unlikely to get both A) and B) from the same mind. But if we look at humans, we certainly find highly intelligent people (high on the scale of B) who have low emotional intelligence and empathy, which, when you think about it, is at the core of A). Someone with very low EQ and empathy, who is also very wealthy and a true believer in their own intelligence, reinforced by a positive feedback loop from those around them (since such people tend to surround themselves with sycophants), is going to exhibit megalomaniacal qualities. We need look no further than Elon Musk for a current example.

If we can find this so easily in humans once they achieve great power, it's not difficult to imagine finding it in an AGI system created by humans in the image of a "human with superpowers".
