Posted by thedudeabides5 4 hours ago

The Rational Conclusion of Doomerism Is Violence(www.campbellramble.ai)
71 points | 110 comments
hax0ron3 4 hours ago|
I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.

The problem with trying to stop it is, how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow down the rate of progress in the technology, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would just bring development of the tech under their direct protection Manhattan Project-style. Even if you killed literally every single AI engineer on the planet, it's pretty likely that this would just delay the development of the technology by a decade or so instead of actually preventing it.

The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build it. No key actor thinks it has the luxury of not building the technology, even if it wanted to abstain. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want, but at the end of the day, having nuclear weapons is vital to having sovereignty. If you don't have nuclear weapons, you will always be in danger of becoming just the prison bitch of countries that do have them. AI seems to be growing toward a similar position in the calculus of states' national security.

I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

Aurornis 2 hours ago||
> I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.

This is the rhetorical trick that LessWrongers (Yudkowsky's site) have settled on for decades: they have justified everything around the premise that there's a chance, however small, that the world will end. You can't argue that the world ending wouldn't be a bad thing, so they have their opening for the rest of their arguments, which is that we need to follow their advice to prevent the world maybe ending. They rebut any counterarguments by trying to turn it into a P(doom) debate where we're fighting over how likely this outcome is, but by the time the discussion gets there you've already been forced to accept their framing. Then they push the P(doom) argument aside and try to argue that it doesn't matter how unlikely it is, we have a moral duty to act.

zbentley 1 hour ago|||
This is an entertaining (and often exasperating) decades-old trend in competitive U.S. college debate, as well.

A common advantageous strategy is to take the randomly-selected topic, however unrelated, and invent a chain of logic that claims that taking a given side/action leads to an infinitesimal risk of nuclear extinction/massive harms. This results in people arguing that e.g. "building more mass transit networks" is a bad idea because it leads to a tiny increase in the risk of extinction--via chains as silly as "mass transit expansion needs energy, increased energy production leads to more EM radiation, evil aliens--if they exist--are very marginally more likely to notice us due to increased radiation and wipe out the human race". That's not a made-up example.

The strategy is just like the LessWrongers' one: if you can put your opponent in the position of trying to reduce P(doom), you can argue that unless it's reduced to actual zero, the magnitude of the potential negative consequence is so severe as to overwhelm any consideration of its probability.

In competitive debate, this is a strong strategy. Not a cheat-code--there are plenty of ways around it--but common and enduring for many years.

As an aside: "debate", as practiced competitively, often bears little relation to "debate" as understood by the general public. There are two main families of competitive debate: one is more outward-facing and oriented towards rhetorical/communication/persuasion practice; the other is more ingrown and oriented towards persuading other debaters, in debate-community-specific terms, of which side should win. There's overlap, but the two tend to be practiced/judged by separate groups, according to different rubrics, and/or in different spaces or events. That second family is what I'm referring to above.

kansface 1 hour ago||||
It is a reimagining of Pascal’s Wager. On the original front, I don’t see the neo-Rationalists converting to Christianity en masse.
empiricus 1 hour ago|||
well, rhetorical trick or not, it is worth thinking about the fact that the dynamics of the thing are already outside anyone's control. I mean, everyone is racing and you cannot stop.
squigz 4 hours ago|||
> I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

Can't you? Haven't many (most?) countries agreed to nuclear disarmament? What about biological weapons? Even anti-personnel mines, I think?

hax0ron3 4 hours ago|||
Those weapons are still all being developed and would be brought out in any actually existential war where they seemed useful. The agreements would last only as long as the wars were not existential, or as long as the various countries involved believed that use of them, and the resulting retaliation in kind, would be more destructive than not using them. But one way or another, countries still develop them.
dweinus 3 hours ago||
I don't think it needs to be a binary to be effective. Yes, those weapons still exist, but understanding of existential risk and political pressures have slowed them considerably and resulted in a safer, more cautious world.
switchbak 2 hours ago||||
China is rapidly building out its nuclear arsenal as we speak, and the USA is undergoing an expensive replacement of its own arsenal as well.

That kind of idea might have held water in the 90's, but that's not the world we live in any longer.

dpark 3 hours ago|||
> Haven't many (most?) countries agreed to nuclear disarmament?

This misses the point. He specifically said the entire world because the point is that someone will develop AGI (theoretically; I’m not making a statement about how close we are to this).

9 nations have nuclear weapons despite non-proliferation agreements and supposed disarmament. It's not enough for most countries to agree not to build nuclear weapons if the goal is to have no nuclear weapons. Same for AGI. If it can be developed, you need all nations to agree not to develop it if you don't want it to exist. Otherwise it will simply be developed by nations that don't agree with you.

(Also arguably the only reason most nations don’t have nuclear weapons is the threat of destruction from nations that already have them if they try.)

justafewwords 3 hours ago|||
[flagged]
morningsam 3 hours ago|||
>The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it.

I don't remember who, but someone made an interesting point about this around the time GPT-4 was released: If the major nuclear powers all understand this, doesn't that make nuclear war more likely the closer any of them get to AGI/ASI? After all, if the other side getting there first guarantees the complete and total defeat of one's own side, a leader may conclude that they don't have anything to lose anymore and launch a nuclear first strike. There are a few arguments for why this would be irrational (e.g. total defeat may, in expectation, be less bad than mutual genocide), but I think it's worth keeping in mind as a possibility.

boothby 3 hours ago||
Cold comfort: AGI will not genocide humanity until it can plausibly automate logistics from mining raw materials to building out compute and power generation.
tintor 2 hours ago||
Humanity agreed, for example, that the growing ozone hole was dangerous for everyone, and worked together to ban production of gases that damage the ozone layer. See the Montreal Protocol, an international treaty. It was highly effective. Training powerful AIs is no different.
hax0ron3 2 hours ago||
I think that trying to stop AI development is more like trying to stop nuclear weapon proliferation than it is like fixing the ozone hole. I think the difference is that if one country works to fix the ozone hole, that doesn't make the other countries scared that they are falling behind in ozone hole fixing technology and might get conquered or reduced to subservience as a result.

Nuclear weapon proliferation seems to have plateaued recently, but I think that this appearance is partly deceptive. The main reasons it has plateaued is that: 1) building and maintaining nuclear weapons is expensive, 2) there are powerful countries that are willing to use military force to stop some other countries from developing nukes, and 3) many countries have reached nuclear latency (the ability to build nuclear weapons very quickly once the political order is given to do it) and are only avoiding actually giving the order to build nukes because they don't see a current important-enough reason to do it.

zbentley 1 hour ago||
We've also made progress as a species towards banning and reducing other things that have in-group upsides and really bad externalities: off-the-shelf sale of broad-spectrum antibiotics; chattel slavery; human organ trafficking; some damaging recreational drugs.

The prohibitions aren't perfect, of course (and not without their own negative externalities in some cases). But all of those things are much more accessible to people than nuclear weapons, and we've still had successes in banning/reducing them. So maybe there's hope yet.

AndrewKemendo 4 hours ago||
Wouldn’t be a proper technology revolution without some version of labor realizing they are commodities and rejecting the collapse of the current form of labor power, so that tells me we’re actually in the transition from an old economic process to a new one.

Don't forget, the Luddites were correct about the direction that automation and labor power were going. They weren't blindly "fighting machines"; they were fighting inequitable working conditions.

https://en.wikipedia.org/wiki/Luddite

>Periodic uprisings relating to asset prices also occurred in other contexts in the century before Luddism. Irregular rises in food prices provoked the Keelmen to riot in the port of Tyne in 1710 and tin miners to steal from granaries at Falmouth in 1727. There was a rebellion in Northumberland and Durham in 1740, and an assault on Quaker corn dealers in 1756.

necovek 4 hours ago||
I am disappointed "Doomerism" is not an official name for the practice of putting Doom on anything and everything!
bjourne 3 hours ago||
Yes, but against the angry doomers we have hordes of cheerful coomers who welcome the fruits of the labour of the AI with one open arm.
jmull 4 hours ago||
People are basing their entire world view on not understanding the nature of exponential phenomena.

Exponential phenomena only begin in a medium that holds the potential for them, and they necessarily consume that medium.

That is, exponential phenomena are inherently self-limiting. The bacteria reach the edge of the petri dish. When all the nitroglycerin is broken up, the dynamite is done exploding.

That doesn't mean exponential phenomena aren't dangerous -- of course they can be. I mentioned dynamite, after all. And there are nukes.

But it's really far from "AI is improving exponentially now" to "AI will destroy everyone".

I see AI companies consuming cash at unsustainable rates. Since their motive is profit, this is necessarily limiting. Cash, meanwhile, is a proxy for actual resources, which have their own, non-exponential limitations -- employees, data centers, electricity, venture capitalists with capital, etc.

AI isn't going to keep improving exponentially -- it can't. Like every other exponential phenomenon, it will consume the medium of potential that supports it (and rather quickly).
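The self-limiting dynamic described here is the classic logistic curve: growth that looks exponential early on, then flattens as it exhausts its medium. A minimal sketch (the growth rate `r` and capacity `K` are arbitrary illustrative values, not a model of AI progress):

```python
# Discrete logistic growth: exponential at first, self-limiting later.
# r is the per-step growth rate; K is the carrying capacity of the
# "medium" (the petri dish's nutrients, or an industry's resources).

def logistic_step(population, r=0.5, K=1000.0):
    """One step of logistic growth: the growth term r * population is
    damped by (1 - population / K), so growth stalls as population
    approaches the capacity K."""
    return population + r * population * (1 - population / K)

pop = 1.0
history = [pop]
for _ in range(60):
    pop = logistic_step(pop)
    history.append(pop)

# Early steps grow by roughly a factor of 1.5 (pure exponential);
# later steps crawl asymptotically toward K and never exceed it.
print(history[5], history[-1])
```

Drop the `(1 - population / K)` factor and you get the unbounded exponential that doom scenarios extrapolate; keep it, and the curve saturates at whatever the medium can support.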

saulpw 3 hours ago||
Agreed. But, many said the same thing about Moore's Law or its equivalents in 1985, 1995, 2005, 2015, and yet the pace of core hardware development has been relentlessly exponential. I keep thinking we must be approaching some kind of limit (and surely we must be!) but I've learned not to bet on it.
avidiax 3 hours ago|||
It's often constructive to consider the edges and corners of the space of possible positions, to understand the weaknesses of the various arguments.

For this case, imagine that you're an accelerationist, and you want the AI to take over, kill everyone, and usher in a new AI-only age for the planet, and later the universe.

How disappointed are you as this person? It's bottlenecks everywhere. Communities don't want to allow datacenters to be built. You literally want to bring nuclear power plants online just to run a few DCs, and those historically take 10+ years to permit and build. There's not enough AC switchgear and transformers to send power into the DCs, even if you have the power. Chip prices are skyrocketing, and you have to sign a 3-4 year contract to get RAM now.

And meanwhile, the AI doesn't have many robot bodies. Tesla might put some feeble robots into mass production in a few years, but humans can knock those over with a stick into a puddle of water and it's over for that robot. The nuclear arsenals are all still in bunkers and submarines requiring two guys to physically turn keys, and the computers down there are so old they use 8" floppies.

Sure, there's some good progress on autonomous weapons, but a few million self-destructing AI drones built by human hands isn't going to cut it.

So as a hypothetical person hoping that AI destroys everything, you'd be rather impatient, I think, unless you think the AI can trick humanity into destroying itself relatively soon.

greenavocado 3 hours ago||
> People are basing their entire world view [on things getting worse because their leadership is abandoning them or actively working against their interests]

We understand hard times and are willing to work together to solve problems, but not when leadership is actively harmful.

Fixed that for you.

jmull 3 hours ago||
That's a completely separate point, is it not?

Maybe write it up and post a top-level comment if you think it's a point worth making.

eemax 4 hours ago||
> The Rational Conclusion of Doomerism Is Violence

No it isn't. The most prominent "doomer" has a strong grasp of, and deep, wholehearted appreciation for, the principles of liberalism and the rule of law:

https://x.com/ESYudkowsky/status/2043601524815716866

Which the author of this piece of slop appears to lack.

arduanika 3 hours ago|
It is true that only Yudkowsky gets to say what the rational conclusion of his ideas is. Nobody else gets to speculate. Only the pope of rationalism, because he's the rational one here. See? It's right there in the name!

> this piece of slop

Citation needed. Or maybe we need to update the title of that children's book for internet arguments: Everyone Who Disagrees With Me Is Slop.

The Yud post you linked is not slop, either. It's not LLM-generated, nor is it insincere. But I do have to point out: He's the one who is slinging the tsunami of words here, not Alexander Campbell.

handoflixue 3 hours ago|||
I think it's rather relevant that the community itself rejects the logic you're trying to impose on it. You can straw-man any sort of conclusion on to any sort of philosophy. This will not actually help you much at all if you're trying to predict what people will actually do.

If the only people that reach your conclusion are ones that don't actually subscribe to the philosophy, then it doesn't matter, because no one is actually acting on those conclusions.

And if we want to hold people responsible because others pervert their ideas, then we have to accept that Jesus Christ was a horrific, evil person for preaching "Love thy Neighbor"; just look at the crusades that were somehow the "rational conclusion" of that philosophy!

eemax 2 hours ago|||
> It is true that only Yudkowsky gets to say what the rational conclusion of his ideas are. Nobody else gets to speculate. Only the pope of rationalism, because he's the rational one here. See? It's right there in the name!

No, I am saying that Yudkowsky's views are straightforwardly compatible with bedrock principles of liberalism, and the author of the piece fails to acknowledge that compatibility or grapple with them himself. It's not about "rationalism" or who is "allowed" to speculate.

I called it slop because it says false things that have the hallmark of LLM style, e.g.

> The Sequences build the liturgy: a small caste of correct thinkers, epistemically and morally superior, whose rationality entitles them to govern what the rest of humanity is allowed to build. It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction.

Apocryphon 2 hours ago||
> the hallmark of LLM style

That's just because LLMs were likely trained on a decade plus of human-generated Medium, Substack, Quora, and LinkedIn post slop.

aaroninsf 4 hours ago||
Hot take:

I assume the author wrote this with the expectation that much of the readership would gasp and react with "the natural horror all right-thinking folk would have in response to violence of any kind."

Sorry, lol, no.

The appropriate question for "all right thinking" folk is very different: if argumentation has no impact and it's obvious that it shall have none—what other avenue do you expect opponents, who take the risks seriously, to take...?

That's not a rhetorical question.

To put it bluntly: the machinery of contemporary capitalism, especially as practiced by our industry, very clearly leaves no avenue.

How many days ago was Ronan Farrow here doing an AMA on his critique of Altman—whose connection to this specific community is I assume common knowledge...?

How many of you have carried, or worked beneath, the banner, move fast and break things...?

What message does that ethos convey about the extent to which "tech" is going to respect community standards, regulation—the law?

And on the other edge: what does this ethos enshrine about how best to accomplish one's aims?

One of the bigger domestic stories this past week which has inflamed a certain side of Reddit, is the "disgruntled employee torches warehouse" one.

Consider also—and I'm deadly serious—the broader frame narrative we are all laboring within today: that the new contract of the capitalist class—including and perhaps especially those in "tech," e.g. in the Peter Thiel circles—seems very much to be, "social stability via surveillance and a police state, rather than through equity and discourse."

When code is law, the law is buggy.

When there is no recourse through the law, you get violence.

imbus 3 hours ago|
[dead]