Posted by thedudeabides5 6 hours ago

The rational conclusion of doomerism is violence (www.campbellramble.ai)
78 points | 131 comments
arduanika 5 hours ago||
This has been decades in the making. We had premonitions of the violence that would come, for example with the Zizians. Get ready for what happens when a million blogposts worth of bad philosophy, bad analogies, and anti-institutionalist hubris are deeply indoctrinated into a vast, decentralized network of highly capable engineering minds who lack common sense and normal restraints.

They hate the framing that LLMs are just stochastic parrots, which is ironic, because Yudkowsky's many parrots are (latent, until now) stochastic terrorists.

PaulHoule 6 hours ago||
... been saying this for years. If you really believed what Yudkowsky says you wouldn't just be posting on lesswrong, you would be taking direct action against a clear and present danger.
jmull 6 hours ago||
No you wouldn't.

Look at what the molotov cocktail guy accomplished by "taking direct action against a clear and present danger": Nothing, besides casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.

It's downright dumb to attempt to impose your will via unilateral violence when you aren't in a position to actually complete the goal. Note that that goes whether you're actually right or not.

hax0ron3 5 hours ago|||
>casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.

I think the majority of the population at large either doesn't care about what happened or wishes that the guy had actually managed to kill Altman. Not even necessarily because Altman is involved with AI, but just because he is extremely rich. I don't imagine any increased resistance from the population at large - the population at large either doesn't mind when rich people are killed or loves it. The exceptions would be people like entertainers, who develop a parasocial relationship with the public and provide direct joy to people, but AI company leaders don't fall into that category.

That said, it is true that killing Altman would almost certainly achieve nothing. See my other post in this thread.

throwaway27448 5 hours ago||||
Obviously, ineffective action will be counterproductive. I recommend effective action.
handoflixue 4 hours ago||
That's exactly the point every prominent member of the "Doomer" community is making: Violence isn't an effective action; it is a counterproductive action. It is actively destructive.
throwaway27448 4 hours ago||
Well what other tools do we have? Waiting for the market to fix things is also destructive and harms orders of magnitude more people than violent direct action does; democracy is wildly ineffective compared to violence even at its most optimistic; what else remains? Fleeing the planet?
handoflixue 3 hours ago||
> Well what other tools do we have?

I'll answer with a quote from the founder of the Rationalist movement, Eliezer:

"How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer."

throwaway27448 1 hour ago||
Ok, I don't think anyone suggested killing puppies. Are you going to take this topic seriously or just dodge the question?
PaulHoule 5 hours ago|||
I'm not advocating for that, I'm just saying the whole thing is performative and gets taken at face value in a way that it should not be.

If you wanted to be a contrarian concerned about x-risks go try to find $1B to pay Embraer or another minor aviation vendor to make a plane to do stratospheric aerosol injection or something.

---

If you want my diagnosis, it is this: in a time of lower social inequality, cults frequently tried to steal labor and money from a broad base of people.

For instance, in the L. Ron Hubbard age, Scientology would treat you as a "public" if you had money to take; if you didn't, or after you'd been bled dry, you would be recruited as "staff". Hubbard thought it was immoral to take donations without giving something in return, so it was centered around getting people to spend on "auditing". Between 1950s Dianetics and the current Miscavige age, income and wealth have become concentrated, and Miscavige changed that single element of the Hubbard doctrine; now it is all about recruiting money from "whales" who donate to the International Association of Scientologists (IAS).

https://tonyortega.substack.com/p/scientologys-ias-trophy-wi...

(A good backgrounder on pernicious cults is https://en.wikipedia.org/wiki/Snapping:_America%27s_Epidemic...)

In the case of the Yudkowsky thing, the mass just doesn't have a lot of money to steal after paying the rent, and extracting the labor of the unskilled and ignorant (even if they think otherwise) is a case of the juice not being worth the squeeze. So the point is to build a Potemkin village that looks like a social movement, creating a frame where you can get money from sources such as "SBF steals it and gives it to the movement" as well as "rich kids who inherited a lot of money but don't have a lot of sense".

adjejmxbdjdn 5 hours ago|||
Your statement is incorrect.

If you really believed what Yudkowsky says you would be taking action that maximizes the chances of reducing a clear and present danger.

Between Yudkowsky and the Molotov cocktail guy, which approach do you think had and is having more of an impact?

An individual can rarely, if ever, enact change through violence. The history of nearly all successful movements shows that violence often makes change harder.

Rallying people through speech is a far more successful way for an individual to enact change than violence is.

virissimo 6 hours ago|||
Does this apply to other domains or just AI? For example, if you think gain-of-function research accidents put millions of lives at risk, is the logical next step to quit your job and become a terrorist?
kelseyfrog 6 hours ago|||
Disagree. Just one more blog post. I swear, one more blog post will do it.
SpicyLemonZest 6 hours ago|||
They are! Yudkowsky sat down with Senator Bernie Sanders last month to explain what's at stake, successfully convinced him that it's a big deal, and Sanders has now proposed a national moratorium on AI data centers (https://www.sanders.senate.gov/press-releases/news-sanders-o...) to help slow things down. That's pretty direct, and a lot more useful than random violence by random people.
AndrewKemendo 6 hours ago||
That pesky basilisk to worry about though
vrganj 5 hours ago||
Yeah I mean Lenin recognized that a century ago.

The only meaningful way to effect change against the oligarchy is, and always has been, violence.

This is not a novel insight.

mrguyorama 3 hours ago|
The founders of the US started a war that killed tens of thousands of colonists over small taxes and a desire to eventually end slavery, which was basically unprofitable at that point in America. The local connected and rich people thought those were valid reasons for political violence.

The "Boston Massacre" involved a crowd of people throwing rocks and balls of ice at soldiers and getting shot at.

But now it's all "Oh, political violence must be avoided at all costs". Now it's "Political violence doesn't work", and now let's set off fireworks on July 4th to celebrate the birth of our nation through violence.

kelseyfrog 5 hours ago|
War is a mere continuation of policy by other means[1]. When policy through legislation is empirically impotent[2], calls to continue attempts at a failed strategy are indistinguishable from being told, "continue losing."

There is a real, undeniable build-up of political tension. When it fails to be released in the legislative arena, it doesn't dissipate. When we point out that "the quality of life right now is the best it's ever been," it doesn't dissipate. When we try to crush it, it doesn't dissipate. The last remaining pressure release is violence, however condemnable it may be. Perhaps we should, you know, fix participatory democracy rather than pontificating on a natural outcome of a machine we created yet refuse to fix. If fixing it continues to be more difficult than eliminating violence, we should continue to expect violence.

1. https://oll.libertyfund.org/pages/clausewitz-war-as-politics...

2. https://archive.org/details/gilens_and_page_2014_-testing_th...

arduanika 5 hours ago||
> "fix participatory democracy"

Ah yes, a popular codeword for "I did not get my way".

There is no electoral majority behind the AI doomer cult. It is not a failure of "democracy" that they haven't gotten what they want. It is a failure of their activism, or just the general unpalatability of their wild ideas, or both. They don't get to throw Molotovs just because they lose.

vrganj 5 hours ago|||
Maybe democracy is fundamentally flawed because the demos is? How should one act in such a situation?
TuringTest 5 hours ago||
Who gets to say that the demos is fundamentally flawed? Each in-group has its own opinions on what counts as a flaw.

Society evolves through epiphenomena caused by the behaviour of the majority; the fact that some minorities view that evolution as 'flawed' cannot change that evolution, unless they're able to influence the majority to also see it that way.

Now, democracy is essentially a way for everybody to broadcast their views on society's flaws in non-violent ways. The alternative is that some groups broadcast their opinions in violent ways, and we have learned to see that situation as undesirable.

kelseyfrog 5 hours ago||||
Ah yes, "continue losing."

Go ahead and read Gilens and Page and tell me participatory democracy is working. Until then, expect more of the same impotent condemnations and a refusal to understand the social mechanics producing acts of violence.

arduanika 5 hours ago||
I am aware of their arguments, yes, but what I'm objecting to is that you're bringing this irrelevant hobbyhorse into a discussion of a truly fringe ideology. We're not talking about a classic G&P-style issue where the voters and the elites disagree. Nobody cares for the AI doomers -- not elites, not voters, nobody.

When you talk about "participatory democracy" in a thread like this, you are enabling them in their delusion that people do care. The AI safetyist think tanks put out these push polls trying to convince themselves that voters care about AI doom. They seal up the walls of their echo chamber, and they believe themselves to be heroes. Then one day, one of them throws a Molotov, and nobody is surprised.

kelseyfrog 5 hours ago||
> Nobody cares for the AI doomers

Which is precisely why they've resorted to violence.

We can do better than denigrating positions as "hobbyhorse." HN deserves better than that.

arduanika 30 minutes ago||
Fair enough. Retry: What I'm objecting to is your brilliant, insightful point, to which you are attached enough that you've injected it into this thread where it is, by your own admission, irrelevant. They're resorting to violence because they're unpopular, not because democracy failed to do its job here.
kelseyfrog 14 minutes ago||
My dear friend, this is a discussion board where people share their takes. It's simply a take.

We can attempt to deduce the root cause, but please don't assume we're on different epistemic footing. It's speculation and that's fine.

sleepybrett 4 hours ago|||
> There is no electoral majority behind the AI doomer cult.

how can you be sure? has anyone polled it? are they too scared to poll it?

unethical_ban 5 hours ago||
"Those who make peaceful revolution impossible will make violent revolution inevitable."

Wealth inequality isn't just about economic wellbeing but political power. Separately, the US legislature is almost entirely crippled, only able to pass one major bill per presidential term, while the dominant political party celebrates this and cedes all power to an executive whose intention is to tear apart the administrative state and bring about techno-feudalism.

I once again note that none of the AI leadership has even tried to address government policies to guarantee a baseline of economic wellbeing for our citizens, even as they acknowledge AI will likely have massive, disruptive impacts on society and the economy. Anthropic is the only one that has shown any public concern for the dangers of AI, by insisting on some moral baseline for AI use in the Defense Department.