
Posted by thedudeabides5 3 hours ago

The Rational Conclusion of Doomerism Is Violence (www.campbellramble.ai)
71 points | 110 comments
thegrim33 9 minutes ago|
The LLM doomerism is just one arm of the general "us vs them" strategy - defining a group of people as the others who are the bad guys, defining yourself as the good guys, constantly fostering hate against the others, finding ways to give your group rationale for why they have the moral high ground, all of it in the end an act to gain power/influence/money for the people orchestrating it.

The anti-AI angle is just the latest flavor of it, replacing previous ones (I'm sure you can think of some) and eventually being replaced by the next new thing/person that they'll try to direct us to hate.

I'm willing to bet any amount of money that 99.99% of AI doomers identify with the same extreme end of the political spectrum. That should be a very big red flag and highly indicative of the real motive behind the movement.

MostlyStable 2 hours ago||
It is completely coherent to think both that an extremely bad thing is coming and that it does not justify any particular action. "The ends don't justify the means" is a maxim that literal entire religions have been built on. It is not irrational or incoherent to believe that even something as serious as extinction does not justify arbitrary action.

Someone _may_ decide that it does, but it is not a necessary conclusion.

And that is completely aside from the many many (in my opinion convincing) arguments that such acts of violence would not be effective anyways.

This article is a much better (and much longer) extension of the argument and direct refutation of the OP article

https://thezvi.substack.com/p/political-violence-is-never-ac...

hn_throwaway_99 2 hours ago||
The older I get, the more I get the sneaking suspicion that statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.

An ongoing conflict has resulted in the violent deaths of literally many thousands of children. The people who enable those deaths are usually safely ensconced thousands of miles away, often living in cushy suburbs.

To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary, I'm advocating for less. I just don't understand why we have all these adages to convince people that "violence is always wrong", while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.

Related, the Substack link you posted is titled "Political Violence is Never The Answer". But our country (and a lot of others) was literally founded on political violence. How do people square those two ideas?

Aurornis 1 hour ago|||
> The older I get, the more I get the sneaking suspicion that statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.

My experience has been the polar opposite: The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others. This ranges from petty things like spreading gossip, to committing theft from people they don't like ("they had it coming!") to actual physical violence.

In every case, zoom out a little bit and it becomes obvious how their little self-created bubble distorted their reality until they believed that doing something wrong was actually the right and justified move.

I think you're reaching too far to try to disprove the statement in a general context. Few people are going to say "violence is always the wrong answer" in response to someone defending themselves against another person trying to murder them, for example. I think these edge cases get too much emphasis in the context of the article, though. They're used as a wedge to open up the possibility that violence can be justified some times, which turns into a wordplay game to stretch the situation to justify violence.

hn_throwaway_99 58 minutes ago|||
I think you have wildly misunderstood my point, given that your statement of "The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others" is not the polar opposite of what I was saying - if anything, it aligns with what I was saying very well.

To rephrase, my point is that phrases like "the ends don't justify the means" and "political violence is never the answer" seem to almost always be applied in very specific contexts, completely ignoring other contexts where many people (I'd say "society at large") are completely OK with the ends justifying the means and political violence.

To use your own sentence, I've seen many people in positions of power "coming to completely incorrect conclusions that justify their decisions to harm others", e.g. why bombing children in their beds is OK.

Aurornis 55 minutes ago||
> To rephrase, my point is that phrases like "the ends don't justify the means" and "political violence is never the answer" seem to almost always be applied in very specific contexts

That's not what you said. You were talking about society as a whole, not narrow contexts. I'll re-quote your original comment that I was responding to:

> statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.

I was responding to your "at best, wildly logically inconsistent in any society at any given time" claim.

hn_throwaway_99 44 minutes ago||
Yes, society as a whole applies statements like "the ends justify the means" in wildly inconsistent ways, deeming it unacceptable in certain contexts and being completely fine with it in other contexts. I literally said in my original comment "To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary, I'm advocating for less."

Beyond that, I can't help you with your reading comprehension.

metabagel 20 minutes ago|||
The point of the comment you are replying to is that it's often logically inconsistent for people to say that violence is never the answer, given the amount of violence committed by our military, law enforcement, immigration enforcement, etc. - much of which is deemed acceptable.
solaarphunk 1 hour ago||||
This is just a version of individualism vs the state. Much of western society has become increasingly confused about what violence is acceptable, let alone who should be allowed to commit violence, or have a monopoly on violence.

If we can't agree on that baseline, then it's quite obvious that we'll continue to see an escalation in the types of violence we've seen in the past few years against the political and corporate classes in the US, with very little end in sight.

antonvs 30 minutes ago||
> If we can't agree on that baseline

Part of the point about violence is it has little to do with societal agreement, to start with. It's what happens when that agreement breaks down. And in the end, it can change the agreement.

pembrook 18 minutes ago||||
> "Political Violence is Never The Answer". But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?

This is just survivorship bias. Violence sits at the root of ALL human societies and institutions, the vast majority of which have failed or are currently failing.

If you're on HN you're probably sitting in one of the lucky, relatively prosperous ones. Violence didn't create that prosperity, otherwise Sudan and Liberia would be the richest countries in the world.

Your relative prosperity came from your ancestors being smart enough to build frameworks that allow a society to run decentralized without the need for violence (common law, free markets and trade, enforcement of private property rights, etc.).

It's the lack of violence which built the prosperity you enjoy today. Not the other way around.

nradov 1 hour ago||||
During WWII, the entire Allied leadership was willing to kill millions of Axis children if that's what it took to win the war and force the enemy to surrender unconditionally. There was at least some genocidal intent. Population centers were intentionally bombed to wipe out civilian factory workers. We can argue about whether that was right or wrong but the reality is that it's probably inevitable once armed conflicts involving nation states escalate to an existential level.

“Before we’re through with them, the Japanese language will be spoken only in hell.”

-- Admiral William F. "Bull" Halsey Jr., 1941

sublinear 55 minutes ago||||
> How do people square those 2 ideas?

If you're seriously trying to understand the nuance of the act itself, you should consider reading what is standard issue for law enforcement and military.

"On Killing" by Dave Grossman is a classic.

If you only want to understand and stay in the realm of politics, I don't think you'll ever find a good answer either way. There's hypocrisy in every argument for or against violence. None of that is on the minds of people "in the shit" at that time. All that stuff comes later. As you're well aware, PTSD is no joke.

What I would take away from this is to recognize all the other ways in which we are compelled to act against our own self interest under what are sold as higher moral purposes.

From that perspective, it's not that hard to see how people can treat violence as just another tool. Whether it works is a question of how much those people value life above all else. If you're surprised that's not always the case in every culture, you may want to study that first. Beliefs may devalue life for persistence against a long history of conflict. This is where you may start to find some glimmers of an answer why we in the west sometimes think violence works to get those people to "snap out of it", but it really is ultimately about control of those people or that land at the end of the day.

slopinthebag 1 hour ago|||
Even more simply put, if political violence is never the answer and the institution of government is the biggest single source of political violence, what does that say about the legitimacy of the institution of government?

These trite quips act as a way to ensure only the elite ruling class has a justification for the violence they inflict.

janalsncm 2 hours ago|||
Your reasoning makes sense under a regime of infinite games. In other words, the goal is to continue playing the game rather than win once.

These people do not believe we are in an infinite game. They believe they have a narrow set of moves to avoid checkmate, and apparently getting rid of Sam Altman is one of them.

I will suggest another reason though: we are likely already in the light cone of continued AI development. So none of the vigilante actions are justified under their own logic. It’s probably preferable to avoid being in jail when the robot apocalypse comes.

I don’t think the death of Sam Altman or even the dissolution of OpenAI would stop the continuation of AI development. There are too many actors involved, and too many companies and nation states invested in continuing AI development. Even if Eliezer Yudkowsky became president of the United States, he could not stop it.

matthewdgreen 1 hour ago||
Eliezer Yudkowsky has gone so far as to say that it might be ok to kill most of humanity (excepting a "viable reproduction population") to stop AI. If that's not just talk, then this line of reasoning only gives you a few possible modes of action. I would not be worried about the people with Molotov cocktails, but I'd be very worried about bioterrorism.
hollerith 8 minutes ago||
>Eliezer Yudkowsky has gone so far as to say that it might be ok to kill most of humanity (excepting a "viable reproduction population") to stop AI

That doesn't sound like a non-misleading summary of anything he would say. Do you have a quote or a link?

atmavatar 2 hours ago|||
> "The ends don't justify the means" and literal entire religions have been built on this concept.

Most religions rely on a supernatural force judging us post-mortem to balance out the rights and wrongs done during life.

The problem with this, of course, is that there's zero evidence this force exists, and relying on this force to right the wrongs in life only serves to prevent the masses from attempting to correct the wrongs themselves either directly via vigilantism or, more importantly, by replacing existing systems with ones which will serve them better.

I'm all for fixing things first via the soap box and ballot box, but sometimes the ammo box is the only resort left.

    The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.
    - Thomas Jefferson
I don't believe we're at that point in the US, but I could certainly understand someone making that claim for a country like Iran.
janalsncm 1 hour ago||
> The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.

When the British cavalry came to Virginia in 1781, Thomas Jefferson famously fled the governor’s mansion.

morningsam 2 hours ago|||
Yudkowsky himself also posted a rebuttal today: https://x.com/ESYudkowsky/article/2043601524815716866
Aurornis 57 minutes ago|||
Anyone willing to read that wall of text should also read Yudkowsky's original piece on the topic: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

The inflammatory conclusion of his 2023 writing was that we need to "shut it all down", escalating to bombing datacenters:

> be willing to destroy a rogue datacenter by airstrike.

Now that someone who was an open follower of his words tried to bomb Sam Altman's house and threatened to burn down their datacenters, Yudkowsky is scrambling to backtrack. The X rant tries to argue that "bombing" and "airstrike" are different and therefore you can't say he advocated for bombing anything (a distinction any rationalist would normally pounce on for its logical inconsistency, if it wasn't coming from a famous rationalist figure). He's also trying to blame his hurried writings for TIME for not being clear enough that he was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks. Again that distinction seems like grasping at straws now that he's face to face with the realities of his extremist rhetoric.

hollerith 2 minutes ago||
You attempt to cast doubt that Yudkowsky "was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks."

Let's let the reader decide. The strings "bomb" and "attack" never occur in the article. The strings "strike" and "destroy" occur once each, and this next excerpt contains both occurrences:

>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

>Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

handoflixue 1 hour ago||||
I found the last paragraph a fairly great summary of a rather long post:

> How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer.

stickfigure 1 hour ago||
The whole post should have just been this one line. He likes the sound of his own voice too much.

That said, it rings hollow. AI doomerism is rooted in Terminator style narratives, and in that narrative, the rogue Sarah Connor changes history (with a lot of violence, explosions, and special effects).

The whole scene is toxic.

guelo 1 hour ago||||
Jeebuz that was long, I only made it through about half of it. But I think he's calling for cold war nuclear treaties style international cooperation. But I believe those mechanisms are broken and unavailable to us for two main reasons:

1. The Western world and especially the US is in the process of destroying the UN and other institutions of international law in order to protect Israel, for reasons that I have tried and failed to understand because the propaganda around it is so dense.

2. The Supreme Court made bribery of politicians legal so now we have AI investors with actual governmental power. All restraint efforts will be blocked by the federal government at minimum for these next 3 crucial years.

xrd 2 hours ago||||
That was really fascinating. Thanks.
doctorpangloss 2 hours ago|||
I find all of this stuff very interesting but nonetheless these two voices sound like they could never win an election and aspire not to. That is the ultimate test of the worthlessness of a policy - it's all equally worthless until it wins an election, and that's what makes it reality.

AI Doomerism versus Accelerationism are both playful fantasies, it doesn't really matter what measurements or probabilities or observations they make, because the substantive part is the policies they advocate for, and policies are meaningless - all equally worthless - until elected.

What am I saying? The best rebuttal is, get elected.

lazyasciiart 17 minutes ago||
Iran's leadership seems to be a solid rebuttal of that argument.
Joker_vD 2 hours ago||
> "The ends don't justify the means"

Eh. The ends do justify the means, but only inasmuch as those means actually do help to achieve the ends — astonishingly often, they don't (and rarer, but also often, actually bring you in the opposite direction from those end goals), and so remain unjustified.

MostlyStable 2 hours ago|||
I personally believe quite strongly that some things are just immoral on their face and that I would rather fail/die without using them than succeed/live while using them. I agree that in very many cases where people do these things, they are, in the long run, counterproductive, but I also believe that even if it could be conclusively proven that this wasn't the case, I would still advocate against their use.
f1shy 2 hours ago||||
Thanks.

That sentence is constantly repeated, as if it were some kind of absolute truth. The fact is, for every end, there will probably be some means that are totally justified, and some that are not.

I think the original context is: no matter how high, pure, and perfect the end is, that does not mean any means is justified.

kgwgk 1 hour ago||
According to Joker_vD, it’s only the means that won’t help that wouldn’t be justified.
BurningFrog 1 hour ago|||
I agree, but it's only half of the equation.

Your solution also can't be worse than the problem it solves!

Overly clear example: Killing your noisy neighbors actually achieves the end of a quiet home. But that really doesn't justify it.

nitwit005 2 hours ago||
Mentally ill people often have a justification for their actions which is vaguely rational, but you'll notice the vast majority of people aren't doing those things.

These people just get attracted to political causes somehow. Even the women's suffrage movement had some people setting buildings on fire.

stickfigure 2 hours ago|
I miss the days when people blamed all their woes on their parents circumcising them. Simpler times.
gradientsrneat 52 minutes ago||
Maybe if the LLM CEOs stopped spreading doomer narratives to sell their products, these people would chill out.
thephyber 43 minutes ago||
This issue is more complicated.

Sam Altman has stated that the AI revolution will “be like an infinite number of immigrants”. That’s a dangerous thing to say when the country’s political environment has convinced half of the voters that all immigrants are rapey, murderey, immoral subhumans.

Also, Sam Altman helped create OpenAI with the original goals of being an ethical non-profit, only to pivot and kick out all of the people who still wanted that original vision. Now several of the LLM CEOs are screaming “we have to stay fully on the accelerator pedal or the Chinese will get there first”, all while abandoning the ethics that supposedly made us better than the Chinese. (And yes, I understand the issues with the Chinese government and that people are different from their government.)

rzmmm 29 minutes ago|||
Too bad it's an effective marketing strategy. Negative emotions are more powerful drivers than positive ones.
drivebyhooting 2 hours ago||
Can LLMs design and build a chip foundry to manufacture semiconductors? No?

Can LLMs design and build the reactors to enrich uranium, breed plutonium, and construct nuclear weapons? No?

Can LLMs design and manufacture Shahed drones? No?

There are already superintelligences at large with “scary capability”. And yet the world hasn’t ended.

georgemcbay 1 hour ago||
> And yet the world hasn’t ended.

...yet

But we only need things to spiral out of control one time for that to change.

The world as we understand it would have ended if Vasily Arkhipov hadn't vetoed the decision to launch a nuclear torpedo from his submarine during the Cuban Missile Crisis.

Is an emotionless AI system in his place ever going to make the same decision he did?

How confident are you we won't put an AI system in his place, particularly when we have to assume if we don't others will?

drivebyhooting 37 minutes ago||
Sounds like your fear is not of artificial intelligence but artificial incompetence. That’s a very different position from the AI doomers.
kurthr 31 minutes ago||
Can LLMs convince a human who has power over every one of those things to use them for a prompt's (possibly unstated) goal?

Yeah, probably over 50% of the population already, and if not, many of the rest soon.

linksnapzz 1 hour ago||
I'm not surprised that the sort of individual prone to taking Yud too seriously is also likely to be a comically-inept assassin.

Had he tried to blow up the diesel genset at a datacenter, he'd have burnt his lips on the exhaust pipe.

geremiiah 43 minutes ago||
LLMs are dangerous in other ways (LLM psychosis and false confidence has probably already caused negligent deaths). However, I don't think we are close to a terminator scenario.

At the same time, if we ever do create an AGI, and eventually an ASI, I think it would only be a matter of time before the machines take over entirely, and they would probably be the ones which will continue the legacy of our species. Is that bad? Idk.

mellosouls 1 hour ago||
Some recent discussions on the Altman attack:

https://news.ycombinator.com/item?id=47745230

https://news.ycombinator.com/item?id=47724921

https://news.ycombinator.com/item?id=47722096

beloch 50 minutes ago||
"There is a final irony that deserves attention. If the doomers truly hold their stated beliefs at their stated confidence levels, they should be more honest about what those beliefs imply. A few weeks before the attack, a journalist asked Yudkowsky: if AI is so dangerous, why aren't you attacking data centers? His answer, relayed by Soares: 'If you saw a headline saying I'd done that, would you say, "wow, AI has been stopped, we're safe"? If not, you already know it wouldn't be effective.'"

----------

There are several thousand AI data centres in the U.S. alone, and hundreds are over a thousand square meters in floor space. Think about the physical effort it would take to reliably destroy, beyond the possibility of repair, just one typical computer in your home. Now multiply that out to thousands of server racks. Even if the employees rolled out the red carpet for you and handed you a baseball bat, you wouldn't get very far. Next, consider that these data centres are popping up all over the world in the most unlikely and remote locations. They don't need workers. They just need power, water, and, preferably, lax tax and environmental standards.

Doomers are attacking billionaires because they perceive them to be the soft, meaty, weak-points of a gigantic inhuman machine. They believe that just scaring Sam Altman a little will have a huge impact compared to trying to attack a data centre. However, billionaires can afford pretty decent security. This doomer movement probably isn't going to accomplish much until they target the engineers and support staff that surround billionaires. Billionaires don't scare easily because they have so much protection, but the poorly paid and poorly secured people around them are another story.

Poorly secured means easy to coerce with a stick. Poorly paid means easy to coerce with a carrot. The threat doomers pose is relatively small until they start turning employees against their own companies. What's an activist with a baseball bat compared to an employee who knows how to disable every computer in multiple data centres simultaneously?

tcoff91 2 hours ago|
I have a different perspective on this given that I view climate change as the biggest threat we face as a species.

I feel like robotics is the only hope we have to be able to scale action against climate change. It's clear that emissions reduction is just not going to happen, and catastrophic warming is inevitable. Therefore we will have to do an extraordinary amount of labor in order to modify our environment to save civilization from sea level rise and to be able to repair damages caused by natural disasters. There just aren't enough humans to do everything that is going to need to be done.

It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.

Given that we need better AI in order to make these robots happen, I view AI as a critical technology that we need to maintain civilization.

derektank 2 hours ago||
Wouldn’t geoengineering through stratospheric aerosol injection (likely with sulfates) be both cheaper and less technically challenging than changing the built environment? If we’re accepting massive climate changes anyway, it seems like taking the risk with solar radiation modification would be the next step
tcoff91 2 hours ago|||
That would require global consensus and could ignite wars if there isn't global consensus. Seems very likely that this could have unanticipated consequences that could be worse, but admittedly this is an area I don't really know much about.
ACCount37 2 hours ago||
No one gives a shit about "global consensus". As demonstrated in 2020s by multiple countries taking major unilateral actions unopposed.

If a nuclear power starts SAI, what is everyone else going to do? Shake their fists at the sky, realistically.

dpark 2 hours ago|||
Ah, yes. Let us spray more sulfates into the air. Let’s fight global warming by poisoning all the waterways and oceans with more acid rain.
derektank 2 hours ago||
The sulfate concentrations required to meaningfully reduce solar radiation are orders of magnitude below the levels that cause acid rain. The Tambora eruption didn’t result in global acid rain (though it did in Indonesia, naturally) while cooling the globe by at least half a degree Celsius if not more. And on top of that, there are other possible aerosols we could use, like calcium carbonate
graemep 2 hours ago|||
That is interesting, and I think you are right that emissions reductions will not happen any time soon (eventually, but it will take a while).

I am not convinced we need robots. A lot of it is not all that hard to do. For example, better forestry management to prevent forest fires. A lot of cities rebuild big chunks of their infrastructure over a century or so anyway. The problem is more social and political - you get worse forest management because you can blame climate change when it happens.

dpark 2 hours ago|||
> It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.

Yes, but also 100k firefighting robots is kind of a lot. How many firefighting robots should exist in the world? And how many seawall-building robots for the rising sea level? And how many other robots? At what point does the environmental cost of all these robots offset their benefits?

arduanika 2 hours ago|||
Upvoted because this is an interesting take, but I disagree at least somewhat. I think you should be wary whenever you've narrowed down your options to, "in order to solve the top-priority problem X, our only hope is solution Y."

I agree that some technological solution might be the key to dealing with the climate, and that maybe robots would be part of such a solution, maybe powered by similar techniques as the current wave of AI. It's not an insane scenario, but it's worth keeping your perspective open to other possible developments.

tcoff91 1 hour ago||
I definitely am open to other possible developments and accept that I'm likely wrong just as basically everyone is wrong when predicting the future.
irishcoffee 2 hours ago||
https://www.howeandhowe.com/civil/thermite

The firefighting robots of which you speak already exist.

tcoff91 2 hours ago||
Hell yeah, those look awesome. I look forward to the autonomous versions that don't require fully manual remote operation. It'd be great if coordinators could have like an RTS-style view and command these like they're starcraft units.