Posted by thedudeabides5 3 hours ago
The anti-AI angle is just the latest flavor of it, replacing previous ones (I'm sure you can think of some) and eventually being replaced by the next new thing/person that they'll try to direct us to hate.
I'm willing to bet any amount of money that 99.99% of AI doomers identify with the same extreme end of the political spectrum. That should be a very big red flag and highly indicative of the real motive behind the movement.
Someone _may_ decide that it does, but it is not a necessary conclusion.
And that is completely aside from the many many (in my opinion convincing) arguments that such acts of violence would not be effective anyways.
This article is a much better (and much longer) extension of the argument and direct refutation of the OP article
https://thezvi.substack.com/p/political-violence-is-never-ac...
An ongoing conflict has resulted in the violent deaths of literally many thousands of children. The people who enable those deaths are usually safely ensconced thousands of miles away, often living in cushy suburbs.
To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary, I'm advocating for less. I just don't understand why we have all these adages to convince people that "violence is always wrong", while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.
Related, the Substack link you posted is titled "Political Violence is Never The Answer". But our country (and many others) were literally founded on political violence. How do people square those two ideas?
My experience has been the polar opposite: The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others. This ranges from petty things like spreading gossip, to committing theft from people they don't like ("they had it coming!") to actual physical violence.
In every case, zoom out a little bit and it becomes obvious how their little self-created bubble distorted their reality until they believed that doing something wrong was actually the right and justified move.
I think you're reaching too far to try to disprove the statement in a general context. Few people are going to say "violence is always the wrong answer" in response to someone defending themselves against another person trying to murder them, for example. I think these edge cases get too much emphasis in the context of the article, though. They're used as a wedge to open up the possibility that violence can be justified sometimes, which turns into a wordplay game to stretch the situation to justify violence.
To rephrase, my point is that phrases like "the ends don't justify the means" and "political violence is never the answer" seem to almost always be applied in very specific contexts, completely ignoring other contexts where many people (I'd say "society at large") are completely OK with the ends justifying the means and political violence.
To use your own sentence, I've seen many people in positions of power "coming to completely incorrect conclusions that justify their decisions to harm others", e.g. why bombing children in their beds is OK.
That's not what you said. You were talking about society as a whole, not narrow contexts. I'll re-quote your original comment that I was responding to:
> statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.
I was responding to your "at best, wildly logically inconsistent in any society at any given time" claim.
Beyond that, I can't help you with your reading comprehension.
If we can't agree on that baseline, then it's quite obvious that we'll continue to have an escalation in the types of violence that we've seen in the past few years, against the political and corporate classes in the US, with very little end in sight.
Part of the point about violence is it has little to do with societal agreement, to start with. It's what happens when that agreement breaks down. And in the end, it can change the agreement.
This is just survivorship bias. Violence sits at the root of ALL human societies and institutions, the vast majority of which have failed or are currently failing.
If you're on HN you're probably sitting in one of the lucky, relatively prosperous ones. Violence didn't create that prosperity, otherwise Sudan and Liberia would be the richest countries in the world.
Your relative prosperity came from your ancestors being smart enough to build frameworks that allow a society to run decentralized without the need for violence (common law, free markets and trade, enforcement of private property rights, etc).
It's the lack of violence which built the prosperity you enjoy today. Not the other way around.
“Before we’re through with them, the Japanese language will be spoken only in hell.”
-- Admiral William F. "Bull" Halsey Jr., 1941
If you're seriously trying to understand the nuance of the act itself, you should consider reading what is standard issue for law enforcement and military.
"On Killing" by Dave Grossman is a classic.
If you only want to understand and stay in the realm of politics, I don't think you'll ever find a good answer either way. There's hypocrisy in every argument for or against violence. None of that is on the minds of people "in the shit" at that time. All that stuff comes later. As you're well aware, PTSD is no joke.
What I would take away from this is to recognize all the other ways in which we are compelled to act against our own self interest under what are sold as higher moral purposes.
From that perspective, it's not that hard to see how people can treat violence as just another tool. Whether it works is a question of how much those people value life above all else. If you're surprised that's not always the case in every culture, you may want to study that first. Beliefs may devalue life for persistence against a long history of conflict. This is where you may start to find some glimmers of an answer why we in the west sometimes think violence works to get those people to "snap out of it", but it really is ultimately about control of those people or that land at the end of the day.
These trite quips act as a way to ensure only the elite ruling class has a justification for the violence they inflict.
These people do not believe we are in an infinite game. They believe they have a narrow set of moves to avoid checkmate, and apparently getting rid of Sam Altman is one of them.
I will suggest another reason though: we are likely already in the light cone of continued AI development. So none of the vigilante actions are justified under their own logic. It’s probably preferable to avoid being in jail when the robot apocalypse comes.
I don’t think the death of Sam Altman or even the dissolution of OpenAI would stop the continuation of AI development. There are too many actors involved, and too many companies and nation states invested in continuing AI development. Even if Eliezer Yudkowsky became president of the United States, he could not stop it.
That doesn't sound like a non-misleading summary of anything he would say. Do you have a quote or a link?
Most religions rely on a supernatural force judging us post-mortem to balance out the rights and wrongs done during life.
The problem with this, of course, is that there's zero evidence this force exists, and relying on this force to right the wrongs in life only serves to prevent the masses from attempting to correct the wrongs themselves either directly via vigilantism or, more importantly, by replacing existing systems with ones which will serve them better.
I'm all for fixing things first via the soap box and ballot box, but sometimes the ammo box is the only resort left.
The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.
- Thomas Jefferson
I don't believe we're at that point in the US, but I could certainly understand someone making that claim for a country like Iran.

When the British cavalry came to Virginia in 1781, Thomas Jefferson famously fled the governor’s mansion.
The inflammatory conclusion of his 2023 writing was that we need to "shut it all down", escalating to bombing datacenters:
> be willing to destroy a rogue datacenter by airstrike.
Now that someone who was an open follower of his words tried to bomb Sam Altman's house and threatened to burn down their datacenters, Yudkowsky is scrambling to backtrack. The X rant tries to argue that "bombing" and "airstrike" are different and therefore you can't say he advocated for bombing anything (a distinction any rationalist would normally pounce on for its logical inconsistency, if it wasn't coming from a famous rationalist figure). He's also trying to blame his hurried writings for TIME for not being clear enough that he was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks. Again that distinction seems like grasping at straws now that he's face to face with the realities of his extremist rhetoric.
Let's let the reader decide. The strings "bomb" and "attack" never occur in the article. The strings "strike" and "destroy" occur once each, and this next excerpt contains both occurrences:
>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
> Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
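For anyone who wants to verify the counting claim themselves, here's a quick sketch run against the quoted excerpt (the variable name is just for illustration; to reproduce the full claim, paste the whole TIME essay text in instead):

```python
# Count non-overlapping substring occurrences in the quoted excerpt.
excerpt = (
    "be less scared of a shooting conflict between nations than of the "
    "moratorium being violated; be willing to destroy a rogue datacenter "
    "by airstrike."
)
counts = {term: excerpt.lower().count(term)
          for term in ["bomb", "attack", "strike", "destroy"]}
print(counts)  # "strike" (inside "airstrike") and "destroy" appear once; "bomb" and "attack" don't
```

Note that naive substring matching is what's at issue here: "strike" only shows up as part of "airstrike".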
> How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer.
That said, it rings hollow. AI doomerism is rooted in Terminator style narratives, and in that narrative, the rogue Sarah Connor changes history (with a lot of violence, explosions, and special effects).
The whole scene is toxic.
1. The Western world and especially the US is in the process of destroying the UN and other institutions of international law in order to protect Israel, for reasons that I have tried and failed to understand because the propaganda around it is so dense.
2. The Supreme Court made bribery of politicians legal so now we have AI investors with actual governmental power. All restraint efforts will be blocked by the federal government at minimum for these next 3 crucial years.
AI Doomerism versus Accelerationism are both playful fantasies, it doesn't really matter what measurements or probabilities or observations they make, because the substantive part is the policies they advocate for, and policies are meaningless - all equally worthless - until elected.
What am I saying? The best rebuttal is, get elected.
Eh. The ends do justify the means, but only inasmuch as those means actually do help to achieve the ends — astonishingly often they don't (and, less often but still regularly, they actually move you in the opposite direction from those end goals), and so remain unjustified.
That sentence is constantly repeated, as if it were some kind of absolute truth. The fact is, for every end, there will probably be some means that are totally justified, and some that are not.
I think the original context is: no matter how high, pure, and perfect the end is, it does not mean that any means is justified.
Your solution also can't be worse than the problem it solves!
Overly clear example: Killing your noisy neighbors actually achieves the end of a quiet home. But that really doesn't justify it.
These people just get attracted to political causes somehow. Even the women's suffrage movement had some people setting buildings on fire.
Sam Altman has stated that the AI revolution will “be like an infinite number of immigrants”. That’s a dangerous thing to say when the country’s political environment has convinced half of the voters that all immigrants are rapey, murderey, immoral subhumans.
Also, Sam Altman helped create OpenAI with the original goals of being an ethical non-profit, only to pivot and kick out all of the people who still wanted that original vision. Now several of the LLM CEOs are screaming “we have to stay fully on the accelerator pedal or the Chinese will get there first”, all while abandoning the ethics that supposedly made us better than the Chinese. (And yes, I understand the issues with the Chinese government and that people are different than their government).
Can LLMs design and build the reactors to enrich uranium, breed plutonium, and construct nuclear weapons? No?
Can LLMs design and manufacture Shahed drones? No?
There are already super intelligences at large with “scary capability”. And yet the world hasn’t ended.
...yet
But we only need things to spiral out of control one time for that to change.
The world as we understand it would have ended if Vasily Arkhipov didn't veto the decision to launch a sub nuke during the Cuban Missile Crisis.
Is an emotionless AI system in his place ever going to make the same decision he did?
How confident are you we won't put an AI system in his place, particularly when we have to assume if we don't others will?
Yeah, probably over 50% of the population already, and if not many of the rest soon.
Had he tried to blow up the diesel genset at a datacenter, he'd have burnt his lips on the exhaust pipe.
At the same time, if we ever do create an AGI, and eventually an ASI, I think it would only be a matter of time before the machines take over entirely, and they would probably be the ones which will continue the legacy of our species. Is that bad? Idk.
https://news.ycombinator.com/item?id=47745230
There are several thousand AI data centres in the U.S. alone, and hundreds are over a thousand square meters in floor space. Think about the physical effort it would take to reliably destroy, beyond the possibility of repair, just one typical computer in your home. Now multiply that out to thousands of server racks. Even if the employees rolled out the red carpet for you and handed you a baseball bat, you wouldn't get very far. Next, consider that these data centres are popping up all over the world in the most unlikely and remote locations. They don't need workers. They just need power, water, and, preferably, lax tax and environmental standards.
Doomers are attacking billionaires because they perceive them to be the soft, meaty, weak-points of a gigantic inhuman machine. They believe that just scaring Sam Altman a little will have a huge impact compared to trying to attack a data centre. However, billionaires can afford pretty decent security. This doomer movement probably isn't going to accomplish much until they target the engineers and support staff that surround billionaires. Billionaires don't scare easily because they have so much protection, but the poorly paid and poorly secured people around them are another story.
Poorly secured means easy to coerce with a stick. Poorly paid means easy to coerce with a carrot. The threat doomers pose is relatively small until they start turning employees against their own companies. What's an activist with a baseball bat compared to an employee who knows how to disable every computer in multiple data centres simultaneously?
I feel like robotics is the only hope we have to be able to scale action against climate change. It's clear that emissions reduction is just not going to happen, and catastrophic warming is inevitable. Therefore we will have to do an extraordinary amount of labor in order to modify our environment to save civilization from sea level rise and to be able to repair damages caused by natural disasters. There just aren't enough humans to do everything that is going to need to be done.
It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.
Given that we need better AI in order to make these robots happen, I view AI as a critical technology that we need to maintain civilization.
If a nuclear power starts building an ASI, what is everyone else going to do? Shake their fists at the sky, realistically.
I am not convinced we need robots. A lot of it is not all that hard to do. For example, better forestry management to prevent forest fires. A lot of cities rebuild big chunks of their infrastructure over a century or so anyway. The problem is more social and political - you get worse forest management because you can blame climate change when it happens.
Yes, but also 100k firefighting robots is kind of a lot. How many firefighting robots should exist in the world? And how many seawall-building robots for the rising sea level? And how many other robots? At what point does the environmental cost of all these robots offset their benefits?
I agree that some technological solution might be the key to dealing with the climate, and that maybe robots would be part of such a solution, maybe powered by similar techniques as the current wave of AI. It's not an insane scenario, but it's worth keeping your perspective open to other possible developments.
The firefighting robots of which you speak already exist.