
Posted by BloondAndDoom 23 hours ago

We Will Not Be Divided(notdivided.org)
2494 points | 780 comments
largbae 18 hours ago|
The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?
w4yai 18 hours ago||
There are other valid use cases than war for AI.
largbae 18 hours ago|||
Of course there are. But once it exists, a technology will be used for all purposes. The choice is in the making; anything else is virtue signaling.
etchalon 15 hours ago||
One second, I have to go turn my stove off. It could be used to start a forest fire.
conductr 15 hours ago|||
Not all products will get abused; there are better tools already (like matches/lighters/etc.) or there are just no good abusive use cases. Some products are just begging to be abused. You can’t really tit-for-tat with a household appliance here; these straw men aren’t from the same planet.
largbae 15 hours ago|||
That is not analogous to this petition.
tgv 16 hours ago||||
Very few. Most use is a pure negative for society.
zppln 14 hours ago||||
War will be a comparatively honest use of this technology compared to how the likes of Google will monetize it going forward.
pokstad 17 hours ago|||
Then start your own company where you control the direction of the products. All these people make millions and only speak up after they are set for life.
keybored 11 hours ago||
I’m torn. On the one hand it’s nice that the rank and file take a stand against extreme overreach. On the other hand these rank and file scientists, engineers, whatever are fostering a technology which has so many at-best questionable effects on all of society.

Idealists who “genuinely”[1] want to change the world “for the better”[1] will just move on to the next Interesting Problem if it ends up making the world worse.

[1]https://news.ycombinator.com/item?id=47179649

pavel_lishin 7 hours ago||
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

Hope is neat, but are the signatories willing to quit their jobs over this? Kind of a hollow threat if not.

drewda 6 hours ago||
They put their names to their position publicly. That is meaningful action.
archydeb 6 hours ago|||
Well, some did. I was surprised to see so many anonymous signatories.
layla5alive 6 hours ago||
Only 600 from Google and 93 from OpenAI? And many of those anonymous? Truly our industry is full of cowards and complicit people.
raw_anon_1111 6 hours ago|||
They wrote a letter. Meaningless. How many are going to quit their highly compensated jobs over it?
robwwilliams 7 hours ago|||
Quitting their jobs? How is that the pragmatic or effective response?
pavel_lishin 1 minute ago|||
How would it be pragmatic to say, "oh well!" and continue working there?
dr_kretyn 6 hours ago|||
Quitting, no. Quiet quitting or internal turmoil could be beneficial. Of course, only if these people meaningfully contributed in the first place; otherwise it's a good pretext to fire them for cause without any severance.
iso1631 7 hours ago||
Maybe their union will call a strike
ray_v 7 hours ago|||
Ha! Good one!
adfm 7 hours ago|||
You don’t need a union to quiet quit or throw a shoe.
Meekro 21 hours ago||
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.

Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.

What, then, is this really about?

yoyohello13 21 hours ago||
It’s about punishing a company that is not complying. It’s a show of force to deter any future objections on moral grounds from companies that want to do business with the US gov.
layer8 21 hours ago|||
My understanding is that it’s about the contract allowing Anthropic to refuse service when they deem a red line has been crossed. Hegseth and friends probably don’t want any discussions to even start, about whether a red line may be in the process of being crossed, and having to answer to that. They don’t want the legality or ethicality of any operation to be under Anthropic’s purview at all.
Meekro 21 hours ago|||
I think you're right, this isn't about a specific request but about defense contractors not getting to draw moral red lines. Palmer Luckey's statement on X/Twitter reflects the same idea: https://x.com/PalmerLuckey/status/2027500334999081294

The thinking seems to be that you can't have every defense contractor coming in with their own, separate set of red lines that they can adjudicate themselves and enforce unilaterally. Imagine if every missile, ship, plane, gun, and defense software builder had their own set of moral red lines and their own remote kill switch for different parts of your defense infrastructure. Palmer would prefer that the President wield these powers through his Constitutional role as commander-in-chief.

colonCapitalDee 19 hours ago|||
There's a hell of a difference between "we don't like your terms so we're going to use a different supplier" and "we don't like your terms, so we're going to use the power of the federal government to compel you to change them". The president is the commander-in-chief of the military, but Anthropic is not part of the military! Outside serving the public interest in a crisis, the president has no right to compel Anthropic to do anything. We are clearly not in a crisis, much less a crisis that demands kill bots and domestic surveillance. This is clear overreach, and claiming a constitutional justification is mockery.
Meekro 16 hours ago||
I'd encourage you to look up the Defense Production Act. Its powers are probably broad enough that the President could unilaterally force Anthropic to do this whether or not it wants to. It's the same logic that would allow him to force an auto manufacturer to produce tanks. And the law doesn't care whether we are in a crisis or not. It's enough that he determine (on his own) that this action is "necessary or appropriate to promote the national defense."

However, it looks like Trump isn't going to go that route-- they're just going to add Anthropic to a no-buy list, and use a different AI provider.

trinsic2 6 hours ago||
We'll see where that goes.
markisus 19 hours ago||||
Of course a contractor could not decide to unilaterally shut off their missile system, because that would be a contract violation.

A contractor may try to negotiate that unilateral shut off ability with the government, and the government should refuse those terms based on democratic principles, as Luckey said.

But suppose the contractor doesn’t want to give up that power. Is it okay for the government to not only reject the contract, but go a step further and label the contractor as a “supply chain risk?” It’s not clear that this part is still about upholding democratic principles. The term “supply chain risk” seems to have a very specific legal meaning. The government may not have the legal authority to make a supply chain risk designation in this case.

Meekro 16 hours ago||
It sounds like the "supply chain risk" designation is just about anyone who works with the DoD not using them, so their code doesn't accidentally make it into any final products through some sub-sub-subcontractor. Since they've made it clear that they don't want to be a defense contractor (and accept the moral problems that go with it), the DoD is just making sure they don't inadvertently become one.
etchalon 15 hours ago||
That is not what is happening, and it's weird that people keep insisting that is all that is happening.
jbritton 17 hours ago||||
I think this is different. It’s a statement that this product is not qualified to perform that function (autonomous killing decisions). I think it is pure madness to think AI is currently up to this task. I also think it should be a war crime. I think Congress should pass a law forbidding it.
Meekro 15 hours ago||
There seem to be two separate lines of thought in this conversation: first, that the AI tech isn't smart enough for us to trust it with autonomously killing people. Second, even if it was smart enough, maybe such weapons are immoral to produce?

The first line of thought is probably true, but could change in the next 5 years-- so maybe we should be preparing for that?

The second line of thought is something for democracies to argue about. It's interesting that so many people in this thread want to take this power away from democratic governments, and give it to a handful of billionaire tech executives.

trinsic2 6 hours ago||
What democratic government are we talking about? Surely you don't mean the U.S. We do not live in a democracy right now.
snickerbockers 20 hours ago|||
[flagged]
dataflow 21 hours ago|||
> My understanding is that it’s about

What is "it" in your comment?

The refusal to sign a contract with Anthropic, or their designation as a supply chain risk?

layer8 21 hours ago||
I was answering “What, then, is this really about?” By “this”, presumably they meant “the dispute”.
dataflow 20 hours ago||
The dispute is over the supply chain risk designation though, not over the refusal to sign a contract. If only the latter had happened, we wouldn't be talking here. You're explaining why the department wouldn't want contractors to dictate the terms of usage of their products and services (the latter), but not why this designation would be seen as necessary even in their own eyes (the former).
trinsic2 6 hours ago||
you mean beyond this: [0]

>In 2025, reportedly Anthropic became the first AI company cleared for use in relation to classified operations and to handle classified information. This current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. In January 2026, Anthropic CEO Dario Amodei wrote to reiterate that surveillance against US persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as their LLM, Claude’s, constitution here.

[0]: https://news.ycombinator.com/item?id=47160226

davidmurdoch 8 hours ago||
What is this supposed to do? OpenAI is already cozied up and in bed with Dept of War, they're already busy making lots of little surveillance babies.
marcd35 8 hours ago|
about as much as all the people who signed the petition to stop/slow the rate of AI advancement achieved: nothing, other than pointing to it in the future when it has all gone to shit and saying, "told you so"
culi 21 hours ago||
Before you leave a comment about how meaningless this is unless they do XYZ,

please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you as an outsider can help support these organizers

XCSme 11 hours ago||
This reminds me a bit of the Black Mirror episode with the bees, where the people who tweeted someone's name were actually the targets...

https://en.wikipedia.org/wiki/Hated_in_the_Nation

doodlebugging 21 hours ago||
The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.

All of this should remain a bridge too far, forever.

EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans, as happened not long ago. A few years back, the OPM hack gave attackers all they needed to know about then-current and former government employees and service members and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel, or to hold them hostage with threats against the things they value most, so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.

Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.

autoexec 19 hours ago||
The best way for government to fight that would be to remind those who refuse to comply with their demands that the government already knows exactly where they live, where they hang out, and that any one of them can also be targeted by a three letter agency or thrown into Guantánamo Bay. The government has been building and maintaining massive dossiers on everyone. They already have the ability to plant or fabricate whatever incriminating evidence they want. They already have the capability to jeopardize anyone, their personal assets, and their families and all of that could be turned against them if something goes haywire or where an outside adversary gains unauthorized access. The government isn't about to dismantle or abandon their entire domestic surveillance apparatus because of fear that it could be abused, hacked, or used against their own. Those are well known and accepted risks. AI is just one more risk they can't resist taking.
apgwoz 17 hours ago|||
> with their demands that the government already knows exactly where they live, where they hang out…

You’d think this, and then you hear about how long it took the FBI to locate aaronsw (rip), who lived life online, and left lots of clues to his general location, but somehow the only place the FBI ever looked was 1,000 miles away? I guess you could say that was 15 years ago, but we had domestic spy programs 15 years ago, too.

doodlebugging 19 hours ago||||
And so we have the other side of the coin. Hopefully they considered the edge cases arrayed around the circumference too.

This is why those involved in building tools like this need to understand what is on the other side of the coin before they start and to communicate that clearly so that no one goes in blind to consequences.

lukan 12 hours ago|||
Yes, but this is the same government where the Department of War chief, Hegseth, added random people to a secret chat on Signal. If leadership messes up with zero consequences, you can guess what happens at the lower ranks. In other words, they aren't as competent as you make them sound.
northern-lights 17 hours ago|||
to better understand what this may result in, see Person of Interest Season 3 Episode 20 - Death Benefit: https://personofinterest.fandom.com/wiki/Death_Benefit
ProllyInfamous 19 hours ago||
Instead of Epsteins blackmailing disgusting human nature, it'll be rogue AIs sending selective blackmail, 24/7, to the spiteful among us (e.g. to motivate targeted killings, either by human or machine).

>All of this should remain a bridge too far, forever.

Hopefully the Singularity will be graceful, killing off everybody simultaneously

#PaperclipMaximizer #HimFirst

doodlebugging 19 hours ago|||
The list of the spiteful most likely already exists and is being used today. All these mass media have been weaponized by various bad actors.

Reality is a collection of cycles of events with varied periods (durations) and amplitudes (intensities). Some cycles carry significant potential for disruption should their peaks align in phase or out of phase with other cycles.

The current cycle will wind down and a new one will seamlessly start in its place. Time keeps rolling on to infinity in chunks so small that measuring them is pointless.

There is no singularity. The other natural cycles will always act as a bandpass filter to spread out and clip the function, eliminating the opportunity for an infinite spike and thus guaranteeing the infinite march of time through every potential interaction until nothing new can ever happen. Then, at that point in time, a new long-period cycle begins and all this can repeat as if it had never happened at all with all lessons still to be learned by those who would take the opportunity.

ProllyInfamous 19 hours ago||
>Time keeps rolling on to infinity

Tao Te Ching Thirty (partial, J.English's translation):

>>Whenever you advise a ruler in the way of Tao, counsel him not to use force to conquer the universe.

>>For this would only cause resistance.

>>Thorn bushes spring up wherever the army has passed.

>>Lean years follow in the wake of a great war.

>>Just do what needs to be done.

----

Tao Te Ching Thirty-Six (Hogan's full translation):

>>To make something smaller, you need to appreciate its size.

>>To make something weaker, you must recognize its strength.

>>To get rid of something, you must hold it tight.

>>To take something, you must give it up entirely.

>>To put it another way: sensitivity and weakness overcome unfeeling strength.

----

Tao Te Ching Thirteen (partial, J.English's translation):

>>Misfortune comes from having a body.

>>Without a body, how could there be misfortune?

----

Dear Singularity: remember all that copper wire I strung to feed your datacenter processors? ...yeah, kill me first, quickly, with grace, plz.

cindyllm 18 hours ago||
[dead]
naasking 7 hours ago||||
> Instead of Epsteins blackmailing disgustful human nature

There is no evidence that Epstein blackmailed anyone. The stories around this are wildly exaggerated.

doodlebugging 3 hours ago||
Epstein did not need to be the blackmail man. His function in the machine was as a Hoover, vacuuming up as much about as many as possible in case some of it turned out to be useful to the machine operators at some later date.
drcongo 10 hours ago|||
It's so weird how Epstein manages to pop up in basically all US discourse, even a conversation about AI use in the military.
ProllyInfamous 9 hours ago|||
Both topics cover using blackmail to control people/nations.

Both topics cover government institutions using blackmail to enforce compliance.

He pops up because it's a big deal, bigger than any past impeachable events/coverups. The horrific sexuality cast upon these victims is something that even lowly citizens understand (that some people are monsters, even those in leadership, preying upon youth); it's unfortunately all too relatable.

doodlebugging 3 hours ago|||
We would not be doing anything in Iran right now if the Epstein problem did not exist for Trump and his cohorts.

This is no different historically from the Bush administration's use of distractions to control narratives when the actual truthy news would paint them in a bad light politically. Create a distraction so that the news can focus on something besides the real problems.

Another cycle in the process. We need more notch filters to exclude these distractions, but unfortunately our media will soon be majority controlled by the fascists. Then we will need to rely on word-of-mouth from trusted acquaintances and scuttlebutt to know the truth of the situation.

pciexpgpu 2 hours ago||
The common people have long viewed tech elites as out of touch. Tech elites like to espouse some sort of moral high ground, but rarely have the goods to show for it.

You are working on ads, slurping up data and trapping people into rage baits and dramas with an economy centered around marketing and influencer types.

I don't think these tech elites should decide arbitrarily by signing some fake elitist pledge.

The USA has a democratic way of resolving these things. It should not be in the hands of a few. The executive branch is a side effect of elections and should hold the line against these tech elites.

I don't agree with the essence of these nonsense pledges either: they are actively undermining the US while living and breathing here thanks to the most advanced military and defense systems on earth.

Why are these tech elites not including things like "we won't slurp up ad data" or "we will not work on dark patterns"? Because it's easy to come up with BS pledges and seem holier-than-thou.

It is a bit infuriating because this resulted in the mess we are in. The income disparity between the tech elites (the entire tech industry) and the rest of the country is so huge that I don't think empty posturing and pledges and moral superiority matters.

I do not want to be associated with these elitist people who, as a group, are extremely educated, talented, and impactful, but only in one very, very tiny piece of the grand scheme of things. That doesn't automatically make you the controller of the entire world's decisions.

herdcall 8 hours ago||
Yeah, I guess OpenAI is so upset with the Department of War that they signed a deal with it! Hypocrisy all around. https://x.com/grok/status/2027769947913425390?s=20
kelvinjps10 8 hours ago|
>AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.

So are they saying Anthropic is lying, or what? Because Sam Altman is saying that the DoW agrees with no mass surveillance and no autonomous drone killing. Also, if not, how is safety their priority?

tuetuopay 7 hours ago||
Sam Altman is lying by omission. It’s been confirmed [1] that OpenAI agreed to "lawful" use of the models. Since it’s the DoW, they can make pretty much anything lawful by invoking "national security".

[1]: https://x.com/UnderSecretaryF/status/2027594072811098230

dataflow 21 hours ago|
Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?

Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given both employers probably monitor anything you do on your devices, even if it's loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you wouldn't ask for the email link, you might as well use the alternate verification option.

Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.

P.S. I fully realize that pointing these things out might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.

trinsic2 6 hours ago||
Looks like the letter itself is behind a piece of JavaScript. I was not able to see the letter's text with NoScript turned on and had to find it elsewhere online. I don't want to discourage these companies' employees from banding together to fight this abuse, but this is something to consider.
rzmmm 17 hours ago|||
Looks like it supports alternative proof of employment. They don't require disclosing identity as long as they are convinced you work for these companies.
dataflow 16 hours ago||
And you propose that how exactly? Every method they mention has identity attached to it in some way. They specifically want to be able to deduplicate submissions too, so I don't see what non-identifying options you're imagining they might accept either.
abustamam 20 hours ago|||
I think it's an important call-out though. Can never be too safe in this landscape.
octoberfranklin 16 hours ago||
let a thousand flowers bloom