
Posted by BloondAndDoom 1 day ago

We Will Not Be Divided (notdivided.org)

2588 points | 824 comments
remarkEon 1 day ago|
This whole episode is very bizarre.

Anthropic appears to be situating themselves where they are set up as the "ethical AI" in the mindspace of, well, anyone paying attention. But I am still trying to figure out where exactly Hegseth, or anyone in DoW, asked Anthropic to conduct illegal domestic spying or launch a system that removes HITL kill chains. Is this all just some big hypothetical that we're all debating (hallucinating)? This[1] appears to be the memo that may (or may not) have caused Hegseth and Dario to go at each other so hard, presumably over this paragraph:

>Clarifying "Responsible AI" at the DoW - Out with Utopian Idealism, In with Hard-Nosed Realism. Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts. The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.

So, the "any lawful use" language makes me think that Dario et al. have a basket of uses in their minds that they feel should be illegal, but are not currently, and they want to condition further participation in this defense program on not being required to engage in such activity that they deem ought to be illegal.

It is no surprise that the government is reacting poorly to this. Without commenting on the ethics of AI-enabled surveillance or non-HITL kill chains, which are fraught, I understand why a department of government charged with making war is uninterested in debating this as terms of the contract itself. Perhaps the best place for that is Congress (good luck), but to remind: the adversary that these people are all thinking about here is PRC, who does not give a single shit about anyone's feelings on whether it's ethical or not to allow a drone system to drop ordnance on its own.

[1] https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART...

sensanaty 1 day ago||
I'm going to copy a comment I made in a related thread:

I might be being a bit conspiratorial, but is anyone else not buying this whole song and dance, from either side? Anthropic keeps talking about their safeguards or whatever, but seeing their marketing tactics historically it just reads more like trying to posture and get good PR for "fighting the system" or whatever.

"Our AI is so advanced and dangerous Trump has to beg us to remove our safeguards, and we valiantly said no! Oh but we were already spying on people and letting them use our AIs in weapons as long as a human was there to tick a checkbox. Also, once our models improve enough then we'll be sending in The Borg to autonomously target our Enemies™"

I just don't buy anything spewing out of the mouths of these sociopathic billionaires, and I trust the current ponzi schemers in the US gov't even less.

Especially given how much astroturfing Anthropic loves doing, and the countless comments in this thread saying things like "Way to go Amodei, I'm subbing to your 200 dollar a month plan now forever!!11".

One thing I know for sure is that these AI degenerates have made me a lot more paranoid of anything I read online.

nobodywillobsrv 1 day ago||
It really feels like I am no longer impressed with Anthropic safety.

Do they have even a basic understanding of the different regimes of safety and what allegiance means to one's own state?

It would be fine to say they are opting out of all forms of protection against adversaries.

But it feels like just more insane naive tech bro stuff.

As someone outside the tech bro bubble in fintech in London, can somebody explain this in a way that doesn't indicate these are sort of kids in a playground who think there is no such thing as the wolf?

Again, opting out and specializing in tech that you are going to provide to your enemies AND friends alike is fine. That is a good specialization. But this is not what I hear. I hear protest songs, not the deep thinking of a thousand-year mindset.

verisimi 1 day ago||
It's great that people are taking a moral position re their work, and are seemingly prepared to take a bit of a risk in expressing themselves.

However, if we're honest, Google has a long history of selling 'the people' out on domestic surveillance. There is even a good argument that this is what it was created for in the first place, given it was seeded with money from In-Q-Tel, the CIA venture capital fund. So, while I commend acting with your conscience in this (rather minor) case, and I'm glad to see people attempt to draw a line somewhere, what will this really come to? I strongly suspect this event itself is just theater for the masses, where corporates and their employees get to stand up to government (yay!). The reality is probably that all that is being complained about, and far worse, has been going on for years.

How far would these signatories go? Would they be prepared to walk away from all that money? Will they stop the rest of the dystopian coding/legislation writing, or is that stuff still ok (not that evil)?

Ultimately, is gaining the money worth the loss of one's soul? If you know better, and know that it is wrong to assist corporations and governments in cleaving people open for profit and control, but do it anyway for the house, private schools, holidays, Ferrari, only taking a stand in these performative, semi-sanctioned events - is this really the standard you accept for yourself? If so, then no problem. If not, what exactly are you doing the rest of the time? Are you able to switch your morality/heart/soul off? Judge yourself. If you find you are not acting in accord with yourself, everything is already lost.

politician 1 day ago||
I simply do not understand why American tech companies and their employees will raise a hue and cry about supporting the military. For those of you who support their position, have you ever stopped to consider that your safe, comfortable lives of free speech and protests and TikTok and food and gas and Amazon Next-Day deliveries are enabled by a massive nuclear deterrent operated by the very military you oppose?

It is just so disappointing to come here and read these naive takes. Yes, Anthropic should be compelled to support the military using the DPA if necessary.

rectang 1 day ago||
> “I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of the very freedom that I provide, then questions the manner in which I provide it.”

— Colonel Jessup

politician 1 day ago||
No individual, whether a colonel or a CEO, has inherent authority over national security decisions. Authority flows through democratic institutions. A contractor can choose whether to participate, but national defense policy is determined by elected institutions, not private executives. If society believes AI should or should not be used for certain military purposes, the venue for that decision is democratic governance, not unilateral corporate refusal or approval.

On a CBS interview this morning, Dario defended his position with the claim that he must act because "Congress is slow." CEOs can and should make decisions about what their companies build or refuse to build. What they cannot do is substitute their judgment for the constitutional processes that govern national security. We must not vest de facto policy control in unelected corporate leaders.

dingi 1 day ago||
It really shows how far the HN crowd is from reality.
hakrgrl 1 day ago||
1.5 hours after this was posted, Sam Altman stated openai will work with the DoW.

So much for this waste of a domain name. https://x.com/sama/status/2027578652477821175

"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. "

busko 1 day ago||
For those who don't use X:

https://xcancel.com/sama/status/2027578652477821175

andai 1 day ago|||
>Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.

I don't get it. Aren't these the same things that Anthropic was trying to negotiate?

Edit: it was explained elsewhere in this thread:

https://news.ycombinator.com/item?id=47188473#47190614

nobody_r_knows 1 day ago||||
Redirect every tweet to x-cancel link: https://chromewebstore.google.com/detail/xcancelcom-redirect...

Saves you the hassle of visiting that shit-show.

jamiequint 1 day ago|||
WTF is this garbage site?
nikolay 1 day ago|||
It's for people who want to read Twitter/X while trying so hard to convince themselves that they don't.
zahlman 1 day ago|||
> It's for people who want to read

individual posts on Twitter/X without requiring JavaScript and without being fed a sidebar full of algorithmic recommendations.

busko 1 day ago|||
It's for people who want the context of what's going on here who have neither the time nor stupidity to be on X.

I presume you're on X so no offence to you directly.

TheDong 1 day ago||||
If you click the '?' link in the upper right it will explain what it does https://xcancel.com/about
esseph 1 day ago|||
it mirrors what is on x.com
Gigachad 1 day ago|||
Something doesn’t make sense here. His tweet claims he has exactly the same restrictions that Anthropic had.
skissane 1 day ago|||
This tweet (from Under Secretary of State Jeremy Lewin) explains it:

https://x.com/UnderSecretaryF/status/2027594072811098230

https://xcancel.com/UnderSecretaryF/status/20275940728110982...

The OpenAI-DoW contract says "all lawful uses", and then reiterates the existing statutory limits on DoW operations. So it basically spells out in more detail what "all lawful uses" actually means under existing law. Of course, I expect it leaves interpreting that law up to the government, and Congress may change that law in the future.

Anthropic wanted to go beyond that. They wanted contractual limitations on those use cases that are stronger than the existing statutory limitations.

OpenAI has essentially agreed to a political fudge in which the Pentagon gets "all lawful uses" along with some ineffective language which sounds like what Anthropic wanted but is actually weaker. Anthropic wasn't willing to accept the fudge.

qdotme 1 day ago|||
Well, or just the possibility of future-proofing the agreement in favor of the US government, as well as walking back the slippery slope of „no autonomous lethality” and „no mass surveillance”.

The former grants Congress the ability to change the definition of „all lawful use” as democratically mandated (if war is officially declared, if martial law is officially declared).

The latter is subtle. There can exist a human responsibility for lethal actions taken by fully autonomous AI - the individual who deploys it, for instance, can be made responsible for the consequences even if each individual „pulling of a trigger” has no human in the loop (unacceptable from Dario's PoV).

Similarly, and less subtly, acceptance of foreign mass surveillance and domestic surveillance (as long as it's lawful and not meeting the unlawful mass surveillance limits!) seems to be more in the Pentagon's favor.

Whether we like it or not, we're heading into some very unstable times. Anthropic wanted to anchor its performance to stable (maybe stale) social norms; the Pentagon wanted to rely on an AI provider even as we change those norms.

squarefoot 1 day ago||||
"All lawful uses" has no meaning when a malignant narcissistic sociopath in power controlled by ruthless rich psychopaths can now rewrite every law at will.
PakG1 1 day ago|||
Because the US government has such a great track record on ensuring that this kind of stuff is only done legally with the utmost integrity. /s
Jensson 1 day ago||||
Sam probably told them they can renegotiate those restrictions in a year or so when the drama has died down.
patcon 1 day ago|||
yeah, something shady. i don't trust sam at all.

i once ran into someone in london in 2023 who was doing their thesis on AI regulation. they had essentially ended up doing a case-study on sam. their honest non-academic conclusion (which they shared quietly) was that they were absolutely terrified of sam altman.

fear is one of those signals we ought to listen to more often

m3kw9 1 day ago|||
It's not shady; the systems are not ready for that kind of task, especially autonomous hunting. It's smart negotiation. Plus, Sam would have used the Anthropic situation against them, saying you can't designate all the top American AI companies as supply chain risks, etc. It's complete idiocy that they would do that anyway.
qdotme 1 day ago||
Ready at what level, though. The subtleties are what matters.

It’s well established that belligerents can use mines, to separate the tactical decision of deploying for purposes of area denial from the snap-second lethal decision (if one can stretch that definition) to detonate in response to a triggering event.

Dario’s model prohibits using AI to decide between enemy combatant and an innocent civilian (even if the AI is bad at it, it is better than just detonating anyways); Sam’s model inherits the notion that the „responsible human” is one that decided to mine that bridge; and AI can make the kill decision.

How is that fundamentally different in the future war where an officer might make a decision to send a bunch of drones up; but the drones themselves take on the lethal choice of enemy/ally/no-combatant engagement without any human in the loop? ELI5 why we can’t view these as smarter mines?

puchatek 1 day ago||
It's different because we are talking about a technology that we might lose control over at some point. Those drones in your example might make an entirely different choice than what you anticipated when you let them take off.
labrador 1 day ago||||
This is actually a government bailout of OpenAI. Investors gave it a bunch of money earlier knowing this was going to happen. Greg Brockman is a major Republican donor for 2026. Nice for OpenAI.
ddtaylor 1 day ago||||
PR spin/lying while behind closed doors agreeing to it. What's hard to understand about OpenAI lying?

Altman publicly claimed he had no financial stake in OpenAI to emphasize his mission-driven focus. In 2024 it was revealed that Altman personally owned the OpenAI Startup Fund.

In May 2024, actress Scarlett Johansson accused Altman of intentionally mimicking her voice for ChatGPT's "Sky" persona after she had explicitly declined to work with them.

When OpenAI’s aggressive non-disparagement agreements were leaked, which threatened to strip departing employees of all their vested equity (potentially millions of dollars) if they criticized the company, Altman claimed he was unaware of the "provision."

gritspants 1 day ago||||
My theory is that they both went through normal procurement processes. At some point, one of Palantir's forward deployed sales agents slapped someone's arm at the golf course and said, yes, we can autonomously kill with our AI agents. Anthropic, having little to do with the kind of 'AI' that use case made sense for, declined.
jaco6 1 day ago||
[dead]
straydusk 1 day ago||||
I know the reaction to this, if you're a rational observer, is "OpenAI have cut corners or made concessions that Anthropic did not, that's the only thing that makes sense."

However, if you live in the US and pay a passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:

* Make a negotiation personal

* Emotionally lash out and kill the negotiation

* Complete a worse or similar deal, with a worse or similar party

* Celebrate your worse deal as a better deal

Importantly, you must waste enormous time and resources to secure nothing of substance.

That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.

spuz 1 day ago||
You're missing an important part of the negotiation - Trump must benefit personally in some way. In this case, Greg Brockman has given by far the biggest single donation ($25m) to Trump's MAGA PAC in September last year.
foobarqux 1 day ago||||
No, the difference is that the government agrees to no "unlawful" use as determined by the government.

Anthropic said that mass surveillance was per se prohibited even if the government self-certified that it was lawful.

Tadpole9181 1 day ago||||
Well tweets aren't legally binding, so chances are he's just outright lying so they can have their cake (DoD contracts) and eat it too (no bad PR)
jkaplowitz 1 day ago|||
> Well tweets aren't legally binding

There's nothing in general about a tweet that makes it any more or less legally binding than any other public communication, and they certainly can be used in legally binding ways. But sure, a simple assertion to the public from the CEO of a privately held company about what a separate contract says is not legally binding - whether through tweet, blog, press release, news interview, or any other method.

sudo_cowsay 1 day ago||||
Companies like saying things that make it look like they aren't doing anything bad, but then they decide to do exactly what they said they wouldn't.

e.g. google project maven, microsoft hololens (military), and much much more

nurettin 1 day ago|||
This is so funny to me. Especially since Elon musk had to buy Twitter due to his tweets.
Tadpole9181 1 day ago||
> Especially since Elon musk had to buy Twitter due to his tweets.

Okay, yes, if you openly and directly state a unilateral contract offer and you're already in trouble with the SEC, Tweets can be legally binding.

anigbrowl 1 day ago||||
You really think someone would do that, just go on the internet and tell lies?

https://knowyourmeme.com/memes/just-go-on-the-internet-and-t...

moralestapia 1 day ago|||
Makes 100% sense.

They said yes to the same thing.

karmasimida 1 day ago||
Dario is being ruled out due to ideological standing

Makes perfect sense

nandomrumber 1 day ago||
Yep.

Everyone is over thinking it.

There would have been a conversation between Hegseth and Trump that went something like:

This guy thinks he can tell us what we can and can’t do.

Get rid of him.

It’s that simple.

karmasimida 1 day ago||
He is a horrible public presenter. He presents himself as someone who is nervous and validation-seeking, yet it is stupid of you not to trust him.

He lacks confidence yet feels incredibly arrogant.

He would succeed in academia as the principal of some prestigious university with this exterior, not as CEO of an AI company.

nikolay 1 day ago|||
He's the reason why many people avoid OpenAI as he is among the top 3 most untrustworthy people in tech!
LPisGood 1 day ago|||
Who are the other two?
nikolay 17 hours ago||
In my book, Peter Thiel and Elon Musk, members of the PayPal Mafia.
nashashmi 1 day ago|||
Zuckerberg is number one?
nikolay 17 hours ago||
He's crazy, not evil.
nashashmi 3 hours ago||
he loves people who lack morality and ethics.
RobLach 1 day ago|||
So all these OpenAI signers are resigning, or...?
jalapenos 1 day ago||
Why only have the cake when you can eat it too
mcs5280 1 day ago|||
Remember when they removed him for not being consistently candid?
jalapenos 1 day ago||
And then Microsoft forced him back in on the grounds of: he's a scumbag but he's our scumbag so he's untouchable
dang 1 day ago|||
Related ongoing thread:

OpenAI agrees with Dept. of War to deploy models in their classified network - https://news.ycombinator.com/item?id=47189650 - Feb 2026 (22 comments)

dataflow 1 day ago|||
The wording I see is not exactly free of loopholes. I noted them on the other thread: https://news.ycombinator.com/item?id=47190163
xtracto 1 day ago|||
When I started reading all these news, the thought that came to my mind is: how sweet of these companies to try this, but unfortunately I am sure that other countries advancing AI like China (deepseek, GLM, etc) or Russia, or whoever WILL have their companies' AI at their disposal

Unfortunately, this is the new arms race, race to the moon, and all that together.

neya 1 day ago|||
This is not about wars or winning contracts. If you know about Sam's strategies - It's just business. This deal ensures Anthropic doesn't have the financial cushion that OpenAI desperately needs (they just raised billions, also trending on HN). Is it ethical? Probably not. But, all is fair in love and war (proverb).
puchatek 1 day ago|||
The deal was only possible because Anthropic stood by their convictions. OpenAI didn't have agency in that. You're making it sound like Altman orchestrated the whole thing.
neya 1 day ago||
> You're making it sound like Altman orchestrated the whole thing.

Not at all, as a matter of fact I'm just stating what you're stating. It's just business.

jalapenos 1 day ago|||
Altman is a snake who uses words purely instrumentally, and this is well known.

He basically takes advantage of people's limited memories and the default assumption that when a person says something, it's honest.

ahf8Aithaex7Nai 1 day ago|||
I dislike the style of Altman's language about as much as I dislike the bullshit language used in politics or the self-incriminating, overly specific denials used by prominent figures to defend themselves against criminal allegations: “I have never had sexual relations with anyone under the age of 18 outside of my own family.”

The language is so coded that the many places where the core statement must be negated stand out like a sore thumb.

chamomeal 1 day ago|||
Aaaaaand it’s gone
m3kw9 1 day ago|||
Learn to read. “ Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
SlightlyLeftPad 1 day ago||
Meanwhile, the mass surveillance is outsourced to Flock
SilverElfin 1 day ago|||
Greg Brockman who cofounded OpenAI is the biggest donor to Trump’s PAC. Altman claims they kept the same restrictions as Anthropic essentially. So my conclusion is OpenAI successfully bribed the government into ditching Anthropic and viciously attacking them by abusing their power (supply chain risk).

Probably the most corrupt way of killing a competitor I’ve heard of.

stinkbeetle 1 day ago|||
[flagged]
hshdhdhj4444 1 day ago|||
You’re right.

The people who actually know stuff about the world are reality TV stars, Fox News hosts, and podcasters just asking questions.

Those are the people with actual knowledge.

stinkbeetle 1 day ago||
Pathetic strawman.
Jimmc414 1 day ago||||
What else can they do? Would you recommend they stay silent? It sounds like they are no longer the gatekeepers of this technology or they never were to begin with.
stinkbeetle 1 day ago|||
I would recommend they start by understanding the landscape and developing strategies that are more suited for the actual world as it is, not the naive fantasy land they believe it is.

Coming out publicly playing their hand like it's a royal flush when it's a 7 high and their cards are facing their opponent clearly wasn't going to do anything. The cynical take is they aren't that naive and this just gives them plausible deniability within their social circles when they are interrogated as to why they work for these corporations. But I like to give the benefit of the doubt.

WatermelonApe 1 day ago|||
[dead]
teaearlgraycold 1 day ago|||
All they did was say they didn’t want their company to do something. They never said they had the power to ensure that.
stinkbeetle 1 day ago||
Being disingenuous isn't a clever or interesting way to discuss a topic though.
senderista 1 day ago||
"The world is a complicated, messy, and sometimes dangerous place."

So you better just let the guys with the guns do whatever they want.

busko 1 day ago||
Hoorah! shock and awe
mrcwinn 1 day ago||
OpenAI employees lol.

You’ve lost utterly and completely. Even if you, as an individual, are a good person.

twocommits 1 day ago||
[dead]
Jaxon_Varr 1 day ago||
[dead]
techreader2 1 day ago|
[dead]