Posted by golfer 1 day ago

We do not think Anthropic should be designated as a supply chain risk (twitter.com)
798 points | 431 comments
roughly 1 day ago||
It feels like Sam's playing chess against an opponent who's playing dodgeball. He's leveraged this situation to get OpenAI in with the DoD in a way that's going to be extremely lucrative for the company and hurt his biggest rival in the process, but I think he still sees the DoD as Just Another Customer, albeit a big government one. This administration just held a gun to the head of Anthropic and (if the "supply chain risk" designation holds and does as much damage as they're hoping) pulled the trigger, because Anthropic had the gall to tell them no.

One thing this administration has shown is that you cannot hold lines when you're working with them - at some point the DoD is going to cross his "red lines," and he's going to have to choose whether he accedes to being a private wing of the government like Palantir, risking his entire consumer business, or whether he wants to build a genuine tech giant. There's no third choice here.
3eb7988a1663 1 day ago||
I do not see this as any mastermind play, but as fully compromising principles. Which is a play.

"Donations" to a corrupt regime plus signing a deal that says the DoD can do whatever they want isn't outmaneuvering so much as rolling in the pigsty.

roughly 1 day ago||
So is the theory that OpenAI believes it can’t compete on the open market or that they don’t know this will eventually cost them their consumer business?
3eb7988a1663 1 day ago|||
I doubt most consumers pay enough attention that they would even be aware of something like this. And even if they did, few companies have clean hands these days, so it just falls into the general haze of "everything is awful."

For OpenAI, it is likely a huge contract that gives them immediate cash today. Plus, the event can be repackaged in further financing deals: "Good enough for the DoD, with N-year contracts for analysis of the hardest problems."

tadfisher 1 day ago|||
The reality is that all data we have created and will create that is accessible on the public Internet will be used to train autonomous weapons systems used to kill humans. So the consumer business will be lost eventually, no matter what OpenAI believes.
BLKNSLVR 1 day ago|||
Everyone already knows what he is going to do when it comes to that.
discardable_dan 1 day ago||
It also doesn't matter, because Claude 4.6 is so much better at writing code that nobody cares what OpenAI is doing.
o175 20 hours ago|
Everyone's applauding Anthropic for having principles. Let's look at what those principles actually do.

Anthropic refused the Pentagon contract. Within hours, OpenAI signed it. The capability didn't pause. It just changed vendors. Anthropic's "red line" is a speed bump on a highway with no exit ramp.

But it does accomplish one thing: it gives their engineers a story they can tell themselves. We're the good ones. We said no. That moral comfort is what lets extremely talented people keep building the exact technology that makes all of this possible.

Worse, the "safety-focused" brand doesn't just pacify the people already there. It recruits researchers who'd otherwise never touch frontier AI, funneling them into building the most powerful models on earth because they've been told this is where the responsible work happens. The red lines don't slow capability development. They accelerate it by capturing talent that would have stayed on the sidelines.

And in this whole drama, who actually represents the public? Trump performs strongman nationalism. The Pentagon performs operational necessity. Anthropic performs moral courage. Everyone has a role. Nobody's role is the people whose data gets collected, whose lives get restructured by these systems. The only party with real skin in the game is the only one without a seat.

listless 20 hours ago|
This is exactly right. It’s crazy to me how easily people get confused and think that corporations are “good” or “evil”.

Anthropic is incredibly good at marketing. They are constantly out talking about how dangerous AI is, and even showing how Claude does dangerous things in their own testing. This is intentional - so that you see them as having the truly powerful AI. In fact, it's so powerful that all they can do is warn you about it.

They knew refusing this contract would make them look like the good guy. Again. They knew OpenAI would sign it. They knew vapid celebrities would celebrate them.

Folks, come on. Don't be so easily taken in. None of these people are good guys. They are all just here to make money and accumulate power and standing. That's OK - there's nothing wrong with that. But we've got to stop acting like we're in some ongoing battle of good vs. evil and tech companies are somehow virtuous.

o175 20 hours ago||
Even if they believe every word sincerely, it changes nothing. The structural effect is identical: sincere people build the same capability, and the contract reroutes the same way. You don't need cynicism to explain this.

The honest version might actually be worse, because sincere people work harder.