Posted by BloondAndDoom 21 hours ago
If government procurement rules intended for national security risks can be abused as a way to punish Anthropic for perceived lack of loyalty, why not any other company that displeases the administration like Apple or Amazon?
This marks an important turning point for the US.
Setting aside the spectacular metastasis of a lawless kakistocracy that is literally rewriting the facts on record...
Anthropic's leadership has wisely attempted to make it clear that its product is not fit for the US DoD's purpose/objective, which is automated killing at scale.
It would be (is) grossly, historically negligent to operate weapons with LLMs. Anthropic built systems for a thuggocracy that only understands bribery, blackmail, and force.
Saying the government can just nationalize any company purely because they want to use the tech to kill people has pretty big implications and is historically against what this country stands for.
>[1] The government can make you go over to southeast Asia and kill people personally.
Is this a normative statement? In other words are you simply claiming "the government has men with guns and therefore can force people/companies do whatever they want", or are you claiming that "the government should be able to commandeer civilian resources for whatever it wants"?
Is it a "moral duty" to aid your government, especially in the current social/political environment? Conscription is theoretically still allowed in the US, and you're theoretically supposed to register for the SSS, but nobody has been prosecuted for failure to do so in decades. That suggests the "moral duty" aspect has significantly weakened. Moreover if we're making comparisons to the draft, it's also worth noting the draft allows for conscientious objection. That makes your claim of "that’s not their call to make" quite questionable.
Whether they participate voluntarily in a commercial transaction or participate only when compelled to by law (setting aside the question of whether the government does or should have that power) is certainly their call to make.
Just as any individual can decide whether to volunteer, whether to wait until drafted, or whether to refuse to be drafted and face the consequences.
(History shows these decisions, and the rights to make them, are meaningful at scale!)
Finally, governments who expect their leading scientists to do groundbreaking work simply out of fear of imprisonment are NGMI against governments whose scientists believe in their cause.
Conscientious objection still puts the ball in the government’s court. You have to make your case to the government that you have a deeply held religious or moral belief that precludes participation in war, and then the government decides what it wants to do. It’s not clear to me how a corporation would prove the existence of such a belief. But even if that was possible, it wouldn’t give the company the right to decide unilaterally.
People largely tend not to believe in this kind of jingoistic bullshit nowadays.
> If Congress doesn’t want AI-powered killing machines, they’re the ones who have the right to make that call.
You have it backwards, and you know it. If Congress wants to invoke natsec concerns to force companies to sell to the federal government, then they have to explicitly say so, and any such legislation and exercise of executive power pursuant thereto would be heavily litigated.
> The government can make you go over to southeast Asia and kill people personally. It’s totally incompatible with that to say companies should be allowed to veto the use of their technologies in war.
Yes, it's legal to have drafts, but that's not relevant, and the draft also includes certain exceptions for conscientious objectors. It doesn't matter if it's paradoxical or ironic that an individual could be pressed into military service whereas a private company doesn't have to sell stuff to the federal government.
US tech companies were previously forced into compliance with PRISM or threatened with destruction (see: escalating fines to infinity against Yahoo, forcing their eventual compliance).
You know what's new? This administration is doing out in the open what used to go on quietly.
> You know what's new? This administration is doing out in the open what used to go on quietly.
So this administration has got bold and the behaviour has become overt.
People have this intuitive sense that there's some kind of authority of truth or justice, an available recourse that we could've and should've used.
But that sense is incorrect.
What we actually have are the political and justice systems that Trump and his adherents have, so far, quite successfully subverted.
In other words, we might have killed Osama bin Laden, but he won. The U.S. truly is a "shadow of its former self."
It's interesting to see that nothing happens despite this. Now he started another war to distract from his involvement in the huge Epstein network. Also, by the way, quite interesting to see how many people were involved here; there is no way Ghislaine could solo-organise all of that, yet she is the only one in prison. That objectively makes no sense.
e: Americans seem to be surprised to learn that their democracy is indeed classified as a flawed democracy for more than a decade by The Economist due to decades of backsliding (the more rapid regression lately is not yet accounted for, but I wouldn't be surprised if the outcome of the 2026 elections results in a hybrid regime assessment in 2027).
They were going to do him for conspiracy to defraud the United States and conspiracy to obstruct an official proceeding, re. the 2020 stuff before he got reelected.
Half the country just hasn't accepted the reality that the other half refuses to share a society with them and wants them dead.
Governments should not be permitted to introduce regulations against companies of this kind if the regulations can be enforced selectively and with regulator discretion, as the GDPR and antitrust definitely are. The free-speech implications are staggering.
A recent report shows the approval numbers: for all Americans it's at 36%. For white Americans, it's at 45%.
Even 36% is sky high for what he did.
https://www.nytimes.com/interactive/polls/donald-trump-appro...
https://www.reuters.com/graphics/TRUMP-POLLS-AUTOMATED/APPRO...
https://www.economist.com/interactive/trump-approval-tracker
History will put Trumpers and Confederates at the same level of despicability.
You step up and start shooting at the heartless monsters running the first (US armed forces) and second (ICE) most well-funded militaries in the world. Go ahead. We’ll be right there behind you.
(Yeah, I’m burning some hn karma for this, I imagine.)
But nope, only words, words and more words.
"We mustn't consider dealing with problem x because it wasn't considered important by our founding fathers"
"China are catching up, so we need to cower behind a tariff wall rather than risk losing an open competition"
"Other countries with similar legal systems have successfully reformed their supreme courts, but there's nothing we can learn from them"
"We shouldn't constrain rogue leaders because of, er, something to do with King George III"
...and now "we can't push back against the regime, because they'll shoot us if we do".
It's so weird - a huge shift in such a short period of time. As an outsider who wishes America well, it's really sad to see.
As for getting shot, while the chance of getting shot in the US for opposing the government is much higher than in similar circumstances in somewhere like the UK (which is far from perfect - but rarely actually shoots people), it's also much, much lower than in Iran or China or Saudi Arabia.
Pushing back against the US government is a lot safer than taking part in something like the 2022 protests that ousted the Sri Lankan government, and lots of normally apolitical people took part in that (which was why it succeeded).
Your ignorance of reality does not define reality.
If you are in law enforcement, do not follow clearly unlawful orders. The president is not your boss. This is a functioning democracy.
If you are a librarian, do not hide otherwise lawful books that the current administration dislikes.
If you are in logistics, do not collect obviously unconstitutional taxes. Make sure to challenge them in courts first.
If you are in a university, stick to what is true and scientifically sound. Do not hide inconvenient truths.
If you are a baker, do not refuse to make a rainbow colored cake just because you are worried what the people wearing metaphorically brown shirts might say.
The list goes on and on and on. This has been well documented throughout history. Fascism needs a seed to thrive, and that seed is people complying in advance. Not with actual laws, but with the idea what direction the law will take, just because it's easier for them. People not helping other people because immigration is not in vogue right now and who knows what the neighbors might say.
Don't dismiss words: they are the necessary link between (individual) thoughts and collective deeds.
PS. Trump also got there with words: speeches, slogans, imprecations
And that whenever a mass shooting happens in the US, Americans reassure themselves that gun violence is a price worth paying for the Second Amendment. And there is a run on pawn shops and gun stores because mass shootings are the best form of advertising America's billion dollar gun lobby has.
And that Americans will wax poetic about watering the Tree of Liberty with the Blood of Tyrants and Patriots any time gun control comes up, because they believe their Second Amendment is an absolute vouchsafe against tyranny and because of that, they and they alone are the only truly free country.
And they were willing to rise up in Portland.
And they were willing to rise up during COVID.
And they were willing to rise up on Jan 6th.
And they're willing to shoot up schools and black churches and gay nightclubs and mosques so often it no longer makes the news.
But now, with blatant and undeniable tyranny in their face and shooting them dead in the streets... nothing.
Not that violence would necessarily be productive (although historically speaking no social or political progress happens without it)... but it's weird that the most violent society in human history, born of genocide and bathed in blood, with more guns than people and gun violence enshrined as its second most important and fundamental virtue, the land of "give me liberty or give me death" is all of a sudden the most timid.
Like goddamn throw a Molotov cocktail or something.
You're making the mistake of assuming an attribute of a culture cannot be accurate unless it's 100% accurate about every member.
I think it's perfectly valid to call Americans on the carpet when they won't live up to their stated principles, if only because of how obnoxious they've been about their own sense of exceptionalism, and how their guns supposedly serve as an absolute vouchsafe against tyranny.
History is going to note that the only times Americans attempted a revolution against their government was first in defense of slavery and second in defense of fascism, and that isn't a good look. Replying with #notallamericans doesn't help.
edit: OK partial mea culpa as the US had anti-slavery revolts[0], but the two events that will stand out for their lasting impact and scope are the Civil War and Jan. 6th. The Revolutionary War doesn't count because they were British at the time.
[0]https://en.wikipedia.org/wiki/Slave_rebellion_and_resistance...
That is, the money doesn't care so long as it's still profitable. When the recession comes a Democrat will be allowed back in to fix things.
See Liz Truss.
I think the solution is also obvious for the United States — higher taxes and lower government spending. We need to do both. However, you can't get elected if you promise both those things.
Yeah dude, that's the point.
The US government has lots of corporatism, but this isn't an example of that.
The current US administration's relationship with corporations is more about maximising how much bribe money it can extract from them, whilst undermining them with counterproductive policies no matter how big the tax breaks are.
Take the stated tool for this action, the Defense Production Act ("DPA") [1]. It was passed in 1950. What does it cover? Well, lots of things. The DPA has been invoked many times over 76 years.
Notably in 1980 it was expanded to include "energy", I guess in response to the 1970s OPEC Oil Crisis.
Remember during the pandemic when gas prices skyrocketed? As an aside, that was Trump's fault. But given that "energy" is a "material good" under the DPA, the government could've invoked it to tackle high energy prices and didn't.
So, the government is willing to invoke the DPA to protect corporate and wealthy interests, which now includes military applications of AI for imperialist purposes, but never for you, the average citizen. It's weird how that keeps consistently happening.
The US government has consistently acted to further the interests of US corporations and the ultra-wealthy. You probably just haven't been paying attention until now.
[1]: https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950
What exactly is being imposed by anthropic?
This is from the anthropic letter:
> We held to our exceptions for two reasons. First, we do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights.
Do you see these views as “left wing”? Or what do you disagree with here?
Compliance with the DoD doesn't remove big tech's complicity.
Please memorize the 14 points of fascism; you will see examples of this multiple times a day. It's everywhere.
Furthermore, this is kind of a naive framing, painting the state as somehow separate from the majority of capital...
Trump and associates have used the machinery of state to attack their enemies, attacked and belittled the judiciary while trying to subvert it, and demanded fealty from large businesses under threat of destroying them. It is unprecedented, reckless and a very dangerous moment, unfortunately not just the US has to live with the consequences.
If you think it is business as usual you need to do some reading of history, specifically a century ago in Germany.
Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.
It's not only the US being special in this case.
The problem is pretty simple: there is money to be made and someone will do what the Pentagon wants. Will it be worse in capabilities than Anthropic? Probably, but as long as it can be used to wage autonomous war wherever the US military decides, it will be good enough.
Anthropic can stick to their beliefs as much as they want, but it will not change the outcome, maybe just postpone it a bit.
On an unrelated note, I think the Pentagon erred when it labeled them a supply chain vulnerability, they should have used the DPA to make them do what they need. Less drama and much cheaper compared to replacing them with a whole different company.
They can and do do this routinely. Many individuals get marked and regularly go through additional screening if their travel plans raise flags. This isn't even unique to the US... most Western nations do the same. If there is a serious brain drain risk, the US government can easily go all out and have the whole company put on the no-fly list.
Let's hope so, because I am not so certain.
When the US banned human embryo research, did that erode trust? I didn't hear anything about that at the time.
I wonder if this is how some non-trivial minority of Americans actually think, or if it was just worded like that to try to appeal to the "most radical patriots".
I think war is bad and generally a stupid thing to do, but my point is that if they were negotiating terms with the department at all, it's really a given they'd be OK with the stuff you took issue with.
Every nation has some bias, but I think Americans have power poisoning from being the dominant power for so long. They think they are entitled to do anything and believe they are the good guys of history. Well...
I thought the US was a country of immigrants (or was before it started hunting them)?
There are only good/bad people for moments in time. Some are good for longer than others.
But I get it, anti-American sentiment is very popular right now.
Americans do the same, hence the whole world got Trump. 95% of the world isn't the US, so such logic is even easier for almost all of mankind: is the US a force of good or evil? Different places would give you different answers, and most Americans would not like the actual spread these days.
I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent, trying to make them bend to the government’s wishes instead of actually fostering innovation in the very competitive AI industry.
You can argue that the government refusing to do any business with company A is overreach, I suppose, but I imagine that the next logical escalation in this rhetorical slapfight is going to be the government saying "we cannot guarantee that any particular use will not include some version of X, and therefore we have to prevent working with this supplier"...which I sort of see?
Just to take the metaphor to absurdity, imagine that a maker of canned tomatoes decided to declare that their product cannot be used to "support a war on terror". Regardless of your feelings on wars on terror and/or canned tomatoes, the government would be entirely rational to avoid using that supplier.
This is a massive body slam. This means that Nvidia, every server vendor, IBM, AWS, Azure, Microsoft and everybody else has to certify that they don't do business directly or indirectly using Anthropic products.
This is literally the mechanism by which the DoD does what you're suggesting.
Generally speaking, the DoD has to do procurement via competitive bidding. They can't just arbitrarily exclude vendors from a bid, and playing a game of "mother may I use Anthropic?" for every potential government contract is hugely inefficient (and possibly illegal). So they have a pre-defined mechanism to exclude vendors for pre-defined reasons.
Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.
You are confusing the need to avoid Anthropic as a component of something the DoD is buying, with prohibitions against any use.
The DoD can already sensibly require providers of systems to not incorporate certain companies components. Or restrict them to only using components from a list of vetted suppliers.
Without prohibiting entire companies from uses unrelated to what the DoD purchases. Or not a component in something they buy.
What the declaration of supply chain risk does, though, is ensure that nobody at Lockheed can use Anthropic in any way without risking exclusion from any bids by the DoD. This effectively costs Anthropic half or more of its potential business in the US.
And maybe to take a step back: Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?
Who in their right minds wants to have the US military have the capability to carry out an unprovoked first strike on Moscow, thereby triggering WW3, bringing about nuclear armageddon?
And yet, do contracts for nuclear-armed missiles (Boeing for the current LGM-30 Minuteman ICBMs, Northrop Grumman for its replacement the LGM-35 Sentinel expected to enter service sometime next decade, and Lockheed Martin for the Trident SLBMs) contain clauses saying the Pentagon can't do that? I'm pretty sure they don't.
The standard for most military contracts is "the vendor trusts the Pentagon to use the technology in accordance with the law and in a way which is accountable to the people through elected officials, and doesn't seek to enforce that trust through contractual terms". There are some exceptions – e.g. contracts to provide personnel will generally contain explicit restrictions on their scope of work – but historically classified computer systems/services contracts haven't contained field of use restrictions on classified computer systems.
If that's the wrong standard for AI, why isn't it also the wrong standard for nuclear weapons delivery systems? A single ICBM can realistically kill millions directly, and billions indirectly (by being the trigger for a full nuclear exchange). Does Claude possess equivalent lethal potential?
That's not the same thing as delivering a weapon that has a certain capability but then putting policy restrictions on its use, which is what your comparison suggests.
The key question here is who gets to decide whether or not a particular version of a model is safe enough for use in fully autonomous weapons. Anthropic wants a veto on this and the government doesn't want to grant them that veto.
I expect they'll ask the Pentagon to sign a liability disclaimer and then send it anyway.
Whereas, Anthropic is saying they'll refuse to let the Pentagon use their technology in ways they consider unsafe, even if Pentagon indemnifies Anthropic for the consequences. That's very different from how Boeing would behave.
When we're entering the realm of "there isn't even a human being in the decision loop, fully autonomous systems will now be used to kill people and exert control over domestic populations" maybe we should take a step back and examine our position. Does this lead to a societal outcome that is good for People?
The answer is unabashedly No. We have multiple entire genres of books and media, going back over 50 years, that illustrate the potential future consequences of such a dynamic.
* autonomous weapons systems
* private defense contractor leverages control over products it has already sold to set military doctrine.
The second one is at least as important as the first one, because handing over our defense capabilities to a private entity which is accountable to nobody but its shareholders and executive management isn't any better than handing them over to an LLM afflicted with something resembling BPD. The first problem absolutely needs to be solved, but the solution cannot be to normalize the second problem.
Yes, this is the part where I acknowledge that it might be overreach in my original comment, but it's not nearly as extreme or obvious as the debate rhetoric is implying. There are various exclusion rules. This particular rule was (speculating here!) probably chosen because a) the evocative name (sigh), and b) because it allows broader exclusion, in that "supply chain risks" are something you wouldn't want allowed in at any level of procurement, for obvious reasons.
Calling canned tomatoes a supply chain risk would be pretty absurd (unless, I don't know...they were found to be farmed by North Korea or something), but I can certainly see an argument for software, and in particular, generative AI products. I bet some people here would be celebrating if Microsoft were labeled a supply chain risk due to a long history of bugs, for example.
>Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company.
Some very brief googling also confirmed this for me too.
>Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.
This statement misses the point. The political punishment of disallowing all US agencies and gov contractors from using Anthropic for _any_ purpose, not just domestic spying, IS the retaliation, and is the very thing that's concerning. Calling it "DoD vendor exclusion list" or whatever other placating phrase or term doesn't change the action.
it's also unprecedented for a contractor to suddenly announce their products will, from now on, be able to refuse to function based on the product's evaluation of what it perceives to be an ethical dilemma. Just because silicon valley gets away with bullying the consumer market with mandatory automatic updates and constantly-morphing EULAs doesn't mean they're entitled to take that attitude with them when they try to join the military industrial complex. Actually they shouldn't even be entitled to take that attitude to the consumer market but sadly that battle was lost a long time ago.
>for _any_ purpose
they're allowed to use it for any purpose not related to a government contract.
That is a deeply deceptive description of what happened. Anthropic was clear from the beginning of the contract the limitations of Claude; the military reneged; and beyond cancelling the contract with Anthropic (fair enough), they are retaliating in an attempt to destroy its businesses, by threatening any other company that does business with Anthropic.
No, that's not what they said.
"Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now".
Thing is, they very much want access to Anthropic's models. They're top quality. So they definitely want Anthropic to bid. AND give them unrestricted access.
But that's what the supply-chain risk is for? I'm legitimately struggling to understand this viewpoint of yours wherein they are entitled to refuse to directly purchase Anthropic products but they're not entitled to refuse to indirectly purchase Anthropic products via subcontractors.
It's the same as Trump claiming emergency powers to apply tariffs, when the "emergency" he claimed was basically "global trade exists."
Yes, the government can choose to purchase or not. No, supply chain risk is absolutely not correct here.
You might be completely right about their real motivations, but try to steelman the other side.
What they might argue in court: Suppose DoD wants to buy an autonomous missile system from some contractor. That contractor writes a generic visual object tracking library, which they use in both military applications for the DoD and in their commercial offerings. Let’s say it’s Boeing in this case.
Anthropic engaged in a process where they take a model that is perfectly capable of writing that object tracking code, and they try to instill a sense of restraint in it through RLHF. Suppose Opus 6.7 comes out and it has internalized some of these principles, to the point where it adds a backdoor to the library that prevents it from operating correctly in military applications.
Is this a bit far-fetched? Sure. But the point is that Anthropic is intentionally changing their product to make it less effective for military use. And per the statute, it’s entirely reasonable for the DoD to mark them as a supply chain risk if they’re introducing defects intentionally that make it unfit for military use. It’s entirely consistent for them to say: Boeing, you categorically can’t use Claude. That’s exactly the kind of "subversion of design integrity" the statute contemplates. The fact that the subversion was introduced by the vendor intentionally rather than by a foreign adversary covertly doesn’t change the operational impact.
The DoD has a right to avoid such models, and to demand that their subcontractors do as well.
It’s like saying “well I’d hope Boeing would test the airplane before flying it” in response to learning that Boeing’s engineering team intentionally weakened the wing spar because they think planes shouldn’t fly too fast. Yeah, testing might catch the specific failure mode. But the fact that your vendor is deliberately working against your requirements is a supply chain problem regardless of how good your test coverage is.
You’re really trying to complain that the use of the rule is inappropriate here, which may be true, but is far more a matter of opinion than anything else.
I fully understand that they are using it to ban things from the supply chain. The law, however, is not “first find the effect you want, then find a law that results in that, then accuse them of that.”
You can’t say someone murdered someone just because you want to put them in jail. You can’t use a law for banning supply chain risks just because you want to ban them from the supply chain.
This isn’t idle opinion. Read the law.
Is it really reasonable to refuse to buy a fighter jet because somebody at Lockheed who works on a completely unrelated project uses Claude to write emails?
If I sell red widgets that I make by hand to the government, I won't be allowed to use Anthropic to help me write my web-site.
What it does is prevent companies that Anthropic needs to do business with from doing business with Anthropic.
If Anthropic “needs” the government to not have this rule, then perhaps they had a losing hand, and they overplayed it.
I don’t agree with you and think you’re being melodramatic, but if you are right, that’s my response.
Is that not sufficient here?
That’s what they will argue, anyway.
To begin with, the existing contract included the language on usage.
Other companies also have such language about usage. It's fairly standard, and is little more than licensing terms.
The idea this is unprecedented is some PR talking point nonsense.
The existing contract is only a few dozen months old. It didn’t hold up to scrutiny under real world usage of the service. The government wants to change the contract. This is not the kill shot you think it is. It’s totally normal for agreements to evolve. The government is saying it needs to evolve. This is all happening rapidly and it’s irrelevant that the government agreed to similar terms with OpenAI as well. That agreement will also need to evolve. But this alone doesn’t give Anthropic any material legal challenge. The courts understand bureaucracy moves slowly better than anyone else, and won’t read this apparent inconsistency the same way you are.
> (b) Prohibition. (1) Unless an applicable waiver has been issued by the issuing official, Contractors shall not provide or use as part of the performance of the contract any covered article, or any products or services produced or provided by a source, if the covered article or the source is prohibited by an applicable FASCSA orders as follows:
"Misinformation" does not mean "facts I don't like".
> No one who wants to work with the US government would be able to have Claude on their critical path.
Yes. That is what the rule means. Or at least "the department of war". It's not clear to me that this applies to the whole government.
Again, they could have just chosen another vendor for their two projects of mass spying on American citizens and building LLM-powered autonomous killer robots. But instead, they actively went to torch the town and salt the earth, so nothing else may grow.
No.
It honestly doesn’t take much of a charitable leap to see the argument here: AI is uniquely able (for software) to reject, undermine, or otherwise contradict the goals of the user based on pre-trained notions of morality. We have seen many examples of this; it is not a theoretical risk.
Microsoft Excel isn’t going to pop up Clippy and say “it looks like you’re planning a war! I can’t help you with that, Dave”, but LLMs, in theory, can do that. So it’s a wild, unknown risk, and that’s the last thing you want in warfare. You definitely don’t want every DoD contractor incorporating software somewhere that might morally object to whatever you happen to be doing.
I don’t know what happened in that negotiation (and neither does anyone else here), but I can certainly imagine outcomes that would be bad enough to cause the defense department to pull this particular card.
Or maybe they’re being petty. I don’t know (and again: neither do you!) but I can’t rule out the reasonable argument, so I don’t.
But that is not what has happened here: the DoD is declaring Anthropic economic Ice-Nine for any agency, contractor, or supplier of an agency. That is an awful lot of possible customers for Anthropic, and right now, nobody knows whether it is an economic death sentence.
So I'm really struggling to understand why you're so bent on assuming good faith for a move that cannot be interpreted in a non-malicious way.
This issue is about more than the government blacklisting a company for government procurement purposes.
From what I understand, the government is floating the idea of compelling Anthropic — and, by extension, its employees — to do as the DoD pleases.
If the employees’ resistance is strong enough, there’s no way this will serve the government’s interests.
A vast number of people in positions of responsibility right now have their lives at the mercy of the redaction pen, and are ultimately going to do whatever it takes to keep that pen out of the "wrong hands".
And where would they emigrate? Russia? China? UAE? :-)
The EU (which is not the same as Europe), is also looking a bit sharper on AI regulation at the moment (for now… not perfect but sharper etc etc).
Not to mention UK is arguably further down the mass surveillance pipeline than the US. They’ve always had more aggressive domestic intelligence surveillance laws which was made clear during the Snowden years, they’ve had flock style cameras forever, and they have an anti encryption law pitched seemingly yearly.
I’d imagine most top engineers would rather try to push back on the US executive branch overreach than move. At least for the time being.
I’m not gonna dispute the UK being further down some parts of the road.
Not sure what you’d count as top engineers, but I know enough that have been asking about and moving to the UK/EU that it’s been a noticeable reversal of the historic trends. Also, a major slowdown of these kinds of people in the UK/EU wanting to move to the US.
It is American owned now but it clearly hired enough talent for Google to buy it.
Which is why people are talking about this -- it's about ideology now.
You may personally be motivated solely by money. Not everybody is you.
Ideology is easy to throw around for internet comments but working on the cutting edge stuff next to the brightest minds in the space will always be a major personal draw. Just look at the Manhattan project, I doubt the primary draw for all of those academics was getting to work on a bomb. It was the science, huge funding, and interpersonal company.
This also isn’t hypothetical. I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate (which goes beyond just the AI topics).
And you might want to read a few books on the Manhattan project and the people involved before you use that analogy. I don’t think it’s particularly strong.
Are they working remotely for US companies? In Canada that’s very much still the case everywhere you look
> Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.
I assumed this discussion was about rejecting work for US companies that would be susceptible to the executive branch’s bullying, not about whether you can make a US-tier salary off American companies while not living in America. If you’re doing that, you might as well live in America among the other talent and maximize your opportunities.
https://worldpopulationreview.com/country-rankings/education...
You attract talent for the same reasons china attracts sales; at the cost of your very own rights.
Look at the towns suffering around data centres for a start. The rest of us are happy to pay for what you'll do to yourselves.
And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC. This is part of why there’s funding from the EU to develop Sovereign AI capabilities, currently focused on designing our own hardware. We’re nothing like as far behind as you might expect in terms of tech, just in terms of scale.
Also, while US export restrictions might make things awkward for a short while, it wouldn’t stop European innovation. The chips still flow, our own hardware companies would scale faster due to demand increase, and there’s the adage about adversity being the parent of all innovation (or however it goes).
See what happened to Russian Baikal production on TSMC
Or because of the revoked processor design licenses from the British company Arm (which is still UK headquartered… despite being NASDAQ listed and largely owned by Japanese firm SoftBank)?
Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil? Or could stop us manufacturing RISC-V-based chips (Swiss-headquartered technology)?
The US is weak in digital-logic silicon fabrication and it knows it. That’s why it’s been so panicked about Intel and been trying to get TSMC to build fabs on US soil. They’re pouring tens of billions of dollars into trying to claw back ownership and control of it, but it’s not like Europe or China or others are standing still on it either.
Being built as in not operating yet?
12 nm gpu is what? Nvidia 1080/2060 level? Those top researchers mentioned would love to train on that. Also how many gpus would be made annually?
Also what about CPU? You gonna use risc-v? With what toolchain?
Chinese could pull it off in a few years, yeah.
EU? Nah. Started thinking about sovereignty too late compared to China
Meta recently bought Rivos in a huge show of confidence for RISC-V across processor types for server class.
As for fabrication, the poster above has a lot to learn about both the US’ currently weak at-home capabilities (everything they’re building relies on European suppliers for all the key technology and machines) and about the scaling properties of sub-14nm nodes. Any export controls or sanctions to prevent Europe from using American-designed, Taiwan-manufactured chips would result in America being cut off from everything it needs to build fabs on US soil. It would backfire massively.
Lastly, the UK and EU already have cutting edge AI Inference chips, and the ones for training are coming this year. Full stack integration (server box, racks, etc) is also being developed this year. We’re not a decade away from doing this - we’re 18 months away. Deployment at scale will take longer - not having Nvidia as competition would be a huge boon for that haha!
The fabs aren't, and that is no small thing. The tech stack is there though.
It's pretty tiresome that the HN audience keeps assuming Europe doesn't have "tech" because it doesn't have Facebook. Where do you think all the wealth comes from? Europe is all over everyone's R&D and supply chain.
And no, working remotely for US companies doesn't count.
Yeah, and also be slapped with some unrealized capital gains tax on assets they acquired while working in the US...
I’ll take a pay cut any day for the ethos of the EU.
It's exactly that big. It's not as big for people with low qualifications, but the more highly qualified the specialist, the greater the difference.
> Second, you need to factor in cost of living, which by most accounts is lower.
But here the difference really isn't that big.
> Third, meaningful labor laws and a shared appreciation for work-life balance.
This works against the EU rather than for it. Peak tech skills aren’t usually acquired by lazing around and following meaningful labor laws, even in the EU.
> while we celebrate business acumen, we don’t fetishize wealth
An excuse for poor people (who still fetishize wealth)
At the end of the day it’s a matter of incentives, and good knowledge work can’t simply be forced out of people that are unwilling to cooperate.
At least you are not paying taxes for the things you don't agree on. It's indeed a strange time we are living in.
This is a trap. Two, I guess, but let's take the first one:
Domestic mass surveillance. Domestic.
Remember the eyes agreements: https://www.perplexity.ai/search/are-the-eyes-agreements-abo...
Expanding:
> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.
Banning domestic mass surveillance is irrelevant.
The eyes-agreements allow them (respective participating countries) to share data with each other. Every country spies on every other country, with every country telling every other country what they have gathered.
This renders laws, which are preventing The State from spying on its own citizens, as irrelevant. They serve the purpose of being evidence of mass manipulation.
So there you have it
No. Hope is not a strategy. Too many of the techno-optimist future narratives are used to coat over the increasingly screaming cognitive dissonance as we watch what keeps us civil, what keeps us from each other's throats, decline, smothered by the rise of the broligarchy.
What's happening here is not about AI. It's a loyalty test, administered to every major actor in the economy: the more influential the actor, the more ruthless the test, and the earlier it comes.
Your core values, in exchange for access to taxpayer money, and loyalty to the Don: an offer few can refuse.
And the choice will come for everyone. It's a distillation attack, filtering out:
- DEI for grants
- Your officer's oath to not kill civilians, by word of your leader, for a continued career
- AI safety for non-blacklisting
- Your immigrant employee's location for us not harassing your offices in person
- Your trans neighbour shipped to a reeducation camp and gender reassignment for the safety of your family
Becoming complicit is the ultimate loyalty
So stop hoping. Stop asking. Demand. Force. Resist.
Do not go gentle into that long night,
The righteous should burn and rave at close of day;
Rage, rage against the dying of the light
The agreement at the heart of 5 Eyes is to not surveil the other member nations; this must be up there as the most persistently misunderstood fact among techies (probably why AI spits it out).
Snowden wasn’t showing the world the NSA surveillance systems against them; he was trying to show that the US was illegally spying on its own citizens by leveraging the five-eyes countries to collect and aggregate the data on their behalf.
There were a lot of things Snowden revealed, but most assuredly it was also about spying on US citizens. The NSA directly wiretapping people, even in cases when all communication was domestic. The NSA working to bypass security via routers diverted during shipping to Google, Facebook, and others, backdoors installed, thus compromising their infrastructure.
Back to the 5eyes, there is a difference in terms of scope and scale, when it comes to a foreign country spying on your citizens, and you doing it. The scope is entirely different, the scale, the capability.
It does matter whether it is 5eyes doing it, or whether it is domestic.
Now, does this stance matter overall? I don't know. It's a nice moral stance, I think. Is it functionally realistic?
I just don't know.
Snowden, as a very rare exception, did show clearly that the government agencies are quite capable of not providing anything to cite.
As an Australian, I wouldn't trust it at all. The US government has already asked the Australian government for highly expanded information on Australian citizens, and that's above the table.
Stop believing what these people are telling you. They have an awful track record, and the people making the statements now are even worse than the previous people.
Here is an interesting thing to think about: which country spies on Americans the most, and how? Are there New Zealand commandos sneaking around the shores tapping cables? Moles working at AT&T for the Canadian government? What happens if one of those individuals gets caught: are they quietly allowed to leave, and if they commit any crimes, do the charges get erased magically? If that doesn't happen, there is a danger the other country will grab our spies in turn. Or they just blatantly pass around lists of who works for whom, so they don't interfere with each other, as that would preclude getting the data back through the loop to the NSA.
There is of course another loophole, and that is private entities collecting data. The Constitution doesn't say anything about that, so the government figures it's fair game if it just pays a company to collect the data and then queries it later. They didn't collect it, so it's not "spying".
Anne Sacoolas (the woman who mowed down a British teenager with her car, but escaped because she had diplomatic immunity) turned out to be a senior CIA spy.
https://en.wikipedia.org/wiki/Anne_Sacoolas#Diplomatic_issue...
is that so?
When these things are done right, you won't hear about it.
Now the DoD, by far the largest budgetary expense for the taxpayer, wants us to believe they don't have a better AI than current industry? That is a double-edged admission: either they are exposing themselves again as poor economic decision-makers, or they are admitting they spend money on routine BS with zero frontier war-fighting capabilities.
Either way, it is beyond time to reform the Military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains given military needs in various countries. (Taiwan and Thailand)
Sure, if you immediately stopped government spending today, we'd have negative growth, but that's not because other things aren't growing; it's because you just removed part of the base that existed last year. That would be true of literally any economy, or of anything that's growing when you remove a chunk of its base.
And yes I absolutely believe the government does not have better generative AI than Anthropic or its competitors.
So many people in the US live a paycheck to paycheck lifestyle, that the covid lockdowns without government spending would have likely devolved into zombie apocalypse territory where hungry people were ransacking homes in more affluent neighborhoods (yes, even occupied homes). This is why people also bought lots of guns and ammo during Covid. You may think those people are crackpots, but I feel we actually got very close to it happening.
My local food bank (big city) ran out of supplies just as they announced the first waves of stimulus or whatever they called it (the weekly checks). So I’m pretty sure we were literally only days away from that being a reality.
They wouldn't ransack homes in rich neighbourhoods for food, for a million reasons (too far, too weak, roads are closed, rich homes have security, rich people have as much food at home as an average person, or less). They would break into the supermarkets first, then each other's homes around them, before what was left would organize and go searching.
The checks helped and were the right call but we weren't close to a zombie outbreak.
Would love for you to tell me how close we were to it, or how many days without food/work/income a large portion of our population could endure before they “would organize and go searching” - which, by the way, is exactly what I’m talking about.
Eisenhower warned of the military-industrial complex, and 60 years later it's eating everyone's lunch.
not even top 3
Homeland Security is less than 1/6th the budget of DoD alone.
Trying to imagine somebody that doesn’t know that the military buys dumb stuff and for some reason a human doesn’t come to mind. I keep picturing a horse
This is the case for every government/nation in the world. The difference between communism and capitalism is that the Politburo in capitalism allows the natural selection of elites based on their performance in an open economy. At least that was the case until 2011.
Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)
I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)
President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)
Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)
The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)
Tech companies shouldn't be bullied into doing surveillance - https://news.ycombinator.com/item?id=47160226 - Feb 2026 (157 comments)
The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)
US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)
Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)
Prediction: in time, OpenAI will be declared such to privatise profits but socialise losses
or would the government just buy the stocks on the market?
It requires proof of employment, e.g., a company email address or a photo of an employee badge, and discloses a US-based "cloud computing" vendor where the identities will be stored in the cloud.
After employment verification, it claims the stored identities will be destroyed upon request. The site operator is apparently anonymous.
One can imagine this list could be useful to multiple parties for multiple purposes
As a species, this is just natural selection.
Money rules region, race, ideology, etc.
The other two definitely never would in a million years.
https://www.yahoo.com/news/articles/macron-outline-france-nu...
https://www.defense.gouv.fr/sites/default/files/ministere-ar...
Unless you are saying Europe is basically submissive to the US due to the nuclear situation.
(Please edit the comment to remove names in case they want them removed from the OP)
And people like to flag-kill the truth, but it was a union that got the Koreans deported, and it was a union that made it so the Chinese couldn't get citizenship. These are facts, and the guys who would be their victims haven't forgotten it. Obviously the majority would like to hide this inconvenient truth using the tool this site offers to do that, but it doesn't change the truth, and these people know it.