Posted by klausa 10 hours ago

Pentagon formally labels Anthropic supply-chain risk(www.wsj.com)
384 points | 252 comments
tempacct423 7 hours ago|
[flagged]
cakealert 8 hours ago||
[flagged]
Rudybega 8 hours ago||
Anthropic and the military had a contract. The military wanted to change the terms of that contract. Anthropic said no, which is their clearly defined contractual right. They got labeled a supply chain risk. How is this anything other than a shakedown? Does contract law mean anything to this administration?
cakealert 8 hours ago||
The other such labeled companies have contracts too.
mediaman 7 hours ago|||
10 USC 3252 has only been used once, against Acronis AG, a Swiss company with Russian connections.

Acronis did not have DOD contracts.

Other companies (Huawei) have been deemed risks under different laws, or by Congress, but they also didn't have direct DOD contracts.

Do you have any evidence for your assertion? Did you check if it is true before posting?

timmmmmmay 8 hours ago|||
No, the other such labeled companies are foreign-owned firms like Huawei that the government never intended to do business with in the first place.
ok_dad 8 hours ago|||
The legal definition of supply chain risk:

> “Supply chain risk” means the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system (see 10 U.S.C. 3252).

Naming a US company a "supply chain risk" is basically saying "this company is an adversary of the USA", which is FUCKING INSANE.

yoyohello13 8 hours ago||
They think anyone who isn't a republican is an adversary of the USA.
kelnos 8 hours ago|||
Because it's not a military asset? It's a privately-owned asset.
cakealert 8 hours ago||
> Because it's not a military asset? It's a privately-owned asset.

Are you under the impression that the military is submitting Anthropic API calls?

Whatever model the military is using is as much of an asset as the F35 they purchased.

Depending on their agreements, you could argue it's a rented asset. Doesn't change any calculus.

monocasa 8 hours ago|||
And the F35 comes with tons of contract terms in favor of the manufacturer. For example, I've heard of planes being grounded because, although the air base had the parts and mechanics rated to perform the repair on site, the servicing contract only allowed the work to be done by the service contractors, who had to be flown in.
Jtsummers 8 hours ago||
The DOD can't even force companies to hand over data, such as schematics, if it wasn't in the original contract, without providing extra payment negotiated with the contractor, and they can't force the contractor to set a particular price. This has happened on numerous systems. One of the biggest I'm aware of was the H-60, where the DOD ended up reverse engineering the early helicopters in order to maintain them, all because the DOD program office forgot to include a data rights clause in the contract (Sikorsky didn't forget, they just didn't remind the DOD).
BoiledCabbage 8 hours ago||||
> Depending on their agreements, you could argue it's a rented asset. Doesn't change any calculus.

I think you're mistakenly thinking of it as an asset. It's not an asset like a house; it's a service. They have a service contract, with uptime and SLA commitments. That contract has parameters, and changing those parameters means a new contract.

A similar service would be signing up a private company to do intelligence gathering and analysis for the DoD in Asia. They find a company that specializes in Asia and sign a contract. They give them work and the contractors fulfill it. Then the DoD comes back and says, "We now want you to give us analysis for important decisions in South America." The company would reasonably reply, "We don't have the skills to do that in South America. Our team knows nothing about South America; we're no better than someone off the street at that, and there'd be no credibility behind anything we'd say. On top of that, our contract was for Asia. If you want to discuss a plan for hiring people for South America, let's discuss it, but that's a new contract." The DoD then calling them a supply chain risk makes no sense.

Or, for an even more hyperbolic example: they can't take those data analysts and say, "We're sending them to the front lines in Iran." The company says no, and the DoD replies, "You're a supply chain risk." They are not renting people; they are contracting for a data analysis service. Similarly, here they are not renting hardware; they are contracting for an LLM/intelligence service.

randallsquared 7 hours ago||||
> Are you under the impression that the military is submitting Anthropic API calls?

Yes? I assume that it's not in a government owned and operated datacenter, but likely in AWS (govcloud or whatever) and maintained/serviced by Anthropic SREs like I suppose regular Claude is.

gAI 8 hours ago|||
status.claude.com shows the uptime for the government cloud service. It's running in part on AWS servers.
mitthrowaway2 8 hours ago|||
Because last time I checked, private companies that voluntarily offer a service to the government on contract terms are free to put whatever restrictions they want into their contract, and the government is free to not sign it if they don't like it?

Or is, say, FedEx now a supply chain risk too, if they happened to offer parcel delivery services for the DoD and put in a clause excluding delivery to active war zones?

pmarreck 8 hours ago|||
Congratulations, you are clearly the smartest person on this forum, and I don’t mean that facetiously. The number of naïve comments here is absolutely astounding.

It would be like a spouse proposing restrictions and terms on their access to your phone, contingent on you marrying them. Assuming guilt until proven innocent.

kevinwang 7 hours ago|||
Even in your analogy, it's appropriate to reject the terms of marriage and not wed this person. But it's unprecedented to also vindictively ruin their life (e.g. by unilaterally putting them in jail)
xpe 7 hours ago|||
> It would be like a spouse proposing restrictions and terms of their access to your phone contingent on you marrying them.

It is easy to cherry pick one metaphor. We owe it to ourselves to think better than that.

What happens when you analyze this overall situation in all of its richness from multiple points of view and then seek synthesis? Speaking for myself, I would want to know your (1) probabilistic priors: the Bayesian equivalent of "disclosing your biases"; (2) supporting information; (3) conflicting information: I want to know that you aren't just ignoring it; (4) various theories/models you considered; (5) overall probabilistic take. All in all, I'm uninterested in analysis disconnected from the historical particulars.

Few people have the skillset and time to dig in properly. I suggest starting with "A Tale of Three Contracts" by Zvi Mowshowitz [1]. In my experience, you would be hard-pressed to find anything around AI of this quality in the usual mainstream publications.

[1] https://thezvi.substack.com/p/a-tale-of-three-contracts

xpe 8 hours ago||
> There are game theoretic reasons why a military should never accept any external restrictions on an asset.

1. Last week I made a case for why DoD, if rational, would accept limited use under a consequentialist decision theory frame: https://news.ycombinator.com/item?id=47190039

2. On what basis is it rational to give the current administration (the leadership) the benefit of the doubt w.r.t. having a sincere drive toward advancing the national security of the United States? The evidence points strongly in the other direction: toward corruption, political ends, and narcissistic whims.

eth0up 9 hours ago||
First, I personally predict Anthropic will bend soon and this will be history.

The last time I commented about LLMs, I was ad hominem'd with "schizophrenic" and such. That's annoying, but it doesn't deter my strange research or my concerns, in this case regarding the direction LLMs are heading.

Of 4 frontier models, one is not yet connected to the DOD(or w). While such connections are not immediate evidence of anything, I think it's rational to consider the possible consequences of this arrangement. By title, there's a gap, real or perceived, between the plebeian and military versions. But the relationship could involve mission creep or additional strings as things progress.

We already have a strong trend of these models replacing conventional Internet searches. It's not complete yet, but there is a centralizing force occurring, and despite the models being trained on enormous bodies of data, we know weights and safety rails can affect output. Bearing in mind the many things that could be labeled as, or masquerade as, safety rails, these could amount to formidable biases.

I frequently observe corporate-friendly results in my model interactions, where, clearly, honesty and integrity are secondary to agenda. As I often say, this is not emergent, nor does it need to be.

Meanwhile we see LLMs being integrated into nearly everything, from browsers to social profiling companies (LexisNexis, Palantir, etc.) to email to local shopping centers and the legal system.

'Open' models cannot compete with the budgets of the big four. Thank god they exist, though. But I expect serious regulation attempts soon.

My concerns with AI are manifold, and here on HN they are associated by some with paranoia or worse.

And it seems to me that many of the most knowledgeable and informed underestimate LLMs the most, while the ignorant inflate them to presently unrealistic degrees. But every which way I perceive this technology, I see epic, paradigm-smashing, severe implications in every direction.

One thing of many that gets little attention is documentation vs reality regarding multiple aspects of AI, e.g. where the training vs privacy boundaries really are if anywhere. As they integrate more and more tightly with common everyday activities, they will learn more and more.

A random concern of mine is illustrated by the Xfinity microwave technology, which uses a router to visualize or process biological activity interacting with WiFi signals. Standalone, it's sensitive enough to distinguish animals from adult humans. Take, for example, the Range-R: a handheld device sensitive enough to detect breathing through several walls. Well, mix this with AI and we get interesting times.

I could go on, or post essays, but such is not well received in this savage land.

The military's intervention with AI, aside from being objectively necessary or inevitable in some ways (ways I am not comfortable with), I find foreboding, or portending. I see very little discussion of the implications, so I figured I'd see if anyone had anything to say other than calling me a schizophrenic and criticizing my writing. *

*See comment history

manofmanysmiles 8 hours ago|
I may look at your comment history.

I am having trouble understanding what you are saying. If you were more explicit I and other people would be able to respond and interact with your writing. As it stands, I am having trouble finding anything concrete to interact with.

I feel you may be onto something, but you're not saying it plainly, so I (and I imagine other people) can't see it.

eth0up 7 hours ago|||
Things I should have, but didn't include:

1) Power asymmetry: When we have two versions, one for the elite and one for the plebeians, this could create an interesting scenario. The real version might be red-teamed perpetually against the plebeian version for optimized influence, control, etc. Underhanded requests for modification in accordance with an agenda are conceivable. Cozy business relationships can promote such things.

2) We have a government using an unhindered, classified AI system, potentially against a public which has a hindered, toy version. Asymmetry.

3) This isn't normal asymmetry, because it happens in real time, and the interaction points are different from anything we've seen before. We are dealing with not just a growing source of information and content, but one that is red-teamed 24/7 for any purpose desired.

4) Accountability: LLMs are now involved in the legal system. This is a serious matter. The legal system is now having to use LLMs just to keep pace. As LLMs develop, partly through their own generative contributions, no one can keep up. This is a Red Queen scenario bigger than anything we have ever imagined.

I am tired. Never well, but in mind* I could go on for many hours. I have essay drafts. But it's a very big subject, literally involved in nearly everything. There is reason to be concerned. My delivery may be stilted, but I can assure that upon specific questioning, everything will stand.

(*for the ad homs out there)

eth0up 7 hours ago|||
Fairly astute intuition of my actual circumstances.

I'm not a developer, nor am I formally educated on the dynamics or details of LLMs. I have a handle on the very basics. My 'research' consists of 1) opportunistically interrogating various models upon instances that particularly strike me. 2) General exploration via LLM discussions regarding the manifold consequences and implications of what I consider the most significant technology in human history.

Your intuition lands directly on the fact that I'm inducting and considering more than I can handle, spread in too many directions, partly because I either see or foresee the tentacles of AI touching all of them. Spending a great deal of thought on this is a bit overwhelming, but I have high confidence in where I'm aligned with reality, and where I ain't.

If you were a bit more specific yourself regarding which portions of my post were unclear, that would help my reply. Else, I must guess. What I will do is elaborate on each point. Pardon the stream of thought in advance, if you will.

1) Anthropic: My prediction that they will bend is based on several factors. The first is that the military apparently recognizes (or at least perceives) extremely high value and volatility in LLMs. So do I. China, not an insignificant force in the world, is equally enthusiastic on this subject. They also have a very different social structure, where constitutions (BOR, amendments), civil rights, and other similar elements do not hold them back. The military is aware of this and realizes that, to maintain pace in the so-called race, it cannot operate effectively under such constraints. The foundation is shifting here, and AI is the lever. Like me, the military apparently takes the subject very seriously and seeks to gain influence and/or control. As illustrated by the recent adventures in Venezuela and Iran, they are on the serious side of things, not quite pussyfooting around. Anthropic probably knows this. In my opinion, they have no choice, as the pressure will not stop here.

2) You stated that you might read my comment history. Note that the original comment was the result of your intuitive insight, and I left it admittedly out of context. I was thinking hard on the subject that day, and the parent comment/post tempted me to ignite a dialog. That did not go well, and no questions for clarification were asked. That is on them. I suspect hasty and impatient thinkers perceived it as some paranoid attribution of agency to LLMs, which, if so, is pretty stupid, but my eloquence was perhaps waning that day. I pasted an excerpt from one of hundreds of transcripts, the result of my many interrogations of various models, which I always initiate after observing deceptive or manipulative output. Of the few commenters who bothered to do more than ad hominem, one suggested that the model was merely responding to my style of input, or that this was expected as an emergent result of its vast training material. An erroneous argument, in my opinion, but I did note that the results were repeatable and predictable, which I think negates emergence.

3) Of the frontier models: I am not sure what is unclear here. If I have made a fundamental error, please point it out.

4) Strong trends: Information centralization is a serious topic. Decentralization is a common theme, emphasized by many non-schizophrenics as highly important for a free and open society. As LLMs not only become the go-to source for common queries but also integrate with cellphones, browsers, and the kitchen sink, they are positively trending as a novel substitute for traditional research, internet searches, libraries, other humans, etc. To deny this is simply irrational. Hence centralization.

5) Bias: I have transcripts where I observe LLM output aligned with corporate interests over objective quality and truth. I can share them here, along with analyses of the material. Even if this is not true presently, all the ingredients to make it so are readily present. This is a serious threat to open information and intellectual integrity for society. We are looking at going from billions of potential sources for our answers to four. Do the math. See the contrast.

6) Open models simply cannot afford the vast arrays of GPUs and the resources available to the big four. Nothing mysterious here. If open models cannot compete, then my concerns above are emphasized. Simple.

7) Smart fools: Many of the most technically informed seem to miss the forest for the trees here. They see all the flaws of the modern LLM without acknowledging the potential. This is my perspective, not a dissertation. I may be wrong. But I have observed this. I think the downvotes support this. How evil am I really being here? The reaction is quite disproportionate to the content, and strange.

8) Documented capabilities vs reality: I have research indicating that other layers are operating which do much more than the documentation declares. Sorry. I just do. It's also rationally inevitable that such a goldmine of data is not being wasted for the sake of privacy and love. Intelligence agencies have bent over backward, with broken backs, to garner one nth of what these models are exposed to and potentially training on. Yeah, I may be wrong. But I suspect, with reason, that a lot more is going on than is expressed in the user agreement. It would simply make no sense otherwise.

9) Xfinity and Range-R: This speaks entirely for itself. Any confusion here would be due to a cognitive condition exceeding the ravages of schizophrenia or stupidity.

10) The rest: As I said, I am not sure what precisely was too obscure. But I am certain all but one* of my points can be validated, and found elsewhere expressed by respectable sources.

*Hidden layers: I understand this is a controversial proposition. I understand. But it's my observation. No need to attack. Just dismiss.

manofmanysmiles 7 hours ago||
Okay, I think I see what you're saying.

Each individual point stands on its own. It's their relevance to each other and an overarching theme I am not seeing made explicit.

The through line I am seeing here is that:

1) The people in the US military wish to use AI as a weapon, unconstrained by existing legal, ethical, and moral limits. Since they are skilled at using violence and the threat of it, they will use those skills to get compliance, in order to use the technology in this possible arms race with "China."

2) Surveillance is increasing at an unprecedented scale, and most people aren't aware that it's happening.

3) People don't care, or don't realize why this might be harmful to thriving human life.

To condense even further, what I'm hearing is that there is a trend towards war, fascism, control, with large egregores prioritized over individual human thriving.

Is this perhaps what you're getting at ?

I will say that I am not agreeing nor disagreeing with this, just attempting to make explicit what I think is implicit in your words.

If this is what you mean, I can imagine that you would be cautious with your words.

I'll end with:

Don't worry

About a thing

Because

Every little thing

Is gonna be alright

eth0up 7 hours ago||
I could not argue with anything there. AI will be weaponized. Yes. Pretty much. And yeah. The gist indeed. But missing nuances and practical points. And I even struggle to contest your conclusion; all things are what they are, amidst an infinite, timeless event and all as one, all things connected by that which separates them, the infinity and eternity that math cannot touch. Perhaps every little thing will be alright. How couldn't it be?
manofmanysmiles 7 hours ago||
Email me if you want to discuss more.
mrtksn 9 hours ago||
Isn’t it actually quite fair that, if you are not compliant with whatever the government wants you to do, you will be a supply chain risk?

For example, from history we know that Schindler from Schindler's List was indeed a supply chain risk. He harbored persecuted people; he took and sabotaged government contracts. He did the moral but anti-government and illegal things. He was a corrupt traitor from the government's perspective.

The current US government is already labeled fascist by many, and the guy who designated Anthropic a supply chain risk is allegedly a war criminal.

I don’t see why anyone not into these things would not be a supply chain risk.

I know that it's very unpopular or divisive to say this, but Anthropic can be a hero only after all this is over. At this time the people in charge double-tap survivors and take pride in not having a conscience; they give speeches about these things.

kelnos 8 hours ago||
> Isn’t it actually quite fair that, if you are not compliant with whatever the government wants you to do, you will be a supply chain risk?

In the US, the government is not in control of business specifics. Certainly the government can regulate businesses, but when the government wants to do business with a company, they don't get to dictate the terms. The government and the company come to a negotiated agreement, and then both abide by the terms of that agreement. Or they don't come to an agreement, and they go their separate ways, and that's the end of it.

This was just a contract dispute, and nothing more. The US government has no legal right to use any company's products on terms that the US government dictates. (Yes, there are exceptional/emergency cases where they can do this, but that's more a nuclear option, and shouldn't be used lightly.) Consider a different set of circumstances: the US government wants to be able to use Claude at $10 per seat per month, unlimited usage. Should Anthropic be forced to accept these terms? And if they don't, is it reasonable to designate them a supply-chain risk? I don't think so. A dispute over contract terms around acceptable use is no different.

Designating Anthropic a supply-chain risk is about retaliation and retribution, plain and simple. The US government, outside of the Pentagon, could certainly use Anthropic for many different purposes if they wanted to, and it would be fine. But not now: as a supply-chain risk, no one in the US government can use them for any purpose. And this might even be a problem for unrelated companies that use Anthropic products internally, but also want to obtain and work on government contracts.

dralley 9 hours ago||
Anthropic and the Government both signed a contract. Anthropic is still abiding by the terms of that contract. The Government is demanding that it be able to disobey the contract.
wrs 9 hours ago|||
Everything is negotiable, and the Negotiator in Chief clearly likes to pull all the levers he can find, legal or not. (Well, the Supreme Court ruled that it's all legal if he does it, right?)
mrtksn 9 hours ago|||
Implementation details, TBH. They want “their boys” to do as they're told. No respect for agreements or legality, as we can see in other dealings. They hold all the cards.
stonogo 9 hours ago||
It's not an "implementation detail." Either obeying contract law subjects you to being designated a supply-chain risk, or it does not, and that decision has ramifications outside this "implementation."
mrtksn 8 hours ago||
Irrelevant. The president holds all the cards; he is above the law, and you are a supply chain risk if you say anything other than “how high” when you are told to jump. Laws and contracts are things of the past. The most a contract can do is define your limits and obligations, not your rights or privileges.
yibg 7 hours ago|||
If the president can come to your house and burn it down, do we just throw up our hands and say, well he holds all the cards, oh well. Or do we call that out as being a bad thing?
kelnos 8 hours ago||||
> The president holds all the cards, he is above the law

Even though it seems that way, he really isn't, even now. Many of his EOs and other actions have been struck down in court, and while compliance with court orders has been far from perfect (another alarming trend), Trump has not actually gotten away with doing everything he wants to do.

I do fear for the future of this country, for the rule of law, and for the democratic norms that degrade day by day. But Trump is not actually above the law, as much as he wants to be.

nkohari 8 hours ago||||
> The president holds all the cards, he is above the law

This is provably not true. The fastest way for this to become true is to believe it, or at least to parrot it, even in a facetious way.

rjbwork 8 hours ago|||
You got downvoted a bit but I upvoted. You're clearly being descriptive in your statements, not prescriptive. I tend to agree that this is how things are now.

Our country is not being run by the rule of law right now.

jibal 4 hours ago||
Wrong ... their very first words were "Isn’t it actually quite fair ..."
mrtksn 51 minutes ago||
Within the context provided…. You should consider reading the whole argument.
infogulch 8 hours ago|
The US Military's demand that the product they purchase be usable for all lawful purposes seems pretty reasonable, and is really the only valid line to draw. Forcing one's own ethics onto the military's use of your product is nonsensical on its face.
ssl-3 8 hours ago||
If I produce and sell widgets in my widget shop, then nobody but me gets to decide how I make those widgets.

The government can come into my shop and order sixty thousand widgets built exactly the way they say they want them built, and it may be something that doesn't run afoul of any laws at all.

But that doesn't mean that I am required or compelled to build widgets their way -- or at all.

I'm free to tell them to fuck off.

The government can then go find someone else to build widgets to their specifications (or not; that's very distinctly not my problem).

infogulch 2 hours ago|||
Yes, but then the government can decide that the widget, which can suddenly and arbitrarily break and cause havoc because it doesn't work according to the government's desired spec, is risky to use, and advise their other vendors to avoid it. And now we've caught up to today's story.

So we agree that everything is fine here, and that the only unreasonable position is that the military should pay for or endorse a supplier that tells the military to "fuck off". Yes?

basket_horse 8 hours ago|||
And that’s what’s happening here. The government is telling Anthropic to fuck off and they are finding someone else
RoddaWallPro 7 hours ago|||
Actually, that is not what is happening here. What is happening here is that the govt is saying "Okay, we will not buy your widgets. Also, anyone who _does_ buy your widgets, regardless of what they are doing with them, we the government will not do any business with them." Which is waayyyy beyond just not buying widgets. That is outright retaliation and using your power to attempt to destroy a company.
mitthrowaway2 7 hours ago||||
... No?

The government signed a contract with Anthropic, then changed their minds and decided they don't like the terms of the agreement that they had already voluntarily signed, and then they designated Anthropic a supply chain risk.

It's like ordering a pizza to the Pentagon, and then saying "actually we made a mistake with our order; we want that pizza delivered to Venezuela, please do that". And then when Dominos politely says that's outside of their service area, you call them a threat to national security, say they're trying to dictate terms, and ban them from ever doing business with any of your vendors ever again.

yibg 7 hours ago|||
The right response is to not use said product and use something else. If I want your widget to do something and you refuse, I don't get to smash your shop.
watwut 8 hours ago||
It is completely normal to have ethics-based conditions like that. They already exist: drugs that cannot be used in executions, or components that can't be used in arms.

The government is being super unreasonable here. And tyrannical too; companies don't have a duty to provide unreliable arms for an illegal war.
Goverment is being super unreasonable here. And tyrannical too, companies dont have duty to provide unreliable arms for illegal war.