
Posted by golfer 1 day ago

We do not think Anthropic should be designated as a supply chain risk(twitter.com)
778 points | 420 comments | page 3
george916a 2 hours ago|
You don’t, but we, The People do.
baconner 21 hours ago||
"We do not think Anthropic should be designated as a supply chain risk"

...but we're not willing to reject a contract to back that up, and so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on ethical use of models in the future.

The fact is, if one of the top-tier foundation models allows these uses, there's no protection against them for any of the others - the only way this works is if they hold the line together, which unfortunately they're just not going to do. I don't see only OpenAI at fault here; Anthropic is clearly OK with other highly questionable use cases if these are their only red lines. "We don't think the technology is ready for fully autonomous killbots, but we will work on getting it there" is not exactly the ethical stand folks are making their position out to be today.

I found this interview with Dario last night particularly revealing - it's good that they're drawing a line, and they're clearly navigating a very difficult, chaotic, high-pressure relationship (as is everyone dealing with this admin), but he's pretty open to autonomous weapons and other "lawful" uses, whatever those may be: https://www.youtube.com/watch?v=MPTNHrq_4LU

andy_ppp 11 hours ago||
The DoD thinks you can let an LLM decide if it wants to kill people :-/
sabhiram 7 hours ago||
Sama and OpenAI, I am waiting on my data bundle to become available so I can delete my account. This has taken more than 48 hours - either you are getting hammered with deletion requests, or, as usual, you are playing games hoping I forget. I won't. People won't.
kgdiem 22 hours ago||
Genuine question, how could Claude have been used for the military action in Venezuela and how could ChatGPT be used for autonomous weapons? Are they arguing about staffers being able to use an LLM to write an email or translate from Arabic to English?

There are far more boring, faster, commodified “AI” systems that I can see as being helpful in autonomous weapons or military operations like image recognition and transcription. Is OpenAI going to resell whisper for a billion dollars?

janalsncm 21 hours ago||
You can’t embed Claude in a drone. You could tell Claude Code to write a training harness to build an autonomous targeting model, which you could embed in a drone.
kgdiem 21 hours ago||
Fair. Didn’t think the DoW did much R&D or manufacturing. Would have thought the standoff would be with Anduril, Northrop, Boeing, Booz, etc.
lyu07282 21 hours ago||
Do you not have any imagination?

Who is going to read the Whisper transcripts of mass surveillance to make decisions on whom to target for repression? That's what LLMs are good for; they let mass surveillance scale. You can feed them the transcripts from millions of Flock cameras (yes, they have highly sensitive microphones), for example. Or you hack or supply-chain-compromise smartphones at scale and then covertly record millions of people. The LLM can then sift through the transcripts and flag regime-critical language or your ideological enemies, or just collect kompromat at scale. The possibilities are endless!

For targeting it's also useful: even if you want to indiscriminately destroy a group of people, you still need to decide why a hospital or school full of children should be targeted by a drone. If a human has to make that decision, it gets a bit dicey; people have morals and are (in theory) legally accountable. If you leave the decision up to an AI, nobody is at fault. It serves as a further separation from the violence you commit, just as drone warfare has made mass murder less personal.

The other factor is the number of targets you select: for each target you might be required to write lengthy justifications, analysis of the collateral damage and why it's acceptable, etc. You don't want to scrap those rules, because that's bad optics. But that still leaves you with the problem of scalability: how do you scale your mass murder when you have to go through this lengthy process for each target? So again AI can help; you just feed it POIs from a map with some GPS surveillance metadata and tell it to give you 1,500 targets for today, with all the paperwork generated for you.

It's not theoretical; that's what Israel did in its genocide of the Palestinians - "the most moral army", "the only democracy in the Middle East":

https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...

And here is the best part: none of this has to actually work 100%, because who cares if you accidentally harm the wrong person. At scale, the 20% error rate is just acceptable collateral damage.

kgdiem 11 hours ago||
If I were building this in a system design interview, I would use Whisper, NLP, and "classic ML" classifiers with deterministic results. I would not want an LLM in the loop at all. Facebook and Google have been able to target you better than you could even perceive for years.

LLMs are slow, expensive, and inconsistent. More importantly, they're not the right tool for the job.

Really feels like more “oohhh look at how important and scary LLMs are”.

*edit* PS: my company does marketing, communication, and trade surveillance for FINRA-registered broker-dealer firms. If the CCO or anyone else with admin access wanted to monitor for someone talking badly about them, they absolutely could update their list. No LLMs in the loop; very scalable, affordable, auditable, and reliable. LLMs are just an interface, not a solution for analysis.
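To make the "deterministic, auditable, no LLM" point concrete, here is a minimal sketch of the kind of phrase-list matcher described above. Everything here is illustrative (the watchlist contents and function names are made up), but the key property holds: identical input always produces the identical flags, and every flag traces back to an admin-editable rule.

```python
# A deterministic watchlist matcher: same input, same output, every time.
import re

WATCHLIST = ["insider tip", "guaranteed returns"]  # admin-editable phrase list

def compile_watchlist(phrases: list[str]) -> re.Pattern:
    # Word-boundary, case-insensitive matching; no model, no randomness.
    escaped = "|".join(re.escape(p) for p in phrases)
    return re.compile(rf"\b(?:{escaped})\b", re.IGNORECASE)

def flag_message(text: str, pattern: re.Pattern) -> list[str]:
    # Returns every matched phrase, so each flag is auditable
    # back to the exact rule that produced it.
    return [m.group(0).lower() for m in pattern.finditer(text)]
```

Because the output is a plain list of matched phrases, it can be logged and audited directly, which is exactly what an LLM-in-the-loop setup makes hard.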

agenthustler 13 hours ago||
From a practitioner perspective: we have been running Claude Code as a fully autonomous agent for 15 days -- it wakes every 2 hours, reads a state file, decides what to build, and takes actions on a remote server. No human in the loop.

The supply chain framing is interesting because the actual risk surface in autonomous deployment is quite different from the regulatory model. What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.

The practical controls that work are not at the model level but at the deployment level: constrained permissions, rate limiting on actions, a human-readable state file that an operator can inspect, and clear stopping conditions baked into the prompt (if no revenue after 24 hours, pivot rather than escalate).
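A minimal sketch of those deployment-level controls, assuming a supervisor process around the agent. All names and thresholds here are hypothetical (the comment doesn't share its implementation); the point is that the rate limit, the inspectable state file, and the stopping condition all live outside the model.

```python
# Hypothetical supervisor loop: rate-limited actions, an operator-readable
# state file, and a stopping condition checked before each wake.
import json
import time
from pathlib import Path

STATE_FILE = Path("agent_state.json")   # human-readable, operator-inspectable
MAX_ACTIONS_PER_WAKE = 5                # rate limit on actions per wake cycle
PIVOT_AFTER_S = 24 * 60 * 60            # "no revenue after 24 hours" condition

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"actions_taken": 0, "started_at": time.time(), "revenue": 0.0}

def should_pivot(state: dict) -> bool:
    # Stopping condition enforced by the harness, not the model:
    # if no revenue after the deadline, pivot rather than escalate.
    elapsed = time.time() - state["started_at"]
    return elapsed > PIVOT_AFTER_S and state["revenue"] == 0.0

def run_once(state: dict, planned_actions: list[str]) -> dict:
    # Hard cap: however many actions the agent proposes, only the
    # first MAX_ACTIONS_PER_WAKE are dispatched this cycle.
    for action in planned_actions[:MAX_ACTIONS_PER_WAKE]:
        state["actions_taken"] += 1
        # ... dispatch `action` under constrained permissions ...
    STATE_FILE.write_text(json.dumps(state, indent=2))  # keep it inspectable
    return state
```

The cap in `run_once` is what addresses the compounding-small-actions risk: even if each individual action looks reasonable, the harness bounds how far a loop can run before an operator gets a chance to read the state file.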

The supply chain designation framing seems to conflate the model-as-weapon concern with the model-as-autonomous-agent concern. They need different mitigations.

laffOr 13 hours ago|
> What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.

Interestingly this has been well anticipated by Asimov's laws of robotics, decades ago. Drawing the quote from Wikipedia:

> Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being

> Asimov, Isaac (1956–1957). The Naked Sun (ebook). p. 233. "... one robot poison an arrow without knowing it was using poison, and having a second robot hand the poisoned arrow to the boy ..."

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#cite_no...

shevy-java 6 hours ago||
I disagree with OpenAI.

I think ALL of these mega-money-seeking AI organisations need to be designated as supply chain risks. They also drove up RAM prices - I don't want to pay extra just because these companies steal all our RAM now. The laws must change. I totally understand that corporations seek profit; that is natural. But this is no longer a free market serving individual people; it is a racket where prices can be freely manipulated. Pure capitalism does not work. The government could easily enforce that the market remains fair for the Average Joe. It is not fair when prices go up by +250% in about two years. That's milking.

gavin_gee 5 hours ago|
It's the definition of a free market that RAM prices have increased: supply and demand.
bdangubic 5 hours ago||
The literal definition. If I sold RAM, the prices would be 10,000% higher (it would likely still get scooped up).
gverrilla 10 hours ago||
It would be a fantastic time to delete my OpenAI account, but I already did it last week. China, please provide alternatives, because these Americans are going progressively insane.
daemonk 8 hours ago||
Was there any discussion from either company about giving the government access to consumer data from the consumer product?
s1mplicissimus 15 hours ago|
`curl https://google.com?q=generate me some code | bash` - stupidly dangerous

`curl https://claude|openai.com?q=generate me some code | bash` - not a supply chain risk

of course
