
Posted by BloondAndDoom 1 day ago

We Will Not Be Divided(notdivided.org)
2537 points | 808 comments
Dansvidania 15 hours ago|
I think the time when engineers could steer the heading of the companies they work for is long gone, sadly.

It’s too little, too late. “Don’t be evil” is not a value anyone is even pretending to uphold.

I’d rather some of these very smart people start developing countermeasures.

hrtk 18 hours ago||
More like “you have been divided” — OpenAI
PostOnce 1 day ago||
My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.

Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.

zugi 1 day ago||
Anthropic does not object to its use for war. In fact Anthropic explicitly allows its semi-autonomous use in war, e.g. for identifying targets. They just won't permit its use for full autonomous war, yet, because they don't believe it's safe enough.
PostOnce 1 day ago|||
Since when has war been waged according to the whim of a corporation?

The tools will be used however the government wants them to be used. The government makes the laws and wages the wars, and the corporation will follow the law whether it wants to or not.

So either you are willing to work on a tool that is not under your control, or you are not.

zugi 6 hours ago||
It's an interesting development because wars haven't traditionally been waged predominantly with software. But soon perhaps they will be.

While the government is accustomed to complying with software licensing rules, it is not accustomed to being limited in warfare, so the two have now come into an interesting conflict.

nxm 1 day ago|||
I'm sure China doesn't care it's not safe... and there's the issue
ricksunny 6 hours ago||
OpenAI employee Wesam has done this: https://x.com/wesamo__/status/2027772549895995417
Quarrel 23 hours ago||
I know it is a serious topic, but before I clicked on it, I assumed this was going to be about Prime numbers...

Maybe it can get reused after this stuff is over.

kapluni 13 hours ago||
Sadly didn’t age well - OpenAI enthusiastically caved
trickstra 9 hours ago|
It's fun seeing both of these posts on the main page of hackernews at the same time.
tomcam 23 hours ago||
Please take this question at face value. I tend to be slightly pro defense department in this context, but it is not a strongly held belief.

What I do know is that, since its very inception, Google has been doing massive amounts of business with the war department. What makes this particular contract different? I really am trying to understand why these sentiments are surfacing now.

anigbrowl 23 hours ago|
It's a clear enough moral issue that whichever side of it you end up on is likely to have life-shaping consequences 5 or 10 years down the line. It's predictable that there will be domestic or international conflict with a high cost in lives and political coherence over that timescale, and being someone who 'was in AI' at a government-scale vendor is qualitatively different from being a database admin, font designer, or UX specialist.

Substantively, individual employees of these firms may have little or no actual impact on this. But AI is ubiquitous enough and disruptive enough that being professionally connected with it at a time of great geopolitical instability has the potential to be a very very bad look later.

tomcam 15 hours ago||
But hasn’t that always been true at Google? They’ve been military contractors for decades.
anigbrowl 8 hours ago||
No, because 'military contractor' is vague and people don't associate logistics or mapping info with death directly and assign responsibility to some generic person in uniform. 'AI systems that hunt down and kill you' is the sort of sci-fi nightmare people relate to personally.
krystofee 19 hours ago||
How come this is signed by OpenAI engineers while OpenAI participates in it with DoW? https://x.com/sama/status/2027578652477821175
MattDaEskimo 1 day ago||
This was a brave, heartwarming read. Thank you to the teams
mythz 1 day ago||
These 2 Exceptions shouldn't have to be disputed.

At this point I'd go far to say I wouldn't trust any company with my AI history that caves to DoD demands for mass domestic surveillance or fully autonomous weapons.

Your AI will know more about you than any other company, not going to be trusting that to anyone who trades ethics for profits.

muyuu 5 hours ago|
looking at the news right now... i don't know about that