Posted by BloondAndDoom 1 day ago
It’s too little, too late. “Don’t be evil” is not a value anyone is even pretending to uphold.
I’d rather some of these very smart people started developing countermeasures.
Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.
The tools will be used however the government wants them to be used. The government makes the laws and wages the wars, and the corporation will follow the law whether it wants to or not.
So either you are willing to work on a tool that is not under your control, or you are not.
While the government is accustomed to complying with software licensing rules, it is not accustomed to being limited in warfare, so the two have now come into an interesting conflict.
Maybe it can get reused after this stuff is over.
As far as I know, Google has been doing massive amounts of business with the defense department since its very inception. What makes this particular contract different? I really am trying to understand why these sentiments are surfacing now.
Substantively, individual employees of these firms may have little or no actual impact on this. But AI is ubiquitous enough and disruptive enough that being professionally connected with it at a time of great geopolitical instability has the potential to be a very very bad look later.
At this point I'd go so far as to say I wouldn't trust my AI history to any company that caves to DoD demands for mass domestic surveillance or fully autonomous weapons.
Your AI will know more about you than any other company does; I'm not going to trust that to anyone who trades ethics for profit.