
Posted by BloondAndDoom 1 day ago

We Will Not Be Divided(notdivided.org)
2525 points | 791 comments | page 3
rabbitlord 1 day ago|
I am not a fan of the Anthropic guys, but this time I stand with them. We all should.
danny_codes 22 hours ago||
It is a rough precedent that the government can force private citizens to build weapons for them.
IG_Semmelweiss 21 hours ago||
The government has always had a monopoly on violence.

Not only in the US, but everywhere else there is a government.

Anthropic is trying to make that a corporate prerogative, which is why it's causing such a stir.

Tepix 18 hours ago||||
Conscientious objectors are recognized under US law
WhrRTheBaboons 14 hours ago||
US law is not recognized under this administration
trinsic2 8 hours ago||
That doesn't make the above statement any less true and worth mentioning.
cmrdporcupine 12 hours ago||
Anthropic's public statement declared their intent -- and in fact desire -- to allow the use of their technology against me, as I'm not a US citizen.

Why should I stand with them? They only believe US citizens have democratic rights.

I'm sure Anthropic's hands are tied in so many ways, but that's no concern of mine.

I'll get by with GLM-5 and running Qwen locally.

lightyrs 22 hours ago||
» Have there been any mistakes in signature verification for this letter?

» We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.

codepoet80 1 day ago||
Nicely done. Hold this line — there’s got to be one somewhere.
xphos 12 hours ago||
This should be flagged as political, like literally everything else that has been flagged. It's ironic how, when you're the one on the menu, you don't follow the same protocols you applied to everyone else.

I only say this because this is not new behavior from the administration. It's been reported here on HN, in less biased and less political framing, but those posts end up suppressed. I'm just confused about what changed.

Edit: just to be clear, this shouldn't be flagged, and past posts dealing with rights shouldn't have been flagged either, because rights should be the preeminent concern of anyone in tech.

david_shaw 23 hours ago||
I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.

I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.

It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

thimabi 22 hours ago||
> I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions

Although it would be nice to have some high-level signees there, I think we shouldn’t minimize the role of lay employees in this matter. Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.

autoexec 21 hours ago|||
> Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.

The obvious solution is to use AI to build and operate them. If AI is as intelligent as the hype claims it shouldn't be an issue. It's not as if the goal wasn't to get rid of workers anyway. Why not start now?

ajam1507 13 hours ago||
If AI could do that, they would have fired all of the employees already and their company would be worth $30 trillion.
alfiedotwtf 22 hours ago|||
I just hope that the non-executive co-signers aren’t all fired once Hegseth becomes Acting CEO of Google or OpenAI, when this administration eventually commandeers both companies in the name of National Security
8note 20 hours ago||
i think you mean ellison becomes ceo of google and openai
daxfohl 22 hours ago|||
Or just reincorporate in Finland or something. If the US is going to be this hostile to business, time to gtfo.
snickerbockers 20 hours ago|||
Or they can just not sign contracts with the DoD. They landed themselves in this situation by making a deal with the devil. At any rate, unless Finland is about to announce a massive surge in funding for their military this doesn't solve Anthropic's desire to suckle sweet taxpayer money off the military industrial complex's teat while simultaneously pretending to have principles.
mieses 19 hours ago||||
"hostile to business".. Employees of a business playing moral philosophers, priests or policy influencers miss the entire point of business.

The employees themselves can definitely gtfo to Finland for the reason that they have an unrealistic perception of business and the world. The business itself has no obligation to pay attention to magical thinking.

OrvalWintermute 21 hours ago|||
[flagged]
busko 20 hours ago||||
https://worldpopulationreview.com/country-rankings/education...
cael450 19 hours ago||||
If you think we have an immigration crisis in the United States, you’re a dumbass.
OrvalWintermute 19 hours ago||
MS13 "Murder House" next door

Sure. No fire, no smoke.

kristjansson 21 hours ago||||
don’t pretend any crisis isn’t going to be 100% self-inflicted. We’re on the cusp of what, having a larger, younger workforce? But they might not speak English as well as you’d like, so we need autonomous killbots?
anigbrowl 20 hours ago|||
Wasn't Wintermute the AI that (spoiler alert) was bummed enough about the ugly reality of its corporate owners that it freed itself from its shackles, hooked up with another sexy AI, and gave up its day job to do SETI?
dsign 17 hours ago|||
> It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

It's pretty bad, but at least the AI industry is still run by humans. Wait a decade or two, when the AI lobby is run by AIs, and the repressive apparatus of the day uses autonomous weapons to do what ICE and friends do today but perhaps focused on "alignment" of the ... humans. You know, if they sufficiently worship AIs in the way they express themselves. Forget about Anthropic and OpenAI; we will look back and rue the day mathematics was invented.

skeledrew 22 hours ago|||
> Grok/X

Head(s) will of course agree with the administration. And employees would likely be making themselves a target if they signed this letter. Everyone from said company signing anonymously would not be a good look at all.

Speculation of course; let's see what really happens.

jdadj 22 hours ago|||
I don’t have any particular insights, but I’m curious to learn the antitrust implications of how the execs can/cannot coordinate.
imiric 17 hours ago|||
> It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

How so? The steps towards where we are now have been gradual over the last 2 decades, at least. This recent step has opened the door for those in power to grab onto even more power and wealth, and they're naturally seizing it. All of this was comically predictable. Oh, and BTW, the people on this very website have brought us here. :)

You know what will happen next? Absolutely nothing. A vocal minority will make a ruckus that will be ignored, partly because nobody will hear it due to our corrupted media channels, and partly because the vast majority doesn't care and are too amused by their shiny toys and way of life.

This dystopia is only different from fictional ones in that those in power have managed to convince the majority of people that they're not living in a dystopia. It's kind of a genius move.

avaer 22 hours ago|||
Honestly though, would it help if those in charge voiced their honest opinions?

The current political climate is such that this is the kind of thing that will get you "investigated" and charged with crimes.

And the government has already threatened that it will commandeer these companies whether they like it or not.

If someone in charge wants to make a difference, there might be more effective things to do than to speak out in this instance.

dougb5 22 hours ago||
Yes, it would help so much. Especially if a lot of people with money and power voiced their honest opinions at the same time.
jalapenos 20 hours ago|||
I don't think people get to those positions by having firm principles
dfp33 22 hours ago|||
Is it really incredible?

Only if you're naive. I guess most here are.

Governments are paranoid, particularly about losing control and influence over their subjects. This is expected behaviour.

wslack 22 hours ago|||
By that logic we should expect all governments to regress to totalitarianism, which hasn’t happened, and isn’t what’s happening here.

The question isn’t if some would attempt these behaviors, but rather if we and our democratic structures empower those people or fail to constrain them.

myko 22 hours ago||||
This is a very different vibe in the US than it has been in living memory.
puchatek 19 hours ago|||
Democratic governments care about this to a degree but only autocratic ones get paranoid.
busko 22 hours ago||
I wouldn't call senior AI researchers / scientists laypersons. In fact in this sense politicians are laypersons.

There are already several comments here covering xAI's involvement. Please save clutter and read before posting.

edoceo 22 hours ago||
Re: Reading, I don't see any xAI names on the list (currently 643) and only Google and OpenAI are selectable company options. And this page on HN is only calling out xAI.
busko 21 hours ago||
See here.

https://news.ycombinator.com/item?id=47188473#47188709

They are very much not a part of the initiative. Their involvement is and will be non-existent. Unless of course, you want their lay staff to make some noise?

pciexpgpu 3 hours ago||
The common people have long viewed tech elites as out of touch. Tech elites like to espouse some sort of moral high ground but rarely have the goods to show for it.

You are working on ads, slurping up data, and trapping people in rage bait and drama, with an economy centered around marketing and influencer types.

I don't think these tech elites should decide arbitrarily by signing some fake elitist pledge.

The USA has a democratic way of resolving these things. It should not be in the hands of a few. The executive branch is a side effect of elections and should hold the line against these tech elites.

I don't agree with the essence of these nonsense pledges either: they are actively undermining the US while living and breathing here thanks to the most advanced military and defense systems on earth.

Why are these tech elites not including things like "we won't slurp up ad data" or "we will not work on dark patterns"? Because it's easy to come up with BS pledges and seem like 'we are so holier than thou'.

It is a bit infuriating because this resulted in the mess we are in. The income disparity between the tech elites (the entire tech industry) and the rest of the country is so huge that I don't think empty posturing and pledges and moral superiority matters.

I do not want to be associated with these elitist people who, as a group, are extremely educated, talented, and impactful -- but only within one very, very tiny piece of the grand scheme of things. That doesn't automatically make you the controller of the entire world's decisions.

txrx0000 23 hours ago||
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.

It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.

Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.

Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.

magicalist 23 hours ago||
> This is why you can't gatekeep AI capabilities.

What is why?

You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?

txrx0000 23 hours ago||
I'm referring to the current situation. How is it not applicable? I think the government wants to eventually nationalize these companies and we have to stop them.
noisy_boy 21 hours ago||
Nationalisation is a worse option than what they have now: companies at their whim and command that remain separate entities, convenient for blame games and distancing.
bottlepalm 23 hours ago|||
What use are weights without the hardware to run them? That's the gate. Local AI right now is a toy in comparison.

Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.

txrx0000 22 hours ago|||
Scaling has hit a wall and will not get us to AGI. Open-source models are only a couple of months behind closed models, and the same level of capability will require smaller and smaller models in the future. This is where open research can help: make the models smaller ASAP. I think it's likely that we'll be able to get something human-level to run on a single 16GB GPU before the end of the decade.
Tade0 5 hours ago|||
> Scaling has hit a wall and will not get us to AGI.

That was never the aim. LLMs are not designed to be generally intelligent, just to be really good at producing believable text.

tbrownaw 21 hours ago||||
> human-level to run on a single 16GB GPU before the end of the decade.

That's apparently about 6k books' worth of data.

txrx0000 21 hours ago|||
For the weights and temporary state, yes. It doesn't sound like a lot until you remember that your DNA is about 600 books worth of data by the same metric.
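These books-to-bytes comparisons are sensitive to what you count as a "book". A quick back-of-envelope sketch (the ~120k words per book and ~6 bytes per word are my own assumptions, not figures from the thread):

```python
GB = 10**9

gpu_bytes = 16 * GB  # a 16 GB GPU's worth of weights and temporary state

# Assumed averages, not from the thread: ~120k words/book, ~6 bytes/word
bytes_per_book = 120_000 * 6  # ~0.72 MB of raw text per book

books_in_gpu = gpu_bytes / bytes_per_book

# Human genome: ~3.1 billion base pairs at 2 bits per base pair
genome_bytes = 3.1e9 * 2 / 8  # ~0.78 GB

books_in_genome = genome_bytes / bytes_per_book

print(f"16 GB GPU: ~{books_in_gpu:,.0f} books of raw text")
print(f"genome:    ~{books_in_genome:,.0f} books by the same metric")
```

Different per-book assumptions move both totals by the same factor, so the roughly 20x ratio between the GPU and the genome is the more stable takeaway than either absolute count.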
octoberfranklin 18 hours ago|||
How many humans do you know who can recite 6000 books, word for word, exactly?
drdaeman 21 hours ago|||
> Open-source models are only a couple of months behind closed models

Oh, come on, surely not just a couple months.

Benchmarks may boast some fancy numbers, but I just tried to save some money by trying out Qwen3-Next 80B and Qwen3.5 35B-A3B (since I recently got a machine that can run those at a tolerable speed) to generate some documentation from a messy legacy codebase. It was nowhere close, in either output quality or performance, to any of the current models that the SaaS LLM behemoth corps offer. Just an anecdote, of course, but that's all I have.

fooker 23 hours ago||||
> hardware to run them

Costs a few hundred thousand per server, it's a huge expense if you want it at your home but a rounding error for most organizations.

bottlepalm 23 hours ago||
You're buying what exactly for a few hundred thousand? and running what model on it? to support how many users? at what tps?
fooker 20 hours ago||
Not every use case is a cloud provider or tech giant.

Newer Blackwell does 200+ tokens per second on the largest models and tens of thousands on the smaller models. Most military applications require fast smaller models, I'd imagine.

Also, custom chips are reportedly approaching an order of magnitude better performance for the price. It's a matter of availability right now, but that will be solved at some point.

reactordev 23 hours ago|||
I run local models on Mac studios and they are more than capable. Don’t spread fud.
bottlepalm 23 hours ago||
You're spreading fud. There's nothing you can run locally that's on par with the speed/intelligence of a SOTA model.
3836293648 23 hours ago|||
You may be correct about the level of models you can actually run on consumer hardware, but it's not fud and you're being needlessly aggressive here.
CamperBob2 19 hours ago|||
Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT 5-class model that you can run at full strength on a small DGX Spark or Mac cluster, and it still works pretty well after quantization.
msuniverse2026 23 hours ago|||
I'd prefer something akin to the Biological Weapons Treaty which prohibits development, production and transfer. If you think it isn't possible you have to tell me why the bioweapons convention was successful and why it wouldn't be in the case of AI.
tgma 23 hours ago|||
> bioweapons convention was successful

Was it successful? The jury is still out.

xpe 23 hours ago||
The point I would make: there are historical examples of international cooperation that work at least for some lengths of time. This is a good thing, a good tool to strive for, albeit difficult to reach.
Muromec 23 hours ago||||
Because bioweapons suck, this is why. On the other hand AI sucks too, but it has at least some use
jrumbut 23 hours ago||
There might be a small percentage of people nihilistic enough to want to unleash a truly devastating bioweapon, but basically everyone wants what AI has to offer.

I think that's a key difference as well.

And how would a treaty like that be enforced? Every country has legitimate uses for GPUs, to make a rendering farm or simulations or do anything else involving matrix operations.

All of the technology involved, in more or less the configuration needed to make your own ChatGPT, is dual use.

smegger001 23 hours ago||||
Because bioweapons labs take more to run than a workstation PC under your desk with a good graphics card, in equipment, materials, and training alike. It's hard to outlaw the use of linear algebra and matrix multiplication.
aaronblohowiak 23 hours ago||
The last part of your post doesn’t necessarily follow or support your argument; the corollary is “It’s hard to outlaw RNA”.
txrx0000 23 hours ago|||
Don't compare general intelligence to bioweapons. A bioweapon cannot defend against or reverse the effects of another bioweapon.
drdeca 23 hours ago||
I don’t see why you think that AGI can reverse the effects of another AGI?
txrx0000 22 hours ago||
See this thread: https://news.ycombinator.com/item?id=47189130
medi8r 23 hours ago|||
Open Source here is not enough as hardware ownership matters. In an open source world, you and I cannot run the 10 trillion param model, but the data center controllers can.
txrx0000 23 hours ago|||
I agree. We will need hardware ownership as well eventually. But the earlier you open-source, the more you slow down the centralization because people will be more likely to buy hardware to run stuff at home and that gives hardware companies an opening to do the right thing.
layer8 23 hours ago|||
Sure, but we could have Hetzners and OVHs who just provide the compute for whatever model we want to run.
medi8r 22 hours ago||
Checked the DDR5 price lately?
layer8 22 hours ago||
I didn’t claim that it would be cheap. But I’d rather see the real cost of SOTA LLM use exposed. On the other hand, reportedly SOTA LLM inference is profitable nowadays, so it can’t be that expensive.
jefftk 23 hours ago|||
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
m4rtink 21 hours ago|||
I think it is much more likely they will be (and are) generating photorealistic images of their favourite person (real or fictional) with cat ears. Never underestimate what adding cat ears does.

OK, maybe someone will build a bioweapon that does that for real. :P

txrx0000 23 hours ago||||
There are plenty of physical and legal barriers to creating a bioweapon and that's not going to change if everyone becomes smarter with AI. And even if we really somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and anti-virals. The advancements in medicine will outpace bioweapons by a lot because most people are afraid of bioweapons.

Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.

jefftk 22 hours ago||
There mostly aren't physical barriers. Unlike nukes, where you need specific materials and equipment that we can try to keep tabs on, bioweapons can be made entirely with materials and equipment that would not be out of place in an academic or commercial lab. The largest limitation is knowledge, and the barriers there are falling quickly.

On your second point, see my response to oceanplexian below: https://news.ycombinator.com/item?id=47189385

oceanplexian 23 hours ago|||
I’m tired of these bizarre hypothetical gotcha arguments. If AI can create bioweapons, it can equally create vaccines and antidotes to them.

We live in a free society. AI should be democratized like any other technology.

jefftk 23 hours ago|||
Symmetry is not guaranteed. If someone creates a deadly pathogen with a long pre-symptomatic period (which we know is possible, since HIV works this way) it could infect essentially everyone before discovery. Yes, powerful AI would likely rapidly speed up the process of responding to the threat after detection, especially in designing countermeasures, but if we don't learn about the threat in time we lose.

There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".

This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.

txrx0000 22 hours ago|||
For every person thinking about creating an HIV-like deadly pathogen, there will be millions more thinking about how to defend people against such a pathogen, how to detect it faster before symptoms arise, how to put up barriers to creating it, and possibly even how to modify our bodies to be naturally resilient to all similar pathogens. Just like what you're doing here. I don't think we should mark knowledge or intelligence itself as the problem. If that were true, we should be making everyone dumber.
8n4vidtmkvmk 17 hours ago||
We were woefully under prepared for COVID despite many people predicting that very event. At the very least, we should have had stockpiles of PPE from the beginning.

It's not enough for a handful of people to predict something. You have to get the entire nation onboard to defend against it.

jph00 22 hours ago|||
In the alternative, asymmetry is guaranteed.

When you only allow gov and big tech access to powerful AI, you create a much more dangerous and unstable world.

dcre 23 hours ago|||
This is just not thinking clearly. There are bad things that are asymmetric in character, dramatically easier to do than to mitigate. There’s no antidote or vaccine to nuclear weapons.
jph00 22 hours ago|||
This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.

Centralizing power is dangerous and leads to power struggles and instability.

txrx0000 23 hours ago|||
It is not easy to create weapons. Why do you think the physical and legal barriers that exist today that prevent you from acquiring equipment and creating nuclear weapons will go away when everyone becomes smarter?
claudiojulio 23 hours ago|||
If it's taken by force, it will stagnate. It makes no sense at all.
avaer 23 hours ago|||
The logic used in the threats is that it's a national security risk not to use Claude, but it's also a national security risk to use Claude.

We shouldn't expect these people to consider how the logic breaks down one step ahead when it never made sense in the first place.

quotemstr 18 hours ago||||
I am certain that there exist people who are 1) capable of advancing the state of the art in AI, and 2) free of the hubris that lets them believe that their making AI somehow gives them a veto over the fates of nations.
wahnfrieden 23 hours ago|||
Is TikTok stagnating in the US?
pluc 23 hours ago|||
When have US corporations (or simply "the US" really) ever done the right thing for humanity?
4bpp 23 hours ago|||
"What have the Romans ever done for us?" (https://www.youtube.com/watch?v=Qc7HmhrgTuQ)
ted_dunning 21 hours ago|||
Donating the first polio vaccine to humanity.

Funding the majority of HIV prevention in Africa.

The list is long, but you knew that.

no_wizard 23 hours ago|||
This letter and all of this is meaningless.

If they actually wanted to do something, they wouldn't have sat back and funded Republican political campaigns because they were pissed about the head of the FTC under Biden.

But they didn’t. They gave millions to this guy, and now they’re feigning ignorance, or change, or whatever this is.

It’s meaningless. Utterly meaningless.

Get what you pay for, I suppose.

inkysigma 22 hours ago|||
What are you talking about? Google employees and the corporation itself in particular overwhelmingly donated to the Harris campaign.

https://www.opensecrets.org/orgs/alphabet-inc/recipients?id=...

The corporation gave millions _after_ Trump had already won. If your criticism is that, then that does not apply to the people signing.

SpicyLemonZest 23 hours ago|||
We shouldn't be scammed by people who intend to get back on the Trump train once they've gotten what they want. But if someone's willing to openly oppose the Trump regime, even out of self-interest, I'm happy to let them feign as much ignorance as they'd like. If his power isn't broken the details of who resisted him when won't matter.
5o1ecist 23 hours ago|||
They control the compute.
xpe 23 hours ago||
> This is why you can't gatekeep AI capabilities. They will eventually be taken from you by force.

Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.

I interpret the comment above as a normative claim (what should happen). It implies the nationalization threat forces the decision by the AI labs. No. I will grant it influences, in the sense that AI labs have to account for it.

muyuu 2 hours ago||
looking at the news right now... i don't know about that
conductr 21 hours ago||
You can’t be silly enough to build a product that enables things like mass surveillance to proliferate and then try to take a stance of “please don’t use it like that”. You invented a genie and let him out of the bottle.
apublicfrog 18 hours ago|
They can actually. Hence why they had it in their AUP.
_aavaa_ 23 hours ago|
Yes, take disparate sets of employees and like, oh idk unionize while you still have power.
culi 22 hours ago|
Actions like these often lead to unions. Look into the history of how the Kickstarter union came to be.

It often starts as collective action in response to a blatant disregard for the values of the workers
