Posted by spenvo 7/1/2025

Sam Altman Slams Meta’s AI Talent Poaching: 'Missionaries Will Beat Mercenaries' (www.wired.com)
344 points | 699 comments
softwaredoug 7/2/2025|
Mercenaries over missionaries.

Many employers want employees to act like cult members. But then when the going gets tough, those are often the first laid off, and the least prepared for it.

Employers, you can't have it both ways. As an employee, don't get fooled.

foobiekr 7/2/2025||
During the first-ever layoff at $company in 2001, part of the dotcom implosion, one of my coworkers who got whacked complained that it didn't make sense, as he was one of the company's biggest boosters and believers.

It was supremely interesting to me that he thought the company cared about that at all. I couldn’t get my head around it. He was completely serious, he kept arguing that his loyalty was an asset. He was much more experienced than me (I was barely two years working).

In hindsight, I think it is true that companies value that in a way. I've come to appreciate people who just stick it out for a while, and I try to make sure their comp makes it worth their while. They are so much less annoying to deal with than the assholes who constantly bitch and moan about doing what they're paid for.

But as a personal strategy, it’s a poor one. You should never love or be loyal to something that can’t love you back.

citizenpaul 7/2/2025|||
The one and ONLY way I've ever seen "company" loyalty rewarded in any way is if you have a DIRECT relationship with a top level senior manager (C-suite). They will specifically protect you if they truly believe you are on "their side" and you are at their beck and call.
octo888 7/2/2025|||
Always a fun game to watch a new C-suite get hired and then figure out which of the new hires that follow are their mates.
serial_dev 7/2/2025||||
Companies appreciate loyalty… as long as it doesn't cost them anything. The moment you ask for more money or they need to reduce the workforce, all of that goes out the window.
yibg 7/2/2025|||
I think loyalty has value to the company but not as much as people think. To simplify it, multiple things contribute to "value" and loyalty is just a small part of it.
ryeguy_24 7/2/2025|||
100% agree. There is no reason for employees to be loyal to a company. LLM building is not some religious work. It's machine learning on big data. Always do what is best for you, because companies don't act like loyal humans; they act like large organizations that aren't always fair or rational or logical in their decisions.
saubeidl 7/2/2025|||
> LLM building is not some religious work.

To a lot of tech leadership, it is. The belief in AGI as a savior figure is a driving motivator. Just listen to how Altman, Thiel or Musk talk about it.

foobiekr 7/2/2025|||
That's how they talk about it publicly. I can attest that the companies of two of the three you list are not like that internally at all. It's all marketing, outwardly focused.
saubeidl 7/2/2025||
I believe it's the opposite. They don't dare say their ridiculous tech cult stuff to their employees, but it's what they truly believe.

AGI is their capitalist savior, here to redeem a failing system from having to pay pesky workers.

cmrdporcupine 7/2/2025||
"Tech founders" for whom the "technology" part is the thing always getting in the way of the "just the money and buzzwords" part.

Now they think they can automate it away.

25+ years in this industry and I still find it striking how different the perspective between the "money" side and the "engineering" side is... on the same products/companies/ideas.

belter 7/2/2025||||
> Just listen to how Altman, Thiel or Musk talk about it.

It's surprising how little they seem to have thought it through. AGI is unlikely to appear in the next 25 years, but even if, as a mental exercise, you accept it might happen, it reveals its own paradox: if AGI is possible, it destroys its own value as a defensible business asset.

Like electricity, nuclear weapons, or space travel, once the blueprint exists, others will follow. And once multiple AGIs exist, each will be capable of rediscovering and accelerating every scientific and technological advancement.

AGI isn’t a moat. AGI is what kills the moat.

bmau5 7/2/2025||
The prevailing idea seems to be that the first company to achieve superintelligence will be able to leverage it into a permanent advantage via exponential self improvement, etc.
belter 7/3/2025||
> able to leverage it into a permanent advantage via exponential self improvement

Their fantasies of dominating others through some modern-day Elysium reveal far more about their substance intake than a rational grasp of where they actually stand... :-)

dragonwriter 7/3/2025|||
Tech leadership always treats new ventures or fields that way, because being seen to treat it that way and selling the idea of treating it that way is how you attract people (employees, and if you are very lucky investors, too) that are willing to sacrifice their own rational material interests to advancing what they see as the shared religious goal (which is, in fact, the tech leader’s actual material interest.)

I mean, even on HN, which is clearly a startup-friendly forum, that tendency among startup leaders has been noted and mocked repeatedly.

neves 7/2/2025|||
But at least consider the impact of your job on society. A lot of these big companies are noxious and addictive and are destroying our social fabric.
asoneth 7/2/2025|||
> Employers, you can't have it both ways.

Exactly. Though you can learn a lot about an employer by how it has conducted layoffs. Did they cut profits and management salaries and attempt to reassign people first? Did they provide generous payouts to laid off employees?

If the answer to any of these questions is no then they're not worth committing to.

sailfast 7/2/2025|||
1000x this. You should ideally feel like you’re part of a great group of folks and doing good work - but this is not a guarantee of anything at all.

When it comes down to it, you’re expendable when your leadership is backed into a corner.

stevenAthompson 7/2/2025|||
A Ronin is just a Samurai who has learned his lesson.
anshumankmr 7/2/2025|||
The only place you can be both an employee and a missionary is, well, if you are an actual missionary, or working at a charity/NGO etc. trying to help people or animals.

The rest of us are mercenaries only.

ysofunny 7/2/2025||
or if you own the company
neves 7/3/2025||
It's also nice to work for the government.

At least if you work in a functional democracy where state bureaucrats can't be fired at a dictator's whim.

anshumankmr 7/5/2025||
Be careful what you wish for... go too far in the other direction and you get "babu" culture, which I feel is one of the things that has ruined India.
neves 7/2/2025|||
We are not a company, we are a family
kayodelycaon 7/2/2025|||
Ferengi Rules of Acquisition:

#6: Never allow family to stand in the way of opportunity.

#111: Treat people in your debt like family… exploit them.

#211: Employees are the rungs on the ladder of success. Don't hesitate to step on them.

belter 7/2/2025||
#91: Your boss is only worth what he pays you.
softwaredoug 7/2/2025||||
These CEOs will be the first to say "we are a team, not a family" when they do layoffs.
kevin_thibedeau 7/2/2025|||
"I have decided that you need to go spend more time with your family. Really I'm just doing you a favor."
anshumankmr 7/2/2025|||
Relevant Silicon Valley scene: https://www.youtube.com/watch?v=u48vYSLvKNQ
meepmorp 7/3/2025|||
Well, we're a family, but you're still being disowned at layoff time
bko 7/2/2025|||
I think there's more to work than just taking home a salary. Not equally true among all professions and times in your life. But most jobs I took were for less money with questionable upside. I just wanted to work on something else or with different people.

The best thing about work is the focus on whatever you're doing. Maybe you're not saving the world, but it's great to go in and have one goal that everyone works towards. And you get excited when you see your contributions make a difference or you build a great product. You can laugh and say I was part of a 'cult', but it sure beats working a miserable job for just a slightly higher paycheck.

ramoz 7/2/2025|||
Missionaries https://www.youtube.com/watch?v=zt7BPxHqbkU
tracker1 7/2/2025|||
Especially for an organization like OpenAI that completely twisted its original message in favor of commercialization. The entire missionary bit is BS trying to get people to stay out of a sense of what exactly?

I'm all for having loyalty to people and organizations that show the same. Eventually it can and will shift. I've seen management changed out from over me more times than I can count at this point. Don't get caught off guard.

It's even worse in the current dev/tech job market, where wages are being pushed down to around 2010 levels. I've been working two jobs just to keep up with expenses since I've been unable to match my more recent prior income. One ended recently, and I'm looking for a new second job.

m3kw9 7/2/2025|||
That's because you don't believe in or realize the mission of the product and its impact on society. If you work at Microsoft, you are just working to make MS money, as they are like a giant machine.

That said it seems like every worker can be replaced. Lost stars replaced by new stars

Henchman21 7/2/2025|||
They sure can have it both ways. They do now.
HellDunkel 7/2/2025|||
No. The cult members are less likely to be laid off, simply because they don't stand out and provide less surface for attack.
quijoteuniv 7/2/2025||
Only be loyal to doing work :)
jrm4 7/1/2025||
Big picture, I'll always believe we dodged a huge bullet in that "AI" got big in a nearly fully "open-source," maybe even "post open-source" world. The fact that Meta is, for now, one of the good guys in this space (purely strategically and unintentionally) is fortunate and almost funny.
burroisolator 7/2/2025||
AI only got big, especially for coding, because they were able to train on a massive corpus of open source code. I don't think it is a coincidence.
hardwaresofton 7/2/2025|||
Another funny possibly sad coincidence is that the licenses that made open source what it is will probably be absolutely useless going forward, because as recent precedent has shown, companies can train on what they have legally gained access to.

On the other hand, AGPL continues to be the future of F/OSS.

haiku2077 7/2/2025|||
MIT is also still useful; it lets me release code where I don't really care what other people do with it as long as they don't sue me (an actual possibility in some countries)
LtWorf 7/2/2025||
Which countries would these be?
haiku2077 7/2/2025||
The US, for one. You can sue nearly anyone for nearly anything, even something you obviously won't win in court, as long as you find a lawyer willing to do it; you don't need any actual legal standing to waste the target's time and money.

Even the most unscrupulous lawyer is going to look at the MIT license, realize the target can defend it for a trivial amount of money (a single form letter from their lawyer) and move on.

Jensson 7/2/2025|||
You can sue for damages if they have malware in the code; there is no license that protects you from distributing harmful products, even if you do it for free.
haiku2077 7/2/2025||
If I commit fraud, sure. But the code I release is extremely honest about what it does :)
thih9 7/2/2025|||
There are other ways to litigate that the malicious/greedy can use, where MIT offers no protection; e.g. patent trolling.
tom_m 7/2/2025||||
And illegally too. Anthropic didn't pay for those books they used.

It's too late at this point. The damage is done. These companies trained on illegally obtained data and they will never be held accountable for that. The training is done and they got what they needed. So even if they can't train on it in the future, it doesn't matter. They already have those base models.

ddq 7/2/2025||
Then punitive measures are in order. Add it to the pile of illegal, immoral, and unethical behavior of the feudal tech oligarchs already long overdue for justice. The harm they have done and are doing to humanity should not remain unpunished.
malfist 7/2/2025||||
Legally or illegally gained access too. Lest we forget Meta pirating books
coffeefirst 7/2/2025||
And the legality of this may vary by jurisdiction. There’s a nonzero chance that they pay a few million in the US for stealing books but the EU or Canada decide the training itself was illegal.
andy99 7/2/2025|||
Then the EU and Canada just won't have any sovereign LLMs. They'll have to decide if they'd rather prop up some artificial monopoly or support (by not actively undermining) innovation.
foobiekr 7/2/2025|||
It’s not going to happen. The EU is desperate to stop being in fourth place in technology and will do absolutely nothing to put a damper on this. It’s their only hope to get out of the rut.
EGreg 7/2/2025||||
Explain how AGPL would prevent AI from being trained on it or AI-generated code competing with it. I have used AGPL for a decade and still not sure.
hardwaresofton 7/2/2025||
It wouldn't -- AGPL code that is picked up would also just get "fair used" into new software.

That said, AGPL as a trend was a huge closing of the spigot of free F/OSS code for companies to use and not contribute back to.

EGreg 7/2/2025||
Yes, I hope it was a trend. People were judging me when I first started using it over 10 years ago.
jorvi 7/2/2025||||
Yup. The book torrenting case is pretty nuts.

If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.

Pants-on-head idiotic judge.

derektank 7/2/2025|||
>If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.

Assuming you're referring to Bartz v. Anthropic, that is explicitly not what the ruling said, in fact it's almost the inverse. The judge said that output from an AI model which is a straight up reproduction of copyrighted material would likely be an explicit violation of copyright. This is on page 12/32 of the judgement[1].

But the vast majority of output from an LLM like Claude is not a word for word reproduction; it's a transformative use of the original work. In fact, the authors bringing the suit didn't even claim that it had reproduced their work. From page 7, "Authors do not allege that any infringing copy of their works was or would ever be provided to users by the Claude service." That's because Anthropic is already explicitly filtering out results that might contain copyrighted material. (I've run into this myself while trying to translate foreign language song lyrics to English. Claude will simply refuse to do this)[2]

[1] https://www.courtlistener.com/docket/69058235/231/bartz-v-an...

[2] https://claude.ai/share/d0586248-8d00-4d50-8e45-f9c5ef09ec81

gosub100 7/2/2025||
They should still have to pay damages for possessing the copyrighted material. That's possession, which courts have found is a copyright violation. Remember all the 12-year-olds who got their parents sued back in the 2000s? They had unauthorized copies.
derektank 7/2/2025||
I don't know what exactly you're referring to here. The model itself is not a copy, you can't find the copyrighted material in the weights. Even if you could, you're allowed under existing case law to make copies of a work for personal use if the copies have a different character and as long as you don't yourself share the new copies. Take the Sony Betamax case, which found that it was legal and a transformative use of copyrighted material to create a copy of a publicly aired broadcast onto a recording medium like VHS and Betamax for the purposes of time-shifting one's consumption.

Now, Anthropic was found to have pirated copyrighted work when they downloaded and trained Claude on the LibGen library. And they will likely pay substantial damages for this. So on those grounds, they're as screwed as the 12 year olds and their parents. The trial to determine damages hasn't happened yet though.

gosub100 7/2/2025||
> The model itself is not a copy,

Agreed

> the Sony Betamax case, which found that it was legal and a transformative use of copyrighted material to create a copy of a publicly aired broadcast

Good thing libgen is not publicly aired in broadcast format.

> So on those grounds, they're as screwed as the 12 year olds and their parents.

Except they have deep enough pockets to actually pay the damages for each count of infringement. That's the blood most of us want to see shed.

You cannot have trained the model without possession of copyrighted works. Which we seem to be in agreement on.

hardwaresofton 7/2/2025||||
This was immediately my reaction as well, but I'm not a judge so what do I know. In my own mind I mark it as a "spice must flow" moment -- it will seem inevitable in retrospect but my simple (almost surely incorrect) take is that there just wasn't a way this was going to stop AI's progress. AI as a trend has incredible plot armor at this point in time.

Is the hinge that the tools can recall a huge portion (not perfectly, of course) but usually don't? What seems even more straightforward is the substitute-good idea: it seems reasonable to assume people will buy fewer copies of book X when they start generating books heavily inspired by book X.

But, this is probably just a case of a layman wandering into a complex topic, maybe it's the case that AI has just nestled into the absolute perfect spot in current copyright law, just like other things that seem like they should be illegal now but aren't.

fragmede 7/2/2025||||
I didn't see the part of the trial where they got the "entirety of most books" out of Llama. What did you see that I didn't?
redman25 7/2/2025||||
Sad to say but it would have put US companies at a major disadvantage if they were not allowed to.
tim333 7/2/2025||||
I'm not sure that's true. I've never heard of a human being done for copyright for reciting a book passage.

I daresay the difference with AI is that pretty much no human can do that well enough to harm the copyright holder, whereas AI can churn it out.

tom_m 7/2/2025|||
Yea, that dipshit judge just opened the floodgates for more problems. The problem is judges don't understand how this stuff works, and they're in the position of having to make a judgement on it. They're completely unprepared to do so.

Now there's precedent for future cases where theft of code or any other work of art can be considered fair use.

sneak 7/2/2025||||
The AGPL is a nonfree license that is virtually impossible to comply with.

It’s an EULA trying to pretend it’s a license. You can’t have it both ways.

hardwaresofton 7/2/2025|||
This is a strong claim, given it is listed as a free, copyleft license:

https://www.gnu.org/licenses/agpl-3.0.en.html

Could you expand on why you think it's nonfree? Also, it's not that hard to comply with either...

px43 7/2/2025|||
For some people "free" means "autonomy", and copyleft licences do a lot to restrict autonomy.
jrochkind1 7/2/2025|||
So interestingly, free meant autonomy for Stallman and the original proponents of "copyleft" style licenses too. But autonomy for end-users, not developers. Stallman et al believed the copyleft-style licenses maximized autonomy for end-users; rightly or wrongly, that was the intent.
hardwaresofton 7/2/2025||||
Yeah, if it's a problem of definition, then I definitely agree that it might not match there; it certainly isn't a do-anything-you-want license.
waffletower 7/2/2025|||
"Free" decidedly means autonomy; "I have been freed from prison". Use of the word "free" in many OSS licenses is a jarring euphemism.
tedheath123 7/2/2025||
cf. https://en.wikipedia.org/wiki/Two_Concepts_of_Liberty
sneak 7/2/2025|||
marcan does a much more detailed job than I do:

https://news.ycombinator.com/item?id=30495647

https://news.ycombinator.com/item?id=30044019

GNU/FSF are the anticapitalist zealots that are pushing this EULA. Just because they approve of it doesn’t make it free software. They are confused.

hardwaresofton 7/2/2025||
I read through, and I think the analysis suffers from the fact that in the case where the modifier is the user, it's fine.

Free software refers to user freedoms, not developer freedoms.

I don't think the below is right:

> > Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.

> Let's break it down:

> > If you modify the Program

> That is if you are a developer making changes to the source code (or binary, but let's ignore that option)

> > your modified version

> The modified source code you have created

> > must prominently offer all users interacting with it remotely through a computer network

> Must include the mandatory feature of offering all users interacting with it through a computer network (computer network is left undefined and subject to wide interpretation)

I read the AGPL to mean that if you modify the program, then the users of the program (remotely, through a computer network) must be able to access the source code.

It has yet to be tested, but that seems like the common-sense reading to me (which matters, because judges do apply judgement). It just seems like they are trying too hard to land a legal gotcha. I'm not a lawyer so I can't speak to that, but I certainly don't read it the same way.
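To make that reading concrete: on this interpretation, compliance is a property of the running network service, not of every local edit. Here's a minimal sketch in Python of what a clause-13 source offer could look like (the /source endpoint, port, and archive path are my own illustrative assumptions, not anything the license text mandates):

  from http.server import BaseHTTPRequestHandler, HTTPServer

  # Hypothetical tarball of this exact version's source code.
  SOURCE_ARCHIVE = "corresponding-source.tar.gz"

  class Handler(BaseHTTPRequestHandler):
      def do_GET(self):
          if self.path == "/source":
              # Offer the Corresponding Source of the running version at no charge.
              with open(SOURCE_ARCHIVE, "rb") as f:
                  data = f.read()
              self.send_response(200)
              self.send_header("Content-Type", "application/gzip")
              self.end_headers()
              self.wfile.write(data)
          else:
              # The "prominent offer": every response links to the source endpoint.
              self.send_response(200)
              self.send_header("Content-Type", "text/html")
              self.end_headers()
              self.wfile.write(b'Hello! <a href="/source">Get this service\'s source</a>')

  if __name__ == "__main__":
      HTTPServer(("", 8000), Handler).serve_forever()

Under this reading, a purely local edit that no user ever interacts with over a network triggers nothing.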

I don't agree with this interpretation of every-change-is-a-violation either:

> Step 1: Clone the GitHub repo

> Step 2: Make a change to the code - oops, license violation! Clause 13! I need to change the source code offer first!

> Step 1.5: Change the source code offer to point to your repo

This example seems incorrect -- modifying the code does not automatically make people interact with the program over a network...

"free software" was defined by the GNU/FSF... so I generally default to their definitions. I don't think the license falls afoul of their stated definitions.

That said, they're certainly anti-capitalist zealots; that's kind of their thing. I don't agree with that, but that's beside the point.

marcosdumay 7/2/2025|||
It's not really "virtually impossible to comply with". It's very restrictive, yes, but not hard to comply with if you want to.

And yes, it is an EULA pretending to be a license. I'd put good odds on it being illegal in my country, and it may even be illegal in the US. But it's well aligned with the goals of GNU.

surfingdino 7/2/2025||||
And if the AI companies don't like the license, they will ignore it or pay to be given a waiver. Long may they rot in hell for doing that.
yard2010 7/2/2025|||
Hell is, by design, a consequence for poor people (people could literally pay the church to not go to hell[0]). Rich people face no consequences whatsoever, let alone poor-people consequences.

[0] https://www.cambridge.org/core/books/abs/preaching-the-crusa...

GTP 7/2/2025||
Not "by design", as historically hell came first. It was only much later that the Catholic Church started talking about purgatory and the possibility of reducing your punishment by paying money.
smokel 7/2/2025|||
The people running AI companies have figured out that there is no such thing as hell. We have to come up with new reasons for people to behave in a friendly way.
fennecbutt 7/2/2025|||
We already have such reasons. Besides, all religious "kindness" was never kindness without strings attached, even though they'd like you to think that was the case.
_heimdall 7/2/2025|||
The people running AI companies aren't magic, they can't be certain about what comes after death.
pizzafeelsright 7/2/2025|||
If I can have AI retype all code per my desire, how exactly is source code special?

I like open source. I also don't think that is where the magic is anymore.

It was scale for 20 years.

Now it is speed.

bravesoul2 7/2/2025|||
Open source may be necessary but it is not sufficient. You also needed the compute power and architecture discoveries and the realisation that lots of data > clever feature mapping for this kind of work.

A world without open source may have given birth to 2020s AI but probably at a slower pace.

pydry 7/1/2025|||
Don't make the mistake of anthropomorphizing Mark Zuckerberg. He didn't open source anything because he's a "good guy"; he's just commoditizing the complement.

The "good guy" is a competitive environment that would render Meta's AI offerings irrelevant right now if it didn't open source.

somenameforme 7/2/2025|||
The reason Machiavellianism is stupid is that the grand ends that the means aim to obtain often never come to pass, but the awful things done in pursuit of them certainly do. So the motivation behind those means doesn't excuse them. And I see no reason the inverse of this doesn't hold true. I couldn't care less if Zuckerberg thinks open sourcing Llama is some grand scheme to let him take over the world and become its god-king emperor. In reality, that almost certainly won't happen. But what certainly will happen is the world getting free and open source access to LLM systems.

When any scheme involves some grand long-term goal, I think a far more naive approach to behaviors is much more appropriate in basically all cases. There's a million twists on that old quote that 'no plan survives first contact with the enemy', and with these sort of grand schemes - we're all that enemy. Bring on the malevolent schemers with their benevolent means - the world would be a much nicer place than one filled with benevolent schemers with their malevolent means.

hansvm 7/2/2025|||
> The reason Machiavellianism is stupid is that the grand ends that the means aim to obtain often never come to pass

That doesn't feel quite right as an explanation. If something fails 10 times, that just makes the means 10x worse. If the ends justify the means, then doesn't that still fit into Machiavellian principles? Isn't the complaint closer to "sometimes the ends don't justify the means"?

somenameforme 7/2/2025||
You have to assume a grand end is achievable through some knowable means. I don't see any real reason to think this is the case, certainly not on any sort of meaningful timeframe. And I think this is even less true when we consider the typical connotation of Machiavellianism, which is acting through 'evil' means.

It's extremely difficult to think of any real achievements sustained on the back of Machiavellianism, but one can list essentially endless entities whose downfall was brought on precisely by such.

bit1993 7/2/2025||||
Machiavellianism is not for everyone. It is specifically a framework for people in power: kings, heads of state, CEOs, commanders. In competitive environments with a lot at stake (people's lives, money, the future), it is often difficult to make decisions. Having a framework in place that allows you to make decisions is very useful.
luqtas 7/2/2025||
Mitch Prinstein wrote a book about power, and it shows that dark traits aren't the standard in most leaders, nor are they the best way to get into or stay in power.

The author is "board certified in clinical child and adolescent psychology, and serves as the John Van Seters Distinguished Professor of Psychology and Neuroscience, and the Director of Clinical Psychology at the University of North Carolina at Chapel Hill" and the book is based on evidence.

edit: you can't take a book from 1600 and a few living assholes with power and conclude that. There's a bunch of philanthropists and other people around.

pydry 7/2/2025|||
I'm not saying that the end outcome won't be beneficial. I don't have a crystal ball. I'm just saying that what he is doing is in no way selfless or laudable or worthy of praise.

Same goes for when Microsoft went gaga for open source and demanded brownie points for pretending to turn over a new leaf.

ddellacosta 7/2/2025||||
> Don't make the mistake of anthropomorphizing Mark Zuckerberg

Considering the rest of your comment it's not clear to me if "anthropomorphizing" really captures the meaning you intended, but regardless, I love this

shwaj 7/2/2025||
I think it’s a play on “don’t anthropomorphize the lawn mower”, referring to Larry Ellison.
p4ul 7/2/2025||
https://www.youtube.com/watch?v=-zRN7XLCRhc&t=2308s
ddellacosta 7/2/2025||
Gotcha, thank you both, I totally missed this
jrm4 7/1/2025||||
Oh, absolutely -- I definitely meant that in the least complimentary way possible :). In a way, it's just the triumph of the ideals of "open source" -- sharing is better for everyone, even Zuck, selfishly.
mmmeff 7/2/2025||
The moment Meta produces something competitive with OpenAI is the moment they stop releasing the weights and rebrand from Llama. Mark my words.
Quarrelsome 7/1/2025||||
they did say "accidentally". I find that people doing the right thing for the wrong reasons is often the best case outcome.
landl0rd 7/1/2025||||
The price tag on this stuff, in human capital, data, and hardware, is high enough to preclude that sort of “perfect competition” environment.

Don’t let the perfect be the enemy of the good.

petesergeant 7/2/2025||
> The price tag on this stuff, in human capital, data, and hardware, is high enough to preclude that sort of “perfect competition” environment.

I feel like we live in that perfect competition environment right now, though. Inference is mostly commoditized, and it's a race to the bottom for price and latency. I don't think any of the big providers are making super-normal profit, and they are probably discounting inference for access to data/users.

ath92 7/2/2025||
Only because everyone believes it’s a winner takes all game and this perfect competition will only last for as long as the winner hasn’t come out on top yet.
petesergeant 7/2/2025||
> everyone believes it’s a winner takes all game

Why would anyone think that, and why do you think everyone thinks that?

TheOtherHobbes 7/2/2025|||
Because tech is now a handful of baronial oligopolies, and the AI companies are fighting to be the next generation of same.

And this pattern has repeated itself reliably since the industrial revolution.

Successful ASI would essentially end this process, because after ASI there's nowhere else for humans to go (in tech at least.)

mlazos 7/2/2025|||
Everyone always thinks this, at least in big tech; I've never heard a PM or exec say a market is not winner-take-all. It's some weird corpo grift lang that nothing is worth doing unless it's winner-take-all.
moralestapia 7/2/2025|||
>he's just commoditizing the complement

That's a cool smaht phrase but help me understand, for which Meta products are LLMs a complement?

ethbr1 7/2/2025|||
A continuous stream of monetizable live user data?

The entire point of Meta owning everything is that it wants as much of your data stream as it can get, so it can then sell more ad products derived from that.

If much of that data begins going off-Meta, because someone else has better LLMs and builds them into products, that's a huge loss to Meta.

moralestapia 7/2/2025||
Sorry, I don't follow.

>because someone else has better LLMs and builds them into products

If that were true they wouldn't be trying to create the best LLM and give it for free.

(Disclaimer: I don't think Zuck is doing this out of the goodness of his heart, obviously, but I don't see the connection with the complements and whatnot)

comfysocks 7/2/2025|||
Meta has ad revenue. I think Meta’s play is to make it difficult for pure AI competitors to make revenue through LLMs.
ethbr1 7/2/2025|||
Meta's play is to make sure there isn't an obvious superiority to one company's closed LLM -- because that's what would drive customers to choosing that company's product(s).

If LLM effectiveness is all about the same, then other factors dominate customer choice.

Like which (legacy) platforms have the strongest network effects. (Which Meta would be thrilled about)

simianwords 7/2/2025|||
That’s not commoditising the complement!
comfysocks 7/4/2025||
I'm not the poster who said it was.
mu53 7/2/2025|||
I think it's about sapping as much user data from competitors as possible. A company seeking to use an LLM has a choice between OpenAI, LLaMA, and others. If they choose LLaMA because it's free and they can host it themselves, OpenAI misses out on training data and other data like that.
chartered_stack 7/2/2025||
Well, is the loss of training data from customers using self-hosted Llama that big a deal for OpenAI or any of the big labs at this point? Maybe in late-2022/early-2023, during the early stages of RLHF'd mass models, but not today, I don't think. Offerings from the big labs have pretty much settled into specific niches and people have started using them in certain ways across the board. The early land grab is over and consolidation has started.
fantispug 7/2/2025||||
Meta's primary business is capturing attention and selling some of that attention to advertisers. They do this by distributing content to users in a way that maximizes attention. Content is a complement to their content distribution system.

LLMs, along with image and video generation models, are generators of very dynamic, engaging and personalised content. If OpenAI or anyone else wins a monopoly there, it could be terrible for Meta's business. Commoditizing it with Llama, and at the same time building internal capability and a community for their LLMs, was a solid strategy from Meta.

moralestapia 7/2/2025||
So, imagine a world where everyone but Meta has access to generative AI.

There's two products:

A) (Meta) Hey, here are all your family members and friends, you can keep up with them in our apps, message them, see what they're up to, etc...

B) (OpenAI and others) Hey, we generated some artificial friends for you, they will write messages to you every day, almost like a real human! They also look like this (cue AI-generated profile picture). We will post updates on the imaginary adventures we come up with, written by LLMs. We will simulate a whole existence around you, "age" like real humans, we might even marry each other and have imaginary babies. You could attend our virtual generated wedding online, using the latest technology, and you can send us gifts and money to celebrate these significant events.

And, presumably, people will prefer to use B?

MEGA lmao.

Zambyte 7/2/2025||||
Their primary product: advertisements.

It takes content to sell advertisements online. LLMs produce an infinite stream of content.

HDThoreaun 7/2/2025|||
VR/metaverse is dead in the water without gen AI. The content takes too long to make otherwise
saubeidl 7/2/2025|||
What's even crazier is that China are the good guys when it comes to open source AI.
_heimdall 7/2/2025|||
We would have to know their intent to really know if they fit a general understanding of "the good guys."

It's very possible that China is open sourcing LLMs because it's currently in their best interest to do so, not because of some moral or principled stance.

llm_nerd 7/2/2025|||
But that's precisely why Meta are the "good guys". The parent specifically called China the good guys in the same way that Meta is the good guys, though in this case many of the Chinese models are extremely good.

Meta has open sourced all of their offerings purely to try to commoditize the industry to the greatest extent possible, hoping to avoid their competitors getting a leg up. There is zero altruism or good intentions.

If Meta had an actually competitive AI offering, there is zero chance they would be releasing any of it.

echelon 7/2/2025||
Neither China nor Meta are the good guys, and they are not stewards of open source AI.

China has stopped releasing frontier models, and Meta doesn't release anything that isn't in the Llama family.

- Hunyuan Image 2.0 (200 millisecond flux) is not released

- Hunyuan 3D 2.5, the top performing 3D model and an order of magnitude improvement over 2.1, is not released

- Seedream Video, which outperforms Google Veo 3 on ELO rankings, is not released

- Qwen VLo, an instructive autoregressive model, is not released

The list is much larger than this.

wraptile 7/2/2025||||
The country ruled by a "people's party" has almost no open source culture, while capitalism is leading the entire free software movement. I'm not sure what that says about our society and politics, but the absurdist in me has a good laugh every time I think about this :D
Aeolun 7/2/2025|||
There’s actually a lot of open source software made by Chinese people. The government just doesn’t fund it. Not directly anyway, but there’s a ton of Chinese companies that do.
amy214 7/3/2025|||
>There’s actually a lot of open source software made by Chinese people

Yeah exactly, there are also a lot of Chinese people out there, and statistically a large chunk are cool with it.

It's the same dynamic as with the US, really - other countries see the US government and think to themselves, "I don't like these US people, look at what their government did", meanwhile US people are like "what do you mean, I don't like what the government did either". That's what a lot of Chinese people are thinking (but not allowed to say; in China, criticizing the government is against their community guidelines)

hollerith 7/3/2025|||
>There’s actually a lot of open source software made by Chinese people.

If that is true and the software is any good, you should be able to name an open-source project that we've heard of started by people living in China.

DeepSeek released some models as open weights and some software for running the models. That's the only example I can think of.

Fade_Dance 7/4/2025|||
I've recently been exploring PKM/knowledge management programs, and the best open source one is a Chinese project - SiYuan.

I have a feeling that their collaborative hacker culture is more hardware oriented, which would be a natural extension from the tech zones where 500 companies are within a few miles of each other and engineers are rapidly popping in and out and prototyping parts sometimes within a day.

Anecdotally, I've dealt with Chinese collaborative community projects in the ThinkPad space, where they have come together to design custom motherboards to modernize old ThinkPads. Of course there was a lot of software work as well when it comes to BIOS code, Thunderbolt, etc. I remember thinking how watching that project develop was like peering into another world with a parallel hacker culture that just developed... differently.

Oh there's also a Chinese project that's going to modernize old Blackberries with 5G internals. Cool stuff!

wraptile 7/3/2025|||
China does have its own GitHub in gitee.com¹, which runs a fork of Gitea, but it's basically dead because it's impossible to have anything like GitHub under the current censorship apparatus. Here's the excerpt from the wiki:

> On 18 May 2022, Gitee announced all code will be manually reviewed before public availability.[4][5] Gitee did not specify a reason for the change, though there was widespread speculation it was ordered by the Chinese government amid increasing online censorship in China.[4][6]

1 - https://en.wikipedia.org/wiki/Gitee

jowea 7/3/2025||||
I won't pretend to be deeply familiar with China, but I think of two reasons: China doesn't take IP law seriously, so they can just copy, pirate whatever anyway. And the West has more wealthy idealistic techies with the free time for free software.
cmrdporcupine 7/2/2025|||
Capitalist countries (actually there are no other kinds of economies, in reality) are leading the open source software movement because it is a way for corporations to get software development services and products for free rather than paying for them. It's a way of lowering labour costs.

Highly paid software engineers working in a ZIRP economy with skyrocketing compensation packages were absolutely willing to play this game, because "open source" in that context often is/was a resume or portfolio building tool and companies were willing to pay some % of open source developers in order to lubricate the wheels of commerce.

That, I think, is going to change.

Free software, which I interpret as copyleft, is absolutely antithetical to them, and reviled precisely because it gets in the way of getting work for free/cheap and often gets in the way of making money.

jowea 7/3/2025|||
Copyleft isn't antithetical, see how many people are paid to work on the Linux kernel. I believe some other ecosystem software is also copylefted, like systemd.

And is building on top of the unpaid labour of SW engineers really a major part of the open source ecosystem? I feel open source is more a way for companies to cooperate in building shared software with less duplication of costs.

wraptile 7/3/2025|||
I disagree; the corporate open source is just half of the story. Much of the free software space is pushed by idealists who can afford to pursue their ideals due to the freedoms and finances provided by capitalist systems.
saubeidl 7/2/2025|||
I don't think the intent really matters once the thing is out in the open.

I want open source AI i can run myself without any creepy surveillance capitalist or state agency using it to slurp up my data.

Chinese companies are giving me that - I don't really care about what their grand plan is. Grand plans have a habit of not working out, but open source software is open source software nonetheless.

andsoitis 7/2/2025||
> I want open source AI i can run myself

What are you running?

> Chinese companies are giving me that

I have not become aware of anything other than DeepSeek. Can you recommend a few others that are worth looking into?

saubeidl 7/2/2025||
Alibaba's Qwen is pretty good, and it looks like Baidu just open sourced Ernie!
qwertox 7/2/2025||||
It's really hard to tell. If the current extreme trend of "What a great question!" and all the crap that forces one to put

  * Do not use emotional reinforcement (e.g., "Excellent," "Perfect," "Unfortunately").
  * Do not use metaphors or hyperbole (e.g., "smoking gun," "major turning point").
  * Do not express confidence or certainty in potential solutions.
into the instructions, so that it doesn't treat you like a child, teenager or narcissistic individual craving flattery, can really affect the mood and way of thinking of an individual, then those Chinese models might as well have baked in something similar but targeted at reducing the productivity of certain individuals or weakening their belief in western culture.

I am not saying they are doing that, but they could be doing it sometime down the road without us noticing.
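(For concreteness: instructions like these are typically supplied as a system prompt on every API call. A minimal sketch, assuming the OpenAI Python client; the model name and user message are illustrative:)

  from openai import OpenAI

  client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

  # Hypothetical anti-flattery instructions, mirroring the list above.
  SYSTEM_PROMPT = (
      "Do not use emotional reinforcement (e.g., 'Excellent,' 'Perfect,' 'Unfortunately'). "
      "Do not use metaphors or hyperbole (e.g., 'smoking gun,' 'major turning point'). "
      "Do not express confidence or certainty in potential solutions."
  )

  response = client.chat.completions.create(
      model="gpt-4o",  # illustrative model name
      messages=[
          {"role": "system", "content": SYSTEM_PROMPT},
          {"role": "user", "content": "Review this function for off-by-one errors."},
      ],
  )
  print(response.choices[0].message.content)

A provider could bake an equivalent prompt in server-side, invisibly, which is exactly the concern here.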

tim333 7/2/2025|||
One private company in China, funded by running a quant hedge fund. I'm not sure China as in Xi is good.
saubeidl 7/2/2025|||
Alibaba and Baidu both open source their models as well.
echelon 7/2/2025||
None of the big tech companies in China are releasing their frontier models anymore.

- Hunyuan Image 2.0 (200 millisecond flux) is not released

- Hunyuan 3D 2.5, the top performing 3D model and an order of magnitude improvement over 2.1, is not released

- Seedream Video, which outperforms Google Veo 3 on ELO rankings, is not released

- Qwen VLo, an instructive autoregressive model, is not released

kevinventullo 7/2/2025|||
I mean in some sense the Chinese domestic policy (“as in Xi”) made the conditions possible for companies like DeepSeek to rise up, via a multi-decade emphasis on STEM education and providing the right entrepreneurial conditions.

But yeah by analogy with the US, it’s not as if the W. Bush administration can be credited with the creation of Google.

Tepix 7/2/2025|||
Do we know if Meta will stick to its strategy of making weights available (which isn't open source to be clear) now that they have a new "superintelligence" subdivision?
eleveriven 7/2/2025|||
It's not ideal, but having major players accidentally propping up an open ecosystem is probably the best-case outcome we could've hoped for
add-sub-mul-div 7/1/2025|||
Your ability to use a lesser version of this AI on your own hardware will not save you from the myriad ways it will be used to prey on you.
simianwords 7/2/2025|||
Why not? Current open models are more capable than the best models from 6 months back. You have a choice to use a model that is 6 months old - if you still choose to use the closed version that’s on you.
jayd16 7/1/2025|||
And an inability to do so would not have saved you either.
casebash 7/1/2025|||
Oh, they're actually the bad guys, just folks haven't thought far enough ahead to realise it yet.
robocat 7/2/2025|||
> bad guys

You imply there are some good guys.

What company?

pyrale 7/2/2025|||
There are plenty of companies that don't immediately qualify as "the bad guys".

For instance, among all the tech companies I've interviewed with or have friends working at, some build and sell furniture. Some are your electricity provider or transporter. Some are building inventory management systems for hospitals and drug stores. Some develop a content management system for a medical dictionary. The list is long.

The overwhelming majority of companies are pretty harmless and ethically mundane. They may still get involved in bad practice, but that's not inherent to their business. The hot tech companies may be paying more (blood money if you ask me), but you have other options.

rTX5CMRXIfFG 7/2/2025||||
Depends. Does your definition of “good” mean “perfect”? If so, cynical remarks like “no one is good” would be totally correct.
saubeidl 7/2/2025||||
Signal, Proton, Ecosia, DuckDuckGo, Mastodon, Deepseek.
FrustratedMonky 7/2/2025||||
There are some less bad.

But, can't think of one off hand. Maybe Toys-R-Us? Ooops gone. Radio Shack? Ooops, also gone.

On the scale of Bad/Profit, Nice dies out.

bigiain 7/2/2025||||
Google circa 2005?

Twitter circa 2012?

In 2025? Nobody, I don't think. Even Mozilla is turning into the bad guys these days.

alternatex 7/2/2025|||
Signal, Mastodon
Zambyte 7/2/2025||
Bluesky, Kagi
bigiain 7/2/2025||
In my head at least, Bluesky are way closer to "the bad guys". I don't trust them at all; pretty sure that in spite of what they say, they're going to do the same sort of rug pull that Google did with their "don't be evil" assurances.
Zambyte 7/3/2025||
Funnily enough, I would actually flip it to say this about Kagi. With Bluesky, everything they have built is available to continue to be useful for people completely independent of what the folks over at Bluesky decide to do. There is no vendor lock in at all.

Kagi, on the other hand, has released none of their technology publicly, meaning they have full power to boil the frog, with no actual assurance that their technology will be useful regardless of their future actions.

jsrozner 7/2/2025||||
Google was bad the moment it chose its business model. See The Age of Surveillance Capitalism for details. Admittedly there was a nice period after it chose its model when it seemed good because it was building useful tools and hadn't yet accrued sufficient power / market share for its badness to manifest overtly as harm in the world.
HSO 7/2/2025|||
DeepSeek et al.

Obv

codedokode 7/2/2025|||
You are searching in the wrong place if you look for "good guys" among commercial companies.
plemer 7/1/2025||||
OK, lay it on us.
bn-l 7/1/2025|||
It’s not unreasonable given the mountain of evidence of their past behaviour to just assume they are always the “bad guy”.
bigyabai 7/2/2025||
I would normally agree, but we're in this instance talking about the company that made PyTorch and played an instrumental role in proliferating usable offline LLMs.

If you can make that algebra add up to "bad guy" then be my guest.

pickledoyster 7/2/2025||
It seems like you're claiming that Pytorch + an open-weight LLM > everything on this wiki page, especially the anchored section https://en.wikipedia.org/wiki/Facebook_content_management_co...
bigyabai 7/2/2025||
I am. I genuinely don't understand how Meta's LLM contributions have anything to do with Myanmar.

It's like telling an iPhone user that iCloud isn't trustworthy because of the Foxconn suicide nets. It's basically the definition of a non-sequitur.

scott_w 7/2/2025||||
Just read Careless People.
shakna 7/2/2025||||
I wouldn't call mass piracy [0], for their own competitive gain, a "good" act. Especially when it seems they knew they were doing the wrong thing - and knew that the copyright complaints have grounds.

> The problem is that people don’t realize that if we license one single book, we won’t be able to lean into fair use strategy.

[0] https://www.theatlantic.com/technology/archive/2025/03/libge...

TheRoque 7/2/2025||||
Come on... Is it still necessary to remind everyone how evil Meta is? The only reason they released "open source" models was to annoy the competition. Their latest stunt: https://futurism.com/meta-sketchy-training-ai-private-photos
fragmede 7/2/2025||
don't call them open source when they're not. They're shared models.
TheRoque 7/2/2025||
It's just what they call them... Hence the quotes.
cess11 7/2/2025||||
They're involved in genocide and enable near-global tyranny through their surveillance and manipulation. There are no excuses for working for or otherwise enabling them.
b112 7/1/2025|||
[flagged]
tombert 7/1/2025||
Well at least they're doing it For Great Justice then.
lawn 7/2/2025|||
This is an instance of bad guys fighting bad guys.
MaxPock 7/2/2025|||
[flagged]
sebmellen 7/2/2025|||
Meta's open source AI strategy really did predate the frontier Chinese model wave.
gsky 7/2/2025||||
Of course the CCP is the genuine one and never lies or does propaganda /s
chvid 7/2/2025|||
Yep. An “are we the baddies” moment for us in tech. Though it still doesn’t seem to have clicked for most…
imjonse 7/2/2025||
Wish it was only true in tech...
echelon 7/1/2025|||
Most of Meta's models have not been released as open source. Llama was a fluke, and it helps to commoditize your complement when you're not the market leader.

There is no good or open AI company of scale yet, and there may never be.

A few that contribute to the commons are DeepSeek and Black Forest Labs. But they don't have the same breadth and budget as the hyperscalers.

_aavaa_ 7/2/2025|||
Llama is not open source. It is at best weights available. The license explicitly limits what kind of things you are allowed to use the outputs of the models for.
jacquesm 7/2/2025|||
Which, given what it was trained on, is utterly ridiculous.
Grimblewald 7/2/2025||
Yup, but that being said, Llama is GPLv3 whether Meta likes it or not. Same as ChatGPT and all the others. ALL of them can perfectly reproduce GPLv3-licensed works and data, making them derivative works, and the license is quite clear on that matter. In fact, up until recently you could get ChatGPT to info-dump all sorts of things with that argument, but now when you try you will hit a network error, and afterwards it seems something breaks and it goes back to parroting a script about how it's under a proprietary license.
Iolaum 7/2/2025|||
This is interesting but it has not been proven in court, right?
Grimblewald 7/12/2025||
Related stuff has, the core part being that if your model reproduces part or all of a licensed work, it needs to comply with the license/copyright. Otherwise, why aren't pirates just making "models" that generate protected material, or music, and completely bypassing all laws?

I know because I wanted to, as a form of protest/performance art, train a model on a few Disney movies and publicly distribute it, but legal advice was that this would put me directly into hot water, not just because of who I'm pissing off (which I knew and was comfortable with) but also because there was precedent (i.e. newspapers suing LLM providers).

It would be an open and shut case that would leave me in financial ruin.

The reason OpenAI hasn't been struck with this yet is, who has the time? And there isn't much to learn from all that either. Most open source tooling outcompetes OpenAI's offering as is, so the community wouldn't really win beyond punishing someone.

_aavaa_ 7/3/2025|||
I don't see how this follows at all. GitHub isn't GPLv3 just because it stores and gives you back GPLv3 code.
Grimblewald 7/12/2025||
read the license, and look up what derivative work means. If you're still unclear after that I'm happy to walk you through it.
birn559 7/2/2025|||
Is that easier to enforce than having AI only trained in a legal way (=obeying licenses / copyright law)?
_aavaa_ 7/2/2025||
Yes. Having training obey copyright is a big coordination problem that requires copyright holders to group together to sue Meta (and prove they broke copyright, which is not something previously proven for LLMs).

Whereas Meta suing you into radioactive rubble is straightforward.

phyrex 7/1/2025||||
That's not true; the Llama that's open source is pretty much exactly what's used internally.
saubeidl 7/2/2025|||
> There is no good or open AI company of scale yet, and there may never be.

Deepseek, Baidu.

lynx97 7/2/2025||
[flagged]
tomhow 7/2/2025||
> This is hopelessly naive

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

https://news.ycombinator.com/newsguidelines.html

lynx97 7/2/2025||
I agree, nobody should call anyone an idiot. However, naivety isn't a slur, it's a personality trait.
tomhow 7/2/2025||
It's not ok here. The comment loses nothing if that sentence is removed.
neilv 7/1/2025||
Can someone make an honest argument for how OpenAI staff are missionaries, after the coup?

I'd be very happy to be convinced that supporting the coup was the right move for true-believer missionaries.

(Edit: It's an honest and obvious question, and I think that the joke responses risk burying or discouraging honest answers.)

ponector 7/1/2025||
That is just an act of a corpo CEO bullshitting employees and the press about high moral standards, mission, etc. Don't trust any of his words.
sitkack 7/1/2025|||
Anytime someone tells you to be in it for the mission, you are expendable and underpaid.
chaosharmonic 7/2/2025|||
I don't at all disagree with you, but at the kind of money you'd be making at an org like OAI, it's easy to envision there being a ceiling, past which the additional financial compensation doesn't necessarily matter that much.

The problem with the argument is that most places saying this are paying more like a sub-basement, not that there can't genuinely be more important things.

That said, Sam Altman is also a guy who stuck nondisparagement terms into their equity agreement... and in that same vein, framing poaching as "someone has broken into our home" reads like cult language.

ryandrake 7/2/2025||
We shouldn't even be using the offensive word "poaching." As an employee, I am not a deer or wild boar owned by a feudal lord just because I work for his company. And another employer isn't some thief stealing me away. I have agency and control over who I enter into an employment arrangement with!
chaosharmonic 7/2/2025|||
I don't disagree with this either -- it's very clearly just a free market working both ways.

It also immediately reminds me of the no-call agreements companies had with each other 10 or 15 years ago.

jaza 7/2/2025||||
So then, is "headhunting" more or less bad?
ryandrake 7/2/2025||
I think anything that evokes “hunting on someone else’s land for his property” is equally inappropriate.
jack_riminton 7/2/2025|||
Would "bought" be better then? implies slavery!
vasco 7/2/2025|||
There's a word for this: it's called being hired.
saubeidl 7/2/2025|||
"Making a more competitive offer"
ponector 7/1/2025||||
Those could be genuine words. The mission is to be expendable and make them rich.

Don't forget about the mission during the next round of layoffs and record-high quarterly profits.

vram22 7/2/2025||
Totally agree.

Well said.

Man, you are on a mission, to enable manumission!

https://en.m.wikipedia.org/wiki/Manumission

DanielHB 7/2/2025||||
Crazy that this proves that engineers making >1 million USD/year can still be underpaid
pm90 7/2/2025||
Yes Capitalism is an amazing thing
mbac32768 7/2/2025||||
Could Facebook hire away OpenAI people just by matching their comp? Doubtful. Facebook is widely hated and embarrassing to work at. Facebook has to offer significantly more.

And if someone at OpenAI says hey Facebook just offered me more money to jump ship, that's when OpenAI says "Sorry to hear, best of luck. Seeya!"

In this scenario, you're only underpaid by staying at OpenAI if you have no sense of shame.

lonesword 7/2/2025|||
> Facebook is widely hated and embarrassing to work at.

Not sure it's widely hated (disclaimer: I work there), despite all the bad press. The vast majority of people I meet respond with "oh how cool!" when they hear that someone works for the company that owns Instagram.

"Embarassing to work at" - I can count on one hand the number of developers I've met who would refuse to work for Meta out of principle. They are there, but they are rarer than HN likes to believe. Most devs I know associate a FAANG job with competence (correctly or incorrectly).

> Could Facebook hire away OpenAI people just by matching their comp?

My guess is some people might value Meta's RSUs, which are very liquid, more highly than OAI's illiquid stock? I have no clue how equity compensation works at OAI.

pm90 7/2/2025|||
Within my (admittedly limited) social circle of engineers/developers there is consensus that working at Facebook is pretty taboo. I’ve personally asked recruiters to not bother.
999900000999 7/2/2025|||
Honestly I’d be happy to work at any FAANG. Early FB in particular was great in terms of keeping up with friends.

I’ve only interviewed with Meta once and failed during a final interview. Aside from online dating and defense I don’t have any moral qualms regarding employment.

My dream in my younger days was to hit 500k TC and retire by 40. Too late now.

dmoy 7/2/2025||
> defense

By defense do you mean like weapons development, or do you mean the entire DoD-and-related contractor system, including, like, tiny SBIR-chasing companies researching things like, uh

"Multi-Agent Debloating Environment to Increase Robustness in Applications"

https://www.sbir.gov/awards/211845

Which was totally not named in a backronym-gymnastics way of remembering the lead researcher's last vacation destination or hometown or anything, probably.

999900000999 7/2/2025||
I'm trying to avoid anything primarily DoD related.

I guess I'd be ok with getting a job at Atlassian even if some DoD units use Jira.

I don't have anything against anyone who works on DoD projects, it's just not something I'm comfortable with.

scarface_74 7/2/2025||||
> Doubtful. Facebook is widely hated and embarrassing to work at. Facebook has to offer significantly more.

I’m at a point in my career and life at 51 that I wouldn’t work for any BigTech company (again) even if I made twice what I make now. Not that I ever struck it rich. But I’m doing okay. Yes, I’ve turned down overtures from GCP, Azure, etc.

But I did work remotely for AWS (ProServe) from the time I was 46 to 49, knowing going in that it was a toxic shit show, both for the money and for the niche I wanted to pivot to (cloud consulting). I knew it would open doors, and it has.

If I were younger and still focused on money instead of skating my way to retirement working remotely, doing the digital nomad thing off and on, etc., I would have no moral qualms about grinding leetcode and exchanging my labor for as much money as possible at Meta. No one is out here feeding starving children or making the world a better place working for a for-profit company.

My “mission” would be to exchange my labor for as much money as possible, and I tell all of the younger grads the same thing.

simianwords 7/2/2025|||
I wonder what it is that Facebook offered? It can’t just be money, so I think it’s more responsibility or freedom. Or they had some secret breakthroughs?
Palmik 7/2/2025||
It's money. It's also a fresh, small org and a new project, which is exciting for a variety of reasons.
simianwords 7/2/2025||
I can't explain why, but I don't think money is it. Nor can a new project or whatever be it either. It's just too small of a value proposition when you are already at OpenAI making banger models used by the world.
pm90 7/2/2025||
According to reports, the comp packages were in the hundreds of millions of dollars. I doubt anyone but execs are making that kind of money at OpenAI; it’s the sort of money you hope for from a successful exit after years of effort. I don’t blame them for jumping ship.
blitzar 7/2/2025||||
I need you to be a team player on this one.
javcasas 7/2/2025|||
And will be fired/thrown under the bus the moment firing you is barely more profitable for the CxO than having you around.
casualscience 7/1/2025||||
yeah, I used to work in the medical tech space. They love to tell you how much you should be in it for the mission, and that's why your pay is 1/3 what you could make at FAANG... of course, when it came to our sick customers, they needed to pay market rates.
noname120 7/1/2025|||
Yes, especially not his
KaiserPro 7/2/2025|||
There are a couple of ways to read the "coup" saga.

1) Altman was trying to raise cash so that OpenAI would be the first, best, and last to get AGI. That required structural changes before major investors would put in the cash.

2) Altman was trying to raise cash and saw an opportunity to make loads of money

3) Altman isn't the smartest cookie in the jar, and was persuaded by potential/current investors that changing the corp structure was the only way forward.

Now, what were the board's concerns?

The publicly stated reason was a lack of transparency. Now, to you and me, that sounds a lot like lying. But where did it occur, and what was it about? Was it about the reasons for the restructure? Was it about the safeguards that were offered?

The answer to the above shapes the reaction I feel I would have as a missionary.

If you're a missionary, then you would believe that the corp structure of OpenAI was the key thing that stops it from pursuing "damaging" tactics. Allowing investors to dictate oversight rules undermines that significantly, and allows short-term gain to come before long-term/short-term safety.

However, I was bought out by a FAANG, one I swore I'd never work for, because they are industrial-grade shits. Yet here I am many years later, having profited considerably from working at said FAANG. Turns out I have a price, and it wasn't that much.

pj_mukh 7/1/2025|||
Honest answer*:

I think building super intelligence for the company that owns and will deploy the super intelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing, save maybe OpenAI's defense contract, which I have no details about.

Meta will try to buoy this by open-sourcing it, which, good for them, but I don't think it's enough. If Meta wants to save itself, it should re-align its business model away from the feeds.

In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.

*Because I don't have an emotional connection to OpenAI's corporate structure changing away from being a non-profit.

makeitdouble 7/1/2025|||
As a thought exercise, OpenAI can partner to apply the technology to:

- online gambling

- kids gambling

- algorithmic advertising

Are these any better ? All of these are of course money wells and a logical move for a for-profit IMHO.

And they can of course also integrate into a Meta competitor's algorithmic feeds as well, putting them at the same level as Meta in that regard.

All in all, I'm not seeing them having any moral high ground, even purely hypothetically.

pj_mukh 7/1/2025||
Wait if an online gambling company uses OpenAI API then hosts it all on AWS, somehow OpenAI is more morally culpable than AWS? Why?
makeitdouble 7/1/2025||
I saw the discussion as whether OpenAI is on a better moral ground than Meta, so this was my angle.

On where the moral burden lies in your example, I'd argue we should follow the money and see what has the most impact on that online gambling company's bottom line.

Inherently that could have the most impact on what happens when that company succeeds: if those become OpenAI's biggest clients, it wouldn't be surprising if they put more and more weight on being well suited to online gambling companies.

Does AWS get especially impacted by hosting online gambling services? I honestly don't expect them to, not more than community sites or concert ticket sellers.

pj_mukh 7/1/2025||
There is no world in which online gambling beats other back-office automation in pure revenue terms. I'm comfortable saying that OpenAI would probably have to spend more money policing to make sure their APIs aren't used by gambling companies than they'd make off of them. Either way, these are all imagined horrors, so it is difficult to judge.

I am judging the two companies for what they are, not what they could be. And as it is, there is no more damaging technology than Meta's various algorithmic feeds.

makeitdouble 7/2/2025||
> There is no world in which online gambling beats other back-office automation in pure revenue terms.

Apple's revenue comes massively from in-app purchases, which are mainly games, and online betting has also entered the picture. We had Tim Cook on the stand explaining that they need that money and can't let Epic open that gate.

I think we're already there in some form or another; the question would be whether OpenAI has any angle for touching that pie (I'd argue no, but they have talented people).

> I am judging the two companies for what they are, not what they could be

Thing is, AI is mostly nothing right now. We're only discussing it because of its potential.

pj_mukh 7/2/2025||
My point exactly. The App Store has no play in back-office automation, so the comparison doesn’t make sense. AFAICT, OpenAI is already making billions on back-office automation. I just came from a doctor’s visit where she was using some medical-grade ChatGPT wrapper to transcribe my medical conversation; meanwhile, I fight with Instagram for the attention of my family members.

AI is already here [1]. Could there be better owners of super intelligence? Sure. Is OpenAI better than Meta? 100%.

[1] https://www.cnbc.com/amp/2025/06/09/openai-hits-10-billion-i...

eli_gottlieb 7/2/2025||||
If you have "superintelligence" and it's used to fine-tune a corporate product that preexisted it, you don't have superintelligence.
svara 7/2/2025||||
> I think building super intelligence for the company that owns and will deploy the super intelligence in service of tech's original sin (the algorithmic feed) is a 100x worse than whatever OpenAI is doing,

OpenAI announced in April they'd build a social network.

I think at this point it barely matters who does it; the ways in which you can make huge amounts of money from this are limited, and all the major players are going to make a dash for it.

pj_mukh 7/2/2025||
Like I told another commenter, "I am judging the two companies for what they are, not what they could be."

I'm sure Sam Altman wants OpenAI to do everything, but I'm betting most of the projects will die on the vine. Social networks especially, and no one's better than Meta at manipulating feeds to juice their social networks.

epolanski 7/2/2025|||
> In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.

There ain't no missionaries; they're all doing it for the money and will apply it to anything that will turn dollars.

actionfromafar 7/1/2025|||
An honest argument is that cults often have missionaries.
kenjackson 7/1/2025|||
I'm not very informed about the coup -- but doesn't it just depend on what side most of the employees sat/sit on? I don't know how much of the coup was just egos or really an argument about philosophy that the rank and file care about. But I think this would be the argument.
kevindamm 7/1/2025||
There was a petition with a startlingly high percentage of employees signing it, but no telling how many of them felt pressured to sign to keep their jobs.
Analemma_ 7/1/2025|||
The thing where dozens of them simultaneously posted “OpenAI is nothing without its people” on Twitter during the coup was so creepy, like actual Jonestown vibes. In an environment like that, there’s no way there wasn’t immense pressure to fall into line.
lcnPylGDnU4H9OF 7/1/2025||
That seems like kind of an uncharitable take when it can otherwise be explained as collective political action. I’d see the point if it were some repeated ritual but if they just posted something on Twitter one time then it sounds more like an attempt to speak more loudly with a collective voice.
jacquesm 7/2/2025|||
They didn't need pressuring. There was enough money at risk without Sam that they did what they thought was the best way to protect their nest eggs.
kevindamm 7/9/2025||
That was actually the kind of pressure I was thinking of, not social/managerial pressure, though I think either could apply in that situation, depending on the individual.
_Algernon_ 7/2/2025|||
Altman has to be the most transparently two-faced tech CEO there is. I don't understand why people still lap up his bullshit.
Foobar8568 7/2/2025|||
Money.
_Algernon_ 7/2/2025||
What money is in it for the "rationalist" AI doom crowd that builds up the narrative Altman wants for free?
mitthrowaway2 7/2/2025||
Suggesting that the AI doom crowd is building up a narrative for Altman is sort of like saying the hippies protesting nuclear weapons are in bed with the arms makers because they're hyping up the destructive potential of hydrogen bombs.
_Algernon_ 7/2/2025|||
That analogy falls flat. For one, we have seen the destructive power of hydrogen bombs through nuclear tests. Nuclear bombs are a proven, real threat that exists now. AGI is the boogeyman under the bed that somehow ends up never being there when you look for it.
bigyabai 7/2/2025|||
It's a real negotiating tactic: https://en.wikipedia.org/wiki/Brinkmanship

If you convince people that AGI is dangerous to humanity and inevitable, then you can force people to agree to outrageous, unnecessary investments to reach the perceived goal first. This is exactly what happened during the Cold War, when Congress was thrown into hysterics by estimates of Soviet ballistic missile numbers: https://en.wikipedia.org/wiki/Missile_gap

mitthrowaway2 7/2/2025||
Chief AI doomer Eliezer Yudkowsky's latest book on this subject is literally called "If Anyone Builds it, Everyone Dies". I don't think he's secretly trying to persuade people to make investments to reach this goal first.
bigyabai 7/2/2025|||
He absolutely is. Again, refer to the nuclear bomb and the unconscionable capital that was invested as a result of early successes in nuclear tests.

That was an actual weapon capable of killing millions of people in the blink of an eye. Countries raced to get one so fast that it was practically a nuclear Preakness Stakes for a few decades there. By casting AI as a doomsday weapon, you are necessarily begging governments to attain it before terrorists do. Which is a facetious argument when AI has yet to prove it could kill a single person by generating text.

JoshTriplett 7/2/2025|||
> He absolutely is.

When people explicitly say "do not build this, nobody should build this, under no circumstances build this, slow down and stop, nobody knows how to get this right yet", it's rather a stretch to assume they must mean the exact opposite, "oh, you should absolutely hurry be the first one to build this".

> By collating AI as a doomsday weapon, you necessarily are begging governments to attain it before terrorists do.

False. This is not a bomb where you can choose where it goes off. The literal title of the book is "if anyone builds it, everyone dies". It takes a willful misinterpretation to imagine that that means "if the right people build it, only the wrong people die".

If you want to claim that the book is incorrect, by all means attempt to refute it. But don't claim it says the literal opposite of what it says.

neilv 7/9/2025||
Though there still is the problem of telling a child at an early level of cognitive development not to do something, which virtually guarantees that they will try to do it.

One of my favorite Tweets:

https://x.com/AlexBlechman/status/1457842724128833538

> Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

> Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus

mitthrowaway2 7/2/2025|||
Edward Teller worried about the possibility that the Trinity nuclear test might start a chain reaction with the nitrogen in the Earth's atmosphere, enveloping the entire planet in a nuclear fireball that destroyed the whole world and all humans along with it. Even though this would have meant that the bomb would have had approximately a billion times more destructive power than advertised, and made it far more of a doomsday weapon, I think it would also not have been an appealing message to the White House. And I don't think that realization made anyone feel it was more urgent to be the first to develop a nuclear bomb. Instead, it became extremely urgent to prove (in advance of the first test!) that such a chain reaction would not happen.

I think this is a pretty close analogy to Eliezer Yudkowsky's view, and I just don't see how there's any way to read him as urging anyone to build AGI before anyone else does.

staticman2 7/2/2025|||
The grandparent asked what money was in it for rationalists.

You're saying an AI researcher selling AI Doom books can't be profiting off hype about AI?

dwohnitmok 7/2/2025|||
This reminds me a lot of climate skeptics pointing out that climate researchers stand to make money off books about climate change.

Selling AI doom books nets considerably less money than actually working on AI (easily an order of magnitude or two). Whatever hangups I have with Yudkowsky, I'm very confident he's not doing it for the money (or even prestige; being an AI thought leader at a lab gives you a built-in audience).

bigyabai 7/2/2025|||
The inverse is true, though - climate skeptics are oftentimes paid by the (very rich) petrol lobby to espouse skepticism. It's not an asinine attack, just an insecure one from an audience that also overwhelmingly accepts money in exchange for astroturfing opinions. The clear fallacy in their polemic is that ad-hominem attacks aren't addressing the point people care about. It's a distraction from global warming, which is the petrol lobby's end goal.

Yudkowsky's rhetoric is sabotaged by his ridiculous forecasts, which present zero supporting evidence for his claims. It's the same broken shtick as Cory Doctorow or Vitalik Buterin - grandiose observations that resemble fiction more than reality. He could scare people if he demonstrated causal proof that any of his claims are even possible. Instead he uses this detachment to create nonexistent boogeymen for his foreign policy commentary that would make Tom Clancy blush.

mitthrowaway2 7/2/2025||
What sort of unsupported ridiculous forecast do you mean? Can you point to one?
staticman2 7/2/2025||
I'm not the grandparent but the more interesting question is what could possibly constitute "supporting evidence" for an AI Doom scenario.

Depending on your viewpoint this could range from "a really compelling analogy" to "A live demonstration akin to the trinity nuclear test."

mbourgon 7/2/2025|||
FWIW, in the case of Eliezer's book, there's a good chance that at the end of the day, when we account for all the related expenses, it makes very little net profit, and might even be unprofitable on net (which is totally fine, since the motivation for writing the book isn't making money).
Grimblewald 7/2/2025||||
Dumb people need symbols. Same reason Elon gets worship.
blitzar 7/2/2025||||
"He looks like such a nice young man"
dkdbejwi383 7/2/2025||||
People buy into the BS and are terrified of missing out or being left behind.
bigyabai 7/2/2025|||
Tim Cook is right there. If I say "Vision Pro" I'll probably get downvoted out of a mere desire to not talk about that little excursion.
NiloCK 7/2/2025||
The Vision Pro flopped, but I don't see the connection to two-faced-ness. Help?
bigyabai 7/2/2025||
The "this is our best product yet" to "this is an absolute flop" pipeline has forced HN into absolute denial over the "innovation" their favorite company is capable of.
m463 7/1/2025|||
Missionary (from wikipedia):

A missionary is a member of a religious group who is sent into an area in order to promote its faith or provide services to people, such as education, literacy, social justice, health care, and economic development. - https://en.wikipedia.org/wiki/Missionary

Post-coup, they are both for-profit entities.

So the difference seems to be that when Meta releases its models (like bibles), it is promoting its faith more openly than OpenAI, which interposes itself as an intermediary.

ASalazarMX 7/1/2025|||
I'd bet 100 quatloos that your comment will not have honest arguments below. You can't nurture missionaries in an exploitative environment.
CamperBob2 7/1/2025|||
Not to mention, missionaries are exploitative. They're trying to harvest souls for God or (failing the appearance of God to accept their bounty) to expand the influence of their earthbound church.

The end result of missionary activity is often something like https://www.theguardian.com/world/video/2014/feb/25/us-evang... .

Bottom line, "But... but I'm like a missionary!" isn't my go-to argument when I'm trying to convince people that my own motives are purer than my rival's.

BoorishBears 7/2/2025||
There's one slightly more common outcome of your so-called "missionary activities".
TylerE 7/1/2025|||
Eh? Plenty of cults, like Jehovah's Witnesses, are exploitative as hell.
delfinom 7/1/2025|||
This is just a CEO gaslighting his employees to "think of the mission" instead of paying up

No different than "we are a family"

buremba 7/1/2025|||
But “we are family”
optimalsolver 7/2/2025||
I got all my sisters with me.
logsr 7/1/2025|||
> “I have never been more confident in our research roadmap,” he wrote. “We are making an unprecedented bet on compute, but I love that we are doing it and I'm confident we will make good use of it. Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems.”

tl;dr: knife fights in the hallways over the remaining lifeboats.

throwawayq3423 7/1/2025|||
100% agree. You are hearing the dictator claim righteousness.
8note 7/4/2025|||
yeah... didn't the missionaries all leave after the coup? And aren't the folks who remain the mercenaries looking for the big stock win after sama figures out a way to be acquired or IPO?

all the chatter here at least was that the OpenAI folks were sticking around because they were looking for a big payout

econ 7/1/2025||
[flagged]
mc32 7/1/2025|||
They didn't mean it as a pun, but understanding it as a pun helps understand the situation.

In religions, missionaries are those people who spread the word of god (gospel) as their mission in life for a reward in the afterlife. Obviously, mercenaries are paid armies who are in it for the money and any other spoils of war (sex, goods, landholdings, etc.)

So I guess he's trying to frame it as them being missionaries for an Open and accepting and free Artificial Intelligence and framing Meta as the guys who are only in it for the money and other less savory reasons. Obviously, only true disciples would believe such framing.

jjtheblunt 7/1/2025||||
English is my first language: they mean that Sam Altman's people are preaching a righteous future for AI, or something vague like that.
riffic 7/1/2025|||
Close. A missionary is what the sex position was named after.
CamperBob2 7/1/2025||
Specifically, Catholic missionaries indoctrinating indigenous cultures into their church's imaginary sexual hangups. All other positions were considered sinful.

Again, not a label I'd self-apply if I wanted to take the high road.

throwaway31131 7/2/2025||
What goes around comes around...

From March of this year,

"As we know, big tech companies like Google, Apple, and Amazon have been engaged in a fierce battle for the best tech talent, but OpenAI is now the one to watch. They have been on a poaching spree, attracting top talent from Google and other industry leaders to build their incredible team of employees and leaders."

https://www.leadgenius.com/resources/how-openai-poached-top-...

not2b 7/2/2025|
We shouldn't use the word "poaching" in this way. Poaching is the illegal hunting of protected wildlife. Employees are not the property of their employers, and they are free to accept a better offer. And perhaps companies need to revisit their compensation practices, which often mean that the only way for an employee to get a significant raise is to change companies.
roguecoder 7/2/2025||
Indeed! It would be illegal _not_ to poach employees: https://www.goodspeedmerrill.com/blog/2023/12/what-is-a-no-p...
jacquesm 7/2/2025||
Sam vs Zuck... tough choice. I'm rooting for neither. Sam is cleverly using words here to make it seem like OpenAI are 'the good guys' but the truth is that they're just as nasty and power/money hungry as the rest.
fluidcruft 7/2/2025||
Sam Altman literally casts himself as a god, apparently, and that's somehow to be taken as an indictment of his rivals. Maybe it's my GenX speaking, but that's CEO bubblespeak for "OpenAI is fucked, abandon ship".
jeremyjh 7/2/2025|||
And thus far, considerably less “open”.
sidcool 7/2/2025||
Strictly between the two, I'd go with Sam
paxys 7/1/2025||
Pretty telling that OpenAI only now feels like it has to reevaluate compensation for researchers while just weeks ago it spent $6.5 billion to hire Jony Ive. Maybe he can build your superintelligence for you.
JKCalhoun 7/2/2025||
Poachers don't like poachers. We all remember the secret and illegal anti-poaching agreement between Adobe, Apple, Intel, Intuit, Google and Pixar.
ALLTaken 7/1/2025||
[flagged]
subarctic 7/1/2025|||
Just looked it up; looks like they bought or merged with a company he worked at or owned part of, at a valuation of $6.5 billion. Not sure about the details, e.g. how much of that he gets.
SequoiaHope 7/1/2025|||
https://duckduckgo.com/?q=ive+openai
aspenmayer 7/1/2025||
https://en.wikipedia.org/wiki/Io_(company)

https://www.nytimes.com/2025/05/21/technology/openai-jony-iv... ( https://archive.is/2025.05.26-084513/https://www.nytimes.com... )

bluecalm 7/1/2025||
Do I "poach" a stock when I offer more money for it than the last transaction value? "Poaching" employees is just price discovery by market forces. Sounds healthy to me. Meta is being the good guys for once.
jimmywetnips 7/1/2025||
[flagged]
nativeit 7/1/2025|||
The elderly couple showed up with baseball bats?
Freedom2 7/1/2025||||
Sounds like some tariffs should be applied as well, considering there's now a trade imbalance!
datavirtue 7/1/2025|||
You must be new here. No joking allowed.
ahartmetz 7/2/2025|||
AFAIU, that is basically true? Isn't it in the guidelines somewhere? Sarcasm or (exclusive-or!) really good humor get a pass in practice.
aspenmayer 7/2/2025||
I think it’s a matter of style or finesse. If you can make it look good, even breaking the rules is socially acceptable, because a higher order desire is to create conditions in individuals where they break unjust rules when the greater injustice would be to censor yourself to comply with the rules in a specific case.

Good artists copy, great artists steal.

Good rule followers follow the rules all the time. Great rule followers break the rules in rare isolated instances to point at the importance of internalizing the spirit that the rules embody, which buttresses the rules with an implicit rule to not follow the rules blindly, but intentionally, and if they must be broken, to do so with care.

> I have spread my dreams under your feet;

> Tread softly because you tread on my dreams.

https://en.wikipedia.org/wiki/Aedh_Wishes_for_the_Cloths_of_...

datavirtue 7/1/2025|||
See.
aspenmayer 7/1/2025||
I can fully believe one can be funny in a way that isn’t validated or understood, or even perceived as humorous. I’m not sure HN is a good bellwether for comedic potential.
absurdo 7/2/2025||
If you don’t adhere to the guidelines we’ll send mean and angry emails to dang.
aspenmayer 7/2/2025||
> If you don’t adhere to the guidelines we’ll send mean and angry emails to dang.

That’s so weird, you’re on! That makes two of us! When I don’t adhere to the guidelines, I also send mean and angry emails to dang. Apologies in advance, dang.

noisy_boy 7/2/2025||

    s/good guys/willing to pay/
thih9 7/2/2025||
What I hear is: “The person that profits from employees who don’t prioritize money encourages employees to not prioritize money.”

Unsurprising, unhelpful for anyone other than sama, unhealthy for many.

BoorishBears 7/2/2025|
This concept is not at all tied to trying to depress salaries and goes back decades: https://knowledge.wharton.upenn.edu/article/mercenaries-vs-m...

I don't imagine Sam Altman said this because he thinks it'll somehow save him money on salaries down the line.

rightbyte 7/2/2025|||
The 2000 article seems to refer to prospective capitalists, and Altman refers to workers.

I don't think the context is the same. In the context of Altman, he wants 'losers'.

sumeno 7/1/2025||
Does he have the same conviction when people from other companies decide to join OpenAI?
FirmwareBurner 7/1/2025||
It's only bad when other people do it.
latexr 7/2/2025||
It’s only bad when other people do it to him.
rchaud 7/1/2025||
"Apostates who turned to darkness" vs "Converts who saw the light".
a4isms 7/2/2025||
There can be no peace until they renounce their Rabbit God and accept our Duck God!

https://norberthaupt.com/2015/11/22/the-rabbit-god-and-the-d...

ipsum2 7/1/2025|
The game-theoretic aspect of this is quite interesting. If Meta makes OpenAI's model improvements open source, then every poached employee will be worth significantly less as time goes on. That means it's in the employees' best interest to leave first, if their goal is to maximize their income.
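
A minimal sketch of that first-mover dynamic, assuming (purely hypothetically) that each open-sourced release halves the knowledge premium the next defector can command; all names and numbers below are illustrative, not reported figures:

    # Toy model of the incentive above: the premium Meta pays for
    # insider know-how decays once earlier hires' knowledge is
    # open-sourced. BASE_COMP, PREMIUM, and DECAY are made up.
    BASE_COMP = 5.0   # hypothetical baseline package, $M/yr
    PREMIUM = 20.0    # hypothetical premium for exclusive know-how
    DECAY = 0.5       # fraction of premium left per prior defection

    def offer(prior_defections: int) -> float:
        """Package for the next defector to leave, in $M/yr."""
        return BASE_COMP + PREMIUM * DECAY ** prior_defections

    for n in range(5):
        print(f"defector #{n + 1}: ${offer(n):.1f}M/yr")
    # defector #1: $25.0M/yr ... defector #5: $6.2M/yr

Under those assumptions, waiting strictly lowers the payoff, so each employee's dominant strategy is to defect before the others do.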
bilbo0s 7/1/2025||
Open source could also be a bait and switch.

i.e., Zuck has no intention of continuing to open up the models he creates. Thus, he knows he can spend the money to get the talent, because he has every intention of making it back.

HardCodedBias 7/1/2025||
Zuck has the best or the second-best distribution on the planet.

If he neutralizes the tech advantage of other companies his chances of winning rise.

thephyber 7/2/2025||
How well was Zuck able to use his massive distribution channels to win in his cryptocurrency project, or the Metaverse after that?

Meta has become too fickle with new projects. To the extent that Llama can help them improve their core business, they should drive that initiative. But if they get sidetracked on trying to build “AI friends” for all of their human users, they are just creating another “solution in search of a problem”.

I hope both Altman and Zuck become irrelevant. Neither seems particularly worthy of the power they have gained, and neither is willing to show a spine in the face of government coercion.

jekwoooooe 7/1/2025||
Allegedly they were offered $100M just in the first year. I think they will be fine.
paxys 7/1/2025|||
That was immediately proven to be false, both by Meta leadership and the poached researchers themselves. Sam Altman just pulled the number out of his ass in an interview.
ipsum2 7/1/2025|||
That's my point. The ones that left early got a large sum of money. The ones that leave later will get less. That would incentivize people to be the first to leave.
More comments...