
Posted by spenvo 7/1/2025

Sam Altman Slams Meta’s AI Talent Poaching: 'Missionaries Will Beat Mercenaries' (www.wired.com)
344 points | 699 comments | page 2
techterrier 7/2/2025|
People taking my money in exchange for doing a thing - Missionaries
People taking someone else's money in exchange for doing a thing - Mercenaries

got it

pizzafeelsright 7/2/2025|
Missionaries are sent, called by faith.

Mercs don't take money, they earn it.

Why not be both?

moonshadow565 7/3/2025||
Holy order
lanthissa 7/2/2025||
there has yet to be a value OpenAI originally claimed to have that lasted a second longer than the profit motive to break it.

they went from open to closed. they went from advocating UBI to going for-profit. they went from pacifist to selling defense tech. they went from a council overseeing the project to a single man in control.

and that's fine, go make all the money you can, but don't try to do this sick act where you try to convince people to thank you for acting in your own self-interest.

andsoitis 7/1/2025||
> OpenAI is the only answer for those looking to build artificial general intelligence

Let’s assume for a moment that OpenAI is the only company that can build AGI (a specious claim). Then the question I would have for Sam Altman: what is OpenAI’s plan once that milestone is reached, given his other argument:

> And maybe more importantly than that, we actually care about building AGI in a good way,” he added. “Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be.

If building AGI is OpenAI’s only goal (unlike other companies), will OpenAI cease to exist once the mission is accomplished, or will a new mission be devised?

darth_avocado 7/1/2025||
OpenAI’s only goal isn’t building AGI. It is to build it first and make money off it.
a_bonobo 7/1/2025|||
Exactly! The Microsoft-OpenAI agreement states that AGI is whatever makes them $100 billion in profits. Nothing in there about anything intelligence-related.

>The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits.

https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...

cma 7/1/2025||||
The profit cap was supposed to be there because the first to achieve AGI would be the endgame, and it would ensure redistribution (though apparently with some kind of Altman tax through an early Worldcoin ownership stake). When they realized they wouldn't reach AGI with current funding, and they were so close to a $100 billion market cap that they couldn't entice new investors with $100 billion in profits, why didn't they set it to, say, $10 trillion instead of infinity? Because they are missionaries?

A leaked email from Ilya early on even said they never planned to open-source stuff long term; it was just to entice researchers at the beginning.

The whole company is founded on lies, and Altman was even fired from YC over self-dealing or something, in what I think was a since-deleted YC blog post, if I remember right.

DebtDeflation 7/2/2025||||
And in the meantime, their goal is clearly to make money off non-AGI AI.

I constantly get quasi-religious vibes from the current AI "leaders" (Altman, Amodei, and quite a few of the people who have left both companies to start their own). I never got those sort of vibes from Hinton, LeCun, or Bengio. The latest crop really does seem to believe that they're building some sort of "god" and that their god getting built first before one of their competitors builds a false god is paramount (in the literal meaning of the term) for the future of the human race.

zeryx 7/1/2025||||
Is there even a point to money post-AGI?
rchaud 7/1/2025|||
Something tells me food and water supplies, weapons and private security forces aren't going to be paid for in OAI compute credits.
eli_gottlieb 7/2/2025||||
Not the real thing, no.
komali2 7/2/2025|||
Yes, because the development of AGI doesn't automatically mean the end of capitalism. Feudalism, mercantilism, and the final form, capitalism, weren't overthrown by new technologies, and while AGI is certainly a very special new technology, so was the internet. It doesn't matter how special AGI is if it's controlled by one company under the mechanisms of a capitalist liberal democracy - it's not like the laws don't matter anymore, or the contracts, debts, allegiances.

What can AGI give us that would end scarcity, when our scarcity is artificial? New farming mechanisms that mean nobody goes hungry? We already throw away most of our food. We don't lack food; our resource allocation mechanism (Capitalism) just requires some people to be hungry.

What about new medicines? Magic new pills that cure cancer - why would these be given away for free when they can be sold, instead?

Maybe AGI will recommend the perfect form of fair and equitable governance! Well, it almost certainly will be a recommendation that strips some power from people who don't want to give up any power at all, and it's not like they'll give it up without a fight. Not that they'll need to fight - billionaires exist today and have convinced people to fight for them, against people's own self interest, somehow (I still don't understand this).

So, I'll modify Mark Fisher's quote - it's easier to imagine the creation of AGI than it is to imagine the end of capitalism.

Ray20 7/2/2025||
>our resource allocation mechanism (Capitalism) just requires some people to be hungry

One of the observable features of capitalism is that there are no hungry people. Capitalism has completely solved the problem of hunger. People are hungry when they don't have capitalism.

>billionaires exist today and have convinced people to fight for them

People are usually fighting for themselves. It's just that billionaires often are not enemies of society, but a source of social well-being. Or, even more often, a side effect of social well-being. People fight for billionaires to protect social well-being, not to protect billionaires.

>it's easier to imagine the creation of AGI than it is to imagine the end of capitalism

There is no need to even imagine the end of capitalism - we see it all the time, most of the world can hardly be called capitalist. And the less capitalism there is, the worse.

komali2 7/2/2025||
> One of the observable features of capitalism is that there are no hungry people. Capitalism has completely solved the problem of hunger. People are hungry when they don't have capitalism.

This is as fascinating to me as if someone walked up to me and said "Birds don't exist." It's a statement that's instantly and demonstrably wrong by simply turning and pointing at a bird, or in this case, by Googling "child hunger in the USA" and seeing a shitload of links demonstrating that 12.8% of US households are food insecure.

Or the secondary point, that hunger only exists where there is no capitalism, is demonstrably untrue, since the countries that ensure capitalism can continue to thrive by providing cheap labor have visible extreme hunger, such as India. India isn't capitalist? America isn't capitalist? Madagascar isn't capitalist? Palestine?

> It's just that billionaires often are not enemies of society, but source of social well-being.

How can someone not be an enemy of society when they maintain artificial scarcity by hoarding such a massive portion of society's output, and then act to hoard and concentrate our collective wealth even more into their own hands? Since when has "greed" not been a universally reviled trait?

> we see it all the time, most of the world can hardly be called capitalist. And the less capitalism there is, the worse.

I genuinely can't understand what you're seeing in the world to think the global economy is not capitalist in nature.

Ray20 7/2/2025||
> seeing a shitload of links demonstrating that 12.8% of US households are food insecure.

This is definitely not a manipulation of statistics, and definitely not a trivialization of the food insecurity that is relevant to many parts of the world. And then they wonder why people choose to support billionaires instead of you lying cannibals.

> such as India

> Madagascar isn't capitalist? Palestine?

No? These countries have nothing to do with an economy built on the principles of the inviolability of private property and economic freedom. The USA has more socialism than these countries have capitalism.

> How can someone not be an enemy of society when they maintain artificial scarcity by hoarding such a massive portion of society's output

Because it is not the portion of society's output that matters, but the size of that output. What's the point of even distribution if the size of each share is not even enough to avoid dying of starvation?

> Since when has "greed" not been a universally reviled trait?

The question is not whether greed is a reviled trait or not. Greed is a fact of human nature. The question is what this ineradicable human quality leads to under specific economic systems: universal prosperity, as under capitalism, or various abominations like mass starvation, as without it.

komali2 7/3/2025||
> This is definitely not a manipulation of statistics and not a trivialization of food insecurity that are relevant to many parts of the world. And then they wonder why people choose to support billionaires instead of you lying cannibals.

There is no manipulation of statistics here; anyone who's worked in a school could tell you this, including me, personally. There are hungry children in the USA. It should say something to you about your view on life, and the ideas you consume, that you believe a vast conspiracy to manipulate statistics is more likely than capitalism causing hunger.

> And then they wonder why people choose to support billionaires instead of you lying cannibals.

I really don't understand this insult lol, but I think it's funny that you think billionaires have more support than not. It's fine, the cycle of history that ends with the many poor realizing they outnumber the few rich 100,000:1 definitely will never ever happen again, they should keep concentrating wealth into a few people, it's totally safe this time.

> This countries has nothing to do with an economy built on the principles of the inviolability of private property and economic freedom.

Wrong, they're capitalist.

> USA has more socialism than this countries have capitalism.

Nope, wrong.

> What's the point of even distribution if size of the share is not enough even to not to die from starvation?

I don't get it, are you admitting that people do go hungry in the USA then? Well, regardless, the majority of the food in the USA is thrown away, or subsidies are provided to farmers to not grow it. It's not an issue of scarcity; it's an issue of distribution. Capitalism has no mechanism to guarantee people don't go hungry - if people going hungry is profitable (or ensuring they're fed is not profitable), then this will occur under capitalism.

> to universal prosperity, as under capitalism, or to various abominations like mass starvation, as without it.

Mass starvation happens today, under global capitalism. Mass starvation happened in the USA once because the stock market crashed (among some other reasons). Capitalism is no more immune to mass starvation than other economic systems. Capitalism also apparently leads to people unnecessarily dying from overwork (exploiting cheap labor in other countries), lack of healthcare (America's for-profit healthcare system), etc.

Your blinders on the true nature of capitalism will only turn people away from it and into my friends' welcoming arms. If you're truly interested in maintaining capitalism, you need to get better at defending it, the way neoliberals are. Get better at admitting the faults of capitalism in a way that lets you sustain it, or people are going to abandon it altogether. This dogmatic denial of the flaws of capitalism is funny to watch, but does you no good.

SchemaLoad 7/2/2025|||
What even is the monetization plan for AI? Seems like the cutting-edge tech becomes immediately devalued to nothing after a few months when a new open source model is released.

After spending so many billions on this stuff, are they really going to pay it all off selling API credits?

Yeul 7/2/2025|||
It's incredibly disheartening when you realize that the entire house of cards that is the internet is built on one thing: advertising money.
guappa 7/2/2025|||
They have no idea but they're building a new AI and will soon ask it.
blueblisters 7/1/2025|||
Nope, AGI is not the end goal - https://blog.samaltman.com/the-gentle-singularity

> OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.

IMO, AGI is already a very nebulous term. Superintelligence seems even more hand-wavy. It might be useful to define and understand limits of "intelligence" first.

Nasrudith 7/2/2025||
Superintelligence has always been a rhetorical sleight of hand to equate "better than human" with "literally infinitely improving and godlike", in spite of optimization always leveling off eventually for one reason or another.
jaza 7/2/2025||
I wouldn't worry, I forecast we'll have peace in the Middle East before we have true AGI.
toofy 7/1/2025||
hilarious seeing that he views it this way when his company is so very well known for taking (strong arguments say stealing) everything from everyone.

i’m noticing more and more lately that our new monarchs really do have broken thought patterns. they see their own abuse towards others as perfectly ok but hilariously demand people treat them fairly.

small children learn things that these guys struggle to understand.

jddj 7/1/2025||
I think they understand that it's all performative
kypro 7/1/2025||
Sam comes across as an extremely calculating person to me. I'm not suggesting he's necessarily doing this for bad reasons, but it's very clear to me the public facing communications he makes are well considered and likely not fully reflective of his actual views and positions on things, but instead what he believes to be power maximising.

He's very good at creating headlines and getting people talking online. There's no doubt he's good at what he does, but I don't know why anyone takes anything he says seriously.

unfitted2545 7/1/2025||
This interview with Karen Hao is really good (https://www.youtube.com/watch?v=8enXRDlWguU). She interviews people who have had one-on-one meetings with Sam, and they always say he aligned with them on everything, to the point where they don't actually know what he believes. He will tailor his opinions to try and weave in trust.
roguecoder 7/2/2025|||
SBF demonstrated how utilitarian thought + massive money can easily spiral into self-centered anti-social behavior.

Being a billionaire seems to be inherently bad for human brains.

FireBeyond 7/1/2025||
Even more blatantly and directly, "Don't you dare use our model, trained on other people's work, to train yours".
alganet 7/1/2025||
Why does this feel like the "Friendship Ended With Mudasir" meme?

https://knowyourmeme.com/memes/friendship-ended-with-mudasir

keeeba 7/2/2025||
Just checking my notes here.

This is the same Sam Altman who abandoned OpenAI’s founding mission in favour of profit?

No it can’t be

conartist6 7/2/2025||
Yeah I can't help but think he is starting to more closely fit the definition for mercenary.

For example, I'm on a mission to build a better code editor for the world. That's cost me 4 years of my life and several hundred thousand dollars.

He wanted one, so he bought it for $3 billion. I think he's doomed to fail there for pretty much the exact reasons he states here...

achrono 7/2/2025|||
The word "beat" and "mercenaries" are also quite important here -- to me, this is Altman's way of saying "you losers who left OpenAI, you will pay a steep price, because we will mess with you really deeply". The threat to Meta is just a natural consequence of that, to the extent that Meta clings onto said individuals.
cies 7/2/2025||
Or the one whose sister filed a suit against him for sexual abuse?

"missionary" pfff...

arccy 7/2/2025||
[flagged]
spenvo 7/1/2025||
OpenAI's tight spot:

1) They are far from profitability.
2) Meta is aggressively making their top talent more expensive, and outright draining it.
3) DeepSeek/Baidu/etc. are dramatically undercutting them.
4) Anthropic and (to a lesser extent?) Google appear to be beating them (or, charitably, matching them) on AI's best use case so far: coding.
5) Altman is becoming less likeable with every unnecessary episode of drama; and OpenAI has most of the stink from the initial (valid) grievance of "AI companies are stealing from artists". The endless hype and FUD cycles, going back to 2022, have worn industry people out, as has the flip-flop on "please regulate us".
6) Its original, core strategic alliance with Microsoft is extremely strained.
7) And, related to #6, its corporate structure is extremely unorthodox and likely needs to change in order to attract more investment, which it must (to train new frontier models). Microsoft would need to sign off on the new structure.
8) Musk is sniping at its heels, especially through legal actions.

Barring a major breakthrough with GPT-5, which I don't see happening, how do they prevail through all of this and become a sustainable frontier AI lab and company? Maybe the answer is that they drop the frontier-model aspect of their business? If we are really far from AGI and are instead in a plateau of diminishing returns, that may not be a huge deal, because having a 5% better model likely doesn't matter that much to their primary bright spot:

Brand loyalty from the average person to ChatGPT is the best bright spot, along with OpenAI successfully eating Google's search market. Their numbers there have been truly massive from the beginning and are, I think, the most defensible. Google AI Overviews continue to be completely awful in comparison.

nashashmi 7/1/2025||
They have the majority of the attention and market cap. They have runway. And that part is the most important thing. Others don't have the users to test developments at grand scale.
ninininino 7/1/2025||
I'm not so sure they have runway.

xAI has Elon's fortune to burn, and SpaceX to fund it.

Gemini has the ad and search business of Google to fund it.

Meta has the ad revenue of IG+FB+WhatsApp+Messenger.

Whereas OpenAI has $10 billion in annual revenue, but low switching costs for both consumers and developers using their APIs.

If you want to stay at the forefront of frontier models, you need to keep burning money like crazy; for OpenAI that means repeatedly raising rounds, whereas the tech giants can just fund it from their existing fortunes.

fzzzy 7/1/2025|||
They definitely have a very valuable brand name even if the switching costs are low. To many people, AI == ChatGPT
Tepix 7/2/2025||
But that's just one good marketing campaign away from changing.
nashashmi 7/2/2025|||
Ok, others have more runway, and less research talent.

OpenAI has enough runway to figure things out and place themselves in a healthier position.

And come to think of it, losing a few researchers to other companies may not be so bad. Like you said, others have cash to burn. They might spend that cash more liberally and experiment with bolder, riskier products, and either fail spectacularly or succeed exponentially. And OpenAI can still learn from it and benefit, even though it was never their cash.

VirusNewbie 7/1/2025|||
Good analysis; my counter to it is that OpenAI has one of the leading foundational models, while Meta, despite being a top-paying tech company, continued to release subpar models that don't come close to the other big three.

So, what happened? Is there something fundamentally wrong with the culture and/or infra at Meta? If it was just because Zuckerberg bet on the wrong horses to lead their LLM initiatives, what makes us think he got it right this time?

fzzzy 7/1/2025||
For one thing, all the trade secrets going from OpenAI and Anthropic to Meta.
wavemode 7/1/2025|||
> how do they prevail through all of this and become a sustainable frontier AI lab and company?

I doubt that OpenAI needs or wants to be a sustainable company right now. They can probably continue to drum up hype and investor money for many years. As long as people keep writing them blank checks, why not keep spending them? Best case they invent AGI, worst case they go bankrupt, which is irrelevant since it's not their own money they're risking.

logsr 7/1/2025|||
The biggest problem OAI has is that they don't own a data source. Meta, Google, and X all have existing platforms for sourcing real time data at global scale. OAI has ChatGPT, which gives them some unique data, but it is tiny and very limited compared to what their competitors have.

LLMs trained on open data will regress because there is too much LLM generated slop polluting the corpus now. In order for models to improve and adapt to current events they need fresh human created data, which requires a mechanism to separate human from AI content, which requires owning a platform where content is created, so that you can deploy surveillance tools to correctly identify human created content.

edm0nd 7/2/2025||
OAI has a deal to use Reddit's corpus of data.

They will either have to acquire a data source or build their own moving forward, imo. I could see them buying Reddit.

Sam Altman has also owned something like ~10% of Reddit's stock since it went public.

comfysocks 7/4/2025|||
The flip-flop on regulation sounds like: “please regulate us (in a way that builds a moat for incumbents out of fear of an imagined future doom scenario)” and “please Don’t regulate us (in a way that prevents us from stealing, and causing actual harm now).”
blueblisters 7/1/2025|||
If they can turn ChatGPT into a free cash flow machine, they will be in a much more comfortable position. They have the lever to do so (ads) but haven't shown much interest there yet.

I can't imagine how they will compete if they need to continue burning and needing to raise capital until 2030.

storgendibal 7/1/2025||
The interest and actions are there now: Hiring Fidji Simo to run "applications" strongly indicates a move to an ad-based business model. Fidji's meteoric rise at Facebook was because she helped land the pivot to the monster business that is mobile ads on Facebook, and she was supposedly tapped as Instacart's CEO because their business potential was on ads for CPGs, more than it was on skimming delivery fees and marked up groceries.
simianwords 7/2/2025|||
Maybe employees realised this and left OpenAI for this reason.
jekwoooooe 7/1/2025||
OpenAI has no shot without a huge cash infusion to offer similar packages. Meta opened the door.
WaltPurvis 7/2/2025||
I'm pretty sure Sam Altman's only mission in life is to be as personally wealthy as Mark Zuckerberg. Is that mission really supposed to inspire undying loyalty and insane workloads from OpenAI staffers?
inerte 7/2/2025||
Yes, if wealth is also what they want, and they believe the crumbs trickling down will be big enough.
jme789 7/2/2025||
You realize he takes a <$100k salary and doesn't own any equity in OpenAI right?
nwmcsween 7/2/2025||
Sam Altman complaining about "unethical" corporate behavior is pure gold
qwertox 7/2/2025|
Sam Altman is not a bit different from Mark Zuckerberg. His mission is to make money and gather as much information about individuals as possible to process and use for his own benefit; all the rest is just blah blah.
scyzoryk_xyz 7/2/2025||
It'll be a sad type of fun watching him become another disgusting and disconnected bazillionaire. Remember when Mark was a Honda Fit-driving, Palo Alto-based founder focused on 'connecting people' and building a not-yet-profitable 'social network' to change the world?

This is a repeat of the fight for talent that always happens with these things. It's all mercenary - it's all just business. Otherwise they'd remain an NGO.

I can't help but think that it would have been a much better move for him to get fired from OpenAI. Allow that to do its own thing and start other ventures with a clean reputation, and millions instead of billions in the bank.

JKCalhoun 7/2/2025||
> Remember when Mark was a Honda Fit-driving, Palo Alto-based founder focused on 'connecting people'…?

That Mark must have come after the Mark who created a site in college where visitors compared two women and ranked which of the two was "hotter".

scyzoryk_xyz 7/2/2025|||
That's fair. Legend has it Sam was the guy applying to YC at 17 with "I am Sam Altman and I am coming"

So yeah. Naked ambition. They're both just creaming their pants for power.

whoisyc 7/2/2025||||
“College” is underselling it. He went to Harvard. He sold a friendly, down-to-earth image early on and people bought it. But don’t forget he went to Harvard.
corndoge 7/2/2025|||
People change, even from horny college nerd into altruistic missionary into oligarch. And even beyond.
throw4847285 7/2/2025||
Or it turns out that people don't change, as explored in the entirely fictitious but very enjoyable film The Social Network. All those steps, even the horny college nerd, were facades, and the real core of his character is naked ambition. He will warp himself into any shape in order to pursue wealth and power. To paraphrase Robert Caro, power does not corrupt, it reveals.
d--b 7/2/2025||
I think half of him truly believes that his work ultimately will benefit humanity as a whole and half of him is a cynical bastard like the rest of them.

Ultimately, he’ll just realize that humanity doesn’t give a fuck, and that he’s in it for himself only.

And the typical butterfly-to-caterpillar transition will be complete.

taco_emoji 7/2/2025|||
We have essentially zero reason to believe he cares about any benefit to humanity at all.
blitzar 7/2/2025||||
I too truly believe that if you make me the richest person in the world, at the expense of all other people, it will be a benefit to humanity.
doctorwho42 7/2/2025|||
Well, a lot of AI bros think that AI can generate novel solutions to all of the world's problems; they just need more data and processing power. I.e. the AI god (all-knowing), which, when you take a step back, is utter lunacy. How can LLMs generate solutions to climate change if they're predictive models?

All of this to say, they delude themselves that the future of humanity needs "AI" or we are doomed. Ironically, the creation and expansion of LLMs has drastically increased humanity's power usage, to its own detriment.

saubeidl 7/2/2025||
I honestly believe the current AI hype and its associated wasteful power usage will be what seals humanity's fate.

Big Tech has become a doomsday cult.
