
Posted by spenvo 7/1/2025

Sam Altman Slams Meta’s AI Talent Poaching: 'Missionaries Will Beat Mercenaries' (www.wired.com)
344 points | 699 comments
angst 7/2/2025|
“I don’t think Sam is the guy who should have the finger on the button for AGI.”

- Ilya Sutskever, co-founder and co-lead of the Superalignment team; departed early 2024

- May 15, 2025, The Atlantic

Anyway, I concur it's a hard choice as one other comment mentions.

godelski 7/2/2025||
Is this a button any one person should have their finger on?
moffkalast 7/2/2025|||
They don't make buttons large enough for multiple people to press. It's always going to be someone in the end.
godelski 7/2/2025||
Yes they do?

There's also plenty of buttons that can't be pressed unless unlocked by multiple keys which cannot be turned by a single person.

TFYS 7/2/2025||||
Exactly. AGI is something that will significantly affect all of humanity. It should be treated like nuclear weapons.
zxexz 7/2/2025|||
Effectively kept secret and in the shadows by those working on it, until a world-altering public display makes it a hot, politically charged issue that remains so even 80 years later?

Edit: Honestly, I bet that "Altman", directed by Nolan's simulacrum and starring a de-aged Cillian Murphy (with or without his consent), will in fact deservedly win a few oscars in 2069.

TFYS 7/2/2025|||
International co-operation to control development and usage. The decision to unleash AGI can only be made once. Making such a decision should not be done hastily, and definitely not for the reason of pursuing private profit. Such a decision needs input from all of humanity.
andsoitis 7/2/2025||
> International co-operation to control development and usage.

Non-starter. Why would you trust your adversary to "stay within the lanes". The rational thing to do is to extend your lead as much as possible to be at the top of the pyramid. The arms race is on.

godelski 7/2/2025|||
With these things, the distrust is a feature, not a flaw. The distrust ensures you are keeping a close eye on each other. The collaboration means you're also physically and intellectually close. Secrets are still possible, but they're harder to keep because it's easier for things to slip.
TFYS 7/2/2025|||
It's rational only if you don't consider the risks of an actual superhuman AGI. Geopolitical issues won't matter if such a thing is released without making sure it can be controlled. These competition based systems of ours will be the death of us. Nothing can be done in a wise manner when competition forces our hand.
jonplackett 7/2/2025|||
And quickly proliferated around the world to other superpowers and rogue states…

Remember the soviets got the nuke so quick because they just exfiltrated the US plans

nlitened 7/2/2025||||
> It should be treated like nuclear weapons.

Seeing how currently nuclear weapon holders are elected, that would be a disaster

TFYS 7/2/2025||
The disaster will happen if AGI is created and let loose too quickly, without considering all the effects it will have. That's less likely to happen when the number of people with that power is limited.
Ray20 7/2/2025||||
>It should be treated like nuclear weapons.

Either you get it or you're screwed?

godelski 7/2/2025|||
It's already being treated like nuclear and that's a problem

  - highly centralized
  - lots of misinformation
    - lots of fear mongering
  - arms race between most powerful countries
    - who can't stop because if the other gets a significant lead it could be used to destroy the other
  - potentially world changing
    - potential to cause unprecedented levels of harm
    - potential to cause unprecedented levels of prosperity
Sometimes things are just done better with your enemy than in direct competition with them. "Keep your enemies closer" kinda thing.

As a parallel, look at medicine and gain of function research. It has a lot of benefits but can walk the line of bioweapons development. A mistake causes a global event. So it's best to work together. Everyone benefits from any progress by anyone. Everyone is harmed by mistakes by any one actor. That's regardless of working together or not. But working together means you keep an eye on one another, helping prevent mistakes. Often ones that are small and subtle. The adversarial nature is (or can be) beneficial in this case.

Regardless of who invents AGI, it affects the entire world.

Regardless of who invents AGI, you can't put the genie back in the bottle (or rather it's a great struggle that's extremely costly, if even possible)

Regardless of who invents AGI, the other will have it in a matter of months

beng-nl 7/2/2025||||
Good point. I like George Hotz’ philosophy on this - paraphrasing badly - if everyone has AI, nobody can be subjugated by AI.
godelski 7/2/2025||
That's all fine and dandy if we skip the development phase and AI is already invented.

But this doesn't work during the transition. During the development. "The button" here is for AGI. As in, when it's created and released.

8note 7/4/2025|||
it's a button everyone should have a finger on. if you can get it to do something good, go for it?
energy123 7/2/2025|||
I want Sam to win more than I do Zuck, just based on the proven negativity generated by Meta. I don't want that individual or that company anywhere near additional capital, power, capability or influence.
latexr 7/2/2025|||
The hypocrite who violates everyone else’s privacy to sell ads, or the scammer who collects eyeballs in exchange for cryptocurrency and whose “product” has been banned in multiple countries…

Yeah, there’s no good choice here. You should be rooting for neither. Best case scenario is they destroy each other with as little collateral damage as possible.

baq 7/2/2025||||
Between the trio of Thiel, Zuck and sama I’d pick the fourth option - I don’t want to be on that train anymore
stackbutterflow 7/2/2025|||
It became clear in 2024 and 2025 that they're all dangerous.

All these tech billionaires or pseudo-billionaires basically believe that an enlightened dictatorship is the best form of governance. And of course they ought to be the dictator or part of the board.

ponector 7/2/2025|||
Fourth option is Musk with xAI.
_Algernon_ 7/2/2025||
Fourth option is butlerian jihad
btheunissen 7/2/2025||||
At the start of the “LLM boom” I was optimistic that OAI/Anthropic were finally in a position to unseat the Big 4 in at least this area. Now I’m convinced the only winners are going to be Google, Meta, and Amazon, and we are right back to where we started.
jcfrei 7/2/2025|||
What makes you think so? They got the Chatgpt.com domain and the product seems to be growing more than any other (check out app downloads: https://appmagic.rocks/top-charts/apps). They got the first mover advantage - and as we know around here that's a huuuge advantage.
latexr 7/2/2025||
> They got the Chatgpt.com domain and the product seems to be growing more than any other

And still haemorrhaging money.

thephyber 7/2/2025|||
I still have hope that Anthropic will win out over OpenAI.

But… Why put Meta in that group?

I see Apple, Google, Microsoft, and Amazon as all effectively having operating systems. Meta has none and has failed to build one for cryptocurrency (Libra / Diem) and the metaverse.

Also, both Altman and Zuck leave a lot to be desired. Maybe not as much as Musk, but they both seem to be spineless against government coercion and neither gives me a sense that they are responsible stewards of the upside or downside risks of AGI. They both just seem like they are full throttle no matter the consequences.

richardatlarge 7/2/2025|||
Yes, I’ve never seen a more sociopathic company than Meta. They are so true to the cliched ethos of “you are the product.” Sickens me that society has facilitated the rise of such banality of evil
latexr 7/2/2025||
> Sickens me that society has facilitated the rise of such banality of evil

American society. Those are uniquely products of the US, exported everywhere, and rightfully starting to get pushback. Unfortunately later than it should have happened.

raverbashing 7/2/2025|||
Does any of those OpenAI spinoffs have something to show already or are they still "raising money"
catgary 7/2/2025||
[flagged]
WJW 7/2/2025|||
Even if it's zero, he could still be a shitty person who shouldn't have access to that button. If anyone should have such access at all, of course.
Gigablah 7/2/2025||||
Is that the du jour unit of measurement for morality now?
linotype 7/2/2025||
gestures broadly at every other thing we've known about Mark Zuckerberg since "Dumb Fucks" in college
renewiltord 7/2/2025|||
I suppose as an American taxpayer and American voter, he is responsible for as many ethnic cleansings as anyone else. Supposedly, Armenians leaving Nagorno-Karabakh is ethnic cleansing, and the US did give aid to Azerbaijan, so that makes Americans facilitators of ethnic cleansing, though admittedly so are the Canadians.
elif 7/2/2025||
I've seen paying people too much completely erode the core of teams. It's really hard to convince yourself to work 60 hour weeks when you have generational FU$ and a family you love.
dmoy 7/2/2025||
Man I disagree with this on multiple points:

I wouldn't describe a team full of people who don't want to work 60 hour weeks as "eroded", cus like... that's 6x 10-hour days, leaving incredibly little time for family, chores, unwinding, etc. Once in a while, maybe, but sustained that'll just burn people out.

And also by that logic, is every executive paid $5M+/yr in every company, or every person who's accumulated say $20M, also eroding their team? Or is that only applied to someone who isn't managing people, for some reason?

azan_ 7/2/2025||
There are people that are capable of working 60+/week long term without burnout, they are very rare but do exist (I know like two or three).
Koffiepoeder 7/2/2025||
I know many, though most of them are either founders, business owners or farmers. FWIW, only one person in that list is an employee.
dmoy 7/2/2025||
I guess there's a subset that sustains for a known set of years - surgical residents. 60-80+ hours
gosub100 7/2/2025|||
That's why CEO pay is so low. They take the honor in leadership and across the board just take a menial compensation package. Why work hard, 60 hrs a week even, if you get paid FU$? This is why boards limit comp packages so aggressively.
nilkn 7/2/2025|||
There's limited or no evidence of this in other domains where astonishing pay packages are used to assemble the best teams in the world (e.g., sports).
dbspin 7/2/2025|||
There's vast social honour and commensurate status attached to activities like being a sports / movie star. Status that can easily be lost, and cannot be purchased for almost any amount of money. Arguably that status is a greater motivator than the financial reward - e.g. see the South Korean idol system. It's certainly not going to be diminished as a motivator by financial reward. There's no equivalent for AI researchers. At best the very best may win the acclaim of their peers and a Nobel prize. It's not a remotely equivalent level of celebrity / access to all the treasures the world can provide.
nilkn 7/2/2025||
Top AI researchers are about the closest thing to celebrity status that has ever been attainable for engineering / CS folks outside of winning a Nobel Prize. Of course, the dopamine cycle and public recognition and adoration are nowhere near the same level as professional sports, but someone being personally courted by the world's richest CEOs handing out $100M+ packages is still decidedly not experiencing anything close to a normal life. Some of these folks still had their hiring announced on the front pages of the NYT and WSJ -- something normally reserved for top CEOs or, yes, sports stars and other legitimate celebrities.
sorcerer-mar 7/2/2025||||
Sports have a much, much tighter feedback loop on performance than anything in software, and certainly tighter than R&D.

Same with a lot of the financial roles with comp distributions like this.

nilkn 7/2/2025|||
Either Meta makes rapid progress on frontier-level AI in the next year or it doesn't -- there's definitely a feedback loop that's measured in tangible units of time. I don't think it's unreasonable to assume that when Zuck personally hires you at this level of compensation, there will be performance expectations and you won't stick around for long if you don't deliver. Even in top-tier sports, many underperformers manage to stick around for a couple years or even a half-decade at seven or eight figure compensation before being shown the door.
sorcerer-mar 7/2/2025||
In reality all frontier models will likely progress at nearly the same pace making it difficult to disaggregate this team's performance compared to others. More importantly, it'll be nearly impossible to disaggregate any one contributor's performance from the others, making it basically impossible to enforce accountability without many many repetitions to eliminate noise.

> Even in top-tier sports, many underperformers stick around for a couple years or a half-decade at seven or eight figure compensation before being shown the door.

This can happen in the explicit hopes that their performance improves, not because it's unclear whether they are performing, and not generally over lapses in contract.

nilkn 7/2/2025||
There are plenty of established performance management mechanisms to determine individual contributions, so while I wouldn't say that's a complete nonissue, it's not a major problem. The output of the team is more important to the business anyway (as is the case in sports, too).

And if the team produces results on par with the best results being attained anywhere else on the planet, Zuck would likely consider that a success, not a failure. After all, what's motivating him here is that his current team is not producing that level of results. And if he has a small but nonzero chance of pushing ahead of anyone else in the world, that's not an unreasonable thing to make a bet on.

I'd also point out that this sort of situation is common in the executive world, just not in the engineering world. Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes. There's no evidence I'm aware of that this reduces executive or executive team performance. Really, the evidence is the opposite -- companies continue paying more and more to assemble the best executive teams because they find it's actually worth it.

sorcerer-mar 7/2/2025||
> There are plenty of established performance management mechanisms to determine individual contributions

"Established" != valid, and literally everyone knows that.

The executives you reference are never ICs and are definitionally accountable to the measured performance of their business line. These are not superstar hires the way that AI researchers (or athletes) are. The body in the chair is totally interchangeable so long as the spreadsheet says the right number, and you expect the spreadsheet performance to be only marginally controlled by the particular body in the chair. That's not the case with most of these hires.

nilkn 7/2/2025||
I'd say execs getting hired for substantial seven- and eight-figure packages, with performance-based bonuses / equity grants and severance deals, absolutely do have a lot more in common with superstars than with most other professionals. And, just like superstars, they're hired based off public reputation more than anything else (just the sphere of what's "public" is different).

It's false that execs are never ICs. Anyone who's worked in the upper-echelon of corporate America knows that. Not every exec is simply responsible 1:1 for a business line. Many are in transformation or functional roles with very complex responsibilities across many interacting areas. Even when an exec is responsible for a business line in a 1:1 way, they are often only responsible for one aspect of it (e.g., leading one function); sometimes that is true all the way up to the C-suite, with the company having literally only a single exception (e.g., Apple). In those cases, exec performance is not 1:1 tied to the business they are 1:1 attached to. High-performing execs in those roles are routinely "saved" and banked for other roles rather than being laid off / fired in the event their BU doesn't work out. Low-performing execs in those roles are of course very quickly fired / re-orged out.

If execs really were so replaceable and it's just a matter of putting the right number in a spreadsheet, companies wouldn't be paying so much money for them. Your claims do not pass even the most basic sanity check. By all means, work your way up to the level we're talking about here and then report back on what you've learned about it.

Re: performance management and "everyone knowing that", you're right of course -- that's why it's not an interesting point at all. :) I disagree that established techniques are not valid -- they work well and have worked for decades with essentially no major structural issues, scaling up to companies with 200k+ employees.

sorcerer-mar 7/2/2025||
I did not say their performance is 1:1 with a business line, but great job tearing down that strawman.

I said they are accountable to their business line -- they own a portfolio and are accountable for that portfolio's performance. If the portfolio does badly, it means nearly by definition that the executive is doing badly. Like an athlete, that doesn't mean they're immediately put to the streets, but it also is not ambiguous whether they are performing well or not.

Which also points to why performance management methods are not valid, i.e. a high-sensitivity, high-specificity measure of an individual executive's actual personal performance: there are obviously countless external variables that bear on the outcome of a portfolio. But nonetheless, for the business's purpose, it doesn't matter. Because the real purpose of performance management methods is to have a quasi-objective rationalization for personnel decisions that are actually made elsewhere.

Perhaps you can mention which performance management methods you believe are valid (high-specificity and high-sensitivity measures of an individual's personal performance) in AI R&D?

"Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes". In this group, what percentage are ICs? Sure there are aberrational celebrity hires, of course, but what you are pointing to is the norm, which is not celebrity hires doing IC work.

> If execs really were so replaceable... companies wouldn't be paying so much money for them

High-level executives within the same tier are largely substitutable - any qualified member of this cohort can perform the role adequately. However, this is still a very small group of people ultimately responsible for huge amounts of capital and thus collectively can maintain market power on compensation. The high salaries don't reflect individual differential value. Obviously there are some remarkable executives and they tend to concentrate in remarkable companies, by definition, and also by definition, the vast majority of companies and their executives are totally unremarkable but earn high salaries nonetheless.

nilkn 7/2/2025||
Lack of differentiation within a tiny elite circle of candidates does not imply that salaries do not reflect individual differential value broadly. While these people control a large amount of capital, they do not own that capital -- their control is granted due to their talent and can be instantly revoked at any moment. They have no leverage to maintain control of this capital except through their own reputation and credibility. There is no "tenure" for executives -- the "status" of the role must essentially be re-earned constantly over time to maintain it, and those who don't do so are quickly forced out.

The researchers being hired here are just as accountable as the execs we're talking about -- there is a clear outcome that Zuck expects, and if they don't deliver, they will be held accountable. I really, genuinely don't see what's so complicated about this.

Accountability to a business line does not imply that if that business does poorly then every exec accountable to it was doing poorly personally. I'm actually a personal counter-example and I know a number of others too. In fact, I've even seen execs in failing BUs get promoted after the BU was folded into another one. Competent exec talent is hard to find (learning to operate successfully at the exec level of a Fortune 50 company is a very rarefied skill and can't be taught), and companies don't want to lose someone good just because that person was attached to a bad business line for a few months or years.

Something important to understand about the actual exec world is that executives move around within companies constantly -- the idea that an executive is tied to a single business and if something goes wrong there they must have sucked is just not true and it's not how large companies operate generally. When that happens, the company will figure out the correct action for the business line (divest, put into harvest mode, merge into another, etc., etc.), then figure out what to do with the executives. It's an opportunity to get rid of the bad ones and reposition the top ones for higher-impact work. Sometimes you do have to get rid of good people, though, which is true of all layoffs -- but even with execs there's a desire to avoid it (just like you'd ideally want to retain the top engineers of a product line being shuttered).

roguecoder 7/2/2025|||
I disagree. If you want similarly-tight feedback loops on performance, pair programming/TDD provides it. And even if you hate real-time collaboration or are working in different time zones, delightful code reviews on small slices get pretty close.
sorcerer-mar 7/2/2025||
Are these people just “programming?”
sebastos 7/2/2025|||
Take-one-for-the-team salaries triumph there too. Tom Brady and the Patriots dynasty say hi.
bravesoul2 7/2/2025|||
I'd say it would be easier to do 60 hours when you can afford servants to take care of the rest of life.
michaelanckaert 7/2/2025|||
Is working more than 60 hours a week necessary to have a good team? Sure, having FU$ as you put it removes the necessity to keep the scales of your work-life balance tipped to the benefit of your employer. But again, a good work-life balance should not imply erosion of the team.
roguecoder 7/2/2025|||
Working more than 50 hours a week is counter-productive, and even working 50 hours a week isn't consistently more productive than working 40 hours a week.

It is very easy to mistake _feeling_ productive and close with your coworkers for _being_ productive. That's why we can't rely on our feelings to judge productivity.

driverdan 7/2/2025|||
> It's really hard to convince yourself to work 60 hour weeks

Why would they do that? There is absolutely no reason to overwork.

RogerL 7/2/2025|||
> really hard to convince yourself to work 60 hour weeks

Good!

inerte 7/2/2025||
I was ready to downvote you, giving examples of how +$100m net worth individuals are probably the hardest workers (or were, to get there), just like most of the people replying to you, but your `and a family you love` tripped me up. I sorta agree... if you want to maximize time with family and you have FU$, would you really, really work that hard?

I am not saying they don't love their family, exactly... but it's not necessarily a priority over glory, more money, or being competitive. And if the relationship is healthy and built on solid foundations, usually the partner knows what they're getting into and accepts the other person (children, on the other hand, had no choice).

It's a weird take to tie this up with team morale, though.

jlebar 7/1/2025||
I think that leaks like this have negative information value to the public.

I work at OAI, but I'm speaking for myself here. Sam talks to the company, sometimes via slack, more often in company-wide meetings, all the time. Way more than any other CEO I have worked for. This leaked message is one part of a long, continuing conversation within the company.

The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.

What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.

I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are. It certainly has humbled my own reactions to news. (In this particular instance I don't think there's so much right and wrong but more that I think if you had actually been in the room for more of the conversation you'd probably feel different.)

Btw Sam has tweeted about an open source model. Stay tuned... https://x.com/sama/status/1932573231199707168

makeitdouble 7/1/2025||
The other side of it: some statements made internally can be really bad but employees brush over them because they inherently trust the speaker to some degree, they have additional material that better aligns with what they want to hear so they latch on the rest, and current leaders' actions look fine enough to them so they see the bad parts as just communication mishaps.

Until the tide turns.

jiggawatts 7/1/2025||
Worse: employees are often actively deceived by management. Their “close relationship” is akin to that of a farmer and his herd. Convinced they’re “on the inside” they’re often blind to the truth that’s obvious from the outside.

Or simply they don’t see the whole picture because they’re not customers or business partners.

I’ve seen Oracle employees befuddled to hear negative opinions about their beloved workplace! “I never had to deal with the licensing department!”

tikhonj 7/1/2025|||
Okay, but I've also heard insiders at companies I've worked at completely overlook obvious problems and cultural/management shortcomings. "Oh, we don't have a low-trust environment, it's just growing pains. Don't worry about what the CEO just said..."

Like, seriously, I've seen first-hand how comments like this can be more revealing out of context than in context, because the context is all internal politics and spin.

diggan 7/1/2025|||
> Btw Sam has tweeted about an open source model. Stay tuned... https://x.com/sama/status/1932573231199707168

Sneaky wording but seems like no, Sam only talked about "open weights" model so far, so most likely not "open source" by any existing definition of the word, but rather a custom "open-but-legal-dept-makes-us-call-it-proprietary" license. Slightly ironic given the whole "most HN posters are confidently wrong" part right before ;)

Although I do agree with you overall, many stories are sensationalized, parts-of-stories always lack a lot of context and large parts of HN users comments about stuff they maybe don't actually know so much about, but put in a way to make it seem so.

echelon 7/1/2025|||
There are ten measures by which a model can/should be open:

1. The model code (pytorch, whatever)

2. The pre-training code

3. The fine-tuning code

4. The inference code

5. The raw training data (pre-training + fine-tuning)

6. The processed training data (which might vary across various stages of pre-training and fine-tuning)

7. The resultant weights blob

8. The inference inputs and outputs (which also need a license; see also usage limits like O-RAIL)

9. The research paper(s) (hopefully the model is also described in literature!)

10. The patents (or lack thereof)

A good open model will have nearly all of these made available. A fake "open" model might only give you two of ten.

impossiblefork 7/1/2025|||
Open weights is unobjectionable. You do get a lot.

It's nice to also know what the training data is, and it's even nicer to be aware of how it's fine-tuned etc., but at least you get the architecture and are able to run it as you like and fine tune it further as you like.

diggan 7/2/2025||
> Open weights is unobjectionable

Yeah? Try me :)

> but at least you get the architecture and are able to run it as you like and fine tune it further as you like.

Sure, that's cool and all, and I welcome that. But it's getting really tiresome seeing huge companies, who probably depend on actual FOSS, constantly getting it wrong, which devalues all the other FOSS work going on, since they wanna ride that wave instead of just being honest about what they're putting out.

If Facebook et al could release compiled binaries from closed source code but still call those binaries "open source", and call the entire Facebook "open source" because of that, they would. But obviously everyone would push back on that, because that's not what we know open source to be.

Btw, you don't get to "run it as you like", give the license + acceptable use a read through, and then compare to what you're "allowed" to do compared to actual FOSS licenses.

gphil 7/1/2025|||
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.

This is so true. And not confined to HN.

phatfish 7/1/2025|||
I agree with the sentiment.

Having been behind the scenes of an HN discussion about a security incident, with accusations flying about incompetent developers, the true story was that the lead developers knew of the issue, but it was not prioritised by management and was pushed down the backlog in favour of new (revenue-generating) features.

There is plenty of nuance to any situation that can't be known.

No idea if the real story here is better or worse than the public speculation though.

gist 7/1/2025|||
> I think that leaks like this have negative information value to the public.

To most people I'd think this is mainly for entertainment purposes ie 'palace intrique' and the actual facts don't even matter.

> The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.

That's a good spin, but coming from someone who has an anonymous profile, how do we know it's true? (This is a general thing on HN: people say things, but you don't know how legit what they say is, or if they are who they say they are.)

> What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.

What conclusions exactly? Again do most people really care about this (reading the story) and does it impact them? My guess is it doesn't at all.

> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in.

This is a well known trope and is discussed in other forms ie 'NY Times story is wrong move to the next story and you believe it' ie: https://www.epsilontheory.com/gell-mann-amnesia/

jlebar 7/1/2025|||
> coming from someone who has an anonymous profile how do we know it's true

My profile is trivially connected to my real identity, I am not anonymous here.

gist 7/2/2025||
How is it trivially connected to your real identity exactly?

I am not seeing how it is at all.

lossolo 7/1/2025|||
> That's a good spin but coming from someone who has an anonymous profile how do we know it's true (this is a general thing on HN people say things but you don't know how legit what they say is or if they are who they say they are).

Not only that, but how can we know if his interpretation or "feelings" about these discussions are accurate? How do we know he isn't looking through rose-tinted glasses like the Neumann believers at WeWork? OP isn't showing the missing discussion, only his interpretation/feelings about it. How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.

jlebar 7/2/2025||
> How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.

I agree with that of course.

aleph_minus_one 7/1/2025|||
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.

Some topics (and some areas where one could be an expert in) are much more prone to this phenomenon than others.

Just to give a specific example that suddenly comes to my mind: Grothendieck-style Algebraic Geometry is rather not prone to people confidently posting wrong stuff about on HN.

Generally (to abstract from this example [pun intended]): I guess topics that

- take an enormous amount of time to learn,

- where "confidently bullshitting" will not work because you have to learn some "language" of the topic very deeply

- where even a person with some intermediate knowledge of the topic can immediately detect whether you use the "'grammar' of the 'technical language'" very wrongly

are much more rarely prone to this phenomenon. It is no coincidence that in the last two points I make comparisons to (natural) languages: it is not easy to bullshit in a live interview that you know some natural language well if the counterpart has at least some basic knowledge of this natural language.

joules77 7/1/2025|||
I think it's more the site's architecture that promotes this behavior.

In the offline world there is a big social cost to this kind of behavior. Platforms haven't been able to replicate it. Instead they seem to promote and validate it. It feeds the self esteem of these people.

Karrot_Kream 7/1/2025|||
It's hard to have an informed opinion on Algebraic Geometry (requires expertise) and not many people are going to upvote and engage with you about it either. It's a lot easier to have an opinion on tech execs, current events, and tech gossip. Moreover you're much more likely to get replies, upvotes, and other engagement for posting about it.

There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.

aspenmayer 7/1/2025||
> There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.

HN is the digital water cooler. Rumors are a kind of social currency, in the capital sense, in that it can be leveraged and has a time horizon for value of exchange, and in the timeliness/recency biased sense, as hot gossip is a form of information that wants to be free, which in this context means it has more value when shared, and that value is tapped into by doing so.

KaiserPro 7/2/2025|||
I too worked at a place where hot button issues were being leaked to international news.

Leaks were done for a reason: either because the leaker agrees with the leak, really disagrees with it, or wants to feel big by being a broker of juicy information.

Most of the time the leaks were done in an attempt to stop something stupid from happening, or highlight where upper management were making the choice to ignore something for a gain elsewhere.

Other times it was because the person was being a prick.

Sure, it's a tiny part of the conversation, but in the end, if you've gotten to the point where your employees are pissed off enough to leak, that's the bigger problem.

bboygravity 7/1/2025|||
I totally agree that most articles (pretty much all news/infotainment) are devoid of any information.

At the same time, all I need to know about Sam is in the company/"non-profit's" name, which in itself is now simply a lie.

crystal_revenge 7/1/2025|||
This is a strangely defensive comment for a post that, at least on the surface, doesn't seem to say anything particularly damning. The fact that you're rushing to defend your CEO sort of proves the point being made: clearly you have to make people believe they're part of something bigger, not just pay them a lot.

The only obvious critique is that clearly Sam Altman doesn't believe this himself. He is legendarily mercenary and self serving in his actions to the point where, at least for me, it's impressive. He also has, demonstrably here, created a culture where his employees do believe they are part of a more important mission and that clearly is different than just paying them a lot (which of course, he also does).

I do think some skepticism should be had around that view the employees have, but I also suspect that was the case for actual missionaries (who of course always served someone else's interests, even if they personally thought they were doing divine work).

wat10000 7/1/2025||
The headline makes it sound like he's angry that Meta is poaching his talent. That's a bad look that makes it seem like you consider your employees to be your property. But he didn't actually say anything like that. I wouldn't consider any of what he said to be "slams," just pretty reasonable discussion of why he thinks they won't do well.

I'd say this is yet another example of bad headlines having negative information content, not leaks.

makeitdouble 7/1/2025||
With no dogs in the fight, the very fact he's talking to his employees about a competitor's hiring practices is noteworthy.

The delivery of the message can be milder and better than how it sounds in the chosen bits, but the overall picture kinda stays the same.

wat10000 7/2/2025||
To me, there’s an enormous difference between “they pay well but we’re going to win the race” and “my employees belong to me and they’re stealing my property.”

Notably, I don’t see him condemning Meta’s “poaching” here, just commenting on it. Compare this with, for example, Steve Jobs getting into a fight with Adobe’s CEO about whether they’d recruit each other’s employees or consider them to be off limits.

furyofantares 7/2/2025|||
I've experienced that. Absolutely.

But I've also experienced that the outside perspective, wrong as it may be on nearly all details, can give a dose of realism that's easy to brush aside internally.

incoming1211 7/1/2025|||
Sounds like someone is upset they didn't get poached.
dylan604 7/1/2025|||
Your comment comes across dangerously close to sounding like someone that has drunk the kool-aid and defends the indefensible.

Yes, you can get the wrong impression from hearing just a snippet of a conversation, but sometimes you can hear what was needed whether it was out of context or not. Sam is not a great human being to be placed on a pedestal that never needs anything he says questioned. He's just a SV CEO trying to keep people thinking his company is the coolest thing. Once you stop questioning everything, you're in danger of having the kool-aid take over. How many times have we seen other SV CEOs with a "stay tuned" tweet that they just hope nobody questions later?

>if you had actually been in the room for more of the conversation you'd probably feel different

If you haven't drunk the kool-aid, you might feel differently as well.

SAMA doesn't need your assistance white knighting him on the interwebs.

jacquesm 7/2/2025||
Technically, after being labelled a missionary you can't really blame people for spreading the word of the almighty.
threetonesun 7/1/2025||
Little Miyazaki knock-offs posted on the Nazi hellsite formerly known as Twitter aren't really helping how the "public" feels about OAI either.
apwell23 7/1/2025||

  Another said: “Yes we’re quirky and weird, but that’s what makes this place a magical cradle of innovation,” wrote one. “OpenAI is weird in the most magical way. We contain multitudes.”
I thought I was reading /r/linkedinlunatics
Festro 7/2/2025||
What an odd turn of phrase. Historically speaking, mercenaries have absolutely slaughtered missionaries in every confrontation.

If missionaries could be mercenaries, they would.

epolanski 7/2/2025||
Also, OpenAI ain't missionaries, it's a for profit company full of people working there for a fat paycheck and equity.
revskill 7/2/2025||
Both are orthogonal concepts.
thephyber 7/2/2025||
Anchoring the reader’s opinion by using the phrase “Missionaries” is pure marketing. Missionaries don’t get paid huge dollars or equity, they do it because it’s a religion / a calling.

Ultimately why someone chooses to work at OpenAI or Meta or elsewhere boils down to a few key reasons. The mission aligns with their values. The money matches their expectations. The team has a chance at success.

The orthogonality is irrelevant because nobody working for OpenAI or Meta is a missionary.

antupis 7/2/2025|||
The term comes from John Doerr https://www.youtube.com/watch?v=n6iwEYmbCwk . But Altman kicked most missionaries out during corporate turmoil in 2023, so not sure where this comes from.
tim333 7/2/2025||
From that by the way

>...on one hand, the mercenaries: they have enormous drive, they're opportunistic, like Andy Grove they believe only the paranoid survive, and they're really sprinting for the short run. But that's quite different, I suggest to you, than the missionaries, who have passion, not paranoia, who are strategic, not opportunistic, and who are focused on the big idea and partnerships. It's the difference between focusing on the competition or the customer.

>It's the difference between worshiping at the altar of founders or having a meritocracy where you get all the ideas on the table and the best ones win. It's the difference between being exclusively interested in the financial statements or also in the mission statements. It's the difference between being a loner on your own or being part of a team; having an attitude of entitlement versus contribution; or, as Randy puts it, living a deferred life plan versus a whole life. It's the difference between just making money (anybody who tells you they don't want to make money is lying) or making money and making meaning also. My bottom line: it's the difference between success, or success and significance.

avidphantasm 7/2/2025|||
And both missionaries and mercenaries are responsible for the abuse and obliteration of many millions. Neither are forces for good.
conartist6 7/2/2025||
The force has a light side and a dark side. Apparently Switzerland is so famously neutral because their national export was mercenaries. You can't take sides in wars if you want to sell soldiers to both sides...

But also I imagine that it helps when you wish to stay neutral if people are afraid of what you could do if you were directly involved in a conflict.

a_c 7/2/2025|||
Missionary as founder, mercenary as employee, everyone happy
sksrbWgbfK 7/2/2025|||
The Knights Templar (https://en.wikipedia.org/wiki/Knights_Templar) were kinda both, but modern AI is more mercenary in order to grab all the profits and become a monopoly nowadays.
megablast 7/2/2025|||
Except we have a lot more missionaries than mercenaries now. Right? So who won?
conartist6 7/2/2025|||
Yeah, apparently being well fed, well paid, and extensively prepared helped? It's like these mercenaries were actually what you would call "professionals".
nashashmi 7/2/2025|||
Mercenaries get paid to follow orders and kill. Missionaries are more independent. That is the selling point to the OpenAI worker.
codingwagie 7/1/2025||
The value of these researchers to meta is surely more than a few billion. Love seeing free markets benefit the world
simianwords 7/2/2025||
I'm a bit torn about this. If it ends up hurting OpenAI so much that they close shop, what is the incentive for another OpenAI to come up?

You can spend time making a good product and getting breakthroughs, and all it takes is for Meta to poach your talent, and with it your IP. What do you have left?

roguecoder 7/2/2025||
Trade secrets and patent laws still apply.

But also, every employee getting paid at Meta can come out with the resources to start their own thing. PayPal didn't crush fintech: it funded the next twenty years of startups.

ipsum2 7/1/2025||
How do you figure? If you assume that Meta gets the state of the art model, revenue is non-existent, unless they start a premium tier or put ads. Even then, its not clear if they will exceed the money spent on inference and training compute.
HWR_14 7/1/2025||
It's worth a few billion (easily) to keep people's default time sink as aimlessly playing on FB/IG as opposed to chatting with ChatGPT. Even if that scroll is replaced by chatting with Llama as opposed to seeing posts.
zmmmmm 7/2/2025||
It says something that he still believes he has "missionaries" after betraying all the core principles that OpenAI was founded on. What exactly is their mission now other than generating big $?
motorest 7/2/2025||
> It says something that he still believes he has "missionaries" after betraying all the core principles that OpenAI was founded on.

What I find most troubling in this reaction is how hostile it is to the actual talent. It accuses everyone who is even considering joining Meta in particular, or any competitor in general, of being a mercenary. It's using the poisoning-the-well fallacy to shield OpenAI from any competition. And why? Because he believes he is on a personal mission? This gives off "some of you may die, but it's a sacrifice I am willing to make" energy. Not cool.

hliyan 7/2/2025|||
It's absolutely ridiculous that investors are driven (and are expected to be driven) by maximisation of return on investment, and that alone, but when labour/employees exhibit that same behaviour, they are labelled "mercenaries" or "transactional".
misterhill 7/2/2025|||
He was very happy when money caused them all to back him, despite the fact that he obviously isn't a safe person to have in a position of power. But if they realize they have better money options than turning OpenAI into a collusion against its original foundation, mostly for his benefit, well, then they are mercenaries.
DebtDeflation 7/2/2025||||
These examples of double standards for labor vs capital are literally everywhere.

Capital is supposed to be mobile. Economic theory is based on the idea that capital should flow to its best use (e.g., investors should withdraw it from companies that aren't generating sufficient returns and provide it to those who are) including being able to flow across international borders. Labor is restricted from flowing across international boundaries by law and even job hopping within a country is frowned upon by society.

We have lower rates of taxation on capital (capital gains and dividends) than on labor income because we want to encourage investment. We're told that economic growth depends on it. But doesn't economic growth also depend on people working and shouldn't we encourage that as well?

There's an entire industry dedicated to tracking investment yields for capital and we encourage the free flow of this information "so that people can make informed investing decisions". Yet talking about salaries with co-workers is taboo for some reason.

The list goes on and on and on.

roguecoder 7/2/2025||
Those lower rates of taxation on capital don't even incentivize investment, because investment is inelastic. What else are you going to do with the money, swim in it?

It's just about rich people wanting a bigger share of the pie and having enough money to buy the policies they prefer.

Similarly, we have laws that guarantee our right to talk with our coworkers about our income, but the penalties have been completely gutted. And the penalty for companies illegally colluding on salary by all telling a third party what they are paying people and then using that data to decide how much to pay is ... nada.

We need to figure out how to have people who work for a living fund political campaigns (either directly with money or by donating our time), because this alternative of a badly-compressed jpeg of an economy sucks.

alwa 7/2/2025|||
Couldn’t his claim apply equally to investors and employees? In both categories, people who are there to do stuff for mercenary reasons are likely to be (in his view) missing some of the drive and cohesion of a group of “true believers” working for the same purpose?

The contrast between SpaceX and the defense primes comes to mind… between Warren Buffett and a crypto pumper-and-dumper… between a steady career at (or dividend from) IBM and a Silicon Valley startup dice-roll (or the people who throw money into said startups knowing they’re probably going to lose it)

grafmax 7/2/2025|||
He claims to be advancing progress. He believes that progress comes from technology plus good governance.

Yet our government is descending into authoritarianism and AI is fueling rising data center energy demands exacerbating the climate crisis. And that is to say nothing of the role that AI is playing in building more effective tools for population control and mass surveillance. All these things are happening because the governance of our future is handled by the ultra-wealthy pursuing their narrow visions at the expense of everyone else.

Thus we have no reason to expect good “governance” at the hands of this wealthy elite and we only see evidence to the opposite. Altman’s skill lies in getting people to believe that serving these narrow interests is the pursuit of a higher purpose. That is the story of OpenAI.

bookv 7/2/2025||
[flagged]
graemep 7/2/2025|||
> Also the only way multiculturalism can work is through a totalitarian state which is why surveillance and censorship is so big in the UK. Also the reason why Singapore works.

Singapore, if anything, is evidence against your claim about the UK. Singapore has multiple cultures, but it does not promote multi-culturalism as it is generally understood in the UK. Their language policy is:

1. Everyone has to speak reasonably good English.

2. Languages other than English, Malay, Mandarin and Tamil are discouraged.

https://en.wikipedia.org/wiki/Language_planning_and_policy_i...

The language policy is more like the treatment of Welsh in the 19th century, or Sri Lanka's attempt to impose a single national language from the 60s to the 80s (but more flexible as it retains more than one language). A more extreme (because it goes far beyond language) and authoritarian example would be contemporary China's suppression of minority cultures. I do not think anyone would call any of those multiculturalism.

The reason for surveillance and censorship in the UK is very different. It is a general feeling in the ruling class that the hoi polloi cannot be trusted and that centralised decision making is preferable to local or individual decision making. The current Children's Wellbeing and Schools Bill is a great example: the central point is that the authorities will make more decisions for people and organisations and decide what they can do to a greater extent than at the moment.

dgb23 7/2/2025||||
> the only way multiculturalism can work is through a totalitarian state

I'm seeing more and more people using this kind of rhetoric in the last few years. Extremely worrying.

sofixa 7/2/2025||||
> Also the only way multiculturalism can work is through a totalitarian state which is why surveillance and censorship is so big in the UK.

That seems like a wild claim to make without any supporting evidence. Even Switzerland can be used to disprove it, so I'm not sure where you're coming from that assuredly.

The UK isn't totalitarian in the same sense that even Singapore is, let alone actually totalitarian states like Eritrea, North Korea, China, etc.

dgb23 7/2/2025|||
For reference:

Switzerland has one of the highest percentage of foreigners in Europe, four official languages, a decentralized political system, very frequent direct democratic votes and consensus governance (no mayors, governors and prime ministers, just councils all the way down).

Switzerland is set up in such a way that it absorbs and integrates many different cultures into a decentralized, democratic system. One of the primary historical causes for this is our belligerent past. I'd like to think that this was our only way out of constantly hitting each other over our heads.

grumbelbart2 7/2/2025|||
Even the US is a good counterexample. It is a widely fragmented country, yet that worked for decades without degrading into a totalitarian state.
BoxOfRain 7/2/2025||||
The UK would need to have a well-funded and well-equipped police force to be a proper police state, and the rate of shoplifting, burglary etc that goes on suggests otherwise.
Djampo 7/2/2025|||
[dead]
eleveriven 7/2/2025|||
Yeah, that "missionaries" line feels pretty rich coming from the guy who presided over OpenAI's pivot from nonprofit ideals to capped-profit reality
zombiwoof 7/2/2025|||
Exactly. He says missionaries and immediately follows it by talking about compensation (ie a mercenary incentive)
blitzar 7/2/2025|||
It says something that he believes he is anything but a Mercenary
ahartmetz 7/2/2025||
He believes anything that's profitable for him.
chvid 7/2/2025|||
It is a widely accepted definition of AGI that it is something that is either really smart or generates more than $100B in revenue.

It is also clear Sam Altman and OpenAI’s core values remain intact.

spencerflem 7/2/2025||
Lol, I'm sure Sam Altman's ideals haven't changed but you're a fool if you think OpenAI is aiming for anything loftier than a huge pile of money for investors.
chvid 7/2/2025||
Exactly. AGI.
whiteboardr 7/2/2025|||
Oh, the hypocrisy: one of the biggest thieves and liars calling "his" people jumping ship "mercenaries".

So wrong on so many levels - what a time to be alive.

benatkin 7/2/2025|||
They haven't released many closed-source, open-weights models in comparison to their competitors, but they made their Codex agent open source while Claude Code is still closed source.
jonplackett 7/2/2025||
That’s just a wrapper though isn’t it. The secret sauce is still secret.
benatkin 7/2/2025||
And with the others as well, the secret sauce of training is still secret. Their competitors' "open source" in Gemma, Llama, etc is closed source. It's like Mattermost Team Edition where the binary is shipped under the MIT license. OpenAI should be held to a higher standard based on their name and original structure and pitch and they've fallen short, but I think to say they completely threw it out is an exaggeration. They hit the same roadblocks of copyright and other sorts of scrutiny that others did.
beebmam 7/2/2025||
A company's mission is not an individual's mission. I personally would never hire an engineer whose main pursuit is money or promotions. These are the laziest engineers that exist and are always a liability.
quantified 7/2/2025|||
Everyone is the chairman of the board of their lives, with a fiduciary duty to their shareholder, namely themselves. You can decide to hire only employees who either believe in mission over pay or who are willing to mouth the words, but you will absolutely miss out on good employees.

I remember defending a hiring candidate who had said he got into his specialty because it paid better than others. We hired him and he was great, worth his pay. No one else on the hiring team could defend a bias against someone looking out for themselves.

Bendy 7/2/2025||||
And I would never work for someone with such a paranoid suspicion of the motives of their employees, who doesn’t want to take any responsibility in their employees’ professional growth, and who doesn’t want to pay them what they’re worth.
cschep 7/2/2025||||
this is so, so out of touch.
Cantinflas 7/2/2025||||
What is your opinion on managerial virtue signaling?
cubancigar11 7/2/2025|||
"I would never give my money to someone who wants money."
amarcheschi 7/1/2025||
Had he been doing the poaching, he would be saying mercenaries will beat missionaries. Why believe CEOs' words at this point?
gilfoyle 7/2/2025||
Sam Altman went from "I'm doing this because I love it" to proposing to receive 7% equity in the for-profit entity in a matter of months. Now he calls out researchers leaving for greener pastures as mercenaries while the echo of "OpenAI is nothing without its people" hasn't faded.
Animats 7/1/2025|
Yeah, yeah, typical rich guy whining when labor makes some gains.