Posted by spenvo 7/1/2025
- Ilya Sutskever, co-founder and co-lead of the Superalignment Team; departed early 2024
- May 15, 2025, The Atlantic
Anyway, I concur that it's a hard choice, as another comment mentions.
There are also plenty of buttons that can't be pressed unless unlocked by multiple keys, which cannot all be turned by a single person.
Edit: Honestly, I bet that "Altman", directed by Nolan's simulacrum and starring a de-aged Cillian Murphy (with or without his consent), will in fact deservedly win a few Oscars in 2069.
Non-starter. Why would you trust your adversary to "stay within the lanes"? The rational thing to do is to extend your lead as much as possible to be at the top of the pyramid. The arms race is on.
Remember, the Soviets got the nuke so quickly because they just exfiltrated the US plans.
Seeing how the current holders of nuclear weapons are elected, that would be a disaster.
Either you get it or you're screwed?
- highly centralized
- lots of misinformation
- lots of fear mongering
- arms race between most powerful countries
- who can't stop because if the other gets a significant lead it could be used to destroy the other
- potentially world changing
- potential to cause unprecedented levels of harm
- potential to cause unprecedented levels of prosperity
Sometimes things are just done better with your enemy than in direct competition with them. A "keep your enemies closer" kind of thing. As a parallel, look at medicine and gain-of-function research. It has a lot of benefits but can walk the line of bioweapons development, and a mistake causes a global event. So it's best to work together. Everyone benefits from progress by anyone; everyone is harmed by mistakes by any one actor. That's true regardless of whether they work together or not. But working together means you keep an eye on one another, helping prevent mistakes, often ones that are small and subtle. The adversarial nature is (or can be) beneficial in this case.
Regardless of who invents AGI, it affects the entire world.
Regardless of who invents AGI, you can't put the genie back in the bottle (or rather, it's a great struggle that's extremely costly, if even possible).
Regardless of who invents AGI, the other will have it in a matter of months
But this doesn't work during the transition. During the development. "The button" here is for AGI. As in, when it's created and released.
Yeah, there’s no good choice here. You should be rooting for neither. Best case scenario is they destroy each other with as little collateral damage as possible.
All these tech billionaires or pseudo-billionaires basically believe that an enlightened dictatorship is the best form of governance. And of course they ought to be the dictator, or part of the board.
And still haemorrhaging money.
But… Why put Meta in that group?
I see Apple, Google, Microsoft, and Amazon as all effectively having operating systems. Meta has none, and it has failed to build one for cryptocurrency (Libra / Diem) or the metaverse.
Also, both Altman and Zuck leave a lot to be desired. Maybe not as much as Musk, but they both seem to be spineless against government coercion and neither gives me a sense that they are responsible stewards of the upside or downside risks of AGI. They both just seem like they are full throttle no matter the consequences.
American society. Those are uniquely products of the US, exported everywhere, and rightfully starting to get pushback. Unfortunately, later than it should have.
I wouldn't describe a team full of people who don't want to work 60-hour weeks as "eroded", 'cause like... that's six 10-hour days, leaving incredibly little time for family, chores, unwinding, etc. Once in a while maybe, but sustained, that'll just burn people out.
And also, by that logic, is every executive paid $5M+/yr at every company, or every person who's accumulated, say, $20M, also eroding their team? Or is that only applied to someone who isn't managing people, for some reason?
Same with a lot of the financial roles with comp distributions like this.
> Even in top-tier sports, many underperformers stick around for a couple years or a half-decade at seven or eight figure compensation before being shown the door.
This can happen in the explicit hope that their performance improves, not because it's unclear whether they are performing, and not generally over lapses in contract.
And if the team produces results on par with the best results being attained anywhere else on the planet, Zuck would likely consider that a success, not a failure. After all, what's motivating him here is that his current team is not producing that level of results. And if he has a small but nonzero chance of pushing ahead of anyone else in the world, that's not an unreasonable thing to make a bet on.
I'd also point out that this sort of situation is common in the executive world, just not in the engineering world. Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes. There's no evidence I'm aware of that this reduces executive or executive team performance. Really, the evidence is the opposite -- companies continue paying more and more to assemble the best executive teams because they find it's actually worth it.
"Established" != valid, and literally everyone knows that.
The executives you reference are never ICs and are definitionally accountable to the measured performance of their business line. These are not superstar hires the way that AI researchers (or athletes) are. The body in the chair is totally interchangeable so long as the spreadsheet says the right number, and you expect the spreadsheet performance to be only marginally controlled by the particular body in the chair. That's not the case with most of these hires.
It's false that execs are never ICs. Anyone who's worked in the upper-echelon of corporate America knows that. Not every exec is simply responsible 1:1 for a business line. Many are in transformation or functional roles with very complex responsibilities across many interacting areas. Even when an exec is responsible for a business line in a 1:1 way, they are often only responsible for one aspect of it (e.g., leading one function); sometimes that is true all the way up to the C-suite, with the company having literally only a single exception (e.g., Apple). In those cases, exec performance is not 1:1 tied to the business they are 1:1 attached to. High-performing execs in those roles are routinely "saved" and banked for other roles rather than being laid off / fired in the event their BU doesn't work out. Low-performing execs in those roles are of course very quickly fired / re-orged out.
If execs really were so replaceable and it's just a matter of putting the right number in a spreadsheet, companies wouldn't be paying so much money for them. Your claims do not pass even the most basic sanity check. By all means, work your way up to the level we're talking about here and then report back on what you've learned about it.
Re: performance management and "everyone knowing that", you're right of course -- that's why it's not an interesting point at all. :) I disagree that established techniques are not valid -- they work well and have worked for decades with essentially no major structural issues, scaling up to companies with 200k+ employees.
I said they are accountable to their business line -- they own a portfolio and are accountable for that portfolio's performance. If the portfolio does badly, it means nearly by definition that the executive is doing badly. Like an athlete, that doesn't mean they're immediately put to the streets, but it also is not ambiguous whether they are performing well or not.
Which also points to why performance management methods are not valid, i.e., not a high-sensitivity, high-specificity measure of an individual executive's actual personal performance: there are obviously countless external variables that bear on the outcome of a portfolio. But nonetheless, for the business's purposes, it doesn't matter, because the real purpose of performance management methods is to have a quasi-objective rationalization for personnel decisions that are actually made elsewhere.
Perhaps you can mention which performance management methods you believe are valid (high-specificity and high-sensitivity measures of an individual's personal performance) in AI R&D?
"Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes". In this group, what percentage are ICs? Sure there are aberrational celebrity hires, of course, but what you are pointing to is the norm, which is not celebrity hires doing IC work.
> If execs really were so replaceable... companies wouldn't be paying so much money for them
High-level executives within the same tier are largely substitutable - any qualified member of this cohort can perform the role adequately. However, this is still a very small group of people ultimately responsible for huge amounts of capital and thus collectively can maintain market power on compensation. The high salaries don't reflect individual differential value. Obviously there are some remarkable executives and they tend to concentrate in remarkable companies, by definition, and also by definition, the vast majority of companies and their executives are totally unremarkable but earn high salaries nonetheless.
The researchers being hired here are just as accountable as the execs we're talking about -- there is a clear outcome that Zuck expects, and if they don't deliver, they will be held accountable. I really, genuinely don't see what's so complicated about this.
Accountability to a business line does not imply that if that business does poorly then every exec accountable to it was doing poorly personally. I'm actually a personal counter-example and I know a number of others too. In fact, I've even seen execs in failing BUs get promoted after the BU was folded into another one. Competent exec talent is hard to find (learning to operate successfully at the exec level of a Fortune 50 company is a very rarefied skill and can't be taught), and companies don't want to lose someone good just because that person was attached to a bad business line for a few months or years.
Something important to understand about the actual exec world is that executives move around within companies constantly -- the idea that an executive is tied to a single business and if something goes wrong there they must have sucked is just not true and it's not how large companies operate generally. When that happens, the company will figure out the correct action for the business line (divest, put into harvest mode, merge into another, etc., etc.), then figure out what to do with the executives. It's an opportunity to get rid of the bad ones and reposition the top ones for higher-impact work. Sometimes you do have to get rid of good people, though, which is true of all layoffs -- but even with execs there's a desire to avoid it (just like you'd ideally want to retain the top engineers of a product line being shuttered).
It is very easy to mistake _feeling_ productive and close with your coworkers for _being_ productive. That's why we can't rely on our feelings to judge productivity.
Why would they do that? There is absolutely no reason to overwork.
Good!
I am not saying exactly that they don't love their families... but it's not necessarily a priority over glory, more money, or being competitive. And if the relationship is healthy and built on solid foundations, usually the partner knows what they're getting into and accepts the other person (children, on the other hand, had no choice).
It's a weird take to tie this up with team morale, though.
I work at OAI, but I'm speaking for myself here. Sam talks to the company, sometimes via Slack, more often in company-wide meetings, all the time. Way more than any other CEO I have worked for. This leaked message is one part of a long, continuing conversation within the company.
The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are. It certainly has humbled my own reactions to news. (In this particular instance I don't think there's so much right and wrong but more that I think if you had actually been in the room for more of the conversation you'd probably feel different.)
Btw Sam has tweeted about an open source model. Stay tuned... https://x.com/sama/status/1932573231199707168
Until the tide turns.
Or simply they don’t see the whole picture because they’re not customers or business partners.
I’ve seen Oracle employees befuddled to hear negative opinions about their beloved workplace! “I never had to deal with the licensing department!”
Like, seriously, I've seen first-hand how comments like this can be more revealing out of context than in context, because the context is all internal politics and spin.
Sneaky wording, but it seems like no: Sam has only talked about an "open weights" model so far, so most likely not "open source" by any existing definition of the term, but rather a custom "open-but-legal-dept-makes-us-call-it-proprietary" license. Slightly ironic, given the whole "most HN posters are confidently wrong" part right before ;)
Although I do agree with you overall: many stories are sensationalized, partial stories always lack a lot of context, and plenty of HN users comment on stuff they don't actually know much about, but put it in a way that makes it seem like they do.
1. The model code (pytorch, whatever)
2. The pre-training code
3. The fine-tuning code
4. The inference code
5. The raw training data (pre-training + fine-tuning)
6. The processed training data (which might vary across various stages of pre-training and fine-tuning)
7. The resultant weights blob
8. The inference inputs and outputs (which also need a license; see also usage limits like O-RAIL)
9. The research paper(s) (hopefully the model is also described in literature!)
10. The patents (or lack thereof)
A good open model will have nearly all of these made available. A fake "open" model might only give you two of ten.
It's nice to also know what the training data is, and it's even nicer to be aware of how it's fine-tuned etc., but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
Yeah? Try me :)
> but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
Sure, that's cool and all, and I welcome it. But it's getting really tiresome to see huge companies, which probably depend on actual FOSS, constantly get it wrong; that devalues all the other FOSS work going on, since they want to ride that wave instead of just being honest about what they're putting out.
If Facebook et al. could release compiled binaries from closed source code but still call those binaries "open source", and call Facebook as a whole "open source" because of that, they would. But obviously everyone would push back on that, because that's not what we know open source to be.
Btw, you don't get to "run it as you like": give the license plus the acceptable use policy a read-through, and then compare what you're "allowed" to do with actual FOSS licenses.
This is so true. And not confined to HN.
Having been behind the scenes of an HN discussion about a security incident, with accusations flying about incompetent developers: the true story was that the lead developers knew of the issue, but it was not prioritised by management and was pushed down the backlog in favour of new (revenue-generating) features.
There is plenty of nuance to any situation that can't be known.
No idea if the real story here is better or worse than the public speculation though.
To most people, I'd think this is mainly for entertainment purposes, i.e. 'palace intrigue', and the actual facts don't even matter.
> The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
That's good spin, but coming from someone with an anonymous profile, how do we know it's true? (This is a general thing on HN: people say things, but you don't know how legit what they say is, or whether they are who they say they are.)
> What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
What conclusions, exactly? Again, do most people really care about this (reading the story), and does it impact them? My guess is it doesn't at all.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in.
This is a well-known trope, discussed in other forms, e.g. 'the NY Times story is wrong; you move on to the next story and believe it': https://www.epsilontheory.com/gell-mann-amnesia/
My profile is trivially connected to my real identity, I am not anonymous here.
I am not seeing how it is at all.
Not only that, but how can we know if his interpretation or "feelings" about these discussions are accurate? How do we know he isn't looking through rose-tinted glasses like the Neumann believers at WeWork? OP isn't showing the missing discussion, only his interpretation/feelings about it. How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.
I agree with that of course.
Some topics (and some areas where one could be an expert in) are much more prone to this phenomenon than others.
Just to give a specific example that suddenly comes to mind: Grothendieck-style Algebraic Geometry is not particularly prone to people confidently posting wrong stuff about it on HN.
Generally (to abstract from this example [pun intended]): I guess topics that
- take an enormous amount of time to learn,
- where "confidently bullshitting" will not work because you have to learn some "language" of the topic very deeply
- where even a person with some intermediate knowledge of the topic can immediately detect whether you are using the "grammar" of the "technical language" very wrongly
are much more rarely prone to this phenomenon. It is no coincidence that in the last two points I draw comparisons to (natural) languages: it is not easy to bullshit your way through a live interview claiming you know some natural language well if your counterpart has at least some basic knowledge of that language.
In the offline world there is a big social cost to this kind of behavior. Platforms haven't been able to replicate it. Instead they seem to promote and validate it. It feeds the self esteem of these people.
There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
HN is the digital water cooler. Rumors are a kind of social currency: in the capital sense, in that they can be leveraged and have a time horizon on their exchange value; and in the timeliness/recency-biased sense, in that hot gossip is a form of information that wants to be free, which in this context means it has more value when shared, and that value is tapped into by doing so.
Leaks are done for a reason: either because the leaker agrees with what's leaked, really disagrees with it, or wants to feel big because they're a broker of juicy information.
Most of the time, leaks are an attempt to stop something stupid from happening, or to highlight where upper management chose to ignore something for a gain elsewhere.
Other times they happen because the person is being a prick.
Sure, it's a tiny part of the conversation, but in the end, if you've got to the point where your employees are pissed off enough to leak, that's the bigger problem.
At the same time, all I need to know about Sam is in the company/"non-profit's" name, which in itself is now simply a lie.
The only obvious critique is that Sam Altman clearly doesn't believe this himself. He is legendarily mercenary and self-serving in his actions, to the point where, at least for me, it's impressive. He has also, demonstrably here, created a culture where his employees do believe they are part of a more important mission, and that is clearly different from just paying them a lot (which, of course, he also does).
I do think some skepticism should be had around that view the employees have, but I also suspect that was the case for actual missionaries (who of course always served someone else's interests, even if they personally thought they were doing divine work).
I'd say this is yet another example of bad headlines having negative information content, not leaks.
The delivery of the message can be milder and better than how it sounds in the chosen bits, but the overall picture kinda stays the same.
Notably, I don’t see him condemning Meta’s “poaching” here, just commenting on it. Compare this with, for example, Steve Jobs getting into a fight with Adobe’s CEO about whether they’d recruit each other’s employees or consider them to be off limits.
But I've also experienced that the outside perspective, wrong as it may be on nearly all details, can give a dose of realism that's easy to brush aside internally.
Yes, you can get the wrong impression from hearing just a snippet of a conversation, but sometimes that snippet is exactly what you needed to hear, out of context or not. Sam is not such a great human being that he should be placed on a pedestal and never have anything he says questioned. He's just an SV CEO trying to keep people thinking his company is the coolest thing. Once you stop questioning everything, you're in danger of having the kool-aid take over. How many times have we seen other SV CEOs with a "stay tuned" tweet that they just hope nobody questions later?
>if you had actually been in the room for more of the conversation you'd probably feel different
If you haven't drunk the kool-aid, you might feel differently as well.
SAMA doesn't need your assistance white knighting him on the interwebs.
Another wrote: "Yes we're quirky and weird, but that's what makes this place a magical cradle of innovation. OpenAI is weird in the most magical way. We contain multitudes."
I thought I was reading /r/linkedinlunatics.
If missionaries could be mercenaries, they would.
Ultimately why someone chooses to work at OpenAI or Meta or elsewhere boils down to a few key reasons. The mission aligns with their values. The money matches their expectations. The team has a chance at success.
The orthogonality is irrelevant because nobody working for OpenAI or Meta is a missionary.
> ...on one hand, the mercenaries: they have enormous drive, they're opportunistic, and like Andy Grove they believe only the paranoid survive; they're really sprinting for the short run. But that's quite different, I suggest to you, from the missionaries, who have passion, not paranoia, who are strategic, not opportunistic, and who are focused on the big idea and on partnerships. It's the difference between focusing on the competition or the customer.
> It's the difference between worshiping at the altar of Founders or having a meritocracy where you get all the ideas on the table and the best ones win. It's the difference between being exclusively interested in the financial statements or also in the mission statements; between being a loner on your own or being part of a team; between an attitude of entitlement versus contribution; or, as Randy puts it, between living a deferred life plan versus a whole life that at any given moment is trying to work. It's the difference between just making money (anybody who tells you they don't want to make money is lying) or making money and making meaning also. My bottom line: it's the difference between success, or success and significance.
But also I imagine that it helps when you wish to stay neutral if people are afraid of what you could do if you were directly involved in a conflict.
You can spend time making a good product and getting breakthroughs, and all it takes is for Meta to poach your talent, and with it your IP. What do you have left?
But also, every employee getting paid at Meta can come out with the resources to start their own thing. PayPal didn't crush fintech: it funded the next twenty years of startups.
What I find most troubling in this reaction is how hostile it is to the actual talent. It accuses anyone who is even considering joining Meta in particular, or any competitor in general, of being a mercenary. It's using the poisoning-the-well fallacy to shield OpenAI from any competition. And why? Because he believes he is on a personal mission? This gives off "some of you may die, but it's a sacrifice I am willing to make" energy. Not cool.
Capital is supposed to be mobile. Economic theory is based on the idea that capital should flow to its best use (e.g., investors should withdraw it from companies that aren't generating sufficient returns and provide it to those who are) including being able to flow across international borders. Labor is restricted from flowing across international boundaries by law and even job hopping within a country is frowned upon by society.
We have lower rates of taxation on capital (capital gains and dividends) than on labor income because we want to encourage investment. We're told that economic growth depends on it. But doesn't economic growth also depend on people working and shouldn't we encourage that as well?
There's an entire industry dedicated to tracking investment yields for capital and we encourage the free flow of this information "so that people can make informed investing decisions". Yet talking about salaries with co-workers is taboo for some reason.
The list goes on and on and on.
It's just about rich people wanting a bigger share of the pie and having enough money to buy the policies they prefer.
Similarly, we have laws that guarantee our right to talk with our coworkers about our income, but the penalties have been completely gutted. And the penalty for companies illegally colluding on salary, by all telling a third party what they are paying people and then using that data to decide how much to pay, is... nada.
We need to figure out how to have people who work for a living fund political campaigns (either directly with money or by donating our time), because this alternative of a badly-compressed jpeg of an economy sucks.
The contrast between SpaceX and the defense primes comes to mind… between Warren Buffett and a crypto pumper-and-dumper… between a steady career at (or dividend from) IBM and a Silicon Valley startup dice-roll (or the people who throw money into said startups knowing they’re probably going to lose it)
Yet our government is descending into authoritarianism and AI is fueling rising data center energy demands exacerbating the climate crisis. And that is to say nothing of the role that AI is playing in building more effective tools for population control and mass surveillance. All these things are happening because the governance of our future is handled by the ultra-wealthy pursuing their narrow visions at the expense of everyone else.
Thus we have no reason to expect good “governance” at the hands of this wealthy elite and we only see evidence to the opposite. Altman’s skill lies in getting people to believe that serving these narrow interests is the pursuit of a higher purpose. That is the story of OpenAI.
Singapore, if anything, is evidence against your claim about the UK. Singapore has multiple cultures, but it does not promote multi-culturalism as it is generally understood in the UK. Their language policy is:
1. Everyone has to speak reasonably good English.
2. Languages other than English, Malay, Mandarin, and Tamil are discouraged.
https://en.wikipedia.org/wiki/Language_planning_and_policy_i...
The language policy is more like the treatment of Welsh in the 19th century, or Sri Lanka's attempt to impose a single national language from the 60s to the 80s (but more flexible as it retains more than one language). A more extreme (because it goes far beyond language) and authoritarian example would be contemporary China's suppression of minority cultures. I do not think anyone would call any of those multiculturalism.
The reason for surveillance and censorship in the UK is very different. It is a general feeling in the ruling class that the hoi polloi cannot be trusted and that centralised decision making is preferable to local or individual decision making. The current Children's Wellbeing and Schools Bill is a great example: the central point is that the authorities will make more decisions for people and organisations, and decide what they can do, to a greater extent than at present.
I'm seeing more and more people using this kind of rhetoric in the last few years. Extremely worrying.
That seems like a wild claim to make without any supporting evidence. Even Switzerland can be used to disprove it, so I'm not sure where you're coming from that assuredly.
The UK isn't totalitarian in the same sense that even Singapore is, let alone actually totalitarian states like Eritrea, North Korea, China, etc.
Switzerland has one of the highest percentage of foreigners in Europe, four official languages, a decentralized political system, very frequent direct democratic votes and consensus governance (no mayors, governors and prime ministers, just councils all the way down).
Switzerland is set up in such a way that it absorbs and integrates many different cultures into a decentralized, democratic system. One of the primary historical causes for this is our belligerent past. I'd like to think that this was our only way out of constantly hitting each other over the head.
It is also clear Sam Altman and OpenAI’s core values remain intact.
So wrong on so many levels - what a time to be alive.
I remember defending a hiring candidate who had said he got into his specialty because it paid better than others. We hired him and he was great, worth his pay. No one else on the hiring team could defend a bias against someone looking out for themselves.