Posted by adrianhon 23 hours ago

Sam Altman may control our future – can he be trusted?(www.newyorker.com)
1425 points | 586 comments
ronanfarrow 21 hours ago|
Ronan Farrow here. Andrew Marantz and I spent 18 months on this investigation. Happy to answer questions about the reporting.
cs702 20 hours ago||
Thank you for coming on HN and offering to answer questions.[a]

This is a fantastic piece, very timely, evidently well-researched, and also well-written. Judging by the little that I know, it's accurate. Thank you for doing the work and sharing it with the world.

OpenAI may be in a more tenuous competitive position than many people realize. Recent anecdotal evidence suggests the company has lost its lead in the AI race to Anthropic.[b]

Many people here, on HN, who develop software prefer Claude, because they think it's a better product.[c]

Is your understanding of OpenAI's current competitive position similar?

---

[a] You may want to provide proof online that you are who you say you are: https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...

[b] https://www.latimes.com/business/story/2026-04-01/openais-sh...

[c] For example, there are 2x more stories mentioning Claude than ChatGPT on HN over the past year. Compare https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru... to https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

ronanfarrow 16 hours ago|||
Thank you for this, very much appreciate the thoughtful response.

The piece captures some of the anxieties within OpenAI right now about their competitive position. This obviously ebbs and flows but of late there has been much focus on Anthropic's relative position. We of course mention the allegations of "circular deals" and concerns about partners taking on debt.

cs702 14 hours ago|||
Thank you. Yes, I saw that. The company's always been surrounded by endless talk about insane hype, speculative bubbles, and financial engineering. I wasn't asking so much about that.

I was asking more about your informed view on how OpenAI's technology, products, and roadmap are perceived, particularly by customers and partners, in comparison to those of competitors.

If you have an opinion about that, everyone here would love to hear about it.

globalnode 2 hours ago|||
At this point even Google's AI search results are better than GPT - obviously this isn't for full programs, but if you know what you're doing and just want a snippet, that's all you need.
Ericson2314 9 hours ago||||
Ronan Farrow's expertise is investigations into elite amorality, not evaluating technical products. Why are you asking this question?
cs702 8 hours ago|||
I didn't ask him to evaluate them. I asked him how customers and partners perceive them.

He's had so many conversations that he likely has a sense of how perceptions of the company and its offerings have changed.

I'm curious.

bloppe 5 hours ago|||
Much of the article and the general palace intrigue is predicated on the idea that OpenAI has a singularly revolutionary product. If it later turns out to be a commodity, or OpenAI is simply outcompeted nonetheless, then the idea that Sam Altman's personal shortcomings are something to stress about would seem quaint. Just another hubristic tech billionaire acting in bad faith doesn't grab attention the same way as someone "controlling your future".
irishcoffee 9 hours ago|||
My guess is that the answer to your question (a fantastic question) is that nobody knows. I remember having the same thoughts when Covid was first “arriving”, if you will: we wanted people in the know to throw us a nugget of information, and they just didn't know.

As it turns out, and what I’m kind of going with for this LLM shit, is that it’ll play out exactly how you think it will. The companies are all too big to fail, with billionaire backers who would rather commit fraud than lose money.

philipallstar 1 hour ago||
How would fraud help here? Don't they just need scale of lots of customers paying a little bit? How do you fraud your way into that?
keepamovin 1 hour ago|||
If you were in charge of deciding what should be done with Sam Altman, what would you choose?
unsupp0rted 12 hours ago||||
Many of us prefer OpenAI's Codex, because we think it's a better product.

No comment on the CEO: I just find the product superior in everything but UI/UX and conversation. It's better at quality code.

mliker 12 hours ago|||
Who is “us”? It does seem that some scientists prefer Codex for its math capabilities, but when it comes to general frontend and backend construction, Claude Code is just as good, and possibly made better by its extensive Skills library.

Both Codex and Claude Code fail when it comes to extremely sophisticated programming for distributed systems.

keldaris 9 hours ago|||
As a scientist (computational physicist, so plenty of math, but also plenty of code, from Python PoCs to explicit SIMD and GPU code, mostly various subsets of C/C++), I can confirm - Codex is qualitatively better for my use cases than Claude. I keep retesting them (not on benchmarks, I simply use both in parallel for my work and see what happens) after every version update, and ever since 5.2 Codex seems further and further ahead. The token limits are also far more generous (and it matters, I found it fairly easy to hit the 5h limit on max-tier Claude), but mostly it's about quality - the probability that the model will give me something useful I can iterate on, as opposed to discard immediately, is much higher with Codex.

For the few times I've used both models side by side on more typical tasks (not so much web stuff, which I don't do much of, but more conventional Python scripts, CLI utilities in C, some OpenGL), they seem much more evenly matched. I haven't found a case where Claude would be markedly superior since Codex 5.2 came out, but I'm sure there are plenty. In my view, benchmarks are completely irrelevant at this point, just use models side by side on representative bits of your real work and stick with what works best for you. My software engineer friends often react with disbelief when I say I much prefer Codex, but in my experience it is not a close comparison.

physicsguy 2 hours ago|||
I've tried both on similar tasks and haven't found such a clear-cut difference. I still find neither is able to fully implement a complex algorithm I worked on in the past correctly with the same inputs. Not sharing exactly the benchmark I'm using, but think about something for improving the performance of N^2 operations that are common in physics and you can probably guess the train of thought.
ricksunny 7 hours ago|||
>As a scientist (computational physicist,

Is there one that you prefer for, i dunno, physics?

zeroxfe 11 hours ago||||
I'm in that camp -- I have the max-tier subscription to pretty much all the services, and for now Codex seems to win. Primarily because 1) long horizon development tasks are much more reliable with codex, and 2) OpenAI is far more generous with the token limits.

Gemini seems to be the worst of the three, and some open-weight models are not too bad (like Kimi k2.5). Cursor is still pretty good, and copilot just really really sucks.

the__alchemist 8 hours ago||||
Claude Code, Codex, and Cursor are old news. If you're having problems, it's because you're not using the latest hotness: Cludge. Everyone is using it now - don't get left behind.
outside1234 6 hours ago||
Cludge has been left behind by Clanker, that’s the new hotness. 45B valuation!
unsupp0rted 12 hours ago||||
Us = me and say /r/codex or wherever Codex users are. I've tried both, liked both, but in my projects one clearly produces better results, more maintainable code and does a better job of debugging and refactoring.
sampullman 12 hours ago|||
That's interesting, I actively use both and usually find it to be a toss up which one performs better at a given task. I generally find Claude to be better with complex tool calls and Codex to be better at reviewing code, but otherwise don't see a significant difference.
SOLAR_FIELDS 10 hours ago|||
If you want to find an advocate for Codex that can give a pretty good answer as to why they think it's better, go ask Eric Provencher. He develops https://repoprompt.com/. He spends a lot of time thinking in this space and prefers Codex over Claude, though I haven't checked recently to see if he still has that opinion. He's pretty reachable on Discord if you poke around a bit.
hirako2000 1 hour ago||
Quite irrelevant what factions think. This or that model may be superior for these and those use cases today, and things will flip next week.

Also, RLHF means that models produce output according to certain human preferences, so it depends on what set of humans provided the feedback and what mood they were in.

aswanson 11 hours ago|||
Any difference in performance on mobile development?
sampullman 10 hours ago||
For that I'm not so sure. I tried both in early 2025 and was disappointed in their ability to deal with a TCA-based app (iOS) and Jetpack Compose stuff on Android, but I assume Opus 4.6 and GPT 5.4 are much better.
rocketpastsix 10 hours ago|||
Yeah, I'm not in this "us" you speak of.
Finbel 3 hours ago||
Of course you're not one of "us" if you're one of "them".
lhl 4 hours ago||||
As some other people mentioned, using both/multiple is the way to go if it's within your means.

I've been working on a wide range of projects and I find that the latest GPT-5.2+ models seem to be generally better coders than Opus 4.6; however, the latter tends to be better at big-picture thinking, structuring, and communicating, so I tend to iterate through Opus 4.6 max -> GPT-5.2 xhigh -> GPT-5.3-Codex xhigh -> GPT-5.4 xhigh. I've found GPT-5.3-Codex is the most detail-oriented, but not necessarily the best coder. One interesting thing: for my high-stakes project, I have one coder lane but use all the models to do independent review, and they tend to catch different subsets of implementation bugs. I also notice huge behavioral changes based on changing AGENTS.md.

In terms of the apps, while Claude Code was ahead for a long while, I'd say Codex has largely caught up in ergonomics, and in some things, like the way it lets you inline or append steering, I like it better now (and in one area it's far, far ahead: compaction is night and day better in Codex).

(These observations are based on about 10-20B/mo combined cached tokens, human-in-the-loop, so heavy usage and most code I no longer eyeball, but not dark factory/slop cannon levels. I haven't found (or built) a multi-agent control plane I really like yet.)

baq 1 hour ago||
This is the way. Eg. IME Gemini is really damn good at sql.
baq 1 hour ago||||
I’m one of those ‘us’, Claude’s outputs require significant review and iteration effort (to put it bluntly they get destroyed by gpt and Gemini). I’m basically using sonnet to do code search and write up since it is a better (more human-like) writer than gpt and faster and more reliable than gemini, but that’s about it.
zem 10 hours ago||||
I've found claude startlingly good at debugging race conditions and other multithreading issues though.
josephg 10 hours ago||
My rule of thumb is that it's good for anything "broad", and weaker for anything "deep". Broad tasks are tasks which require working knowledge of lots of random stuff. It's bad at deep work - like implementing a complex, novel algorithm.

LLMs aren't able to achieve 100% correctness on every line of code. But luckily, 100% correctness is not required for debugging, so it's better at that sort of thing. It's also (comparatively) good at reading lots and lots of code. Better than I am - I get bogged down in details and I exhaust quickly.

An example of broad work is something like: "Compile this C# code to WebAssembly, then run it from this Go program. Write a set of benchmarks of the result, and compare it to the C# code running natively, and to this Python implementation. Make a chart of the data and add it to this LaTeX code." Each of the steps is simple if you have expertise in the languages and tools, but a lot of work otherwise. For me to do that, I'd need to figure out C# WebAssembly compilation and Go wasm libraries. I'd need to find a good charting library. And so on.

I think it's decent at debugging because debugging requires reading a lot of code, there are lots of weird tools and approaches you can use to debug something, and it's not mission-critical that every approach works. Debugging plays to the strengths of LLMs.

7thpower 11 hours ago||||
Not a scientist and use codex for anything complex.

I enjoy using CC more and use it for non coding tasks primarily, but for anything complex (honestly most of what I do is not that complex), I feel like I am trading future toil for a dopamine hit.

DeathArrow 2 hours ago|||
Many paying customers say that Anthropic degraded the capability of Opus and Claude Code in recent months and that outcomes are worse. There are even discussions on HN about this.

Last one is from yesterday: https://news.ycombinator.com/item?id=47660925

bko 9 hours ago||||
I also find Codex much more generous in terms of what you get with a Pro ($20/mo) subscription. I use it pretty much non-stop and I have yet to hit a limit. Weekly reset is much better as well.
KaiserPro 1 hour ago||||
GPT/Claude/Gemini are pretty interchangeable at this point.
baq 1 hour ago||
Absolutely not the case. They're complementary.
thaoanh404 1 hour ago||||
I find myself being more productive with Codex/Copilot on coding tasks, but Claude does seem to be better at planning.
DeathArrow 2 hours ago||||
I prefer GLM 5.1 and MiniMax 2.7. With a better harness like Forge Code, I have better results for way less money than by using GPT and Opus.
shevy-java 2 hours ago||||
Does this work for people? To me having a "better product" would be completely irrelevant if the use cases are evil.
aaa_aaa 5 hours ago||||
Shill talk
enraged_camel 11 hours ago|||
[flagged]
brightbeige 20 hours ago||||
He’s replying in this Twitter thread - perhaps someone with an account can ask there and link his comment here?

https://xcancel.com/RonanFarrow/status/2041127882429206532#m

jamiequint 15 hours ago||
Here is the actual link, not a link to some weird third-party site that can't be trusted.

https://x.com/RonanFarrow/status/2041127882429206532

rounce 10 hours ago|||
FYI xcancel is just a mirror that allows reading replies without needing an account.
SwellJoe 14 hours ago|||
Whereas X can be trusted?
jamiequint 10 hours ago||
Yes? It's the data source, not a third-party. How is this even a question?
minimaxir 7 hours ago|||
There's pedantic, and then there's needlessly pedantic.

xcancel is a valid workaround for X links on Hacker News and is sufficient for original attribution.

SwellJoe 7 hours ago|||
X restricts what you can view without logging in. Many folks don't want to log in to X, for obvious reasons. Posting an xcancel link is kinda like folks posting various `archive` URLs to bypass paywalls, work around overloaded servers, etc. That's an extremely common practice here that usually goes without comment.
ed 11 hours ago||||
It's worth noting Codex has 2x more stories than Claude https://hn.algolia.com/?query=codex
cloverich 6 hours ago||
But by page 5, those stories have around 50-60 karma, while Claude's page five is still 500+.

(I found your comment surprising based on my daily HN reading recollection - I mostly read the top N daily and feel I only occasionally see Codex stories.)

ATMLOTTOBEER 8 hours ago||||
Yeah we moved to Claude a few months ago, mostly because the devs kept using it anyway. Altman stuff is interesting but at the end of the day you just go with whatever tool works
georgemcbay 17 hours ago|||
> You may want to provide proof online that you are who you say you are

Unfortunately it probably doesn't even matter here on HN considering how brigaded down this story is predictably getting.

But yeah, it was a fantastic piece.

dang 12 hours ago|||
It wasn't getting "brigaded down" - it set off a software penalty called the flamewar detector. I turned that off as soon as I saw it.
ronanfarrow 16 hours ago|||
Fair request, here you go: https://x.com/RonanFarrow/status/2041203911697068112
taurath 17 hours ago|||
The statements around the sexual abuse allegations seemed the most puzzling to me – his sister’s allegations, and the claims of underage partners stemming from his tendency to hook up with younger partners. It does seem like this piece gives him a pretty clean bill of health in that matter. Would you be able to talk about how you investigated?

Did you do any extra investigations into Annie’s allegations? It feels to me like the unstated conclusion is recovered memory can’t be trusted, which is a popular understanding but a very wrong one put out by the now defunct and discredited False Memory Syndrome Foundation. It was founded by the parents of the psychologist who coined DARVO, directly in reaction to her accusing them of abuse.

Dissociation is real (I have a dissociative disorder, and abuse I “recovered” but did not remember for much of my adolescence and early adulthood has been corroborated by third parties) and many CSA survivors have severe memory problems that often don’t come to a head until adulthood. I know you didn’t dismiss her claim, but the way the public tends to think about recovered memories is shaped primarily by that awful organization.

ronanfarrow 16 hours ago|||
All fair points on trauma and memory.

As noted in the piece, we spent months talking to Altman's partners and what we found and didn't is as described.

taurath 15 hours ago|||
Thanks for the response! Cheers, I just fully reread the piece and appreciate your reporting.
girvo 12 hours ago|||
It's super neat to see you here on HN taking questions, kudos :)
gowld 9 hours ago||||
That's not a fair assessment. "False memory syndrome" and "repressed/recovered memory" are both outside scientific mainstream consensus.
taurath 3 hours ago||
Correct, because there truly isn’t a great way to answer with certainty - there was evidence in the 80s of suggestive techniques being used by poorly trained psychologists, and there are many people who remember and then find corroboration.

There’s a lot more who remember and may not have corroboration more than with themselves and among their close friends or healthcare provider. Part of CSA is usually there is very little a kid can do about evidence, as the power discrepancy is far too much. Often with rich abusers, the exact same process occurs. Perps pick victims who are vulnerable or controllable, and constantly seek power and domination. Nothing to do with the boardroooms or batch of ceo billionaires running the economy right now certainly.

fontain 7 hours ago||||
I am very sympathetic to the situation you describe. I certainly think it is possible that Annie is describing something that happened. I think the author did a fair job of representing the allegations, finding the right balance between disclosing that they were unable to corroborate the allegations and not dismissing them.

That said, "recovering" memories as a therapy does not pass any sort of sniff test and it doesn't take a concerted effort to discredit the concept. Human memory is very malleable. Patients with mental health issues (which could predate abuse, or could be caused by abuse) are often in search of answers and that makes them very vulnerable.

Could a memory be buried deep in our subconscious, forgotten, only to return to the surface later? Sure, we all forget things and then remember them when triggered by something, whether that's a smell or sound or something else entirely. But can we engineer that process, with any degree of reliability? How can we even begin to reliably reverse engineer the triggers?

I think it is also important to keep in mind that Annie is rich, and the health care available to rich people can be very predatory. There are endless examples of nonsense therapies for all types of health, from ear seeds to treatments for "chronic Lyme".

Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them. If Annie's memories were triggered in adulthood, sure, that's really no different than remembering something... but "recovered"? That is something else entirely.

Correct me where I'm wrong, I'd like to learn your perspective, maybe there's a missing piece.

taurath 2 hours ago||
> recovering" memories as a therapy

Recovered memory therapy was a discredited hypnotherapy that leaned heavily on suggestion and was often associated with fairly coercive interrogations during the 80s CSA panic - https://en.wikipedia.org/wiki/Day-care_sex-abuse_hysteria

> Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them.

Agree, though I think the mechanism can be a bit more towards the idea of a “recovery” of traumatic memory, even if the term as understood carries false connotations.

The concept you’re missing is dissociation, and dissociative disorders. In the 40s it was called just “hysteria”, and for many cases up to the late 90s an extreme form was called multiple personality disorder, now DID (dissociative identity disorder). https://en.wikipedia.org/wiki/Dissociative_disorder

Not everyone who goes through traumatic events will respond via dissociation of identity; indeed, not all people are equally capable of developing a dissociative disorder. Two people may go through very similar events (say, surviving a war as siblings or even twins) and one might dissociate the traumatic experience while the other might not. Dissociation doesn't work quite like you might imagine from a term like “multiple personalities” - that happens in some extreme cases, but think of identity dissociation as an adaptive response to events or situations that are paradoxical (especially to a child's mind), extreme, or traumatic, and that can't be escaped or handled by other coping mechanisms.

Dissociation is on a sort of spectrum, where at one side you have common experiences like zoning out when on a common commute, and on another you have separated self-parts/alter egos to handle wildly different situations.

It’s a mechanism I frankly wasn’t aware of, and one I’m not sure I would have been able to fully believe or empathize with, but for me, getting a diagnosis of a dissociative disorder changed my life and made a thousand things about me that I could never figure out make sense. The “model”, as it was put to me at the time, responded to experiment, and recognizing that I was dealing with pretty constant, heavy dissociation and different self-states with memory deficiencies helped me figure out how to work through a ton of problems that had been really intractable for me. I’m finally, after decades of ineffective therapy, able to really understand how I work.

Idk how to talk about it without sounding like I’m trying to sell the idea. But yeah, it was a mind-blowing thing to me. Over the last 20 years especially, a ton of truly respectable research has been done, and the increase in efficacy of treatments for dissociation, and trauma generally, is one of the unsung advancements for humanity in the last decade. I think the number is that around 3-6% of people meet the clinical criteria for a dissociative disorder - OSDD, DID, DPDR, or dissociative amnesia. That's 5x more people than have schizophrenia, 5x more than have red hair.

My favorite public clinical resource I point to people is the CTAD Clinic YouTube - https://youtube.com/@thectadclinic?si=5AyR5H8K8Cf2sn3C

Pretty easy to understand explainers from a clinician in the UK.

For a more clinical and study approach this one is the currently best put together research IMO: https://www.taylorfrancis.com/books/edit/10.4324/97810030573...

The TLDR is dissociation is an important mechanism that most people don’t know about but has had a wave of research and study and is much more common than one might expect. The sad part is how often dissociative disorders correlate w abuse.

fontain 1 hour ago||
Thank you very much for the details.

I’m reading more now and I think the missing piece for me is the distinction between “repressed” memories and “recovered” memories.

I understood repressed memories to be an accepted idea, distinct from “recovered” memories. I am reading that the people mentioned in your original comment rejected the idea of repressed memory altogether, and believed that everything traumatic must be remembered.

So, to me, reading that someone “recovered” memory reads like they went through a specific type of therapy intended to “find” these repressed memories. Whereas to you, “recovered” memories could be repressed memories that came back to the surface organically — whether at random, triggered, or through a therapy intended to deal with dissociation. Is that right?

hello_humans 11 hours ago|||
[flagged]
jzymbaluk 11 hours ago|||
Hi Ronan, thanks for the article and for answering questions.

My question is, how do you know when an enormous project like this, conducted over an 18-month time span is "done"? I assume you get a lot of leeway from editors and publishers on this matter. How do you make the decision to finally pull the trigger on publishing?

cm2012 9 hours ago|||
I just spent a while reading the article. I really appreciate you writing it. In my case, it made me like Sam Altman a lot more. But I was only able to conclude this because of all the evidence you took the time to put together. It paints the picture of someone trying to do something very difficult in a rapidly changing environment and under a lot of pressure, but still making the important choices and not shirking them.
ronanfarrow 9 hours ago||
Interesting to hear! While this hasn’t been a commonplace reaction, I think if I do my job right it should allow people to read the facts as they will, exactly like this. It’s strenuously designed to be fair and, where appropriate, even generous.
philip1209 9 hours ago|||
We talk about Sam Altman a lot. At this point he has a Hollywood movie in post-production, a book ("The Optimist"), and a seemingly endless stream of profiles. It feels intellectually lazy to keep researching the same guy when the industry is moving beyond him.

All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.

solenoid0937 8 hours ago|||
> whether the company that branded itself as the ethical AI lab actually is one

FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about 4 months.

Both of them tell me that this is not just marketing, that the company actually is ethical and safety conscious everywhere, and that this was the most surprising part about joining Anthropic for them. They insist the culture is actually genuine which is practically unicorn rarity in corporate America.

We have worked for FAANG so I know where they're coming from; this got me to drop my cynicism for once and I plan on interviewing with them soon. Hopefully I can answer this question for myself.

root_axis 8 hours ago|||
Yeah, every engineer in the bay area has a way of framing the business they work for as a benign force for good... Until they find themselves working somewhere else, then suddenly they have a lot to say about the unacceptable things going on there.

From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other bay area tech startup - more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self serving.

fwipsy 3 hours ago|||
My model is that Anthropic was founded by OpenAI engineers who self-selected for safety-consciousness. However, it's still subject to the same problem: power corrupts. I think they are better than OpenAI but they are definitely sliding.
JumpCrisscross 6 hours ago||||
> every engineer in the bay area has a way of framing the business they work for as a benign force for good

This isn't remotely true in my experience. The senior folks I know at Meta, for example, pretty much concede they're ersatz drug dealers.

rapnie 48 minutes ago||||
Indeed. The bad behavior is emergent, where most individual intentions are good. Good story, bad outcome.
solenoid0937 6 hours ago|||
TBH I have worked at multiple FAANG and I don't know anyone other than maybe new grads that actually drank the koolaid.

Certainly most of us know we are just in it for the money, and the soul-grinding profit machine will continue to grind souls for profit regardless of what we want.

So that's why it is surprising to me when my (fairly senior) grizzled ex-FAANG friends, that share the same view, start waxing poetic about Anthropic being different and genuine. I think "maybe it is" and decide to interview. IDK, I guess some part of me wants to believe that nice things can exist.

Bolwin 5 hours ago||||
I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms with their tech being used for war and slaughter, with only two very, very thin lines: mass surveillance of American citizens, and fully automated weaponry with their current models.

It only showed they were marginally more ethical than OpenAI and XAI which isn't saying much.

fwipsy 3 hours ago||
Anthropic has two principles they're willing to stand behind, even when it costs them. That's not a lot, but OpenAI only has one principle: look out for number one.
__alexs 1 hour ago||||
If you know even the basics of ethics, then such claims are clearly nonsense. There is no stable, context-independent ethical behaviour. This is a great example of the dangers of motivated reasoning.
jarek-foksa 9 minutes ago||||
> the company actually is ethical and safety conscious everywhere

I wonder what Anthropic tries to achieve by spreading such blatant lies with their bot accounts. I'm definitely not buying anything from a company so morally corrupt to smear the competition while claiming to be somehow "ethical". And I'm not talking just about this thread, it's a recurring pattern on Reddit.

DirkH 3 hours ago||||
I have multiple friends at Anthropic. I can second this. One thing I notice about Anthropic culture is that it is unusually kind.

So much so that I worry they won't be Machiavellian enough to survive. Hope I am wrong.

foolswisdom 8 hours ago||||
I think cynicism is deserved just from observing Dario's remarks.
hypersoar 7 hours ago|||
[flagged]
xvector 6 hours ago||
It might stick tbh. Their PBC+LTBT structure severely limits the power of shareholders. https://www.anthropic.com/news/the-long-term-benefit-trust
giwook 9 hours ago||||
There may be a reason why Altman is talked about a lot. This article in particular surfaces real information and new perspectives we've not heard in this level of detail before on some pretty significant topics that will be impacting you, me, and pretty much everyone we know not only today but well into the future.

You have a point in that Anthropic deserves some coverage too and that there are interesting perspectives that we've not heard of on that front either.

But just because that's true doesn't mean this article isn't very much relevant and needed.

Because it is.

freely0085 8 hours ago||
The New Yorker has given Anthropic plenty of coverage in issues earlier this year.
ronanfarrow 9 hours ago||||
For what it’s worth, the story, while focused on OpenAI, is not uncritical of Anthropic. It explores whether there is a wider race to the bottom in terms of safety, and erosion of even some of Anthropic’s commitments.
k1m 8 hours ago||||
After the US launched its attack on Iran, the ethical AI lab's CEO wrote: "Anthropic has much more in common with the Department of War than we have differences." - https://www.anthropic.com/news/where-stand-department-war
mptest 7 hours ago||
"how easy it is, for those of us who play no part in public affairs, to sneer at the compromises required of those who do" - robert harris

Not making any value judgements, but I can see how one might value their interpretability research higher than what the CEO says, in a time when the corrupt, criminal executive branch is muscling in on everything from what's written on currency to journalistic sources. I generally blame fascists before I blame those unable or unwilling to resist them. Though obviously, ideally, we'd all lock arms and, together through friendship, crush authoritarians and fascists.

morpheuskafka 2 hours ago|||
They are a private company. They have zero obligation to sell anything to any part of the government or military. The only reason they are involved in "public affairs" is because they want to profit from the government. Moreover, long before this DoW controversy, they had plenty of nationalist and anti-China rhetoric in their press releases, more so than the other AI firms.
whattheheckheck 6 hours ago|||
Seriously blame anyone other than the fucking abuser. These people
Nevermark 8 hours ago||||
We should stop talking about potential problems or perpetrators, when we have talked about them “enough”?

That would be irrational.

We should give air time to other problems?

I think everyone agrees with that.

You have managed to distill a surprisingly pure vintage of false dichotomy, from a near Platonic varietal of whataboutism.

basisword 8 hours ago||||
OP says they’ve been working on this for 18 months. Most of what you’ve said wasn’t the case until much more recently.
_HMCB_ 8 hours ago||||
[flagged]
easterncalculus 8 hours ago||||
[flagged]
xvector 9 hours ago|||
Normies don't know what an "Anthropic" is. They use ChatGPT. Particularly sharp normies might know that ChatGPT is made by OpenAI, and the sharpest might know that Sam Altman is the CEO.

Now, they may have heard the word "Anthropic" due to recent media coverage. But they don't know what it is and don't remember what it makes. The fact that all businesses use "Anthropic" is about as relevant to them as knowing the overseas shipping company for all the shit they buy off Amazon.

So articles about OAI will always produce more revenue for the media, because it's related to what normies actually use day to day.

sebmellen 10 hours ago|||
Ronan Farrow on Hacker News. Now I’ve seen everything.
ronanfarrow 8 hours ago||
I’ve really appreciated how substantive and polite the discourse here is, overall!
dang 7 hours ago|||
I'm a mod here and wanted to let you know 2 things: (1) I've marked your account with a beta feature that displays a colored line to the left of new comments (since you last viewed the page). It might help you keep track of this rather large thread.*

(2) I'm sorry the post was downranked off the frontpage for a while this afternoon. A software penalty kicks in when the discussion seems overheated ("flamewar detector") but I turned this off as soon as I became aware of it. We make a point of moderating HN less when a story is YC-related (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...) but as this goes against standard internet axioms, people usually assume the opposite.

(* And yes, any reader who wants this is welcome to email hn@ycombinator.com to ask - I haven't turned it on for everyone because I'm worried it would slow the site down. Also, it's a bit buggy and not only have I not had time to fix it, I've forgotten what the bugs are.)

tootie 6 hours ago|||
Not a question but just wanted to make sure you saw this:

https://theonion.com/anyone-else-have-those-weird-dreams-whe...

fblp 11 hours ago|||
Hi Ronan appreciate you being here. what would help you and others continue to do journalism like this? (including commenting on HN?)
ronanfarrow 8 hours ago||
This is a vast and tricky question. The business model has basically fallen out from under journalism, and especially this kind of labor-intensive investigative reporting. The media landscape is increasingly dominated by moneyed individuals and companies essentially buying up the discourse.

I would really suggest subscribing to and finding ways to amplify independent outlets and journalists, and encouraging others to do so.

fblp 8 hours ago|||
Got it! Any recommendations on who to subscribe to? Any personal links for you?

In developer communities often you can support individual developers or groups through a monthly subscription / donation on their github page or similar.

mplanchard 7 hours ago|||
Well, this piece was in The New Yorker, which is reasonably priced and regularly includes excellent investigative journalism. I get the physical copies, which can be too much to keep up with if you try to read everything, but it’s easy enough if you skim and just read the things that stick out as being of particular interest.
ilamont 2 hours ago||
The New Yorker also comes with Apple News+ subscriptions (part of an Apple One plan that many people get for extra iCloud storage) which further includes a number of top-tier and local news orgs such as the Wall Street Journal, LA Times, SF Chronicle, Times of London, etc.

The Sam Altman piece can be read here: https://apple.news/APTX4OkywRWeJXIL7b8a7zQ

t0lo 2 hours ago|||
Drop Site News, 404 Media, Boston Review, The Intercept, and Atavist are all very worth supporting.
ricksunny 6 hours ago|||
Treating quality investigative reporting like the scarce resource that it is: as one of the most well-known investigative reporters, can you shed any light on why Reuters would dedicate resources to commissioning investigative reporters to unmask Banksy (in a world where all-things-Epstein represents an unending source of investigative opportunities in the public interest)?
aragonite 9 hours ago|||
I had a question about reporting conventions. In the paragraph where Altman is said to have told Murati that his allies were "going all out" to damage her reputation, the claim is attributed to "someone with knowledge of the conversation" but the attribution is tucked inconspicuously into the middle of the sentence (rather than say leading upfront ("According to someone with knowledge of the conversation, Altman...")) and Altman's non-recollection appears only parenthetically.

As a reader, am I supposed to infer anything about evidentiary weight from these stylistic choices? When a single anonymous source's testimony is presented in a "declarative" narrative style like here (with the attribution in a less prominent position), should we read that as reflecting high confidence on your end (perhaps from additional corroboration not fully spelled out)? And does the fact that Altman’s non-recollection appears in parentheses carry any epistemic signal (e.g. that you assign it less evidentiary weight)? Or is that mostly a matter of (say) prose rhythm?

replytofarrow 9 hours ago||
[flagged]
antirealist 1 hour ago|||
Hi Ronan. TCatK is a phenomenal book, not only in exposing the wrongdoing of powerful people, but also in presenting the meta-issue of how hard it was to get the word out, and you handled it all with nuance. You're about as close as I have to a personal hero.

Long time HN lurker, made an account just to say that :)

euio757 7 hours ago|||
Nice biography from Loopt to OpenAI. Why no mention of the Worldcoin cryptocurrency https://x.com/sama/status/1451203161029427208 in this piece? Was there nothing interesting to report in that area?
shinryuu 4 hours ago||
It was mentioned, but not by name.
Akuehne 53 minutes ago|||
Great article.

Thank you for fielding questions. And please don't stop, your work is great.

tbagman 8 hours ago|||
Wonderful work and writing, Ronan -- I'm appreciative of your careful balance between objective fact-finding and synthesis.

For me, a big worry about AI is in its potential to further ease distorting or fabricating truth, while simultaneously reducing people's "load-bearing" intellectual skills in assessing what is true or trustworthy or good. You must be in the middle of this storm, given your profession and the investigations like this that you pursue.

Do you see a path through this?

f154hfds 6 hours ago|||
> in 2014, [Graham] had recruited Altman to be his successor as president.

> [Graham's] judgment was based not on Altman’s track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable.

One thing I don't understand is why Paul Graham offered YC to Altman if he knew how slippery he was.

sonofhans 6 hours ago|||
Perhaps your question answers itself.
logicallee 23 minutes ago|||
Your report is muckraking; it doesn't include anything positive. I was considering subscribing to the New Yorker but won't do so now.

For anyone else interested, you can see ChatGPT, Claude, Grok and Gemini summarize their article here:

https://www.youtube.com/live/xQj0Ftl7j88

There's nothing positive in it. The report isn't worth reading, and anyone who reads it will know less about Sam Altman than they did before they read it.

egonschiele 9 hours ago|||
Just wanted to say what an incredible person you are! Catch and Kill and the related reporting was awesome too!
ronanfarrow 8 hours ago||
This is so appreciated, thank you! These stories can honestly take a lot out of me so thoughtful reactions mean a lot.
cmiles8 21 hours ago|||
Great reporting.

Altman describes his shifting views as genuine good faith evolution of thinking. Do you believe he has a clear North Star behind all this that’s not centered on himself?

ronanfarrow 16 hours ago|||
The piece is an interrogation of this very question, at great length and with some nuance. I think what it does most usefully is scrutinize an array of different answers to the question.

My own impression after many hours of conversation is that he is identifying something of a true north star when he frames this around "winning." There are people in the story who talk about him emphasizing a desire for power (as opposed to, say, wealth). I think he probably also believes, to some extent, the story he tells that equates winning, and his gaining power, with a superabundant utopian future for all.

However, I think critics correctly highlight a tension between his statements about centering humanity writ large and his tilt into relentless accelerationism.

i7l 20 hours ago|||
(Other people's) money.
mplanchard 7 hours ago|||
Hi Ronan, absolutely wild to see you here in the belly of the beast.

I have not read the article yet, because I get the physical magazine and look forward to reading it analog. I therefore only have an inconsequential question.

I love the New Yorker’s house style and editorial “voice,” and I have always been curious about the editing process. I enjoyed the recent exhibit at the NYPL, which had some marked up drafts with editor feedback and author comments.

Did you find that your editors made significant changes to the voice of the piece, and/or do you find any aspects of their editing process particularly notable or unusual?

Can’t wait to read this one, and hope the HN crowd treats you well.

Uhhrrr 6 hours ago|||
The last couple sentences tie things up really nicely.
gib444 4 hours ago|||
As someone on a budget, how can I pay for good journalism when it so spread out across various (expensive) outlets?
input_sh 2 hours ago||
Paying for 1 is doing more than paying for 0.

It's not your responsibility to fund every single one; just find the one you like the most and subscribe to that.

tsunamifury 7 hours ago|||
I know why the cantilevered pool statement is there and why you mentioned it.

I’m sure you don’t know half of the totally fucked up things Sam did to get “revenge” for the slight of a leaking pool.

felixgallo 10 hours ago|||
This is brilliant work, guys. Did you get any pressure to soften or spike the story?
ronanfarrow 8 hours ago||
I won’t get into behind-the-scenes specifics here but I think you can imagine how pressurized this topic was and the amount of heat that tends to generate. I’m used to getting a lot of blowback and it’s never fun. I just hope the work is meticulous and fair enough, and that enough people see the benefits of that, that I get to continue to do it.
Balgair 6 hours ago||
Hey, just want to say thanks for the piece and for all the hard work and effort you did to get this out there. I've published a bit here and there, and the actual writing is only ~50% of the work load (for me at least). So thanks for going through all the effort and pain to get it out, really appreciate all the work you do for me and the rest of Joe Public.
_alternator_ 10 hours ago|||
Do you think the recent conflict between Anthropic and the Department of War, and the apparent bootlicking by OpenAI, have fundamentally altered the public perception of OAI? Are they the baddies now in general public opinion?
jharohit 8 hours ago|||
what model was used to create the visual at the top of the article?
saberience 10 minutes ago|||
The article is paywalled, where can we read it?
bck102 7 hours ago|||
Have you considered doing a piece on Aaron Swartz? Timnit Gebru? Michael O. Church?
doctorpangloss 6 hours ago||
It could be titled "Hypergraphia"
xnx 18 hours ago|||
In depth reporting is great. This is a really tricky topic to cover over the course of 18 months. A year and a half ago OpenAI was ascendant, now it's -at best- stalling and, more likely, trending toward irrelevant.
Stevvo 11 hours ago|||
Love the visual. Fantastic.
artursapek 6 hours ago|||
hey I loved that Ricky Gervais joke about you at the globes
e40 2 hours ago||
For those that don’t know or remember:

“Tonight isn’t just about the people in front of the camera. In this room are some of the most important TV and film executives in the world. People from every background. But they all have one thing in common: They’re all terrified of Ronan Farrow.”

Lerc 7 hours ago|||
From time to time I have been accused of being an apologist for Sam Altman, but I have always tried to assess information based upon what it says instead of whether it matches an existing narrative. You list a number of distortions in your article which show the problem. If you are a good person, bad stories about you may be fake. If you are a bad person, bad stories about you may still be fake.

My prima facie view of Altman has been that he presents as sincere. In interviews I have never seen him make a statement that I considered a deliberate untruth. I also recognise that claims about him go in all directions, and that I am not in a position to evaluate most of them. About the only truly agreed-upon aspect is how persuasive he is.

I can definitely see the possibility of people feeling they have been lied to if they experienced a degree of persuasion they are unaccustomed to. If you agree to something that you feel you wouldn't otherwise have agreed to, I can see you concluding that you have been lied to rather than accepting that you had been intellectually beaten.

In all such cases where an issue is contentious, you should ask yourself, what information would significantly change your views. If nothing could change your view, then it's a matter beyond reason.

I think you will agree that there is no smoking gun in this article; it is just a laying out of the allegations. Evaluating allegations becomes tricky, because I think it becomes a character judgement of those making the claims.

I have not heard a single person in all of this criticise Ilya Sutskever's character. If he were to make a statement to say that this article is an accurate representation of what he has experienced, it would go a long way.

I think Paul Graham should make a statement. The things he has publicly claimed are at odds with what the article says he has privately claimed. I have no opinion on whether one or the other is true, or whether they can be reconciled, but there seem to be contradictions that need to be addressed.

While I do not have sources to hand (so I will not assert this as true, just claim it as my memory), I recall Sam Altman himself saying that he did not think he should have control over our future, that the board was supposed to protect against that, and that since the 'blip' it was evident another mechanism is required. I also recall an interview in which Helen Toner suggested that they effectively ambushed Altman because, given time to respond to the allegations, he could have provided a reasonable explanation. It did not reflect well on her.

I am a little put off by some of the language used in the article. Things like "Altman conveyed to Mira Murati" followed by "Altman does not recall the exchange". Why use a term such as 'conveyed', which might imply there was no exchange to recall? If a third party explained what they thought Altman thought, Mira Murati could reasonably feel that the information had been conveyed while Altman has no experience of it to recall. Nevertheless, it results in an impression of Altman being evasive. If the text contained "Altman told Mira Murati", no such ambiguity would exist.

"Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board" Is this still talking about Brockman and Sutskever? I just can't see this as anything other than a claim that he took advice from people he trusted. I assume the board members who were alarmed were not the ones he was trusting, because presumably the others didn't need to find out. The people he disagreed with still had votes, so any claim of a 'shadow board' with real power is nonsense, and if it is a condemnable offence, is the same not true of the aligned board members who removed him?

Josh Kushner apparently made a veiled threat to Murati. The claim "Altman claims he was unaware of the call" casts him as evasive by stacking denial upon denial, but without any indication otherwise in the article, it would have been more surprising if he did know of the call. I also didn't know of the call, because I am not those two people.

The claim of sexual abuse says via Karen Hao "Annie suggested that memories of abuse were recovered during flashbacks in adulthood." To leave it at that without some discussion about the scientific opinion on previously unremembered events being recalled during a flashback seems to be journalistically irresponsible.

nickpp 21 minutes ago|||
Paul Grahams's latest public statement on the issue:

https://x.com/paulg/status/2041363640499200353

laserlight 2 hours ago||||
I have experience in dealing with Sam Altman-like behavior. I hope to explain how their tactics unfold.

> I can see people concluding that they have been lied to rather than accept that they had been intellectually beaten.

There are two angles to this: from an individual perspective and from a collective one.

One's interaction with such a manipulator isn't a single shot; there is no single event at which they are "beaten". First, one gets persuaded --- you might argue that there's nothing wrong with skillful persuasion. At some point they realize that reality is not in line with their expectations. They bring the point up to the manipulator and ask for a change, this time in more concrete terms. The manipulator agrees to the change, negotiates compromises, and the relationship continues. After some time the manipulated party realizes that things are not going in the direction they desire. This time they ask for still more concrete terms, without accepting any compromises. The manipulator accepts, yet continues to act against the terms. The manipulated party is now angry and directly confronts the manipulator. The manipulator apologizes, says that none of it was intentional, and asks for another chance. However, at that point the manipulator has run out of "politically correct" "persuasion tactics", and tells blatant lies to make the other party behave.

From a collective perspective, even those “politically correct” “persuasion tactics” are discovered to be lies, because what the manipulator told different parties are in direct opposition to each other, i.e., they cannot all be truths.

> Helen Toner suggested that they effectively ambushed Altman because if he had time to respond to allegations he could have provided a reasonable explanation. It did not reflect well on her.

I understand how her behavior may raise a flag for the unsuspecting, but it was exactly the right call. Manipulators prey on the benefit of the doubt. If Toner had brought Altman's behavior to the attention of others first, no doubt Altman would have manipulated them successfully.

It's unfortunate that many people are unaware of these tactics and assume the best of intentions, when such assumptions fuel the manipulation that they would better avoid.

clapthewind 6 hours ago|||
You make very good points. Signed up to point this out to others.
rhlannx 11 hours ago|||
I have the feeling that if you write an article in that style, the subject of the story becomes the hero even if you insert a couple of negatives. In the same manner that Michael Corleone becomes the hero of The Godfather.

I'm not pleased with the headline and the general framing that AI works. The plagiarism and IP theft aspects are entirely omitted. The widespread disillusion with AI is omitted.

On the positive side, the Kushner and Abu Dhabi involvements (and the threats from Kushner) deserve a wider audience.

My personal opinion is that "who should control AI" is the wrong question. In the current state, it is an IP laundering device and I wonder why publications fall silent on this. For example, the NYT has abandoned their crown witness Suchir Balaji who literally perished for his convictions (murder or not).

ronanfarrow 8 hours ago||
For what it’s worth, I don’t think the piece at all avoids key areas of disillusionment with the technology. Quite the contrary.
FloorEgg 12 hours ago|||
Hi Ronan,

I would love to read your piece and pay you and The New Yorker for it, but I am not interested in paying for a subscription. If I could press a button and pay a reasonable one-time fee such as $3 or $5 for just this article, or better yet a few cents per paragraph as they load in, I wouldn't hesitate.

However I'm not going to pay for yet another subscription to access one article I'm interested in.

I'm sure you can't do anything about this, but I just wanted you to know.

You deserve to be compensated for great journalism. In this case, unfortunately, I won't read it and you won't earn income from me.

cloud_line 11 hours ago|||
You could buy a physical copy (and this isn't meant to sound sarcastic).
jzymbaluk 11 hours ago||||
You can walk down to a bookstore or anywhere that sells magazines and buy a physical copy
IrishTechie 12 hours ago||||
I’ve often thought about a model like this and would love to see a few news outlets run it as a pilot and see how it stacks up.
mikeyouse 10 hours ago||
Many have tried it (as well as the oft-recommended micropayments idea) and it never justifies the added expense and overhead of the customization. Closest is probably the NYTimes’ gift article feature.
Dylan16807 10 hours ago||
I really doubt the implementation difficulty is the actual reason. It's not hard to have an extra table of specific article permissions.
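For what it's worth, a minimal sketch of what such a per-article permissions table could look like (purely a hypothetical illustration, assuming SQLite; all table, column, and function names here are invented, not any outlet's actual schema):

```python
import sqlite3

# In-memory database standing in for the outlet's real store.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE article_purchases (
        user_id    INTEGER NOT NULL,
        article_id INTEGER NOT NULL,
        PRIMARY KEY (user_id, article_id)
    )
""")

def purchase(user_id, article_id):
    # Record a one-off purchase; the composite primary key makes repeats harmless.
    conn.execute(
        "INSERT OR IGNORE INTO article_purchases VALUES (?, ?)",
        (user_id, article_id),
    )

def can_read(user_id, article_id, has_subscription=False):
    # Subscribers can read everything; otherwise check the per-article table.
    if has_subscription:
        return True
    row = conn.execute(
        "SELECT 1 FROM article_purchases WHERE user_id = ? AND article_id = ?",
        (user_id, article_id),
    ).fetchone()
    return row is not None

purchase(42, 1001)
print(can_read(42, 1001))  # True: this article was bought
print(can_read(42, 1002))  # False: not bought, no subscription
```

The hard parts in practice are presumably payment processing fees and account friction, not this lookup itself.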
caycep 12 hours ago||||
You could hit up a public library...
eichin 11 hours ago|||
Looking online it looks like the newsstand price of an issue is around $10 (which I'd assume is heavily ad subsidized, if anyone is still buying print ads?) which is an interesting data point for a pricing model. (Of course, I looked online because I have no idea where I'd find a newsstand around here - the nearest newsstand that show up on google maps has reviews that say "It's just snacks and scratch tickets." and "three newspapers and no magazines" - I may have to stop by just to see what three newspapers they have :-)
mattbee 12 hours ago||||
Or just switch your browser to Reader Mode and it's free.
CookieTonsure 10 hours ago|||
[dead]
sieabahlpark 10 hours ago|||
[dead]
wileydragonfly 7 hours ago|||
[flagged]
stavros 9 hours ago|||
There's a very minor typo in the article:

> “Investors are, like, I need to know you’re gonna stick with this when times get hard,”

Should be:

> “Investors are like, I need to know you’re gonna stick with this when times get hard,”

JumpCrisscross 6 hours ago||
I'm not seeing a typo. Just a stylistic difference.
stavros 1 minute ago|||
In "that's, like, your opinion", "like" is an interjection, you can take it out and not change the meaning: "That's your opinion".

In "investors were like, you need to grow", you're semi-quoting someone, and can't take it out: "investors were you need to grow".

SwellJoe 6 hours ago|||
Pretty sure the correction is wrong, not merely a stylistic choice.
loloquwowndueo 21 hours ago|||
[flagged]
LoganDark 21 hours ago||
Many browsers let you disable autoplay globally.
loloquwowndueo 21 hours ago||
Sure, there are a couple of buttons I can press to stop the video. Why do I have to? Find me one person who likes auto playing videos. The page was created with a deliberate annoying choice that I have to go out of my way to override.
binarymax 19 hours ago|||
Why do you think the author of this piece, to whom you originally replied, has any control over this?
LoganDark 20 hours ago|||
I'm not talking about pausing the video after it starts playing. I'm talking about a global setting to prevent videos from playing before you manually unpause them. Safari has such a setting, for instance.
loloquwowndueo 18 hours ago||
Exactly what “I have to go out of my way to override” covers, from my comment.
mannyv 9 hours ago|||
[flagged]
tstrimple 7 hours ago|||
Hard hitting journalism here. Is the person who lied for years to promote himself trustworthy? More news at 11!
Uptrenda 10 hours ago|||
Damn, just wanted to say reporters are scary... The amount of detail here is huge. You think of hackers as the ones good at doxing... Nah, it's reporters.
giwook 11 hours ago|||
Any plans to tackle any of the other folks who might be mentioned in the same sentence as Altman, like Dario Amodei?
mathisfun123 11 hours ago||
[flagged]
yakkomajuri 10 hours ago|||
I think the comment was out of legitimate interest rather than weighing one against the other
giwook 9 hours ago|||
Huh? It's a genuine question. The article is great and the writer did a fantastic job.

Please try to give people the benefit of the doubt though I know it's hard in today's society.

wyldfire 9 hours ago|||
Dang, can you substantiate that this is actually Mr. Farrow like he claims?

Or Mr Farrow can you post some evidence somewhere we can see?

rupi 3 hours ago||
Ronan Farrow, the writer of this article, made a comment in this thread that is buried in all the comments: "As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page."

I saw that before I read the article and it made me read the article in a very different way than I normally do. As I was reading, I found myself thinking, "Why is it worded that way? What else is the writer trying to say, or not say?"

It made reading this a lot more interactive than I normally associate with passive reading. Great job, Ronan!

laylower 10 minutes ago||
Reading this makes me even happier to pay for Anthropic.

Amodei and his sister saw through the behavior and called it out.

" “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals."

andrewrn 6 hours ago||
“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

You can subtly see residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. It’s only thinly veiled that Sam was a snake while at YC.

isolay 12 minutes ago||
That guy is a snake, not just while at YC.
mi_lk 2 hours ago||
video link?
arionhardison 11 hours ago||
Hi @ronanfarrow — I have only had one interaction with Sam Altman in person, and I was advised to keep it to myself. I know this crowd may not care, but Altman is absolutely terrified of Black people — not in any contextual sense, but in a visceral, instinctive way. For someone who, as you put it, "controls our future," this should matter.

FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.

edbaskerville 11 hours ago||
Can you give more details?

It wouldn't particularly surprise me if Sam Altman were racist, but I'm curious what the specific incident you observed was.

arionhardison 11 hours ago||
Yes, but first I want to be very clear on some things.

1. I could have hidden my identity behind a throwaway. I did not feel that would be appropriate when making this claim.

2. I am not looking for anything, literally at all. Any follow-ups for blogs, or anything that would benefit me, I will not answer.

3. This is NOT a new account, I am very easy to find; I am 6'1 140lbs

I was working for a company called NationBuilder and I had the opportunity to go on a work trip. Outside of a talk he had just given, I was waiting for my ride and I looked over like... damn, that's the speaker. I wanted to say hi; he damn near flagged down the police. I apologized and just decided to move on.

Note: It was in Reno, and no, I don't want to go into details; the others are not hard to find, because I happened upon them via blog posts, so I'm sure if someone with the acumen of RF wants to know, he will find them.

I have heard similar stories from several people in the years since. I AM NOT CALLING THIS PERSON RACIST. I am saying he is observably scared of black people, and that is not someone I want making decisions about how the world moves forward.

Xmd5a 31 minutes ago|||
> he is observably scared of black people

More like I'm black, he got scared when I approached him in the street, thus he must be racist. You're under the spell of your own signifier that you see everywhere like a proud interpretive paranoid.

pesus 5 hours ago||||
Thank you for sharing this. I 100% believe it, and it lines up with my experience with other people who came from similar backgrounds as Sam Altman - i.e. white, rich, privileged, and attending elite universities.

I will disagree with one part - I do believe it is racism. Most will never admit it publicly, but if they think you're one of them, it often comes out rather quickly, especially when alcohol is involved.

portender 3 hours ago|||
It's sad to me that "racism" is such a divisive word to many, and is met with defensiveness rather than introspection and communication. Trying to not be racist takes work, and communication, and is a process, not a state.

I appreciate OP's sharing as well. Also, racism isn't peddled only by rich white elite university attendees, it reaches into all the corners.

bakugo 1 hour ago||||
I don't think you're in a position to comment on what is and isn't racism, considering you just made a sweeping negative generalization based on race without recognizing it for what it is.

Also, I find it interesting how your list of "backgrounds that define bad people" conveniently omits a specific trait that many tech CEOs of questionable morals share, likely because it doesn't align with your agenda.

LAC-Tech 1 hour ago|||
> Sam Altman - i.e. white, rich, privileged, and attending elite universities.

Sam Altman is Jewish, not white.

mememememememo 4 hours ago||||
An extraordinary claim needs a bit more evidence than one data point; in his defense, maybe he is scared of anyone he doesn't know approaching him on the street.
interstice 3 minutes ago||
It was also mentioned that more evidence is not hard to find.
arionhardison 10 hours ago||||
Note: To all the downvoters: I did this publicly and not anonymously for a reason. If you will do the same, I am more than willing to provide evidence for all of these claims, as long as it's done publicly and in the open.
arionhardison 9 hours ago|||
PG said something along the lines of: "There should be no truth that is increasingly unpopular to speak."

If you don't believe what I shared is true, address that directly. But seeing my post sitting at 1 point and [flagged] after 2 hours is not OK. Just as DJT can't flag away his issues, you shouldn't be able to do so on HN.

One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings. I really hope that what happened to my post is not the beginning or a continuance of the end for that ethos.

latexr 28 minutes ago|||
> One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings.

That has never been the case, because HN is frequented by humans and humans are biased. Someone who claims to be unaffected by feelings is someone you cannot trust, as it means they are blind to their own shortcomings. Being robotic about the world is no way to live—that’s how you get people who are so concerned with nitpicks and “ackshually” that they completely lose sight of what’s important. They become easy to manipulate because they are more concerned with the letter of the law than its spirit or true justice.

Objectivity and empiricism are positive traits but should be employed selectively. Emotions aren’t a weakness, they are what drives us to change and improve. Understanding your own emotions equips you better to understand the world. But they too can be used to manipulate you. To truly grow, you have to employ your emotional and rational sides together. Focusing on just the rational will get you far but not all the way.

HN is primarily about curiosity—it’s in the guidelines four times—and you can’t have that without emotion.

tastyface 2 hours ago||||
I tried to respond to your comment with some personal observations on racist currents in this community, but my comment immediately got flagged. So yeah! This site ain't what it used to be. Best for the good folks to seek community elsewhere, I reckon. I miss the old days as well, but I don't think they're coming back.
hnbad 1 hour ago||
If this site ever was anti-racist, that must have been a long time ago. I threw away my old account many years ago only to come back with this one (because it's difficult to completely ignore HN if you work in tech) and the reason I threw that one away was in part the overwhelming reactionary bias in this community.

The "progressives" were at best silent "don't rock the boat" types, more inclined to insist on civility than to challenge reactionary sentiments, while the reactionaries ranged from dog-whistling to outspoken, across the entire range of white supremacism, sexism, homophobia, transphobia, antisemitism, zionism and so on. The only comments that would ever get flagged or downvoted were those that were explicit enough to be seen as "impolite" because they happened to spell out calls for genocide or violence rather than merely gesturing at them with the thinnest veneer of plausible deniability.

tastyface 1 hour ago||
Well, I do remember it being more about the underdogs and a cheeky "fuck the system" attitude without much malice. Maybe I just wasn't tuned into this stuff back then. Now, though, both users and tech leaders can unironically parrot Stormfront rhetoric from 10 years ago (using vaguely cordial language) and no one even bats an eye. The kind of stuff that would have made you unemployable just a few years ago.

When I think of HN in the before times, I think of people like Aaron Swartz. Would he have enjoyed his technical discussions peppered with comments on how the West is being "invaded" and "outbred" by third-world hordes? Based on what I know about him -- and please correct me if I'm wrong -- I'm guessing he would have noped out of that kind of community in a flash. Yet nowadays I see this kind of talk here all the time, percolating all the way up to industry leaders like Musk and DHH.

sharmi 5 hours ago||||
Just came to say, I appreciate your emotionally intelligent and balanced take on your experience, where it would have been very easy to react and let emotions take over (understandably).
tastyface 6 hours ago|||
[flagged]
kombookcha 4 hours ago|||
Thank you for sharing this.
ahf8Aithaex7Nai 3 hours ago||||
Thank you for sharing this experience with us. Don't worry about the downvotes. That's just how it is here sometimes. I don't think it reflects the views of most readers.
valianteffort 1 hour ago|||
[flagged]
baq 42 minutes ago|||
The longer I live, the more secrets coming out I see, the less surprised I am with every next one.
elschneider 4 hours ago||
I really hope @ronanfarrow addresses this. Thanks for sharing
jablongo 5 hours ago||
For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue 2) safety didn't matter to them much 3) improving the world didn't matter much either.

At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI safety researchers. You were rebuffed because "existential safety isn't a thing". Does this mean that you could find no evidence of an AI safety team at OAI after Jan Leike left? If you look at job postings, it does seem like they have significant safety staff...

hirako2000 1 hour ago||
Interestingly we are still experiencing the technological momentum inspired and created by what OpenAI used to be. AI for humanity.

Given the initiative started circa 2017, much of the good remains. It's a hijacking of the creative geniuses who got together, now turning into cash-cow tech.

thrwaway55 6 hours ago||
We need only ask the dead. Aaron Swartz knew what Altman is. The answer to the question in the title is no.
mastazi 4 hours ago|
I'm interested in knowing more about this topic. Do you have any resources about the relationship between Swartz and Altman?
stingraycharles 4 hours ago|||
It’s not difficult to find these, Aaron always said that Sam was not to be trusted.
palmotea 3 hours ago||
Apparently Aaron Swartz and Sam Altman were classmates at the original 2005 Y Combinator class. This article has a picture of them literally standing next to each other: https://www.hindustantimes.com/trending/throwback-photo-of-f...

The OP says this:

> The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.”

t0lo 2 hours ago||
Does the Hindustan Times report facts? 90% of Indian outlets are basically unfactual.
hnbad 1 hour ago||
The cited snippet is in TFA. Did you read it? For that matter, did you read the Hindustan Times article?

Because that one doesn't actually include any relevant statement; it just contains the picture GP was pointing out. The entire point of referencing that picture was to emphasize that they had had contact, which is already implied by them being in the same YC batch, a fact I don't think you are challenging.

Please don't post comments like this one. "90% of Indian outlets are basically unfactual" is a hyperbolic claim: regardless of the truth content of "Indian outlets", that claim is bogus unless you have factual evidence to back up the specific number, which I doubt, because "basically unfactual" is not well-defined. But even worse, it's completely irrelevant to the discussion at hand. The factual accuracy of the Hindustan Times is at best tangential, because nothing in GP's comment hinged on it, unless you're saying the description of that photo as one depicting both of them as members of the same YC cohort is "unfactual", or you're accusing them of having manipulated the image itself. And even then it would be irrelevant, because you seem to take issue with the description of Altman as a sociopath (i.e. the quote), not the fact that they were batch mates, and that quote is explicitly cited as being from TFA, the article this comment thread is about, not the Hindustan Times piece. Comments like that just waste time, cause unrelated hostile arguments, and could have been avoided by simply reading either of the articles involved.

t0lo 1 hour ago||
I found a great piece from the halal times that backs up my claim

https://www.halaltimes.com/indian-media-has-become-a-factory...

It's fully up to you if you want to generalise based on the publication's name before you read. I won't judge. If we read the Times of India in full every time to give it the benefit of the doubt and counter our biases, the world would be a far less productive place. If a country's media has a reputation for low fact-checking, it's usually deserved.

input_sh 2 hours ago|||
It's mentioned in the submitted article (about half way through), you should read it.
stavros 9 hours ago||
I found it very interesting that Altman et al were worried that AI will become supremely intelligent and China will make a supervirus or some AI drones or whatnot, but not a single person was worried about destroying all jobs because we wouldn't need humans any more.

Or maybe they were not so much "worried" but "hopeful" that they'd amass literally all the wealth in the world.

druskacik 1 hour ago||
Altman is an advocate of Universal Basic Income, as far as I'm aware. That doesn't sound like he's not worried about massive job losses.

https://www.cbsnews.com/news/sam-altman-universal-basic-inco...

https://finance.yahoo.com/news/sam-altman-wants-universal-ex...

latexr 15 minutes ago|||
> Altman is an advocate of Universal Basic Income

So he says. And the way he proposed reaching that was with a scam cryptocurrency under his control which has rightfully been banned in several countries.

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

stavros 4 minutes ago|||
If there's one thing that's clear from the article, it's that he's a proponent of anything that will benefit him, even multiple conflicting things at the same time.
red369 8 hours ago|||
I also find that interesting.

And while not intending to defend the motives of anyone involved, I'm hoping we don't need to worry about literally all jobs being destroyed and AI companies amassing all the wealth in the world.

Don't we need at least some humans working and earning to buy these AI services? Am I not being imaginative enough? Is it possible for the whole economy to consist just of AI selling services to each other?

I realise that even if AI destroys most jobs, or even just a lot of jobs, and amasses most wealth, or a lot of wealth, it would still be a terrible thing for humans. The word "all" could have just been hyperbole, and it is still a valid point. I just want to know people's thoughts on whether entire replacement is possible.

eloisius 1 hour ago|||
Why keep human consumers to buy your services when you could just amass all the wealth you desire, and have autonomous systems that can ensure your unassailable physical security? You would sit atop the most stratified dominance hierarchy ever achieved, and it would reduce other humans to mere pets or breeding stock. I don’t think normal humans would desire that kind of power, and I don’t believe LLMs will take us there, but I wouldn’t put it past the perverted billionaire maniac.
gpt5 2 hours ago|||
Do you need ants buying services from humans for the world economy to function?

If AI will indeed become superintelligent, we won't matter.

RealityVoid 5 hours ago||
I think fundamentally, the concern is misplaced. The fact that you need to work for wealth is a convention of our constraints. A change in those constraints would lead to other means of distribution. It's easy to see why someone who believes more productivity is good would not see making jobs obsolete as a real problem; they would see us adapting to the new conditions in a relatively short while.
blargey 3 hours ago|||
> The fact you need to work for wealth is a convention of our constraints

The current constraint is "you need to produce to have things".

If one company's AI takes all the jobs, and thus does all the producing-to-have-things, the constraint transforms into "you need that company's permission to have things".

Hence the top-level question.

foobiekr 4 hours ago||||
The new conditions almost surely being like the old conditions: slavery, sexual exploitation, etc.
chii 4 hours ago|||
Those who are concerned are implying that any new distribution mechanism is not going to favour them.

And under the capitalist system, if nothing changes, the "new" distribution system is indeed not going to favour them - at best there would be some sort of UBI, and at worst you would be left to starve in the streets.

However, I cannot see how one can transition to a new system, and yet have the existing powers in the current system agree and not be disadvantaged.

kmfrk 18 hours ago||
Gobsmacking details about Altman's time as Y Combinator president, in case anyone's wondering.

Fantastic reporting.

ronanfarrow 16 hours ago|
As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page.
kmfrk 15 hours ago|||
One of the decidedly eerier parts of this story, as you keep reading, is all the gaps between what people are saying about Altman and what they clearly want to say about Altman but can't.
devmor 11 hours ago||
Throughout my life, what colleagues/friends are unwilling to remark plainly on has been the most telling factor of someone’s character to me.
dugidugout 11 hours ago||
This can be true I suppose, but equally I have a few friends who practically play characters as if they've resigned themselves to a role in a sitcom. For instance: one of my friends is late to just about everything and treats everyone as if we are on-call. We plainly note this repeatedly, the friend is, I hope, equally frustrated and embarrassed by it, and in spite of this nothing changes. This is obviously a critical element to their broader character.

Perhaps you mean to distinguish social groups without much intimacy? To which I'm sure we could provide some convincing cases, but this seems like a silly heuristic generally.

rincebrain 10 hours ago|||
I have been in or next to a number of social circles with such missing stairs, where for various reasons people in the groups have decided to not directly acknowledge certain Facts that are known about some members, because it would involve them confronting their hypocrisy.

Someone cheating regularly on their partner, flagrant substance use problems, controlling people who ostracize anyone who doesn't agree with their sometimes insane perspectives...

People will go along with quite a lot to avoid friction, especially as they get older and picking up new social circles becomes higher cost.

It's possibly the most telling thing, when you see what people say is a hard line versus how they actually respond to it.

satvikpendem 7 hours ago|||
Maybe they have ADHD because the symptoms fit, if they really do acknowledge the problem yet cannot fix it.
xnx 8 hours ago||||
> where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long

For anyone unfamiliar with this process, the New Yorker documentary is well worth the watch: https://www.netflix.com/title/81770824

Teever 9 hours ago||||
You mention many proxies of Musk who post negative content about Altman.

In your investigation were you able to determine if Altman has similar proxies?

How common would you say that this is? Do these kinds of people generally have teams of people who sling mud for them?

Can you speculate on how that manifests on a site like Hackernews?

trvz 1 hour ago|||
Calling your own article all those things is a major turn-off.
wolvoleo 17 minutes ago|
https://archive.is/Cd0Yl
More comments...