Posted by paulpauper 1 day ago

OpenAI Moves to Complete Potentially the Largest Theft in Human History (thezvi.substack.com)
241 points | 96 comments
binarymax 1 day ago|
I want to understand this more, so can someone please ELI5 what the theft in the article actually is? Theft implies someone lost something. I think it's theft from the non-profit? But what does that mean? Is it theft of taxes, because the wealth accumulated in the non-profit was not taxed the way it would have been for a for-profit entity?

EDIT: I'm not sure why I'm being downvoted. I read the article and it's not clear to me. The entire article is written with the assumption that the reader knows what the author is thinking.

joe_the_user 1 day ago|
It seems like you're mixing "I don't understand X" with what's effectively an argument that X is false. Perhaps people feel that there's some bad faith in that approach.

Also, the article is very clear - the wealth transfer is moving the money/capital controlled by a non-profit to stockholders of a for-profit company. The non-profit lost that property, the shareholders gained that property. It seems like you're making an implicit assumption, something like "the same people are running the for-profit on the same basis they ran the non-profit, so where's the theft" - feel free to make that argument, but mixing the claim with "I don't understand" doesn't seem like a fair approach.

binarymax 1 day ago||
I'm absolutely not arguing that X is false, because I don't know what X is, and I am arguing in good faith. I will follow up with the question: if the non-profit and the for-profit are owned by the same shareholders, what is the theft? Is this not a legal transfer between business entities?

I am also a somewhat harsh critic of Sam Altman (mostly around theft of IP used to train models, and around his odd obsession with gathering biometrics of people). So I'm honestly looking for answers here to understand, again, what wrongdoing is being done?

overvale 1 day ago||
I'm not 100% clear myself but I think that the criticism is that what was supposed to be a non-profit delivering world-changing technology for the public good was bullied/manipulated into a for-profit entity that would enrich investors and consolidate power among the wealthy.

So the "theft" is the wealthy stealing the benefits of AGI from the people. I think.

whatpeoplewant 1 day ago||
The IP concern is real, but it isn’t binary: we can move from monolithic pretraining on scraped corpora to multi-agent, agentic LLM workflows that retrieve licensed content at inference with provenance, metering, and revocation. Distributed agentic AI lets rights holders expose APIs or sandboxes so models reason in parallel over data without copying it, yielding auditable logs and pay-per-use economics. Parallel agentic AI pipelines can also enforce policy (e.g., no-train/no-store) as first-class constraints, which is much harder to do with a single opaque model.
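To make that concrete, here's a toy sketch of the kind of first-class policy gate I mean - everything in it (the catalog dict, the policy fields, the function names) is hypothetical and just stands in for a rights holder's real API:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class LicensePolicy:
        allow_inference: bool = True    # content may be read at inference time
        allow_training: bool = False    # "no-train" constraint
        price_per_call: float = 0.001   # pay-per-use metering

    @dataclass
    class AuditEntry:
        doc_id: str
        purpose: str
        timestamp: str
        charged: float

    AUDIT_LOG = []  # auditable trail of every access

    def retrieve(doc_id, purpose, catalog):
        """Fetch licensed text for one call, enforcing policy and logging it."""
        text, policy = catalog[doc_id]
        if purpose == "training" and not policy.allow_training:
            raise PermissionError(f"{doc_id}: no-train policy forbids this use")
        if purpose == "inference" and not policy.allow_inference:
            raise PermissionError(f"{doc_id}: inference access revoked")
        # provenance + metering: every access leaves a billable, auditable trace
        AUDIT_LOG.append(AuditEntry(doc_id, purpose,
                                    datetime.now(timezone.utc).isoformat(),
                                    policy.price_per_call))
        # "no-store": return the text for this call only, never write it to a
        # cache or training corpus
        return text

    if __name__ == "__main__":
        catalog = {"article-42": ("Licensed article text ...", LicensePolicy())}
        print(retrieve("article-42", "inference", catalog))
        try:
            retrieve("article-42", "training", catalog)
        except PermissionError as e:
            print("blocked:", e)
        print(len(AUDIT_LOG), "metered access(es) logged")

Obviously a toy, but the point is that the no-train check, the metering, and the audit entry live in the same code path as the retrieval itself, instead of being promises bolted onto a model that already ingested the data.
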
bix6 1 day ago||
Anyone else read Empire of AI? It left me pretty disgusted with OpenAI, and Altman in particular. Curious if anyone has a rec for a book that is more positive about AI's benefits / the behavior of Sam / OAI?

Edit: downvoting why? Sama fanboys? Tell me your book rec then.

AJ007 17 hours ago|
Did not read the book, but have been following OpenAI since the beginning. The whole thing comes across as a bait and switch with parallels to Google's "Don't be evil." At minimum he isn't a person who comes across as trustworthy - but very few tech leaders (or politicians, etc.) do.

This situation is arguably better than an alternative where Google or another big tech monopoly had also monopolized LLMs (which seems like the most likely winner otherwise; however, they may have also never voluntarily ventured into publicly releasing LLM tools because of the copyright issues and the risk of cannibalizing their existing ad business). Feels like this story isn't finished and writing a book is premature.

seydor 1 day ago||
a Lehman Brothers moment
FridayoLeary 1 day ago||
When the whole AI thing exploded, Sam Altman was peddling this hopeful narrative that OpenAI is a non-profit concerned about safety and improving humanity, and that they were working with regulators to ensure the safety of the industry. The cynics have been vindicated.

> or when Altman said that if OpenAI succeeded at building AGI, it might “capture the light cone of all future value in the universe.” That, he said, “is for sure not okay for one group of investors to have.”

He really is the king of exaggeration.

If I understood correctly, the author does admit that continuing OpenAI as a nonprofit is unrealistic, and the current balance of power could be much worse, but what disgusts me is the dishonest messaging they started off with.

denverllc 1 day ago||
According to Empire of AI, they started OpenAI as a nonprofit so they could get people devoted to the mission and wouldn't have to pay the high SV wages
r_lee 1 day ago||
And (I'm pretty sure) to get funding from prominent figures who were afraid of AI being monopolized privately and being used for evil...
lawn 1 day ago||
Just looking at Altman's history none of this is even remotely surprising.

Look up Worldcoin, for instance.

jampa 1 day ago||
I think OpenAI is screwed long-term, and their leadership knows it. Their most significant advantage was their employees, most of whom have now left for other companies. They're getting boxed in across every segment where they were previously the leader:

- Multimodality (browser use, video): To compete here, they need to take on Google, which owns the two biggest platforms and can easily integrate AI into them (Chrome and YouTube).

- Pricing: Chinese companies are catching up fast. It feels like a new Chinese AI company appears every day, slowly creeping up the SOTA benchmarks (and now they have multimodality, too).

- Coding and productivity tools: Anthropic is now king, with both the most popular coding tool and model for coding.

- Social: Meta is a behemoth here, but it's surprising how far they've fallen (where is Llama at?). This is OpenAI's most likely path to success with Sora, but history tells us AI content trends tend to fade quickly (remember the "AI Presidents" wave?).

OpenAI knows that if AGI arrives, it won't be through them. Otherwise, why would they be pushing for an IPO so soon?

It makes sense to cash out while we're still in "the bubble." Big Tech profits are at an all-time high, and there's speculation about a crash late next year.

If they want to cash out, now is the time.

kyle_grove 1 day ago||
I'd agree with all those facts about the competitive landscape, but with each of those competitors there's enough wiggle room for me to think OpenAI isn't completely boxed in.

Google on multimodality: has been truly impressive over the last six months and has the deep advantages of Chrome, YouTube, and being the default web indexer, but it's entirely plausible they flub the landing on deep product integration.

Chinese companies and pricing: facts, and it's telling to me that OpenAI seems to have abandoned their rhetorical campaign from earlier this year teasing that "maybe we could charge $20000 a month" https://techcrunch.com/2025/03/05/openai-reportedly-plans-to....

Coding: Anthropic has been impressive but reliability and possible throttling of Claude has users (myself included) looking for alternatives.

Social: I think OpenAI has the biggest opportunity here, as OpenAI is the closest to being a consumer-oriented company among the model hyperscalers, and they have a gigantic user base that they can take to whatever AI-based platform category replaces social. I'm somewhat skeptical that Meta at this point has their finger on the pulse of social users, and I think Superintelligence Labs isn't well designed to capitalize on Meta's advantages in segueing from social to whatever replaces social.

nofriend 1 day ago|||
> OpenAI knows that if AGI arrives, it won't be through them. Otherwise, why would they be pushing for an IPO so soon?

an ipo is a way to seek more capital. they don't think they can achieve agi solely through private investment.

jgalt212 1 day ago||
> an ipo is a way to seek more capital. they don't think they can achieve agi solely through private investment.

private deals are becoming bigger than public deals recently. so perhaps the IPO market is not a larger source of capital. different untapped capital, maybe, but probably not larger.

eeasss 1 day ago|||
Unfortunately I think you are wrong. Their most important asset is the leadership role of the company, the brand name, and the muscle memory. Other employees may come and go - on a system level this doesn't look important as long as they can replace talented folks with other talented ones. This seems to be the case for now.
roody15 1 day ago|||
Have to agree. If services like DeepSeek remain free, or at least extremely cheap, I don't see a long-term profitability outlook for OpenAI. Gemini has also greatly improved, and with Google's infrastructure and ecosystem … again, the long-term outlook doesn't look promising for OpenAI.
sumedh 1 day ago|||
> It feels like a new Chinese AI company appears every day

The average Joe is not using them though; for the general public, AI is ChatGPT.

czhu12 1 day ago|||
What about just search? I basically never use Google anymore and am perfectly happy to pay for OpenAI.
throwaway314155 1 day ago||
> most of whom have now left for other companies

Is there like a public list of all employees who have transitioned or something? As far as I know there have been some high profile departures.

uvaursi 1 day ago||
[flagged]
drivebyhooting 1 day ago|
And here I thought the article would be about the blatant copyright infringement against every author, artist, and creative whose work was used to train their models.

Take image diffusion models. They're trained on the creative works of thousands of people and completely eliminate the economic niche for those creators.