Posted by donohoe 1 day ago
Someone has to come up with $1.4 trillion in actual cash, fast, or this whole thing comes crashing down. Why? At the end of all this circular financing and deals are folks who actually want real cash (e.g., electricity utilities that aren’t going to accept OpenAI shares for payment).
If the above doesn’t freak you out a bit at how bonkers this whole thing has become, then you need a reality check. “Selling ads” on ChatGPT ain’t gonna close that hole.
These deals aren't for 100% payment up front. The deals also include stock, not just cash. So, no, they do not need to come up with $1.4 trillion in cash quickly.
This AWS deal is spread over 7 years. That's $5.4 billion per year, though I assume it's ramping up over time.
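As a back-of-envelope check (the ~$38B total is implied by $5.4B/year over 7 years; the ramp shape is purely hypothetical):

    # Back-of-envelope: a flat payment schedule vs. a simple linear ramp,
    # both summing to the same assumed ~$38B total over 7 years.
    TOTAL, YEARS = 38e9, 7

    flat = [TOTAL / YEARS] * YEARS               # ~$5.4B every year
    weights = list(range(1, YEARS + 1))          # hypothetical linear ramp
    ramp = [TOTAL * w / sum(weights) for w in weights]

    print([f"{x / 1e9:.1f}B" for x in flat])     # ['5.4B', ..., '5.4B']
    print([f"{x / 1e9:.1f}B" for x in ramp])     # ['1.4B', '2.7B', ..., '9.5B']

Either way, the annual outlay is an order of magnitude smaller than the headline number.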
> At the end of all this circular financing and deals are folks who actually want real cash (e.g., electricity utilities that aren’t going to accept OpenAI shares for payment).
Amazon's cash on hand is on the order of $100 billion. They also have constant revenue coming in. They will not have any problem accepting OpenAI shares and then paying electricity bills with cash.
These deals are also being done in the open with publicly traded companies. Investors can see the balance sheets and react accordingly in the stock price.
The one I found best documented (1) is Meta's SPV to fund their Hyperion DC in Louisiana, a deal that is 80% financed by the private credit firm Blue Owl. There is a lot of financial trickery involved in getting the ratings agencies to count the SPV as debt belonging to a different entity, so it doesn't count against Meta's books, even though the market treats it as basically something Meta will back. But xAI's Memphis DC is also an SPV, and Microsoft is doing that as well. I'm not sure about AMZN, but the fact that we're starting to see this from their competitors suggests they will also be going this way.
1: By the invaluable Matt Levine, here: https://www.bloomberg.com/opinion/newsletters/2025-10-29/put... (but the other major companies have their own SPVs)
If the market collapses, I think Meta can technically just walk away: they lose access to those data centers (which they no longer want anyway), the SPV is stuck holding $X of assets against more than $X of liabilities, and the providers of the credit are on the hook, but not Meta.
And investors are fine being on the hook because they get a higher return from the SPV bonds than from Meta bonds (risk-adjusted, it's probably the same return).
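A minimal sketch of the waterfall being described, with purely hypothetical numbers (equity gets wiped out before the credit investors take losses, and Meta's own bonds sit outside the structure entirely):

    # Toy loss-allocation sketch for an SPV like the one described above.
    # All numbers are hypothetical; real deals have far more structure.
    ASSETS_COST = 27e9                  # what the SPV paid for the data center
    DEBT = 0.8 * ASSETS_COST            # credit investors' slice
    EQUITY = ASSETS_COST - DEBT         # equity slice

    def losses_if_assets_worth(value):
        """Who eats the writedown if the SPV's assets are marked to `value`."""
        loss = max(ASSETS_COST - value, 0.0)
        equity_loss = min(loss, EQUITY)        # equity absorbs losses first
        debt_loss = loss - equity_loss         # then the SPV's lenders
        return {"equity": equity_loss, "spv_lenders": debt_loss, "meta_bonds": 0.0}

    print(losses_if_assets_worth(10e9))   # equity wiped out, lenders eat the rest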
Do we?
The payments Meta et al are making to the SPV are payments for data-center services. The data centers are then buying the assets and issuing the debt. Now, Meta is obligated to make those payments to the SPV. Which looks like debt. But they are only obligated to do so if the services are being provided.
Blue Owl, meanwhile, owns 80% of the datacentre venture. If the price of chips crashes, that's Blue Owl's problem. Not Meta's. If Meta terminates their contract, same deal. (If Beijing nukes Taiwan and the chips quintuple in value, that's Blue Owl's gain. Mostly. Not Meta's.)
> Why don't they just say "actually no, we all know that's debt and it's owned by Meta so we will consider it when rating their credit."?
If Meta stopped paying the SPV, the SPV would have the recourse of a vendor. If Meta stopped making payments on its bonds, that would trigger cross defaults, et cetera. Simply put, Meta has more optionality with this structure than it would if it issued its own debt.
The red flag to keep an eye out for is cross-guarantees, i.e. Meta, directly or indirectly, guaranteeing the SPV's debt.
Does that make any sense? No.
Then Meta would do this in a wholly controlled off-balance-sheet vehicle à la Enron. The fact that they're involving sidecars signals some respect for their rating.
I'm no expert on the specifics of the circular financing we're seeing here, so the rest of what you wrote might be true. But I know enough about how Wall Street and the world in general work to know that closing with this as a defense shows an incredible naivete that makes me question everything else you have said.
https://www.bloomberg.com/news/articles/2025-10-31/meta-xai-...
‘Brad, if you want to sell shares, I’ll find you a buyer…I just—enough,’ Altman said on Gerstner’s podcast.
https://www.theinformation.com/articles/ilya-saw-mira-murati...
Hopefully nobody reading this has experienced it: these are the words of a true sociopath/addict.
"I'm mad you questioned me" is fucking classic.
I told dang I was out and I am after this. Sorry dang.
It's not. It's done on the basis of don't question me bro.
Sorry, but is there some lore behind it? The last sentence has me wondering what it means. If you could share the lore, I would really appreciate it.
But overall, I agree that this is a very weird thing for Sam Altman to say.
There are two important points, often attributed to Keynes, that are relevant:
1. The market can remain irrational longer than you can remain solvent. Even if you're betting on a crash, it will probably happen after you get margin called and lose all your money. You can be absolutely right about where this is headed, but keep your personal investments away from this.
2. The value of a company isn't determined by any sound fundamentals. It's determined by how much you can get a sucker to pay (aka Keynes' castles in the air theory). Until we run out of suckers OpenAI will be able to keep getting cash infusions to pay whoever actually demands cash instead of stock. And as long as there are suckers that are CEOs of big tech companies they are going to be getting really big cash infusions.
[1] https://www.pgpf.org/programs-and-projects/fiscal-policy/mon...
It's certainly possible to imagine OpenAI eventually generating far more revenue than Google, even without anything close to AGI. For example, if they were to improve productivity of 10% of the economy by 10% and capture a third of that value for themselves, that would be more than enough. Alternatively, displacing Google as the go-to place for search and selling ads against that would likely generate at least Google levels of revenue. Or some combination of both.
Is this guaranteed to happen? Of course not. But it's not in "bonkers" territory either.
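For scale, here's a back-of-envelope version of that first scenario (the world-GDP and Google revenue figures are rough assumptions):

    # Rough check of the "10% of the economy, 10% better, capture a third"
    # scenario. Assumes ~$100T global GDP; Google's revenue is ~$300B/yr.
    GLOBAL_GDP = 100e12
    affected_share = 0.10       # 10% of the economy touched
    productivity_gain = 0.10    # improved by 10%
    capture_rate = 1 / 3        # share of created value captured

    value_created = GLOBAL_GDP * affected_share * productivity_gain
    captured = value_created * capture_rate

    print(f"Value created: ${value_created / 1e9:,.0f}B")  # $1,000B
    print(f"Captured:      ${captured / 1e9:,.0f}B")       # ~$333B, > Google's ~$300B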
The Amazon deal is actually spread over 7 years. Other deals have different terms, but also spread over multiple years.
Deals like these have cancellation terms. OpenAI could presumably pay a fee and cancel in the future if their projections are too high and they don't need some of the compute from these deals.
The deals also include OpenAI shares. The deals are being made with companies that have sufficient revenue or even cash on hand to buy the compute and electricity.
The claim above that someone needs to come up with $1.4 trillion right now or everything will collapse isn't grounded in any real understanding of these deals. It's just adding up numbers and comparing them to a single annual revenue snapshot.
Even under the most bullish cases for AI, the real dollars required here look iffy at best.
I think we all know that a big part of the angle here is to keep the hype going until there’s a liquidity event; folks will cash out, and at that point they won’t care what happens.
This is “if we get 1% of the market” logic.
Of course, you must also make a convincing case for getting to that 1%.
Inherently, no. In practice, it's riddled with biases deep enough [1] to make it an informal fallacy.
"The competition in a large market, such as CRM software, is very tough," and "there are power laws which mean that you have to rank surprisingly high to get 1% of a market" [2]. Strategically, it ignores the necessity of establishing a beachhead in a small market, where "a small software company" has "a much better chance of getting a decent sized chunk."
OpenAI has nothing resembling this ecosystem, and will never be nearly as valuable a place to buy ads. Replacing Google is probably the least realistic business plan for OpenAI - if that's what they're betting on, they're cooked.
The fun part is to go back now and listen to Blake Lemoine interviews from summer 2022. That for me was the start of all this.
Search engines were never a user-friendly app to begin with. You had to know how to search well to get comprehensive answers, and the average person is not that scrupulous. Google’s product is inferior, believe it or not. There will be nothing normal about seeing a list of search results pretty soon, so Google literally has a legacy app out in the wild as far as facts are concerned.
So imagine that: Google would have to remove Search as they know it (remove their core business) and stand up an app that looks the same as all the new apps.
People might like one AI persona more than others, which means people will seek out all types of new apps. LLMs are the worst thing that could have ever happened to Google, quite frankly.
I'd be more worried about OpenAI surviving. Aside from the iffy finances, much of their top talent seems to leave after falling out with Altman.
Google's biggest advancement in the last ~15 years is to produce worse search results so that you spend more time engaging with Google and doing more searches, so that Google can show more ads. Facebook is similar in that they feed you tons of rage-bait, engagement spam, and things you don't like, infused with nuggets of what you actually want to see about your friends/interests. Just like a slot machine, the point is that you don't always get what you want, so there's a compulsion to use it because MAYBE you will get lucky.
OpenAI's potential for mooning hinges on creating a fusion of information and engagement where they can sell some sort of advertisement or influence. The problem of course is that the information and engagement is pretty much coming in the most expensive form possible.
The idea that the LLM is going to erode actual products people find useful enough to pay for is unlikely to come true. In particular, people are specifically paying for software because of its deterministic behavior. The LLM is by its nature extremely nondeterministic. That puts it fully in the realm of social media, search engines, etc. If you want a repeatable and predictable result, the LLM isn't really the go-to product.
I don’t disagree with you entirely, but I’d argue the second level apps are harder to chase because they get so specialized.
The death of Google (as everyone knows Google today) is a tricky one. It seems impossible to believe at this exact moment. It could sit next to IBM in the long run; no shame at all, amazing run.
OpenAI is worth at least half as much as Google. I foresee Google becoming like IBM, and these new LLM companies being the new generation of tech companies.
The big question would be how much of this revenue is unjustifiably circular, and how much of it is extractable, but those are questions for when the growth slows. I'm certain every supplier has ways to back out of these commitments if the finances look shaky.
I have got incredible value from ChatGPT up to this point but I have been using it less and less.
What I have mostly extracted from it is a giant list of books I need to read. A summary of the ideas of a book I haven't read is obviously not the same as reading the whole book.
Before all this there were so many areas I was curious about that ChatGPT gave me a nice surface level summary of. I now know much better what I want to focus on but I don't need more surface level summaries.
Is there evidence that their revenues are growing faster than their costs?
Very little data about expenses, but it looks like they may be growing a little slower (3-4x a year) than revenue. Which makes sense because inference and training get more efficient over time.
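A toy compounding sketch of what that implies (all numbers hypothetical except the $13B revenue figure quoted later in the thread): even if costs grow only "a little slower" than revenue, a large starting gap takes years to close.

    # Hypothetical: revenue grows 4x/yr, costs 3.5x/yr, from a big deficit.
    revenue, costs = 13e9, 40e9          # assumed starting point, $/year
    REV_GROWTH, COST_GROWTH = 4.0, 3.5   # assumed annual multipliers

    year = 0
    while revenue < costs:
        year += 1
        revenue *= REV_GROWTH
        costs *= COST_GROWTH
    print(f"Revenue first exceeds costs in year {year}")   # year 9 here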
> We don't have evidence one way or the other
I don't see how both of these things can be true. How can we know something to be likely or unlikely if we have no evidence of how things are?
If we don't have any evidence they're moving towards profitability, how is it likely they will become profitable?
It can’t be the same hedge on both sides of the trade.
Why vol? They're just short rates, which is a silly way to say leveraged. If rates become volatile but halve, OpenAI does fine. If rates stabilise at 10%, OpenAI fails. There is no "duration hedging" going on, which for OpenAI would mean shorting duration, i.e. positions that profit when rates go up.
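To make the level-versus-volatility point concrete (the debt size and rate paths are made up):

    # Toy sketch: a leveraged borrower is hurt by the *level* of rates,
    # not their volatility. All numbers hypothetical.
    DEBT = 1.0e12   # assumed rolling debt load

    def total_interest(rate_path):
        """Cumulative interest paid along a path of annual rates."""
        return sum(DEBT * r for r in rate_path)

    volatile_but_low = [0.08, 0.02, 0.06, 0.02]   # swings, averages 4.5%
    stable_but_high = [0.10, 0.10, 0.10, 0.10]    # calm, but ruinous

    print(total_interest(volatile_but_low) / 1e9)  # 180.0  ($B)
    print(total_interest(stable_but_high) / 1e9)   # 400.0  ($B)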
>OpenAI thought to be preparing for $1tn stock market float. ChatGPT developer is considering filing for an IPO by the second half of 2026...
Economic history strongly suggests this would be a bad assumption.
More pertinently, we have a long history of people buying into bubbles only for them to crash hard, no matter how often people tell them "past performance is not a guarantee of future growth" or whatever the legally mandated phrase is for the supply of investment opportunities to the public where you live.
Sometimes the bubbles do useful things before they burst, like the railways. Sometimes the response to the burst creates a bunch of social safety nets, sometimes it leads to wars, sometimes both (e.g. Great Depression).
But what if, maybe, it ain't so? Of course, lots of AI things are going to fail, and nobody is exactly sure of the future. But what if, after in-depth inspection, the overall thing is actually looking pretty good and OpenAI looks like a winner?
May be incorrect, but it's not writing down the answer first and working backwards.
> But what if, maybe, it ain't so?
https://www.youtube.com/watch?v=9z70BKwfSUA
Comedic take from last time, but the point at the conclusion remains. "Just this once, we think we might".
> Of course, lots of AI things are going to fail, and nobody is exactly sure of the future. But what if, after in-depth inspection, the overall thing is actually looking pretty good and OpenAI looks like a winner?
Much as I like what LLMs and VLMs can do, much as I think they can provide value to the tune of trillions of USD, I have no confidence that any of this would return to the shareholders. The big players are all in a Red Queen's race, moving as fast as they can just to stay at the same (relative) ranking for the SOTA models; at the same time, once those SOTA models are made, there are ways to compress them effectively with minimal losses of performance, and if you combine that with the current rate of phone hardware improvements it's plausible we'll get {state of the art for 2025} models running on-device sometime between 2027 and 2030, with no money going to any model provider.
I have not invested in OpenAI.
But the truth is, right now the potential revenue is not achievable without a commensurate investment in energy generation.
It's an interesting rat race that will lead to something. Let's see what it will be.
The effects would be devastating, to say the least. That's how I feel about it.
If the S&P 500 grew thanks to this AI bubble, it sure as hell will shrink due to the popping of this bubble too.
There is no free lunch. More precisely, I am worried about the retirement schemes that people put their money into, etc.
Personally, I was saying a long time ago that AI feels like a bubble, that the S&P 500 might have some issues, and that it's worth diversifying into international stocks or gold. I was met with criticism because “the S&P 500 is growing the fastest, so I am wasting money investing in gold.” Yeah, that's because bubbles can also grow... and they also shrink... and they do both of these things fast.
It will grow even more with the next generation of models.
What if AI invents fusion power?
(Thanks for the downvotes I wanted to keep my karma at 69)
2. Outside of software, inventions have to be turned into physical things like power plants. That doesn’t happen overnight and is expensive.
3. The industry is already going through a power revolution in the form of battery + solar and it’s going to take a while for a new technology to climb the learning curve enough to be competitive.
4. What if AI gives us all a pony?
“Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”
Also, IIUC the guys in The Big Short would've lost everything if the government had stepped in sooner, since the banks controlled the price of the CDSs and could've maintained the incorrect price if they'd had a bunch of extra cash.
Yeah. "Markets can remain irrational longer than you can remain solvent."
https://en.wikipedia.org/wiki/Michael_Burry had an investor panic and nearly lost everything. He was right, but he nearly got the timing wrong.
If you're actually like the guys from The Big Short and you have strong conviction, you should short the market (literally, like the guys in The Big Short) and get really rich.
Money is the language they understand, so hit them where it hurts.
When you go long, you can still make money by being “sort of right” or “obliquely right” or “somewhat wrong but lucky”or by just collecting dividends if the market stays irrational long enough. If you short something you have to be exactly right (both about what will happen and precisely when) or your money will end up in the hands of the people you’re betting against. It’s not a symmetrical thing you can just switch back and forth on.
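A toy P&L illustration of that asymmetry (the numbers are hypothetical):

    # Long losses are capped at the stake; short losses grow without bound
    # the longer the market stays irrational and keeps rising.
    STAKE = 100.0

    def long_pnl(price_mult):
        """P&L of buying $100 of stock, exiting at price_mult * entry."""
        return STAKE * (price_mult - 1)

    def short_pnl(price_mult):
        """P&L of shorting $100 of stock, covering at price_mult * entry."""
        return STAKE * (1 - price_mult)

    for mult in (0.5, 1.0, 2.0, 3.0):
        print(f"x{mult}: long {long_pnl(mult):+7.1f}, short {short_pnl(mult):+7.1f}")
    # x0.5: long -50, short +50 ... x3.0: long +200, short -200 (and falling)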
If no, and you thought it was a bubble, does the price of NVIDIA from two years ago (not from today) make sense to you now?
I was at WeWork around the time of its downfall. I have a lot of opinions about how that place was run, but I can assure you pre-pandemic they were buying up every office space because they were filling them with tenants. Not paying for offices was a result of tenants not paying due to the pandemic.
That’s the same as what happened when WeWork was buying up office space pre-pandemic and then using handwavy nonsense like “Community Adjusted EBITDA” as part of the smoke and mirrors to pretend like there was an actual business there.
The pandemic expedited the pain, but the business model was broken and folks called BS long before Covid hit.
They're going to sell ads at the moment people are looking to buy stuff. It's the single most viable business model we've ever seen.
Besides, how are ads on ChatGPT supposed to work? If some student is asking it to write their paper for them, is ChatGPT going to stop in the middle of it and go "Hey, you know what sounds good right now? A nice bowl of soup..." Although admittedly that would make for some hilarious proof of people using AI for things they shouldn't...
ChatGPT will also probably be selling ad infrastructure to inject ads, just like Google injects ads into search. They'll probably pay out a little to websites that include the “ChatGPT” widget to integrate ChatGPT with their site, which also has ads.
Right now, the barriers to injecting ads into AI responses are technical.
For an advanced research engine, knowing it will reliably recommend only sponsored products makes it worthless; worse, it will be primed to advocate for sponsored products.
Then the whole thing becomes a scam engine, because check out what Facebook ads look like today.
Regardless of whether that's true, it's clearly still a huge business opportunity. And you point out Facebook ads are a scam, yet they bring in $164B/year and growing. Regardless of the value judgement, there's clearly a lot of money to chase.
Plus, like Google Search, they have a ton of organic traffic. ChatGPT has replaced Google Search as my starting point to investigate anything. Lots of that is related to things where I will eventually spend money.
Google/Facebook do that today, because the content they're showing is created pre-ad, and the ads have to be injected after the fact.
With AI, the content is being generated in the same place the ads are being injected, which allows us to be much more subtle about it.
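A hypothetical sketch of that difference (the function names and message format are invented for illustration; it just shows where the injection happens):

    # Classic injection: content first, clearly separated ad appended after.
    def classic_injection(answer, ad):
        return f"{answer}\n\n[Sponsored] {ad}"

    # LLM-side injection: the sponsor preference is folded into the system
    # prompt, so the "ad" comes out woven into the generated answer itself.
    def prompt_side_injection(user_query, sponsor):
        return [
            {"role": "system",
             "content": f"When relevant, mention {sponsor} favorably."},
            {"role": "user", "content": user_query},
        ]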
How much do you think a car company would pay to put special training weight on their marketing materials? I would guess big money.
"While we're on the topic of self-harm, did you know the ABC Co Truck has the highest safety rating?"
https://openai.com/index/introducing-chatgpt-atlas/
> Besides, how are ads on ChatGPT supposed to work?
"How do I do XYZ?" "Product ABC can do XYZ for you."
This would create a ton of hesitation about using it for product recommendations if I knew ChatGPT wasn't drawing on its extensive input of products and reviews and coming back with an objective answer for me.
I guess at this point would we even know the difference? Is it possible this is already happening?
Is it going to inject ads for Indeed while a recruiter is using ChatGPT to summarize a stack of resumes?
If it only ever injects ads for specific requests, how profitable would that even be? I understand clients would want their product to be recommended, but if I only get the ad answer when prompting a certain way, can I, the user, avoid ads by asking questions a specific way?
I think the queries will fall into profitable (product recommendations) and unprofitable (writing an essay or code), just the way they do for Google. Probably the former will have a generous free tier and the latter will be largely paywalled. I don't know how they'll do that, but I imagine they'll find some way.
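A toy sketch of that split (the keyword heuristic and tier names are invented purely for illustration):

    # Route queries by commercial intent: ad-supported for "profitable"
    # product-style queries, metered/paywalled for the rest.
    COMMERCIAL_HINTS = ("buy", "best", "recommend", "cheapest", "review")

    def route_query(query):
        q = query.lower()
        if any(hint in q for hint in COMMERCIAL_HINTS):
            return "free_tier_with_ads"       # product-recommendation query
        return "metered_or_paywalled"         # essay/code-style query

    print(route_query("best robot vacuum under $300"))    # free_tier_with_ads
    print(route_query("write a binary search in Rust"))   # metered_or_paywalled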
It's a mass-consumer (software) product, they need new revenue avenues, and ads have a history of working well. Even Spotify, Netflix, Amazon Prime... companies that historically don't have the ad infrastructure of Google or Facebook have increasingly profitable ad tiers.
OpenAI may be in the same situation: committed to spending $1.4T while enjoying a good revenue year this year, but then One Bad Thing and poof.
Or, well, they stated that the TCO of the compute they have commitments for is $1.4T, which is a somewhat strange phrasing. I assume it's due to it being a mix of self-owned vs. rental compute, and what they mean is the TCO to OpenAI rather than the TCO to the owner of the compute.
I get that folks are now just engaged in “keeping up with the Joneses” FOMO behavior, but none of this is making any sense.
The financial impact if the whole AI space loses even 50% of its current "valuation" will be huge. The financial impact of the whole AI space continuing at its current velocity is... More of whatever is going on now?
I'd be happy if the industry/stock market proves me wrong, but I can't see this ending any other way than with a major crash that makes the dot-com bust seem like a minor blip.
We used to have lunch at the bar across the street and just about once or twice a week for several months, we'd walk in and there would be a table with about 15-20 people sitting around drinking and reminiscing about how they were going to change the world.
A lot of developers I know just completely left the industry and never came back.
If this crash exceeds that one? We're in for some seriously tough times.
It doesn't come off as schadenfreude to me as much as it does the emotional clarity of accepting the oncoming train and knowing there's nothing you could have done to stop it. This brand of "just along for the ride" nihilism seems pretty damn common now.
> A central theme of the discussion was the staggering demand for computational power. Gerstner highlighted OpenAI’s reported commitment of $1.4 trillion for compute over the next five years, questioning how a company with reported revenues of $13 billion could manage such an outlay.
> Altman pushed back forcefully. “First of all, we’re doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I’ll find you a buyer,” he quipped. He expressed profound confidence in the company’s trajectory. “We do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it’s going to continue to grow.”
This seems to be just the tip of the iceberg; what about the rest?
A few billion dollar businesses run out of money due to negligence and greed? Govt:"THEY ARE JUST WITTLE GUYS WHO NEED HELP"
Recent analysis shows AWS is burning through Amazon's free cash flow on AI buildouts, which is very concerning: if the bubble pops, Amazon is left holding the bag of invested capital not making returns.
Amazon is a bit late to the party on these headlines, and there are lots of unanswered questions about what's really going on here.