
Posted by zlatkov 11 hours ago

OpenAI raises $110B on $730B pre-money valuation (techcrunch.com)
https://openai.com/index/scaling-ai-for-everyone/

https://x.com/sama/status/2027386252555919386

https://xcancel.com/sama/status/2027386252555919386

334 points | 433 comments
himata4113 5 hours ago|
Less than a decade ago, a company reaching $1 trillion was still very much "out there". Now we have an IPO at almost 1 trillion.

It's clear that the stock market cannot be considered normal anymore, held up on hopes and prayers at best.

aurareturn 4 hours ago||
Sure it can. The value of the dollar coincides with stock market valuations.
himata4113 2 hours ago|||
"UBS downgraded the US stock market" happened today for a reason. And are you implying that the US dollar has lost 90% of its value in less than a decade? That it was worth 1/10th of what it is today?
danny_codes 4 hours ago|||
Exactly. A devalued dollar means higher number without adjustment
epolanski 5 hours ago||
Well, it's still a VC market right now, and all the investors have a vested interest in the music not stopping.
tosh 11 hours ago||
> We continue to have a great relationship with Microsoft. Our stateless API will remain exclusive to Azure, and we will build out much more capacity with them.

This sounds a bit like, going forward, (some) OpenAI APIs will also run on platforms other than Azure (e.g. AWS)?

Does anyone know more?

rob74 11 hours ago||
I guess Amazon would have a hard time justifying their investment if OpenAI remained Azure-exclusive...

https://openai.com/index/amazon-partnership/

zmmmmm 4 hours ago|||
Curious what is meant by "stateless".

OpenAI desperately needs to be available outside Azure. We are exclusively using Anthropic atm because it is what is available in AWS Bedrock and it works. These things are solidifying fast.

sidewndr46 11 hours ago||
Unless I'm mistaken wasn't someone at Microsoft suggesting they would just develop their own models soon?
zippyman55 11 hours ago||
Wow! This is circular financing. Sharknado, Altnado….
redleader55 3 hours ago||
Everyone thinks this bubble will continue until AGI(bulls) or until someone calls them on it(bears). I think it will continue until someone finds a quick way to make cheap energy(bullish) or until we can't build more power plants to support AI growth(bearish).

While nothing fancy has happened yet in the area of cheap energy, there is still enough power around the world to build AI data centers. The problem is that this power exists in countries whose leaders the West has decided, often for good reasons, it doesn't want to deal with.

I'm predicting that over 2027, either the US will become more aggressive about making war with these countries, or company CEOs will start developing "reality-distortion fields" around themselves and decide that having enough power for the next datacenter is for the greater good of humanity. Before that, Europe will decide that AI training on human faces (e.g. of non-Europeans) is not really a problem and will allow US companies to train their models in EU countries.

jppope 9 hours ago||
Interesting story for sure (to be clear, I'm not talking about the writing by Reuters), but would you buy or skip the OpenAI IPO?

To me it feels like one of those throw-some-play-money-at-it-and-see-what-happens situations. I expect it will return negative given the raw financials and outlook, but there's a small chance the brand carries enough weight with the public that it spikes.

I'd love to hear other thoughts though

epolanski 5 hours ago|
If the IPO were at $20B, maybe I'd throw in $1,000.

But at such numbers it's nonsense.

I don't see any moat. LLMs are commodities.

Enterprise is on Gemini/NotebookLM and Copilot, as they're natural extensions of the Google and Office suites they already use.

Devs are in the Anthropic camp, but they will jump ship as soon as they can save 90% of the money for 99% of the output.

Yizahi 11 hours ago||
Nvidia will get all that money back via GPU purchases, Amazon via cloud rental and SoftBank is being typical SoftBank - a rich but not particularly bright kid in a class :) .
AnimalMuppet 10 hours ago|
"I give you $30 billion if you use it to buy $30 billion of stuff from me" doesn't sound like a very good investment. Is Nvidia expecting more back than it puts in? Enough more to make the deal profitable?

Or is it just to keep Nvidia from crashing?

max51 5 hours ago|||
"I give you $30B worth of hardware that costs me <$10B to make, in exchange for $30B worth of shares in your company" would be a more accurate description.
dminik 3 hours ago||
How does this work with private companies? It feels like Nvidia could find that the market does not value OpenAI stock the same way they did.
recursive 3 hours ago||
Public or private, there's never a guarantee of being able to sell back for some nominal price.
Yizahi 10 hours ago||||
Well, I won't pretend I know the answer :) . But I assume that a) they are partially betting on making a normal return on investment (i.e. OAI not crashing), b) they profit from running a huge expense/revenue cycle (a company making, say, a million in profit on a billion in revenue is valued more favorably than the same profit on only ten million in revenue), and c) even if it all goes wrong, it is still better to get back most of the investment with zero profit than to risk losing it all, like SoftBank or other investors might.
rich_sasha 10 hours ago||||
In the end it's exchanging GPUs for OpenAI shares. It's not a non-trade: in the current market, Nvidia could readily sell the hardware for cash instead, so the marginal cost is sharply positive.
vonneumannstan 10 hours ago|||
$30B in sales is worth more than $30B in stock appreciation...
CrzyLngPwd 4 hours ago||
Puff puff until it pops!
pigeons 8 hours ago||
Does anyone have ethical concerns about using OpenAI, given the money donated to the current US administration in one way or another? I will search for more accurate details about that situation. I know about several other ethical concerns people have with OpenAI: copyright and other considerations around the work being trained on, the lack of action regarding users harmed by their use of the product (often involving mental health), environmental concerns, and quite a few others. But I am interested in whether many people think their political donations are an issue or not.
maplethorpe 10 hours ago||
> The Information had previously reported that $35 billion of Amazon’s investment could be contingent on the company either achieving AGI or making its IPO by the end of the year. OpenAI’s announcement confirms the funding split, but says only that the additional $35 billion will arrive “in the coming months when certain conditions are met.”

Incredible.

konschubert 10 hours ago||
So basically, Amazon is buying into the IPO at an early price. Maybe this is the time to divest from MSCI world. I don’t want to be the bag holder in the world’s largest pump and dump.

It can both be true at the same time: That AI is going to disrupt our world and that Open AI does not have a business model that supports its valuation.

WarmWash 10 hours ago|||
Tesla is a car company with relatively small, and shrinking, sales, that is worth $1.5T on the promise of [Elons_Promise_of_the_Month]
konschubert 10 hours ago|||
Yea, proving my point that index funds are maybe not the safest place if you want to invest in real value. And soon, Twitter/Grok/SpaceX might be doing an IPO.
sixQuarks 10 hours ago|||
You forgot to mention they solved vision based autonomous driving, but I guess that doesn’t matter if Elon = bad
zozbot234 10 hours ago|||
SAE level 2 driver assistance is explicitly not autonomous driving.
SpicyLemonZest 10 hours ago||||
It's this kind of dynamic that makes me pull back on my otherwise pretty AI-forward stance. There's an entire community of people who passionately believe it's obvious and undeniable that Elon Musk has solved problems that he has not solved and his companies deliver things they don't deliver. Tesla is absolutely unambiguous in their marketing material (https://www.tesla.com/fsd) that they do not have autonomous driving, but you're far from the first person I've encountered who's been tricked into believing otherwise.

I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?

soulofmischief 8 hours ago||||
Gonna need a citation for that, buck.
outside1234 10 hours ago||||
Seems not solved:

https://fortune.com/2026/02/26/tesla-robotaxis-4x-8x-worse-t...

albedoa 10 hours ago||||
Hope daddy sees this and gives you that lollipop.
iancmceachern 10 hours ago|||
Huh? They did not "solve" vision based driving.
eviks 10 hours ago||
Of course, but if Elon=great you can ignore that
general_reveal 10 hours ago||||
Did it ever occur to you that an entire generation of developers is going to retire in less than 20 years? They are betting that the software industry will be autonomous. Really, think of our industry like the AV phenomenon: we're the drivers who are about to be shown the door. That's the bet.

The world will still need software, lots of it. Their valuation is based on an entirely developer-less future world (no labor costs).

zozbot234 10 hours ago|||
Even the rise of high-level languages did not lead to a "developer-less future". What it did was improve productivity and make software cheaper by orders of magnitude; but compiler vendors did not benefit all that much from the shift.
rishabhaiover 8 hours ago|||
A high-level language or a compiler wasn't automating end-to-end reasoning for a programming task.
howardYouGood 7 hours ago|||
[dead]
wongarsu 10 hours ago||||
OpenAI has all the name recognition (which is worth a couple billion in itself), but when it comes to actual business use cases in the here and now, Anthropic seems ahead. Even more so if we are talking about software dev. Yet they are valued at less than half of OpenAI's valuation.

What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now, they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce in a model with much better understanding of our world and its agency in it. If this comes to pass OpenAI's value is near unlimited. If it doesn't, its value is at best half what it is today

whizzter 10 hours ago|||
> What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.

And that's the dealbreaker for me, since they've been so adamant that scaling will take them there, while we're all seeing that it's been diminishing returns for a while.

I was worried a few years back with the overwhelming buzz, but my 2017 blog post is still holding strong. To be fair, it did point to ASI, where the valuation is indeed unlimited; nowadays the definition of AGI is quite weakened in comparison. But does that weakened definition still convey an unlimited valuation?

zozbot234 10 hours ago||||
Obligatory reminder that today's so-called "AGI" has trouble figuring out whether I should walk or drive to the car wash in order to get my dirty car washed. It has to think through the scenario step by step, whereas any human can instantly grok the right answer.
wongarsu 9 hours ago|||
The idea/hope is that a video model would answer the car wash problem correctly. These are exactly the kinds of issues you have to solve to avoid teleporting objects around in a video, so once we can manage more than a couple of seconds of coherent video, we will have something that understands the real world much better than text-based models do. Then we "just" have to somehow make a combined model that has this kind of understanding and can also write text and make tool calls.

Yes, this is kind of like Tesla promising full self driving in 2016

SpicyLemonZest 10 hours ago||||
I just don't know how to engage with these criticisms anymore. Do you not see how increasingly convoluted the "simple question LLMs can't answer" bar has gotten since 2022? Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?
maplethorpe 3 hours ago|||
> Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?

I completely agree. I'm ashamed to admit, I've actually walked to the car wash without my car on more than one occasion. We all make mistakes!

bigstrat2003 7 hours ago|||
> Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?

Not that dumb, no. That's why it's laughable to claim that LLMs are intelligent.

SpicyLemonZest 5 hours ago||
I should note, for epistemic honesty, that I expected to be able to come up with an example of a mistake I made recently that was clearly just as dumb, and now I don't have a response to offer because I can't actually come up with one.
reducesuffering 8 hours ago|||
What are you talking about? OpenAI's ChatGPT free tier (that everyone uses) answers this in the first sentence within a couple seconds.

"If your goal is to get your dirty car washed… you should probably drive it to the car wash "

toraway 7 hours ago||
That problem went viral weeks ago, so it is no longer a valid test. At the time, it consistently tripped up all the SOTA models at least 50% of the time (you also have to use a sample size > 1, given the huge variation between attempts even with the exact same wording).

The large hosted model providers always "fix" these issues as best they can after they become popular. It's a consistent pattern, repeated many times now, and they benefit from exactly this scenario of seemingly "debunking" the issue well after the fact. Often the original behavior can be reproduced by moving the wording/numbers/etc. sufficiently far from the original prompt.

waisbrot 6 hours ago|||
For example, I just asked ChatGPT "The boat wash is 50 meters down the street. Should I drive, sail, or walk there to get my yacht detailed?" and it recommended walking. I'm sure with a tiny bit more effort, OpenAI could patch it to the point where it's a lot harder to confuse with this specific flavor of problem, but it doesn't alter the overall shape.
reducesuffering 6 hours ago||
This question is obviously ambiguous. The context here on HN is "questions LLMs are stupid about": I mention a boat wash, so clearly you should take the boat to the boat wash.

But this question posed to humans is plenty ambiguous because it doesn't specify whether you need to get to the boat or not, and whether or not the boat is at the wash already. ChatGPT Free Tier handles the ambiguity, note the finishing remark:

"If the boat wash is 50 meters down the street…

Drive? By the time you start the engine, you’re already there.

Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.

Walk? You’ll be there in about 40 seconds.

The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.

If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."

toraway 1 hour ago||
You can make the argument that the boat variant is ambiguous (though it's a stretch), but that's really not relevant, since the point was that the underlying failure mode is unchanged, just concealed now.

The original car question is not ambiguous at all. And the specific responses to the car question weren't even concerned with ambiguity; the logic was borderline LLM psychosis in some examples, like you'd see from GPT-3.5, but papered over by the well-spoken "intelligence" of a modern SOTA model.

reducesuffering 6 hours ago|||
I don't understand what occasional hiccups prove. The models can pass college acceptance tests in advanced educational topics better than 99% of the human population, and because they occasionally have a shortcoming, they're somehow worse than humans? Those edge cases are quickly going from 1% to 0.01%, too...

"any human can instantly grok the right answer."

When asking a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this one, humans will trip up far more often than the frontier LLMs.

rvz 10 hours ago||||
> What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.

"AGI" is the IPO.

lenerdenator 9 hours ago|||
> If this comes to pass OpenAI's value is near unlimited.

How?

If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.

This isn't a value proposition for a business; it's an end-of-value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk (which is just Pascal's Wager with GPUs) and people who are so wealthy that they've been disconnected from real-world consequences.

The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.

zozbot234 9 hours ago||
"End of human-based value creation" is tantamount to post-scarcity. It "breaks" capitalism because it supposedly obviates the resource allocation problem that the free-market economy is the answer to. It's what Karl Marx actually pointed to as his utopian "fully realized communism". Most people would think of that as a pipe dream, but if you actually think it's viable, why wouldn't you want it?
maplethorpe 3 hours ago||||
Around 6 months ago, the company I work for bought Cursor subscriptions for everyone. I thought to myself, "this is it".

The majority of my coworkers now push AI-generated code each day, and it has completely absolved me of any fear whatsoever that AI will take my job.

konschubert 10 hours ago||||
It can both be true that

a) AI is going to replace a Bazillion-Dollar Industry and that

b) being an AI model provider does not allow you to capture margins above 5% long-term.

I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean they capture monopoly rents on their assets.

outside1234 10 hours ago||||
But Anthropic is the one that is disrupting software development? So why are we not piling into that?
RobotToaster 10 hours ago|||
Exactly, the dot com bubble didn't mean that the internet was just a fad.
boringg 10 hours ago|||
I'm curious how they define AGI technically. Seems like you would want that to be a tight definition.
lm28469 10 hours ago|||
Didn't they already define it as "a system capable of generating at least $100 billion in profit"?
uluyol 10 hours ago|||
It just needs to be anything that will force OpenAI to IPO.
stavros 10 hours ago|||
I'd love to know how they define AGI.
zozbot234 10 hours ago|||
They've previously defined AGI as an AI that can directly create $100B in economic value.
stavros 10 hours ago||
Hmm interesting, thanks. I wonder how much value it's already created.
bigfishrunning 4 hours ago||
That number is probably negative
baal80spam 10 hours ago||||
Altman Gets Investment?
etyhhgfff 5 hours ago|||
Obviously in a way they get the $35B.
outside1234 10 hours ago||
Hopefully Microsoft is selling parts of their share of this trash into these funding rounds...
sega_sai 10 hours ago|
Okay, I can understand the investment from SoftBank, and maybe somewhat from Amazon (if they plan to use OpenAI's models), but an investment from Nvidia, who will then sell OpenAI the GPUs with an X% markup, doesn't make sense to me.