Posted by zlatkov 11 hours ago
It's clear that the stock market cannot be considered normal anymore, held up on hopes and prayers at best.
This sounds a bit like, going forward, (some) OpenAI APIs will also run on platforms other than Azure (e.g. AWS)?
Does anyone know more?
OpenAI desperately needs to be available outside Azure. We are exclusively using Anthropic atm because it is what is available in AWS Bedrock and it works. These things are solidifying fast.
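For context, here's a minimal sketch of what "available in Bedrock" looks like in practice: a Claude model called through the standard AWS SDK, with no separate vendor account or endpoint to manage. The model ID, region, and prompt below are just illustrative, not a statement about what any particular account has enabled.

```python
import boto3

# Bedrock runtime client; region is illustrative.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Call an Anthropic Claude model via Bedrock's Converse API.
# The model ID is an example; use whichever Claude model your account has access to.
response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Summarize this incident report in three bullets."}]}
    ],
)

# The assistant reply is a list of content blocks; print the first text block.
print(response["output"]["message"]["content"][0]["text"])
```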
While nothing fancy has happened yet in the area of cheap energy, there is still enough power around the world to build AI data centers. The problem is that this power exists in countries whose leaders the West has decided, often for good reasons, it doesn't want to deal with.
I'm predicting that over 2027, either the US will become more aggressive about waging war on these countries, or company CEOs will start developing "reality-distortion fields" around themselves and decide that having enough power for the next datacenter is for the greater good of humanity. Before that, Europe will decide that AI training on human faces (e.g. of non-Europeans) is not really a problem and will allow US companies to train their models in EU countries.
To me it feels like one of those "throw some play money at it and see what happens" sorts of situations. I expect it will return negative due to the raw financials and outlook, but there's a small chance the brand carries enough weight with the public that it spikes.
I'd love to hear other thoughts though
But at such numbers it's nonsense.
I don't see any moat. LLMs are commodities.
Enterprise is on Gemini/NotebookLM and Copilot as it's a natural extension of the Google and Office suite they use.
Devs are in the Anthropic camp, but they will jump ship as soon as they can save 90% of the money for 99% of the output.
Or is it just to keep Nvidia from crashing?
Incredible.
Both can be true at the same time: that AI is going to disrupt our world, and that OpenAI does not have a business model that supports its valuation.
I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?
https://fortune.com/2026/02/26/tesla-robotaxis-4x-8x-worse-t...
The world will still need software, lots of it. Their valuation is based on an entirely developer-less future world (no labor costs).
What somewhat justifies OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now; they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce into a model with much better understanding of our world and its agency in it. If this comes to pass, OpenAI's value is near unlimited. If it doesn't, its value is at best half what it is today.
And that's the dealbreaker for me, since they've been so adamant that scaling will take them there, while we're all seeing diminishing returns for a while now.
I was worried a few years back with the overwhelming buzz, but my 2017 blogpost is still holding strong. To be fair, it did point to ASI, where valuation is indeed unlimited, but nowadays the definition of AGI is quite weakened in comparison... but does that then convey an unlimited valuation?
Yes, this is kind of like Tesla promising full self-driving in 2016.
I completely agree. I'm ashamed to admit, I've actually walked to the car wash without my car on more than one occasion. We all make mistakes!
Not that dumb, no. That's why it's laughable to claim that LLMs are intelligent.
"If your goal is to get your dirty car washed… you should probably drive it to the car wash "
The large hosted model providers always "fix" these issues as best as they can after they become popular. It's a consistent pattern, repeated many times now, and they benefit from exactly this scenario of the issue appearing "debunked" well after the fact. Often the original behavior can be reproduced by modifying the wording/numbers/etc. enough to put some distance between the prompt and the original.
But this question posed to humans is plenty ambiguous, because it doesn't specify whether you need to get to the boat or not, and whether or not the boat is at the wash already. ChatGPT Free Tier handles the ambiguity; note the finishing remark:
"If the boat wash is 50 meters down the street…
Drive? By the time you start the engine, you’re already there.
Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.
Walk? You’ll be there in about 40 seconds.
The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.
If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."
The original car question is not ambiguous at all. And the specific responses to the car question weren't even concerned with ambiguity; the logic in some examples was borderline LLM psychosis, like you'd see in GPT-3.5, but papered over by the well-spoken "intelligence" of a modern SOTA model.
"any human can instantly grok the right answer."
When you ask a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this, humans will trip up far more often than the frontier LLMs.
"AGI" is the IPO.
How?
If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.
This isn't a value proposition for a business, it's an end-of-value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk, which is just Pascal's Wager with GPUs, and people who are so wealthy that they've been disconnected from real-world consequences.
The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.
The majority of my coworkers now push AI-generated code each day, and it has completely absolved me of any fear whatsoever that AI will take my job.
a) AI is going to replace a Bazillion-Dollar Industry and that
b) being an AI model provider does not allow you to capture margins above 5% long-term
I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean that they capture monopoly rents on their assets.