Posted by donohoe 1 day ago
Lots of questions about whether this makes sense, and it's highly likely Amazon never gets $38B in cash from OpenAI out of this.
[0] https://www.tomshardware.com/tech-industry/artificial-intell...
I remember when everyone was racing to produce "datacenter in a shipping container" solutions. I just laughed because apparently nobody bothered to check whether you could actually plug it in anywhere.
In what context? This isn't fashion, being the 2nd mover has benefits which often outweigh the costs.
OpenAI is doing the same with compute. They're going to have more compute than everyone else combined. It will give them the scale and war chest to drive everyone else out. Every AI company is going to end up being a wrapper around them. And OpenAI will slowly take that value too, either via acquisition or by cloning successful products.
OpenAI and Anthropic are signing large deals with Google and Amazon for compute resources, but ultimately it means that Google and Amazon will own a ton of compute. Is OpenAI paying Amazon's capex just so Amazon can invest and end up owning what OpenAI needs over the long term?
For those paying Google, are they giving Google the money it needs to further invest in its TPUs, handing it a huge advantage?
Google is a viable competitor here.
Everyone else is missing part of the puzzle. They theoretically could compete but they're behind with no obvious way of catching up.
Amazon specifically is in a position similar to where they were with mobile. They put out a competing phone but with no clear advantage it flopped. They could put out their own LLM but they're late. They'd have to put out a product that is better enough to overcome consumer inertia. They have no real edge or advantage over OpenAI/Google to make that happen.
Theoretically they could back a competitor like Anthropic, but what's the point? They look like an also-ran these days, and ultimately who wins doesn't affect Amazon's core businesses.
Every image/video/text post on a meta app is essentially subsidized by oai/gemini/anthropic as they are all losing money on inference. Meta is getting more engagement and ad sales through these subsidized genai image content posts.
Long term they need to catch up, and training/inference costs need to drop enough that each genai post costs less than the net profit on the ads, but they're in a great position to bridge the gap.
The end of all of this is ad sales. Google and Meta are still the leaders of this. OpenAI needs a social engagement platform or it is only going to take a slice of Google.
Do you have any sources backing this? As in "more engagement and ad sales" relative to what they would get with no genai content?
While I can see Anthropic or any other lab leading on API usage, it is unlikely that Anthropic leads in terms of raw consumer usage, as Microsoft has the Office AI integration market locked down.
No, it’s Amazon that’s doing this. OpenAI is paying Amazon for the compute services, but it’s Amazon that’s building the capacity.
the race is for sure on: https://menlovc.com/perspective/2025-mid-year-llm-market-upd...
It's still all about the (yet to be collected) data and advancements in architecture, and OAI doesn't have anything substantial there.
It's slow as balls as of late though. So I use a lot of Sonnet 4.5 just because it doesn't involve all this waiting, even though I find Sonnet to be kinda lazy.
A relatively localized, limited lived experience apparently conveys a lot that LLM input does not - there's an architecture problem (or a compute constraint).
No amount of reddit posts and H200s will result in a model that can cure cancer or drive high-throughput waste filtering or precision agriculture.
I started working in 1997 at the height of the dot com bubble. I thought it would go on forever but the second half of 2000 and 2001 was rough.
I know a lot of people designing AI accelerator chips. Everyone over 45 thinks we are in an AI bubble. It's the younger people that think growth is infinite.
I told them to diversify from their company stock but we'll see if they have listened after the bubble pops
ChatGPT has 800 million weekly users but only 10 million are paying.
Anthropic is moving to Trainium[1], that will free Nvidia GPUs and allow AWS to rent those GPUs to OpenAI.
[1] https://finance.yahoo.com/news/amazon-says-anthropic-will-us...
The most logical answer is that they just didn't like the chips. Particularly given AWS has been doing everything they can to pump up interest, and this huge PR release doesn't mention it at all. That omission speaks volumes.
But that feels weird combined with this. You can buy OpenAI API access which is served off of AWS infrastructure, but you can't bill for it through AWS? (I mean, lots of companies work like that. but Microsoft is betting that a lot of people move regular workloads to Azure so they can have centralized billing for inference and their other stuff?)
> Non-API products may be served on any cloud provider.
I am not sure if Bedrock counts. There are 2 OpenAI models already there: https://aws.amazon.com/blogs/aws/openai-open-weight-models-n...
https://www.tomshardware.com/tech-industry/artificial-intell...
This bubble is one for the history books!
There’s been some buzz around the official opening of the Grand Egyptian Museum, which I visited last month. That project took 1.1 to 1.2B USD. Double its original budget estimate, but the museum still looks fantastic and it feels, tangibly, like it’s worth a billion.
In contrast with all the money spent on AI, it just feels like monopoly money. Where’s the monument to its success? We could’ve built flying cars or been back to the moon with this much money.
It's much less likely that I'd drive a flying car, and there is zero chance that I would be the one going to the moon, if we spent the equivalent money on those things instead.
I currently pay 200 USD a month for AI, and my company pays about 1,200 USD for all employees to get essentially unlimited use of it - and I get AT LEAST 5x the return in value on that. I would happily multiply all those numbers by 5 and still pay it.
Domain knowledge, bug fixing, writing tests, fixing tests, spotting what’s incomplete, help visualising results, security review generation for human interpretation, writing boilerplate, and simpler refactors
It can’t do all of these things end to end itself, but with the right prompting and guidance holy smokes does it multiply my positive traits as a developer
> and I get AT LEAST 5x the return on value on that
You make $800 by paying OpenAI $200? Can you please explain how the value you put in returns 5x, and how I can start making $800 more a month?
> holy smokes does it multiply my positive traits as a developer
But it’s not you doing the work. And by your own admission, anyone can eventually figure it out. So if anything you’ve lost traits and handed them to the LLM. As an employee you’re less entrenched and more replaceable.
I estimate that the additional work I can do is worth that much. It doesn't matter whether "I do it" or "the LLM does it" - it's both of us, but I'm responsible for the code (I always execute it, test it, and take responsibility for it). That's just my estimate. Also, what a ridiculous phrasing; the intent of what I'm saying is "I would pay a lot more for this because I personally see the value in it" - that's a subjective judgement I'm making. I have no idea who you are, so why would you assume it's a transferable objective measure that could simply apply to you? AI is a multiplier on the human that uses it, and the quality of the output is hugely dependent on the communication skill of the human. You using AI and me using AI will produce different results with 100% certainty; one will be better, it doesn't matter whose - I'm saying they will not be equal.
>But it’s not you doing the work. And by your own admission, anyone can eventually figure it out. So if anything you’ve lost traits and handed them to the LLM. As an employee you’re less entrenched and more replaceable.
So what? I'm results driven - the important thing is that the task gets done. It's not "me" doing it or the "LLM" doing it; it's me AND the LLM. I'm still responsible if there are bugs in it, and I check it and make sure I understand it.
>As an employee you’re less entrenched and more replaceable.
I hate this attitude; it's the attitude of a very poor employee. It leads to gatekeeping, knowledge hoarding, and lots of other petty and defensive behaviour, and it's what people think when they view the world from a point of scarcity. I argue the other way: the additional productivity and tasks that I get done with the assistance of LLMs make me a more valuable employee, so the business is incentivised to keep me. There's always more to do; it's just that we are now using chainsaws and not axes.
I disagree. I brought all this up because it seems you are confusing perceived, marketed/advertised value with actual value. Again, you did not become 5 times more valuable in reality to your employer, nor did you obtain more money (literal value). You're comparing $200 of "value", which is 200 dollars, to... time savings? Unmeasurable skill gains? This is the unclear part.
> I hate this attitude, this is an attitude of a very poor employee - It leads to gatekeeping and knowledge hoarding, and lots of other petty and defensive behaviour,
You may hate that attitude, but those people will still be employed long after the boss sacks you for not taking enough responsibility for your LLM's mistakes. Entrenching yourself is really how it's always worked, and the people who entrenched themselves didn't do it by relying on a tool to help them do their job. This is the world, and sadly LLMs don't do anything to unentrench the people making money.
All I am saying is: enjoy your honeymoon period with your LLM. If that means creating apples-and-oranges definitions of "value" and then comparing them directly as benefits, then more power to you.
But I agree that the numbers are increasingly beyond reasonable comprehension
A lot of feeling going on in this comment, but that's not really how money works.