Posted by ripe 9/1/2025
But actually, every other company has been much more strategic. Microsoft is bullish because it partnered with OpenAI and being bullish pumps its share price; Google is the natural home of a lot of this research.
Amazon, Apple, etc. aren't natural homes for this; they don't need to burn money chasing it.
So there we have it: the companies with a good strategy for this are investing heavily, the others will pick up mergers and key technological partners as the market matures, and presumably Zuck will go off and burn $XB on the next fad once AI has cooled down.
On the last earnings call the CEO gave a long rambling defensive response to an analyst question on why they’re behind. Reports from the inside also say that leaders are in full blown panic mode, pressing teams to come up with AI offerings even though Amazon really doesn’t have any recognized AI leaders in leadership roles and the best talent in tech is increasingly leaving or steering clear of Amazon.
I agree they should just focus on what they're good at, which is logistics and fundamental "boring" compute infrastructure. However, leadership there is all over the map trying to convince folks they're not behind instead of just focusing on strengths.
They have huge exposure because of AWS; if the way people use computing shifts, and AWS isn't well-configured for AI workloads, then AWS has a lot to lose.
> Every other player is scrambling for hardware/electricity while Amazon has been building out data centers for the last 20 years.
Microsoft and Google have also been building out data centers for quite a while, but also haven't sat out the AI talent wars the way Amazon has.
What does that mean? Not enough GPUs?
1. Price-performance has struggled to stay competitive. There are some supply-demand forces at play, but the top companies consistently seem to strike better deals elsewhere.
2. The way AWS is architected, especially on networking, isn't ideal for AI. They've dug in their heels on their own networking protocols despite struggling to compete on performance. I personally know of several workloads that left AWS because it couldn't compete on networking performance.
3. Struggling on the managed side. On paper a service like Bedrock should be great, but in practice it's been a hot mess. I'd love to use Anthropic via Bedrock, but it's just much more reliable to go direct (the two call paths look roughly like the sketch below; the difference is reliability, not the API shape). AWS has never been great at this sort of managed service at scale, and they're struggling here again.
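To be concrete, here's a minimal sketch of the two paths, assuming the anthropic SDK and boto3 are installed; the model IDs are illustrative, not a recommendation:

    import boto3
    import anthropic

    prompt = "Summarize our Q3 networking costs."

    # Direct Anthropic API (reads ANTHROPIC_API_KEY from the environment)
    direct = anthropic.Anthropic()
    resp = direct.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.content[0].text)

    # Same request routed through Bedrock
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    out = bedrock.converse(
        modelId="anthropic.claude-sonnet-4-20250514-v1:0",  # illustrative Bedrock model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    print(out["output"]["message"]["content"][0]["text"])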
Being at the forefront of AI matters to Meta in two ways:
(1) Creating a personalized, per-user data profile for ad targeting is very much their core business. An LLM can do a very good job of synthesizing all the data they have on someone to predict what that person will be interested in.
(2) By offering a free "ask me anything" service through meta.ai, tied directly to a real-world human user account, they gather an even more robust user profile.
This isn't, in my opinion, simply throwing billions at a problem willy-nilly. Figuring out how to apply this economically to their vast reams of existing customer data is going to directly impact their bottom line.
Is synthesizing the right word here?
LLMs look to be shaping up as an interchangeable commodity as training datasets, at least for general purpose use, converge to the limits of the available data, so access to customers seems just as important, if not more, than the models themselves. It seems it just takes money to build a SOTA LLM, but the cloud providers have more of a moat, so customer access is perhaps the harder part.
Amazon do of course have a close relationship with Anthropic both for training and serving models, which seems like a natural fit given the whole picture of who's in bed with who, especially as Anthropic and Amazon are both focused on business customers.
It doesn't have to be either/or of course - a cloud provider may well support a range of models, some developed in house and some not.
Vertical integration - a cloud provider building everything they sell - isn't necessarily the most logical business model. Sometimes it makes more sense to buy from a supplier, giving up a bit of margin, than build yourself.
Much more than the others, Meta runs a content business. Gen AI aids in content generation, so it behooves them to research it. Even before the current explosion of chatbots, Meta was putting this stuff into their VR framework. It's used for their headset tracking, and speech-to-text is helpful for controlling a headset without a physical keyboard.
You're making it sound like they'll follow anything that walks by but I do think it's more strategic than that.
(The other 10% is mostly Google Maps and MercadoLibre.)
Buying competition is par for the course for near-monopolies in their niches. As long as the scale differences in value are still very large, you can avoid competition relatively cheaply, while the acquired still walk away with a lot of money.
This means there are two avenues:
1. Get a team of researchers to improve the quality of the models themselves to provide a _better_ chat interface
2. Get a lot of engineers to work LLMs into a useful product besides a chat interface.
I don't think either of these options is going to pan out. For (1), the consumer market has been saturated. Laymen are already impressed enough by inference quality; there's little ground to be gained here short of a super AGI terminator Jarvis.
I think there's something to be had with agentic interfaces now and in the future, but they would need to have the same punching power with the public that GPT-3 did when it came out to justify the billions in expenditure, which I don't think they will.
I think these companies might be able to break even if they can automate enough jobs, but... I'm not so sure.
[1]: https://www.sec.gov/Archives/edgar/data/1326801/000132680114...
I mean Cursor is already at $500 million ARR...
I could see the increased productivity of using Cursor indirectly generating a lot more value per engineer, but... I wouldn't put my money on it being worth it overall, and neither should investors chasing the Nvidia returns bag.
For Amazon “renting servers” at very high margin is their cash cow. For many competitors it’s more of a side business or something they’re willing to just take far lower margin on. Amazon needs to keep the markup high. Take away the AWS cash stream and the whole of Amazon’s financials start to look ugly. That’s likely driving the current panic with its leadership.
Culturally Amazon does really well when it’s an early mover leader in a space. It really struggles, and its leadership can’t navigate, when it’s behind in a sector as is playing out here.
Companies are not going to stop needing databases and the 307 other things AWS provides, no matter how good LLMs get.
Cheaper competitors have been trying to undercut AWS since the early days of its public availability, and it has not worked at all. AWS's very comprehensive offering, proven track record, and momentum have shielded it and will continue to indefinitely.
Further, AWS is losing share at a time when GCP and Azure are becoming profitable businesses, so they're no longer losing money to gain market share.
It's similar to how AWS became the de-facto cloud provider for newer companies. They struggled to convince existing Microsoft shops to migrate to AWS, instead most of the companies just migrated to Azure. If LLMs/AI become a major factor in new companies deciding which will be their default cloud provider, they're going to pick GCP or Azure.
Microsoft's in a sweet spot. Apple's another interesting one, you can run local LLM models on your Mac really nicely. Are they going to outcompete an Nvidia GPU? Maybe not yet, but they're fast enough as-is.
Amazon is the biggest investor in AI of any company. They've already spent over $100B YTD on capex for AI infrastructure.
I really liked the concept of Apple Intelligence with everything happening all on device, both process and data with minimal reliance off device to deliver the intelligence. It’s been disappointing that it hasn’t come to fruition yet. I am still hopeful the vapor materializes soon. Personally I wouldn’t mind seeing them burning a bit more to make it happen.
Go all in the new fad, investors pile up on your stock, dump, repeat...
Does he have this net worth because what he is doing or despite what he is doing? :-)
Correlation does not imply causation. Attribution is hard.
Zuckerberg failed every single fad he tried.
He's becoming more irrelevant every year, and only the company's spoils from the past (earned not least by enabling, for example, a genocide to be committed in Myanmar https://www.pbs.org/newshour/world/amnesty-report-finds-face...) carry them through the series of disastrous, idiotic decisions Zuck is inflicting on them.
- VR with Oculus. It never caught on, for most people who own one, it's just gathering dust.
- Metaverse. They actually spent billions on that? https://www.youtube.com/watch?v=SAL2JZxpoGY
- LLAMA is absolute trash, a dumpster fire in the world of LLMs
Zuck is now trying to jump again on the LLM bandwagon and he's trying to...buy his way in with ridiculous pay packages: https://www.nytimes.com/2025/07/31/technology/ai-researchers.... Why is he so wrong to do that, you might ask?
He is doing it at the worst possible moment: LLMs are stagnating and even far better players than Meta like Anthropic and OpenAI can't produce anything worth writing about.
ChatGPT-5 was a flop; Anthropic is struggling financially, lowering token limits, preparing users for price increases, and doing a 180 on their promise not to use chat data for training; and Zuck, in his infinite wisdom, decides to hire top AI talent at a premium price in a rapidly cooling market? You can't make up stuff like that.
It would appear that apart from being an ass kisser to Trump, Zuck shares another thing with the orange man-child running the US: a total inability to make good, or even sane, deals. Fingers crossed that Meta goes bankrupt, just like Trump's six bankruptcies, and then Zuck can focus on his MMA career.
I don't know in what circles you're hanging out, I don't know a single person who believed in the metaverse
Oh please, the world was full of hype journalists wanting to sound like they get it and they are in it, whatever next trash Facebook throws their way.
The same way folks nowadays pretend the LLMs are the second coming of Jesus, it's the same hype as the scrum crowd, the same as crypto, NFTs, web3. Always ass kissers who can't think for themselves and have to jump on some bandwagon to feign competence.
Look at what the idiots at Forbes wrote: https://www.forbes.com/councils/forbestechcouncil/2023/02/27...
They are still very influential, despite having shit takes like that.
Accenture still thinks the metaverse is groundbreaking: https://www.accenture.com/us-en/insights/metaverse
What a bunch of losers!
71% of executives seemed to be very excited about it: https://www.weforum.org/stories/2022/04/metaverse-will-be-go...
Executives (like Zuck) are famous for being rather stupid, so if they are claiming something, you can bet it's not gonna happen.
Apparently, "The metaverse is slowly becoming the new generation’s digital engagement platform, but it’s making changes across enterprises, too."
https://www.softserveinc.com/en-us/blog/the-promise-of-the-m...
A better way to look at it is that the absolute number 1 priority for Google, ever since they first created a money spigot by monetising high-intent search and got the monopoly on it (outside of Amazon), has been to hold on to that. Even YT (the second biggest search engine on the internet other than Google itself) is high-intent search leading to advertising sales conversion.
So yes, Google has adopted and killed lots of products, but for its big bets (web 2.0 / Android / Chrome) it's basically done everything it can to ensure it keeps its insanely high-revenue, high-margin search business going.
What it has to show for it is basically being the only company to have remained dominant across technological eras (desktop -> web 2.0 -> mobile -> (maybe) LLM).
As good as OpenAI is as a standalone, and as good as Claude / Claude Code is for developers, google has over 70% mobile market share with android, nearly 70% browser market share with chrome - this is a huge moat when it comes to integration.
You can also be very bullish about other possible trends. For AI - they are the only big provider which has a persistent hold on user data for training. Yes, OpenAI and Grok have a lot of their own data, but google has ALL gmail, high intent search queries, youtube videos and captions, etc.
And for AR/VR, android is a massive sleeping giant - no one will want to move wholesale into a Meta OS experience, and Apple are increasingly looking like they'll need to rely on google for high performance AI stuff.
All of this protects google's search business a lot.
Don't get me wrong, on the small stuff google is happy to let their people use 10% time to come up with a cool app which they'll kill after a couple of years, but for their big bets, every single time they've gone after something they have a lot to show for it where it counts to them.
The small stuff that they kill is just that--small stuff that was never important to them strategically.
I mean, sure, don't heavily invest (your attention, time, business focus, whatever) in something that is likely to be small to Google, unless you want to learn from their prototypes, while they do.
But to pretend that Google isn't capable of sustained intense strategic focus is to ignore what's clearly visible.
Google is leading in terms of fundamental technology, but not in terms of products
They had the LaMDA chatbot before that, but I guess it was being de-emphasized, until ChatGPT came along
Social was a big pivot, though that wasn't really due to Pichai. That was while Larry Page was CEO and he argued for it hard. I can't say anyone could have known beforehand, but in retrospect, Google+ was poorly conceived and executed
---
I also believe the Nth Google chat app was based on WhatsApp success, but I can't remember the name now
Google Compute Engine was also following AWS's success, after initially developing Google App Engine
"AI" in it's current form is already a massive threat to Google's main business (I personally use Google only a fraction of what I used to), so this pivot is justified.
They bought DeepMind in 2014 and have always shown off a ton of AI research.
By more reasonable standards of "pivot", the big investment into Google Plus/Wave in the social media era seems to qualify. As does the billions spent building out Stadia's cloud gaming. Not to mention the billions invested in their abandoned VR efforts, and the ongoing investment into XR...
I'd personally describe that as Google hedging their bets and being prepared in case they needed to truly pivot, then giving up when it became clear that they wouldn't need to. Sort of like "Apple Intelligence" but committing to the bit, and actually building something that was novel and useful to some people, who were disappointed when it went away.
Stadia was always clearly unimportant to Google, and I say that as a Stadia owner (who got to play some games, and then got refunds.) As was well reported at the time, closing it was immaterial to their financials. Just because spending hundreds of millions of dollars or even a few billion dollars is significant to you or I doesn't mean that this was ever part of their core business.
Regardless, the overall sentimentality on HN about Google Reader and endless other indisputably small projects says more about the lack of strategic focus from people here, than it says anything about Alphabet.
Stadia was just Google's New Coke, Apple's Mac Cube, or Microsoft's MSNBC (or maybe Zune).
When they can't sell ads anymore, they'll have to pivot.
I mean, Facebook's core business hasn't actually failed yet either, but their massive investments in short-form video, VR/XR/Metaverse, blockchain, and AI are all because they see their moat crumbling and are desperately casting around for a new field to dominate.
Google feels pretty similar. They made a very successful gambit into streaming video, another into mobile, and a moderately successful one into cloud compute. Now the last half a dozen gambits have failed, and the end of the road is in sight for search revenue... so one of the next few investments better pay off (or else)
I didn't really see it at first, but I think you are correct to point out that they kind of rhyme. However to me, I think the clear desperation of Facebook makes it feel rather different from what I've seen Google doing over the years. I'm not sure I agree that Google's core business is in jeopardy in the way that Facebook's aging social media platform is.
Also, Amazon is in another capital-intensive business: retail. Spending billions on dubious AWS moonshots vs. just buying more widgets and placing them closer to US customers' houses for even faster deliveries does not make sense.
I recall Zuckerberg saying something about how there were early signs of AI "improving itself." I don't know what he was talking about but if he really believes that's true and that we're at the bottom of an exponential curve then Meta's rabid hiring and datacenter buildout makes sense.
[1] https://the-decoder.com/new-othello-experiment-supports-the-...
Mumbo jumbo magical thinking.
They perform so well because they are trained on probabilistic token matching.
Where they perform terribly, e.g. math and reasoning, they delegate to other approaches, and that's how you get the illusion that there is actually something there. But there isn't. Faking intelligence is not intelligence. It's just text generation.
> In that sense, yeah you could say they are a bit "magical"
Nobody but the most unhinged hype pushers are calling it "magical". The LLM can never ever be AGI. Guessing the next word is not intelligence.
> there can be no form of world model that they are developing
Kind of impossible to form a world model if your foundation is probabilistic token guessing which is what LLMs are. LLMs are a dead end in achieving "intelligence", something novel as an approach needs to be discovered (or not) to go into the intelligence direction. But hey, at least we can generate text fast now!
There is no evidence to indicate this is the case. To the contrary, all evidence we have points to these models, over time, being able to perform a wider range of tasks at a higher rate of success. Whether it's GPQA, ARC-AGI or tool usage.
> they are delegating to other approaches
> Faking intelligence is not intelligence. It's just text generation.
It seems like you know something about what intelligence actually is that you're not sharing. If it walks, talks and quacks like a duck, I have to assume it's a duck[1]. Though, maybe it quacks a bit weird.
Burden of proof is on those trying to convince us to buy into the idea of LLMs as being "intelligence".
There is no evidence of the Flying Spaghetti monster or Zeus or God not existing either, but we don't take seriously the people who claim they do exist (and there isn't proof because these concepts are made up).
Why should we take seriously the folks claiming LLMs are intelligent without proof (there can't be proof, of course, because LLMs are not intelligence)?
Are they still really hoping that they are gonna tweak a model and feed it an even bigger dataset and it will be AGI?
If you're saying the magic disappeared after looking at a single transformer, did the magic of human intelligence disappear after you understood human neurons at a high level?
Hopefully some big players, like FB, bankrupt themselves.
I can throw wide ranging problems at things like gpt5 and get what seem like dramatically better answers than if I asked a random person. The amount of common sense is so far beyond what we had it’s hard to express. It used to be always pointed out that the things we had were below basic insect level. Now I have something that can research a charity, find grants and make coherent arguments for them, read matrix specs and debug error messages, and understand sarcasm.
To me, it’s clear that agi is here. But then what I always pictured from it may be very different to you. What’s your image of it?
However, even "dumb" people can often make judgements structures in a way that AI's cannot, it's just that many have such a bad knowledge-base that they cannot build the structures coherently whereas AI's succeed thanks to their knowledge.
I wouldn't be surprised if the top AI firms today spend an inordinate amount of time to build "manual" appendages into the LLM systems to cater to tasks such as debugging to uphold the facade that the system is really smart, while in reality it's mostly papering up a leaky model to avoid losing the enormous investments they need to stay alive with a hope that someone on their staff comes up a real solution to self-learning.
https://magazine.sebastianraschka.com/p/understanding-reason...
This, to me at least, seems like an important ingredient to satisfying a practical definition / implementation of AGI.
Another might be curiosity, and I think perhaps also agency.
If I had to pick a name, I'd probably describe ChatGPT & co as advanced proof of concepts for general purpose agents, rather than AGI.
People selling AI products are incentivized to push misleading definitions of AGI.
I give it a high-res photo of a kitchen and ask it to calculate the volume of a pot in the image.
Hell, I’d even say we have AGI if you could emulate something like a hamster.
LLMs are way more impressive in certain ways than such a hypothetical AGI. But that has been true of computers for a long time. Computers have been much better at Chess than humans for decades. Dogs can’t do that. But that doesn’t mean that a chess engine is an AGI.
I would also say we have a special form of AGI if the AI can pass an extended Turing test. We've had chat bots that can fool a human for a minute for a long time; that doesn't mean we had AGI. So time and knowledge were always factors in a realistic Turing test. If an AI can fool someone who knows how to properly probe an LLM, for a month or so, while solving a bunch of different real-world tasks that require stable long-term memory and planning, then I'd say we're in AGI territory for language specifically. I think we have to distinguish between language AGI and multi-modal AGI, so this test wouldn't prove what we could call "full" AGI.
These are some of the missing components for full AGI:
- Being able to act as a stable agent with a stable personality over long timespans
- Capable of dealing with uncertainties; having an understanding of what it doesn't know
- One-shot learning, with long-term retention, for a large number of things
- Fully integrated multi-modality across sound, vision, and other inputs/outputs we may throw at it
The last one is where we may be able to get at the root of the algorithm we’re missing. A blind person can learn to “see” by making clicks and using their ears to see. Animals can do similar “tricks”. I think this is where we truly see the full extent of the generality and adaptability of the biological brain. Imagine trying to make a robot that can exhibit this kind of adaptability. It doesn’t fit into the model we have for AI right now.
What we are saying is that LLM's can't become AGI. I don't know what AGI will look like, but it won't look like an LLM.
There is a difference between being able to melt iron and being able to melt tungsten.
You could fund 1000+ projects with this kind of money. This is not effective capital allocation.
Not sure what level of understanding you're referring to, but having learned and researched pretty much all of the LLM internals, I think this has led me to exactly the opposite line of thinking. To me it's unbelievable what we have today.
It’s also pretty useless to talk about whether something is AGI without defining intelligence in the first place.
Of course it might be the case, but it's not a thing that should be expressed with such confidence.
1) LLMs as simple "next token predictors" that just mimic thinking: but can it be argued that current models operate on layers of varying depth and are able to actually understand, by building concepts and making connections at abstract levels? Also, don't we all mimic?
2) Grounding problem: Yes, models build their world models on text data, but we have models operating on non-textual data already, so this appears to be a technical obstacle rather than fundamental.
3) Lack of a world model: but can anyone really claim they have a coherent model of reality? There are flat-earthers, yet I still wouldn't deny them having AGI. People hallucinate and make mistakes all the time. I'd argue hallucinations are in fact a sign of an emerging intelligence.
4) Fixed learning data sets. Looks like this is now being actively solved with self-improving models?
I just couldn't find a strong argument supporting this claim. What am I missing?
This line means, and literally says, that everything that follows is a summary or direct quotation from an LLM's output.
There's a more charitable but unintuitive interpretation, in which "commenting on them briefly" is intended to mean "I will comment on them briefly:". But this isn't a natural interpretation. It's one I could be expected to reach only after seeing your statement that 'none of the above is AI.' But even this more charitable interpretation actually contradicts your claim that it's not AI.
So now I'm even less sure I know what you meant to communicate. Either I'm missing something really obvious or the writing doesn't communicate what you intended.
There's a bunch of ways AI is improving itself, depending on how you want to interpret that. But it's been true since the start.
1. AI is used to train AI. RLHF uses this, curriculum learning is full of it, video model training pipelines are overflowing with it. AI gets used in pipelines to clean and upgrade training data a lot.
2. There are experimental AI agents that can patch their own code and explore a tree of possibilities to boost their own performance. However, at the moment they tap out after getting about as good as open source agents, but before they're as good as proprietary agents. There isn't exponential growth. There might be if you throw enough compute at it, but this tactic is very compute hungry. At current prices it's cheaper to pay an AI expert to implement your agent than use this.
AGI is a complete no go until a model can adjust its own weights on the fly, which requires some kind of negative feedback loop, which requires a means to determine a failure.
Humans have pain receptors to provide negative feedback, and we can imagine which events would be painful, such as driving into a parked car, without having to experience them.
If current models could adjust its own weights to fix the famous “how many r’s in strawberry” then I would say we are on the right path.
However, the current solution is to detect the question and forward it to a function to determine the right answer. Or attempt to add more training data the next time the model is generated ($$$). Aka cheat the test.
Interesting. Do you have links?
It is far from clear. There may well be emergent hierarchies of more abstract thought at much higher numbers of weights. We just don't know how a transformer will behave if one is built with 100T connections - something that would finally approach the connectome level of a human brain. Perhaps nothing interesting but we just do not know this and the current limitation in building such a beast is likely not software but hardware. At these scales the use of silicon transistors to approximate analog curve switching models just doesn't make sense. True neuromorphic chips may be needed to approach the numbers of weights necessary for general intelligence to emerge. I don't think there is anything in production at the moment that could rival the efficiency of biological neurons. Most likely we do not need that level of efficiency. But it's almost certain that stringing together a bunch of H100s isn't a path to the scale we should be aiming for.
Even assuming a company gets to AGI first, this doesn't mean the others won't follow.
Suppose that FooAI gets to it first:
- competitors may get there too, in a different or more efficient way
- some FooAI staff can leave and found their own company
- some FooAI staff can join a competitor
- FooAI's "secret sauce" can be figured out, or simply stolen, by a competitor
At the end of the day, it really doesn't matter, the equation AI === commodity just does not change.
There is no way to make money by going into this never ending frontier model war, price of training keeps getting higher and higher, but your competitors few months later can achieve your own results for a fraction of your $.
The fact that philosophy hasn't recognized and rejected this argument based on this speaks volumes of the quality of arguments accepted there.
(That doesn't mean LLMs are or will be AGI; it's just that this argument is tautological and meaningless)
I think it's entirely valid to question whether a computer can form an understanding through deterministically processing instructions, whether that be through programming language or language training data.
If the answer is no, that shouldn't lead to a deist conclusion. It can just as easily lead to the conclusion that a non-deterministic Turing machine is required.
> I think it's entirely valid to question whether a computer can form an understanding through deterministically processing instructions, whether that be through programming language or language training data.
Since the real world (including probabilistic and quantum phenomena) can be modeled with deterministic computation (a pseudorandom sequence is deterministic, yet simulates randomness), if we have a powerful enough computer we can simulate the brain to a sufficient degree to have it behave identically as the real thing.
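As a trivial illustration of "deterministic, yet simulates randomness": two PRNGs given the same seed produce the same sequence, yet the output passes for random to an observer who doesn't know the seed.

    import random

    a, b = random.Random(42), random.Random(42)
    print([round(a.random(), 3) for _ in range(3)])  # looks random...
    print([round(b.random(), 3) for _ in range(3)])  # ...but the same seed reproduces it exactly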
The original 'Chinese Room' experiment describes a book of static rules of Chinese - which is probably not the way to go, and AI does not work like that. It's probabilistic in its training and evaluation.
What you are arguing is that constructing an artificial consciousness lies beyond our current computational ability (probably) and understanding of physics (possibly), but that does not rule out that we might solve these issues at some point, and there's no fundamental roadblock to artificial consciousness.
I've re-read the argument (https://en.wikipedia.org/wiki/Chinese_room) and I cannot help but conclude that Searle argues that 'understanding' is only something that humans can do, which means that real humans are special in some way a simulation of human-shaped atoms are not.
Which is an argument for the existence of the supernatural and deist thinking.
It is not meant as an ad hominem. If someone thinks our current computers can't emulate human thinking and draws the conclusion that therefore humans have special powers given to them by a deity, then that probably means that person is quite religious.
I'm not saying you personally believe that and therefore your arguments are invalid.
> Since the real world (including probabilistic and quantum phenomena) can be modeled with deterministic computation (a pseudorandom sequence is deterministic, yet simulates randomness), if we have a powerful enough computer we can simulate the brain to a sufficient degree to have it behave identically as the real thing.
The idea that a sufficiently complex pseudo-random number generator can emulate real-world non-determinism enough to fully simulate the human brain is quite an assumption. It could be true, but it's not something I would accept as a matter of fact.
> I've re-read the argument (https://en.wikipedia.org/wiki/Chinese_room) and I cannot help but conclude that Searle argues that 'understanding' is only something that humans can do, which means that real humans are special in some way a simulation of human-shaped atoms are not.
In that same Wikipedia article Searle denies he's arguing for that. And even if he did secretly believe that, it doesn't really matter, because we can draw our own conclusions.
Disregarding his arguments because you feel he holds a hidden agenda, isn't that itself an ad hominem?
(Also, I apologize for using two accounts, I'm not attempting to sock puppet)
>Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in the normal sense of the word.
This is the only sentence that seems to be pointing to what constitutes the specialness of humans, and the terms of 'understanding' and 'intentionality' are in air quotes so who knows? This sounds like the archetypical no true scotsman fallacy.
In mathematical analysis, if we conclude that the difference between two numbers is smaller than any arbitrary number we can pick, those two numbers must be the same. In engineering, we can relax the claim to 'any difference large enough to care about'.
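In symbols, the standard analysis fact being leaned on here is just:

    \forall \varepsilon > 0 :\; |a - b| < \varepsilon \implies a = b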
Likewise if the difference between a real human brain and an arbitrarily sophisticated Chinese Room brain is arbitrarily small, they are the same.
If our limited understanding of physics and engineering makes the practical difference not zero, this essentially becomes a bit of a somewhat magical 'superscience' argument claiming we can't simulate the real world to a good enough resolution that the meaningful differences between our 'consciousness simulator' and the thing itself disappear - which is an extraordinary claim.
They're in the "Complete Argument" section of the article.
> This sounds like the archetypical no true scotsman fallacy.
I get what you're trying to say, but he is not arguing only a true Scotsman is capable of thought. He is arguing that our current machines lack the required "causal powers" for thought. Powers that he doesn't prescribe to only a true Scotsman, though maybe we should try adding bagpipes to our AI just to be sure...
He argues that computer programs only manipulate symbols and thus have no semantic understanding.
But that's not true - many programs, like compilers that existed back when the argument was made, had semantic understanding of the code (in a limited way, but they did have some understanding about what the program did).
LLMs in contrast have a very rich semantic understanding of the text they parse - their tensor representations encode a lot about each token, or you can just ask them about anything - they might not be human level at reading subtext, but they're not horrible either.
When it makes a mistake, did it just have a too limited understanding or did it simply not get lucky with its prediction of the next word? Is there even a difference between the two?
I would like to agree with you that there's no special "causal power" that Turing machines can't emulate. But I remain skeptical, not out of chauvinism, but out of caution. Because I think it's dangerous to assume an AI understands a problem simply because it said the right words.
Regardless of whether Searle is right or wrong, you’ve jumped to conclusions and are misunderstanding his argument and making further assumptions based on your misunderstanding. Your argument is also ad-hominem by accusing people of believing things they don’t believe. Maybe it would be prudent to read some of the good critiques of Searle before trying to litigate it rapidly and sloppily on HN.
The randomness stuff is very straw man, definitely not a good argument, best to drop it. Today’s LLMs are deterministic, not random. Pseudorandom sequences come in different varieties, but they model some properties of randomness, not all of them. The functioning of today’s neural networks, both training and inference, is exactly a book of static rules, despite their use of pseudorandom sequences.
In case you missed it in the WP article, most of the field of cognitive science thinks Searle is wrong. However, they’re largely not critiquing him for using metaphysics, because that’s not his argument. He’s arguing that biology has mechanisms that binary electronic circuitry doesn’t; not human brains, simply physical chemical and biological processes. That much is certainly true. Whether there’s a difference in theory is unproven. But today currently there absolutely is a difference in practice, nobody has ever simulated the real world or a human brain using deterministic computation.
Nobody brings up that light travels through the aether, that diseases are caused by bad humors etc. - is it not right to call out people for stating theory that's believed to be false?
>The randomness stuff is very straw man,
And a direct response to what armada651 wrote:
>I think it's entirely valid to question whether a computer can form an understanding through deterministically processing instructions, whether that be through programming language or language training data.
> He’s arguing that biology has mechanisms that binary electronic circuitry doesn’t; not human brains, simply physical chemical and biological processes.
Once again the argument here changed from 'computers which only manipulate symbols cannot create consciousness' to 'we don't have the algorithm for consciousness yet'.
He might have successfully argued against the expert systems of his time - and true, mechanistic attempts at language translation have largely failed - but that doesn't extend to modern LLMs (and pre LLM AI) or even statistical methods.
Where did the argument change? Searle’s argument that you quoted is not arguing that we don’t have the algorithm yet. He’s arguing that the algorithm doesn’t run on electrical computers.
I’m not defending his argument, just pointing out that yours isn’t compelling because you don't seem to fully understand his, at least your restatement of it isn’t a good faith interpretation. Make his argument the strongest possible argument, and then show why it doesn’t work.
IMO modern LLMs don’t prove anything here. They don’t understand anything. LLMs aren’t evidence that computers can successfully think, they only prove that humans are prone to either anthropomorphic hyperbole, or to gullibility. That doesn’t mean computers can’t think, but I don’t think we’ve seen it yet, and I’m certainly not alone there.
>There’s no “scientific consensus” that he’s wrong, there are just opinions.
That's one possibility. The other is that your pomposity and dismissiveness towards the entire field of philosophy speaks volumes on how little you know about either philosophical arguments in general or this philosophical argument in particular.
And yes, if for example, medicine would be no worse at curing cancer than it is today, yet doctors asserted that crystal healing is a serious study, that would reflect badly on the field at large, despite most of it being sound.
“Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines". Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using specific machinery.”
It's just a contradiction.
But the way Searle formulates his argument, by not defining what consciousness is, he essentially gives himself enough wiggle room to be always right - he's essentially making the 'No True Scotsman' fallacy.
It’s not just compute. That has mostly plateaued. What matters now is quality of data and what type of experiments to run, which environments to build.
However I do think you are missing an important aspect - and that's people who properly understand important solvable problems.
i.e., I see quite a bit of "we will solve x with AI" from startups that don't fundamentally understand x.
You usually see this from startup techbro CEOs who understand neither x nor AI. Those people are already replaceable by AI today. They're the kind of people who think they can query ChatGPT once with "How to create a cutting edge model" and make millions. But when you go in at the deep end, there are very few people who still have enough tech knowledge to compete with your average modern LLM. And even the Math Olympiad gold medalist high-flyers at DeepSeek are about to have a run for their money with the next generation. Current AI engineers will shift more and more towards senior architecture and PM roles, because those will be the only ones that matter. But PM and architecture are already something you could replace today.
It still is! Lots of vertical productivity data that would be expensive to acquire manually via humans will be captured by building vertical AI products. Think lawyers, doctors, engineers.
As more opens up in OSS and academic space, their knowledge and experience will either be shared, rediscovered, or become obsolete.
Also many of the people are coasting on one or two key discoveries by a handful of people years ago. When Zuck figures this out he gonna be so mad.
AWS is also falling far behind Azure wrt serving AI needs at the frontier. GCP is also growing at a faster rate and has a way more promising future than AWS in this space.
Does it? Then how come Meta hasn't been able to release a SOTA model? It's not for a lack of trying. Or compute. And it's not like DeepSeek had access to vastly more compute than other Chinese AI companies. Alibaba and Baidu have been working on AI for a long time and have way more money and compute, but they haven't been able to do what DeepSeek did.
Are we living in the same universe? LLAMA is universally recognized as one of the worst and least successful model releases. I am almost certain you haven't ever tried a LLAMA chat because, by the beard of Thor, it's the worst experience anyone could ever have with any LLM.
LLAMA 4 (behemoth, whatever, whatever) is an absolute steaming pile of trash, not even close to ChatGPT 4o/4/5/, Gemini(any) and even not even close to cheaper ones like DeepSeek. And to think Meta pirated torrents to train it...
What a bunch of criminal losers and what a bunch of waste of money, time and compute. Oh, at least the Metaverse is a success...
https://www.pcgamer.com/gaming-industry/court-documents-show...
https://www.cnbc.com/2025/06/27/the-metaverse-as-we-knew-it-...
This is not really true. Google has all the compute but in many dimensions they lag behind GPT-5 class (catching up, but it has not been a given).
Amazon itself did try to train a model (so did Meta) and had limited success.
It is. It's wild to me that all these VCs pouring money into AI companies don't know what a value-chain is.
Tokens are the bottom of the value-chain; it's where the lowest margins exist because the product at that level is a widely available commodity.
I wrote about this already (shameless plug: https://www.rundata.co.za/blog/index.html?the-ai-value-chain )
I tend personally to stick with ChatGPT most of the time, but only because I prefer the "tone" of the thing somehow. If you forced me to move to Gemini tomorrow I wouldn't be particularly upset.
Gemini does indeed hold the top spot, but I feel you framed your response quite well: they are all broadly comparable. The difference in the synthetic benchmark between the top spot and the 20th spot was something like 57 points on a scale of 0-1500.
Outside of compute, "the moat" is also data to train on. That's an even wider moat. Now, Google has all the data; data no one else has or ever will have. If anything, I'd expect them to outclass everyone by a fat margin. I think we're seeing that with video, however.
Tin foil hat time:
- If you were a God and you wanted to create an ideal situation for the arrival of AI
- It would make sense to precede it with a social media phenomena that introduces mass scale normalization of sharing of personal information
Yes, that would be ideal …
People can’t stop sharing and creating data on anything, for awhile now. It’s a perfect situation for AI as an independent, uncontrollable force.
Garbage in. Garbage out.
There has never been a better time to produce an AI that mimics a racist uneducated teenager.
Yeah, Google totally has a moat. Them saying that they have no moat doesn't magically make that moat go away.
They also own the entire vertical which none of the competitors do - all their competitors have to buy compute from someone who makes a profit just on compute (Nvidia, for example). Google owns the entire vertical, from silicon to end-user.
It would be crazy if they can't make this work.
Google theoretically has Reddit access. I wonder if they have something like an internet archive - data unpolluted by LLMs.
On a side note, it's funny how all the companies seem to train on book archives which they just downloaded from the internet.
And privacy policies that are actually limiting what information gets used in what.
I don't know what you are talking about. I use Gemini on a daily basis and I honestly can't tell a difference.
We are at a point where training corpus and hallucinations makes more of a difference than "model class".
xAI seems to be the exception, not the rule
Right now the delay for Google's AI coding assistant is high enough for humans to context switch and do something else while waiting. Particularly since one of the main features of AI code assistants is rapid iteration.
Also a smart move is to be selling shovels in a gold rush - and that's exactly what Amazon is doing with AWS.
From my admittely poorly informed point of view, strategy-wise, it's hard to tell how wise it is investing in foundational work at the moment. As long as some players release competitive open weight models, the competitive advantage of being a leader in R&D will be limited.
Amazon already has the compute power to place itself as a reseller without investing or having to share the revenue generated. Sure, they won't be at the forefront but they can still get their slice of the pie without exposing themselves too much to an eventual downturn.
So there probably isn’t even a legal moat.
Are you saying the only reason Meta is behind everyone else is compute????
I wouldn't be surprised if the likes of Anthropic wasn't paying AWS for its compute.
As the saying goes, the ones who got rich from the gold rush were the ones selling shovels.
AWS enables thousands of other companies to run their business. Amazon has designed their own Graviton ARM CPUS and their own Trainium AI chips. You can access these through AWS for your business.
I think Amazon sees AI being used in AWS as a bigger money generator than designing new AI algorithms.
Disclaimer; I work for amzn, opinions my own.
https://aws.amazon.com/blogs/machine-learning/aws-and-mistra...
Amazon wants people to move away from Nvidia GPUs and to their own custom chips.
Companies like OpenAI and Anthropic are still incredibly risky investments especially because of the wild capital investments and complete lack of moat.
At least when Facebook was making OpenAI's revenue numbers off of 2 billion active users it was trapping people in a social network where there were real negative consequences to leaving. In the world of open source chatbots and VSClone forks there's zero friction to moving on to some other solution.
OpenAI is making $12 billion a year off of 700 million users [1], or around $17 per user annually. What other products that have no ad support perform that badly? And that's a company that is signing enterprise contracts with companies like Apple, not just some Spotify-like consumer service.
[1] This is almost the exact same user count that Facebook had when it turned its first profit.
That's a bit of a strange spin. Their ARPU is low because they are choosing not to monetize 95% of their users at all, and for now are just providing practically limitless free service.
But monetising those free users via ads will pretty obviously be both practical and lucrative.
And even if there is no technical moat, they seem to have a very solid mind share moat for consumer apps. It isn't enough for competitors to just catch up. They need to be significantly better to shift consumer habits.
(For APIs, I agree there is no moat. Switching is just so easy.)
i am hoping that a device local model would eventually be possible (may be a beefy home setup, and then an app that connects to your home on mobile devices for use on the go).
currently, hardware restrictions prevent this type of home setup (not to mention the open source/free models aren't quite there and difficulty for non-tech users to actually setup). However, i choose to believe the hardware issues will get solved, and it will merely be just time.
The software/model issue, on the other hand is harder to see solved. I pin my hopes onto deepseek, but may be meta or some other company will surprise me.
Apple products as an example have an excellent architecture for local AI. Extremely high-bandwidth RAM.
If you run an OSS model like gpt-oss on a Mac with 32GB of RAM it's already very similar to a cloud experience.
Either way, it's just an example model, plenty of others to choose from. The fact of the matter is that the base model MacBook Air currently comes with about half as much RAM as you need for a really really decent LLM model. The integrated graphics are fast/efficient and the RAM is fast. The AMD Ryzen platform is similarly well-suited.
(Apple actually tells you how much storage their local model takes up in the settings > general > storage if you're curious)
We can imagine that by 2030 your base model Grandma computer on sale in stores will have at least 32GB of high-bandwidth RAM to handle local AI workflows.
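As a rough illustration of how simple the local setup already is - a sketch assuming llama-cpp-python and a quantized GGUF file you've already downloaded (the filename here is made up):

    from llama_cpp import Llama

    # Load a quantized local model; on a 32GB machine a ~20B-parameter
    # 4-bit quant fits comfortably in memory.
    llm = Llama(model_path="gpt-oss-20b-Q4_K_M.gguf", n_ctx=8192)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Draft a short email declining a meeting."}]
    )
    print(out["choices"][0]["message"]["content"])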
The two are effectively separate businesses with a completely separate customer base.
To me, that's a pretty good explanation.
The world is crazy about AI right now, but when we see how DeepSeek became a major player at a fraction of the cost, and, according to Google researchers, without making theoretical breakthroughs, it looks foolish to be in this race, especially now that we are seeing diminishing returns. Waiting until things settle, learning from others' attempts, and designing your system not for top performance but for efficiency and profit seems like a sane strategy.
And it is not like Amazon is out of the AI game; they have what really matters: GPUs. This is a gold rush, and as the saying goes, they are more interested in selling pickaxes than finding gold.
Customer service bots? Maybe. Coding bots? I bet they use some internally. Their customers don’t really need them, or if the customer does, the customer can run it on their side.
In general these fall into the category of things humans cannot do at the scale and speed necessary to run SaaS companies.
Many of the things LLMs attempt to do are things people already do, slowly and relatively accurately. But until hallucinations are rare, slow expensive humans will typically need to be around. The AI booster’s strategy of ignoring/minimizing hallucinations or equivocating with human fallibility doesn’t work for businesses where reliability is important.
Note that ML algorithms are highly imperfect as well. Uber’s prices aren’t optimal. Google search surfaces tons of spam. But they are better than the baseline of no service exists.
Disagree re: DeepSeek theoretical breakthroughs, MLA and GRPO are pretty good and paved the way for others e.g. Kimi K2 uses MLA for a 1T MoE.
Pay no attention to the cracks that are showing. Nevermind the chill. Everything is fine.
I interact regularly with AWS to support our needs in MLOps and to some extent GenAI. 3 of the experts we talked to have all left for competitors in the last year.
re:Invent London this year presented nothing new of note on the GenAI front. The year before was full of promise on Bedrock.
Outside of AWS, I still can’t fathom how they haven’t integrated an AI assistant into Alexa yet either
[0]: https://www.aboutamazon.com/news/devices/new-alexa-generativ...
I'm curious if non-Prime members make up a big market for Alexa. I rarely use my smart devices for anything beyond lights, music, and occasional Q&A, and certainly can't see myself paying $20/month for it.
Unless of course this is going to be met with a price hike for Prime...
* 2018: $99 to $119
* 2022: $119 to $139
We should expect a price hike from $139 to $159 in 2026, assuming the trend continues.
Hmmm... maybe I can just do this through a cheap tablet....
Only thing it can do is set a timer, turn off a light and play music.
It is still nice, but it’s so frustrating when a question pops into my mind, and I accidentally ask Alexa just to get reminded yet again how useless it is for everything but the most basic tasks.
And no, I won’t pay 240 dollars a year so that I can get a proper response to my random questions that I realistically have only about once a week.
And it can't even do that without an Internet connection. As someone who experiences annoyingly frequent outages, it never ceases to boggle my mind that I have a $200 computer, with an 8" monitor and everything, that can't even understand "set a timer for 10 minutes" on its own.
oh the irony
Being able to just order something with zero shipping has a ton of value. I could drive down the street but it would still be an hour at the end of the day.
Video streaming has some value but there are a lot of options.
By far the best thing currently available.
Grok has to be more than n-times (2x?) as good as anything else on the market to attain any sort of lead. Falling short of that, people will simply choose alternatives out of brand preference.
This might be the first case of a company having difficulty selling its product, even if it's a superior product, due to its leader being disliked. I'm not aware of any other instances of this.
Maybe if Musk switches to selling B2B and to the US government...
If you piss off half of your possible user base, adoption becomes incredibly difficult. This is why tech and business leaders should stay out of politics.
I think that's a wildly optimistic figure on your part.
Lets assume that developers are split almost 50/50 on politics.
Of that 50% that follows the politics you approve off, lets err on the side of your argument and assume that 50% of those actually care enough to change their purchases because of it.
Of the 25% we have left, lets once again err on the side of your argument and assume 50% care enough about the politics to disregard any technology superiority in favour of sticking to their political leanings.
Of the 12.5% left, how many do you think are going to say "well, let me get beaten by my competitors because I am taking a stand!", especially when the "beaten" means a comparative drop in income?
After all, after nazi-salute, mecha-hitler, etc blew up, by just how much did the demand for Teslas fall?
The fraction of the population that cares enough about these (on both sides) things are, thankfully, single-digit percentages. Maybe even less.
I had been saving up for a Tesla but now I am looking elsewhere. I think a lot of people are doing the same here in Canada. You can grok the actual numbers if you want.
Yeah but they don't stay out of politics, do they? Gemini painting black Nazis was a deliberate choice to troll the vast majority of the population who isn't woke extremists.
My family subscribes to Grok and it's because of politics, not in spite of it. The answer gap isn't large today but I support Musk's goal of building a truth seeking AI, and he is right about a lot of things in politics too. Grok might well fail financially, the current AI market is too competitive and the world probably doesn't need so many LLM companies. But it's good someone wants AI to say what's true and not merely what's popular in its training set.
If anything they’ve now pissed off 2/3 of the population at some point or another.
And no, generic brand safety mishaps are not the same; everyone is not doing this.
But the project is pretty much dead, it was supposed to launch in February or March and is still not anywhere close to being out.
The blessing right now is the limit to contextual memory. Once those limits fall away and all of your previous conversations are made part of the context I suspect the game will change considerably, as will the players.
There's like a significant loss of model sharpness as context goes over 100K. Sometimes earlier, sometimes later. Even using context windows to their maximum extent today, the models are not always especially nuanced over the long ctx. I compact after 100K tokens.
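The compaction itself is nothing fancy - roughly something like this sketch, where summarize stands in for whatever model call you use to produce the summary:

    def estimate_tokens(messages):
        # crude heuristic: roughly 4 characters per token
        return sum(len(m["content"]) for m in messages) // 4

    def compact(messages, summarize, max_tokens=100_000, keep_recent=20):
        # Below the threshold, leave the history untouched.
        if estimate_tokens(messages) <= max_tokens:
            return messages
        # Otherwise fold the older turns into a summary and keep only the recent turns.
        old, recent = messages[:-keep_recent], messages[-keep_recent:]
        summary = summarize(old)
        return [{"role": "system", "content": "Summary of earlier conversation: " + summary}] + recent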
Because my understandings is that, however you get to 100K, the 100,001st token is generated the same way as far as the model is concerned.
If you give a summary+graph to the model, it can still only attend to the summary for token 1. If it's going to call a tool for a deeper memory, it still only gets the summary when it makes the decision on what to call.
You get the same problem when asking the model to make changes in even medium-sized code bases. It starts from scratch each time, takes forever to read a bunch of files, and sometimes it reads the right stuff, other times it doesn't.
Ever since I started taking care of my LLM logs and memory, I've had no issue switching model providers.
Why? It's just a bunch of text. They are forced by law to allow you to export your data - so you just take your life's "novel" and copy paste it into their competition's robot.
how do you know memory won't be modular and avoid lock-in?
I can easily see a decentralized solution where the user owns the memory, and AIs need permission to access your data, which can be revoked.
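As a sketch of what user-owned memory could look like; nothing here is an existing product's API, and the MemoryStore class, its file format, and the grant/revoke calls are all made up to illustrate the idea:

    # Hypothetical sketch of user-owned, provider-agnostic memory. The point is
    # that memory can be plain, exportable data with access controlled (and
    # revocable) by the user rather than locked up with the model provider.
    import json
    from pathlib import Path

    class MemoryStore:
        def __init__(self, path: str = "memory.json"):
            self.path = Path(path)
            self.data = (json.loads(self.path.read_text())
                         if self.path.exists()
                         else {"entries": [], "grants": []})

        def add(self, text: str) -> None:
            self.data["entries"].append(text)
            self._save()

        def grant(self, provider: str) -> None:
            if provider not in self.data["grants"]:
                self.data["grants"].append(provider)
                self._save()

        def revoke(self, provider: str) -> None:
            self.data["grants"] = [p for p in self.data["grants"] if p != provider]
            self._save()

        def export_for(self, provider: str) -> str:
            # A provider only sees the memory if the user has granted access.
            if provider not in self.data["grants"]:
                raise PermissionError(f"{provider} has no access to this memory")
            return "\n".join(self.data["entries"])

        def _save(self) -> None:
            self.path.write_text(json.dumps(self.data, indent=2))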
Well, let’s take your life. Your life is about 3 billion seconds (a 100-year life). That’s just 3 billion next-tokens. The thing you do on second N is, as a whole, just a next token. If next-token prediction can be scaled up such that we redefine a token from a part of language to an entire discrete event or action, then it won’t be hard for the model to just know what you will think and do next. Memory in that case is just the next possible recall of a specific memory, or the next possible action, and so on. It doesn’t actually need all the memory information; it just needs to know that you will seek a specific memory next.
Why would it need your entire database of memories if it already knows you will be looking for one exact memory next? The only thing that could explode the computational cost of this is if dynamic inputs fuck with your next-token prediction. For example, you must now absolutely think about a Pink Elephant. But even that is constrained in our material world (still bounded physically, since the world can’t transfer that much information through your senses).
A human life up to this exact moment is just a series of tokens, believe it or not. We know it for a fact because we’re bounded by time. The thing you just thought was an entire world snapshot that’s no longer here, just like an LLM output. We have not yet trained a model on human lives, just on knowledge.
We’re not done with the bitter lesson.
Just look at the smartphone market.
I dunno if this is possible; sounds like an informally specified ad-hoc statement of the halting problem.
Truthfully, I don't think anyone would recommend their acquaintances to join Amazon right now.
That said, Amazon is actually winning the AI war. They're selling shovels (Bedrock) in the gold rush.
Senior, in-demand talent isn't desperate, and really only desperate people go to work for AWS these days, since anyone with better options would pick a company that respects its employees.
It seems that they just don't care about the high turnover.
AWS is falling behind even in their most traditional area: renting compute capacity.
For example, I can't easily run models that need GPUs without launching classic EC2 instances. Fargate and Lambda _still_ don't support GPUs. SageMaker Serverless exists but has some weird limits (like a 10GB cap on Docker images).
> For example, I can't easily run models that need GPUs without launching classic EC2 instances.
Yeah okay, but you can run most enterprise-level models via Bedrock.
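For example, something like this, assuming boto3's bedrock-runtime Converse API and an account with model access enabled; the model ID and region are just illustrative:

    # Sketch of calling a hosted model through Bedrock's Converse API with boto3.
    # Assumes model access is enabled on the account; model ID and region are
    # examples, not recommendations.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example ID
        messages=[{"role": "user", "content": [{"text": "Summarize our Q3 roadmap."}]}],
    )
    print(response["output"]["message"]["content"][0]["text"])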
Fargate and lambda are fundamentally very different from EC2/nitro under the hood, with a very different risk profile in terms of security. The reason you can't run GPU workloads on top of fargate and lambda is because exposing physical 3rd-party hardware to untrusted customer code dramatically increases the startup and shutdown costs (ie: validating that the hardware is still functional, healthy, and hasn't been tampered with in any way). That means scrubbing takes a long time and you can't handle capacity surges as easily as you can with paravirtualized traditional compute workloads.
There are a lot of business-minded non-technical people running AWS, some of whom are sure to be loudly complaining about this horrible loss of revenue... which simply lets you know that when push comes to shove, the right voices are still winning inside AWS (eg: the voices that put security above everything else, where it belongs).
How?
> The reason you can't run GPU workloads on top of fargate and lambda is because exposing physical 3rd-party hardware to untrusted customer code dramatically increases the startup and shutdown costs
This is BS. Both NVIDIA and AMD offer virtualization extensions. And even without that, they can simply power-cycle the GPUs after switching tenants.
Moreover, Fargate is used for long-running tasks, and it definitely can run on a regular Nitro stack. They absolutely can provide GPUs for them, but it likely requires a lot of internal work across teams to make it happen. So it doesn't happen.
I worked at AWS, in a team responsible for EC2 instance launching. So I know how it all works internally :)
No? You can reset GPUs with regular PCI-e commands.
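For instance, on Linux a function-level reset can be triggered through sysfs, which is one way "regular PCI-e commands" get issued in practice. A rough sketch (needs root, an idle device, and a placeholder PCI address; whether this is sufficient scrubbing for multi-tenant hardware is exactly what's being argued above):

    # Rough sketch: trigger a PCI function-level reset through Linux sysfs.
    # Requires root and an unused device; the PCI address is a placeholder
    # (find yours with `lspci`).
    from pathlib import Path

    GPU_PCI_ADDR = "0000:3b:00.0"  # placeholder address

    def reset_gpu(pci_addr: str) -> None:
        node = Path(f"/sys/bus/pci/devices/{pci_addr}/reset")
        if not node.exists():
            raise RuntimeError("device does not expose a reset method")
        node.write_text("1")  # kernel performs FLR or another supported reset

    if __name__ == "__main__":
        reset_gpu(GPU_PCI_ADDR)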
> You can't really enforce limits, either. Even if you're able to tolerate that and sell customers on it, the security side is worse
Welp. AWS is already totally insecure trash, it seems: https://aws.amazon.com/ec2/instance-types/g6e/ Good to know.
Not having GPUs on Fargate/Lambda is, at this point, just a sign of corporate impotence. They can't marshal internal teams to work together, so all they can ship is a wrapper/router for AI models that a student could vibe-code in a month.
We're doing AI models for aerial imagery analysis, so we need to train and host very custom code. Right now, we have to use third parties for that because AWS is way more expensive than the competition (e.g. https://lambda.ai/pricing ), _and_ it's harder to use. And yes, we spoke with the sales reps about private pricing offers.
"AWS Lambda for model running" would be another nice service.
These are things that competitors already provide.
And this is not some weird, nonsense requirement. It's something that a lot of serious AI companies now need, and AWS is totally dropping the ball.
> AWS now has to take responsibility for building an AMI with the latest driver, because the driver must always be newer than whatever toolkit is used inside the container.
They already do that for Bedrock, SageMaker, and other AI apps.
I'm no expert, but I'm pretty sure this[0] is what RTO 5 is.
[0] https://www.phoenixcontact.com/en-pc/products/bolt-connectio...
Don't need to train the models to make money hosting them.
While they're protected now, https://news.ycombinator.com/item?id=20980557 quotes the one I recall...
- Nobody has figured out how to make money from AI/ML other than by selling you a pile of compute and storage for your AI/ML misadventures.
https://threadreaderapp.com/thread/1173367909369802752.html maintains the entire chain of tweets.
This is clearly not true. Google Ads? Every recommender system? Waymo self-driving? Uber routing algorithms?
If you swapped out ML for LLMs I would largely agree.
2019 was a different time - though I suspect that your statement about making money (as in profit) rather than just revenue (reselling compute for less than you bought it) would hold true for most companies.
And would this be admitting defeat to the powers of Terrible Orange Website to get you to write more?
As an aside, in 2019, about a week after your tweets, I was at a training session for Rancher that worked a reference to one of them into a joke.
https://www.cnbc.com/2025/08/08/chatgpt-gpt-5-openai-altman-...
> Last year, OpenAI expected about $5 billion in losses on $3.7 billion in revenue. OpenAI’s annual recurring revenue is now on track to pass $20 billion this year, but the company is still losing money.
> “As long as we’re on this very distinct curve of the model getting better and better, I think the rational thing to do is to just be willing to run the loss for quite a while,” Altman told CNBC’s “Squawk Box” in an interview Friday following the release of GPT-5.
Selling compute for less than it costs you will bring in as much revenue as you're willing to pay for.
The paraphrase is from the podcast he was on with the Stripe founder (Cheeky Pint, I think).
If I switch from Gemini Pro to Opus, that is good for Anthropic. If I switch from Opus 4 to 4.1, that’s not as good for Anthropic.
Sad that these CEOs can get away with this level of sophistry.
You could have said the same thing about most FAANG companies at one point or another.
Google doesn’t have this problem. They only run Google ads in their search results. Same thing for Facebook.