Posted by meetpateltech 19 hours ago
Microsoft stock, then, seems like the most straightforward way to invest in OpenAI pre-IPO.
This also confirms the $500 billion valuation, making OpenAI the most valuable private startup in the world.
Many of the main AI companies now have sizable stakes held by public companies, or are already public:
- OpenAI -> Microsoft (27%)
- Anthropic -> Amazon (15-19% est), Alphabet/Google (14%)
Then the chip layer is largely already public: Nvidia. Plus AMD and Broadcom.
Clouds too: Oracle, Alphabet/GCP, Microsoft/Azure, CoreWeave.
No hard proof it's a bubble. Bubbles can only be proved to have existed after they pop.
Relevant and under-appreciated.
1. OpenAI considers its consumer hardware IP serious enough to include in the agreement (and this post)
2. OpenAI thinks it's enough of a value differentiator that they'd rather go it alone than work through MS as a hardware partner
OpenAI wearable eyeglasses incoming... (audio+cellular first, AR/camera second?)
Also, you have to consider the size of Microsoft relative to its ownership of OpenAI, future dilution, and how Microsoft itself will fare in the future. If, say, Microsoft is on a path towards decreasing relevance/marketshare/profitability, any gains from its stake in OpenAI may be offset by its diminishing fortunes.
That’s a big if. I see a lot of people in big enterprises who would never even consider anything other than Microsoft and Azure.
One thing I will say is that the Azure documentation is some of the most cumbersome to navigate I've ever experienced; there is a wealth of information in there, you just have to know how to find it.
Couldn’t they just throw money at the problem? Or fire the criminals who designed it?
Because things are going to change soon. What nobody knows is what things exactly, and in what direction.
Windows workstations and servers are now "joined" to Azure instead, where they used to be joined to domain controller servers. Microsoft will soon enough stop supporting that older domain controller design (soon as in a decade).
Both of which seem to be true.
Hell, I'm still amazed they got away with the Office-licenses-only-usable-on-Azure bullshit, but here we are.
https://www.cbsnews.com/news/wall-street-says-yahoos-worth-l...
The biggest real threat to MS's position is the Trump administration pushing foreign customers away with stuff like shutting down the ICC prosecutor's Microsoft accounts, but that'll hurt AWS and Google just as much. (The winners of that will be Alibaba and other foreign providers that can't compete on full enterprise stacks today.)
We were cloud shopping, and they came by as well with a REALLY good discount. Luckily our CTO was massively afraid of what would happen after that discount ran out.
Because if you buy the tokens you presumably do not own the company. And if you buy the company you hopefully don’t own the tokens - nor the assets that back the tokens.
I have no interest in crypto, just wanted to mention this which was surprising to me when I heard it.
https://www.reuters.com/business/crypto-firm-tether-eyes-500...
I struggle to see how those numbers stack up.
So somehow this crypto firm and its investor think it can get a better return than Blackstone with a fraction of the assets. Now, sure, developing market and all that. But really? If it scaled to Blackstone assets level of $1 trillion then you’d expect the platform valuation to scale, perhaps not in lockstep but at least somewhat. So with $1 trillion in collateralised crypto does that make Tether worth $1.5 trillion? I’d love someone to explain that.
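For what it's worth, here is that scaling question as a trivial back-of-the-envelope sketch: the $500B target valuation and the $1T Blackstone AUM come from the comments above, while the reserve figure is purely a placeholder to swap for whatever number you trust.

    # Valuation-to-assets scaling; all inputs are thread figures or placeholders,
    # not verified numbers.
    tether_valuation = 500e9       # rumored target valuation (Reuters link above)
    tether_reserves = 180e9        # placeholder reserve figure
    multiple = tether_valuation / tether_reserves            # ~2.8x assets

    blackstone_scale_assets = 1e12                            # ~$1T, Blackstone-level AUM
    implied_valuation = multiple * blackstone_scale_assets    # what scaling "in lockstep" implies
    print(f"{multiple:.1f}x assets -> ~${implied_valuation / 1e12:.1f}T implied valuation")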
Now the main question is how sustainable these earnings are, whether Tether will continue to be a dominant player in stablecoins, and whether there will continue to be demand for them.
Another difference from Blackstone is that Tether takes 100% of the returns on the treasuries backing the coins, whereas Blackstone earns a small fee on AUM, and its goal is to make money for its investor clients.
If crypto really wanted to be decentralized, they'd find a way to have stablecoins backed by whatever assets, where the returns on those assets still flowed to the stablecoin holders rather than to some big centralized company.
SpaceX?
Was Microsoft the blocker before? Prior agreements clearly made true open-weight releases awkward-to-impossible without Microsoft's sign-off. Microsoft had (a) an exclusive license to GPT-3's underlying tech back in 2020 (i.e., access to the model/code beyond the public API), and (b) later, broad IP rights + API exclusivity on OpenAI models. If you're contractually giving one partner IP rights and API exclusivity, shipping weights openly would undercut those rights. Today's language looks like a carve-out to permit some open-weight releases as long as they're below certain capability thresholds.
A few other notable tweaks in the new deal that help explain the change:
- AGI claims get verified by an independent panel (not just OpenAI declaring it).
- Microsoft keeps model/product IP rights through 2032, but OpenAI can now jointly develop with third parties, serve some things off non-Azure clouds, and—critically—release certain open-weights.
Those are all signs of loosened exclusivity.
My read: previously, the partnership structure (not just “Microsoft saying no”) effectively precluded open-weight releases; the updated agreement explicitly allows them within safety/capability guardrails.
Expect any “open-weight” drops to be intentionally scoped—useful, but a notch below their frontier closed models.
I haven't looked too much into DeepSeek's actual business, but Mistral at least seemed to be positioning themselves as a professional-services shop integrating their own open-weight models, compliant with EU regulations etc., at a huge premium. Any firm that has the state-of-the-art open model could do the same and cannibalize OpenAI's B2B business (perhaps even eventually pivoting into B2C), especially if regulations, downtime, or security issues make firms more cloud-skeptical with respect to AI. As long as OpenAI can establish and hold the lead for the best open-weight/on-premise model, it will be hard for anyone else to justify the premium pricing needed to generate sufficient cash flow to train their own models.
I can even imagine OpenAI eventually deciding that B2C is so much more valuable to them than B2B that it's worth completely sinking the latter market...
> OpenAI remains Microsoft’s frontier model partner and Microsoft continues to have exclusive IP rights and Azure API exclusivity
This should be the headline - Microsoft maintains its financial and intellectual stranglehold on OpenAI.
And meanwhile, while vaguer, a few of the bullet points are potentially very favorable to Microsoft:
> Microsoft can now independently pursue AGI alone or in partnership with third parties.
> The revenue share agreement remains until the expert panel verifies AGI, though payments will be made over a longer period of time.
Hard to say what a "longer period of time" means, but I presume it is substantial enough to make this a major concession from OpenAI.
Depends on how this is meant to be parsed, but it may actually be a concession from MSFT. If the total amount of revenue to be shared stays the same, then MSFT is worse off here, since the same dollars arrive later. If it parses as "a fixed proportion of revenue will be shared over period X, and X has increased to Y", then it is an OAI concession.
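A minimal present-value sketch of the two readings, with made-up numbers (the revenue pool, the 20% share, and the 10% discount rate are all placeholders, not terms from the agreement):

    # Reading 1: same fixed total, paid out over more years -> lower NPV for MSFT.
    # Reading 2: fixed share of revenue for more years -> more total dollars -> worse for OAI.
    def npv(cashflows, rate=0.10):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

    total_owed = 100.0                                  # arbitrary fixed pool of shared revenue
    print(npv([total_owed / 5] * 5))                    # ~75.8 when paid over 5 years
    print(npv([total_owed / 10] * 10))                  # ~61.4 same total over 10 years (MSFT worse off)

    annual_share = 50.0 * 0.20                          # placeholder: 20% of 50/year revenue
    print(npv([annual_share] * 5), npv([annual_share] * 10))  # ~37.9 vs ~61.4 (longer period favors MSFT)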
I don't know the details but I would be surprised if there was a revenue agreement that was time based.
The question "Can we build our stuff on top of Azure OpenAI? What if SamA pulls a marketing stunt tomorrow, declares AGI and cuts Microsoft off?" just became a lot easier. (At least until 2032.)
Also, for MS it is worth it to keep investing little by little, getting concessions from OpenAI and becoming its de facto owner.
I've read this but it's extremely vague: https://openai.com/index/built-to-benefit-everyone/
As is this: https://openai.com/our-structure/
Especially if the non-profit foundation doesn't retain voting control, this remains the greatest theft of all time. I still can't quite understand how it can even be possible.
Looking at the changes for MSFT, I also mostly don't understand why they did it!
"All equity holders in OpenAI Group now own the same type of traditional stock that participates proportionally and grows in value with OpenAI Group’s success. The OpenAI Foundation board of directors were advised by independent financial advisors, and the terms of the recapitalization were unanimously approved by the board."
Truly, truly the greatest theft from mankind in history and they dress it up as if the non-profit is doing anything other than giving away the most valuable startup in history for a paltry sum.
Credit where credit is due, Sam Altman is the greatest dealmaker of all time.
Will be interesting if we get to hear what his new equity stake is!
I wonder what criteria that panel will use to define/resolve this.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
Dot-com bubble all over again
>"When a measure becomes a target, it ceases to be a good measure"
What appalls me is that companies are doing this stuff in plain sight. In the 1920s before the crash, were companies this brazen or did they try to hide it better?
I kind of meant this as a joke as I typed it, but by the end I almost wanted to quit the tech industry altogether.
We're talking about things that would make AGI recognizable as AGI, in the "I know it when I see it" sense.
So, things we think about when the word AGI comes up: an AI-driven commercial entity selling AI-designed services or products, an AI-driven portfolio manager trading AI-selected stocks, an AI-made movie doing well at the box office, an AI-made videogame selling loads, AI-won tournament prizes at computationally difficult games that the AI somehow autonomously chose to take part in, etc.
Most probably a combination of these and more.
>This is an important detail because Microsoft loses access to OpenAI’s technology when the startup reaches AGI, a nebulous term that means different things to everyone.
Not sure how OpenAI feels about that.
Just redefine the terms into something that's easy to accomplish but far from what the words/promises originally meant.
It only just then became obvious to me that to them it's a question of when, in large part because of the MS deal.
Their next big move in the chess game will be to "declare" AGI.
Nevertheless, I've been wondering of late. How will we know when AGI is accomplished? In the books or movies, it's always been handwaved or described in a way that made it seem like it was obvious to all. For example, in The Matrix there's the line "We marveled at our own magnificence as we gave birth to AI." It was a very obvious event that nobody could question in that story. In reality though? I'm starting to think it's just going to be more of a gradual thing, like increasing the resolution of our TVs until you can't tell it's not a window any longer.
It's certainly not a specific thing that can be accomplished. AGI is a useful name for a badly defined concept, but any objective application of it (like in a contract) is just a stupid thing done by people who could barely be described as having the natural variety of GI.
'as we have traditionally understood it' is doing a lot of heavy lifting there
https://blog.samaltman.com/reflections#:~:text=We%20believe%...
Microsoft’s IP rights for both models and products are extended through 2032 and now includes models post-AGI...
To me, this suggests a further dilution of the term "AGI."
If you believe in a hard takeoff, then ownership of assets post-AGI is pretty much meaningless; however, it protects Microsoft from an early declaration of AGI by OpenAI.
To sign this deal today, presumably you wouldn’t bother if AGI is just around the corner?
Maybe I’m reading too much into it.
OpenAI's Jakob Pachocki said on a call today that he expects AI is "less than a decade away from superintelligence."
"I just wanted you to know that you can't just say the word "AGI" and expect anything to happen.
- Michael Scott: I didn't say it. I declared it
A group of ex-frontier-lab employees? You could declare AGI today. A more diverse group across academia and industry might actually have some backbone and be able to stand up to OpenAI.
Aren't we humans supposed to have GI? Maybe you're conflating AGI and ASI.
Supposed by humans, who might not be aware of their own limitations.
What were they really expecting as an alternative? Anyone can "declare AGI", especially since it's an inherently ill-defined (and arguably undefinable) concept. It's strange that this is the first bullet point, as if it were the fruit of intensive deliberation.
I don't fully understand what is going on in this market as a whole, and I really doubt anyone does, but I do believe we will look back on this period and wonder what the hell we were thinking, believing and lapping up everything these corporations were putting out.
> Microsoft’s IP rights for both models and products are extended through 2032 and now includes models post-AGI, with appropriate safety guardrails.
Does anyone really think we are close to AGI? I mean honestly?
It’s no different than how they moved the goalpost on the definition of AI at the start of this boom cycle
Exactly. As soon as the money runs out, “AGI” will be whatever they’ve got by then.
But, at the same time, we have clearly passed a significant inflection point in the usefulness of this class of AI, and have progressed substantially beyond that inflection point as well.
So I don't really buy into the idea that OpenAI have gone out of their way to foist a watered-down view of AI upon the masses. I'm not completely absolving them, but I'd probably be more inclined to point the finger at shabby and imprecise journalism from both tech and non-tech outlets, along with a ton of influencers and grifters jumping on the bandwagon. And let's be real: everyone's lapped it up because they've wanted to, because this is the first time any of them have encountered actually useful AI of any class that they can directly interact with. It seems powerful, mysterious, perhaps even magical, and maybe more than a little bit scary.
As a CTO how do you think it would have gone if I'd spent my time correcting peers, team members, consultants, salespeople, and the rest to the effect that, no, this isn't AI, it's one type of AI, it's an LLM, when ChatGPT became widely available? When a lot of these people, with no help or guidance from me, were already using it to do useful transformations and analyses on text?
It would have led to a huge number of unproductive and time-wasting conversations, and I would have seemed like a stick in the mud.
Sometimes you just have to ride the wave, because the only other choice is to be swamped by it and drown.
Regardless of what limitations "AGI" has, it'll be given that moniker when a lot of people - many of them laypeople - feel like it's good enough. Whether or not that happens before the current LLM bubble bursts... tough to say.
I mean, once they "reach AGI", they will need a scale to measure advances within it.
Because everyone knows that once you call a group of people an expert panel, that automatically means they can't be biased /s
Peter Norvig (former research director at Google and author of the most popular textbook on AI) offers a mainstream perspective that AGI is already here: https://www.noemamag.com/artificial-general-intelligence-is-...
If you described all the current capabilities of AI to 100 experts 10 years ago, they’d likely agree that the capabilities constitute AGI.
Yet, over time, the public will expect AGI to be capable of much, much more.
Today's models are not able to think independently, nor are they conscious, nor can they modify themselves to gain new information on the fly or form memories, beyond the half-baked solution of putting stuff in the context window, which just makes them generate output related to it, basically imitating a story.
They're powerful when paired with a human operator, i.e. they "do" as told, but that is not "AGI" in my book.
See "Self-Adapting Language Models" from a group out of MIT recently which really gets at exactly that.
Then it blew past that and now, what I think is honestly happening, is that we don't really have the grip on "what is intelligence" that we thought we had. Our sample size for intelligence is essentially 1, so it might take a while to get a grip again.
One thing they acknowledge but gloss over is the autonomy of current systems. When given more open-ended, long-term tasks, LLMs seem to get stuck at some point, get more and more confused, and stop making progress.
This last problem may be solved soon, or maybe there's something more fundamental missing that will take decades to solve. Who knows?
But it does seem like the main barrier to declaring current models "general" intelligence.
I think that we're moving the goalposts, but we're moving them for a good reason: we're getting better at understanding the strengths and the weaknesses of the technology, and they're nothing like what we'd have guessed a decade ago.
All of our AI fiction envisioned inventing intelligence from first principles and ending up with systems that are infallible, infinitely resourceful, and capable of self-improvement - but fundamentally inhuman in how they think. Not subject to the same emotions and drives, struggling to see things our way.
Instead, we ended up with tools that basically mimic human reasoning, biases, and feelings with near-perfect fidelity. And they have read and approximately memorized every piece of knowledge we've ever created, but have no clear "knowledge takeoff path" past that point. So we have basement-dwelling turbo-nerds instead of Terminators.
This makes AGI a somewhat meaningless term. AGI in the sense that it can best most humans on knowledge tests? We already have that. AGI in the sense that you can let it loose and have it come up with meaningful things to do in its "life"? That you can give it arms and legs and watch it thrive? That's probably not coming any time soon.
Yes, and if they used it for a while, they'd realize it is neither general nor intelligent. On paper it sounds great, though.
Who is this "they" you speak of?
It's true the definition has changed, but not in the direction you seem to think.
Before this boom cycle the standard for "AI" was the Turing test. There is no doubt we have comprehensively passed that now.
eg: https://pmc.ncbi.nlm.nih.gov/articles/PMC10907317/
It's widely accepted that it has been passed. E.g. Wikipedia:
> Since the mid-2020s, several large language models such as ChatGPT have passed modern, rigorous variants of the Turing test
We really really Really should Not define as our success function for AI (our future-overlords?) the ability of computers to deceive humans about what they are.
The Turing Test was a clever twist on (avoiding) defining intelligence 80 years ago.
Going forward, valuing it should be discarded post-haste by any serious researcher, engineer, or message-board philosopher, if not for ethical reasons then for the sake of not promoting spam/slop.
1) Look for spelling, grammar, and incorrect word usage, such as "where" vs. "were", or typing "out" where "our" should be used.
2) Ask asinine questions that have no answers: _Why does the sun ravel around my finger in low-quality gravity while dancing in the rain?_
ML likes to always come up with an answer no matter what; a human will shorten the conversation. It is also programmed to respond with _I understand_ and _I hear what you are saying_, and to make heavy use of your name if it has access to it. This fake interpersonal communication is key.
Do you think this goal during training cannot be changed to impersonate someone normal such that you cannot detect you are chatting with an LLM?
Before flight was understood some thought "magic" was involved. Do you think minds operate using "magic"? Are minds not machines? Their operation can not be duplicated?
1. Minds are machines and can (in principle) have their operation duplicated
2. LLMs are not doing this
I don't think so, because LLMs hallucinate by design, which will always produce oddities.
> Before flight was understood some thought "magic" was involved. Do you think minds operate using "magic"? Are minds not machines? Their operation can not be duplicated?
It might involve something we don't grasp, but regardless: just because something moves through the air doesn't mean it's flying, or ever will be, just like a thrown stone.
And the "agreeability" is not a hallucination, it's simply the path of least resistance, as in, the model can just take information that you said and use that to make a response, not to actually "think" and consider I'd what you even made sense or I'd it's weird or etc.
They almost never say "what do you mean?" to try to seek truth.
This is why I don't understand why some here claim that AGI being already here is some kind of coherent argument. I guess redefining AGI is how we'll reach it
Let's be real, guys, it was created by Turing. The same guy who laid the theoretical groundwork for the general-purpose computer. The man was without a doubt a genius, but it also isn't that reasonable to think he'd come up with a good definition or metric for a technology that was still some 70 years away. A brilliant start, but it is also like looking at Newton's Laws and evaluating quantum mechanics based off of that. Doesn't make Newton dumb, just means we've made progress. I hope we can all agree we've made progress...
And arguably the Turing Test was passed by Eliza. Arguably. But hey, that's why we refine and make progress. We find the edges of our metrics and ideas and then iterate. Change isn't bad, it's a necessary thing. What matters is the direction of change. Like velocity vs. speed.
People are being fooled in online forums all the time. That includes people who are naturally suspicious of online bullshittery. I'm sure I have been.
Stick a fork in the Turing test, it's done. The amount of goalpost-moving and hand-waving that's necessary to argue otherwise simply isn't worthwhile. The clichéd responses that people are mentioning are artifacts of intentional alignment, not limitations of the technology.
A problem similar to the Turing test: "0 or more of these users is a bot, have fun in a discussion forum."
But there's no test or evaluation to see whether any user successfully identified the bot, and there's no field to collect which users are actually bots, partially using bots, or not at all, nor a field to capture each user's opinion about whether the others are bots.
No, I did not. I tested it with questions that could not be answered by the Internet (spatial, logical, cultural, impossible coding tasks) and it failed in non-human-like ways, but also surprised me by answering some decently.
We might not _quite_ be at the era of "I'm sorry I can't let you do that Dave...", but on the spectrum, and from the perspective of a lay-person, we're waaaaay closer than we've ever been?
I'd counsel you to self-check what goalposts you might have moved in the past few years...
I say this fully aware that a kitted-out tech company will be using LLMs to write code that is more conformant to style, at higher volume, and with greater test coverage than I am able to produce individually.
And just like humans, they can be very confidently wrong. When any person tells us something, we assume there's some degree of imperfection in their statements. If a nurse at a hospital tells you the doctor's office is 3 doors down on the right, most people will still look at the first and second doors to make sure those are wrong, then look at the nameplate on the third door to verify that it's right. If the doctor's name is Smith but the door says Stein, most people will pause and consider that maybe the nurse made a mistake. We might also consider that she's right, but the nameplate is wrong for whatever reason. So we verify that info by asking someone else, or going in and asking the doctor themselves.
As a programmer, I'll ask other devs for some guidance on topics. Some people can be absolute geniuses but still dispense completely wrong advice from time to time. But oftentimes they'll lead me generally in the right way, but I still need to use my own head to analyze whether it's correct and implement the final solution myself.
The way AI dispenses its advice is quite human. The big problem is it's harder to validate much of its info, and that's because we're using it alone in a room and not comparing it against anyone else's info.
No, they are not smart at all. Not even a little. They cannot reason about anything except whether their training data overwhelmingly agrees or disagrees with their output, nor can they learn and adapt. They are just text compression and rearrangement machines. Brilliant and extremely useful tooling, but if you use them enough it becomes painfully obvious.
edit: i'm very thankful my friend didn't end up winning more than he bet. idk what he would have done if his feelings towards the LLM was confirmed by adding money to his pocket..
E.g. I read all the time about gains from SWEs. But nobody questions how good of a SWE they even are. What proportion of SWEs can be deemed high quality?
LLMs are useful but that doesn't make them intelligent.
So I think a lot of people now don't see what the path is to AGI, but also realize they hadn't seen the path to LLMs, and innovation is coming fast and furious. So the most honest answer seems to be: it's entirely plausible that AGI just depends on another couple of conceptual breakthroughs that are imminent... and it's also entirely plausible that AGI will require 20 different conceptual breakthroughs all working together that we'll only figure out decades from now.
True honesty requires acknowledging that we truly have no idea. Progress in AI is happening faster than ever before, but nobody has the slightest idea how much progress is needed to get to AGI.
It's also a misleading view of the history. It's true "most people" weren't thinking about LLMs five years ago, but a lot of the underpinnings had been studied since the 70s and 80s. The ideas had been worked out, but the hardware wasn't able to handle the processing.
> True honesty requires acknowledging that we truly have no idea. Progress in AI is happening faster than ever before, but nobody has the slightest idea how much progress is needed to get to AGI.
Maybe, but don't tell that to OpenAI's investors.
That's very ambiguous. "Most people" don't know most things. If we're talking about people who have been working in the industry, though, my understanding is that the concepts behind our modern-day LLMs aren't magical at all. In fact, the ideas have been around for quite a while; the breakthroughs in processing power and networking (data) were the hold-up. The result definitely feels magical to "most people", though, for sure. Right now we're "iterating", right?
I'm not sure anyone really sees a clear path to AGI if what we're actually talking about is the singularity. There are a lot of unknown unknowns, right?
AGI is a silly concept
We all thought about a future where AI just woke up one day, when realistically, we got philosophical debates over whether the ability to finally order a pizza constitutes true intelligence.
Nobody thought we were anywhere closer to me jumping off the Empire State Building and flying across the globe 5 years ago, but I'm sure I will. Wish me luck as I take that literal leap of faith tomorrow.
"oh look it can think! but then it fails sometimes! how strange, we need to fix the bug that makes the thinking no workie"
instead of:
"oh, this is really weird. Its like a crazy advanced pattern recognition and completion engine that works better than I ever imagined such a thing could. But, it also clearly isn't _thinking_, so it seems like we are perhaps exactly as far from thinking machines as we were before LLMs"
And to the "most groundbreaking blah blah blah", i could argue that the difference between no computer and computer requires you to actually understand the computer, which almost no one actually does. It just makes peoples work more confusing and frustrating most of the time. While the difference between computer that can't talk to you and "the voice of god answering directly all questions you can think of" is a sociological catastrophic change.
It produced a statement. The lexical structure of the statement is highly congruent with its training data and the previous statements.
>The lexical structure of the statement is highly congruent with its training data and the previous statements.
This doesn't accurately capture how LLMs work. LLMs have an ability to generalize that undermines the claim of their responses being "highly congruent with training data".
I don't know what else to tell you other than this infallible logic automaton you imagine must exist before it is 'real intelligence' does not exist and has never existed except in the realm of fiction.
I always like the phrase "follow the money" in situations like this. Are OpenAI or Microsoft close to AGI? Who knows... Is there a monetary incentive to making you believe they are close to AGI? Absolutely. Consider that this was the first bullet point in Microsoft's blog post.
This bodes very poorly for AGI in the near term, IMO
If you use 'multimodal transformer' instead of LLM (which most SOTA models are), I don't think there's any reason why a transformer arch couldn't be trained to drive a car; in fact, I'm sure that's what Tesla and co. are using in their cars right now.
I'm sure self-driving will become good enough to be commercially viable in the next couple years (with some limitations), that doesn't mean it's AGI.
If someone wants to claim that, say, GPT-5 is AGI, then it is on them to connect GPT-5 to a car control system and inputs and show that it can drive a car decently well. After all, it has consumed all of the literature on driving and physics ever produced, plus untold numbers of hours of video of people driving.
The only difference between the two is training data that the former lacks and the latter has, so not a 'vast gulf'.
>And I see no proof whatsoever that we can, today, train a single model that can both write a play and drive a car.
You are not making a lot of sense here. You can have a model that does both. It's not some herculean task; it's literally just additional data in the training run. There are vision-language-action models tested on public roads.
It would be a really silly thing to do, and there are probably engineering subtleties as to why this would be a bad idea, but I don't see why you couldn't train a single model to do both.
Is it happening faster than it was six months ago? a year ago?
Well, Google had LLMs ready by 2017, which was almost 9 years ago.
AGI is the end-game. There's a lot of room between current LLMs and AGI.
I don't see the mentions in this post as anyone particularly believing we're close to AGI
Bit like blaming an airplane-building company for building airplanes; it's literally what they were created for, no matter how stupid their idea of the "ideal aircraft" is.
FTFY. OpenAI has not built AGI (not yet, if you want to be optimistic).
If you really need an analogy, it's more in the vein of giving SpaceX crap for yapping about building a Dyson Sphere Real Soon Now™.
I was just informing that the company always had AGI as a goal, even when they were doing the small Gym prototypes and all of that stuff that made the (tech) news before GPT was a thing.
In my opinion, whether AGI happens or not isn't the main point of this. It's the fact that OpenAI and MSFT can go their separate ways on infra & foundation models while still preserving MSFT's IP interests.
It was news when Dwarkesh interviewed Karpathy, who said that per his definition of AGI, he doesn't think it will occur until 2035. Thus, if Karpathy counts as pessimistic, then many people working in AI today think we will have AGI by 2032 (and likely sooner, e.g. end of 2028).
But I guess what most people would consider AGI would be something capable of online learning and self-improvement.
I don't get the 2035 prediction though (or any other prediction like this) - it implies that we'll have some magical breakthrough in the next couple of years, be it in hardware and/or software - this might happen tomorrow, or not any time soon.
If AGI can be achieved by scaling current techniques and hardware, then the 2035 date makes sense - Moore's law dictates that we'll have about 64x the compute in hardware (let's add another 4x due to algorithmic improvements) - that means roughly 250x the compute would give us AGI (rough arithmetic sketched below). I think with ARC-AGI 2 this was the kind of compute budget they spent to get their models to perform at a human-ish level.
Also, perf/W and perf/$ scaling has been slowing in the past decade; I think we got something like 6x-8x perf/W compared to a decade ago, which is a far cry from what I wrote here.
Imo it might turn out that we discover 'AGI' in the sense that we find an algorithm that can turn FLOPS into IQ and scales indefinitely, but that is very likely so expensive to run that biological intelligences will have a huge competitive edge for a very long time; in fact, it might be that biology is astronomically more efficient at turning watts into IQ than transistors will ever be.
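For what it's worth, here's that arithmetic as a tiny sketch; the ~20-month doubling period and the 4x algorithmic factor are the comment's assumptions, not measured values:

    # Compute-scaling back-of-the-envelope for the "AGI by 2035 via scaling" argument above.
    years = 10
    doubling_period_years = 20 / 12                       # assume compute doubles every ~20 months
    hw_gain = 2 ** (years / doubling_period_years)        # ~64x hardware compute over a decade
    algo_gain = 4                                         # assumed algorithmic improvements
    total_gain = hw_gain * algo_gain                      # ~256x effective compute
    print(f"hardware ~{hw_gain:.0f}x, total ~{total_gain:.0f}x")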
Thank you, this is the definition we need a proper term for, and this is what most experts mean when they say we have some kind of AGI.
It was ARC-AGI-1 that they used extreme computing budgets to get to human-ish level performance. With ARC-AGI-2 they haven't gotten past ~30% correct. The average human performance is ~65% for ARC-AGI-2, and a human panel gets 100% (because humans understand logical arguments rather than simply exclaiming "you're absolutely right!").
It's a reverse Turing test at this point: "If you get tricked by an LLM to the point of believing it is AGI you're a clown"
> a system you could go to that can do any economically valuable task at human performance or better.
https://open.substack.com/pub/dwarkesh/p/andrej-karpathy?sel...
Oh I have bad news for you...
I think it is near-certain that within two years a large AI company will claim it has developed AGI.
I'd say we're still a long way from human level intelligence (can do everything I can do), which is what I think of as AGI, but in this case what matters is how OpenAI and/or their evaluation panel define it.
OpenAI's definition used to be, maybe still is, "able to do most economically valuable tasks", which is so weak and vague they could claim it almost anytime.
For example, an AGI could give you a detailed plan that tells you exactly how to do any and every task. But it might not be able to actually do the task itself, for example manual labor jobs, which an AI simply cannot do unless it also "builds" itself a form factor able to do the job.
The AGI could also just determine that it's cheaper to hire a human than to build a robot at any given point for a job it can't yet do physically, and it would still be AGI.
All of us in this sub-thread consider ourselves "AGI", but we cannot do any job. In theory we can, I guess. But in practical terms, at what cost? Assuming none of us are truck drivers, if someone was looking for a truck driver, they wouldn't hire us because it would take too long for us to get a license, get certified, learn, etc. Even though in theory we could probably do it eventually.
A Definition of AGI - https://arxiv.org/abs/2510.18212
AGI? We are not even close to AI, but that hasn't stopped every other Tom, Dick, and Harry, and my maid, from claiming AI capability.
My definition of AGI is when AI doesn't need humans anymore to create new models (to be specific, models that continue the GPT3 -> GPT4 -> GPT5 trend).
By my definition, once that happens, I don't really see a role for Microsoft to play. So not sure what value their legal deal has.
I don't think we're there at all anyway.
They have money and infra; if AI can create better AI models, then isn't OpenAI with its researchers going to be the redundant one?
The key steps will be going beyond just the neural network and blurring the line between training and inference until it is removed. (Those two ideas are closely related).
Pretending this isn't going to happen is appealing to some metaphysical explanation for the existence of human intelligence.
I don't see any way to define it in an easily verifiable way.
Pretty much any test you could devise, others will be able to point out ways that it's inadequate or doesn't capture aspects of human intelligence.
So I think it all just comes down to who is on the panel.
Part of the problem with “AGI” is everyone has their own often totally arbitrary yard sticks.
"Most viable labor" involves getting things from one place to another, and that's not even the hard part of it.
In any case, any sane definition of general AI would entail things that people can generally do.
Like driving.
>That is stupid
That's just, like, your opinion, man.
I feel like everyone’s opinion on how self-driving is going is still rooted in 2018 or something and no one has updated.
I had anecdata that was data, and it said that full-self-driving is wishful thinking.
We cool now?
The world never ceases to surprise me with its stupidity.
Thanks for your contribution.
There's a difference between "I survived" and "I drive anywhere close to the quality of the average American" - a low bar and one that still is not met by Tesla FSD.
Probably the biggest problem, as others have stated, is that we can't really define intelligence more precisely than that it is something most humans have and all rocks don't. So how could any definition of AGI be any more precise?
I said having to satisfy “all” the yard sticks is stupid, because one could conceive a truly infinite number of arbitrary yard sticks.
It's one skill almost everyone on the planet can learn exceptionally easily - which Waymo is on pace to master, but a generalized LLM by itself is still very far from.
As far as driving itself goes as a yardstick, I just don't find it interesting, because we literally have Waymos orbiting major cities and Teslas driving on the roads already right now.
If that’s the yardstick you want to use, go for it. It just doesn’t seem particularly smart to hang your hat on that one as your Final Boss.
It also doesn’t seem particularly useful for defining intelligence itself in an academic sort of way because even humans struggle to drive well in many scenarios.
But hey, if that's what you wanna use, don't let me stop you, sure, go for it. I have a feeling you'll need new goalposts relatively soon if you do, though.
> Said one park ranger, “There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.”
[1] https://www.schneier.com/blog/archives/2006/08/security_is_a...
the key is being able to drive and learn another language and learn to play an instrument and do math and, finally, group pictures of their different pets together. AGI would be able to do all those things as well... even teach itself to do those things given access to the Internet. Until that happens then no AGI.
Of course there are caveats there, but is driving really the yardstick you want to use?
In restricted settings.
Yeah no fam.
>but is driving really the yardstick you want to use?
Yes, because it's an easy one, compared, say, to walking.
But if you insist — let's use that.
Just like self-driving is going well on an empty race track.
>Good luck with that take
Good luck running into a walking robot in the street in your lifetime.
Look, a time traveler from 2019.
It sure must feel like 2018 was a long time ago when that's more than the entirety of your adult life. I get it.
The rest of us aren't that excited to trust our lives to technology that confidently drove into a highway barrier at high speed, killing the driver in a head-on collision a mere seven years ago¹.
Because we remember that the makers of that tech said the exact same things you're saying now back then.
And because we remember that the person killed was an engineer who complained about Tesla steering him towards the same barrier previously, and Tesla has, effectively, ignored the complaints.
Tech moves fast. Safety culture doesn't. And the last 1% takes 99% of the time (again, how long ago did you graduate?).
I'm glad that you and your friends are volunteering to be lab rats in the "just trust me bro, we'll settle the lawsuit if need be" approach to safety.
I'm not happy about having to share the road with y'all tho.
______
¹https://abcnews.go.com/Business/tesla-autopilot-steered-driv...
but in reality, it's a vacuous goal post that can always be kicked down the line.
But crucially, there is no agreed-upon definition of AGI. And I don't think we're close to anything that resembles human intelligence. I firmly believe that stochastic parrots will not get us to AGI and that we need a different methodology. I'm sure humanity will eventually create AGI, and perhaps even in my lifetime (in the next few decades). But I wouldn't put my money on that bet.
No one credible, no.
Some people believe capitalism is a net positive. Some people believe in an all-encompassing entity controlling our lives. Some believe 5G is an evil spirit.
After decades I've kind of given up hope on understanding why and how people believe what they believe, just let them.
The only important part is figuring out how I can remain oblivious to what they believe in, yet collaborate with them on important stuff anyways, this is the difficult and tricky part.
As a proxy, you can look at storage. The human brain is estimated at roughly 3.2 PB of storage. The cost of disk space drops by half every 2-3 years. As of this writing, the cost is about $10/TB [0]. If we assume two to three halvings, by 2030 that cost will be somewhere around $1.25-$2.50/TB, which means that purchasing a machine with roughly the storage capacity of a human brain will cost on the order of $4k-$8k.
A price point in that range means that (high-end) consumers will have economic access to compute commensurate with human cognition.
This is a proxy argument, using disk space as the proxy for the rest of the "intelligence" stack, so the assumption is that processing power will follow suit, also becoming less expensive, and that the software side will develop to keep up with the hardware. There's no convincing indication that these assumptions are false.
You can do your own back of the envelope calculation, taking into account generalizations of Moore's law to whatever aspect of storage, compute or power usage you think is most important. Exponential progress is fast and so an order of magnitude misjudgement translates to a 2-3 year lag.
Whether you believe it or not, the above calculation and, I assume, other similar calculations all land on or near 2030 as the inflection point.
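Here is that back-of-the-envelope as a small sketch; the brain-storage estimate, the current $/TB, and the halving period are the comment's assumptions rather than established facts, so swap in your own:

    # Storage-cost proxy for "brain-scale hardware by 2030".
    brain_storage_tb = 3200            # ~3.2 PB estimate from the comment above
    cost_per_tb_today = 10.0           # ~$10/TB today
    halving_period_years = 2.5         # cost halves every 2-3 years
    years_until_2030 = 5

    halvings = years_until_2030 / halving_period_years       # ~2 halvings
    future_cost_per_tb = cost_per_tb_today / 2 ** halvings    # ~$2.50/TB
    total_cost = brain_storage_tb * future_cost_per_tb        # ~$8,000
    print(f"~{halvings:.0f} halvings -> ${future_cost_per_tb:.2f}/TB -> ~${total_cost:,.0f}")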
Not to belabor the point but until just a few years ago, conversational AI was thought to be science fiction. Image generation, let alone video generation, was thought by skeptics to be decades, if not centuries, away. We now have generative music, voice cloning, automatic 3d generation, character animation and the list goes on.
One might argue that it's all "slop", but for anyone paying attention, the slop is the "hello world" of AGI. To even get to the slop point represents such a staggering achievement that it's hard to overstate.
Outside of robotics / embodied AI, SOTA models have already achieved Sci-Fi level capability.
Seems like the entire US tech economy is putting its resources into this goal.
I can see it happening soon, if it hasn't already.
"Really similar" kinda betrays the fact that it is not similar at all in how it works just in how it appears.
It would be like saying a cloud that kinda looks like a dog is really similar to the labrador you grew up with.
AGI is when the system can train itself, which we have already proven.
AI is not making enough money to cover its costs, and it will take a decade or so before it does.
More likely Americans’ tax dollars will be shoveled into the hole.
I think it's funny and telling that they've used the word "declare" where what they are really doing is "claim".
These guys think they are prophets.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
I think you can rebuild human civilization with that.
I feel like replacing highly skilled human labor hardly makes financial sense, if it costs that much.
Or maybe since it is ultimately an agreement about money and IP, they are fine with defining it solely through profits?
MS: I just wanted you to know that you can't just say the word AGI and expect anything to happen.
OpenAI: I didn't say it. I declared it.
You say this somewhat jokingly, but I think they 100% believe something along those lines.
Accidental misquote?