Posted by mooreds 18 hours ago

I don't think AGI is right around the corner(www.dwarkesh.com)
316 points | 358 comments
raspasov 15 hours ago|
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design but not good at logic. Very poor at spatial reasoning and as a result poor at connecting concepts together.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

0x20cowboy 10 hours ago||
LLMs are a compressed version of their training dataset with a text-based interactive search function.
lexandstuff 9 hours ago|||
Yes, but you're missing their ability to interpolate across that dataset at retrieval time, which is what makes them extremely useful. Also, people are willing to invest a lot of money to keep building those datasets, until nearly everything of economic value is in there.
beeflet 9 hours ago|||
not everything of economic value is data retrieval
bluefirebrand 9 hours ago||
Most economic value is not data retrieval
HaZeust 8 hours ago||
The stock market is the root of the majority of the world's economic value, and it has been almost exclusively data retrieval since 2001.
andsoitis 8 hours ago||
Come on. The stock market is not just data retrieval. The statement doesn’t even make sense.
HaZeust 7 hours ago|||
It makes perfect sense, and I meant what I said.

60% of all US equity volume is pure high-frequency trading, and ETFs add roughly another 20% that's literally just bots responding to market activity and bearish-bullish sentiment analysis on public(?) press releases. Two-thirds of trading funds also rely on external data to price their decisions, and I think around 90% in 2021 used trading algorithms as the determining factor in their high-frequency trade strategies.

At its core, the movements that make up the market really ARE data retrieval.

semiquaver 1 hour ago|||

  > 60% of all US equity volume
Volume is not value.
raspasov 6 hours ago||||
Sure, the market, but HFT is relatively tiny both as a market and in the profit it brings. Not to mention, it's essentially a zero-sum game.

Brought to you by your favorite Google LLM search result:

"The global high-frequency trading (HFT) market was valued at USD 10.36 billion in 2024 and is projected to reach USD 16.03 billion by 2030"

(unverified by a human, use at your own risk).

HaZeust 6 hours ago||
>"The global high-frequency trading (HFT) market was valued at USD 10.36 billion in 2024 and is projected to reach USD 16.03 billion by 2030"

>

> (unverified by a human, use at your own risk).

Honorable for mentioning the lack of verification; verifying would have dissolved the AI's statement, though the jury's out on how much EXACTLY:

Per https://www.sciencedirect.com/science/article/abs/pii/S03784...:

"While estimates vary due to the difficulty in ascertaining whether each trade is an HFT, recent estimates suggest HFT accounts for 50–70% of equity trades and around 50% of the futures market in the U.S., 40% in Canada, and 35% in London (Zhang, 2010, Grant, 2011, O’Reilly, 2012, Easley et al., 2012, Scholtus et al., 2014)"

In my original reply, I used the literal median of that spectrum: 60%.

Jane Street - who recently found themselves in hot water over the India ban - disputes that AI summary ALONE. Per https://www.globaltrading.net/jane-street-took-10-of-of-us-e... , Jane Street booked $20.5B in trading revenue, primarily through HFT, just in 2024.

Brought to you by someone who takes these market movements too seriously for their own good.

bdelmas 5 hours ago||
Revenue is not profit
bdelmas 5 hours ago||||
The percentage is irrelevant without knowing how they really work and how much profit they make. They could be at 95% with a 0.1% margin and it wouldn't mean much for the market.

At the end of the day, talking about HFT this way is to not know what these firms do and what service they offer to the market. Overall they are not trend makers but trend followers.

exe34 6 hours ago|||
The stock market does not grow potatoes.
HaZeust 6 hours ago||
And potatoes don't generate nearly as much economic value in industrial societies as they do in, say, agrarian ones. All to say, I don't understand your point.
exe34 6 hours ago||
The stock market does not make movies either.
whiteboardr 4 hours ago|||
Because hypetrain.
bdelmas 5 hours ago||||
Exactly. I am so tired of hearing about AI… and they are not even AI! I am also losing faith in this field when I see how much hype and how many lies they all push instead of being transparent. They are not AGI, not even AI… For now they are only models, and your definition is a good one.
echelon 9 hours ago||||
LLMs are useful in that respect. As are media diffusion models. They've compressed the physics of light, the rules of composition, the structure of prose, the knowledge of the internet, etc. and made it infinitely remixable and accessible to laypersons.

AGI, on the other hand, should really stand for Aspirationally Grifting Investors.

Superintelligence is not around the corner. OpenAI knows this and is trying to become a hyperscaler / Mag7 company with the foothold they've established and the capital that they've raised. Despite that, they need a tremendous amount of additional capital to will themselves into becoming the next new Google. The best way to do that is to sell the idea of superintelligence.

AGI is a grift. We don't even have a definition for it.

vrighter 2 hours ago|||
I hate the "accessible to the layperson" argument.

People who couldn't do art before, still can't do art. Asking someone, or something else, to make a picture for you does not mean you created it.

And art was already accessible to anyone. If you couldn't draw something (because you never invested the time to learn the skill), then you could still pay someone else to paint it for you. We didn't call "commissioning a painting" "being an artist", so what's different about commissioning a painting from a robot?

EGreg 8 hours ago|||
I am not an expert, but I have a serious counterpoint.

While training LLMs to replicate the human output, the intelligence and understanding EMERGES in the internal layers.

It seems trivial to do unsupervised training on scientific data, for instance star movements, and discover closed-form analytic models for those movements. Deriving Kepler's laws and Newton's equations should be fast and trivial, and by that afternoon you'd have much more profound models with 500+ variables which humans would struggle to understand but which explain the data.
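
To make that concrete, here is a minimal sketch of the simplest version of the idea: recovering the exponent in Kepler's third law from orbital data by plain regression. The planetary values are approximate textbook numbers and the whole example is illustrative only, not something today's models are claimed to do.

  # Hedged sketch: fit log(T) = k*log(a) + c to approximate planetary data.
  # Kepler's third law (T^2 ~ a^3) predicts k ~ 1.5.
  import numpy as np

  a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])    # semi-major axis, AU
  T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])  # orbital period, years

  k, c = np.polyfit(np.log(a), np.log(T), 1)
  print(f"fitted exponent: {k:.3f}  (Kepler's third law predicts 1.5)")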

AGI is what, Artificial General Intelligence? What exactly do we mean by general? Mark Twain said “we are all idiots, just on different subjects”. These LLMs are already better than 90% of humans at understanding any subject, in the sense of answering questions about that subject and carrying on meaningful and reasonable discussion. Yes occasionally they stumble or make a mistake, but overall it is very impressive.

And remember — if we care about practical outcomes - as soon as ONE model can do something, ALL COPIES OF IT CAN. So you can reliably get unlimited agents that are better than 90% of humans at understanding every subject. That is a very powerful baseline for replacing most jobs, isn’t it?

GoblinSlayer 2 hours ago|||
Indeed, 90% of problems can be solved by googling, and that's what LLMs do. AGI is expected to be something more than a talking encyclopedia.
imiric 6 hours ago||||
Anthropomorphization is doing a lot of heavy lifting in your comment.

> While training LLMs to replicate the human output, the intelligence and understanding EMERGES in the internal layers.

Is it intelligence and understanding that emerges, or is applying clever statistics on the sum of human knowledge capable of surfacing patterns in the data that humans have never considered?

If this were truly intelligence we would see groundbreaking advancements in all industries even at this early stage. We've seen a few, which is expected when the approach is to brute force these systems into finding actually valuable patterns in the data. The rest of the time they generate unusable garbage that passes for insightful because most humans are not domain experts, and verifying correctness is often labor intensive.

> These LLMs are already better than 90% of humans at understanding any subject, in the sense of answering questions about that subject and carrying on meaningful and reasonable discussion.

Again, exceptional pattern matching does not imply understanding. Just because these tools are able to generate patterns that mimic human-made patterns, doesn't mean they understand anything about what they're generating. In fact, they'll be able to tell you this if you ask them.

> Yes occasionally they stumble or make a mistake, but overall it is very impressive.

This can still be very impressive, no doubt, and can have profound impact on many industries and our society. But it's important to be realistic about what the technology is and does, and not repeat what some tech bros whose income depends on this narrative tell us it is and does.

Salgat 9 hours ago|||
LLMs require the sum total of human knowledge to ape what you can find on Google; meanwhile, Ramanujan achieved brilliant discoveries in mathematics using nothing but a grade school education and a few math books.
rowanG077 9 hours ago||
You phrase it as a diss, but "Yeah, LLMs suck, they aren't even as smart as Ramanujan" sounds like high praise to me.
Salgat 8 hours ago||
Unfortunately LLMs fail even basic logic tests given to children, so definitely not high praise. I'm just pointing out the absurd amount of data they need compared to humans, to highlight that they're just spitting out regressions on the training data. We're talking data that would take a human countless thousands of lifetimes to ingest. Yet a human can accomplish infinitely more with a basic grade school education.
jbstack 6 hours ago||
Humans can achieve more within one (or two, or a few) narrowly scoped field(s), after a lot of hard work and effort. LLMs can display a basic level of competency (with some mistakes) in almost any topic known to mankind. No one reasonably expects a LLM to be able to do the former, and humans certainly cannot do the latter.

You're comparing apples and oranges.

Also, your comparison is unfair. You've chosen an exceptional high achiever as your example of a human to compare against LLMs. If you instead compare the average human, LLMs don't look so bad even when the human has the advantage of specialisation (e.g. medical diagnostics). A LLM can do reasonably well against an average (not exceptional) person with just a basic grade school education if asked to produce an essay on some topic.

mhuffman 6 hours ago||
>Humans can achieve more within one (or two, or a few) narrowly scoped field(s), after a lot of hard work and effort.

>No one reasonably expects a LLM to be able to do the former

I can feel Sam Altman's rage building ...

weatherlite 3 hours ago||
Yeah, I think many investors do expect that ...
richardw 14 hours ago|||
They’re great at working with the lens on our reality that is our text output. They are not truth seekers, which is necessarily fundamental to every life form from worms to whales. If we get things wrong, we die. If they get them wrong, they earn 1000 generated tokens.
jhanschoo 12 hours ago||
Why do you say that LLMs are not truth seekers? If I express an informational query not very well, the LLM will infer what I mean by it and address the possible well-posed information queries that I may have intended that I did not express well.

Can that not be considered truth-seeking, with the agent-environment boundary being the prompt box?

richardw 8 hours ago|||
Right now you’re putting in unrequested effort to get to an answer. Nobody is driving you to do this, you’re motivated to get the answer. At some point you’ll be satisfied, or you might give up because you have other things you want to do, more.

An LLM is primarily trying to generate content. It’ll throw the best tokens in there but it won’t lose any sleep if they’re suboptimal. It just doesn’t seek. It won’t come back an hour later and say “you know, I was thinking…”

I had one frustrating conversation with ChatGPT where I kept asking it to remove a tie from a picture it generated. It kept saying “done, here’s the picture without the tie”, but the tie was still there. Repeatedly. Or it’ll generate a reference or number that is untrue but looks approximately correct. If you did that you’d be absolutely mortified and you’d never do it again. You’d feel shame and a deep desire to be seen as someone who does it properly. It doesn’t have any such drive. Zero fucks given, training finished months ago.

chychiu 12 hours ago||||
They are not intrinsically truth seekers, and any truth seeking behaviour is mostly tuned during the training process.

Unfortunately it also means it can be easily undone. E.g. just look at Grok in its current lobotomized version

jhanschoo 12 hours ago||
> They are not intrinsically truth seekers

Is the average person a truth seeker in this sense, one who performs truth-seeking behavior? In my experience we prioritize sharing the same perspectives and getting along well with others a lot more than a critical examination of the world.

In the sense that I just expressed, of figuring out the intention of a user's information query, that really isn't a tuned thing; it's inherent in generative models by virtue of possessing a lossy, compressed representation of training data, and it is also the kind of truth-seeking practiced by people who want to communicate.

graealex 8 hours ago|||
You are completely missing the argument that was made to underline the claim.

If ChatGPT claims arsenic to be a tasty snack, nothing happens to it.

If I claim the same, and act upon it, I die.

jhanschoo 6 hours ago|||
You are right. I completely ignored the context in which the phrase "truth seeker" was used, gave my own wrong interpretation to it, and I in fact agree with the comment I was responding to that they "work with the lens on our reality that is our text output".
cornel_io 8 hours ago|||
If ChatGPT claims arsenic to be a tasty snack, OpenAI adds a p0 eval and snuffs that behavior out of all future generations of ChatGPT. Viewed vaguely in faux genetic terms, the "tasty arsenic gene" has been quickly wiped out of the population, never to return.

Evolution is much less brutal and efficient. To you death matters a lot more than being trained to avoid a response does to ChatGPT, but from the point of view of the "tasty arsenic" behavior, it's the same.

imbnwa 10 hours ago|||
>Is the average person a truth seeker in this sense that performs truth-seeking behavior?

Absolutely

sleepybrett 12 hours ago|||
They keep giving me incorrect answers to verifiable questions. They clearly don't 'seek' anything.
anonzzzies 9 hours ago|||
Most on HN are tech people and it is tiring to see that they did not just spend a Sunday morning doing a Karpathy LLM implementation or so. Somehow, like believing in a deity, even smart folk seem to think 'there is more'. Stop. Go to YouTube or whatever, watch a video of practically implementing a GPT-like thing, and code along. It takes very little time and your hallucinations about AGI with these models shall be exorcized.
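
For anyone who hasn't done that exercise, here is a hedged, minimal numpy sketch of the core operation those walkthroughs build up to: single-head causal self-attention. Dimensions and random weights are purely illustrative.

  # Toy single-head causal self-attention in plain numpy (illustrative only).
  import numpy as np

  rng = np.random.default_rng(0)
  T, d = 4, 8                                  # toy sequence length, embedding size
  x = rng.normal(size=(T, d))                  # stand-in token embeddings

  Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
  q, k, v = x @ Wq, x @ Wk, x @ Wv             # queries, keys, values

  scores = q @ k.T / np.sqrt(d)                # attention logits, shape (T, T)
  scores[np.triu_indices(T, 1)] = -np.inf      # causal mask: no attending to future tokens
  weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
  weights /= weights.sum(axis=-1, keepdims=True)   # softmax over earlier positions
  out = weights @ v                            # each position mixes values of earlier tokens
  print(out.shape)                             # (4, 8)

A real GPT stacks this with learned embeddings, multiple heads, MLP blocks and a trained output head, but this is the core of it.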
jhanschoo 9 hours ago||
I don't know if you are indirectly referring to me, but I have done such an implementation, and those particular LLMs are very limited. Two things come to mind.

1. It is still correct that the limited "truth-seeking" that I expressed holds. With respect to the limited world model possessed by the limited training and limited dataset, such a model "seeks to understand" the approximate concept that I am imperfectly expressing and that it has data for, and then generates responses based on that.

2. SotA models have access to external data, be it web search or RAG plus a vector database, etc. They also have access to the Chain of Thought method. They are trained on datasets that enable them to exploit these tools, and will exploit them. The zero-to-hero sequence does not lead you to build such an LLM, and the one that you build has a very limited computational graph. So with respect to more... traditional notions of "truth seeking", these LLMs fundamentally lack the equipment that SotA models have.

jhanschoo 9 hours ago|||
In the sense that I expressed, has it not already sought out an accurate meaning of what you asked, and then failed to give a satisfactory answer? I would also ask: is said model an advertised "reasoning" model? Also, does it have access to external facts via a tool like web search? I would not expect great ability to "arrive at truth" under certain limitations.

Now, you can't conclude that "they clearly don't 'seek' anything" just by the fact that they got an answer wrong. To use the broad notion of "seeking" like you do, a truth seeker with limited knowledge and equipment would arrive confidently at incorrect conclusions based on accurate reasoning. For example, without modern lenses to detect stellar parallax, one would confidently conclude that the stars in the sky are a different thing than the sun (and planets), since one travels across the sky, but the stars are fixed. Plato indeed thought so, and nobody would accuse him of not being a truth-seeker.

If this is what you had in mind, I hope that I have addressed it, otherwise I hope that you can communicate what you mean with an example.

sleepybrett 8 minutes ago||
I spent an hour on Thursday trying to get some code that would convert one data structure to another in Terraform's HCL (which I only deal with once every few years, and I find its looping and eccentricities very annoying).

I opened my 'conversation' with a very clearly presented 'problem statement': given this data structure (with code and an example with data), convert it to this data structure (with code and the same example data transformed) in Terraform.

I went through seven rounds of it presenting me either code that was not syntactically correct or code that produced a totally different data structure. Every time it apologized for getting it wrong and then came back with yet another wrong answer.

I stopped having the conversation when my junior, who I had also presented the problem to, came back with a proper answer.

I'm not talking about it trying to prove to me that Trump actually won the 2020 election or that vaccines don't cause autism or anything. Just actual 2+2=4 answers. Much like, in another reply to this post, the guy who had it try to find all the states that have a 'w' in their name.

timmg 14 hours ago|||
Interesting. I think the key to what you wrote is "poorly defined".

I find LLMs to be generally intelligent. So I feel like "we are already there" -- by some definition of AGI. At least how I think of it.

Maybe a lot of people think of AGI as "superhuman". And by that definition, we are not there -- and may not get there.

But, for me, we are already at the era of AGI.

Incipient 14 hours ago|||
I would call them "generally applicable". "Intelligence" definitely implies learning - and, to split hairs, I'm not sure RAG, fine-tuning, or six-monthly updates count.

Where I will say we have a massive gap, which makes the average person not consider it AGI, is in context. I can give a person my very modest codebase and ask for a change, and they'll deliver - mostly coherently - in that style, files in the right place, etc. Even today with AI, I get inconsistent design, files in random spots, etc.

weatherlite 2 hours ago||||
> I find LLMs to be generally intelligent. So I feel like "we are already there" -- by some definition of AGI. At least how I think of it.

I don't disagree - they are useful in many cases and exhibit human-like (or better) performance in many tasks. However, they cannot simply be a "drop-in white-collar worker" yet; they are too jagged and unreliable, don't have real memory, etc. Their economic impact is still very much limited. I think this is what many people mean when they say AGI - something with cognitive performance so good it equals or beats humans in the real world, at their jobs - not at some benchmark.

One could ask - does it matter ? Why can't we say the current tools are great task solvers and call it AGI even if they are bad agents? It's a lengthy discussion to have but I think that ultimately yes, agentic reliability really matters.

apsurd 14 hours ago|||
that's the thing about language. we all kinda gotta agree on the meanings
Davidzheng 12 hours ago|||
I agree with the last part but I think that criticism applies to many humans too so I don't find it compelling at all.

I also think by the original definition (better than the median human at almost all tasks) it's close, and I think in the next 5 years it will be competitive with professionals at all tasks which are nonphysical (physical could be 5-10 years, idk). I could be high on my own stories, but not the rest.

LLMs are good at language yes but I think to be good at language requires some level of intelligence. I find this notion that they are bad at spatial reasoning extremely flawed. They are much better than all previous models, some of which are designed for spatial reasoning. Are they worse than humans? Yes but just the fact that you can put newer models on robots and they just work means that they are quite good by AI standards and rapidly improving.

andyfilms1 12 hours ago|||
Thousands are being laid off, supposedly because they're "being replaced with AI," implying the AI is as good as or better than humans at these jobs. Managers and execs are workers, too--so if the AI really is so good, surely they should recuse themselves and go live a peaceful life with the wealth they've accrued.

I don't know about you, but I can't imagine that ever happening. To me, that alone is a tip off that this tech, while amazing, can't live up to the hype in the long term.

unscaled 11 hours ago|||
Some employees can be replaced by AI. That part is true. It's not revolutionary (at least not yet) — it's pretty much the same as other post-industrial technologies that have automated some types of work in the past. It also takes time for industries to adapt to these changes. Replacing workers couldn't possibly happen in one year, even if our AI models were far more capable than they are in practice.

I'm afraid that what we're seeing instead are layoffs that are purely oriented at the stock market. As long as layoffs and talk about AI are seen as a positive signal for investors and as long as corporate leadership is judged by the direction the stock price goes, we will see layoffs (as well as separate hiring sprees for "AI Engineers").

It's a telltale sign that we're seeing a large number of layoffs in the tech sector. It is true that tech companies are poised to adopt AI more quickly than others, but that doesn't seem to be what's happening. What seems to be happening is that tech companies had been overhiring throughout the decade leading up to the end of COVID-19. At that time hiring was a positive signal — now firing is.

I don't think these massive layoffs are good for tech companies in the long term, but since they mostly affect things that don't touch direct revenue-generating operations, they won't hurt in the near term, and by the time the company starts feeling the pain, the cause will be too long in the past to be remembered.

aydyn 9 hours ago||
> Some employees can be replaced by AI.

Yes, but let's not pretend that there aren't a lot of middle and even upper managers who could also be replaced by AI.

Of course they won't be because they are the ones making the decisions.

weatherlite 2 hours ago||
> Of course they won't be because they are the ones making the decisions.

That's not accurate at all

https://www.businessinsider.com/microsoft-amazon-google-embr...

visarga 10 hours ago||||
> Managers and execs are workers, too--so if the AI really is so good, surely they should recuse themselves and go live a peaceful life

One thing that doesn't get mentioned is AI's capacity to be held accountable. AI is fundamentally unaccountable. Like the genie from the lamp, it will grant you the 3 wishes, but you bear the consequences.

So what can we do when the tasks are critically important, like deciding on an investment or spending much time and resources on a pursuit? We still need the managers. We need humans for all tasks of consequence where risks are taken. Not because humans are smarter, but because we have skin.

Even on the other side, that of goals, desires, choosing problems to be solved - AI has nothing to say. It has no desires of its own. It needs humans to expose the problem space inside which AI could generate value. It generates no value of its own.

This second observation means AI value will not concentrate in the hands of a few, but instead will be widespread. It's no different than Linux: yes, it has a high initial development cost, but then it generates value in the application layer, which is as distributed as it gets. Each human using Linux exposes their own problems to the software to get help, and value is distributed across all problem contexts.

I have come to think that generating the opportunity for AI to provide value, and then incurring the outcomes, good or bad, of that work, are fundamentally human and distributed across society.

hn_throwaway_99 11 hours ago||||
> Thousands are being laid off, supposedly because they're "being replaced with AI," implying the AI is as good or better as humans at these jobs.

I don't think the "implying the AI is as good or better as humans" part is correct. While they may not be saying it loudly, I think most folks making these decisions around AI and staffing are quite clear that AI is not as good as human workers.

They do, however, think that in many cases it is "good enough". Just look at like 90%+ of the physical goods we buy these days. Most of them are almost designed to fall apart after a few years. I think it's almost exactly analogous to the situation with the Luddites (which is often falsely remembered as the Luddites being "anti-technology", when in reality they were just "pro-not-starving-to-death"). In that case, new mechanized looms greatly threatened the livelihood of skilled weavers. The quality of the fabric from these looms tended to be much worse than those of the skilled weavers. But it was still "good enough" for most people such that most consumers preferred the worse but much cheaper cloth.

It's the same thing with AI. It's not that execs think it's "as good as humans", it's that if AI costs X to do something, and the human costs 50X (which is a fair differential I think), execs think people will be willing to put up with a lot shittier quality if they can be delivered something much more cheaply.

One final note - in some cases people clearly do prefer the quality of AI. There was an article on HN recently discussing that folks preferred Waymo taxis, even though they're more expensive.

raspasov 6 hours ago||
Not surprising people like Waymos even though they are a bit more expensive. For a few more dollars you get:

- arguably a very nice, clean car

- same, ahem, Driver and driving style

With the basic UberX it’s a crapshoot. Good drivers, wild drivers, open windows, no air-con. UberX Comfort is better but there’s still a range.

deepsun 11 hours ago||||
The wave of layoffs started a couple of years before the AI craze (ChatGPT).
theossuary 11 hours ago||||
I don't think anyone is being laid off because of AI. People are being laid off because the market is bad for a myriad of reasons, and companies are blaming AI because it helps them deflect worry that might lower their stock price.

Companies say "we've laid people off because we're using AI," but they mean "we had to lay people off; we're hoping we can make up for them with AI."

hn_throwaway_99 11 hours ago||
> I don't think anyone is being laid off because of AI.

I think that's demonstrably false. While many business leaders may be overstating it, there are some pretty clear-cut cases of people losing their jobs to AI. Here are 2 articles from the Washington Post from 2 years ago:

https://archive.vn/C5syl "ChatGPT took their jobs. Now they walk dogs and fix air conditioners."

https://archive.vn/cFWmX "ChatGPT provided better customer service than his staff. He fired them."

sleepybrett 12 hours ago|||
Every few weeks I give LLMs a chance to code something for me.

Friday I laid out a problem very cleanly: take this data structure and transform it into this other data structure in Terraform, with examples of the data in both formats.

After the seventh round of back and forth, in which it would give me either code that would not compile or code that produced a totally different data structure, and after giving it more examples and clarifications all the while, I gave up. I gave the problem to a junior and they came back with the answer in about an hour.

Next time an AI bro tells you that AI can 'replace your juniors' tell him to go to hell.

Buttons840 13 hours ago|||
I'll offer a definition of AGI:

An AI (a computer program) that is better at [almost] any task than 5% of the human specialists in that field has achieved AGI.

Or, stated another way, if 5% of humans are incapable of performing any intellectual job better than an AI can, then that AI has achieved AGI.

Note, I am not saying that an AI that is better than humans at one particular thing has achieved AGI, because it is not "general". I'm saying that if a single AI is better at all intellectual tasks than some humans, the AI has achieved AGI.

The 5th percentile of humans deserves the label of "intelligent", even if they are not the most intelligent, (I'd say all humans deserve the label "intelligent") and if an AI is able to perform all intellectual tasks better than such a person, the AI has achieved AGI.

aydyn 13 hours ago|||
I think your definition is flawed.

Take the Artificial out of AGI. What is GI, and do the majority of humans have it? If so, then why is your definition of AGI far stricter than the definition of Human GI?

Buttons840 11 hours ago||
My definition is a high bar that is undeniably AGI. My personal opinion is that there are some lower bars that are also AGI. I actually think it's fair to call LLMs from GPT-3 onward AGI.

But, when it comes to the lower bars, we can spend a lot of time arguing over the definition of a single term, which isn't especially helpful.

aydyn 9 hours ago||
Okay, but then it's not so much a definition. It's more like a test.
djoldman 13 hours ago||||
I like where this is going.

However, it's not sufficient. The actual tasks have to be written down, tests constructed, and the specialists tested.

A subset of this has been done with some rigor and AI/computers have surpassed this threshold for some tests. Some have then responded by saying that it isn't AGI, and that the tasks aren't sufficiently measuring of "intelligence" or some other word, and that more tests are warranted.

Buttons840 13 hours ago||
You're saying we need to write down all intellectual tasks? How would that help?

If an AI is better at some tasks (that happen to be written down), it doesn't mean it is better at all tasks.

Actually, I'd lower my threshold even further--I originally said 50%, then 20%, then 5%--but now I'll say if an AI is better than 0.1% of people at all intellectual tasks, then it is AGI, because it is "general" (being able to do all intellectual tasks), and it is "intelligent" (a label we ascribe to all humans).

But the AGI has to be better at all (not just some) intellectual tasks.

djoldman 12 hours ago||
> An AI (a computer program) that is better at [almost] any task than 5% of the human specialists in that field has achieved AGI.

Let's say you have a candidate AI and assert that it indeed has passed the above benchmark. How do you prove that? Don't you have to say which tasks?

Buttons840 11 hours ago||
Well, to state it crudely, you just have to find a dumb person who is inferior to the AI at every single intellectual task. This is cruel, and I don't envy that dumb person, but who knows, I might end up being that dumb person--we all might.
snowwrestler 10 hours ago|||
I think any task-based assessment of intelligence is missing the mark. Highly intelligent people are not considered smart just because they can accomplish tasks.
Buttons840 7 hours ago||
I don't understand, you'll have to give an example.

What is the most non-task-like thing that highly intelligent people do as a sign of their intelligence?

rf15 12 hours ago|||
There are definitely also people in the futurism and/or doom-and-gloom camps, with absolutely no skin in the game, who can't resist this topic.
refurb 12 hours ago|||
This is a good summary of what LLMs offer today.

My company is desperately trying to incorporate AI (to tell investors it has). The fact that LLMs get things wrong is a huge problem, since most work can't be wrong, and if a human needs to carefully go through the output to check it, it's often just as much work as having that same human create the output themselves.

But language is one place LLMs shine. We often need to translate technical docs into layman's language and LLMs work great. They quickly find words and phrases to describe complex topics. Then a human can do a final round of revisions.

But anything de novo? Or requiring logic? It works about as well as a high school student with no background knowledge.

smhinsey 12 hours ago||
Fundamentally, they are really powerful text transformers with some additional capability. The further from that sweet spot and the closer to anthropomorphization, the more unreliable the output.
giancarlostoro 13 hours ago|||
It's right around the corner when you prove it as fact. Otherwise, as suggested, it is just hype to sell us on your LLM flavor.
JKCalhoun 14 hours ago|||
Where does Eric Schmidt fit? Selling something?
raspasov 13 hours ago|||
I think he's generally optimistic which is a net positive.
paulryanrogers 2 hours ago||
Why is that a net positive?
rvz 12 hours ago|||
Already invested in the AI companies selling you something.
ninetyninenine 13 hours ago||
Alright, let’s get this straight.

You’ve got people foaming at the mouth anytime someone mentions AGI, like it’s some kind of cult prophecy. “Oh it’s poorly defined, it’s not around the corner, everyone talking about it is selling snake oil.” Give me a break. You don’t need a perfect definition to recognize that something big is happening. You just need eyes, ears, and a functioning brain stem.

Who cares if AGI isn’t five minutes away. That’s not the point. The point is we’ve built the closest thing to a machine that actually gets what we’re saying. That alone is insane. You type in a paragraph about your childhood trauma and it gives you back something more coherent than your therapist. You ask it to summarize a court ruling and it doesn’t need to check Wikipedia first. It remembers context. It adjusts to tone. It knows when you’re being sarcastic. You think that’s just “autocomplete”? That’s not autocomplete, that’s comprehension.

And the logic complaints, yeah, it screws up sometimes. So do you. So does your GPS, your doctor, your brain when you’re tired. You want flawless logic? Go build a calculator and stay out of adult conversations. This thing is learning from trillions of words and still does better than half the blowhards on HN. It doesn’t need to be perfect. It needs to be useful, and it already is.

And don’t give me that “it sounds profound but it’s really just crap” line. That’s 90 percent of academia. That’s every self-help book, every political speech, every guy with a podcast and a ring light. If sounding smarter than you while being wrong disqualifies a thing, then we better shut down half the planet.

Look, you’re not mad because it’s dumb. You’re mad because it’s not that dumb. It’s close. Close enough to feel threatening. Close enough to replace people who’ve been coasting on sounding smart instead of actually being smart. That’s what this is really about. Ego. Fear. Control.

So yeah, maybe it’s not AGI yet. But it’s smarter than the guy next to you at work. And he’s got a pension.

sponnath 9 hours ago|||
Something big is definitely happening but it's not the intelligence explosion utopia that the AI companies are promising.

> Who cares if AGI isn’t five minutes away. That’s not the point. The point is we’ve built the closest thing to a machine that actually gets what we’re saying. That alone is insane. You type in a paragraph about your childhood trauma and it gives you back something more coherent than your therapist. You ask it to summarize a court ruling and it doesn’t need to check Wikipedia first. It remembers context. It adjusts to tone. It knows when you’re being sarcastic. You think that’s just “autocomplete”? That’s not autocomplete, that’s comprehension

My experience with LLMs have been all over the place. They're insanely good at comprehending language. As a side effect, they're also decent at comprehending complicated concepts like math or programming since most of human knowledge is embedded in language. This does not mean they have a thorough understanding of those concepts. It is very easy to trip them up. They also fail in ways that are not obvious to people who aren't experts on whatever is the subject of its output.

> And the logic complaints, yeah, it screws up sometimes. So do you. So does your GPS, your doctor, your brain when you’re tired. You want flawless logic? Go build a calculator and stay out of adult conversations. This thing is learning from trillions of words and still does better than half the blowhards on HN. It doesn’t need to be perfect. It needs to be useful, and it already is.

I feel like this is handwaving away the shortcomings a bit too much. It does not screw up in the same way humans do. Not even close. Besides, I think computers should rightfully be held up to a higher standard. We already have programs that can automate tasks that human brains would find challenging and tedious to do. Surely the next frontier is something with the speed and accuracy of a computer while also having the adaptability of human reasoning.

I don't feel threatened by LLMs. I definitely feel threatened by some of the absurd amount of money being put into them though. I think most of us here will be feeling some pain if a correction happens.

hnanon12341 9 hours ago|||
I find it kind of funny that in order to talk to AI people, you need to preface your paragraph with "I find current AI amazing, but...". It's like, you guessed it, pre-prompting them for better acceptance.
ninetyninenine 8 hours ago|||
Oh come on, it's not some secret code. People say “AI is amazing, but...” because it is amazing... and also flawed. That’s just called having a balanced take, not pre-prompting for approval. What do you want them to do, scream “THIS SUCKS” and ignore reality? It’s not a trick, it’s just how grown-ups talk when they’re not trying to win internet points.
ninetyninenine 8 hours ago|||
You say LLMs are “insanely good” at comprehending language, but then immediately pull back like it’s some kind of fluke. “Well yeah, it looks like it understands, but it doesn’t really understand.” What does that even mean? Do you think your average person walking around fully understands everything they say? Half of the people you know are just repeating crap they heard from someone else. You ask them to explain it and they fold like a cheap tent. But we still count them as sentient.

Then you say it’s easy to trip them up. Of course it is. You know what else is easy to trip up? People. Ask someone to do long division without a calculator. Ask a junior dev to write a recursive function that doesn’t melt the stack. Mistakes aren’t proof of stupidity. They’re proof of limits. And everything has limits. LLMs don’t need to be flawless. They need to be better than the tool they’re replacing. And in a lot of cases, they already are.

Now this part: “computers should be held to a higher standard.” Why? Says who? If your standard is perfection, then nothing makes the cut. Not the car, not your phone, not your microwave. We use tools because they’re better than doing it by hand, not because they’re infallible gods of logic. You want perfection? Go yell at the compiler, not the language model.

And then, this one really gets me, you say “surely the next frontier is a computer with the accuracy of a machine and the reasoning of a human.” No kidding. That’s the whole point. That’s literally the road we’re on. But instead of acknowledging that we’re halfway there, you’re throwing a tantrum because we didn’t teleport straight to the finish line. It’s like yelling at the Wright brothers because their plane couldn’t fly to Paris.

As for the money... of course there's a flood of it. That’s how innovation happens. Capital flows to power. If you’re worried about a correction, fine. But don’t confuse financial hype with technical stagnation. The tools are getting better. Fast. Whether the market overheats is a separate issue.

You say you're not threatened by LLMs. That’s cute. You’re writing paragraphs trying to prove why they’re not that smart while admitting they’re already better at language than most people. If you’re not threatened, you’re sure spending a lot of energy trying to make sure nobody else is impressed either.

Look, you don’t have to worship the thing. But pretending it's just a fancy parrot with a glitchy brain is getting old. It’s smart. It’s flawed. It’s changing everything. Deal with it.

mirroriingg 5 hours ago||
You sure spend a lot of energy and time living out this psychodrama.

If it's so self evidently revolutionary, why do you feel the need to argue about it?

ninetyninenine 1 hour ago||
I’m trying to fix humanity by smoothing out the sections that are deficient in awareness and IQ.

We need humanity at its best to prepare for the upcoming onslaught when something better tries to replace us. I do it for mankind.

raspasov 9 hours ago|||
There's a lot in here. I agree with a lot of it.

However, you've shifted the goal post from AGI to being useful in specific scenarios. I have no problem with that statement. It can write decent unit tests and even find hard-to-spot, trivial mistakes in code. But again, why can it do that? Because a version of that same mistake is in the enormous data set. It's a fantastic search engine!

Yet, it is not AGI.

ninetyninenine 8 hours ago||
You say it's just a fancy search engine. Great. You know what else is a fancy search engine? Your brain. You think you're coming up with original thoughts every time you open your mouth? No. You're regurgitating every book, every conversation, every screw-up you've ever witnessed. The brain is pattern matching with hormones. That’s it.

Now you say I'm moving the goalposts. No, I’m knocking down the imaginary ones. Because this whole AGI debate has turned into a religion. “Oh it’s not AGI unless it can feel sadness, do backflips, and write a symphony from scratch.” Get over yourself. We don’t even agree on what intelligence is. Half the country thinks astrology is real and you’re here demanding philosophical purity from a machine that can debug code, explain calculus, and speak five languages at once? What are we doing?

You admit it’s useful. You admit it catches subtle bugs, writes code, gives explanations. But then you throw your hands up and go, “Yeah, but that’s just memorization.” You mean like literally how humans learn everything? You think Einstein invented relativity in a vacuum? No. He stood on Newton, who stood on Galileo, who probably stood on a guy who thought the stars were angry gods. It’s all remixing. Intelligence isn’t starting from zero. It’s doing something new with what you’ve seen.

So what if the model’s drawing from a giant dataset? That’s not a bug. That’s the point. It’s not pulling one answer like a Google search. It’s constructing patterns, responding in context, and holding a conversation that feels coherent. If a human did that, we’d say they’re smart. But if a model does it, suddenly it’s “just autocomplete.”

You know who moves the goalposts? The people who can’t stand that this thing is creeping into their lane. So yeah, maybe it's not AGI in your perfectly polished textbook sense. But it's the first thing that makes the question real. And if you don’t see that, maybe you’re not arguing from logic. Maybe you’re just pissed.

raspasov 7 hours ago||
Of course, I have no original thoughts. Something is not created out of nothing. That would be Astrology, perhaps :).

But the difference between a human and an LLM is that humans can go out in the world and test their hypotheses. Literally every second is an interaction with a feedback loop. Even typing this response to you right now. LLMs currently have to wait for the next 6-month retraining cycle. I am not saying that AGI cannot be created. In theory it can be, but we are definitely milking the crap out of a local maximum we've currently found, which is definitely not the final answer.

PS Also, when I said "it can spot mistakes," I probably gave it too much credit. In one case, it presented several potential issues, and I happened to notice that one of them was a problem. In many cases, the LLM suggests issues that are either hypothetical or nonexistent.

Animats 10 hours ago||
A really good point in that note:

"But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human's. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box."

That does seem to be a problem with neural nets.

There are AIish systems that don't have this problem. Waymo's Driver, for example. Waymo has a procedure where, every time their system has a disconnect or near-miss, they run simulations with lots of variants on the troublesome situation. Those are fed back into the Driver.

Somehow. They don't say how. But it's not an end to end neural net. Waymo tried that, as a sideline project, and it was worse than the existing system. Waymo has something else, but few know what it is.

jrank 7 hours ago||
Yes, LLMs don't improve as humans do, but they could use other tools, for example designing programs in Prolog, to expand their capabilities. I think that the next step in AI will be LLMs being able to use better tools or strategies, for example architectures in which logic rules, heuristic algorithms, and small fine-tuned LLM agents are integrated as tools for LLMs. I think that new, more powerful architectures for helping LLMs are going to mature in the near future. Furthermore, there is an economic push to develop AI applications for warfare.

Edited: I should add that a Prolog system could help the LLM to continue learning by adding facts to its database and inferring new relations, for example using heuristics to suggest new models or directions for exploration.
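
A hedged, minimal sketch of what "adding facts and inferring new relations" could look like in the simplest case, with a toy forward-chaining rule in Python standing in for the Prolog database (all predicates and names are made up for illustration):

  # Toy fact store with one forward-chaining rule:
  # parent(A, B) and parent(B, C)  =>  grandparent(A, C)
  facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

  def infer(facts):
      """Apply the rule repeatedly until no new facts are derived."""
      derived = set(facts)
      while True:
          new = {("grandparent", a, c)
                 for (r1, a, b) in derived if r1 == "parent"
                 for (r2, b2, c) in derived if r2 == "parent" and b2 == b}
          if new <= derived:
              return derived
          derived |= new

  # An LLM used as a tool-caller would append facts here and re-run the inference.
  facts.add(("parent", "carol", "dana"))
  print(sorted(infer(facts)))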

ei23 8 hours ago|||
Why do so few ask: isn't it enough that current SOTA AI already makes us humans so much better every day? Exponential self-improvement seems like a very scary thing to me, and even if it all went right, humans would have to give up their pole position in the intelligence race. That will be very hard to swallow for many. If we really want self-improvement, we should get used to being useless :)
hagbarth 6 hours ago|||
Sure, but that's what the post is about - AGI. You won't get to any reasonable definition of AGI without self-improvement.
otabdeveloper4 7 hours ago|||
> current SOTA AI make us humans already so much better everyday?

Citation needed. I've seen the opposite effect. (And yes, it is supported by research.)

ei23 7 hours ago||
> Citation needed. I've seen the opposite effect. (And yes, it is supported by research.)

Citation needed.

nkoren 7 hours ago|||
https://www.media.mit.edu/publications/your-brain-on-chatgpt...
ozim 7 hours ago|||
If someone wants to get a feel for how this is a problem with neural nets, watch John Carmack's recent talk:

https://www.youtube.com/watch?v=4epAfU1FCuQ

The part specifically about this is around the 30-minute mark.

raspasov 7 hours ago|||
Correct me if I'm wrong, but from what I've heard, Waymo employs heuristics, rules, neural networks, and various other techniques that are combined and organized by humans into a system.

It's not an end-to-end neural network.

"AIish" is a good description. It is, by design, not AGI.

hnanon12341 9 hours ago||
Yeah, since most AI is trained on massive data, it also means it will take a while before you get your next massive data set.
Animats 9 hours ago|||
Worse, the massive data set may not help much with mistakes. Larger LLMs do not seem to hallucinate less.
dathinab 17 hours ago||
I _hope_ AGI is not right around the corner; for socio-political reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

But also, just taking what we have now, with some major power-usage reduction and minor improvements here and there, already seems like something which can be very usable/useful in a lot of areas (and to some degree we aren't even really ready for that either, but I guess that's normal with major technological change).

It's just that for those companies creating foundational models, it's quite unclear how they can recoup their already-spent costs without either a major breakthrough or forcefully (or deceptively) pushing it into a lot more places than it fits into.

twelve40 16 hours ago||
I agree and sincerely hope this bubble pops soon

> Meta Invests $100 Billion into Augmented Reality

that fool controls the board and he seems to be just desperately throwing insane ad money against the wall hoping that something sticks

For Altman there is no backing out either; he needs to make hay while the sun shines.

For the rest of us, I really hope these clowns fail like it's 2000 and never get to their dystopian matrix crap.

pbreit 16 hours ago||
"that fool" created a $1.8 trillion company.
SoftTalker 13 hours ago|||
He created a company that tracks and profiles people, psychologically manipulates them, and sells ads. And has zero ethical qualms about the massive social harm they have left in their wake.

That doesn't tell me anything about his ability to build "augmented reality" or otherwise use artificial intelligence in any way that people will want to pay for. We'll see.

Ford and GM have a century of experience building cars but they can't seem to figure out EVs despite trying for nearly two decades now.

Tesla hit the ball out of the park with EVs but can't figure out self-driving.

Being good at one thing does not mean you will be good at everything you try.

masfoobar 1 hour ago|||
I was rather sad when everyone moved away from MySpace to Facebook. While its interface would likely seem poor today, I felt it was much better than what Facebook was offering. Still, it's hard to believe it was around 19 years ago that many started to move over.

While I cannot remember the names of these sites, there were various attempts to create a shared platform website where you could create a profile and communicate with others. I remember joining a few at least back in 2002, before MySpace and Yahoo 360. There was also Bebo which, I think, was for the younger kids of the day.

Let's not forget about Friends Reunited.

Many companies become successful by being in the right place at the right time. Facebook is one of those companies.

Had Facebook been created a year or so earlier (or a year or two later), we would likely be using some other "social media" today. It would be interesting to see how that would have compared to Facebook. Would it be "more evil"?

Regardless, whether it's Facebook/Mark Zuckerberg or [insert_social_media]/[owner]... we would still end up with a new celebrity millionaire/billionaire... and they would still be considered "a fool" one way or another.

packetlost 12 hours ago|||
> Ford and GM have a century of experience building cars but they can't seem to figure out EVs despite trying for nearly two decades now.

Your EV knowledge is 3 years out of date. Both Ford and GM have well-liked, well-selling EVs. Meanwhile, Tesla's sales are cratering.

buzzerbetrayed 9 hours ago||
[flagged]
AaronAPU 15 hours ago||||
I’m always fascinated when someone equates profit with intelligence. There are many very wealthy fools and there always have been. Plenty of ingredients to substitute for intelligence.

Neither necessary nor sufficient.

falcor84 14 hours ago|||
I really don't see how a person can build one of the most successful companies in history from scratch without exhibiting intelligence.

There are many things we can and should say about Zuckerberg, but I don't think that unintelligent is one them.

Bender 1 hour ago|||
It certainly doesn't hurt when the government profiles and grooms intelligent people out of Stanford SRI the dark side, Harvard, hands them an unlimited credit card and says, "Make this thing that can do x,y,z." and then helps them network with like-minded creators and removes any obstacles in their path. One has to at least admit that was a contributing factor to their success as the vast majority of people do not get these perks.

Partially related documentary [1]

[1] - https://www.youtube.com/watch?v=a3Xxi0b9trY [video][44 mins]

DanHulton 14 hours ago||||
You can be terribly intelligent and be an incredible fool at the same time. The two aren't mutually exclusive.
muzani 9 hours ago||
Intelligence is the ability to creatively solve problems. Wisdom is picking the right problems to solve.
skeeter2020 13 hours ago|||
Intelligence probably IS positively correlated with success, but the formula is complex and involves many other factors, so I have to believe it's a relatively weak correlation. Anecdotally, I know about as many smart failures as smart successes.
apparent 14 hours ago||||
You can be a wealthy fool who inherited money, or married into it. It is also possible to be a wealthy fool who was just in the right place at the right time. But I would guess that people who appear to have "earned" their money are much less likely to be wealthy fools than those who appear to have inherited/married into it.
ipaddr 12 hours ago||
We see this all of the time. A business makes successful bets in one area, then tries to make bets in a new area and fails.

Once you achieve wealth it gives you the opportunity to make more bets many of which will fail.

The greater and younger the success the more hubris. You are more likely to see fools or people taking bad risks when they earned it themselves. They have a history of betting on themselves and past success that creates an ego that overrides common sense.

When you inherit money you protect it (or spend it on material things) because you have no history of ever being able to generate money.

skeeter2020 13 hours ago|||
No kidding, that would make Larry Ellison the richest, most intelligent lawn mower in the world.
lowsong 14 hours ago||||
No, the thousands of people working at Facebook did; he just got rich from it.
apsurd 14 hours ago|||
I'm no Zuck fanboy, but I'm compelled to ask: what purpose does rejecting the role of a leader serve?

HN is "the smart reddit" as my brother coined, and i'm very aware of how much nonsense is on here, but it is in a relative sense true.

All to say, blindly bashing the role of a leader seems faulty and dismissive.

prisenco 13 hours ago|||
There's a big discussion in there about the inherent requirement of labor, the definition of leadership, collective vs hierarchical decision-making, hegemonic inertia and market capture and more. This is probably not the best place to have it.

Not to say that Zuckerberg is dumb, but there are plenty of ways he could have managed to get where he is now without having the acumen to get to other places he wants to be.

dandanua 7 hours ago|||
No one is rejecting the role of a leader; it's just extremely exaggerated nowadays, like everyone thinks Facebook == Zuckerberg. And leaders aren't worth 1000x (or even 1000000x for some) unless they are doing the job of 1000 people. In most cases they are not even capable of doing 99% of the work people in their companies can do. Egomaniac Musk has already published his thoughts on programming problems, only confirming how dumb he is in this field.
cschep 12 hours ago|||
Which they absolutely would not have done if Zuck hadn't started it, gotten funding, paid them handsomely, and told them exactly what to do.

I'm sure that Zuck is worthy of huge amounts of criticism but this is a really silly response.

guelo 12 hours ago||
There were dozens of social networking companies at the time that FB was founded. If Zuck didn't exist those same or similar workers would have been building a similar product for whichever other company won the monopoly-ish social media market.
the_gastropod 15 hours ago||||
Aren't there enough examples of successful people who are complete buffoons to nuke this silly trope from orbit? Success is no proof of wisdom or intelligence or whatever.
umbra07 15 hours ago||
Can you point to someone as successful as Zuckerberg who was later conclusively shown to be a fraud or a total moron?
benreesman 14 hours ago|||
Fortunes are just bigger now in both notional and absolute terms, which is inevitable with Gini going parabolic; it says nothing about the guy on top this week.

Around the turn of the century a company called Enron collapsed in an accounting scandal so meteoric it also took down Arthur Andersen (there used to be a Big Five). Bad, bad fraud, a bunch of made-up figures, a bunch of shady ties to the White House, the whole show.

Enron was helmed by Jeff Skilling, a man described as "incandescently brilliant" by his professors at Harvard Business School. But it was a devious brilliance: an S-Tier aptitude for deception, grandiosity, and artful rationalization. This is chronicled in a book called The Smartest Guys in the Room if you want to read about it.

Right before that was the collapse of Long Term Capital Management: a firm so intellectually star studded the book about that is called When Genius Failed. They almost took the banking system with them.

The difference between then and now is that it took a smarter class of criminal to pull off a smaller heist with a much less patient public and much less erosion of institutions and norms. What would have been a front page scandal with prison time in 1995 is a Tuesday in 2025.

The new guys are dumber, not smarter: there aren't any cops chasing them.

falcor84 14 hours ago||
"S-Tier aptitude for deception" is also known as intelligence.
benreesman 14 hours ago|||
I think you'll find a consensus among clinical psychiatrists that the closest technical term for the colloquial notion of someone who puts all of their INT into LIE is Cluster B.

I see no evidence that great mathematicians or scientists or genre-defining artists or other admired and beloved intellectual luminaries with enduring legacies, or the recipients of the highest honors for any of those things, skew narcissistic or have severe empathy deficits or any of that.

Brilliant people seem to be drawn from roughly the same ethical and moral distribution as the general public.

falcor84 3 hours ago||
To be clear, I didn't mean to imply that all intelligent people are s-tier deceivers, but rather only that all s-tier deceivers are intelligent. Going with your metaphor, in order to put all of your INT into LIE, you need to have something in your INT pool.
twelve40 14 hours ago|||
the parent asked for moronity OR fraud, kind of a low bar lol
benreesman 14 hours ago||
The lesson here, and from pretty much any page of any history book you care to flip to, is that sooner or later there's a bill that comes due for advancing the worst people to the highest posts.

If you're not important to someone powerful, lying, cheating, stealing, and generally doing harm for personal profit will bring you to an unpleasant end right quick.

But the longer you can keep the con going, the bigger the bill: it's an unserviceable debt. So Skilling and Meriwether were able to bring down whole companies and close offices across entire cities.

This is by no means the worst case though, because if your institutions fail to kick in? There's no ceiling; it's like being short a stock in a squeeze.

You keep it going long enough, it's your country, or your entire civilization.

You want the institutions to kick in before that.

nancyminusone 17 minutes ago||||
Henry Ford. He could make cars. His other endeavors were those of a lunatic.
skeeter2020 13 hours ago||||
I think he's busy arranging a UFC fight on the whitehouse lawn for the next Independence Day?
kj4211cash 11 hours ago||||
Are we going to overlook the big orange elephant in the White House? Listen to him talk. It's hard for me to believe he wouldn't be labeled a moron by most if he weren't the President.
Avshalom 15 hours ago||||
What does the name of his company Meta refer to?
chemeril 15 hours ago||||
SBF comes to mind.
umbra07 11 hours ago||
SBF wasn't as successful though. His success wasn't even in the same stratosphere as Zuckerberg. His company was around for 3 years. Facebook has been around for over two decades. In terms of net worth, SBF was somewhere around 60th, I think? Zuckerberg was no. 2. Same thing with their respective companies.
twelve40 15 hours ago||||
> a special order, 350 gallons, and had it shipped from Los Angeles. A few days after the order arrived, Hughes announced he was tired of banana nut and wanted only French vanilla ice cream

yes, there are plenty

more recent example, every single person who touched epstein

umbra07 11 hours ago||
Nobody involved with Epstein was as successful as Zuckerberg. Howard Hughes's net worth is 55B adjusted for inflation. And I don't think, "he became known for his eccentric behavior and reclusive lifestyle—oddities that were caused in part by his worsening obsessive-compulsive disorder (OCD), chronic pain from a near-fatal plane crash, and increasing deafness." fits my "total moron" criteria.
fcarraldo 15 hours ago||||
Elon Musk.
wetpaws 15 hours ago|||
[dead]
notTooFarGone 7 hours ago||||
If you let a billion monkeys invest, there would also be a monkey billionaire by chance.
twelve40 15 hours ago||||
past performance does not guarantee future results

also, great for Wall Street, mixed bag for us, the people

kulahan 16 hours ago|||
$1.8 trillion in investor hopes and dreams, but of course they make zero dollars in profit, don’t know how to turn a profit, don’t have a product anyone would pay a profitable amount for, and have yet to show any real-world use that isn’t kinda dumb because you can’t trust anything it says anyways.
daniel_iversen 16 hours ago|||
Meta makes >$160 billion in revenue and is profitable itself; of course they're going to invest in future, longer-term revenue streams! Apple is the counterexample in a way, having maintained a lot of cash reserves (which, by the way, seem to have dwindled a LOT as I just checked..?)
kulahan 14 hours ago|||
I’m talking about OpenAI, not companies subsidizing their value-losing LLMs with value-generating branches of the company.
ArnoVW 14 hours ago|||
All that money was outside the US. The theory was for some time that they were waiting for the right moment (change of administration/legislation) so that they could officially recognize the profit in the US cheaply
9cb14c1ec0 13 hours ago|||
When it comes to recouping cost, a lot of people don't consider the insane amount of depreciation expense brought on by the up to $1 trillion (depending on the estimate) that has been invested in AI buildouts. That depreciation expense could easily be more than the combined revenue of all AI companies.
tshaddox 14 hours ago|||
If birth rates are as much a cause for concern as many people seem to think, and we absolutely need to solve it (instead of solving, for instance, the fact that the economy purportedly requires exponential population growth forever), perhaps we should hope that AGI comes soon.
otabdeveloper4 7 hours ago||
We're not in a stable state now, it's not about population growth, it's about not literally dying out.

With a birth rate of 1, the population will halve every generation. This is an apocalyptic scenario and incompatible with industrial civilization.

Davidzheng 12 hours ago|||
I think it's rather easy for them to recoup those costs: if you can disrupt some industry with a full-AI company with almost no employees and outcompete everyone else, that's free money for you.
dathinab 3 hours ago|||
I think they are trying to do something like this (1) by, long term, providing a "business suite", i.e. something comparable to G Suite or Microsoft 365.

For a lot of the things which work well with current AI technology it's super convenient to have access to all your customers' private data (even if you don't train on it; e.g. RAG systems for information retrieval are one of the things which already work quite well with the current state of LLMs). It also lets you compensate for hallucinations and the LLM's lack of understanding by providing (working) links to, or snippets of, the sources the information came from; and by having all relevant information in the LLM's context window instead of relying on its "learned" training data, you generally get better results. RAG systems already worked well without LLMs in some information-retrieval products.

And the thing is, if your users have to manually upload all potentially relevant business documents you can't really make it work well. But what if they upload all of them to you anyway, because they use your company's file-sharing/drive solution?

And let's not even consider the benefits you could get from a cheaper plan where you are allowed to train on the company's data after anonymizing it (aimed at micro companies, say; too many people think "they have nothing to hide", and it's anonymized, so it's okay, right? (no)). Or going rogue and just stealing trade secrets to then break into other markets; it's not like some bigger SF companies have been found to do exactly that (I think it was Amazon / Amazon Basics).

(1) Though in that case you still have employees until your AI becomes good enough to write all your code, instead of "just" being a tool for developers to work faster ;)
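
To make the RAG point above concrete, here is a minimal sketch of the pattern (purely illustrative: a toy bag-of-words similarity instead of a real embedding model, and made-up document names; it's a sketch of the idea, not any vendor's implementation): retrieve the most relevant documents for a query and put them, with their source paths, into the prompt so the model can answer from them and cite them.

  from collections import Counter
  import math

  # Toy corpus standing in for a customer's uploaded business documents (made up).
  DOCS = {
      "contracts/acme-2024.txt": "Acme renewal is due in March 2024, value 120k EUR.",
      "hr/onboarding.txt": "New hires must complete security training in week one.",
      "sales/q3-report.txt": "Q3 revenue grew 12 percent, driven by the EU region.",
  }

  def bag_of_words(text):
      # Lowercased word counts; a stand-in for a real embedding model.
      return Counter(text.lower().split())

  def cosine(a, b):
      dot = sum(a[w] * b[w] for w in a if w in b)
      norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
      return dot / norm if norm else 0.0

  def retrieve(query, k=2):
      # Rank documents by similarity to the query and keep the top k.
      q = bag_of_words(query)
      ranked = sorted(DOCS.items(), key=lambda kv: cosine(q, bag_of_words(kv[1])), reverse=True)
      return ranked[:k]

  def build_prompt(query):
      # Retrieved snippets (with source paths) go into the context window, so the
      # model can answer from them and link back to where the information came from.
      context = "\n".join(f"[source: {path}] {text}" for path, text in retrieve(query))
      return f"Answer using only the sources below, and cite them.\n{context}\n\nQuestion: {query}"

  print(build_prompt("When is the Acme contract renewal due?"))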

energy123 10 hours ago|||
Possibly but not necessarily. Competition can erode all economic rents, no matter how useful a product is.
pbreit 16 hours ago|||
Must "AGI" match human intelligence exactly or would outperforming in some functions and underpformin in others qualify?
crooked-v 16 hours ago|||
For me, "AGI" would come in with being able to reliably perform simple open-ended tasks successfully without needing any specialized aid or tooling. Not necessarily very well, just being capable of it in the first place.

For a specific example of what I mean, there's Vending-Bench - even very 'dumb' humans could reliably succeed on that test indefinitely, at least until they got terminally bored of it. Current LLMs, by contrast, are just fundamentally incapable of that, despite seeming very 'smart' if all you pay attention to is their eloquence.

carefulfungi 13 hours ago|||
If someone handed you an envelope containing a hidden question, and your life depended on a correct answer, would you rather pick a random person out of the phone book or an LLM to answer it?

On one hand, LLMs are often idiots. On the other hand, so are people.

crooked-v 12 hours ago|||
That's not at all analogous to what I'm talking about. The comparison would be picking an LLM or a random person out of the phone book to, say, operate a vending machine... and we already know LLMs are unable to do that, given the results of Vending-Bench.
carefulfungi 56 minutes ago||
More than 10% of the global population is illiterate. Even in first world countries, numeracy rates are 75-80%. I think you overestimate how many people could pass the benchmark.

Edit - rereading, my comment sounds far too combative. I mean it only as an observation that AI is catching up quickly vs what we manage to teach humans generally. Soon, if not already, LLMs will be “better educated” than the average global citizen.

bookman117 10 hours ago|||
I'd learn as much as I could about the nature of the question beforehand and pay a human with a great track record of handling such questions.
dathinab 3 hours ago||||
No, it doesn't have to; it just has to be "general",

as in it can learn by itself to solve any kind of generic task it can practically interface with (at least ones which aren't way too complicated).

To some degree LLMs can do so theoretically, but

- learning (i.e. training them) is way too slow and costly

- domain adaptation (later learning) often has a ton of unintended side effects (like forgetting a bunch of important previously learned things)

- it can't really learn by itself in an interactive manner

- "learning" by e.g. retrieving data from a knowledge database and including it in answers (e.g. RAG) isn't really learning but just information retrieval; it also has issues with context windows and planning

I could imagine OpenAI putting together multiple LLMs + RAG + planning systems etc. to create something which technically could be named AGI, but which isn't really the breakthrough people associate with AGI, in the not too distant future.
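
A toy illustration of that retrieval-vs-learning distinction (all numbers invented; a sketch of the difference, not a claim about how any real system is built): retrieval leaves the model's weights untouched and only changes what sits in the context, while learning actually changes the weights.

  # Toy "model": predicts y from x with a single weight (numbers are invented).
  weight = 0.5

  def predict(x):
      return weight * x

  # 1) Retrieval (RAG-style): the model stays frozen; we only change what goes
  #    into the context next to the query. The parameters do not move.
  knowledge_base = {"conversion_factor": 2.0}
  context = f"note: the true conversion factor is {knowledge_base['conversion_factor']}"
  print("retrieval:", context, "| model output still:", predict(3.0))

  # 2) Learning: one gradient step on squared error actually moves the weight,
  #    so the model itself is different afterwards.
  x, target = 3.0, 6.0            # training example: y should be 2 * x
  error = predict(x) - target     # 0.5 * 3 - 6 = -4.5
  grad = 2 * error * x            # d/dw of (w*x - target)^2
  weight -= 0.01 * grad           # weight moves from 0.5 toward 2.0
  print("after one update:", predict(x))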

saubeidl 16 hours ago||||
Where would you draw the line? Any ol' computer outperforms me in doing basic arithmetic.
kulahan 16 hours ago|||
This is a question of how we quantify intelligence, and there aren’t many great answers. Still, basic arithmetic is probably not the right guideline for intelligence. My guess has always been that it’ll lie somewhere in ability to think critically, which they still have not even attempted yet, because it doesn’t really work with LLMs as they’re structured today.
hkt 16 hours ago|||
I'd suggest anything able to match a professional doing knowledge work. Original research from recognisably equivalent cognition, or equal abilities with a skilled practitioner of (eg) medicine.

This sets the bar high, though. I think there's something to the idea of being able to pass for human in the workplace though. That's the real, consequential outcome here: AGI genuinely replacing humans, without need for supervision. That's what will have consequences. At the moment we aren't there (pre-first-line-support doesn't count).

root_axis 16 hours ago||||
At the very least, it needs to be able to collate training data, design, code, train, fine tune and "RLHF" a foundational model from scratch, on its own, and have it show improvements over the current SOTA models before we can even begin to have the conversation about whether we're approaching what could be AGI at some point in the future.
kadushka 14 hours ago||
I cannot do all that. Am I not generally intelligent?
OJFord 15 hours ago|||
That would be human; I've always understood the General to mean 'as if it's any human', i.e. perhaps not absolute mastery, but trained expertise in any domain.
babuloseo 15 hours ago||
What sociopolitical reasons? Can you name some of these? We are 100% ready for AGI.
kadushka 14 hours ago||
Are you ready to lose your job, permanently?
hackable_sand 10 hours ago|||
I'd be down for that. Making fast food is tiring work.
kiney 11 hours ago|||
looking forward to it
vessenes 17 hours ago||
Good take from Dwarkesh. And I love hearing his updates on where he’s at. In brief - we need some sort of adaptive learning; he doesn’t see signs of it.

My guess is that frontier labs think that long context is going to solve this: if you had a quality 10M-token context, that would be enough to freeze an agent at a great internal state and still do a lot.

Right now the long context models have highly variable quality across their windows.

But to reframe: will we have useful 10M-token context windows in 2 years? That seems very possible.

nicoburns 14 hours ago||
How long is "long"? Real humans have context windows measured in decades of realtime multimodal input.
vessenes 2 hours ago|||
I think there’s a good clue here to what may work for frontier models — you definitely do not remember everything about a random day 15 years ago. By the same token, you almost certainly remember some things about a day much longer ago than that, if something significant happened. So, you have some compression / lossy memory working that lets you not just be a tabula rasa about anything older than [your brain’s memory capacity].

Some architectures try to model this infinite, but lossy, horizon with functions that are amenable as a pass on the input context. So far none of them seem to beat the good old attention head, though.
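
A very crude way to picture that kind of lossy, fixed-size memory (purely an illustration of the idea, not what any frontier lab actually ships): keep a decaying running summary of everything seen, so old inputs fade but never fully disappear, plus a tiny verbatim store for the rare "significant" events.

  import random

  class LossyMemory:
      """Constant-size memory: a decaying summary plus a few significant events."""

      def __init__(self, decay=0.99, keep_threshold=0.95, max_keep=5):
          self.summary = 0.0        # decaying average of everything ever observed
          self.significant = []     # exact copies of the rare, important inputs
          self.decay = decay
          self.keep_threshold = keep_threshold
          self.max_keep = max_keep

      def observe(self, value, importance):
          # Every observation nudges the summary; older ones fade geometrically,
          # like the ordinary days from 15 years ago you no longer recall.
          self.summary = self.decay * self.summary + (1 - self.decay) * value
          # Only the most important observations are stored exactly, like the
          # handful of significant days you still remember in detail.
          if importance > self.keep_threshold and len(self.significant) < self.max_keep:
              self.significant.append(value)

  mem = LossyMemory()
  for _ in range(10_000):                      # a long "life" of observations
      mem.observe(random.gauss(0, 1), random.random())
  print(round(mem.summary, 3), mem.significant)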

MarcelOlsz 6 hours ago|||
Speak for yourself. I can barely remember what I did yesterday.
kranke155 16 hours ago|||
I believe Demis when he says we are 10 years away from AGI.

He basically made up the field (out of academia) for a large number of years and OpenAI was partially founded to counteract his lab, and the fears that he would be there first (and only).

So I trust him. Sometime around 2035 he expects there will be AGI which he believes is as good or better than humans in virtually every task.

eikenberry 15 hours ago|||
When someone says 10 years out in tech it means there are several needed breakthroughs that they think could possibly happen if things go just right. Being an expert doesn't make the 10 years more accurate, it makes the 'breakthroughs needed' part more meaningful.
moralestapia 7 hours ago|||
>He basically made up the field (out of academia) for a large number of years

Not even close.

Davidzheng 12 hours ago|||
I'm sure we'll have true test-time-learning soon (<5 years), but it will be more expensive. AlphaProof (for DeepMind's IMO attempt) already has this.
imtringued 6 hours ago||
There was a company that claimed to have solved it and we hear nothing but the sound of crickets from them.
izzydata 17 hours ago||
Not only do I not think it is right around the corner, I'm not even convinced it is possible, or at the very least I don't think it is possible using conventional computer hardware. I don't think being able to regurgitate information in an understandable form is an adequate or useful measurement of intelligence. If we ever crack artificial intelligence, it's quite possible that in its first form it is of very low intelligence by human standards, but is truly capable of learning on its own without extra help.
Waterluvian 17 hours ago||
I think the only way that it’s actually impossible is if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence. Otherwise we’re just machines, after all. A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Maybe our first AGI is just a Petri dish brain with a half-decent python API. Maybe it’s more sand-based, though.

knome 15 hours ago|||
>Maybe our first AGI is just a Petri dish brain with a half-decent python API.

https://www.oddee.com/australian-company-launches-worlds-fir...

the entire idea feels rather immoral to me, but it does exist.

j_bum 10 hours ago||
I’m curious why you find it immoral?
knome 9 hours ago|||
I don't think it particularly moral to start heading down a path wherein we are essentially aiming to create enslaved cloned vat brains. I know that's not what they have, and that they're nowhere near that, but if they succeed in these early stages, more and more complex systems will follow in time. I don't think it a particularly healthy direction to explore.
superfrank 9 hours ago|||
Because the lack of semicolons is pure hubris and an affront to God
Balgair 16 hours ago||||
-- A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Sort of. The main issue is the energy requirements. We could theoretically reproduce a human brain in software today; it's just that it would be a really big energy hog, run very slowly, and probably become insane quickly, like any person trapped in a sensory-deprivation tank.

The real key development for AI and AGI is down at the metal level of computers: the memristor.

https://en.m.wikipedia.org/wiki/Memristor

The synapse in a brain is essentially a memristive element, and it's a very taxing one on the neuron. The defining equation is memristance = (change in flux)/(change in charge). Yes, a flux capacitor, sorta. It's the missing piece in fundamental electronics.
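
For reference, the textbook form of that relation (Chua's definition, relating flux linkage φ and charge q; not specific to any particular device):

  M(q) = \frac{d\varphi}{dq}, \qquad v(t) = M(q(t))\, i(t)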

Making simple 2-element memristors is somewhat possible these days, though I've not really been in the space recently. Please, if anyone knows where to buy them (a real one, not a claimed-to-be one), let me know. I'm willing to pay good money.

In terms of AI, a memristor would require a total redesign of how we architect computers (goodbye buses and physically separate memory, for one). But you'd get a huge energy and time savings benefit. As in, you could run an LLM on a watch battery or small solar cell and let the environment train it to a degree.

Hopefully AI will accelerate their discovery and facilitate their introduction into cheap processing and construction of chips.

josefx 16 hours ago||||
> and fundamentally immeasurable about humans that leads to our general intelligence

Isn't AGI defined to mean "matches humans in virtually all fields"? I don't think there is a single human capable of this.

andy99 17 hours ago||||
If by "something magical" you mean something we don't understand, that's trivially true. People like to give firm opinions or make completely unsupported statements they feel should be taken seriously ("how do we know humans intelligence doesn't work the same way as next token prediction") about something nobody understand.
Waterluvian 16 hours ago||
I mean something that’s fundamentally not understandable.

“What we don’t yet understand” is just a horizon.

preisschild 9 hours ago||||
> Maybe our first AGI is just a Petri dish brain with a half-decent python API

This reminds me of The Thought Emporium's project of teaching rat brain cells to play doom

https://www.youtube.com/watch?v=bEXefdbQDjw

somewhereoutth 16 hours ago||||
Our silicon machines exist in a countable state space (you can easily assign a unique natural number to any state for a given machine). However, 'standard biological mechanisms' exist in an uncountable state space - you need real numbers to properly describe them. Cantor showed that the uncountable is infinitely more infinite (pardon the word tangle) than the countable. I posit that the 'special sauce' for sentience/intelligence/sapience exists beyond the countable, and so is unreachable with our silicon machines as currently envisaged.

I call this the 'Cardinality Barrier'

bakuninsbart 16 hours ago|||
Cantor talks about countable and uncountable infinities; both computer chips and human brains are finite systems. The human brain has roughly 100b neurons; even if every pair of them had an edge, and each edge could individually light up signalling a different state of mind, isn't that just on the order of `2^(100b choose 2)` states? That's roughly as far away from infinity as 1.
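
Spelled out as a rough bound (treating each possible neuron pair as an independent on/off edge, which already overcounts):

  \binom{10^{11}}{2} = \frac{10^{11}(10^{11} - 1)}{2} \approx 5 \times 10^{21} \text{ possible edges}

  \#\text{states} \le 2^{5 \times 10^{21}}, \text{ which is astronomically large but still finite.}
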
somewhereoutth 16 hours ago||
But this signalling (and connections) may be more complex than connected/unconnected and on/off, such that we cannot completely describe them [digitally/using a countable state space] as we would with silicon.
chowells 15 hours ago||
If you think it can't be done with a countable state space, then you must know some physics that the general establishment doesn't. I'm sure they would love to know what you do.

As far as physicists believe at the moment, there's no way to ever observe a difference below the Planck scale. Energy/distance/time/whatever: they all have a lower boundary of measurability. That's not a practical issue, it's a theoretical one. According to the best models we currently have, there's literally no way to ever observe a difference below those levels.

If a difference smaller than that is relevant to brain function, then brains have a way to observe the difference. So I'm sure the field of physics eagerly awaits your explanation. They would love to see an experiment thoroughly disagree with a current model. That's the sort of thing scientists live for.

lettuceconstant 10 hours ago||
[dead]
Waterluvian 16 hours ago||||
That’s an interesting thought. It steps beyond my realm of confidence, but I’ll ask in ignorance: can a biological brain really have infinite state space if there’s a minimum divisible Planck length?

Infinite and “finite but very very big” seem like a meaningful distinction here.

I once wondered if digital intelligences might be possible but would require an entire planet’s precious metals and require whole stars to power. That is: the “finite but very very big” case.

But I think your idea is constrained to if we wanted a digital computer, is it not? Humans can make intelligent life by accident. Surely we could hypothetically construct our own biological computer (or borrow one…) and make it more ideal for digital interface?

nicoburns 14 hours ago|||
Absolutely nothing in the real world is truly infinite. Infinity is just a useful mathematical fiction that closely approximates the real world for large enough (or small enough, in the case of infinitesimals) things.

But biological brains have a significantly greater state space than conventional silicon computers because they're analog. The voltage across a transistor varies approximately continuously, but we only measure a single bit from that (or occasionally 2, for NAND).

saubeidl 16 hours ago|||
Isn't a Planck length just the minimum for measurability?
layer8 16 hours ago|||
Not quite. Smaller wavelengths mean higher energy, and a photon with Planck wavelength would be energetic enough to form a black hole. So you can’t meaningfully interact electromagnetically with something smaller than the Planck length. Nor can that something have electromagnetic properties.

But since we don’t have a working theory of quantum gravity at such energies, the final verdict remains open.

triclops200 16 hours ago|||
Measurability is essentially a synonym for meaningful interaction at some measurement scale. When describing fundamental measurability limits, you're essentially describing what current physical models consider to be the fundamental interaction scale.
richk449 16 hours ago||||
It sounds like you are making a distinction between digital (silicon computers) and analog (biological brains).

As far as possible reasons that a computer can’t achieve AGI go, this seems like the best one (assuming computer means digital computer of course).

But in a philosophical sense, a computer obeys the same laws of physics that a brain does, and the transistors are analog devices that are being used to create a digital architecture. So whatever makes your brain have uncountable states would also make a real digital computer have uncountable states. Of course we can claim that only the digital layer on top matters, but why?

layer8 16 hours ago||||
Physically speaking, we don’t know that the universe isn’t fundamentally discrete. But the more pertinent question is whether what the brain does couldn’t be approximated well enough with a finite state space. I’d argue that books, music, speech, video, and the like demonstrate that it could, since those don’t seem qualitatively much different from how other, analog inputs stimulate our intellect. Or otherwise you’d have to explain why an uncountable state space would be needed to deal with discrete finite inputs.
dwaltrip 14 hours ago||||
Please describe in detail how biological mechanisms are uncountable.

And then you need to show how the same logic cannot apply to non-biological systems.

jandrewrogers 16 hours ago||||
> 'standard biological mechanisms' exist in an uncountable state space

Everything in our universe is countable, which naturally includes biology. A bunch of physical laws are predicated on the universe being a countable substrate.

lettuceconstant 10 hours ago||
[dead]
saubeidl 16 hours ago||||
That is a really insightful take, thank you for sharing!
coffepot77 16 hours ago|||
Can you explain why you think the state space of the brain is not finite? (Not even taking into account countability of infinities)
sandworm101 16 hours ago||||
A brain in a jar, with wires so that we can communicate with it, already exists. It's called the internet. My brain is communicating with you now through wires. Replacing my keyboard with implanted electrodes may speed up the connection, but it won't fundamentally change the structure or capabilities of the machine.
Waterluvian 16 hours ago||
Wait, are we all just Servitors?!
frizlab 17 hours ago|||
> if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence

It’s called a soul for the believers.

agumonkey 17 hours ago|||
Then there's the other side of the issue: if your tool is smarter than you, how do you handle it?

People are joking online that some colleagues use ChatGPT to answer questions from other teammates that were themselves written by ChatGPT; nobody knows what's going on anymore.

breuleux 16 hours ago|||
I think the issue is going to turn out to be that intelligence doesn't scale very well. The computational power needed to model a system has got to be in some way exponential in how complex or chaotic the system is, meaning that the effectiveness of intelligence is intrinsically constrained to simple and orderly systems. It's fairly telling that the most effective way to design robust technology is to eliminate as many factors of variation as possible. That might be the only modality where intelligence actually works well, super or not.
airstrike 16 hours ago||
What does "scale well" mean here? LLMs right now aren't intelligent so we're not scaling from that point on.

If we had a very inefficient, power-hungry machine that was 1:1 as intelligent as a human being, but could scale it, even inefficiently, to 100:1 of a human being, it might still be worth it.

navels 17 hours ago|||
why not?
izzydata 17 hours ago||
I'm not an expert by any means, but everything I've seen of LLMs / machine learning looks like mathematical computation no different than what computers have always been doing at a fundamental level. If computers weren't AI before, then I don't think they are now just because the maths they are doing has changed.

Maybe something like the game of life is more in the right direction. Where you set up a system with just the right set of rules with input and output and then just turn it on and let it go and the AI is an emergent property of the system over time.

hackinthebochs 16 hours ago||
Why do you have a preconception of what an implementation of AGI should look like? LLMs are composed of the same operations that computers have always done. But they're organized in novel ways that have produced novel capabilities.
izzydata 16 hours ago||
I am expressing doubt. I don't have any preconceptions. I am open to being convinced of anything that makes more sense.
tempusalaria 14 hours ago||
Even as someone who is skeptical about LLMs, I’m not sure how anyone can look at what was achieved in AlphaGo and not at least consider the possibility that NNs could be superhuman in basically every domain at some point
paulpauper 16 hours ago|||
I agree. There is no defined or agreed-upon consensus on what AGI even means or implies. Instead, we will continue to see incremental improvements at the sort of things AI is good at, like text and image generation, generating code, etc. The utopian dream of AI solving all of humanity's problems as people just chill on a beach basking in infinite prosperity is unfounded.
hyperbovine 10 hours ago||
> There is no defined or agreed-upon consensus on what AGI even means or implies.

Agreed, however defining ¬AGI seems much more straightforward to me. The current crop of LLMs, impressive though they may be, are just not human level intelligent. You recognize this as soon as you spend a significant amount of time using one.

It may also be that they are converging on a type of intelligence that is fundamentally not the same as human intelligence. I’m open to that.

colechristensen 17 hours ago|||
>I don't think being able to regurgitate information in an understandable form is even an adequate or useful measurement of intelligence.

Measuring intelligence is hard and requires a really good definition of intelligence. LLMs have in some ways made the definition easier, because now we can ask a concrete question about computers which are very good at some things: "Why are LLMs not intelligent?" Given their capabilities and deficiencies, answering that question about what current "AI" technology lacks will make us better able to define intelligence. This assumes that LLMs are the state-of-the-art Million Monkeys and that intelligence lies on a different path than further optimizing that.

https://en.wikipedia.org/wiki/Infinite_monkey_theorem

baxtr 17 hours ago|||
I think the same.

What do you call people like us? AI doomers? AI boomers?!

npteljes 16 hours ago|||
"AI skeptics", like here: https://www.techopedia.com/the-skeptics-who-believe-ai-is-a-...
izzydata 14 hours ago||
This article is about being skeptical that what people currently call AI (actually LLMs) is going to be a transformative technology.

Myself and many others are skeptical that LLMs are even AI.

LLMs / "AI" may very well be a transformative technology that changes the world forever. But that is a different matter.

paulpauper 16 hours ago||||
There is a middle ground of people who believe AI will lead to improvements in some respects of life, but will not liberate people from work or anything grandiose like that.
baxtr 16 hours ago||
I am big fan of AI tools.

I just don’t see how AGI is possible in the near future.

Mistletoe 17 hours ago|||
Realists.
dinkumthinkum 17 hours ago|||
I think you are very right to be skeptical. It's refreshing to see another such take, as it is so strange to see so many supposedly technical people just roll down the track of assuming this is happening when there are some fundamental problems with the idea. I understand why non-technical people are ready to marry and worship it or whatever, but for serious people I think we need to think more critically.
ActorNightly 17 hours ago|||
Exactly. Ive said this from the start.

AGI is being able to simulate reality with high enough accuracy, faster than reality (which includes being able to simulate human brains), which so far doesn't seem to be possible (due to computational irreducibility).

kachapopopow 17 hours ago||
There is something easy you can always do to tell if something is just hype: we will never be able to make something smarter than a human brain on purpose. It effectively has to happen either naturally or by pure coincidence.

The amount of computing power we are putting in only changes that luck by a tiny fraction.

echoangle 17 hours ago||
> we will never be able to make something smarter than a human brain on purpose. It effectively has to happen either naturally or by pure coincidence.

Why is that? We can build machines that are much better than humans in some things (calculations, data crunching). How can you be certain that this is impossible in other disciplines?

kachapopopow 17 hours ago||
That's just a tiny fraction of what a human brain can do. Sure, we can get something better in very narrow subjects, but something like being able to recognize patterns and apply that to solve problems is way beyond anything we can even think of right now.
echoangle 17 hours ago||
Ok, but how does that mean we will never be able to do it? Imagine telling people 500 years ago that you would build a machine that can bring them to the moon. Maybe AGI is like that, maybe it's really impossible. But how can people be confident that AGI is something humans can't create?
kachapopopow 17 hours ago||
What we have right now with LLMs is brute-forcing our way to creating something "smarter" than a human; of course it can happen, but it's not something that can be "created" by a human. An LLM as small as 3B has already performed more calculations than all the calculations done in the entire history of humanity.
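
A rough sanity check on the order of magnitude (all numbers are assumptions for illustration: the common ~6ND estimate of training FLOPs, a 3B-parameter model, a 1T-token training run, and a deliberately generous guess at hand calculation by every human who ever lived):

  \text{training FLOPs} \approx 6ND = 6 \times (3 \times 10^{9}) \times 10^{12} \approx 1.8 \times 10^{22}

  \text{hand calculations, ever} \approx 10^{11} \text{ people} \times 1 \text{ op/s} \times 1.5 \times 10^{9} \text{ s (about 50 years)} \approx 1.5 \times 10^{20}

So even with generous human-side numbers, a single small training run comes out a couple of orders of magnitude larger.
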
merizian 15 hours ago||
The problem with the argument is that it assumes future AIs will solve problems like humans do. In this case, it’s that continuous learning is a big missing component.

In practice, continual learning has not been an important component of improvement in deep learning history thus far. Instead, large diverse datasets and scale have proven to work the best. I believe a good argument for continual learning being necessary needs to directly address why the massive cross-task learning paradigm will stop working, and ideally make concrete bets on what skills will be hard for AIs to achieve. I think generally, anthropomorphisms lack predictive power.

I think maybe a big real crux is the amount of acceleration you can achieve once you get very competent programming AIs spinning the RL flywheel. The author mentioned uncertainty about this, which is fair, and I share the uncertainty. But it leaves the rest of the piece feeling too overconfident.

Davidzheng 12 hours ago||
Well Alphaproof used test-time-training methods to generate similar problems (alphazero style) for each question it encounters.
imtringued 6 hours ago|||
> The problem with the argument is that it assumes future AIs will solve problems like humans do. In this case, it's that continuous learning is a big missing component.

> I think generally, anthropomorphisms lack predictive power.

I didn't expect someone to get this part as wrong as you did. Continuous learning has almost nothing to do with humans and anthropomorphism. If anything, continuous learning is the bitter lesson cranked up to the next level. Rather than carefully curating datasets using human labor, the system learns on its own even when presented with an unfiltered garbage data stream.

>I believe a good argument for continual learning being necessary needs to directly address why the massive cross-task learning paradigm will stop working, and ideally make concrete bets on what skills will be hard for AIs to achieve.

The reason why I in particular am so interested in continual learning has pretty much zero to do with humans. Sensors and mechanical systems change their properties over time through wear and tear. You can build a static model of the system's properties, but the static model will fail, because the real system has changed and you now have a permanent modelling error. Correcting the modelling error requires changing the model, hence continual learning has become mandatory. I think it is pretty telling that you failed to take the existence of reality (a separate entity from the model) into account. The paradigm didn't stop working, it never worked in the first place.

It might be difficult to understand the bitter lesson, but let me rephrase it once more: Generalist compute scaling approaches will beat approaches based around human expert knowledge. Continual learning reduces the need for human expert knowledge in curating datasets, making it the next step in the generalist compute scaling paradigm.

827a 12 hours ago||
Continuous learning might not have been important in the history of deep learning evolution yet, but that might just be because the deep learning folks are measuring the wrong thing. If you want to build the most intelligent AI ever, based on whatever synthetic benchmark is hot this month, then you'd do exactly what the labs are doing. If you want to build the most productive and helpful AI, intelligence isn't always the best goal. It's usually helpful, but an idiot who learns from his mistakes is often more valuable than an egotistical genius.
energy123 2 hours ago||
The LLM does learn from its mistakes - while it's training. Each epoch it learns from its mistakes.
Herring 17 hours ago||
Apparently 54% of American adults read at or below a sixth-grade level nationwide. I’d say AGI is kinda here already.

https://en.wikipedia.org/wiki/Literacy_in_the_United_States

yeasku 17 hours ago||
Does a country's failed education system have anything to do with AGI?
Davidzheng 12 hours ago|||
Yes if you measure AGI against median human.
yeasku 11 hours ago||
Why do you think a US citizen represents the median human?

Let me guess, you are from USA.

decimalenough 11 hours ago|||
The US citizen is probably well above the median human. There are many countries where the education system is abysmal, many people are stunted because of inadequate nutrition, etc.
yeasku 11 hours ago||
If it is well above, then it's also not a good representation of the average human.

What is your point?

decimalenough 10 hours ago||
That in raw IQ terms LLMs already beat the median human, American or otherwise, and have thus achieved some definitions of "AGI".

That doesn't mean they're capable of completing many human tasks, much less improving themselves, which is generally considered the bar for "real" AGI/super intelligence.

Davidzheng 11 hours ago|||
I'm not.
yeasku 11 hours ago||
Then why should we use Americans as the average human?
ch4s3 16 hours ago||||
The stat is skewed wildly by immigration. The literacy level of native born Americans is higher. The population of foreign born adults is nearly 20% of the total adult population, and as you can imagine many are actively learning English.
Herring 15 hours ago|||
It’s not skewed much by immigration. This is because the native-born population is much larger.

See: https://www.migrationpolicy.org/sites/default/files/publicat...

51% of native-born adults scored at Level 3 or higher. This is considered the benchmark for being able to manage complex tasks and fully participate in a knowledge-based society. Only 28% of immigrant adults achieved this level. So yes immigrants are in trouble, but it’s still a huge problem with 49% native-born below Level 3.

refurb 12 hours ago||
In my mind "literate" is not "can handle complex tasks"; it's a basic ability to read and write.

Seems like the standards have changed over time?

robertritz 11 hours ago||
Considering the amount of digitalization in society, more government regulations, etc. I think basic literacy alone does not guarantee you can participate in society effectively.
refurb 10 hours ago||
But in the past, literacy was typically defined as "bare minimum".

It's fine if we want to change it to "sufficiently master language to do a white collar job", but if the standard changes we shouldn't be surprised fewer people meet it.

dankwizard 14 hours ago|||
The immigration is actually working to boost literacy levels. Americans have been falling off for a long time.
mopenstein 17 hours ago|||
What percentage of those people could never read above a certain grade level? Could 100% of humans eventually, with infinite resources and time, all be geniuses? Could they read and comprehend all the works produced by mankind?

I'm curious.

kranke155 16 hours ago||
No but they could probably read better. Just look at the best education systems in the world and propagate that. Generally, all countries should be able to replicate that.
thousand_nights 17 hours ago|||
very cool. now let's see the LLM do the laundry and wash my dishes

yes you're free to give it a physical body in the form of a robot. i don't think that will help.

korijn 17 hours ago|||
The ability to read is all it takes to have AGI?
skybrian 13 hours ago|||
From an economics perspective, a more relevant comparison would be to the workers that a business would normally hire to do a particular job.

For example, for a copy-editing job, they probably wouldn't hire people who can't read all that well, and never mind what the national average is. Other jobs require different skills.

Herring 12 hours ago||
Life is a lot bigger than just economics.

See here for example: https://data.worldhappiness.report/chart

The US economy has never been richer, but overall happiness just keeps dropping. So they vote for populists. Do you think more AI will help?

I think it’s wiser to support improving education.

skybrian 12 hours ago||
I don't know whether it will or not. It seems like people are worrying about the economic impact of AI, though?
dinkumthinkum 16 hours ago||
Yet, those illiterate people can still solve enormous amounts of challenges that LLMs cannot.
mark_l_watson 1 hour ago||
I find myself in 100% +1000 strong agreement with this article, and I wrote something very short on the same topic a few days ago https://marklwatson.substack.com/p/ai-needs-highly-effective...

I love LLMs, especially smaller local models running on Ollama, but I also think the FOMO investing in massive data centers and super scaling is misplaced.

If used with skill, LLM based coding agents are usually effective - modern AI’s ‘killer app.’

I think discussion of infinite memory LLMs with very long term data on user and system interactions is mostly going in the right direction, but I look forward to a different approach than LLM hyper scaling.

machiaweliczny 2 hours ago||
My layman take on it:

1) We need some way of reliable world model building from LLM interface

2) RL/search is real intelligence but needs a viable heuristic (fitness fn) or signal - how to obtain this at scale is the biggest question -> they (rich fools) will try some dystopian shit to achieve it - I hope people will resist (a toy search-vs-fitness sketch follows at the end of this comment)

3) Ways to get this signal: human feedback (viable economic activity), testing against an internal DB (via probabilistic models - I suspect the human brain works this way), simulation -> tough/expensive for real-world tasks but some improvements are there, see robotics improvements

4) Video/Youtube is next big frontier but currently computationally prohibitive

5) Next frontier possibly is this metaverse thing or what Nvidia tries with physics simulations

I also wonder how the human brain is able to learn rigorous logic/proofs. I remember how hard it was to adapt to this kind of thinking, so I don't think it's the default mode. We need a way to simulate this in a computer to have any hope of progressing. And not via a trick like LLM + math solver, but via some fundamental algorithmic advances.
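
The toy sketch referenced in point 2 (everything here is invented for illustration): the search loop itself is trivial; all the leverage, and all the difficulty, is in the fitness function, which is exactly the signal that is hard to obtain at scale for real-world tasks.

  import random

  def fitness(x):
      # Stand-in for the expensive signal (human feedback, tests, simulation, ...).
      # Here it's just "how close is x to an arbitrary target value".
      return -abs(x - 42.0)

  def hill_climb(start, steps=1000, step_size=1.0):
      best, best_score = start, fitness(start)
      for _ in range(steps):
          candidate = best + random.uniform(-step_size, step_size)
          score = fitness(candidate)        # every step spends one query of the signal
          if score > best_score:
              best, best_score = candidate, score
      return best

  # Converges near 42 here; swap in a noisy or misaligned fitness fn and the same
  # loop happily converges to garbage.
  print(hill_climb(0.0))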

tom_m 1 hour ago|
If it were, they would have released it. Another problem is that the definition isn't well established. Guaranteed, someone will just claim something is AGI one day because the definition is vague. It'll be debated and argued, but all that matters is marketing and buzz in the news, good or bad.
More comments...