
Posted by mooreds 7/6/2025

I don't think AGI is right around the corner(www.dwarkesh.com)
374 points | 442 comments
im3w1l 7/7/2025|
I think current LLMs are smart enough to trigger the intelligence explosion. And that we are in the early stages of that.
aeve890 7/7/2025|
An explosion by definition happens in a very short period of time. How long is this explosion we're supposedly already in? It reads like a Tom Clancy book.
im3w1l 7/7/2025||
Timeframes are hard to predict. I just notice these signs: that it can suggest and reason about strategies to improve itself, and break those strategies down into smaller goals; that it can integrate with tools to (to some extent) work on those goals.
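For what it's worth, here is a minimal sketch of the loop described above: the model proposes subgoals, then works each one through a tool. Everything here (llm, TOOLS, the canned replies) is a hypothetical stand-in, not any vendor's API:

    # Minimal sketch: propose subgoals, then work each one with a tool.
    def llm(prompt: str) -> str:
        # Canned replies so the sketch runs end to end without an API key.
        if "subgoals" in prompt:
            return "profile current performance\ndraft an improvement plan"
        return "search"

    TOOLS = {
        "search": lambda q: f"(search results for: {q})",
        "run_code": lambda src: "(stdout of executed code)",
    }

    def agent(goal: str, max_steps: int = 10) -> None:
        subgoals = llm(f"Break this goal into subgoals, one per line: {goal}").splitlines()
        for sub in subgoals[:max_steps]:
            choice = llm(f"Pick one tool from {sorted(TOOLS)} for: {sub}").strip()
            tool = TOOLS.get(choice, TOOLS["search"])
            print(sub, "->", tool(sub))

    agent("improve your own benchmark score")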
m3kw9 7/7/2025||
Nobody has agreed on any definition of AGI, there are plenty of “makes sense” definitions though.
schnitzelstoat 7/7/2025||
Honestly, I think LLMs are a distraction from AGI. It seems to me that the path to AGI will likely be some sort of Reinforcement Learning approach.

I'm not sure how similar it will need to be to a biological brain. For example, will we need memristors to create electronic neurons? Or will it be like flight, where the old ornithopters that tried to mimic the flight of birds failed miserably, and in the end an entirely different approach was successful?

tim333 7/7/2025|
LLMs seem quite similar to the part of the human brain where you speak quickly without thinking. They don't do the thinking and learning bit that brains do well though. Something needs to be changed or added on I guess.
seydor 7/6/2025||
AGI should be able to answer this question
PikachuEXE 7/7/2025||
What AI Can Never Be | John Vervaeke

https://youtu.be/HAJclcj25uM

PikachuEXE 7/7/2025||
Can artificial intelligence truly become wise? In this landmark lecture, John Vervaeke explores the future of AI through a lens few dare to examine: the limits of intelligence itself. He unpacks the critical differences between intelligence, rationality, reasonableness, and wisdom—terms often used interchangeably in discussions around AGI. Drawing from decades of research in cognitive science and philosophy, John argues that while large language models like ChatGPT demonstrate forms of generalized intelligence, they fundamentally lack core elements of human cognition: embodiment, caring, and participatory knowing.

By distinguishing between propositional, procedural, perspectival, and participatory knowing, he reveals why the current paradigm of AI is not equipped to generate consciousness, agency, or true understanding. This lecture also serves as a moral call to action: if we want wise machines, we must first become wiser ourselves.

00:00 Introduction: AI, AGI, and the Nature of Intelligence
02:00 What is General Intelligence?
04:30 LLMs and the Illusion of Generalization
07:00 The Meta-Problems of Intelligence: Anticipation & Relevance Realization
09:00 Relevance Realization: The Hidden Engine of Intelligence
11:30 How We Filter Reality Through Relevance
14:00 The Limits of LLMs: Predicting Text vs. Anticipating Reality
17:00 Four Kinds of Knowing: Propositional, Procedural, Perspectival, Participatory
23:00 Embodiment, Consciousness, and Narrative Identity
27:00 The Role of Attention, Care, and Autopoiesis
31:00 Culture as Niche Construction
34:00 Why AI Can’t Participate in Meaning
37:00 The Missing Dimensions in LLMs
40:00 Rationality vs. Reasonableness
43:00 Self-Deception, Bias, and the Need for Self-Correction
46:00 Caring About How You Care: The Core of Rationality
48:00 Wisdom: Aligning Multiple Selves and Temporal Scales
53:00 The Social Obligation to Cultivate Wisdom
55:00 Alter: Cultivating Wisdom in an AI Future

adwn 7/7/2025||
Please don't just link to another video or article without a comment or at least a short description. Why should I watch this hour-long video?
jppope 7/7/2025||
I've said this before but I'll say it again: AGI is right around the corner, because we don't have a technical definition of what it is... the next bigwig CEO trying to raise money or make an earnings call could take their system and call it AGI, and then we will have arrived. AGI and AI are just marketing terms; we should not be surprised when they start selling it to us.
tclancy 7/6/2025||
Here I was worried.
pablocacaster 7/7/2025||
LLMs shit the bed in the real world. I have never seen them work as 'AGI'. Sorry, it's just transformers plus the extra sauce of the APIs; so much pollution for a thing that fails between 50% and 90% of the time.
kachapopopow 7/6/2025||
Honestly, o3 pro with an actual 1M context window (every model right now drops off at around 128K) that's as fast and cheap as 4o is already good enough for me.
tedsanders 7/6/2025||
o3 pro doesn't have a 1M context window, unfortunately. GPT-4.1 and Gemini 2.5 do, though.
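As a rough illustration of why the window size matters, you can count a prompt's tokens before sending it. A sketch using OpenAI's tiktoken tokenizer (o200k_base is the encoding used by the 4o family; the limits dict just reuses the figures mentioned in this thread):

    # Sketch: check a prompt against a context-window limit before sending.
    # pip install tiktoken
    import tiktoken

    LIMITS = {"128k-model": 128_000, "1m-model": 1_000_000}  # figures from this thread

    def fits(prompt: str, model: str) -> bool:
        enc = tiktoken.get_encoding("o200k_base")  # tokenizer for the 4o family
        n_tokens = len(enc.encode(prompt))
        print(f"{n_tokens:,} tokens vs. a {LIMITS[model]:,}-token limit")
        return n_tokens <= LIMITS[model]

    fits("some very long prompt...", "128k-model")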
kachapopopow 7/6/2025||
That's why I said "if". And that's a lot more plausible than an AGI.
namenotrequired 7/7/2025||
You didn’t say “if”
kachapopopow 7/7/2025||
It's kind of implied.
v5v3 7/6/2025||
That's nice to know.

What's that got to do with this post, though?

kachapopopow 7/6/2025||
I don't feel like the AGI people are talking about is necessary; something like that would at minimum require as much compute as the neurons and synapses of a teenager (minus the requirements to maintain a body); see the back-of-envelope below.

I think that with some tweaks (very difficult to achieve, but possible in the foreseeable future), what we have right now can already deliver 95% of what an "AGI" could do: put most of the population out of jobs, work together and improve itself (to a limited degree), and cause general chaos.
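A rough back-of-envelope for that compute comparison; the synapse count and signaling rate below are ballpark figures, and treating one synaptic event as one floating-point operation is a large simplifying assumption:

    # Back-of-envelope: brain "compute" if one synaptic event = one FLOP.
    synapses = 1e14   # ballpark synapse count for a human brain
    rate_hz = 10      # rough average signaling rate per synapse
    flops = synapses * rate_hz
    print(f"~{flops:.0e} FLOP/s")  # ~1e15 FLOP/s, on the order of a petaflop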

v5v3 7/6/2025||
It would put people out of their 'current jobs' which many of them hate and only do to pay the bills.

A lot of people would be far happier and would find something better to do with their day if universal income came along.

Take developers as an example: many don't enjoy the corporate CRUD apps they build.

deadbabe 7/6/2025|
I’ve noticed it’s becoming a lot more popular lately for people to come out and say AGI is still very, very far away. Is the hype cycle ending somewhat? Have we passed peak LLM?

Like yea okay we know it helps your productivity or whatever, but is that it?

gjm11 7/7/2025||
Patel isn't saying that AGI is still very, very far away. He says his best estimate is 2032 and "ASI" in 2028 is "a totally plausible outcome".

(He thinks it might be quite a long way away: "the 2030s or even the 2040s", and it seems to me that the "2040s" scenarios are ones in which substantially longer than that is also plausible.)

andy99 7/6/2025||
Maybe. Anecdotally, HN at least is not very receptive to the idea that transformers are not sentient (and with more data never will be), and almost every post I see about this is followed by the idiotic "how do we know human intelligence isn't the same thing?", as if there were some body of commentators whose personal experience with consciousness leads them to believe it might be achievable with matrix math.

Anyway, I don't think we're over the peak yet; the tech-adjacent pseudo-intellectuals who feed these bubbles (VCs etc.) still very much think that math that generates a plausible transcript is alive.

oasisaimlessly 7/6/2025||
> experience with consciousness somehow leads them to believe it might be achievable with matrix math

That's trivially true if you subscribe to materialism; QM is "just matrix math".

aeve890 7/7/2025|||
>QM is "just matrix math".

Err no. You can solve QM without using matrices. Matrix math is just a tool.
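For what it's worth, the two formalisms are interchangeable: the same physics can be posed as a matrix eigenproblem (Heisenberg-style) or as a differential equation (Schrödinger-style). A minimal two-level example:

    % Matrix picture: diagonalize a 2x2 Hamiltonian
    H = \begin{pmatrix} E_0 & \Delta \\ \Delta & E_0 \end{pmatrix},
    \qquad H\psi = E\psi \;\Rightarrow\; E_\pm = E_0 \pm \Delta

    % Wave picture: the same kind of problem as a differential equation
    i\hbar\,\partial_t\,\psi(x,t) = -\frac{\hbar^2}{2m}\,\partial_x^2\,\psi(x,t) + V(x)\,\psi(x,t)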

JohnKemeny 7/6/2025|||
You're not making the point you think you're making.