Posted by mooreds 7/6/2025

I don't think AGI is right around the corner (www.dwarkesh.com)
374 points | 442 comments | page 4
robwwilliams 7/6/2025|
What is the missing ingredient? Any commentary that does not define these ingredients is not useful.

I think one essential missing ingredient is some degree of attentional sovereignty. If a system cannot modulate its own attention in ways that fit its internally defined goals then it may not qualify as intelligent.

Being able to balance attention to self and internal states/desires against attention to external requirements and signals is essential for all cognitive systems: from bacteria, to dogs, to humans.

chrsw 7/6/2025||
We don't need AGI, whatever that is.

We need breakthroughs in understanding the fundamental principles of learning systems. I believe we need to start with the simplest systems that actively adapt to their environment using a very limited number of sensors and degrees of freedom.

Then scale up from there in sophistication, integration and hierarchy.

As you scale up, intelligence emerges similarly to how it emerged from nature and evolution, except this time the systems will be artificial or technological.
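
A minimal sketch of the kind of "simplest adaptive system" described above: one sensor (a scalar reward signal) and one degree of freedom (a binary action choice), adapting via an epsilon-greedy running value estimate. The agent, environment, and payoff probabilities here are hypothetical illustrations, not anything from the comment:

    import random

    class TinyAdaptiveAgent:
        def __init__(self, n_actions=2, epsilon=0.1):
            self.values = [0.0] * n_actions  # running value estimate per action
            self.counts = [0] * n_actions
            self.epsilon = epsilon           # exploration rate

        def act(self):
            # Mostly exploit the best-known action, occasionally explore.
            if random.random() < self.epsilon:
                return random.randrange(len(self.values))
            return max(range(len(self.values)), key=lambda a: self.values[a])

        def learn(self, action, reward):
            # Incremental update of the running mean for the chosen action.
            self.counts[action] += 1
            self.values[action] += (reward - self.values[action]) / self.counts[action]

    # Hypothetical environment: action 1 pays off more often than action 0.
    agent = TinyAdaptiveAgent()
    for _ in range(1000):
        a = agent.act()
        r = 1.0 if random.random() < (0.3, 0.7)[a] else 0.0
        agent.learn(a, r)
    print(agent.values)  # estimates drift toward roughly [0.3, 0.7]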

SirMaster 7/7/2025||
I think most people think AGI is achieved when a machine can do at least everything that humans can do.

Like not necessarily physical things, but mental or digital things.

Humans will create a better LLM (say, GPT-5) than all the LLMs that currently exist.

If you tasked any current LLM with creating a GPT-5 LLM that is better than itself, can it do it? If not then it's probably not AGI and has some shortcomings making it not general or intelligent enough.

mgraczyk 7/7/2025||
Important for HN users in particular to keep in mind: it is possible (and IMO likely) that the article is mostly true and ALSO that software engineering will be almost completely automated within the next few years.

Even the most pessimistic timelines have to account for 20-30x more compute, models trained on 10-100x more coding data, and tools very significantly more optimized for the task within 3 years.
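
As a back-of-envelope check, 20-30x over three years corresponds to roughly 2.7-3.1x annual growth in training compute. The growth rates below are assumptions for illustration, not sourced figures:

    # Compound an assumed annual growth rate in training compute over 3 years.
    for annual_growth in (2.7, 3.1):
        print(f"{annual_growth}x/year for 3 years -> {annual_growth ** 3:.0f}x total")
    # 2.7x/year for 3 years -> 20x total
    # 3.1x/year for 3 years -> 30x total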

yladiz 7/7/2025|
It's naive to think that software engineering will be automated any time soon, especially within a few years. Even if we accept that the coding part can be, the hard part of software engineering isn't the coding; it's the design, getting good requirements, and general "stakeholder management". LLMs are not able to adequately design code with potentially ambiguous requirements in mind, let alone anticipate future ones, so until they get anywhere close to being able to do that, we're not going to automate it.
mgraczyk 7/7/2025||
I currently use LLMs to design code with ambiguous requirements. It works quite well and scales better than hiring a team. Not perfect but getting better all the time.

The key is to learn how to use them for your use case and to figure out what specific things they are good for. Staying up to date as they improve is probably the most valuable skill for software engineers right now
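
One concrete shape that can take is a two-pass workflow: have the model enumerate the ambiguities first, answer them yourself, then ask for a design against those answers. A sketch assuming the OpenAI Python SDK; the model name and prompt wording are illustrative, not the commenter's actual setup:

    from openai import OpenAI

    client = OpenAI()
    requirement = "Users should be able to export their data."  # deliberately ambiguous

    # Pass 1: surface ambiguities before any design work.
    questions = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
            f"List the ambiguities in this requirement, as questions, "
            f"before proposing any design:\n{requirement}"}],
    ).choices[0].message.content

    # (In practice you would answer the questions here and feed the answers
    # back in; for the sketch we pass the questions through unanswered.)

    # Pass 2: design against the requirement plus the open questions.
    design = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
            f"Requirement: {requirement}\nOpen questions:\n{questions}\n"
            "Propose a module-level design, flagging which choices a future "
            "requirement could plausibly overturn."}],
    ).choices[0].message.content
    print(design)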

m3kw9 7/7/2025||
When someone talks about AGI and a public discussion about it follows, it's very analogous to a cat talking to a duck. Everyone responds with a different fantasy version of AGI in their mind.

Just look at the discussion here: you would think the other person's AGI is the same as yours, but it most likely isn't, and it's comical when you look at it from this bird's-eye view.

colesantiago 7/6/2025||
Dwarkesh's opinion on AGI doesn't actually matter; he is now an investor in many AI companies.

He doesn't care if he is right or wrong.

gjm11 7/7/2025|
I think your reasoning could stand to be made more explicit.
jmugan 7/6/2025||
I agree with the continual-learning deficiency, but some of that learning can take the form of prompt updates. The saxophone example would not work for that, but the "do my taxes" example might: you tell it one year that it also needs to look at your W-2 and file for any state listed, and it adds that to the checklist.
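
Mechanically, that kind of prompt-level "learning" can be as simple as a persistent checklist prepended to every run. A minimal sketch; the file name and checklist wording are hypothetical:

    import pathlib

    CHECKLIST = pathlib.Path("tax_checklist.txt")

    def add_lesson(lesson: str) -> None:
        # Append one lesson per line, e.g. a correction from last year's run.
        with CHECKLIST.open("a") as f:
            f.write(lesson + "\n")

    def build_prompt(task: str) -> str:
        # Prepend everything learned so far to the new task.
        lessons = CHECKLIST.read_text() if CHECKLIST.exists() else ""
        return f"{task}\n\nAlways follow this checklist:\n{lessons}"

    add_lesson("Also look at the W-2.")
    add_lesson("File a return for every state listed on it.")
    print(build_prompt("Do my taxes for this year."))
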
Mikhail_Edoshin 7/6/2025||
It is not. There is a certain mechanism in our brain that works in the same way. We can see it functioning in dreams, or when general human intelligence malfunctions and we have a case of schizophasia. But human intelligence is more than that. We are not machines. We are souls.

This does not make current AI harmless; it is already very dangerous.

Mikhail_Edoshin 7/7/2025|
One thing that is obvious when you deal with current AI is that it is very adept with words but lacks understanding. This is because understanding is different from forming word sequences. Understanding is based on shared sameness. We, people, are the same. We are parts of a whole. This is why we are able to understand each other with such a faulty medium as words. AI lacks that sameness. There is nobody there. Only a word fountain.
andsoitis 7/7/2025||
Even if AGI were right around the corner, is there really anything anyone who does not own or control it should do differently?

It doesn't appear that way to me, so one might just as well ignore the evangelists and the naysayers alike, because the debate just takes up valuable brain space and drains emotional resilience.

Deal with it if and when it gets here.

j45 7/6/2025|
Even if something like AGI existed soon, or already does privately, it would likely demand a very high level of horsepower and cost, limiting its general and broad availability and leaving it in the hands of the few rather than the many; and optimizing that may take its sweet time.