
Posted by mooreds 1 day ago

I don't think AGI is right around the corner(www.dwarkesh.com)
340 points | 398 comments
bilsbie 1 day ago|
I guess using history as a guide it might be like self driving. We mostly believed it was right around the corner in 2012. Lots of impressive driving.

2025: we're so close, but mostly not quite human level. Another 5 years at least.

Barrin92 1 day ago|
>2025: we're so close

We're not even close right now. Cars can barely drive themselves on a tiny subset of pre-selected, orderly roads in America; what we have is closer to driver assistance on virtual rails. We do not have cars driving themselves through busy streets in Jakarta, handling unstructured situations, or negotiating in real time with other drivers. There's an illusion that they sort of work because they constitute a tiny fraction of traffic on a tiny section of roads. Make half of all cars in Rome autonomous for a day and you'd have the biggest collection of scrap metal in the world.

And that's only driving.

robwwilliams 1 day ago||
What is the missing ingredient? Any commentary that does not define these ingredients is not useful.

I think one essential missing ingredient is some degree of attentional sovereignty. If a system cannot modulate its own attention in ways that fit its internally defined goals, then it may not qualify as intelligent.

Being able to balance attention to self and internal states/desires against attention to external requirements and signals is essential for all cognitive systems: from bacteria, to dogs, to humans.
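
To make that concrete, here's a toy sketch (every name and number below is invented for illustration, not any real system) of an agent that splits attention between an internal drive and an external signal based on relative urgency:

    # Toy sketch: attention split between an internal drive and an
    # external signal, weighted by relative urgency.

    def allocate_attention(internal_need: float, external_salience: float) -> float:
        """Fraction of attention spent on internal goals (0..1)."""
        total = internal_need + external_salience
        return 0.5 if total == 0 else internal_need / total

    class ToyAgent:
        def __init__(self):
            self.energy = 1.0  # internal state: depletes each step

        def step(self, external_salience: float) -> str:
            self.energy = max(0.0, self.energy - 0.2)
            hunger = 1.0 - self.energy  # internal need grows as energy drops
            w = allocate_attention(hunger, external_salience)
            if w > 0.5:
                self.energy = 1.0  # attend inward: "recharge"
                return f"attend internal (w={w:.2f})"
            return f"attend external (w={w:.2f})"

    agent = ToyAgent()
    for salience in [0.9, 0.9, 0.9, 0.1]:
        print(agent.step(salience))

The point of the sketch is only that the weighting is computed by the agent from its own state, not imposed from outside; that's the "sovereignty" part.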

tmsh 1 day ago||
I think the timelines are more like half that. Why? The insane goldrush when people start using autonomous agents that make money.

Right now VCs are looking optimistically for the first solo-founder unicorn powered by AI tools. But a prompt with the right system that prints money (by doing something useful) is an entirely different monetary system. Then everyone focuses on it and the hype 10xes. And through that, AGI emerges on the fringes, because the incentives are there for hundreds of millions of people (right now it's <1 million).

mellosouls 22 hours ago||
Related Dwarkesh discussion from a couple of months ago:

https://news.ycombinator.com/item?id=43719280

(AGI Is Still 30 Years Away – Ege Erdil and Tamay Besiroglu | 174 points | 378 comments)

chrsw 1 day ago||
We don't need AGI, whatever that is.

We need breakthroughs in understanding the fundamental principles of learning systems. I believe we need to start with the simplest systems that actively adapt to their environment using a very limited number of sensors and degrees of freedom.

Then scale up from there in sophistication, integration and hierarchy.

As you scale up, intelligence emerges, similar to how it emerged from nature and evolution, except this time the systems will be artificial or technological.
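
A minimal sketch of what "simplest adaptive system" might mean here (my toy example, loosely inspired by bacterial chemotaxis; everything in it is invented for illustration): one sensor (a gradient reading) and one degree of freedom (direction), yet the agent still adapts to its environment.

    # Toy 1-D run-and-tumble agent: one sensor, one degree of freedom.
    import random

    def gradient(x: float) -> float:
        return -abs(x - 10.0)  # "food" concentration peaks at x = 10

    x, direction = 0.0, 1
    last_reading = gradient(x)
    for _ in range(50):
        x += direction
        reading = gradient(x)
        if reading < last_reading:  # things got worse: tumble randomly
            direction = random.choice([-1, 1])
        last_reading = reading

    print(f"final position: {x:.1f}")  # tends to hover near the peak at 10

Even this degenerate learner exhibits the adapt-to-environment loop; the claim above is that sophistication, integration, and hierarchy get layered on top of loops like this one.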

mgraczyk 1 day ago||
Important for HN users in particular to keep in mind: It is possible (and IMO likely) that the article is mostly true and ALSO that software engineering will be almost completed automated within the next few years.

Even the most pessimistic timelines have to account for 20-30x more compute, models trained on 10-100x more coding data, and tools very significantly more optimized for the task within 3 years.

yladiz 1 day ago|
It's naive to think that software engineering will be automated any time soon, especially within a few years. Even if we accept that the coding part can be, the hard part of software engineering isn't the coding; it's the design, getting good requirements, and general "stakeholder management". LLMs cannot adequately design code with ambiguous requirements in mind, let alone anticipate future ones, so until they get anywhere close to that, we're not going to automate it.
mgraczyk 1 day ago||
I currently use LLMs to design code with ambiguous requirements. It works quite well and scales better than hiring a team. Not perfect but getting better all the time.

The key is to learn how to use them for your use case and to figure out what specific things they are good for. Staying up to date as they improve is probably the most valuable skill for software engineers right now

mediumsmart 17 hours ago||
AGI is not going to happen. Fake it till you make it only goes so far.

The funny thing is that some people actually think they want that.

kissgyorgy 22 hours ago||
AGI is a scam. I'm pretty sure every big name in AI knows it's nowhere near and LLMs won't get us there. It's just marketing helping Sam and the like to get those billions and keep the hype alive.
jmugan 1 day ago||
I agree with the continual-learning deficiency, but some of that learning can be in the form of prompt updates. The saxophone example would not work for that, but the "do my taxes" example might: you tell it one year that it also needs to look at your W-2 and file for any state listed, and it adds that to the checklist.
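
A toy sketch of what that could look like (file name, function names, and the lesson strings are all hypothetical, just to show the shape of the idea): corrections get appended to a persistent checklist that is prepended to the prompt on the next run, so the "learning" lives in text rather than in model weights.

    # Toy sketch: continual learning via prompt updates, not weight updates.
    import json
    import pathlib

    CHECKLIST = pathlib.Path("tax_agent_checklist.json")

    def load_checklist() -> list[str]:
        return json.loads(CHECKLIST.read_text()) if CHECKLIST.exists() else []

    def add_lesson(lesson: str) -> None:
        """Persist a user correction so future runs see it."""
        lessons = load_checklist()
        if lesson not in lessons:
            lessons.append(lesson)
            CHECKLIST.write_text(json.dumps(lessons, indent=2))

    def build_prompt(task: str) -> str:
        rules = "\n".join(f"- {l}" for l in load_checklist())
        return f"{task}\n\nLessons from previous years:\n{rules}"

    # Year one: the user corrects the agent; year two it "remembers".
    add_lesson("Also read the W-2, not just the 1099s.")
    add_lesson("File a return for every state listed on the W-2.")
    print(build_prompt("Prepare my taxes."))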
Mikhail_Edoshin 1 day ago|
It is not. There is a certain mechanism in our brain that works in the same way. We can see it functioning in dreams, or when general human intelligence malfunctions and we have a case of schizophasia. But human intelligence is more than that. We are not machines. We are souls.

This does not make current AI harmless; it is already very dangerous.

Mikhail_Edoshin 1 day ago|
One thing that is obvious when you deal with current AI is that it is very adept with words but lacks understanding. This is because understanding is different from forming word sequences. Understanding is based on shared sameness. We, people, are the same. We are parts of a whole. This is why we are able to understand each other through such a faulty medium as words. AI lacks that sameness. There is nobody there. Only a word fountain.