Posted by mooreds 1 day ago
The models of 2025 were so close, but mostly still not quite human level. Another 5 years at least.
We're not even close right now. Cars can barely drive themselves on a tiny subset of pre-selected, orderly roads in America; what we really have is driver assistance on virtual rails. We do not have cars driving themselves on busy streets in Jakarta, handling unstructured situations, or negotiating in real time with other drivers. There's an illusion that they sort of work because they constitute a tiny fraction of traffic on a tiny section of roads. Make half of all cars in Rome autonomous for a day and you'd have the biggest collection of scrap metal in the world.
And that's only driving.
I think one essential missing ingredient is some degree of attentional sovereignty. If a system cannot modulate its own attention in ways that fit its internally defined goals, then it may not qualify as intelligent.
Being able to balance attention to the self and internal states/desires against attention to external requirements and signals is essential for all cognitive systems, from bacteria to dogs to humans.
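To make that concrete, here's a minimal sketch of attention modulation as a weighting between one internal drive and one external signal. All the names and numbers (hunger, threat, the decay constants) are hypothetical illustrations, not anything established:

    import math
    import random

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    class Agent:
        # Toy agent splitting attention between an internal drive
        # (hunger) and an external signal (threat). Purely
        # illustrative; nothing here claims to be "intelligence".
        def __init__(self):
            self.hunger = 0.5

        def step(self, threat):
            # Attention weights derived from the two urgency scores.
            w_internal, w_external = softmax([self.hunger, threat])
            if w_internal > w_external:
                self.hunger = max(0.0, self.hunger - 0.3)  # attend inward: forage
            # else: attend outward, e.g. evade the threat
            self.hunger += 0.1  # the drive accumulates either way

    agent = Agent()
    for _ in range(20):
        agent.step(threat=random.random())

The point of the sketch is only that the trade-off is computed by the system itself, from its own state, rather than being imposed from outside.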
Right now VCs are optimistically looking for the first solo-founder unicorn powered by AI tools. But a prompt plus the right system that prints money (by doing something useful) would be an entirely different monetary system. Then everyone focuses on it and the hype 10x’s. And through that, AGI emerges on the fringes, because the incentives are there for hundreds of millions of people (right now it’s <1 million).
https://news.ycombinator.com/item?id=43719280
(AGI Is Still 30 Years Away – Ege Erdil and Tamay Besiroglu | 174 points | 378 comments)
We need breakthroughs in understanding the fundamental principles of learning systems. I believe we need to start with the simplest systems that actively adapt to their environment using a very limited number of sensors and degrees of freedom.
Then scale up from there in sophistication, integration, and hierarchy.
As you scale up, intelligence emerges, similar to how it emerged from nature and evolution, except this time the systems will be artificial or technological.
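As a toy version of that starting point, here's a run-and-tumble sketch in the spirit of bacterial chemotaxis: one sensor, one degree of freedom, and a made-up nutrient field. The specifics are my own assumptions, just to show how little machinery "actively adapts to its environment" can require:

    import random

    def concentration(x):
        # Hypothetical nutrient field with a single peak at x = 10.
        return -abs(x - 10.0)

    # One sensor ("did the last reading improve?") and one degree of
    # freedom (step direction): roughly, bacterial run-and-tumble.
    x, direction = 0.0, 1
    last = concentration(x)
    for _ in range(500):
        x += 0.1 * direction
        now = concentration(x)
        if now < last:
            direction = random.choice([-1, 1])  # tumble: new random heading
        last = now
    print(f"settled near x = {x:.1f}")

A single number of memory and a biased random walk is enough to climb the gradient; the research question is what changes, and what stays the same, as you stack such loops into hierarchies.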
Even the most pessimistic timelines have to account for 20-30x more compute, models trained on 10-100x more coding data, and tools significantly more optimized for the task, all within 3 years.
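For scale, a quick back-of-envelope on what "20-30x in 3 years" implies as a doubling time (the framing is mine, not a claim from the comment above):

    import math

    years = 3
    for factor in (20, 30):
        doublings = math.log2(factor)
        months_per_doubling = years * 12 / doublings
        print(f"{factor}x over {years} years ~ doubling every "
              f"{months_per_doubling:.1f} months")

That works out to a doubling every 7-8 months, which is in the same ballpark as estimates of recent trends in frontier training compute, so the multiplier is extrapolation rather than a leap.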
The key is to learn how to use them for your use case and to figure out what specific things they are good for. Staying up to date as they improve is probably the most valuable skill for software engineers right now.
The funny thing is that some people actually think they want that.
This does not make current AI harmless; it is already very dangerous.