Posted by pegasus 10/26/2025

A definition of AGI (arxiv.org)
305 points | 514 comments
Rover222 10/27/2025
I always find it interesting how the majority of comments on threads like this on HN are dismissive of current AI systems as "gimmicks", yet some of the most successful people on the planet think it's worth plowing a trillion dollars into them.

I don't know who's right, but the dichotomy is interesting.

TehCorwiz 10/27/2025
Success is just a measure of how much you can separate other people from their money. It’s possible to be successful and produce nothing of value.
Rover222 10/27/2025
You don't suppose it's at times also a measure of knowing how to skate to where the puck is heading?
hardenedsecure 10/27/2025
A forecast by one of the authors of the paper: 50% chance that AGI is reached according to the definition by end of 2028, 80% by end of 2030. https://ai-frontiers.org/articles/agis-last-bottlenecks
habinero 10/27/2025
People say things like this all the time. It's as reliable as the latest prediction for the rapture and about as scientific.
jimbohn 10/27/2025
I think "our" mistake is that we wanted to make a modern human first, while being unable to make an animal or even a caveman, and we lost something in the leapfrog. But we effectively have a database of knowledge that has become interactive thanks to reinforcement learning, which is really useful!
tim333 10/26/2025
Maybe we need a new term. I mean, AGI just means artificial general intelligence, as opposed to specialised AI like chess computers, and never came with a particular capability level attached. Most people think of it as human-level intelligence, so perhaps we should just call it that?
joomla199 10/27/2025
All models are wrong, but some are useful. However, when it comes to cognition and intelligence we seem to be in the “wrong and useless” era, or maybe even “wrong and harmful” (history suggests this is a necessary milestone… anyone remember “humorism”?)
l5870uoo9y 10/26/2025
Long-term memory storage capacity[1] scores 0 for both GPT-4 and GPT-5. Are there any workable ideas or concepts for solving this?

[1]: The capability to continually learn new information (associative, meaningful, and verbatim). (from the publication)
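One workaround that keeps coming up (it sidesteps rather than solves the problem) is an external retrieval memory bolted onto a frozen model: embed past interactions, store them outside the weights, and re-inject the best matches into the prompt. A minimal sketch in Python, with all names illustrative and a toy cosine-similarity lookup standing in for a real vector index and learned embeddings:

    import numpy as np

    class ExternalMemory:
        """Stores (embedding, text) pairs; retrieves nearest neighbors."""
        def __init__(self, dim):
            self.keys = np.empty((0, dim))   # one row per stored memory
            self.values = []                 # the text each row points to

        def write(self, embedding, text):
            # "Learning" here is just appending; the model never changes.
            self.keys = np.vstack([self.keys, embedding])
            self.values.append(text)

        def read(self, query, k=3):
            if not self.values:
                return []
            # Cosine similarity of the query against every stored key.
            sims = self.keys @ query / (
                np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-9
            )
            return [self.values[i] for i in np.argsort(sims)[::-1][:k]]

Note that everything retrieved still has to be funneled back through the context window, so by the paper's rubric this is retrieval, not continual learning: the weights never update.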

sureglymop 10/26/2025
I think that's a good effort! I remember mentioning the need for this here a few months ago: https://news.ycombinator.com/item?id=44468198
jncfhnb 10/27/2025
Completely wrong direction. AGI will not emerge from getting smarter. It will emerge from being a stateful system in a real environment.

You need context from internal system state that isn’t faked with a giant context window.
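A toy caricature of the distinction, purely illustrative and nobody's actual architecture: the agent below acts from a persistent internal state that each observation mutates, rather than from re-reading a transcript every step.

    class StatefulAgent:
        def __init__(self):
            self.state = {}  # persists across steps, never re-derived

        def step(self, observation):
            # Fold the observation into internal state. A real system
            # would use a learned update rule, not a dict merge.
            self.state.update(observation)
            return self.act()

        def act(self):
            # Decisions read the current state only, never a replayed
            # history, so cost doesn't grow with the length of the past.
            return "explore" if len(self.state) < 10 else "exploit"

The context-window approach, by contrast, recomputes everything from the accumulated transcript on every step, which is exactly the faking being objected to.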

stephc_int13 10/27/2025
You need some expertise in a field to see past the amazing imitation capabilities of LLMs and get a realistic idea of how mediocre they are. The more you work with them, the less you trust them. This is not _it_.
adamzwasserman 10/27/2025
I wish them luck. Any consensus at all, on any definition at all, would be a boon to mankind. Unfortunately I am certain that all we have to look forward to is endless goalpost shifting.
giancarlostoro 10/27/2025
Maybe AGI should have levels or phases to progress through toward 100%, or some maximum level?