Posted by pegasus 10/26/2025

A definition of AGI (arxiv.org)
305 points | 514 comments
AlgernonDmello 10/29/2025||
Opinions without 'pi' are just onions! It's not wrong to have an opinion, but it should be based on some grounded fact or truth.

Before defining AGI, I guess we need to define intelligence and align on whose definition of intelligence can be considered the ground truth. The theory of multiple intelligences (Howard Gardner) seems the most accepted today. Is there anything better?

Going with the assumption that there is no other, this definition of AGI only considers the following intelligences:

- Verbal-Linguistic
- Logical-Mathematical
- Visual-Spatial
- Musical
- Naturalist

It doesn't mention, or barely mentions:

- Bodily-Kinesthetic
- Interpersonal
- Intrapersonal
- Existential

So this definition feels incomplete.

ants_everywhere 10/26/2025||
> To operationalize this, we ground our methodology in Cattell-Horn-Carroll theory, the most empirically validated model of human cognition

Cattell-Horn-Carroll theory, like a lot of psychometric research, is based on collecting a lot of data and running factor analysis (or similar) to look for axes that seem orthogonal.
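
A minimal sketch of that workflow, using made-up test scores and scikit-learn's FactorAnalysis (everything here is synthetic, just to show the mechanics):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    # Fake test battery: 500 subjects x 6 tests, generated by two
    # hidden "abilities" plus noise.
    abilities = rng.normal(size=(500, 2))                   # latent: verbal, spatial
    loadings = np.array([[0.9, 0.8, 0.7, 0.1, 0.0, 0.2],   # verbal -> tests 1-3
                         [0.1, 0.0, 0.2, 0.9, 0.8, 0.7]])  # spatial -> tests 4-6
    scores = abilities @ loadings + 0.3 * rng.normal(size=(500, 6))

    # Factor analysis recovers the latent axes from the correlations alone.
    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(scores)
    print(fa.components_.round(2))  # rows approximate the two planted abilities

The axes it finds are only as general as the battery you administered, which is exactly the worry: feed it human tests and you get human axes.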

It's not clear that the axes are necessary or sufficient to define intelligence, especially if the goal is to define intelligence that applies to non-humans.

For example, reading and writing ability and visual processing imply that the organism has light sensors, which it may not. Do all intelligent beings have vision? I don't see an obvious reason why they would.

Whatever definition you use for AGI probably shouldn't depend heavily on having analyzed human-specific data for the same reason that your definition of what counts as music shouldn't depend entirely on inferences from a single genre.

SirMaster 10/27/2025||
I don't think it's really AGI until you can simply task it with creating a new, better version of itself and it can succeed in doing that all on its own.

A team of humans can and will make a GPT-6. Can a team of GPT-5 agents make GPT-6 all on their own if you give them the resources necessary to do so?

vages 10/27/2025|
This is called Recursive AI, and is briefly mentioned in the paper.
Animats 10/27/2025||
Paper: https://arxiv.org/pdf/2510.18212

That 10-axis radial graph is very interesting. Do others besides this author agree with that representation?

The weak points are speed and long-term memory. Those are usually fixable in computing systems. Weak long-term memory indicates that, somehow, a database needs to be bolted on. I've seen at least one system, for driving NPCs, where, after something interesting has happened, the system is asked to summarize what it learned from that session. That summary is stored somewhere outside the LLM and fed back in as a prompt when needed.
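
Roughly, the pattern looks like this (a sketch: llm() is a stand-in for whatever model you call, and the "database" is just a list here):

    # Sketch: summarize-and-store memory bolted onto a stateless LLM.
    memories: list[str] = []

    def llm(prompt: str) -> str:
        # Stand-in for a real model call.
        return "summary of: " + prompt[:40]

    def end_of_session(transcript: str) -> None:
        # Ask the model to compress the session into a durable note.
        note = llm("Summarize what you learned this session:\n" + transcript)
        memories.append(note)  # in practice: a real database with retrieval keys

    def answer(question: str) -> str:
        # Crude keyword retrieval; feed matching notes back in as context.
        words = question.lower().split()
        relevant = [m for m in memories if any(w in m.lower() for w in words)]
        return llm("Known facts:\n" + "\n".join(relevant) + "\n\nQuestion: " + question)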

None of this addresses unstructured physical manipulation, which is still a huge hangup for robotics.

flimflamm 10/27/2025|
I would focus on the lowest of the axes. It doesn't help if some of the axes are at 100% when one of them is lacking. See the toy illustration below.
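
That is, score the profile by its weakest axis, not its average (axis names and numbers invented):

    # Mean hides the weak axis; min exposes the bottleneck.
    profile = {"reasoning": 0.95, "knowledge": 0.90, "speed": 0.40,
               "long-term memory": 0.15, "visual": 0.80}

    print(sum(profile.values()) / len(profile))  # 0.64 - looks respectable
    print(min(profile.values()))                 # 0.15 - the actual constraint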
Animats 10/28/2025||
My point is that the axes chosen are important, and if this is a good rating system, we ought to see those radial charts for the different models and systems available.
SirMaster 10/27/2025||
Isn't part of the cognitive versatility of a human how fast and well they can learn a new subject without having to ingest so much training content on it?

Like, in order for an LLM to come close to human proficiency on a topic, the LLM seems to have to ingest a LOT more content than a human does.

UltraSane 10/26/2025||
I would define AGI as any artificial system that could learn any skill a human can by using the same inputs.
Geee 10/26/2025||
How about AFI - artificial fast idiot. Dumber than a baby, but faster than an adult. Or AHI - artificial human imitator.

This is a bad definition, because a human baby is already AGI when it's born and its brain is empty. AGI is the blank slate plus the ability to learn anything.

jagrsw 10/26/2025|
That "blank slate" idea doesn't really apply to humans, either.

We are born with inherited "data" - innate behaviors, basic pattern recognition, etc. Some even claim that we're born with a basic physics toolkit (things are generally solid, they move). We then build on that by being imitators, amassing new skills and methods simply by observation and performing search.

Geee 10/26/2025||
Sure, there's lots of inbuilt stuff like basic needs and emotions. But still, a baby doesn't know anything about the world. It's the ability to collect data and train on it that makes it AGI.
jagrsw 10/27/2025||
> baby doesn't know anything about the world

That's wrong. It knows how to process and signal low carbohydrate levels in the blood, and it knows how to react to a perceived threat (the Moro reflex).

It knows how to follow solid objects with its eyes (when its visual system adapts) - it knows that certain visual stimuli correspond to physical systems.

Could it be that your concept of "know" is just the common-sense one, i.e. "produces output in English/German/etc."?

Geee 10/28/2025||
No, I totally agree that there's all kinds of innate knowledge, but it's very similar for humans and animals. I don't think this knowledge is intelligence. My point was that a baby is already an AGI, and it shouldn't require a lifetime of learning to become one. Also, if intelligence is just problem solving (like an IQ test) then it should be independent of knowledge.
A4ET8a8uTh0_v2 10/26/2025||
I was going to make a mildly snide remark about how, once it can consistently make better decisions than the average person, it automatically qualifies, but the paper itself is surprisingly thoughtful in describing both where we are and where it would need to be.
daxfohl 10/27/2025|
It's easy: we have reached AGI when there are zero jobs left. Or at least no non-manual-labor jobs. If there is a single non-physical job left, then that means that person must be doing something that AI can't, so by definition it's not AGI.

I think it'll be a steep sigmoid function. For a long time it'll be a productivity booster, but it won't have enough "common sense" to replace people. We'll all laugh about how silly it was to worry about AI taking our jobs. Then some AI model will finally get over that last hump, maybe 10 or 20 years from now (or 1,000, or 2), and it will be only a couple of months before everything collapses.

__MatrixMan__ 10/27/2025|
I dislike your definition. There are many problems besides economic ones. If you define "general" to mean "things the economy cares about", then what do you call the sorts of intelligences that are capable of things the economically relevant ones are not?

A specific key opens a subset of locks; a general key would open all locks. General intelligence, then, can solve all solvable problems. It's rather arrogant to suppose that we humans have it ourselves, or that we can create something that does.

daxfohl 10/27/2025||
It also partitions jobs into physical and intellectual aspects alone. Lots of jobs have huge emotional/relational/empathetic components too. A teacher could get by being purely intellectual, but the really great ones motivate, inspire, and care in ways an AI never could. Even if an AI says the exact same things, it doesn't have the same effect, because everyone knows it's just an algorithm.
ZoomZoomZoom 10/27/2025||
And most people get by on those jobs by faking the emotional component, at least some of the time. AGI presumably can fake perfectly and never burn out.
habinero 10/27/2025||
> And most people get by on those jobs by faking the emotional component

If you think this is true, I would say you should leave artificial life alone until you can understand human beings better.

ZoomZoomZoom 10/27/2025||
Have a long talk with any working teacher or therapist. If you think the regular workload is adequate for them to offer enough genuine emotional support to all the people they work with, always, every day, regardless of their personal circumstances, you're mistaken. Or the person you're talking with is incredibly lucky.
daxfohl 10/27/2025||
It doesn't have to be much, or intentional, or even good for that matter. My kids practice piano because they don't want to let their teacher down. (Well, one does. The other is made to practice because WE don't want to let the teacher down).

If the teacher was a robot, I don't think the piano would get practiced.

IDK how AI gains that ability. The requirement is basically "being human". And it seems like there's always going to be a need for humans in that space, no matter how smart AI gets.
