Posted by pegasus 10/26/2025

A definition of AGI (arxiv.org)
305 points | 514 comments
_heimdall 10/26/2025|
I'm also frustrated by the lack of clear definitions related to AI.

Do you know what's more frustrating, though? Focusing so heavily on definitions that we miss the practicality of it (and I'm guilty of this at times too).

We can debate definitions of AGI, but given that we don't know what a new model or system is capable of until it's built and tested in the real world, we have more serious questions, in my opinion.

Debates over AI risk, safety, and alignment are still pretty uncommon, and it seems most are happy enough to accept Jevons Paradox. Are we really going to unleash whatever we do build just to find out after the fact whether or not it's AGI?

keepamovin 10/27/2025||
I think if you can put an AI in a humanoid robot (control for appearance), and it can convince me that it's a human after I interact with it for a couple of months (control for edge cases), I'd consider it AGI. Sure, it might be "smarter than" a human, but for the purpose of my assessing whether it's AGI, interacting with something "way smarter" would be distracting and hamper the assessment, so it has to "play human" for the purpose of the task. If it can do that, AGI, I'd say. That would be pretty cool. Surely, this is coming, soon.
incomingpain 10/27/2025||
A "general intelligence" is equivalent to a golden retriever or dolphin. A human general intelligence is a $3/hr minimum wage worker from some undeveloped country.

https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93C...

If a person has all those criteria, they are superintelligent. They are beyond genius.

The AGI definition problem is that everyone keeps conflating AGI with ASI, Artificial Super Intelligence.

paulcx 10/28/2025||
What if the Wright brothers had to pass a “bird exam”? That’s how we’re defining AGI today. Stop grading feathers; start designing thrust. Check out my new post: "If This Is How We Define AGI, I'm Out" - https://open.substack.com/pub/paulochen/p/if-this-is-how-we-...
jedberg 10/26/2025||
To define AGI, we'd first have to define GI. Humans are very different. As park rangers like to say, there is an overlap between the smartest bears and the dumbest humans, which is why sometimes people can't open bear-proof trash cans.

It's a similar debate with self driving cars. They already drive better than most people in most situations (some humans crash and can't drive in the snow either for example).

Ultimately, defining AGI seems like a fool's errand. At some point the AI will be good enough to do the tasks that some humans do (it already is!). That's all that really matters here.

lukan 10/26/2025|
" At some point the AI will be good enough to do the tasks that some humans do (it already is!). That's all that really matters here."

What matters to me is whether the "AGI" can reliably solve the tasks that I give it, and that also requires reliable learning.

LLMs are far from that. It takes actual human general intelligence to train them to make progress.

jedberg 10/26/2025||
> What matters to me is whether the "AGI" can reliably solve the tasks that I give it, and that also requires reliable learning.

How many humans do you know that can do that?

Jensson 10/26/2025||
Most humans can reliably do the job they are hired to do.
jedberg 10/26/2025||
Usually they require training and experience to do so. You can't just drop a fresh college grad into a job and expect them to do it.
lukan 10/27/2025|||
But given enough time, they will figure it out on their own. LLMs cannot ever do that.

Once they can ... I am open to revisit my assumptions about AGI.

dns_snek 10/27/2025|||
They may require training but that training is going to look vastly different. We can start chatting about AGI when AI can be trained with as few examples and as little information as humans are, when they can replace human workers 1:1 (in everything we do) and when they can self-improve over time just like humans can.
CaptainOfCoit 10/26/2025||
> defining AGI as matching the cognitive versatility and proficiency of a well-educated adult

Seems most of the people one would encounter out in the world might not possess AGI. How are we supposed to train our electrified rocks to have AGI if this is the case?

If no one has yet created an online quiz called "Are you smarter than AGI?" based on the proposed "ten core cognitive domains", I'd be disappointed.

oidar 10/26/2025||
This is fine for a definition of AGI, but it's incomplete. It misses so many parts of the cognition that make humans flexible and successful: for example, emotions, feelings, varied pattern recognition, proprioception, embodied awareness, social skills, and navigating ambiguous situations without algorithms. If the described 10 spectrums of intelligence were maxed out by an LLM, it would still fall short.
pixl97 10/26/2025|
Eh, I don't like the idea of 'intelligence' of any type using humans as the baseline. It blinds us to our own limitations, and to things that may not be limits for other types of intelligence. The "AI won't kill us all because it doesn't have emotions" problem is one of these. For example, just because AI doesn't get angry doesn't mean it can't recognize your anger and manipulate you if given such a directive.
oidar 10/26/2025||
I agree, my point is that the cognition that creates emotion (and others) is of a different quality than the 10 listed in the paper.
bananaflag 10/26/2025||
I can define AGI in a line:

an entity which is better than any human at any task.

Fight me!

Balgair 10/26/2025||
Mine has always been:

I have 2 files. One is a .pdf; the other is a .doc. One file has a list of prices and colors in 2 columns. The other file has a list of colors and media in 2 columns. The lists are incomplete and there is many-to-one matching.

To me, if I can verbally tell the AI to give me a list of prices and media from those two files, in a .csv file, and it'll ask back some simple questions and issues that it needs cleaned up to accomplish this, then that is AGI to me.

It is an incredibly simple thing for just about any middle school graduate.

And yet! I have worked with PhDs that cannot do this. No joke!

Something this simple, just dead running numbers, dumb accounting, is mostly beyond us.
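Setting aside the PDF/DOC parsing, the underlying task is just an outer join on the shared color column. A minimal sketch in plain Python, with hypothetical data standing in for whatever would be extracted from the two files:

```python
import csv
import io

# Hypothetical contents extracted from the two files:
# file 1 maps colors to prices, file 2 maps colors to media.
prices_by_color = {"red": 9.99, "blue": 4.50, "green": 2.25}
media_by_color = {"red": "canvas", "blue": "paper"}  # "green" missing: incomplete list

# Outer join on color, leaving gaps blank where a list is incomplete.
rows = []
for color in sorted(set(prices_by_color) | set(media_by_color)):
    rows.append({
        "price": prices_by_color.get(color, ""),
        "media": media_by_color.get(color, ""),
    })

# Emit the requested .csv of prices and media.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["price", "media"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The interesting part of the test isn't this join, of course; it's that the AI would have to notice the incomplete rows and the many-to-one matches on its own and ask clarifying questions before producing the file.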

bedane 10/27/2025||
A significant % of what I do day-to-day is dedicated to the task of finding sexual partners. How does this translate?

If it doesn't, how do you define "any task"?

bananaflag 10/28/2025||
I imagine an AGI (properly disguised as a humanoid) would be a drop-in replacement able to seduce humans.

Same for any task you imagine.

aprilfoo 10/27/2025||
Filling out forms is, in essence, a terribly artificial activity. Forms are also very culturally biased, but that fits well with the material the NNs have been trained on.

So, surely those IQ-related tests might be acceptable rating tools for machines and they might get higher scores than anyone at some point.

Anyway, is the objective of this kind of research to actually measure the progress of buzzwords, or amplify them?

almosthere 10/27/2025|
This is kind of annoying.

The "computer" on star trek TNG was basically agentic LLMs (it knows what you mean when you ask it things, and it could solve things and modify programs by telling it what changes to make)

Data on ST:TNG was more like AGI. It had dreams, argued for itself as a sentient being, created art, controlled its own destiny through decision making.
