Do you know what's more frustrating, though? Focusing so heavily on definitions that we miss the practicality of it (and I'm guilty of this at times too).
We can debate definitions of AGI, but given that we don't know what a new model or system is capable of until it's built and tested in the real world, we have more serious questions, in my opinion.
Debates over AI risk, safety, and alignment are still pretty uncommon, and it seems most people are happy enough to accept Jevons Paradox. Are we really going to unleash whatever we build just to find out after the fact whether or not it's AGI?
https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93C...
If a person meets all those criteria, they are superintelligent. They are beyond genius.
The AGI definition problem is that everyone keeps conflating AGI with ASI, Artificial Super Intelligence.
It's a similar debate with self-driving cars. They already drive better than most people in most situations (some humans crash, and some can't drive in the snow either, for example).
Ultimately, defining AGI seems like a fool's errand. At some point the AI will be good enough to do the tasks that some humans do (it already is!). That's all that really matters here.
What matters to me is whether the "AGI" can reliably solve the tasks I give it, and that also requires reliable learning.
LLMs are far from that. It takes special human AGI to train them to make progress.
How many humans do you know who can do that?
Once they can, I am open to revisiting my assumptions about AGI.
It seems most of the people one encounters out in the world might not possess AGI. How are we supposed to train our electrified rocks to have it if that's the case?
If no one has created an online quiz called "Are you smarter than AGI?" yet, based on the proposed "ten core cognitive domains", I'd be disappointed.
an entity which is better than any human at any task.
Fight me!
I have 2 files. One is a .pdf. The other is a .doc. One file has a list of prices and colors in 2 columns. The other file has a list of colors and media in 2 columns. The lists are incomplete, and there is many-to-one matching.
If I can verbally tell the AI to give me a list of prices and media from those two files as a .csv, and it asks back simple questions about the issues it needs cleaned up to accomplish this, then that is AGI to me.
It is an incredibly simple thing for just about any middle school graduate.
And yet! I have worked with PhDs that cannot do this. No joke!
Something this simple, just dead-simple number running, dumb accounting, is mostly beyond us.
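For what it's worth, the deterministic half of that task is tiny once the two lists are out of the files; the part that needs intelligence is the extraction and the clarifying questions. Here's a minimal sketch of just the join step in Python/pandas, assuming (hypothetically) the .pdf and .doc have already been parsed into tables; all file and column names here are made up:

    # Sketch of the deterministic core only; parsing the .pdf/.doc
    # into these tables is the actual hard part, hand-waved here.
    import pandas as pd

    # Hypothetical stand-ins for the two parsed files.
    prices = pd.DataFrame({
        "color": ["red", "blue", "blue", "green"],
        "price": [9.99, 4.50, 5.25, 7.00],
    })
    media = pd.DataFrame({
        "color": ["red", "blue", "gray"],
        "media": ["canvas", "paper", "vinyl"],
    })

    # Outer join on color: keeps the incomplete rows and expands the
    # many-to-one matches instead of silently dropping them.
    merged = prices.merge(media, on="color", how="outer")
    merged[["price", "media"]].to_csv("prices_media.csv", index=False)

The outer join is the interesting choice: it surfaces the incomplete rows and the many-to-one expansions as NaNs and duplicates, which is exactly the stuff a competent assistant would need to ask questions about rather than silently drop.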
If it doesn't, how do you define "any task"?
Same for any task you imagine.
So those IQ-related tests might be acceptable rating tools for machines, and machines might get higher scores than anyone at some point.
Anyway, is the objective of this kind of research to actually measure the progress of buzzwords, or to amplify them?
The "computer" on Star Trek: TNG was basically agentic LLMs: it knew what you meant when you asked it things, and it could solve problems and modify programs when you told it what changes to make.
Data on ST:TNG was more like AGI. It had dreams, argued for its own status as a sentient being, created art, and controlled its own destiny through decision-making.