Like, for an LLM to come close to human proficiency on a topic, it seems to have to ingest a LOT more content than a human does.
AGI was already here the day ChatGPT was released. That's Peter Norvig's take too: https://www.noemamag.com/artificial-general-intelligence-is-...
To some people this is self-evident, so the terms are equivalent, but it does require some extra assumptions: that the AI would spend its time developing AI, that human intelligence isn't already the maximum reachable limit, and that the AGI really is an AGI, capable of novel research rather than just parroting its training set.
I think those assumptions are pretty easy to grant, but to some people they're obviously true and to others they're obviously false. So depending on your views on those, AGI and ASI will or will not mean the same thing.
As a prerequisite for recursive self-improvement, and falling far short of ASI, any conception of AGI really, really needs to be expanded to include some kind of self-model. This is conspicuously missing from TFA. Related basic questions: What's in the training set? What's the confidence on any given answer? How much of the network is actually required to answer a given question?
Partly this stuff is just hard, and mechanistic interpretability as a field is still trying to get traction in many ways; but also, the whole thing is kind of fundamentally not aligned with corporate / commercial interests. Still, anything you might want to call intelligent has a working self-model with some access to information about its internal status. Things mentioned in TFA (like working memory) might be involved and necessary, but they don't really seem sufficient.
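To make the "confidence on any given answer" question concrete, here is one crude, well-known proxy, sketched in Python with Hugging Face transformers (the model name is just an example and the whole thing is my illustration, not anything from TFA): the average per-token log-probability the model assigns to its own answer.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"  # example model; swap in whatever you actually run
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    def mean_logprob(prompt: str, answer: str) -> float:
        """Average log-probability the model assigns to `answer` given `prompt`."""
        prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
        full_ids = tok(prompt + answer, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        # Row i of the logits predicts token i+1, so shift by one position.
        logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
        answer_ids = full_ids[0, prompt_len:]
        picked = logprobs[prompt_len - 1:].gather(1, answer_ids.unsqueeze(1))
        return picked.mean().item()

It's a very weak stand-in for an actual self-model: it says nothing about what's in the training set or which parts of the network did the work, which is exactly the gap.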
By this logic, a vast parallel search running on Commodore 64s that produces an answer after BusyBeaver(100) years would be almost AGI, which doesn't pass the sniff test.
A more meaningful metric would be multiplicative in nature.
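A toy illustration of the difference, in Python (the axis names and numbers are made up for the example, not taken from TFA): with a geometric mean, one crippling weakness like the Commodore 64 cluster's speed drags the whole score toward zero, while a plain average still rates it "almost AGI".

    def additive_score(scores):
        """Simple average across capability axes."""
        return sum(scores.values()) / len(scores)

    def multiplicative_score(scores):
        """Geometric mean: any near-zero axis sinks the whole score."""
        product = 1.0
        for value in scores.values():
            product *= value
        return product ** (1.0 / len(scores))

    c64_search = {"reasoning": 0.9, "knowledge": 0.9, "speed": 1e-9}

    print(additive_score(c64_search))        # 0.6 -- looks respectable
    print(multiplicative_score(c64_search))  # ~0.001 -- the speed failure dominates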
That 10-axis radial graph is very interesting. Do others besides this author agree with that representation?
The weak points are speed and long-term memory. Those are usually fixable in computing systems. Weak long-term memory indicates that, somehow, a database needs to be bolted on. I've seen at least one system, for driving NPCs, where, after something interesting has happened, the system is asked to summarize what it learned from that session. That summary is stored somewhere outside the LLM and fed back in as a prompt when needed.
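The pattern is roughly this, as a Python sketch (the file layout and the llm() call are placeholders, not any particular product's API):

    import json
    from pathlib import Path

    MEMORY_FILE = Path("npc_memory.json")  # the "database" bolted on outside the LLM

    def llm(prompt: str) -> str:
        """Placeholder for whatever completion API you actually use."""
        raise NotImplementedError

    def load_memories() -> list[str]:
        return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

    def remember_session(transcript: str) -> None:
        # After something interesting happens, ask the model to summarize it.
        summary = llm("Summarize what the character learned this session:\n" + transcript)
        MEMORY_FILE.write_text(json.dumps(load_memories() + [summary], indent=2))

    def respond(player_input: str) -> str:
        # Feed the stored summaries back in as part of the prompt.
        context = "\n".join(load_memories())
        return llm("Facts from earlier sessions:\n" + context +
                   "\n\nPlayer: " + player_input + "\nNPC:")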
None of this addresses unstructured physical manipulation, which is still a huge hangup for robotics.
Cattell-Horn-Carroll theory, like a lot of psychometric research, is based on collecting a lot of data and running factor analysis (or similar) to look for axes that seem orthogonal.
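For anyone unfamiliar with the method, the mechanics look roughly like this (a minimal sketch assuming scikit-learn and a made-up score matrix; real psychometric work uses actual subtest scores and cares a lot more about rotation and model fit):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    scores = rng.normal(size=(500, 12))  # stand-in: 500 test-takers, 12 subtests

    fa = FactorAnalysis(n_components=3)  # ask for a few latent axes
    fa.fit(scores)

    # Loadings: how strongly each subtest rides on each latent factor.
    # In CHC-style work, clusters of high loadings get interpreted as broad
    # abilities (fluid reasoning, processing speed, and so on).
    print(fa.components_.shape)  # (3, 12)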
It's not clear that the axes are necessary or sufficient to define intelligence, especially if the goal is to define intelligence that applies to non-humans.
For example, reading and writing ability and visual processing imply that the organism has light sensors, which it may not have. Do all intelligent beings have vision? I don't see an obvious reason why they would.
Whatever definition you use for AGI probably shouldn't depend heavily on having analyzed human-specific data for the same reason that your definition of what counts as music shouldn't depend entirely on inferences from a single genre.
I think it'll be a steep sigmoid function. For a long time it'll be a productivity booster, but without enough "common sense" to replace people. We'll all laugh about how silly it was to worry about AI taking our jobs. Then some AI model will finally get over that last hump, maybe 10 or 20 years from now (or 1000, or 2), and it will be only a couple of months before everything collapses.
A specific key opens a subset of locks; a general key would open all locks. General intelligence, then, can solve all solvable problems. It's rather arrogant to suppose that we humans have it ourselves, or that we can create something that does.
If you think this is true, I would say you should leave artificial life alone until you can understand human beings better.
If the teacher was a robot, I don't think the piano would get practiced.
IDK how AI gains that ability. The requirement is basically "being human". And it seems like there's always going to be a need for humans in that space, no matter how smart AI gets.