I think it'll be a steep sigmoid function. For a long time it'll be a productivity booster, but without enough "common sense" to replace people. We'll all laugh about how silly it was to worry about AI taking our jobs. Then some AI model will finally get over that last hump, maybe 10 or 20 years from now (or 1000, or 2), and it will be only a couple of months before everything collapses.
A specific key opens a subset of locks; a general key would open all locks. General intelligence, then, can solve all solvable problems. It's rather arrogant to suppose that humans possess it ourselves, or that we can create something that does.
If you think this is true, I would say you should leave artificial life alone until you can understand human beings better.
If the teacher was a robot, I don't think the piano would get practiced.
IDK how AI gains that ability. The requirement is basically "being human". And it seems like there's always going to be a need for humans in that space, no matter how smart AI gets.
This is a bad definition, because a human baby is already AGI when it's born and its brain is empty. AGI is the blank slate plus the ability to learn anything.
We are born with inherited "data" - innate behaviors, basic pattern recognition, etc. Some even claim that we're born with a basic physics toolkit (things are generally solid, they move). We then build on that by being imitators, amassing new skills and methods simply by observation and by performing search.
That's wrong. It knows how to process and signal low carbohydrate levels in the blood, and it knows how to react to a perceived threat (the Moro reflex).
It knows how to follow solid objects with its eyes (when its visual system adapts) - it knows that certain visual stimuli correspond to physical systems.
Could it be that your concept of "know" is defined as common sense "produces output in English/German/etc"?
Do you know what's more frustrating, though? Focusing so heavily on definitions that we miss the practicality of it (and I'm guilty of this at times too).
We can debate definitions of AGI, but given that we don't know what a new model or system is capable of until it's built and tested in the real world, we have more serious questions, in my opinion.
Debates over AI risk, safety, and alignment are still pretty uncommon, and it seems most are happy enough to accept Jevons Paradox. Are we really going to unleash whatever we do build just to find out after the fact whether or not it's AGI?
It's a similar debate with self-driving cars. They already drive better than most people in most situations (some humans crash, and some can't drive in the snow either, for example).
Ultimately, defining AGI seems like a fool's errand. At some point the AI will be good enough to do the tasks that some humans do (it already is!). That's all that really matters here.
What matters to me is whether the "AGI" can reliably solve the tasks that I give it, and that also requires reliable learning.
LLMs are far from that. It takes special human AGI to train them to make progress.
How many humans do you know that can do that?
Once they can ... I am open to revisiting my assumptions about AGI.
I don't know who's right, but the dichotomy is interesting.
The "computer" on star trek TNG was basically agentic LLMs (it knows what you mean when you ask it things, and it could solve things and modify programs by telling it what changes to make)
Data on ST:TNG was more like AGI. It had dreams, argued for itself as a sentient being, created art, controlled its own destiny through decision making.
So, sure, those IQ-related tests might be acceptable rating tools for machines, and they might get higher scores than anyone at some point.
Anyway, is the objective of this kind of research to actually measure the progress of buzzwords, or amplify them?
an entity which is better than any human at any task.
Fight me!
I have 2 files. One is a .pdf. The other is a .doc. One file has a list of prices and colors in 2 columns. The other file has a list of colors and media in 2 columns. The lists are incomplete and there is many-to-one matching.
To me, if I can verbally tell the AI to give me a list of prices and media from those two files, in a .csv file, and it'll ask back some simple questions about issues it needs cleaned up to accomplish this, then that is AGI to me.
It is an incredibly simple thing for just about any middle school graduate.
And yet! I have worked with PhDs that cannot do this. No joke!
Something this simple, just running the numbers, dumb accounting, is mostly beyond us.
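For what it's worth, the mechanical part of that task is only a few lines once the data is out of the documents. Here's a rough Python sketch, with hypothetical file names, column names, and sample rows, assuming the .pdf/.doc extraction has already happened:

    # Join price<->color and color<->media lists into a price<->media CSV.
    # Everything below (names, sample rows) is hypothetical; the hard part,
    # extracting the two lists from a .pdf and a .doc, is assumed done.
    import pandas as pd

    # price <-> color pairs (say, from the PDF); possibly incomplete
    prices = pd.DataFrame({
        "color": ["red", "blue", "green", "blue"],
        "price": [9.99, 12.50, 7.25, 11.00],
    })

    # color <-> media pairs (say, from the .doc); many-to-one and also incomplete
    media = pd.DataFrame({
        "color": ["red", "red", "blue"],
        "media": ["print", "web", "print"],
    })

    # An outer merge keeps unmatched rows visible, which is exactly where the
    # "ask back some simple questions" step would have to kick in.
    merged = prices.merge(media, on="color", how="outer")
    merged[["price", "media"]].to_csv("prices_media.csv", index=False)

Which is the point: the join itself is trivial, but reliably pulling the two lists out of a .pdf and a .doc and knowing which questions to ask about the gaps is the part that still needs a human in the loop.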
If it doesn't, how do you define "any task"?
Same for any task you imagine.