
Posted by pegasus 3 days ago

A definition of AGI (arxiv.org)
304 points | 506 comments
daxfohl 3 days ago|
It's easy: we have reached AGI when there are zero jobs left. Or at least no non-manual-labor jobs. If there is a single non-physical job left, that means that person must be doing something AI can't, so by definition it's not AGI.

I think it'll be a steep sigmoid function. For a long time it'll be a productivity booster, but without enough "common sense" to replace people. We'll all laugh about how silly it was to worry about AI taking our jobs. Then some AI model will finally get over that last hump, maybe 10 or 20 years from now (or 1000, or 2), and it will be only a couple of months before everything collapses.
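
(To make the "steep sigmoid" concrete: a logistic curve with a large steepness parameter stays near zero for years and then flips to one within months. A minimal sketch in Python; the midpoint and steepness values below are purely illustrative assumptions, not a prediction:)

    import math

    def automated_share(t_years, midpoint=15.0, steepness=8.0):
        """Illustrative logistic curve: fraction of jobs automated at year t."""
        return 1.0 / (1.0 + math.exp(-steepness * (t_years - midpoint)))

    # Nearly flat for over a decade, then flips within a few months of the midpoint.
    for t in (5, 10, 14.5, 14.9, 15.1, 15.5, 20):
        print(f"year {t:5}: {automated_share(t):.3f}")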

__MatrixMan__ 3 days ago|
I dislike your definition. There are many problems besides economic ones. If you define "general" to mean "things the economy cares about", then what do you call the sorts of intelligences that are capable of things the economically relevant ones are not?

A specific key opens a subset of locks; a general key would open all locks. General intelligence, then, can solve all solvable problems. It's rather arrogant to suppose that we humans have it ourselves, or that we can create something that does.

daxfohl 3 days ago||
It also partitions jobs into physical and intellectual aspects alone. Lots of jobs have huge emotional/relational/empathetic components too. A teacher could get by being purely intellectual, but the really great ones have motivational/inspirational/caring qualities that an AI never could replicate. Even if an AI says the exact same things, it doesn't have the same effect, because everyone knows it's just an algorithm.
ZoomZoomZoom 3 days ago||
And most people get by on those jobs by faking the emotional component, at least some of the time. AGI presumably can fake perfectly and never burn out.
habinero 3 days ago||
> And most people get by on those jobs by faking the emotional component

If you think this is true, I would say you should leave artificial life alone until you can understand human beings better.

ZoomZoomZoom 3 days ago||
Have a long talk with any working teacher or therapist. If you think the regular workload leaves them enough to offer genuine emotional support to all the people they work with, always, every day, regardless of their personal circumstances, you're mistaken. Or the person you're talking with is incredibly lucky.
daxfohl 3 days ago||
It doesn't have to be much, or intentional, or even good for that matter. My kids practice piano because they don't want to let their teacher down. (Well, one does. The other is made to practice because WE don't want to let the teacher down).

If the teacher was a robot, I don't think the piano would get practiced.

IDK how AI gains that ability. The requirement is basically "being human". And it seems like there's always going to be a need for humans in that space, no matter how smart AI gets.

Geee 3 days ago||
How about AFI - artificial fast idiot. Dumber than a baby, but faster than an adult. Or AHI - artificial human imitator.

This is a bad definition, because a human baby is already AGI when it's born and its brain is empty. AGI is the blank slate and the ability to learn anything.

jagrsw 3 days ago|
That "blank slate" idea doesn't really apply to humans, either.

We are born with inherited "data": innate behaviors, basic pattern recognition, etc. Some even claim that we're born with a basic physics toolkit (things are generally solid, they move). We then build on that by being imitators, amassing new skills and methods simply by observation and by performing search.

Geee 3 days ago||
Sure, there's lots of inbuilt stuff like basic needs and emotions. But still, a baby doesn't know anything about the world. It's the ability to collect data and train on it that makes it AGI.
jagrsw 3 days ago||
> baby doesn't know anything about the world

That's wrong. It knows how to process and signal low carbohydrate levels in the blood, and it knows how to react to a perceived threat (the Moro reflex).

It knows how to follow solid objects with its eyes (when its visual system adapts) - it knows that certain visual stimuli correspond to physical systems.

Could it be that your concept of "know" is the common-sense one of "produces output in English/German/etc."?

Geee 2 days ago||
No, I totally agree that there's all kinds of innate knowledge, but it's very similar for humans and animals. I don't think this knowledge is intelligence. My point was that a baby is already an AGI, and it shouldn't require a lifetime of learning to become one. Also, if intelligence is just problem solving (like an IQ test) then it should be independent of knowledge.
A4ET8a8uTh0_v2 3 days ago||
I was going to make a mildly snide remark about how once it can consistently make better decisions than the average person, it automatically qualifies, but the paper itself is surprisingly thoughtful in describing both where we are and where it would need to be.
_heimdall 3 days ago||
I'm also frustrated by the lack of clear definitions related to AI.

Do you know what's more frustrating, though? Focusing so heavily on definitions that we miss the practicality of it (and I'm guilty of this at times too).

We can debate definitions of AGI, but given that we don't know what a new model or system is capable of until it's built and tested in the real world, we have more serious questions, in my opinion.

Debates over AI risk, safety, and alignment are still pretty uncommon, and it seems most are happy enough to accept Jevons Paradox. Are we really going to unleash whatever we do build just to find out after the fact whether or not it's AGI?

jedberg 3 days ago||
To define AGI, we'd first have to define GI. Humans are very different. As park rangers like to say, there is an overlap between the smartest bears and the dumbest humans, which is why sometimes people can't open bear-proof trash cans.

It's a similar debate with self driving cars. They already drive better than most people in most situations (some humans crash and can't drive in the snow either for example).

Ultimately, defining AGI seems like a fool's errand. At some point the AI will be good enough to do the tasks that some humans do (it already is!). That's all that really matters here.

lukan 3 days ago|
" At some point the AI will be good enough to do the tasks that some humans do (it already is!). That's all that really matters here."

What matters to me is whether the "AGI" can reliably solve the tasks I give it, and that also requires reliable learning.

LLMs are far from that. It takes specialized human general intelligence to train them to make progress.

jedberg 3 days ago||
> What matters to me is whether the "AGI" can reliably solve the tasks I give it, and that also requires reliable learning.

How many humans do you know that can do that?

Jensson 3 days ago||
Most humans can reliably do the job they are hired to do.
jedberg 3 days ago||
Usually they require training and experience to do so. You can't just drop a fresh college grad into a job and expect them to do it.
dns_snek 3 days ago|||
They may require training, but that training is going to look vastly different. We can start chatting about AGI when AI can be trained with as few examples and as little information as humans are, when it can replace human workers 1:1 (in everything we do), and when it can self-improve over time just like humans can.
lukan 3 days ago|||
But given enough time, they will figure it out on their own. LLMs cannot ever do that.

Once they can ... I am open to revisit my assumptions about AGI.

Rover222 3 days ago||
I always find it interesting how the majority of comments on threads like this on HN are dismissive of current AI systems as "gimmicks", yet some of the most successful people on the planet think it's worth plowing a trillion dollars into them.

I don't know who's right, but the dichotomy is interesting.

TehCorwiz 3 days ago|
Success is just a measure of how much you can separate other people from their money. It’s possible to be successful and produce nothing of value.
Rover222 3 days ago||
You don't suppose it is at times also a measure of knowing how to skate where the puck is heading?
almosthere 3 days ago||
This is kind of annoying.

The "computer" on star trek TNG was basically agentic LLMs (it knows what you mean when you ask it things, and it could solve things and modify programs by telling it what changes to make)

Data on ST:TNG was more like AGI. It had dreams, argued for itself as a sentient being, created art, controlled its own destiny through decision making.

aprilfoo 3 days ago||
Filling out forms is a terribly artificial activity in essence. Forms are also very culturally biased, but that fits well with the material the NNs have been trained on.

So, surely those IQ-related tests might be acceptable rating tools for machines and they might get higher scores than anyone at some point.

Anyway, is the objective of this kind of research to actually measure the progress of buzzwords, or amplify them?

oidar 3 days ago||
This is fine for a definition of AGI, but it's incomplete. It misses so many parts of the cognition that make humans flexible and successful: for example, emotions, feelings, varied pattern recognition, proprioception, embodied awareness, social skills, and navigating ambiguous situations without algorithms. If the described 10 spectrums of intelligence were maxed by an LLM, it would still fall short.
pixl97 3 days ago|
Eh, I don't like the idea of 'intelligence' of any type using humans as the baseline. It blinds us to our own limitations and to things that may not be limits for other types of intelligence. The "AI won't kill us all because it doesn't have emotions" problem is one of these. For example, just because AI doesn't get angry doesn't mean it can't recognize your anger and manipulate you if given such a directive.
oidar 3 days ago||
I agree, my point is that the cognition that creates emotion (and others) is of a different quality than the 10 listed in the paper.
bananaflag 3 days ago|
I can define AGI in a line:

an entity which is better than any human at any task.

Fight me!

Balgair 3 days ago||
Mine has always been:

I have 2 files. One is a .pdf; the other is a .doc. One file has a list of prices and colors in 2 columns. The other file has a list of colors and media in 2 columns. There are incomplete lists here and many-to-one matching.

To me, if I can verbally tell the AI to give me a list of prices and media from those two files, in a .csv file, and it'll ask back some simple questions and issues that it needs cleaned up to accomplish this, then that is AGI to me.

It is an incredibly simple thing for just about any middle school graduate.

And yet! I have worked with PhDs that cannot do this. No joke!

Something this simple, just running the numbers, dumb accounting, is mostly beyond us.
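
(For scale, the join itself is mechanical once the two files have been pulled apart into color/price and color/media pairs. A minimal sketch in Python; the placeholder data below stands in for the parsed .pdf and .doc contents, which aren't specified here:)

    import csv

    # Hypothetical placeholder data standing in for the parsed files.
    price_by_color = {"red": 9.99, "blue": 4.50, "green": 2.25}           # color -> price
    media_by_color = [("red", "vinyl"), ("red", "cd"), ("blue", "tape")]  # many-to-one on color

    # Join on color and write price/media pairs; flag anything incomplete.
    with open("prices_and_media.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["price", "media"])
        for color, media in media_by_color:
            if color in price_by_color:
                writer.writerow([price_by_color[color], media])
            else:
                print(f"question for the user: no price listed for {color!r}")

The hard part of the test is everything before this step: reading the .pdf and the .doc, noticing the incomplete rows, and knowing which clarifying questions to ask.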

bedane 3 days ago||
A significant % of what I do day-to-day is dedicated to the task of finding sexual partners. How does this translate?

If it doesn't, how do you define "any task"?

bananaflag 2 days ago||
I imagine an AGI (properly disguised as a humanoid) would be a drop-in replacement able to seduce humans.

Same for any task you imagine.
