
Posted by mooreds 1 day ago

I don't think AGI is right around the corner (www.dwarkesh.com)
334 points | 383 comments | page 3
seydor 18 hours ago|
We should stop building AGI and focus on building reasoning engines instead. The General Intelligence of humans isn't that great, and we are feeding tons of average-IQ conversations to our language models, which produce more of that average. There is more to life than learning, so why don't we explore motivational systems and emotions? It's what humans do.
baobabKoodaa 1 day ago||
Hey, we were featured in this article! How cool is that!

> I’m not going to be like one of those spoiled children on Hackernews who could be handed a golden-egg laying goose and still spend all their time complaining about how loud its quacks are.

tom_m 10 hours ago||
If it were, they would have released it. Another problem is that AGI itself is not well defined. Guaranteed, someone will one day claim something is AGI precisely because the definition is vague. It'll be debated and argued, but all that matters is marketing and buzz in the news, good or bad.
dzonga 14 hours ago||
The worst thing about 'AI' is seeing 'competent' people such as software engineers putting their brains to the side and believing AI is the be-all and end-all.

They do this without understanding how LLMs work at a first-principles level, so they don't know their limitations.

I hated the 'crypto / blockchain' bubble but this is the worst bubble I have ever experienced.

Once you know that current 'AI' is good at text, leave it at that: summarizing, translation, autocomplete, etc. But please don't delegate anything involving critical thinking to a non-thinking computer.

babymetal 23 hours ago||
I've been confused with the AI discourse for a few years, because it seems to make assertions with strong philosophical implications for the relatively recent (Western) philosophical conversation around personal identity and consciousness.

I no longer think that this is really about what we immediately observe as our individual intellectual existence, and I don't want to criticize whatever it is these folks are talking about.

But FWIW, and in that vein, if we're really talking about artificial intelligence, i.e. "creative" and "spontaneous" thought, that we all as introspective thinkers can immediately observe, here are references I take seriously (Bernard Williams and John Searle from the 20th century):

https://archive.org/details/problemsofselfph0000will/page/n7...

https://archive.org/details/intentionalityes0000sear

Descartes, Hume, Kant and Wittgenstein are older sources that are relevant.

[edit] Clarified that Williams and Searle are 20th century.

electrograv 19 hours ago||
Intelligence and consciousness are two different things though, and some would argue they may even be almost completely orthogonal. (A great science fiction book called Blindsight by Peter Watts explores this concept in some detail BTW, it’s a great read.)
tim333 9 hours ago||
I think what some ancient philosopher said becomes less interesting once the things are working. Instead of "what is thought?" we move on to "why didn't the code ChatGPT produced compile?" and "is Claude better?"
js4ever 1 day ago||
I was thinking the same about AI in 2022 ... And I was so wrong!

https://news.ycombinator.com/item?id=33750867

kfarr 20 hours ago|
Hopefully no hobos were injured in the process
jacquesm 1 day ago||
AGI by 'some definition' is a red herring. If enough people believe the AI is right, it will be AGI, because they will use it as such. This will cause endless misery, but it's the same as putting some idiot in charge of our countries, which we do regularly.
andsoitis 17 hours ago||
Even if AGI were right around the corner, is there really anything anyone who does not own or control it should do differently?

It doesn’t appear that way to me, so one might just as well ignore the evangelists and the naysayers, because they just take up valuable brain space and emotional resilience.

Deal with it if and when it gets here.

bilsbie 1 day ago|
I guess using history as a guide, it might be like self-driving. We mostly believed it was right around the corner in 2012. Lots of impressive driving.

In 2025 we're so close, but mostly not quite at human level. Another 5 years at least.

Barrin92 1 day ago|
> In 2025 we're so close

we're not even close right now. Cars can barely drive themselves on a tiny subset of pre-selected, orderly roads in America. We sort of have driver assistance on virtual rails. We do not have cars driving themselves on busy streets in Jakarta, handling unstructured situations, or negotiating in real time with other drivers. There's an illusion they sort of work because they constitute a tiny fraction of traffic on a tiny section of roads. Make half of all cars in Rome autonomous for a day and you'd have the biggest collection of scrap metal in the world.

And that's only driving.
