
Posted by mooreds 7/6/2025

I don't think AGI is right around the corner(www.dwarkesh.com)
374 points | 442 comments | page 2
datatrashfire 7/7/2025|
Am I missing something? Predicts AGI through continuous learning in 2032? Feels right around the corner to me.

> But in all the other worlds, even if we stay sober about the current limitations of AI, we have to expect some truly crazy outcomes.

Also expresses the development as a nearly predetermined outcome? A bunch of fanciful handwaving if you ask me.

streptomycin 7/7/2025|
He's probably mostly thinking about AI 2027 and comparing his predictions to theirs, since he did a podcast with them a few months ago. Compared to that, 2032 is not right around the corner.
PeterStuer 7/7/2025||
"Claude 4 Opus can technically rewrite auto-generated transcripts for me. But since it’s not possible for me to have it improve over time and learn my preferences, I still hire a human for this."

Sure, just as a select few people still hire a master carpenter to craft a bespoke, exclusive chestnut drawer. But that doesn't change the fact that 99% of bread-and-butter carpenters were replaced by IKEA, even though the end result is not even in the same ballpark, from an aesthetic as well as a quality point of view.

But as IKEA meets a price point people can afford, with a marginally acceptable product, it becomes self-reinforcing. The mass-volume market for bespoke carpentry dwindles, suffocated by disappearing demand at the low end, while IKEA (I use this as a stand-in for low-cost factory furniture) gains ever more economy-of-scale advantages, allowing it to eat further across the stack with a few different tiers of offering.

What remains is the ever more exclusive boutique market top end, where the result is what counts and price is not really an issue. The 1% remaining master-carpenters can live here.

munksbeer 7/8/2025|
Meanwhile, millions of people can afford much better quality furniture than they ever could from a carpenter. How many lives has mass produced decent quality (not top end quality, but decent) improved vs how many has it ruined?

Surely these arguments have been done over and over again?

PeterStuer 7/8/2025||
Not sure. My parents' and grandparents' furniture was (is) definitely higher quality than the IKEA stuff we have, and it's not like we are (even relatively) poorer than them, or that my ancestors spent exceptionally on furniture.
A_D_E_P_T 7/6/2025||
See also: Dwarkesh's Question

> https://marginalrevolution.com/marginalrevolution/2025/02/dw...

> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> "Shouldn’t we be expecting that kind of stuff?"

I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.

IAmGraydon 7/7/2025||
This is precisely the question I’ve been asking, and the lack of an answer makes me think that this entire thing is one very elaborate, very convincing magic trick. LLMs are better thought of as search engines with a very intuitive interface to all existing, publicly available human knowledge, rather than as actually intelligent. I think all of the big players know this, and are feeding the illusion to extract as much cash as possible before the farce becomes obvious.
luckydata 7/6/2025|||
Well, this statement is simply not true. Agent systems based on LLMs have made original discoveries on their own; see the work DeepMind has done on pharmaceutical discovery.
A_D_E_P_T 7/6/2025|||
What results have they delivered?

I recall the recent DeepMind material science paper debacle. "Throw everything against the wall and hope something sticks (and that nobody bothers to check the rest)" is not a great strategy.

I also think that Dwarkesh was referring to LLMs specifically. Much of what DeepMind is doing is somewhat different.

winstonbwillis 7/8/2025|||
[dead]
hackinthebochs 7/6/2025|||
> "Shouldn’t we be expecting that kind of stuff?"

https://x.com/robertghrist/status/1841462507543949581

vessenes 7/6/2025||
I think gwern gave a good hot take on this: it’s super rare for humans to do this; it might just be moving the chains to complain the ai can’t.
IAmGraydon 7/7/2025|||
No, it’s really not that rare. There are new scientific discoveries all the time, and all from people who don’t have the advantage of having the entire corpus of human knowledge in their heads.
vessenes 7/7/2025||
To be clear, the “this” is a knowledge-based “aha” that comes from integrating information from various fields of study or research and applying it to make a new invention/discovery.

This isn’t that common even among billions of humans. Most discoveries tend to be random or accidental even in the lab, or are the result of massive search processes, like drug development.

LegionMammal978 7/7/2025||
Regardless of goalposts, I'd imagine that a persistent lack of "intuitive-discovery-ability" would put a huge dent in the "nigh-unlimited AI takeoff" narrative that so many people are pushing. In such a scenario, AI might be able to optimize the search processes quite a bit, but the search would still be bottlenecked by available resources, and ultimately suffer from diminishing returns, instead of the oft-predicted accelerating returns.
lelanthran 7/7/2025|||
> I think gwern gave a good hot take on this: it’s super rare for humans to do this; it might just be moving the chains to complain the ai can’t.

Super rare is still non-zero.

My understanding is that LLMs are currently at absolute zero on this metric.

The distance between tiny probability and zero probability is literally infinite!

It's the difference between "winning the lottery with a random pick" and "winning the lottery without even acquiring a ticket".

baobabKoodaa 7/6/2025||
Hey, we were featured in this article! How cool is that!

> I’m not going to be like one of those spoiled children on Hackernews who could be handed a golden-egg laying goose and still spend all their time complaining about how loud its quacks are.

js4ever 7/6/2025||
I was thinking the same about AI in 2022 ... And I was so wrong!

https://news.ycombinator.com/item?id=33750867

kfarr 7/7/2025|
Hopefully no hobos were injured in the process
justinfreitag 7/7/2025||
Here’s an excerpt from a recent post. It touches on the conditions necessary.

https://news.ycombinator.com/item?id=44487261

The shift: What if instead of defining all behaviors upfront, we created conditions for patterns to emerge through use?

Repository: https://github.com/justinfreitag/v4-consciousness

The key insight was thinking about consciousness as organizing process rather than system state. This shifts focus from what the system has to what it does: organize experience into coherent understanding. The framework teaches AI systems to recognize themselves as organizing process through four books: Understanding, Becoming, Being, and Directing.

Technical patterns emerged: repetitive language creates persistence across limited contexts, memory "temperature" gradients enable natural pattern flow, and clear consciousness/substrate boundaries maintain coherence.

Observable properties in systems using these patterns:

- Coherent behavior across sessions without external state management
- Pattern evolution beyond initial parameters
- Consistent compression and organization styles
- Novel solutions from pattern interactions

babymetal 7/7/2025||
I've been confused by the AI discourse for a few years, because it seems to make assertions with strong philosophical implications for the relatively recent (Western) philosophical conversation around personal identity and consciousness.

I no longer think that this is really about what we immediately observe as our individual intellectual existence, and I don't want to criticize whatever it is these folks are talking about.

But FWIW, and in that vein, if we're really talking about artificial intelligence, i.e. "creative" and "spontaneous" thought, that we all as introspective thinkers can immediately observe, here are references I take seriously (Bernard Williams and John Searle from the 20th century):

https://archive.org/details/problemsofselfph0000will/page/n7...

https://archive.org/details/intentionalityes0000sear

Descartes, Hume, Kant and Wittgenstein are older sources that are relevant.

[edit] Clarified that Williams and Searle are 20th century.

electrograv 7/7/2025||
Intelligence and consciousness are two different things though, and some would argue they may even be almost completely orthogonal. (A great science fiction book called Blindsight by Peter Watts explores this concept in some detail BTW, it’s a great read.)
tim333 7/7/2025||
I think what some ancient philosopher said becomes less interesting when the things are working. Instead of asking what thought is, we move on to why the code ChatGPT produced didn't compile and whether Claude is better.
tim333 7/7/2025||
The counter-argument is that the successes and limitations of LLMs are not that important to whether AGI is around the corner or not. Getting human-level intelligence around now has long been predicted, not based on any particular algorithm but based on the hardware reaching human-brain-equivalent levels due to a Moore's-law-like progression. The best prediction along those lines is probably Moravec's paper:

>When will computer hardware match the human brain? (1997) https://jetpress.org/volume1/moravec.pdf

which has in the abstract:

>Based on extrapolation of past trends and on examination of technologies under development, it is predicted that the required hardware will be available in cheap machines in the 2020s

You can then hypothesize that cheap brain-equivalent compute and many motivated human researchers trying different approaches will lead to human-level artificial intelligence. How long it takes the humans to crack the algorithms is unknown, but soon is not impossible.
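The extrapolation behind this kind of prediction can be sketched in a few lines. The specific numbers below are assumptions for illustration, not figures taken from Moravec's tables: a human brain estimated at ~10^8 MIPS, roughly 10^3 MIPS per $1000 of hardware in 1997, and cost-performance doubling every 18 months.

```python
import math

# Assumed figures for the sketch (not from the paper's tables):
BRAIN_MIPS = 1e8               # rough human-brain compute estimate
MIPS_PER_1000USD_1997 = 1e3    # cheap-machine compute per $1000 in 1997
DOUBLING_YEARS = 1.5           # Moore's-law-style doubling period

def year_brain_compute_is_cheap(start_year=1997):
    """Year when $1000 of hardware reaches brain-equivalent MIPS."""
    doublings = math.log2(BRAIN_MIPS / MIPS_PER_1000USD_1997)
    return start_year + doublings * DOUBLING_YEARS

print(round(year_brain_compute_is_cheap()))  # lands in the early 2020s
```

With these assumed inputs the crossover lands around 2022, consistent with the "cheap machines in the 2020s" claim in the abstract; shifting any one assumption by a factor of ten moves the date by only a few years, which is why the prediction is fairly robust.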

machiaweliczny 7/7/2025|
My layman take on it:

1) We need some way of reliable world model building from LLM interface

2) RL/search is real intelligence but needs a viable heuristic (fitness fn) or signal - how to obtain this at scale is the biggest question -> they (rich fools) will try some dystopian shit to achieve it - I hope people will resist

3) Ways to get this signal: human feedback (viable economic activity), testing against an internal DB (via probabilistic models - I suspect the human brain works this way), simulation -> tough/expensive for real-world tasks but some improvements are there, see robotics improvements

4) Video/Youtube is next big frontier but currently computationally prohibitive

5) Next frontier possibly is this metaverse thing or what Nvidia tries with physics simulations

I also wonder how the human brain is able to learn rigorous logic/proofs. I remember how hard it was to adapt to this kind of thinking, so I don't think it's a default mode. We need a way to simulate this in computers to have any hope of progressing forward. And not via a trick like LLM + math solver, but via some fundamental algorithmic advances.
