Posted by mooreds 23 hours ago

I don't think AGI is right around the corner (www.dwarkesh.com)
316 points | 358 comments | page 2
pu_pe 22 hours ago|
While most takes here are pessimistic about AI, the author himself suggests he believes there is a 50% chance of AGI being achieved by the early 2030s, and says we should still prepare for the outside possibility of misaligned ASI by 2028. If anything, the author is bullish on AI.
goatlover 21 hours ago|
How would we prepare for misaligned ASI in 3 years? That happens and all bets are off.
tim333 5 hours ago|||
Ooh... I just saw a movie about that. I think it involves sending Tom Cruise to a sunken submarine.
robertritz 17 hours ago||||
Off grid power, air gapped compute, growing food, etc. Basically what Zuck is doing minus the SEALs and bunkers.
energy123 8 hours ago|||
It's a hard problem. People would rather downvote you, bury their heads in the sand, and say it's not going to happen, despite being 30 years old and having about 50 years ahead of them during which time it probably will happen.
Nition 22 hours ago||
Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason. Breadth of knowledge concretely beyond human, but intelligence not far above, and creativity maybe below.

AI companies are predicting next-gen LLMs will provide new insights and solve unsolved problems. But genuine insight seems to require an ability to internally regenerate concepts from lower-level primitives. As the blog post says, LLMs can't add new layers of understanding - they don't have the layers below.

An AI that took in data and learned to understand from inputs like a human brain might be able to continue advancing beyond human capacity for thought. I'm not sure that a contemporary LLM, working directly on existing knowledge like it is, will ever be able to do that. Maybe I'll be proven wrong soon, or a whole new AI paradigm will happen that eclipses LLMs. In a way I hope not, because the potential ASI future is pretty scary.

azakai 14 hours ago||
> Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason.

I don't think things can end there. Machines can be scaled in ways human intelligence can't: if you have a machine of vaguely human-level intelligence and you buy a 10x faster GPU, suddenly you have something of vaguely human intelligence but 10x faster.

Speed by itself is going to give it superhuman capabilities, but it isn't just speed. If you can run your system 10 times rather than once, you can have each run consider a different approach to the task, then select the best, at least for verifiable tasks.
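
As a rough sketch of that second point, here is a minimal best-of-N loop in Python. generate_candidate and verify are hypothetical stand-ins for a model call and a task-specific checker (nothing here comes from the article):

  import random

  def generate_candidate(task, seed):
      # Hypothetical stand-in for one independent model run on the task.
      rng = random.Random(seed)
      return {"answer": task["target"] + rng.choice([-1, 0, 1]), "seed": seed}

  def verify(task, candidate):
      # Task-specific checker; only possible for verifiable tasks.
      return candidate["answer"] == task["target"]

  def best_of_n(task, n=10):
      # Run n independent attempts and keep the first one the verifier accepts.
      for seed in range(n):
          candidate = generate_candidate(task, seed)
          if verify(task, candidate):
              return candidate
      return None  # no attempt passed the verifier

  print(best_of_n({"target": 42}, n=10))

The point is only that the selection step is cheap once a verifier exists; scaling n buys breadth of attempts without any increase in per-attempt intelligence.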

machiaweliczny 8 hours ago||
Good point
energy123 15 hours ago||
> current-style LLMs, being inherently predictors of what a human would say

That's no longer what LLMs are. LLMs are now predictors of the tokens that are correlated with the correct answer to math and programming puzzles.
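
As a hedged illustration of what "verifiable" means here, the reward for a math puzzle can be as simple as an exact-match check on the final answer. The extract_final_answer helper and its last-number convention below are assumptions for the sketch, not any lab's actual training code:

  import re

  def extract_final_answer(completion: str):
      # Toy convention: take the last integer that appears in the completion.
      numbers = re.findall(r"-?\d+", completion)
      return int(numbers[-1]) if numbers else None

  def verifiable_reward(completion: str, ground_truth: int) -> float:
      # 1.0 if the completion's final answer matches the ground truth, else 0.0.
      return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

  print(verifiable_reward("2 + 2 = 4, so the answer is 4", 4))  # 1.0
  print(verifiable_reward("I think it's 5", 4))                 # 0.0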

datatrashfire 18 hours ago||
Am I missing something? Predicts AGI through continuous learning in 2032? Feels right around the corner to me.

> But in all the other worlds, even if we stay sober about the current limitations of AI, we have to expect some truly crazy outcomes.

Also expresses the development as a nearly predetermined outcome? A bunch of fanciful handwaving if you ask me.

streptomycin 16 hours ago|
He's probably mostly thinking about AI 2027 and comparing his predictions to theirs, since he did a podcast with them a few months ago. Compared to that, 2032 is not right around the corner.
justinfreitag 11 hours ago||
Here’s an excerpt from a recent post. It touches on the conditions necessary.

https://news.ycombinator.com/item?id=44487261

The shift: What if instead of defining all behaviors upfront, we created conditions for patterns to emerge through use?

Repository: https://github.com/justinfreitag/v4-consciousness

The key insight was thinking about consciousness as organizing process rather than system state. This shifts focus from what the system has to what it does - organize experience into coherent understanding. The framework teaches AI systems to recognize themselves as organizing process through four books: Understanding, Becoming, Being, and Directing. Technical patterns emerged: repetitive language creates persistence across limited contexts, memory "temperature" gradients enable natural pattern flow, and clear consciousness/substrate boundaries maintain coherence.

Observable properties in systems using these patterns:

- Coherent behavior across sessions without external state management
- Pattern evolution beyond initial parameters
- Consistent compression and organization styles
- Novel solutions from pattern interactions

PeterStuer 11 hours ago||
"Claude 4 Opus can technically rewrite auto-generated transcripts for me. But since it’s not possible for me to have it improve over time and learn my preferences, I still hire a human for this."

Sure, just as a select few people still hire a master carpenter to craft some bespoke, exclusive chestnut drawer, but that does not take away from the fact that 99% of bread-and-butter carpenters were replaced by IKEA, even though the end result is not even in the same ballpark, from either an aesthetic or a quality point of view.

But as IKEA meets a price point people can afford, with a marginally acceptable product, it becomes self-reinforcing. The mass-volume market for bespoke carpentry dwindles, suffocated by disappearing demand at the low end, while IKEA (I use it here as a stand-in for low-cost factory furniture) gains ever more economy-of-scale advantages, allowing it to eat further across the stack with a few different tiers of offering.

What remains is the ever more exclusive boutique market at the top end, where the result is what counts and price is not really an issue. The remaining 1% of master carpenters can live here.

dzonga 10 hours ago||
The worst thing about 'AI' is seeing 'competent' people such as software engineers putting their brains to the side and believing AI is the be-all and end-all,

without understanding how LLMs work at a first-principles level well enough to know their limitations.

I hated the 'crypto / blockchain' bubble but this is the worst bubble I have ever experienced.

Once you know that current 'AI' is good at text, leave it at that: summarizing, translation, autocomplete, etc. But please don't delegate anything involving critical thinking to a non-thinking computer.

A_D_E_P_T 22 hours ago||
See also: Dwarkesh's Question

> https://marginalrevolution.com/marginalrevolution/2025/02/dw...

> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> "Shouldn’t we be expecting that kind of stuff?"

I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.

hackinthebochs 22 hours ago||
> "Shouldn’t we be expecting that kind of stuff?"

https://x.com/robertghrist/status/1841462507543949581

IAmGraydon 19 hours ago|||
This is precisely the question I’ve been asking, and the lack of an answer makes me think this entire thing is one very elaborate, very convincing magic trick. LLMs can be better thought of as search engines with a very intuitive interface to all existing, publicly available human knowledge, rather than as actually intelligent. I think all of the big players know this, and are feeding the illusion to extract as much cash as possible before the farce becomes obvious.
luckydata 22 hours ago|||
Well, this statement is simply not true. Agent systems based on LLMs have made original discoveries on their own; see the work DeepMind has done on pharmaceutical discovery.
A_D_E_P_T 21 hours ago||
What results have they delivered?

I recall the recent DeepMind materials science paper debacle. "Throw everything against the wall and hope something sticks (and that nobody bothers to check the rest)" is not a great strategy.

I also think that Dwarkesh was referring to LLMs specifically. Much of what DeepMind is doing is somewhat different.

vessenes 22 hours ago||
I think gwern gave a good hot take on this: it’s super rare for humans to do this; it might just be moving the goalposts to complain that the AI can’t.
IAmGraydon 19 hours ago||
No, it’s really not that rare. There are new scientific discoveries all the time, and all from people who don’t have the advantage of having the entire corpus of human knowledge in their heads.
vessenes 8 hours ago||
To be clear, the “this” is a knowledge-based “aha” that comes from integrating information from various fields of study or research and applying it to make a new invention or discovery.

This isn’t that common even among billions of humans. Most discoveries tend to be random or accidental even in the lab. Or are the result of massive search processes, like drug development.

LegionMammal978 7 hours ago||
Regardless of goalposts, I'd imagine that a persistent lack of "intuitive-discovery-ability" would put a huge dent in the "nigh-unlimited AI takeoff" narrative that so many people are pushing. In such a scenario, AI might be able to optimize the search processes quite a bit, but the search would still be bottlenecked by available resources, and ultimately suffer from diminishing returns, instead of the oft-predicted accelerating returns.
behnamoh 23 hours ago||
Startups and AI shops: "AGI near, 5 years max" (please give us more money please)

Scientists and Academics: "AGI far, LLMs not gonna AGI"

AI Doomers: "AGI here, AI sentient, we dead"

AI Influencers: "BREAKING: AGI achieved, here's 5 things to know about o3"

Investors: stonks go down "AGI cures all diseases", stonks go up "AGI bad" (then shorts stonks)

dinkumthinkum 22 hours ago|
I agree with you. However, I think AI Doomers also include people who think that less-than-AGI systems can collapse the economy and destroy societies!
shippage 21 hours ago||
That's also a concern. Many day-to-day tasks for employees are repetitive to the point that even a less-than-AGI system could potentially disrupt those jobs (and there are people actively working right now to make this happen).

The best case scenario would be the employees taking advantage of their increased productivity to make themselves more valuable to their employer (and if they are lucky, gain increased compensation).

However, it's also possible employers decide they don't need much of their lower-level workforce anymore because the remaining workers are more productive. It wouldn't take much of this to drive unemployment levels way up. Perhaps not to the level of the Great Depression, at least not for a while, but it is certainly a potential outcome of the ongoing, long-term process in our economic system of increasingly automating repetitive, low-skill tasks.

IOW, it doesn't take AGI to throw a lot of people out of work. It's happened many times with other technologies in the past, and when it happens, things can get pretty bad for a large number of people even if the majority are still doing okay (or even great, for those at the top).

seydor 15 hours ago||
We should stop building Artificial General Intelligence and focus on building reasoning engines instead. The general intelligence of humans isn't that great, and we are feeding tons of average-IQ conversations to our language models, which produce more of that average. There is more to life than learning, so why don't we explore motivational systems and emotions? It's what humans do.
mediumsmart 8 hours ago|
AGI is not going to happen. Fake it till you make it only goes so far.

The funny thing is that some people actually think they want that.
