Posted by mooreds 7/6/2025

I don't think AGI is right around the corner (www.dwarkesh.com)
374 points | 442 comments | page 3
seydor 7/7/2025|
We should stop building Artificial General Intelligence and focus on building reasoning engines instead. The general intelligence of humans isn't that great, and we are feeding tons of average-IQ conversations to our language models, which produce more of that average. There is more to life than learning, so why don't we explore motivational systems and emotions? It's what humans do.
boshalfoshal 7/7/2025||
Keep in mind: this is not reaffirming HN's anti-AGI, extremely-long-timeline beliefs.

The article explicitly states that he thinks we will have an AI system that "will be able to do your taxes" by 2028, and a system that could replace essentially all white-collar work by 2032.

I think an autonomous system that can reliably do your taxes with minimal to no input is already very, very good, and a system that by 2032 can replace 90% to all of white-collar work is pretty much AGI, in my opinion.

Fwiw, I think the fundamental problems he describes in the article as AGI blockers are likely to be solved sooner than we think. Labs are not stupid enough to throw all their eggs and talent into the scaling basket; they are most definitely allocating resources to tackling problems like the ones described in the article, while putting the remaining resources into the bottom line (scaling current model capabilities without expensive R&D, and reducing serving/training cost).

mathiaspoint 7/7/2025|
When was OTS written again? That was effectively an expert system that could do your taxes, and it was around at least ten years ago. It didn't even need transformers.
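
A toy sketch of that expert-system style in Python (the brackets and rates are made up for illustration, not real tax law):

    # Rule-based tax calculation: a fixed table of rules, no learning.
    # Brackets and rates are illustrative, not real tax law.
    BRACKETS = [
        (0, 10_000, 0.10),
        (10_000, 40_000, 0.20),
        (40_000, float("inf"), 0.30),
    ]

    def tax_owed(income: float) -> float:
        owed = 0.0
        for low, high, rate in BRACKETS:
            if income > low:
                owed += (min(income, high) - low) * rate
        return owed

    print(tax_owed(55_000))  # 1000 + 6000 + 4500 = 11500.0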

No one has a good benchmark for what AGI is. LLMs are already more capable at most tasks than most random people off the street. I think at this point people keep asking about it because they're trying to ask some deeper philosophical question, like "when will it be human?", but don't want to say that because it sounds silly.

dragonwriter 7/7/2025||
> Already LLMs are more capable at most tasks than most random people off the street.

I cannot imagine having the narrow conceptualization of the universe of human tasks necessary to say this with a straight face, irrespective of one's qualitative assessment of how well LLMs do the things they are capable of doing.

mathiaspoint 7/7/2025||
Why don't you go out with a laptop and offer people cash prizes? See how well the average person does.
chairmansteve 7/8/2025||
Yeah, I'll ask a carpenter to build a simple CRUD app.

Then I'll ask the LLM to frame a house.

mythrwy 7/6/2025||
No, of course not. But it doesn't need to be AGI to have profound effects.

LLMs don't model anything, but they are still very useful. In my opinion, the reason they are useful (aside from holding massive amounts of information) is that language itself models reality, so we see a simulated modeling of reality as an artifact.

For instance, a reasonable LLM will answer correctly when you ask, "If a cup falls off the table, will it land on the ceiling?" But that isn't because the LLM can model scenarios with known rules the way a physics calculation, or even innate human instinct, might. Getting AI to do this sort of modeling effectively is much more complex than next-token prediction; even dividing reality into discrete units may be a challenge. Without this type of thinking, I don't see full AGI arising any time soon.
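
To make the contrast concrete, here is what modeling with known rules looks like for the cup question; a toy simulation with made-up numbers, nothing like what an LLM does internally:

    # Toy forward simulation of a cup falling off a table: explicit
    # physical rules (gravity), not next-token prediction.
    g = -9.81           # gravity pulls down (m/s^2)
    y, vy = 0.75, 0.0   # table height (m), starting at rest
    dt = 0.001          # time step (s)

    while y > 0.0:      # integrate until the cup reaches the floor
        vy += g * dt
        y += vy * dt

    print("It lands on the floor, not the ceiling.")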

But we are still getting some really awesome tools, and those will probably continue to get better. They really are powerful, and a bit scary if you poke around.

mark_l_watson 7/7/2025||
I find myself in strong (+1000) agreement with this article, and I wrote something very short on the same topic a few days ago: https://marklwatson.substack.com/p/ai-needs-highly-effective...

I love LLMs, especially smaller local models running on Ollama, but I also think the FOMO investing in massive data centers and super scaling is misplaced.

If used with skill, LLM-based coding agents are usually effective: modern AI's 'killer app.'

I think the discussion of infinite-memory LLMs, with very long-term data on user and system interactions, is mostly going in the right direction, but I look forward to a different approach than LLM hyper-scaling.
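
A minimal sketch of what that memory direction could look like (keyword overlap standing in for real embeddings; purely illustrative, not the article's proposal):

    # Store past exchanges, retrieve the most relevant ones by word
    # overlap, and prepend them to the next prompt. Real systems would
    # use embeddings and a vector store; this is just the shape of it.
    memory: list[str] = []

    def remember(text: str) -> None:
        memory.append(text)

    def recall(query: str, k: int = 3) -> list[str]:
        q = set(query.lower().split())
        scored = sorted(memory, key=lambda m: -len(q & set(m.lower().split())))
        return scored[:k]

    def build_prompt(user_msg: str) -> str:
        context = "\n".join(recall(user_msg))
        return f"Relevant history:\n{context}\n\nUser: {user_msg}"

    remember("User prefers local models via Ollama.")
    remember("User asked about fine-tuning last week.")
    print(build_prompt("Which local model should I use?"))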

jacquesm 7/6/2025||
AGI by 'some definition' is a red herring. If enough people believe the AI is right, it will be AGI, because they will use it as such. This will cause endless misery, but it's the same as putting some idiot in charge of our countries, which we do regularly.
bawana 7/7/2025||
AI will change much, though, even if it is like an autistic child. In espionage, for example, it is often necessary to spend hours walking around to determine whether you are being surveilled. You have to remember countless faces, body shapes, outfits, gaits, and accessories. Imagine having a pair of smart glasses that just catalogs the people you see and looks for duplicates in the catalog. YOLO algorithms can do this fast. Since no identification is needed, it can all be done on-device. Duplicates can be highlighted in red and entered into a database back at home plate later. Meanwhile, you know you're clean if no duplicates show up for 20 minutes.
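
A rough sketch of that duplicate-check loop (the feature vectors are assumed to come from some on-device detector plus an embedding head; the threshold is made up):

    # Each detected person becomes an anonymous feature vector (assumed
    # to come from an on-device detector such as a YOLO variant plus an
    # embedding head; no identification involved). A close match within
    # the window is flagged as a possible repeat.
    import math
    import time

    SIM_THRESHOLD = 0.92   # made-up similarity cutoff
    WINDOW_SECS = 20 * 60  # the 20-minute window from the comment

    sightings: list[tuple[float, list[float]]] = []  # (timestamp, vector)

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / max(na * nb, 1e-12)

    def observe(vec: list[float], now: float | None = None) -> bool:
        """Record a sighting; return True if it repeats a recent one."""
        now = time.time() if now is None else now
        sightings[:] = [(t, v) for t, v in sightings if now - t < WINDOW_SECS]
        dup = any(cosine(vec, v) > SIM_THRESHOLD for _, v in sightings)
        sightings.append((now, vec))
        return dup
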
bilsbie 7/6/2025||
I guess, using history as a guide, it might be like self-driving. We mostly believed it was right around the corner in 2012. Lots of impressive driving.

In 2025 we're so close, but mostly not quite at human level. Another 5 years at least.

Barrin92 7/6/2025|
> In 2025 we're so close

We're not even close right now. Cars can barely drive themselves on a tiny subset of pre-selected, orderly roads in America; we sort of have driver assistance on virtual rails. We do not have cars driving themselves on busy streets in Jakarta, handling unstructured situations, or negotiating in real time with other drivers. There's an illusion that they sort of work because they constitute a tiny fraction of traffic on a tiny section of roads. Make half of all cars in Rome autonomous for a day and you'd have the biggest collection of scrap metal in the world.

And that's only driving.

dzonga 7/7/2025||
The worst thing about 'AI' is seeing 'competent' people, such as software engineers, putting their brains to the side and believing AI is the be-all and end-all,

without understanding how LLMs work at a first-principles level, so as to know their limitations.

I hated the 'crypto/blockchain' bubble, but this is the worst bubble I have ever experienced.

Once you know that current 'AI' is good at text, leave it at that: summarizing, translation, autocomplete, etc. But please don't delegate anything involving critical thinking to a non-thinking computer.

qwertox 7/7/2025||
My assumption about AGI is that it needs to have all the features of an ASI, but be resource-constrained enough not to reach the potential an ASI must have.

This basically means that an AGI must at least be capable of incorporating new information into its model, outside of its context, in such a way that it becomes part of the GPU's memory and can be used as efficiently as the pretrained weights and biases of the model.

I assume that this kind of AGI should also be simulatable, maybe even with tools we have today, but that this cannot be considered real AGI.
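
A toy illustration of that "fold new information into the weights" idea, at linear-regression scale (nothing like an LLM's actual training machinery, just the shape of an online update):

    # One online gradient step per new example; the update persists
    # in the parameters rather than sitting in a context window.
    w, b, lr = 0.0, 0.0, 0.1

    def predict(x: float) -> float:
        return w * x + b

    def learn(x: float, target: float) -> None:
        """One SGD step on squared error; w and b change permanently."""
        global w, b
        err = predict(x) - target
        w -= lr * err * x
        b -= lr * err

    for _ in range(50):            # new facts arrive, weights absorb them
        learn(2.0, 5.0)
    print(round(predict(2.0), 2))  # ~5.0, recalled with no "context" at all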

tmsh 7/6/2025|
I think the timelines are more like half that. Why? The insane gold rush once people start using autonomous agents that make money.

Right now VCs are optimistically looking for the first solo-founder unicorn powered by AI tools. But a prompt, with the right system around it, that prints money (by doing something useful) is an entirely different monetary system. Then everyone focuses on it and the hype 10x's. And through that, AGI emerges on the fringes, because the incentives are there for hundreds of millions of people (right now it's <1 million).
