Posted by mooreds 6 days ago

AI might yet follow the path of previous technological revolutions (www.economist.com)
184 points | 295 comments
RobertEva 5 days ago|
I like the “normal tech” lens: diffusion and process change matter more than model wow-factor. Ask a boring question: what got cheaper? If the answer is drafting text/code and a 10–30% cut in time-to-ship, that's real; if it's just a cool demo, it isn't.
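To make that boring question concrete, here's a minimal back-of-envelope sketch; every number in it is an assumed placeholder for illustration, not a figure from the article:

    # Illustrative only: a "what got cheaper" back-of-envelope check.
    # All numbers below are assumptions, not figures from the article.
    baseline_days = 10.0   # assumed time-to-ship per feature, pre-AI
    cut = 0.20             # assumed cut, midpoint of the 10-30% range
    cost_per_day = 800.0   # assumed fully loaded engineer cost (USD/day)

    new_days = baseline_days * (1 - cut)
    savings = (baseline_days - new_days) * cost_per_day
    print(f"days: {baseline_days} -> {new_days}, ~${savings:.0f} saved per feature")
    # days: 10.0 -> 8.0, ~$1600 saved per feature

If a team can't fill in those three numbers from its own data, the "that's real" claim hasn't been earned yet.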
josefritzishere 6 days ago||
If you read the paper, they make a good case that AI is just a normal technology. They're a bit dismissive, but they're not alone in that. The AI sector has had far too much hype and far too little substance.
Frieren 5 days ago||
> Adoption is also hampered by the fact that much knowledge is tacit and organisation-specific, data may not be in the right format and its use may be constrained by regulation.

The article mentions regulation as a problem three times, yet it never says which regulations it means. Is it the GDPR and the protection of people's data? Is it the anti-discrimination rules that AI bias regularly breaks? We do not know, because the article does not say. Probably the authors are knowledgeable enough to avoid publicly attacking citizens' rights, but they lack the moral integrity to drop the anti-regulatory argument.

akomtu 6 days ago||
Normal? AI is an alien technology to us, and we are being "normalized" to become compatible with it.
aeternum 6 days ago|
AI actually seems far less alien than steam engines, trains, submarines, flight, and space travel.

People weren't sure if human bodies could handle moving at >50mph.

akomtu 5 days ago||
All those steam engines, trains and submarines were steps toward what we are seeing now. AI is the logical culmination and the purpose of technology.
aeternum 5 days ago||
People said similar things about the internet: never before had all human knowledge been available in one place (they apparently forgot about libraries).

I think it's more likely that AI is just a further concentration of human knowledge. It makes that knowledge even more accessible, but will AI actually add to it?

Why doesn't the logical culmination of technology require quantum computers?

Or the merging of human and machine brains?

Or a solar system-scale particle accelerator?

Or whatever the next technology is that we aren't even aware of yet?

akomtu 5 days ago||
AI means technology no longer needs humans. With AI, technology will become a quasi-lifeform, competing with us for resources.
aeternum 4 days ago||
I'm not convinced. Is AI really sufficient to develop into a lifeform or, more specifically, to develop a will of its own?

It's quite possible it's more like the computer in Star Trek: highly capable and able to perform complex agentic tasks, but lacking a will to act on its own.

giardini 6 days ago||
How about a link that works?

Neither the OP's URL nor djoldman's archive link allows access to the article! 8-((

giardini 6 days ago||
OK, now djoldman's archive link above works!
ktallett 6 days ago||
What do they mean, "what if"? It is similar in foundation to something that has existed for around four decades. It is, of course, more efficient and able to search through and combine more data, but it isn't new. It is just a normal technology, which is why I and many others were shocked at the initial hype.
Eisenstein 6 days ago||
> It is similar in foundation to something that has existed for around four decades.

Four decades ago was 1985. The thing is, there was a huge jump in progress from then until now. If we took something with a nicely ramped progression, like computer graphics, and instead of ramping up we went from '1985' to '2025' over the course of a few months, do you think there wouldn't be a lot of hype?

ktallett 6 days ago|||
But we have ramped up slowly; it just hasn't been delivered in quite this form before. We have previously only used it in settings where accuracy is the focus.
johnbellone 6 days ago|||
> Four decades ago was 1985

Don't remind me.

tim333 5 days ago||
The unusual feature of AI now, as opposed to the last four decades, is that it is approaching human intelligence. Assuming progress continues, exceeding human intelligence will have very different economic consequences from being a fair bit worse than humans, as was mostly the case until now.
ctoth 6 days ago||
What if this paper actually took things seriously?

A serious paper would start by acknowledging that every previous general-purpose technology required human oversight precisely because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition. It would wrestle with the fundamental tension: if AI remains error-prone enough to need human supervisors, it's not transformative; if it becomes reliable enough to be transformative, those supervisory roles evaporate.

These two Princeton computer scientists, however, just spent 50 pages arguing that AI is like electricity while somehow missing that electricity never learned to fix itself, manage itself, or improve itself - which is literally the entire damn point. They're treating "humans will supervise the machines" as an iron law of economics rather than a temporary bug in the automation process that every profit-maximizing firm is racing to patch. Sometimes I feel like I'm losing my mind when it seems obvious that GPT-5 could do a better job of understanding historical analogies than Narayanan and Kapoor did in their paper.

nottorp 6 days ago||
> because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition

I could ask the same thing then. When will you take "AI" seriously and stop attributing the above capabilities to it?

simonh 6 days ago|||
LLMs do have to be supervised by humans and do not perceive context or correct errors, and it’s not at all clear this is going to change any time soon. In fact it’s plausible that this is due to basic problems with the current technology. So if you’re right, sure, but I’m certainly not taking that as a given.
cubefox 5 days ago||
They have been correcting errors since OpenAI introduced its o1 model, and the improvements since then have been significant. It seems practically certain that their capabilities will keep growing rapidly. Do you think AI will suddenly stagnate, such that models are not much more capable in five years than they are now? That would be absurd. Look back five years and we were practically in the AI stone age.
cubefox 6 days ago||
Exactly. People seem to want to underhype AI. It's like a chimpanzee saying: humans are just normal apes.

Delusional.

g42gregory 6 days ago||
While I feel silly taking seriously something printed in The Economist, I would like to mention that people tend to overestimate the short-term impact of any technology and underestimate its long-term impact. Maybe AI will follow the same route?
65 6 days ago|
Ah yes, disgraced tabloid The Economist, no one should ever take their writing seriously!
g42gregory 6 days ago||
I used to read and subscribe to it a while back. I would not technically categorize them as a tabloid; they serve a different purpose.