Posted by mooreds 6 days ago
The article mentions regulations as a problem three times, but it never says what those regulations are. Is it the GDPR and the protection of people's data? Is it anti-discrimination rules that AI bias regularly breaks? We don't know, because the article doesn't say. Probably the authors are knowledgeable enough to avoid publicly attacking citizens' rights, but they lack the moral integrity to drop the anti-regulatory argument.
People weren't sure if human bodies could handle moving at >50mph.
I think it's more likely that AI is just a further concentration of human knowledge. It makes that knowledge even more accessible, but will AI actually add to it?
Why doesn't the logical culmination of technology require quantum computers?
Or the merging of human and machine brains?
Or a solar system-scale particle accelerator?
Or whatever the next technology is that we aren't even aware of yet?
It's quite possible it's more like the computer in Star Trek: highly capable and able to perform complex agentic tasks, but lacking a will to act on its own.
Neither the OP's URL nor djoldman's archive link allows access to the article! 8-((
Four decades ago was 1985. The thing is, there was a huge jump in progress from then until now. If we took something that had a nice gradual ramp of progress, like computer graphics, and instead of ramping up we jumped from '1985' to '2025' over the course of a few months, do you think there wouldn't be a lot of hype?
Don't remind me.
A serious paper would start by acknowledging that every previous general-purpose technology required human oversight precisely because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition. It would wrestle with the fundamental tension: if AI remains error-prone enough to need human supervisors, it's not transformative; if it becomes reliable enough to be transformative, those supervisory roles evaporate.
These two Princeton computer scientists, however, just spent 50 pages arguing that AI is like electricity while somehow missing that electricity never learned to fix itself, manage itself, or improve itself - which is literally the entire damn point. They're treating "humans will supervise the machines" as an iron law of economics rather than a temporary bug in the automation process that every profit-maximizing firm is racing to patch. Sometimes I feel like I'm losing my mind when it's obvious that GPT-5 could do a better job of understanding historical analogies than Narayanan and Kapoor did in their paper.
I could ask the same thing then. When will you take "AI" seriously and stop attributing the above capabilities to it?
Delusional.