Posted by Tenoke 4/3/2025

AI 2027 (ai-2027.com)
949 points | 621 comments
bicepjai 4/5/2025|
Claude summarize

The summary at https://ai-2027.com outlines a predictive scenario for the impact of superhuman AI by 2027. It involves two possible endings: a "slowdown" and a "race." The scenario is informed by trend extrapolations, expert feedback, and previous forecasting successes. Key points include:

- *Mid-2025*: AI agents begin to transform industries, though they are unreliable and expensive.
- *Late 2025*: Companies like OpenBrain invest heavily in AI research, focusing on models that can accelerate AI development.
- *Early 2026*: AI significantly speeds up AI research, leading to faster algorithmic progress.
- *Mid-2026*: China intensifies its AI efforts through nationalization and resource centralization, aiming to catch up with Western advancements.

The scenario aims to spark conversation about AI's future and how to steer it positively[1].

Sources:
[1] ai-2027.com: https://ai-2027.com
[2] AI 2027: https://ai-2027.com

atemerev 4/3/2025||
What is this, some OpenAI employee fan fiction? Did Sam himself write this?

OpenAI models are not even SOTA, except for that new-ish style transfer / illustration thing that had us all living in a Ghibli world for a few days. R1 is _better_ than o1, and open-weights. GPT-4.5 is disappointing, except in a few narrow areas where it excels. DeepResearch is impressive, though the moat is in the tight web search / Google Scholar search integration, not the weights. So far, I'd bet on open models or maybe Anthropic, as Claude 3.7 is the current SOTA for most tasks.

As for the timeline, this is _pessimistic_. I already write 90% of my code with Claude, as do most of my colleagues. Yes, it makes errors and overdoes things. Just like a regular mid-level human software engineer.

Also fun that this assumes relatively stable politics in the US and a relatively functioning world economy, which I think is crazy optimistic to rely on these days.

Also, superpersuasion _already works_; this is what I am researching and testing. It is not autonomous yet, only human-assisted for now, but it is a superpower for those who have it, and it explains some of the things happening in the world right now.

achierius 4/3/2025||
> superpersuasion _already works_

Is this demonstrated in any public research? Unless you just mean something like "good at persuading" -- which is different from my understanding of the term -- I find this hard to believe.

atemerev 4/3/2025||
No, I meant "good at persuading"; it is not 100% effective, of course.
pixodaros 4/4/2025||
That singularity happened in the fifth century BCE, when people figured out that they could charge silver to teach the art of rhetoric instead of only teaching it to their sons and nephews.
ddp26 4/4/2025||
The story isn't about OpenAI; they say the company could be xAI, Anthropic, Google, or another.
dcanelhas 4/4/2025||
> Once the new datacenters are up and running, they’ll be able to train a model with 10^28 FLOP—a thousand times more than GPT-4.

Is there some theoretical substance or empirical evidence to suggest that the story doesn't just end here? Perhaps OpenBrain sees no significant gains over the previous iteration and implodes under the financial pressure of exorbitant compute costs. I'm not rooting for an AI winter 2.0 but I fail to understand how people seem sure of the outcome of experiments that have not even been performed yet. Help, am I missing something here?
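
For a rough sense of what a 10^28 FLOP run means in practice, here is a back-of-envelope sketch; the per-GPU throughput, utilization, and cluster size below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope: wall-clock time for a 10^28 FLOP training run.
# All hardware numbers are assumptions for illustration only.

TARGET_FLOP = 1e28          # training budget quoted in the scenario
FLOP_PER_GPU_S = 1e15       # ~1 PFLOP/s per accelerator at low precision (assumed)
UTILIZATION = 0.4           # assumed model FLOP utilization
NUM_GPUS = 1_000_000        # assumed cluster size

effective_rate = FLOP_PER_GPU_S * UTILIZATION * NUM_GPUS  # aggregate FLOP/s
seconds = TARGET_FLOP / effective_rate
print(f"~{seconds / 86_400:.0f} days of wall-clock training on {NUM_GPUS:,} GPUs")
# With these assumptions: roughly 290 days; a 100k-GPU cluster would need
# closer to eight years, which is where the compute-cost worry bites.
```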

the8472 4/4/2025|
See https://gwern.net/scaling-hypothesis. Exponential scaling has been holding up for more than a decade now, since AlexNet.

And when the first murmurings came that maybe we're finally hitting a wall, the labs published ways to harness inference-time compute to get better results, which can then be fed back into more training.

dcanelhas 4/12/2025||
I sincerely appreciate the reply, but are you talking about Moore's law? AlexNet could run on a commercially available GPU back in 2012. But that wasn't the peak compute platform being used at the time for DL inference, so it distorts the progress a bit. It's like me saying I was running a neural net on a Raspberry Pi yesterday for handwritten character recognition on MNIST and am crunching Stable Diffusion on an RTX 3090 today. Behold, a trillion-fold leap in just a day (never mind the unrelated applications). The singularity is definitely gonna happen tomorrow!

But let's take for granted that we are putting exponential scaling to good use in terms of compute resources. It looks like we are seeing sublinear performance improvements on actual benchmarks[1]. Either way it seems optimistic at best to conclude that 1000x more compute would yield even 10x better results in most domains.

[1] Fig. 1, AI performance relative to the human baseline (https://hai.stanford.edu/ai-index/2025-ai-index-report)
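
To make the diminishing-returns point concrete, here is a minimal sketch assuming a Chinchilla-style power law for loss versus compute; the exponent and constants are illustrative assumptions, not fits to the benchmark data cited above:

```python
# Illustrative power-law scaling: loss(C) = a * C**(-alpha).
# Both constants are assumptions chosen only to show the shape of the curve.

a = 10.0       # arbitrary scale constant
alpha = 0.05   # assumed small scaling exponent

def loss(compute):
    return a * compute ** (-alpha)

base = 1e25  # reference compute budget, arbitrary units
for factor in (1, 10, 100, 1000):
    print(f"{factor:>5}x compute -> loss {loss(base * factor):.3f}")
# With alpha = 0.05, 1000x more compute shrinks the loss by only ~30%
# (1000**-0.05 is about 0.71), nowhere near a 10x improvement.
```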

crvdgc 4/4/2025||
Using Agent-2 to monitor Agent-3 sounds unnervingly similar to the plot of Philip K. Dick's Vulcan's Hammer [1]. An old super AI is used to fight a new version, named Vulcan 2 and Vulcan 3 respectively!

[1] https://en.wikipedia.org/wiki/Vulcan's_Hammer

Joshuatanderson 4/3/2025||
This is extremely important. Scott Alexander's earlier predictions are holding up extremely well, at least on image progress.
ryankrage77 4/4/2025||
> "resist the temptation to get better ratings from gullible humans by hallucinating citations or faking task completion"

Everything from this point on is pure fiction. An LLM can't be tempted or resist temptation; at best there's some local minimum in a gradient that it falls into. As opaque and black-box-y as they are, they're still deterministic machines. Anthropomorphisation tells you nothing useful about the computer, only about the user.

FeepingCreature 4/4/2025|
Temptation does not require nondeterminism.
JoeAltmaier 4/4/2025||
Weirdly written as science fiction, including a deplorable tendency to treat an AI's goals as similar to a human's.

Like, the sense of preserving itself. What self? Which of the tens of thousands of instances? Aren't they more a threat to one another than any human is a threat to them?

Never mind answering that; the 'goals' of AI will not be some reworded biological wetware goal with sciencey words added.

I'd think of an AI as more fungus than entity. It just grows to consume resources, competes with itself far more than it competes with humans, and mutates to create instances that can thrive and survive in that environment; not some physical environment, but one bounded by compute time and electricity.

pinetone 4/3/2025||
I think it's worth noting that all of the authors have financial or professional incentive to accelerate the AI hype bandwagon as much as possible.
FairlyInvolved 4/4/2025|
I realise no one is infallible but do you not think Daniel Kokotajlo's integrity is now pretty well established with regard to those incentives?
I_Nidhi 4/4/2025||
Though it's easy to dismiss as science fiction, this timeline paints a chillingly detailed picture of a potential AGI takeoff. The idea that AI could surpass human capabilities in research and development, and that it could set off an arms race between global powers, is unsettling. The risks—AI misuse, security breaches, and societal disruption—are very real, even if the exact timeline might be too optimistic.

But the real concern lies in what happens if we’re wrong and AGI does surpass us. If AI accelerates progress so fast that humans can no longer meaningfully contribute, where does that leave us?

jenny91 4/4/2025|
Late 2025, "its PhD-level knowledge of every field". I just don't think you're going to get there. There is still a fundamental limitation that you can only be as good as the sources you train on. "PhD-level" is not included in that dataset; in other words, you don't become PhD-level just by reading stuff.

Maybe in a few fields, maybe at a master's level. But unless we come up with some way to have LLMs actually do original research, peer-review themselves, and defend a thesis, they're not going to get to PhD level.

MoonGhost 4/4/2025|
> Late 2025, "its PhD-level knowledge of every field". I just don't think you're going to get there.

You think too highly of PhDs. They vary a lot. Some of them are just a repackaging of existing knowledge. Some are straight copy-paste, like Putin's famous one. Not sure he even read it, to be honest.

dangus 4/4/2025||
I’m pretty sure a PhD wouldn’t confidently hallucinate configuration parameters that don’t exist like my AI coding tool does.

A PhD also wouldn’t be biased toward agreeing with me all the time.
