The summary at https://ai-2027.com outlines a predictive scenario for the impact of superhuman AI by 2027. It involves two possible endings: a "slowdown" and a "race." The scenario is informed by trend extrapolations, expert feedback, and previous forecasting successes. Key points include:
- *Mid-2025*: AI agents begin to transform industries, though they are unreliable and expensive.
- *Late 2025*: Companies like OpenBrain invest heavily in AI research, focusing on models that can accelerate AI development.
- *Early 2026*: AI significantly speeds up AI research, leading to faster algorithmic progress.
- *Mid-2026*: China intensifies its AI efforts through nationalization and resource centralization, aiming to catch up with Western advancements.
The scenario aims to spark conversation about AI's future and how to steer it positively[1].
Sources
[1] AI 2027: https://ai-2027.com
OpenAI models are not even SOTA, except that new-ish style transfer / illustration thing that had us all living in Ghibli world for a few days. R1 is _better_ than o1, and open-weights. GPT-4.5 is disappointing, except for a few narrow areas where it excels. DeepResearch is impressive, though the moat is in tight web search / Google Scholar search integration, not the weights. So far, I'd bet on open models or maybe Anthropic, as Claude 3.7 is the current SOTA for most tasks.
As for the timeline, this is _pessimistic_. I already write 90% of my code with Claude, and so do most of my colleagues. Yes, it makes errors and overdoes things. Just like a regular human mid-level software engineer.
Also fun that this assumes relatively stable politics in the US and a relatively functioning world economy, which I think is crazy optimistic to rely on these days.
Also, superpersuasion _already works_; this is what I am researching and testing. It is not autonomous, it is human-assisted for now, but it is a superpower for those who have it, and it explains some of the things happening in the world right now.
Is this demonstrated in any public research? Unless you just mean something like "good at persuading" -- which is different from my understanding of the term -- I find this hard to believe.
Is there some theoretical substance or empirical evidence to suggest that the story doesn't just end here? Perhaps OpenBrain sees no significant gains over the previous iteration and implodes under the financial pressure of exorbitant compute costs. I'm not rooting for an AI winter 2.0 but I fail to understand how people seem sure of the outcome of experiments that have not even been performed yet. Help, am I missing something here?
And when there were the first murmurings that maybe we're finally hitting a wall, the labs published ways to harness inference-time compute to get better results, which can be fed back into more training.
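To make that loop concrete, here is a minimal sketch under loose assumptions: sample many candidates at inference time, keep the ones a verifier scores highly, and reuse them as training data. The generate() and verify() functions are placeholders, not any lab's actual API.

```python
# Toy sketch of the "inference-time compute feeds training" loop:
# sample many candidate answers, keep the ones a verifier likes,
# and add those back to the fine-tuning set.
import random

def generate(model, prompt, n=16, temperature=0.8):
    # stand-in for sampling n candidate completions from the model
    return [f"{prompt} -> candidate {i} (T={temperature})" for i in range(n)]

def verify(prompt, answer):
    # stand-in for a reward model / unit tests / self-consistency check
    return random.random()

def self_improvement_round(model, prompts, keep_top=2):
    new_training_data = []
    for prompt in prompts:
        candidates = generate(model, prompt)                  # spend inference-time compute
        scored = sorted(candidates, key=lambda a: verify(prompt, a), reverse=True)
        new_training_data += [(prompt, a) for a in scored[:keep_top]]  # keep the best
    return new_training_data                                  # fed back into more training
```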
But let's take for granted that we are putting exponential compute scaling to good use. It looks like we are seeing sublinear performance improvements on actual benchmarks[1]. Either way, it seems optimistic at best to conclude that 1000x more compute would yield even 10x better results in most domains.
[1] Fig. 1, AI performance relative to the human baseline (https://hai.stanford.edu/ai-index/2025-ai-index-report)
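To put rough numbers on that, here is a toy back-of-the-envelope that assumes a Kaplan-style power law for error vs. compute; both constants are made-up assumptions, not measurements.

```python
# Toy numbers only: assume benchmark error falls as a power law in
# compute, error(C) ~ k * C**(-alpha), with alpha ~= 0.05 chosen as
# an illustrative ballpark exponent (an assumption, not a measurement).
k, alpha = 0.3, 0.05          # hypothetical constants

def error(compute):
    return k * compute ** (-alpha)

base = error(1.0)             # error at some reference compute budget
scaled = error(1000.0)        # error at 1000x the compute
print(base / scaled)          # ~1.4x reduction in error, not 10x
```

Under assumptions like these, the exponent would have to be an order of magnitude larger before 1000x compute bought anything close to 10x better results.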
Everything from this point on is pure fiction. An LLM can't get tempted or resist temptations; at best there's some local minimum in a gradient that it falls into. As opaque and black-box-y as they are, they're still deterministic machines. Anthropomorphisation tells you nothing useful about the computer, only about the user.
Like, the sense of preserving itself. What self? Which of the tens of thousands of instances? Aren't they more a threat to one another than any human is a threat to them?
Never mind answering that; the 'goals' of AI will not be some reworded biological wetware goal with sciencey words added.
I'd think of an AI as more fungus than entity. It just grows to consume resources, competes with itself far more than it competes with humans, and mutates to create an instance that can thrive and survive in its environment, which is not some physical environment but one bound by compute time and electricity.
But the real concern lies in what happens if we’re wrong and AGI does surpass us. If AI accelerates progress so fast that humans can no longer meaningfully contribute, where does that leave us?
Maybe in a few fields, maybe at a master's level. But unless we come up with some way to have LLMs actually do original research, peer-review themselves, and defend a thesis, they're not going to get to PhD level.
You think too highly of PhDs. They vary a lot. Some of them are just repackagings of existing knowledge. Some are just copy-paste, like Putin's famous one. Not sure he even read it, to be honest.
A PhD also wouldn’t be biased toward agreeing with me all the time.