Posted by Tenoke 4/3/2025

AI 2027 (ai-2027.com)
949 points | 621 comments
infecto 4/3/2025|
Could not get through the entire thing. It's mostly a bunch of fantasy intermingled with bits of possibly interesting discussion points. The whole right-side metrics are purely a distraction because they're entirely fiction.
archagon 4/3/2025|
Website design is nice, though.
porphyra 4/3/2025||
Seems very sinophobic. DeepSeek and Manus have shown that China is legitimately an innovation powerhouse in AI, but this article makes it sound like they will just keep falling behind unless they steal.
aoanevdus 4/4/2025||
Don't assume that because the article depicts this competition between the US and China, the authors actually want China to fail. Consider the authors and the audience.

The work is written by western AI safety proponents, who often need to argue with important people who say we need to accelerate AI to “win against China” and don’t want us to be slowed down by worrying about safety.

From that perspective, there is value in exploring the scenario: ok, if we accept that we need to compete with China, what would that look like? Is accelerating always the right move? The article, by telling a narrative where slowing down to be careful with alignment helps the US win, tries to convince that crowd to care about alignment.

Perhaps people in China can make the same case about how alignment will help China win against the US.

MugaSofer 4/3/2025|||
That whole section seems to be pretty directly based on DeepSeek's work with R1: simultaneously very impressive and several months behind OpenAI. (They more or less say as much in footnote 36.) They blame this on US chip controls just barely holding China back from the cutting edge by a few months. I wouldn't call that a knock on Chinese innovation.
clayhacks 4/6/2025||
But it also assumes China never really catches up to American chip companies. China is already investing heavily in chip R&D and things like RISC-V; I think it's very plausible that the lag window shrinks over this horizon, perhaps even flips, given their much greater willingness to use industrial policy for the goals they want achieved.
princealiiiii 4/3/2025|||
Stealing model weights isn't even particularly useful long-term, it's the training + data generation recipes that have value.
hexator 4/4/2025|||
Yes, it's extremely sinophobic and entirely too dismissive of China. It's pretty clear what the author's political leanings are, by what they mention and by what they do not.
a3w 4/3/2025|||
How so? Spoiler: the US dooms mankind and China is the saviour in the two endings.
usef- 4/4/2025|||
In both endings it says that's because compute becomes the bottleneck and the US has far more chips, doesn't it?
ugh123 4/3/2025|||
Don't confuse innovation with optimisation.
pixl97 4/3/2025||
Don't confuse designing the product with winning the market.
Sugimot0 4/9/2025||
Exactly how I read it: this reeks of the war drive toward China, with nonsensical predictions and comical red-scare portrayals ("legions of CCP spies"). Just in time for the new McCarthyism rolling out.
ikerino 4/3/2025||
Feels reasonable in the first few paragraphs, then quickly starts reading like science fiction.

Would love to read a perspective examining "what is the slowest reasonable pace of development we could expect." This feels to me like the fastest (unreasonable) trajectory we could expect.

layer8 4/3/2025||
The slowest is a sudden and permanent plateau, where all attempts at progress turn out to result in serious downsides that make them unworkable.
9dev 4/3/2025|||
Like an exponentially growing compute requirement for negligible performance gains, on the scale of the energy consumption of small countries? Because that is where we are, right now.
photonthug 4/4/2025|||
Even if this were true, it's not quite the end of the story, is it? The hype itself creates lots of compute, and to some extent the power needed to feed that compute, even if approximately zero of the hype pans out. So an interesting question becomes... what happens with all the excess? Sure, it probably gets gobbled up in crypto Ponzi schemes, but I guess we can try to be optimistic. IDK, maybe we get to solve cancer and climate change anyway, not with fancy new AGI, but merely with some new ability to cheaply crunch numbers for boring old-school ODEs.
admiralrohan 4/3/2025|||
No one knows what will happen. But these thought experiments can be useful as a critical thinking practice.
ddp26 4/4/2025|||
The forecasts under "Research" are distributions, so you can compare the 10th percentile vs 90th percentile.

Their research is consistent with a similar story unfolding over 8-10 years instead of 2.

zmj 4/4/2025|||
If you described today's AI capabilities to someone from 3 years ago, that would also sound like science fiction. Extrapolate.
FeepingCreature 4/4/2025||
> Feels reasonable in the first few paragraphs, then quickly starts reading like science fiction.

That's kind of unavoidably what accelerating progress feels like.

superconduct123 4/3/2025||
Why are the biggest AI predictions always made by people who aren't deep in the tech side of it? Or actually trying to use the models day-to-day...
AlphaAndOmega0 4/3/2025||
Daniel Kokotajlo released the (excellent) 2021 forecast. He was then hired by OpenAI and was not at liberty to speak freely until he quit in 2024. He's part of the team making this forecast.

The others include:

Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.

Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early stage investment in Anthropic, now worth $60 billion.

Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.

Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.

And finally, Scott Alexander himself.

kridsdale3 4/3/2025|||
TBH, this kind of reads like the pedigrees of the former members of the OpenAI board. When the thing blew up, and people started to apply real scrutiny, it turned out that about half of them had no real experience in pretty much anything at all, except founding Foundations and instituting Institutes.

A lot of people (like the Effective Altruism cult) seem to have made a career out of selling their Sci-Fi content as policy advice.

MrScruff 4/4/2025|||
I kind of agree - since the Bostrom book there has been a cottage industry of people with non-technical backgrounds writing papers about singularity thought experiments, and it does seem to be on the spectrum with hard sci-fi writing. A lot of these people are clearly intelligent, and it's not even that I think everything they say is wrong (I made similar assumptions long ago, before I'd even heard of Ray Kurzweil and the Singularity, although at the time I would have guessed 2050). It's just that they seem to believe their thought process and Bayesian logic is more rigorous than it actually is.
flappyeagle 4/3/2025|||
c'mon man, you don't believe that, let's have a little less disingenuousness on the internet
arduanika 4/3/2025||
How would you know what he believes?

There's hype and there's people calling bullshit. If you work from the assumption that the hype people are genuine, but the people calling bullshit can't be for real, that's how you get a bubble.

flappyeagle 4/4/2025||
Because they are not the same in any way. It's not a bunch of junior academics; it literally includes someone who worked at OpenAI.
arduanika 4/7/2025|||
I asked you how you know kridsdale3 believes X, and your reply is basically, "because I believe Y". I hope you don't call yourself a rationalist, given that you're hazy on the meaning of "because" and struggle with theory of mind.

Sure, OpenAI put up with one of these safety larpers for a few years while it was part of their brand. Reasonable people can disagree on how much that counts for.

You're right it's not a bunch of junior academics. It's not even a bunch of junior academics. This stuff would never pass muster in a reputable academic peer-reviewed journal, so from an academic perspective, this is not even the JV stuff. That's why they have to found their own bizarro network of foundations and so on, to give the appearance of seriousness and legitimacy. This might fool people who aren't looking closely, but the trick does not work on real academics, nor does it work on the silent majority of those who are actually building the tech capabilities.

nice_byte 4/4/2025||||
this sounds like a bunch of people who make a living _talking_ about the technology, which lends them close to 0 credibility.
mickelsen 4/4/2025||
[dead]
pixodaros 4/4/2025||||
Scott Alexander, for what it's worth, is a psychiatrist, race science enthusiast, and blogger whose closest connection to software development is Bay Area house parties and a failed startup called MetaMed (2012-2015): https://rationalwiki.org/wiki/MetaMed
Bjorkbat 4/5/2025||||
Minor pet peeve of mine: I really don't like the term "superforecaster". The first time I encountered it was in association with some guy who was making predictions a year or two out.

Which, to be fair, actually is kind of impressive if someone can make accurate predictions that far ahead, but only because people are really bad at predicting the future.

Implicitly, when I hear "superforecaster" I think of someone who's really good at predicting the future, but deeper inspection often reveals that "the future" is constrained to the next 2 years. Beyond that they tend to be as bad as any other "futurist".

superconduct123 4/3/2025||||
I mean either researchers creating new models or people building products using the current models

Not all these soft roles

torginus 4/3/2025|||
Because these people understand human psychology and know how to play on people's fears (of doom, or of missing out) and insecurities, and they write compelling narratives while sounding smart.

They are great at selling stories - they sold the story of the crypto utopia, and are now switching their focus to AI.

This seems to be another appeal to enforce AI regulation in the name of 'AI safetyism'; a similar appeal was made 2 years ago, but the threats in it haven't really panned out.

For example, an oft-repeated argument is the dangerous ability of AI to design chemical and biological weapons. I wish some expert could weigh in on this, but I believe the ability to theorycraft pathogens that are effective in the real world is absolutely marginal - you need actual lab work and lots of physical experiments to confirm your theories.

Likewise, the danger of AI systems exfiltrating themselves to the multi-million-dollar AI datacenter GPU systems everyone supposedly just has lying around is... not super realistic.

The ability of AIs to hack computer systems is much less theoretical - however, as AIs get better at black-hat hacking, they'll get better at white-hat hacking as well, as there's literally no difference between the two other than intent.

And herein lies a crucial limitation of alignment and safetyism - sometimes there's no way to tell apart harmful and harmless actions, other than whether the person undertaking them means well.

rglover 4/3/2025|||
Aside from the other points about understanding human psychology here, there's also a deep well they're trying to fill inside themselves: that of being someone who can't create things without shepherding others, and who sees AI as the "great equalizer" that will finally let them taste the positive emotions associated with creation.

The funny part, to me, is that it won't. They'll continue to toil and move on to the next huck just as fast as they jumped on this one.

And I say this from observation. Nearly all of the people I've seen pushing AI hyper-sentience are smug about it and, coincidentally, have never built anything on their own (besides a company or organization of others).

Every single one of the rational "we're on the right path but not quite there" takes has been from seasoned engineers who at least have some hands-on experience with the underlying tech.

ZeroTalent 4/3/2025|||
People who are skilled fiction writers might lack technical expertise. In my opinion, this is simply an interesting piece of science fiction.
bpodgursky 4/3/2025|||
Because you can't be a full-time blogger and also a full-time engineer. Both take all your time, even ignoring the time it takes to build talent. There is simply a tradeoff in what you do with your life.

There are engineers with AI predictions, but you aren't reading them, because building an audience like Scott Alexander's takes decades.

m11a 4/4/2025||
If so, then it seems the solution is for HN to upvote the random qualified engineer with AI predictions?
FeepingCreature 4/4/2025|||
I use the models daily and agree with Scott.
meowface 4/5/2025||
You are an SSCite, though, and therefore biased.

(That said, I agree with you. But I know I myself am biased to agree with Scott.)

FeepingCreature 4/6/2025||
I mean, it's more that I agree with Eliezer... but like, I've used them since GPT-2. I really do think that "this is it": the tech will just keep scaling.
ohgr 4/3/2025|||
On the path to self-value, people explain their worth by what they say, not what they know. If what they say is horse dung, it is irrelevant to their ego as long as there is someone dumber than they are listening.

This bullshit article is written for that audience.

Say bullshit enough times and people will invest.

HeatrayEnjoyer 4/4/2025||
So what's the product they're promoting?
moralestapia 4/4/2025||
Their ego.
Tenoke 4/3/2025||
...The first person listed is ex-OpenAI.
zvitiate 4/3/2025||
There's a lot to potentially unpack here, but idk, the idea that whether humanity enters hell (extermination) or heaven (brain uploading; an aging cure) comes down to whether or not we listen to AI safety researchers for a few months makes me question whether it's really worth unpacking.
9dev 4/3/2025||
Maybe people should just not listen to AI safety researchers for a few months? Maybe they are qualified to talk about inference and model weights and natural language processing, but not particularly knowledgeable about economics, biology, psychology, or… pretty much every other field of study?

The hubris is strong with some people, and a certain oligarch with a god complex is acting out where that can lead right now.

arduanika 4/4/2025||
It's charitable of you to think that they might be qualified to talk about inference and model weights and such. They are AI safety researchers, not AI researchers. Basically, a bunch of doom bloggers, jerking each other in a circle, a few of whom were tolerated at one of the major labs for a few years, to do their jerking on company time.
amelius 4/3/2025||
If we don't do it, someone else will.
achierius 4/3/2025|||
That's obviously not true. Before OpenAI blew the field open, multiple labs -- e.g. Google -- were intentionally holding back their research from the public eye because they thought the world was not ready. Investors were not pouring billions into capabilities. China did not particularly care to focus on this one research area, among many, that the US is still solidly ahead in.

The only reason timelines are as short as they are is because of people at OpenAI and thereafter Anthropic deciding that "they had no choice". They had a choice, and they took the one which has chopped at the very least years off of the time we would otherwise have had to handle all of this. I can barely begin to describe the magnitude of the crime that they have committed -- and so I suggest that you consider that before propagating the same destructive lies that led us here in the first place.

pixl97 4/3/2025||
The simplicity of the statement "If we don't do it, someone else will" and the thinking behind it means that someone eventually will do just that, unless otherwise prevented by some regulatory function.

Simply put, with the ever-increasing hardware speeds we were dumping out for other purposes, this day would have come sooner rather than later. We're talking about only a year or two, really.

achierius 4/4/2025|||
But every time, it doesn't have to happen yet. And when you're talking about the potential deaths of millions, or billions, why be the one who spawns the seed of destruction in their own home country? Why not give human brotherhood a chance? People have, and do, hold back. You notice the times they don't, and the few who don't -- you forget the many, many more who do refrain from doing what's wrong.

"We have to nuke the Russians, if we don't do it first, they will"

"We have to clone humans, if we don't do it, someone else will"

"We have to annex Antarctica, if we don't do it, someone else will"

HeatrayEnjoyer 4/4/2025|||
Cloning? Bioweapons? Ever-larger nuclear stockpiles? The world has collectively agreed not to do something more than once. AI would be easier to control than any of the above: GPUs can't be dug out of the ground.
itishappy 4/3/2025||||
Which? Exterminate humanity or cure aging?
ethersteeds 4/3/2025|||
Yes
amelius 4/3/2025|||
The thing whose outcome can go either way.
itishappy 4/3/2025||
I honestly can't tell what you're trying to say here. I'd argue there's some pretty significant barriers to each.
layer8 4/3/2025||||
I’m okay if someone else unpacks it.
throwawaylolllm 4/3/2025|||
[flagged]
MaxfordAndSons 4/3/2025||
As someone who's fairly ignorant of how AI actually works at a low level, I feel incapable of assessing how realistic any of these projections are. But the "bad ending" was certainly chilling.

That said, this snippet from the bad ending nearly made me spit my coffee out laughing:

> There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives.

arduanika 4/4/2025|
Sigh. When you talk to these people, their eugenics obsession always comes out eventually. Set a timer and wait for it.
Philpax 4/4/2025||
While I don't disagree that I've seen a lot of eugenics talk from rationalist(-adjacent)s, I don't think this is an example of it: this is describing how misaligned AI could technically keep humans alive while still killing "humanity."
arduanika 4/4/2025||
Fair enough. Sometimes it comes out as a dark fantasy projected onto their AI gods, rather than a thing that they themselves want to do to us.
ks2048 4/3/2025||
We know this is complete fiction because of parts where "the White House considers x, y, z...", etc., as if the White House in 2027 will be some rational actor reacting sanely to events in the real world.
sivaragavan 4/4/2025||
Thanks to the authors for doing this wonderful piece of work and sharing it with credibility. I wish people would see the possibilities here. But we are, after all, human. It is hard to imagine our own downfall.

Based on each individual's vantage point, these events might look closer or farther than mentioned here, but I have to agree nothing is off the table at this point.

The current coding capabilities of AI agents are hard to downplay. I can only imagine the chain reaction as this creation ability accelerates every other function.

I have to say one thing though: the scenario on this site downplays the amount of resistance that people will put up - not because they are worried about alignment, but because they are politically motivated by parties driven by their own personal motives.

ddp26 4/4/2025||
A lot of commenters here are reacting only to the narrative, and not the Research pieces linked at the top.

There is some very careful thinking there, and I encourage people to engage with those arguments rather than with the stylized narrative derived from them.

kmeisthax 4/3/2025|
> The agenda that gets the most resources is faithful chain of thought: force individual AI systems to “think in English” like the AIs of 2025, and don’t optimize the “thoughts” to look nice. The result is a new model, Safer-1.

Oh hey, it's the errant thought I had in my head this morning when I read the paper from Anthropic about CoT models lying about their thought processes.

While I'm on my soapbox, I will point out that if your goal is preservation of democracy (itself an instrumental goal for human control), then you want to decentralize and distribute as much as possible. Centralization is the path to dictatorship. A significant tension in the Slowdown ending is the fact that, while we've avoided AI coups, we've given a handful of people the ability to do a perfectly ordinary human coup, and humans are very, very good at coups.

Your best bet is smaller models that don't have as many unused weights to hide misalignment in, along with interpretability and faithful-CoT research. Make a model that satisfies your safety criteria and then make sure everyone gets a copy, so subgroups of humans get no advantage from hoarding it.
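
To make "don't optimize the 'thoughts' to look nice" concrete, here's a rough sketch (my own illustration, not something from the article or from any lab's actual training code) of one way to keep optimization pressure off the chain-of-thought: compute the training loss only over answer tokens and mask out the CoT span. The function name and mask convention are hypothetical.

    import torch
    import torch.nn.functional as F

    def answer_only_loss(logits, target_ids, cot_mask):
        # logits:     (seq_len, vocab_size) next-token predictions
        # target_ids: (seq_len,) ground-truth next tokens
        # cot_mask:   (seq_len,) bool, True where a token belongs to the
        #             chain-of-thought; those positions contribute nothing
        #             to the loss, so the "thoughts" are never trained to
        #             look nice -- only the final answer is.
        per_token = F.cross_entropy(logits, target_ids, reduction="none")
        answer_weight = (~cot_mask).float()
        return (per_token * answer_weight).sum() / answer_weight.sum().clamp(min=1.0)

The same masking idea applies to an RL reward: score only the final answer, never the intermediate tokens.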
