Posted by Tenoke 4/3/2025

AI 2027 (ai-2027.com)
949 points | 621 comments
Aldipower 4/4/2025|
No one can predict the future. Really, no one. Sometimes there is a hit, sure, but mostly it is a miss.

The other thing is in their introduction: "superhuman AI". _Artificial_ intelligence is, by definition, always different from _natural_ intelligence. That they've chosen the word "superhuman" shows me that they are mixing the two up.

kmoser 4/4/2025|
I think you're reading too much into the meaning of "superhuman". I take it to mean "abilities greater than any single human" (for the same amount of time taken), which today's AIs have already demonstrated.
noncoml 4/3/2025||
2015: We will have FSD (full autonomy) by 2017
wkat4242 4/4/2025|
Well, Teslas do have "Full Self Driving". It's not actually fully self-driving, and that doesn't even seem to be on the horizon, but that doesn't appear to be stopping Tesla supporters.
ugh123 4/4/2025||
I don't see the U.S. nationalizing something like OpenBrain. I think both investors and gov't officials will realize it's far more profitable for them to contract out major initiatives to said OpenBrain company, like an AI SpaceX-like company. I can see where this is going...
lanza 4/4/2025||
Without reading an entire novel's worth of text, do they explain why they picked these dates? They have a separate timeline post where the 90th percentile of superhuman coder is later than 2050. Did they just go for shock value and pick the scariest timeline?
vagab0nd 4/3/2025||
Bad future predictions: short-sighted guesses based on current trends and vibe. Often depend on individuals or companies. Made by free-riders. Example: Twitter.

Good future predictions: insights into the fundamental principles that shape society, more law than speculation. Made by visionaries. Example: Vernor Vinge.

0_____0 4/5/2025||
Fun read, it reminds me a bit of Neuromancer x Universal Paperclips.
someothherguyy 4/4/2025||
I know there are some very smart economists bullish on this, but the economics do not make sense to me. All these predictions seem meaningless outside of the context of humans.
heurist 4/4/2025||
Give AI its own virtual world to live in, where the problems it solves are encodings of the higher-order problems we present, and you shouldn't have to worry about this stuff.
toddmorey 4/3/2025||
I worry more about the human behavior predictions than the artificial intelligence predictions:

"OpenBrain’s alignment team26 is careful enough to wonder whether these victories are deep or shallow. Does the fully-trained model have some kind of robust commitment to always being honest?"

This is a capitalist arms race. No one will move carefully.

fire_lake 4/3/2025|
> OpenBrain still keeps its human engineers on staff, because they have complementary skills needed to manage the teams of Agent-3 copies

Yeah, sure they do.

Everyone seems to think AI will take someone else’s jobs!
