Posted by Tenoke 4/3/2025

AI 2027(ai-2027.com)
949 points | 621 comments
dalmo3 4/3/2025|
"1984 was set in 1984."

https://youtu.be/BLYwQb2T_i8?si=JpIXIFd9u-vUJCS4

dughnut 4/4/2025||
I don’t know about you, but my takeaway is that the author is doing damage control but inadvertently tipped their hand that OpenAI is probably running an elaborate con job on the DoD.

“Yes, we have a super secret model, for your eyes only, general. This one is definitely not indistinguishable from everyone else’s model and it doesn’t produce bullshit because we pinky promise. So we need $1T.”

I love LLMs, but OpenAI’s marketing tactics are shameful.

ImHereToVote 4/4/2025|
How do you know this?
dangus 4/4/2025||
I don’t think they claimed to know this.

I think it’s hilarious that apparently few have learned from Theranos or WeWork.

OpenAI is in a precarious position. Anything less than AGI will make them look like a bust. They are backed into a situation where they are heavily incentivized to lie and Theranos their way out of this and hope they can actually deliver something that resembles their pie in the sky predictions.

We are at the point where GPT-5 is starting to look like the iPhone 5.

maerF0x0 4/4/2025||
> OpenBrain reassures the government that the model has been “aligned” so that it will refuse to comply with malicious requests

Of course the real issue is that governments have routinely demanded that 1) those capabilities be developed for government monopolistic use, and 2) those who do not develop them lose the capability (geopolitical power) to defend themselves from those who do.

Using a US-centric mindset... I'm not sure what to think about the US not developing AI hackers, AI bioweapons development, or AI-powered weapons (like maybe drone swarms or something). If one presumes that China is, or Iran is, etc., then what's the US to do in response?

I'm just musing here and very much open to political-science-informed folks who might know (or know of leads) as to what kinds of actual solutions exist to arms races. My (admittedly poor) understanding of the Cold War wasn't so much that the US won, but that the Soviets ran out of steam.

zkmon 4/5/2025||
Nature is exploring ways to bring about the next extinction. It tried nuke piles, but somehow they were just sitting there. Next, it is trying out AI. Nature tricks humans into advancing in ways that are not really needed for them and not compatible with their natural evolution. Nature applies competition internal to a race to produce things that are completely unnecessary for the survival of that race, but necessary for its extinction.

Goat: Hey human, why are you creating AI?

Human: Because I can. And I can boast of my greatness. I can use it for money. I can weaponize it and use it to dominate and control other humans.

Goat: Why do you need all that?

Human: If I don't do it, others will do it and they will dominate me and take away all my stuff. It is not fair.

Goat: So it looks like who-owns-what issue. Did you try not owning stuff?

Nature: Shut up goat. I'm trying to do a big reset here.

soupfordummies 4/3/2025||
The "race" ending reads like Universal Paperclips fan fiction :)
croemer 4/4/2025||
Pet peeve: they write FLOPS in the figure when they mean FLOP (a quantity of operations, not a rate). Maybe the plural s after FLOP got capitalized. https://blog.heim.xyz/flop-for-quantity-flop-s-for-performan...
barotalomey 4/4/2025||
It's always "soon" for these guys. Every year, the "soon" keeps sliding into the future.
somebodythere 4/4/2025|
AGI timelines have been steadily decreasing over time: https://www.metaculus.com/questions/5121/date-of-artificial-... (switch to all-time chart)
barotalomey 4/4/2025||
You meant to say that people's expectations have shifted. That's expected, given the amount of hype this tech gets.

Hype affects market value, though, not reality.

somebodythere 4/4/2025||
I took your original post to mean that AI researchers' and AI safety researchers' expectation of AGI arrival has been slipping towards the future as AI advances fail to materialize! It's just, AI advances have been materializing, consistently and rapidly, and expert timelines have been shortening commensurately.

You may argue that the trendline of these expectations is moving in the wrong direction and should get longer with time, but that's not immediately falsifiable and you have not provided arguments to that effect.

barotalomey 4/5/2025||
> and you have not provided arguments to that effect

The burden of proof lies on those with extraordinary claims. I am simply skeptical.

overgard 4/4/2025||
Why is any of this seen as desirable? Assuming this is a true prediction, it sounds AWFUL. The one thing humans have that makes us human is intelligence. If we turn over thinking to machines, what are we, exactly? Are we supposed to just consume mindlessly, without work to do?
theragra 4/5/2025|
By the authors? They specifically state they don't like many of the decisions they describe.
overgard 4/8/2025||
No, I mean more by the AI hype people. I just see so much excitement around AI, and yet it seems likely to be catastrophic.
Fraterkes 4/4/2025||
Completely earnest question for people who believe we are on this exponential trajectory: what should I look out for at the end of 2025 to see if we're on track for that scenario? What benchmark that naysayers think is years away will we have met?
Q6T46nT668w6i3m 4/3/2025|
This is worse than the mansplaining scene from Annie Hall.
arduanika 4/4/2025|
You mean the part where he pulls out Marshall McLuhan to back him up in an argument? "You know nothing of my work..."