Posted by Tenoke 4/3/2025

AI 2027 (ai-2027.com)
949 points | 621 comments
silexia 4/5/2025|
The accelerated path described here is exactly what would happen. Humans will likely be wiped out in the next few years by our own creation.
disambiguation 4/3/2025||
Amusing sci-fi. I give it a B- for bland prose, weak story structure, and lack of originality - assuming this isn't all AI-gen slop, which is awarded an automatic F.

>All three sets of worries—misalignment, concentration of power in a private company, and normal concerns like job loss—motivate the government to tighten its control.

A private company becoming "too powerful" is a non-issue for governments, unless a drone army is somewhere in that timeline. Fun fact: the former head of the NSA sits on the board of OpenAI.

Job loss is a non-issue: if there are corresponding economic gains, they can be redistributed.

"Alignment" is too far into the fiction side of sci-fi. Anthropomorphizing today's AI is tantamount to mental illness.

"But really, what if AGI?" We either get the final say or we don't. If we're dumb enough to hand over all responsibility to an unproven agent and we get burned, then serves us right for being lazy. But if we forge ahead anyway and AGI becomes something beyond review, we still have the final say on the power switch.

dingnuts 4/3/2025||
How am I supposed to take articles like this seriously when they say absolutely false bullshit like this:

> the AIs can do everything taught by a CS degree

No, they fucking can't. Not at all. Not even close. I feel like I'm taking crazy pills. Does anyone really think this?

Why have I not seen -any- complete software created via vibe coding yet?

ladberg 4/3/2025||
It doesn't claim that's possible now; it's a fictional short story claiming "AIs can do everything taught by a CS degree" by the end of 2026.
senordevnyc 4/4/2025||
Ironically, the models of today can read an article better than some of us.
casey2 4/4/2025||
LessWrong brigade. They are all dropout philosophers; just ignore them.
anentropic 4/4/2025||
I'd quite like to watch this on Netflix
pera 4/3/2025||
From the same dilettantes who brought you the Zizians and other bizarre cults... thanks, but I'd rather read Nostradamus.
selfhoster11 4/5/2025||
I logged in specifically to say the following: I do not think it is possible that Scott Alexander would sign his name to something that would in any way promote Zizian views. I don't know him personally, but I've read enough to know where he stands.
arduanika 4/3/2025||
What a bad-faith argument. No true AI safety scaremonger brat stabs their landlord with a katana. The rationality of these rationalists is 100% uncorrelated with the rationality of *those* rationalists.
WhatsName 4/3/2025||
This is absurd, like taking any trend and drawing a straight line to extrapolate the future. If I did this with my tech stock portfolio, we would probably cross the zero line somewhere in late 2025...

If this article were an AI model, it would be catastrophically overfit.

AnimalMuppet 4/3/2025|
It's worse. It's not drawing a straight line; it's drawing one that curves up, on a log graph.
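To make that concrete, here is a minimal sketch in Python with invented numbers (nothing here comes from the article): a straight line on a log axis means capability multiplies by a constant factor each year, while a line that curves up on a log axis means the yearly multiplier itself keeps growing.

```python
# Toy illustration with made-up numbers: straight-on-log growth
# (a constant 10x per year, i.e. plain exponential) vs. a series
# whose log10 increment grows every year (super-exponential).
years = [2023, 2024, 2025, 2026, 2027]

# Straight line on a log axis: log10(value) rises by 1 each year.
exponential = [10 ** (y - 2023) for y in years]

# Curving up on a log axis: the log10 increment itself grows.
super_exponential = [10 ** (0.5 * (y - 2023) ** 1.5) for y in years]

for y, e, s in zip(years, exponential, super_exponential):
    print(f"{y}: straight-on-log={e:,.0f}  curving-up={s:,.0f}")
```

The second series is the kind of extrapolation the parent comments object to: once the multiplier is allowed to grow, almost any arrival date can be made to look like it's on schedule.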
Willingham 4/3/2025||
- October 2027 - 'The ability to automate most white-collar jobs'

I wonder which jobs would not be automated? Therapy? HR?

hsuduebc2 4/3/2025|
Board of directors
mlsu 4/3/2025||
https://xkcd.com/605/
acje 4/3/2025|
2028: human text is too ambiguous a data source to get to AGI. 2127: AGI figures out flying cars and fusion power.
wkat4242 4/4/2025|
I think it also really limits the AI to the context of human discourse, which means it's hamstrung by our imagination, interests, and knowledge. This is not where an AGI needs to go; it shouldn't copy and paste what we think. It should think on its own.

But I don't view LLMs as a path to AGI on their own. I think they're really great at being text engines and at interfacing with humans, but there will need to be other models for the actual thinking. Instead of having just one model (the LLM) doing everything, I think there will be a hive of different, more special-purpose models, and the LLM will be how they communicate with us. That solves so many of the problems we currently have from using LLMs for things they were never meant to do.
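As a toy sketch of that "hive of specialists" architecture (every name here is hypothetical, not a real framework): narrow back-end models do the actual work, and the LLM-style front-end only routes requests and phrases results for the human.

```python
# Hypothetical sketch of the "hive of specialist models" idea:
# special-purpose back-ends do the thinking; an LLM-like front-end
# dispatches tasks and turns raw outputs into natural language.
# All names are invented for illustration.
from typing import Callable, Dict

def math_specialist(task: str) -> str:
    # Stand-in for a dedicated symbolic/numeric model.
    return f"[math result for: {task!r}]"

def planning_specialist(task: str) -> str:
    # Stand-in for a dedicated planning/search model.
    return f"[plan for: {task!r}]"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "math": math_specialist,
    "plan": planning_specialist,
}

def answer(task: str) -> str:
    """Crude keyword dispatch standing in for the LLM's routing role."""
    kind = "math" if any(c.isdigit() for c in task) else "plan"
    raw = SPECIALISTS[kind](task)
    # The LLM's other job in this architecture: human-facing phrasing.
    return f"The {kind} specialist says: {raw}"

print(answer("what is 17 * 23"))
print(answer("organize a three-day trip"))
```

The design point is the separation of concerns: the router and the phrasing layer can stay fixed while specialists are swapped in and out, which is exactly what using one LLM for everything doesn't allow.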
