Posted by Tomte 12 hours ago

The sigmoids won't save you (www.astralcodexten.com)
105 points | 142 comments
patrickmay 6 hours ago|
Stein's Law: "If something cannot go on forever, it will stop."
skybrian 5 hours ago|
Yes, but figuring out when is the hard part.
kubb 6 hours ago||
If the scary AI is so inevitable, why do you feel such an overwhelming need to convince people about that? Surely you can just wait a bit, and they'll see for themselves.
mitthrowaway2 6 hours ago||
By that reasoning, why even warn people about anything? Why do road construction crews put up signs saying "ROAD CLOSED AHEAD" when you can just drive on and see for yourself?
kubb 6 hours ago||
Indeed, why warn people about real things that exist in the world? That is EXACTLY the same as inciting fear about something imaginary (not even projected).
mitthrowaway2 5 hours ago||
In your mind, dangers from AI are imaginary and not even projected; therefore, you don't see any reason to warn about them, because you don't think the dangers are real. You don't believe the road is actually closed up ahead, so you don't think it's necessary to post the sign.

In Scott's mind, dangers from AI are not a known fact, but are somewhere between highly probable and a near-certainty. In his mind, there are well-grounded justifications for believing that AI poses substantial future dangers to the public. Therefore he also believes he should inform people about this, and strives to convince skeptics, so that we might steer clear.

It's easy to understand why someone who believes what you believe about AI would of course not warn people about AI. It's also easy to understand why someone who believes what Scott believes about AI would want to warn people about AI. Your contention is with his confidence in the danger from AI, not with his reason for wanting to warn people.

kubb 4 hours ago||
Gosh, it's quite embarrassing to have to spell it out, but you inserted the part about Scott's motivations. It can't be found in the text.

Neither can any specific discussion of what the dangers are and how we can steer clear. It all comes pre-planted in your head. The only thing that Scott is playing on (as far as we can see) is your ingrained fear, by using an ominous headline and a vague reference to something "scary" in the conclusion.

Of course there was no reason to "warn" you; you already believed in the scary future. Scott is just giving you fuel, which you seem to appreciate.

mitthrowaway2 4 hours ago|||
Is this the first essay of Scott's that you've read?
kubb 2 hours ago||
The first and the last. It doesn't come as a surprise that they're regularly submitted to HN, often multiple times, and rarely get more than 5 upvotes.
djeastm 2 hours ago|||
>as far as we can see

If only there were a way to see more of Scott's thoughts on the subject of AI...

kubb 1 hour ago||
Sorry, there are many better things to read than Scott's thoughts.
throwawayk7h 2 hours ago|||
Yeah! And if climate change is so inevitable, why do the people who want to prevent it from happening seem hell-bent on convincing people that climate change is real?
adleyjulian 6 hours ago||
1. It's not inevitable. 2. Those who see AI as an existential risk don't generally think it's a guarantee, but if it's, say, a 5% chance, then that's worth addressing/mitigating. 3. That's not what this article was even about.
kubb 6 hours ago||
Sounds like the burden is on you to explain either:

  1. If you're not treating my claim as a black box: what, explicitly, is your model of what the article was about? Are you aware, for example, of the last paragraph of the article? I think that WAS what the article was about. Do you have specific opinions on e.g. how I went wrong and where my model differs?
  2. If you are treating it as a black box, what's your default expectation based on the law of Nothing Ever Happens?
Just kidding, you don't need to explain anything. A"I" fearmongers should though.
adleyjulian 5 hours ago||
The point of the article is that people are historically bad at predicting when exponential curves plateau, even if they're correct that there will be a plateau.

This does *not* imply the inevitability of AGI. It does not imply AGI is necessarily bad.

It does mean that "the capabilities of AI will eventually plateau" offers no meaningful predictive power or relevance to the overall AI discussion.

nathan_compton 6 hours ago||
A lot of words to say "The initial part of a sigmoidal curve is not very informative about the parameters of the sigmoid function in question."
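
A minimal sketch of why (the curve, noise levels, and starting guesses are all invented for illustration, nothing here is from the article): fit a logistic to data from its early region only, and the estimated ceiling is barely constrained.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, L, k, x0):
        return L / (1 + np.exp(-k * (x - x0)))

    # Toy data: sample only the early region of a logistic whose true
    # ceiling is L=100 and whose midpoint (x0=5) lies well ahead.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 3, 30)
    y_true = logistic(x, 100.0, 1.0, 5.0)
    for noise in (0.1, 0.5, 1.0):
        y = y_true + rng.normal(0, noise, x.size)
        (L, k, x0), _ = curve_fit(logistic, x, y, p0=(10.0, 1.0, 1.0), maxfev=20000)
        print(f"noise={noise}: fitted ceiling L={L:.1f} (true L=100)")
    # Even modest noise lets the fitted ceiling wander far from 100:
    # the early region is nearly indistinguishable from a pure exponential.
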
inglor_cz 6 hours ago|
That is true, but I generally enjoy reading a lot of words from Scott, who has a talent for writing.

The entire plot of the Lord of the Rings could probably be compressed into less than 10 kB of text too.

Edit: this seems to be a controversial comment, but IMHO a blog of Scott Alexander's type is an art form, not just a communication channel.

jeffreyrogers 6 hours ago||
I find him more interesting when he talks about non-AI topics. Lots of other interesting people are like this too. I'd rather get my knowledge on AI from people who have unique insights into it. Scott has a lot of unique perspectives of his own, but his views on AI are bog-standard for his social group.
inglor_cz 6 hours ago||
Frankly, me too, but he is still smart enough to introduce some grains of original thought even into those bog-standard views.
jrflowers 2 hours ago||
I like this article about how we should assume, at any given point, that we are exactly halfway through a phenomenon, which relies on a single data point on a graph (one that apparently doesn't need its relevance or importance explained) to illustrate that this is obviously true for AI in particular
itkovian_ 6 hours ago||
The other thing people don’t understand is that exponential curves are self-similar. The start of an exponential looks like an exponential. People always look at it and think ‘well, that’s it, it’s exponential now, I’ve missed it, it can’t sustain’. Nope.

A good example of this is the number of submissions to NeurIPS/ICML/ICLR. In 2017 that curve was exponential.
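
To make the self-similarity concrete (a toy check of my own, not from the comment): shifting an exponential along x only rescales it in y, so any early window has the same shape as any later one.

    import numpy as np
    x = np.linspace(0, 1, 5)
    early = np.exp(x)        # the "start" of the curve
    later = np.exp(x + 10)   # a same-width window, 10 units further along
    # e^(x+10) = e^10 * e^x, so the two windows differ only by a y-scale:
    print(np.allclose(later, np.exp(10) * early))  # True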

addaon 6 hours ago||
https://xkcd.com/605/
ngruhn 5 hours ago||
> all exponentials eventually become sigmoids

Except innovation. When one sigmoid tapers off, we keep finding new ones to keep the climb going.
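
A toy illustration of that stacking (my sketch, with invented ceilings and midpoints): successive sigmoids, each saturating at a higher level, keep the aggregate climbing long after each individual one flattens.

    import numpy as np
    x = np.linspace(0, 30, 7)
    # Six sigmoids: the i-th has ceiling 2**i and midpoint x = 5*i.
    total = sum(2**i / (1 + np.exp(-(x - 5 * i))) for i in range(6))
    print(total.round(1))  # still climbing long after sigmoid 0 has flattened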

inglor_cz 6 hours ago||
Hmmm, this is quite an interesting take by Scott.

Lindy's Law is not actually a law, and many precise minds will be provoked by the very name; it also fails spectacularly in certain contexts (e.g. the lifetime of a single organism, though not necessarily the existence of entire species).

But at the same time, I am willing to take its invocation in the context of AI somewhat seriously. There is an international arms race with China, which has less compute, but more engineers and scientists. This sort of intellectual arms race does not exhaust itself easily.

A similar space race in the 1950s and 1960s progressed from the first unmanned spaceflight to a moonwalk in a mere 12 years, which is probably less than what it takes to approve a bicycle lane in Chicago now.

krupan 6 hours ago||
"There is an international arms race with China"

I keep seeing this. Where did it come from? Has China said that they intend to attack other countries using AI? Have other countries declared that they intend to attack China with AI?

Also, why does anyone believe that AI could actually be that dangerous, given its inherently unpredictable and unreliable performance? I would be terrified to rely on AI in a life-or-death situation.

aspenmartin 6 hours ago|||
AI in war is basically Palantir's whole business model. You have a system that can effectively deal with ambiguity and has superhuman performance on reasoning, plus superhuman physical abilities via embodiment…

Inherently unpredictable and unreliable performance is quite the feature of human beings as well.

dmbche 6 hours ago||||
https://www.forbes.com/sites/greatspeculations/2025/11/25/wh...
inglor_cz 6 hours ago|||
It was a metaphor. I meant, and later clarified, an intellectual arms race.

BTW your handle is an actual Czech word, minus a diacritic ("křupan"), and a bit of an amusing one. It basically means hillbilly. Not that it matters, just FYI.

Anyway: AI will be used in military context, and it probably already is. Both for target acquisition and maybe even driving the weapon itself. As of now, the Ukrainians are almost certainly operating some AI-enabled killer drones.

krupan 1 hour ago||
That's funny, I was told my real last name is a swear word in Czech
mitthrowaway2 6 hours ago||
It's not a law per se, but there are rules for reasoning under uncertainty to get the most out of what limited knowledge you have, and Lindy's law arises from that. To do better than Lindy's law requires having additional information about the problem beyond just the one data point.
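
One common way to formalize this (Gott's "Copernican" argument; bringing it in here is my assumption, not something the comment spells out): if your only datum is that you observed the process at a uniformly random point in its lifetime, the median remaining lifetime equals its observed age.

    import numpy as np
    rng = np.random.default_rng(0)
    n = 100_000
    # Unknown total lifetimes; the particular distribution doesn't matter,
    # because the remaining/age ratio depends only on the observation point.
    total = rng.pareto(1.5, n) + 1.0
    f = rng.uniform(0, 1, n)            # how far through the lifetime we look
    age, remaining = f * total, (1 - f) * total
    print(np.median(remaining / age))   # ~1.0: median remaining time = observed age
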
devmor 6 hours ago||
"Exponentials all tend to become sigmoids but you can't predict exactly when" is a true statement, but I'm not sure it needed an article.

This doesn't say much, and the author fights their own points a couple of times, suggesting that they maybe didn't think through what they wanted to write until they were in the middle of writing it, and started realizing their assumptions didn't match what they expected the data to say.

I really don't get the point of what I just read.

aspenmartin 6 hours ago|
The point is a response to the tiring arguments from AI skeptics saying “things are flattening, they have to”, which, while technically correct, says nothing, because no one knows when that will happen and we see no mechanism for it yet. Lindy’s law as a reasonable prediction under total uncertainty is interesting and insightful, and a lot of people don’t know about it or why it holds. I did enjoy the reference to this!
solid_fuel 3 hours ago|||
Nah, this is making a category error. You're assuming that AI skeptics agree that models are demonstrating intelligence along the same axis as humans, and that with further improvement they will become equivalent to humans. I am an AI skeptic, and I disagree with this assessment.

Model reasoning is on an s-curve, and it is improving.

Model intelligence is not the same as reasoning. It's a different axis, and one I have not seen much movement on.

See, humans have a recursive form of intelligence which is capable of self-reflection and introspection. LLMs can only reason about tokens which have already been emitted. Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs. Therefore it is a mistake to assume that continual improvement on the reasoning scale will result in something that is equivalent enough to humans on the intelligence axis to replace all labor.

aspenmartin 2 hours ago||
> You're assuming that AI skeptics agree that models are demonstrating intelligence along the same axis as humans and that with further improvement they will become equivalent to humans.

No, I'm definitely not saying this, and I don't quite know what it means

> Model reasoning is on an s-curve, which is improving.

Is this saying two different things? I think I might agree with this in principle, as in maybe there is some sort of s-curve or something like it, but do we see evidence of this? Where?

> Model intelligence is not the same as reasoning. It's a different axis, and one I have not seen much movement on.

Can you clarify this? What is the distinction, and what makes you say you have "not seen much movement" on it?

> See, humans have a recursive form of intelligence which is capable of self-reflection and introspection. LLMs can only reason about tokens which have already been emitted

LLMs do self-reflection and introspection in context, and tweaks such as value functions (serving a similar purpose to intuition or emotion) may make this better. Why do you feel self-reflection and introspection are a fundamental limitation here? Models already reason over tokens they have emitted and with their own learned behavior. Are you just talking about continual learning? Also, I feel people latch onto LLMs as if they were all of AI. Why? SSMs, memory networks, recurrent neural networks, etc. are all part of AI but aren't as popular, because they can't yet compete with LLMs in terms of scaling laws and training efficiency, due to e.g. hardware and software optimization and investment being focused on LLMs. If something else comes along that works better, we'll just start scaling that.

> Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs.

Very strong statement; any theoretical or experimental basis for this? I also don't particularly care personally, other than as a point of curiosity. Why does it matter whether AI systems develop the same reasoning mechanisms as humans? In fact it may be much better if they don't.

> Therefore it is a mistake to assume that continual improvement on the reasoning scale will result in something that is equivalent enough to humans to replace all labor.

Idk I didn’t say this explicitly but I also dont think it matters if we have a system “equivalent to humans” or one that “replaces all labor”.

solid_fuel 2 hours ago||
~~Slate Star Codex~~ Astral Codex Ten, the original article, was making the argument that "model intelligence" is on an s-curve, and from there drawing the conclusion that the curve will likely continue and models will reach human-level intelligence or beyond.

I am making the argument that how we measure model intelligence is flawed, and that we are actually measuring something closer to "reasoning" than "intelligence". If you want evidence, we'll need a different form of tests, but how about I just gesture at the fact that GPT supposedly outscored PhDs on a broad range of subjects at least a year ago, and to date is not replacing PhD jobs.

We see this pattern of high scores on tests but mediocre performance in the real world all over the place. From that, I draw the conclusion that it can reason like a PhD, but it can't think like a PhD.

So, we may see an s-curve on the measure of model reasoning but that doesn't imply they will overtake us or even match us on measures of intelligence.

As to your other questions:

> LLMs do self reflection and introspection in context,

> Why do you feel self reflection and introspection are a fundamental limitation here? Models reason over tokens they have emitted and also with their own sense and learned behavior already. Are you just talking about continual learning?

I disagree that models are reflecting and introspecting in a way equivalent to human intelligence here. They can reason over tokens which have been emitted, but by the same measure they cannot reason over tokens which have not been emitted. It's hard to make this point without drawing some diagrams, but I believe that human intelligence has internal loops, where many ideas may be turned over simultaneously before an action is taken. In comparison, an LLM might "feel uncertain" about a token before emitting it, but once it is emitted, that uncertainty and the other near-neighbor options are lost, and the LLM is locked into the track set by the top-choice token. I think this is where hallucinations come from, amongst other issues.

Context isn't sufficient for an internal reasoning loop, because the tokens that compose context lose a lot of the information the network itself generated when picking those tokens. They occupy a much lower-dimensional space than the "internal reasoning" processes of the transformer do.
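
A toy illustration of the information-loss point (my own sketch, with made-up numbers): the model's full next-token distribution can encode a near-tie, but downstream steps only ever see the single emitted token id.

    import numpy as np
    logits = np.array([2.0, 1.9, 0.1])             # near-tie between two continuations
    probs = np.exp(logits) / np.exp(logits).sum()  # full next-token distribution
    token = int(np.argmax(probs))                  # emitting collapses it to one id
    print(probs.round(3), "->", token)             # later steps see only `token`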

>> Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs.

> Very strong statement, any theoretical or experimental basis for this?

It's just my theory, but this is what I have been gesturing at. You already know about RNNs, so I'll put it in those terms: the core of an intelligent network should be an RNN, not a transformer. But we fundamentally cannot train a network like that to work like an LLM, because backprop doesn't work when there is infinite recursion, and without being able to bootstrap off the knowledge and reasoning baked into human text, there's no sufficient source of training material beyond being embodied.

---

EDIT:

I missed this, which I also want to reply to:

> Why does it matter if AI systems will develop equivalent reasoning mechanisms as humans? In fact it may be much better not to.

I actually agree that it may be better if they did not develop equivalent reasoning, but I don't see a world in which machines replace human labor without being intellectually equivalent.

As I think about it though, "dumb" machines which can follow reasoning but not think like humans are a rather scary proposition, honestly. Seems like a tool that would be wielded without restraint by those in power to control those who aren't.

devmor 3 hours ago|||
But those skeptics are initially responding to the constant AI hype claims that we are exponentially growing to AGI. So this article is in fact just a (very poorly thought-through) attempt at saying "nuh uh, the hype might be true, you can't prove it's not yet!"
aspenmartin 3 hours ago||
Yet the evidence is on the side of the hype? We don't see any mechanism or cogent framework for what theoretical limits exist here, at least none that I'm aware of. Are you? Epoch had a great article a year ago looking at several bottlenecks in terms of scale, and back then we were about 4 orders of magnitude away from hitting them. We're probably now closer to 3. Yet scale is only part of the performance equation; a fairly big chunk of progress is from algorithmic or curation-related contributions. The point of the article is a response to this:

> But those skeptics are initially responding to the constant AI hype claims that we are exponentially growing to AGI.

This is a meaningless statement or at best just strawmanning.

t43562 1 hour ago||
Why is the evidence on the side of the hype? Why do you assume something is size X just because nobody has proved it's smaller yet?

The evidence is just whatever it is; we cannot make predictions with it.

BoredPositron 6 hours ago|
If you use the log scale, you'll see that the time horizon of Opus 4.6 was as expected...
afthonos 6 hours ago||
As expected by the exponential. The Wharton study was predicting when the exponential would turn into a sigmoid.
ReptileMan 6 hours ago||
Everything is linear on a log-log scale with a fat marker.