
Posted by Tomte 11 hours ago

The sigmoids won't save you(www.astralcodexten.com)
87 points | 122 comments
andai 6 hours ago|
Well, curve shape aside, the high watermark might be lower than where it tapers off.

https://news.ycombinator.com/item?id=46199723

Brendinooo 5 hours ago||
> then what is their model?

My mental model has been 3D computer graphics: doubling the polygon count had huge returns early on but delivered diminishing returns over time.

Ultimately, you can't make something look more realistic than real.

I don't know what the future holds, but the answer to the question "can LLMs be more realistic than real" will determine much about whether or not you think the curve will level off soon.

the8472 18 minutes ago|
The equivalent bar in this domain would be human intelligence, and we already have growing lists of tasks where machines outperform humans. We even know of natural systems that outperform humans on some metrics, e.g. bird brains have higher neuron density than ours because evolution had to optimize more for weight.
janalsncm 5 hours ago||
> What if you don’t fully understand the process? AI forecasters know some things (like how data centers work and how much it costs to build them). But they’re unsure about other things (researchers keep inventing new paradigms of data generation that get over data walls, but for how long?), and other things are entirely opaque (What is intelligence really? Why do scaling laws work? Might they just stop working at some point?) Is there anything you can do here?

This is the crux of the article. To a large extent continued progress depends on a stable increase in compute, an increase in training data, and an increase in good ideas to squeeze more out of both of them.

One calculation you could do is a survival function: for each of the above, how long before it is disrupted? For example, China could crack down on AI or invade Taiwan. Or data centers become politically unpopular in the US. Or, we could run out of great ideas. Very hard to predict.
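That survival-function framing can be sketched numerically. Assuming, purely for illustration, that each disruption arrives independently with a constant annual hazard rate (the rates below are invented placeholders, not forecasts), the probability that none has happened by year t is the product of the individual survival curves:

```python
import math

# Hypothetical annual hazard rates for each disruption;
# these numbers are illustrative guesses, not forecasts.
hazards = {
    "geopolitical shock": 0.03,
    "political backlash": 0.05,
    "idea drought": 0.04,
}

def survival(t, rates):
    """P(no disruption by time t) under independent constant hazards."""
    total_rate = sum(rates.values())
    return math.exp(-total_rate * t)

for years in (1, 5, 10):
    print(f"{years:2d} years: {survival(years, hazards):.2f}")
```

Swapping in different rates changes the timescale but not the shape: under constant hazards, "business as usual" decays exponentially, which is itself a reason to discount straight-line extrapolations.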

dsign 5 hours ago||
We did hit the sigmoid's plateau on airplane speed, but the applications of airplane speed are still coming (how fast can a Chinese company air-ship the PCB you ordered three minutes ago?). I expect the same will happen with LLMs, though I also happen to believe things are just getting started on end capabilities.
jsmcgd 4 hours ago||
> It’s true that birth rates must eventually flatten out and become sigmoid

All positive growth eventually flattens out and becomes sigmoid, but a lot of phenomena experience negative growth and nose dive. No gentle curve, but a hard kink and perfect flat line at zero. Forever. I think it would be a stretch to categorize that pattern as sigmoid. Predicting a sigmoid pattern for negative growth implies some sort of a soft landing (depending on your definition of soft).

We can think of many populations that are no longer with us. So just a caution about over-applying this reasoning in the negative case.
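A toy simulation illustrates the caution (all parameters are invented for illustration): plain logistic growth flattens into a sigmoid at its carrying capacity, while growth that burns through a finite, nonrenewable resource overshoots and then crashes toward zero, with no gentle plateau.

```python
def logistic_path(steps, r=0.5, capacity=100.0, pop=1.0):
    """Discrete logistic growth: saturates smoothly at the carrying capacity."""
    out = []
    for _ in range(steps):
        pop += r * pop * (1 - pop / capacity)
        out.append(pop)
    return out

def collapse_path(steps, r=0.5, resource=500.0, pop=1.0):
    """Growth fueled by a nonrenewable stock: overshoot, then crash."""
    out = []
    for _ in range(steps):
        growth = r * pop if resource > 0 else -r * pop  # starve once the stock is gone
        resource -= pop          # each unit of population consumes one unit per step
        pop = max(pop + growth, 0.0)
        out.append(pop)
    return out

smooth = logistic_path(60)  # levels off near 100
crash = collapse_path(60)   # spikes, then collapses toward zero
```

Tuning r or the resource stock moves the crash in time but doesn't soften it: the decline is a kink, not a taper, which is the pattern the comment above says a sigmoid forecast would miss.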

Qem 46 minutes ago|
> All positive growth eventually flattens out and becomes sigmoid, but a lot of phenomena experience negative growth and nose dive.

https://en.wikipedia.org/wiki/Seneca_effect

zkmon 5 hours ago||
The curve is a smoothed step curve (y = 1 if x > 0, otherwise 0). Nature doesn't allow any change to happen instantly at any degree of rate of change. The curve is just a manifestation of a change with exponential smoothing of the sharp corners.

For example, when a car starts, its speed and acceleration become greater than zero. But what about rates of change of higher order? The car doesn't suddenly jump from zero acceleration to non-zero acceleration. That means the car has non-zero derivatives at all orders; in other words, the movement is exponential. The same thing happens in reverse when the car reaches a constant speed.
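The "smoothed step" picture can be made concrete with a logistic sigmoid, which is infinitely differentiable everywhere, unlike the hard step it approximates (a minimal sketch; the steepness k is an arbitrary choice):

```python
import math

def hard_step(x):
    """Idealized instant change: 0 before the threshold, 1 after."""
    return 1.0 if x > 0 else 0.0

def smooth_step(x, k=10.0):
    """Logistic sigmoid: approaches the hard step as k grows,
    but the transition stays smooth at every order of derivative."""
    return 1.0 / (1.0 + math.exp(-k * x))

for x in (-1.0, -0.1, 0.0, 0.1, 1.0):
    print(f"x={x:+.1f}  step={hard_step(x):.0f}  sigmoid={smooth_step(x):.3f}")
```

Cranking k up makes the sigmoid look ever more like the step while never actually producing the instantaneous jump, which is the point the comment is making.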

pyrale 1 hour ago||
Such a long article to say that neither side has a fucking idea about what will happen next.

While we're at it, the "exponentials are actually sigmoids" meme is not necessarily true. While real-world exponentials are never true exponentials, sigmoids are not guaranteed either. Overshoot-and-collapse also happens in tech, e.g. the dotcom bubble, or the successive AI winters.

andrewflnr 1 hour ago|
It's really not that long, and is quite clear that its main point is about how to reason when you realize no one actually knows what's going on.
krupan 5 hours ago||
News flash: predicting the future is hard
energy123 5 hours ago|
The individual who is the best at predicting the future is predicting ASI and full labor automation by 2040:

https://xcancel.com/peterwildeford/status/202963666232244661...

dsign 4 hours ago|||
My own bet is end of that decade: somewhere between 2045 and 2050.

Ofc "full labor automation" has a certain spread of meanings. A sliver of the population will always find ways to hold on to a job or run one or many businesses. But there will be "enough" labor automation for it to be a social ticking bomb. That, in fact, does not depend on models or AI better than what we have today. By 2045 there will be a couple of generations that have been outsourcing their thinking to AI for most of their adult lives. Some of them may still work as legal flesh of sorts, but many won't get to be middlemen and will find no job.

Also, if you could replace your senator today by an untainted version of a frontier model (of today), would you do it? Would it be a better ruler? What are the odds of you not wanting to push that button in the next twenty years, after a few more batches of incompetent and self-serving politicians?

renticulous 2 hours ago||
Complexity of our human world has gone up so much that humanity actually needs something like AI to ensure further progress. It's impossible to expect a human to learn all fields in a shallow manner (and be a generalist politician) or one field in full depth (i.e. an expert who pushes the frontier).
Aurornis 5 hours ago||||
> The individual who is the best at predicting the future

Going to need a big citation for that claim

hirvi74 3 hours ago|||
No need. That man predicted he would be the best at predicting the future.
margalabargala 5 hours ago|||
Source: trust me bro
solid_fuel 2 hours ago||||
> The individual who is the best at predicting the future

Yeah well my prophet says he can beat up your prophet in a fight.

---

Here in reality, I'm not accustomed to taking random predictions without backing evidence as if they were truth.

layer8 5 hours ago||||
Predicting who will predict the future best is hard.
gerikson 5 hours ago||||
Past results are no guarantee of future performance.
margalabargala 5 hours ago|||
> The individual who is the best at predicting the future

Lol

patrickmay 5 hours ago||
Stein's Law: "If something cannot go on forever, it will stop."
skybrian 5 hours ago|
Yes, but figuring out when is the hard part.
kubb 5 hours ago|
If the scary AI is so inevitable, why do you feel such an overwhelming need to convince people about that? Surely you can just wait a bit, and they'll see for themselves.
mitthrowaway2 5 hours ago||
By that reasoning, why even warn people about anything? Why do road construction crews put up signs saying "ROAD CLOSED AHEAD" when you can just drive on and see for yourself?
kubb 5 hours ago||
Indeed, why warn people about real things that exist in the world? That is EXACTLY the same as inciting fear about something imaginary (not even projected).
mitthrowaway2 4 hours ago||
In your mind, dangers from AI are imaginary and not even projected, therefore, you don't see any reason to warn about them, because you don't think the dangers are real. You don't believe the road is actually closed up ahead, so you don't think it's necessary to post the sign.

In Scott's mind, dangers from AI are not a known fact, but are somewhere between highly probable and a near-certainty. In his mind, there are well-grounded justifications for believing that AI poses substantial future dangers to the public. Therefore he also believes he should inform people about this, and strives to convince skeptics, so that we might steer clear.

It's easy to understand why someone who believes what you believe about AI would of course not warn people about AI. It's also easy to understand why someone who believes what Scott believes would want to warn people about AI. Your contention is with his grounds for being worried about AI, not with his reasons for wanting to warn people.

kubb 3 hours ago||
Gosh, it's quite embarrassing to have to spell it out, but you inserted the part about Scott's motivations. It can't be found in the text.

Neither can any specific discussion of what the dangers are and how we can steer clear. It all comes preplanted in your head. The only thing that Scott is playing on (as far as we can see) is your ingrained fear, by using an ominous headline, and a vague reference to something "scary" in the conclusion.

Of course there was no reason to "warn" you, you already believed in the scary future. Scott is just giving you fuel, which you seem to appreciate.

mitthrowaway2 3 hours ago|||
Is this the first essay of Scott's that you've read?
kubb 1 hour ago||
The first and the last. Doesn't come as a surprise that they're regularly added to HN, often multiple times and rarely ever get more than 5 upvotes.
djeastm 1 hour ago|||
>as far as we can see

If only there were a way to see more of Scott's thoughts on the subject of AI..

kubb 1 hour ago||
Sorry, there are many better things to read than Scott's thoughts.
throwawayk7h 1 hour ago|||
Yeah! And if climate change is so inevitable, why do the people who want to prevent it from happening seem hell-bent on convincing people that climate change is real?
adleyjulian 5 hours ago||
1. It's not inevitable. 2. Those that see AI as an existential risk don't generally think it's a guarantee, but if it's say a 5% chance then that's worth addressing/mitigating. 3. That's not what this article was even about.
kubb 5 hours ago||
Sounds like the burden is on you to explain either

  1. If you're not treating my claim as a black box, explain explicitly what is your model of what the article was about? Are you aware, for example of the last paragraph of the article? I think that WAS what the article was about. Do you have specific opinions on e.g. how I went wrong and where my model differs?
  2. If you are treating it as a black box, what's your default expectation based on the law of Nothing Ever Happens?
Just kidding, you don't need to explain anything. A"I" fearmongers should though.
adleyjulian 4 hours ago||
The point of the article is that people are historically bad at predicting when exponential curves plateau, even if they're correct that there will be a plateau.

This does *not* imply the inevitability of AGI. It does not imply AGI is necessarily bad.

It does mean that "the capabilities of AI will eventually plateau" offers no meaningful predictive power or relevance to the overall AI discussion.
