
Posted by ecto 7 hours ago

The Singularity will occur on a Tuesday (campedersen.com)
709 points | 399 comments | page 2
jgrahamc 6 hours ago|
Phew, so we won't have to deal with the Year 2038 Unix timestamp rollover after all.
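[For context, a minimal Python sketch of the rollover being referenced, assuming the classic signed 32-bit time_t:]

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

# The last representable second:
last = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last)  # 2038-01-19 03:14:07+00:00

# One second later the counter wraps around to -2**31,
# which lands back in 1901:
wrapped = (INT32_MAX + 1) - 2**32  # simulate signed 32-bit overflow
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```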
Nition 2 hours ago||
January 20, 2038

Yesterday as we huddled in the cave, we thought our small remnant was surely doomed. After losing contact with the main Pevek group last week, we peered out at the drone swarm which was now visibly approaching - a dark cloud on the horizon. Then suddenly, at around 3pm by Zoya's reckoning, the entire swarm collapsed and fell out of the sky. Today we are walking outside in the sun, seemingly unobserved. A true miracle. Grigori, who once worked with computers at the nuclear plant in Bilibino, only says cryptically: "All things come to an end with time."

jacquesm 6 hours ago|||
I suspect that's the secret driver behind a lot of the push for the apocalypse.
devsda 2 hours ago|||
It also means we don't have to deal with the maintenance of vibecoded production software from the 2020s!
lysace 2 hours ago|||
Back in like 1998 there was a group purchase for a Y2038 t-shirt with some clever print on some hot email list I was on. I bought one. It obviously doesn't fit me any longer.

It seemed so impossibly far away. Now it's 12 years.

octernion 6 hours ago||
that was precisely my reaction as well. phew machines will deal with the timestamp issue and i can just sit on a beach while we singularityize or whatever.
jacquesm 6 hours ago||
You won't be on the beach when you get turned into paperclips. The machines will come and harvest your ass.

Don't click here:

https://www.decisionproblem.com/paperclips/

octernion 6 hours ago|||
having played that when it came out, my conclusion was that no, i will definitely be able to be on a beach; i am too meaty and fleshy to be good paperclip
jacquesm 4 hours ago||
Sorry, we need the iron in your blood and bone marrow. Sluuuurrrrrpppp.... Enjoy the beach, or what's left.
dwaltrip 4 hours ago||
Much better sources of iron are available.

More likely we get smooshed unintentionally as the AIs seek those out.

jacquesm 4 hours ago||
We need it all... oh, wait, you're not silicon... sluuuuuurrrrpp...
blahbob 5 hours ago||
It reminds me of that cartoon where a man in a torn suit tells two children sitting by a small fire in the ruins of a city: "Yes, the planet got destroyed. But for a beautiful moment in time, we created a lot of value for shareholders."
saulpw 5 hours ago|
By Tom Toro for the New Yorker (2012).
s1mon 2 hours ago||
Many have predicted the singularity, and I found this to be a useful take. I do note that Hans Moravec predicted in 1988's "Mind Children" that "computers suitable for humanlike robots will appear in the 2020s", which is not completely wrong.

He also argued that computing power would continue growing exponentially and that machines would reach roughly human-level intelligence around the early to mid-21st century, often interpreted as around 2030–2040. He estimated that once computers achieved processing capacity comparable to the human brain (on the order of 10¹⁴–10¹⁵ operations per second), they could match and then quickly surpass human cognitive abilities.

Taniwha 2 hours ago||
I was at an alternative-type computer unconference and someone had organised a talk about the singularity. It was in a secondary school classroom, and as evening fell in a room full of geeks, no one could figure out how to turn on the lights... we concluded that the singularity probably wasn't going to happen
Nition 5 hours ago||
I'm not sure about current LLM techniques leading us there.

Current LLM-style systems seem like extremely powerful interpolation/search over human knowledge, but not engines of fundamentally new ideas, and it’s unclear how that turns into superintelligence.

As we get closer to a perfect reproduction of everything we know, the graph so far continues to curve upward. Image models are able to produce incredible images, but if you ask one to produce something in an entirely new art style (think e.g. cubism), none of them can. You just get a random existing style. There have been a few original ideas - the QR code art comes to mind[1] - but the idea in those cases comes from the human side.

LLMs are getting extremely good at writing code, but the situation is similar. AI gives us a very good search over humanity's prior work on programming, tailored to any project. We benefit from this a lot considering that we were previously constantly reinventing the wheel. But the LLM of today will never spontaneously realise there is an undiscovered, even better way to solve a problem. It always falls back on prior best practice.

Unsolved math problems have started to be solved, but as far as I'm aware, always using existing techniques. And so on.

Even as a non-genius human I could come up with a new art style, or have a few novel ideas in solving programming problems. LLMs don't seem capable of that (yet?), but we're expecting them to eventually have their own ideas beyond our capability.

Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species (or another superintelligent AI) and then it would act like them. But how do we synthesise superintelligent training data? And even then, would they be limited to what that superintelligence already knew at the time of training?

Maybe a new paradigm will emerge. Or maybe things will actually slow down in a way - will we start to rely on AI so much that most people don't learn enough themselves to make novel discoveries?

[1] https://www.reddit.com/r/StableDiffusion/comments/141hg9x/co...

hnfong 4 hours ago||
The main issue with novel things is that they look like random noise / trashy ideas / incomprehensible to most people.

Even if LLMs or some more advanced mechanical processes were able to generate novel ideas that are "good", people won't recognize those ideas for what they are.

You actually need a chain of progressively more "average" minds to popularize good ideas to the mainstream psyche. Prototypically: the mad scientist comes up with the crazy idea; a well-respected thought leader recognizes the potential and popularizes it within the niche field; practitioners apply and refine the idea; and lastly popular-science efforts let the general public understand a simplified version of what it's all about.

Usually it takes decades.

You're not going to appreciate it if your LLM starts spewing mathematics not seen before on Earth. You'd think it's a glitch. The LLM is not trained to give responses that humans don't like. It's all by design.

When you folks say AI can't bring new ideas, you're right in practice, but you actually don't know what you're asking for. Not even entities with True Intelligence can give you what you think you want.

janalsncm 4 hours ago||
Certain classes of problems can be solved by searching over the space of possible solutions, either via brute force or some more clever technique like MCTS. For those types of problems, searching faster or more cleverly can solve them.
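[A toy illustration of the first class - the subset-sum framing is mine, not the commenter's - where the problem is fully solvable by enumerating candidate solutions:]

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force search: try every subset until one hits the target.
    Feasible only because the whole solution space can be enumerated."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None  # exhausted the space: provably no solution

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # (8, 7)
```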

Other types of problems require measurement in the real world in order to solve them. Better telescopes, better microscopes, more accurate sensing mechanisms to gather more precise data. No AI can accomplish this. An AI can help you to design better measurement techniques, but actually taking the measurements will require real time in the real world. And some of these measurement instruments have enormous construction costs, for example CERN or LIGO.

All of this is to say that there will come a point where, at our current resolution of information, no more intelligence can actually be extracted. We’ve already churned through the entire Internet. Maybe there are other data sets we can use, but everything will have diminishing returns.

So when people talk about trillion dollar superclusters, that only makes sense in a world where compute is the bottleneck and not better quality information. Much better to spend a few billion dollars gathering higher quality data.

wilg 1 hour ago||
> The labor market isn't adjusting. It's snapping. In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993.

Bad analysis! Layoffs are flat as a board.

https://fred.stlouisfed.org/series/JTSLDL

kpil 5 hours ago||
"... HBR found that companies are cutting [jobs] based on AI's potential, not its performance."

I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.

* We're cutting because of expected low revenue. (Negative)
* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)
* We're cutting because of AI. (Double-plus positive)

The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?

nutjob2 4 hours ago|
> The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?

There's a third possibility: slop driven productivity declines as people realize they took a wrong turn.

Which makes me wonder: what is the best 'huge AI bust' trade?

scotty79 4 hours ago||
> what is the best 'huge AI bust' trade?

Things that will lose the most if we get Super AGI?

maerF0x0 2 hours ago||
iirc almost all industries follow S-shaped curves, exponential at first, then asymptotic at the end... So just because we're on the ramp-up of the curve doesn't mean we'll continue accelerating, let alone maintain the current slope. Scientific breakthroughs often require an entirely new paradigm to break the asymptote, and often the breakthrough cannot be attained by incumbents who are entrenched in their way of working and have a hard time unseeing what they already know
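[The shape being described is the logistic curve; a small sketch, purely illustrative and not fitted to any real industry data:]

```python
import math

def logistic(t, L=1.0, k=1.0, t0=0.0):
    """S-curve: capacity L, steepness k, midpoint t0."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Early on, each unit step multiplies output by ~e^k: looks exponential.
early_ratio = logistic(-6) / logistic(-7)
# Late, each unit step adds almost nothing: asymptotic to L.
late_gain = logistic(7) - logistic(6)

print(round(early_ratio, 2))  # 2.71, roughly e
print(round(late_gain, 4))    # 0.0016
```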
sdwr 2 hours ago||
> arXiv "emergent" (the count of AI papers about emergence) has a clear, unambiguous R² maximum. The other four are monotonically better fit by a line

The only metric going infinite is the one that measures hype

root_axis 6 hours ago|
If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.
1970-01-01 5 hours ago|
Not anytime soon. All day I'm getting: "Claude's response could not be fully generated"