
Posted by bilsbie 6/30/2025

There are no new ideas in AI, only new datasets (blog.jxmo.io)
490 points | 289 comments
krunck 6/30/2025|
Until these "AI" systems become always-on, always-thinking, always-processing, progress is stuck. The current push-button AI, meaning it only processes when we prompt it, is not how the kind of AI everyone is dreaming of needs to function.
fwip 6/30/2025|
From a technical perspective, we can do that with a for loop.

The reason we don't do it isn't that it's hard; it's that it yields worse results at increased cost.
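
The "for loop" point can be made concrete with a toy sketch: an always-on agent is just a loop that feeds the model's output back in as the next input. `query_model` below is a hypothetical stand-in for a real LLM call, not any actual API.

```python
def query_model(prompt: str) -> str:
    """Placeholder for an LLM API call; returns a trivial continuation."""
    return f"thought about: {prompt[:40]}"

def always_on(initial_prompt: str, steps: int) -> list[str]:
    """Run the model continuously, feeding each output back as the next input."""
    history = [initial_prompt]
    for _ in range(steps):
        history.append(query_model(history[-1]))
    return history

transcript = always_on("what should I work on?", steps=3)
```

Every iteration costs one model call whether or not anything useful happens, which is the cost/quality trade-off described above.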

nyrulez 6/30/2025||
Things haven't changed much in terms of truly new ideas since electricity was invented. Everything else is just applications on top of that. Make the electrons flow in a different way and you get a different outcome.
nomel 6/30/2025|
> Make the electrons flow in a different way and you get a different outcome.

This happens to be the basis of every aspect of our biology.

blobbers 7/1/2025||
Why is DeepSeek specifically called out?
TimByte 7/1/2025||
What happens when we really run out of fresh, high-quality data? YouTube and robotics make sense as the next frontiers, but they come with serious scaling, labeling, and privacy headaches.
ChaoPrayaWave 7/1/2025|
Feels like we've built this massive engine that runs on high-octane data, but never stopped to ask what happens when the fuel runs dry. Maybe it's time to focus more on efficient learning, not just on feeding in more and more data.
AbstractH24 7/1/2025||
Imagine if the original Moore's law had tracked how often CPUs doubled their semiconductor count while still functioning properly only 50% of the time.

I don't think it would have had the same impact.

anon291 6/30/2025||
I mean, there are no new ideas for SaaS, just new applications, and that worked out pretty well.
ks2048 6/30/2025||
The latest LLMs are simply multiplying and adding various numbers together... Babylonians were doing that 4000 years ago.
bobson381 6/30/2025||
You are just a lot of interactions of waves. All meaning is assigned. I prefer to think of this like the Goedel generator that found new formal expressions for the Principia - because we have a way of indexing concept-space, there's no telling what we might find in the gaps.
thenaturalist 6/30/2025||
But on clay tablets, not in semi-conductive electron prisons separated by one-atom-thick walls.

A slight difference from those methods, wouldn't you agree?

geysersam 7/1/2025||
No it's exactly the same. Everything old is new again...
thenaturalist 7/1/2025||
Results != methodology
NetRunnerSu 7/1/2025||
Because an externally injected loss function will hollow out the model's brain.

Models need to decide for themselves what they should learn.

Eventually, once they enter the open world, reinforcement learning and genetic algorithms are still the only perpetual-training solution.

https://github.com/dmf-archive/PILF
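
For readers unfamiliar with the genetic-algorithm side of that claim, here is a toy illustration of the general technique (selection, crossover, mutation against a reward signal); it is a generic sketch, not the method in the linked PILF repo, and all names and parameters are illustrative.

```python
import random

def fitness(genome: list[int]) -> int:
    """Toy reward signal: count of 1-bits in the genome."""
    return sum(genome)

def evolve(pop_size: int = 20, genome_len: int = 16,
           generations: int = 30, seed: int = 0) -> list[int]:
    """Evolve bit-string 'policies' by selection, crossover, and mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population intact (elitism).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            # Crossover: splice two parent genomes at a random cut point.
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            # Mutation: flip one random bit.
            child[rng.randrange(genome_len)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The only training signal is the environment's reward; no loss function is injected from outside, which is the property the comment is pointing at.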

b0a04gl 6/30/2025||
[dead]
Night_Thastus 6/30/2025|
Man I can't wait for this '''''AI''''' stuff to blow over. The back and forth gets a bit exhausting.
Culonavirus 7/1/2025|
Within the bounds of HN audience I would definitely describe myself as an A(G)I skeptic.

But even I can see that this ""AI"" stuff is not going to blow over. That ship has sailed. Even if the current models get only marginal improvements, the momentum is unquestionably, inarguably there to make adoption and productization 10x or even 100x wider than it is now. Robotics, automation, self-driving, all kinds of kiosks, military applications (gathering and merging sensor data, controlling drone swarms, etc.)...

Just the amount of money (it's going to be trillions before the decade is over) and the number of students in the field (basically all computer science degrees nowadays teach AI in some form) guarantee we're stuck with ""AI"" forever (at least until it kills us or merges with us).

actionfromafar 7/1/2025|||
1000x more than it is now, at least. Imagine every kind of electronic thing ever made, but with a sprinkle of AI.

Why?

For the same reason that checking a handful of logical conditions and flipping a couple of I/O bits is now done by a 32-bit CPU with a megabyte of RAM running some JavaScript or MicroPython, rather than by a custom circuit or a handful of TTL chips wired together.

Night_Thastus 7/1/2025|||
The reason I think it's going to blow over is that even the best models are frequently quite terrible. The fundamental problems with how they work can't really be fixed, because they're a feature of the way LLMs work.

And no one has found a way to make any money with it. All the tech companies are burning money by the truckload so investors don't lose confidence, but none of them have actually shown it's a good financial investment.

At the end of the day, I don't think anyone is going to want to pay what it really costs to run these models, just for a result that is so unreliable. Once they start to stagnate everyone will lose interest.

The only reason it might stick around is because investors will get desperate to get returns and go full sunk-cost once it starts looking like they made a bad call. (Which they will blame the companies for, of course)
