Posted by bilsbie 1 day ago

There are no new ideas in AI, only new datasets (blog.jxmo.io)
423 points | 224 comments
sakex 16 hours ago|
New things are being tested in modelling and yielding results monthly. We've deviated quite a bit from the original multi-head attention.
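
For reference, here is the scaled dot-product attention at the core of the original multi-head design, as a minimal NumPy sketch (shapes and data are toy values for illustration). The variants in circulation change pieces of this: grouped-query attention shares keys/values across heads, rotary embeddings replace the positional encoding, and so on:

    import numpy as np

    # Single-head scaled dot-product attention, the core of the original
    # multi-head design (multi-head runs this over several projected
    # subspaces and concatenates the results).
    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])           # scaled dot product
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
        return weights @ V

    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))  # 4 tokens, dim 8
    print(attention(Q, K, V).shape)  # (4, 8)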
Kapura 20 hours ago||
Here's an idea: make the AIs consistent at the things computers are good at. An anecdote from a friend living in Japan:

> I used ChatGPT for the first time today and have some light rage if you wanna hear it. tl;dr: it wasn't correct. I thought of one simple task that it should be good at, and it couldn't do it.

> (The Kangxi radicals are neatly in order in Unicode, so you can just ++ through them. The CJKs are not. I couldn't see any clear mapping, so I asked GPT to do it. Big mess I had to untangle manually; it would have been faster to look them up by hand (there are 214).)

> The big kicker was: it gave me 213. And I was like, "why is one missing?" Then I put it back in and said count how many numbers are here, and it said 214, and there just weren't. Like, come on, you SHOULD be able to count.

If you can make the language models actually interface with what we've been able to do with computers for decades, I imagine many paths open up.
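
As it happens, this particular task is one a computer already does deterministically: every character in the Kangxi Radicals block carries a Unicode compatibility decomposition to its CJK unified ideograph, so NFKC normalization yields the mapping exactly. A minimal Python sketch:

    import unicodedata

    # The Kangxi Radicals block is contiguous: U+2F00..U+2FD5, exactly 214
    # characters. Each has a compatibility decomposition to a CJK unified
    # ideograph, so NFKC normalization gives the radical -> CJK mapping.
    mapping = {}
    for cp in range(0x2F00, 0x2FD6):
        radical = chr(cp)
        mapping[radical] = unicodedata.normalize("NFKC", radical)

    print(len(mapping))        # 214
    print(mapping["\u2F00"])   # 一 (KANGXI RADICAL ONE -> U+4E00)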

cheevly 20 hours ago|
Many of us have solved this with internal tooling that has not yet been shared or released to the public.
layer8 19 hours ago||
This needs to be generalized, however. For example, if you present an AI with a drawing of some directed graph (a state diagram, say), it should be able to answer questions based on the precise set of all possible paths in that graph, without someone having to write tooling for diagram or graph processing and traversal. Or, given a photo of a dropped box of matches, an AI should be able to precisely count the matches, as far as they are individually visible (which a human could do by keeping a tally while coloring the matches). There are probably better examples; these are off the cuff.

There’s an infinite repertoire of such tasks that combine AI capabilities with traditional computer algorithms, and I don’t think we have a generic way of having AI autonomously outsource whatever parts require precision in a reliable way.
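
The graph case, at least, is a classical algorithm: once the diagram is turned into data, enumerating every simple path is a few lines of depth-first search. A minimal sketch (graph shape and state names invented for illustration):

    from typing import Dict, List

    # Enumerate all simple paths in a directed graph, here a toy state diagram.
    def all_simple_paths(graph: Dict[str, List[str]], start: str, goal: str) -> List[List[str]]:
        paths = []
        def dfs(node: str, path: List[str]) -> None:
            if node == goal:
                paths.append(path)
                return
            for nxt in graph.get(node, []):
                if nxt not in path:          # "simple": no repeated states
                    dfs(nxt, path + [nxt])
        dfs(start, [start])
        return paths

    states = {"idle": ["running"], "running": ["paused", "done"], "paused": ["running"]}
    print(all_simple_paths(states, "idle", "done"))  # [['idle', 'running', 'done']]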

snapcaster 19 hours ago||
What you're describing sounds like agentic tool usage. Have you kept up with the latest developments there? It's already solved, depending on how strictly you define your criteria above.
layer8 17 hours ago||
My understanding is that you need to provide and configure task-specific tools. You can't combine the AI with just a general-purpose computer and have it figure out on its own how to use that computer to achieve, with reliability and precision, whatever task it is given. In other words, current tool usage isn't general-purpose in the way the LLM itself is, and the LLM doesn't reason about its own capabilities in order to decide how to incorporate computer use to compensate for its weaknesses. Instead, you have to tell the LLM what to apply the tooling to.
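
To make the point concrete, here is roughly what "providing and configuring a task-specific tool" looks like today, in the widely used function-calling schema style; the count_items tool itself is a hypothetical example. Note that a developer has to anticipate the weakness and declare the tool up front; the model doesn't derive it:

    # A tool declaration in the common function-calling schema style. The
    # developer decides in advance that counting should be delegated;
    # count_items itself is hypothetical.
    count_tool = {
        "type": "function",
        "function": {
            "name": "count_items",
            "description": "Count newline-separated items in a block of text.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "Text to count items in."}
                },
                "required": ["text"],
            },
        },
    }

    def count_items(text: str) -> int:
        # Deterministic counting -- exactly the step the model is unreliable at.
        return len([line for line in text.splitlines() if line.strip()])

    print(count_items("one\ntwo\nthree"))  # 3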
tantalor 20 hours ago||
> If data is the only thing that matters, why are 95% of people working on new methods?

Because new methods unlock access to new datasets.

Edit: Oh I see this was a rhetorical question answered in the next paragraph. D'oh

lossolo 19 hours ago||
I wrote about it around a year ago here:

"There weren't really any advancements from around 2018. The majority of the 'advancements' were in the amount of parameters, training data, and its applications. What was the GPT-3 to ChatGPT transition? It involved fine-tuning, using specifically crafted training data. What changed from GPT-3 to GPT-4? It was the increase in the number of parameters, improved training data, and the addition of another modality. From GPT-4 to GPT-40? There was more optimization and the introduction of a new modality. The only thing left that could further improve models is to add one more modality, which could be video or other sensory inputs, along with some optimization and more parameters. We are approaching diminishing returns." [1]

Ten months ago, around the o1 release:

"It's because there is nothing novel here from an architectural point of view. Again, the secret sauce is only in the training data. O1 seems like a variant of RLRF https://arxiv.org/abs/2403.14238

Soon you will see similar models from competitors." [2]

Winter is coming.

1. https://news.ycombinator.com/item?id=40624112

2. https://news.ycombinator.com/item?id=41526039

tolerance 19 hours ago|
And when winter does arrive, then what? The technology is slowing down while its popularity picks up. Can sparks fly out of snow?
imiric 6 hours ago|||
> And when winter does arrive, then what?

If the technology is useful, the Slope of Enlightenment, followed by the Plateau of Productivity.

blibble 13 hours ago|||
Once the trillion-dollar funding tap is turned off, the prices charged will have to reflect the costs.

Shortly thereafter, the entire ecosystem will collapse.

b0a04gl 20 hours ago||
If datasets are the new codebases, then the real IP may be dataset version control: how you fork, diff, merge, and audit datasets like code. Every team says 'we trained on 10B tokens', but what if we could answer 'which 5M tokens made reasoning better' and 'which 100k made it worse'? Then we could start applying targeted leverage.
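
Even a crude version of dataset diffing is cheap. A minimal sketch, assuming each example is a text record (the records here are made up): fingerprint every record, then set-difference two dataset versions.

    import hashlib
    from typing import Iterable, Set

    # Hash each record so two dataset versions can be diffed like code.
    def fingerprint(dataset: Iterable[str]) -> Set[str]:
        return {hashlib.sha256(rec.encode("utf-8")).hexdigest() for rec in dataset}

    v1 = ["the cat sat", "2 + 2 = 4"]
    v2 = ["the cat sat", "2 + 2 = 5"]            # one record changed

    added = fingerprint(v2) - fingerprint(v1)    # records only in v2
    removed = fingerprint(v1) - fingerprint(v2)  # records only in v1
    print(len(added), len(removed))              # 1 1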
lsy 18 hours ago||
This seems simplistic, tech and infrastructure play a huge part here. A short and incomplete list of things that contributed:

- Moore's law petering out, steering hardware advancements towards parallelism

- Fast-enough internet creating shift to processing and storage in large server farms, enabling both high-cost training and remote storage of large models

- Social media and search both enlisting consumers as data producers and necessitating armies of MTurk workers for content moderation and evaluation, who later became available for tagging and RLHF

- A long-term shift to a text-oriented society, beginning with print capitalism and continuing through the rise of "knowledge work" through to the migration of daily tasks (work, bill paying, shopping) online, that allows a program that only produces text to appear capable of doing many of the things a person does

We may have had the technical ideas in the 1990s, but we certainly didn't have the ripened infrastructure to put them into practice. If we had had the dataset to create an LLM in the 90s, it still would have been astronomically cost-prohibitive to train, in both compute and human labor, and it wouldn't have had as much of an effect on society, because you couldn't have hooked it up to commerce or day-to-day activities (far fewer texts, emails, and e-commerce).

rar00 21 hours ago||
Disagree: there are a few organisations exploring novel paths. It's just that throwing new data at an "old" algorithm is much easier and has been a winning strategy. Also, there's no incentive for a private org to advertise a new idea that seems to be working (mine's a notable exception :D).
TimByte 8 hours ago||
What happens when we really run out of fresh, high-quality data? YouTube and robotics make sense as the next frontiers, but they come with serious scaling, labeling, and privacy headaches.
ChaoPrayaWave 7 hours ago|
Feels like we've built this massive engine that runs on high-octane data but never stopped to ask what happens when the fuel runs dry. Maybe it's time to focus more on efficient learning, not just on feeding in more and more.
blobbers 14 hours ago||
Why is DeepSeek specifically called out?
krunck 20 hours ago|
Until these "AI" systems become always-on, always-thinking, always-processing, progress is stuck. The current push-button AI - which only processes when we prompt it - is not how the kind of AI everyone is dreaming of needs to function.
fwip 20 hours ago|
From a technical perspective, we can do that with a for loop.

The reason we don't isn't that it's hard; it's that it yields worse results for increased cost.
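
To make that literal, a minimal sketch of the "always-on" loop, where model_step is a hypothetical stand-in for a real LLM call:

    import time

    # "Always-on" is just a loop that keeps feeding the model its own prior
    # output. model_step is a hypothetical placeholder for an actual LLM call.
    def model_step(context: str) -> str:
        return context + " -> next thought"

    context = "goal: watch the logs"
    for _ in range(3):               # an always-on agent would use `while True`
        context = model_step(context)
        time.sleep(0.1)              # each tick costs compute, useful or not
    print(context)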
