Posted by ntnbr 12/7/2025
> Similarly, I write because I want to become the kind of person who can think.
If you call an LLM with "What is the meaning of life?", it will return the most relevant next token, which might be "Great".
If you call it with "What is the meaning of life? Great", you might get back "question".
... and so on, until you arrive at "Great question! According to Western philosophy..." etc.
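The loop itself is trivial. A minimal sketch (greedy decoding; score_next_token is a placeholder name of my own, not any real API, and real LLMs sample from a probability distribution rather than always taking the single top token):

    def generate(prompt, score_next_token, max_tokens=50):
        # Autoregressive generation: feed everything produced so far back
        # in, and append whatever single token scores as most "relevant".
        text = prompt
        for _ in range(max_tokens):
            token = score_next_token(text)
            if token is None:          # no known continuation
                break
            text += token
        return text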
The question is how the LLM determines that "relevancy" information.
The problem I see is that there are a lot of different algorithms that operate this way and differ only in how they calculate the relevancy scores. In particular, Markov chains use a very simple formula. LLMs also use a formula, but an inscrutably complex one.
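To make the contrast concrete, here is a toy bigram Markov chain that plugs straight into the generate() loop sketched above. An LLM swaps the body of this function for a billion-parameter network evaluated over the entire context; the surrounding loop stays identical. (Again, an illustration of mine, not how any production system is written.)

    from collections import Counter, defaultdict

    class BigramModel:
        # A Markov chain's "relevancy formula" is just a frequency table:
        # the most common word that followed the last word in the corpus.
        def __init__(self, corpus):
            self.table = defaultdict(Counter)
            words = corpus.split()
            for prev, nxt in zip(words, words[1:]):
                self.table[prev][nxt] += 1

        def __call__(self, text):
            words = text.split()
            followers = self.table.get(words[-1]) if words else None
            if not followers:
                return None
            return " " + followers.most_common(1)[0][0]

    # Same loop, different relevancy formula (some_corpus is hypothetical):
    # generate("What is the meaning of", BigramModel(some_corpus))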
I feel the public discussion treats LLMs either as machine gods or as literal Markov chains, and both framings are misleading. The interesting question, namely how that giant formula of feed-forward neural-network inference can deliver those results, isn't really touched.
But I think the author's intuition is right in the sense that (a) LLMs are not living beings and don't "exist" outside of evaluating that formula, and (b) the results are still restricted by the training data and certainly aren't any sort of "higher truth" that humans would be incapable of understanding.
https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
And it never got better; the superior technology lost, and the war was won through content deals.
Lesson: Technology improvements aren't guaranteed.
The RNN and LSTM architectures (and Word2Vec, n-grams, etc.) yielded language models that never got mass adoption. Like reel-to-reel. Then the transformer with attention hit the scene and several paths kicked off pretty close to each other. Google was working on BERT, an encoder-only transformer; maybe you could call that Betamax. The analogy doesn't perfectly fit, since in Beta's case it was actually the better tech.
OpenAI ran with the generative pre-trained transformer, and ML had its VHS moment: widespread adoption, universal awareness within the populace.
Now, with Titans (+ Miras?), are we entering the DVD era? Maybe. Learning context on the fly (memorizing at test time) is so much more efficient that it would be natural to call it a generational shift, but there is so much in the works right now with the promise of taking us further that this all might end up looking like the blip that Beta vs. VHS was. If current-gen OpenAI-type approaches somehow own the next 5-10 years, then Titans etc. as Betamax starts to really fit: the shittier tech got and kept mass adoption. I don't think that's going to happen, but who knows.
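For anyone wondering what "memorizing at test time" means mechanically, here's a toy of my own construction (a linear associative memory nudged by gradient steps while data streams past; the real Titans architecture is far more involved, so treat this purely as intuition):

    import numpy as np

    d = 8
    M = np.zeros((d, d))   # memory matrix: we want M @ key ~ value
    lr = 0.1

    def memorize(M, key, value, lr):
        # One gradient step on the "surprise" 0.5 * ||M @ key - value||^2,
        # taken during inference -- there is no separate training phase.
        error = M @ key - value
        return M - lr * np.outer(error, key)

    rng = np.random.default_rng(0)
    for _ in range(100):   # a stream of (key, value) pairs, i.e. the context
        k, v = rng.normal(size=d), rng.normal(size=d)
        M = memorize(M, k, v, lr)

    recall = M @ k         # read the memory back for the last pair

The weights keep moving while the model reads, instead of paying quadratic attention over an ever-growing window; that's roughly where the efficiency claim comes from.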
Taking the analogy to the present: who in the VHS or even early DVD days could imagine ubiquitous 4K+ VOD? Who could have stood in a Blockbuster in 2006 and known that in less than 20 years all those stores and all those DVDs would be a distant memory, completely usurped and transformed? Innovation in home video had a fraction of the capital thrown at it that AI/ML has thrown at it today. I would expect transformative generational shifts on the order of reel to cassette to optical to happen in fractions of the time they took in home video, and Beta/VHS-type wars to begin and end in near real time.
The mass adoption and societal transformation at the hands of AI/ML is just beginning. There is so. much. more. to. come. In 2030 we will look back at the state of AI in December 2025 and think "how quaint", much the same as we now think of a busy circa-2006 Blockbuster.
I wouldn't say VHS was a blip. It was the recorded half of video media for almost 20 years.
I agree with the rest of what you said.
I'll say that the differences in the AI you're talking about today might be like the differences between the VAX, the PCjr, and the Lisa: all of which came before computing went mainstream. I do think tech goes mainstream a lot faster these days; people don't want to miss out.
I don't know where I'm going with this, I'm reading and replying to HN while watching the late night NFL game in an airport lounge.