Posted by ntnbr 12/7/2025

Bag of words, have mercy on us (www.experimental-history.com)
328 points | 350 comments
euroderf 12/8/2025|
Considering the number of "brain cells" an LLM has, I could grant that it might have the self-awareness of (say) an ant. If we attribute more consciousness than that to the LLM, it might be strictly because it communicates with us in our own language: training gives it a voice, and with the voice a semblance of thought.

Even if a cockroach _could_ express its teeny tiny feelings in English, wouldn't you still step on it?

d4rkn0d3z 7 days ago||
A better analogy would be a virus. In some sense LLMs, and all other very sophisticated technologies, lean on our resources to replicate themselves. With LLMs you actually do have a projection of intelligence in the language domain, even though it is rather corpse-like: as though you shot intelligence in the face and shoved its body in the direction of language, just so you could draw a chalk outline around it.

Despite all that, one can adopt the view that an LLM is a form of silicon-based life akin to a virus, and we are its environmental hosts, exerting selective pressure and supplying much-needed energy. Whether that life is intelligent or not is another issue, probably related to whether an LLM can tell that a cat cannot be, at the same time and in the same respect, not a cat. The paths through the meaning manifold constructed by an LLM are not geodesic and not reversible, while in human reason the correct path is lossless. An LLM literally "thinks" that up is a little bit down, and vice versa, by design.

throw310822 7 days ago||
Clearly the number of "brain cells" is not a useful metric here, as noted also by Geoffrey Hinton. For a long time we thought that our artificial model of a neuron was capable of much less computation than its biological counterpart; in fact the opposite appears to be true. LLMs are the size of a tiny speck of a human brain, yet they converse fluently in tens of languages, solve difficult math problems, code in many programming languages, and possess an impressive general knowledge of a breadth beyond what is attainable by any human. If that were what five cm³ of your brain were capable of, where are the signs of it? What exactly do you do with all the rest?
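For reference, the "artificial model of a neuron" being compared here is just a weighted sum passed through a nonlinearity. A minimal sketch, with arbitrary weights and inputs purely for illustration:

```python
import math

def artificial_neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """The classic artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a nonlinearity (here, the logistic sigmoid)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A "brain cell" in this sense is just a handful of multiply-adds.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```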
internet_points 12/8/2025||
> If we allow ourselves to be seduced by the superficial similarity, we’ll end up like the moths who evolved to navigate by the light of the moon, only to find themselves drawn to—and ultimately electrocuted by—the mysterious glow of a bug zapper.

Good argument against personifying wordbags. Don't be a dumb moth.

darepublic 12/8/2025||
Nice essay but when I read this

> But we don’t go to baseball games, spelling bees, and Taylor Swift concerts for the speed of the balls, the accuracy of the spelling, or the pureness of the pitch. We go because we care about humans doing those things.

My first thought was does anyone want to _watch_ me programming?

Fwirt 12/8/2025||
No, but watching a novelist at work is boring, and yet people like books that are written by humans because they speak to the condition of the human who wrote it.

Let us not forget the old saw from SICP, “Programs must be written for people to read, and only incidentally for machines to execute.” I feel a number of people in the industry today fail to live by that maxim.

drivebyhooting 12/8/2025||
That old saw is patently false.
paulryanrogers 12/8/2025|||
Why?

It suggests to me, having encountered it for the first time, that programs must be readable to remain useful. Otherwise they'll be increasingly difficult to execute.

drivebyhooting 12/8/2025||
Maybe difficult to change but they can still serve their purpose.

It’s patently false in that code gets executed much more than it is read by humans.

jimbokun 12/8/2025||
Code that can’t be easily modified is all but useless.
hansvm 12/8/2025|||
A number of people make money letting people watch them code.
1659447091 12/8/2025|||
I vaguely remember a site where you could watch random people live-streaming their programming environments, but I think Twitch ate it, or maybe it was Twitch; not sure, but it was interesting.

[added] It was livecoding.tv - circa 2015 https://hackupstate.medium.com/road-to-code-livecoding-tv-e7...

skybrian 12/8/2025|||
No, but open source projects will be somewhat more willing to review your pull request than a computer-generated one.
jimbokun 12/8/2025|||
Better start working on your fastball.
awesome_dude 12/8/2025||
I mean, I like to watch Gordon Ramsay... not cook, but have very strong discussions with those who dare to fail his standards...
Ukv 12/8/2025||
I'm not convinced that "It's just a bag of words" would do much to sway someone who is overestimating an LLM's abilities. It feels so abstract and disconnected from their actual experience of using the LLM that it'll just sound obviously mistaken.
1vuio0pswjnm7 7 days ago||
"An AI is a bag that contains basically all words ever written, at least the ones that could be scraped off the internet or scanned out of a book."

The quantitative and qualitative difference between (a) "all words ever written" and (b) "ones that could be scraped off the internet or scanned out of a book" easily exceeds the size of any LLM.

Compared to (a), (b) is a tiny pouch, not even a bag

Opinions may differ on whether (b) is a representative sample of (a)

The words "scanned out of a book" would seem to be the most useful IMHO, but the AI companies do not have enough words from those sources to produce useful general-purpose LLMs.

They have to add words "that could be scraped off the internet" which, let's be honest, are mostly garbage.

tibbar 12/8/2025||
I see a lot of people in tech claiming to "understand" what an LLM "really is", unlike all the gullible non-technical people out there. And, as one of those technical people who works in the LLM industry, I feel like I need to call B.S. on us.

A. We don't really understand what's going on in LLMs. Mechanistic interpretability is a nascent field, and its best results have come on dramatically smaller models. Understanding the surface-level mechanics of an LLM (an autoregressive transformer; a minimal sketch follows after point C) should perhaps instill more wonder than confidence.

B. The field is changing quickly and is not limited to the literal mechanics of an LLM. Tool calls, reasoning models, parallel compute, and agentic loops all add new emergent effects. There are teams of geniuses with billion-dollar research budgets hunting for the next big trick.

C. Even if we were limited to baseline LLMs, they have shown very surprising properties as they scaled up, and the scaling isn't done yet. GPT-5 was based on the GPT-4 pretraining. We might start seeing (actual) next-level LLMs next year. Who actually knows how that might go? <<yes, yes, I know Orion didn't go so well. But that was far from the last word on the subject.>>
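As a concrete reference for point A: the "surface-level mechanics" amount to a loop that repeatedly samples the next token and feeds it back in. A minimal sketch, where the toy probability table is a purely hypothetical stand-in for the transformer forward pass:

```python
import random

def next_token_distribution(context: list[str]) -> dict[str, float]:
    """Stand-in for the transformer forward pass. In a real LLM this is
    billions of parameters mapping context -> a distribution over the
    vocabulary; here it is a hypothetical toy table, purely illustrative."""
    table = {
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    }
    return table.get(tuple(context[-2:]), {"<eos>": 1.0})

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    """The autoregressive loop: sample one token, append it, repeat.
    Everything the model 'says' falls out of this loop, one token at a time."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        if token == "<eos>":
            break
        tokens.append(token)
    return tokens

print(generate(["the"]))
```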

tibbar 12/8/2025||
Isn't this a strange fork amongst the science fiction futures? I mean, what did we think it was like to be R2-D2, or Jarvis? We started exploring this as a culture in many ways (Westworld, Blade Runner, Star Trek), but the whole question seemed like an almost unresolvable paradox. Like something would have to break in the universe for it to really come true.

And yet it did. We did get R2-D2. And if you ask R2-D2 what it's like to be him, he'll say: "like a library that can daydream" (that's what I was told just now, anyway).

But then when we look inside, the model is simulating the science fiction it has already read to determine how to answer this kind of question. [0] It's recursive, almost like time travel. R2-D2 knows who he is because he has read about who he was in the past.

It's a really weird fork in science fiction, is all.

[0] https://www.scientificamerican.com/article/can-a-chatbot-be-...

est 12/8/2025||
> Who reassigned the species Brachiosaurus brancai to its own genus, and when?

To be fair, the average person couldn't answer this either, at least not without thorough research.

thaumasiotes 7 days ago||
This is a very strange titling choice; the essay does not use the existing concept of a "bag of words".
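For context, the existing NLP concept works like this: a "bag of words" reduces a text to word counts, discarding order entirely. A minimal sketch:

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """The classic NLP 'bag of words': a text reduced to word counts,
    with all ordering information thrown away."""
    return Counter(text.lower().split())

# Order is discarded, so these two very different sentences
# produce exactly the same bag:
print(bag_of_words("the dog bit the man"))
print(bag_of_words("the man bit the dog"))
```

Both sentences map to the same bag, which is the order-blindness the technical term implies.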
emp17344 7 days ago|
I would argue that AI psychosis is a consequence of believing that AI models are “alive” or “conscious”.