Posted by ntnbr 12/7/2025

Bag of words, have mercy on us(www.experimental-history.com)
328 points | 350 comments | page 3
coppsilgold 12/8/2025|
Is a brain not a token prediction machine?

Tokens in the form of neural impulses go in; tokens in the form of neural impulses go out.
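
A minimal sketch of what "token prediction machine" might mean, with a made-up bigram table standing in for whatever the neurons (or the trillion parameters) actually learned; everything here is illustrative, not any real model:

  import random

  # Hypothetical next-token counts; a toy stand-in for a learned model.
  counts = {
      "the": {"cat": 3, "dog": 1},
      "cat": {"sat": 2, "ran": 1},
      "sat": {"down": 1},
  }

  def predict_next(token):
      # Sample the next token in proportion to observed counts.
      options = counts.get(token)
      if not options:
          return "<eos>"
      tokens, weights = zip(*options.items())
      return random.choices(tokens, weights=weights)[0]

  token, out = "the", ["the"]
  while token != "<eos>" and len(out) < 10:
      token = predict_next(token)
      out.append(token)
  print(" ".join(out))  # e.g. "the cat sat down <eos>"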

We would like to believe that there is something profound happening inside, and we call that consciousness. Unfortunately, when reading about split-brain patient experiments or cases of agenesis of the corpus callosum, I feel like we are all deceived, every moment of every day. I came to the realization that the confabulation observed in those cases is just a more pronounced version of the normal.

MyOutfitIsVague 7 days ago||
Could an LLM trained on nothing and looped upon itself eventually develop language, more complex concepts, and everything else, based on nothing? If you loop LLMs on each other, training them so they "learn" over time, will they eventually form and develop new concepts, cultures, and languages organically? I don't have an answer to that question, but I strongly doubt it.
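
The closest cheap experiment I can picture is retraining a toy model on its own samples and watching the vocabulary collapse. A sketch under that assumption (a bigram chain standing in for an LLM, which is of course exactly the leap in question):

  import random
  from collections import Counter, defaultdict

  def train(text):
      # Count word-to-word transitions in the current corpus.
      model = defaultdict(Counter)
      words = text.split()
      for a, b in zip(words, words[1:]):
          model[a][b] += 1
      return model

  def sample(model, start, n=200):
      # Generate text by following sampled transitions.
      word, out = start, [start]
      for _ in range(n):
          nxt = model.get(word)
          if not nxt:
              break
          word = random.choices(list(nxt), weights=nxt.values())[0]
          out.append(word)
      return " ".join(out)

  corpus = "the cat sat on the mat and the dog sat on the cat"
  for gen in range(5):
      model = train(corpus)
      corpus = sample(model, "the")  # each generation trains on its own output
      print(gen, "distinct words:", len(set(corpus.split())))

Whether the collapse this toy shows says anything about real LLMs looped on each other is exactly the open question.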

There's clearly more going on in the human mind than just token prediction.

coppsilgold 7 days ago||
If you come up with a genetic-algorithm scaffolding that affects both the architecture and the training algorithm, then instantiate it in an artificial selection environment and give it trillions of generations to evolve evolvability just right (as life had for billions of years), then the answer is yes. I'm certain it will, and probably much sooner than we did.

Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia). Finding this set of weights is the problem.

MyOutfitIsVague 7 days ago||
I'm certain it wouldn't, and you're certain it would, and we have the same amount of evidence (and probably roughly the same means for running such an expensive experiment). I think they're more likely to go slowly mad, degrading their reasoning to nothing useful rather than building something real, but that could be different if they weren't detached from sensory input. Human minds looping for generations without senses, a world, or bodies might also go the same way.

> Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia).

I don't see why that would be the case at all, and I regularly use the latest and most expensive LLMs and am aware enough of how they work to implement them on the simplest level myself, so it's not just me being uninformed or ignorant.

coppsilgold 7 days ago||
The attention mechanism is capable of general computation. In my thought experiment, where you can magically pluck a weight set from a trillion-dimensional space, only a tiny subset of the tokens the machine predicts would be dedicated to language. We have no capability of training such a system at this time, much as we have no way of training a non-differentiable architecture.
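
For concreteness, the mechanism in question is scaled dot-product attention; a bare numpy sketch with made-up sizes (nobody's actual weights), just to show it is an ordinary computable function:

  import numpy as np

  def softmax(x):
      # Numerically stable softmax over the last axis.
      e = np.exp(x - x.max(axis=-1, keepdims=True))
      return e / e.sum(axis=-1, keepdims=True)

  def attention(Q, K, V):
      # Each output row is a data-dependent weighted mix of the rows of V.
      scores = Q @ K.T / np.sqrt(K.shape[-1])
      return softmax(scores) @ V

  rng = np.random.default_rng(0)
  Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
  print(attention(Q, K, V).shape)  # (4, 8)
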
protocolture 7 days ago|||
> Is a brain not a token prediction machine?

I would say that token prediction is one of the things a brain does, and in a lot of people, most of what it does. But I don't think it's the whole story. Possibly it is the whole story since the development of language.

jimbokun 7 days ago||
We know that consciousness exists because we constantly experience it. It’s really the only thing we can ever know with certainty.

That’s the point of “I think therefore I am.”

danielbln 7 days ago||
You know that your own consciousness exists, that's where certainty ends. The rest of us might just pretend. :)
layer8 12/8/2025||
Ugly giant bags of mostly words are easy to confuse with ugly giant bags of mostly water.
emsign 7 days ago||

  But we don’t go to baseball games, spelling bees, and
  Taylor Swift concerts for the speed of the balls, the
  accuracy of the spelling, or the pureness of the
  pitch. We go because we care about humans doing those
  things. It wouldn’t be interesting to watch a bag of
  words do them—unless we mistakenly start treating
  that bag like it’s a person.
That seems to be the marketing strategy of some very big, now AI-dependent companies: Sam Altman and others exaggerating and distorting the capabilities and future of AI.

The biggest issue when it comes to AI is still the same truth as with any other technology: it matters who controls it. Attributing agency and personality to AI is a dangerous red flag.

nephihaha 7 days ago|
A lot of us wouldn't go to a Taylor Swift concert. I had to endure several days of interrupted commuting thanks to them though.

Support alternative and independent bands. They're around, and many are enjoyable. (Some are not but avoid them LOL.)

kace91 12/8/2025||
I’ve made this point several times: sure, an anthropomorphized LLM is misleading, but would you rather have them seem academic?

At least the human tone implies fallibility; you don't want them acting like an interactive Wikipedia.

andai 12/8/2025||
It's a concussed savant with anterograde amnesia in a hyperbolic time chamber.
binary132 12/8/2025||
Yes I would VERY much prefer that they not use that awful casual drivel.
danielbln 7 days ago||
So configure your LLM of choice to not.
jimbokun 7 days ago||
Best quote from the article:

> That’s also why I see no point in using AI to, say, write an essay, just like I see no point in bringing a forklift to the gym. Sure, it can lift the weights, but I’m not trying to suspend a barbell above the floor for the hell of it. I lift it because I want to become the kind of person who can lift it. Similarly, I write because I want to become the kind of person who can think.

altmanaltman 7 days ago||
I don't really like the assumption that anyone who uses AI to, say, write an essay, is not the "kind of person who can think."

And using AI to replace things you find recreational is not the point. If you got paid $100 each time you lifted a weight, would you see a point in bringing a forklift to the gym, if it were allowed? Or would that make you a person so dumb they cannot think, as the author implies?

lotyrin 7 days ago|||
As capable as they get, I still don't see a lot of uses for these things myself. Sometimes, if I'm fundamentally uninspired, I'll have a model roll the dice, then decide what I do or don't like about where it went to create a sense of momentum, but that's the limit. There's never any of its output in my output, even in spirit, unless it managed to go somewhere inspiring; it's just a way to warm up my generation and discrimination muscles. "Someone is wrong on the internet"-as-a-service, basically.

Generally, if I come across an opportunity to produce ideas or output, I want to capitalize on it to grow my skills and produce an individual, authentic artistic expression, with very fine control over the output in a way that prompt-tweak-verify simply cannot provide.

I don't value the parts it fills in which weren't intentional on the part of the prompter, just send me your prompt instead. I'd rather have a crude sketch and a description than a high fidelity image that obscures them.

But I'm also the kind of person that never enjoyed manufactured pop music or blockbusters unless there's a high concept or technical novelty in addition to the high budget, generally prefer experimental indie stuff, so maybe there's something I just can't see.

altmanaltman 7 days ago||
Yeah, that makes sense. If people don't see uses for AI, they shouldn't use it. But going out of your way to imply that people who use AI cannot think is pretty stupid in itself, imo. I'm not sure how to put this, but to continue with your example: I like a lot of indie stuff as well, but I don't think anyone who watches, say, Fast and Furious, cannot think or is stupid, unless they explicitly demonstrate it by what they say, etc.

So my issue is that you shouldn't dismiss something as trash just because AI has been used. You should dismiss it as trash because it is trash. But the post says that you should dismiss it as trash because AI was involved somewhere, and I feel that's a very shitty/wrong attitude to have.

lotyrin 7 days ago||
I actually do think that people who prefer content of fidelity over content of intent are making a mistake, yes. I don't think they're incapable of thinking, I don't care to apply any virtue labels to this preference, but they are literally preferring not to think.

LLMs can only produce things by and for people who prefer not to do the work the LLMs are doing for them. Most of the time I do not prefer this.

Like, there was a two-panel comic that went around the RPG community a while back: "Game Master using an LLM to generate 10 pages of backstory for his campaign setting from a paragraph" in the first panel, and "Player using an LLM to summarize the 10-page backstory into a paragraph" in the second. Neither of these people cares about the filler (they neither produced nor consumed it), so the two-LLM system has turned into a game of telephone.
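
The comic's round trip is easy to sketch; llm() below is a hypothetical stand-in for whichever API each player actually uses:

  def llm(prompt):
      # Hypothetical stand-in: a real version would call some model API.
      return "<model output for: " + prompt[:40] + "...>"

  seed = "A mining town ruled by silver-tongued necromancers."
  backstory = llm("Expand into 10 pages of campaign backstory: " + seed)
  summary = llm("Summarize this backstory in one paragraph: " + backstory)
  # paragraph -> 10 pages -> paragraph: the filler in the middle is
  # produced and consumed by no one.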

klipt 7 days ago||||
The same person could use a forklift at work, and lift weights manually at the gym.

Just pick the right tool for the job: don't take the forklift into the gym, and don't try to overhead press thousands of pounds that would fracture your spine.

jimbokun 7 days ago|||
I notice you make no concrete defense of the value of having an AI write an essay for you.
altmanaltman 7 days ago||
I’m not trying to claim AI-written essays are inherently “valuable” in some grand philosophical sense... just that using a tool doesn’t automatically mean someone can’t think.

People use calculators without being unable to do maths, and use spellcheck without being unable to spell.

AI can help some get past the blank-page phase or organize thoughts they already have. For others, it’s just a way to offload the routine parts so they can focus on the substance.

If someone only outsources everything to an AI, there’s not much growth there sure. But the existence of bad use cases doesn’t invalidate the reasonable ones.

Aloha 7 days ago|||
If you're writing an essay to prove you can, or to speak in your own words, then you should do it yourself - but sometimes you just need an essay summarizing a complex topic as a deliverable.
monegator 7 days ago|||
Though most people either don't get it, or are laypeople who do not want to become the kind of people who can think. I go with the second one.
b112 7 days ago|||
Russ Hanneman's thigh implants are a key example. Appearances are everything to some people; actual growth is meaningless to them.

The problem with AI is that it wastes the time of dedicated, thinking humans who care to improve themselves. If I write a three-paragraph email on a technical topic and some yahoo responds with AI, I'm now responding to gibberish.

The other side may not have read it, may not understand it, and is just interacting to save time. Now my generous nature, which is to help others and interact positively, is being wasted replying to someone who seemed to have put thought and care into a response but was really just copying and pasting something else's output.

We have issues with crackers on the net. We have social media. We have political interference. Now we have humans pretending to interact, rendering online interactions even more silly and harmful.

If this trend continues, we'll move back to live interaction just to reduce this time waste.

acituan 7 days ago|||
If the motivation structure is there, I don't see an inherent reason for people to refuse to cultivate themselves. Going with the gym analogy: laypeople did not need gyms when physical work was the norm; cultivation was readily accomplished.

If anything, there is a competing motivational structure in which people are incentivized not to think but to consume, react, emote, etc. The deliberate erosion/hijacking/bypassing of individuals' information-processing skills is not an AI thing. The most obvious example is ads. Thinkers are simply not good for business.

happosai 7 days ago||
The gym is a great analogy here, since only a small fraction of the population goes to gyms. Most people just became fat once work was no longer physical and mobility was achieved with cars.
startupsfail 7 days ago||
Below is the worst quote... It is plain wrong to see an LLM as a bag of words. LLMs pre-trained on large datasets of text are world models. LLMs post-trained with RL are RL agents that use those modeling capabilities.

> We are in dire need of a better metaphor. Here’s my suggestion: instead of seeing AI as a sort of silicon homunculus, we should see it as a bag of words.

patrickmay 7 days ago|||
LLMs aren't world models; they are language models. It will be interesting to see which of the LLM implementation techniques will prove useful in building world models, but that's not what we have now.
startupsfail 5 days ago||
Can you give an example of some part of the physical world or infosphere that an LLM can't model, at least approximately?
b112 7 days ago|||
When you see a dog, or describe the entity, do you discuss the genetic makeup or the bone structure?

No, you describe the bark.

The end result is what counts. Training or not, it's just spewing predictive, relational text.

danielbln 7 days ago||
So do we, but that's helpful.
b112 6 days ago||
" Training or not, it's just spewing predictive, relational text."

If you're responding to that, "so do we" is not accurate.

We're not spewing predictive, relational text. We're communicating, after thought, and the output is meant to communicate something specific.

With AI, it's not trying to communicate an idea. It's just spewing predictive text. There's no thought to it. At all.

codeulike 7 days ago||
> Here’s my suggestion: instead of seeing AI as a sort of silicon homunculus, we should see it as a bag of words.

The best way to think about LLMs is to think of them as a Model of Language, but very Large

hermitcrab 7 days ago||
"People who experience sleep paralysis sometimes hallucinate a demon-like creature sitting on their chest"

Interestingly, the experience of sleep paralysis seems to change with the culture. Previously, people experienced it as being ridden by a night hag or some other malevolent supernatural being. More recently, it might account for many supposed alien abductions.

The experience of sleep paralysis sometimes seems to have a sexual element, which might also explain the supposed 'probings'!

zkmon 7 days ago||
But the issue is, 99.999% of humans won't see it as a bag of words, because it is easier to go by instinct and see it as a person: to assume that it actually knows magic tricks, can invent new science or a theory of everything, and can solve all the world's problems. Back in the '90s and early 2000s I saw people writing poems praying to and seeking blessings from the Google goddess. People are insanely greedy and instinct-driven. Given this truth, what's the fallout?
Peteragain 7 days ago|
The article is actually about the way we humans are extremely charitable when it comes to ascribing a ToM (theory of mind), and it goes on to the gym model of value. Nice. The comments drop back into the debate I originally saw Hinton describe in The New Yorker: do LLMs construct models (of the world) - that is, do they think the way we think we think - or are they "glorified autocomplete"? I am going for the GAC view. But glorified autocomplete is far more useful than the name suggests.
ptidhomme 7 days ago|
Those billions of parameters are a model of the world. Autocomplete is such a shortsighted understanding of LLMs.
Peteragain 1 day ago|||
Sorry for the late response. Yes, that is Hinton's argument, and the claim made by the believers. On the other hand, if the GAC view is correct, the explanation might be that what we humans write down (that is, the training corpus) is a model of the world, and LLMs reconstruct (descriptions of) human understanding.
patrickmay 7 days ago|||
They're a model of language, not of the world.
ptidhomme 7 days ago||
A model of language is a model of the world; otherwise it would be pure gibberish.
marcosdumay 7 days ago||
A model of language is a model of a tiny specialized part of the world: language.

And if anybody gets annoyed that my comment is tautological, get annoyed at the people who made the comment necessary.

ptidhomme 7 days ago||
When you ask an LLM a question about cars, it needs an inner representation of what a car is (however imperfect) to answer your question. A model of "language", as you want to define it, would output a grammatically correct wall of text that goes nowhere.
marcosdumay 7 days ago||
A map of how concepts relate in language is not a model of the world, except in the extremely limited sense that language is itself part of the world.

And yeah, that wasn't clear before people created these machines that can speak but can't think. But it should be completely obvious to anybody who interacts with them for a little while.

ptidhomme 6 days ago||
"How concepts relate" is called a model. That it uses language to be interacted with is irrelevant to the fact that it's a model of of a worldly concept.

What of multimodal models, according to you? Are they "models of eyesight", "models of sound", or of pixels or wavelengths? C'mon.
