Posted by ntnbr 12/7/2025
Tokens in the form of neural impulses go in, tokens in the form of neural impulses go out.
We would like to believe that there is something profound happening inside, and we call that consciousness. Unfortunately, when reading about split-brain patient experiments or agenesis of the corpus callosum cases, I feel like we are all deceived, every moment of every day. I came to the realization that the confabulation observed in those cases is just a more pronounced form of the normal.
There's clearly more going on in the human mind than just token prediction.
Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia). Finding this set of weights is the problem.
> Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia).
I don't see why that would be the case at all. I regularly use the latest and most expensive LLMs, and I'm familiar enough with how they work to have implemented them at the simplest level myself, so it's not just me being uninformed or ignorant.
I would say that token prediction is one of the things a brain does. And in a lot of people, most of what it does. But I don't think it's the whole story. Possibly it has been the whole story since the development of language.
That’s the point of “I think therefore I am.”
But we don’t go to baseball games, spelling bees, and Taylor Swift concerts for the speed of the balls, the accuracy of the spelling, or the pureness of the pitch. We go because we care about humans doing those things. It wouldn’t be interesting to watch a bag of words do them, unless we mistakenly start treating that bag like it’s a person.
That seems to be the marketing strategy of some very big, now AI-dependent companies: Sam Altman and others exaggerating and distorting the capabilities and future of AI. The biggest issue with AI is still the same truth as with other technology: it matters who controls it. Attributing agency and personality to AI is a dangerous red flag.
Support alternative and independent bands. They're around, and many are enjoyable. (Some are not but avoid them LOL.)
At least the human tone implies fallibility; you don’t want them acting like an interactive Wikipedia.
> That’s also why I see no point in using AI to, say, write an essay, just like I see no point in bringing a forklift to the gym. Sure, it can lift the weights, but I’m not trying to suspend a barbell above the floor for the hell of it. I lift it because I want to become the kind of person who can lift it. Similarly, I write because I want to become the kind of person who can think.
And using AI to replace things you find recreational is not the point. If you got paid $100 each time you lifted a weight, would you see a point in bringing a forklift to the gym, if it were allowed? Or would that make you a person so dumb they cannot think, as the author implies?
Generally, if I come across an opportunity to produce ideas or output, I want to capitalize on it to grow my skills and to produce an individual, authentic artistic expression, with very fine control over the output in a way that prompt-tweak-verify simply cannot provide.
I don't value the parts it fills in that weren't intentional on the part of the prompter; just send me your prompt instead. I'd rather have a crude sketch and a description than a high-fidelity image that obscures them.
But I'm also the kind of person that never enjoyed manufactured pop music or blockbusters unless there's a high concept or technical novelty in addition to the high budget, generally prefer experimental indie stuff, so maybe there's something I just can't see.
So my issue is that you shouldn't dismiss AI output as trash just because AI has been used. You should dismiss it as trash because it is trash. But what the post says is that you should dismiss it as trash because AI was involved somewhere, and I feel that's a very shitty/wrong attitude to have.
LLMs can only produce things by and for people who prefer not to do the work the LLMs are doing for them. Most of the time I do not prefer this.
Like, there was a 2-panel comic that went around the RPG community a bit back where it was something like "Game Master using LLM to generate 10 pages of backstory for his campaign setting from a paragraph" in the first panel and "Player using LLM to summarize the 10 page backstory into a paragraph" in the second. Neither of these people care for the filler (because they didn't produce or consume it) so it's turned the two-LLM system into a game of telephone.
Just pick the right tool for the job: don't take the forklift into the gym, and don't try to overhead press thousands of pounds that would fracture your spine.
People use calculators without being unable to do maths, and use spellcheck without being unable to spell.
AI can help some get past the blank-page phase or organize thoughts they already have. For others, it’s just a way to offload the routine parts so they can focus on the substance.
If someone only outsources everything to an AI, there’s not much growth there sure. But the existence of bad use cases doesn’t invalidate the reasonable ones.
The problem with AI is that it wastes the time of dedicated, thinking humans who care to improve themselves. If I write a three-paragraph email on a technical topic and some yahoo responds with AI, I'm now responding to gibberish.
The other side may not have read it, may not understand it, and is just interacting to save time. Now my generous nature, which is to help others and interact positively, is wasted replying to someone who seemed to have put thought and care into a response but was instead just copying and pasting what something else produced.
We have issues with crackers on the net. We have social media. We have political interference. Now we have humans pretending to interact, rendering online interactions even more silly and harmful.
If this trend continues, we'll move back to live interaction just to reduce this time waste.
If anything, there is a competing motivational structure in which people are incentivized not to think but to consume, react, emote, etc. The deliberate erosion/hijacking/bypassing of an individual's information-processing skills is not an AI thing. The most obvious example is ads. Thinkers are simply not good for business.
> We are in dire need of a better metaphor. Here’s my suggestion: instead of seeing AI as a sort of silicon homunculus, we should see it as a bag of words.
No, you describe the bark.
The end result is what counts. Training or not, it's just spewing predictive, relational text.
If you're responding to that, "so do we" is not accurate.
We're not spewing predictive, relational text. We're communicating, after thought, and the output is meant to communicate something specifically.
With AI, it's not trying to communicate an idea. It's just spewing predictive text. There's no thought to it. At all.
The best way to think about LLMs is to think of them as a Model of Language, but very Large.
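To make "model of language" concrete: at the interface level, an LLM takes a context and returns a distribution over possible next tokens. A hypothetical toy sketch of that contract, using simple bigram counts instead of a neural network (a real LLM replaces the counting with billions of learned weights, but the input/output shape is the same):

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed next token, or None if unseen."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

This is obviously not how a transformer works internally; it only illustrates the sense in which "token prediction" is the externally visible behavior being debated above.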
Interestingly, the experience of sleep paralysis seems to change with the culture. Previously, people experienced it as being ridden by a night hag or some other malevolent supernatural being. More recently, it might account for many supposed alien abductions.
The experience of sleep paralysis sometimes seems to have a sexual element, which might also explain the supposed 'probings'!
And if anybody gets annoyed that my comment is tautological, get annoyed by the people that made the comment necessary.
And yeah, that wasn't clear before people created those machines that can speak but can't think. But it should be completely obvious to anybody who interacts with them for a short while.
What about multimodal models, according to you? Are they "models of eyesight", "models of sound", or models of pixels or wavelengths... C'mon.