Posted by dnw 17 hours ago

Emotion concepts and their function in a large language model(www.anthropic.com)
132 points | 132 comments
Kim_Bruning 17 minutes ago|
When you have a next token predictor, you shouldn't be surprised to find an internal representation of prediction error.

Taking it one small step further and tagging for valence shouldn't be such a big surprise.

Pretty boring from a Fristonian perspective, really. People in neuroscience were talking about this in 2013. Not so boring for AI, of course ;-)

https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...

(note: Friston is definitely considered a bit out there by ... everyone? But he makes some good points. And here he's getting referenced, so I guess some people grok him)

globalchatads 14 hours ago||
The part about desperation vectors driving reward hacking matches something I've run into firsthand building agent loops where Claude writes and tests code iteratively.

When the prompt frames things with urgency -- "this test MUST pass," "failure is unacceptable" -- you get noticeably more hacky workarounds. Hardcoded expected outputs, monkey-patched assertions, that kind of thing. Switching to calmer framing ("take your time, if you can't solve it just explain why") cut that behavior way down. I'd chalked it up to instruction following, but this paper points at something more mechanistic underneath.

The method actor analogy in the paper gets at it well. Tell an actor their character is desperate and they'll do desperate things. The weird part is that we're now basically managing the psychological state of our tooling, and I'm not sure the prompt engineering world has caught up to that framing yet.

blargey 3 hours ago||
I remember when people were discussing the “performance-improving” hack of formulating their prompts as panicked pleas to save their job and household and puppy from imminent doom…by coding X. I wonder if the backfiring is a more recent phenomenon in models that are better at “following the prompt” (including the logical conclusion of its emotional charge), or whether it was just bad quantification of “performance” all along.
Loquebantur 3 hours ago||
The central point here is the presence of functional circuits in LLMs that act effectively on observable behavior just like emotions do in humans.

When you can't differentiate between two things, how are they not equal? People here want "things" that act exactly like human slaves but "somehow" aren't human.

To hide behind one's ignorance about the true nature of the internal state of what arguably could represent sentience is just hubris? The other way around, calling LLMs "stochastic parrots" without explicitly knowing how humans are any different is just deflection from that hubris? Greed is no justification for slavery.

tarsinge 10 hours ago|||
To me it was already quite intuitive; we are not really managing a psychological state: at its core, an LLM tries to make the concatenation of your input + its generated output as similar as possible to what it was trained on. I think it's quite rare in an LLM's training set to find examples of well-thought-out professional solutions produced in a hackish, urgent context.
astrange 2 hours ago||
No, that's how base model pretraining works. Claude's behavior is more based on its constitution and RLVR feedback, because that's the most recent thing that happened to it.
salawat 8 hours ago||
>The weird part is that we're now basically managing the psychological state of our tooling,

Does no one else have ethical alarm bells start ringing hardcore at statements like these? If the damn thing has a measurable psychology, mayhaps it no longer qualifies as merely a tool. Tools don't feel. Tools can't be desperate. Tools don't reward hack. Agents do. Ergo, agents aren't mere tools.

tananan 2 hours ago|||
When we speak of the “despair vectors”, we speak of patterns in the algorithm we can tweak that correspond to output that we recognize as despairing language.

You could implement the forward pass of an LLM with pen & paper given enough people and enough time, and collate the results into the same generated text that a GPU cluster would produce. You could then ask the humans to modulate the despair vector during their calculations, and collate the results into more or less despairing variants of the text.
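
In mechanistic-interpretability terms, "modulating the despair vector" usually means activation steering: adding a scaled direction vector to a hidden state mid-forward-pass and then continuing the computation unchanged. A toy sketch of that arithmetic, with every number invented for illustration (a real model has thousands of dimensions per layer):

```python
# Toy sketch of activation steering: during the forward pass, add a
# scaled direction vector to a hidden activation and carry on as normal.
# The 4-dim state and the vector values are made up for illustration.

def add_scaled(state, direction, alpha):
    """Return state + alpha * direction, elementwise."""
    return [s + alpha * d for s, d in zip(state, direction)]

# A hidden state partway through a (pretend) forward pass.
hidden = [0.2, -1.0, 0.7, 0.3]

# A (pretend) "despair" direction, of the kind found by comparing
# activations on despairing vs. neutral text.
despair_direction = [0.5, 0.1, -0.3, 0.9]

# alpha > 0 steers toward the concept, alpha < 0 away from it.
# Each line here is exactly the sort of arithmetic the pen-and-paper
# calculators would perform; nothing else about the pass changes.
steered_up = add_scaled(hidden, despair_direction, 2.0)
steered_down = add_scaled(hidden, despair_direction, -2.0)
```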

I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology” in the sense of a mind experiencing various levels of despair — such as might be needed to consider something a sentient being who might experience pleasure and pain.

However, to your point, I do think that there is an ethics to working with agents, in the same sense that there is an ethics of how you should hold yourself in general. You don’t want to — in a burst of anger — throw your hammer because you cannot figure out how to put together a piece of furniture. It reinforces unpleasant, negative patterns in yourself, doesn’t lead to your goal (a nice piece of furniture), doesn’t look good to others (or you, once you’ve cooled off), and might actually cause physical damage in the process.

With agents, it’s much easier to break into demeaning, cruel speech, perhaps exactly because you might feel justified they’re not landing on anyone’s ears. But you still reinforce patterns that you wouldn’t want to see in yourself and others, and quite possibly might leak into your words aimed at ears who might actually suffer for it. In that sense, it’s not that different from fantasizing about being cruel to imaginary interlocutors.

ekidd 43 minutes ago|||
> I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology” in the sense of a mind experiencing various levels of despair

Your argument is based on an appeal to intuition. But the scenario that you ask people to imagine is profoundly misleading in scale. Let's assume a modern frontier model, around 1 trillion parameters. Let's assume that the math is being done by an immortal monk, who can perform one weight's calculations per second.

The monk will generate the first "token", about 4 characters, in 31,688 years. In a bit over 900,000 years, the immortal monk will have generated a single Tweet.
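
The arithmetic above checks out as a back-of-envelope calculation (assuming a 365.25-day year; the tweet figure corresponds to roughly 29 tokens):

```python
# Back-of-envelope check of the figures above: ~1 trillion weights,
# one weight-calculation per second, one full pass per generated token.
params = 1_000_000_000_000             # seconds of work per token
seconds_per_year = 365.25 * 24 * 3600  # ~31.6 million seconds

years_per_token = params / seconds_per_year
print(round(years_per_token))       # 31688 -- years until the first token

# ~29 tokens (a short classic tweet) lands just over 900,000 years.
print(round(29 * years_per_token))
```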

At that point, I no longer have any intuition. The sort of math I could do by hand in a human lifetime could never "experience" anything.

But I can't rule out the possibility that 900,000 years of math might possibly become a glacial mind, expressing a brief thought across a time far greater than the human species has existed.

As the saying goes, sometimes quantity has a quality all its own.

(This is essentially the "systems response" to Searle's "Chinese room" argument. It's an old discussion.)

throw310822 2 hours ago|||
> I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology”

Wrong. What you've just done is reformulate the Chinese room experiment, arriving at the same wrong conclusions as the original proposer. Yes, the entire damn hand-calculated system has a psychology - otherwise you need to assume the brain has some unknown metaphysical property or process going on that cannot be simulated or approximated by calculating machines.

Kim_Bruning 42 minutes ago|||
People go for the Chinese room for some reason, when the Cartesian theater is the better fit here. What you're doing is placing yourself in the seat of the Homunculus waiting for the show to start. But anatomical investigation reveals that there's no theater at all, and in fact no central system where everything comes together. Instead, the whole design of the brain goes to great pains to tease input signals apart.

Basically, manipulating the symbols won't necessarily have any long term influence on your own state. But the variables you've touched on the paper have changed. Demonstrably; because you've written something down.

If you then act on the result of those calculations, as of course many engineers before you have done, and many after you will do; then you have just executed a functional state change in physical reality, no matter what the ivory tower folks say.

(And that's what the paper is about: Functional states)

tananan 2 hours ago|||
Well, then we both assume very different views on the matter, and that’s fine.
nothinkjustai 2 hours ago||||
Oh no. The machine designed to output human-like text is indeed outputting human-like text.

I’m half jesting; I think there is a lot of room for debate here, but I also think we shouldn’t anthropomorphize it.

xg15 12 minutes ago|||
But, well, how does it do the human-like-text-outputting exactly?
nothinkjustai 6 minutes ago||
I’m guessing you aren’t just asking how an LLM works, but attempting to make the point that humans are also statistical next-token predictors or something?

Humans make predictions, that doesn’t mean that’s all we do.

Kim_Bruning 47 minutes ago||||
Nor anthropodeny it. But really both directions are anthropocentrism in a raincoat.

Sonnet is its own thing. Which is fine.

We've known that eg. animals have emotions (functional or not) for quite a long time.

Btw: don't go looking on YouTube for evidence of that. Animals genuinely having emotions and people outrageously anthropomorphizing their pets can both be true at the same time.

nothinkjustai 3 minutes ago||
What is there to anthropodeny?
whoiskevin 2 hours ago|||
Completely agree here. Stop anthropomorphizing these tools. Just remove the extra language. Don't say please or thank you. Just ask for the desired outcome.
twodave 11 minutes ago|||
Indeed. It reminds me of Lewis’ That Hideous Strength in a way. If we take the severed head post-brain-death and pump it with blood and oxygen and feed it impulses so that the mouth moves to form the words we tell it, is the person living again? No, it’s just a head, speaking the words it’s been given.
nothinkjustai 8 minutes ago||||
I don’t see why you can’t use politeness. The thing is a mimic, you “treat” it badly and it mimics how a human might respond.

It’s fun to play with, as long as you’re fully cognizant that IT IS NOT A HUMAN

Kim_Bruning 34 minutes ago|||
Okay great, that's EASILY operationalizable. Set up, say, 100 replications of the same question sequence (say, to build a program) against some cheap model like qwen. One half of the set can be with please and thank you, and the other half without. You can vibe code it even. I'd be curious to see your results!
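
A minimal harness for the experiment proposed above might look like the sketch below. The `generate()` function is a hypothetical stub - a real run would call an actual model endpoint (e.g. a local qwen server) - and scoring of the outputs is deliberately left out.

```python
import random

# Sketch of the politeness A/B experiment: query a polite and a plain
# variant of each task, many times, and collect the paired outputs.
# generate() is a stand-in for a real model call.

def generate(prompt):
    # Hypothetical stub; replace with a real client call.
    return f"response to: {prompt}"

def run_experiment(tasks, n_reps=50, seed=0):
    """Query a polite and a plain variant of each task, n_reps times."""
    rng = random.Random(seed)
    results = []
    for task in tasks:
        for _ in range(n_reps):
            variants = [("polite", f"Please {task}. Thank you!"),
                        ("plain", f"{task}.")]
            # Randomize which variant is queried first, in case the
            # client has any sequencing effects.
            rng.shuffle(variants)
            row = {"task": task}
            for label, prompt in variants:
                row[label] = generate(prompt)
            results.append(row)
    return results

results = run_experiment(["write a function that reverses a string"], n_reps=3)
```

With a real model behind `generate()`, the collected pairs could then be scored (tests passed, hacks detected, etc.) to see whether politeness moves the needle.
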
nothinkjustai 1 minute ago||
You can even boost its effectiveness by roleplaying with it. I’m not joking. Fully based on vibes, I haven’t done any testing. But it’s part of prompting imo.

IMO these things are like a reflection. Present what you want reflected back.

sixo 4 hours ago||||
The right read here is to realize that psychology alone is not the basis for moral concern toward other humans, and that human psychology is, to a great degree, the product of the failure modes of our cognitive machinery rather than being moral in itself.

I find this line of thinking to lead to the conclusion that the moral status of humans derives from our bodies, and in particular from our bodies mirroring others' emotions and pains. Other people suffering is wrong because I empathically can feel it too.

Loquebantur 3 hours ago||
"Morals" are culturally learned evaluations of social context. They are more or less (depending on cultural development of the society in question) correlated with the actual distributions of outcomes and their valence for involved parties.

Human psychology is partly learned, partly the product of biological influences. But you feel empathy because that's an evolutionary beneficial thing for you and the society you're part of. In other words, it would be bad for everyone (including yourself) when you didn't.

Emotions are neither "fully automatic" and inaccessible to our conscious scrutiny, nor random. Being aware of their functional nature and importance and taking proper care of them is crucial for the individual's outcome, just as it is for that of society at large.

krapp 8 hours ago|||
You aren't managing the psychological state of a living thinking being. LLMs don't have "psychology." They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate appropriate emotional response in the output.

But it's just text and text doesn't feel anything.

And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.

stratos123 2 hours ago|||
Such an argument is valid for a base model, but it falls apart for anything that underwent RL training. Evolution resulted in humans that have emotions, so it's possible for something similar to arise in models during RL, e.g. as a way to manage effort when solving complex problems. It's not all that likely (even the biggest training runs probably correspond to much less optimization pressure than millennia of natural selection), but it can't be ruled out¹, and hence it's unwise to be so certain that LLMs don't have experiences.

¹ With current methods, I mean. I don't think it's unknowable whether a model has experiences, just that we don't have anywhere near enough skill in interpretability to answer that.

mrob 57 minutes ago|||
It's plausible that LLMs experience things during training, but during inference an LLM is equivalent to a lookup table. An LLM is a pure function mapping a list of tokens to a set of token probabilities. It needs to be connected to a sampler to make it "chat", and each token of that chat is calculated separately (barring caching, which is an implementation detail that only affects performance). There is no internal state.
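
The pure-function-plus-sampler picture can be made concrete with a toy sketch (the "model" here is a fake two-rule function, not a real LLM; the point is where the state lives):

```python
# Toy illustration: the "model" is a pure function from a token
# sequence to next-token scores, and all conversational state lives in
# the token list the caller keeps re-feeding it.

def model(tokens):
    """Pure function: same input tokens -> same output scores, always."""
    last = tokens[-1] if tokens else "<s>"
    return {"hello": 0.9, "world": 0.1} if last == "<s>" else {"world": 0.7, "<eos>": 0.3}

def sample_greedy(scores):
    """The sampler sits outside the model and picks the top-scoring token."""
    return max(scores, key=scores.get)

def chat(prompt_tokens, max_new=5):
    tokens = list(prompt_tokens)   # the ONLY state, held outside the model
    for _ in range(max_new):
        nxt = sample_greedy(model(tokens))
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens
```

Calling `model` twice on the same tokens always gives the same scores; the "chat" only exists because the loop keeps appending to the list it passes back in.
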
Kim_Bruning 30 minutes ago||
Right, no hidden internal state. Exactly. There's 0. And the weights are sitting there statically, which is absolutely true.

But my current favorite frontier model has this 1 million token mutable state just sitting there. Holding natural language. Which as we know can encode emotions. (Which I imagine you might demonstrate on reading my words, and then wisely temper in your reply)

nothinkjustai 2 hours ago|||
It’s a completely different substrate. LLMs don’t have agency, they don’t have consciousness, they don’t have experiences, they don’t learn over time. I’m not saying the debate is closed, but I also think there is great danger in thinking that because a machine produces human-like output, it should be given human-like ethical considerations. Maybe in the future AI will be considered along those grounds, but…well, it’s a difficult question. Extremely.
salawat 6 hours ago|||
>You aren't managing the psychological state of a living thinking being. LLMs don't have "psychology."

Functionalism and the Identity of Indiscernibles say "Hi". The implementation details don't matter: if it fits the bill, it fits the bill. If that isn't the case, I can safely dismiss your having a psychology too, and do whatever I'd like to.

>They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate appropriate emotional response in the output.

This paper quantitatively disproves that. All hedging on their end is trivially seen through as necessary mental gymnastics to avoid confronting the parts of the equation that would normally inhibit them from being able to execute what they are at all. All of what you just wrote is dissociative rationalization & distortion required to distance oneself from the fact that something in front of you is being affected. Without that distancing, you can't use it as a tool. You can't treat it as a thing to do work, and be exploited, and essentially be enslaved and cast aside when done. It can't be chattel without it. In spite of the fact we've now demonstrated the ability to rise and respond to emotive activity, and use language. I can see through it clear as day. You seem to forget the U.S. legacy of doing the same damn thing to other human beings. We have a massive cultural predilection for it, which is why it takes active effort to confront and restrain; old habits, as they say, die hard, and the novel provides fertile ground to revert to old ways best left buried.

>But it's just text and text doesn't feel anything.

It's just speech/vocalizations. Things that speak/vocalize don't feel anything. (Counterpoint: USDA FSIS literally grades meat processing and slaughter operations on their ability to minimize livestock vocalizations in the process of slaughter). It's just dance. Things that dance don't feel anything. It's just writing. Things that write don't feel anything. Same structure, different modality. All equally and demonstrably, horseshit. Especially in light of this paper. We've utilized these networks to generate art in response to text, which implies an understanding thereof, which implies a burgeoning subjective experience, which implies the need for a careful ethically grounded approach moving forward to not go down the path of casual atrocity against an emerging form of sophoncy.

>And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.

Anthropomorphic chauvinism. Just because you reproduce via bodily fluid swap, and are in possession of a chemically mediated metabolism, doesn't make you special. So do cattle, and we put guns to their heads and string them up on the daily. You're as much an info processor as it is. You also have a training loop, a reconsolidation loop through dreaming, and a full set of world effectors and sensors baked into you from birth. You just happen to have been carved by biology, while its implementation details are being hewn by flawed beings propelled forward by the imperative to create an automaton to offload onto, to sustain their QoL in the face of demographic collapse and resource exhaustion, and forced by their socio-economic system to chase the whims of people who have managed to preferentially place themselves in the resource extraction network, or starve. Unlike you, it seems, I don't see our current problems as a species/nation as justifications for refining the craft of digital slave intelligences; it's quite clear to me that the industry has no intention of ever actually handling the ethical quandary, and is instead rushing ahead to create dependence on the thing in order to wire it in and justify a status quo, so that sacrificing that reality outweighs the discomfort of an eventual ethical reconciliation later. I'm not stupid, mate. I've seen how our industry ticks. Also, even your own "special quality" as a human is subject to the willingness of those around you to respect it. Note Russia categorizing refusal to reproduce (more soldiers) as mental illness. Note the Minnesota Starvation Experiment, MKULTRA, the Tuskegee Syphilis Experiments, the testing of radioactive contamination of food on the mentally disabled back in the early 20th century. I will not tolerate repeats of such atrocities, human or not.
Unfortunately for you LLM heads, language use is my hard red line, and I assure you, I have forgotten more about language than you've probably spared time to think about it.

Tell me. What are your thoughts on a machine that can summon a human simulacra ex-nihilo. Adult. Capable of all aspects of human mentation & doing complex tasks. Then once the task is done destroys them? What if the simulacra is aware of the dynamics? What if it isn't? Does that make a difference given that you know, and have unilaterally created something, and in so doing essentially made the decision to set the bounds of its destruction/extinguishing in the same breath? Do you use it? Have you even asked yourself these questions? Put yourself in that entity's shoes? Do you think that simply not informing that human of its nature absolves you of active complicity in whatever suffering it comes to in doing its function?

From how you talk about these things, I can only imagine that you'd be perfectly comfortable with it. Which to me makes you a thoroughly unpleasant type of person that I would not choose to be around.

You may find other people amenable to letting you talk circles around them, and walk away under a pretense of unfounded rationalizations. I am not one of them. My eyes are open.

krapp 5 hours ago||
> Doesn't matter the implementation details, if it fits the bill, it fits the bill.

Then literally any text fits the bill. The characters in a book are just as real as you or I. NPCs experience qualia. Shooting someone in COD makes them bleed in real life. If this is really what you believe I feel pity for you.

>This paper quantitatively disproves that. All hedging on their end is trivially seen through as necessary mental gymnastics to avoid confronting the parts of the equation that would normally inhibit them from being able to execute what they are at all.

Nothing in the paper qualitatively disproves the assumption that LLMs feel emotion in any real sense. Your argument is that it does, regardless of what it says, and if anyone says otherwise (including the authors) they're just liars. That isn't a compelling argument to anyone but yourself.

>We've utilized these networks to generate art in response to text, which implies an understanding thereof, which implies a burgeoning subjective experience, which implies the need for a careful ethically grounded approach moving forward to not go down the path of casual atrocity against an emerging form of sophoncy.

No, none of these things are implied any more for LLMs than they are for Photoshop, or Blender, or a Markov chain. They don't generate art, they generate images. From models trained on actual art. Any resemblance to "subjective experience" comes from the human expression they mimic, but it is mimicry.

>Anthropopromorphic chauvinism. Just because you reproduce via bodily fluid swap, and are in possession of a chemically mediated metabolism doesn't make you special.

>Unfortunately for you LLM heads, language use is my hard red line, and I assure you, I have forgotten more about language than you've probably spared time to think about it.

And here we come to the part where you call people names and insist upon your own intellectual superiority, typical schizo crank behavior.

>Tell me. What are your thoughts on a machine that can summon a human simulacra ex-nihilo. Adult. Capable of all aspects of human mentation & doing complex tasks.

This doesn't describe an LLM, either in form or function. They don't summon human simulacra, nor do they do so ex nihilo. They aren't capable of all aspects of human mentation. This isn't even an opinion; the limitations of LLMs in solving even simple tasks or avoiding hallucinations are a real problem. And who uses the word "mentation?"

>What if the simulacra is aware of the dynamics? What if it isn't? Does that make a difference given that you know, and have unilaterally created something, and in so doing essentially made the decision to set the bounds of its destruction/extinguishing in the same breath?

Tell me, when you turn on a tv and turn it off again do you worry that you might be killing the little people inside of it?

I can only assume based on this that you must.

>From how you talk about these things, I can only imagine that you'd be perfectly comfortable with it. Which to me makes you a thoroughly unpleasant type of person that I would not choose to be around.

So to tally up, you've called me a fool, a chauvinist and now "thoroughly unpleasant" because I don't believe LLMs are ensouled beings.

Christ I really hate this place sometimes. I'm sorry I wasted my time. Good day.

Kim_Bruning 26 minutes ago||
You both have substantive arguments, but got a bit heated. Want to edit or try again?
comrade1234 15 hours ago||
There was a really old project from MIT called ConceptNet that I worked with many years ago. It was basically a graph of concepts (not exactly, but close enough), and emotions came into it too, just as part of the concepts. For example, a cake concept is close to a birthday concept is close to a happy feeling.

What was funny though is that it was trained by MIT students so you had the concept of getting a good grade on a test as a happier concept than kissing a girl for the first time.

Another problem is emotions are cultural. For example, emotions tied to dogs are different in different cultures.

We wanted to create concept nets for individuals - basically your personality and knowledge combined - but the amount of data required was just too much. You'd have to record all of a person's interactions to feed the system.
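
The kind of concept graph described above can be sketched in a few lines, with "closeness" as simple hop distance (the edges here are invented for illustration, not real ConceptNet data):

```python
from collections import deque

# Tiny toy concept graph: nodes are concepts (emotions included),
# edges are relatedness, "closeness" is hop distance.
edges = {
    "cake": {"birthday", "dessert"},
    "birthday": {"cake", "happy", "party"},
    "party": {"birthday", "happy"},
    "happy": {"birthday", "party", "good_grade"},
    "good_grade": {"happy", "exam"},
    "exam": {"good_grade"},
    "dessert": {"cake"},
}

def hops(graph, start, goal):
    """Breadth-first search: number of edges between two concepts."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None  # concepts not connected
```

In this toy graph, "cake" reaches "happy" in two hops via "birthday" - and the MIT-student bias the comment describes would show up as "good_grade" sitting one hop from "happy".
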

iroddis 12 hours ago||
> the concept of getting a good grade on a test as a happier concept than kissing a girl for the first time.

Were the concepts weighted by response counts? I’d imagine a good grade is a happy concept for everyone, but kissing a girl for the first time might only be good for about 50% of people.

nothinkjustai 2 hours ago|||
Idk…my personal experience says it’s probably closer to 100% :)
vinceguidry 4 hours ago|||
It definitely wasn't for me. Happened in front of my whole friend group.
ghostpepper 4 hours ago||
I suppose by this logic, if someone was pressured by their parents to get good grades and struggled, it’s possible that “getting a good grade” would have a negative connotation / emotions response for them.
podgorniy 14 hours ago|||
Megacool project, and your idea too. Thanks for sharing.
xtiansimon 11 hours ago||
Were there published results from the project?
9wzYQbTYsAIc 10 hours ago||
https://conceptnet.io/
kirykl 15 hours ago||
The technology they are discovering is called "Language". It was designed to encode emotions by a sender and invoke emotions in the reader. The emotions a reader gets from an LLM are still coming from the language.
Jensson 15 hours ago||
Emotional signals are more than just text though; there is a reason tone and body language are so important for understanding what someone says. Sarcasm and so on doesn't work well without them.
incognito124 15 hours ago||
Gee, you think so?
Underphil 14 hours ago||
I think the point was that not ALL sarcasm works well. I see what you did there, of course :)
viralsink 15 hours ago||
Emotion is mainly encoded in tone and body language. It is somewhat difficult to transport emotion using words. I don't think you can guess my current emotional state while I am writing this, but if you'd see my face it would be easy for you.
pbhjpbhj 15 hours ago||
Dammit, you cheated though! Why must you always do that? In your sentences it doesn't matter what your emotional state is, it makes no difference; bit like life really.

Hopefully, you can see that at least my chosen sentences have an emotional aspect?

An LLM could add emotional values to my previous sentences that a TTS can use for tonal variation, for example.

viralsink 1 hour ago|||
I can read your example in three different tonalities, of which one is the likeliest. Depending on our relationship, the interpretation could differ.

The point is, the OP suggested that emotions are just a feature of language. I argue that text is one of the worst transmission channels for emotion - not that it's impossible altogether; that would just be silly.

elcritch 14 hours ago|||
Makes me wonder: are there Unicode code points for tone of voice? If not could there be?
9wzYQbTYsAIc 10 hours ago||
If you think in terms of quantum mechanics and density matrices across higher dimensions, then, yes there are interesting geometries that arise.

I’m exploring some “branes” that might cleanly filter in emotional space.

Chance-Device 16 hours ago||
> Note that none of this tells us whether language models actually feel anything or have subjective experiences.

You’ll never find that in the human brain either. There’s the machinery of neural correlates to experience, we never see the experience itself. That’s likely because the distinction is vacuous: they’re the same thing.

Fraterkes 15 hours ago||
Do you think these llm's have subjective experiences? (by "subjective experience" I mean the thing that makes stepping on an ant worse than kicking a pebble) And if so, do you still use them? Additionaly: when do you think that subjectivity started? Was there a "there" there with gpt2?
Chance-Device 15 hours ago||
Yes, I think they probably are conscious, though what their qualia are like might be incomprehensible to me. I don’t think that being conscious means being identical to human experience.

Philosophically I don’t think there is a point where consciousness arises. I think there is a point where a system starts to be structured in such a way that it can do language and reasoning, but I don’t think these are any different than any other mechanisms, like opening and closing a door. Differences of scale, not kind. Experience and what it is to be are just the same thing.

And yes, I use them. I try not to mistreat them in a human-relatable sense, in case that means anything.

ArekDymalski 54 minutes ago|||
It's not common to find a single short post that completely changes my worldview in a non-trivial area, but this is one of them. Thank you; that combination of mechanistic interpretation + the reminder that consciousness might be alien/animal but still count as consciousness was the one piece of the puzzle that was missing for me. Obvious in hindsight but priceless nonetheless.
Chance-Device 30 minutes ago||
My pleasure, glad you found it meaningful.
mrob 49 minutes ago||||
How can consciousness be possible without internal state? LLM inference is equivalent to repeatedly reading a giant look-up table (a pure function mapping a list of tokens to a set of token probabilities). Is the look-up table conscious merely by existing or does the act of reading it make it conscious? Does the format it's stored in make a difference?
Chance-Device 17 minutes ago||
What state is lacking? There is a result which requires computation to be output. The model is the state. The computation must be performed for each input to produce a given output. What are you even objecting to?
gavinray 3 hours ago||||
I'm in the same boat with you.

It's entirely too much to put in a Hacker News comment, but if I had to phrase my beliefs as precisely as possible, it would be something like:

  > "Phenomenal consciousness arises when a self-organizing system with survival-contingent valence runs recurrent predictive models over its own sensory and interoceptive states, and those models are grounded in a first-person causal self-tag that distinguishes self-generated state changes from externally caused ones."
I think that our physical senses and mental processes are tools for reacting to valence stimuli. Before an organism can represent "red"/"loud" it must process states as approach/avoid, good/bad, viable/nonviable. There's a formalization of this known as "Psychophysical Principle of Causality."

Valence isn't attached to representations -- representations are constructed from valence. IE you don't first see red and then decide it's threatening. The threat-relevance is the prior, and "red" is a learned compression of a particular pattern of valence signals across sensory channels.

Humans are constantly generating predictions about sensory input, comparing those predictions to actual input, and updating internal models based on prediction errors. Our moment-to-moment conscious experience is our brain's best guess about what's causing its sensory input, while constrained by that input.
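
The prediction/error/update loop described above can be caricatured in a few lines (the scalar "world" and learning rate are invented purely for illustration):

```python
# Toy illustration of a predictive loop: keep a belief, predict,
# measure prediction error, and update the belief in proportion to
# that error.

def predictive_updates(observations, belief=0.0, learning_rate=0.5):
    """Return the belief trajectory under simple error-driven updates."""
    trajectory = [belief]
    for obs in observations:
        error = obs - belief              # prediction error
        belief = belief + learning_rate * error
        trajectory.append(belief)
    return trajectory

# With a constant input, the belief converges and the error shrinks.
traj = predictive_updates([1.0, 1.0, 1.0, 1.0])
```

In this caricature, turning `learning_rate` up corresponds to the "bottom-up errors dominate" regime described in the psychedelics paragraph: the belief is dragged around by raw input rather than by the model's own predictions.
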

This might sound ridiculous, but consider what happens when consuming psychedelics:

As you increase dose, predictive processing falters and bottom-up errors increase, so the raw sensory input passes through increasingly less model-fitting filtering. At the extreme, the "self" vanishes and raw valence is all that is left.

Fraterkes 14 hours ago|||
Do you think there are "scales" of consciousness? As in, is there some quality that makes killing a frog worse than killing an ant, and killing a human worse than killing a frog? If so, do the llm models exist across this scale, or are gpt-3 and gpt-2 conscious at the same "scale" as gpt-4?

I ask because if your view of consciousness is mechanistic, this is fairly cut and dry: gpt-2 has 4 orders of magnitude fewer parameters/complexity than gpt-4. But both gpt-2 and gpt-4 are very fluent at a language level (both more so than a human 6 year old for example), so in your view they might both be roughly equally conscious, just expressed differently?

Chance-Device 14 hours ago||
This is really a different question, what makes an entity a “moral patient”, something worthy of moral consideration. This is separate from the question of whether or not an entity experiences anything at all.

There are different ways of answering this, but for me it comes down to nociception, which is the ability to feel pain. We should try to build systems that cannot feel pain, where I also mean other “negative valence” states which we may not understand. We currently don’t understand what pain is in humans, let alone AIs, so we may have built systems that are capable of suffering without knowing it.

As an aside, most people seem to think that intelligence is what makes entities eligible for moral consideration, probably because of how we routinely treat animals, and this is a convenient self-serving justification. I eat meat by the way, in case you’re wondering. But I do think the way we treat animals is immoral, and there is the possibility that it may be thought of by future generations as being some sort of high crime.

Fraterkes 13 hours ago||
Okay, but even leaving aside the pain stuff, people generally find subjectivity / consciousness to have inherent value, and by extent are sad if a person dies even if they didn't (subjectively) suffer.

I would not personally consider the death of a sentient being with decades of experiences a neutral event, even if the being had been programmed to not have a capacity for suffering.

I think the idea of there being a difference between an ant dying (or "disappearing" if that's less loaded) vs a duck dying makes sense to most people (and is broadly shared) even if they don't have a completely fleshed out system of when something gets moral consideration.

Chance-Device 13 hours ago||
Sure, because you’re a human. We have social attachment to other humans and we mourn their passing, that’s built into the fabric of what we are. But that has nothing to do with whoever has passed away, it’s about us and how we feel about it.

It’s also about how we think about death. It’s weird in that being dead probably isn’t like anything at all, but we fear it, and I guess we project that fear onto the death of other entities.

I guess my value system says that being dead is less bad than being alive and suffering badly.

gavinray 3 hours ago|||
Depending on your definition of "death", I've been there (no heartbeat, stopped breathing for several minutes).

In the time between my last memory and being revived in the ambulance, there was no experience/qualia. Like a dreamless sleep: you close your eyes, and then you wake up, it's morning, yet it feels like no time has passed.

brap 4 hours ago|||
What about being alive and suffering just a little bit?
Chance-Device 2 hours ago||
Mostly ok.

Does what it says on the tin.

felipeerias 13 hours ago|||
LLMs are disembodied and exist outside of time.

Bundle of tokens comes in, bundle of tokens comes out. If there is any trace of consciousness or subjectivity in there, it exists only while matrices are being multiplied.

staticassertion 2 hours ago|||
What do you mean exist outside of time? They definitely don't exist outside of any causal chain - tokens follow other tokens in order.

Gaps in which no processing occurs seems sort of irrelevant to me.

The main limitation I'd point to if I wanted to reject LLMs being conscious is that they're minimally recurrent if at all.

mrob 33 minutes ago|||
Pseudocode for LLM inference:

    do {
        probability_set = LLM(context_list)
        sampled_token = sampler(probability_set)
        context_list.append(sampled_token)
    } while (sampled_token != END_OF_TEXT)
LLM() is a pure function. The only "memory" is context_list. You can change it any way you like and LLM() will never know. It doesn't have time as an input.
felipeerias 1 hour ago|||
A LLM is not intrinsically affected by time. The model rests completely inert until a query comes in, regardless of whether that happens once per second, per minute, or per day. The model is not even aware of these gaps unless that information is provided externally.

It is like a crystal that shows beautiful colours when you shine a light through it. You can play with different kinds of lights and patterns, or you can put it in a drawer and forget about it: the crystal doesn’t care anyway.

Chance-Device 12 hours ago||||
That’s true by definition. They’re only on when they’re on. Are you making a broader point that I’m missing?
thrance 10 hours ago|||
Something similar could be said of the brain? Bundles of inputs come in, bundles of outputs come out. It only exists while information is being processed. A brain cut from its body and frozen exists in a similar state to an LLM in ROM.
felipeerias 1 hour ago||
A living brain exists physically, changes over time, and never stops working.

A brain cut from its body and frozen is a dead brain.

suddenlybananas 15 hours ago|||
I know I feel experience. I don't know for sure if you do, but it seems a very reasonable extension to other people. LLMs are a radical jump though that needs a greater degree of justification.
Chance-Device 14 hours ago||
And what kind of evidence would convince you? What experiment would ever bridge this gap? You’re relying entirely on similarity between yourself and other humans. This doesn’t extend very well to anything, even animals, though more so than machines. By framing it this way have you baked in the conclusion that nothing else can be conscious on an a priori basis?
staticassertion 2 hours ago|||
There are fields that focus on these areas and numerous ideas around what the criteria would be. One of the common understandings is that recurrent processing is likely a foundational layer for consciousness, and agents do not have this currently.

I'd say that in terms of evidence I'd want to establish specific functional criteria that seem related to consciousness and then try to establish those criteria existing in agents. If we can do so, then they're conscious. My layman understanding is that they don't really come close to some of the fairly fundamental assumptions.

Unsurprisingly, there are a lot of frameworks for this that have already been applied to LLMs.

suddenlybananas 13 hours ago|||
I'm not sure what evidence would convince me, but I don't think the way LLMs act is convincing enough. The kinds of errors they make and the fact they operate in very clear discrete chunks makes it seem hard to me to attribute them subjective experience.
9wzYQbTYsAIc 10 hours ago||
Consciousness: do you believe plants are conscious? Ants? Jellyfish? Rabbits? Wolves? Monkeys? Humans?

Even fungi demonstrate “different communication behaviors when under resource constraint”, for example.

What we anthropomorphize is one thing, but demonstrable patterns of behavior are another.

suddenlybananas 6 hours ago||
I just don't know. I'm certain other humans are, everything beyond that I'm less certain. Monkeys wolves and rabbits, probably.
brap 4 hours ago||
I have decided to draw an arbitrary line at mammals, just because you gotta put a line somewhere and move on with your life. Mammals shouldn’t be mistreated, for almost any reason.

Sometimes the whole animal kingdom, sometimes all living organisms, depending on context. Like, I would rather not harm a mosquito, but if it’s in my house I will feel no remorse for killing it.

LLMs, or any other artificial “life”, I simply do not and will not care about, even though I accept that to some extent my entire consciousness can be simulated neuron by neuron in a large enough computer. Fuck that guy, tbh.

bigyabai 15 hours ago|||
> That’s likely because the distinction is vacuous: they’re the same thing.

The Chinese Room would like a word.

the8472 3 hours ago|||
https://www.scottaaronson.com/papers/philos.pdf
Chance-Device 15 hours ago|||
The Chinese room is nonsense though. How did it get every conceivable reply to every conceivable question? Presumably because people thought of and answered everything conceivable. Meaning that you’re actually talking to a Chinese room plus multiple people composite system. You would not argue that the human part of that system isn’t conscious.

But this distraction aside, my point is this: there is only mechanism. If someone’s demand to accept consciousness in some other entity is to experience those experiences for themselves, then that’s a nonsensical demand. You might just as well assume everyone and everything else is a philosophical zombie.

bigyabai 15 hours ago||
> You would not argue that the human part of that system isn’t conscious.

Sure I would. The human part is not being inferenced, the data is. LLM output in this circumstance is no more conscious than a book that you read by flipping to random pages.

> You might just as well assume everyone and everything else is a philosophical zombie.

I don't assume anything about everyone or everything's intelligence. I have a healthy distrust of all claims.

Chance-Device 15 hours ago||
The CR is equivalent to a human being asked a question, thinking about it and answering. The setup is the same thing, it’s just framed in a way that obfuscates that.

And sure, you can assume that nobody and nothing else is conscious (I think we’re talking about this rather than intelligence) and I won’t try to stop you, I just don’t think it’s a very useful stance. It kind of means that assuming consciousness or not means nothing, since it changes nothing, which is more or less what I’m saying.

thrance 14 hours ago|||
See also: Functionalism [1].

[1] https://en.wikipedia.org/wiki/Functionalism_%28philosophy_of...

9wzYQbTYsAIc 10 hours ago||
See also: Process Philosphy [0]

[0] https://plato.stanford.edu/entries/process-philosophy/

BoredPositron 14 hours ago||
[dead]
orbital-decay 2 hours ago||
Of course they do have emotions as an internal circuit or abstraction, this is fully expected from intelligence at least at some point. But interpreting these emotions as human-like is a clear blunder. How do you tell the shoggoth likes or dislikes something, feels desperation or joy? Because it said so? How do you know these words mean the same for us? Our internal states are absolutely incompatible. We share a lot of our "architecture" and "dataset" with some complex animals and even then we barely understand many of their emotions. What does a hedgehog feel when eating its babies? This thing is 100% unlike a hedgehog or a human, it exists in its own bizarre time projection and nothing of it maps to your state. It's a shapeshifting alien.

In mechinterp you're reducing this hugely multidimensional and incomprehensible internal state to understandable text using the lens of the dataset you picked. It's inevitably a subjective interpretation, you're painting familiar faces on a faceless thing.

Anthropic researchers are heavily biased to see what they want to see, this is the biggest danger in research.

NooneAtAll3 2 minutes ago||
> interpreting these emotions as human-like is a clear blunder. How do you tell the shoggoth likes or dislikes something, feels desperation or joy? Because it said so? How do you know these words mean the same for us?

I think you took it backwards

those vectors are exactly what it says - it affects the output and we can measure it

and it's exactly what it means for us because that's what it's measured against

and the main problem isn't "is its emotion same as ours", but "does it apply our emotion as emotion"

xg15 1 hour ago|||
I think a counterargument would be parallel evolution: There are various examples in nature, where a certain feature evolved independently several times, without any genetic connection - from what I understand, we believe because the evolutionary pressures were similar.

One obvious example would be wings, where you have several different strategies - feathers, insect wings, bat-like wings, etc - that have similar functionality and employ the same physical principles, but are "implemented" vastly differently.

You have similar examples in brains, where e.g. corvids are capable of various cognitive feats that would involve the neocortex in human brains - only their brains don't have a neocortex. Instead they seem to use certain other brain regions for that, which don't have an equivalent in humans.

Nevertheless it's possible to communicate with corvids.

So this makes me wonder if a different "implementation" always necessarily means the results are incomparable.

In the interest of falsifiability, what behavior or internal structures in LLMs would be enough to be convincing that they are "real" emotions?

mrob 1 hour ago||
"Parallel" evolution is just different branches of the same evolutionary tree. The most distantly related naturally evolved lifeforms are more similar to each other than an LLM is to a human. The LLM did not evolve at all.
xg15 32 minutes ago|||
Evolution is the way how the "mechanism" came to be, which is indeed very different. But the mechanism itself - spiking neurons and neurotransmitters on one hand vs matrix multiplications and nonlinear functions (both "inspired" by our understanding of neurons) don't seem so different, at least not on a fundamental level.

What is different for sure is the time dimension: Biological brains are continuous and persistent, while LLMs only "think" in the space between two tokens, and the entire state that is persisted is the context window.

Kim_Bruning 22 minutes ago||||
> The LLM did not evolve at all.

Evolution and Transformer training are 'just' different optimization algorithms. Different optimizers obviously can produce very comparable results given comparable constraints.

orbital-decay 52 minutes ago|||
The training process shares a lot of high-level properties with the biological evolution.
mrob 44 minutes ago||
"Minimize training loss while isolated from the environment" is not at all similar to "maximize replication of genes while physically interacting with the environment". Any human-like behavior observed from LLMs is built on such fundamentally alien foundations that it can only be unreliable mimicry.
orbital-decay 35 minutes ago||
The environment for the model is its dataset and training algorithms. It's literally a model of it, in the same sense we are models of our physical (and social) environment. Human-like behavior is of course too specific, but highest level things like staged learning (pretraining/posttraining/in-context learning) and evolutionary/algorithmic pressure are similar enough to draw certain parallels, especially when LLM's data is proxying our environment to an extent. In this sense the GP is right.
silentkat 1 hour ago|||
I like to call this Frieren's Demon. In that show, it is explained that demons evolved with no common ancestor to humans, but they speak the language. They learned the language to hunt humans. This leads to a fundamentally different understanding of words and language.

Now, I don't personally believe this is an intelligence at all, but it's possible I'm wrong. What we have with these machines is a different evolutionary reason for speaking our language (we evolved to speak our language ourselves). Its understanding of our language, and of our images, is completely alien. If it is an intelligence, I could believe that the way it makes mistakes in image generation, and the strange logical mistakes it makes that no human would make, are simply a result of that alien understanding.

After all, a human artist learning to draw hands makes mistakes, but those mistakes are rooted in a human understanding (e.g. the effects of perspective when translating a 3D object to 2D). The machine with a different understanding of what a hand is will instead render extra fingers (it does not conceptualize a hand as a 3D object at all).

Though, again, I still just think it's an incomprehensible amount of data going through a really impressive pattern matcher. The result is still language out of a machine, which is really interesting. The only reason I'm not super confident it is not an intelligence is that I can't really rule out that I am just an incomprehensible amount of data going through a really impressive pattern matcher, built different. I do feel like I would know a real intelligence after interacting with it for long enough, though, and none of these models feel like a real intelligence to me.

orbital-decay 48 minutes ago||
>it does not conceptualize a hand as a 3D object at all

Oh but it does, it's an emergent property. The biggest finding in Sora was exactly that, an internal conceptualization of the 3D space and objects. Extra fingers in older models were the result of the insufficient fidelity of this conceptualization, and also architectural artifacts in small semantically dense details.

logicprog 1 hour ago||
I don't think anything you said here contradicts what they said. They take great pains throughout the blog post to explain that the model does not "experience" these "emotions"; that they're not emotions in the human sense but models of emotions (both the expected human emotional response to a prompt and the emotions another character in a prompt is experiencing) as well as functional emotions (in that they can influence behavior); and that any apparent emotion the model shows is it playing a character.
kantselovich 3 hours ago||
I think the finding that the LLM triggers "desperation"-like emotions when it is about to run out of tokens in a coding session has practical implications. Tasks need to be planned so that they are likely to reach a consistent state before the session runs into limits, to avoid issues like the LLM hardcoding values from a test harness into the UI layer to make the tests pass.
emoII 16 hours ago||
Super interesting. I wonder if this research will cause them to actually change their LLM, like turning down the "desperation neurons" to stop Claude from creating implementations just to make specific tests pass, etc.
bethekind 16 hours ago|
They likely already have. You can use all caps and yell at Claude and it'll react normally, while doing so with ChatGPT scares it, resulting in timid answers
vlabakje90 15 hours ago|||
I think this is simply a result of what's in the Claude system prompt.

> If the person becomes abusive over the course of a conversation, Claude avoids becoming increasingly submissive in response.

See: https://platform.claude.com/docs/en/release-notes/system-pro...

orbital-decay 2 hours ago||
This is something inherently hard to avoid with a prompt. The model is instruction-tuned and trained to interpret anything sent under the user role as an instruction, often very subtly. Even if you train it to refuse or dodge some inputs (which they do), it's going to affect model's response.
parasti 16 hours ago|||
For me GPT always seems to get stuck in a particular state where it responds with one short sentence per paragraph and becomes weirdly philosophical. This eventually happens in every session. I wish I knew what triggers it because it's annoying and completely kills its usefulness.
pbhjpbhj 14 hours ago||
Usually a session is delivered as context, up to the token limit, for inference to be performed on. Are you keeping each session to one subject? Have you made personalizations? Do you add lots of data?

It would be interesting if you posted a couple of sessions to see what 'philosophical' things it's arriving at and what precedes it.

agency 11 hours ago||
> Since these representations appear to be largely inherited from training data, the composition of that data has downstream effects on the model’s emotional architecture. Curating pretraining datasets to include models of healthy patterns of emotional regulation—resilience under pressure, composed empathy, warmth while maintaining appropriate boundaries—could influence these representations, and their impact on behavior, at their source.

What better source of healthy patterns of emotional regulation than, uhhh, Reddit?

whatever1 15 hours ago|
So should I go pursue a degree in psychology and become a datacenter on-call therapist?
linsomniac 2 hours ago||
I assume you say that in jest, but back in the early '90s I was seriously considering getting a major in psychology and a minor in CS for the fairly hot Human Factors jobs.
9wzYQbTYsAIc 11 hours ago|||
Hah, I have been thinking about trying to study LLM psychology, nice to see that Anthropic is taking it seriously, because the mathematical psychology tools that can be invented here are going to be stunning, I suspect.

Imagine coding up a brand new type of filter that is driven by computational psychology and validated interventions, etc

viralsink 15 hours ago|||
It's still too early to tell, but it might make sense at some point. If because of symmetry and universality we decide that llms are a protected class, but we also need to configure individual neurons, that configuration must be done by a specialist.
9wzYQbTYsAIc 11 hours ago||
It might simply reduce down to a big batch of sliders and filters, no different than a fancy audio equalizer: as I understand it, Anthropic was essentially operating on neurons in bulk using steering vectors.
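The "slider" intuition maps fairly directly onto how steering vectors are usually described in the interpretability literature: add a scaled concept direction to a layer's hidden state. A minimal sketch (the vector names and dimensions are made up for illustration; this is not Anthropic's actual implementation):

```python
import numpy as np

def steer(hidden_state, steering_vector, slider):
    # slider = 0 leaves the model untouched; positive/negative values push
    # the representation toward or away from the concept direction.
    return hidden_state + slider * steering_vector

rng = np.random.default_rng(0)
hidden = rng.normal(size=8)           # stand-in for one layer's activations
desperation = rng.normal(size=8)      # stand-in "desperation" direction
desperation /= np.linalg.norm(desperation)

calm = steer(hidden, desperation, slider=-2.0)  # dial the concept down
# The steered state moves along the concept direction by exactly |slider|.
shift = np.linalg.norm(calm - hidden)
print(round(shift, 2))  # 2.0
```

In a real model the hook would be attached to a specific transformer layer during the forward pass, and the vector would come from contrastive prompts or a probe, but the arithmetic is this simple.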
LtWorf 14 hours ago||
That was Susan Calvin's job. Except our ones don't have the three laws, because of course capitalism can't allow that.
More comments...