
Posted by dnw 1 day ago

Emotion concepts and their function in a large language model (www.anthropic.com)
164 points | 166 comments | page 3
threethirtytwo 15 hours ago|
Whenever I come to HN I see a bunch of people say LLMs are just next-token predictors and that they completely understand LLMs. And almost every one of these people is utterly self-assured, to the point of total confidence, because they read about and understand what transformers do.

Then I watch videos like this, straight from the source, trying to understand LLMs as a black box and even considering the possibility that LLMs have emotions.

How does such a person reconcile being utterly wrong? I used to think HN was full of intelligent people, but it’s becoming more and more obvious that HNers are pretty average or even below.

qaadika 10 hours ago||
I'm kinda one of those who believe they 'completely' understand LLMs. But I've also developed my understanding of them such that the internal mechanisms of the transformer, or really any future development in the space based on neural networks and machine learning, are irrelevant to that understanding:

1. A string of Unicode characters is converted into an array of integer values (tokens) and input to a black box of choice.

2. The black box takes in the input, does its magic, and returns an output as an array of integer values.

3. The returned output is converted into a string of Unicode characters and given to the user, or inserted into a code file, or whatever. At no point does the black box "read" the input in any way analogous to how a human reads.

Where people get "The AIs have emotions!!!" from returning an array of integer values is beyond me. It's definitely more complicated than "next token predictor", but it really is as simple as "make words look like numbers, numbers go in, numbers come out, we make the numbers look like words" (concrete sketch below).
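
A minimal sketch of steps 1-3, assuming the Hugging Face transformers library and GPT-2 as the black box of choice (both are illustrative picks, not anything the thread specifies):

  # Steps 1-3 end to end: words -> numbers -> black box -> numbers -> words.
  from transformers import AutoTokenizer, AutoModelForCausalLM

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  # 1. Unicode string -> array of integer token ids.
  input_ids = tokenizer.encode("I feel happy today because", return_tensors="pt")

  # 2. Numbers in, numbers out: the model repeatedly predicts a
  #    distribution over the next token id and extends the array.
  output_ids = model.generate(input_ids, max_new_tokens=20)

  # 3. Integer ids -> Unicode string handed back to the user.
  print(tokenizer.decode(output_ids[0]))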

threethirtytwo 7 hours ago||
Yeah, nothing personal, but my claim here is that you’re not smart. The next-token-predictor aspect is something anyone can understand… the transformer is not quantum physics.

Like, look at what you wrote. You called it black-box magic, and in the same post you claim you understand LLMs. How the heck can you understand it and call it a black box at the same time?

The level of mental gymnastics and stupidity is through the roof. Clearly the majority of the LLM’s useful behavior lives inside the whole section you just waved away as “black box”.

> Where people get "The AIs have emotions!!!" from returning an array of integers values is beyond me

Let me spell it out for you. Those integers can be translated into the exact same language humans use when they feel identical emotions. So those people claim that the “black box” feels emotions because what they observe is identical to what they observe in a human.

The LLM can claim it feels emotions just like a human can claim the same thing. We assume humans feel emotions based on this evidence, but we don’t apply that logic to LLMs? The truth of the matter is we don’t actually know, and it’s equally dumb to claim that you know LLMs feel emotions as to claim that they don’t.

You have to be pretty stupid not to realize this is where they’re coming from, so there’s an aspect of you lying to yourself here, because I don’t think you’re that stupid.

big_toast 15 hours ago||
One day I realized I needed to make sure I'm voting on quality stories/comments. I wonder whether a call to vote substantively and often might change the SNR.

The guidelines encourage substantive comments, but maybe voters are part of the solution too. Kinda like having a strong reward model when training LLMs, to avoid reward hacking or other undesirable behavior.
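
A toy sketch of that reward-hacking analogy (all names and numbers here are invented): if the score is just raw upvotes, a punchy one-liner beats a careful explanation; weighting votes by substance flips the ranking.

  # Toy reward-hacking illustration; everything here is made up.
  def score(upvotes: int, words: int, weigh_substance: bool) -> float:
      if not weigh_substance:
          return float(upvotes)          # raw popularity: easy to game
      substance = min(words / 100, 1.0)  # crude stand-in for effort
      return upvotes * substance

  one_liner  = {"upvotes": 40, "words": 6}    # hot take, heavily upvoted
  effortpost = {"upvotes": 15, "words": 300}  # careful explanation

  for weigh in (False, True):
      print("substance-weighted" if weigh else "raw votes",
            score(**one_liner, weigh_substance=weigh),
            score(**effortpost, weigh_substance=weigh))
  # raw votes: 40.0 beats 15.0; substance-weighted: 2.4 loses to 15.0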

threethirtytwo 11 hours ago||
If voters are stupid, then it doesn't really help.

I think what’s happening is that reality is asserting itself so hard that people can’t stay this stupid anymore.

techpulselab 1 day ago||
[dead]
ActorNightly 1 day ago||
[dead]
yoaso 1 day ago||
[flagged]
staticassertion 14 hours ago||
> If we think of human emotions the same way, just evolution's way of nudging behavior

I think we basically do; the only interesting bit is our perception of phenomenal experiences.

podgorniy 1 day ago|||
> If we think of human emotions the same way, just evolution's way of nudging behavior

What other realistic, alternative ways are there to see emotions?

pbhjpbhj 1 day ago|||
I'm not being pejorative, but that sounds more like psychopathy or autism?

Evolution isn't a god; it has no steering hand. It is accidents that either provide advantage or don't.

LLMs are getting more human-like because that's how we're developing them. Arguably that's about market forces: LM owners see an opportunity to exploit people's desire for emotional interaction (i.e. loneliness) in order to make money.

silisili 1 day ago||
Probably the other direction: emotions are raw, and most humans relate to them and change behavior accordingly.

Only psychopaths think of emotion as nothing but a means of changing behavior. The scary thing is that LLMs would, by nature, exhibit the same behavior.

nelox 1 day ago||
Many non-psychopaths, e.g. CBT therapists, evolutionary psychologists, and neuroscientists such as Damasio, view emotions as adaptive tools for guiding/changing behaviour.
koolala 1 day ago|
A-HHHHHHHHHHHHHHHJ