
Posted by oldfrenchfries 19 hours ago

AI overly affirms users asking for personal advice (news.stanford.edu)
https://arxiv.org/abs/2602.14270

https://www.science.org/doi/10.1126/science.aec8352

609 points | 455 comments
RodMiller 18 hours ago||
[flagged]
nubg 18 hours ago|
AI slop bot go away
duskdozer 18 hours ago|||
It's nuts. Not so much in this thread right now, but in one earlier there was a wall of them that all latched onto the same buzzphrase from the article.
dijksterhuis 17 hours ago||
i’m feeling a brilliant sense of satisfaction now that we can flag them due to guideline changes
kvasserman 15 hours ago|||
Fair enough if it reads that way. I was trying to describe that interacting with AI kinda makes you feel constantly uncertain about stuff it spits out.
neya 18 hours ago||
WTF is "yes-men"?

Original title:

AI overly affirms users asking for personal advice

Dear mods, can we keep the title neutral please instead of enforcing gender bias?

skvmb 17 hours ago||
https://www.merriam-webster.com/dictionary/yes-man
oldfrenchfries 18 hours ago|||
That's a fair point on the title. I used "Yes-Men" as a colloquialism for the "sycophancy" described in the Stanford paper, but overly affirming or sycophantic is definitely more precise and neutral. I can't edit the title anymore, but I appreciate the catch.
neya 15 hours ago|||
All good. I thought it was a gendered reference and learned that it isn't. My bad.
nemo44x 16 hours ago||||
Don’t apologize to these types of people. It will only make your problem worse as now you’re an admitted offender. Ignore them or better yet laugh at them to put their insane ideas back on the margins where they belong.
cyanydeez 18 hours ago|||
New title: "LLMs treat you like a Billionaire; you're not"
9rx 17 hours ago|||
> gender bias

It is funny that you originally recognized and found it necessary to call out that AI isn't human, but then made the exact same mistake yourself in the very same comment. I expect the term you are looking for is "ontological bias".

dinkumthinkum 17 hours ago|||
Gender bias? I could understand if you felt the title was more provocative in signaling sycophancy but what gender bias? I'm confused. Is this some kind of California thing?
nprateem 18 hours ago||
Lol. How do you function in daily life?
neya 17 hours ago||
Same as you, why is that so hard for you to grasp?
mikkupikku 16 hours ago|||
My dude, you're objecting to the use of a perfectly ordinary English idiom because it doesn't advance your personal ideology (which few other people in this world share with you.) How do you get through a day without melting down because somebody said "mailman"?
neya 15 hours ago||
> my dude

This is the problem I'm trying to highlight. For one, I'm not "your dude". I don't even know you like that.

If you want to correct me on the idiom usage, be my guest. For another, mailman and yes-man aren't even the same logical comparison: mailman is a profession, while yes-man is a label.

The acoustics inside your head must be incredible.

nprateem 2 hours ago||
Chill bro. You've probably got undiagnosed autism. Worth getting checked out.
joquarky 10 hours ago|||
PCU (1994)
masteranza 18 hours ago||
We can surely fix it, and we probably should. However, I don't think AI is doing any worse here than a friend's advice when they hear a one-sided story. The only difference is that friends' advice isn't being studied.

Conversely, AI chatbots are great mediators if both parties are present in the conversation.

xiphias2 18 hours ago|
Marc Andreessen has talked about the downside of RLHF: it's a specific group of liberal, low-income people in California who did the rating, so AI has been leaning toward their culture.

I think OpenAI tried to diversify at least the location of the raters somewhat, but it's hard to diversify on every level.

michaelcampbell 18 hours ago||
Do you have any links to documentation of this? Andreessen has a definite bias as well, so I'm not about to just accept his say-so in a fit of Appeal to Authority.

(eg: "Cite?")

xiphias2 13 hours ago||
He was talking about it in the Lex Fridman interview after Trump was elected. He was also talking about a lot of things the Biden administration forced on Silicon Valley at that time (since then, Google has lost a case about one of these back-deals).
michaelcampbell 9 hours ago||
So no evidence then. Kind of like Lex touting his bona fides as a professor.
nirvdrum 18 hours ago|||
For anyone else unfamiliar with the term:

RLHF = Reinforcement Learning from Human Feedback

https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...

sph 18 hours ago|||
What do low-income people have to do with it, when AI companies and research are born out of the Silicon Valley culture of rich, liberal Californians?

I'm still waiting for models based on the curt and abrasive stereotype of Eastern European programmers, in contrast to the sickeningly cheerful AIs we have today that couldn't sound more West Coast if they tried.

fourside 18 hours ago|||
Low income and liberal is usually code for certain “undesirables” that conservatives tend to dislike. Better watch what LLM your kids use or they might end up speaking Spanish and listening to rap ;).
xiphias2 13 hours ago|||
It's not about liking / disliking, but conservatives tend to prefer staying together even if it's a bad relationship, and liberals prefer splitting by default if there are serious problems.

The sycophantic style is clearly categorized as more liberal (do what you feel is good).

jibal 5 hours ago||
Does that explain Trump's numerous wives?

Reading your comments is a wonderland of right wing bias.

dinkumthinkum 17 hours ago|||
Eh, or grow up hating America and thinking they need to fly to Cuba to explain to the people how great communism is for them. Who knows.
tbrownaw 18 hours ago||||
> What do low income people have to do with it, when AI companies and research is borne out of Silicon Valley culture of rich, liberal Californians?

RLHF is "ask a human to score lots of LLM answers". So the claim is that the AI companies are hiring cheap (~poor) people from convenient locations (CA, since that's where the rest of the company is).

astrange 15 hours ago|||
"Poor" in California means earning $80k/year, so they probably are not doing that. Africa / Indonesia / Philippines are better places to find English speaking RLHF workers.
sublinear 15 hours ago|||
Yes, this is precisely it. There isn't going to be hard evidence to prove it though. The survey data that underpins some empirical studies has similar transparency issues too. This is far from a new problem.

If you adjust your mindset slightly when searching online, it's not hard to find communities of people looking for quick side work, and this was huge during the COVID lockdown era. There were people helping train LLMs for all kinds of purposes, from education to customer service. Those startups quickly cashed out a few years ago and sold to the big players we have now.

I don't get why this is hard for people to believe (or remember).

cyanydeez 18 hours ago|||
Poor people, to the billionaire, clearly are morally and ethically unsound.

https://pmc.ncbi.nlm.nih.gov/articles/PMC9533286/

mvkel 18 hours ago|||
Marc Andreessen should get HF on his own RL, because he's completely wrong.

This sounds like something Elon would say to make Grok seem "totally more amazeballs," except "anti-woke" Grok suffers from the same behavior.

ej88 18 hours ago|||
huh? this is completely inaccurate
kibwen 18 hours ago||
You're absolutely right!
BoredPositron 18 hours ago||
"Talked about" as in lied about it, with you taking his words as gospel without verifying? That looks just as bad as the "Yes-Men" AI models.