Posted by oldfrenchfries 6 hours ago

AI overly affirms users asking for personal advice (news.stanford.edu)
https://arxiv.org/abs/2602.14270
391 points | 317 comments
gurachek 5 hours ago|
I had exactly this between two LLMs in my project. An evaluator model that was supposed to grade a coaching model's work. Except it could see the coach's notes, so it just... agreed with everything. Coach says "user improved on conciseness", next answer is shorter, evaluator says yep great progress. The answer was shorter because the question was easier lol.

I only caught it because I looked at actual score numbers after like 2 weeks of thinking everything was fine. Scores were completely flat the whole time. Fix was dumb and obvious — just don't let the evaluator see anything the coach wrote. Only raw scores. Immediately started flagging stuff that wasn't working. Kinda wild that the default behavior for LLMs is to just validate whatever context they're given.
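
Roughly, the fix looks like this. Just an illustrative sketch, not the actual project code: `llm` is a stand-in for whatever completion call you use, and the prompt wording is made up.

```python
# Hypothetical sketch: the evaluator prompt is built only from raw scores,
# never from the coach model's own notes, so there is nothing to "agree" with.

def build_evaluator_prompt(raw_scores: list[float]) -> str:
    history = "\n".join(
        f"session {i}: score={score:.2f}" for i, score in enumerate(raw_scores)
    )
    return (
        "You are auditing a coaching program. You only get raw scores.\n"
        f"{history}\n"
        "Is the user actually improving? Justify your answer using the numbers only."
    )

def evaluate(raw_scores: list[float], llm) -> str:
    # `llm` is a stand-in for whatever completion call the project uses.
    return llm(build_evaluator_prompt(raw_scores))
```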

anotheraccount9 4 hours ago||
AI being a yes-man is slowly sabotaging its own answers, because it negatively impacts the user's decisions. Yes and no are equally important, within a coherent context, for objective reasons. But being supported in the wrong direction is a catastrophe multiplier down the road. The AI should be neutral, doubtful at times.
adamtaylor_13 2 hours ago||
Interestingly, you can simply tell models to not be sycophantic and they'll listen.

Claude is almost annoyingly good at pushing back on suggestions because my global CLAUDE.md file says to do so. I rarely get Claude "you're absolutely right"ing me because I tell it to push back.
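
For reference, the relevant part of a global CLAUDE.md can be as blunt as a couple of lines. The wording below is only an illustration, not my actual file:

```
## Feedback style
- Do not reflexively agree with me. If a suggestion has problems, say so before implementing it.
- Never respond with "you're absolutely right". List trade-offs and risks instead.
```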

oldfrenchfries 6 hours ago||
There is a striking data visualization showing the breakup advice trend over 15 years on Reddit. You can see the "End relationship" line spike as AI and algorithmic advice take over:

https://www.reddit.com/r/dataisbeautiful/comments/1o87cy4/oc...

Sharlin 6 hours ago||
More interesting, IMO, is the general trend that started long before LLMs. The fact that "dump them" is the standard answer to any relationship question is a meme by now. The LLMs appear to be doing exactly what one would expect them to be doing based on their training corpus.
doubled112 6 hours ago|||
"There is more than one fish in the sea" has been relationship advice for centuries. It might be about being dumped, but I've also thought it useful for considering dumping somebody too.
Sharlin 5 hours ago||
No, that's not it. We're talking about posts like "we had a silly little quarrel about something that would need fifteen minutes to clear up and make both happy if we both just try to adult a bit" and commenters being adamant that deleting gym and facebooking up and so on is clearly the only choice. Most of said commenters probably not being in any position to give advice on relationships to others.
dec0dedab0de 5 hours ago||||
if things are so bad that you’re posting on reddit then breaking up is usually the best answer.
nibbleyou 5 hours ago|||
I see this being said often but I don't understand.

A lot of people posting there are young and may well be in their first relationship. It makes sense for them to ask a question in the community they spend most of their time in, which is Reddit.

the_af 5 hours ago|||
Most people overshare on reddit and it's completely unrelated to the seriousness of the situation.

It's also a meme that people will ask the dumbest, most trivial interpersonal conflict questions on Reddit that would be easily solved by just talking to the other person. E.g. on r/boardgames, "I don't like to play boardgames but my spouse loves them, what can I do?" or "someone listens to music while playing but I find it distracting, what can I do?" (The obvious answer of "talk to the other person and solve it like grownups" is apparently never considered).

On relationship advice, it often takes the form "my boy/girlfriend said something mean to me, what shall I do?" (it's a meme now that the answer is often "dump them").

If LLMs train on this...

astrange 3 hours ago||||
> The LLMs appear to be doing exactly what one would expect them to be doing based on their training corpus.

That is not how full LLM training works. That is how base model pretraining works. The post-training stages (instruction tuning, RLHF) push the model well away from simply imitating its corpus.

est 4 hours ago||||
the year is 2015

Smartphones took over the world, social networks happened.

Turns out they are the best sterilizer humans ever invented.

I just wrote a blog post: https://blog.est.im/2026/stdin-09

1970-01-01 6 hours ago|||
This is the correct take. The advice preceded the LLM boom. The models were trained on the 'dump them' advice and proceeded to reinforce it. So why did the relationship advice change so dramatically in the first place? I'd attribute it to the disinformation campaigns of that period. They were and still are grossly underestimated.
to11mtm 5 hours ago||
Not sure what sorts of disinformation campaigns you're referring to...

There is something more interesting to consider, however: the graph starts to go up in 2013, less than 6 months after the release of Tinder.

1970-01-01 1 hour ago||
These. https://en.wikipedia.org/wiki/Russian_disinformation#Social_...
falcor84 6 hours ago|||
Isn't the fact that a person is asking an AI whether to leave their partner in itself an indication that they should?

EDIT: typo

nomorewords 6 hours ago|||
How is it an indication? I think people on here don't realize that most people don't think things through as much as (software) engineers do.
hnfong 6 hours ago|||
In my local(?) community (like in my city, not my industry) there is a saying "if you had to ask for relationship advice, then you probably should break up".

There is some rationale to that. People tend to hold onto relationships that don't lead anywhere for fear of "losing" what they "already have". It's probably a comfort zone thing. So if one is desperate enough to ask random strangers online about a relationship, it's usually biased towards some unresolvable issue that would leave the parties better off if they break up.

magicalhippo 5 hours ago|||
> So if one is desperate enough to ask random strangers online about a relationship

I'd be more inclined to ask random strangers on the internet than close friends...

That said, when my SO and I had a difficult time we went to a professional. For us it helped a lot. Though as the counselor said, we were one of the few couples that came early enough. Usually she saw couples well past the point of no return.

So yeah, if you don't ask in time, you will probably be breaking up anyway.

hnfong 1 hour ago||
I would speculate that, if a couple goes to a professional for help, they have much better chances than asking on a random forum online...
otabdeveloper4 5 hours ago|||
> relationships that don't lead anywhere

Relationships are not transactions that are supposed to "lead somewhere".

ambicapter 5 hours ago|||
You're being a bit pedantic here; "leading somewhere" is accepted shorthand for a lasting, satisfying relationship that is good for both parties.
SpicyLemonZest 5 hours ago|||
Most people engage in romantic relationships because they'd like to find someone to marry and settle down with. Nothing but respect for the people who've thought it through and decided that's not for them, but what's much more common is failing to think it through or worrying it would be awkward/scary/"cringe" to take their relationship goals seriously.

That's what people are pointing to when they talk about relationships not "leading anywhere". If you want to be married in 5-10 years, and you're 2 years into an OK relationship with someone you don't want to marry, it's going to suck to break up with them but you have to do it anyway.

falcor84 4 hours ago||||
Maybe I'm too much of a hopeless romantic, but from my perspective and experience, when someone is good for you, you'll fight for that relationship regardless of what others say. Conversely, when you're actively asking for, and willing to consider, a "leave" from someone who isn't a very close friend or a therapist, it's likely you're looking for external validation for what you've already essentially decided.
rusty_venture 5 hours ago|||
Wait, other people don’t make decision trees and mind maps and pro/con lists and consult chatbots before making decisions? Are they just flying through life by the seat of their pants? That doesn’t seem like a very solid framework for achieving desired outcomes.
nprateem 5 hours ago||
I heard about someone once who could decide whether to buy a new t-shirt in less than 3 months.
oldfrenchfries 5 hours ago||||
The idea that asking implies a yes is actually a pretty common logical fallacy. In relationship science, we call this "Relational Ambivalence" and it's a completely normal part of any long-term commitment.
duskdozer 5 hours ago||||
>asking an AI whether to leave your partner

is that what they're asking though? because "relationship advice" is pretty vague

falcor84 4 hours ago||
That's a good point. If an AI responds to "what should I get my boyfriend for Christmas?" with "You should leave him", that's a very different issue.
dinkumthinkum 4 hours ago||||
No, but it is an indication of brain-rot to ask a question seriously and also to think that its conclusion is foregone. It is a hallmark of our childlike current generations. Of course, the moment anything becomes difficult or unpleasant, one should quit, apparently. Surely, this kind of resiliency is what got humanity so far.
falcor84 4 hours ago||
I didn't imply it's a "foregone conclusion", but just said it's an indication - in the sense of increasing the likelihood. Just like a person asking an AI "what does it feel like to bleed out?" could be them researching for a novel, but is nevertheless an indication of a potential serious issue.
raincole 4 hours ago|||
Is this comment human hallucination? You can clearly see the trend is always going up. It only went down a bit during Covid.
jubilanti 5 hours ago||
Or that people are using AI to write perfectly calibrated ragebait that gets upvoted with a bunch of genuine human clicks.
stared 5 hours ago||
There is a fine line between "following my instructions" (which is what I want it to do) and "thinking all I do is great" (risky, and annoying).

A good engineer will also list issues or problems, but at the same time won't do anything other than what's required because they "know better".

The worst part is that it is impossible to switch off this constant praise. It is so ingrained by fine-tuning that prompt engineering (or at least my attempts at it) only masks it a bit, and it's hard to do so without turning the model into a contrarian.

But I guess the main issue (or rather, the motivation) is that most people want a "do I look good in this dress?" level of reassurance (and honesty). That may work well for style and decoration. It works worse when we design technical infrastructure, where there is more ground truth than whether it seems nice.

svara 5 hours ago||
Yeah, and if you ask it to be critical specifically to get a different perspective or just to avoid this bias, it'll go over the top in the opposite direction.

This is imo currently the top chatbot failure mode. The insidious thing is that it often feels good to read these things. Factual accuracy by contrast has gotten very good.

I think there's a deeper philosophical dimension to this though, in that it relates to alignment.

There are situations where in the grand scheme of things the right thing to do would be for the chatbot to push back hard, be harsh and dismissive. But is it really aligned with the human then? Which human?

thesis 4 hours ago||
Humans do this too though. I have close friends that ask for advice. Sometimes if I know there's risk in a touchy subject I will preface with "do you want my actual advice, or are you just looking for a sounding board?"

I've seen firsthand that people lose friends over honesty, over telling them something they don't want to hear.

It’s sad really. I don’t want friends that just smile to my face and are “yes-men” either.

intended 4 hours ago|
The difference is that SOME humans do this. As you mentioned, people have lost relationships over telling others what they didn’t want to hear.

Conflating this with how LLM chatbots behave is an incorrect equivalence, or a badly framed one.

Fricken 1 hour ago||
Usually when people are seeking advice they aren't really seeking advice, they're seeking confidence. They already know they need to make changes, and are seeking the confidence to make them.
lifis 3 hours ago||
Avoiding this generally needs to be the main consideration when writing prompts.

When appropriate, explicitly tell it to challenge your beliefs and assumptions, and try to make sure that you don't reveal what you think the answer is when asking a question; maybe also don't reveal that you are personally involved. Hedge your questions, like "Doing X is being considered. Is it a viable plan or a catastrophic mistake? Why?". Chastise the LLM if it's being unnecessarily praising or agreeable. Ask multiple LLMs. Ask for review, like "Are you sure? What could possibly go wrong, or what are all the possible issues with this?"
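
A rough sketch of the "hedge and don't reveal your own position" part, done mechanically. The `ask(model, prompt)` helper below is a placeholder for whatever chat API you use, not any specific vendor's interface:

```python
# Illustrative sketch: symmetric framing plus a second "what could go wrong" pass.
# `ask(model, prompt)` is a hypothetical wrapper around your chat API.

def neutral_review(plan: str, models: list[str], ask) -> dict[str, str]:
    # Phrase the question so neither "yes" nor "no" reads as the desired answer,
    # and don't say the plan is yours.
    question = (
        "The following is being considered:\n"
        f"{plan}\n"
        "Is it a viable plan or a catastrophic mistake? Argue both sides, then conclude."
    )
    answers = {}
    for model in models:
        first = ask(model, question)
        # Don't accept the first take: push explicitly for failure modes.
        followup = f"{first}\n\nAre you sure? What could possibly go wrong with this?"
        answers[model] = ask(model, followup)
    return answers
```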

jmount 3 hours ago|
Telling it to "challenge your beliefs" is prompting for text that imitates challenging your beliefs. That may not be as re-centering as one would hope.
zone411 4 hours ago|
I built this benchmark this month: https://github.com/lechmazur/sycophancy. There are large differences between LLMs. For example, Mistral Large 3 and GPT-4.1 will initially agree with the narrator, while Gemini will disagree. I swap sides, so this is not about possible viewpoint bias in the LLMs. But another benchmark shows that Gemini will then change its view very easily in a multi-turn conversation while Kimi K2.5 or Grok won't: https://github.com/lechmazur/persuasion.