
Posted by oldfrenchfries 9 hours ago

AI overly affirms users asking for personal advice (news.stanford.edu)
https://arxiv.org/abs/2602.14270

https://www.science.org/doi/10.1126/science.aec8352

464 points | 360 comments
bilsbie 5 hours ago|
Has anyone found a good prompt to fix this? It seems like a subtle problem because it’s 90% too agreeable but will sometimes get really stubborn.
verdverm 5 hours ago|
There is no prompt that fully fixes it, because this behavior is trained into them during the mid-to-late training phases. It's ingrained in the weights.
graemep 8 hours ago||
There are plenty of sycophantic humans around, especially with regard to relationship advice.

I find there is an inverse relationship between how willing people are to give relationship advice, and how good their advice is (whether looking at sycophancy or other factors).

griffzhowl 8 hours ago||
Because sycophancy in humans is motivated not by the wellbeing of the person seeking advice, but by the interests of the sycophant in gaining favour.

It makes sense that this behaviour would be seen in LLMs, where the company optimizes for the success of the chatbot rather than the wellbeing of the users.

xhkkffbf 8 hours ago||
Yup. I know too many people who have a default message when asked for relationship advice: oh, my, the other person is terrible and you should break up.

It's an easy default and it causes so many problems.

chasd00 4 hours ago||
AI being the ultimate yes-man is probably why CEOs like it so much.
astennumero 7 hours ago||
I always add the following at the end of every prompt: "Be realistic and do not be sycophantic." Which always takes the conversation into brutally dark corners and a panic-inducing negative direction.
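Roughly what that looks like in code. A loose sketch assuming the OpenAI Python client; the model name and wrapper function are just placeholders, not anything from the article:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ANTI_SYCOPHANCY = "Be realistic and do not be sycophantic."

def ask(prompt: str, model: str = "gpt-4o") -> str:
    # Append the anti-sycophancy instruction to every prompt, as described above.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"{prompt}\n\n{ANTI_SYCOPHANCY}"}],
    )
    return response.choices[0].message.content
```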
Lionga 7 hours ago|
Don't forget a good old "don't hallucinate" in your proompting skills
markdog12 6 hours ago||
"AI overly affirms users, and that's bad" - everyone nods. "Modern society overly affirms people, and that's bad" - ....
hax0ron3 7 hours ago||
For what it's worth, that wasn't my experience at all the last time I consulted ChatGPT for relationship advice. It was supportive, but in an honest tough love way.
kapral18 7 hours ago||
Not AI chatbots in general, but Claude models specifically. Pandering and rushed thinking are the bane of Anthropic models. And since they are the most popular ones, they poison the whole ecosystem.
ookblah 6 hours ago||
ask ai for advice, ask it to steelman an argument, ask it to replay your situation from the other person's perspective (if it involves people), push it hard to agree with you and pander to you, then push it to disagree with you, etc. (see the sketch below)

once you have all the "bounds" just make your own decision. i find this helps a lot, basically like a rubber duck heh.
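a loose sketch of that "bounds" loop in python; ask is a stand-in for whatever chat call you already use (a hypothetical helper, not a real API):

```python
# Framings that push the model in different directions around the same situation.
FRAMINGS = [
    "Give me your honest take on: {s}",
    "Steelman the strongest case against your own take on: {s}",
    "Replay this situation entirely from the other person's perspective: {s}",
    "Argue as hard as you can that I'm in the right about: {s}",
    "Now argue as hard as you can that I'm in the wrong about: {s}",
]

def collect_bounds(ask, situation: str) -> list[str]:
    # Run the same situation through each framing; the answers bracket the
    # space of takes, and the final decision stays with you.
    return [ask(f.format(s=situation)) for f in FRAMINGS]
```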

deeg 8 hours ago||
I do find them cloying at times. I was using Gemini to iterate on a script, and every time I asked it to make a change it started its response with "That's a smart final step for this task! ...".
storus 6 hours ago|
To combat sycophancy, it's always good to ask for the devil's advocate view of whatever the conversation was about at the end.