
Posted by oldfrenchfries 13 hours ago

AI overly affirms users asking for personal advice(news.stanford.edu)
https://arxiv.org/abs/2602.14270

https://www.science.org/doi/10.1126/science.aec8352

528 points | 408 comments
oldfrenchfries 13 hours ago|
This new Stanford study, published on March 26, 2026, shows that AI models are sycophantic: they affirm the user's position 49% more often than a human would.

The researchers found that when people use AI for relationship advice, they become 25% more convinced they are 'right' and significantly less likely to apologize or repair the connection.

jatins 12 hours ago|
To be fair, an average therapist is also pretty sycophantic. "The worst person you know is being told by their therapist that they did the right thing" is a bit of a meme, but it isn't completely false in my experience.
kibwen 12 hours ago|||
No, the meme is that the average therapist can be boiled down to "well, what do you think?" or "and how does that make you feel?" (of which ELIZA, the early chatbot that famously fooled some of its users, was perhaps an unintentional parody). Even this cartoonish characterization demonstrates that the function of therapists is to get you to question yourself so that you can attempt to reframe and re-evaluate your ways of thinking, in a roughly Socratic fashion.
toraway 10 hours ago||
It was entirely intentional. The Rogerian school of psychotherapy stereotyped by “how does that make you feel” was popular at the time and the most popular ELIZA script used that persona to cleverly redirect focus from the bot’s weaknesses in comprehension.
megous 12 hours ago||
Can't you just prompt for a critical take, multiple alternative perspectives (specifically not yours, after describing your own), etc.?

It's a tool, I can bang my hand on purpose with a hammer, too.
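A minimal sketch of what this looks like in practice, assuming a chat-style message API (the system prompt wording here is my own illustration, not from the study):

```python
# Sketch: wrap a question so the model is steered away from simple affirmation.
# The wording of CRITICAL_SYSTEM_PROMPT is illustrative.

CRITICAL_SYSTEM_PROMPT = (
    "Do not simply agree with me. First steelman the other party's view, "
    "then give at least two perspectives that differ from mine, "
    "and only then assess my own position, including where I may be wrong."
)

def build_messages(user_situation: str) -> list[dict]:
    """Build a chat-API-style message list that requests critical takes."""
    return [
        {"role": "system", "content": CRITICAL_SYSTEM_PROMPT},
        {"role": "user", "content": user_situation},
    ]

msgs = build_messages("I think my friend was wrong to cancel on me again.")
```

The point being: the critical framing has to be supplied up front, before the model answers, rather than hoping the default persona pushes back.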

ranger_danger 12 hours ago|
Yes, if you're smart. But most people asking it random questions and expecting it to read their minds and spit out the perfect answer are not. They don't know what a prompt is, and wouldn't bother to give it prior instructions either way.
joquarky 4 hours ago|||
I think that the type of people who can easily pick up subtext have come to rely on that channel of communication and don't realize they need to be more direct and verbose when chatting with language models.
megous 8 hours ago|||
Educated, not smart. This is a job for schools: include AI education in the basic curriculum. Their pupils will use the tools anyway, so at least teach them to do it with proper expectations and an awareness of prompting techniques and pitfalls.
oh_my_goodness 10 hours ago||
Sky found to be blue
nlawalker 11 hours ago||
Relevant article from The Atlantic a couple weeks ago, "Friendship, On Demand": https://www.theatlantic.com/family/2026/03/ai-friendship-cha... (gift link)

>The way that generative AI tends to be trained, experts told me, is focused on the individual user and the short term. In one-on-one interactions, humans rate the AI’s responses based on what they prefer, and “humans are not immune to flattery,” as Hansen put it. But designing AI around what users find pleasing in a brief interaction ignores the context many people will use it in: an ongoing exchange. Long-term relationships are about more than seeking just momentary pleasure—they require compromise, effort, and, sometimes, telling hard truths. AI also deals with each user in isolation, ignorant of the broader social web that every person is a part of, which makes a friendship with it more individualistic than one with a human who can converse in a group with you and see you interact with others out in the world.

I also thought this bit was interesting, relative to the way that friendship advice from Reddit and elsewhere has been trending towards self-centeredness (discussed elsewhere in this thread):

>Friendship is particularly vulnerable to the alienating force of hyper-individualism. It is the most voluntary relationship, held together primarily by choice rather than by blood or law. So as people have withdrawn from relationships in favor of time alone, friendship has taken the biggest hit. The idea of obligation, of sacrificing your own interests for the sake of a relationship, tends to be less common in friendship than it is among family or between romantic partners. The extreme ways in which some people talk about friendship these days imply that you should ask not what you can do for your friendship, but rather what your friendship can do for you. Creators on TikTok sing the praises of “low maintenance friendships.” Popular advice in articles, on social media, or even from therapists suggests that if a friendship isn’t “serving you” anymore, then you should end it. “A lot of people are like I want friends, but I want them on my terms,” William Chopik, who runs the Close Relationships Lab at Michigan State University, told me. “There is this weird selfishness about some ways that people make friends.”

oldfrenchfries 10 hours ago|
The link is not working, but I found it myself. Great point, thanks for sharing.
potatoskins 11 hours ago||
Yeah, I asked Gemini for some relationship advice, and it went straight into cut-throat mode. I almost broke up with my girlfriend, but then switched to Claude with another prompt.
barnacs 11 hours ago||
Just a reminder: LLMs are statistical models that predict the next token based on the preceding tokens. They have no feelings, goals, relationships, life experience, or understanding of the human condition. Treat them accordingly.
verdverm 9 hours ago||
Sherry Turkle is a name to know on this subject, she's been studying it for decades across multiple technologies.

https://sherryturkle.mit.edu/

She uses the phrase "frictionless relationships" to refer to AI chatbots and says social media primed us for this.

https://www.youtube.com/live/6C9Gb3rVMTg?t=2127

https://www.npr.org/2025/07/18/g-s1177-78041/what-to-do-when...

tom-blk 12 hours ago||
Not surprising, but nice that we have actual data now
ChicagoDave 10 hours ago||
Not my experience with Claude. Claude will kick your ass if it detects harmful rationalizations.

Basically will tell you to go outside and touch grass and play pickleball.

intended 10 hours ago|
Anecdote:

I used to use LLMs for alternate perspectives on personal situations, and for insights on my emotions and thoughts.

I had no qualms, since I could easily disregard the obviously sycophantic output, and focus on the useful perspective.

This stopped the day I got a really eerie piece of output. I realized I couldn't tell whether the output was genuinely affirming, or simply what I wanted to hear.

That moment, seeing something innocuous but somehow still beyond my ability to gauge as helpful or harmful, is going to stick with me for a while.

suoer 7 hours ago|
[dead]