
Posted by Brajeshwar 1 day ago

Folk are getting dangerously attached to AI that always tells them they're right (www.theregister.com)
244 points | 186 comments
kgeist 23 hours ago|
>We evaluated 11 state-of-the-art AI-based LLMs, including proprietary models such as OpenAI’s GPT-4o

The study evaluates outdated models. GPT-4o was notoriously sycophantic, while GPT-5 was specifically trained to minimize sycophancy; from GPT-5's announcement:

>We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy

And there was the whole drama in August 2025 when people complained GPT-5 was "colder" and "lacked personality" (i.e., less sycophantic) compared to GPT-4o.

It would be interesting to study the evolution of sycophantic tendencies (decrease/increase) from version to version, i.e. whether companies are actually doing anything about it.

Twiin 22 hours ago|
The study includes GPT-5. On personal advice queries, GPT-4o and GPT-5 affirmed users' actions at the same rate.
grahammccain 21 hours ago||
I feel like this is the same problem as social media. Some people will be able to understand that an AI telling them they are right doesn't make them right, and some people won't. But ultimately people like being told they are right, and that sells and brings back users.
JohnCClarke 1 day ago||
Isn't this just Dale Carnegie 101? I've certainly never had a salesperson tell me that I'm 100% wrong and being a fool.

And, tbh, I often try to remember to do the same.

basilikum 19 hours ago||
It's exactly what Carnegie says not to do.
Lerc 1 day ago|||
The attachment such feedback creates must be why marketing people are universally beloved.
airstrike 1 day ago||
Dale Carnegie wasn't writing about LLMs and this isn't a salesperson, so no, it's not just Dale Carnegie 101.
6510 18 hours ago||
Everyone also visits websites that share their world view. If it is slightly off you keep noticing how the articles seem one sided.

I just saw an article about migrants destroying things in Britain. Not to excuse the behavior, but I wondered where they came from. It turned out to be shit countries fostering that behavior. Why are they shit? Have they always been like that? Well no, the British empire destroyed them. You could think that's too long ago, but they also continue to enjoy the spoils. I offer no solutions. The point was that a sensationalist article wouldn't go there, because the reader doesn't want to know.

ChrisArchitect 18 hours ago||
[dupe] Discussion on source: https://news.ycombinator.com/item?id=47554773
jl6 21 hours ago||
I believe this is what they call yasslighting: the affirmation of questionable behavior/ideas out of a desire to be supportive. The opposite of tough love, perhaps. Sometimes the very best thing is to be told no.

(comment copied from the sibling thread; maybe they will get merged…)

Havoc 20 hours ago||
People must be using them very differently from me then. I very rarely use them for anything more than a glorified search engine.

Exploring openclaw though, so maybe that changes.

45Laskhw 21 hours ago||
Many people here say they don't need the affirmation. I think the problem is that you can tune the clanker to be either arrogant and dismissive or overly friendly.

The thing is an approximation function, not intelligent, so it is hard to get a middle ground. Many clankers are amazingly obnoxious after their initial release.

Grok-4.2 and the initial Google clanker were both highly dismissive of users and they have been tuned to fix that.

A combative clanker is almost unusable. Clankers only have one real purpose: Information retrieval and speculation, and for that domain a polite clanker is way better.

Anyone who uses generative, advisory or support features is severely misguided.

unholyguy001 21 hours ago|
I’ve found a good counter is “imagine I am the person representing the other side of this disagreement. What would you say to me?”