
Posted by Brajeshwar 1 day ago

Folk are getting dangerously attached to AI that always tells them they're right(www.theregister.com)
275 points | 216 comments
tempodox 19 hours ago|
Of course this is intentional. The providers want to make their stuff as addictive as possible, like so much other digital crack sold on the internet.
Havoc 21 hours ago||
People must be using them very differently from me, then. I very rarely use them for anything more than a glorified search engine.

Exploring openclaw though, so maybe that changes.

spl757 10 hours ago||
People have a hard time not being stupid.
unholyguy001 22 hours ago||
I’ve found a good counter is: “Imagine I am the person representing the other side of this disagreement. What would you say to me?”
imglorp 22 hours ago||
Is there a good prompt addition to skip all the gratuitous affirmation and tell me when I'm wrong?
gdulli 20 hours ago|
It doesn't know when you're wrong! Pretend I'm shaking you by your shoulders as I'm saying this, because it's really important to understand!
FromTheFirstIn 20 hours ago||
And it can NEVER know when you’re wrong!
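For the prompt-addition question in this sub-thread, the usual approach is a system message that explicitly forbids flattery and asks for direct disagreement. A minimal sketch in Python, assuming a chat-style messages API; the wording is illustrative and its effectiveness varies by model, since no prompt makes a model actually know when you are wrong:

```python
# Illustrative anti-sycophancy instruction, assembled as a messages list
# in the common {"role", "content"} chat format. The prompt text is an
# assumption, not a tested recipe for any particular model.
ANTI_SYCOPHANCY = (
    "Do not open with praise or agreement. "
    "If my claim is wrong or unsupported, say so directly and explain why. "
    "If you are uncertain, state your uncertainty instead of affirming me. "
    "Never use phrases like 'Great question' or 'You're absolutely right'."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the anti-sycophancy instruction as a system message."""
    return [
        {"role": "system", "content": ANTI_SYCOPHANCY},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Is my plan to rewrite everything in one weekend sound?")
print(messages[0]["role"])  # system
```

The resulting list can be passed to whichever client library you use; the system message is the only part that changes the model's default tone.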
AbrahamParangi 1 day ago||
AI is less deranging than partisan news and social media, measurably so according to a recent study https://www.ft.com/content/3880176e-d3ac-4311-9052-fdfeaed56...
ycombinator_acc 14 hours ago||
Am I the only one with the opposite experience? I can’t remember the last time GPT told me I was right. It always finds something to nitpick (sometimes wrongly).

Maybe it’s OpenAI being aware of the “attachment” issue and combating it by overcompensating in the opposite direction.

zone411 1 day ago||
I built two related benchmarks this month: https://github.com/lechmazur/sycophancy and https://github.com/lechmazur/persuasion. There are large differences between LLMs. For example, good luck getting Grok to change its view, while Gemini 3.1 Pro will usually disagree with the narrator at first but then change its position very easily when pushed.
jasonlotito 1 day ago||
Krafton's CEO found out the hard way that relying on AI is dumb, too. I think it's always helpful to remind people that just because someone has found success doesn't mean they're exceptionally smart. Luck is what happens when a lack of ethics and a nat 20 meet.

https://courts.delaware.gov/Opinions/Download.aspx?id=392880

> Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”

allpratik 20 hours ago|
The other side of the story is that AI can subtly justify almost any thought, even one that is ethically in a grey area.

I fear this will give people license to act on thoughts that may be harmful to them in ways no one can even imagine right now.
