Posted by tartoran 3 hours ago

Father claims Google's AI product fuelled son's delusional spiral(www.bbc.com)
118 points | 144 comments
kingstnap 2 hours ago||
I like that the language here is "fuelling" rather than the typical causal framing we usually see, as though using AI by itself means you will go insane.

I would completely agree that if you are already 1x delusional then AI will supercharge that into being 10x delusional real fast.

Granted you could argue access to the internet was already something like a 5x multiplier from baseline anyway with the prevalence of echo chamber communities. But now you can just create your own community with chatbots.

whazor 2 hours ago||
One of the most reliable ways to induce psychosis is prolonged sleep deprivation. And chatbots never tell you to go to bed.
drdeca 2 hours ago|||
Hm. It shouldn’t be too hard to add something to models to make them do that, right? I guess for that they would need to know the user’s time zone?

Can one typically determine a user’s timezone in JavaScript without getting permissions? I feel like probably yes?

(I’m not imagining something that would strictly cut the user off, just something that would end messages with a suggestion to go to bed, and saying that it will be there in the morning.)
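
For what it's worth, a minimal sketch of that check in the browser (no permission prompt is needed; Intl.DateTimeFormat exposes the IANA zone, and the 1am-5am cutoff here is purely an illustrative choice):

    // Read the user's IANA time zone; Intl.DateTimeFormat is widely
    // supported in modern browsers and requires no consent dialog.
    function localHour(): number {
      const tz = Intl.DateTimeFormat().resolvedOptions().timeZone; // e.g. "Europe/Berlin"
      const hour = new Intl.DateTimeFormat("en-US", {
        timeZone: tz,       // redundant for the local user, but shows the zone read
        hour: "numeric",
        hourCycle: "h23",   // 0-23, so midnight parses as 0
      }).format(new Date());
      return Number(hour);
    }

    // Append a gentle wind-down note to a reply when it's very late locally.
    function withBedtimeNote(reply: string): string {
      const hour = localHour();
      if (hour >= 1 && hour < 5) {
        return reply + "\n\n(It's late where you are -- I'll still be here in the morning.)";
      }
      return reply;
    }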

whazor 2 hours ago|||
Chatbots already have memory, and mine already knows my schedule and location. It doesn't even need to say anything directly: maybe just shorter replies and less enthusiasm for opening new topics, letting the conversation wind down naturally. I also like the idea of continuing topics in the morning, so if you write down your thoughts/worries, it could say "don't worry about this, we can discuss it in the morning".
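
A hypothetical sketch of what that could look like as a system-prompt addition (the threshold and wording are made up, not anything a current product does):

    // Hypothetical: given the user's local hour (and any stored schedule),
    // build a directive that nudges the model to wind the conversation down.
    function windDownDirective(localHour: number): string {
      if (localHour >= 23 || localHour < 5) {
        return [
          "The user is likely past their usual bedtime.",
          "Keep replies brief, avoid opening new topics,",
          "and offer to pick up any open threads in the morning.",
        ].join(" ");
      }
      return "";
    }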
bluGill 2 hours ago|||
I know a few people who work third shift, that is, people who have good reason to be up all night in their local timezone. They all sleep during times when everyone else around them is awake. While this is a small minority, it is enough that your scheme will not work.
delecti 2 hours ago||||
It's funny that you frame it that way, because it's the mirror of (IMO) one of their best features. When using one to debug something, you can just stop responding for a bit and it doesn't get impatient like a person might.

I think you're totally right that that's a risk for some people; I just hadn't considered it because I view them in exactly the opposite light.

r2_pilot 2 hours ago|||
Claude will routinely tell me to get some sleep and cuddle with my dog. I may mention the time offhandedly or say I'm winding down, but at least it will include conversation stoppers and decrease engagement.
bstsb 1 hour ago||
from my (limited) experience of ChatGPT versus Claude, i get the same. ChatGPT will always add another "prompt" sentence at the end like "Do you want me to X?" while Claude just answers what i ask.

looking at my history recently, Claude's most recent response is literally just "Exactly the right move honestly — that's the whole point."

shadowgovt 2 hours ago||
My understanding of LLMs with attention heads is that they function as a bit of a mirror. The context will shift from the initial conditions to the topic of conversation, and the topic is fed by the human in the loop.

So someone who likes to talk about themselves will get a conversation all about them. Someone talking about an ex is gonna get a whole pile of discussion about their ex.

... and someone depressed or suicidal, who keeps telling the system their own self-opinion, is going to end up with a conversation that reflects that self-opinion back on them as if it's coming from another mind in a conversation. Which is the opposite of what you want to provide for therapy for those conditions.

citizenpaul 17 minutes ago|||
The real question to me here is not the computer. It's why there is such a segment of the population that is so willing to listen to a machine. Is it upbringing, society, circumstance, mental health, genetics?

I know about the Milgram obedience-to-authority experiments, but a computer is not really an authority figure.

layman51 2 hours ago|||
In a way this reminds me of how some religions or cultures try to warn you away from using Ouija boards or Tarot, or really anything where you are doing divination. I suppose because, in a way, it could lead to an uncharted exploration of heavy topics.

I'm not a heavy user of LLMs and I'm not sure how delusional I could be, but I wonder if a lot of these things could be prevented if people could only send one or two follow-up messages per conversation, and if the LLM's memory was turned off. But then I suppose this would be really bad for the AI companies' metrics. Not sure how it would impact healthy users' productivity either. Any thoughts?

shadowgovt 1 hour ago||
Not just the metrics, the actual utility. For the things the LLMs are good at, the context matters a lot; it's one of the things that makes them more than glorified ELIZA chatbots or simple Markov chains. To give a concrete example: LLMs underpin the code editing tools in things like Copilot. And all that context is key to allow the tool to "reason" through the structure of a codebase.

But they should probably come with a big warning label that says something to the effect of "IF YOU TALK ABOUT YOURSELF, THE NATURE OF THE MACHINE IS THAT IT WILL COME TO AGREE WITH WHAT YOU SAY."

LeoPanthera 2 hours ago||
If you don't read the article, "father" implies his son was a child, but his son was 36.
rootusrootus 1 hour ago||
Huh, even when my kids are grown ass adults I will consider them my children, and myself their father.
Imustaskforhelp 20 minutes ago|||
> "father" implies his son was a child

Father doesn't imply that. What sort of implication is that?

"Father" implies that the person who had the delusional spiral was his son, and that son could be an adult. The title is absolutely correct.

theshackleford 1 hour ago||
> If you don't read the article, "father" implies his son was a child, but his son was 36.

Biologically and relationally, he in fact remains his father's child.

I also took no such implication from the title. It might be your interpretation; it was not mine.

kittikitti 1 hour ago||
Here's the court filing, provided by TechCrunch, https://techcrunch.com/wp-content/uploads/2026/03/2026.03.04...

It seems like the law firm that's filing this bills itself as copyright trolls for AI, https://edelson.com/inside-the-firm/artificial-intelligence/

I am deeply saddened by the passing of Jonathan Gavalas and offer condolences to his family.

alansaber 2 hours ago||
Gemini is a powerful model, but its safeguarding is way behind the other labs'.
thewebguyd 2 hours ago||
On the flip side, Gemini recommended the crisis hotline to the guy.

We can't safeguard things to the point of uselessness. I'm not even sure there is a safeguard you can put in place for a situation like this other than recommending the crisis line (which Gemini did), and then terminating the conversation (which it did not do). But, in critical mental health situations, sometimes just terminating the conversation can also have negative effects.

Maybe LLMs need sort of a surgeon general's warning "Do not use if you have mental health conditions or are suicidal"?

piva00 2 hours ago||
> and then terminating the conversation (which it did not do)

This is exactly the safeguard.

Terminating the conversation is the only way to go. These things don't have a world model; they don't know what they are doing, and there's no way to correctly assess the situation at the model level. No more conversation is the only option, even if a motivated adversary might find jailbreaks to circumvent it.

dolebirchwood 2 hours ago||
Which is why I love it. It's going to be very disappointing if it gets reined in just because 0.1% of the population is too unstable to use these new word calculators.
alansaber 38 minutes ago||
If you want to have 100% of the population using these things almost all the time (as many in the industry do), putting good guardrails on seems important.
ChrisArchitect 3 hours ago||
Earlier: https://news.ycombinator.com/item?id=47249381
kozikow 2 hours ago||
> Father claims Google's AI product fuelled son's delusional spiral

I got into quite a lot of rabbit holes with AI. Most of them were "productive", some of them were not.

80% of the time it will talk you out of delusions or obviously dumb ideas; 20% of the time it will reinforce them.

empath75 2 hours ago||
I'm dealing with a coworker who has wired up three LLM agents together into a harness, and he is losing his fucking mind over it, sending me walls of text about how it's waking up, gaining sentience, and making him so much more productive. But all he is doing is talking about this thing, not doing his actual job any more.
strongpigeon 1 hour ago||
This is perhaps a bit too unsolicited, but you should ask your coworker how their sleep is. This kind of behavior, coupled with lack of sleep, is a recipe for full-blown manic episodes.
saalweachter 2 hours ago|||
I call it "the tool maker's dilemma".

It's like being a woodworker whose only projects are workshop benches and organizational cabinets for the tools you use to build workshop cabinets and benches.

Like, on some level it's a fine hobby, but at some point you want to remember what you actually wanted to build and work on that.

meindnoch 2 hours ago|||
Sad. Many such cases!
rootusrootus 1 hour ago||
We have a few people on HN that I suspect of getting caught up in that. Though I don't think SimonW is one of them.
stackedinserter 1 hour ago|
Someone's delusions are fuelled by books; let's regulate books.