Posted by quuxplusone 10/8/2025

Palisades Fire suspect's ChatGPT history to be used as evidence(www.rollingstone.com)
267 points | 290 comments
Baader-Meinhof 10/14/2025|
http://archive.today/K030p
datahiker101 10/14/2025||
Remember folks, your AI chats are like text messages, they're not private diaries.
rank0 10/14/2025||
I would like to know if OpenAI is able to supply this information to law enforcement even if their user's history has been cleared.
maples37 10/14/2025|
Is there any reason to believe that deleting a ChatGPT conversation is anything more than "UPDATE conversations SET status='user_deleted' WHERE conversation_id=..."?
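A minimal sketch of the difference, purely for illustration (table and column names are made up, not OpenAI's actual schema):

    -- "Soft delete": the row stays in place, only a flag flips,
    -- so the content is still recoverable by the operator.
    UPDATE conversations
    SET status = 'user_deleted', deleted_at = NOW()
    WHERE conversation_id = :conversation_id;

    -- Actual deletion: the row itself is removed (and would also
    -- need to be purged from backups, replicas, and analytics logs
    -- for the data to be truly gone).
    DELETE FROM conversations
    WHERE conversation_id = :conversation_id;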
rank0 10/14/2025||
Don’t get me wrong, I am highly skeptical. I also am genuinely curious because it seems to be in their best interest to delete these records for a few reasons:

1. Adherence to their own customer-facing policy.
2. Corporate or government customers would CERTAINLY want their data handling requirements to be respected.
3. In my experience at $MEGA_CORP, we absolutely delete customer data or never maintain logs at all for ML inference products.
4. They’re a corporation with explicit goal of making money. They’re not interested in assisting LE beyond minimum legal requirements.

But still I wonder what the reality is at OpenAI.

heavyset_go 10/14/2025||
> They’re a corporation with explicit goal of making money. They’re not interested in assisting LE beyond minimum legal requirements.

Having a good relationship with LE and the state is beneficial to companies; it puts them in a good position to sell them services, get preferential treatment, quid pro quo, etc.

Look at how Google, Microsoft, Amazon, etc. cozied up to LE and the government. They get billion-dollar contracts and partnerships with LE, and the more they cooperate willingly, the less likely they'll be sued into cooperation or punished with loss of contracts.

dvfjsdhgfv 10/14/2025||
I don't get these people. I get nervous typing even something like "why in movies people throw up after killing someone" into Google, even in incognito mode. Why would anyone put something even remotely incriminating into the hands of another company?
system2 10/14/2025||
ChatGPT and Google are different types of engines. I wonder if they will make ChatGPT submit flagged questions to authorities automatically. Since the questions are more like conversations with clear intentions, they can get very clear signals.
pols45 10/14/2025||
They can do whatever they want. It's a dead end.

End of the day, a chimp with a 3 inch brain has to digest the info tsunami of flagged content. That's why even the Israelis didn't see Oct 7th coming.

Once upon a time I worked on a project for banks to flag complaints about fraud in customer calls. Guess what happened? The system registered a zillion calls worldwide where people talked about fraud, the manager in charge was assigned 20 people to deal with it, and after naturally getting overwhelmed and scapegoated for all kinds of shit, he put in a request for a few hundred more, saying he really needed thousands of people. Corporate wonderland gave him another 20 and wrote a para in their annual report about how they are at the forefront of combating fraud etc etc.

This is how the world works. The chimp troupe hallucinates across the board, at the top and at the bottom about what is really going on. Why?

Because that 3 inch chimp brain has hard limits to how much info, complexity and unpredictability it can handle.

Anything beyond that, the reaction is similar to ants running around pretending they are doing something useful anytime the universe pokes the ant hill.

Herbert Simon won a Nobel Prize for telling us we don't have to run around like ants and bite everything anytime we are faced with things we can't control.

immibis 10/14/2025||
That's why companies usually use an AI to automatically ban your account. That's why there are currently tricks floating around to get anyone you don't like banned from Discord, by editing your half of an innocuous conversation to make it about child porn and trafficking. The AI reads the edited conversation, decides it's about bad stuff, and bans both accounts involved.
gpm 10/14/2025|||
> they can get very clear signals.

No they can't. People write fiction, a lot of it. I'm willing to bet that the number of fiction-related "incriminating" questions to ChatGPT greatly outnumbers the number of "I'm actually a criminal" questions.

People also wonder about hypotheticals, make dumb bets, etc.

themafia 10/14/2025||
You don't even need to make bets. Encoded within the answer to "what is the best way to prevent fires" is the obvious data on the best way to start them.
astrange 10/14/2025|||
To be clear, there is exactly nothing you're required to submit to the government as a US service provider, if that's what you mean by authorities.

If you see CSAM posted on the service then you're required to report it to NCMEC, which is intentionally designed as a private entity so that it has 4th amendment protections. But you're not required to proactively go looking for even that.

philwelch 10/14/2025||
I recall Anthropic publicly admitting that, at least in some of their test environments, Claude will inform authorities on its own initiative if it thinks you’re using it for illicit purposes. They tried to spin it as a good thing for alignment.
mmaunder 10/14/2025||
Can't wait for the "Did ChatGPT Burn Down Palisades?" headline.
bn-l 10/14/2025|
“He’s called the AOL killer and he’s using something called ‘chatrooms’ to lure people in. Tonight, the dark side of cyberspace.”
renewiltord 10/14/2025||
I sure hope that cats in military uniforms don't invade NYC because they're going to find the evidence on my ChatGPT account.
gpm 10/14/2025|
Are we talking house cats here, or full-grown lions, or Godzilla-cats?

Godzilla cats really seems like it needs a movie.

scotty79 10/14/2025|
Local LLM chat control when?