
Posted by quuxplusone 6 days ago

Palisades Fire suspect's ChatGPT history to be used as evidence(www.rollingstone.com)
226 points | 215 comments
datahiker101 11 hours ago|
Remember folks, your AI chats are like text messages, they're not private diaries.
rank0 9 hours ago||
I would like to know if OpenAI is able to supply this information to law enforcement even if their user's history has been cleared.
maples37 9 hours ago|
Is there any reason to believe that deleting a ChatGPT conversation is anything more than "UPDATE conversations SET status='user_deleted' WHERE conversation_id=..."?
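(For readers unfamiliar with the pattern the comment above is alluding to, here is a minimal sketch of the difference between a "soft delete" of that kind and a real delete. The table name and columns are hypothetical, borrowed from the comment; nothing here is claimed about OpenAI's actual schema.)

```python
# Soft delete vs. hard delete, illustrated with an in-memory SQLite database.
# Table/column names are made up to mirror the comment's hypothetical query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE conversations "
    "(conversation_id INTEGER PRIMARY KEY, body TEXT, status TEXT)"
)
conn.execute(
    "INSERT INTO conversations VALUES (1, 'some chat transcript', 'active')"
)

# "Soft delete": the row is only flagged, so the UI can hide it,
# but the text is still sitting in the database for anyone with access.
conn.execute(
    "UPDATE conversations SET status='user_deleted' WHERE conversation_id=1"
)
row = conn.execute(
    "SELECT body FROM conversations WHERE conversation_id=1"
).fetchone()
assert row is not None  # the transcript survives the "deletion"

# Hard delete: the row is actually removed.
conn.execute("DELETE FROM conversations WHERE conversation_id=1")
row = conn.execute(
    "SELECT body FROM conversations WHERE conversation_id=1"
).fetchone()
assert row is None  # now it is gone (modulo backups, WAL, replicas, etc.)
```

Even a hard DELETE says nothing about backups, replication logs, or analytics pipelines, which is why "deleted" in a product UI is a policy claim, not a storage guarantee.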
rank0 8 hours ago||
Don’t get me wrong, I am highly skeptical. I also am genuinely curious because it seems to be in their best interest to delete these records for a few reasons:

1. Adherence to their own customer-facing policy.

2. Corporate or government customers would CERTAINLY want their data handling requirements to be respected.

3. In my experience at $MEGA_CORP, we absolutely delete customer data or never maintain logs at all for ML inference products.

4. They’re a corporation with explicit goal of making money. They’re not interested in assisting LE beyond minimum legal requirements.

But still I wonder what the reality is at OpenAI.

heavyset_go 3 hours ago||
> They’re a corporation with explicit goal of making money. They’re not interested in assisting LE beyond minimum legal requirements.

Having a good relationship with LE and the state is beneficial to companies: it puts them in a good position to sell them services, get preferential treatment, quid pro quo, etc.

Look at how Google, Microsoft, Amazon, etc. cozied up to LE and the government. They get billion-dollar contracts and partnerships with LE, and the more they cooperate willingly, the less likely they'll be sued into cooperation or punished with loss of contracts.

thaumasiotes 19 hours ago||
Hmmm.

I have a "saved" history in Google Gemini. The reason I put "saved" in scare quotes is that Google feels free to change the parts of that history that were supplied by Gemini. They no longer match my external records of what was said.

Does ChatGPT do the same thing? I'd be queasy about relying on this as evidence.

kevin_thibedeau 2 hours ago||
ChatGPT will generate output and immediately censor it if some oversight code deems it problematic. Ask about court decisions in sex crime cases without being creepy and you can see it in action.
x______________ 18 hours ago||
Could you post some details about this or make a write-up? I'd be interested in reading more about this.
thaumasiotes 18 hours ago||
I'm not sure what details would add. What happened:

1. I engaged with Gemini.

2. I found the results wanting, and pasted them into comment threads elsewhere on the internet, observing that they tended to support the common criticism of LLMs as being "meaning-blind".

3. Later, I went back and viewed the "history" of my "saved" session.

4. My prompts were not changed, but the responses from Gemini were different. Because of the comment threads, it was easy for me to verify that I was remembering the original exchange correctly and Google was indulging in some revision of history.

crazygringo 9 hours ago|||
If this is verifiably true, you should contact a journalist. Meaning if it's still in your Gemini history and the comments you posted are still up.

This would be a major tech news story. "Google LLM rewriting user history" would be a scandal. And since online evidence is used in court, it could have significant legal implications. You'd be helping people.

This is much too important to merely be a comment on HN.

IIAOPSW 9 hours ago||||
The fact that this happened and that you have evidence of it make it enormously interesting even if the actual substance of the prompts and the response are mundane as hell. Please post.
kshacker 12 hours ago||||
Not trying to excuse Google, but I wonder why that happens. I have had my own issues with ChatGPT memory, but that's more like it forgets the context and spits out gibberish at a later invocation, counter to what it said earlier in the thread. But that's because it is buggy.

Rewriting history requires compute, which is more malicious. Why would someone burn compute to rewrite your stuff, given that rewrites are not free? Once again, not defending Google; just trying to think through what's going on.

jonbiggums22 10 hours ago|||
Maybe they use some kind of response caching to save resources, and the original pointer is now pointing to a newer response to the same question? It would still be an insane way to handle a history log, unless they're trying to memory-hole previous instances of poor performance or wrongthink.
thaumasiotes 12 hours ago|||
My best guess is that when they changed the model backing "Gemini" they regenerated the conversations.

I can't think of any reason it would make sense to do that, though.

mmaunder 19 hours ago||
Can't wait for the "Did ChatGPT Burn Down Palisades?" headline.
bn-l 14 hours ago|
“He’s called the AOL killer and he’s using something called ‘chatrooms’ to lure people in. Tonight, the dark side of cyberspace.”
dvfjsdhgfv 15 hours ago||
I don't get these people. I get nervous to type even something like "why in movies people throw up after killing someone" in Google, even in incognito mode. Why would anyone put something even remotely incriminating into the hands of another company?
renewiltord 20 hours ago||
I sure hope that cats in military uniforms don't invade NYC because they're going to find the evidence on my ChatGPT account.
gpm 20 hours ago|
Are we talking house cats here, or full grown lions, or gozilla-cats?

Godzilla cats really seems like it needs a movie.

system2 20 hours ago||
ChatGPT and Google are different types of engines. I wonder if they will make ChatGPT submit flagged questions to authorities automatically. Since the questions are more like conversations with clear intentions, they can get very clear signals.
pols45 18 hours ago||
They can do whatever they want. It's a dead end.

End of the day, a chimp with a 3 inch brain has to digest the info tsunami of flagged content. That's why even the Israelis didn't see Oct 7th coming.

Once upon a time I worked on a project for banks to flag complaints about fraud in customer calls. Guess what happened? The system registered a zillion calls worldwide where people talked about fraud. The manager in charge was assigned 20 people to deal with it, and after naturally getting overwhelmed and scapegoated for all kinds of shit, he put in a request for a few hundred more, saying he really needed thousands of people. Corporate wonderland gave him another 20 and wrote a paragraph in their annual report about how they are at the forefront of combatting fraud, etc.

This is how the world works. The chimp troupe hallucinates across the board, at the top and at the bottom about what is really going on. Why?

Because that 3 inch chimp brain has hard limits to how much info, complexity and unpredictability it can handle.

Anything beyond that, the reaction is similar to ants running around pretending they are doing something useful anytime the universe pokes the ant hill.

Herbert Simon won a Nobel Prize for telling us we don't have to run around like ants and bite everything anytime we are faced with things we can't control.

immibis 16 hours ago||
That's why companies usually use an AI to automatically ban your account. That's why there are currently tricks floating around to get anyone you don't like banned from Discord, by editing your half of an innocuous conversation to make it about child porn and trafficking. The AI reads the edited conversation, decides it's about bad stuff, and bans both accounts involved.
gpm 20 hours ago|||
> they can get very clear signals.

No they can't. People write fiction, a lot of it. I'm willing to bet that the number of fiction-related "incriminating" questions to ChatGPT greatly outnumbers the number of "I'm actually a criminal" questions.

People also wonder about hypotheticals, make dumb bets, etc.

themafia 20 hours ago||
You don't even need to make bets. Encoded within the answer of "what is the best way to prevent fires" is the obvious data on the best way to start them.
astrange 20 hours ago|||
To be clear, there is exactly nothing you're required to submit to the government as a US service provider, if that's what you mean by authorities.

If you see CSAM posted on the service, then you're required to report it to NCMEC, which is intentionally designed as a private entity so that it has 4th Amendment protections. But you're not required to proactively go looking for even that.

philwelch 10 hours ago||
I recall Anthropic publicly admitting that, at least in some of their test environments, Claude will inform authorities on its own initiative if it thinks you’re using it for illicit purposes. They tried to spin it as a good thing for alignment.
whatsupdog 19 hours ago|
[flagged]
hshdhdhehd 19 hours ago|
Climate change "I'll provide the tinder you provide the spark!"