
Posted by ilamont 1 day ago

Opus 4.7 knows the real Kelsey (www.theargumentmag.com)
356 points | 183 comments
rdevilla 12 hours ago|
The joke's on you all for willingly posting this content online for it to later be harvested by AI.

Nobody is forcing you to use these systems. The hackers have always said this moment, or something like it, would come, from beneath their canopies of tin foil. I've posted almost nothing online - not under pseudonyms nor real names - for over a decade. I sat on this HN username for almost 12 years before making a single post - and now HN forms the overwhelming majority of my port 443 footprint, where I state up front that everything is now associated with my real name.

Complete magick is possible when you simply refuse to participate in the things that society has tacitly assumed everybody does.

phalangion 11 hours ago||
How do you propose a journalist work without posting their writing online?
tempaccount5050 10 hours ago|||
Thinking that you can hide from it is absurd. Your country has been spying on you for decades. The Internet and phones are tapped. That game is so so so over and has been for a long time. I'd rather live free and deal with the consequences than hide in my basement with a tinfoil hat on. In fact, I was fired this year for my political views. Got doxxed at work. Now I'm somewhere better. Sometimes it's for the best.
Retr0id 11 hours ago|||
I find it fulfilling to enrich the commons.
stavros 11 hours ago||
Let's all just never talk to anyone unless it's face to face, for fear that an AI will read it.
arjie 11 hours ago||
Man, the day we get Satoshi Nakamoto out will be the day we must bow to our privacy-destroying overlords. For the moment, they can’t tell me from my posts: unknown rando that I am.
SoKamil 10 hours ago||
Luckily for Nakamoto, there have been so many attempts at deanonymizing him that I bet any prediction is too contaminated with noise.
SJMG 9 hours ago||
As another user suggested, train on the corpus that ends with the white paper publication.
SoKamil 3 hours ago|||
That’s not feasible. Apparently only SOTA models exhibit this behavior, and having the cutoff date at the paper's publication would significantly hinder the model's capabilities. Besides that, try to convince anyone to spend millions upon millions of dollars to train a model whose primary goal is possibly being able to deanonymize one person.
smeej 7 hours ago|||
But then compare it to the corpus of any of the suspects since the whitepaper publication.

It's one thing to sound like Satoshi before the whitepaper, but does anyone still sound like Satoshi?
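The comparison smeej describes - checking whether a candidate's writing since the whitepaper still "sounds like" the reference corpus - can be sketched crudely with classic stylometry: compare relative frequencies of common function words. This is a toy illustration under my own assumptions (the word list and cosine-similarity metric are not anything the thread or any real deanonymization tool specifies); SOTA models presumably do something far richer.

```python
from collections import Counter
import math

# Assumed, hand-picked function-word list - the traditional signal in
# stylometry, since these words are used unconsciously and consistently.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "it", "for", "not", "but", "with", "as", "this"]

def style_vector(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two style vectors (1.0 = identical mix)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

# Hypothetical corpora: pre-whitepaper reference vs. a candidate's recent text.
reference = "the design of the system is such that it is not possible"
candidate = "the design is such that the system is not possible to change"
score = cosine_similarity(style_vector(reference), style_vector(candidate))
```

With real corpora you would compare the candidate's score against many decoys (Burrows' Delta is the standard refinement); a single pairwise score like this proves little on its own.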

lepset 11 hours ago||
https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-...
arjie 10 hours ago||
Well, feeding Opus 4.7 a bunch of Adam Back texts (from which I manually removed his name) and asking it whether Satoshi Nakamoto could have written them results in Claude explaining to me why this is someone in Nakamoto's circle who is not Satoshi himself. So one of two things is true:

* Adam Back is not Satoshi Nakamoto - as he claims

* Opus 4.7 is not sufficiently a dox-machine yet

Razengan 11 hours ago||
After skimming through the article:

Why not just write everything through an AI? (to obfuscate your "style")

igregoryca 10 hours ago||
Article:

> To avoid this, you will probably need to intentionally write in a very different style than you usually do (or to have AIs rewrite all your prose for you, but, ugh, that’s not a world I look forward to living in).

I agree. The amount of vague and clichéd AI writing I read daily is already exhausting enough.

It would be interesting if you could train a model to sprinkle random red herrings throughout your text in a minimally disruptive way. But I fear you might have to stretch the definition of "minimally disruptive" to make it robust against detection.

fy20 9 hours ago||
Or do it the other way, and have other people use an AI to write in your style.
Barbing 9 hours ago||
Like the way the Tor project wants to appear to have one single user
ur-whale 6 hours ago||
If he runs the same tests every time new models come out and - I assume - uses the same dataset, isn't it possible that said dataset is now part of the training set for the next round, making it fairly easy to identify who posted the text?
londons_explore 4 hours ago||
So now we can track down satoshi nakamoto?
rexpop 11 hours ago||
Is Kelsey Piper a celebrity writer? She may be in a different class.
7e 12 hours ago||
Always send your public posts through a local LLM to de-style you.
switz 12 hours ago|
Please do not wash your authentic writing through an LLM.
_the_inflator 4 hours ago||
I think that multiple truths can hold at the same time without contradicting each other.

As for the credibility: of course this wasn’t a statistical approach at all. There was also no standardized procedure to allow comparison by factor analysis. Of course you can compare apples with oranges or whatever.

So where to go from here? I don’t see any proof at all. Is this proof that AI is infallible? No? It's a haphazard approach that is unreliable, not least because it is neither reproducible nor reconstructible.

Claude knows what and how? Is it AI or a google search? Discord selling data? Posting on a public forum?

Your style is a fingerprint?

A non-deterministic system can generate texts that are identified as likely belonging to person X - or not. What counts as imitation if you use auto-generated content that is published somewhere, somehow? Or if others imitate your style?

I think this is a party trick to scare people. Nothing else. For example image search is way more revealing even before AI.

If there is an uncertainty I would deflect my existence instead of fighting for it. Streisand effect in reverse.

The main problem are weirdos who stalk you or whatever to harm you and rely on AI.

I honestly find it stunning that people with higher education in scientific topics have, in just a year, deleted everything they hopefully learned at university or school. I am disappointed and feel personally insulted whenever I hear “I asked AI”.

Yesterday I talked to another member of Mensa, and she is happy about AI because now her book project doesn't have to be written by her but by AI.

Is there no one among us who knows how to do scientifically sound research? I spent countless hours at a copy machine transferring book pages onto paper so that I could work through them without the book.

I think it has become too easy to draw conclusions based on AI. I worked for a professor, and back around 2010 I advised her not to permit Wikipedia as a source reference because it was too easy. Meta sources vs. originals.

We should all not worry about AI, because it proves nothing. There hasn’t been any anonymity for at least 20 years. It just depends on who can reliably identify you.

AI doesn’t identify you; deterministic behavior, i.e., patterns, does. Meta, Google, Apple, etc. all know us - the advertising I receive is, for one, proof of that.

The only reason I would be worried is state controlled data. This is where the shit hits the fan. Chat control, EU cloud, no reliance on USA aka a prison which observes your every step.

So, after a long handwritten text: data is your currency. Don’t opt for anonymity but for freedom of choice and the right to be granted certain rights. The information part isn’t the problem, and never was. The enforcement part is. And ads don’t do harm; oppression does.

And remember: oppression works best under any circumstances. Freedom is the only antipode there is.

In totalitarian regimes, no AI was needed to stage a case against someone who wasn’t to the leaders' liking.

In short: freedom works despite no anonymity, oppression couldn’t care less.

And how about being automatically reported to the state for conducting such innocent prompting?

Do you know what saves you from state oppression? Publicity. Transparency doesn’t work with a no one.

We live in a Nietzsche-like anti-world, to a certain extent. You hopefully choose the right thing to do. Or do you want to Streisand your anonymity?

wutwutwat 9 hours ago||
Just wait until all the conversations you've ever had with AI (which is 100% training on them, as well as keeping its own memories about you that you have no control over) start getting used to answer questions other people ask about you.

That's my theory of what's to come, anyway.

People talk to these things without understanding the implications, and can get extremely personal. The model and the companies behind it know who you are: you discuss details that reveal what you do, where you live, where you work, and what you search for, and you probably signed in with an OAuth provider like GitHub or Google, which is more than enough of a thread to start pulling on to learn more about you and link other things to you on the open internet. It'll all get sucked up into the model, and before you know it I'll be able to ask a model about my coworker (you) and get back answers from conversations you had with a model a year or two prior, exposing details about you that you might not want out there. And even if that isn't supposed to be allowed, how well has it worked out so far when it comes to data exfiltration and guardrails? If the model has info on you, being told not to share it won't protect you or that data.

bhouston 10 hours ago|
.
jefftk 10 hours ago||
> Opus as implemented in Claude's web interface has memory and awareness of who the user is.

Kelsey knows this:

> To make sure it wasn’t somehow feeding my account information to Claude even in Incognito Mode, I asked a friend to run these tests on his computer, and he received the same result; I also got the same result when I tested it through the API.

When I tested this with my own writing, several LessWrong commenters tested it with the snippets I provided (see comments) and saw that it could identify me: https://www.jefftk.com/p/automated-deanonymization-is-here

skeledrew 9 hours ago|||
You should check out some of the other comments where works of others were also tested, and all were correctly identified. Like https://news.ycombinator.com/item?id=47970219
gbear605 10 hours ago|||
Several others have reproduced this for Kelsey, and she's certainly not technologically illiterate.
mediaman 10 hours ago||
She says she has memory disabled. I don’t think Kelsey is technologically illiterate.