Posted by DalasNoin 1 day ago

Large-Scale Online Deanonymization with LLMs (simonlermen.substack.com)
PDF: https://arxiv.org/pdf/2602.16800 (via https://arxiv.org/abs/2602.16800)
153 points | 140 comments
danielodievich 4 hours ago|
I post under my real name here, pretty much the only place I post. It keeps me honest and straight in what I say when I choose to say it. I tried talking to my children about leaving as clean of a footprint on the internet as one can in anticipation of future people/systems taking that into consideration. I don't know what it will be but I would expect some adversarial stuff. Trying to keep clean is what I'd prefer for myself and my kids.

On the other hand, Neal Stephenson's Fall; or, Dodge in Hell has an interesting idea in the early phase of the book, where a person agrees to what we now know as "flood the zone with sh*t" (Steve Bannon's sadly very effective strategy) to battle some trolls. Instead of trying to keep clean, the intent is just to spam like crazy with anything so nobody understands the core. It is cleverly explored in the book, albeit too briefly before it moves into virtual reality. I think there are a few people out here right now practicing this.

DrewADesign 3 hours ago||
> I tried talking to my children about leaving as clean of a footprint on the internet as one can in anticipation of future people/systems taking that into consideration.

I don’t think you’re wrong, but the fact that people consider it inevitable we’ll all have an immutable social acceptance grade that includes everything from teenage shitposts to things you said after a loved one died, or getting diagnosed with cancer, makes me regret putting even a moment of my professional energies towards advancing tech in the US.

monksy 3 hours ago|||
I think he's wrong and I'm willing to say that. The fundamental attribution error is well known, and it takes major resources to move people beyond it. For anyone who posts a comment, easy attribution later means you must future-proof your words. That is not possible, and it is extremely suppressive of self-expression.

For example: "Ellen Page is fantastic in the Umbrella Academy TV show" Innocent, accurate, supportive, and positive in 2019.

Same comment read after 1 Dec 2020 (transition coming out): insensitive, demeaning, inaccurate.

JohnMakin 3 hours ago|||
> That is not possible and it is extremely suppressive to express yourself.

There's also the fact that you cannot predict how future powers will view past comments - for instance, certain benign political views from 20 years ago could become "terroristic speech" tomorrow.

I operate by a simple, general rule - I don't often say anything online I wouldn't say directly to someone's face in real life.

NetOpWibby 2 hours ago|||
> I operate by a simple, general rule - I don't often say anything online I wouldn't say directly to someone's face in real life.

More people should keep this same energy. I try to stress this to my kids and it feels like it's falling on deaf ears in regards to my teen. Alas.

JohnMakin 37 minutes ago||
I can be a rude prick online sometimes, but I can be in real life too - basically, though, the reason I do this is I never want it to be some huge surprise IRL if someone sees what I write online and goes, "wow, I didn't know that about him." I'm pretty much the same online and IRL. For some reason this seems to matter to me, at least in the past when people have tried to, like, send employers stuff I may have written online. The reaction is like, "oh, yeah, we knew that already about him."

Nothing terrible, maybe slightly embarrassing, but you know how online spaces can be. Just be yourself, basically; at least I try to be.

actionfromafar 2 hours ago|||
Interesting. You could probably get into trouble in those two places for extremely different things you said.
JohnMakin 1 hour ago|||
of course, and it has happened, but I think authenticity is usually appreciated
NooneAtAll3 39 minutes ago|||
what two places?
DrewADesign 3 hours ago||||
I think it’s naive to assume the private companies selling these services will know, let alone care, let alone disclose when their black box models botch things like this. The companies currently purporting to provide this exact service to HR departments for hiring decisions clearly didn’t let that stop them.
antonvs 3 hours ago|||
> Same comment read after 1 Dec 2020 (Transition coming out): Insensitive, demeaning, in accurate.

I genuinely don't understand this. Are you sure you're not imagining possible offenses against some non-existent standard?

we_have_options 3 hours ago|||
well, how about "abortion legal" to "abortion murder"... it was possible to see this coming, but I know doctors in NY who are now afraid to travel to Texas.

How about DEI initiatives as good things in 2024 and a mark of evil in 2025? Lots of people were fired because in 2024 their boss told them to work on DEI and they did what their boss told them to do. Turns out this was a capital offense.

anjel 2 hours ago|||
standards change over time. Grandfather clauses are a courtesy, not a right.
heisenbit 2 hours ago||
Society's legal double standard:

- people can create new standards that will be applied retroactively

- lawmakers can create new laws which can not be applied retroactively

Nevermark 2 hours ago|||
That we identify social media as "tech" is very strange.

Yes, they have a lot of servers. But that isn't their core innovation. Their core innovations are the constant expansion of unpermissioned surveillance; the integration of dossiers correlating people's circumstances, behavior, and psychology; and incentivizing the creation of addictive content (good, bad, and dreck) with the massive profits they obtain when they can use it as the delivery vector for intrusively "personalized" manipulation, at the behest of the highest bidder, no matter how sketchy, grifty, or dishonest.

Unpermissioned (or dark-patterned, deceptive, surreptitious, or coercively permissioned) surveillance should be illegal. It is digital stalking, used as leverage against us, and to manipulate us, via major systems spread across the internet.

And the fact that this funds infinite pages of addictive content (an extremely convenient substitute for boredom), not doing anyone or society any good, is a mental health and societal health concern.

Tech that scales up conflicts of interest is not really tech. It's personal information warfare.

DrewADesign 1 hour ago||
I didn’t say I hated technology, generally— I said I hate what the industry has morphed into in the US. What is or isn’t tech is immaterial. All of the odious things you listed are things that the ‘tech industry’ does, largely unquestioned, these days. Frankly, it’s sickening.
tclancy 1 hour ago|||
I have lived my life on the web under the assumption the other Tom Clancy will leave enough chaff in my wake to make things hard. But probably not because I make the same 5 or 6 jokes over and over.
sponaugle 2 hours ago|||
I am similar in that all of my interactions are with my real name and it is unique enough that just putting it into google will instantly identify me. There is one other 'jeff sponaugle' but I think he is far more annoyed with my presence than I would be with him.

On the plus side, someone will sometimes say while talking to me - oh, you're that Subaru guy, or that YouTube guy, or whatever - and that is a fun connection.

gambutin 53 minutes ago|||
How would "flooding the zone" actually work in that case?

AFAIK the strategy is usually used to divert attention from one subject that could be harmful to a person to some other stuff.

Wouldn’t spamming in that case provide more information about you?

qsera 3 hours ago|||
> as clean of a footprint on the internet

The only winning move here is not to play.

pavel_lishin 4 hours ago|||
That whole book seemed like a collection of interesting threads that ultimately go nowhere.

I honestly don't even think I understood the ending. Or the middle, if I'm being extra honest.

I think Anathem addressed the "flood the zone with shit" much better in something like three paragraphs.

slopinthebag 3 hours ago|||
I think as the younger generations come of age they simply will not care about that sort of thing. Like it or not, it's part of the culture and might just be accepted as the norm.
SchemaLoad 10 minutes ago|||
I think it's kind of happened already. All the time we see news of politicians or famous people having their very old photos, comments, or reddit accounts found with distasteful takes. And it seems they can mostly just handwave it away with "Hey that was 10 years ago and I wouldn't make those comments today" and nothing seems to come of it.
AlecSchueler 1 hour ago||||
They might not care about it themselves but what about their government?
MengerSponge 1 hour ago|||
Vonnegut's Amphibians from "Unready to Wear"
godelski 3 hours ago|||
While I think the strategy is effective it is also likely equivalent to the dark forest. To me that's a case of the cure being worse than the poison.
observationist 3 hours ago|||
Autonomous Proxies for Execration - spam bots whose entire purpose is flooding the internet with spam so as to make identifying anything true utterly impossible. If you can't differentiate between real and unreal information in online comments, then online comments stop being a significant factor in shaping public opinion. You need to abstract - identify reliable sources of information, individuals or institutions that do the work to collect and curate.

We're already seeing this as a side effect of the mishmash of influence operations on social media - with so many competing interests, mixed in with real trolls, outrage farmers, grifters, and the like, you literally cannot tell without extensive reputation vetting whether or not a source is legitimate. Even then, at any suggestion that an account might be hacked or compromised - like a significant sudden deviation in style, tone, or subject matter - you have to balance everything against a solid model of what's actually behind probably 80% or more of the "user" posts online.

There are a lot of aligned interests causing APEs to manifest - they're a mix of psyop style influence campaigns, some aimed at demoralization, others at outrage engagement, others at smears and astroturfing and even doing product placement and subtle advertisement. The net effect is chaos, so they might as well be APEs.

ectospheno 3 hours ago|||
I expect more people over time to use local LLMs to write every single post they make online.
shitloadofbooks 47 minutes ago|||
At this point, where everyone is using an LLM to post and I'm having to use an LLM to keep up and summarise it, I think I'll just ...stop and go outside for quite a while...
tlavoie 3 hours ago||||
At that point, why bother to make any posts at all?
pbhjpbhj 3 hours ago||||
>post they make

Will they realise their life has devolved to pretending an LLM is them and watching whilst the LLM interfaces {I was going to say 'interacts', but no, this fits!} with other bots?

Will they then go outside whilst 'their' bot "owns the libs" or whatever?

Hopefully at some point there is a Damascus road awakening.

goatlover 2 hours ago|||
What would that accomplish? Just to keep their social credit score in the acceptable range while they go touch grass?
KPGv2 3 hours ago||
Fifteen years or so ago I read an article arguing that by the time Millennials are nearing retirement and have more political power, people will give less of a shit about what you did online in your twenties because we will have, out of necessity, learned that asshattery in your twenties is largely irrelevant to your trustworthiness in your sixties.

When I was that age, you could tell that the kids with political ambitions self-censored online. But now everyone is buck wild, so you have to ignore that when evaluating people.

For example, a MASSIVE portion of Millennials and younger looking at the Maine election are pretty chill about the leading Democratic candidate having a Nazi tattoo because of this very thing. Basically, "dumb, drunk, deployed Marines will get cool skull-and-crossbones tattoos in their early twenties, and so what if he said a couple of ill-worded, somewhat misogynistic things in his twenties; that was decades ago, and he's obviously a different person."

Contrast with Bill Clinton, where he literally had to explain away university marijuana usage TWENTY YEARS AFTER THE FACT.

Point is, I think we're witnessing this evolution happening right now.

AtlasBarfed 1 hour ago||
This isn't the dystopia we're worried about.

The dystopia we're worried about is 1984 on steroids, with LLMs and real 24/7 worldwide monitoring by the state.

Getting caught doing embarrassing things by teenage social standards doesn't threaten your life.

A competent version of Donald Trump could have walked into the office and we would have been worse than the Third Reich.

It still could be, today, right now. The capability is turnkey right now in the US government.

This is open research being discussed here. Palantir already has all of this and probably 10 times more.

john_strinlai 4 hours ago||
many people tend to overlook how little information is needed for successful de-anonymization.

i like to introduce students to de-anonymization with an old paper "Robust De-anonymization of Large Sparse Datasets" published in the ancient history of 2008 (https://www.cs.cornell.edu/~shmat/shmat_oak08netflix.pdf):

"We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix [...]. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber’s record in the dataset."

and that was nearly 20 years ago! de-anonymization techniques have improved by leaps and bounds since then, alongside the massive growth in various technologies that enhance/enable various techniques.
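the core scoring idea from that 2008 paper can be sketched in a few lines. this is a toy (invented ratings, a simplified weight of 1/popularity, and a made-up eccentricity threshold `phi`), not the paper's full scoreboard, but it shows why "a little bit" of auxiliary data is enough when the data is sparse:

```python
# Toy sketch of Narayanan-Shmatikov-style record matching: score each
# "anonymized" record against the adversary's auxiliary (movie, rating)
# sample, and accept only if the best match clearly beats the runner-up.
from collections import Counter

def score(aux, record, popularity):
    # Rare movies carry more identifying weight than blockbusters.
    return sum(1.0 / popularity[m] for m, r in aux.items()
               if m in record and abs(record[m] - r) <= 1)

def deanonymize(aux, records, popularity, phi=1.5):
    scores = sorted(((score(aux, rec, popularity), name)
                     for name, rec in records.items()), reverse=True)
    (best, name), (second, _) = scores[0], scores[1]
    # Eccentricity test: only claim a match if it stands out.
    return name if second == 0 or best / second >= phi else None

records = {
    "user_a": {"Heat": 5, "Pi": 4, "Cube": 2},
    "user_b": {"Heat": 4, "Up": 5},
    "user_c": {"Pi": 1, "Up": 3},
}
popularity = Counter(m for rec in records.values() for m in rec)
aux = {"Pi": 4, "Cube": 2}  # the "little bit" the adversary knows
print(deanonymize(aux, records, popularity))  # user_a
```

note how the one rare movie ("Cube", seen by a single subscriber) dominates the score - exactly the sparsity the paper exploits.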

i think the age of (pseudo-)anonymous internet browsing will be over soon. certainly within my lifetime (and im not that young!). it might be by regulation, it might be by nature of dragnet surveillance + de-anonymization, or a combination of both. but i think it will be a chilling time.

DalasNoin 4 hours ago||
That's a great background paper on the Netflix attack; we make a pretty direct comparison in section 5. We also try to use similar methods for comparison in sections 4 and 6. In section 5 we transform people's Reddit comments into movie reviews with an LLM and then see if LLMs are better than Narayanan's method purely on movie reviews. LLMs are still much better (getting about 8%, but the average person only had 2.5 movies and 48% only shared one movie, so matching is very difficult).
john_strinlai 4 hours ago||
>we make a pretty direct comparison in section 5

awesome, i saw the mention in the introduction but i havent yet had a chance for a thorough read through of the paper -- ive just skimmed it. looking forward to reading it in-depth!

Jerrrrrrrry 3 hours ago||
Throwaway accounts using "clever" turns of phrase can often be deanonymized by double-clicking, right-clicking -> googling their witty pun, and seeing the sole other instance elsewhere, on Twitter, Facebook, etc.

If I see a couple of words I don't know in a row, I can infer a poster's real name.

I'd be more specific, but any example is doxxing, literally so.

SchemaLoad 7 minutes ago|||
If you have access to the whole site dataset it's much more reliable with simpler checks. You can just use word usage frequency of common words. Someone posted a demo here of doing this to HN comments which was very effective at showing alt accounts for a user.
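A minimal sketch of that common-word check (comments invented, word list tiny for illustration; real stylometry uses hundreds of function words over large corpora):

```python
# Fingerprint each account by the relative frequency of common "function"
# words, then compare fingerprints with cosine similarity. Accounts run
# by the same person tend to score high even across different topics.
import math
import re

COMMON = ["the", "a", "of", "and", "to", "in", "that", "is", "but", "very"]

def fingerprint(text):
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return [words.count(w) / n for w in COMMON]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return dot / norm if norm else 0.0

a = fingerprint("The point is that the model is very good, but the data is thin.")
b = fingerprint("The claim is that the method is very fast, but the cost is high.")
c = fingerprint("Ship it now and fix bugs later, speed wins markets.")
print(cosine(a, b) > cosine(a, c))  # True: similar styles score higher
```

Function words work well precisely because nobody consciously controls how often they write "the" or "but".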
plagiarist 46 minutes ago|||
I assume one's vocabulary is basically a fingerprint, even if one doesn't use unique turns of phrase. Domain knowledge just leaks in and we aren't conscious of it being identifiable.
ghm2199 22 minutes ago||
I want to use "slower" methods of identification more. Say, for instance, a human within a few blocks of you can identify who you are for any service that wants some kind of verification/proof that you are/have XYZ.

We could designate specific individuals to do this for you and me, just like we do today with trust authorities for website certificates.

No more verifying profiles by uploading names, emails, passports, and photographs (gosh!). Just turned 18 and want to access Insta? Go to the local high school teacher to get age-verified. Finished a career path and want it on LinkedIn? Go to the company officer. Are you a new journalist who wants to be designated on X as such, but anonymously? Go to the notary public.

One can do this cryptographically with no PII exchanged between the person, the community, or the webservice. And you can be anonymous, yet people know you are real.

It can all be maintained in a tree of trust: every individual in the chain needs to be verified, and only designated individuals can perform actions that are sensitive/important.

You only need to do this once every so often to access certain services. Bonus: you get to take a walk and meet a human being.
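A toy sketch of how a service could check such a chain. Everything here is invented (the messages, the keys), and HMAC stands in for the asymmetric signatures (e.g. Ed25519) a real system would use, but the shape is the same: the service trusts only the root, verifies the delegation, then the attestation, and never sees a name or birthdate:

```python
# Tree-of-trust sketch: root designates a verifier, the verifier attests
# a pseudonymous ID, the service checks both links without any PII.
import hashlib
import hmac

def sign(key, msg):
    # Stand-in for a real digital signature over msg.
    return hmac.new(key, msg.encode(), hashlib.sha256).hexdigest()

root_key = b"root-secret"
teacher_key = b"teacher-secret"

# Root designates the teacher as an age verifier.
designation = sign(root_key, "teacher may attest: over-18")
# The teacher attests a pseudonymous ID after meeting the person.
attestation = sign(teacher_key, "pseudonym:abc123 over-18")

def service_accepts(designation, attestation):
    # Check the delegation link, then the attestation link.
    ok_delegation = hmac.compare_digest(
        designation, sign(root_key, "teacher may attest: over-18"))
    ok_attestation = hmac.compare_digest(
        attestation, sign(teacher_key, "pseudonym:abc123 over-18"))
    return ok_delegation and ok_attestation

print(service_accepts(designation, attestation))  # True
```

(With real public-key signatures the service would hold only public keys, so it could verify the chain without being able to forge attestations, which the HMAC toy cannot express.)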

iamnothere 2 hours ago||
Despite being pseudonymous, I don’t take great pains to hide who I am. I am in my 50s and live on the West coast. I don’t have socials and I don’t post anywhere else. Have at it!

If you are semi-retired, you’re free from the threat of cancellation. As long as you aren’t posting about crimes, there’s limits to what anyone can legally do to you. (Still, it’s good to be prudent and limit sharing.)

angry_octet 59 minutes ago||
Unless you're in the nebulous situation of being Hispanic in the US, in which case you might get profiled. Or you might have family with jobs that are subject to pressure -- and right now, that seems like most jobs, because calling employers spineless is an insult to worms. Or if you'd like to travel by air, because watchlists are back, and carriers may just refuse service.
kseniamorph 5 hours ago||
I'm not sure the practical implications are as dramatic as the paper suggests. Most adversaries who would want to deanonymize people at scale (governments, corporations) already have access to far more direct methods. The people most at risk from this are probably activists and whistleblowers in jurisdictions where those direct methods aren't available, not average users.
gwern 3 hours ago||
Attacks can be chained, and this can all be automated. For example, imagine pig-butchering scams... except the scam is there, similar to some voice-cloning scams, just to get enough data to stylometrically fingerprint you for future reference. You make sure to never comment too much or too spicily under your real name, but someone slides into your DMs with a thoughtful, informative, high-quality comment, and you politely strike up an interesting conversation which goes well, and you think nothing of it and have forgotten it a week later - and 5 years later you're in jail or fired or have been doxed or framed. 'Direct methods' can't deliver that kind of capability post hoc, even for actors who do have access to those methods (which is a vanishing percentage of all actors). No one has cheap enough intelligence and skilled labor to do this right now. But they will.
GorbachevyChase 4 hours ago|||
I actually think those most at risk are normal people whom the activists will harass. Soon, anybody who works at the "wrong" business or expresses any opinion on any subject can become a casus belli for unhinged, terminally online, mentally ill people who are mad about the thing of the day, who will start making threatening calls to your employer, making false reports to police, or sending deep-fake porn to your mom.

I think that we are close to a time where the Internet is so toxic and so policed that the only reasonable response is to unplug.

ceejayoz 5 hours ago|||
> Most adversaries who would want to deanonymize people at scale (governments, corporations) already have access to far more direct methods.

Easier methods probably means more adversaries.

gmuslera 4 hours ago||
And different agendas. Governments and corporations don't try social engineering attacks or scams, or do things that end in, e.g., ransomware attacks.
5o1ecist 3 hours ago||||
- The U.S. NSA ran fake LinkedIn and Facebook profiles to phish foreign targets, as revealed in Snowden leaks, posing as recruiters to install malware.

- UK's GCHQ conducted "Operation Socialist," using false personas on social media for spear-phishing against telecom firms worldwide.

- In 2016, Russian GRU operatives (targeting Western elections) used spear-phishing on Democratic Party emails, but U.S. agencies mirrored similar tactics in counter-ops per declassified reports.

- "A Diamond is Forever".

Emotional manipulation linking diamonds to eternal love; planted stories, lobbied celebrities; created artificial scarcity myth despite stockpile.

- Amazon, Walmart, etc.

Scarcity/urgency prompts ("only 2 left!"); personalized "recommended for you" via data exploits.

- Fake reviews.

Paid influencers posed as riders praising service; hidden surge pricing mind games.

- "Torches of freedom".

Women-only events handing cigarettes as "freedom symbols" to subvert norms.

Feel free to ask for more:

https://www.perplexity.ai/search/hey-someone-on-hackernews-c...

iamnothere 2 hours ago||
Don’t forget eBay: https://www.wired.com/story/ebay-employees-charged-cyberstal...
tosapple 15 minutes ago|||
[dead]
graemep 4 hours ago|||
I can imagine a lot of countries want to control what their citizens say abroad. I know Iraq did it in the UK in Saddam Hussein's time; China does it now.
intended 4 hours ago|||
People who comment about their boss and workplaces?

People on HN who talk about their work but want to remain anonymous? People who don’t want to be spammed if they comment in a community? Or harassed if they comment in a community? Maybe someone doesn’t want others to find out they are posting in r/depression. (Or r/warhammer.)

Anonymity is a substantial aspect of the current internet. It’s the practical reason you can have a stance against age verification.

On the other hand, if anonymity can be pierced with relative ease, then arguments for privacy are non sequiturs.

john_strinlai 4 hours ago||
another big one: people looking for insurance, or looking to claim insurance
afpx 4 hours ago||
deanonymizing the people who deanonymize people at scale
bigwheels 3 hours ago||
A related past submission comes to mind:

Show HN: Using stylometry to find HN users with alternate accounts

https://news.ycombinator.com/item?id=33755016 - Nov 2022, 519 comments

econ 1 hour ago||
Everyone should really stop posting online unless their job requires it.

The platforms offer only castrated interactions designed not to accomplish anything. People online are useless, obnoxious shadows of their helpful and loving selves.

No one cares more what you say than those monitoring you and building that detailed profile with sinister motives. The ratio must be something like 1000:1 or worse.

JohnMakin 4 hours ago||
As people will point out, the OSINT techniques described are nothing new - typically, in the past, you could de-anonymize based on writing style or niche topics/interests. Total deanonymization can occur if any of these accounts link to profiles containing pictures of their faces, which can then be web-searched to link to a real identity. It's astounding how many people re-use handles on things like porn sites, linked very easily to their IRL identity.

While people will point out this isn't new, the implication of this paper (and something I have suspected for 2 years now but never played with) is that what would take a human investigator a fair bit of time, even using common OSINT tooling, will become trivial.

You should never assume you have total anonymity on the open web.

ghywertelling 4 hours ago||
If LLMs can identify a person across websites, I can ask an LLM to read his posts and write like him, impersonating him, and then this feeds back into the tools identifying him. I can probabilistically malign a person this way.
JohnMakin 4 hours ago|||
This is already a thing people did at least as far back as when I started getting into web privacy, which was ~10 years ago. I have been the target of it before.

LLMs are probably better at it, but I don't know if this is as destructive as people may guess it would be. Probably highly person-dependent.

The micro-signals this paper discusses are more difficult to fake.

john_strinlai 4 hours ago||||
stylometry is only one aspect of de-anonymization. what you describe is certainly a threat that we will have to deal with, but there is a lot more to credible impersonation than just being able to mimic a writing style
functionmouse 4 hours ago||||
So this means deanonymization doesn't work? Rejoice?
Jerrrrrrrry 4 hours ago|||
How to conduct a psy-op

https://youtu.be/YTGQXVmrc6g

warkdarrior 4 hours ago||
I think the implication is this will become trivial and trivially automated, no human investigator needed. I bet there will be plugins within a year's time to right-click on a post and get a full report on who the author is.
JohnMakin 4 hours ago|||
agreed and the new frontier here will probably be obfuscation by creating false positives with these same tools, but that kind of renders the web unusable in my mind.
arctic-true 3 hours ago||
I had this same thought. Seems fairly easy to just put off a strong false signal. If you don’t want anyone to know that you live in Finland, make a point to constantly mention how much you enjoy living in Peru.
0xdeadbeefbabe 4 hours ago|||
Wouldn't it also become trivial to pretend to be another author?
john_strinlai 4 hours ago||
it may become more trivial to llm your comments/blog/whatever into a different "voice", but there is so much that can be used for de-anonymization that llm-assisted techniques don't address.

for example, you may change the content of your comments, but if you only ever comment on the same topic, the topic itself is a signal. so are when you post (both day and time), the frequency of your posts, topics of interest, usernames (e.g. themes or patterns), and much more.
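a small sketch of the timing signal (timestamps invented): two accounts that post at the same hours of day look alike even if an llm rewrites every word.

```python
# Compare accounts by hour-of-day posting histograms. Histogram
# intersection of 1.0 means identical posting rhythms, 0.0 means none.
from collections import Counter

def hour_profile(post_hours, bins=24):
    counts = Counter(h % bins for h in post_hours)
    total = len(post_hours)
    return [counts[h] / total for h in range(bins)]

def overlap(p, q):
    return sum(min(x, y) for x, y in zip(p, q))

main_acct = hour_profile([22, 23, 23, 0, 1, 22, 23])  # a night owl
alt_acct = hour_profile([23, 0, 22, 23, 1, 23])       # same night owl?
stranger = hour_profile([8, 9, 10, 9, 8, 12])         # a morning poster
print(overlap(main_acct, alt_acct) > overlap(main_acct, stranger))  # True
```

real attacks would also bucket by day of week and timezone-shift the candidate profiles, but even this crude version separates the accounts above.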

block_dagger 4 hours ago|
Does this mean we'll find out who Satoshi is with a high degree of confidence?
hellojesus 2 hours ago|
Clearly the CIA or another government institution. Its purpose is to create an irresistible honeypot so that anyone who figures out a working and time-feasible implementation of Shor's algorithm or another prime factorization technique would reveal their hand.