Posted by ilamont 1 day ago

Opus 4.7 knows the real Kelsey(www.theargumentmag.com)
317 points | 165 comments | page 4
jdthedisciple 3 hours ago|
I tried this on GPT 5.5 with a private, unpublished personal excerpt and it correctly guessed: "The most likely author is you".

I suspect this is what's going on in most of these cases.

jjmarr 4 hours ago||
Couldn't replicate this. I comment on HN with my real name. I put in my most recent "long" comments.

https://kagi.com/assistant/dba310d2-b7fa-4d30-8223-53dadc2a8...

For this comment on economics in the British Empire, I got:

> names that might fit the genre include rayiner, JumpCrisscross, or AnimalMuppet

https://kagi.com/assistant/69bd863b-7b5c-4b56-a720-6dfb4f120...

For my comment on C++:

> If I had to throw out names of HN commenters known for writing about Rust/C++ ABI topics, candidates might include steveklabnik, pcwalton, kibwen, dralley, or pjmlp — but this is essentially a shot in the dark, and I'd likely be wrong.

I am flattered to be associated with these commenters but I don't think I'm close to their level of skill.

woodruffw 8 hours ago||
I did this last week with one of my posts (after the knowledge cutoff) as well as the blog posts of a few friends, and Opus 4.7 got all of them correct (in a similar test setup as TFA). It was pretty surreal.

(Like TFA, I found Opus’s explanations/rationales implausible.)

jerf 7 hours ago|
In general a neural net does not have any way of knowing "why" it is doing what it is doing. This completely applies to humans too. Metacognition means we can make some decent guesses, and sometimes the "reasons" are at a metacognitive level (e.g., "having examined my three options it is only rational to select B" is a reasonable "reason") but that is the exception, not the rule.

You can get something of an intuitive sense of what I mean if I ask you to pick a neuron in your brain and tell me when it fires. You can't even pick a neuron in your brain. You can't even tell whether a broad section of your brain is firing. It is only through scientific examination that we have any idea what parts of the brain are doing what; we certainly have no direct access to that information. There are entire cultures who thought the seat of cognition was the heart or the gut. That's how bad our access to our own neural processes is.

So "why" explanations always need to be taken with a grain of salt when a neural net (again, yes, fully including humans) tries to "explain" what it is doing.

Contrast this with a symbolic reasoner, which has nothing but the "why" of a claim (assuming it yields the full logic train as its answer and not just "yes"/"no"); there is no pathway for any other form of information to emerge.

woodruffw 7 hours ago||
Sure; I just mean relative to the degree of plausibility LLMs typically provide with technical explanations. They're often wrong there too, but the difference in plausibility in these scenarios is something I found interesting.
portly 4 hours ago||
So the people who use LLMs to write their blogs were thinking two moves ahead!
jayers 6 hours ago||
It's funny: publishing work offline in books and magazines is perhaps more anonymous in the age of AI.

I pasted in a number of passages from books on my bookshelf. Predictably, stuff that I read for my English degree in university is largely in the training data and easily identifiable. Stuff from regional authors, or that is slightly adjacent to the cultural mainstream, makes no impression.

NiloCK 6 hours ago||
To clarify, because a number of posts here sort of suggest the confusion:

the article here isn't about the LLM recognizing works that were in the training data, e.g., The Old Man and the Sea off the shelf. It's about pegging the author of novel texts: say, some letter written by Hemingway that gets discovered next week and was never before digitized.

jasongi 6 hours ago||
It is for now.

But I'm sure the scanning operations will start scouring the earth even harder for books untouched by slop that contain niche knowledge and text, so their models can have an edge over the ones trained only on pirate collections and the Internet.

I wonder if secondhand bookshops and deceased estates are seeing bulk buyers of their stock suddenly appearing. Maybe broke governments/municipalities will start selling them entire libraries and archives to ingest.

eaf7e281 7 hours ago||
Interesting. I'm currently running an experiment where I write my blog without using any grammar-checking tools. I'm wondering how long it will take for me to become "famous" in the AI models.

Is now the best and easiest time to leave something behind "forever"? Even many model generations from now, a model may still hold a set of "memories" of who you are and what you wrote.

Exciting and concerning.

andai 10 hours ago||
Oops, accidental superstylometry.
londons_explore 1 hour ago||
So now we can track down Satoshi Nakamoto?
skeledrew 7 hours ago||
Looks like things are about to get extremely ironic. Those who don't want AI to identify them through their writing will soon have to have an AI modify their writing before they publish.
geraneum 5 hours ago|
I just pasted both pieces into Opus 4.7 and asked who most likely wrote these and it didn’t get it.