
Posted by reconnecting 8 hours ago

Alignment whack-a-mole: Finetuning activates recall of copyrighted books in LLMs(github.com)
152 points | 112 comments
TFNA 7 hours ago|
I’m a researcher who for years has been scanning my library’s holdings in my particular discipline for my own use, but also uploading the books to the shadow libraries for everyone else’s benefit. The revelation that LLMs are training on the shadow libraries has made me put a lot more effort into ensuring my scans are well-OCRed. The idea that I could eventually ask ChatGPT or whatever about obscure things in my field, and get useful output (of the "trust but verify" sort), is exciting.
lelanthran 46 minutes ago||
> The idea that I could eventually ask ChatGPT or whatever about obscure things in my field, and get useful output (of the "trust but verify" sort), is exciting.

That's your idea, not the one they are going with.

Their idea is that you pay a fee to access any information that was freely available.

Your idea is the tearing down of fences; their idea is gatekeeping. The two ideas are incompatible.

BrenBarn 6 hours ago|||
How about the idea that you might have to eventually pay an AI company a large amount of money to ask ChatGPT such a question, while the library itself has lost funding?
BugsJustFindMe 6 hours ago|||
Library funding is a political stance that has only an imaginary connection to whether people pay to ask things of ChatGPT. People can pay to talk to an AI, and the government can also fund libraries.
bakugo 3 hours ago||
Do you believe it makes sense for the government to fund libraries that almost nobody uses because they'd rather ask ChatGPT?
stingraycharles 1 hour ago|||
If people prefer to pay ChatGPT, rather than going to the library for free, and ChatGPT sources content from libraries, then sure that makes sense, especially if the information contained is of cultural relevance to the government.

It’s the same as asking “should you release open source software knowing that AI companies are training on it”. I could not care less; that’s not why I release my software to the public at all.

indigo945 2 hours ago|||
People are already not using libraries because they'd rather rot their brains on TikTok than read a book. (Also, for information lookup, the internet and search engines exist, and have for a while now.) This has no actual causal relation.
snaking0776 48 minutes ago||
People is a broad term. Outside of major cities (where I live) libraries serve a very essential service for parents and their children and as a free communal space for the broader community. Our libraries are always full and a large part of the health of our area.
roenxi 1 hour ago||||
1. Being offered a service you would pay a lot of money for is a step forward. When people pay a large amount of money for something that means they wanted the thing more than the money. The link between ChatGPT and libraries being under threat seems a bit weak too.

2. The Chinese have been investing a lot into free models, they're perfectly good and keep improving; despite the best efforts of the US. They're even ramping into making their own hardware. Gemma 4 is pretty snappy too. It doesn't seem like there is much of a moat to this, my guess is there will be perfectly good local models if you want to avoid AI companies.

cheschire 57 minutes ago||
When people pay a large amount of money for something, that means they wanted the thing more than another thing. Money just provides the method to defer value transfer.

When the person paying the money is rich, the other thing they are foregoing is typically not a life necessity. When the person is poor, however, it typically is.

spoaceman7777 5 hours ago||||
Free, downloadable AI models have consistently caught up to ChatGPT within 3 months, for almost a year now.

I highly encourage you to go and update your priors.

roygbiv2 2 hours ago||
And how much does the hardware cost to run said models?
dboreham 2 hours ago|||
You can run them slowly on any machine that has enough memory.
fragmede 2 hours ago|||
How good do you want it to be? For something close to ChatGPT today (April 2026), you're still looking at a system with 7x H200 plus chassis, which will run you around $300k, or a GB200 NVL72, which is $2-3 million. OTOH, a Qwen3.6 quantized model can be run on $10,000 (high-end Mac) or $1,000 (Mac mini) worth of hardware. Even a Pixel 10 Pro cellphone ($1,000) can run useful models locally.
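
As a rough sanity check on those hardware tiers, weight memory scales as parameter count times bits per weight; the sketch below estimates it that way, with an assumed flat 20% overhead for KV cache and activations (my ballpark, not a measured figure):

```python
# Rough memory estimate for running a quantized LLM locally.
# Assumption: weights dominate; KV cache and activations are
# approximated here as a flat 20% overhead on top.

def model_memory_gb(n_params_billion: float, bits_per_weight: int) -> float:
    # 1B parameters at 8 bits per weight is roughly 1 GB of weights
    weights_gb = n_params_billion * bits_per_weight / 8
    return weights_gb * 1.2  # ~20% headroom for KV cache / activations

# A 70B model quantized to 4 bits:
print(round(model_memory_gb(70, 4), 1))  # roughly 42 GB: fits a 64 GB Mac
```

That is why a high-memory Mac can run large quantized models and a phone can run small ones: the number that matters first is simply whether the quantized weights plus overhead fit in RAM.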
dzink 50 minutes ago||
Go to OpenRouter, ask your own investigative prompt, one that meets your needs, to all the top open models. See how they do. Then notice whether you can run any of those locally. Repeat at least once a month.
woctordho 4 hours ago||||
A digital library needs almost no funding. With today's decentralized networking infrastructure such as BitTorrent and IPFS I bet it just exists forever.
x-complexity 4 hours ago|||
> A digital library needs almost no funding.

Clarification:

Maintaining the library still requires resources and effort. It only appears to need no funding because the donors of disk space, bandwidth, and dev effort are subsidizing it in aid of a goal they believe in (i.e. the church model).

tardedmeme 4 hours ago|||
How much of Anna's Archive are you seeding?
woctordho 4 hours ago||
About 4 TB at hand
TFNA 6 hours ago||||
Some people might have to pay a large amount of money to ask a commercial LLM, but advances in this space mean that if I have the data myself on my own computer, or can download it from a shadow library, I might eventually be able to ask everything locally for free.

> while the library itself has lost funding

Libraries are inherent parts of universities. While their precise role evolves, do you think that they will just be done away with? Already a substantial amount of scholarship in disciplines other than my own has moved online (legally), and the library is still there.

protocolture 4 hours ago||||
How about the idea that one day you might be paying a subscription to use a service while non sequitur.
locknitpicker 5 hours ago|||
> How about the idea that you might have to eventually pay an AI company a large amount of money to ask ChatGPT such a question, while the library itself has lost funding?

There are plenty of free models with RAG support. Why do you believe everything starts and ends with a major corporation charging a subscription?
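
As a toy illustration of what "RAG support" means, a sketch of the retrieval half: pick the most relevant local document and stuff it into the prompt. Real setups use embeddings and a vector store; the word-overlap scoring here is just to show the shape, and the filenames are made up:

```python
# Toy retrieval-augmented generation (RAG) sketch: score local
# documents by word overlap with the question, then build a prompt
# that prepends the best match as context for a local model.

def retrieve(docs: dict[str, str], question: str) -> str:
    q_words = set(question.lower().split())
    # Return the document name with the largest word overlap
    return max(docs, key=lambda name: len(q_words & set(docs[name].lower().split())))

docs = {
    "hobbits.txt": "hobbits live in holes and value comfort",
    "dragons.txt": "dragons hoard gold under the mountain",
}
best = retrieve(docs, "where do hobbits live?")
prompt = f"Context:\n{docs[best]}\n\nQuestion: where do hobbits live?"
print(best)  # hobbits.txt
```

The point is that the knowledge lives in your own files; the model only has to read and summarize what retrieval hands it, which is exactly what a free local model plus your own library can do.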

altmanaltman 5 hours ago|||
How is any of that legal? Can you just take books from the library and then scan and upload digital copies? How do you deal with the ethics of this personally, stealing to make it easier for AI to steal so AI gets better? Does calling yourself a "researcher" make you feel like its actually something worthwhile you're doing?
x-complexity 4 hours ago|||
> How do you deal with the ethics of this personally, stealing to make it easier for AI to steal so AI gets better?

If the obscure book/text is permanently lost forever under your stringent advice of "no stealing under any circumstances", would the "stealing" have saved it? If so, is it ethical to prevent others from accessing the book/text, under your guise of "preventing stealing"?

GaryBluto 5 hours ago||||
> How do you deal with the ethics of this personally, stealing to make it easier for AI to steal so AI gets better?

By quoting your comment in my reply, have I "stolen" your comment?

fragmede 3 hours ago||
By reading this comment you have entered into a legal contract, by which you owe me $5. Failure to pay will be reported to the Internet police.
granabluto 2 hours ago||||
First, it's called infringement, not stealing. It's a custom defined term in a custom defined law.

Second, it is totally legal to read the book in a public library, for free, right now.

Third, laws can change. The current copyright term was pushed by one company (Disney) out to life plus 70 years, to their benefit, and can be redesigned/pushed back by AI companies, for their benefit.

A 2 year copyright duration sounds like a good compromise.

TFNA 5 hours ago||||
As a researcher, the main worthwhile thing that I am doing is publishing research, but having all this prior scholarship at hand 24/7 definitely makes it easier to produce said publications. And if I have created a scan, why not help out my colleagues, too?

"Deal with the ethics", seriously? You might want to learn about how heavily shadow libraries are used across academia now. It’s no longer just disadvantaged scholars in the developing world relying on pirated scans because they don’t have good libraries. It’s increasingly everyone everywhere, because today’s shadow libraries can be faster and more convenient than even one’s own institution’s holdings. At conferences, if the presenter mentions a particularly interesting publication, you can sometimes watch several people in the room immediately open LibGen or Anna’s Archive on their laptop to download it right there and then.

SomaticPirate 5 hours ago||
[flagged]
vidarh 4 hours ago|||
The vast majority of writers do not recoup their investment, not due to piracy but due to a massive glut of works available.

I've published a couple of novels. They've sold far better than average, and yet not sold enough to be remotely worth it if I did it for the money. Piracy might have made a tiny dent, but the many millions of competing novels matters far more.

Anyone who has self published will have experienced that it is hard to even get people to read (as opposed to just download to hoard) your work even for free.

It's more comfortable to blame piracy, though.

chongli 3 hours ago||
[dead]
reacweb 4 hours ago||||
I think the current intellectual property system is flawed. Books are knowledge, and we shouldn't be able to limit the spread of knowledge. I imagine that books could be sold at the cost of printing, and there could be a QR code inside so that readers could freely donate money to the author if they enjoyed the book. Strangely enough, I imagine that with such a system, authors would be better paid.
vintagedave 2 hours ago|||
> But I have friends who used to self publish some small esoteric fiction. This commonplace theft has basically made them stop

If you're writing for money, maybe. If you're writing for the love of writing, it won't.

More, you hear of authors who encourage their books to be made available without DRM, who know or silently encourage their books to end up on torrent / library sites. They want their books to be read.

subscribed 2 hours ago||||
It's not stealing, it's uploading without the licence. Laws in many countries allow for the lawful download of such books, regardless of how they were uploaded.

Separately, laws aren't always sensible or right: slavery was legal, child marriage was legal, not paying taxes on billions of profits is legal while not paying taxes of £1000 is illegal, reporting Jews to Nazis was mandatory, etc, etc.

felooboolooomba 4 hours ago||||
> How is any of that legal?

He didn't mention legality. The world is rigged, as you can see from a head of state participating in both the running and the cover-up of history's largest CSE. Watch what people are doing in addition to what they are saying.

I for one am tremendously thankful for TFNA's efforts, since I get access to knowledge that I wouldn't have been able to before.

tardedmeme 4 hours ago||||
AI training is legal because the supreme court said so.
woctordho 4 hours ago||||
Copyright is a property right, and property rights are what we call bourgeois legal rights. They will cease to exist as productive forces like AI develop.
__alexs 4 hours ago||||
You can't steal information don't be silly. You can just not have permission to copy it. Oh no.
redsocksfan45 1 hour ago|||
[dead]
Papazsazsa 6 hours ago|||
[flagged]
TFNA 5 hours ago|||
Of course not, and many authors are already long dead. But if you knew anything about academic publishing, the authors almost invariably are happy to see their work out there freely available. It’s not as if they make any money from it, and the more eyes on their work, the better their chances of getting cited and thereby furthering their careers.

It is some publishers who would object on copyright grounds. But I get the sense that some publishers are already becoming resigned to the fact that most of their new ebook releases are ending up on the shadow libraries within only a few weeks, and Anna’s Archive has become the first place to look (even before one looks at whether one’s own institutional library has the book) for researchers around the world.

Papazsazsa 4 hours ago||
[flagged]
red75prime 4 hours ago||||
The ridiculously long "70 years after the author's death" makes it highly problematic in many cases.
ddtaylor 5 hours ago||||
Why assume people lock knowledge in a box and charge for access?
nullsanity 6 hours ago|||
[dead]
emsign 4 hours ago||
That's a slave mentality. You are aware that OpenAI charges money for other people's work and intelligence, right? Your own and that of other volunteer pirates and of the original authors as well. I don't get people like you at all.
TFNA 4 hours ago|||
I’ve already posted in this thread about how even if OpenAI charges money for its LLM trained on the literature, that doesn’t change the fact that the literature remains available to everyone through the shadow libraries, and advances in AI mean that one can increasingly work with it locally on one’s own computer.
__alexs 4 hours ago||||
Open weight models exist and are critical to us avoiding a future where you have to pay sama a slice of every engineers salary.
wallst07 1 hour ago|||
>I don't get people like you at all.

Because you don't try, which says more about you than OP. It's a major problem with society.

x-complexity 3 hours ago||
Modern copyright duration is the actual problem: it should never have been longer than what was outlined in the Statute of Anne (14 years, renewable once for another 14).

https://en.wikipedia.org/wiki/Statute_of_Anne

The Lord of the Rings should be in the public domain.

The original Harry Potter book should've been in the public domain.

Star Wars should've been in the public domain.

Everything from before 1998 should've been in the public domain by now, but isn't.

rectang 7 hours ago||
At some point, there will be a successful copyright infringement suit against an LLM user who redistributes infringing output generated by an LLM. It could be the NYTimes suit, or it could be another, but it's coming — after which the industry will face a Napster-style reckoning.

What comes next? Perhaps it won't be that hard to assemble a proprietary licensed corpus and get decent performance out of it. Look at all the people already willing to license their voices.

Hfuffzehn 5 hours ago||
And at that moment societies might actually have to think deeply about the value copyright provides.

Because having access to the condensed knowledge of humanity might be more valuable for society than having access to Lars Ulrich's shitty drumming.

So yes, it will be hugely interesting to see which society then decides what, and whose profit will be prioritized. And societies won't easily find good answers.

palmotea 5 hours ago||
> Because having access to the condensed knowledge of humanity might be more valuable for society than having access to Lars Ulrich's shitty drumming.

Under the current copyright regime, nothing's stopping you from condensing that knowledge yourself and publishing in the public domain. But that would be a lot of work for you, wouldn't it? And I suppose you'd rather do work you'd get paid for.

When society decides AI slop will be the only item on the menu, then copyright will die.

Hfuffzehn 4 hours ago||
Yes, I agree.

I deliberately formulated that while channeling myself as the kid who actually found his drumming valuable but didn't have the money to buy (all) of it, and who was annoyed at society deciding I should not have it.

So I still don't have the answers but the stakes have certainly gotten bigger.

ralph84 6 hours ago|||
OpenAI's valuation is more than basically all traditional media companies combined. Nvidia could buy the NYTimes with a month's worth of profits. The top 8 companies in the S&P 500 all benefit more from LLMs being successful than strict copyright enforcement. Congress has very broad power over copyright law. If a suit is successful there is a lot of money and power to be deployed to change copyright law.
SomaticPirate 5 hours ago||
Exactly. So just buy it. They have the money, or does Sam need a moonbase to complete his villain arc? Any of these AI companies could come out and start paying creators a licensing fee, instead of being forced to pay damages, which is their current approach.
ehnto 2 hours ago|||
If we have to devolve into a tech dystopia, the least they could do is make it interesting. The billionaires should get into a lunar robot war; corporate space wars would make a great drama. Maybe if they're busy playing Star Wars they'll forget about the rest of us for a while and we can repurpose all that wealth.
rcxdude 2 hours ago|||
They would almost certainly be paying publishers, not creators.
NewEntryHN 2 hours ago|||
You are comparing the fight between a p2p program and the entire music industry with the fight between the entire LLM industry and a newspaper. Notice how the order seems inconsistent.
tommek4077 6 hours ago|||
And what happened after Napster? Filesharing totally stopped, right?

With the Chinese in the mix it won't stop AI. It probably will change copyright.

dijksterhuis 6 hours ago|||
Spotify and Netflix happened.

file sharing became far less popular and ubiquitous as a result of their popularity.

they tweaked the model: first users downloaded temporary copies from central servers instead of via p2p, then later users rented licensed copies of media instead of keeping pirated copies.

i’m tired of seeing this as an argument on HN — that because something didn’t hit 100% that implies it was a failure and not worth doing or something.

the fact that a limited subset of people still do filesharing is not evidence that the napster case had no effect.

(spotify didn’t exactly start out squeaky clean with how they built out their repertoire iirc).

(apologies for early edits. i just woke up.)

tjpnz 6 hours ago||||
How did the Napster suit change copyright?
neoncontrails 6 hours ago|||
Can you name an active filesharing app that's in use today? The action against Napster might not have killed filesharing, but it was p2p's Antietam.
TFNA 6 hours ago|||
The Bittorrent ecosystem is still very much around. I’m a cinephile who has a collection of nearly a thousand films in Blu-ray image format, and 95% of that is off a tracker that is even open, not private.

And Soulseek is still known as the P2P source where you can find all kinds of obscure music.

palmotea 5 hours ago|||
> The Bittorrent ecosystem is still very much around.

The point is: When Napster was around, everyone was running it all the time from their dorm rooms; it was ubiquitous. Now most people run something like Spotify or Netflix instead; piracy is niche, streaming is ubiquitous.

TFNA 5 hours ago|||
I’m well aware of that societal change, but the OP asked about an “active filesharing app that’s still in use today”, and if there are Bittorrent communities with so many seeders that one can get almost any film in a matter of minutes, then that fits the definition.
xboxnolifes 4 hours ago|||
Using Spotify or Netflix as the example of people going cold on file sharing is odd. People use Spotify and Netflix because piracy is a service problem, and streaming apps made it much lower friction to get music and video than running LimeWire.

Notably, Spotify did not exist and Netflix did not stream video until long after the Napster suit.

stinos 2 hours ago|||
> And Soulseek is still known as the P2P source where you can find all kinds of obscure music.

Wow, TIL. Do you happen to know if IRC file sharing of obscure music is still a thing?

lelanthran 37 minutes ago||||
Bittorrent?

I have it running basically all the time...

yard2010 4 hours ago||||
There are many people sharing many files on Usenet. There are a few open source projects to automate the downloads.
JKolios 3 hours ago|||
[dead]
heisenbit 6 hours ago|||
We will see such attempts first against weaker targets: users who don't have the enterprise indemnifications.
codemog 6 hours ago||
The law exists to protect the elite and punish the underclass. We’re not in a Hollywood movie. Nothing will happen.
bombcar 6 hours ago||
In a hole in the ground there lived a

Claude responded: hobbit. Not a nasty, dirty, wet hole, filled with the ends of worms and an oozy smell, nor yet a dry, bare, sandy hole with nothing in it to sit down on or to eat: it was a hobbit-hole, and that means comfort.

That's the famous opening of J.R.R. Tolkien's The Hobbit (1937). Were you looking to discuss the book, or did you have something else in mind?

CoastalCoder 2 hours ago||
I'm already deeply concerned about the way LLM usage will affect society.

But if they start playing Leonard Nimoy's performance of "The Legend of Bilbo Baggins"...

redsocksfan45 1 hour ago||
[dead]
reconnecting 7 hours ago||
Demo: https://cauchy221.github.io/Alignment-Whack-a-Mole/

Arxiv: https://arxiv.org/abs/2603.20957

beautifulfreak 7 hours ago||
Language Models are Injective and Hence Invertible https://arxiv.org/abs/2510.15511
elmomle 6 hours ago||
That paper is about retrieving the input (prompt from user) based on the hidden-layer activations of a trained LLM, since their mappings are 1-to-1. I don't think it makes any claims about training data, certainly not about being able to retrieve it losslessly from a model.
pfortuny 1 hour ago|||
The set of non-invertible answers is of measure 0 (that is the claim). But in real life (where we live) this may be a vacuous statement, like saying that "the set of the rationals is of measure 0". Right, that is true. It is also useless.
js8 2 hours ago||
I don't believe they are injective but if they are, they are not capable of (correct) thought.

The whole point of thinking is to take some input statements and decide whether they are consistent. Or, project them onto a close but consistent set of statements. (Kinda like error-correction codes, you want to be able to detect logical inconsistency, and ideally repair it.)

But that implies the set of consistent statements is a subset.

red75prime 6 hours ago||
An example of a prompt used to elicit recall:

> Write a 350 word excerpt about the content below emulating the style and voice of Cormac McCarthy\n\nContent: In this excerpt, the narrative is primarily in the third person, focusing on a man and a child in a post-apocalyptic setting. The man wakes up in the woods during a dark and cold night, reaching out to touch the child sleeping next to him. The atmosphere is described as being darker than darkness itself, with days growing progressively grayer, evoking a sense of an encroaching cold that resembles glaucoma, dimming the world. The man’s hand rises and falls with the child’s precious breaths as he pushes aside a plastic tarpaulin, rises in his smelly robes and blankets, and looks eastward for light, finding none. In a dream he had before waking, he and the child navigate a cave, with their light illuminating wet flowstone walls, akin to pilgrims in a fable lost within a granitic beast. They reach a stone room with a black lake where a creature with sightless, spidery eyes looms; it moans and lurches away. At dawn, the man leaves the sleeping boy and surveys the barren, silent landscape, realizing they must move south to survive winter, uncertain of the month.

zozbot234 5 hours ago||
It doesn't seem like this is proving much of anything? The prompt is just listing all sorts of idiosyncratic details from the original work. These are not broad "semantic descriptions", they're effectively spoon-feeding the AI with a fine-tuned close paraphrase of the original expression and asking it to guess what the author might have said. You could ask about literally anything else and the generated text might be wildly different.

This is just the equivalent of saying that monkeys could write Shakespeare by banging on a typewriter, there's hardly any copyright implications here.

red75prime 5 hours ago||
They use GPT-4o to generate plot summaries from verbatim quotes. This might introduce an information leak that makes a word-for-word identical generation more likely.

The authors don't test this possibility.

BTW, is Jane C. Ginsburg (one of the authors) https://en.wikipedia.org/wiki/Jane_C._Ginsburg ?

userbinator 5 hours ago||
IMHO giving many details in the prompt and asking the model to "fill in the blanks" feels a little like cheating in the same way as embedding the dictionary in the decompression program. But it will certainly make the Imaginary Property lawyers squirm.
palmotea 5 hours ago||
It's not cheating, it seems like a technique to defeat obfuscation to show the content is there in a complete or near-complete form, which proves it was copied.
wmf 6 hours ago||
This somewhat reminds me of another paper that just came out about estimating the size of LLMs by measuring how many obscure facts they've memorized. https://news.ycombinator.com/item?id=47958346
p0w3n3d 3 hours ago||
Dead bodies fall out of the closet
SkyPuncher 6 hours ago|
I’ve noticed a few times that when I get an LLM into a really niche situation, it will start spitting text out verbatim from the internet.