Posted by spankibalt 4 days ago
(Copied from a comment of mine written more than three years ago: <https://news.ycombinator.com/item?id=33582047>)
Learning from copyrighted content is legal - for both humans and AI. If Meta is in hot water for anything, it's piracy and/or storage of copyrighted material.
I know there's a complaint that AI can verbatim repeat that work. But so can human savants. No one is suing human savants for reading their books.
Reproducing copyrighted material is infringement, of course. Training on copyrighted material... I just don't see it.
EDIT: Making a perfectly valid point, but it's unpopular, so down I go.
A machine training on all the copyrighted material in the world, for commercial purposes and at industrial scale, is disproportionate.
If a company hired hundreds of savants, then it would be illegal for them to read books?
I don't follow.
And even if we grant that those savants are also very skilled at creating "market substitutes" based on their training that are capable of competing with the original works, their maximum creative output would only be a relatively small number of new works, because they can only work at human speed.
Can you cite something in the copyright laws themselves that suggests this scale distinction?
The one million savants are humans, not machines. Humans get more rights automatically in our world today. That's the moral reason for why your example is not the same. The legal stuff will be worked out in the courts and legislatures of every country in the next 5 years.
This principle is quite universal and can be found in many places, including the US Constitution, US (Supreme) Court decisions, many international jurisdictions, and treaties and conventions.
I don't understand why it should be allowed for one savant to study and answer questions about one book, but wrong for a company to hire one million savants to answer questions about one million books.
And I'm asking where in the law or case law this is supported.
Sarah Silverman is the most prominent example.
The AI won't even know where the page of text it's seeing came from, and people will avoid your book as they can just ask the AI. So you make less money. (Talking about specialized technical books here.)
Suppose they did, and some guy was filling stadiums regularly to hear him recite an entire audio book. That would probably get the attention of someone's lawyers.
If it's illegal for AIs it should be illegal for humans, too. Is that really what you're arguing? It should be illegal for savants to read books?
Read a book, that's fine. Write a book, that's fine. Read a book and then write a book that is 99.9% the same as the book that you read and sell it for profit without a license from the original author, that's infringement.
That's what all these lawsuits are about - it's the training, not the reproduction. I already agreed in my first comment that the reproduction is off limits.
In this case, it appears that Meta torrented illegal copies of the work to do the training. Obviously that's bad. But conflating that with training itself doesn't follow.
Pirating content is illegal, regardless of whether it is to train an LLM.
Usage of LLMs trained on unlicensed content (basically all of them) might or might not be illegal.
Using any method to reproduce a copyrighted work by using that original as input in a way that supplants the market value of the original is probably illegal.
At least that is my rudimentary understanding.
I don't think anyone thinks that all training is a copyright violation if all the training data is licensed. For example, an LLM trained on CC0 content would be fine with basically everyone.
The problem is that training happens on data that is not licensed for that use. Some of that data is also pirated, which makes it even clearer that it is illegal.
If you supplant the value of the original with the original as input then you probably have some legal questions to answer.
It's a "rules for thee and not for me" argument.
The distinction isn't particularly clear-cut with an open-source model. If it is able to reproduce copyright-protected work with high fidelity, such that the works produced would be derivative, that's like trying to get around laws against distribution of protected works by handing them to you in a zip file.
It's a kind of copyright washing to hand you the data as a binary blob and an algorithm to extract them out of it. That wouldn't really fly with any other technology.
And that's really where a lot of the value is, mind you: these models are best thought of as lossily compressed versions of their input data. Otherwise Facebook ought to be perfectly fine training them on public domain data.
That seems very possible to me, and undermines the "training is copyright violation" argument. It's not the training, it's the output.
If you’re struggling to comprehend that a person reading a book is different, then you’re a bad bot.
How about we then grant AI all other rights too, for example, the right to vote? (sarcasm)
Just from a rational argumentation point of view. Clearly if a law is written saying as much, then sure. But there is no such copyright law like that yet.
Correct. Because until very recently there was no need.
Yes it's very different. Humans need to eat, sleep, and pay taxes. You also have to pay them competitive wages.
There's nothing in the law to support your argument either. The law, however, does say, very unambiguously, that copying without permission isn't allowed. There aren't exceptions for "training" just because it's superficially similar to a human activity (reading a book). A human isn't allowed to hand-copy Harry Potter, even if they bought all the Harry Potter books.
> "81.7TB"
https://en.wikipedia.org/wiki/United_States_v._Swartz
> "approximately 70 gigabytes"
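For scale, the ratio between the two figures quoted above can be computed directly. This is a minimal sketch, assuming decimal units (1 TB = 1000 GB) and treating the quoted numbers as exact:

```python
# Compare the two quoted data volumes (decimal units assumed: 1 TB = 1000 GB).
# Treating "81.7TB" and "approximately 70 gigabytes" as exact figures
# is an assumption for illustration only.
meta_torrent_gb = 81.7 * 1000   # "81.7TB" attributed to Meta
swartz_download_gb = 70         # "approximately 70 gigabytes" in U.S. v. Swartz

ratio = meta_torrent_gb / swartz_download_gb
print(f"roughly {ratio:.0f}x larger")  # roughly 1167x larger
```

In other words, the volume at issue here is over a thousand times what Swartz was prosecuted for.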