> Any modified versions, derivative works, or software that incorporates any portion of this Software must be released under this same license (HOPL) or a compatible license that maintains equivalent or stronger human-only restrictions.
That’s not what copyleft means; that’s just a share-alike provision. A copyleft provision would require you to share the source code, which would be beautiful, but it looks like the author misunderstood…
> A copyleft provision would require you to share the source code, which would be beautiful, but it looks like the author misunderstood…
This license doesn't require the original author to provide source code in the first place. But then, neither does MIT, AFAICT.
But also AFAICT, this is not even a conforming open-source license, and the author's goals are incompatible with the Open Source Definition.
> ...by natural human persons exercising meaningful creative judgment and control, without the involvement of artificial intelligence systems, machine learning models, or autonomous agents at any point in the chain of use.
> Specifically prohibited uses include, but are not limited to: ...
From the OSI definition:
> 6. No Discrimination Against Fields of Endeavor
> The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
Linux distros aren't going to package things like this because it would be a nightmare even for end users trying to run local models for personal use.
Is it valid? I’m not really convinced. I’m not particularly a fan of copyright to begin with, and this looks like yet another abuse of it. I consider myself a creative person, and I fundamentally do not believe it is ethical to try to prevent people from employing tools to manipulate the creative works one gives to them.
Open Source licenses give license to the rights held exclusively by the author/copyright-holder: making copies, making derivative works, distribution.
An open source license guarantees others who get the software are able to make copies and derivatives and distribute them under the same terms.
This license seeks to claim an additional right, the right to control who uses the software, and offers nothing in exchange.
IANAL, but I think it would need to be a contract, with consideration and evidence of acceptance and all that, to gain additional rights. Just printing terms in a copyright license won't cut it.
To the best of my (admittedly limited) knowledge, no court has yet denied the long-standing presumption that, because a program needs to be copied into memory to be used, a license is required.
This is, AFAIK, the basis for non-SaaS software EULAs. If there were no legal barrier to you using software that you had purchased, the company would have no grounds upon which to predicate further restrictions.
This was specifically validated by the 9th Circuit in 1993 (and implicitly endorsed by Congress, which subsequently adopted a narrow exception for software that is copied into memory automatically when a computer is turned on in the course of computer repair).
There is no legal barrier to using a legit copy of software. That is why software companies try to force you to agree to a contract limiting your rights.
How can you have a legitimate copy of software without a license, assuming that the software requires you to have a license? You are simply using circular reasoning.
You can because someone bought a physical copy, and then exercised their rights under the first sale doctrine to resell the physical copy. (With sales on physical media being less common, it’s harder to get a legitimate copy of software without a license than it used to be.)
Copying is a legal barrier, and copying into memory is inherently necessary to use software. (Of course, in some cases, copying may be fair use.)
> If I have a legitimate copy of the software I can use it,
If you can find a method to use it without exercising one of the exclusive rights in copyright, like copying, sure, or if that exercise falls into one of the exceptions to copyright protection like fair use, also sure, otherwise, no.
> Just like I don't need a license to read a book.
You can read a book without copying it.
> Copying is a legal barrier, and copying into memory is inherently necessary to use software. (Of course, in some cases, copying may be fair use.)
Has this interpretation actually been upheld by any courts? It feels like a stretch to me.
That copying into RAM, including specifically in the context of running software, is included in the exclusive right of copying reserved to the copyright holder except as licensed by them? Yes. The main case I am familiar with is MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511 (9th Cir. 1993) [0]. Note that for the specific context of that case (software that is run automatically when activating a computer in the course of maintenance or repair of that computer), Congress adopted a narrow exception after this case, codified at 17 USC § 117(c) [1], but that validates that in the general case, copying into RAM is a use of the exclusive rights in copyright.
[0] https://en.wikipedia.org/wiki/MAI_Systems_Corp._v._Peak_Comp....
> it is not an infringement for the owner of a copy of a computer program to make or authorize the making of another copy or adaptation of that computer program provided:
> (1) that such a new copy or adaptation is created as an essential step in the utilization of the computer program in conjunction with a machine and that it is used in no other manner
i.e. the owner of a copy of a computer program has the right to make more copies if necessary to use it (e.g. copy-to-RAM, copy to CPU cache) as long as they don't use those additional copies for any other purpose. That same section also gives you the right to make backups as long as you destroy them when giving up ownership of the original.
Let's assume it's a really short book – say a poem – and by reading it, I accidentally memorized it. Have I now violated copyright?
I think something does not add up with this logic.
You have not violated copyright because things in your memory are not copies. US copyright law defines copies as material objects in which a work is fixed and from which it can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.
Your brain is a material object, but the memorized book in your brain cannot be perceived, reproduced, or otherwise communicated directly (I can't look at your brain and "read" the book), and we do not have any machines or devices that can extract a copy of it.
> I am not a legal expert, so if you are, I would welcome your suggestions for improvements
> I'm a computer engineer based in Brussels, with a background in computer graphics, webtech and AI
Particularly when they've already established that they don't care about infringing standard copyright.
It has been abundantly clear that AI companies can train however they want, and nobody will enforce anything.
Realistically speaking, even if you could prove someone misused your software as per this license, I don't expect anything to happen. Sad but true.
At this point, I don't care about licensing my code anymore, I just want the option to block it from being accessed from the US, and force its access through a country where proper litigation is possible.
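For what it's worth, geo-blocking like that is technically straightforward even if its legal effect is unclear. A minimal sketch, assuming the MaxMind geoip2 Python library and a local GeoLite2-Country.mmdb database (both assumptions for illustration, not anything the commenter specified):

    # Hedged sketch only: a tiny HTTP server that refuses to serve the code
    # to requests geolocated to a blocked country. The geoip2 library, the
    # GeoLite2-Country.mmdb path, and the port are all assumptions.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import geoip2.database  # pip install geoip2
    import geoip2.errors

    BLOCKED_COUNTRIES = {"US"}  # jurisdictions to exclude
    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

    class GeoBlockingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            ip = self.client_address[0]
            try:
                country = reader.country(ip).country.iso_code
            except geoip2.errors.AddressNotFoundError:
                country = None  # private/unknown addresses: not blocked here
            if country in BLOCKED_COUNTRIES:
                # 451 Unavailable For Legal Reasons
                self.send_error(451, "Unavailable For Legal Reasons")
                return
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"the source tarball would be served here\n")

    if __name__ == "__main__":
        HTTPServer(("", 8080), GeoBlockingHandler).serve_forever()

Of course this only moves the question: a determined crawler can route around it with a proxy, which is part of why the comment frames this as a litigation-venue problem rather than a technical one.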
The copyright lobby wrote the EU's AI Act, which forces them to publish the list of copyrighted works used as training data. This is an entry point for then asking them for money.
It's actually very useful for bots to crawl the public web, provided they are respectful of resource usage - which, until recently, most bots have been (a sketch of what "respectful" looks like in practice follows below).
The problem is that shysters, motivated by the firehose of money pointed at anything "AI", have started massively abusing the public web. They may or may not make money, but either way, everyone else loses. They're just ignoring the social contract.
What we need is collective action to block these shitheads from the web entirely, like we block spammers and viruses.
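For context on the first point: being "respectful of resource usage" mostly means honoring robots.txt and throttling requests, both of which are purely voluntary. A minimal sketch using only the Python standard library (the bot name and default delay are made up for illustration):

    import time
    import urllib.request
    import urllib.robotparser

    USER_AGENT = "ExampleResearchBot/0.1"  # hypothetical bot name
    DEFAULT_DELAY = 5  # seconds between requests if the site doesn't specify one

    def fetch_politely(robots_url, urls):
        # One request for robots.txt up front, then obey what it says.
        rp = urllib.robotparser.RobotFileParser(robots_url)
        rp.read()
        delay = rp.crawl_delay(USER_AGENT) or DEFAULT_DELAY
        pages = []
        for url in urls:
            if not rp.can_fetch(USER_AGENT, url):
                continue  # the site asked bots to stay out of this path
            req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
            with urllib.request.urlopen(req) as resp:
                pages.append(resp.read())
            time.sleep(delay)  # throttle instead of hammering the server
        return pages

Nothing in the stack enforces any of this; a scraper that skips the can_fetch check and the sleep works just as well, which is exactly the abuse being described.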
> Any contract term is void to the extent that it purports, directly or indirectly, to exclude or restrict any permitted use under any provision in
> [...]
> Division 8 (computational data analysis)
I don't know how you can post something publicly on the internet and say "this is for X; Y isn't allowed to view it." I don't think there's any kind of AI crawler that's savvy enough to know that it has to find the license before it ingests a page (a sketch of what that would even require is below).
Personally, beyond reasonable copyrights, I don't think anyone has the right to dictate how information is consumed once it is available in an unrestricted way.
At a minimum anything released under HOPL would need a click-through license, and even that might be wishful thinking.
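To make the "find the license before it ingests a page" point concrete: that would mean, at minimum, parsing the HTML for something like a rel="license" link and then fetching and interpreting whatever it points to. A rough sketch of just the discovery step, using only the standard library (purely illustrative; no real crawler is obliged to do any of this):

    from html.parser import HTMLParser

    class LicenseLinkFinder(HTMLParser):
        """Collects href targets of <link rel="license"> and <a rel="license">."""

        def __init__(self):
            super().__init__()
            self.license_urls = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            rel_values = (attrs.get("rel") or "").split()
            if tag in ("link", "a") and "license" in rel_values and attrs.get("href"):
                self.license_urls.append(attrs["href"])

    def declared_licenses(html_text):
        finder = LicenseLinkFinder()
        finder.feed(html_text)
        return finder.license_urls

Even a crawler that did this would only learn that some license exists; whether its terms bind a bot that never clicked "I agree" is the legal question the rest of the thread is arguing about.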
> The 9th Circuit ruled that hiQ had the right to do web scraping.
> However, the Supreme Court, based on its Van Buren v. United States decision, vacated the decision and remanded the case for further review [...] In November 2022 the U.S. District Court for the Northern District of California ruled that hiQ had breached LinkedIn's User Agreement and a settlement agreement was reached between the two parties.
So you can scrape public info, but if there's some "user agreement" you can be expected to have seen, you may be in breach of it. Even then, the remedies available to the scrapee don't include "company XYZ must stop scraping me", as that might give them unfair control over who can access public information.
But EU jurisdictions? I'm quite curious where this will go. Europe is much more keen to protect natural persons' rights against corporate interests in the digital sphere, particularly since it has much less to lose, the EU digital economy being much weaker.
I could imagine the ECJ ruling on something like this quite positively.
How strong is that, really? Would it really be that catastrophic to return all business processes to how they were in, say, 2022?
Yeah, imagine shutting down all the basic research that has driven the economy for the last 75 years, in a matter of months. Crazy. Nobody would do that.
And what about jobs lost (or never created) due to AI itself?
Would not Google/Amazon/Meta have continued to advance their product lines and make new products, even if not in AI? Would not other new non-AI companies have been created?
I'm not convinced the only two options are "everything as it is right now" or "the entire economy collapses".
https://www.theatlantic.com/economy/archive/2025/09/ai-bubbl...
Assuming a standard website without a signup wall, this seems like a legally dubious assertion to me.
At what point did the AI bot accept those terms and conditions, exactly? As a non-natural person, is it even able to accept?
If you're claiming that the natural person responsible for the bot is responsible, at what point did you notify them about your terms and conditions and give them the opportunity to accept or decline?
It's a different situation if the website is gated with an explicit T&C acceptance step, of course.
Evidence?
Wikipedia says the opposite:
https://en.wikipedia.org/wiki/Robots.txt#Compliance
> e.g. hiQ Labs v. LinkedIn
The Wikipedia page for this case mentions only legal means, not technical means and does not mention robots.txt at all. As far as I can tell, robots.txt wasn't really relevant to the ruling in that case.
It's not a settled area of law, but that seems to be the current position.
¯\_(ツ)_/¯
And that is the point -- the HOPL asserts that you can just put a robots.txt on your website and say that it means bots accepted the terms in that file. In reality, that's a dubious claim.