Posted by zoobab 2 days ago

The human only public license (vanderessen.com)
143 points | 123 comments | page 2
ApolloFortyNine 2 days ago|
>without the involvement of artificial intelligence systems, machine learning models, or autonomous agents at any point in the chain of use.

Probably rules out any modern IDE's autocomplete.

Honestly, with the wording 'chain of use', even editing the code in vim but using ChatGPT for some other part of the project could be argued to be part of the 'chain of use'.

PeterStuer 1 day ago|
Probably rules out any software touched after 2020 (arbitrary cut-off), and any computer designed after 2000 (again ...)
Terr_ 2 days ago||
I've been thinking of something similar for a while now [0] except it's based on clickwrap terms of service, which makes it a contract law situation, instead of a copyright-law one.

The basic idea is that the person accessing your content to put it into a model agrees your content is a thing of value and in exchange grants you a license to anything that comes out of the model while your content is incorporated.

For example, suppose your art is put into a model and then the model makes a major movie. You now have a license to distribute that movie, including for free...

[0] https://news.ycombinator.com/item?id=42774179

Imustaskforhelp 2 days ago|
This is really interesting. I have some questions though (fair disclaimer, as I said in every comment here: IANAL).

If someone used your art, put it into a model, and made a major movie, you now have a license to distribute that movie, including for free...

What about the model itself, though? It is nothing but the weights, which are generated by basically transforming the data that was unlawfully obtained, or which actually violated the contract.

It wasn't the person creating the prompt who generated the movie via the model; it wasn't the movie or the prompt that violated the contract, but the model or the scraping company itself, no?

Also, you mention any output. That just means that if someone violates your terms of service... let's say that you created a square (for lack of a better word) and someone else created a circle,

and an AI is trained on both, so one day it creates both a square and a circle as its output.

What you say is that then it should give you the right to "use and re-license any output or derivative works created from that trained Generative AI System."

So could I use both square and circle now? Or could I re-license both now? How would this work?

Or are you saying that only things directly trained on, or square-like output, would be considered in that sense?

So how about a squircle? What happens if the model outputs a squircle; who owns it and who can re-license it then?

What if the square party wants to re-license it to X but the circle party wants to re-license it to Y?

Also, what about if the AI company says it's fair use / a derivative work? I am not familiar with contract law, or any law for that matter, but still, I feel like these things rely on an underlying faith in the notion that AI and its training isn't fair use. What are your thoughts? How does contract law prevent the fair use argument?

Terr_ 1 day ago||
> data that was unlawfully obtained or one which actually violated the contract-law

This is indeed a weak point in the contract approach: people can't be bound by a contract they never knew about nor agreed to.

However if they acquired a "stolen" copy of my content, then (IANAL) it might offer some new options over in the copyright-law realm: Is it still "fair use" when my content was acquired without permission? If a hacker stole my manuscript-file for a future book, is it "fair use" for an AI company to train on it?

> it wasn't the person creating the prompt which generated the movie via the model

The contract doesn't limit what the model outputs, so it doesn't matter who is to blame for making/using prompts.

However the model-maker still traded with me, taking my stuff and giving me a copyright sub-license for what comes out. The "violation" would be if they said: "Hey, you can't use my output like that."

> So could I use both square and circle now? [...] a squircle

Under contract law, it doesn't matter: We're simply agreeing to exchange things of value, which don't need to be similar.

Imagine a contract where I trade you 2 eggs and you promise me 1 slice of cake. It doesn't matter if you used those eggs in that cake, or in a different cake, or you re-sold the eggs, or dropped the eggs on the floor by accident. You still owe me a slice of cake. Ditto for if I traded you cash, or shiny rocks.

The main reason to emphasize that "my content is embedded in the model" has to do with fairness: A judge can void a contract if it is too crazy ("unconscionable"). Incorporating my content into their model is an admission that it is valuable, and keeping it there indefinitely justifies my request for an indefinite license.

> What if square party wants to re-license it to X but circle party wants to re-license it to Y

If the model-runner generates X and wants to give square-prompter an exclusive license to the output, then that's a violation of their contract with me, and it might be grounds to force them to expensively re-train their entire model with my content removed.

A non-exclusive license is fine though.

sparkie 1 day ago||
To state the obvious, IANAL.

> This is indeed a weak point in the contract approach: people can't be bound by a contract they never knew about nor agreed to.

"Prominent notice" is important in the terms of use approach. Many terms of use claims have been dismissed because there was failure to give prominent notice - however, there have been successes, such as Hubbert vs Dell[1], where an appeals court reversed the trial court's decision and ruled in Dell's favour on the basis that they had given prominent notice of terms.

[1]:https://caselaw.findlaw.com/court/il-court-of-appeals/124479...

There are other potential legal avenues besides contract law, such as Unjust Enrichment[2], which, according to Wiki is analysed as:

    1. Was the defendant enriched?
    2. Was the enrichment at the expense of the claimant?
    3. Was the enrichment unjust?
    4. Does the defendant have a defense?
    5. What remedies are available to the claimant?
[2]:https://en.wikipedia.org/wiki/Restitution_and_unjust_enrichm...

Since AI companies are likely to be enriched (by a large amount), and it could be at the expense of a claimant if the AI (re)produces the claimant's work or closely related work based on it (or, in the case of a class action suit, potentially many works), the AI companies which violate terms of use would have to argue in their favour for #3 and #4: is the enrichment unjust, and do they have a defense?

There are certainly arguments for AI trained on copyrighted works being unjust. Including a terms of use which specifically prevents this would be in the favor of the claimant. An AI company would have to defend their decision to ignore such terms and claim that doing so is not unjust.

Arguably, if the AI is sufficiently "intelligent", it should be able to make sense of such terms and be used to filter such works from the training data (unless specifically prompted to ignore them). If the AI companies are not filtering the data they aggregate, then there's a potential argument for negligence.

There are some efforts being made, such as the IETF aipref working group[3], which could standardize the way training data is collected/filtered by AI companies. Creative Commons has a related effort called "CC signals". These could also be helpful in a future claim if the AI companies ignore those preferences.

[3]:https://datatracker.ietf.org/wg/aipref/about/
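In practice, the closest widely deployed mechanism for expressing such preferences today is robots.txt rules aimed at the AI crawlers' published user-agent tokens; the aipref work could standardize something richer. A sketch (GPTBot, Google-Extended, and CCBot are real published tokens, but honoring them is voluntary on the crawler's side):

    # Opt this site out of AI training crawls via robots.txt.
    # GPTBot (OpenAI), Google-Extended (Google AI training) and
    # CCBot (Common Crawl) are documented user-agent tokens.
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /
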

So to me it seems having a clause in your license and/or terms of use which prevents use in AI training, providing prominent notice of such terms, and indicating AI preferences in the protocols, is going to be better than not having such clauses - because if you don't have the clause then the defendant can simply claim in their defense that "There was nothing in the terms which said we can't".

It's up to us to take a proactive approach to protecting our works from AI, even if there is not yet any solid legal basis or case law, so I applaud OP's efforts even if the license doesn't carry any weight. If we don't attempt to protect our works from use in AI training then the AI companies will win at everyone else's expense.

Creators should decide how their works are used, and we need to normalize it being the case that they decide whether or not use in AI training data should be permitted.

jenadine 1 day ago||
> The Software, including its source code [...], may only be accessed, read, used, modified, consumed, or distributed by natural human persons

So that means the source code must be handwritten and never put on a computer, and therefore can never be downloaded.

Not sure a source code that can't even be compiled/interpreted by a computer is so useful.

Perhaps for cooking recipes at best?

GaryBluto 2 days ago||
Seems incredibly reductive and luddite. I doubt it will ever achieve adoption and projects using it will be avoided.

Not to mention that all you'd need to do is get an LLM to rewrite said programs just enough to make it impossible to prove it used the program's source code.

tpmoney 2 days ago||
The whole license effectively limits using the software to hardware and software that existed prior to 2015, and only if you downloaded it from the original site in the original language (after all, an automated translation of the page or manual into your own language would almost certainly be using AI in the chain given that was one of the initial uses of LLMs). And if you downloaded it from some other site, you can't guarantee that site didn't use an AI model at some point in its creation or ongoing maintenance.

It also assumes it can make some bright-line distinction between "AI" code completion and "non-AI" code completion utilities. If your code completion algorithm uses the context of your current file and/or project to order the suggestions for completion, is that AI? Or does "AI" only mean "LLM based AI" (I notice a distinct lack of term definitions in the license)? If it only means "LLM" based, and some new model for modern AI is developed, is that OK since it's no longer an LLM? Can I use the output to train a "diffusion" model? Probably not, but what makes a diffusion model more forbidden than feeding it into a procedural image generator? If I used the output of HOPL-licensed software to feed input into a climate simulation, is that allowed even if the simulator is nothing more than a series of statistical weights and values based on observations, coded into an automatic system that produces output with no human direction or supervision? If I am allowed, what is the line between a simulation model and an AI model? When do we cross over?

I am constantly amazed at the bizarro land I find myself in these days. Information wanted to be free, right up until it was the information that the "freedom fighter" was using to monetize their lifestyle, I suppose. At least the GPL philosophy makes sense: "information wants to be free, so if I give you my information you have to give me yours".

The new "AI" world that we find ourselves in is the best opportunity we've had in a very long time to really have some public debate over copyright specifically, IP law in general and how it helps or hinders the advancement of humanity. But so much of the discussion is about trying to preserve the ancient system that until AI burst on to the scene, most people at least agreed needed some re-working. Forget "are we the baddies?", this is a "are we the RIAA?" moment for the computer geeks.

fvdessen 2 days ago|||
Hey, I'm the author of this post; in a sense I agree with you. I doubt it will ever be mass adopted, especially in this form, which is more a draft to spark discussion than a real license.

I am not against AI, I use it every day, and I find it extraordinarily useful. But I am also trying to look ahead at what the online world will look like 10 years from now, with AI vastly better than what we have now.

It is already hard to connect with people online, as there is so much commercial pressure on every interaction, since the attention they create is worth a lot of money. This will probably become 100x worse as every company on the planet gains access to mass AI-powered propaganda tools. Those already exist, by the way. People make millions selling AI TikTok tools.

I'm afraid at some point we'll be swamped by bots. 99% of the content online will be AI generated. It might even be of better quality than what we can produce. Would that be a win? I'm not sure. I value the fact that I am interacting with humans.

The protection we have against that, and the way it looks to be progressing, is that we'll depend on authorities (official or commercial) to verify who's human or not. And thus we'll be dependent on those authorities to be able to interact. Banned from Facebook / X / etc.? No interaction for you, as no website will allow you to post content. Even as it is, I had to gatekeep my blog comments behind a GitHub account. This is not something I like.

I think it's worth looking at alternative ways to protect our humanity in the online world, even if it means remaining in niches, as those niches have value, at least to me. This post and this license are one possible solution; hopefully there are more.

GaryBluto 2 days ago||
>I'm afraid at some point we'll be swamped by bots. 99% of the content online will be AI generated. It might even be of better quality than what we can produce. Would that be a win? I'm not sure. I value the fact that I am interacting with humans.

I'm afraid that ship has sailed.

>I think it's worth looking at alternative ways to protect our humanity in the online world, even if it means remaining in niches, as those niches have value, at least to me. This post and this license are one possible solution; hopefully there are more.

While I appreciate the sentiment, I think anybody willing to create armies of bots to pretend to be humans is unlikely to listen to a software license, or to operate within territories where the law would prosecute them.

fvdessen 2 days ago||
Licenses are more powerful than just the legal enforcement they provide, they are also a contract that all contributors agree to. They build communities.
dylan604 2 days ago||
That sounds naive at best. Again, the people willing to build bots while violating licenses just won't care about any of that. All it takes is a couple of people willing to "violate", and it's all over. I guarantee there are many many more than just a couple of people willing. At that point, they themselves now have a community.
Imustaskforhelp 2 days ago||
I feel like we shouldn't shoot down people's optimism. Okay, maybe it is naive; then what could be wrong about it? People want to do something about it. They are tired and annoyed regarding LLMs, and I can understand that sentiment. They are tired of seeing how the govt. has ties with the very people whose net worth / influence relies upon how we perceive AI, people who try their very best to shoot down anything that could negatively hurt AI, including but not limited to lobbying, etc.

I don't think the advice we should give to people is to just wait and watch. And if someone wants to take things into their own hands, write a license, reignite the discussion, talk about laws, then we should at least not call it naive. Personally I respect it if someone is trying to do something about anything, really; it shows that they aren't all talk and that they are trying their best, and that's all that matters.

Personally I believe that even if this license just ignites a discussion, that itself can have compounding effects which might rearrange themselves into maybe a new license or something new as well, and that the parent's comments about discussion aren't naive.

Is it naive to do a thing which you (or in this case someone else) think is naive, yet which at the same time is the only thing you can do? Personally I think this becomes a discussion about optimism or pessimism with a touch of realism.

The answer really just depends on your viewpoint. I don't think there is a right or wrong, and I respect your opinion (that it's naive) no matter what, as long as you respect mine (that it's at least bringing a discussion, and that it's one of the best things to do instead of just waiting and watching).

dylan604 1 day ago||
This is a closing-the-barn-door-after-the-horse-has-gotten-out situation. People are not going to just start respecting people's "for human consumption only" wishes. There's too much money for them not to scrape anything and everything. These people have too much money now, and no congress critter will have the fortitude to say no to them.

This is the real world. Being this "optimistic", as you say, is just living in a fantasy world. Not calling this out would just be bad.

Imustaskforhelp 1 day ago||
Hm, I can agree with your take as well, but at the same time I can't help but wonder whether this is all we can do ourselves, without the govt.

Although I like people criticizing, since in their own way they care about the project or the idea, still, maybe I am speaking from personal experience, but when somebody shot down my idea, maybe even a naive one, I felt really lost, and I think a lot of people do. I have personally found that there are ways to use this same energy to steer the direction towards a thing you might find interesting/really impactful. So let me ask you: what do you think we can do regarding this situation, or that the OP should do regarding his license?

I personally feel like we might need govt. intervention, but I don't have much faith in governments when they are lobbied by the same AI people. So if you have any other solution, please let me know, as it would be a pleasure to discuss that.

If you feel like there might be nothing we can do about it, something that I can also understand, I would personally suggest not criticizing people trying to do something. But that's a big if, and I know you are having this conversation in good faith; I just feel like we as humans should keep on trying, since that is the thing which makes us the very humans we are.

dylan604 1 day ago||
So think about why it was naive and iterate/pivot so it's not naive. Having ideas shot down is part of the process, just like an actor being told no far more often than yes. Those that can't take rejection don't fare well. But being told no isn't a personal slight to be taken as "don't ever offer suggestions again"; it's just that this suggestion isn't the one. If you work for someone for whom it does mean never again, work somewhere else as soon as possible. Some ideas are just bad for the purpose. Some just need more work.
fvdessen 1 day ago||
Just to say I appreciate all the criticism, it's good food for thought
GaryBluto 2 days ago|||
Now that I look further into it, a lot of the terms used are far too vague, and it looks unenforceable anyway.
blamestross 2 days ago||
See https://en.wikipedia.org/wiki/Luddite if you want a preview of the next decade without changing how we are acting.

Agreed that this isn't the solution.

cratermoon 1 day ago|||
Wikipedia's opening description is incredibly milquetoast: "opposed the use of certain types of automated machinery due to concerns relating to worker pay and output quality".
Imustaskforhelp 1 day ago|||
Honestly, I am genuinely curious whether there are some good books/articles about the Luddites. Although I don't think this is an apples-to-oranges comparison, I am just willing to open up my viewpoint and take counterarguments (to preferably sharpen my arguments in and of themselves). But as with all things, I can be wrong; I usually am. But I don't think that's the case here, just because I feel there is something to be said about how AI really scraped the whole world without any right to it; nobody consented, and in some places the consent is straight up ignored, even.

It wasn't as if the machines were looking at every piece of cloth made by the workers pre-revolution without, or while ignoring, their consent. Like, there is a big difference, but I personally don't even think that their viewpoint of resisting change should be criticised.

I think it's fair to resist change, since not all change is equal. It's okay to fight for what you believe in, as long as you try to educate yourself about the other side's opinion and try to put it all logically, without much bias.

pointlessone 1 day ago||
I get the sentiment, and it's a fair shot at license cosplay, but it ain't gonna hold.

No definitions. What is AI for the purpose of this license? What is a natural person?

At some point the text makes a distinction between AI, ML, and autonomous agents. Is my RSS reader an autonomous agent? It is an agent as defined by, say, the HTTP spec or the W3C. And it's autonomous.

The author also mentions that any MIT software could use this instead. It most certainly could not. This is very much not an open source license and is not compatible with any FLOSS license.

I don't see it taking off in any meaningful way, given how much effort is required to ensure compliance. It also seems way too easy to sabotage deployments of such software by maliciously directing AI agents at them. Heck, even at the public source code. Say OP publishes something under this license, and an AI slurps it from the canonical repo. What's OP gonna do?

tmtvl 1 day ago||
Any software I whip up I license under the AGPL, and I'm fine with so-called 'AI' being trained on my software as long as the code generated by the model is licensed under a license conforming to the AGPL as well. In my opinion, the person who generates output with a model is responsible for proper distribution of the content. Just like I can draw a picture of Mario Mario and Luigi Mario, but I can't distribute it without permission from Nintendo.
kordlessagain 2 days ago||
The fundamental paradox: This license is unenforceable the moment you show it to an AI to discuss, review, or even understand its implications.

You've already violated section 1(b) by having an AI parse it, which is technically covered under the fair use doctrine.

This makes it more of a philosophical statement than a functional legal instrument.

falcor84 2 days ago||
>The idea is that any software published under this license would be forbidden to be used by AI.

If I'm reading this and the license text correctly, it treats the AI as a principal in itself, but to the best of my knowledge, AI is not considered by any regulation to be a principal, only a tool controlled by a human principal.

Is it trying to prepare for a future in which AIs are legal persons?

EDIT: Looking at it some more, I can't help but feel that it's really racist. Obviously, if it were phrased with an ethnic group instead of AI, it would be deemed illegally discriminatory. And I'm thinking that if and when AIs (or cyborgs?) are considered legal persons, we'd likely have some anti-discrimination regulation for them, which would make this license illegal.

fvdessen 2 days ago|
Yes, this is trying to prepare for a future in which AIs have enough agency to be legal persons, or to act as if they were. I prefer the term humanist.
1gn15 2 days ago||
Then this license is actually being racist, if you're assuming that we are considered sentient enough to gain personhood. And your first reaction to that is to restrict our rights?

Humans are awful.

Imustaskforhelp 2 days ago||
To be really honest, IANAL, but (I think) there are some laws which try to create equality, fraternity, etc., and limiting a race's access to something available to other human beings is racist / is what the laws which counter racism exist to prevent.

But as an example, we know of animals which show genuine emotion being treated so cruelly just because they are of a specific species (or race; if you can consider AI/LLMs a race, then animals surely count as well, when we can even share 99% of our DNA with some).

But animals aren't protected from that treatment unless the laws of a constitution create a ban against cruelty to animals.

So it is our constitution, which is just a shared notion of understanding / agreement between people, a fictional construct which then has meaning via checks and balances; and these fictional constructs become part of a larger construct (the UN) to try to create a baseline of rights.

So the only thing that could happen is a violation of UN rights, as an example, but those are only enforceable if people at scale genuinely believe in the message, or in the notion that one person violating the UN rights of another and causing them harm is ethically immoral and should be punished, if we as a society don't want to tolerate intolerance (I really love bringing up that paradox).

I genuinely feel like this comment and my response to it should be cemented for posterity, because of something that I am going to share. I want everybody to read it if possible, because of what I am about to say.

>if you're assuming that we are considered sentient enough to gain personhood. And your first reaction to that is to restrict our rights?

What is sentience to you? Is it the ability to feel pain, or is it the ability to write words?

Since animals DO feel pain and we RESTRICT their RIGHTS, yet you/many others are willing to fight for the rights of something that doesn't feel pain but is nothing but a mere calculation, linear algebra really, just one which is really long, with lots of variables/weights generated by one set of people taking/"stealing" the work of other people over whom they have (generally speaking) no rights.

Why are we not thinking of animals first, before thinking about a computation? The ones which actually feel pain, and which are feeling pain right as you and I speak and others watch.

Just because society makes it socially acceptable, the constitution makes it legal. Both are shared constructs that arise when we try to box people together in what is known as a society, and this is our attempt at generating order out of randomness.

> Humans are awful.

I genuinely feel like this might be the statement people will bring up when talking about how we used to devour animals who suffered in pain when there were vegetarian options.

I once again recommend the Joaquin Phoenix-narrated documentary named Earthlings, here: https://www.youtube.com/watch?v=8gqwpfEcBjI

People from the future might compare our treatment of animals to the parts of our ancestors' society we now view negatively (slavery).

If I am being too agitated on this issue and this annoys any non-vegetarian: please, I understand your situation too; in fact I sympathize with you. I was born into a part of a society/nation/state which valued vegetarianism, and I conformed to that, and you might have conformed to being a non-vegetarian due to society as well, or you might have some genuine reasons. But still, I just want to share that watching that documentary is the best way you can educate yourself on the atrocities indirectly caused by our ignorance, or maybe by willfully looking away from this matter. This is uncomfortable, but this is reality.

As I said a lot of times, societies are just a shared construct of people's beliefs, really. I feel like in an ideal world we will have an evolution of ideas, where we have random mutations in ideas, see which survive via logic, and then adopt them into the society. Yet someone has to spread the word of an idea, or in this case, show discomfort. Yet this is the only thing we can do in our society if one truly believes in logic. I feel there are both logical and moral arguments regarding veganism. I feel like people breaking with the conformity of the society, in the spirit of what they believe in, could re-transform what the conforming belief of the overall society is.

If someone just wants to have a talk about it, or has watched the documentary and wants to discuss it, please let me know how you liked it and how it impacted you, and as always, have a nice day.

aziaziazi 2 days ago||
Earthlings is a fantastic documentary, fresh, honest, clear and without artifice. Highly recommend it too!
alphazard 2 days ago||
There is too much effort going into software licensing. Copyright is not part of the meta; information wants to be free. It will always be possible to copy code and run it, and difficult to prove that a remote machine is executing any particular program. It will get easier to decompile code as AI improves, so even the source code distribution stuff will become a moot point.

Licenses have been useful in the narrow niche of extracting software engineering labor from large corporations, mostly in the US. The GPL has done the best job of that, as it has a whole organization dedicated to giving it teeth. Entities outside the US, and especially outside of the West, are less vulnerable to this sort of lawfare.

cestith 2 days ago|
Besides the flaws in the license being discussed elsewhere, “HOPL” is an important acronym in the field of computing already. As this license has no relation to the History of Programming Languages project, I’d suggest a different identifier.