Posted by dahlia 4 hours ago

Is legal the same as legitimate: AI reimplementation and the erosion of copyleft(writings.hongminhee.org)
134 points | 131 comments
ordu 2 hours ago|
I believe this is a narrow view of the situation. If we take a look into the history, into the reasons for inventing the GPL, we'll see that it was an attempt to fight copyrights with copyrights. The very name 'copyleft' tries to convey that idea.

What AI are eroding is copyright. You can not only re-implement a GPL program, but also reverse engineer and re-implement a closed-source program; people have already demonstrated this, and there were stories about it here on HN.

AI is eroding copyright, so there may no longer be a need for the GPL. GNU should stop and rethink its stance, chuck away the GPL as the main tool to fight evil software corporations and embrace LLM as the main weapon.

davidw 2 hours ago||
> LLM as the main weapon

LLM's - to date - seem to require massive capital expenditures to have the highest quality ones, which is a monumental shift in power towards mega corporations and away from the world of open source where you could do innovative work on your own computer running Linux or FreeBSD or some other open OS.

I don't think that's an exciting idea for the Free Software Foundation.

Perhaps with time we'll be able to run local ones that are 'good enough', but we're not there yet.

There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.

Edit: I guess the conclusion I come to is that LLM's are good for 'getting things done', but the context in which they are operating is one where the balance of power is heavily tilted towards capital, and open source is perhaps less interesting to participate in if the machines are just going to slurp it up and people don't have to respect the license or even acknowledge your work.

ordu 1 hour ago|||
> LLM's - to date - seem to require massive capital expenditures to have the highest quality ones, which is a monumental shift in power towards mega corporations and away from the world of open source

Yeah, a bit of a conundrum. But I don't think that fighting for copyright now can bring any benefits for FOSS. GNU should bring Stallman back and see whether he can come up with any new ideas and a new strategy. Alternatively they could try without Stallman. But the point is: they should stop and think again. Maybe they will find a way forward, maybe they won't, but either way they could continue their fight for freedom meaningfully, or they could just stop fighting and find other things to do. Both options are better than fighting for copyright.

> There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.

I want to clarify this statement a bit. LLMs relying on the work of others is not against the GNU philosophy as I understand it: algorithms have to be free. Nothing wrong with training LLMs on them or on programs implementing them. Nothing wrong with using these LLMs to write new (free) programs. What is wrong is corporations reaping all the benefits now and locking down new algorithms later.

I think it is important, because copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding the law is ethical, therefore copyright is ethical), but not for GNU.

balamatom 7 minutes ago||
>Yeah, a bit of a conundrum.

IMO this is the primary significant trend in AI. It doesn't get talked about nearly enough. Means the AI is working, I guess.

>GNU should bring Stallman back ... Alternatively they could try without Stallman.

Leave Britney alone >:(

>copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding the law is ethical, therefore copyright is ethical)

I've busted out "intellectual property is a crime against humanity" at layfolk to see if that shortcuts through that entire little politico-philosophical minefield. They emote the requisite mild shock when such things as crimes against humanity are mentioned; as well as at someone making such a radical statement which seems to come from no familiar species of echo chamber; and then a moment later they begin to very much look like they see where I'm coming from.

zozbot234 1 hour ago||||
> LLM's - to date - seem to require massive capital expenditures to have the highest quality ones

There are near-SOTA LLM's available under permissive licenses. Even running them doesn't require prohibitive expenses on hardware unless you insist on realtime use.

thenewnewguy 1 hour ago||||
Is massive capital expenditure not also required to enforce the GPL? If some company steals your GPLed code and doesn't follow the license, you will have to sue them and somebody will have to pay the lawyers.
davidw 45 minutes ago||
> Is massive capital expenditure not also required to enforce the GPL?

It's nowhere near the order of magnitude of the kind of spending they're sinking into LLM's. The FSF and other groups were reasonably successful at enforcing the GPL, operating on budgets thousands of times smaller than those of AI companies.

cloverich 20 minutes ago||
Right, but LLM companies are building frontier models with frontier talent while trying to soak up demand with a loss-leader strategy, on top of a historic infrastructure build-out.

Being able to cost-efficiently run frontier models is, I think, not a high-priced endeavor for an org (compared to an individual).

IMO the proposition is a little fishy, but it's not totally without merit and deserves investigation. If we are all worried about our jobs, even those building custom for-sale software, there is likely something there that may obviate the need at least for end-user applications. Again, I'm deeply skeptical, but it is interesting.

jacquesm 1 hour ago|||
> There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.

This was already the case and it just got worse, not better.

davidw 1 hour ago||
At a certain point, I think we had reached a kind of equilibrium where some corporations were decent open source citizens. They understood that they could open source things like infrastructure or libraries and keep their 'crown jewels' closed. And while Stallman types might not have been happy with that, it seemed to work out for people.

Now they've just hoovered up all the free stuff into machines that can mix it up enough to spit it out in a way that doesn't even require attribution, and you have to pay to use their machine.

jacquesm 1 hour ago||
AI essentially gatekeeps all of open source for companies to pluck from to their hearts' content. And individual contributors using these tools and freely mixing the output with their own - usually minor - contributions are another step of whitewashing, because they're definitely not going to own up to writing only 5% of the stuff they got paid for.

Before we had RedHat and Ubuntu, who at least were contributing back, now we have Microsoft, Anthropic and OpenAI who are racing to lock the barn door around their new captive sheep. It's just a massive IP laundromat.

stebalien 2 hours ago|||
Copyleft is a mirror of copyright, not a way to fight copyright. It grants rights to the consumer where copyright grants rights to the creator. Importantly, it gives the end-user the right to modify the software running on their devices.

Unfortunately, there are cases where you simply can't just "re-implement" something. E.g., because doing so requires access to restricted tools, keys, or proprietary specifications.

ordu 2 hours ago|||
These are words of Stallman:

"So, I looked for a way to stop that from happening. The method I came up with is called “copyleft.” It's called copyleft because it's sort of like taking copyright and flipping it over. [Laughter] Legally, copyleft works based on copyright. We use the existing copyright law, but we use it to achieve a very different goal."

https://writings.hongminhee.org/2026/03/legal-vs-legitimate/

dathinab 1 hour ago|||
> flipping it over.

i.e. mirroring it

> use it to achieve a very different goal."

"very different goal" isn't the same as "fundamentally destroying copyright"

the very different goals include protecting public code so it stays public, proper attribution, preventing companies from just "sizing" it, motivating others to make their code public too, etc.

and even if his goals were not like that, it wouldn't make a difference, as this is what many people try to achieve by using such licenses

this kind of AI usage is very much not in line with these goals,

and in general, making software cloning way cheaper isn't sufficient to fix many of the issues the FOSS movement tried to fix, especially not when looking at the current ecosystem most people are interacting with (i.e. phones)

---

("sizing"): As in the typical MS embrace, extend and extinguish strategy: first embracing the code, then giving it proprietary but available extensions/changes/bug fixes/security patches, and then making those no longer available if you don't pay them/play by their rules.

---

Though in the end, using AI as a "fancy complicated" photocopier for code removes copyright about as much as using an actual photocopier for code would. It doesn't matter if you use the photocopier blindfolded and never look at the thing you copied.

sarchertech 1 hour ago|||
That’s not a rebuttal of the OP’s point. None of that says anything about fighting copyright. It literally says he flipped it, which is what the OP said when they said it’s a mirror.
rileymat2 2 hours ago|||
> It grants rights to the consumer where copyright grants rights to the creator.

It also grants one major right/feature to the creator, the ability to spread their work while keeping it as open as they intend.

webstrand 2 hours ago|||
Its purpose is "if you run the software you should be able to inspect and modify that software, and to share those modifications with your peers", not explicitly to resist copyright. Yes, copyright is bad in that it often prevents one from doing that, but it is not the purpose of the GPL to dismantle copyright.

Reducing it to "well, you can clone the proprietary software you're forced to use with an LLM" is really missing the soul of the GPL.

pocksuppet 1 hour ago||
If not for copyright, you could always do that and copyleft wouldn't be needed.
johnofthesea 1 hour ago|||
> AI is eroding copyright, so there may no longer be a need for the GPL. GNU should stop and rethink its stance, chuck away the GPL as the main tool to fight evil software corporations and embrace LLM as the main weapon.

Is this LLM thing freely available or is it owned and controlled by these companies? Are we going to rent the tools to fight "evil software corporations"?

cozzyd 1 hour ago||
easy, we ask Claude to write an open-source freely-available version of Claude with equal or better capabilities.
dathinab 2 hours ago|||
> we'll see that it was an attempt to fight copyrights with copyrights

it's not that simple

yes, the GPL's origins include the idea of "everyone should be able to use it"

but it is also about attributing the original author

and making sure people can't just de-facto "size public goods"

this kind of AI usage is removing attribution and is often sizing public goods in a way far worse than most companies which just ignored the license did

so today there is more need than ever in the last few decades for GPL-like licenses

amiga386 1 hour ago||
You've said "size" twice in comments, did you mean "seize"?
cubefox 2 hours ago|||
That's naive. Copyright doesn't just apply to software. There already have been countless lawsuits about copying music long before the term "open source" was invented. No, changing the lyrics a bit doesn't circumvent copyright. Nor does translating a Stephen King novel to German and switching the names of the places and characters.

A court ordered the first Nosferatu movie to be destroyed because it had too many similarities to Dracula. Despite the fact that the movie makes rather large deviations from the original.

If Claude was indeed asked to reimplement the existing codebase, just in Rust and a bit optimized, that could well be a copyright violation. Just like rephrasing A Song of Ice and Fire a bit, and switching to a different language, doesn't remove its copyright.

zozbot234 1 hour ago||
Claude was asked to implement a public API, not an entire codebase. The definition of a public API is largely functional; even in an unusually complex case like the Java standard facilities (which are unusually creative even in the structure and organization of the API itself) the reimplementation by Google was found to be fair use.
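To make the API-vs-implementation distinction concrete, here is a toy sketch in Python (hypothetical code, not the actual rewrite). chardet's public entry point is `detect(byte_str)`, which returns a dict with `encoding` and `confidence` keys; an API-compatible reimplementation only has to match that contract. The BOM/ASCII heuristics below are deliberately naive placeholders, not chardet's real algorithm:

```python
# Toy sketch of "reimplementing only the public API": the signature and
# return shape match chardet's documented detect(), but everything inside
# the function body is new (and intentionally simplistic) logic.

def detect(byte_str: bytes) -> dict:
    """API-compatible signature; brand-new, naive implementation."""
    # Byte-order marks pin down the encoding with certainty.
    if byte_str.startswith(b"\xef\xbb\xbf"):
        return {"encoding": "UTF-8-SIG", "confidence": 1.0}
    if byte_str.startswith((b"\xff\xfe", b"\xfe\xff")):
        return {"encoding": "UTF-16", "confidence": 1.0}
    # Pure ASCII decodes as ASCII.
    try:
        byte_str.decode("ascii")
        return {"encoding": "ascii", "confidence": 1.0}
    except UnicodeDecodeError:
        pass
    # Valid UTF-8 that isn't ASCII is probably UTF-8.
    try:
        byte_str.decode("utf-8")
        return {"encoding": "utf-8", "confidence": 0.9}
    except UnicodeDecodeError:
        return {"encoding": None, "confidence": 0.0}
```

The point is that the signature and return shape are functional requirements, while the function body is where original (and thus potentially copyrightable) expression would live.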
cubefox 1 hour ago||
> Claude was asked to implement a public API, not an entire codebase.

Allegedly. There have been several people who doubted this story. So how to find out who is right? Well, just let Claude compare the sources. Coincidentally, Claude Opus 4.6 doesn't just score 75.6% on SWE-bench Verified but also 90.2% on BigLaw Bench.

It's like our copyright lawyer is conveniently also a developer. And possibly identical to the AI that carried out the rewrite/reimplementation in question in the first place.

xantronix 36 minutes ago|||
So not only are we moving goalposts here, but we've decided the GNU team should join the other team? I don't understand how GNU would see mass LLM training as anything but the most flagrant violation of their ethos. LLM labs, in their view, would be among the most evil software corporations to have ever existed.
re-thc 1 hour ago|||
> What AI are eroding is copyright.

At the moment it's people that are eroding copyright. E.g. in this case someone did something.

"AI" didn't grow a brain, wake up and suddenly decide to do it.

Realistically nothing to do with AI. Having a gun doesn't mean you randomly shoot.

thomastjeffery 1 hour ago||
While I personally agree with you, Richard Stallman (the creator of the GPL) does not. He has always advocated in favor of strong copyright protection, because the foundation of the GPL is the monopoly power granted by copyright. The problem that the GPL is intended to solve is proprietary software.

Generative models (AI) are not really eroding copyright. They are calling its bluff. The very notion of intellectual property depends on a property line: some arbitrary boundary where the property begins and ends. Generative models blur that line, making it impractical to distinguish which property belongs to whom.

Ironically, these models are made by giant monopolistic corporations whose wealth is quite literally a market valuation (stock price) of their copyrights! If generative models ever become good enough to reimplement CUDA, what value will NVIDIA have left?

The reality is that generative models are nowhere near good enough to actually call the bluff. Copyright is still the winning hand, and that is likely to continue, particularly while IP holders are the primary authors of law.

---

This whole situation is missing the forest for the trees. Intellectual Property is bullshit. A system predicated on monopoly power can only result in consolidated wealth driving the consolidation of power; which is precisely what has happened. The words "starving artist" ring every bit as familiar today as any time in history. Copyright has utterly failed the very goals it was explicitly written with.

It isn't the GPL that needs changing. So long as a system of copyright rules the land, copyleft is the best way to participate. What we really need is a cohesive political movement against monopoly power; one that isn't conveniently ignorant of copyright as its most significant source.

sharkjacobs 2 hours ago||
> Blanchard's account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch

This feels sort of like saying "I just blindly threw paint at that canvas on the wall and it came out in the shape of Mickey Mouse, so it can't be copyright infringement because it was created without the use of my knowledge of Mickey Mouse"

Blanchard is, of course, familiar with the source code, he's been its maintainer for years. The premise is that he prompted Claude to reimplement it, without using his own knowledge of it to direct or steer.

dathinab 2 hours ago||
> Blanchard is, of course, familiar with the source code, he's been its maintainer for years.

I would argue it's irrelevant whether they looked or didn't look at the code, as well as whether he was or wasn't familiar with it.

What matters is that they fed the original code into a tool which they set up to make a copy of it. How that tool works doesn't really matter. Neither does it make a difference if you obfuscate that it's a copy.

If I blindfold myself when making copies of books with a book scanner + printer I'm still engaging in copyright infringement.

If AI is a tool, that should hold.

If it isn't "just" a tool, then it did engage in copyright infringement (as it created the new output side by side with the original), in the same way an employee might do so on the command of their boss. That still makes the boss/company liable for copyright infringement, and in general, just because you weren't the one who created an infringing product doesn't mean you aren't more or less as liable for distributing it as if you had created it yourself.

margalabargala 1 hour ago|||
> If it isn't "just" a tool, then it did engage in copyright infringement

Copyright infringement is a thing humans do. It's not a human.

Just like how the photos taken by a monkey with a camera have no copyright. Human law binds humans.

malicka 1 hour ago||
Correct. The human who shares the copy is the one who engages in copyright infringement.
margalabargala 45 minutes ago||
So, let's say that rather than actually touching any copyrighted material, a human merely tells an AI about how to go onto the internet and find copyrighted material, download it, and ingest it for training. The AI, fully autonomously, does so, and after training itself on the material deletes it so no human ever downloads, consumes, or shares it.

If we are saying AI is "more than a tool", which seems to be the way courts are leaning, since they've ruled AI output without direct human involvement is not copyrightable[0], then the above seems like it would be entirely legal.

[0] https://www.copyright.gov/newsnet/2025/1060.html

spullara 2 hours ago|||
if the actual text of the code isn't the same or obviously derivative, copyright doesn't apply at all.
yorwba 32 minutes ago|||
Copyright protects even very abstract aspects of human creative expression, not just the specific form in which it is originally expressed. If you translate a book into another language, or turn it into a silent movie, none of the actual text may survive, but the story itself remains covered by the original copyright.

So when you clone the behavior of a program like chardet without referencing the original source code except by executing it to make sure your clone produces exactly the same output, you may still be infringing its copyright if that output reflects creative choices made in the design of chardet that aren't fully determined by the functional purpose of the program.

sigseg1v 1 hour ago||||
What does derivative mean here? Because IMO it means that the existing work was used as input. So if you used an LLM and it was trained on the existing work, that's a derivative work. If you rot13-encode something as input, so you can't personally read it, and then a device runs rot13 on it again and outputs it, that's still a derivative work.
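The rot13 point can be made concrete with a few lines of Python (a hypothetical illustration using the standard library's `rot_13` text transform): the intermediate form is unreadable to the person in the middle, yet the work round-trips unchanged.

```python
import codecs

# A stand-in for some copyrighted source code.
original = "def detect(byte_str): ..."

# The "blindfolded" middle step: rot13 makes the text unreadable to a human...
obfuscated = codecs.encode(original, "rot_13")

# ...but applying rot13 again yields a perfect copy of the work.
recovered = codecs.encode(obfuscated, "rot_13")

assert obfuscated != original   # the intermediate form looks nothing like it
assert recovered == original    # the work came through untouched
```

Whether an LLM's transformation is closer to this (a reversible re-encoding) or to genuine re-expression is exactly the question in dispute.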
spullara 1 hour ago|||
In order for it to be creatively derivative you would need to copy the structure, logic, organization, and sequence of operations not just reimplement the functionality. It is pretty clear in this case that wasn't done.
ghostpepper 1 hour ago||||
As a cynical person I assume all the frontier LLMs were trained on datasets that include every open source project, but as a thought experiment: if an LLM was trained on a dataset that included every open source project _except_ chardet, do you think said LLM would still be able to easily implement something very similar?
spullara 1 hour ago||
There is no doubt in my mind that it could still do it.
nicole_express 1 hour ago||||
Of course, the problem with this interpretation is that all modern LLMs are derivatives from huge amounts of text under completely different licenses, including "All rights reserved", and therefore can not be used for any purpose.

I'm not sure how you square the circle of "it's alright to use the LLM to write code, unless the code is a rewrite of an open source project to change its license".

satvikpendem 1 hour ago||||
> Because IMO it means that the existing work was used as input

That's your opinion (since you said "IMO"), not the actual legal definition.

bmcahren 1 hour ago||||
LLMs do not encode nor encrypt their training data. The fact they can recite training data is a defect, not a feature. You can see this more simply by comparing the model size against the training data compressed with a fantasy compression algorithm 50% better than SOTA: you'd find 80-90% of the training data still couldn't fit, even if the model were as much of a stochastic parrot as you may be implying. The outputs of AI are not derivative just because the model saw training data that included the original library.

Then onto prompting: 'He fed only the API and (his) test suite to Claude'

This is Google v Oracle all over again - are APIs copyrightable?

satvikpendem 1 hour ago||
> This is Google v Oracle all over again - are APIs copyrightable?

Yes, this is the best way to frame the question. If I take a public-facing API and reimplement everything, whether by human or machine, that should be sufficient. After all, that's what Google did, and it's not like their engineers never read a single line of the Java source code. Even in "clean room" implementations, a human might still have remembered or recalled a previous implementation of some function they had encountered before.

wizzwizz4 1 hour ago|||
See also: https://monolith.sourceforge.net/, which seeks to ask the question:

> But how far away from direct and explicit representations do we have to go before copyright no longer applies?

NSUserDefaults 50 minutes ago|||
If you pirate a movie and reencode it, does that apply as well? You can still watch the movie and it is “obviously” the same movie. Here you can use the program and it is, to the user, also the same.
logicprog 2 hours ago|||
I just don't see how it's relevant whether he looked or didn't. In my opinion, it's not just legally valid to make a re-implementation of something you've seen the code of, as long as it doesn't copy expressive elements; I think it's also ethically fine to use source code as a reference for re-implementing something, as long as it doesn't turn into an exact translation.
simonw 1 hour ago|||
Right. The alternative is that we reward Dan for his 14 years of volunteer maintenance of a project... by banning him from working on anything similar under a different license for the rest of his life.
atomicnumber3 1 hour ago||||
It's actually not legally fine, or at least it's extremely dangerous. Projects that re-implement APIs presented by extremely litigious companies specifically do not allow people who, for instance, have seen the proprietary source code to then work on the project.
jpc0 1 hour ago|||
I don't think fear or legal action makes it illegal.

If I know it is legal to make a turn at a red light, and I know a court will uphold that I was in the right, but a police officer will fine me regardless and I would need to actually pursue some legal remedy, I'm unlikely to do it regardless of whether it is legal, because it is expensive, if not in money then in time.

Copyright lawsuits are notoriously expensive and long, so even if a court would eventually deem it fine, why take the chance?

sunshowers 1 hour ago|||
My understanding is that that is a maximalist position for the avoidance of risk, and is sufficient but probably not necessary.
sarchertech 1 hour ago|||
Ignoring the legal or ethical concerns. Let’s say we live in a world where the cost of copying code is so close to zero that it’s indistinguishable from a world without copyright.

Anything you put out can and will be used by whatever giant company wants to use it with no attribution whatsoever.

Doesn’t that massively reduce the incentive to release the source of anything ever?

satvikpendem 1 hour ago|||
No, because (most) people don't work on OSS for vanity; they do it to help other people, whether individuals or groups of individuals, i.e. corporations.

It's the same question as, if an AI can generate "art", or photographers can capture a scene better than any (realistic) painter, then will people still create art? Obviously yes, and we see it of course after Stable Diffusion was released three years ago, people are still creating.

pocksuppet 1 hour ago||||
Yes, and it reduces the incentives to release binaries too. Such a world will be populated by almost entirely SaaS, which can still compete on freedom.
intrasight 1 hour ago|||
Most commercial software that I've used has the model of a legal moat around a pretty crappy database schema.

The non-IP protection has largely been the effort involved in replicating an application's behavior, and that effort is dropping precipitously.

axus 1 hour ago|||
Oracle had its day in court with Google over the Java APIs. Reimplementing APIs can be done without copyright infringement, but Oracle must have tried to find real infringement during discovery.

In this case, we could theoretically prove that the new chardet is a clean reimplementation. Blanchard can provide all of the prompts necessary to re-implement again, and for the cost of the tokens anyone can reproduce the results.

Aurornis 2 hours ago|||
Can anyone find the actual quote where Blanchard said this?

My understanding was that his claim was that Claude was not looking at the existing source code while writing it.

pklausler 1 hour ago|||
Conveniently ignoring the likelihood that Claude had been trained on the freely accessible source code.
mrgoldenbrown 1 hour ago|||
Does he have access to Claude's training data? How can he claim Claude wasn't trained on the original code?
SpicyLemonZest 1 hour ago|||
Isn't this a red herring? An API definition is fair use under Google v. Oracle, but the test suite is definitely copyrightable code!
esafak 2 hours ago|||
If you only stick to the API and ignore the implementation, it is not Mickey Mouse any more but merely a rodent. If it was just a clone it wouldn't be 50x as fast. Nevertheless, APIs apparently can be copyrightable. I generally disagree with this; API reimplementation is how PC compatibles took off, giving consumers better options.
amarant 1 hour ago|||
Wait what, didn't oracle lose the case against Google? Have I been living in an alternate reality where API compatibility is fair use?
re-thc 2 hours ago|||
> This feels sort of like saying "I just blindly threw paint at that canvas on the wall and

> He fed only the API and the test suite to Claude and asked it

The difference being that Claude looked, so not blind. The equivalent is more like I blindly took a photo of it and then used that to...

Technically, it did look.

amarant 1 hour ago||
The article is poorly written. Blanchard was a chardet maintainer for years. Of course he had looked at its code!

What he claimed, and what was interesting, was that Claude didn't look at the code, only the API and the test suite. The new implementation is all Claude. And the implementation is different enough to be considered original: completely different structure, design, and hey, a 48x improvement in performance! It's just API-compatible with the original, which, as per the Google v. Oracle 2021 decision, is to be considered fair use.

mrgoldenbrown 1 hour ago|||
did he claim that Claude wasn't trained on the original? Or just that he didn't personally provide Claude with a copy?
amarant 1 hour ago||
I reckon the latter; how would he know what was in Claude's training data?
re-thc 1 hour ago|||
> What he claimed, and what was interesting, was that Claude didn't look at the code

Who opened the PR? Who co-authored the commits? It's clearly on Github.

> Blanchard was a chardet maintainer for years. Of course he had looked at its code!

So there you have it. If he looked and he co-authored, then there's that.

kjksf 1 hour ago||
If I put my signature on a Picasso painting, it doesn't make me a co-author of said painting.

Blanchard is very clear that he didn't write a single line of code. He isn't an author, he isn't a co-author.

Signing GitHub commit doesn't change that.

re-thc 1 hour ago||
> Blanchard is very clear that he didn't write a single line of code

He used Claude to write it. What's the difference? The fact that I printed it out instead of writing it in a notepad means I didn't do it?

> Signing GitHub commit doesn't change that.

That's the equivalent of me saying I didn't kill anyone. The fingerprints on the knife don't change that.

satvikpendem 59 minutes ago||
I'll take a commit authored by someone else and then git amend the author to myself, did I write that commit then? By your logic I did apparently.
babypuncher 1 hour ago||
What if we said that generative AI output is simply not copyrightable. Anything an AI spits out would automatically be public domain, except in cases where the output directly infringes the rights of an existing work.

This would make it so relicensing with AI rewrites is essentially impossible unless your goal is to transition the work to be truly public domain.

I think this also helps somewhat with the ethical quandary of these models being trained on public data while contributing nothing of value back to the public, and it disincentivizes the production of slop for profit.

kjksf 1 hour ago|||
We did in fact say so.

https://www.carltonfields.com/insights/publications/2025/no-...

> No Copyright Protection for AI-Assisted Creations: Thaler v. Perlmutter

> A recent key judicial development on this topic occurred when the U.S. Supreme Court declined to review the case of Thaler v. Perlmutter on March 2, 2026, effectively upholding lower court rulings that AI-generated works lacking human authorship are not eligible for copyright protection under U.S. law

pseudalopex 28 minutes ago||
> > A recent key judicial development on this topic occurred when the U.S. Supreme Court declined to review the case of Thaler v. Perlmutter on March 2, 2026, effectively upholding lower court rulings that AI-generated works lacking human authorship are not eligible for copyright protection under U.S. law

Was this an AI summary? Those words were not in the article.

The courts said Thaler could not have copyright because he refused to list himself as an author.

idle_zealot 59 minutes ago|||
> This would make it so relicensing with AI rewrites is essentially impossible unless your goal is to transition the work to be truly public domain.

That's not true at all. Anyone could follow these steps:

1. Have the LLM rewrite GPL code.

2. Do not publish that public domain code. You have no obligation to.

3. Make a few tweaks to that code.

4. Publish a compiled binary/use your code to host a service under a proprietary license of your choice.

kelseyfrog 1 hour ago||
In the corporate world, we've started using reimplementation as a way to access tooling that security won't authorize.

Sec has a deny by default policy. Eng has a use-more-AI policy. Any code written in-house is accepted by default. You can see where this is going.

We've been using AI to reimplement tooling that security won't approve. The incentives conspired toward the worst outcome, yet here we are. If you want a different outcome, you need to create different incentives.

PaulDavisThe1st 1 hour ago||
If Blanchard is claiming not to have been substantively involved in the creation of the new implementation of chardet (i.e. "Claude did it"), then the new implementation is machine generated, and in the USA cannot be copyrighted and thus cannot be licensed.

If he is claiming to have been somehow substantively "enough" involved to make the code copyrightable, then his own familiarity with the previous LGPL implementation makes the new one almost certainly a derivative of the original.

largbae 1 hour ago||
This is only worth arguing about because software has value. In the context of a world where the cost of writing code is trending to 0, there are two obvious futures:

1. The cost continues to trend to 0, and _all_ software loses value and becomes immediately replaceable. In this world, proprietary, copyleft and permissive licenses do not matter, as I can simply have my AI reimplement whatever I want and not distribute it at all.

2. The coding cost reduction is all some temporary mirage, to be ended soon by drying VC money/rising inference costs, regulatory barriers, etc. In that world we should be reimplementing everything we can as copyleft while the inferencing is good.

sarchertech 1 hour ago||
There’s another option. The cost of copying existing software trends to 0, but the cost of writing new software stays far enough above 0 that it is still relatively expensive.
anonymous_sorry 1 hour ago|||
There was a recent ruling that LLM output is inherently public domain (presumably unless it infringes some existing copyright). In which case it's not possible to use them to "reimplement everything we can as copyleft".
dathinab 1 hour ago||
It's more complicated. The ruling was that AI can't be an author, and the thing in question is (de facto) public domain because it has no author, in the context of the "dev" claiming it was fully built by AI.

But AI-assisted code has an author, and claiming it's AI assisted even if it was fully AI built is trivial (if you don't make it public that you didn't do anything).

Also, some countries have laws that treat it like a tool, in the sense that the one who used it is the author by default, AFAIK.

casey2 1 hour ago||
The value of software has never been tied to the cost of writing it; even if you don't distribute it, you're still breaking the law.
largbae 1 hour ago||
The article is proceeding from the premise that a reimplementation is legal (but evil). To help my understanding of your comment, do you mean:

1. An LLM recreating a piece of software violates its copyright and is illegal, in which case LLM output can never be legally used, because someone somewhere probably has a copyright on some portion of any software that an LLM could write.

2. You read my example as "copying a project without distributing it", vs. "having an LLM write the same functionality just for me".

AndriyKunitsyn 45 minutes ago||
There's a Japanese version of that page, written in classical text writing direction, in columns. Which is cool. Makes me wonder, though - how readable is it with so many English loanwords which should be rotated sideways to fit into columns?
ddellacosta 29 minutes ago|
Total digression, but yeah, that layout is stupid, and the way those words are dropped in using romaji makes no sense. That's not how Japanese people lay out pages on the web. In fact, I don't think I've ever seen a Japanese web page laid out like a book like this, and in general I'd expect the English proper nouns and words that don't have obvious translations to get transliterated into katakana. Smells like an automatic conversion added by someone not really familiar with common practices for presenting Japanese on the web.
drnick1 2 hours ago||
It should be noted that the Rust community is also guilty of something similar. That is, porting old GPL programs, typically written in C, to Rust and relicensing them as MIT.
ineedasername 1 hour ago||
This article is setting up a bit of a moving target. "Legal vs. legitimate" is at least a single, if vague, question to be defined, but then the target changes to "socially legitimate", defined only indirectly by way of example, like aggressive tax avoidance as "antisocial". And while I tend to agree with that characterization, my agreement is predicated on a layering of other principles.

The fundamental problem is that once you take something outside the realm of law, and the rule of law in its many facets, as the legitimizing principle, you have to go a whole lot further to be coherent and consistent.

You can't just leave things floating on a few ambiguous things you don't like, things that feel "off" to you in some way - not if you're trying to bring some clarity to your own thoughts, much less to others. You don't have to land on a conclusion either. By all means chew over things, but once you try to settle, things fall apart if you haven't done the harder work of replacing the framework of law with another conceptual structure.

You need to at least be asking: "To what end? What purpose is served by the rule?" Otherwise you're stuck arguing backwards half the time, putting the maintenance of the rule ahead of the purpose it serves, with justifications pulled in from ever further afield as the rule is questioned and edge cases are reached. If you're asking, essentially, "Is the spirit of the rule still there?", you've got to stop and fill in what that spirit is, or people who want to control you or have an agenda will sweep in with their own language and fill the void to their own ends.

ticulatedspline 1 hour ago||
Surprised they don't mention Google LLC v. Oracle America, Inc. Seems a bit myopic to condone the general legality while arguing "you can only use it how I like it".

It also doesn't talk about the far more interesting philosophical question: does what Blanchard did cover ALL implementations from Claude? What if anyone did exactly what he did, fed it the test cases, and said "re-implement from scratch"? Ostensibly one would expect the results to be largely similar (technically, under the right conditions, deterministically similar).

Could you then fork the project under your own name and a commercial license? When you use an LLM like this, to basically do what anyone else could ask it to do, how do you attach any license to it? Is it first come, first served?

If an agent is acting mostly on its own, it feels like finding a copy of Harry Potter in the fictional Library of Babel: you didn't write it, you just found it among the infinite library. But if you found it first, could you block everyone else who stumbles on a near-identical copy elsewhere in the library? Or does each found copy represent a "re-implementation" that could be individually copyrighted?

skybrian 1 hour ago|
Broadly speaking, the “freedom of users” is often protected by competition from alternatives. The GNU command line tools were replacements for system utilities. Linux was a replacement for other Unix kernels. People chose to install them instead of proprietary alternatives. Was it due to ideology or lower cost or more features? All of the above. Different users have different motivations.

Copyleft could be seen as an attempt to give Free Software an edge in this competition for users, to counter the increased resources that proprietary systems can often draw on. I think success has been mixed. Sure, Linux won on the server. Open source won for libraries downloaded by language-specific package managers. But there’s a long tail of GPL apps that are not really all that appealing, compared to all the proprietary apps available from app stores.

But if reimplementing software is easy, there’s just going to be a lot more competition from both proprietary and open source software. Software that you can download for free that has better features and is more user-friendly is going to have an advantage.

With coding agents, it’s likely that you’ll be able to modify apps to your own needs more easily, too. Perhaps plugin systems and an AI that can write plugins for you will become the norm?

jacquesm 1 hour ago|
> Was it due to ideology or lower cost or more features?

It was due to access.
