A favorite example of mine is speed limits. There is a difference between "putting up a sign that says 55 mph and walking away", "putting up a sign that says 55 mph and occasionally enforcing it with expensive humans when they get around to it", and "putting up a sign that says 55 mph and rigidly enforcing it to the exact mph through a robot". Nominally, the law is "don't go faster than 55 mph". Realistically, those are three completely different policies in every way that matters.
We are all making a continual and ongoing grave error: taking what were previously de jure policies that were de facto quite different in the real world, and thoughtlessly "upgrading" them directly into de facto policies without realizing that this is in fact a huge change in policy. One that nobody voted for, one that no regulator even really thought about, one that we are just thoughtlessly putting into place because "well, the law is 55 mph" without realizing that, no, in fact that never was the law before. That's what the law said, not what it was. In the past those could never really be the same thing. Now, more and more, they can.
This is a big change!
Cost of enforcement matters. The exact same nominal law that is very costly to enforce has completely different costs and benefits than that same law becoming all but free to rigidly enforce.
And without very many people consciously realizing it, we have centuries of laws that were written with the subconscious realization that enforcement is difficult and expensive, and that the discretion of that enforcement is part of the power of the government. Blindly translating those centuries of laws into rigid, free enforcement is a terrible idea for everyone.
Yet we still have almost no recognition that this is an issue. Perhaps surprisingly, this could be one of the first places where we directly grapple with it in a legal case someday soon: the legality of something may be at least partially influenced by the expense of enforcing it.
The big caveat, though, is that when enforcement becomes more accurate, the rules and penalties need to change. As you point out, a rigidly enforced law is very different from one that is less rigorously enforced. You are right that there is very little recognition of this. The law is difficult to change by design, but it may soon have to change faster than it has in the past, and it's not clear how or if that can happen. Historically, it seems like the only way rapid governmental change happens is by violent revolution, and I would rather not live in a time of violent revolution...
Increasing the precision of enforcement makes a lot more sense for direct-harm laws. You won't find anyone seriously arguing that full 100% enforcement of murder laws is a bad idea. It's the preemptive laws, which were often lazily enforced, especially when no real harm resulted from the action, where this all gets complicated. Maybe this is the distinction to focus on.
If a law being enforced 100% of the time causes problems, then rethink the law (e.g. raise the speed limit, or redesign the road for lower speeds).
Imprecise law enforcement enables political office holders to arbitrarily leverage the law to arrest people they label as a political enemy, e.g. Aaron Swartz.
If everyone who ever shared publications outside the legal subscriber base were precisely arrested, charged, and punished, I don't think the punishment and current legal terrain regarding the charges leveled against him would have lasted.
But this is a feature, not a bug.
https://www.fxleaders.com/news/2025/10/29/code-is-law-sparks...
Additionally, law is not logical. Law is about justice and justice is not logical.
But if I've learned anything in 20 years of software eng, it's that migration plans matter. The perfect system is irrelevant if you can't figure out how to transition to it. AI is dangling a beautiful future in front of us, but the transition looks... Very challenging
The problem with perfect enforcement is it requires the same kind of forethought as waterfall development. You rigidly design the specification (law) at the start, then persist with it without deviation from the original plan (at least for a long time). In your example, the lawmakers may still pass the law because they don't think of their kids as drug users, and are distracted by some outrage in some other area.
Giving the former discretion was a way to sneakily contain the worst excesses of the latter.
Alas, self-interest isn't something voters really seem to take into account.
Eastern Europe went through a similar transition. Before the iron curtain fell, the eastern bloc operated on favors more than it operated on money. This definitely isn't the case any more.
Many governments around the world have entities to which you can write a letter, and those entities are frequently obligated to respond to that letter within a specific time frame. Those laws have been written with the understanding that most people don't know how to write letters, and those who do, will not write them unless absolutely necessary.
This allows the regulators to be slow and operate by shuffling around inefficient paper forms, instead of keeping things in an efficient ticket tracking system.
LLMs make it much, much easier to write letters, even if you don't speak the language and can only communicate at the level of a sixth-grader. Imagine what happens when the worst kind of "can I talk to your supervisor" Karen gets access to a sycophantic LLM, which tells her that she's "absolutely right, this is absolutely unacceptable behavior, I will help you write a letter to your regulator, who should help you out in this situation."
Hey, I really like this framing. This is a topic that I've thought about from a different perspective.
We have all kinds of 18th and 19th century legal precedents about search, subpoenas, plain sight, surveillance in public spaces, etc... that really took for granted that police effort was limited and that enforcement would be imperfect.
But they break down when you read all the license plates, or you can subpoena anyone's email, or... whatever.
Making the laws rigid and having perfect enforcement has a cost: even just the baseline loss of privacy and the squashing of innocent transgression is a cost.
(A counterpoint: a lot of selective law enforcement came down to whether you were unpopular or unprivileged in some way... cheaper and automated enforcement may take some of these effects away and make things more fair. Discretion in enforcement can lead to both more and less just outcomes).
The U.S. constitution was written in an age before phones, automatic and semi-automatic rifles (at least in common use), nuclear weapons, high-bandwidth communications networks that operate at lightning speed, mass media, unbreakable encryption, and CCTV cameras.
As in their post:
"The future of software is not open. It is not closed. It is liberated, freed from the constraints of licenses written for a world in which reproduction required effort, maintained by a generation of developers who believed that sharing code was its own reward and have been comprehensively proven right about the sharing and wrong about the reward."
This applies to open-source but also very well to proprietary software too ;) Reversing your competitors' software has never been easier!
In the US, the police do not generally need a warrant to tail you as you go around town, but it is phenomenally expensive and difficult to do so. Cellphone location records, despite largely providing the same information, do require warrants because it provides extremely cheap, scalable tracking of anyone. In other words, we allow the government to acquire certain information through difficult means in hopes that it forces them to be very selective about how they use it. When the costs changed, what was allowed also had to change.
And this same principle allows them to build massive friend/connection networks of everyone electronically. The government knows every single person you've communicated with and how often you communicate with them.
It was never designed for this originally.
https://yalelawjournal.org/pdf/200_ay258cck.pdf
which, as I recall it, suggested that the copyright law effectively considered that it was good that there was a way around copyright (with reverse engineering and clean-room implementation), and also good that the way around copyright required some investment in its own right, rather than being free, easy, and automatic.
I think Samuelson and Scotchmer thought that, as you say, costs matter, and that the legal system was recognizing this, but in a kind of indirect way, not overtly.
There’s the old approach of hanging a wanted poster and asking people to “call us if you see this guy”. Then there’s the new approach matching faces in a comprehensive database and camera networks.
The latter is just the perfect, efficient implementation of the former. But it's… different somehow.
To do this, though, you're going to have to get rid of veto points! A bit hard in our disastrously constitutional system.
If we wanted to strictly enforce speed limits, we would put governors on engines. However, doing that would cause a lot of harm to normal people. That's why we don't do it.
Stop and think about what it means to be human. We use judgement and decide when we must break the laws. And that is OK and indeed... expected.
I would argue that only the last one is a valid reason because it's the only one where it's clear that not speeding leads to direct worse consequences.
Speed limits don't exist just to annoy people. Speeding increases the risk of accident and especially the consequences of an accident.
I don't trust people to drive well in a stressful situation, so why would it be a good idea to let them increase the risk by speeding.
The worst part is that it's not even all that likely that the time saved by speeding ends up mattering.
In the U.S., the average distance from a hospital is 10 miles (in a rural area). Assuming 55 mph speed limits, that means most people are 11 minutes from a hospital. Realistically, “speeding” in this scenario probably means something like 80 mph, so you cut your travel time to 7.5 minutes.
In other words, you just significantly increased your chances of killing your about-to-be-born kid, your wife, yourself, and innocent bystanders, just to potentially arrive at a hospital 210 seconds sooner.
Edit: the rushing someone to an ER scenario is possibly more ridiculous, since you can’t teleport yourself, and if the 3.5 minutes in the above scenario would make a difference, then driving someone to the ER is a significantly worse option than starting first aid while waiting for EMTs to arrive.
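A quick back-of-the-envelope check of the arithmetic above (the 10-mile distance and both speeds are the comment's assumptions, not measured data):

```python
# Travel-time arithmetic for the hospital scenario above.
distance_miles = 10  # assumed average rural distance to a hospital

minutes_at_limit = distance_miles / 55 * 60   # ~10.9 min at 55 mph
minutes_speeding = distance_miles / 80 * 60   # 7.5 min at 80 mph
seconds_saved = (minutes_at_limit - minutes_speeding) * 60

print(round(minutes_at_limit, 1))  # 10.9
print(minutes_speeding)            # 7.5
print(round(seconds_saved))        # 205 (about 3.5 minutes, per the comment)
```

The comment's "210 seconds" comes from rounding the 55 mph trip up to 11 minutes; the unrounded savings is about 205 seconds, which doesn't change the point.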
Your argument only makes sense if the only possible bad thing is a car accident -- to make my point clearer, would you take a 1% chance of losing $100 to avoid a 50% chance of losing $10?
Depends how much money you have, but it can be a perfectly rational decision.
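The expected-loss arithmetic behind that gamble, just working out the numbers in the comment above:

```python
# Expected loss of each option in the 1%/$100 vs 50%/$10 gamble above.
ev_rare_big = 0.01 * 100      # $1.00 expected loss
ev_common_small = 0.50 * 10   # $5.00 expected loss

print(ev_rare_big, ev_common_small)  # 1.0 5.0
# In pure expectation the rare large loss is the cheaper bet; a risk-averse
# person with little money may still rationally prefer the capped $10 loss.
```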
The real reason is that speed limits are generally lower than the safe speed of traffic, and enforcement begins at about 10mph over the stated limits.
People know they can get away with it.
If limits were raised 15% and strictly enforced, it would probably be better for society. Getting a ticket for a valid emergency would be easy to have reversed.
> Blindly translating those centuries of laws into rigid, free enforcement is a terrible idea for everyone.
I understand your point that changing the enforcement changes how the law is "felt" even though on paper the law has not changed. And I think it makes sense to review and potentially revise the laws when enforcement methods change. But in the specific case of the 55 mph limit, would the consequences really be grave and terrible if enforcement were handled by a robot, but the law remained the same?
The potential consequences of mass surveillance come to mind.
While it is true that many people do speed, that doesn't make their speeding "the real speed limit".
Anyway. I come from the UK where we've had camera based enforcement for aeons. This of course actually results in people speeding and braking down to the limit as they approach the camera (which is of course announced loudly by their sat nav). The driving quality is frankly worse because of this, not better, and it certainly doesn't reduce incidence of speeding.
Of course the inevitable car tracker (or average speed cameras) resolve this pretty well.
"Costs matter" is one way to say it, probably a lot easier to digest and more popular than the "Quantity has a quality all its own" quote I've been using, which is generally attributed to Stalin, which is a little bit of a problem.
But it's absolutely true! Flock ALPRs are equivalent to a police officer with binoculars and a post-it for a wanted vehicle's make, model, and license plate, except we can put hundreds of them on the major intersections throughout a city 24/7 for $20k instead of multiplying the police budget by 20x.
A warrant to gather gigabytes of data from an ISP or email provider is equivalent to a literal wiretap and tape recorder on a suspect's phone line, except the former costs pennies to implement and the latter requires a human to actually move wires and then listen for the duration.
Speed cameras are another excellent example.
Technology that changes the cost of enforcement changes the character of the law. I don't think that no one realizes this. I think many in office, many implementing the changes, and many supporting or voting for those groups are acutely aware and greedy for the increased authoritarian control but blind to the human rights harms they're causing.
(There are other problems, I know, but the regulations are crazy).
What if we did build a clean room as a service but the proceeds from that didn't go to the "Malus.sh" corporation, but to the owners / maintainers of the OSS being implemented. Maybe all OSS repos should switch to AGPL or some viral license with link to pay-me-to-implement.com. Companies that want to use that package go get their own custom implementation that is under a license strictly for that company and the OSS maintainer gets paid.
I wonder what the MVP for such a thing would look like.
> "We had 847 AGPL dependencies blocking our acquisition. MalusCorp liberated them all in 3 weeks. The due diligence team found zero license issues. We closed at $2.3B." - Marcus Wellington III, Former CTO, Definitely Real Corp (Acquired)
> © 2024 MalusCorp International Holdings Ltd. Registered in [JURISDICTION WITHHELD].
> This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services.
It's like... a reverse patent troll? I'm not even sure I get it, but the wording "liberation from open source license obligations" just makes me want to puke. I also doubt it's legit, but I'm not a lawyer. I hope somebody at the FSF or the Apache Foundation or... wherever, who is, will clarify.
"Our proprietary AI systems have never seen" how can they prove that? Independent audit? Whom? How often?
Satire... yes but my blood pressure?!
I am going to assume it's the latter.
If you in your house take an AGPL program, host it for yourself, and use it yourself, nothing in the AGPL obligates you to publish the source changes.
In fact, even if you take AGPL software and put it behind a paywall and modify it, the only people who the license mandates you to provide the source code for are the people paying.
The AGPL is basically the GPL with the definition of "user" broadened to include people interacting with the software over the network.
And the GPL, again, only requires you to provide the source code, upon request, to users. If you only distribute GPL software behind a paywall, you personally only need to give the source to people paying.
Although in both these cases, nothing stops the person receiving that source code from publishing it themselves under the license's terms.
Google “examples of GPL enforced in court” for a few
Yeah it requires finding out, but how do you prove a whistleblower broke their NDA?
I'm missing something here; that's precisely what I'm arguing against. How can it do a clean-room reimplementation when the open source code is most likely in the training data? That only works if you train on everything BUT the implementation you want. It's definitely feasible, but wouldn't that be prohibitively expensive for most, if not all, projects?
But we'd be able to look at his clone code and see it's different, with different algorithms, etc. We could do a compare and see if there are any parts that were copied. It's certainly possible to clone GNU grep without copying any code and I don't think it would fail any copyright claims just because the GNU grep code is in the wild.
If that was the case, the moment any code is written under the GPL, it could never be reimplemented with a different license.
So instead of a human cloner, I use AI. Sure, the AI has access to the GPL code - every intelligence on the planet does. But does that mean that it's impossible to reimplement an idea? I don't think so.
Just because something is trivial enough to copy does not mean it was trivial to conceive of and codify. Mens rea really does matter when we are talking about defrauding intellectual property holders and stealing their opportunity.
The "clean room" aspect for that came in the way that the people writing the new implementation had no knowledge of the original source material, they were just given a specification to implement (see also Oracle v. Google).
If you're feeding an LLM GPL'd code and it "creates" something "new" from it, that's not "clean room", right?
At the end of the day the supposed reimplementation that the LLM generates isn't copyrightable either so maybe this is all moot.
I didn’t RTFA but I suppose that by clean room here they mean you feed the code to ”one” LLM and tell it to write a specification. Then you give the specification to ”another” LLM and tell it to implement the specification.
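If that guess is right, the pipeline would look something like the sketch below. Note that `summarize_llm` and `implement_llm` are purely hypothetical stand-ins for calls to two separate models, and the hardcoded strings are toy outputs for illustration; no real LLM API is being used here.

```python
# Hypothetical two-stage "clean room" pipeline, per the guess above.
# Both functions are placeholders, not real APIs.

def summarize_llm(source_code: str) -> str:
    # Stage 1: sees the original code, emits only a behavioral spec.
    return "SPEC: a function add(a, b) that returns the sum of its arguments"

def implement_llm(spec: str) -> str:
    # Stage 2: sees only the spec, never the protected source.
    return "def add(a, b):\n    return a + b"

gpl_source = "def add(x, y):\n    return x + y  # imagine this is GPL'd"

spec = summarize_llm(gpl_source)
clone = implement_llm(spec)

# The second model never receives gpl_source, mimicking the wall between
# spec writers and implementers in a human clean-room process.
print(clone)
```

Whether a spec written by a model that was itself trained on the original code counts as a "wall" at all is exactly the open question in this thread.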
It's great within the context of people who understand it, enlightening even. It sparks conversations and debates. But outside of that context, the ignorant wield it like a bludgeon, and it becomes dangerous to everyone around them. Look at all the satirical media around fascism: if you knew to criticize, you could laugh, but for fascists it was a call to arms.
"Those maintainers worked for free—why should they get credit?"
"Your shareholders didn't invest in your company so you could help strangers."
"For the first time, a way to avoid giving that pesky credit to maintainers."
"Full legal indemnification [...] through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright"
Try to take the stance of someone who doesn't know much about open source other than that it's a nuisance to use: this is a great idea! I wanted to use this tool that corporate said we couldn't touch, but now I can!
The company is literally named “bad/evil.”
EDIT: Reading it again it's quite obvious, I was just skimming at first, but still, damn. Hilarious.
Satire points out the absurd
E.g. Palantir, the surveillance analytics company named after the magic orb that purports to let you remotely view anything you want, but actually allows its creator to view you, while manipulating you into doing whatever they want by selectively showing you some things and not others.
https://github.com/chardet/chardet/issues/327
I really got fooled here for a second, but the unfortunate reality is that people will try this soon, and if open source is to survive, someone will have to litigate it, which will take years and millions of dollars to resolve.
But that's not true!
According to binding precedent, works created by an AI are not protected by copyright. NO ONE OWNS THEM!!!
I think maybe this is a good thing, but honestly, it's hard to tell.
If I want to clone some GPL code into an MIT license, and it ends up in the public domain because it can't be copyrighted, what do I care? I've still got the code I want, without the GPL.
We all have access to SOTA LLMs. If I want a "clean room" implementation of some OSS library, and I can choose between paying a third party to run a script to have AI rebuild the whole library for me and just asking Claude to generate the bits of the library I need, why would I choose to pay?
I think this argument applies to most straightforward "AI generated product" business ideas. Any dev can access a SOTA coding model for $20 a month. The value-add isn't "we used AI to do the thing fast", it's the wrapping around it.
Maybe in this case the "wrapping" is that some other company is taking on the legal risk?
You need the right kind of person, in the right life circumstances, to have this idea before it happens for real. By having publicity, it becomes vastly more likely that it finds someone who meets the former two criteria, like how it works with other crime (https://en.wikipedia.org/wiki/Copycat_crime). So thanks, Malus :P
It's the difference between a developer taking a job at Palantir out of college because nobody had a better offer, and a guy spending years in his basement designing "Immigrant Spotter+" in the hopes of selling it to the government. Sure, they're both evil, but lots of people pick the first thing, and hardly anybody does the second.
It's an inevitable outcome of automatic code generation that people will do this all the time without thinking about it.
Example: you want a feature in your project, and you know this github repo implements it, so you tell an AI agent to implement the feature and link to the github repo just for reference.
You didn't tell the agent to maliciously reimplement it, but the end result might be the same - you just did it earnestly.
WDYM? LLMs are essentially this.
Not that it matters, I just think the joke is more fun if they are different.