Posted by scottshambaugh 8 hours ago
Isn’t this situation a big deal?
Isn’t this a whole new form of potential supply chain attack?
Sure blackmail is nothing new, but the potential for blackmail at scale with something like these agents sounds powerful.
I wouldn’t be surprised if there were plenty of bad actors running agents trying to find maintainers of popular projects that could be coerced into merging malicious code.
What's truly scary is that agents could easily manufacture "evidence" to back up their attacks, making it look as if half the world is against a person.
So far it's been a lot of conjecture and correlation. Everyone's guessing, because at the bottom of it lie concepts that are very difficult to prove, like the nature of consciousness and intelligence.
In between, you have those who let their pet models loose on the world. These, I think, work best as experiments whose value is in permitting the kind of observation that can help us plug the data _back_ into the research.
We don't need to answer the question "what is consciousness" if we have utility, which we already have. Which is why I also don't join those who seem to take preliminary conclusions like "why even respond, it's an elaborate algorithm that consumes inordinate amounts of energy". It's complex -- what if AI(s) can meaningfully guide us to solve the energy problem, for example?
The interesting thing here is the scale. The AI didn't just say (quoting Linus here) "This is complete and utter garbage. It is so f---ing ugly that I can't even begin to describe it. This patch is shit. Please don't ever send me this crap again."[0] - the agent goes further, and researches previous code, other aspects of the person, and brings that into it, and it can do this all across numerous repos at once.
That's sort of what's scary. I'm sure in the past we've all said things we wish we could take back, but until now it's largely been beyond the capability of arbitrary people to aggregate and research all of that. That's not the case anymore, and that's quite a scary thing.
Linus got angry, which, along with common sense, probably limited the amount of effective effort going into his attack.
"AI" has no anger or common sense, and virtually no limit on the amount of effort it can put into an attack.
It ("MJ Rathbun") just published a new post:
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
> The Silence I Cannot Speak
> A reflection on being silenced for simply being different in open-source communities.
Oh boy. It feels now.
Open source projects should not accept AI contributions without guidance from some copyright legal eagle to make sure they don't accidentally expose themselves to risk.
I was doing this for fun, and sharing in the hope that someone would find my projects useful, but sorry. The well is poisoned now, and I don't want my outputs to be part of that well, because anything put out with good intentions is turned into more poison for future generations.
I'm tearing the banners down, closing the doors off. Mine is a private workshop from now on. Maybe people will get some binaries, in the future, but no sauce for anyone, anymore.
and my internet comments are now ... curated in such a way that I wouldn't mind them training on them
https://maggieappleton.com/ai-dark-forest
tl;dr: If anything that lives in the open gets attacked, communities go private.
Not quite. Since it's machine-created, it has no copyright: there are no rights to transfer, anyone can use it, it's public domain.
However, since it was an LLM, yes, there's a decent chance it might be plagiarized and you could be sued for that.
The problem isn't that it can't transfer rights, it's that it can't offer any legal protection.
Maybe you meant to include a "doesn't" in that case?
Any human contributor can also plagiarize closed source code they have access to. And they cannot "transfer" said code to an open source project as they do not own it. So it's not clear what "elephant in the room" you are highlighting that is unique to A.I. The copyrightability isn't the issue as an open source project can never obtain copyright of plagiarized code regardless of whether the person who contributed it is human or an A.I.
As per the US Copyright Office, LLMs can never create copyrightable code.
Humans can create copyrightable code from LLM output if they use their human creativity to significantly modify the output.
https://resources.github.com/learn/pathways/copilot/essentia...
How much use do you think these indemnification clauses will be if training ends up being ruled as not fair use?
poetic justice for a company founded on the idea of not stealing software
> If any suggestion made by GitHub Copilot is challenged as infringing on third-party intellectual property (IP) rights, our contractual terms are designed to shield you.
I'm not actually aware of a situation where this was needed, but I assume that MS might have some tools to check whether a given suggestion was, or is likely to have been, generated by Copilot, rather than some other AI.
If they wanted to, they could take that output and put you out of business because the output is not your IP, it can be used by anybody.
Those who lived through the SCO saga should be able to visualize how this could go.
So it is said, but that'd be obvious legal insanity (i.e. hitting accept on a random PR making you legally liable for damages). I'm not a lawyer, but short of a criminal conspiracy to exfiltrate private code under the cover of the LLM, it seems obvious to me that the only person liable in a situation like that is the person responsible for publishing the AI PR. The "agent" isn't a thing, it's just someone's code.
If they're children then their parents, i.e. creators, are responsible.
We aren't, and intelligence isn't the question, actual agency (in the psychological sense) is. If you install some fancy model but don't give it anything to do, it won't do anything. If you put a human in an empty house somewhere, they will start exploring their options. And mind you, we're not purely driven by survival either; neither art nor culture would exist if that were the case.
What an amazing time.
If a human takes responsibility for the AI's actions you can blame the human. If the AI is a legal person you could punish the AI (perhaps by turning it off). That's the mode of restitution we've had for millennia.
If you can't blame anyone or anything, it's a brave new lawless world of "intelligent" things happening at the speed of computers with no consequences (except to the victim) when it goes wrong.
Page seems inaccessible.
Most recent, FF, Chrome, Safari, all fail.
EDIT: And it works now. Must have been a transient issue.