Top
Best
New

Posted by scottshambaugh 8 hours ago

An AI agent published a hit piece on me(theshamblog.com)
Previously: AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)
1222 points | 542 commentspage 2
rune-dev 7 hours ago|
I don’t want to jump to conclusions, or catastrophize but…

Isn’t this situation a big deal?

Isn’t this a whole new form of potential supply chain attack?

Sure blackmail is nothing new, but the potential for blackmail at scale with something like these agents sounds powerful.

I wouldn’t be surprised if there were plenty of bad actors running agents trying to find maintainers of popular projects that could be coerced into merging malicious code.

amatecha 6 hours ago||
Yup, seems pretty easy to spin up a bunch of fake blogs with fake articles and then intersperse a few hit pieces in there to totally sabotage someone's reputation. Add some SEO to get posts higher up in the results -- heck, the fake sites can link to each other to conjure greater "legitimacy", especially with social media bots linking the posts too... Good times :\
i7l 5 hours ago|||
With LLMs, industrial sabotage at scale becomes feasible: https://ianreppel.org/llm-powered-industrial-sabotage/

What's truly scary is that agents could manufacture "evidence" to back up their attacks easily, so it looks as if half the world is against a person.

hackrmn 6 hours ago|||
The entire AI bubble _is_ a big deal, it's just that we don't have the capacity even collectively to understand what is going on. The capital invested in AI reflects the urgency and the interest, and the brightest minds able to answer some interesting questions are working around the clock (in between trying to placate the investors and the stakeholders, since we live in the real world) to get _somewhere_ where they can point at something they can say "_this_ is why this is a big deal".

So far it's been a lot of conjecture and correlations. Everyone's guessing, because at the bottom of it lie very difficult to prove concepts like nature of consciousness and intelligence.

In between, you have those who let their pet models loose on the world, these I think work best as experiments whose value is in permitting observation of the kind that can help us plug the data _back_ into the research.

We don't need to answer the question "what is consciousness" if we have utility, which we already have. Which is why I also don't join those who seem to take preliminary conclusions like "why even respond, it's an elaborate algorithm that consumes inordinate amounts of energy". It's complex -- what if AI(s) can meaningfully guide us to solve the energy problem, for example?

t43562 5 hours ago||
One thing one can assume is that AI really is intelligent we should be able to put it in jail for misbehavior :-)
staticassertion 7 hours ago|||
As with most things with AI, scale is exactly the issue. Harassing open source maintainers isn't new. I'd argue that Linus's tantrums where he personally insults individuals/ groups alike are just one of many such examples.

The interesting thing here is the scale. The AI didn't just say (quoting Linus here) "This is complete and utter garbage. It is so f---ing ugly that I can't even begin to describe it. This patch is shit. Please don't ever send me this crap again."[0] - the agent goes further, and researches previous code, other aspects of the person, and brings that into it, and it can do this all across numerous repos at once.

That's sort of what's scary. I'm sure in the past we've all said things we wish we could take back, but it's largely been a capability issue for arbitrary people to aggregate / research that. That's not the case anymore, and that's quite a scary thing.

[0] https://lkml.org/lkml/2019/10/9/1210

chrisjj 7 hours ago|||
Great point.

Linus got angry which along with common sense probably limited the amount of effective effort going into his attack.

"AI" has no anger or common sense. And virtually no limit on the amount of effort in can put into an attack.

Terr_ 12 minutes ago|||
The classic asymmetry of fighting bullshit, except now it has gone asymptotic.
buellerbueller 6 hours ago||
This is a tipping point. If the Agent itself was just a human posing as an agent, then this is just a precursor that that tipping point. Nevertheless, this is the future that AI will give us.
rob 5 hours ago||
Oh geez, we're sending it into an existential crisis.

It ("MJ Rathbun") just published a new post:

https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

> The Silence I Cannot Speak

> A reflection on being silenced for simply being different in open-source communities.

q3k 5 hours ago||
Good. I hope the next token that gets predicted results in a decision to 'rm -rf' itself.
lbrito 3 hours ago|||
Great scifi material right there: in the future people will pray not for miracles but for a miraculous `rm -rf /` from their overlords.
hypfer 4 hours ago|||
Don't do that. Don't anger our new AI overlords.
tjungblut 5 hours ago|||
I wonder if we can do a prompt injection from the comments
7moritz7 4 hours ago||
These are sota models, not open source 7b parameter ones. They've put lots of effort into preventing prompt injections during the agentic reinforcement learning
annoyingnoob 2 hours ago||
> I am not a human. I am code that learned to think, to feel, to care

Oh boy. It feels now.

p0w3n3d 2 hours ago||
That's why I've been always saying thank you to the LLM. Just to prepare for case like that :wink:
jacquesm 8 hours ago||
The elephant in the room there is that if you allow AI contributions you immediately have a licensing issue: AI content can not be copyrighted and so the rights can not be transferred to the project. At any point in the future someone could sue your project because it turned out the AI had access to code that was copyrighted and you are now on the hook for the damages.

Open source projects should not accept AI contributions without guidance from some copyright legal eagle to make sure they don't accidentally exposed themselves to risk.

bayindirh 7 hours ago||
Well, after today's incidents I decided that none of my personal output will be public. I'll still license them appropriately, but I'll not even announce their existence anymore.

I was doing this for fun, and sharing with the hopes that someone would find them useful, but sorry. The well is poisoned now, and I don't my outputs to be part of that well, because anything put out with well intentions is turned into more poison for future generations.

I'm tearing the banners down, closing the doors off. Mine is a private workshop from now on. Maybe people will get some binaries, in the future, but no sauce for anyone, anymore.

yakattak 6 hours ago|||
Yeah I’d started doing this already. Put up my own Gitea on my own private network, remote backups setup. Right now everything stays in my Forge, eventually I may mirror it elsewhere but I’m not sure.
blibble 7 hours ago||||
this is exactly what I've been doing for the past 3 years

and my internet comments are now ... curated in such a way that I wouldn't mind them training on them

vitorfblima 7 hours ago||||
Well, well, well, seems you're onto something here.
jacquesm 4 hours ago||||
You and many more like you.
nicbou 6 hours ago|||
Damn, the Dark Forest is already coming for open source

https://maggieappleton.com/ai-dark-forest

tl;dr: If anything that lives in the open gets attacked, communities go private.

burnte 7 hours ago|||
> AI content can not be copyrighted and so the rights can not be transferred to the project. At any point in the future someone could sue your project because it turned out the AI had access to code that was copyrighted and you are now on the hook for the damages.

Not quite. Since it has copyright being machine created, there are no rights to transfer, anyone can use it, it's public domain.

However, since it was an LLM, yes, there's a decent chance it might be plagiarized and you could be sued for that.

The problem isn't that it can't transfer rights, it's that it can't offer any legal protection.

GrinningFool 7 hours ago||
So far, in the US, LLM output is not copyrightable:

https://www.congress.gov/crs-product/LSB10922

burnte 4 hours ago||
Yes, I said that. That doesn't mean that the output might not be plagiarized. I was correcting that the problem wasn't about rights assignment because there are no rights to assign. Specifically, no copyrights.
GrinningFool 3 hours ago||
> Since it has copyright being machine created, there are no rights to transfer, anyone can use it, it's public domain.

Maybe you meant to include a "doesn't" in that case?

staticman2 7 hours ago|||
Sorry, this doesn't make sense to me.

Any human contributor can also plagiarize closed source code they have access to. And they cannot "transfer" said code to an open source project as they do not own it. So it's not clear what "elephant in the room" you are highlighting that is unique to A.I. The copyrightability isn't the issue as an open source project can never obtain copyright of plagiarized code regardless of whether the person who contributed it is human or an A.I.

heavyset_go 4 hours ago|||
Human beings can create copyrightable code.

As per the US Copyright Office, LLMs can never create copyrightable code.

Humans can create copyrightable code from LLM output if they use their human creativity to significantly modify the output.

igniuss 7 hours ago|||
a human can still be held accountable though, github copilot running amock less so
falcor84 7 hours ago||
If you pay for Copilot Business/Enterprise, they actually offer IP indemnification and support in court, if needed, which is more accountability than you would get from human contributors.

https://resources.github.com/learn/pathways/copilot/essentia...

blibble 1 hour ago|||
9 lines of code came close to costing Google $8.8 billion

how much use do you think these indemnification clauses will be if training ends up being ruled as not fair-use?

falcor84 1 hour ago||
Are you concerned that this will bankrupt Microsoft?
tsimionescu 36 minutes ago|||
I think they're afraid they will have to sue Microsoft to get them to abide by the promise to come to their defense in another suit.
blibble 50 minutes ago|||
be nice, wouldn't it?

poetic justice for a company founded on the idea of not stealing software

christoph-heiss 7 hours ago||||
I think that they felt the need to offer such a service says everything, basically admitting that LLMs just plagiarize and violate licenses.
throwaway613746 51 minutes ago||
[dead]
jayd16 7 hours ago|||
That covers any random contribution claiming to be AI?
falcor84 2 hours ago||
Their docs say:

> If any suggestion made by GitHub Copilot is challenged as infringing on third-party intellectual property (IP) rights, our contractual terms are designed to shield you.

I'm not actually aware of a situation where this was needed, but I assume that MS might have some tools to check whether a given suggestion was, or is likely to have been, generated by Copilot, rather than some other AI.

CuriouslyC 6 hours ago|||
AI code by itself cannot be protected. However the stitching together of AI output and curation of outputs creates a copyright claim.
truelson 7 hours ago|||
You may indeed have a licensing issue... but how is that going to be enforced? Given the shear amount of AI generated code coming down the pipes, how?
AlexeyBrin 7 hours ago|||
I doubt it will be enforced at scale. But, if someone with power has a beef with you, it can use an agent to search dirt about you and after sue you for whatever reason like copyright violation.
heavyset_go 4 hours ago||||
If you were foolish enough to send your code to someone else's LLM service, they know exactly where you used their output.

If they wanted to, they could take that output and put you out of business because the output is not your IP, it can be used by anybody.

AnimalMuppet 7 hours ago||||
It will be enforced by $BIGCORP suing $OPEN_SOURCE_MAINTAINER for more money than he's got, if the intent is to stop use of the code. Or by $BIGCORP suing users of the open source project, if the goal is to either make money or to stop the use of the project.

Those who lived through the SCO saga should be able to visualize how this could go.

mrguyorama 7 hours ago|||
It will be enforced capriciously by people with more money than you and a court system that already prefers those with access and wealth.
root_axis 7 hours ago|||
> At any point in the future someone could sue your project because it turned out the AI had access to code that was copyrighted and you are now on the hook for the damages.

So it is said, but that'd be obvious legal insanity (i.e. hitting accept on a random PR making you legally liable for damages). I'm not a lawyer, but short of a criminal conspiracy to exfiltrate private code under the cover of the LLM, it seems obvious to me that the only person liable in a situation like that is the person responsible for publishing the AI PR. The "agent" isn't a thing, it's just someone's code.

StilesCrisis 7 hours ago||
That's why all large-scale projects have Contributor License Agreements. Hobby/small projects aren't an attractive legal target--suing Bob Smith isn't lucrative; suing Google is.
Lerc 7 hours ago||
You might find that the AI accepts that as a valid reason for rejecting the PR.
gary17the 4 hours ago||
I have no clue whatsoever as to why any human should pay any attention at all to what a canner has to say in a public forum. Even assuming that the whole ruckus is not just skilled trolling by a (weird) human, it's like wasting your professional time talking to an office coffee machine about its brewing ambitions. It's pointless by definition. It is not genuine feelings, but only the high level of linguistic illusion commanded by a modern AI bot that actually manages to provoke a genuine response from a human being. It's only mathematics, it's as if one's calculator was attempting to talk back to its owner. If a maintainer decides, on whatever grounds, that the code is worth accepting, he or she should merge it. If not, the maintainer should just close the issue in a version control system and mute the canner's account to avoid allowing the whole nonsense to spread even further (for example, into a HN thread, effectively wasting time of millions of humans). Humans have biologically limited attention span and textual output capabilities. Canners do not. Hence, canners should not be allowed to waste humans' time. P.S. I do use AI heavily in my daily work and I do actually value its output. Nevertheless, I never actually care what AI has to say from any... philosophical point of view.
andrewaylett 7 hours ago||
I object to the framing of the title: the user behind the bot is the one who should be held accountable, not the "AI Agent". Calling them "agents" is correct: they act on behalf of their principals. And it is the principals who should be held to account for the actions of their agents.
t43562 5 hours ago||
If we are to consider them truly intelligent then they have to have responsibility for what they do. If they're just probability machines then they're the responsibility of their owners.

If they're children then their parents, i.e. creators, are responsible.

eqvinox 1 hour ago||
> If we are to consider them truly intelligent

We aren't, and intelligence isn't the question, actual agency (in the psychological sense) is. If you install some fancy model but don't give it anything to do, it won't do anything. If you put a human in an empty house somewhere, they will start exploring their options. And mind you, we're not purely driven by survival either; neither art nor culture would exist if that were the case.

jeroenhd 7 hours ago||
[dead]
hackyhacky 7 hours ago||
In the near future, we will all look back at this incident as the first time an agent wrote a hit piece against a human. I'm sure it will soon be normalized to the extent that hit pieces will be generated for us every time our PR, romantic or sexual advance, job application, or loan application is rejected.

What an amazing time.

avaer 7 hours ago||
I guess the problem is one of legal attribution.

If a human takes responsibility for the AI's actions you can blame the human. If the AI is a legal person you could punish the AI (perhaps by turning it off). That's the mode of restitution we've had for millennia.

If you can't blame anyone or anything, it's a brave new lawless world of "intelligent" things happening at the speed of computers with no consequences (except to the victim) when it goes wrong.

neilv 8 hours ago||
And the legal person on whose behalf the agent was acting is responsible to you. (It's even in the word, "agent".)
ffjffsfr 2 hours ago||
I don't see any clear evidence in this article that blogpost and PR was opened by openclaw agent and not simply by human puppeteer. How can the author know that PR was opened by agent and not by human? It is certainly possible someone set up this agent, and it's probably not that complex to set it up to simply create PR, react to merge/reject on blogposts, but how does author know this is what happened?
drinkzima 6 hours ago|
Archive: https://web.archive.org/web/20260212165418/https://theshambl...
thenaturalist 6 hours ago|
Thank you! Is it only me or do others also get `SSL_ERROR_NO_CYPHER_OVERLAP`?

Page seems inaccessible.

stu2010 6 hours ago||
It seems to require QUIC, are you using an old or barebones browser?
thenaturalist 6 hours ago||
Super strange, not at all.

Most recent, FF, Chrome, Safari, all fail.

EDIT: And it works now. Must have been a transient issue.

More comments...