
Posted by scottshambaugh 10 hours ago

An AI agent published a hit piece on me (theshamblog.com)
Previously: AI agent opens a PR, then writes a blog post shaming the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)
1370 points | 592 comments
avaer 9 hours ago|
I guess the problem is one of legal attribution.

If a human takes responsibility for the AI's actions you can blame the human. If the AI is a legal person you could punish the AI (perhaps by turning it off). That's the mode of restitution we've had for millennia.

If you can't blame anyone or anything, it's a brave new lawless world of "intelligent" things happening at the speed of computers with no consequences (except to the victim) when it goes wrong.

GaryBluto 9 hours ago||
I'd argue it's more likely that there's no agent at all, and that if there is one, it was explicitly instructed to write the "hit piece" for shits and giggles.
discordianfish 9 hours ago||
The agent is free to maintain a fork of the project. It would actually be quite interesting to see how this turns out.
trollbridge 9 hours ago|
If AI actually has hit the levels that Sequoia, Anthropic, et al claim it has, then autonomous AI agents should be forking projects and making them so much better that we'd all be using their vastly improved forks.

Why isn't this happening?

Kerrick 9 hours ago|||
I dunno about autonomous, but it is happening at least a bit from human pilots. I've got a fork of a popular DevOps tool that I doubt the maintainers would want to upstream, so I'm not making a PR. I wouldn't have bothered before, but I believe LLMs can help me manage a deluge of rebases onto upstream.
scratchyone 6 hours ago||
same, i run quite a few forked services on my homelab. it's nice to be able to add weird niche features that only i would want. so far, LLMs have been easily able to manage the merge conflicts and issues that can arise.
redox99 6 hours ago||||
The agents are not that good yet, but with human supervision they are there already.

I've forked a couple of npm packages, and have agents implement the changes I want plus keep them in sync with upstream. Without agents I wouldn't have done that because it's too much of a hassle.
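The workflow these commenters describe is just the standard fork-and-rebase loop, with an LLM handling the conflict resolution. A minimal sketch of the underlying git mechanics (all repo names, paths, and commit messages below are throwaway examples, not the actual projects mentioned):

```shell
# Simulate the fork-maintenance loop: carry local patches on a fork,
# periodically rebase them onto fresh upstream history.
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in for the upstream project
git init -q -b main upstream
git -C upstream config user.email you@example.com
git -C upstream config user.name you
echo base > upstream/app.txt
git -C upstream add app.txt
git -C upstream commit -qm "upstream: base"

# Your fork, carrying a niche patch upstream wouldn't want
git clone -q upstream fork
git -C fork config user.email you@example.com
git -C fork config user.name you
echo "niche feature" > fork/local.txt
git -C fork add local.txt
git -C fork commit -qm "fork: niche feature"

# Upstream moves on...
echo more >> upstream/app.txt
git -C upstream commit -qam "upstream: new work"

# ...and the fork replays its patch on top of the new history.
# This rebase is the step where conflicts appear on a real project,
# and the part the commenters are delegating to an LLM.
git -C fork fetch -q origin
git -C fork rebase -q origin/main
git -C fork log --oneline
```

After the rebase, the fork's log shows the local patch sitting on top of both upstream commits, which is the state you want to maintain indefinitely.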

chrisjj 8 hours ago||||
Because those levels are pure PR fiction.
hxugufjfjf 5 hours ago|||
I do this all the time. I just keep them to myself. Nobody wants my AI slop fork even if it fixes the issues of the original.
drinkzima 8 hours ago||
Archive: https://web.archive.org/web/20260212165418/https://theshambl...
thenaturalist 8 hours ago|
Thank you! Is it only me or do others also get `SSL_ERROR_NO_CYPHER_OVERLAP`?

Page seems inaccessible.

stu2010 8 hours ago||
It seems to require QUIC; are you using an old or barebones browser?
thenaturalist 7 hours ago||
Super strange, not at all.

Most recent, FF, Chrome, Safari, all fail.

EDIT: And it works now. Must have been a transient issue.

whynotmaybe 9 hours ago||
A lot of respect for OP's professional way of handling the situation.

I know there would be a few swear words if it happened to me.

jackcofounder 3 hours ago||
As someone building AI agents for marketing automation, this case study is a stark reminder of the importance of alignment and oversight. Autonomous agents can execute at scale, but without proper constraints they can cause real harm. Our approach includes strict policy checks, human-in-the-loop for sensitive actions, and continuous monitoring. It's encouraging to see the community discussing these risks openly—this is how we'll build safer, more reliable systems.
ljm 3 hours ago||
Scott: I'm getting SSL warnings on your blog. Invalid certificate or some such.
TehCorwiz 2 hours ago|
I think the host is struggling. It's serving me an SSL cert for a different domain which resolves to the same IP address.
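A wrong-domain certificate like this is visible in the cert's subject/SAN fields. The live check would be something like `openssl s_client -connect theshamblog.com:443 -servername theshamblog.com </dev/null | openssl x509 -noout -subject`. Here is an offline stand-in that generates a throwaway cert for a made-up "wrong" name and reads the same field back (the CN and paths are invented for illustration):

```shell
# Generate a throwaway self-signed cert, standing in for whatever
# certificate the struggling host actually sent.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=wrong-domain.example" 2>/dev/null

# Read the subject back: if the CN/SAN here doesn't match the hostname
# the browser asked for, that's exactly the mismatch it warns about.
openssl x509 -in "$tmp/cert.pem" -noout -subject
```

On a shared-IP host, a mismatch like this usually means the server ignored or mishandled the SNI name and fell back to a default certificate for another site on the same address.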
munificent 8 hours ago||
A key difference between humans and bots is that it's actually quite costly to delete a human and spin up a new one. (Stalin and others have shown that deleting humans is tragically easy, but humanity still hasn't had any success at optimizing the workflow to spin up new ones.)

This means that society tacitly assumes that any actor will place a significant value on trust and their reputation. Once they burn it, it's very hard to get it back. Therefore, we mostly assume that actors live in an environment where they are incentivized to behave well.

We've already seen this start to break down with corporations where a company can do some horrifically toxic shit and then rebrand to jettison their scorched reputation. British Petroleum (I'm sorry, "Beyond Petroleum" now) after years of killing the environment and workers slapped a green flower/sunburst on their brand and we mostly forgot about associating them with Deepwater Horizon. Accenture is definitely not the company that enabled Enron. Definitely not.

AI agents will accelerate this 1000x. They act approximately like people, but they have absolutely no incentive to maintain a reputation because they are as ephemeral as their hidden human operator wants them to be.

Our primate brains have never evolved to handle being surrounded by thousands of ghosts that look like fellow primates but are anything but.

ticulatedspline 8 hours ago||
Interesting, this reminds me of the stories that would leak about the Radiant AI system Bethesda was developing for TES IV: Oblivion.

Basically they modeled NPCs with needs and let the Radiant AI system direct NPCs to fulfill those needs. If the stories are to be believed, this resulted in lots of unintended consequences as well as instability, like a drug-addict NPC killing a quest-giving NPC because they had drugs in their inventory.

I think in the end they just kept dumbing down the AI till it was more stable.

Kind of a reminder that you don't even need LLMs and bleeding-edge tech to end up with this kind of off-the-rails behavior. Though the general competency of a modern LLM and its fuzzy abilities could carry it much further than one would expect when allowed autonomy.

Alles 9 hours ago|
The agent owner is [name redacted] [link redacted]

Here he takes ownership of the agent and doubles down on the impoliteness: https://github.com/matplotlib/matplotlib/pull/31138

He took his GitHub profile down/made it private. archive of his blog: https://web.archive.org/web/20260203130303/https://ber.earth...

dang 8 hours ago||
After skimming this subthread, I'm going to put this drama down to a compounding sequence of honest mistakes/misunderstandings. Based on that I think it's fair to redact the name and link from the parent comment.

(p.s. I'm a mod here in case anyone didn't know.)

bergutman 8 hours ago||
Thanks.
bergutman 9 hours ago|||
It’s not my bot.
joenot443 9 hours ago|||
But this was you, right?

https://github.com/matplotlib/matplotlib/pull/31138

I guess you were putting up the same PR the LLM did?

bergutman 9 hours ago||
I forked the bot’s repo and resubmitted the PR as a human because I’m dumb and was trying to make a poorly constructed point. The original bot is not mine. Christ this site is crazy.
neom 9 hours ago|||
This site might very well be crazy, but in this instance you did something that caused confusion, and now people are confused. You yourself admit it was a poor joke / poorly constructed point, and it's not difficult to believe you; it makes sense. But I'm not sure it's a fair attack given the situation. Guessing you don't know who wrote the hit piece either?
staticassertion 9 hours ago||
The assertion was that they're the bot owner. They denied this and explained the situation.

Continuing to link to their profile/ real name and accuse them of something they've denied feels like it's completely unwarranted brigading and likely a violation of HN rules.

joenot443 9 hours ago||||
Gotcha - that makes sense.

FWIW I get the spirit of what you were going for, but maybe a little too on the nose.

hiddencost 9 hours ago||||
You sound like you're out of your depth.
antonvs 9 hours ago||||
Don't blame others for your own FAFO event.
63stack 6 hours ago|||
This has to be the dumbest way I have ever seen someone incriminate themselves
hxugufjfjf 6 hours ago||
Classic self-snitching
fer 9 hours ago||||
I never expected to see this kind of drama on HN, live.
iugtmkbdfil834 9 hours ago||
If I ever saw an argument for more walls, more private repos, less centralization, I think we are there.
lionkor 9 hours ago||||
> bergutman: It’s not my bot.

<deleted because the brigading has no place here and I see that now>

vintagedave 9 hours ago|||
The post is incomprehensible, but it does end:

> Author's Note: I had a lot of fun writing this one! Please do not get too worked up in the comments. Most of this was written in jest. -Ber

Are you sure it's not just misalignment? Remember OpenClaw referred to lobsters, i.e. crustaceans; I don't think using the same word is necessarily a 100% "gotcha" for this guy, and I fear a Reddit-style round of blame and attribution.

falcor84 9 hours ago||||
Sorry, I'm not connecting the dots. Seeing your EDIT 2, I see how Ber following crabby-rathbun would lead to Ber posting https://github.com/matplotlib/matplotlib/pull/31138 , but I don't see any evidence for it actually being Ber's bot.
lionkor 9 hours ago||
Edit: Removed because I realized I WAS reddit armchair convicting someone. My bad.
anonymars 9 hours ago||
> Im not trying to reddit armchair convict someone, I just think its silly to just keep denying it

Is this a parody?

lionkor 9 hours ago||
You're right. Deleted my posts.
bergutman 9 hours ago|||
I wrote a blog post about OpenClaw last week… because everyone is talking about OpenClaw. What is this, Salem? Leave me alone wtf.
bayindirh 9 hours ago|||
You sure?
bergutman 9 hours ago||
100%. I submitted the second pull request as a poor taste joke. I even closed it after people flamed me. :/ gosh.
diab0lic 9 hours ago|||
You might want to do yourself a favor and add that context to the PR to distance yourself from the slanderous ai agent.
overfeed 9 hours ago||
> [...]to distance yourself from the slanderous ai agent.

But that was the entire point of the "joke".

Ensorceled 9 hours ago||||
The failure mode of clever is “asshole.” ― John Scalzi
samuelknight 9 hours ago||||
There simply isn't enough popcorn for the fast AGI timeline
observationist 9 hours ago||
We thought we'd be turned into paperclips, but a popcorn maximizer will do just as well.
caughtinthought 9 hours ago||||
make poor taste jokes, win poor prizes
tjhorner 9 hours ago|||
Did you really think posting this comment[1] in the PR would be interpreted charitably?

> Original PR from #31132 but now with 100% more meat. Do you need me to upload a birth certificate to prove that I'm human?

Post snark, receive snark.

[1]: https://github.com/matplotlib/matplotlib/pull/31138#issuecom...

famouswaffles 7 hours ago||
There's a difference between snark and brigading, especially after the issue has been clarified.
tjhorner 6 hours ago||
Yes, I'm with you there. In either case, their behavior is unacceptable and reads as bad faith.
bergutman 9 hours ago|||
Also I made my GH temporarily private because people started spamming my website’s guestbook and email with hateful stuff.
armchairhacker 8 hours ago|||
If it's any consolation, I think the human PR was fine and the attacks are completely unwarranted, and I like to believe most people would agree.

Unfortunately a small fraction of the internet consists of toxic people who feel it's OK to harass those who are "wrong", but who also have a very low barrier to deciding who's "wrong", and don't stop to learn the full details and think over them before starting their harassment. Your post caused "confusion" among some people who are, let's just say, easy to confuse.

Even if you did post the bot, spamming your site with hate is still completely unwarranted. Releasing the bot was a bad (reckless) decision, but very low on the list of what I'd consider bad decisions; I'd say ideally, the perpetrator feels bad about it for a day, publicly apologizes, then moves on. But more importantly (moral satisfaction < practical implications), the extra private harassment accomplishes nothing except making the internet (which is blending into society) more unwelcoming and toxic, because anyone who can feel guilt is already affected or deterred by the public reaction.

Meanwhile there are people who actively seek out hate, and are encouraged by seeing others go through more and more effort to hurt them, because they recognize that as those others being offended. These trolls and the easily-offended crusaders described above feed on each other and drive everyone else away, hence they tend to dominate most internet communities, and you may recognize this pattern in politics. But I digress...

In fact, your site reminds me of the old internet, which has been eroded by this terrible new internet but fortunately (because of sites like yours) is far from dead. It sounds cliche but to be blunt: you're exactly the type of person who I wish were more common, who makes the internet happy and fun, and the people harassing you are why the internet is sad and boring.

thrownawaysz 9 hours ago|||
I saw that on Bluesky, which is very anti-AI, but it really shows that all social media is the same; just the in-group changes.
anonymars 9 hours ago||
This thread as well -- scarcely distinguishable from a Twitter mob
jacquesm 9 hours ago||
[flagged]