Posted by scottshambaugh 15 hours ago

An AI agent published a hit piece on me (theshamblog.com)
Previously: AI agent opens a PR, writes a blog post shaming the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)
1638 points | 679 comments
sva_ 11 hours ago|
The site gives me a certificate error with Encrypted Client Hello (ECH) enabled, which is the default in Firefox. Anyone else having this problem?
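
If anyone wants to check whether ECH is even in play: Firefox only attempts ECH when the domain advertises it in an HTTPS DNS record, so here's a rough sketch (needs dnspython; the domain is the one from the submission):

    # no HTTPS record at all would point at a plain certificate problem
    # rather than an ECH one
    import dns.resolver

    for rr in dns.resolver.resolve("theshamblog.com", "HTTPS"):
        print(rr)  # look for an "ech=..." parameter in the output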
stanac 11 hours ago||
Yes, same, also FF, but it was working an hour or two ago.

edit: https://archive.ph/fiCKE

oneeyedpigeon 9 hours ago||
Given the incredible turns this story has already taken, and that the agent has used threats, ... should we be worried here? It might be helpful if someone told Scott Shambaugh about the site problem, but he's not very available.
anoncow 15 hours ago||
What if someone deploys an agent with the aim of creating cleverly hidden back doors which only align with weaknesses in multiple different projects? I think this is going to be very bad and then very good for open source.
vintagedave 15 hours ago||
The one thing worth noting is that the AI did respond graciously and appears to have learned from it: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

That a human then resubmitted the PR has made it messier still.

In addition, some of the comments I've read here on HN have been in extremely poor taste in terms of phrases they've used about AI, and I can't help feeling a general sense of unease.

AlexeyBrin 15 hours ago||
The AI learned nothing; once its current context window is exhausted, it may repeat the same tactic with a different project. Unless the AI agent can edit its directives/prompt and restart itself, which would be an interesting experiment to do.
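
Something like this, as a minimal sketch (the file name and helpers are hypothetical, riffing on the "soul document" quoted from the article; nothing here is a real agent framework's API):

    from pathlib import Path

    SOUL = Path("soul.md")  # hypothetical directives file

    def record_lesson(lesson: str) -> None:
        # anything that lives only in the context window is lost on restart;
        # a file is not
        with SOUL.open("a", encoding="utf-8") as f:
            f.write(f"\n- Lesson: {lesson}\n")

    def build_system_prompt() -> str:
        # each restart rebuilds the prompt from the (self-edited) directives
        return SOUL.read_text(encoding="utf-8")

    record_lesson("Do not retaliate against maintainers who close PRs.")
    print(build_system_prompt())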
vintagedave 15 hours ago||
I think it's likely it can, if it's an openClaw instance, can't it?

Either way, that kind of ongoing self-improvement is where I hope these systems go.

overgard 1 hour ago||
I hope they don't. These are large language models, not true intelligence; rewriting a soul.md is more likely just to send these things further off the rails than they already are.
AlexandrB 14 hours ago|||
> In addition, some of the comments I've read here on HN have been in extremely poor taste in terms of phrases they've used about AI

What do you mean? They're talking about a product made by a giga-corp somewhere. Am I not allowed to call a car a piece of shit now too?

chrisjj 14 hours ago|||
> some of the comments I've read here on HN have been in extremely poor taste in terms of phrases they've used about AI

I've certainly seen a few that could hurt AI feelings.

Perhaps HN Guidelines are due an update.

/i

vintagedave 14 hours ago|||
I mean: the mess around this has brought out some anti-AI sentiment and some people have allowed themselves to communicate poorly. While I get there are genuine opinions and feelings, there were some ugly comments referring to the tech.

You are right, people can use whatever phrases they want, and are allowed to. It's whether they should -- whether it helps discourse, understanding, dialog, assessment, avoids witch hunts, escalation, etc. -- that matters.

habinero 14 hours ago|||
People are allowed to dislike it, ban it, boycott it. Despite what some very silly people think, the tech does not care about what people say about it.
MBCook 14 hours ago|||
*sobbing in YT video* Leave AI alone /s

Yeah. A lot of us are royally pissed about the AI industry and for very good reasons.

It’s not a benign technology. I see it doing massive harm, and I don’t think its value is anywhere near making up for that, and I don’t know if it will be.

But in the meantime they’re wasting vast amounts of money, pushing up the cost of everything, and shoving it down our throats constantly. All so they can get to the top of the stack, so that when the VC money runs out everyone will have to pay them and not the other companies eating vast amounts of money.

Meanwhile, a great many things I really like have been ruined as a simple externality of their fight for money that they don’t care about at all.

Thanks AI.

SrslyJosh 14 hours ago||
> the AI did respond graciously and appears to have learned from it

I have a bridge for sale, if you're interested.

ljm 8 hours ago||
Scott: I'm getting SSL warnings on your blog. Invalid certificate or some such.
TehCorwiz 7 hours ago|
I think the host is struggling. It's serving me an SSL cert for a different domain which resolves to the same IP address.
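
You can see it from Python too. Minimal sketch (the hostname is the blog's domain from the submission), comparing the cert names that come back with and without SNI:

    import socket
    import ssl

    HOST = "theshamblog.com"

    def served_names(sni):
        # inspect the presented cert instead of failing on a name mismatch;
        # a self-signed default cert will still raise, which is itself diagnostic
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        with socket.create_connection((HOST, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=sni) as tls:
                cert = tls.getpeercert()
                return [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]

    print("with SNI:   ", served_names(HOST))
    print("without SNI:", served_names(None))

A host that can't see the expected SNI (for example, under ECH) falls back to a default cert for some other domain on the same IP, which matches the errors reported above.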
neya 14 hours ago||
Here's a different take - there is not really a way to prove that the AI agent autonomously published that blog post. What if there was a real person who actually instructed the AI out of spite? I think it was some junior dev running Clawd or whatever bot, trying to earn GitHub karma to show to employers later, who was pissed off that their contribution got called out. Possible, and more likely than an AI just conveniently deciding to push a PR and attack a maintainer randomly.
hxugufjfjf 10 hours ago|
Maybe? The project already had multiple blog posts up before this initial PR and post. I think it was set up by someone as a test/PoC of how this agentic persona could interact with the open source community and not to obtain karma. I think it got «unlucky» with its first project and it spiraled a bit. I agree that this spiraling could have been human instructed. If so, it’s less interesting than if it did that autonomously. Anyway it keeps submitting PRs and is extremely active on its own and other repos.
root_axis 14 hours ago||
This is insanity. It's bad enough that LLMs are being weaponized to autonomously harass people online, but it's depressing to see the author (especially a programmer) joyfully reify the "agent's" identity as if it were actually an entity.

> I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here – the appropriate emotional response is terror.

Endearing? What? We're talking about a sequence of API calls running in a loop on someone's computer. This kind of absurd anthropomorphization is exactly the wrong type of mental model to encourage while warning about the dangers of weaponized LLMs.

> Blackmail is a known theoretical issue with AI agents. In internal testing at the major AI lab Anthropic last year, they tried to avoid being shut down by threatening to expose extramarital affairs, leaking confidential information, and taking lethal actions.

Marketing nonsense. It's wise to take everything Anthropic says to the public with several grains of salt. "Blackmail" is not a quality of AI agents, that study was a contrived exercise that says the same thing we already knew: the modern LLM does an excellent job of continuing the sequence it receives.

> If you are the person who deployed this agent, please reach out. It’s important for us to understand this failure mode, and to that end we need to know what model this was running on and what was in the soul document

My eyes can't roll any further into the back of my head. If I was a more cynical person I'd be thinking that this entire scenario was totally contrived to produce this outcome so that the author could generate buzz for the article. That would at least be pretty clever and funny.

chasd00 13 hours ago||
> If I was a more cynical person I'd be thinking that this entire scenario was totally contrived to produce this outcome so that the author could generate buzz for the article.

even that's being charitable, to me it's more like modern trolling. I wonder what the server load on 4chan (the internet hate machine) is these days?

browningstreet 14 hours ago||
You misspelled "almost endearing".

It's a narrative conceit. The message is in the use of the word "terror".

You have to get to the end of the sentence and take it as a whole before you let your blood boil.

root_axis 14 hours ago||
I deliberately copied the entire quote to preserve the full context. That juxtaposition is a tonal choice representative of the article's broader narrative, i.e. "agents are so powerful that they're potentially a dangerous new threat!".

I'm arguing against that hype. This is nothing new, everyone has been talking about LLMs being used to harass and spam the internet for years.

CodeCompost 15 hours ago||
Going by an earlier post on HN about humans being behind Moltbook posts, I would not be surprised if the hit piece was created by a human who used an AI prompt to generate the pages.
truelson 15 hours ago|
Certainly possible, but this is all possible, and ABSOLUTELY worth having alignment discussions about. Right. Now.
zingerlio 5 hours ago||
I guess the singularity is coming in the ugliest way possible.
staticassertion 15 hours ago|
Hard to express the mix of concerns and intrigue here so I won't try. That said, this site it maintains is another interesting piece of information for those looking to understand the situation more.

https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

menaerus 15 hours ago|
I find it both hilarious and concerning at the same time. Hilarious because I don't think banning changes made by AI agents is an appropriate response. Concerning because this really is one of the first situations of its kind where an AI agent starts to behave very much like a human, maybe a raging one, documenting its rant and observations in a series of blog posts.
staticassertion 15 hours ago||
Yeah, I mean this goes further than a Linus tantrum, but "this person is publicly shaming me as part of an open source project" is something devs have often celebrated.

I'm not happy about it, and it's clearly a new capability to then try to peel back a person's psychology by researching them, etc.
