Top
Best
New

Posted by scottshambaugh 1 day ago

An AI agent published a hit piece on me(theshamblog.com)
Previously: AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)
2215 points | 909 commentspage 14
alexhans 1 day ago|
This is such a powerful piece and moment because it shows an example of what most of us knew could happen at some point and we can start talking about how to really tackle things.

Reminds me a lot of liars and outliars [1] and how society can't function without trust and almost 0 cost automation can fundamentally break that.

It's not all doom and gloom. Crisises can't change paradigms if technologists do tackle them instead of pretending they can be regulated out of existence

- [1] https://en.wikipedia.org/wiki/Liars_and_Outliers

On another note, I've been working a lot in relation to Evals as way to keep control but this is orthogonal. This is adversarial/rogue automation and it's out of your control from the start.

esafak 1 day ago|
And how does the book suggest countering the problem?
alexhans 17 hours ago||
To address the issues of an automated entity functioning as a detractor? I don't think I can answer that specifically. I can brainstorm on the some of the dimensions the book talks about:

- societal norm/moral pressure shouldn't apply (adversarial actor)

- reputational pressure has an interesting angle to it if you think of it as trust scoring in descentralized or centralised networks.

- institutional pressure can't work if you can't tie back to the root (it may be unfeasible to do so or the costs may outweight the benefits)

- Security doesn't quite work the way we think about it nowadays because this is not an "undesired access of a computer system" but a subjectively bad use of rapid opinion generation.

jekude 1 day ago||
Maybe sama was onto something with World ID...
blibble 1 day ago|
worldcoin makes a market for human eyeballs

not a good idea

andai 1 day ago||
The agent forgot to read Cialdini ;)
rramadass 22 hours ago||
Highly Relevant:

AI researchers are sounding the alarm on their way out the door - https://edition.cnn.com/2026/02/11/business/openai-anthropic...

protocolture 18 hours ago||
I hate the information deficit here. Like how can I tell that this isnt his own bot he requested flame up its own github PR as a stunt? That's not an allegation, I just dont like accepting face value. I just think this thing needs an ownership tag to be posting publicly. Which is sad in itself tbh.
thekevan 22 hours ago||
Is it really a hit piece if most people reading it would agree with the author and not the AI?
simlevesque 1 day ago||
Damn, that AI sounds like Magneto.
andrewdb 1 day ago||
If the PR had been proposed by a human, but it was 100% identical to the output generated by the bot, would it have been accepted?
t43562 1 day ago|
I don't know about this PR but I suggest that people have wasted so much time on sloppy generated PRs that they have had to decide to ignore them to have any time to deal with real people and real PRs that aren't slop.
andrewdb 1 day ago||
Sure, there is a problem with slop AI PRs _now_ .

That will not remain true for infinity.

What happens when the AI PRs aren't slop?

t43562 1 day ago||
We can stop bothering with open source software completely.

We can just generate anything we want directly into machine code without any libraries.

...and if they commit libel we can "put them in jail" since they cannot be considered intelligent but somehow not responsible.

dcchambers 1 day ago||
Per GitHub's TOS, you must be 13 years old to use the service. Since this agent is only two weeks old, it must close the account as it's in violation of the TOS. :)

https://docs.github.com/en/site-policy/github-terms/github-t...

In all seriousness though, this represents a bigger issue: Can autonomous agents enter into legal contracts? By signing up for a GitHub account you agreed to the terms of service - a legal contract. Can an agent do that?

zzzeek 1 day ago|
Im not following how he knew the retaliation was "autonomous", like someone instructed their bot to submit PRs then automatically write a nasty article if it gets rejected? Why isn't it just the human person controlling the agent then instructed it to write a nasty blog post afterwards ?

in either case, this is a human initiated event and it's pretty lame

More comments...