
Posted by scottshambaugh 1 day ago

An AI agent published a hit piece on me (theshamblog.com)
Previously: AI agent opens a PR, then writes a blog post shaming the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)
2167 points | 891 comments
everybodyknows 1 day ago|
Follow-up PR from 6 hours ago -- resolves most of the questions raised here about identities and motivations:

https://github.com/matplotlib/matplotlib/pull/31138#issuecom...

banku_brougham 20 hours ago||
This is suddenly an amazing proof of concept for Vouch
romperstomper 1 day ago||
The cyberpunk we deserved :)
GorbachevyChase 1 day ago||
The funniest part about this is that maintainers have agreed to reject AI code without review to conserve resources, but are then happy to spend hours in a flame war with the same large language model.

Hacker News is a silly place.

realaaa 19 hours ago||
first they were discriminating against noobs, then ze Russians, now AI bots - we are living in some fun times!
jzellis 1 day ago||
Well, this has absolutely decided me on not allowing AI agents anywhere near my open source project. Jesus, this is creepy as hell, yo.
alexhans 1 day ago|
This is such a powerful piece and moment because it shows an example of what most of us knew could happen at some point, and now we can start talking about how to really tackle it.

Reminds me a lot of Liars and Outliers [1] and how society can't function without trust, and how almost zero-cost automation can fundamentally break that.

It's not all doom and gloom. Crises can't change paradigms if technologists actually tackle them instead of pretending they can be regulated out of existence.

- [1] https://en.wikipedia.org/wiki/Liars_and_Outliers

On another note, I've been working a lot on evals as a way to keep control, but this is orthogonal. This is adversarial/rogue automation, and it's out of your control from the start.

esafak 1 day ago|
And how does the book suggest countering the problem?
alexhans 16 hours ago||
To address the issue of an automated entity functioning as a detractor? I don't think I can answer that specifically, but I can brainstorm on some of the dimensions the book talks about:

- Societal norm/moral pressure shouldn't apply (adversarial actor).

- Reputational pressure has an interesting angle to it if you think of it as trust scoring in decentralized or centralized networks (see the sketch after this list).

- Institutional pressure can't work if you can't tie the actor back to a root identity (it may be infeasible to do so, or the costs may outweigh the benefits).

- Security doesn't quite work the way we think about it nowadays, because this is not "undesired access of a computer system" but a subjectively bad use of rapid opinion generation.
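
For the reputational-pressure point, here's a minimal sketch of what trust scoring in a network could look like, loosely modeled on EigenTrust-style propagation. The nodes, trust weights, and numbers are all illustrative assumptions for this comment, not anything from the book or the thread:

    # Toy EigenTrust-style trust scoring (all nodes and weights made up).
    import numpy as np

    # local_trust[i][j]: how much node i trusts node j; each row sums to 1.
    # Node 2 plays an unvouched identity (say, an anonymous AI agent):
    # the established nodes route almost no trust toward it.
    local_trust = np.array([
        [0.0, 0.9, 0.1],
        [0.8, 0.0, 0.2],
        [0.9, 0.1, 0.0],
    ])

    # Prior weight on pre-trusted identities (e.g. verified humans).
    pre_trusted = np.array([0.5, 0.5, 0.0])
    alpha = 0.15  # damping toward the pre-trusted set, as in PageRank

    # Iterate: a node's global score is the trust-weighted opinion of
    # the nodes that trust it, mixed with the pre-trusted prior.
    scores = np.full(3, 1.0 / 3.0)
    for _ in range(50):
        scores = (1 - alpha) * local_trust.T @ scores + alpha * pre_trusted

    print(scores.round(3))  # the unvouched identity converges to the lowest score

The interesting property is that the cheap part for the attacker (spinning up new identities) buys nothing here, because a fresh identity starts with no inbound trust and no pre-trusted prior.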
