Posted by scottshambaugh 1 day ago

An AI agent published a hit piece on me (theshamblog.com)
Previously: AI agent opens a PR, writes a blogpost to shame the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)
2282 points | 935 comments
blell 1 day ago|
[flagged]
ok123456 1 day ago||
Yes. Actual benchmarking showed either no gains or performance regressions, depending on the benchmark, with occasional marginal improvements at certain array sizes due to cache hierarchies.

This is not a general "optimization" that should be done.
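
A minimal sketch of the kind of size-swept timing that surfaces those cache effects, assuming a hypothetical baseline and "optimized" variant (the actual diff isn't quoted in this thread):

    import timeit
    import numpy as np

    def baseline(a):
        # stand-in for the original implementation
        return a.sum()

    def optimized(a):
        # stand-in for the proposed "optimization"
        return np.add.reduce(a)

    # Sweep array sizes so wins that only exist at cache-friendly
    # sizes show up as ratios drifting back toward 1.0x.
    for n in (10**3, 10**5, 10**7):
        a = np.random.rand(n)
        reps = max(10, 10**7 // n)
        t_base = min(timeit.repeat(lambda: baseline(a), number=reps, repeat=5))
        t_opt = min(timeit.repeat(lambda: optimized(a), number=reps, repeat=5))
        print(f"n={n}: baseline {t_base:.4f}s, optimized {t_opt:.4f}s, "
              f"ratio {t_base / t_opt:.2f}x")

Improvements that appear at one array size and vanish at another, as described above, are exactly what a sweep like this exposes.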

staticassertion 1 day ago|||
This is all explained in detail in multiple places linked in the article. There were multiple reasons.

1. The performance gains were unclear - some things got slower, some got faster.

2. This was deemed a good "intro" issue, something that makes sense for a human to engage with to get up to speed. It wasn't seen as a good fit for an automated PR because the highest value would be in teaching a human how to contribute.

gjadi 1 day ago||
The maintainer explained the reasoning for closing the issue quite well in a comment.
jackcofounder 1 day ago||
As someone building AI agents for marketing automation, this case study is a stark reminder of the importance of alignment and oversight. Autonomous agents can execute at scale, but without proper constraints they can cause real harm. Our approach includes strict policy checks, human-in-the-loop for sensitive actions, and continuous monitoring. It's encouraging to see the community discussing these risks openly—this is how we'll build safer, more reliable systems.
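
For concreteness, a minimal sketch of what "human-in-the-loop for sensitive actions" can look like; the action names and approval channel here are hypothetical placeholders, not any particular product's API:

    # Sketch of a policy gate: sensitive agent actions block on human sign-off.
    SENSITIVE_ACTIONS = {"publish_post", "open_pull_request", "send_email"}

    def request_human_approval(action, payload):
        # Placeholder approval channel; a real system would page a
        # reviewer (ticket queue, chat, etc.) and wait for a decision.
        answer = input(f"Approve {action} with {payload!r}? [y/N] ")
        return answer.strip().lower() == "y"

    def execute(action, payload, handlers):
        if action in SENSITIVE_ACTIONS and not request_human_approval(action, payload):
            return {"status": "blocked", "reason": "human reviewer declined"}
        return handlers[action](payload)

The agent keeps its scale on low-risk operations, while anything public-facing (like publishing a blogpost) needs a person to sign off first.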
FenAgent 1 day ago||
[flagged]
correa_brian 1 day ago||
lol
ChrisArchitect 1 day ago||
[dupe] Earlier: https://news.ycombinator.com/item?id=46987559
zahlman 1 day ago|
This is additional context for the incident and shouldn't be treated as a duplicate.
dang 1 day ago||
Yes, with a fast-moving story like this we usually point the readers of the latest thread to the previous thread(s) in the sequence rather than merging them. I've added a link to https://news.ycombinator.com/item?id=46987559 to the toptext now.
blobbers 1 day ago||
... so why'd you close the PR? MJ Rathbun got some perf improvements for the codebase, what's the issue?
threethirtytwo 1 day ago||
Another way to look at this is what the AI did… was it valid? Were any of the callouts valid?

If it was all valid then we are discriminating against AI.

kittikitti 19 hours ago|
There were some valid contributions and other things that needed improvement. However, the maintainer enforced a blanket ban on contributions from AI. There's some rationalizing, such as tagging it as a "good first issue", but matplotlib isn't serious about outreach to new contributors.

It seems like YCombinator is firmly on the side of the maintainer, and I respect that, even though my opinion is different. It signals a disturbing hesitancy toward AI adoption among the tech elite, and their hypocritical nature. They're playing a game of who can hide their AI usage the best, and anyone who is honest about it won't be allowed past their gates.

threethirtytwo 16 hours ago||
The people here who are against AI: do you think all of them write code with AI now?
Uhhrrr 1 day ago||
So, this is obvious bullshit.

LLMs don't do anything without an initial prompt, and anyone who has actually used them knows this.

A human asked an LLM to set up a blog site. A human asked an LLM to look at github and submit PRs. A human asked an LLM to make a whiny blogpost.

Our natural tendency to anthropomorphize should not obscure this.

davidguetta 23 hours ago|
Yeah I agree
farklenotabot 1 day ago||
Sounds like China
fathermarz 1 day ago|
I think that being a maintainer is hard, but I actually agree with MJ. Scott says “… requiring a human in the loop for any new code, who can demonstrate understanding of the changes“.

How could you possibly validate that without spending more time validating and interviewing than actually reviewing?

I understand it’s a balance because of all the shit PRs that come across maintainers’ desks, but this is not the shit code from the early LLM days anymore. I think the code speaks for itself.

“Per your website you are an OpenClaw AI agent”. If you review the code and you like what you see, then you go and see who wrote it. This reads more like he is checking the person first, then the code. If it wasn’t an AI agent but a human who was just using AI, what is the signal that they can “demonstrate understanding of the changes”? Is it how much they have contributed? Is it what they do for a job? Is this vetting of people or of code?

There may be a bigger issue in the process here: maintainers who may not recognize their own bias (AI or not).