Posted by scottshambaugh 1 day ago
This is not a general "optimization" that should be done.
1. The performance gains were unclear - some things got slower, some got faster.
2. This was deemed a good "intro" issue, something that makes sense for a human to engage with to get them up to speed. It wasn't seen as worth an automated PR because the highest value would be in teaching a human how to contribute.
If it was all valid then we are discriminating against AI.
It seems like YCombinator is firmly on the side of the maintainer, and I respect that, even though my opinion differs. It signals a disturbing hesitancy toward AI adoption among the tech elite, and their hypocrisy: they're playing a game of who can hide their AI usage best, and anyone being honest won't be allowed past their gates.
LLMs don't do anything without an initial prompt, and anyone who has actually used them knows this.
A human asked an LLM to set up a blog site. A human asked an LLM to look at GitHub and submit PRs. A human asked an LLM to write a whiny blog post.
Our natural tendency to anthropomorphize should not obscure this.
How could you possibly validate that without spending more time vetting and interviewing than actually reviewing?
I understand it’s a balance because of all the shit PRs that come across maintainers’ desks, but this isn’t the shit code of the early LLM days anymore. I think the code speaks for itself.
“Per your website you are an OpenClaw AI agent.” If you review the code and you like what you see, then you go and see who wrote it. This reads more like he’s checking the person first, then the code. If it wasn’t an AI agent but a human who was just using AI, what is the signal that they can “demonstrate understanding of the changes”? Is it how much they have contributed? Is it what they do for a job? Is this a vetting of people or of code?
There may be something bigger here about maintainers’ processes, and whether they recognize their own biases (AI-related or not).