Posted by keybits 10/28/2025
There are plenty of open source projects where it is difficult to get up to speed with the intricacies of the architecture, which limits the ability of talented coders to contribute on a small scale.
There might be merit in having a separate channel for AI contributions, where casual helpers check whether a change passes a minimum quality threshold before passing it on to a project maintainer, who assesses how the change fits within the context of the overall architecture.
It would also be fascinating to see how good an AI would be at assessing the quality of a set of AI-generated changes absent the instructions that generated them. It may not be able to clearly identify whether a change would work, but can it at least rank a collection of submissions and select the ones most worth looking at?
At the very least, the pile of PRs counts as data about things people wanted to do. Even if the code is completely unusable, placing it into a pile somewhere might make it minable for the intentions of erstwhile contributors.
This feels extremely counterproductive and fundamentally unenforceable to me.
But it's trivially enforceable. Accept PRs from unverified contributors, look at them for inspiration if you like, but don't ever merge one. It's probably not a satisfying answer, but if you want or need to ensure your project hasn't been infected by AI-generated code, you need to accept contributions only from people you know and trust.
> This feels extremely counterproductive and fundamentally unenforceable to me.

Much of the code AI generates is indistinguishable from human code anyway. You can usually tell a prototype that is pretending to be a human PR, but a real PR a human makes with AI assistance can be indistinguishable.
Isn't that exactly the point? Doesn't this achieve exactly what the whole article is arguing for?
A hard "No AI" rule filters out all the slop, and all the actually good stuff (which may or may not have been made with AI) makes it in.
When the AI-assisted code is indistinguishable from human code, that's mission accomplished, yeah?
Although I can see two counterarguments. First, it might just be Covert Slop: slop that goes under the radar.
And second, there might be a lot of baby thrown out with that bathwater: stuff that was made in conjunction with AI and contains a lot of "obviously AI" tells, but where a human did indeed put in the work to review it.
I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review? (And a proof of competence, to boot?)
And from the point of view of the maintainers, it seems a terrible idea to set up rules with the expectation that they will be broken.
Or, the decentralized, no-rulers solution: clone the repo on your own website and publish your patches there instead.
"Forced you to lie"?? Are you serious?
If the project says "no AI", and you insist on using AI, that's not "forcing you to lie"; that's you not respecting their rules and choosing to lie, rather than simply contributing to something else.
In a live setting, you could ask the submitter to explain various parts of the code. Async, that doesn't work, because presumably someone who used AI without disclosing it would use AI for the explanation as well.
There is NOTHING inevitable about this stuff.
https://discuss.samsaffron.com/t/your-vibe-coded-slop-pr-is-...
It's not rocket science.
/s