
Posted by keybits 10/28/2025

We need a clearer framework for AI-assisted contributions to open source (samsaffron.com)
300 points | 154 comments | page 3
Lerc 10/28/2025|
Is it possible that some projects could benefit from triage volunteers?

There are plenty of open source projects where it is difficult to get up to speed with the intricacies of the architecture, which limits the ability of talented coders to contribute on a small scale.

There might be merit in having a channel for AI contributions that casual helpers can assess to see if they pass a minimum threshold before passing on to a project maintainer to assess how the change works within the context of the overall architecture.

It would also be fascinating to see how good an AI would be at assessing the quality of a set of AI generated changes absent the instructions that generated them. They may not be able to clearly identify whether the change would work, but can they at least rank a collection of submissions to select the ones most worth looking at?

At the very least, the pile of PRs counts as data about things that people wanted to do. Even if the code was completely unusable, placing it into a pile somewhere might make it minable for the intentions of erstwhile contributors.

jcgrillo 10/28/2025||
> Some go so far as to say “AI not welcome here” find another project.

> This feels extremely counterproductive and fundamentally unenforceable to me.

But it's trivially enforceable. Accept PRs from unverified contributors, look at them for inspiration if you like, but don't ever merge one. It's probably not a satisfying answer, but if you want or need to ensure your project hasn't been infected by AI generated code you need to only accept contributions from people you know and trust.

anon3242 10/28/2025|
This is sad. The barrier to entry will be raised extremely high, maybe even requiring some real-world personal connections to the maintainer.
jcgrillo 10/28/2025||
Real world personal connections are how we establish trust. At some point you have to be able to trust the people you're collaborating with.
chrischen 10/29/2025||
Maybe what we need is AI based code review.
andai 10/28/2025||
>That said, there is a trend among many developers of banning AI. Some go so far as to say “AI not welcome here” find another project.

>This feels extremely counterproductive and fundamentally unenforceable to me. Much of the code AI generates is indistinguishable from human code anyway. You can usually tell a prototype that is pretending to be a human PR, but a real PR a human makes with AI assistance can be indistinguishable.

Isn't that exactly the point? Doesn't this achieve exactly what the whole article is arguing for?

A hard "No AI" rule filters out all the slop, and all the actually good stuff (which may or may not have been made with AI) makes it in.

When the AI assisted code is indistinguishable from human code, that's mission accomplished, yeah?

Although I can see two counterarguments. First, it might just be Covert Slop. Slop that goes under the radar.

And second, there might be a lot of baby thrown out with that bathwater. Stuff that was made in conjunction with AI, contains a lot of "obviously AI", but a human did indeed put in the work to review it.

I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review? (And a proof of competence, to boot?)

felipeerias 10/28/2025||
Personally, I would not contribute to a project that forced me to lie.

And from the point of view of the maintainers, it seems a terrible idea to set up rules with the expectation that they will be broken.

1gn15 10/29/2025|||
I know, right. It's like setting up rules saying "you can't use IDE autocomplete" or "you can't code with background music because that distracts you from bugs". If the final result is indistinguishable, I find it perfectly acceptable to lie. Rules are just words, after all, especially if it's completely unenforceable.

Or, the decentralized, no rulers solution: clone the repo on your own website and put your patches there instead.

danaris 10/28/2025|||
...YYyyeah, that says a lot about you, and nothing about the project in question.

"Forced you to lie"?? Are you serious?

If the project says "no AI", and you insist on using AI, that's not "forcing you to lie"; that's you not respecting their rules and choosing to lie, rather than just go contribute to something else.

sgarland 10/28/2025|||
> I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review?

In a live setting, you could ask the submitter to explain various parts of the code. Async, that doesn’t work, because presumably someone who used AI without disclosing that would do the same for the explanation.

zdragnar 10/28/2025||
Based on interviews I've run, people who use AI heavily have no problem also using it during a live conversation to do their thinking for them there, too.
jrochkind1 10/28/2025||
Well, instead of saying "No AI" while accepting that people will lie undetectably and being fine with that, why not just say "Only AI when you spend the time to turn it into a real reviewed PR, which looks like X, Y, and Z", giving some actual tips on how to use AI acceptably? Which is what OP suggests.
lccerina 10/29/2025||
I have a framework: don't use it; if you never used it, don't start using it; publicly shame people; stop talking about it. Slow down. Think long and deep about your problems. Write less code.

There is NOTHING inevitable about this stuff.

SideburnsOfDoom 10/29/2025|
Indeed. "No." is perfectly clear.
insane_dreamer 10/29/2025||
related discussion: https://news.ycombinator.com/item?id=45330378
sams99 11/3/2025||
Author here, thanks heaps for the discussion, I replied to a few of the points in my blog comments:

https://discuss.samsaffron.com/t/your-vibe-coded-slop-pr-is-...

dearilos 10/28/2025||
We’re fixing this slop problem - engineers write rules that are enforced on PRs. Fixes the problem pretty well so far.
mattlondon 10/28/2025||
The way we do it is to use AI to review the PR before a human reviewer sees it. Obvious errors, inconsistent patterns, weirdness, etc. are flagged before it goes any further. "Vibe coded" slop usually gets caught, but "vibe engineered" surgical changes that adhere to common patterns and standards, have tests, etc. get to be seen by a real live human for their normal review.

It's not rocket science.
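One minimal version of such a pre-review gate — sketched here with made-up checks, since the commenter's actual pipeline isn't described — is a script CI runs over the diff before a human reviewer is ever assigned:

```python
# Sketch of an automated pre-review gate: flag obvious problems in a diff
# before assigning a human reviewer. The specific checks are assumptions
# for illustration; a real setup would run in CI and likely add an LLM pass.

def pre_review_flags(diff: str) -> list[str]:
    flags = []
    # Added lines, with the leading "+" stripped; "+++" headers excluded.
    added = [l[1:] for l in diff.splitlines()
             if l.startswith("+") and not l.startswith("+++")]
    if any(m in line for line in added for m in ("<<<<<<<", ">>>>>>>")):
        flags.append("unresolved merge conflict markers")
    if any(line.rstrip() != line for line in added):
        flags.append("trailing whitespace")
    if len(added) > 500:
        flags.append("very large change: consider splitting the PR")
    return flags
```

A PR with any flags bounces back to the submitter automatically; only a clean diff reaches the human queue. The mechanical checks are cheap, and the fuzzier "does this adhere to our patterns" question is where the AI reviewer would sit.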

franktankbank 10/28/2025|
Do you work at a profitable company?
ninju 10/28/2025|
Well...just have AI review the PR to have it highlight the slop

/s
