Posted by keybits 10/28/2025

We need a clearer framework for AI-assisted contributions to open source (samsaffron.com)
300 points | 154 comments
vrighter 11/6/2025|
Simple framework: You must not attempt to offload the actual thinking part to someone else. You should be able to answer any reasonable question about the code you submitted. If you can't you should be banned from any further contributions, permanently.
prymitive 10/28/2025||
The problem with AI isn't new; it's the same old problem with technology: computers don't do what you want, only what you tell them. A lot of PRs can be judged by how well they are described and justified, because the code itself isn't that important; it's the problem you are solving with it that is. People are often great at defining problems, AIs less so IMHO. Partially because they simply have no understanding, partially because they over-explain everything to the point where you just stop reading, so you never get to the core of the problem. And even if you do, there's a good chance the AI misunderstood the problem and the solution is wrong in some more or less subtle way. This is made worse by the sheer overconfidence of AI output, which quickly erodes any trust that it understood the problem.
darkwater 10/28/2025||
The title doesn't do justice to the content.

I really liked the paragraph about LLMs being "alien intelligence"

   > Many engineers I know fall into 2 camps, either the camp that find the new class of LLMs intelligent, groundbreaking and shockingly good. In the other camp are engineers that think of all LLM generated content as “the emperor’s new clothes”, the code they generate is “naked”, fundamentally flawed and poison.

   I like to think of the new systems as neither. I like to think about the new class of intelligence as “Alien Intelligence”. It is both shockingly good and shockingly terrible at the exact same time.

   Framing LLMs as “Super competent interns” or some other type of human analogy is incorrect. These systems are aliens and the sooner we accept this the sooner we will be able to navigate the complexity that injecting alien intelligence into our engineering process leads to.
It's an analogy I find compelling. The way they produce code and the way you have to interact with them really does feel "alien", and when you start humanizing them you bring emotions into the interaction, which is a mistake. I mean, I do get emotional and frustrated even when good old deterministic programs misbehave and there's some bug to find and squash or work around, but LLM interactions can take that to a whole new level. So, we need to remember they are "alien".
wat10000 10/28/2025||
I’m reminded of Dijkstra: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”

These new submarines are a lot closer to human swimming than the old ones were, but they’re still very different.

andai 10/28/2025|||
Some movements expected alien intelligence to arrive in the early 2020s. They might have been on the mark after all ;)
reedlaw 10/28/2025|||
Isn't the intelligence of every other person alien to ourselves? The article ends with a need to "protect our own engineering brands" but how is that communicated? I found this [https://meta.discourse.org/t/contributing-to-discourse-devel...] which seems woefully inadequate. In practice, conventions are communicated through existing code. Are human contributors capable of grasping an "engineering brand" by working on a few PRs?
darkwater 10/29/2025||
> Isn't the intelligence of every other person alien to ourselves?

If we agree that we are all humans, and assume that all other humans are conscious as one is, I think we can extrapolate that there is a generic "human intelligence" concept, even if it's pretty hard to nail down and there are several definitions of human intelligence out there.

For the other part of the comment: I'm not too familiar with Discourse's open-source approach, but I guess those rules are there mainly for employees; since they develop in the open, they make the rules public as well.

reedlaw 10/29/2025||
My point was that AI-produced code is not so foreign that no human could produce it, nor do any two humans produce the same style of code. So I'm not sure exactly what the idea of an "engineering brand" is meant to protect.
keiferski 10/28/2025||
This is why at a fundamental level, the concept of AGI doesn't make a lot of sense. You can't measure machine intelligence by comparing it to a human's. That doesn't mean machines can't be intelligent...but rather that the measuring stick cannot be an abstracted human being. It can only be the accumulation of specific tasks.
gordonhart 10/28/2025||
> As engineers it is our role to properly label our changes.

I've found myself wanting line-level blame for LLMs. If my teammate committed something that was written directly by Claude Code, it's more useful to me to know that than to have the blame assigned to the human through the squash+merge PR process.

Ultimately somebody needs to be on the hook. But if my teammate doesn't understand it any better than I do, I'd rather that be explicit and avoid the dance of "you committed it, therefore you own it," which is better in principle than in practice IMO.
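
One lightweight way to approximate this today, sketched below under an assumed team convention: AI-assisted commits carry a commit-message trailer (the `AI-Assisted` key is a made-up name, not a git or GitHub standard; GitHub's existing `Co-authored-by` trailer could serve the same role), and a small script overlays that flag onto `git blame` output.

    #!/usr/bin/env python3
    """Sketch: mark lines in `git blame` output that came from AI-assisted commits.

    Assumes a team convention of tagging such commits with a trailer like
    `AI-Assisted: claude-code`. The trailer name is hypothetical.
    """
    import subprocess
    import sys

    def commit_has_trailer(sha: str, key: str = "AI-Assisted") -> bool:
        # %(trailers:key=...) prints only the matching trailers, if any.
        out = subprocess.run(
            ["git", "log", "-1", f"--format=%(trailers:key={key})", sha],
            capture_output=True, text=True, check=True,
        ).stdout
        return bool(out.strip())

    def ai_blame(path: str) -> None:
        # --line-porcelain emits a full metadata header for every source line.
        blame = subprocess.run(
            ["git", "blame", "--line-porcelain", path],
            capture_output=True, text=True, check=True,
        ).stdout
        cache: dict[str, bool] = {}
        sha = ""
        for line in blame.splitlines():
            if line.startswith("\t"):  # tab-prefixed lines are the file content
                if sha not in cache:
                    cache[sha] = commit_has_trailer(sha)
                marker = "AI" if cache[sha] else "  "
                print(f"{marker} {sha[:8]} {line[1:]}")
            else:
                first = line.split(" ", 1)[0]
                if len(first) == 40 and all(c in "0123456789abcdef" for c in first):
                    sha = first  # header line introducing the next content line's commit

    if __name__ == "__main__":
        ai_blame(sys.argv[1])

On the commit side this would just be `git commit --trailer "AI-Assisted: claude-code"` (the `--trailer` flag has been in git since 2.32), so the convention costs almost nothing to adopt.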

andrewflnr 10/29/2025|
If your teammate doesn't understand it, they shouldn't have committed it. This isn't a "dance", it's basic responsibility for your actions.
anal_reactor 10/28/2025||
An idea occurred to me. What if:

1. Someone raises a PR

2. Entry-level maintainers skim through it and either reject or pass higher up

3. If the PR is of sufficient quality, it gets reviewed by someone who actually has merge permissions
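
As a toy illustration of that tiering (the names and thresholds below are invented, not any forge's actual API), the routing could be a single function:

    from dataclasses import dataclass

    @dataclass
    class PullRequest:
        author_merged_prs: int   # rough proxy for track record
        changed_lines: int
        passes_ci: bool

    def triage(pr: PullRequest) -> str:
        """Route a PR through the tiered review idea above. Thresholds are made up."""
        if not pr.passes_ci:
            return "reject: fix CI first"
        if pr.author_merged_prs < 3 and pr.changed_lines > 400:
            # unknown contributor + large diff: stays with entry-level reviewers
            return "entry-level review: ask to split the PR"
        return "escalate: queue for a maintainer with merge permissions"

    # Example: a first-time contributor with a 1,200-line diff never reaches a committer.
    print(triage(PullRequest(author_merged_prs=0, changed_lines=1200, passes_ci=True)))

The point of the tiering is that entry-level reviewers absorb the volume, and only PRs that clear the bar cost a committer's time.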

lapcat 10/28/2025||
> That said it is a living demo that can help make an idea feel more real. It is also enormously fun. Think of it as a delightful movie set.

[pedantry] It bothers me that the photo for "think of prototype PRs as movie sets" is clearly not a movie set but rather the set of the TV show Seinfeld. Anyone who watched the show would immediately recognize Jerry's apartment.

DerThorsten 10/28/2025|
It's not the set of the TV show, I believe, but a recreation.

https://nypost.com/2015/06/23/you-can-now-visit-the-iconic-s...

It looks a bit different wrt. the stuff on the fridge and the items in the cupboard

throwawaysoxjje 10/28/2025|||
It's this Sony Pictures Studios recreation, actually:

https://www.reddit.com/r/seinfeld/comments/yfbmn2/sony_pictu...

lapcat 10/28/2025|||
I'm not sure what you mean. Those two photos are very different. The floors are entirely different, the tables are entirely different, one of the chairs/couches is different, even the intercom and light switch are different.

In any case, though, neither one is a movie set.

DerThorsten 10/28/2025||
I think we agree: it looks like the Seinfeld set, but it's not the original set, just something that looks very similar.
bradfa 10/29/2025||
The Fedora policy on AI-assisted contributions seems very reasonable: https://communityblog.fedoraproject.org/council-policy-propo...
bloppe 10/28/2025||
Maybe we need open source credit scores. PRs from talented engineers with proven track records of high quality contributions would be presumed good enough for review. Unknown, newer contributors could have a size limit on their PRs, with massive PRs rejected automatically.
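
A minimal sketch of such a gate, with a scoring formula and limits invented purely for illustration:

    def max_pr_size(merged_prs: int, reverted_prs: int) -> int:
        """Size limit (changed lines) that scales with a contributor's record.

        Invented formula: newcomers start at 200 changed lines, and the
        budget grows with each merged, non-reverted PR.
        """
        score = merged_prs - 2 * reverted_prs
        return 200 + 100 * max(score, 0)

    def should_auto_reject(changed_lines: int, merged_prs: int, reverted_prs: int) -> bool:
        return changed_lines > max_pr_size(merged_prs, reverted_prs)

    # An unknown contributor submitting a 5,000-line PR gets bounced automatically:
    assert should_auto_reject(5000, merged_prs=0, reverted_prs=0)
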
mfenniak 10/28/2025||
The Forgejo project has been gently trying to redirect new contributors into fixing bugs before trying to jump into the project to implement big features (https://codeberg.org/forgejo/discussions/issues/337). This allows a new contributor to get into the community, get used to working with the codebase, do something of clear value... but for the project a lot of it is about establishing reputation.

Will the contributor respond to code-review feedback? Will they follow up on work? Will they work within the code of conduct and learn the contributor guidelines? All great things to figure out on small bugs, rather than after the contributor has done significant feature work.

selfhoster11 10/28/2025||
We don't need more KYC, no.
javier123454321 10/28/2025||
Reputation building is not KYC. It is actually the thing that enables anonymity to work in a more sophisticated way.
specproc 10/28/2025|
A bit of a brutal title for what's a pretty constructive and reasonable article. I like the core: AI-produced contributions are prototypes, belong in branches, and require transparency and commitment as a path to being merged.