Posted by pjmlp 4 hours ago
"any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed"
For example:
- What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?
- What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")? Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?
Firefox has direct translation built in. One can self-host libretranslate. There are many free sites to paste in language input and get a direct translation sans filler and AI "interpretation". Just write in your native language or your imperfect English.
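LibreTranslate exposes a plain HTTP API, so a self-hosted instance can be scripted directly. A minimal sketch, assuming a local instance on the default port and the standard `/translate` endpoint (the URL and language codes here are illustrative):

```python
import json
import urllib.request

# Assumed default address for a self-hosted LibreTranslate instance.
LIBRETRANSLATE_URL = "http://localhost:5000/translate"

def build_payload(text, source, target):
    # The /translate endpoint takes a JSON body with these fields.
    return {"q": text, "source": source, "target": target, "format": "text"}

def translate(text, source="auto", target="en", url=LIBRETRANSLATE_URL):
    # POST the payload and pull the translated string out of the JSON response.
    data = json.dumps(build_payload(text, source, target)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["translatedText"]
```

Posting the original text alongside the returned translation keeps both versions available to reviewers, which is exactly what's suggested below.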
How can you be sure the AI translation is accurately conveying what the speaker wrote? The reality is you can't accommodate every hypothetical scenario.
> What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")? Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?
Nobody is talking about advanced autocomplete when they want to ban AI code. It's prompt generated code.
Unfortunately, when I have seen this in the context of the Rust project, the result has still been the verbose word salad typical of chat-style LLMs. It is better to use a dedicated translation tool, and to post the original along with the translation.
> What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")?
Very good question. I consider this sort of AI usage benign (unlike agent-style usage), and it is the only style of AI I use myself (since I have RSI, it helps to type less). You could turn the feature off for just this project, though.
> Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?
I don't think that follows, but what features you have active in the current project would definitely be affected. From what I have seen all IDEs allow turning AI features on and off as needed.
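As one concrete example (assuming VS Code with GitHub Copilot; other editors and assistants have their own equivalents), a workspace-level settings file disables completions for that project alone, leaving your global setup untouched:

```json
{
  "github.copilot.enable": {
    "*": false
  }
}
```

Placed in the project's `.vscode/settings.json`, this applies only while that workspace is open.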
This, so many times. It's incredibly handy to have the original message from the author: for one, I may speak or understand parts of that language and so have an easier time grasping the intent behind the translated text. For another, I can cut out and translate specific parts using whatever tools I want, again giving me more context about what the author is trying to communicate.
I've seen this excuse before, but in practice the output they copy/paste is extremely verbose and long-winded (with the bullet-point and heading soup, etc.).
Surely non-native speakers can see that structure and tell the LLM to match their natural style instead? No one wants to read a massive wall of text.
It sounds serious and strict, but it applies only to content that's 'clearly labelled as LLM-generated'. So what about content that isn't so clearly labelled? I don't know what to make of it.
My guess is that the serious tone is meant to head off any legal issues that could arise from the inadvertent inclusion of AI-generated code. But the broader motivation is probably to avoid wasting the maintainers' time on reviewing the confusing, sloppy submissions that come from lazy use of AI (as opposed to finely guided and well-reviewed AI code).
if (foo == true) { // checking foo is true (rocketship emoji)
20 lines of code;
} else {
the same 20 lines of code with one boolean changed in the middle;
}
Description: (markdown header) Summary (nerd emoji):
This PR fixes a non-existent issue by adding an **if statement** that checks if a variable is true. This has the following benefits:
- Improves performance (rocketship emoji)
- Increases code maintainability (rising bar chart emoji)
- Helps prevent future bugs (detective emoji)
(markdown header) Conclusion: This PR does not just improve performance, it fundamentally reshapes how we approach performance considerations. This is not just design --- it's architecture. Simple, succinct, yet powerful.
## Summary
...
## Problem
...
## Solution
...
## Verification
...
They're too methodical, and they duplicate code whenever the change is longer than a single-line fix. I've never received a pull request formatted like that from a human.

Time-consuming work can be done quickly at a fraction of the cost, or even almost free, with open-weights LLMs.
I think part of the battle is actually just getting people to identify which LLM made it, to understand whether someone's contribution is good or not. A JavaScript project with contributions from Opus 4.6 will probably be pretty good, but if someone is using Mistral Small via the chat app, it's probably just a waste of time.