
Posted by pjmlp 4 hours ago

Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy (gitlab.redox-os.org)
163 points | 133 comments
aleph_minus_one 3 hours ago|
While I am more on the AI-hater side, I don't consider this to be a good idea:

"any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed"

For example:

- What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?

- What about having a plugin in your IDE that just gives syntax and small code-fragment suggestions ("autocomplete on steroids")? Does this policy mean that programmers are also restricted in which IDEs and plugins they are allowed to have installed if they want to contribute?

duskdozer 3 minutes ago||
>What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?

Firefox has direct translation built in. One can self-host libretranslate. There are many free sites to paste in language input and get a direct translation sans filler and AI "interpretation". Just write in your native language or your imperfect English.

cpburns2009 5 minutes ago|||
> What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?

How can you be sure the AI translation is accurately conveying what was written by the speaker? The reality is you can't accommodate every hypothetical scenario.

> What about having a plugin in your IDE that just gives syntax and small code-fragment suggestions ("autocomplete on steroids")? Does this policy mean that programmers are also restricted in which IDEs and plugins they are allowed to have installed if they want to contribute?

Nobody is talking about advanced autocomplete when they say they want to ban AI code. They mean prompt-generated code.

VorpalWay 2 hours ago|||
> What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?

Unfortunately, when I have seen this in the context of the Rust project, the result has still been the verbose word salad typical of chat-style LLMs. It is better to use a dedicated translation tool, and to post the original along with the translation.

> What about having a plugin in your IDE that just gives syntax and small code-fragment suggestions ("autocomplete on steroids")?

Very good question. I consider this sort of AI usage benign (unlike agent-style usage), and it is the only kind of AI I use myself (since I have RSI, it helps to type less). You could turn the feature off for just this project, though.

> Does this policy mean that programmers are also restricted in which IDEs and plugins they are allowed to have installed if they want to contribute?

I don't think that follows, but which features you have active in the current project would definitely be affected. From what I have seen, all IDEs allow turning AI features on and off as needed.
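
For example, in VS Code this can be scoped per workspace (assuming the GitHub Copilot extension; `github.copilot.enable` is its documented setting) by adding to the project's .vscode/settings.json:

    {
      "github.copilot.enable": { "*": false }
    }

Other editors have similar per-project toggles, so contributing to a no-LLM project doesn't mean uninstalling anything globally.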

miningape 50 minutes ago||
> and post the original along with the translation

This, so many times. It's incredibly handy to have the original message from the author. For one, I may speak or understand parts of that language and so have an easier time understanding the intent of the translated text. For another, I can cut out and translate specific parts using whatever tools I want, again giving me more context about what the author is trying to communicate.

hypeatei 2 hours ago||
> What if a non-native English speaker uses the help of an AI model in the formulation of some issue

I've seen this excuse before, but in practice the output they copy/paste is extremely verbose and long-winded (with the bullet-point and heading soup, etc.)

Surely non-native speakers can see that structure and tell the LLM to match their natural style instead? No one wants to read a massive wall of text.

hagen8 3 hours ago||
They will sooner or later change that policy, or get very slow at keeping up.
The-Ludwig 3 hours ago||
Hm, wondering how to enforce this rule. Rules without any means to enforce them can put honest people at a disadvantage.
goku12 3 hours ago|
> This policy is not open to discussion, any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed, and any attempt to bypass this policy will result in a ban from the project.

It sounds serious and strict, but it applies to content that's 'clearly labelled as LLM-generated'. So what about content that isn't so clearly labelled? I don't know what to make of it.

My guess is that the serious tone is to avoid any possible legal issues that may arise from the inadvertent inclusion of AI-generated code. But the general motivation might be to avoid wasting the maintainers' time reviewing confusing and sloppy submissions made through lazy use of AI (as opposed to finely guided and well-reviewed AI code).

dana321 1 hour ago||
Generating small chunks of code with LLMs to save time works well. As long as you can read and understand the code, I don't see what the problem is.
xmodem 24 minutes ago|
The problem is that the well you are drinking from has in fact been poisoned. Maybe you think you can tolerate it, but some projects are making a policy decision that any exposure is too dangerous, and that is IMO perfectly reasonable.
algoth1 2 hours ago||
What would constitute "clearly LLM-generated", though?
nananana9 2 hours ago||

  if (foo == true) { // checking foo is true (rocketship emoji)
    20 lines of code;
  } else {
    the same 20 lines of code with one boolean changed in the middle;
  }
Description:

(markdown header) Summary (nerd emoji):

This PR fixes a non-existent issue by adding an **if statement** that checks if a variable is true. This has the following benefits:

  - Improves performance (rocketship emoji)
  - Increases code maintainability (rising bar chart emoji)
  - Helps prevent future bugs (detective emoji)
(markdown header) Conclusion:

This PR does not just improve performance, it fundamentally reshapes how we approach performance considerations. This is not just design --- it's architecture. Simple, succinct, yet powerful.

The-Ludwig 2 hours ago|||
Peak comedy
cpburns2009 12 minutes ago||
The clearly LLM PRs I receive are formatted similarly to:

    ## Summary
    ...

    ## Problem
    ...

    ## Solution
    ...

    ## Verification
    ...
They're too methodical, and they duplicate code when the change is longer than a single-line fix. I've never received a pull request formatted like that from a human.
api 2 hours ago||
AI has the potential to level the playing field somewhat between open source and commercial software and SaaS that can afford armies of expensive paid developers.

Time consuming work can be done quickly at a fraction of the cost or even almost free with open weights LLMs.

flanked-evergl 2 hours ago||
Spiritually Amish
scotty79 2 hours ago||
I see a lot of OSS forks in the future where people just fork to fix their issues with LLMs without going through maintainers. Or even do full LLM rewrites of smaller stuff.
estsauver 3 hours ago||
They're certainly welcome to do whatever they like, and for a microkernel-based OS it might make sense--I think the output from a lot of LLMs there is probably pretty "meh".

I think part of the battle is actually just getting people to identify which LLM made it, to understand whether someone's contribution is good or not. A JavaScript project with contributions from Opus 4.6 will probably be pretty good, but if someone is using Mistral Small via the chat app, it's probably just a waste of time.

emperorxanu 3 hours ago|
[flagged]