Posted by jwilk 6 hours ago
It is. You haven't argued it at all here; you just asserted it as if it were self-evident, talked about your feelings, then demanded policy.
Your only job here was to convince people to align with you, and you didn't bother. It makes me suspect that you haven't really solidified the argument in your own mind.
If a change used to take a day or two and now takes a few minutes, then it's fair to ask for a couple of hours of additional prompting to add tangible tests that compensate for the risk of hallucinations or low-quality code sneaking in.
Is it really true that LLMs are non-deterministic? I thought if you used the exact same input and seed with the temperature set to 0, you would get the same output. It would actually be interesting to probe the commit prompts to see how slight variants performed.
I think there can also be differences across hardware: floating-point reductions on GPUs aren't guaranteed to run in the same order, so even temperature-0 outputs can vary across hardware, drivers, or batch sizes. And temperature is usually set higher than zero anyway because it produces more "useful/interesting" outputs.
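To make the temperature point concrete, here is a toy sketch in plain Python (not any particular model's API): at temperature 0, sampling collapses to an argmax over the logits, which is deterministic in principle, while any higher temperature draws from a distribution and only repeats if you fix the seed.

    import math, random

    def sample_token(logits, temperature, rng):
        """Pick the next token id from raw logits (toy example)."""
        if temperature == 0:
            # Greedy decoding: always the highest logit, deterministic in
            # principle (floating-point/batching effects can still vary).
            return max(range(len(logits)), key=lambda i: logits[i])
        # Softmax with temperature, then sample from the distribution.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return rng.choices(range(len(logits)), weights=probs, k=1)[0]

    logits = [2.0, 1.5, 0.3]
    rng = random.Random(42)                 # fixed seed -> reproducible draws
    print(sample_token(logits, 0, rng))     # always index 0
    print(sample_token(logits, 0.8, rng))   # varies unless the seed is fixed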
Quixotic, unworkable, pointless. It's fundamentally impossible (at least without a level of surveillance that would obviously be unacceptable) to verify an "artisanal hand-crafted human code" label.
> contributors should "fully understand" their submissions and would be accountable for the contributions, "including vouching for the technical merit, security, license compliance, and utility of their submissions".
This is a step in the right direction.
I think the missing link is formalizing the reputation system; one already exists informally for senior contributors, but the on-ramp for new contributors is currently broken.
Perhaps bots should ruthlessly triage unvouched submissions until the actor has proven a good-faith ability to deliver meaningful results. (Or the principal has staked or donated real money to the foundation to prove they are serious.)
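For illustration, such a gate could be a simple policy function. Every name and threshold below is hypothetical, not an existing project's rule:

    from dataclasses import dataclass

    @dataclass
    class Submission:
        author_vouches: int   # vouches from established contributors
        merged_prs: int       # author's previously merged contributions
        stake_usd: float      # optional stake/donation by the principal

    def triage(sub: Submission) -> str:
        """Route a submission to human review or bot triage.
        Thresholds are illustrative, not a real project policy."""
        if sub.author_vouches >= 2 or sub.merged_prs >= 3:
            return "human-review"   # proven good-faith track record
        if sub.stake_usd >= 50:
            return "human-review"   # skin in the game
        return "bot-triage"         # bots filter until reputation is earned

    print(triage(Submission(author_vouches=0, merged_prs=0, stake_usd=0)))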
I think the real problem here is the flood of low-effort slop, not AI tooling itself. In the hands of a responsible contributor, LLMs are already delivering big wins. (See antirez's posts, for example, if you are skeptical.)
Difficulty of enforcement is a detail. Since the rule exists, it can be invoked whenever a violation is detected. And importantly, it means that anyone who ignores the rule is intentionally defrauding the project.
IIRC Mitchell Hashimoto recently proposed some system of attestations for OSS contributors. It’s non-obvious how you’d scale this.
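One low-tech version (a hypothetical sketch, not Hashimoto's actual proposal) would be a commit-message trailer that CI verifies; the trailer name here is made up:

    import subprocess

    REQUIRED_TRAILER = "Assisted-by:"   # hypothetical trailer name

    def commit_has_attestation(sha: str) -> bool:
        """Check whether a commit message carries the attestation trailer."""
        msg = subprocess.run(
            ["git", "show", "-s", "--format=%B", sha],
            capture_output=True, text=True, check=True,
        ).stdout
        return any(line.startswith(REQUIRED_TRAILER)
                   for line in msg.splitlines())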
They can spin up LLM-backed contributors faster than you can ban them.
Hence banning AI contributions is meaningless: you only punish the 'good' actors.