Posted by lumpa 14 hours ago
That's exactly the sketchy part here. They turned down known, working, and tested code that came from a partner (Bun) due to this policy. Code that 4x'd compile speed.
A general ban makes sense given their rationale ("contributor poker"[0]). A total and inflexible ban, though, can lead to a worse outcome for everyone.
If a senior, experienced contributor vouches for the code, it shouldn't matter whether they hand-crafted it on stone tablets, generated it with yarrow sticks, or used GPT-3.
No; they turned it down because the vibe-coded PR was crap.
> The rewritten type resolution semantics were designed to avoid these issues, but Bun’s Zig fork does not incorporate the changes (and has not otherwise solved the design problems), which means their parallelized semantic analysis implementation will exhibit non-deterministic behavior. That’s pretty much a non-starter for most serious developers: you don’t want your compilation to randomly fail with a nonsense error 30% of the time.
The flip side is that if such a contributor vouches for code that turns out to be poor quality, it should severely damage their reputation. I've found that far too many "senior" developers will give AI a pass on poor coding practices.
> Put more simply, we are going to make these enhancements, but hacking them in for a flashy headline isn’t a good outcome for our users. Instead we’re approaching the problem with the care it deserves, so that when we ultimately ship it, we don’t cause regressions.
These exact changes are already on the roadmap and Bun’s PR is rushing ahead.
How does this have anything to do with ethics? It's their project, not yours; they can reject your PR for whatever reason, including your having used LLMs to develop it. Also, they're not assuming autonomous agents are submitting PRs. They're saying that they do not accept PRs where any part of the thinking process was outsourced to an LLM.
Even if you disagree with their opinion, the ethical thing to do is to not interact and move on. Not to try to sneak in your LLM-assisted PRs without the maintainers' consent.
I mostly agree with the assessment.
IMHO: hard, inflexible rules like these are always deeply rooted in biases and personal convictions, not in facts. The policy amendment suggested by Claude at the end is much more honest, logical, and palatable.
No, I don't think that was the argument. As I understood it, unassisted contributions have a higher chance of growing into a trusted contributor. Not 100% vs. 0%, but statistically higher. So, given limited resources, it makes sense to prefer unassisted over assisted contributions.
Why would a contributor who uses AI assistance have a lower chance of being trusted?
I'm not talking about AI slop, but about a contributor who takes the time to understand a problem, find a solution, and discuss the pros and cons of alternatives. Using LLM assistance, of course.
The more I think about it, the more nonsensical it is.
- What if I do everything by hand, but have an LLM review my work at the very end?
- What if I have an LLM guide me through the codebase just by specifying the files I should read and in what order, but I do all the reading myself?
- What if I do everything by hand, but then use an LLM to optimize a small part of an algorithm?
You can easily see how absurd it is to completely ban LLMs.
What matters is the quality and correctness of the contribution. Even with heavy LLM usage, unless the developer understands the problem they're solving, the quality will be sub-par.
I’m not saying whether that’s good or bad. I agree with their approach, but that’s just my opinion, and who am I to say what’s right or wrong? I think there’s value in LLMs as a tool to search and learn, but I’m also worried that LLMs make it really easy to focus only on the result and not the process. That process can be really valuable in building good teams, while LLMs can be really good at churning out an assembly line of code.
But it seems the Zig policy does imply that. Otherwise, what would be wrong with contributors interacting via LLMs?
edit: Can't reply because I've posted a whole 4 times.
I believe we have different worldviews, which is hardly a disagreement. Answering my question could pretty well highlight our difference of opinion.
Lmao bro has completely outsourced their thinking to AI, this is comical
It's a critique of low-effort PRs compared to the high-effort review they require.