
Posted by lumpa 14 hours ago

The Zig project's rationale for their anti-AI contribution policy (simonwillison.net)
525 points | 284 comments
mapontosevenths 12 hours ago|
> unless it's coming from a known and trusted developer.

That's exactly the sketchy part here. They turned down known, working, and tested code that came from a partner (Bun) due to this policy. Code that 4x'd compile speed.

A general ban makes sense based on their rationalization ("contributor poker"[0]). A total and inflexible ban can lead to a worse outcome for everyone though.

If a senior, experienced contributor vouches for the code, it shouldn't matter if they hand-crafted it on stone tablets, generated it with yarrow sticks, or used GPT-3.

[0] https://kristoff.it/blog/contributor-poker-and-ai/

lelanthran 11 hours ago|||
> That's exactly the sketchy part here. They turned down known, working and tested, code that came from a partner (bun) due to this policy. Code that 4x'd compile speed.

No; they turned it down because the vibe-coded PR was crap.

> The rewritten type resolution semantics were designed to avoid these issues, but Bun’s Zig fork does not incorporate the changes (and has not otherwise solved the design problems), which means their parallelized semantic analysis implementation will exhibit non-deterministic behavior. That’s pretty much a non-starter for most serious developers: you don’t want your compilation to randomly fail with a nonsense error 30% of the time.
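The failure mode described there can be sketched in a few lines. This is a hypothetical Python illustration of the general race, not the actual Zig or Bun compiler code: when parallel workers mutate shared analysis state without an agreed-upon resolution strategy, the result can differ from run to run; with synchronization, it is deterministic.

```python
import threading

def analyze(parallel_safe: bool, workers: int = 4, items: int = 100_000) -> int:
    """Simulate parallel 'analysis' passes incrementing shared state."""
    resolved = 0
    lock = threading.Lock()

    def worker():
        nonlocal resolved
        for _ in range(items):
            if parallel_safe:
                with lock:        # a defined resolution order: deterministic
                    resolved += 1
            else:
                resolved += 1     # racy read-modify-write: updates can be lost

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return resolved

# The synchronized run gives the same answer every time: 4 * 100_000.
print(analyze(True))
```

Without the lock, the count can come up short on some runs and not others, which is the compiler-flavored equivalent of "compilation randomly fails with a nonsense error 30% of the time."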

lmm 12 hours ago||||
> If a senior, experienced, contributor vouches for the code it shouldn't matter if they hand crafted it on stone tablets, generated it with yarrow sticks, or used gpt-3.

The flip side of that is that if such a contributor vouches for code that turns out to be poor quality, it should severely damage their reputation. I've found that far too many "senior" developers will give AI a pass on poor coding practices.

JoshTriplett 12 hours ago||||
https://news.ycombinator.com/item?id=47958209
superb_dev 12 hours ago|||
A standout paragraph from that thread:

> Put more simply, we are going to make these enhancements, but hacking them in for a flashy headline isn’t a good outcome for our users. Instead we’re approaching the problem with the care it deserves, so that when we ultimately ship it, we don’t cause regressions.

These exact changes are already on the roadmap and Bun’s PR is rushing ahead.

mapontosevenths 12 hours ago|||
Thanks. That explains away most of my concern.
feverzsj 12 hours ago|||
Quite the contrary: Bun's developers don't even understand the language spec. Their slop didn't use the same type resolution semantics as Zig, which makes their implementation exhibit non-deterministic behavior.
CaptainFever 9 hours ago||
[flagged]
peter_griffin 9 hours ago||
>As always, the most ethical thing to do is to just ignore any anti-LLM policies and not disclose anything

How does this have anything to do with ethics? It's their project, not yours; they can reject your PR for whatever reason, including your use of LLMs in developing it. Also, they're not assuming autonomous agents are submitting PRs. They're saying that they do not accept PRs where any part of the thinking process was outsourced to an LLM.

Even if you disagree with their opinion, the ethical thing to do is to not interact and move on, not to try to sneak in your LLM-assisted PRs without the maintainers' consent.

crabmusket 9 hours ago||
Can you elaborate on the ethics of expressly ignoring the wishes of the project ownership?
future_crew_fan 5 hours ago||
The rule should be anti-fully-autonomous-PRs. (LLMs don't push bad code. People use LLMs to push bad code and DDoS the maintainers' mental bandwidth.)
blenderob 5 hours ago|
The rule should be whatever the people running the project think it should be. If you've got your own project, do implement the anti-fully-autonomous-PRs rule there. But the creators of Zig do not owe you or me the rule we'd like.
SuperV1234 7 hours ago||
Perhaps if the Zig maintainers had an LLM review their terrible rationale they would have picked up on the fact that it logically makes no sense.
cuu508 6 hours ago|
Please elaborate?
SuperV1234 6 hours ago||
https://claude.ai/share/f38ee8a6-56f1-408a-a536-211eb34c7045

I mostly agree with the assessment.

IMHO: hard, inflexible rules like these are always deeply rooted in biases and personal convictions, not in facts. The suggested policy amendment by Claude at the end is much more honest, logical, and palatable.

cuu508 6 hours ago|||
> The argument assumes that unassisted PR authorship is what builds trustworthy contributors, and that LLM assistance prevents that growth.

No, I don't think that was the argument. As I understood it, unassisted contributions have a higher chance of growing a trusted contributor. Not 100% vs. 0%, but statistically higher. So, given limited resources, it makes sense to prefer unassisted over assisted contributions.

SuperV1234 6 hours ago||
I don't believe that even the weakened version of the argument works -- it is based on an assumption, not fact.

Why would a contributor who uses AI assistance have a lower chance of becoming trusted?

I'm not talking about AI slop, but about a contributor who takes the time to understand a problem, find a solution, and discuss the pros and cons of alternatives. Using LLM assistance, of course.

franktankbank 4 hours ago||
Because you are at the whims of the bot they are at least partially dependent on.
SuperV1234 3 hours ago||
You could extend that argument to any tool used by the developer, like a linter, sanitizer, the IDE itself, or even auto-completion. Why target LLMs specifically?

The more I think about it, the more nonsensical it is.

- What if I do everything by hand, but have an LLM review my work at the very end?
- What if I have an LLM guide me through the codebase just by specifying the files I should read and in what order, but I do all the reading myself?
- What if I do everything by hand, but then use an LLM to optimize a small part of an algorithm?

You can easily see how absurd it is to completely ban LLMs.

What matters is the quality and correctness of the contribution. Even with heavy LLM usage, unless the developer understands what problem they're solving, the quality will be sub-par.

pico303 2 hours ago|||
I think you’re missing the underlying point. The Zig team is focused on the contributor and their relationship to the project, not on the correctness of the work. People, not product. Yes, an LLM can help you better understand your code and pick up on things you may have missed before you submit your change. But I think they look at it as you’ve then robbed the Zig team of that interaction with the contributor. They lost the opportunity to learn about how that person thinks, and that person lost the opportunity to be mentored and learn from other members of the Zig team. Sure, your code is better, but did you or the team grow from the experience or simply churn out more code?

I’m not saying whether that’s good or bad. I agree with their approach, but that’s just my opinion, and who am I to say what’s right or wrong? I think there’s value in LLMs as a tool to search and learn, but I’m also worried that LLMs make it really easy to focus only on the result and not the process. That process can be really valuable in building good teams, while LLMs can be really good at churning out an assembly line of code.

SuperV1234 1 hour ago||
My claim that LLMs can benefit the end product and create quality contributions does not imply that the person behind the contribution is less capable/creative/smart than someone who doesn't use LLMs.

But it seems that the Zig policy implies exactly that. Otherwise, what would be wrong with interacting with contributors who use LLMs?

franktankbank 3 hours ago|||
Would you let your nanny subcontract?

edit: Can't reply because I've posted a whole 4 times.

I believe we have different worldviews, which is hardly a disagreement. Answering my question would pretty well highlight our difference of opinion.

SuperV1234 2 hours ago||
What point are you trying to make? Be explicit.
slopinthebag 12 minutes ago|||
Did you just link an AI chat in an internet comment because you were too lazy to both think of a reply and write one out?

Lmao bro has completely outsourced their thinking to AI, this is comical

GaryBluto 9 hours ago||
I don't think I've ever heard anything positive about Zig. Every time I've seen the project mentioned, it's been about them using bizarre black-and-white moral judgements to justify stupid decisions.
lukaslalinsky 9 hours ago||
You need to look past this. Zig is an excellent low-level language. Thanks to the comptime features, you can have high-level looking APIs while staying down to the metal. It's not for everyone, obviously, but as a language, it is really good.
Pay08 9 hours ago||
You have to be wilfully blind, then. It gets rather frequently praised on HN (as much as any niche language can be), and they certainly don't make black-and-white moral judgements often.
small_model 5 hours ago||
"We won't take contributions that aren't hand-written assembly code; these C 'high-level language' patches are not allowed." Zig is a great project and language, but it will die on this hill.
ducdetronquito 4 hours ago|
You wrongly paint them as elitists.

It's a critique of low effort PRs compared to the high effort review they require.

lukaslalinsky 9 hours ago|
On multiple occasions over the last few months, I have wished the Zig/ZSF team would use LLMs. I've found many copy-and-paste errors that simply wouldn't exist if mundane tasks were delegated to a good LLM. It happens even in the wider Zig community: I've seen PRs to projects I'm interested in boasting about how it was all human-made, yet containing all kinds of trivial logical errors that even the worst LLM would catch.
lccerina 9 hours ago||
If you see them, why don't you help squash them?
lukaslalinsky 9 hours ago||
I did.
grayhatter 7 hours ago||
no cite?