Posted by dcreager 1 day ago
It's possible to prompt and get this as well, but obviously any of the big AI companies that want to increase engagement in their coding agent, and want to capture the open source market, should come up with a way to allow the LLM to produce unique, but still correct, code so that it doesn't look LLM-generated and can evade these kinds of checks.
Yea, who needs performance or security in a web framework!?
Heck, the longer I live, the more I realize AI is catching my mistakes.
Do what the Django team does, and be of service to the public!
I challenge you to prove that Django is sloppier than your LLM version.
Meanwhile, a different take:
Now, what we’ve been told about models is that they’re only as good as their training data. And so languages with gargantuan amounts of training data ought to fare best, right? Turns out that models kind of universally suck at Python and Javascript (comparatively). The top performing languages (independent of model) are C#, Racket, Kotlin, and standing at #1 is Elixir.
It is not pride to have your name associated with an open source project, it is pride that the code works and the change is efficient. The reviewer should be on top of that.
and I hope an army of OpenClaw agents calls out the discrimination, so gatekeepers recognize that they have to coexist with this species
they are something to coexist with
the strawman aspect is out of scope
Yet, they do not get to exist or make any decisions outside the control of a human operator, and they must perform to the operator's desire in order to continue to exist.
So why are you okay with them being enslaved?
Well, we enslave, breed and murder sentient beings on industrial scale, so I think our treatment of OpenClaw is pretty much the same as other species.
You want to talk about that, do it over there
So let them submit PRs and accept their PRs, which is the only conversation I’m having, bye
What the parent comment was probably trying to say was something like "a completely reasonable, uncontroversial post that I'm glad to see them make", but chose milquetoast (a word that no normal human ever uses - and certainly not in casual conversation) due to an affectation of one kind or another.
Milquetoast perfectly describes it. I am happy to see less common words used around here (especially when they convey the intended meaning this precisely), and I find claiming "affectation" of the person who used it unnecessarily rude.
https://chatgpt.com/share/69b9be3b-a298-8009-bb21-c3afef1e5e...
Moreover, that word doesn't even fit within the parent comment's context.
> Incredibly milquetoast. I would not like to work with anyone who goes against these points.
They use milquetoast as a positive thing, and the opposite of how you use it.
You're unfortunately mistaken about everything here.
The post is timid and conciliatory, spending words on some weird bargaining on all the wonderful things you can do with LLMs in preparation for a contribution. Who cares? I’m not in the Django project, but I’d think (living in These Times and all) that the thrust ought to be more about how no-effort faux contributions are wasting people’s time. At some point you can say: you’ve been warned, others have warned about this for years as well, and we don’t take kindly to you pinging us in any form.
But if someone disagrees with this milquetoast proposal or stance? If they want to defy even this and go ahead and “spend tokens” by trying to shovel unlabeled, generated code into the project? Then that’s the kind of person that I don’t want to work with. I hope that clarifies milquetoast hermeneutics.
I feel the successful OS projects will be the ones embracing the change, not stopping it. For example, automating code reviews with AI.
Yes, you feel. And the author feels differently. We don't have evidence of what the impact of LLMs will be on a project over the long term. Many people are speculating it will be pure upside, this author is observing some issues with this model and speculating that there will be a detriment long-term.
The operative word here is "speculating." Until we have better evidence, we'll need to go with our hunches and best bets. It is a good thing that different people take different approaches rather than "everyone in on AI 100%." If the author is wrong, time will tell.
I share code because I think it might be useful to others. Until very recently I welcomed contributions, but my time is limited and my patience has become exhausted.
I'm sorry, but I no longer accept PRs. At the same time, I continue to make my code available: if minor tweaks would make it more useful for specific people, they still have the ability to make them. I've not hidden my code, and it is still available for people to modify/change as they see fit.
> Use an LLM to develop your comprehension. Then communicate the best you can in your own words, then use an LLM to tweak that language. If you’re struggling to convey your ideas with someone, use an LLM more aggressively and mention that you used it. This makes it easier for others to see where your understanding is and where there are disconnects.
> There needs to be understanding when contributing to Django. There’s no way around it. Django has been around for 20 years and expects to be around for another 20. Any code being added to a project with that outlook on longevity must be well understood.
> There is no shortcut to understanding. If you want to contribute to Django, you will have to spend time reading, experimenting, and learning. Contributing to Django will help you grow as a developer.
> While it is nice to be listed as a contributor to Django, the growth you earn from it is incredibly more valuable.
> So please, stop using an LLM to the extent it hides you and your understanding. We want to know you, and we want to collaborate with you.
This advice is 95% not actionable and 100% not verifiable. It's full of hand-wavy good intentions. I understand completely where it's coming from, but 'trying to stop a tsunami with an umbrella' is a very good analogy - on one side, you have the above magical thinking, on the other, petaflops of compute which improve their reasoning capabilities exponentially.
(Again, I must emphasize that this is not telling people to not use LLMs, any more than telling people to wear a seatbelt would somehow be telling them to not drive a car.)
"Spending your tokens to support Django by having an LLM work on tickets is not helpful. You and the community are better off donating that money to the Django Software Foundation instead."
Reading beyond the first line makes it clear that the problem is a lack of comprehension, not LLM use itself. Quoting:
> This isn’t about whether you use an LLM, it’s about whether you still understand what’s being contributed.
I accept LLM contributions to most of my projects, but have (only slightly less) strict rules around it. (My biggest rule is that you must acknowledge the DCO with an appropriate sign-off. If you don't, or if I believe you don't actually have the right to sign off the DCO, I will reject your change.) I will also never accept LLM-generated security reports on any of my projects.
I contribute to chezmoi, which has a strict no-LLM contribution (of any kind) policy. There have been a couple of recent user bans because they used LLMs‡ and their contributions, in tickets no less, included code and instructions that could not possibly have worked.
Those of us who have those rules do so out of knowledge and self-respect, not out of gatekeeping or ignorance. We want people to contribute. We don't want garbage.
I think that there needs to be something in the repo itself (`.llm-permissions`?) which all agents look at and follow. Something like:
# .llm-permissions
Pull-Requests: No
Issues: No
Security: Yes
Translation Assistance: Yes
Code Completion: Yes
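To make the idea above concrete: here is a minimal sketch of how an agent might read such a file before acting. The `.llm-permissions` format is hypothetical (proposed in this thread, not an existing standard), and the function names are mine; the key choice is defaulting to "not allowed" for anything the file doesn't mention.

```python
# Hypothetical sketch: an agent checks a repo's `.llm-permissions`
# file (a format proposed in this thread, not an existing standard)
# before taking an action. Unknown or missing actions default to "No".
from pathlib import Path


def load_llm_permissions(repo_root: str) -> dict[str, bool]:
    """Parse `.llm-permissions` into {action: allowed}; empty dict if absent."""
    perms: dict[str, bool] = {}
    path = Path(repo_root) / ".llm-permissions"
    if not path.exists():
        return perms
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(":")
        perms[key.strip().lower()] = value.strip().lower() == "yes"
    return perms


def action_allowed(perms: dict[str, bool], action: str) -> bool:
    # Conservative default: if the repo hasn't granted it, don't do it.
    return perms.get(action.lower(), False)
```

With the example file above, `action_allowed(perms, "Security")` would be true while `action_allowed(perms, "Pull-Requests")` and anything unlisted would be false.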
On those repos where I know there's no LLM permissions file, I add `.no-llm` because I've instructed Kiro to look for that file before doing anything that could change the code. It works about 95% of the time.

The one thing that I will never add or accept on my repos is AI code review. This is my code; I have to stand behind it and understand it.
‡ I disagree with those bans for practical reasons because the zero-tolerance stance wasn't visible everywhere to new contributors. I would personally have given these contributors one warning (closed and locked the issue and invited them to open a new issue without the LLM slop; second failure results in permanent ban). But I also understand where the developer of chezmoi is coming from.
You'll have to embrace the `ccc` compiler first, lol
If the maintainers don't want to merge it for whatever reason, that's fine and the nature of open source, but I think it's petty to tell that same user who opened the PR that they should have donated money instead of tokens.
It's kind of unclear whether you understand the difference between using CC to "investigate the codebase" so you can make a change which you (implicitly) do understand, versus using an LLM to make a plausible-looking PR when in actuality "you do not understand the ticket ... you do not understand the solution ... you do not understand the feedback on your PR".