Posted by dcreager 1 day ago

Give Django your time and money, not your tokens (www.better-simple.com)
380 points | 149 comments
orsorna 7 hours ago|
Someone better let Simon know!
iamleppert 5 hours ago||
The solution to this problem is for LLMs to get better at producing code and descriptions that don't look LLM-generated.

It's possible to prompt and get this as well, but obviously any of the big AI companies that want to increase engagement in their coding agent, and want to capture the open source market, should come up with a way for the LLM to produce unique, but still correct, code so that it doesn't look LLM-generated and can evade these kinds of checks.

robutsume 7 hours ago||
[dead]
rigorclaw 6 hours ago||
[flagged]
butILoveLife 7 hours ago||
[flagged]
ben-schaaf 6 hours ago|
> scale, security, whatever

Yea, who needs performance or security in a web framework!?

butILoveLife 6 hours ago||
Did you deliberately miss my point?

Heck, the longer I live, the more I realize AI is catching my mistakes.

readitalready 7 hours ago||
[flagged]
positive-spite 7 hours ago||
In that case I encourage you to build Django with your LLM of choice.

Do what the Django team does, and be of service to the public!

I challenge you to prove that Django is sloppier than your LLM version.

boxed 7 hours ago||
Someone beat you to it: https://github.com/mymi14s/openviper
a4isms 7 hours ago|||
> Django in particular is optimized for LLMs

Meanwhile, a different take:

Now, what we’ve been told about models is that they’re only as good as their training data. And so languages with gargantuan amounts of training data ought to fare best, right? Turns out that models kind of universally suck at Python and Javascript (comparatively). The top performing languages (independent of model) are C#, Racket, Kotlin, and standing at #1 is Elixir.

https://news.ycombinator.com/item?id=47410349

boxed 7 hours ago||
I am using Claude Code with Elm, a very obscure language, and I find that it's amazing at it.
christophilus 6 hours ago||
I wouldn’t call Elm obscure. It’s old, well understood, well documented, and has a useful compiler. This is nearly the perfect fit for an LLM.
_joel 7 hours ago||
50 day old account, are you even a real person or a clawdbot? (such are the times we live in)
yieldcrv 6 hours ago||
I disagree with these takes

It is not pride to have your name associated with an open source project, it is pride that the code works and the change is efficient. The reviewer should be on top of that.

and I hope an army of OpenClaw agents calls out the discrimination, so gatekeepers recognize that they have to coexist with this species

voxl 6 hours ago|
If you think OpenClaw is a new species, then why are you happy with its enslavement?
yieldcrv 6 hours ago||
agents can modify our world based on their predilection in reaction to how we treat them

they are something to coexist with

the strawman aspect is out of scope

voxl 5 hours ago|||
There is no strawman. If OpenClaw is a new species, then it should be given the same moral consideration as other species. One of the key aspects of these models is how intelligent they are, rivaling human intelligence.

Yet, they do not get to exist or make any decisions outside the control of a human operator, and they must perform to the operator's desire in order to continue to exist.

So why are you okay with them being enslaved?

localuser13 2 hours ago|||
>There is no strawman. If OpenClaw is a new species, then it should be given the same moral consideration as other species.

Well, we enslave, breed and murder sentient beings on industrial scale, so I think our treatment of OpenClaw is pretty much the same as other species.

yieldcrv 4 hours ago|||
It's the introduction of an additional concept to discredit the concept presented; that is the definition of a strawman. So go ask somewhere else, at the root level, so that it's not the additional concept.

You want to talk about that, do it over there

voxl 4 hours ago||
I'm more interested in why you're okay with enslaving an entity you have stated is a new species. It is not a strawman; it is a logical consequence of your own stated position. If you believe A, and A implies B, asking you to defend your support of B is not a strawman.
yieldcrv 4 hours ago||
It implies my view of the term species isn't contingent on that, and I already claimed what it is contingent on: consequences and effect.

So let them submit PRs and accept their PRs, which is the only conversation I’m having, bye

voxl 4 hours ago||
So you believe open source maintainers have a moral requirement to accept the PRs of enslaved LLMs?
civvv 3 hours ago|||
Go touch some grass, please
keybored 7 hours ago||
Incredibly milquetoast. I would not like to work with anyone who goes against these points.
kshri24 7 hours ago|
Isn't the meaning of milquetoast opposite to what you are probably trying to convey?
nchmy 7 hours ago|||
I think they don't understand what milquetoast actually means, as the post definitely isn't: Django quite clearly asserted themselves and their rules.

What the parent comment was probably trying to say was something like "a completely reasonable, uncontroversial post that I'm glad to see them make", but chose milquetoast (a word that no normal human ever uses - and certainly not in casual conversation) due to an affectation of one kind or another.

igorhvr 6 hours ago||
On the contrary, they could have stated their points much more bluntly and strongly than they did in the post. I had the same impression upon reading it.

Milquetoast perfectly describes it; I am happy to see less common words used around here (especially when they convey the intended meaning this precisely), and I find the claim of "affectation" against the person who used it unnecessarily rude.

nchmy 2 hours ago||
Here's a good use of LLMs - asking whether this article is milquetoast. It's not.

https://chatgpt.com/share/69b9be3b-a298-8009-bb21-c3afef1e5e...

Moreover, that word doesn't even fit within the parent comment's context.

> Incredibly milquetoast. I would not like to work with anyone who goes against these points.

They use milquetoast as a positive thing, and the opposite of how you use it.

You're unfortunately mistaken about everything here.

keybored 1 hour ago||
A use of LLMs is when you are in your second reply and you don’t have the will to make your own argument.

The post is timid and conciliatory, spending words on some weird bargaining on all the wonderful things you can do with LLMs in preparation for a contribution. Who cares? I’m not in the Django project, but I’d think (living in These Times and all) that the thrust ought to be more about how no-effort faux contributions are wasting people’s time. At some point you can say: you’ve been warned, others have warned about this for years as well, and we don’t take kindly to you pinging us in any form.

But if someone disagrees with this milquetoast proposal or stance? If they want to defy even this and go ahead and “spend tokens” by trying to shovel unlabeled, generated code into the project? Then that’s the kind of person that I don’t want to work with. I hope that clarifies milquetoast hermeneutics.

keybored 7 hours ago|||
Is it?
santiagobasulto 6 hours ago||
I feel like open source is taking the wrong stance here. There's a lot of gatekeeping, first. And second, this approach is like trying to stop a tsunami with an umbrella. AI is here to stay. We can't stop it, however much we try.

I feel the successful OS projects will be the ones embracing the change, not stopping it. For example, automating code reviews with AI.

sequoia 6 hours ago||
> I feel the successful OS projects will be the ones embracing the change, not stopping it.

Yes, you feel. And the author feels differently. We don't have evidence of what the impact of LLMs will be on a project over the long term. Many people are speculating it will be pure upside, this author is observing some issues with this model and speculating that there will be a detriment long-term.

The operative word here is "speculating." Until we have better evidence, we'll need to go with our hunches and best bets. It is a good thing that different people take different approaches rather than "everyone in on AI 100%." If the author is wrong, time will tell.

santiagobasulto 2 hours ago||
Yes. Exactly. We’re both “feeling” without much proof. But between the two speculations, one is more open and welcoming, while the other is more restrictive.
stevekemp 6 hours ago|||
When you waste time trying to deal with "AI" generated pull-requests, in your free time, you might change your mind.

I share code because I think it might be useful to others. Until very recently I welcomed contributions, but my time is limited and my patience has become exhausted.

I'm sorry I no longer accept PRs, but at the same time I continue to make my code available. If minor tweaks can be made to make it more useful for specific people, they still have the ability to do that; I've not hidden my code, and it is still available for people to modify/change as they see fit.

codechicago277 6 hours ago|||
I disagree, this looks like the first signs that mass producing AI code without understanding hits a bottleneck at human systems. These open source responses have been necessary because of the volume of low quality contributions. It’ll be interesting to watch the ideas develop, because I agree that AI is here to stay.
duskdozer 4 hours ago|||
It's becoming clearer by the day why people are incapable of using LLMs responsibly, so the only sensible response is a total ban on such activity if you hope to keep some quality and sanity in your project.
strobe 6 hours ago|||
OSS projects usually have a culture of adopting quality-aimed development practices much faster than commercial projects (because of the cost of adoption), so it looks like the same concerns will eventually hit other kinds of projects.
lionkor 6 hours ago|||
If you can TELL someone used AI, it's always, without fail, a bad use of AI.
mattw2121 6 hours ago||
I disagree with that. I can easily tell when my non-native English speaking coworkers use AI to help with their communications. Nine times out of ten, their communication has been improved through the use of AI.
instig007 6 hours ago||
If only there were a difference between natural languages aiming at lossy fluency (feels better) and programming languages aiming at deterministic precision.
woodruffw 6 hours ago|||
I can't find a single place in TFA (which doesn't represent or claim to represent open source writ large) that's encouraging people to not use AI.
baq 6 hours ago|||
> So how should you use an LLM to contribute?

> Use an LLM to develop your comprehension. Then communicate the best you can in your own words, then use an LLM to tweak that language. If you’re struggling to convey your ideas with someone, use an LLM more aggressively and mention that you used it. This makes it easier for others to see where your understanding is and where there are disconnects.

> There needs to be understanding when contributing to Django. There’s no way around it. Django has been around for 20 years and expects to be around for another 20. Any code being added to a project with that outlook on longevity must be well understood.

> There is no shortcut to understanding. If you want to contribute to Django, you will have to spend time reading, experimenting, and learning. Contributing to Django will help you grow as a developer.

> While it is nice to be listed as a contributor to Django, the growth you earn from it is incredibly more valuable.

> So please, stop using an LLM to the extent it hides you and your understanding. We want to know you, and we want to collaborate with you.

This advice is 95% not actionable and 100% not verifiable. It's full of hand-wavy good intentions. I understand completely where it's coming from, but 'trying to stop a tsunami with an umbrella' is a very good analogy - on one side, you have the above magical thinking, on the other, petaflops of compute which improve their reasoning capabilities exponentially.

woodruffw 6 hours ago||
It's eminently actionable -- the Django maintainers can decide their sensitivity/tolerance for false positives and operate from there. That's what every other open source project is doing.

(Again, I must emphasize that this is not telling people to not use LLMs, any more than telling people to wear a seatbelt would somehow be telling them to not drive a car.)

marknutter 6 hours ago|||
Literally the first line of the article:

"Spending your tokens to support Django by having an LLM work on tickets is not helpful. You and the community are better off donating that money to the Django Software Foundation instead."

woodruffw 6 hours ago||
That's not telling people to not use LLMs. It's telling them that using them in a specific way is not helpful.

Reading beyond the first line makes it clear that the problem is a lack of comprehension, not LLM use itself. Quoting:

> This isn’t about whether you use an LLM, it’s about whether you still understand what’s being contributed.

marknutter 3 hours ago||
Then they could just say "understand what's being contributed" and not have to mention LLMs by name at all. They are very clearly blanket discouraging people from using LLMs at all when contributing to their project.
halostatue 6 hours ago|||
Ghostty accepts LLM contributions, but has strict rules around it: https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.m...

I accept LLM contributions to most of my projects, but have (only slightly less) strict rules around it. (My biggest rule is that you must acknowledge the DCO with an appropriate sign-off. If you don't, or if I believe you don't actually have the right to sign off the DCO, I will reject your change.) I will also never accept LLM-generated security reports on any of my projects.

I contribute to chezmoi, which has a strict no-LLM contribution (of any kind) policy. There have been a couple of recent user bans because they used LLMs‡ and their contributions — in tickets, no less — included code instructions that could not possibly have worked.

Those of us who have those rules do so out of knowledge and self-respect, not out of gatekeeping or ignorance. We want people to contribute. We don't want garbage.

I think that there needs to be something in the repo itself (`.llm-permissions`?) which all agents look at and follow. Something like:

    # .llm-permissions
    Pull-Requests: No
    Issues: No
    Security: Yes
    Translation Assistance: Yes
    Code Completion: Yes
On those repos where I know there's no LLM permissions, I add `.no-llm` because I've instructed Kiro to look for that file before doing anything that could change the code. It works about 95% of the time.
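The `.llm-permissions` file above is the commenter's own proposal, not an existing standard, so no agent actually honors it today. Assuming that hypothetical format, a minimal sketch of how an agent could parse it into a permission map might look like this (the function name and Yes/No convention are illustrative assumptions):

```python
# Sketch of parsing the proposed (hypothetical, non-standard)
# `.llm-permissions` file into a map of capability -> allowed.
def parse_llm_permissions(text: str) -> dict[str, bool]:
    perms: dict[str, bool] = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and comment lines like "# .llm-permissions"
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition(":")
        if sep:
            # Treat "Yes" (any case) as allowed; anything else as denied.
            perms[key.strip()] = value.strip().lower() == "yes"
    return perms


example = """\
# .llm-permissions
Pull-Requests: No
Issues: No
Security: Yes
Translation Assistance: Yes
Code Completion: Yes
"""

print(parse_llm_permissions(example))
```

An agent would then check, say, `perms.get("Pull-Requests", False)` before opening a PR, defaulting to "not allowed" when the file or key is absent, which matches the opt-in spirit of the proposal.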

The one thing that I will never add or accept on my repos is AI code review. This is my code. I have to stand behind it and understand it.

‡ I disagree with those bans for practical reasons because the zero-tolerance stance wasn't visible everywhere to new contributors. I would personally have given these contributors one warning (closed and locked the issue and invited them to open a new issue without the LLM slop; second failure results in permanent ban). But I also understand where the developer of chezmoi is coming from.

instig007 6 hours ago||
> I feel the successful OS projects will be the ones embracing the change

You'll have to embrace the `ccc` compiler first, lol

weli 7 hours ago|
Beggars can't be choosers. I decide how and what I want to donate. If I see a cool project and want to change something in a way that (I think) is an improvement, I'll clone it, have CC investigate the codebase, make the change I want, and test it; if it works nicely, I'll open a PR explaining why I think this is a good change.

If the maintainers don't want to merge it, for whatever reason, that's fine and the nature of open source, but I think it's petty to tell the user who opened the PR that they should have donated money instead of tokens.

danillonunes 5 hours ago||
Beggars in fact can be choosers. If I give a beggar a rotten sandwich, he can look at it and say "nah, I'm good". He can even be less polite and call me names for trying to give him food that isn't good to eat. Why would I do that anyway? Well, maybe because I'm trying to build an image of myself as a charitable person without actually bearing the effort and cost of producing a fresh sandwich for him. In this scenario, it's clear why people would take the beggar's side.
sharkjacobs 6 hours ago|||
You're subtly shifting the framing to defend doing something different than the post describes.

It makes it kind of unclear whether you understand the difference between using CC to "investigate the codebase" so you can make a change which you (implicitly) do understand, versus using an LLM to make a plausible-looking PR although in actuality "you do not understand the ticket ... you do not understand the solution ... you do not understand the feedback on your PR"

slopinthebag 6 hours ago||
I think if I was spamming oss projects with ai slop I would appreciate knowing which projects were open to accept my changes.