I remember lawsuits that said the products of AI can't be copyrighted. How does that affect projects?
One use of AI, I think, is going to be uncontroversial: autocomplete suggestions. I've watched a coworker use Supermaven as a JetBrains plugin (back before it was folded into Cursor), which basically gave him autocomplete on steroids. Instead of autocompleting the function name he was typing, it figured out from code context which variables he was likely to pass in as parameters. If it was wrong, he kept typing. Once it was right, he hit Tab and saved himself 30 seconds of typing beyond what traditional autocomplete would have. Doing that a hundred-plus times over the course of an 8-hour workday adds up quickly. And more importantly, it's obvious to anyone that the code was entirely the creation of his own brain, and the AI autocomplete tool was just a typing aid.
When the AI tool is generating entire functions, or entire files, the question of authorship becomes a lot less clear.
If you're not writing your code yourself, why do you expect people to read it and follow your lead on whatever convention you prefer?
I get why people who hand-write code are fussy about this, but you start the article off devaluing coding entirely, then pivot to how the way your codebase is written has value that needs to be followed.
It's either low value or it isn't. You can't treat it as worthless and then complain when others view your code as worthless and not worth reading too.
Arguably, because LLM tokens are expensive, LLM-generated code could be considered a donation? But then so is the labor involved, so it's kinda moot. I don't believe people pay software developers to write code for them to contribute to open source projects either (if that makes any sense).
> ...
> But if you ask me, the bigger threat to GitHub's model comes from the rapid devaluation of someone else's code. When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
> If that's the case, which I'm starting to think it is, then it's better to limit community contribution to the places it still matters: reporting, discussion, perspective, and care. Don't worry about the code, I can push the button myself.
When was writing code ever the hard part?
If contributors aren't solving problems, what good are they? Code that doesn't solve a problem is cruft. And if a problem could be solved trivially, you probably wouldn't need contributions from others to solve it in the first place.
I’ll call this what it is: a commercial product (they have a pricing page) that uses open source as marketing to sell more licenses.
The only PRs they want are ones that offer free professional level labor.
They’re too uncaring about the benefits of an open community to come up with a workflow to adapt to AI.
It honestly makes me doubt that they can maintain their own code quality standards with their own employees.
Think about it: if this company grows larger and they can't handle AI slop from contributors, how will they handle AI slop from a large employee base?
BigBlueButton had to fork tldraw because of this. https://docs.bigbluebutton.org/new-features/#we-have-forked-...