Posted by msolujic 2 hours ago
Vibe coding kills open source - https://news.ycombinator.com/item?id=46765120 - Jan 2026 (285 comments)
Just because some things suck, for now, doesn't mean open source is being killed. It means software development is changing. It will be harder to distinguish a good-faith, quality effort that meets a project's quality bar without sifting through a larger volume of contributions.
Anonymous participation will decrease, communities will have to create a minimal hierarchy of curation, and the web of trust built up in these communities will have to become more pragmatic. The relationships and the tools already exist, it's just the shape of the culture that results in good FOSS that will have to update and adapt to the technology.
I work a lot with quants (who can program but are more focused on making money than on clean code), and Opus 4.5 and Kimi 2.5 are extremely good at giving them architecture guidance. They tend to overcomplicate some things, but the result is usually miles better than what they produced without LLMs.
their LLM "assisted" work seems to be roughly the same quality (i.e. bad), but now there's much more of it
not an improvement
It shifts things so that a "left-pad" kind of incident won't happen again, because no one will need that kind of "library" when an LLM can generate it.
I see it as a positive thing: no single schmuck will be able to terrorize the whole ecosystem when there are dozens of different LLMs that can write such code.
More people will keep their work closed, because they'll be able to turn it into something commercial, or their "thing" won't matter because an LLM will be able to replicate their effort in 5 minutes, so no one will be willing to pay for it.
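For context, the entire left-pad incident revolved around a function of roughly this size, which any current model will happily emit inline. A hypothetical sketch of that kind of generated code (illustrative, not the actual left-pad package):

```typescript
// Hypothetical inline replacement for a left-pad-style micro-dependency:
// pads `input` on the left with `fill` until it is `length` characters long.
function leftPad(input: string, length: number, fill: string = " "): string {
  if (input.length >= length || fill.length === 0) return input;
  const padding = fill.repeat(Math.ceil((length - input.length) / fill.length));
  return padding.slice(0, length - input.length) + input;
}

console.log(leftPad("42", 5, "0")); // "00042"
```

(Modern JavaScript has String.prototype.padStart built in anyway, which is its own argument against the micro-library.)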
Raising barriers against AI slop will also create a good reason to turn away demanding non-AI slop as well. It might give the real contributors to open source projects some breathing space.
57 Channels and Nothin' On
https://en.wikipedia.org/wiki/57_Channels_(And_Nothin%27_On)
And while banning AI outright is certainly an option at a private company, it also feels like throwing out the baby with the bath water. So we’re all searching for a solution together, I think.
There was a time (decades ago) when projects didn’t need to use pull requests. As the pool of contributors grew, new tools were discovered and applied and made FOSS (and private dev) a better experience overall. This feels like a similar situation.
The internet is worse off.
The sports I participate in got cheaper to get into, and they're worse for it. The culture is worse.
What has gotten better because the barrier to entry is lower?
* no longer any pressure to contribute upstream
* no longer any need to use a library at all
* Verbose PRs created with LLMs that are resume-padding
* False issues filed by unsophisticated users running LLM-detection tools
Overall, we've lost the single meeting place, the open-source library everyone converges on, that let us build a better commons. That part is true. It will be interesting to see what follows from this.
I know that for very many small tools, I much prefer to just "write my own" (read: have Claude Code write me something). A friend showed me a worktree manager project on Github and instead of learning to use it, I just had Claude Code create one that was highly idiosyncratic to my needs. Iterative fuzzy search, single keybinding nav, and so on. These kinds of things have low ongoing maintenance and when I want a change I don't need to consult anyone or anything like that.
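For a sense of scale, the "iterative fuzzy search" in a tool like that can be a dozen lines. A hypothetical fragment of the kind of idiosyncratic code Claude Code might produce (the names and behavior are illustrative, not from the actual project):

```typescript
// Naive subsequence fuzzy matcher for filtering worktree names as you type.
// Returns true if every character of `query` appears in `candidate` in order.
function fuzzyMatch(query: string, candidate: string): boolean {
  let i = 0;
  const q = query.toLowerCase();
  for (const ch of candidate.toLowerCase()) {
    if (ch === q[i]) i++;
    if (i === q.length) return true;
  }
  return q.length === 0;
}

const worktrees = ["feature/login", "bugfix/parser", "main"];
console.log(worktrees.filter(w => fuzzyMatch("fpar", w))); // ["bugfix/parser"]
```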
But we're not at the point where I'd like to run my own Linux-compatible kernel or where I'd even think of writing a Ghostty. So perhaps what's happened is that the baseline for an open-source project being worthwhile to others has increased.
For the moment, for a lot of small ones, I much prefer their feature list and README to their code. Amusing inversion.
As someone who works on medical device software, I see this as a huge plus (maybe a con for FOSS specifically, but a net win overall).
I'm a big proponent of the go-ism "A little copying is better than a little dependency". Maybe we need a new proverb: "A little generated code is better than a little dependency". Fewer dependencies = smaller cybersecurity burden, smaller regulatory burden, and more.
Now, obviously forgoing libsodium or something for generated code is a bad idea, but 90%+ of npm packages could probably go.
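In the spirit of that proverb, here is what replacing a typical one-function npm dependency with generated inline code looks like; a minimal, hypothetical sketch (names are illustrative):

```typescript
// Hypothetical inline replacement for a one-function debounce dependency:
// delays calls to `fn` until `waitMs` ms have passed without a new call.
function debounce<T extends (...args: any[]) => void>(
  fn: T,
  waitMs: number
): (...args: Parameters<T>) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: one fewer entry in package.json, one fewer supply-chain risk.
const onResize = debounce(() => console.log("resized"), 200);
```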
I feel npm gets held to an unreasonable standard. The fact is, tons of beginners across the world publish packages to it. Some projects publish lots of packages that only make sense for those projects but are public anyway, and then you have the bulwark packages that most orgs actually use.
It is unfair to me that it's always held up as the "problematic registry". When you have a single registry for the most popular, and arguably most used, language in the world, you're going to see a massive volume of all kinds of packages; it doesn't mean 90% of npm is useless.
FWIW I find most PyPI packages worthless and fairly low quality, but no one seems to want to bring that up all the time.
Compare this to the Java ecosystem, where a typical project will pull in an order of magnitude fewer packages, from vendors you can mostly trust.
So in many senses AI is democratising open-source.
Many projects require a great deal of bureaucracy, hoop-jumping, and sheer dogged persistence to get changes merged. It shouldn't be surprising if some people find it easier to just vibe-customize their own private forks as they see fit, skipping that whole mess and allowing modifications that would never have been approved in mainline anyway.
AI coding is kind of similar. You tell it what you want and it just sort of pukes it out. You run it then forget about it for the most part.
I think AI coding is kind of going to hit a ceiling, maybe idk, but it'll become an essential part of "getting stuff done quickly".
I'm also not particularly fond of the other extreme of toxic positivity where any problem is just a challenge and everybody is excited to take them on.
One seems to understate the level of agency people have, and the other seems to overstate it.
The world is changing. Adapting does seem to be the rational approach.
I don't think Open Source is being killed but it does need to manage the current situation in a way that provides the best outcome.
I have been thinking that there may be merit in AI branches or forks. Open source projects direct any AI produced PRs to the AI branch. Maintainers of that branch curate the changes to send upstream. The maintainers of the original branch need not take an active involvement in the AI branch. If the AI branch is inadequately maintained or curated, then upstream simply receives no patches. In a sense it creates an opportunity for people who want to contribute. It produces a new area where people can compartmentalise their involvement without disrupting the wider project. This would lower the barrier of entry to productively supporting an open source project.
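As a concrete shape for that idea, a bot could retarget AI-labeled PRs at a curated staging branch whose maintainers forward reviewed patches upstream. A rough sketch using the GitHub API via Octokit, assuming a hypothetical `ai-staging` branch and `ai-generated` label:

```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Re-point AI-labeled PRs targeting main at the AI staging branch.
// The AI branch's maintainers curate what gets forwarded upstream;
// the original maintainers never have to look at raw AI submissions.
async function routeAiPullRequest(owner: string, repo: string, pullNumber: number) {
  const { data: pr } = await octokit.rest.pulls.get({
    owner, repo, pull_number: pullNumber,
  });
  const isAi = pr.labels.some(label => label.name === "ai-generated");
  if (isAi && pr.base.ref === "main") {
    await octokit.rest.pulls.update({
      owner, repo, pull_number: pullNumber, base: "ai-staging",
    });
  }
}
```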
I doubt the benefit of resume-padding will persist long in an AI world. By the very nature of the act, resume-padders are showing that what they claim to do is unremarkable.
I do think that SDKs and utility-focused libraries are going to mostly go away, though, and that's less flashy but does have interesting implications imo.
Perhaps it would be more accurate to say libraries will change in form. There is a very broad spectrum of what libraries do. Some of the very small ones may just become purpose-written inline code. Some of the large, hated-but-necessary libraries might get reduced into manageable chunks if the people who use them can get an AI to strip them down to the necessary components. That kind of stripping-down is so much work for an individual that it's easier to just bite the bullet and use the bloated mass of the library; handing the drudge work to an AI might lower the threshold at which some of those things get improved.
I also wonder about the idea of skills as libraries. I have already found that I am starting to put code into skills for the AI to use as templates for output. Developing code in this way would let you add the specific abilities of a library to any AI that supports skills.
A simple example is this: https://htmlpreview.github.io/?https://github.com/Lerc/JustS... which was generated by a skill that contains the source for the image decoders within the skill itself.
I don't think this is a possibility anymore, for multiple reasons. As others have pointed out, there are already "open models" available to use and that genie can't be put back in the bottle; restricting the commercial models wouldn't fix the issue.
And secondly, I think the state of commercial LLMs shows that the big tech companies behind them have already become far more politically powerful than the traditional content industries. (I don't think this is a good thing, but I think it is a thing.)
If you had explained the LLM situation to 15-years-ago me, how they are trained (almost entirely on copyrighted material) and what kind of output they can generate, and told me Disney hadn't managed (or really even tried) to sue the various players out of existence, I wouldn't have believed it. Yet here we are.
The AI-forgery attacks are highly polished, complete with forged user photos and fake social networking pages.
The legitimate code contributions are from people who have near-zero followers and no obvious track record.
This is topsy-turvy yet good news for open source because it focuses the work on the actual code, and many more people can learn how to contribute.
So long as the code is good enough to get a PR in the right ballpark, I'm fine cleaning the work up a bit by hand and then merging. IMHO this is a great leap forward for delivering better projects.
Another article written by someone who doesn't actually use AI. Claude will literally search "XYZ library 2025" to find libraries. That is essentially equivalent to how it's always worked. It's not just what is in the dataset.
I'm fairly sure you made a typo, but considering the context, it's a pretty funny typo and would kind of demonstrate the point parent was trying to make :)
I agree with you overall though; the CLI agents of today don't really suffer from that issue. How good the model is at using tools and understanding what it's doing matters much more than which specific APIs it remembers from the training data.
I was fine with my work being a gift to all of humanity equally, but I did not consent to it being a gift to a for-profit company that I'm not personally benefiting from, one that won't even follow the spirit of the open source license.
If AI doesn't have to follow the GPL, then I'm not going to create GPL code.
The small libraries will be eliminated as a viable option for production use, but that's a good thing. They are a supply-chain risk, one that is significantly amplified in the LLM age.
It may happen, and it will be great if it does: open training datasets could replace those libraries, recalibrating LLM output and shifting it from legacy to more modern approaches, as well as teaching models how to achieve certain things.