Posted by msolujic 2 hours ago

How vibe coding is killing open source (hackaday.com)
67 points | 45 comments
dang 1 minute ago|
A recent thread about the paper this article reports on:

Vibe coding kills open source - https://news.ycombinator.com/item?id=46765120 - Jan 2026 (285 comments)

observationist 54 minutes ago||
Things change. The barrier to entry decreased, meaning more things will get created, more people will participate in communal efforts, and quality will depend on AI capabilities and figuring out how to curate well - better tools, less friction between idea and reality, and things get better for everyone.

Just because some things suck, for now, doesn't mean open source is being killed. It means software development is changing. It'll be harder to distinguish a good-faith, quality effort that meets all the expectations of quality control from the rest without sifting through more contributions.

Anonymous participation will decrease, communities will have to create a minimal hierarchy of curation, and the web of trust built up in these communities will have to become more pragmatic. The relationships and the tools already exist, it's just the shape of the culture that results in good FOSS that will have to update and adapt to the technology.

LaurensBER 51 minutes ago||
I concur; open source will become more reputation-based, and no doubt, in the future, LLMs can also act as a quality gate.

I work a lot with quants (who can program but are more focused on making money than on clean code) and Opus 4.5 and Kimi 2.5 are extremely good at giving them architecture guidance. They tend to overcomplicate some things but the result is usually miles better than what they produced without LLMs.

blibble 42 minutes ago||
as we're doing anecdotes: I work with quants too

their LLM "assisted" work seems to be roughly the same quality (i.e. bad), but now there's much more of it

not an improvement

ozim 13 minutes ago|||
I think you have it backwards. The barrier to entry just went up: why would I use a library when I can ask an LLM to make one for me?

It shifts things so that the „left-pad” kind of incident will not happen, because no one will need that kind of „library” when an LLM can generate it.

I see it as a positive thing: no single schmuck will be terrorizing the whole ecosystem when there are dozens of different LLMs that can write such code.

More people will shut themselves in, because they will be able to create something commercial, or their „thing” won't matter because an LLM will be able to replicate their effort in 5 minutes, so no one will be willing to pay for it.

phatfish 13 minutes ago|||
I think it could be a good thing. The politics sucking the air out of projects and the entitled attitude from people who want something for free NOW were getting tiresome.

Raising barriers against AI slop will also create a good reason to ignore demanding non-AI slop. It might give the real contributors to open source projects some breathing space.

reaperducer 8 minutes ago|||
> The barrier to entry decreased, meaning more things will get created

57 Channels and Nothin' On

https://en.wikipedia.org/wiki/57_Channels_(And_Nothin%27_On)

tobyjsullivan 42 minutes ago|||
Further to this, the quality problem is affecting the entire industry, not just FOSS. Anyone working on a large enough team has already seen some contributors pushing slop.

And while banning AI outright is certainly an option at a private company, it also feels like throwing out the baby with the bath water. So we’re all searching for a solution together, I think.

There was a time (decades ago) when projects didn’t need to use pull requests. As the pool of contributors grew, new tools were discovered and applied and made FOSS (and private dev) a better experience overall. This feels like a similar situation.

tayo42 39 minutes ago||
The more I think about it, the more I think lowering the barrier to entry does generally ruin things.

The internet is worse off.

The sports I participate in got cheaper to get into and are worse. The culture's worse.

What has gotten better because the barrier to entry is lower?

PKop 21 minutes ago||
Of course this is true, but it seems to be one of the most underrated facts of modern society. It is always proposed, without question, that we expand access to things, "democratize" them, open barriers, open borders. But this invariably lowers quality while crowding out those already enjoying them. Think of any quality club, park, vacation spot, restaurant, online forum, whatever: none of these is improved, for your own use of it, by adding more people, at least beyond some threshold. A lot of this is zero-sum, and quality is in tension with quantity.
arjie 1 hour ago||
It does seem like it's harming open source in a few ways:

* no longer any pressure to contribute upstream

* no longer any need to use a library at all

* verbose, resume-padding PRs created with LLMs

* false issues filed by unsophisticated users on the basis of LLM detection

Overall, we've lost the open-source library as the single meeting place where everyone gathers to create a better commons. That part is true. It will be interesting to see what follows from this.

I know that for very many small tools, I much prefer to just "write my own" (read: have Claude Code write me something). A friend showed me a worktree manager project on GitHub and instead of learning to use it, I just had Claude Code create one that was highly idiosyncratic to my needs. Iterative fuzzy search, single-keybinding nav, and so on. These kinds of things have low ongoing maintenance, and when I want a change I don't need to consult anyone or anything like that.

But we're not at the point where I'd like to run my own Linux-compatible kernel or where I'd even think of writing a Ghostty. So perhaps what's happened is that the baseline for an open-source project being worthwhile to others has increased.

For the moment, for a lot of small ones, I much prefer their feature list and README to their code. Amusing inversion.

umvi 1 hour ago||
> no longer any need to use a library at all

As someone who works on medical device software, I see this as a huge plus (maybe a con for FOSS specifically, but a net win overall).

I'm a big proponent of the Go-ism "A little copying is better than a little dependency". Maybe we need a new proverb: "A little generated code is better than a little dependency". Fewer dependencies = smaller cybersecurity burden, smaller regulatory burden, and more.

Now, obviously forgoing libsodium or something for generated code is a bad idea, but 90%+ of npm packages could probably go.
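
To make the proverb concrete: a left-pad-style dependency is only a few lines, and a generated inline version looks something like this (a rough sketch, not any real package's code):

    // Pad `str` on the left with `ch` until it is `len` characters long.
    function leftPad(str: string, len: number, ch: string = " "): string {
      return str.length >= len ? str : ch.repeat(len - str.length) + str;
    }

    leftPad("5", 3, "0"); // => "005"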

no_wizard 1 hour ago|||
> 90%+ of npm packages could probably go

I feel npm gets held to an unreasonable standard. The fact is tons of beginners across the world publish packages to it. Some projects publish lots of packages that only make sense for those projects but are public anyway, and then you have the bulwark packages that most orgs use.

It seems unfair to me that it's always held up as the "problematic registry". When you have a single registry for arguably the most popular and most used language in the world, you're gonna see a massive volume of all kinds of packages; it doesn't mean 90% of npm is useless.

FWIW I find most PyPI packages worthless and fairly low quality, but no one seems to want to bring that up all the time.

rpodraza 1 hour ago||
I think you are completely oblivious to the problems plaguing the npm ecosystem. When you start a typical frontend project using modern technology, you will introduce hundreds, if not thousands, of small packages. These packages get new security holes daily, are often maintained by single people, are subject to removal and supply-chain attacks, download random crap from GitHub, etc. Each of them should ideally be approved and monitored for changes, and uploaded to the company repo to avoid build problems when it gets taken down.

Compare this to the Java ecosystem, where a typical project will pull in an order of magnitude fewer packages, from vendors you can mostly trust.
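
For what it's worth, the mitigation on the npm side usually looks something like pinning exact versions and routing installs through an internal mirror (the registry URL below is hypothetical):

    # .npmrc: route all installs through a company-controlled mirror
    registry=https://npm.internal.example.com/

    // package.json: pin exact versions (no ^/~ ranges) and force a
    // vetted transitive dependency version via npm's "overrides" field
    {
      "dependencies": { "left-pad": "1.3.0" },
      "overrides": { "minimist": "1.2.8" }
    }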

danlitt 53 minutes ago||
If these packages get security holes daily, they probably cannot "just go" as the parent comment suggested (except in the case of a hostile takeover). If they have significant holes, then they must be significant code. Trivial code can just go, but doesn't have any significant quality issues either.

macleginn 1 hour ago|||
Since code-generating AIs were likely trained on them, they won't go too far, though.

OGEnthusiast 1 hour ago|||
It's also now a lot easier to fork an open source project and tweak the last 10% so it works exactly as you want.

gingerlime 56 minutes ago||
Exactly. Whilst I can see the problem with vibe-coded "contributions" that lower the signal/noise ratio on big OSS projects, it's also "liberating" in the sense that forking becomes much more viable now. Where it previously took time to dive into a project to tweak it to your needs, it's now trivial.

So in many senses AI is democratising open source.

cosmic_cheese 1 hour ago|||
It may be worth considering how much the impact of LLMs is exacerbated by friction in the contribution process.

Many projects require a great deal of bureaucracy, hoop-jumping, and sheer dogged persistence to get changes merged. It shouldn't be surprising if some are finding it easier to just vibe-customize their own private forks as they see fit, both skipping that whole mess and allowing for modifications that would never have been approved in mainline anyway.

chrneu 1 hour ago||
AI coding sort of reminds me of when Ninite originally came out for Windows. It was like a "build your own OS": check boxes and get what you need in a simple executable.

AI coding is kind of similar. You tell it what you want and it just sort of pukes it out. You run it then forget about it for the most part.

I think AI coding is kind of going to hit a ceiling, maybe idk, but it'll become an essential part of "getting stuff done quickly".

Lerc 1 hour ago||
I really don't like the narrative of 'X is killing Y' or 'Z is dead', with everything being treated as an existential threat.

I'm also not particularly fond of the other extreme of toxic positivity, where every problem is just a challenge and everybody is excited to take it on.

One seems to understate the level of agency people have, and the other seems to overstate it.

The world is changing. Adapting does seem to be the rational approach.

I don't think Open Source is being killed but it does need to manage the current situation in a way that provides the best outcome.

I have been thinking that there may be merit in AI branches or forks. Open source projects would direct any AI-produced PRs to the AI branch. Maintainers of that branch curate the changes to send upstream. The maintainers of the original branch need not take an active involvement in the AI branch. If the AI branch is inadequately maintained or curated, then upstream simply receives no patches. In a sense it creates an opportunity for people who want to contribute. It produces a new area where people can compartmentalise their involvement without disrupting the wider project. This would lower the barrier to entry for productively supporting an open source project.

I doubt the benefit of resume-padding will persist for long in an AI world. By the very nature of the act, resume-padders are showing that what they claim to do is unremarkable.

milowata 57 minutes ago|
I actually started writing a very similar essay, but the hyperbole got too out of hand – open source isn't dying anytime soon.

I do think that SDKs and utility-focused libraries are going to mostly go away, though, and that's less flashy but does have interesting implications imo.

https://meelo.substack.com/p/a-mild-take-on-coding-agents

Lerc 16 minutes ago||
I'm inclined to agree somewhat about libraries. I'm not entirely certain that it is a bad thing.

Perhaps it would be more accurate to say libraries will change in form. There is a very broad spectrum of what libraries do. Some of the very small ones may just become purpose-written inline code. Some of the large, hated-but-necessary libraries might get reduced into manageable chunks if the people who use them can utilise AI to strip them down to the necessary components. Projects like that are a lot of work for an individual, which makes it easier to just bite the bullet and use the bloated mass library. An opportunity to make an AI do that drudge work might lower the threshold at which some of those things get improved.

I also wonder about the idea of skills as libraries. I have already found that I am starting to put code into skills for the AI to use as templates for output. Developing code in this way would let you add the specific abilities of a library to any skill-supporting AI.

A simple example is this: https://htmlpreview.github.io/?https://github.com/Lerc/JustS... which was generated by a skill that contains the source for the image decoders within the skill itself.
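
Concretely, the shape of one of these is roughly as follows (assuming Claude Code's SKILL.md convention; the skill name and decoders.js file are made up for illustration):

    ---
    name: image-decoder-template
    description: Use when generated output needs to decode PNG or JPEG
      images; adapt the embedded reference decoders instead of adding
      a dependency.
    ---
    Adapt the decoder source included in this skill directory
    (decoders.js) rather than importing an image library, so the
    output stays self-contained.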

Flavius 1 hour ago||
Open Source isn't a tech stack or a specific way of typing syntax; it's an ideology. It's the belief that knowledge and tools should be free to share, study and modify. You cannot kill an idea. Whether I write a function by hand or 'vibe' it into existence with an LLM, the act of liberating that code for others to use remains the same.

AlexandrB 1 hour ago|
What's not the same is that the LLMs used to create the code are highly centralized and controlled. I suspect it's only a matter of time until the content industries start trying to restrict what code LLMs are allowed to produce, so that you can't use an LLM to bypass DRM.

charcircuit 1 hour ago|||
There are competent open source LLMs out today. They are not highly centralized.

pocksuppet 44 minutes ago||
There's one at the top of Hacker News right now, Qwen3-Coder-Next: https://news.ycombinator.com/item?id=46872706

georgemcbay 24 minutes ago|||
> I suspect it's only a matter of time until the content industries start trying to restrict what code LLMs are allowed to produce so that you can't use an LLM to bypass DRM.

I don't think this is a possibility anymore, for multiple reasons. As others have already pointed out, there are already "open models" available to use, and that genie can't be put back in the bottle; restricting the commercial models wouldn't fix the issue.

And secondly, I think the state of commercial LLMs shows that the big tech companies behind them have already become far more politically powerful than the traditional content industries. (I don't think this is a good thing, but I think it is a thing.)

If you had explained the LLM situation to 15-years-ago me, in terms of how they are trained (almost entirely on copyrighted material) and what kind of output they could generate, and told me Disney hadn't managed (or really even tried) to sue the various players out of existence, I wouldn't have believed it. Yet here we are.

jph 1 hour ago||
I maintain multiple open source projects. In the past two months I've seen an uptick in AI-forgery attacks, and also an uptick in legitimate code contributions.

The AI-forgery attacks are highly polished, complete with forged user photos and fake social networking pages.

The legitimate code contributions are from people who have near-zero followers and no obvious track record.

This is topsy-turvy yet good news for open source because it focuses the work on the actual code, and many more people can learn how to contribute.

So long as the code is good enough to get a PR into the right ballpark, I'm fine cleaning the work up a bit by hand and then merging. IMHO this is a great leap forward for delivering better projects.

charcircuit 1 hour ago||
>This also removes the typical more organic selection process of libraries and tooling, replacing it with whatever was most prevalent in the LLM’s training data

Another article written by someone who doesn't actually use AI. Claude will literally search "XYZ library 2025" to find libraries. That is essentially equivalent to how it's always worked. It's not just what is in the dataset.

embedding-shape 1 hour ago|
> "XYZ library 2025"

I'm fairly sure you made a typo, but considering the context, it's a pretty funny typo and would kind of demonstrate the point parent was trying to make :)

I agree with you overall though; the CLI agents of today don't really suffer from that issue. How good the model is at using tools and understanding what it's doing is much more important than which specific APIs it remembers from the training data.

dom96 1 hour ago||
How many others are now reluctant to open source their code because they don't want it to end up in the training data for an LLM? I certainly am.
dmarcos 1 hour ago||
It definitely feels less fun. Harder to get attribution, build a reputation, a community… Common driving forces for people to contribute to open source.
honestduane 1 hour ago|||
This is honestly why I have stopped contributing to open source.

I was fine with my work being a gift for all of humanity equally, but I did not consent to it being a gift to a for-profit company that I don't personally benefit from and that won't even follow the spirit of the open source license.

If AI doesn't have to follow the GPL, then I'm not going to create GPL code.

paodealho 1 hour ago|||
Me. I've never been a maintainer for any big open source project, so it won't make a dent in anything, but now my contributions are exactly zero.
pelasaco 1 hour ago||
Some startups are already avoiding the open source route, exactly because of that. You publish your code, then 2 weeks later there are a dozen "$PROJ rewritten in $LANG" clones: 30,000 LOC plus a super verbose README.md, done in one week, in fewer than 10 commits, from somebody who never wrote a single line of OSS.
Stevvo 1 hour ago||
The article doesn't seem to have anything new to add to the discussion. It's just a bunch of links to previous anti-AI articles the author has written about stories we have all read before, such as the collapse in new Stack Overflow questions.
ivan_gammel 1 hour ago|
I doubt it’s killing open source. The “too big to fail” software will be maintained no matter what, but the contribution model will change. It is not great, but we can live with it - majority of users of OSS never touch the code, so nothing is going to change for them. For a few enthusiasts the barrier will be higher, but we need some trust building incorporated in the process anyway.

The small libraries will be eliminated as a viable solution for production use, but that's a good thing. They are a supply-chain risk, one that is significantly amplified in the LLM age.

It may happen, and it will be great if it does, that open training datasets replace those libraries, recalibrating LLM output and shifting it from legacy to more modern approaches, as well as teaching it how to achieve certain things.
