Posted by bearsyankees 4 hours ago

Open Source Isn't Dead. Cal.com Just Learned the Wrong Lesson (www.strix.ai)
296 points | 161 comments
tananaev 4 hours ago|
I have an open source project and started receiving a lot of security vulnerability reports in the last few months. A lot of them are extreme corner cases, but there were some legit ones. They're all fixed now. Closed source software won't receive any reports, but it will be exploited with AI. So I definitely agree with the message of this article.
lelanthran 2 hours ago||
> Closed source software won't receive any reports, but it will be exploited with AI.

What makes you so sure that closed-source companies won't run those same AI scanners on their own code?

It's closed to the public, it's not closed to them!

440bx 2 hours ago|||
As someone who has worked on closed source software for a couple of decades: most companies won't even know about that, and of those who do, only a fraction give enough of a shit about it to do anything until they are caught with their pants down.
sdoering 1 hour ago|||
Seconded.

Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes, we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap."

There are unexploited security holes in enterprise software you could drive a boring machine through. There is a well-paid "security" (aka employee surveillance) company using Python 2.7 (no, not patched) on each and every machine their software runs on, at some of the biggest companies in the world. They just don't care about updating it, because why should they? There is no incentive. None.

valeriozen 28 minutes ago||
Yeah, it's fundamentally an issue of asymmetric economics.

Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that.

But for bad actors, the cost of pointing an LLM at an exposed endpoint or a reverse-engineered binary has dropped to near zero. The attackers' tooling just got exponentially cheaper and faster, while the enterprise defenders' budget remained at zero.

njyx 11 minutes ago||
In theory though, there is now a new way for the community to support open source: by running vulnerability scans in white-hat mode, then reporting and patching. That way they burn tokens for a project they love, even if they couldn't actually contribute code before.

There should be a way to donate your unused tokens each cycle to open source, like rounding up at the checkout!

sevenzero 54 minutes ago|||
Yup, closed source software is a huge pile of shit with good marketing teams. Always was.
baileypumfleet 2 hours ago||||
As I mentioned above, we actually do run these AI scanners on our code, but the problem is it's simply not enough. These AI scanners, including STRIX, don't find everything. Each scanning tool actually finds different results from the others, and so it's impossible to determine a benchmark of what's secure and what's not.
cyanydeez 5 minutes ago||||
Because they're a company. Even if the bar to entry is low enough for a normal-sized American to step over, that doesn't mean they will do it, or do it in a systematic way. We know very well that nothing about AI is naturally systematic, so why would you assume it'll happen in a systematic way?
ihaveajob 2 hours ago||||
More eyes, more chances that someone will actually use the tools. Also, the tools and how you use them are not all the same.
phendrenad2 2 hours ago||
With enough copies of GPT printing out the same bulleted list, all bugs are

1. shallow

2. hollow

3. flat

...

LunicLynx 2 hours ago|||
Came here to say the same. Same tools + private. In security, two different defense mechanisms are always better than one.
bluebarbet 1 hour ago||
Same tools A, B and C, but minus tools D, E and F, and with a smaller chance that any tools at all will even be used.

Not claiming that it's a slam dunk for open source, but the inverse does not seem correct either.

lelanthran 31 minutes ago|||
> Same tools A, B and C, but minus tools D, E and F,

Why "minus D, E and F"? After all, once you have the harness set up, there's no additional work to add in new models, right?

bluebarbet 9 minutes ago||
The point being that there are always going to be more eyes, and more knowledge of available tools (i.e. including "D, E and F"), and more experience using them, with open source than with a single in-house dev team.
LunicLynx 34 minutes ago|||
Fair enough
Aurornis 4 hours ago|||
> Closed source software won't receive any reports

Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.

Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.

switchbak 3 hours ago|||
Those bug bounty programs now have to compete against the market for 0-days. I suppose they always did, but it seems the economics have changed in favour of the bad actors - at least from my uninformed standpoint.

That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).

0x457 43 minutes ago||||
> Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates

So just like pre-AI, or worse?

shakna 13 minutes ago||
Worse. [0]

[0] https://hackerone.com/reports/3595764

baileypumfleet 2 hours ago||||
That's absolutely our plan. We have bug bounty programs, we have internal AI scanners, we have manual penetration testing, and a number of other things that enable us to push really hard to find this stuff internally rather than relying on either the good people in the open source community or hackers to find our vulnerabilities.
tananaev 4 hours ago||||
Of course everyone should do their own due diligence, but my point is mostly that open source will have many more eyes and more effort put into it, both by owners and by the community.
LunicLynx 2 hours ago||
But there are also tools that might not be nice and report security vulnerabilities, but exploit them instead.

There is no guarantee that open means that they will be discovered.

bearsyankees 4 hours ago||||
+1, at this point all companies need to be continuously testing their whole stack. The dumb scanners are now a thing of the past; the second your site goes live, it will get slammed by the latest AI hackers.
bmurphy1976 1 hour ago|||
You don't even need a bug bounty program. In my experience there's an army of individuals running low-quality security tools spamming every endpoint they can think of (webmaster@, support@, contact@, gdpr@, etc.) with silly non-vulnerabilities asking for $100. They suck now, but they will get more sophisticated over time.
rd 3 hours ago|||
I don't follow. It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders. In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits, all whilst at least blocking the easiest method of finding zero-days - that is, being open source.

This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here, any open-source business stands to lose way more by continuing to be open-source vs. relying on the benevolence of people scanning their code for them.

bigbadfeline 2 hours ago|||
> It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders.

Actually the opposite is obvious - the comment you replied to talked about an abundance of good Samaritan reports - it's strange to speculate on some nebulous "gain" when responding to facts about more than enough reports concerning open source code.

> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits

That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.

> any open-source business stands to lose way more

That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?

You seem to forget that the number of vulnerabilities in a certain app is finite, an open source app will reach a secure status much faster than a closed source one, in addition to also gaining from shorter time to market.

In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements that are next to impossible under a closed source regime.

tetha 2 hours ago||
The main drawback is that you will need to be able to patch quickly in the next 3-5 years. We are already seeing this in a few solutions getting attention from various AI-driven security efforts, and our previous stance of letting fixes "ripen" on the shelf for a while - a minor version or two - is most likely turning problematic. Especially if attackers start exploiting faster and botnets start picking up vulnerabilities faster.

But at that point, "fighting fire with fire" is still a good approach. Assuming tokens are available, we could just dump the entire code base, changesets and all, our dependent configuration on the code base, company-internal domain knowledge, and previous upgrade failures into a folder and tell the AI to figure out upgrade risks. Bonus points if you have decent integration tests or test setups to run all of that through.

It won't be perfect, but combine that with a good tiered rollout and an increased rollout velocity is entirely possible.

It's kinda funny to me -- a lot of the agentic hype seems to hugely reward good practices: cooperation, documentation, unit testing, integration testing, local test setups.

NaritaAtrox 2 hours ago||||
Some users might be technically inclined and have the capacity to check the codebase. If a company wants to use your platform, it can run an audit with its own staff. These are people genuinely concerned about the code, not "good samaritans".
sureMan6 2 hours ago||||
A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed than to illegally exploit them, since most people aren't criminals.
eddythompson80 2 hours ago||
Exactly. Who even hacks stuff? Most people will report the issue to earn xp and level up than actually exploit it.
dgb23 3 hours ago|||
Isn’t that security by obscurity?
hardsnow 3 hours ago|||
I’ve recently set up nightly automated pentests for my open-source project. I’m considering starting to publish these reports as proof of security posture.

If the cost of a security audit becomes marginal, it would seem reasonable to expect projects to publish the results of such audits frequently.

There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first, though.

Johnny_Bonk 2 hours ago||
What do you use for the pentests? any oss libraries?
hardsnow 2 hours ago||
This is a sandbox escape pentest so the only tooling needed is Claude Code and a simple prompt that asks it to follow a workflow: https://github.com/airutorg/airut/blob/main/workflows/sandbo...
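
For anyone wanting to reproduce that kind of nightly setup, it can be as simple as a cron entry driving Claude Code's non-interactive print mode (`claude -p` prints the result and exits). The paths, prompt file, and schedule below are hypothetical, not taken from the linked project:

```shell
# Hypothetical crontab fragment: at 03:00 run the pentest prompt headlessly
# and archive a dated report. Note that '%' must be escaped in crontab
# entries, since an unescaped '%' is treated as a newline by cron.
0 3 * * * cd /srv/myproject && claude -p "$(cat workflows/pentest-prompt.md)" > "reports/pentest-$(date +\%F).md" 2>&1
```

Publishing the resulting `reports/` directory in the repo would give the "proof of security posture" described above.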
giancarlostoro 1 hour ago|||
> Closed source software won't receive any reports, but it will be exploited with AI.

This is what worries me about companies sleeping on using AI to at a bare minimum run code audits and evaluate their security routinely. I suspect as models get better we're going to see companies being hacked at a level never seen before.

Right now we've seen a few different maintainers of open source packages get hacked; who knows how many companies have someone infiltrating their internal systems with the help of AI, because nobody wants to do the due diligence of having a company run security audits on their systems.

baileypumfleet 2 hours ago|||
We actually run AI scanners on our code internally, so we get the benefit of security through obscurity while also layering on AI vulnerability scanning, manual human penetration testing, and a huge array of other defence mechanisms.
advael 1 hour ago||
"Security through obscurity" is a term popularized entirely by the long-standing consensus among security researchers and any expert not being paid to say otherwise that this is a bad idea that doesn't work
devstatic 3 hours ago|||
I agree with this too,

but with Cal.com I don't think this is about security lol

open source will always be an advantage, you just need to decide whether it aligns with your business needs

baq 4 hours ago|||
given what the clankers can do unassisted and what more they can do when you give them ghidra, no software is 'closed source' anymore
embedding-shape 4 hours ago|||
Guess that kind of depends on your definition of "source", I personally wouldn't really agree with you here.
raddan 12 minutes ago|||
I mean-- to an LLM is there really any difference between the actual source and disassembled source? Informative names and comments probably help them too, but it's not clear that they're necessary.
baq 3 hours ago|||
absolutely agree with you if we're talking about clean room reverse engineering; but in the context of finding vulnerabilities it's a completely different story
criddell 3 hours ago|||
Which models have you had good luck with when working with ghidra?

I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly and don't seem to be able to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg though.

charcircuit 3 hours ago|||
Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM is not able to retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside input gets handled by the application.
kirubakaran 3 hours ago|||
Yes exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.
ofjcihen 3 hours ago||
This might be the most painfully obvious advertisement I’ve ever seen on a forum.
kirubakaran 3 hours ago||
I didn't mean it as such, but I can see why it would seem so. I've edited the link out now. Thanks for the feedback.
cm2187 3 hours ago||
> Closed source software won't receive any reports, but it will be exploited with AI

How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.

But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to source code.

advael 1 hour ago|||
The opposite is true. Open source barely matters to attackers, especially ones that can be automated. It mostly enables more people (or agents, or people with agents) to notice and fix your vulnerabilities. Secrecy and other asymmetries in the information landscape disproportionately benefit attackers, and the oft-repeated corporate claim that proprietary software is more secure is summarily discounted by most cybersecurity professionals, whether in industry or academic research. This is also seldom the motivation for making products proprietary, but it's more PR-friendly to claim that closing your source code is for security reasons than it is to say that it's for competitive advantage or control over your customers
geoffschmidt 3 hours ago|||
Claude is already shockingly good at reverse engineering. Try it – it's really a step change. It has infinite patience which was always the limited resource in decompiling/deobfuscating most software.
evanelias 1 hour ago||
It's SaaS though. You don't have access to the binary to decompile. There's only so much you can reverse-engineer through public URLs and APIs, especially if the SaaS uses any form of automatic detection of bot traffic.
zenmac 33 minutes ago||
Thank you. This is what the parent post was trying to say. Don't know why it is downvoted. AI or not, if the API endpoints are well secured - for example, using UUIDv7 identifiers - then there is little that the AI can gain from those endpoints alone.
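
To illustrate the UUIDv7 point with a rough sketch: such IDs are unguessable, so API routes like /bookings/&lt;id&gt; resist enumeration while still sorting by creation time. Recent Python versions add uuid.uuid7(); this builds one by hand per RFC 9562, purely as an example:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Assemble a UUIDv7: 48-bit Unix-ms timestamp, then 74 random bits,
    with the version and variant fields set per RFC 9562."""
    ts_ms = time.time_ns() // 1_000_000            # 48-bit ms timestamp
    raw = bytearray(ts_ms.to_bytes(6, "big") + os.urandom(10))
    raw[6] = (raw[6] & 0x0F) | 0x70                # version nibble = 7
    raw[8] = (raw[8] & 0x3F) | 0x80                # RFC 4122 variant bits
    return uuid.UUID(bytes=bytes(raw))

first = uuid7()
time.sleep(0.002)                                  # force a later ms timestamp
second = uuid7()
print(first.version)    # 7
print(first < second)   # True: the timestamp prefix keeps IDs time-ordered
```

Compared to sequential integers, an attacker learns nothing about neighbouring resources from one leaked ID, which is the deterrent the comment above is describing.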
CodesInChaos 4 hours ago||
> The reasoning provided by their CEO, Bailey Pumfleet, is that AI has automated vulnerability discovery at scale,

That sounds like an excuse. The real reason is probably that it's hard to make a viable business out of developing open source.

riazrizvi 8 minutes ago||
Yes. Before AI the source was a demonstration of your substance. Users would be encouraged to reach out to maintainers to pay for upgrades or custom tweaks or training. Or indirectly pay for advertising while reading docs. After AI those revenue streams have collapsed. Now you have to withdraw enough of the work to make it hard for an individual to recreate with an LLM. The open source needs to be restricted to a rich interaction layer. Cloudflare just announced they are using that model with their services which were already closed source but now they are exposing them through new APIs. So they can capitalize on existing services that were not ripe enough for SaaS before AI, that had to be handled by their in-house professionals services folks. With this move they are using AI to expand/automate their white glove professional services business to smaller customers.
mdp 4 hours ago|||
Exactly. I respect their decision to go closed source if that's what they need to do to make it a viable business, but just be honest about it. Don't make up some excuse around security and open source.
bearsyankees 4 hours ago|||
I don't know if I fully agree with this -- how many people were actually self-hosting cal infra? I def could be wrong though
sixhobbits 31 minutes ago|||
it's not necessarily about people self hosting it, it's about people preferring to pay for hosted stuff that is open source (e.g. I pay for Plausible).

Now it's a lot easier to rewrite open source stuff to get around licensing requirements and have an LLM watch the repo and copy all improvements and fixes, so the bar for a competitor to come along and get 10 years of work for free is a lot lower.

pembrook 29 minutes ago|||
The issue isn’t would-be customers going to the trouble of self hosting to save a measly $30/month.

The issue is competitors popping up to clone your offering with your own codebase.

bruckie 2 hours ago|||
AI makes a great scapegoat. Need to lay off people? "AI." Need to switch to closed source? "AI."
baileypumfleet 1 hour ago|||
We've run an extremely profitable business for five years, raised a seed and a Series A, and grown at 300% a year sustainably while being open source.

Going closed source actually hurts our business more than it benefits it. But it ultimately protects customer data, and that's what we care about the most.

avivo 1 hour ago|||
I think if it ultimately protects customer data in a significant way, I would be for it.

Are you able to share any more detail on how you determined this is the best route? It would be a significant implication for many other pieces of open source software also if so.

(And I say this as someone who just recommended cal.com to someone a few days ago, specifically citing the fact that it was open source; that led to increased trust in it.)

I did find the video valuable, for reference for others: https://www.youtube.com/watch?v=JYEPLpgCRck

I think if you are committed to switching back to open source as soon as the threat landscape changes, and you have some metric for what that looks like, that would be valuable to share now.

I would like to see the analysis that you're referencing around open source being 5-10x less secure.

tgrowazay 20 minutes ago|||
By this logic, Linux should switch to closed-source.

All your servers are Linux, so imagine how insecure you are - must switch to windows ASAP.

p_stuart82 3 hours ago|||
Separating the codebase and leaving 'cal.diy' for hobbyists is pretty much the classic open-core path. The community phase is over and they need to protect their enterprise revenue.

Blaming AI scanners is just really convenient PR cover for a normal license change.

mikeryan 3 hours ago|||
It’s also now ridiculously easy to simply cherry pick from open source without actually “using” it.

“I need to do foo in my app. Libraries bar and baz do these bits well. Pick the best from each and let’s implement them here”

I’d not be surprised if npmjs.com and its ilk turn into more a reference site than a package manager backend soon.

wilj 3 hours ago|||
I literally have a Claude Code skill called "/delib" that takes in any Node.js project/library and converts it to a dependency-less project using only the standard library.

It started as a what-if joke, but it's turned out to be amazing. So yeah, npmjs.com is just reference site for me now, and node_modules stays tiny.

And the output is honestly superior. I end up with smaller projects, clean code, and a huge suite of property-based tests from the refactor process. And it's fully automatic.
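
For context, a Claude Code skill is essentially a markdown instruction file; a hypothetical sketch of what a "/delib"-style skill could look like (file layout and wording assumed here, not the commenter's actual skill):

```markdown
<!-- .claude/skills/delib/SKILL.md (hypothetical) -->
---
name: delib
description: Replace a project's third-party npm dependencies with stdlib-only code.
---
For each entry under "dependencies" in package.json:
1. Identify which exported functions the project actually calls.
2. Reimplement only those functions using Node's standard library.
3. Write property-based tests comparing old and new behavior.
4. Remove the dependency, reinstall, and rerun the full test suite.
```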

pixel_popping 3 hours ago|||
It's that easy yes, and someday, we will literally be able to prompt "Redo the Linux kernel entirely in Zig" and it will practically make a 1:1 copy.
bobkb 23 minutes ago|||
Interesting - I'd be curious how it impacts the codebase size in terms of lines of code.
yibers 3 hours ago|||
Ironically, given the recent supply chain attacks, that may be also more secure.
serial_dev 4 hours ago|||
I'd think it's also much easier to spin up a (in some area) slightly better clone and eat into their revenue.
svnt 3 hours ago||
This is part of it for sure. It is also true that many open source business depended on it not being worth the trouble to figure out the hosting setup, ops etc, and the code. Typical open source businesses also make a practice of running a few features back on the public repo.

Now I can take an open source repo and just add the missing features, fix the bugs, deploy in a few hours. The value of integration and bug-fixing when the code is available is now a single capable dev for a few hours, instead of an internal team. The calculus is completely different.

kelnos 1 hour ago|||
Yes, it feels like they've been looking for an excuse to go closed-source, and this one is plausible enough to make it sound like they're only doing it because they "have to".
hxugufjfjf 27 minutes ago||
What an uncharitable take
phillipcarter 4 hours ago||
I mean, it's hard to make a viable business regardless of whether the tech is OSS or not, but it's often seen as more challenging this way.
pradn 3 hours ago||
Brilliant piece of content marketing:

1) Pulls you in with a catchy title, that at first glance seems like a dunk on Cal.com (whatever that is).

2) Takes the "we understand your pain" approach to empathize w/ Cal.com, so you feel like you're on the good vibes side.

3) Provides a genuine response to the actual problem Cal.com is dealing with. Something you can't dismiss out of hand.

4) But at the end of the day, the response aligns perfectly with the product they're promoting (the homepage is a click away!)

This mix of genuine ideas and marketing is quite potent. Not saying this is all bad or anything, just found it a bit funny. The mixed-up-ness is the point!

baileypumfleet 1 hour ago||
That's exactly how this read to me too. Ultimately, the whole article is written by a company that does AI vulnerability scanning, and it's to try and get you to sign up for their service.

As mentioned in their article, Strix actually scans the Cal.com codebase and reports vulnerabilities to us. But the reality is, they miss so many vulnerabilities that other platforms do find. There's no one platform that seems to be able to reliably find all vulnerabilities, and so simply adopting AI scanners just isn't enough.

kreco 2 hours ago|||
I'm sad to see this article being so upvoted while being kind of empty.

The real content could fit in a comment.

shevy-java 3 hours ago||
Is it good marketing though? I mean personally I do not use AI, and I don't think this opinion of mine will change. I can't look into the future, but right now I don't use nor do I depend on AI. I guess it may work for some people, but even then I am unsure whether that is really good marketing. Riding on a hype train (which AI right now still is) is indeed easier, so that has to be considered.
BloondAndDoom 2 hours ago||
They are in HN front page, therefore it’s good marketing.
victorbjorklund 26 minutes ago||
I don’t believe for a second that the real reason is security by obscurity. They probably believe they can make more money not being open source and this sounds like a better excuse than ”we wanna make more money”.
pembrook 9 minutes ago|
Probably, and I don’t care and kinda wish they boldly said so too. It’s their product to do with what they want, they built it.

One of the ugliest parts of open source is people believing they’re entitled to you working for free forever. And instead of being thankful you gave years of your labor for free, people get angry at you for not continuing to do so forever. And try to shame you as if you’re somehow greedy if that changes.

Do you work exclusively pro-bono on open source projects? Or do you work a job where you only go in if you get paid?

keeda 1 hour ago||
>Security through obscurity is a losing bet against automation

Security through obscurity is only problematic if that is the only, or a primary, layer of defense. As an incremental layer of deterrence or delay, it is an absolutely valid tactic, with its primary function being imposing higher costs on the attacker.

As such if, as people are postulating post-Mythos, security comes down to which side spends more tokens, it is an even more valid strategy to impose asymmetric costs on the attacker.

"With enough AI-balls (heheh) all bugs are shallow."

From a security perspective, the basic calculus of open versus closed comes down to which you expect to be the case for your project: either the attention donated by the community outweighs the attention (lowered by openness) invested by attackers, or the attention from your internal processes outweighs the attention costs (increased by obscurity) imposed on attackers. The only change is that the attention from AI is many times more effective than from humans; otherwise the calculus is the same.

JoshTriplett 3 hours ago||
I wonder whether cal actually has concerns about security (in which case, they're wrong, this argument was false when people made it decades ago), or whether they just took a convenient excuse to do something they wanted to do anyway because Open Source SaaS businesses are hard.
janalsncm 2 hours ago||
Reading between the lines, it seems like they were working with cal.com and used red team bots to find vulnerabilities in cal.com’s code. And they probably found bugs a lot faster than cal.com could fix them. So the CEO balked at the estimated cost of fixing and took his ball home.

This article is effectively an announcement that cal.com is riddled with vulnerabilities, which should be easy to find in an archive of their code.

luke5441 1 hour ago|
Alternatively, those scanning tools have the same issue all other security scanners have: too many false positives. And when they are tuned to produce only a few false positives, they miss the true positives.

Then the real work is in investigating each false positive. Can still be useful compared to manual review, but requires real resources.

Meanwhile the flood of false positives causes reputation loss if not addressed. Reputation loss that closed source software does not get. Hence perhaps going closed source.

erelong 2 hours ago||
I'll admit that I agree with a lot of the post, but I can't fully wrap my head around the cybersecurity situation today. Is it basically:

-if code is open source or closed source, AI bots can still look for exploits

-so we need to use AI to develop a checklist program regardless to check for currently known and unknown exploits given our current state of AI tools

-we have to just keep running AI tools looking for more security issues as AI models become more powerful, which empowers AI bots attacking but also then AI bots to defensively find exploits and mitigate them

-so it's an ongoing effort to work on

I understand the logic of closing the source to prevent AI bot scans of the code, but fundamentally people won't trust your closed source code because it could contain harmful code, thus pushing it toward being open source

Edit: Another thing that comes to mind: people here often dunk on "vibe coding", but can't we just develop "standards / tools" to "harden" vibe-coded software and to help guide decisions related to the architecture of the program, and so on?

linuxhansl 3 hours ago||
So Cal.com favors security through obscurity.

Open source was always open to "many eyes", in theory exposing itself to zero-day vulnerabilities. But the "many eyes" include both the good and the bad actors.

As far as I am concerned... Way to go Cal.com, and a good reminder to never use your services.

dom96 3 hours ago|
Isn’t the real danger now not the ability to find security vulnerabilities, but rather, the ability of anyone to ask an LLM agent to rewrite your open source project in another language and thus work around whatever license your project has?
bluGill 2 hours ago||
You can do the same for closed source projects.

There are real limitations of course.

short_sells_poo 3 hours ago|||
This is happening quite a lot actually. People just feed an existing project into their agent harness, have it regenerate more or less the same thing with a few tweaks, and then publish it.

I'm not sure how this works in the legal sense. A human could ostensibly study an existing project and then rewrite it from scratch. The original work's license shouldn't apply as long as code wasn't copy & pasted, right?

What happens when an automated tool does the same? It's basically just a complicated copy & paste job.

kelnos 1 hour ago||
> A human could ostensibly study an existing project and then rewrite it from scratch.

And likely there would be enough similarities that the rewrite would be considered a derived work under copyright law.

> The original work's license shouldn't apply as long as code wasn't copy & pasted, right?

You don't need to do a literal copy & paste for it to be copyright infringement.

> What happens when an automated tool does the same? It's basically just a complicated copy & paste job.

Sounds like copyright infringement to me.

micromacrofoot 2 hours ago||
A lot of open source projects already have licenses that allow forking and selling the fork, it hasn't been a problem most of the time... there's a lot more to operating open source as a business beyond just shipping the code
kelnos 1 hour ago||
> A lot of open source projects already have licenses that allow forking and selling the fork

If we go by the OSI's definition, a project that doesn't allow this is not "open source". So all open source projects -- not just "a lot" -- allow this.
