Posted by bearsyankees 4 hours ago
What makes you so sure that closed-source companies won't run those same AI scanners on their own code?
It's closed to the public, it's not closed to them!
Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes, we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap."
There are unexploited security holes in enterprise software you can drive a boring machine through. There is a well paid "security" (aka employee surveillance) company using python2.7 (no, not patched) on each and every machine their software runs on. At some of the biggest companies in this world. They just don't care for updating this, because, why should they. There is no incentive. None.
Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that.
But for bad actors, the cost of pointing an LLM at an exposed endpoint or a reverse-engineered binary has dropped to near zero. The attacker's tooling just got exponentially cheaper and faster, while the enterprise defender's budget remained at zero.
There should be a way to donate your unused tokens every cycle to open source, like rounding up at the checkout!
1. shallow
2. hollow
3. flat
...
Not claiming that it's a slam dunk for open source, but the inverse does not seem correct either.
Why "minus D, E and F"? After all, once you have the harness set up, there's no additional work to add in new models, right?
Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.
Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.
That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).
So just like pre-AI, or worse?
There is no guarantee that being open means the vulnerabilities will be discovered.
This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here: any open-source business stands to lose way more by continuing to be open source than it gains from relying on the benevolence of people scanning its code for them.
Actually the opposite is obvious: the comment you replied to talked about an abundance of good-Samaritan reports. It's strange to speculate about some nebulous "gain" when responding to facts about more than enough reports concerning open source code.
> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits
That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.
> any open-source business stands to lose way more
That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?
You seem to forget that the number of vulnerabilities in a given app is finite; an open source app will reach a secure status much faster than a closed source one, in addition to gaining from a shorter time to market.
In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible under a closed source regime.
But at that point, "fighting fire with fire" is still a good option. Assuming tokens are available, we could just dump the entire code base (changesets and all), the configuration that depends on it, company-internal domain knowledge, and previous upgrade failures into a folder and tell the AI to figure out upgrade risks. Bonus points if you have decent integration tests or test setups to back all of that up.
It won't be perfect, but combine that with a good tiered rollout and increased rollout velocity is entirely possible.
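A rough sketch of what that "dump everything into a folder" step could look like (the function name, file layout, and size cap are my own assumptions, not any specific tool):

```python
import os

def build_upgrade_context(repo_dir, extra_dirs, out_path, max_bytes=200_000):
    """Concatenate source files, configs, and internal docs into one context
    file that can be handed to an LLM with an 'assess upgrade risks' prompt."""
    written = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for root_dir in [repo_dir, *extra_dirs]:
            for root, _dirs, files in os.walk(root_dir):
                for name in sorted(files):
                    path = os.path.join(root, name)
                    try:
                        with open(path, encoding="utf-8") as f:
                            text = f.read()
                    except (UnicodeDecodeError, OSError):
                        continue  # skip binaries and unreadable files
                    chunk = f"\n### {path}\n{text}"
                    if written + len(chunk) > max_bytes:
                        return out_path  # crude cap for the model's context window
                    out.write(chunk)
                    written += len(chunk)
    return out_path
```

From there, the single context file plus your integration-test results go into the prompt; the tiered rollout then catches whatever the model misses.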
It's kinda funny to me -- a lot of the agentic hype seems to hugely reward good practices: cooperation, documentation, unit testing, integration testing, local test setups.
If the cost of security audit becomes marginal, it would seem reasonable to expect projects to publish results of such audits frequently.
There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first though.
This is what worries me about companies sleeping on using AI to, at a bare minimum, run routine code audits and security evaluations. I suspect that as models get better, we're going to see companies being hacked at a level never seen before.
Right now we've seen a few different maintainers of open source packages get hacked; who knows how many companies have someone infiltrating their internal systems with the help of AI, because nobody wants to do the due diligence of having a company run security audits on their systems.
but with cal.com I don't think this is about security lol
open source will always be an advantage; you just need to decide whether it aligns with your business needs
I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models is very good at assembly, and they don't seem to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg, though.
How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.
But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to the source code.
That sounds like an excuse. The real reason is probably that it's hard to make a viable business out of developing open source.
Now it's a lot easier to rewrite open source stuff to get around licensing requirements and have an LLM watch the repo and copy all improvements and fixes, so the bar for a competitor to come along and get 10 years of work for free is a lot lower.
The issue is competitors popping up to clone your offering with your own codebase.
Going closed source actually hurts our business more than it benefits it. But it ultimately protects customer data, and that's what we care about the most.
Are you able to share any more detail on how you determined this is the best route? It would be a significant implication for many other pieces of open source software also if so.
(And I say this as someone who just recommended cal.com to someone a few days ago, specifically citing the fact that it was open source, which led to increased trust in it.)
I did find the video valuable, for reference for others: https://www.youtube.com/watch?v=JYEPLpgCRck
I think if you are committed to switching back to open source as soon as the threat landscape changes, and you have some metric for what that looks like, that would be valuable to share now.
I would like to see the analysis that you're referencing around open source being 5-10x less secure.
All your servers are Linux, so imagine how insecure you are - must switch to windows ASAP.
Blaming AI scanners is just really convenient PR cover for a normal license change.
“I need to do foo in my app. Libraries bar and baz do these bits well. Pick the best from each and let’s implement them here”
I’d not be surprised if npmjs.com and its ilk turn into more a reference site than a package manager backend soon.
It started as a what-if joke, but it's turned out to be amazing. So yeah, npmjs.com is just reference site for me now, and node_modules stays tiny.
And the output is honestly superior. I end up with smaller projects, clean code, and a huge suite of property-based tests from the refactor process. And it's fully automatic.
Now I can take an open source repo and just add the missing features, fix the bugs, and deploy in a few hours. The cost of integration and bug-fixing when the code is available is now a single capable dev for a few hours, instead of an internal team. The calculus is completely different.
1) Pulls you in with a catchy title, that at first glance seems like a dunk on Cal.com (whatever that is).
2) Takes the "we understand your pain" approach to empathize w/ Cal.com, so you feel like you're on the good vibes side.
3) Provides a genuine response to the actual problem Cal.com is dealing with. Something you can't dismiss out of hand.
4) But at the end of the day, the response aligns perfectly with the product they're promoting (a click away from the homepage!)
This mix of genuine ideas and marketing is quite potent. Not saying this is all bad or anything, just found it a bit funny. The mixed-up-ness is the point!
As their article mentions, Strix actually scans the Cal.com codebase and reports vulnerabilities to us. But the reality is, they miss many vulnerabilities that other platforms do find. No single platform seems able to reliably find all vulnerabilities, so simply adopting AI scanners just isn't enough.
The real content could fit in a comment.
One of the ugliest parts of open source is people believing they’re entitled to you working for free forever. And instead of being thankful you gave years of your labor for free, people get angry at you for not continuing to do so forever. And try to shame you as if you’re somehow greedy if that changes.
Do you work exclusively pro-bono on open source projects? Or do you work a job where you only go in if you get paid?
Security through obscurity is only problematic if that is the only, or a primary, layer of defense. As an incremental layer of deterrence or delay, it is an absolutely valid tactic, with its primary function being imposing higher costs on the attacker.
As such, if, as people are postulating post-Mythos, security comes down to which side spends more tokens, it is an even more valid strategy to impose asymmetric costs on the attacker.
"With enough AI-balls (heheh) all bugs are shallow."
From a security perspective, the basic calculus of open versus closed comes down to which you expect to be the case for your project: either the attention donated by the community outweighs the attention (whose cost is lowered by openness) invested by attackers, or the attention from your internal processes outweighs the attention costs (increased by obscurity) imposed on attackers. The only change is that attention from AI is many times more effective than attention from humans; otherwise the calculus is the same.
This article is effectively an announcement that cal.com is riddled with vulnerabilities, which should be easy to find in an archive of their code.
Then the real work is in investigating each false positive. Can still be useful compared to manual review, but requires real resources.
Meanwhile the flood of false positives causes reputation loss if not addressed. Reputation loss that closed source software does not get. Hence perhaps going closed source.
-if code is open source or closed source, AI bots can still look for exploits
-so we need to use AI to build a checklist program regardless, to check for currently known and unknown exploits given the current state of AI tools
-we have to just keep running AI tools looking for more security issues as AI models become more powerful, which empowers attacking AI bots but also defensive AI bots that find exploits and mitigate them
-so it's an ongoing effort to work on
I understand the logic of closing the source to prevent AI bot scans of the code, but fundamentally people won't trust your closed source code because it could contain harmful code, which pushes it back toward being open source.
Edit: Another thing that comes to mind: people here often dunk on "vibe coding", but can't we just develop standards/tools to "harden" vibe-coded software and also help guide decisions related to the architecture of the program, and so on?
Open source was always open to "many eyes", which in theory exposes it to zero-day vulnerabilities. But the "many eyes" belong to both good and bad actors.
As far as I am concerned... Way to go Cal.com, and a good reminder to never use your services.
There are real limitations of course.
I'm not sure how this works in the legal sense. A human could ostensibly study an existing project and then rewrite it from scratch. The original work's license shouldn't apply as long as code wasn't copy & pasted, right?
What happens when an automated tool does the same? It's basically just a complicated copy & paste job.
And likely there would be enough similarities that the rewrite would be considered a derived work under copyright law.
> The original work's license shouldn't apply as long as code wasn't copy & pasted, right?
You don't need to do a literal copy & paste for it to be copyright infringement.
> What happens when an automated tool does the same? It's basically just a complicated copy & paste job.
Sounds like copyright infringement to me.
If we go by the OSI's definition, a project that doesn't allow this is not "open source". So all open source projects -- not just "a lot" -- allow this.