Posted by bearsyankees 1 day ago

Open Source Isn't Dead. Cal.com Just Learned the Wrong Lesson (www.strix.ai)
296 points | 161 comments
cadamsdotcom 1 day ago|
> Security testing has to become an automated, integral part of the CI/CD pipeline. When a developer opens a pull request, an AI agent should immediately attempt to exploit it. When infrastructure changes, an AI should autonomously validate the new attack surface. You do not beat automated attackers by turning off the lights; you beat them by running better automation on the inside.

This feels like the core of the article, but it doesn’t prove the need for open source.
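For reference, here is a minimal sketch of what the quoted "attack every pull request" gate could look like. Everything here is illustrative: `run_attack_agent` is a hypothetical stand-in for an AI security agent, not Strix's (or any vendor's) actual API.

```python
# Minimal sketch of gating CI on an automated attack agent.
# run_attack_agent is a hypothetical stand-in for an AI security agent;
# it is NOT a real Strix (or any vendor) API.

def run_attack_agent(pr_url: str) -> list[dict]:
    # A real agent would probe the PR's preview deployment here;
    # this stub returns a canned finding for demonstration.
    return [{"issue": "IDOR on /api/bookings", "severity": "high"}]

def ci_security_gate(findings: list[dict]) -> str:
    """Fail the pipeline if the agent produced any high/critical finding."""
    if any(f["severity"] in ("high", "critical") for f in findings):
        return "block"
    return "pass"

print(ci_security_gate(run_attack_agent("https://example.test/pr/123")))  # prints "block"
```

Note that nothing in this loop depends on the scanned repo being open source, which is the point: the pipeline works either way.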

Lammy 1 day ago||
I support and encourage the death of “““open source””” so that we may retvrn to Free Software.
agentifysh 1 day ago||
That's a pretty overreaching claim about another company's internal decisions and about open source in general. There are plenty of incentives to stop open-sourcing these days.

One of which I am experiencing right now: somebody just copied my repo without crediting me and didn't even try to change the README. It's pretty discouraging.

The other is security. The premise that volunteers will report vulnerabilities only really holds if you are big enough for a small portion of users to dedicate themselves to it. For the most part, people take an open source tool, use it, and then forget about it; they just want stuff fixed.

Lastly, open source development kinda sucks so far. I've been working on a few different tools, and the amount of trolling and bad-faith actors I've had to deal with is exhausting. On top of that, there is a constant stream of people demanding stuff be fixed quickly.

cold_tom 1 day ago||
feels like the real shift is not open vs closed, but reaction time. AI attackers don't need perfect access anymore, just enough surface and time. so the question becomes: can you detect and respond faster than they can iterate? in that sense, open source might even help - more eyes reduce time to fix, not just time to find
Prunkton 1 day ago||
I'm hopeful the article is right about its prediction, although I'm under the impression that the attacker/defender dynamic is asymmetric, with the defender on the losing end. I hope someone can prove me wrong, though...

Assume that the same amount of money needed to attack a critical vulnerability is also required to find and fix it.

Let's say we have a project with 100 modules, and it costs us $100,000 to check these modules for vulnerabilities. What is stopping an attacker from spending the same amount of money to scan, say, 10 modules, but with 10x the tokens per module that the defender had when hardening the software?
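To make the arithmetic explicit (same illustrative numbers as above):

```python
# The budget asymmetry above, made explicit (illustrative numbers only).
defender_budget = 100_000
modules = 100
defender_per_module = defender_budget / modules   # $1,000 of scanning per module

attacker_budget = 100_000
targeted = 10
attacker_per_module = attacker_budget / targeted  # $10,000 per targeted module

depth_advantage = attacker_per_module / defender_per_module
print(depth_advantage)  # 10.0: the attacker digs 10x deeper wherever it chooses
```

The defender has to spread its budget across every module; the attacker only has to be right once, wherever it concentrates.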

pixel_popping 1 day ago||
I heavily support open source and contribute a lot, but I can't agree that security through obfuscation plays no major role in slowing down attacks. Cloudflare has based its whole security posture on being closed source (for example, its anti-bot mechanism) and hard to reverse engineer, and it remains a leader today with few serious security breaches.

Some things just can't be made truly secure, either: DDoS protection is mostly a guessing/preventive game, and exposing your firewall configs/scripts will make you more vulnerable, not less.

If your codebase isn't exposed, attackers are constrained by the network and other external restrictions, which greatly reduces the number of possible trials. Even with a swarm of residential proxies, that's not at all the same as inspecting a codebase in depth with thousands of agents and every model.

Divs2890 1 day ago||
Closing your source doesn't close your attack surface; it just closes off the community that would have helped you defend it. Security through obscurity is a tradeoff, not a strategy... at least that's how I feel.
Talderigi 1 day ago||
feels like people are arguing the wrong axis tbh

- it’s not open vs closed anymore, it’s bug finding going from a few devs poking around to basically infinite parallel scanners

- so now you don’t get a couple of thoughtful reports, you get a flood of edge cases and half-real junk. fixing capacity didn’t change, though

- closing the repo doesn’t really save you, it just switches from white-box to black-box… and that’s getting pretty damn good anyway

real problem is: vuln discovery scaled, patching didn’t. now everything is a backlog game
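the backlog point is easy to put toy numbers on (the rates are made up, just to show the shape):

```python
# Toy model of "discovery scaled, patching didn't": when reports arrive
# faster than fixes land, the open backlog grows linearly, without bound.
def backlog_after(days: int, reports_per_day: int, patches_per_day: int) -> int:
    backlog = 0
    for _ in range(days):
        backlog = max(0, backlog + reports_per_day - patches_per_day)
    return backlog

print(backlog_after(30, 12, 3))  # 270 open reports after a month
```

no amount of triage tooling changes the slope; only more patching throughput does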

6thbit 1 day ago||
Great PR piece by Strix, but I find the messaging mixed.

The Cal.com folks are getting a red team for free; wouldn't that further convince them that their closed source software is strong enough?

Isn't Strix's business companies paying for scans regardless of whether the software scanned is open source or closed?

shay_ker 1 day ago|
It's a good question - is blackbox hacking as effective as whitebox hacking, for AI agents? I've gotta assume someone at Anthropic is putting together an eval as we speak.
hansvm 1 day ago|
I don't really know, but I have a story which might prompt some conversation about it.

At $WORK we had a system which, if you traced its logic, could not possibly experience the bug we were seeing in production. This was a userspace control module for an FPGA driver connected to some machinery you really don't want to fuck around with, and the bug had wasted something like three staff+ engineer-years by the time I got there.

Recognizing that the bug was impossible in the userspace code if the system worked as intended end-to-end, the engineers started diving into verilog and driver code, trying to find the issue. People were suspecting miscompilations and all kinds of fun things.

Eventually, for unrelated reasons, I decided to clean up the userspace code (deleting and refactoring things unlocks additional deletion and refactoring opportunities, and all said and done I deleted 80% of the project so that I had a better foundation for some features I had to add).

For one of those improvements, my observation was just that if I had to write the driver code to support the concurrency we were abusing I'd be swearing up a storm and trying to find any way I could to solve a simpler problem instead.

Long story short, I still don't know what the driver bug was, but the actual authors must've felt the same way, since when I opted for userspace code with simpler concurrency demands the bug disappeared.

Tying it back to AI and hacking, the white box approach here literally didn't work, and the black box approach easily illuminated that something was probably fucky. Given that AI can de-minify and otherwise spot patterns from fairly limited data, I wouldn't be shocked if black-box hacking were (at least sometimes) more token-efficient than white-box.

pixl97 1 day ago||
>simpler concurrency demands

This seems to be extremely common. Been a very long time since I looked at Linux kernel stuff, but there were numerous drivers that disabled hardware acceleration or offloading features simply because they became unreliable if they were given heavy loads or deep queues.
