Posted by speckx 18 hours ago
90d seems long too though.
I think ultimately the big AI houses will need to help the core internet infra guys. Running the latest and greatest AI over stuff like nginx and friends makes sense for us all collectively, I think.
We should be able to turn a bug report into a patched product ready for QA testing in 1 hour. Standardize it and open source it, and have the whole software supply chain use it (e.g. Linux kernel -> distros -> products that use distros -> users). With AI there's no reason we can't do this; we're just slow.
Imo we are going to have to rely on more layers of security: systems that are designed to be secure even in the presence of individual vulnerabilities. This has already been happening for a while on mobile platforms and game consoles. Even physical hardware designed to keep particular secrets/keys hidden even from the kernel.
I actually don't think more layers of security will fix this. It would be nice if our systems were more secure... but people are, if nothing else, lazy af. Even when adding security isn't a lot of work, people resist it if it "sounds complicated". So I think we're stuck with the status quo. But the big issue now isn't novel bug types, it's the speed at which they're found. Therefore we need to speed up our response.
When you look at consoles, the vendors have built software that is resistant even to outright glitching of the CPU.
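(For the curious: one common software-level countermeasure in that space, sketched generically below rather than taken from any real console's code, is to perform security-critical checks redundantly and fail closed if the results ever disagree, so glitching a single branch or skipping one instruction isn't enough to bypass the check.)

    /* Generic sketch of a fault-injection countermeasure (not any
       particular console's firmware): do the critical check twice and
       refuse to continue on any inconsistency. */
    #include <stdbool.h>
    #include <stdlib.h>

    /* hypothetical verifier, assumed to be defined elsewhere */
    bool signature_valid(const unsigned char *img, size_t len);

    void boot_image(const unsigned char *img, size_t len)
    {
        /* volatile so the compiler can't fold the two checks into one */
        volatile bool ok1 = signature_valid(img, len);
        volatile bool ok2 = signature_valid(img, len);

        if (!ok1 || !ok2 || ok1 != ok2)
            abort();    /* fail closed: any disagreement means refuse to boot */

        /* ... hand control to the verified image ... */
    }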
I've been dealing with a bunch of AI-generated (or at least -assisted) vulnerability reports lately. In many cases the reports include proposed patches to fix the issues.
It's been... interesting. In many cases, the analysis provided in the report has been accurate and helpful. In some cases, the proposed patches have also been good, and we've accepted them with minimal or no changes.
In other cases, despite finding a valid issue, and even providing a good analysis of the problem, the AI tool's suggested patch has been, quite simply, wrong.
Careful review from somebody who really _understands_ the code -- and the wider context in which it is operating -- is still absolutely necessary. That's not always going to happen in an hour.
imagine patching everything up automatically and the patch itself is malware
everything's cooked
Not as worrisome in a philosophical way (since it's not a serious culture), but it's a real issue. And just wait for a nation state to start astroturfing helpful little libraries at scale ...
That's when firewalls were widely deployed to provide some layer of protection.
So you can ask yourself, what is the (possibly metaphorical) firewall in the software you depend on?
Is there any way you can decrease the attack surface, or separate out the most important data into extra-secure (and thus less accessible) systems?
Turns out it did.
Thus we should probably start treating our mental model of computing as a Dark Forest, not a friendly community. That mitigates these risks to some degree.
There will be much wailing and gnashing of teeth around this, because a lot of tech types really resent having to update constantly, but I don't think people will have a choice. If you have a complicated stack where major or even minor version updates are a huge hassle, I'd start working now to try and clear out the cruft and grease those wheels.
Debian continuously issues security updates for stable versions, which can be pulled in via automatic updates. “Stable” doesn’t mean that vulnerabilities aren’t getting fixed.
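Concretely, this is roughly what the stock unattended-upgrades setup looks like on a Debian box (file names and exact defaults vary a little between releases, so treat it as a sketch rather than a drop-in config):

    // /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    // /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
    Unattended-Upgrade::Origins-Pattern {
            "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
    };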
The argument that could be made is that keeping up with fixing vulnerabilities might become such a high workload that fewer releases can be maintained in parallel, and therefore the lifetime and/or overlap of maintained releases would have to be reduced. But the argument for abandoning stable releases altogether doesn’t seem cogent.
It goes both ways: Stable code that only receives security updates becomes less vulnerable over time, as the likelihood of new vulnerabilities being introduced is comparatively low. From that point of view, stable software actually has a leg up over continuous (“eternal beta” in the worst case) functional updates.
I hate software that forces you to take new features as a condition of obtaining bug and security fixes. We need to keep old "stable" builds around for longer and maintain them better. I know, I know, it is really upsetting to developers to have to backport things to old versions--they wish that all they had to work on was the current branch. But that just causes guys like me to never upgrade because the downside of upgrading (new features) is worse than the upside (security fixes).
It may actually be the opposite.
Debian's steady and professional approach to shipping security patches with very little to no functional difference actually enables us to consider and work on automated, autonomous weekly (or faster) patching of the entire fleet. And once that's in place and trusted, emergency rollouts become very possible and easy.
We have other projects that "move fast and break things" and ship whatever they want in whatever versions they want, and those require constant human attention to work through their shenanigans just to ship any security update and keep them up to date.
> CVE-2026-32105 xrdp
which I see has a fix in sid but not in bookworm
I kind of get your point, but they responded pretty quickly here.
We've just kept building more complex things with more exposure, with no recognition that the day of reckoning was coming. And now we are in an untenable situation. With governments spending billions on AI with the big providers, it's likely they've found many of these already.
Not so daunting for me, having come of age when compiling a kernel specific to a hardware platform was essential.
Custom software that does not fit the usual patterns is not foolproof, but its weaknesses won't be obvious either.
Monocultures with all their eggs in one basket are even less secure than truly diverse ecosystems though.
The 3rd one is what I practice when giving companies time to fix their issue. Note, I haven't reported anything to FOSS projects, but to several companies I found exploits in. I give them 5 days. If they don't respond at all in the first day, I deduct 1 day - apparently they're either incompetent or don't care. After the 5 days have passed, I make it public. So far they've all fixed the issue on the 3rd or 4th day.
If I were to report something to a FOSS project, I'd give them a bit more, say 8-9 days. Enough time for everyone to wake up, review the vuln, patch it and ship it. Enough time for all the downstream projects to also ship the patch.
90 days is ridiculous, especially for companies. If I report something on Friday 23:30 and they reply Monday 15:00 - what were they doing during the weekend? Did they forget their software is used 24/7? I had one company complain quite a bit, threatening to sue. When they realized there was no one to sue (me being anonymous with my report), they fixed it in less than a day.
Bottom line - if you're a company offering a product or service, you should have a security team 24/7.
If you're a FOSS project - either alert your users to stop using your software or disable the service yourself, if you can.
If it's an extremely important life-or-death service you can't shut off - then fix it quickly. What are you doing with life-or-death stuff when you can't react quickly enough?
Fuck the 90 days standard - it's what companies want us to do because it's easier on them. If security hasn't been your top priority, you have a few days to make it your top priority.
With AI, that makes even more sense now. Bugs won't be able to stay hidden for months. Especially bugs I've reported like IDORs or SQL injections - things everyone tries first.
(and I love Linux, but getting an "Oh noes!" from Anubis at kernel.org because I don't have cookies enabled (I do??) really makes me not want to report anything to the Linux kernel in particular. If I ever did find something, I'd just immediately post it as a HN comment or something like that)
It depends on the kind of vulnerability, but sometimes in order to fix a problem, you need to do an enormous amount of software engineering. Which needs to be done to a very high standard, because the expectation is that people will push security patches more or less immediately to production.
Of course, this only works if no one else is likely to discover the vulnerability in the meantime!
Why do we need to put up with excuses? If a company has lots of complicated code that would need an enormous amount of time to fix, it's on them. They decided to release this code into the wild.
If I publish the vuln publicly, the users would have the option to stop using the software/service until it's patched. If a customer is using a service without caring about security, it's on them. I want to protect the customers who would monitor the news for such vulns and protect themselves.