
Posted by speckx 18 hours ago

AI is breaking two vulnerability cultures (www.jefftk.com)
341 points | 136 comments | page 2
FuriouslyAdrift 15 hours ago|
Reverse engineering vulnerabilities from patches is red team 101...
Havoc 14 hours ago||
The "bugs are bugs" description reads pretty insane to me personally, but I know the Linux world has many people valuing the principle of it over practical matters.

90d seems long too though.

I think ultimately the big AI houses will need to help the core internet infra guys. Running the latest and greatest AI over stuff like nginx and friends makes sense for us all collectively, I think.

0xbadcafebee 13 hours ago||
We need automated patch and release cycles. So far we've relied on incredibly slow manual processes to accept reports, investigate, verify, patch, and prepare releases. Releasing a fix often takes months. This is way too slow when attackers can just churn out new exploits in hours. We need to iterate on value chain bottlenecks to lower Mean Time To Patch.

We should be able to turn around a bug report to a patched product ready for QA testing in 1 hour. Standardize/open source it, have the whole software supply chain use it (ex. Linux kernel -> distros -> products that use distros -> users). With AI there's no reason we can't do this, we're just slow.
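No such standardized pipeline exists today, but as a minimal sketch of the loop being described, here is one in Python. All the names (`run_pipeline`, `apply_patch`, `run_test_suite`, `qa_signoff`) are hypothetical placeholders, not a real tool's API:

```python
# Hypothetical sketch of an automated patch pipeline. The three hooks are
# placeholders for whatever real systems would fill these roles:
#   apply_patch    -- e.g. an AI-proposed fix applied to the source tree
#   run_test_suite -- automated regression/build gate
#   qa_signoff     -- mandatory human approval before release
def run_pipeline(report, apply_patch, run_test_suite, qa_signoff):
    """Turn a bug report into a release candidate, gated by tests and QA."""
    candidate = apply_patch(report)
    if not run_test_suite(candidate):
        return ("rejected", candidate)   # automation failed; back to a human
    if not qa_signoff(candidate):
        return ("held", candidate)       # built and tested, awaiting signoff
    return ("released", candidate)

# Toy usage: a "patch" that just tags the report as fixed.
status, _ = run_pipeline(
    report="CVE-0000-0001",
    apply_patch=lambda r: r + "-patched",
    run_test_suite=lambda c: c.endswith("-patched"),
    qa_signoff=lambda c: True,
)
print(status)
```

The point of the structure is that the human gates (review, QA signoff) stay in the loop; only the mechanical steps between them are automated, which is where most of the months of Mean Time To Patch currently go.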

Gigachad 11 hours ago||
On the other hand, automated fast rollouts lead to a CrowdStrike-type situation where you brick all the computers of the world immediately.

Imo we are going to have to rely on more layers of security: systems that are designed to be secure even in the presence of individual vulnerabilities. This has already been happening for a while on mobile platforms and game consoles. There's even physical hardware designed to keep particular secrets/keys even from the kernel.

0xbadcafebee 7 hours ago|||
The CrowdStrike situation wasn't due to fast rollouts, it was due to a total lack of testing. You can do fast rollouts, with testing and a mandatory QA signoff. It's called "continuous delivery" rather than "continuous deployment".

I actually don't think more layers of security will fix this. It would be nice if our systems were more secure... but people are, if nothing else, lazy af. Even when adding security isn't a lot of work, people resist it if it "sounds complicated". So I think we're stuck with the status quo. But the big issue now isn't novel bug types, it's the speed at which they're found. Therefore we need to speed up our response.

Gigachad 6 hours ago||
Lack of care, slip ups, and bugs are basically a constant that will always exist. But we can architect systems which are secure even in the face of bugs. Multiple layers of security mean that even the most critical kernel bug in iOS can never extract your faceid data or encryption key because the hardware physically isn’t capable of it. OSs like Qubes utilise multiple VMs so any kernel bugs have limited reach.

When you look at consoles, they have built software that is resistant to outright glitching the CPU.

jfk13 12 hours ago|||
Sounds like you're expecting the AI-based tools that are finding bugs to also provide fixes.

I've been dealing with a bunch of AI-generated (or at least -assisted) vulnerability reports lately. In many cases the reports include proposed patches to fix the issues.

It's been..... interesting. In many cases, the analysis provided in the report has been accurate and helpful. In some cases, the proposed patches have also been good, and we've accepted them with minimal or no changes.

In other cases, despite finding a valid issue, and even providing a good analysis of the problem, the AI tool's suggested patch has been, quite simply, wrong.

Careful review from somebody who really _understands_ the code -- and the wider context in which it is operating -- is still absolutely necessary. That's not always going to happen in an hour.

0xbadcafebee 7 hours ago||
Yes, that's why I specified "patched product ready for QA testing". It speeds up the development cycle by making a first pass and ensuring it basically works before passing it to a developer for manual review and a QA tester to ensure the fix doesn't break anything else. Both dev and QA are still in the feedback loop and can make changes until it's ready for release.
lofaszvanitt 1 hour ago||
what could go wrong? :DDD

imagine patching everything up automatically and it's a malware

everything cooked

wisty 9 hours ago||
A 3rd culture - the "security through obscurity" culture, where some random little library might be a potential weak link, but will anyone really bother to hack it?

Not as worrisome in a philosophical way (since it's not a serious culture), but it's a real issue. And just wait for a nation state to start astroturfing helpful little libraries at scale ...

proteal 15 hours ago||
It sounds to me like the safe assumption with software is that no matter how solid your stack is, there are vulnerabilities, potentially catastrophic. A question to folks more experienced than me - if my business depends on software, and I know that my software is almost certainly exploitable, how do I posture my business in such a way as to minimize the impacts of exploits like these?
perlgeek 15 hours ago|
When Windows was the predominant desktop OS in the 90s and maybe early 00s (ok, maybe still is), it was so badly insecure that you could be pretty much sure that it would be easy to compromise.

That's when firewalls were widely deployed to provide some layer of protection.

So you can ask yourself, what is the (possibly metaphorical) firewall in the software you depend on?

Is there any way you can decrease attack surface, separate out the most important data in extra-secure (and thus less accessible) systems?

cryo32 15 hours ago||
I must admit I'm rather enjoying this particular form of shit show, mostly because it was a prediction I made in 2023 in the early days of LLMs. It wasn't really a problem related to LLMs but a glaring hole in the thinking of current computing, which is the "frustratingly over-connected" and "over-trust" approach to everything. After reading Liu Cixin's "The Three-Body Problem" and noting the Dark Forest, I applied that to risk vectors and came to the conclusion that our over-connected nature, plus some form of acceleration, plus some form of negative impact, will fuck us big time.

Turns out it did.

Thus we should probably start treating our thinking model of computing as a Dark Forest, not a friendly community. That mitigates these risks to some degree.

9dev 13 hours ago|
If you're into gaming, Cyberpunk 2077 is essentially set in a heavily technologized world, where all compute infrastructure is infested with rogue AI that replicates itself to any technology it can physically get in touch with. The only recourse is a new web, built from first principles, protected by (probably) benevolent AI systems. Every device, every server, is partially occupied by AIs doing their thing on it, virtually networked into a digital universe. I found that a fascinating thought.
Analemma_ 16 hours ago||
I'd argue it's actually breaking three vulnerability cultures. In addition to the two Jeff mentions, I think the culture of delaying upgrades and staying on stable versions for as long as possible is going to become increasingly untenable, if everything that's not latest can be trivially scanned and exploited. In the extreme I think there's a decent chance projects like Debian might have to radically overhaul or just shut down completely - the whole philosophy of slow and steady with old code just won't work.

There will be much wailing and gnashing of teeth around this, because a lot of tech types really resent having to update constantly, but I don't think people will have a choice. If you have a complicated stack where major or even minor version updates are a huge hassle, I'd start working now to try and clear out the cruft and grease those wheels.

layer8 16 hours ago||
> there's a decent chance projects like Debian might have to radically overhaul or just shut down completely - the whole philosophy of slow and steady with old code just won't work.

Debian continuously issues security updates for stable versions, installable via automatic updates. “Stable” doesn’t mean that vulnerabilities aren’t getting fixed.

The argument that could be made is that keeping up with getting vulnerabilities fixed might become such a high workload that fewer releases can be maintained in parallel, and therefore the lifetime and/or overlap of maintained releases would have to be reduced. But the argument for abandoning stable releases altogether doesn’t seem cogent.

It goes both ways: Stable code that only receives security updates becomes less vulnerable over time, as the likelihood of new vulnerabilities being introduced is comparatively low. From that point of view, stable software actually has a leg up over continuous (“eternal beta” in the worst case) functional updates.

ryandrake 16 hours ago||
I can only dream, but this may re-popularize (among the rest of the non-Debian software industry) the general best practice of keeping a "sustaining" branch green, buildable, and with frequent releases, for security fixes.

I hate software that forces you to take new features as a condition of obtaining bug and security fixes. We need to keep old "stable" builds around for longer and maintain them better. I know, I know, it is really upsetting to developers to have to backport things to old versions--they wish that all they had to work on was the current branch. But that just causes guys like me to never upgrade because the downside of upgrading (new features) is worse than the upside (security fixes).

tetha 16 hours ago|||
> In the extreme I think there's a decent chance projects like Debian might have to radically overhaul or just shut down completely - the whole philosophy of slow and steady with old code just won't work.

It may actually be the opposite.

Debian's steady and professional approach to shipping security patches with very little to no functional difference actually enables us to consider and work on automated, autonomous weekly or faster patches of the entire fleet. And once that's in place and trusted, emergency rollouts are very possible and easy.

We have other projects that "move fast and break things" and ship whatever they want in whatever versions they want; those require constant human attention to work through their shenanigans and ship any update for a security issue.

calvinmorrison 16 hours ago||
Not only that, but Debian has, for example, debsecan, so you can see on any system what CVEs exist and whether your packages are patched. E.g., from my system I ran it and got

> CVE-2026-32105 xrdp

which I see has a fix in sid but not in bookworm

muvlon 16 hours ago|||
That's not really the culture of Debian, to be honest. Yes, they run old major and minor versions, but they do ship patch updates as fast as they can. Even on Debian stable, you absolutely are supposed to update all the time. The culture of "just don't touch it" is a different one (but it also exists; I've seen it).
acranox 16 hours ago|||
Debian has updated kernel packages out for the stable release. https://security-tracker.debian.org/tracker/CVE-2026-43284

I kind of get your point, but they responded pretty quickly here.

Analemma_ 16 hours ago||
Oh yeah, to be clear: Debian has always been good about quickly shipping patches to kernel vulnerabilities, and they will continue to be so. I was more thinking about whether they will get overwhelmed if every bit of software they package just has a firehose of vulnerabilities on everything which isn't latest.
pixl97 16 hours ago|||
We are now paying for the sins of our fathers (well and mostly ourselves).

We've just kept building more complex things with more exposure, with no recognition that the day of reckoning was coming. And now we are in an untenable situation. With governments spending billions on AI with the big providers, it's likely they've found many of these already.

y3ahd0g 15 hours ago|||
Yep. This is why I am using local AI to edit and build my own copies of Linux kernel, Wayland... everything a distribution would ship really.

Not so daunting for me, having come of age when compiling a kernel specific to a hardware platform was essential.

Custom software that does not fit the usual patterns is not foolproof, but it won't be an obvious target.

Monocultures with all their eggs in one basket are even less secure than truly diverse ecosystems though.

giancarlostoro 16 hours ago||
Arch Linux to become the only Linux OS left.
Worf 13 hours ago||
> On the other side you have "bugs are bugs" culture. This is especially common in Linux, where the argument is that if the kernel is doing something it shouldn't then someone somewhere may be able to turn it into an attack. Just fix things as quickly as possible, without drawing attention to them. Often people won't notice, with so many changes going past, and there's still time to get machines patched.

The 3rd one is what I practice when giving companies time to fix their issue. Note, I haven't reported anything to FOSS projects, but to several companies I found exploits in. I give them 5 days. If they don't respond at all in the first day, I deduct 1 day - apparently they're either incompetent or don't care. After the 5 days have passed, I make it public. So far they've all fixed the issue on the 3rd or 4th day.

If I were to report something to a FOSS project, I'd give them a bit more, say 8-9 days. Enough time for everyone to wake up, review the vuln, patch it and ship it. Enough time for all the downstream projects to also ship the patch.

90 days is ridiculous, especially for companies. If I report something on Friday 23:30 and they reply Monday 15:00 - what were they doing during the weekend? Did they forget their software is used 24/7? I had one company complain quite a bit, threatening to sue. When they realized there was no one to sue (me being anonymous with my report), they fixed it in less than a day.

Bottom line - if you're a company offering a product or service, you should have a security team 24/7.

If you're a FOSS project - either alert your users to stop using your software or disable the service yourself, if you can.

If it's an extremely important life-or-death service you can't shut off - then fix it quickly. What are you doing with life-or-death stuff when you can't react quickly enough?

Fuck the 90 days standard - it's what companies want us to do because it's easier on them. If security hasn't been your top priority, you have a few days to make it your top priority.

With AI, that makes even more sense now. Bugs won't be able to stay hidden for months. Especially bugs I've reported like IDORs or SQL injections - things everyone tries first.
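As an illustration of the SQL-injection class mentioned above, here is a minimal self-contained sketch using Python's stdlib sqlite3 and a toy `users` table (everything here is invented for the example):

```python
import sqlite3

# Toy database with two users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# Vulnerable pattern: attacker-controlled input spliced into the SQL string.
user_input = "1 OR 1=1"
leaked = conn.execute(
    f"SELECT name FROM users WHERE id = {user_input}"  # injection!
).fetchall()
# The WHERE clause becomes "id = 1 OR 1=1", matching every row.

# Fixed pattern: a parameterized query binds the input as data,
# so it is never parsed as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE id = ?", (user_input,)
).fetchall()
# "1 OR 1=1" is compared as a value against id and matches nothing.

print(len(leaked), len(safe))
```

This is exactly the "things everyone tries first" category: trivially scannable, trivially fixable, and exactly what automated tooling will sweep for at scale.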

(and I love Linux, but getting an "Oh noes!" from Anubis at kernel.org because I don't have cookies enabled (I do??) really makes me not want to report anything to the Linux kernel in particular. If I ever did find something, I'd just immediately post it as a HN comment or something like that)

jefftk 13 hours ago|
> 90 days is ridiculous, especially for companies

It depends on the kind of vulnerability, but sometimes in order to fix a problem, you need to do an enormous amount of software engineering. Which needs to be done to a very high standard, because the expectation is that people will push security patches more or less immediately to production.

Of course, this only works if no one else is likely to discover the vulnerability in the meantime!

Worf 13 hours ago||
The company can almost always shut down their service until they fix it. They'll lose money and their customers could also lose money if they depend on the service. That's the price they'll have to pay. Otherwise, they should either work frantically 24/7 to fix the vuln or if they can't, they should accept the fact that they've pushed code without any regard for security and bear the consequences.

Why do we need to put up with excuses? If a company has lots of complicated code that would need an enormous amount of time to fix, it's on them. They decided to release this code into the wild.

If I publish the vuln publicly, the users would have the option to stop using the software/service until it's patched. If a customer is using a service without caring about security, it's on them. I want to protect the customers who would monitor the news for such vulns and protect themselves.

jefftk 12 hours ago||
How would you apply this logic to something like https://meltdownattack.com ? The vulnerability was in hardware, discovered by companies that make user level software, and mitigated by changes to OS kernels.