Posted by chmaynard 1/2/2026

Linux kernel security work (www.kroah.com)
189 points | 107 comments
coppsilgold 1/3/2026|
Recently, things have been advancing which may finally allow a seamless virtualization experience on the Linux desktop (and match QubesOS in some security aspects).

GPU drivers supporting native contexts with Mesa support.

Wayland sharing between guest and host. It used to be somewhat sloppy (it involved protocol parsing; sommelier & wayland-proxy-virtwl), but recently someone undertook a project to do it properly that may soon bear fruit: https://codeberg.org/drakulix/wl-cross-domain-proxy

A VMM to utilize these features: https://github.com/AsahiLinux/muvm

And a solution which ties these things together: https://git.clan.lol/clan/munix

conradev 1/3/2026|
This is really cool!

I wonder if it would be possible to live-migrate apps from one machine running munix to another. You could pause, transfer, and resume the virtual machine.

tuananh 1/3/2026||
this is why redhat is still relevant in 2025. there's always a need for this kind of work.
staticassertion 1/3/2026|
I think you could make a stronger case for the opposite. How does Red Hat know which commits to cherry-pick when upstream explicitly won't tell you which are relevant to security?
invokestatic 1/3/2026||
Because Red Hat pays the salaries of dozens (hundreds?) of kernel maintainers all over different subsystems. So they’re subject matter experts, and know exactly which ones are relevant to Red Hat.
thrwwy81faa457 1/3/2026|||
This is the right answer.

Source: 10+y long (past) tenure at RH in a team adjacent to the kernel team.

EDIT: also because companies like RH tend to know, and are happy to know, the details of their customers' deployments. Compare the article:

> Always remember, kernel developers:

> - do not know your use case.

> - do not know what code you use.

> - do not want to know any of this.

staticassertion 1/3/2026||
https://bugzilla.redhat.com/show_bug.cgi?id=1708775 https://www.openwall.com/lists/oss-security/2020/06/23/2

Even RHEL misses things that don't get announced. This is a big issue for LTS kernels and downstreams, although RHEL does a much better job than most due to the nature of the company/products.

I don't have tons of examples offhand, but Spender and Project Zero have a number of examples like this (not necessarily for RHEL! Just in general, where the lack of a CVE led to downstreams not being patched).

https://googleprojectzero.github.io/0days-in-the-wild/0day-R...

Who is helped by this, for example? https://x.com/grsecurity/status/1486795432202276864

> Always remember, kernel developers:

> - do not know your use case.

> - do not know what code you use.

> - do not want to know any of this.

I just found this part so odd. You don't need to know how users are deploying code to know that a type confusion in an unprivileged system call that leads to full control over the kernel is a vulnerability. If someone has a very strange deployment where that isn't the case, okay, they can choose not to patch.
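
To make that concrete, here is a minimal hypothetical sketch of the bug class (invented names, not from any real kernel code) - a handler that trusts a caller-supplied tag instead of the object's own type field:

    /* Hypothetical type-confusion sketch; compiles with `cc -c`,
     * but is not from any real kernel code. */
    struct obj {
        int type;                  /* OBJ_BUF or OBJ_OPS, set at creation */
        union {
            char *buf;             /* caller-controlled bytes */
            void (*ops)(void);     /* function pointer */
        } u;
    };

    enum { OBJ_BUF, OBJ_OPS };

    void handle(struct obj *o, int caller_tag)
    {
        /* BUG: trusts caller_tag instead of o->type. Passing OBJ_OPS
         * for a buffer object makes the handler call through bytes
         * the caller controls -- full control, not just a crash. */
        if (caller_tag == OBJ_OPS)
            o->u.ops();
        else
            o->u.buf[0] = 0;
    }

Contrast that with a bug that can only oops the box: both are "bugs", but the blast radius isn't remotely comparable.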

It's odd for every distro to have to think "is this patch for a vulnerability?" with no insight from upstream on the matter. Thankfully, researchers can go out of their way to get a CVE assigned.

botanicalfriend 1/4/2026|||
Paying maintainers doesn't give Red Hat a magic oracle for "which commits matter for security". What you actually end up with is cherry-picking + backporting. Backporting is inherently messy: you can introduce new bugs (including security bugs) while trying to transplant fixes, and omissions are inevitable. And CVEs don't save you here: plenty of security-relevant fixes never get a tidy CVE in the first place, and vendors miss fixes because they often pretend the CVE stream is "the security feed".

Greg is pretty blunt about this in the video linked in the article: "If you are not using the latest stable / longterm kernel, your system is insecure" (see 51:40-53:00 in [1]). He also calls out Red Hat explicitly for ending up "off in the weeds" with their fixes.

RHEL as an entire distribution may provide good enough security for most environments. But that is not the same claim as "the RHEL kernel is secure" or "they know exactly which commits are relevant". It is still guesswork plus backports, and you're still running behind upstream fixes (many of which will never get pulled in). It is a comfortable myth.

[1] https://www.youtube.com/watch?v=sLX1ehWjIcw&t=3099s

DebugDruid 1/2/2026||
Sometimes I dream about a 100% secure OS. Maybe formal verification is the key, or Rust, I don’t know. But I would love to know that I can't be hacked.
themafia 1/3/2026||
> But I would love to know that I can't be hacked.

Cool. So social engineering it is. You are your own worst enemy anyways.

staticassertion 1/3/2026||
A world in which the only way to get hacked is to be tricked would be an insane improvement over today. There are a lot of ways to address social engineering with technical solutions too - FIDO2 is one example, as would be app isolation, etc.
jeffbee 1/3/2026|||
The problem is that for the overwhelming majority of use cases the isolation features that are violated by security bugs are not being used for real isolation, but for manageability and convenience. Virtualization, physical host segregation, etc are used to achieve greater isolation. People don't necessarily care about these flaws because they aren't actually exposed to the worst case preconditions. So the amount of contributor attention you could get behind a "100% secure OS" might not be as large as you are hoping. Anyway if you want to work on such things there are various OS development efforts floating around.
nine_k 1/3/2026|||
Isolation is one thing, correctness is another. You may have architecturally perfect, hardware-assisted isolation, but triggering a bug would breach it. This is how a typical VM breakout, container breakout, or privilege escalation happens.

There is a difference between a provably secure-by-design system and a formally proven secure implementation, like seL4.

ameliaquining 1/3/2026|||
Obligatory https://xkcd.com/2044/.
pjmlp 1/3/2026|||
This has been done multiple times in research; see the Verve OS from Microsoft, where even the assembly is verified - that is where Dafny came from.

https://en.wikipedia.org/wiki/Verve_(operating_system)

However, worse-is-better wins on the market, and quality doesn't pay off, hence such ideas take decades to reach the mainstream.

fsflover 1/3/2026|||
Here you go: https://qubes-os.org
JCattheATM 1/3/2026|||
That protects against much, but is far from a "100% secure OS". If the specific VM or 'qube' has a vulnerability, anything in that VM could be obtained/interacted with.
fsflover 1/4/2026||
Your VM isn't protected from malware that you run in it. However your OS and other VMs containing sensitive data (in which you of course do not run anything untrusted at all) will stay safe, by design.
JCattheATM 1/4/2026||
> Your VM isn't protected from malware that you run in it.

Right, that was the point - so your suggestion that Qubes is a '100% secure OS' is false.

fsflover 1/4/2026||
The OS is actually secure, isn't it? As well as all your valuable data. The VM gets compromised, after which you can reset it to its original state. See: https://doc.qubes-os.org/en/latest/user/how-to-guides/how-to...
JCattheATM 1/5/2026||
> The OS is actually secure, isn't it?

Not 100% secure, as was your claim.

fsflover 1/5/2026||
It is secure after resetting the Disposable VM. It's impossible to make it better, and I don't even understand what your actual problem is.
JCattheATM 1/5/2026||
> It is secure after resetting the Disposable VM.

What a nonsense answer. That's like saying a bank vault is secure after being rebuilt from being broken into. Meaningless.

It's not 100% secure while using it.

> It's impossible to make it better

Far from it. A formally verified codebase and better protections than DAC would be a start.

> I don't even understand what your actual problem is.

You made a BS claim and have an allergy to admitting you were wrong.

fsflover 1/5/2026||
> That's like saying a bank vault is secure after being rebuilt from being broken into. Meaningless.

Did you even read my reply? All data are safe, unlike in your (unrelated) example. Give me your actual threat model. 100% security never existed and never will. Security through correctness never worked and never will. Compartmentalization is the only viable approach.

JCattheATM 1/5/2026||
> All data are safe

This simply isn't the case. Any data in the VM is vulnerable if the VM has a vulnerability allowing exfiltration.

> Give me your actual threat model.

A vulnerability in the VM allowing exfiltration.

> 100% security never existed and never will.

Then why did you suggest Qubes as a 100% secure OS?

Are you now admitting you were wrong to do so?

> Security through correctness never worked and never will.

Security clearly isn't your area of expertise. Security through correctness is indeed a solution to many/most threats.

> Compartmentalization is the only viable approach.

Hardly. It can help, but at most it's a workaround.

fsflover 1/5/2026||
>> Give me your actual threat model.

> A vulnerability in the VM allowing exfiltration.

Thanks, now we can talk technically without accusations.

> Any data in the VM is vulnerable if the VM has a vulnerability allowing exfiltration.

Qubes OS lets you open any file in a dedicated, offline, disposable VM, for reading or for editing [0]. The original VM will not get compromised because it never touches the file. The disposable VM will not allow exfiltration, since it has no network (with the correct configuration).

There is a reason why this OS is chosen for SecureDrop Workstation [1].

> Then why did you suggest Qubes as a 100% secure OS?

There is nothing 100% in this world. Qubes is as close to 100% secure as possible. People often use imprecise expressions for things they wish existed. This is what I expected from your comment.

> Security clearly isn't your area of expertise. Security through correctness is indeed a solution to many/most threats.

Indeed, it is not my area. However it is the area of well-known security professionals whose opinion I trust [2].

[0] https://doc.qubes-os.org/en/latest/user/how-to-guides/how-to...

[1] https://workstation.securedrop.org/en/stable/

[2] https://blog.invisiblethings.org/2008/09/02/three-approaches...

JCattheATM 1/5/2026||
> Thanks, now we can talk technically without accusations.

That was always within your control.

> The disposable VM will not allow exfiltration, since it has no network

Sure, unless you're doing something in the disposable VM that requires network traffic, like browsing.

> Qubes is as close to 100% secure as possible.

No, it isn't. It lacks numerous protections. It serves a purpose against certain threat models, but it's far from being close to 100% secure. Like I said, it's essentially a workaround.

> There is nothing 100% in this world.

So you agree Qubes is not a 100% secure OS like the other poster was asking for, correct?

> However it is the area of well-known security professionals whose opinion I trust.

None of them are claiming it is as close to 100% secure as possible. No security expert would. Not even a security hobbyist would. It's a nonsense claim.

fsflover 1/5/2026||
>> The disposable VM will not allow exfiltration, since it has no network

> Sure, unless you're doing something in the disposable VM that requires network traffic, like browsing.

This is called goal shifting. Anyway, in this case Qubes can also save you. You browse untrusted websites in a disposable VM, which doesn't contain anything sensitive. You move any downloaded untrusted files to a dedicated storage VM and never open them there without another, dedicated disposable VM.

You browse trusted websites in another, more trusted VM. More details: https://doc.qubes-os.org/en/latest/user/how-to-guides/how-to...

> It lacks numerous protections. It serves a purpose against certain threat models, but it's far from being close to 100% secure. Like I said, it's essentially a workaround.

I challenge you to provide me with a threat model that is not covered by Qubes. You haven't yet. You can call it a workaround, but it's the only approach that actually works today and in the visible future.

> So you agree Qubes is not a 100% secure OS like the other poster was asking for, correct?

The poster is asking for a fairy-tale. I suggested something realistic that solves the problem instead.

> None of them are claiming it is as close to 100% secure as possible. No security expert would. Not even a security hobbyist would. It's a nonsense claim.

I also don't. But you seem to be seeking 100% security, don't you?

> That was always within your control.

I wasn't talking about my own words.

JCattheATM 1/5/2026||
> This is called goal shifting.

Far from it. You claimed Qubes was a 100% secure OS. I'm pointing out that it's not. Plenty of people use Qubes for browsing.

You are the only person goal shifting, by giving a specific scenario where you think your claim might apply (it still doesn't). When I mention a more common scenario, you call it goal shifting. This is blatantly dishonest.

> You browse untrusted websites in a disposable VM, which doesn't contain anything sensitive. You move any downloaded untrusted files to a dedicated storage VM and never open them there without another, dedicated disposable VM.

Yeah, I know how Qubes works - you're continuing to miss the point. Sometimes, you may have to upload sensitive data, so you do it in a disposable VM. That disposable VM is protected from all your other disposable VMs, but it isn't protected if something manages to get access to that particular disposable VM. Do you get it now? Stop being obtuse, just admit your claim was bogus. Be honest.

> I challenge you to provide me with a threat model that is not covered by Qubes. You haven't yet.

I already did above lol. Kernel level RCE that grants a remote root shell. Boom.

What you don't understand is that a secure OS could protect against that, and there are such secure OSs in existence - just not targeted at consumers.

Qubes can limit the damage, but it doesn't prevent it. It doesn't even really try.

> You can call it a workaround, but it's the only approach that actually works today and in the visible future.

That's just not true, and it's why institutions that actually need real, verifiable security are not using it. It's a hack mainly used by hobbyist tinkerers like yourself.

> The poster is asking for a fairy-tale. I suggested something realistic that solves the problem instead.

It doesn't solve the problem, it's a workaround.

You don't seem to have the ability to flat out admit you were wrong, but I suppose this is as close as you're capable of coming to doing so. I'll take it.

> I also don't.

You literally did so in your last reply.

> I wasn't talking about my own words.

Right, but I was. If you wanted to have a technical discussion, you could have responded with a technical argument in your first reply to me. You didn't, you chose to preach and be overly defensive instead.

sydbarrett74 1/3/2026||
Anything made by humans can be unmade by humans. Security is a perpetual arms race.
staticassertion 1/3/2026||
"A bug is a bug" lol.

There's a massive difference between "DoS requiring root" and "I can own you from an unprivileged user with one system call". You can say "but that DoS could have been a privesc! We don't know!" but no one is arguing otherwise? The point is that we do know the impact of some bugs is strictly a superset of other bugs, and when those bugs give control or allow a violation of a defined security boundary, those are security bugs.

This has all been explained to Greg for decades, nothing will change so it's just best to accept the state - I'm glad it's been documented clearly.

Know this - your kernel is not patched unless you run the absolute latest version. CVEs are discouraged, vuln fixes are obfuscated, and you should operate under that knowledge.

Attackers know how to watch the commit log for these hidden fixes btw, it's not that hard.

edit: Years later and I'm still rate limited so I can't reply. @dang can this be fixed? I was rate limited for posting about Go like... years ago.

To the person who replies to me:

> This is correct for a lot of different software, probably most of it. Why is this a point that needs to be made?

That's not true at all. You can know if you're patched for any software that discloses vulnerabilities by checking if your release is up to date. That is not true of Linux, by policy, hence this entire post by Greg and the talks he's given about suggesting you run rolling releases.

Sorry but it's too annoying to reply further with this rate limiting, so I'll be unable to defend my points.

tamirzb 1/3/2026|
> Know this - your kernel is not patched unless you run the absolute latest version.

This is correct for a lot of different software, probably most of it. Why is this a point that needs to be made?

thrwwy81faa457 1/3/2026||
(Parent has already replied by editing their original comment, but I'll tack on a bit more info, from my perspective.)

The reason this has to be emphasized is that all new code runs the risk of regressions, and in a production environment, you hate regressions. Therefore, not only do you not want new features, but you also don't want irrelevant bug fixes. Bug fixes, even security fixes, are not magically free of independent regressions. Therefore a valid incentive exists to minimize backports to production environments. And such a balancing act depends on the careful investigation of the impact of known bugs, one by one.

From the fine blog post:

> For those that are always worried “what if a bugfix causes problems”, they should remember that a fix for a known bug is better than the potential of a fix causing a future problem as future problems, when found, will be fixed then.

A whole lot of users can disagree with this. For good, practical reasons. The expected damage of a known bug may be estimated, while an unknown regression brought in by the fix for the known bug may cause way worse damage.
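
To make that last point concrete, here is a hypothetical sketch (invented, not from any real kernel commit) of how a reasonable-looking security fix can regress a previously valid caller:

    /* Hypothetical: v1 has an overflow; the v2 "fix" tightens the
     * bounds check by one too many. Compiles with `cc -c`. */
    #include <string.h>

    static char buf[64];

    int store_v1(const char *src, size_t len)
    {
        memcpy(buf, src, len);      /* BUG: overflows if len > 64 */
        return 0;
    }

    int store_v2(const char *src, size_t len)
    {
        if (len >= sizeof(buf))     /* fix -- but off by one: */
            return -1;              /* len == 64 was valid before */
        memcpy(buf, src, len);
        return 0;
    }

Backport something like that into a stable tree without the full caller context, and you have traded a theoretical overflow for a real breakage.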

anonnon 1/3/2026||
Meanwhile it's 2026 and Greg's own website still doesn't support TLS.
juliangmp 1/3/2026|
Honestly, until Encrypted Client Hello has widespread support, why bother? I mean, I did it for fun the first time, and now with Caddy it's not a lot of effort. But for a personal blog, a completely static site, what benefit do you get from the encryption? Anyone monitoring the traffic will see the domain in clear text anyway. And they'd see the destination IP, which I imagine in this case is one server with exactly one domain pointed at it.
swinglock 1/3/2026|||
Men in the middle, including predatory ISPs, can not only spy but also tamper. Injecting JavaScript and embedding ads is the best-case scenario. You don't want that.

In addition, even without bad actors, TLS prevents random corruption from flaky infrastructure from breaking the page - and keeps caches from storing those broken assets, where even a reload wouldn't fix it. TCP/IP alone doesn't sufficiently prevent this.

Am4TIfIsER0ppos 1/3/2026|||
> JavaScript

Why do you allow that RCE in the first place?

swinglock 1/3/2026||
Most users have JS enabled nowadays. Much of the web doesn't work without it. It was just an example.
psnehanshu 1/3/2026|||
TCP ensures what gets sent on one side gets received on the other side. TLS just encrypts the data. So even without TLS, random corruption won't happen unless someone does a MITM attack.
swinglock 1/3/2026|||
No it does not. I've had this happen in legacy systems myself. The checksums of TCP/IP are weak and will let random errors through to L7 if there are enough of them. It's not even a CRC, and you must bring your own verification if it's critical for your application that the data is correct. TLS does that and more, protecting not only against random corruption but also against active attackers. The checks you get for free should be seen only as an optimization, letting most but not all errors be discarded quickly and easily. Just use TLS.
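
You can demonstrate this yourself: the Internet checksum (RFC 1071) is a 16-bit ones'-complement sum, so swapping any two aligned 16-bit words corrupts the data without changing the checksum - something a CRC would catch. A self-contained toy implementation (mine, not kernel code):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* RFC 1071 Internet checksum over big-endian 16-bit words. */
    static uint16_t inet_checksum(const unsigned char *data, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i + 1 < len; i += 2)
            sum += (uint32_t)((data[i] << 8) | data[i + 1]);
        if (len & 1)
            sum += (uint32_t)(data[len - 1] << 8);
        while (sum >> 16)                       /* fold carries */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }

    int main(void)
    {
        unsigned char a[] = "PAY me $100, OK?";   /* 16 payload bytes */
        unsigned char b[sizeof a];
        memcpy(b, a, sizeof a);
        /* Swap two aligned 16-bit words: "me" <-> "10". */
        b[4] = a[8]; b[5] = a[9];
        b[8] = a[4]; b[9] = a[5];
        printf("%s  checksum 0x%04x\n", (char *)a, inet_checksum(a, 16));
        printf("%s  checksum 0x%04x\n", (char *)b, inet_checksum(b, 16));
        return 0;
    }

Both lines print the same checksum even though the payloads differ; a TLS integrity check would reject the tampered one.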
ppseafield 1/3/2026|||
I saw for myself years ago that Verizon injected marketing tracking headers into HTTP traffic. My ISP was the MITM.

https://www.eff.org/deeplinks/2014/11/verizon-x-uidh

mqus 1/3/2026|||
Integrity. TLS does prevent man-in-the-middle attacks. For a personal blog, that may not be important but you _do_ get a benefit, even if the encryption is not necessary.
anonnon 1/3/2026||
Yeah, that was my point. This guy is Linus' chief lieutenant and heir apparent, and he doesn't even bother to ensure the integrity of his transmissions is protected through TLS.
miduil 1/3/2026||
> If you are forced to use encryption to report security problems, please reconsider this policy as it feels counterproductive (UK government, this means you…)

LOL

tuananh 1/3/2026||
if they really think that, they should have removed their CNA, no?
theamk 1/3/2026|
Nah, "removing CNA" = "let any security researcher decide what kernel vulnerability is"

And unfortunately, there are plenty of security researchers who are only interested in personal CVE counts, and will try to assign highest priority to a mostly harmless bug.

tuananh 1/4/2026||
but keeping the CNA and deciding "nah, i won't number it" instead?
theamk 1/4/2026||
They do number it... in fact, once they became a CNA in 2024, the number of kernel CVEs increased almost 10x (see [0]).

They just stopped assigning priorities/impact scores to them, because "A simple bugfix for a minor thing for one user could be a major system vulnerability fix for a different user, all depending on how Linux is being used"

(For an example of why having a severity rating on a CVE is a bad idea, see Red Hat's treatment of CVE-2025-68343 [1] - they gave it "high", score 7. Many security teams in large corps would require a quick patch/kernel upgrade. And yes, this is a null-pointer dereference in a single USB device driver. Even if this is exploitable (which I am not sure about), this driver is _never_ going to be loaded into any of our cloud machines, so most of our infra is not affected.)
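
To illustrate the bug class, a hypothetical userspace rendering (malloc standing in for kmalloc, names invented):

    #include <stdlib.h>

    struct widget_dev {
        unsigned char *bulk_buf;
        int buf_len;
    };

    /* Sketch of a device-probe path that skips the allocation check.
     * In the kernel, the write below on allocation failure is an
     * oops -- a crash/DoS, not a takeover -- and the probe only ever
     * runs if matching hardware is actually present. */
    static int widget_probe(struct widget_dev *dev, int len)
    {
        dev->bulk_buf = malloc((size_t)len);
        dev->buf_len = len;
        /* BUG: no NULL check before the write. */
        dev->bulk_buf[0] = 0;
        return 0;
    }

Scoring that "high" for every system, whether or not the driver can even load, is exactly the problem with blanket severity ratings.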

[0] https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...

[1] https://access.redhat.com/security/cve/cve-2025-68343

bschmidt25011 1/4/2026||
[dead]
ryanisnan 1/3/2026||
[flagged]
cogman10 1/3/2026|
Why? What benefit would https provide over http when visiting a purely informational (and, I'm guessing, statically generated) website?
ryanisnan 1/16/2026|||
Great question. People answered already, but, yeah, basically what they said.

For hobby sites, you could argue (though I think the argument is still weak) that the MITM threat is low enough not to be worth addressing - but this is a security blog.

Security blogs matter, because people follow their guidance. This makes them a potentially attractive target for some groups of people.

fordsmith 1/3/2026||||
If you are on a public network without a VPN, you open yourself up to a MITM injecting something malicious.
cogman10 1/3/2026||
Sure, but that's ultimately a pretty unlikely attack vector. An attacker still needs to exploit some unknown vulnerability of your web browser in order to get something malicious going.

I basically expect that sort of attack to only be pulled off by a state actor or by a black hat convention for the lolz.

nine_k 1/3/2026|||
ISPs used to inject ads into HTTP-served pages as recently as 10 years ago; I personally remember that. And not only tiny ISPs. I'm not alone: https://superuser.com/questions/902635/isp-is-inserting-ads-...

Ad injection is a relatively benign kind of tampering. It could be much more creative and sinister.

Alupis 1/3/2026||||
It is a valid thing to point out, when https on gkh's site would take all of 15 minutes to set up (Let's Encrypt or Cloudflare or whatever you wish).

Things should be https by default these days. There's zero downside anymore.

bqmjjx0kac 1/3/2026|||
Confidentiality, integrity, and authenticity :)
cogman10 1/3/2026||
> Confidentiality

Reading a blog post about linux security? Do you actually care if the NSA/FBI/CIA/FDA/USDA or anyone else knows you read this particular blog post?

I could understand this argument if we were talking about a social media site, or something more substantial. But a blog post?

> authenticity

It's a linux security blog post. While it's technically possible for a MITM to get in between and inject false information... about linux security? Is that really a real threat?

> integrity

Maybe a real problem, assuming a malicious MITM is trying to target you. But, I suspect, there are other avenues that'd be more fruitful in that case. Just hoping that someone would visit an http site seems like a far-fetched concern.

vlovich123 1/3/2026|
I think the most practical reason not to flag which bugs are security bugs is to avoid helping blackhat hackers by painting a giant neon sign and that should be more than enough.

I think all the other explanations are just double-think. Why? If "bugs are just bugs" is really a true sentiment, why is there a separate disclosure process for security bugs? What does it even mean to classify a bug as a security bug during reporting if it's no different than any other bug report? Why are fixes developed in secret & potential embargoes sometimes invoked? I guess some bugs are more equal than others?

fguerraz 1/3/2026||
As mentioned in the article, every bug is potentially a security problem to someone.

If you know that something is a security issue for your organization, you definitely don't want to paint a target on your back by reporting the bug publicly with an email address <your_name>@<your_org>.com. In the end, it is actually quite rare (given the size of the code base and the popularity of linux) that a bug has a very wide security impact.

The vast majority of security issues don't affect organizations that are serious about security (yes really, SELinux eliminates or seriously reduces the impact of the vast majority of security bugs).

vlovich123 1/3/2026||
The problem with that argument is that the reports don't necessarily come from the organization for whom it's an issue. Security researchers unaffiliated with, and not impacted by, any such issue still report it this way (e.g. Project Zero reporting issues that don't impact Google at all).

Also, Android uses SELinux and still has lots of kernel exploits. Believing SELinux solves the vast majority of security issues is fallacious, especially since it's primarily about securing userspace, not the kernel itself.

suspended_state 1/3/2026||
> The problem with that argument is that the reports don’t necessarily come from the organization for whom it’s an issue.

You can already say that for the majority of the bugs being fixed, and I think that's one of the points: tagging certain bugs as exploitable makes it seem like the others aren't. More generally, someone's minor issue might be a major one for someone else, and not just in security. It could be anything the user cares about: data, hardware, energy, time.

Perhaps the real problem is that security is just a view on the bigger picture. Security is important, I'm not saying the opposite, but if it's only one aspect of development, why focus on it in the development logs? Shouldn't it instead be discussed on its own, in separate documents, mailing lists, etc., by those who are primarily concerned by it?

vlovich123 1/3/2026||
Are memory leak fixes described as memory leak fixes in the logs, or intentionally omitted as such? Are kernel panics or hangs not described in the commit logs even if they only happen in weird scenarios? That's clearly not what's happening, meaning security bugs are still recorded and described differently, through omission.

However you look at it, the only real justification that's consistent with observed behaviors is that pointing out security vulnerabilities in the development log helps attackers. That explains why known exploitable bugs are reported differently beforehand and described differently after the fact in the commit logs. That wouldn't happen if "a bug is a bug" were a genuinely held position.

drysart 1/3/2026|||
> However you look at it, the only real justification that’s consistent with observed behaviors is that pointing out security vulnerabilities in the development log helps attackers.

And on top of your other concerns, this quoted bit smells an awful lot like 'security through obscurity' to me.

The people we really need to worry about today, state actors, have plenty of manpower available to watch every commit going into the kernel and figure out which ones are correcting an exploitable flaw, and how; and they also have the resources to move quickly to take advantage of them before downstream distros finish their testing and integration of upstream changes into their kernels, and before responsible organizations finish their regression testing and let the kernel updates into their deployments -- especially given that the distro maintainers and sysadmins aren't going to be moving with any urgency to get a kernel containing a security-critical fix rolled out quickly because they don't know they need to because *nobody's warned them*.

Obscuring how fixes are impactful to security isn't a step to avoid helping the bad guys, because they don't need the help. Being loud and clear about them is to help the good guys; to allow them to fast-track (or even skip) testing and deploying fixes or to take more immediate mitigations like disabling vulnerable features pending tested fix rollouts.

suspended_state 1/3/2026|||
There are channels in place to discuss security matters in open source. I am by no means an expert nor very interested in the topic, but just searching a bit led me to

https://oss-security.openwall.org/wiki/mailing-lists

The good guys are certainly monitoring these channels already.

vlovich123 1/3/2026|||
There's lots of different kinds of bad guys. This probably has marginal impact on state actors. But organized crime or malicious individuals? It probably raises the bar a little bit, and part of defense in depth is employing a collection of mitigations to increase the cost of creating an exploit.
suspended_state 1/3/2026|||
> Are memory leak fixes described as memory leak fixes in the logs or intentionally omitted as such? Are kernel panics or hangs not described in the commit logs even if they only happen in weird scenarios?

I don't know nor follow kernel development well enough to answer these questions. My point was just a general reflection, and admittedly a reformulation of Linus's argument, which I think is genuinely valid.

If you allow me, one could frame this differently though: is the memory leak the symptom or the problem?

vlovich123 1/3/2026||
No one is listing the vast number of possible symptoms a security vulnerability could be causing.
suspended_state 1/3/2026||
Indeed nobody does that, because it would just be pointless; it doesn't expose the real issue. Is a security vulnerability a symptom, or the real issue, though? Doesn't it depend on the purpose of the code containing the bug?
staticassertion 1/3/2026||
> I think the most practical reason not to flag which bugs are security bugs is to avoid helping blackhat hackers by painting a giant neon sign and that should be more than enough.

It doesn't work. I've looked at the kernel commit log and found vulnerabilities that aren't announced/marked. Attackers know how to do this. Not announcing is a pure negative.

lfllfkddl 1/3/2026||
Linus's argument against labeling some bugs, or even missing features, as security vulnerabilities is that all bugs can, with enough work and together with other circumstances, be a security vulnerability. Essentially every commit would need to be labeled as a CVE fix, and then it's just extra work for nothing.
staticassertion 1/3/2026||
> Linus's argument against labeling some bugs, or even missing features, as security vulnerabilities is that all bugs can, with enough work and together with other circumstances, be a security vulnerability.

This isn't true though. Some bugs are not exploitable, some are trivial to exploit. Even if sometimes we'd end up with a DoS that was actually a privesc, how does that make it pointless to label the ones we know are privescs as such?

You can argue "oh no sometimes we mislabeled a DoS" but most of the time you can tell when something is going to be a powerful vuln or not ahead of time, I think this is a red herring to optimize around.

> Essentially every commit would need to be labeled as a CVE fix, and then it's just extra work for nothing.

This isn't true and has never been true for any other project. There are issues with the CVE system, this is not one of them. Note that the Linux kernel is the standout here - we don't have to guess about issues in the CVE system, we observe them all the time. "We need a CVE for every commit" is not one of them.
