Posted by chmaynard 4 days ago

Linux kernel security work(www.kroah.com)
185 points | 97 comments | page 2
badgersnake 3 days ago|
And then all our customers will demand fixes for them in our docker images, because they’re that smart.

There must be a way to ship a docker image without a kernel, since it doesn’t get used for anything anyway.

derkades 3 days ago||
Huh, how do you unintentionally ship a Linux kernel in a container image? The common base images definitely don't contain the kernel.
staticassertion 3 days ago||
The only thing I can imagine is that they've somehow managed to rely on kernel headers in their image? idk
shepherdjerred 3 days ago||
https://github.com/wolfi-dev
JCattheATM 4 days ago|
Their view that security bugs are just normal bugs remains very immature and damaging. It is somewhat mitigated by Linux having so many eyes on it and so many developers, but a lot of problems in the past could have been avoided if they adopted the stance the rest of the industry recognizes as correct.
tptacek 4 days ago||
From their perspective, on their project, with the constraints they operate under, bugs are just bugs. You're free to operationalize some other taxonomy of bugs in your organization; I certainly wouldn't run with "bugs are just bugs" in mine (security bugs are distinctive in that they're paired implicitly with adversaries).

To complicate matters further, it's not as if you could rely on any more "sophisticated" taxonomy from the Linux kernel team, because they're not the originators of most Linux kernel security findings, and not all the actual originators are benevolent.

rwmj 4 days ago|||
For sure, but you don't need to file CVEs for every regular bug.
Skunkleton 4 days ago||
In the context of the kernel, it’s hard to say when that’s true. It’s very easy to fix some bug that resulted in a kernel crash without considering that it could possibly be part of some complex exploit chain. Basically any bug could be considered a security bug.
SSLy 4 days ago||
plainly, crash = DoS = security issue = CVE.

QED.

michaelt 4 days ago|||
BRB, raising a CVE complaining the OOM killer exists.
pamcake 4 days ago|||
Memory leaks are usually (accurately) treated as DoS. The OOM killer is a mitigation to contain them and not DoS the entire OS.
worthless-trash 4 days ago||||
I could be wrong, but operation by design isn't considered a bug.
samus 4 days ago|||
It is if some other condition is violated that is more important. Then the design might have to be reconsidered.
suspended_state 3 days ago|||
If it is faulty, then it's not a bug, it's a flaw.
lfllfkddl 3 days ago||
It is possible to design a security vulnerability.
worthless-trash 2 days ago||
Oh, now that is an exciting area.
SSLy 3 days ago|||
you either get OOMed or the next malloc fails, and that's also going to wreak havoc
JCattheATM 4 days ago|||
> From their perspective, on their project, with the constraints they operate under, bugs are just bugs.

That's a pretty poor justification. Their perspective is wrong, and their constraints don't prevent them from treating security bugs differently as they should.

ada0000 4 days ago|||
> almost any bugfix at the level of an operating system kernel can be a “security issue” given the issues involved (memory leaks, denial of service, information leaks, etc.)

On the level of the Linux kernel, this does seem convincing. There is no shared user space on Linux where you know how each component will react/recover in the face of unexpected kernel behaviour, and no SKUs targeting specific use cases in which e.g. a denial of service might be a worse issue than on desktop.

I guess CVEs provide some of this classification, but they seem to cause drama amongst kernel people.

samus 4 days ago|||
You have a pretty strongly worded stance, but you don't provide an argument for it. May I suggest you detail why exactly you think their perspective is wrong, apart from "a lot of problems in the past could have been avoided"?
JCattheATM 3 days ago||
My view here isn't uncommon, even if it's a minority view. I've noticed a lot of people tend to just defend and adopt the stances of projects they like or use without necessarily thinking things through, and I assume that's at least partly the case here.

There's been a lot of criticism written on the kernel devs' stance over the last, what, 20 years? One obvious problem is that without giving security bugs (i.e. vulnerabilities) priority, systems stay vulnerable until the bug gets patched at whatever place in the queue it happens to be at.

jacobsenscott 4 days ago|||
Classifying bugs as security bugs is just theater - and any company or organization that tries to classify bugs that way is immature and hasn't put any thought into it.

First of all, "security" is undefined. Second, nearly every bug can be exploited in a malicious way, but that way is usually not easy to find. So should every bug be classified as a security bug?

Or should a bug be classified as a security bug only when someone can think of a way to exploit it on the spot during triage? In that case only a small subset of your "security" bugs get classified as such.

It is meaningless in all cases.

therealrootuser 4 days ago|||
> nearly every bug can be exploited in a malicious way

This is a bit contextually dependent. "This widget is the wrong color" is probably not a security issue in most cases, unless the widget happens to be a traffic signal, in which case it is a major safety concern.

Even the line between "this is a bug" and "this is just a missing, incomplete, or poorly thought out feature" can get a bit blurry. At a certain point, many engineers get frustrated trying to pick apart the difference between all these ways of classifying the code they are writing and just want to get on with making the system work better.

staticassertion 2 days ago||||
> First of all "security" is undefined.

No it isn't. Security boundaries exist and are explicit. It isn't undefined at all. Going from user X to user Y without permission to do so is an explicit vulnerability.

The kernel has permissions boundaries. They are explicit. It is defined.

> Second, nearly every bug can be be exploited in a malicious way,

No they can't.

ykonstant 3 days ago||||
> "security"

Security is not a dirty word, Blackadder.

JCattheATM 4 days ago|||
> First of all "security" is undefined.

Nonsense.

schmuckonwheels 4 days ago|||
Linus has been very clear on avoiding the opposite, which is the OpenBSD situation: they obsess about security so much that nothing else matters to them, which is how you end up with a mature 30 year old OS that still has a dogshit unreliable filesystem in 2026.

To paraphrase LT, security bugs are important, but so are all the other bugs.

JCattheATM 4 days ago||
OpenBSD doesn't really stress about security so much as they made that their identity and marketing campaign - their OS is lacking too many basic capabilities a security focused OS should have.

> To paraphrase LT, security bugs are important, but so are all the other bugs.

Right, this is wrong, and that's the problem. Security bugs as a class are always going to be more important than certain other classes of bugs.

6r17 3 days ago|||
I have to disagree; it's worse than you think. OpenBSD has so many mitigations in place that your computer will probably run 50% slower than on a traditional OS. In reality you do not want to play 100% safe everywhere, because that is simply expensive. You might prefer to create an isolated network on which you can set up unmitigated servers; those will be able to run at 100% capacity.

You can see this when compiling the Linux kernel: the mitigation options are rather numerous, and you'll also have to pick a timer frequency. What I'm saying is that currently Linux only lets you tune a machine to a specific requirement at build time; it's not a spaceship on which you can change the timer frequency or dynamically shut down mitigations while it's running. In the same spirit, if you are holding keys on anything other than OpenBSD, I hope for you that you have properly looked up what you were installing.

cedws 3 days ago|||
And their ‘no remote holes’ is true for a base install with no packages, not necessarily a full system.

I think the OpenBSD approach of secure coding is outdated. The goal should have always been to take human error out of the equation as much as possible. Rust and other modern memory safe languages move things in that direction, you don’t need ultra strict coding standards and a bible of compiler flags.

JCattheATM 3 days ago||
> I think the OpenBSD approach of secure coding is outdated.

I don't think it's outdated; it's a core part of the puzzle. The problem with their approach is that they rely on it 100%, and don't have enough in place (and yes, I'm aware of all the mitigations they do have) to protect against bugs they miss. This is a lot less true now than it was 15-20 years ago, but it's still not great IMO.

akerl_ 4 days ago|||
This feels almost too obvious to be worth saying, but “the rest of the industry” does not in fact have a uniform shared stance on this.
firesteelrain 4 days ago|||
“A bug is a bug” is about communication and prioritization, not ignoring security. Greg’s post spells that out pretty clearly.
JCattheATM 4 days ago||
Yes, that's what I was criticizing....
redleader55 3 days ago|||
The rest of the industry relies on following a CVE list and ticking off vulnerabilities as a way to ensure "owners" are correctly assigned risk and sign it off, because there is nothing else "owners" could do. The whole security-through-CVE approach is broken, and is designed mainly to sustain large "security organizations" whose single purpose is annoying everyone with reports without solving any issues.
themafia 4 days ago|||
> a lot of problems in the past could have been avoided

Such as?

beanjuiceII 4 days ago||
did you read it? because that's not their view at all