Top
Best
New

Posted by flipped 15 hours ago

Dirtyfrag: Universal Linux LPE (www.openwall.com)
664 points | 272 comments
firer 14 hours ago|
This is very similar in root cause and exploitation to Copy Fail.

Which illustrates pretty well something that's lost when relying heavily on LLMs to do work for you: exploration.

I find that doing vulnerability research using AI really hinders my creativity. When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby. It's like a genie - you get exactly what you asked for and nothing more.

The researcher who discovered Copy Fail relied heavily on AI after noticing something fishy. If he had had to manually wade through lots of code by himself, he would have had many more chances to spot these twin bugs.

At the same time, I'm pretty sure that with slightly less directed prompting, a frontier LLM would have found these bugs for him too.

It's a very unusual case of negative synergy, where working together hurt performance.

timcobb 6 hours ago||
> When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby.

Very much aligns with my experience. For me this is the most unsatisfying thing about AI-based workflows in general: they miss stuff humans would never miss.

All the time I wonder: what am I missing that's right nearby? It's remarkable how many times I have to ask Claude Code to fully ingest something before it actually puts it into context. It always tries to laser in on the target it's looking for, which is often not what you want it to look for, or at least not all of what you want it to look for. Getting these models to open up their field of vision is tough.

clbrmbr 12 minutes ago|||
It’s interesting to compare how the agentic search performs, with these targeted reads and lots of tool calls in the stream, versus the older but still valid paradigm of using a high-reasoning model like GPT-X-pro and feeding in all the relevant files at once with no tools.

I have found that the “pro” approach is much more holistic and able to tackle rather “creative” problems that require very careful design and the overall artifact is tight and self-consistent. — Claude Code by comparison is incredible in exploration and targeted implementation but indeed is not great at seeing the forest.

ulrikrasmussen 5 hours ago||||
Do you think this is inherent or an artifact of prompting? Curiosity and side quests lead to higher token usage and longer time to finish, so I could understand why current harnesses and system prompts would not encourage that sort of thing.

But what if a coding agent was prompted to be more curious during development? Like a human developer, make mental notes of alternatives to try out and chase suspicious looking code which may seem unrelated to the task at hand. It could even spawn rabbit hole agents in parallel.

Taking a step back, this probably highlights a major hazard of the increased usage of LLMs for coding: everyone's style of work is going to converge, because most code will be written by the 2-3 most popular models using the same system prompts.

lloeki 3 hours ago||
I've seen something similar, solutions generated feel very pythonic or javaesque in languages that are neither Python nor Java (C, Rust, Ruby)

I've had to explicitly direct the machine to read existing sibling code and follow the specific idioms and patterns in use.

dotancohen 2 hours ago||||

  > All the time I wonder what am I missing that's right nearby?
Add to the prompt "use coding conventions of the file which you are currently editing". That gets the machine (Opus and Sonnet at least) to go over the nearby code and occasionally mention something obvious.
eqvinox 14 hours ago|||
No, unless I'm misreading it it's the *same* root cause: high 32 bits of Extended ESN in IPsec == authencesn module/cipher mode.

The wrong thing got fixed for copy.fail, because people jumped to blame AF_ALG.

[ed.: yes it's the same authencesn issue. https://github.com/V4bel/dirtyfrag/blob/892d9a31d391b7f0fccb... it doesn't say authencesn in the code, only in a comment, but nonetheless, same issue.]

[ed.2: the RxRPC issue is separate, this is about the ESP one]

firer 14 hours ago||
There are two vulnerabilities here.

The RxRPC one is definitely a different root cause (although caused by a very similar mistake).

For the ESP one it's a bit harder to tell. I don't think the wrong thing was fixed, just that there was a very similar bug in almost the same spot. Could be wrong about that though.

eqvinox 14 hours ago||
(you probably wrote this while I was editing my post.)

It's absolutely the same issue in authencesn/ESP. There's another one in RxRPC that is AIUI completely unrelated.

papascrubs 14 hours ago|||
Or a follow-up prompt: "find similar classes of bugs". Once the actual case has been laid out, finding like bugs isn't too hard. I hear you on the creativity bit. Like any tool, AI can put blinders on. Using it to augment without it fully taking over your workflow is tough.
dgellow 4 hours ago||
Not just like any tool though. Interacting with agents can be incredibly boring and frustrating in a way that I personally do not experience with other technology
riedel 5 hours ago|||
Just on a side note: negative synergy does not seem so uncommon with machine learning. We did some research maybe 10 yrs ago on human/ML-based duplicate detection (for a municipal support ticket system). It showed that pure AI and pure human both outperformed co-working; human oversight often overcorrected machine work, for example. I think it is actually a nice HCI problem to solve, to amplify creativity and unique skills in such processes, particularly when they can be to some degree repetitive and tiresome.
tptacek 14 hours ago|||
I don't follow. LLMs spotted these bugs in the first place. You seem to be saying that these discoveries are indications that they're bad for vulnerability discovery.
firer 14 hours ago|||
From what I understand, the copy fail bug was found by a researcher who noticed something weird and then used AI to scan the codebase for instances where that becomes a problem.

I bet that with a slightly looser prompt/harness, the LLM could have found these twin bugs too.

Yet at the same time, I also think that if the human researcher had manually scanned the code, he'd have noticed these bugs too.

FWIW I do think LLMs are great tools for finding vulnerabilities in general. Just that they were visibly not optimally applied in this case.

aerodexis 8 hours ago||
They could also have found all these things at the same time - and are slow-rolling the disclosures.
eqvinox 14 hours ago||||
I don't think the copy.fail people understood the issue they found, as is evident by the heavy focus on AF_ALG/aead_algif, which is essentially "innocent" as we're seeing here.

I think LLMs are great for vulnerability discovery, but you need to not skimp on the legwork and understanding what even you just found there.

tptacek 14 hours ago||
Right but without the LLM the bug doesn't get found at all.
_AzMoo 11 hours ago|||
That's not necessarily true. Who's to say the security researchers wouldn't have found it if they'd searched the code manually?
tptacek 11 hours ago|||
It's an AI security firm! You might just as productively ask "why did all the other engineers who ever looked at this code not find it, and why was Theori the one to actually surface it?".
cp9 9 hours ago||||
I’m hardly going to simp for LLM tools, but the fact that the bug existed and no one had reported it seems proof positive that no one was about to find it without them
UltraSane 11 hours ago|||
It would have taken a LOT longer but often this kind of manual search is so tedious people just don't do it. LLMs don't get bored.
dgellow 4 hours ago||
> LLMs don't get bored

They do not get bored like a human, but they are trained on human language and replicate the same traits, such as laziness, and expressing boredom or annoyance (even if obviously they do not experience anything at all). It's actually a lot of effort to get them to engage with things at a deeper level without cutting corners

baq 4 hours ago||||
Safer to assume at least one of the NSA, Mossad and a few others were sitting on it for years.
eqvinox 13 hours ago|||
Yes, I agree. I'm not the GP poster.
parliament32 14 hours ago||||
No, they did not. Careful of falling for the psychosis.

> This finding was AI-assisted, but began with an insight from Theori researcher Taeyang Lee, who was studying how the Linux crypto subsystem interacts with page-cache-backed data.

https://xint.io/blog/copy-fail-linux-distributions

tptacek 13 hours ago||
Theori is an AI security research firm.
duk3luk3 11 hours ago|||
You appear to want to die on the hill of "This vulnerability would never have been found if we lived in a world without LLM AI" which is a very strange hill to die on.

There's no question that we live in the world where LLM AI was involved in finding the copy fail vulnerability at this specific time. It's completely normal for people to see a vulnerability and then look closer and find related vulnerabilities or a deeper root cause, but there's no need to adopt an extreme "without LLM AI we don't find these vulnerabilities" position.

tptacek 10 hours ago||
It's weird to say I want to "die on this hill" because that's not even something I believe. There was nothing especially difficult about this particular vulnerability. My only observation is that nobody found it before; then an LLM security firm went out looking for Linux LPEs, and thus it was discovered.

That is a very difficult fact pattern to which to attach the conclusion "LLMs have sabotaged security research" (my paraphrase).

Yokohiii 10 hours ago||
The finding started with human intuition and was assisted by an LLM. You can yell "AI sec firm" 1000 times. A human got it started. You shouldn't die on that hill.
danudey 13 hours ago|||
It seems as though this issue occurred to him, then he used their tool ("Xint Code") to analyze the codebase for instances of it.
ofjcihen 10 hours ago||||
I don’t think that’s what the OP is saying at all, just that using LLMs needs to be a cooperative research process.

Also I see you jumping around a lot to the defense of LLMs when I don’t think anyone is really attacking them. Maybe cool it a bit.

tptacek 10 hours ago||
From the thread that ensued I feel comfortable that my interpretation of the comment (or rather, my confusion about it) was in fact germane.
ofjcihen 10 hours ago||
Germane or not the knee-jerk reactions related to LLMs are getting ridiculous and it seems like it’s the same people throwing down at a moments notice and then chalking it up to a misunderstanding.

So like I said, just chill out.

rayiner 10 hours ago||||
It’s incredible humans spot stuff like this. I guess even more incredible that LLMs can do it!
keybored 3 hours ago|||
Right. Finding the bug is in itself a win. It seems we’re jumping from that spend-electricity-to-find-bugs win to arguing about how some things around it are not quite good or comfy.
refulgentis 13 hours ago|||
It’s very hard to see a root vuln similar to, but not the same as, another discovered by AI, as a lesson about AI not exploring.

Is there a counterfactual where you would say it explored well enough, besides both vulnerabilities published as one?

SubiculumCode 10 hours ago|||
Evidence or are you just riffing?
formerly_proven 14 hours ago|||
These are all page cache poisoning attacks (dirtyfrag, copyfail, dirtypipe). Maybe the page cache should have defense-in-depth measures for SUID binaries?
firer 14 hours ago||
SUID mitigations have nothing to do with the vulnerability itself - just the exploit.

If there's a root cronjob that runs a world readable binary, you could modify it in the page cache and exploit it that way.

Modifying the page cache is a really strong primitive with countless ways to exploit it.

eqvinox 14 hours ago|||
splice() should maybe generally refuse to operate on things you can't write to.
toast0 13 hours ago||
splice is documented to return EBADF if "One or both file descriptors are not valid, or do not have proper read-write mode."

So it seems surprising to me that you can call it when the out fd is not writable? But I didn't retain the information about the vulnerability, so I'm missing something. There was something about copy on write, IIRC?

eqvinox 13 hours ago||
"proper read-write mode" for the input fd is reading only. The exploit is writing to the splice() input fd.

Also, NB, I said permission check, not mode check. The input fd to splice can and will be open for only reading quite often. Doesn't mean the kernel can't still do a write permission check.

(Except I didn't say that here. Oops. Getting confused with my posts.)

toast0 13 hours ago||
OK, I likely have too much sleep debt to understand, but given the bug is that splice can write to the input fd, you're suggesting maybe splice should only let you use an input fd if the process has access to write to it?

But splice is a more or less a generalization of sendfile, and sendfile is often used for webserving where the serving process does not have ownership of the documents it is serving. It doesn't make sense to limit splice such that it can't do the task it was built for. Maybe splice should just not write to the input fd? :P

cyphar 7 hours ago|||
> But splice is a more or less a generalization of sendfile

Not really, splice(2) is actually more limited, it's an optimisation for reading and writing data between files and pipes without needing to make copies.

sendfile(2) works with any fds because it just exists to remove a fair bit of the copy overhead when doing a userspace read/write loop, but it does actually do a copy.

eqvinox 12 hours ago||||
Yes, it'd curtail splice() usage quite heavily. Maybe too much.

But apparently we can't be trusted with the page cache…

Maybe the kernel using supervisor-read-only flags could be made to work, only issue then is what happens if something does in fact need to write…

semiquaver 12 hours ago|||
Aren’t you just saying “don’t write bugs?”
formerly_proven 13 hours ago|||
True! Building protections just for executables (e.g. physical pages in the page cache are not writeable 100% of the time) of course has countless circumventions as well (e.g. config files). Yeah, there is probably not that much to be done there, actually.

Looking at some of the diffs, it seems to me like the kernel makes it really not particularly obvious when/how this goes wrong. E.g. the patch for this is to look at an additional flag on the socket buffer to fix an arbitrary page cache write. This feels rather like action at a distance. Logically it of course makes sense: the whole point of splice et al is to feed data from one file-like into another file-like, whatever those ends might be. That erases the underlying provenance of the data.
varispeed 13 hours ago||
> When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby.

That's why it's very, very important to just step out and use the saved time to go for a walk, to a park, sit on a bench, listen to birds, close your eyes and zoom out.

The state we are in is actually brilliant.

john_strinlai 15 hours ago||
"Because the embargo has now been broken, no patches or CVEs exist for these vulnerabilities."

link: https://github.com/V4bel/dirtyfrag

detailed writeup: https://github.com/V4bel/dirtyfrag/blob/master/assets/write-...

importantly:

"Copy Fail was the motivation for starting this research. In particular, xfrm-ESP Page-Cache Write in the Dirty Frag vulnerability chain shares the same sink as Copy Fail. However, it is triggered regardless of whether the algif_aead module is available. In other words, even on systems where the publicly known Copy Fail mitigation (algif_aead blacklist) is applied, your Linux is still vulnerable to Dirty Frag."

mitigation (i have not tested or verified!):

"Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution. Use the following command to remove the modules in which the vulnerabilities occur."

    sh -c "printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > /etc/modprobe.d/dirtyfrag.conf; rmmod esp4 esp6 rxrpc 2>/dev/null; true"
conversation around the mitigation suggests you need a reboot or run this after the above on already-exploited machines:

    sudo echo 3 > /prox/sys/vm/drop_caches
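A quick way to sanity-check the module part of the mitigation on a given box (a sketch, assuming a standard Linux /proc; on a system without /proc/modules every module just reports as not loaded):

```shell
# Check whether the modules targeted by the mitigation are still loaded.
# /proc/modules lists one "name size usecount ..." entry per loaded module.
for m in esp4 esp6 rxrpc; do
    if grep -q "^$m " /proc/modules 2>/dev/null; then
        echo "$m: still loaded"
    else
        echo "$m: not loaded"
    fi
done
```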
progval 14 hours ago||
"sudo" in "sudo echo 3 > /prox/sys/vm/drop_caches" does not do anything, because sudo only runs echo, not the redirect that performs the write.

And if a machine is already exploited, it's too late to do just that. You need to rebuild the whole disk image because anything on it could be compromised.

john_strinlai 14 hours ago||
>And if a machine is already exploited, it's too late to do just that. You need to rebuild the whole disk image because anything on it could be compromised.

this is more targeted at the people who run the PoC to see if their machine is vulnerable.

just transcribing some relevant stuff from https://github.com/V4bel/dirtyfrag/issues/1 so that people visiting this thread don't need to poke around a bunch of different places.

sounds 8 hours ago|||
Is there any additional info on where it was "published publicly by an unrelated third party"? From the timeline in the writeup:

> 2026-05-07: Submitted detailed information about the vulnerability and the exploit to the linux-distros mailing list. The embargo was set to 5 days, with an agreement that if a third party publishes the exploit on the internet during the embargo period, the Dirty Frag exploit would be published publicly.

> 2026-05-07: Detailed information and the exploit for this vulnerability were published publicly by an unrelated third party, breaking the embargo.

Edit: nevermind, details are further down in the thread:

https://openwall.com/lists/oss-security/2026/05/07/12

And

https://news.ycombinator.com/item?id=48055863

alecco 43 minutes ago||
People are blaming the guy who wrote the exploit for breaking the embargo but it was actually broken in Linux by publishing a fix [1]:

> on 2026-05-05 Steffen Klassert pushed f4c50a4034 to netdev/net.git with Cc: stable@vger.kernel.org.

Once a fix is out it's usual for researchers to race to make the first exploit out of it.

[1] https://afflicted.sh/blog/posts/copy-fail-2.html

dundarious 14 hours ago|||
You can't sudo echo and redirect from the non-sudo shell like that.

    echo 3 | sudo tee /proc/sys/vm/drop_caches
or

    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
Also fixed your typo in /proc...
throw0101c 13 hours ago|||
Also try:

     sudo sysctl -w vm.drop_caches=3
wpollock 13 hours ago||||
Or more simply, use

   su -c 'echo 3 > /proc/sys/vm/drop_caches'
seba_dos1 13 hours ago||
echo 3 | sudo tee /proc/sys/vm/drop_caches
john_strinlai 14 hours ago|||
thanks. copy pasting from the github via my phone, and should have taken the extra few mins
dundarious 12 hours ago||
No worries, overall a very useful summary comment.
danudey 13 hours ago||
Just FYI, you can also mitigate it with `echo 1 > ...`; you don't need to drop everything: `1` clears just the page cache, and that's enough.

Tested locally on Ubuntu 26.04:

1. Ran the exploit and got root

2. Configured the mitigations

3. Ran `su` again with no parameters and immediately got root again unprompted

4. Cleared the page cache

5. `su` asked for a password
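For reference, the drop_caches values (per the kernel's sysctl/vm documentation: 1 = page cache, 2 = reclaimable slab such as dentries/inodes, 3 = both) can be wrapped like this. A sketch only; `DROP_CACHES_PATH` is a hypothetical override so it can be exercised without root, the real path requires root to write:

```shell
# Write a drop_caches value; 1 frees the page cache, which is what
# matters for this exploit. DROP_CACHES_PATH is a hypothetical
# test override; defaults to the real sysctl file (root required).
drop_caches() {
    target="${DROP_CACHES_PATH:-/proc/sys/vm/drop_caches}"
    printf '%s\n' "$1" > "$target"
}
```

Usage: `drop_caches 1` as root on a real system.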

eqvinox 14 hours ago||
And I ask again: why the f*ck is algif_aead getting all the flak for copy.fail? It's authencesn being stupid.

authencesn didn't get fixed. Now we got the results of that, turns out you can access the same (I believe) out of bounds write through plain network sockets.

I wish I thought of that, but I didn't.

[ed.: I'm referring to the through-ESP issue. The RxRPC one is AIUI completely unrelated.]

drmpeg 3 hours ago||
Looks like the esp4 and esp6 fixes have been pushed for 7.0, 6.18, 6.12 and 6.6 kernels.

https://lore.kernel.org/lkml/2026050851-iron-hurdle-6421@gre...

https://lore.kernel.org/lkml/2026050843-unplowed-spinster-cf...

https://lore.kernel.org/lkml/2026050832-remold-faceless-bed0...

https://lore.kernel.org/lkml/2026050825-heaving-spender-13a8...

eqvinox 3 hours ago|
And again it's band-aiding the problem. Can authencesn not be fixed or what?
chromacity 13 hours ago||
If this indeed works on all major distributions, I just continue to be amazed by how irresponsible the maintainers are. We're talking about optional kernel functionality that's presumably useful to something like <0.1% of their userbase, but is enabled by default?... why?

This feels like the practice of Linux distros back in 1999 when they'd ship default installs with dozens of network services exposed to the internet. Except it's not 1999 anymore.

JeremyNT 13 hours ago||
Distro maintainers blacklisting specific functionality because they believe YAGNI is a pretty big ask. They just don't know who is using what. It's always possible for users to go back and tailor their builds for the stuff they actually want.

And... I remember the early days of Linux where I ran `make menuconfig` and selected exactly the functionality I wanted in my kernel. I'd... rather not end up back there.

That said, a target for an easy win here is RHEL, which compiles a lot of modules into the kernel rather than leaving them as loadable modules, so the mitigation for e.g. copy fail was impossible. Maybe they could do with a few fewer of those?

chromacity 12 hours ago|||
You can make precisely the same argument for network services. Who knows, maybe you need telnet and UUCP and NFS and ftpd running on your system?... why should the distro maintainer decide?

Well, because you probably don't, and it's a security risk, so no need to put millions at risk for the benefit of that one person who wants to tinker with packet radio or whatever. Similarly, it would be prudent for distros to not allow autoloading of modules that are extremely niche while giving a simple way to adjust the settings if you want to. God knows they have plenty of GUI configurators and config files already.

akdev1l 12 hours ago||
The thing is that we could simply split those modules into separate packages

No reason why you couldn’t just `dnf install -y kmod-rxrpc` if for whatever reason you need that.

michaelt 11 hours ago||||
Now that I think about it, it's kinda weird that non-root users can cause kernel modules to get loaded without any hardware changes having happened.

If the kernel modules for esp4, esp6 and rxrpc aren't loaded - how is it that a non-root attacker can cause them to get loaded?

pepa65 4 hours ago||
It seems that this is allowed as part of a dependency chain...
atgreen 11 hours ago||||
Don't disagree, but there are eBPF mitigations that work as alternatives to unloading kernel modules.
cassianoleal 3 hours ago||
Can you elaborate on that?
atgreen 41 minutes ago||
Have a look at https://github.com/atgreen/rhel-block-copyfail
cassianoleal 2 minutes ago||
Thanks!

From the sound of it, the same mitigations for Copy Fail 1 are also effective here.

TZubiri 10 hours ago|||
>Distro maintainers blacklisting specific functionality because they believe YAGNI is a pretty big ask

We have forgotten what a distro is, and its modern corruption of the concept is now taken as the definition.

Distributions weren't meant to be competing generic universal bundles of userspace tools in addition to the kernel.

0xbadcafebee 10 hours ago|||
There is no way to disable components you think users won't use and not make it incredibly difficult to use the system. I personally would have no way to know what to enable or not enable based on what I want to do, and I've been using this stupid OS for 25 years.

Linux distro maintainers are the most responsible software maintainers on the planet. Their security practices are miles beyond the stupid programming language package managers, they maintain a select list of packages, vet changes, patch bugs, resolve complex packaging issues, backport fixes, use tiered releases, distribute files to global mirrors, and cryptographically validate all files. And might I remind you, they do all this for free.

nirui 7 hours ago|||
> irresponsible the maintainers are

Today it's 0.1%, tomorrow it might become 100%. User demand is hard to anticipate, so it's reasonable to include small features that don't cost a lot to run by default.

It's not ideal, but you really don't want to prevent users from finishing their tasks, because maybe then they'll just give you a bad name and switch to another distro.

That's to say, it's not "irresponsible", it's being reasonably maximal (or at least trying to be).

lunar_rover 12 hours ago|||
In many ways non-mobile computers are very much still stuck in 1999. Android is significantly more secure than other Linux systems because it's much younger and had the chance to integrate mandatory access control into the entire stack.
croes 8 hours ago||
Unless your Android doesn’t get any security updates anymore.

https://durovscode.com/google-android-security-update-warnin...

akimbostrawman 4 hours ago||
That is a well known and entirely different issue
croes 3 hours ago||
Is it?

The claim is that Android is much more secure than other Linux, but if 40% of all Android devices don't get a security patch and you can't even do it yourself, I wouldn't call it more secure per se.

Hardening is one part of security, patchability another. Android lacks in the latter.

a96 3 hours ago|||
You can take many computers from 1999 and update them to the best software available today. Most phones won't even do that for a few years. And that is security in the real sense of the word, as in "this won't just pull the rug from under me".

(Of course the problem isn't Android, it's the chipset vendors that the SW depends on. They drop support fast and never give enough info for anyone else to keep things up to date. Also Google.)

mike_hearn 2 hours ago||||
So what? Most devices running Linux don't get security patched, it was ever thus. Think about all the kernels running in wifi routers and other embedded devices.
akimbostrawman 3 hours ago|||
>if 40% of all Android devices don‘t get a security patch

No system will stay secure once it does not receive updates. That does not exclude it from being more secure than another system based on security feature merits as long as it does get updated.

>Hardening is one part of security, patchability another. Android lacks in the latter.

That is not an inherent flaw with android but OEM devices shipping modified android they don't bother keeping up to date. Some OEMs are trying to mitigate this by increasing security update support up to 7 years which still is not long enough but also doesn't make them less secure than a desktop that gets updated longer.

What people forget is that not only desktop and mobile phone software is different but also the hardware. If your desktop pc hardware is out of date / EOL nobody cares usually. Meanwhile on a phone this can be a lot more relevant because security expectations and threat models are a lot higher, for example see all the zero/one click compromise headlines.

croes 1 hour ago||
It is an inherent flaw of android. Imagine no Windows update because Lenovo stopped support for 4 year old notebooks
akimbostrawman 19 minutes ago||
It's 7 years because the limiting factor is hardware firmware support. A lot of desktop hardware does not receive firmware updates beyond 4 years either, but that just gets shrugged off, like you do, because "the OS still gets updates so it must be secure".
akerl_ 13 hours ago|||
It’s not enabled by default. It’s an optional module that is loaded on demand. The entire setup of the kernel promotes compiling in the core set of things your users will need and offering basically everything else as a module to load on demand.
chromacity 12 hours ago|||
This is pedantry for its own sake. If it's present by default and an attacker can trivially cause it to be loaded, it's the same as "on by default".
akerl_ 12 hours ago|||
It’s radically different than on by default.

Having a service that automatically starts and listens on the network is radically different from having a module that a local administrator can load.

If you want to block module loads, you’re one sysctl flag away.
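That flag is real: kernel.modules_disabled is a one-way sysctl that can only go from 0 to 1, and only a reboot resets it. A sketch of persisting it via a sysctl.d drop-in; `SYSCTL_DIR` is a hypothetical override so the sketch can run without root:

```shell
# Block all further kernel module loading. Writing the drop-in makes
# the setting apply at every boot; flipping it live needs root:
#   sysctl -w kernel.modules_disabled=1
# Note it is one-way: once set to 1, only a reboot resets it.
disable_module_loading() {
    dir="${SYSCTL_DIR:-/etc/sysctl.d}"
    printf 'kernel.modules_disabled = 1\n' > "$dir/99-no-module-loads.conf"
}
```

The catch for this thread's scenario: you have to set it after everything you do need (filesystems, network drivers, etc.) has already loaded.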

zzrrt 12 hours ago|||
> having a module that a local administrator can load

This is a successful local privilege escalation, so local administrator privs were not needed. In default configuration of all distros, apparently.

> If you want to block module loads, you’re one sysctl flag away.

The modules aren't really the point, it's that unnecessary features (to 99% of us?) were accessible by default without privs.

zbentley 12 hours ago||||
This is "a service that automatically starts". That's what automatic kernel module loading is for!

It's not any different from putting an always-running network service behind socket activation instead. The security boundary/risk is nearly identical between the two.

akerl_ 12 hours ago||
One is remotely accessible. The other is locally accessible.
zbentley 11 hours ago||
The GP you were replying to mentioned a vulnerability "present by default and an attacker can trivially cause it to be loaded".

You responded contrasting a network service with an administrator-loadable module.

This is neither of those. It's an LPE, not a remote exploit. It doesn't require an administrator (root) to load anything. In context of this vuln, it's exactly analogous to socket activation. The scope of an LPE vuln is local; yes. What does that have to do with the rest of your comments?

akerl_ 11 hours ago||
I don't understand what point you're trying to make here.

I originally replied to a comment saying "This feels like the practice of Linux distros back in 1999 when they'd ship default installs with dozens of network services exposed to the internet". It is not like that.

ftheplan9 12 hours ago|||
[flagged]
Sohcahtoa82 12 hours ago||||
> This is a pedantry for the sake of it.

Par for the course for HN.

thayne 9 hours ago|||
How would the attacker cause one of these modules to get loaded without already having root?
staticassertion 8 hours ago|||
Trivially. Kernel modules autoload through various unprivileged mechanisms.
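One such mechanism, sketched below: an unprivileged socket() call for an exotic address family makes the kernel request_module("net-pf-<N>") if that family's module isn't loaded yet. Python is used here only as a convenient syscall wrapper; AF_RXRPC = 33 is the value from the Linux headers, and the whole thing assumes a Linux kernel (elsewhere it just prints the error):

```shell
# An ordinary user asking for an AF_RXRPC socket can trigger the kernel
# to autoload the rxrpc module - no root involved.
python3 - <<'EOF'
import socket

AF_RXRPC = 33  # from Linux include/linux/socket.h; Linux-specific
try:
    s = socket.socket(AF_RXRPC, socket.SOCK_DGRAM)
    print("rxrpc socket created; module loaded or built in")
    s.close()
except OSError as e:
    print("socket() refused:", e)
EOF
```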
kro 6 hours ago|||
Maybe it would be reasonable for sysadmins to proactively whitelist used / block all exotic unused modules that are not needed in their system configuration.

This would reduce the amount of ring 0 code. But I've never seen such advice.
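A sketch of that idea, mirroring the dirtyfrag mitigation above: an "install <mod> /bin/false" line per module you never expect to need. Unlike a plain "blacklist" directive, "install" also blocks on-demand autoloading. The module list is illustrative, not a vetted recommendation, and `MODPROBE_DIR` is a hypothetical override so the sketch runs without root:

```shell
# Generate a modprobe.d config that hard-disables a set of exotic
# modules, including blocking autoload. Edit the list for your system.
block_unused_modules() {
    dir="${MODPROBE_DIR:-/etc/modprobe.d}"
    for m in rxrpc esp4 esp6 dccp sctp; do
        printf 'install %s /bin/false\n' "$m"
    done > "$dir/99-unused-modules.conf"
}
```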

ActorNightly 12 hours ago|||
Because in order to exploit this, you have to have direct access to the computer: either through a malicious USB device, or by exploiting some supply chain or a known piece of software that will be willingly or automatically installed. Furthermore, you need to be able to essentially run arbitrary terminal commands, which is a huge breach of isolation in that software.

If an attacker manages to do all that, its already bad news for you. Escalation to root with this is the least of your worries at that point.

Like someone else below posted, https://xkcd.com/1200/

People need to understand what the vulnerability actually is before freaking out about it.

netheril96 11 hours ago|||
You are assuming that LPE only applies to the user that holds all the sensitive stuff. But it also applies to users created specifically for isolation. Without LPE they would not have access to anything important even if they were compromised.
cluckindan 12 hours ago|||
So a threat actor buys access to a managed kubernetes service, or other linux-based shared hosting platform, and now they have access to the computer.

Hell, GitHub Actions would do.

echoangle 10 hours ago||
Is there any service that relies on Linux user separation or containers to separate different user accounts? I’m pretty sure you’re not supposed to do that and the proper way is to run different instances in virtual machines.
LelouBil 8 hours ago||
Right, you're not supposed to do that...
TacticalCoder 13 hours ago||
> ... but is enabled by default?... why?

We could also wonder why XZ was linked to SSH... But only on systemd-enabled distros (which is a lot of them).

Just... Why?

And then people make sure to chalk it up to incompetence instead of malice, and say nonsense like "Sure, it only factually affects systemd distros, but this is totally not related to systemd". All I saw, though, was a systemd backdoor (sorry, exploit).

Now regarding copy.fail that just happened: not all maintainers are irresponsible. And some have, rightfully, bragged that the security measures they preemptively took in their distros made them non vulnerable.

But yup, I agree it's madness. Just why. And Ubuntu is a really bad offender: it's as if they ran a "yes | .." pipe through configure to include every single module directly in the kernel.

"We take security seriously, look we've got the IPsec backdoor (sorry, exploit) modules directly in the kernel". "There's 'sec' in 'IPsec', so we're backdoored (sorry, secure)".

chuckadams 13 hours ago||
xz was not directly linked to ssh, and systemd itself was not providing the backdoor. The weakness is embedded into the architecture of glibc (which has spread to other systems like FreeBSD as well): https://github.com/robertdfrench/ifuncd-up
AshamedCaptain 8 hours ago|||
The entire argument here is ridiculous. There's a big jump from "IFUNC undermines RELRO" to "IFUNC is the issue". You could have gotten much the same effect by spawning a thread from a plain init or C++ constructor. No one should think that RELRO, W^X, ASLR, or anything like that is going to deter anyone who can literally control the contents of the libraries being linked in. They could, literally, spawn a copy of sshd with a patched config if necessary.
TacticalCoder 11 hours ago|||
Sure, but distros not using systemd were not affected.
thom 11 hours ago||
After all these years, we finally have enough eyeballs that all bugs are shallow, and it kinda sucks. How many times a week am I going to be updating my kernel from now on?
tempaccount5050 6 hours ago||
I haven't updated mine. I have a firewall and it's not exposed to the Internet. Need a key to SSH in. Same with my public facing server. Almost none of these exploits are "drop everything now and patch" unless you are somehow exposing yourself stupidly.
rithdmc 1 hour ago|||
> unless you are somehow exposing yourself stupidly

Or, y'know, offer some forms of compute as a service.

baq 3 hours ago|||
If you’re running any sort of CI you’re probably going to have a bad couple of days if everything goes well
HugoTea 20 minutes ago|||
To be honest, CI has always been a massive risk, I'm a bit miffed at how blasé some people are about providing runners.
yread 1 hour ago|||
unless you run pinned CI runners on hardware you control
midtake 3 hours ago|||
I sort of always expect there to be an LPE to root on Linux tbh, if anything this is great news and Linux might be a useful multiuser system after all.
bjackman 5 hours ago|||
Updating your kernel isn't good enough, it never was.

Native unsandboxed execution == root. Only thing that's new is some people started making websites for their LPEs.

https://github.com/google/security-research/tree/master/pocs...

baq 3 hours ago|||
With how things are going the question should be ‘is twice a day often enough?’
dwd 3 hours ago||
At the moment it doesn't seem to be.

Within an hour of being advised of, and running the mitigation for, DirtyFrag, my upstream provider blocked all WHM/cPanel/SSH/FTP/SFTP access with a heads-up on:

CVE-2026-29201 CVE-2026-29202 CVE-2026-29203

which look like a repeat of CVE-2026-41940 a week ago.

brcmthrowaway 11 hours ago||
So you think someone is going to break into your house, find your default credentials somehow and get root access?
thom 4 hours ago|||
I think when there’s a step change in our ability to find one type of vulnerability, other types of vulnerability are probably going to become more common as well. Let’s see where we stand at the end of the year.
sureglymop 9 hours ago|||
With physical access, root access is as simple as setting init=/bin/bash in the kernel parameters from a bootloader. No need for credentials or anything.
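As a sketch (the exact boot entry and paths here are illustrative and vary by distro):

```shell
# At the GRUB menu, press 'e' on the boot entry and append to the 'linux' line:
#   linux /boot/vmlinuz-... root=/dev/sda2 rw init=/bin/bash
# Boot with Ctrl-x; the kernel execs bash as PID 1, i.e. a root shell.
# From there, remount the root filesystem writable and reset a password:
mount -o remount,rw /
passwd root
```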
anygivnthursday 8 hours ago||
Secure boot and disk encryption are not that unusual nowadays
Asraelite 3 hours ago||
Secure boot doesn't provide security, just control for device manufacturers.

Physical access always means the device is pwned. You can install a keylogger or something similar.

Luker88 1 hour ago||
I can't make it work on NixOS. Kernel 7.0.1

I tried fixing the paths and even linking `/bin/bash` to the nix /run/current-system/sw/bin/bash

/etc/passwd is unmodified.

Can anyone else try? CopyFail1 did not work because `su` is only executable, not readable, CopyFail2 worked only partially (changes /etc/passwd but the user is not passwordless)

int0x29 15 hours ago||
I'm curious what broke the embargo. Did it leak or did a third party find it independently?
reisse 12 hours ago||
No embargo exists (or could possibly exist) in the first place.

Linux is open source, so every patch fixing a security bug is immediately visible to everyone. There is no workaround for that, by the very design of how the kernel is developed. The "embargo" people are talking about is the rather silly notion that if people keep their mouths shut and don't write "THIS IS AN LPE" straight in the patch description, everyone can pretend the vulnerability hasn't leaked until the "official" message is sent to the mailing list.

This approach might have been defensible before, but in the LLM era, when people have automated pipelines feeding diffs straight from the mailing lists to SotA models and asking them to identify the probable security issues being fixed, it is both stupid and dangerous.

zbentley 11 hours ago|||
My (novice) understanding is that embargoes are intended to provide time to 1) develop a patch and 2) distribute the patch.

For Linux/public open source, what you said is right about 2). Once the patch is visible to anyone, it's trivial to identify exploits for unpatched systems. But 1) is still a valid use-case for embargoes for Linux vulns, right? Like, if this patch had taken a few weeks to develop before being confirmed working and published, that's potentially valid grounds for not sharing details during that time (within reason), no?

account42 1 hour ago||||
The linked announcement specifically mentions that an embargo has been broken.
bjackman 3 hours ago|||
Linux does actually have a proper embargo process. But, you're correct that in this case it wouldn't usually have been followed anyway. Bugs like this are fixed multiple times a week, anyone with basic kernel knowledge can see that they are potentially LPEs.

Usually, nobody even bothers to check. LPEs like this are too common to even categorise effectively.

either-orr 12 hours ago|||
A link to the patch was posted on someone's X account. Someone else saw it and posted a working exploit in less than an hour (potentially written with an LLM, though beyond the quick turnaround that claim is unsubstantiated).

https://x.com/encrypted_past/status/2052409822998392962

john_strinlai 15 hours ago||
it was published publicly by an unrelated third party
jacobgkau 14 hours ago||
They're asking the nature of the third party's discovery/publishing. Someone on the inside who decided to leak it anonymously? Someone else who was able to access some private communication they shouldn't have been able to see? Or a third party who happened to discover the same vulnerability (which seems less unlikely than normal since this is so similar to Copy Fail), but didn't follow disclosure procedures?
staticassertion 14 hours ago|||
The commit for the fix was public. Someone noticed. An exploit was published.
ahartmetz 14 hours ago||
I think I read on the bug's website that "No fix has been released". I understood that as there is no public fix, but maybe it only means it's not in a tagged version of the kernel and no hotfixed distro kernels have been released?
danudey 12 hours ago|||
The patch was posted to the kernel mailing list; someone saw the e-mail, read the patch, figured it out, and published an exploit very soon after.
tkel 10 hours ago|||
The fix has been committed to the git tree of the `netdev` Linux subsystem fork. That's how it was noticed by the grsecurity guy, who published an exploit. Then it will be merged by Linus either into an RC/master for the next Linux minor version release, or into the patch-release branches by GregKH/Sasha for already-released versions. Or in this case both, because it's a security fix.
staticassertion 8 hours ago||
Spender didn't publish any exploit afaik
tkel 7 hours ago||
Oh you're right, it was this guy (_SiCk / @encrypted_past) who replied to his post

https://xcancel.com/encrypted_past/status/205240982299839296... https://xcancel.com/encrypted_past https://github.com/0xdeadbeefnetwork https://github.com/0xdeadbeefnetwork/Copy_Fail2-Electric_Boo...

lofaszvanitt 14 hours ago|||
Following disclosure procedures? The main cause that kills the need to take security seriously.
titanomachy 3 hours ago||
I'm not a security expert, but I'm responsible for some (relatively low-stakes) production systems.

It sounds like these two most recent exploits depend on unprivileged user namespaces, and that in fact a high percentage of LPE exploits need this feature. I use rootless containers on a couple of systems (like my dev machine), but on most of my systems I don't, so it sounds like disabling that would be a good step toward hardening my systems against future exploits.

To the security experts: are there any other straightforward configuration changes with such broad-reaching improvement in security posture? Any well-written guides on this subject, something like "top kernel modules to consider disabling if you don't need them"? I'm not talking about the obvious stuff like "disable password SSH", I'm specifically looking for steps that are statistically likely to prevent as-yet-unknown privilege escalation attacks.

staticassertion 4 minutes ago|
You don't need unprivileged user namespaces for this one if you're in a position to get the target kernel module loaded. But yeah, user namespaces are basically the single most significant privesc path in the kernel; io_uring is maybe second. Disabling both (or very carefully deciding what can use them) is one of the best ways to reduce your attack surface.

I don't have any guides but you can determine which kernel modules are already loaded in your system and then just compile those in and block module loading.

Otherwise, shove everything into a container, ideally gvisor, and you've reduced attack surface by a large chunk again via seccomp.
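The knobs above can be sketched as sysctl settings (names assume a reasonably recent kernel; kernel.io_uring_disabled needs >= 6.6, and Debian/Ubuntu additionally ship their own kernel.unprivileged_userns_clone knob):

```shell
# Disable user namespaces for unprivileged users:
sysctl -w user.max_user_namespaces=0
# Disable io_uring entirely (1 would disable it for unprivileged users only):
sysctl -w kernel.io_uring_disabled=2
# Once everything you need is loaded (check with lsmod), forbid further
# module loading -- a one-way switch until reboot:
sysctl -w kernel.modules_disabled=1
```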

KamiNuvini 14 hours ago|
Does anyone know whether Debian is vulnerable? I tried the exploit on a Debian 12+Debian 13 machine but wasn't able to reproduce it myself.
thaniri 13 hours ago||
I was able to reproduce this issue on kernel 6.12.57+deb13-amd64 running Debian 13 (Trixie), but unable to reproduce it on kernel 6.1.0-42-amd64 running Debian 12 (Bookworm).

For anyone not on the security stream of Debian packages for Bookworm, kernel version 6.1.0-42-amd64 is actually immune to copy.fail. Surprising that it looks to be immune to dirtyfrag. If you haven't already patched on the security stream, you can choose any kernel version that kept commit 2b8bbc64b5c2. I am thinking that the same commit might accidentally be keeping certain Debian 12 kernel versions safe from dirtyfrag as well.

cholmon 13 hours ago|||
I just tried the exploit on a fresh Debian 13 droplet on digitalocean and it worked.
louwrentius 12 hours ago||
I tested on a fully up-to-date Debian 13 and the exploit works. The mitigation also works / confirmed.
More comments...