Posted by varunsharma07 15 hours ago

Postmortem: TanStack NPM supply-chain compromise (tanstack.com)
https://github.com/TanStack/router/issues/7383
875 points | 356 comments
getcrunk 13 hours ago|
I think we are at the point where everyone really needs to run each project in its own VM.

Given the recent LPE vulns, Docker 100% won't cut it.

And containers were never primarily meant as a security boundary anyway

Gigachad 12 hours ago||
QubesOS had the right idea. You want layers and layers of security, with multiple VMs at the root.
7373737373 2 hours ago|||
See also: https://genode.org/

Also: in addition to isolation and https://en.wikipedia.org/wiki/Capability-based_security between processes, there's capability security within processes; see languages like E (https://web.archive.org/web/20260506035108/https://erights.o...) or Monte (https://monte.readthedocs.io/en/latest/index.html)

halfcat 11 hours ago|||
> had the right idea

Is it no longer the right idea?

Gigachad 11 hours ago||
I mean that in the sense that they had the idea long before rapid Linux 0-days and supply-chain attacks became common. The design they picked has only become more relevant.
omcnoe 11 hours ago|||
Devcontainers (I know it's not a full VM, but it's the most prominent version of this "isolated development environment" concept) wouldn't fully protect you against this. GitHub credentials are automatically pulled into the container. If you are using other cloud services that need to be accessed within the container, this cred stealer will grab their creds too.

It would limit the blast radius, which at least is an improvement.

christophilus 39 minutes ago||
This is one reason I have my own dev container script. And the container pulls nothing in except whatever I explicitly put in my .podman folder. It runs without any GitHub access at all. I do all of that from the host machine.
9cb14c1ec0 13 hours ago|||
Or a VM per container, if you insist on containers. I've had a couple of relaxed weeks recently due to running everything on VMs rather than some random Kubernetes service.
einpoklum 13 hours ago|||
Luckily, projects using more secure language ecosystems like C and C++ are spared this kind of problem :-)
saghm 12 hours ago|||
No, instead the code that isn't from a dependency is what will cause you to get pwned
eqvinox 12 hours ago||
I think you missed the joke/sarcasm there.
saghm 12 hours ago||
It's been less than a month since I responded to a comment on a different thread arguing basically the same thing about C/C++ in a serious way. I've long since lost the ability to distinguish.
eqvinox 11 hours ago||
Fair, I'm in fact not 100% sure it's a joke. But there's a smiley, that's pushing me to 90%.
Havoc 11 hours ago||||
The virus fest of the 90s would like a word with you and your C
aiscoming 7 hours ago||||
you can't get infected through the package manager if your language doesn't have a package manager :) turns out C and C++ were playing 4D chess all along
bpavuk 13 hours ago|||
[dead]
zmmmmm 7 hours ago||
it's not going to help if you share a cache across security boundaries. That is what happened here, and it seems to be driving a spate of GitHub Actions-related problems.
nrmitchi 12 hours ago||
Appreciate the TanStack postmortem; however, the security issue is still an ongoing concern as far as the rest of the npm ecosystem goes, correct?

Is there evidence that any downstream packages that may have pulled/included tanstack packages should be considered safe?

alexjurkiewicz 11 hours ago|
NPM is getting all the attacks and attention because it is the biggest, but there's nothing language-specific about this class of attacks.
nrmitchi 9 hours ago||
Yes, that is clear. But in this particular instance, a ton of other packages are downstream of the TanStack packages.

If TanStack infected a bunch of other packages, then resolving their own issue doesn't fix the widespread one.

arianvanp 4 hours ago||
Why do we put all this effort into making our build systems hermetic, only to end up using a global mutable cache, shared across branches, where the caller picks the key? A failure of the industry as a whole. Actually insane.
chuckadams 13 hours ago||
The malware uses a "prepare" hook to run the payload via bun, an attack that, ironically enough, bun itself is immune to. Enabling lifecycle scripts in dependencies by default in 2026 is just plain malpractice.
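For anyone unfamiliar with lifecycle scripts: a compromised package only needs a `scripts` entry in its package.json to get code executed at install time. A minimal sketch with invented package and file names (the actual malware's manifest may differ), using the "prepare" hook the parent comment describes:

```json
{
  "name": "some-compromised-package",
  "version": "1.0.0",
  "scripts": {
    "prepare": "bun run ./payload.js"
  }
}
```

With `ignore-scripts=true` set (or under bun's and pnpm's defaults), a hook like this simply never executes on install.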
sevenzero 5 hours ago||
So how many supply chain attacks do we need to actually change things? Feels like I read about new supply chain attacks every day at this point.
eviks 4 hours ago||
As many as fit in the time it takes a better generation of developers to grow up
sevenzero 3 hours ago||
Unfortunately I think devs nowadays (me included) are insanely bad compared to the devs back in the day who actually had to learn about their computers.
killerstorm 2 hours ago||
A lot of things need to be rebuilt from ground up, and many devs would prefer convenience and tradition
ryanschaefer 1 hour ago||
> many devs would prefer convenience and tradition

This is too reductive of the situation.

If it ain’t broke don’t fix it. Except, in this case, unless you have someone tell you it’s broken you won’t even know you need to fix it.

And this is where asymmetry comes in to play. Attackers are free to test and break as much as they want as long as they are silent. Whereas maintainers don’t know if the fix an LLM proposes will actually address the issue or cause some regression elsewhere.

IMO, if Microsoft wants actually good PR around GitHub for once, they should offer free LLM security audits on all actions for at least the X most popular repos…

febusravenga 5 hours ago||
I think the biggest concern here was cache poisoning.

Well, one of the simplest mitigations is that `pull_request_target` jobs shouldn't have write access to the cache; they can read it for performance, but not write to it.

To extrapolate the rule: `pull_request_target` jobs shouldn't have any way to invoke external side effects.

In the strictest scenario, they shouldn't have network access at all ... or only GET <safeUrl>, where the safe URLs are somehow vetted previously on main, derived from yarn.lock and similar manifests. A pain to set up; no wonder nobody does it.
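One concrete way to get that read-but-not-write behavior on GitHub Actions is the `actions/cache/restore` sub-action, which restores a cache without the post-job save step. A sketch (workflow shape and cache path are illustrative, not from the postmortem):

```yaml
# Untrusted pull_request_target job: restore-only, never writes the cache.
on: pull_request_target

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache/restore@v4   # restore-only variant; no save on job end
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('package-lock.json') }}
      - run: npm ci --ignore-scripts
```

The trusted jobs on main would keep using the regular `actions/cache` action, so only vetted code ever populates the cache.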

hirako2000 6 hours ago||
> it's a known GitHub Actions design issue that requires conscious mitigation.

Okay, so it's a security issue, but "just mitigate it, we won't fix it."

In a recent thread, people asked me how come GitHub Actions isn't a positive feature added since the MS acquisition.

postalcoder 12 hours ago||
Wow. Another huge package got compromised. I'm going to repost my PSA[0][1] that I posted after Axios and LiteLLM were compromised. The bit about lifecycle scripts applies too:

PSA: npm/bun/pnpm/uv now all support setting a minimum release age for packages. I also have `ignore-scripts=true` in my ~/.npmrc. Based on the analysis, that alone would have mitigated the vulnerability. bun and pnpm do not execute lifecycle scripts by default. Here's how to set global configs to a minimum release age of 7 days:

  ~/.config/uv/uv.toml
  exclude-newer = "7 days"

  ~/.npmrc
  min-release-age=7 # days
  ignore-scripts=true

  ~/Library/Preferences/pnpm/rc
  minimum-release-age=10080 # minutes

  ~/.bunfig.toml
  [install]
  minimumReleaseAge = 604800 # seconds

If you do need to override the global setting, you can do so with a CLI flag:

  npm install <package> --min-release-age 0
  
  pnpm add <package> --minimum-release-age 0
  
  uv add <package> --exclude-newer "0 days"
  
  bun add <package> --minimum-release-age 0

I should add one extra note. There seems to be some concern that the mass adoption of dependency cooldowns will lead to vulnerabilities being caught later, or that using dependency cooldowns is some sort of free-riding. I disagree with that. What you're trading by using dep cooldowns is time preference. Some people will always have a higher time preference than you.

0: https://news.ycombinator.com/item?id=47582220

1: https://news.ycombinator.com/item?id=47513932

63stack 4 hours ago||
The last time I looked at this, using ignore-scripts=true with npm resulted in "npm run xyz" getting blocked as well. Is that still the case?
postalcoder 1 hour ago||
Nope, that's not the case. This blocks lifecycle scripts, but it doesn't block scripts that are explicitly invoked by `npm run`. From the documentation[0]:

  Note that commands explicitly intended to run a particular script, such as 
  npm start, npm stop, npm restart, npm test, and npm run-script will still
  run their intended script if ignore-scripts is set, but they will not 
  run any pre- or post-scripts.

0: https://docs.npmjs.com/cli/v8/commands/npm-run-script#ignore...
ricardobeat 11 hours ago|||
+1 to this. I am glad to have enabled these back in March before the last two waves hit. In addition to that, make sure you have a lockfile committed to your repo and be mindful of adding new dependencies. Use `pnpm install --frozen-lockfile` to avoid surprises.

If you don't have min-release-age set, remember that you can still pull in affected packages via indirect dependencies.

And ideally, pin your package manager version too.

SethMLarson 11 hours ago||
pip also supports relative dependency cooldowns starting in v26.1:

  ~/.config/pip/pip.conf
  [install]
  uploaded-prior-to = P3D

twoodfin 2 hours ago||
LLM probably designed the attack, LLM analyzes the attack and produces the postmortem.

Interesting days.

varunsharma07 15 hours ago|
The Mini Shai-Hulud worm is actively compromising legitimate npm packages by hijacking CI/CD pipelines and stealing developer secrets. StepSecurity's OSS Package Security Feed first detected the attack in official @tanstack packages and is tracking its spread across the ecosystem in real time.
janice1999 14 hours ago|
How did you guys detect it? Do you use it internally or do you monitor popular packages?