Posted by Fibonar 3 hours ago
I didn’t need to recount my thought process after the fact; it’s the same set of notes I wrote down to help Claude figure out what was happening.
I’m an ML engineer by trade, so having Claude walk me through exactly who to contact, along with a step-by-step guide to the time-critical actions, felt like a game-changer for non-security researchers.
I'm curious whether the security community thinks more non-specialists finding and reporting vulnerabilities like this is a net positive or a headache?
Good thinking on asking Claude to walk you through who to contact. I had no idea how to reach anyone related to PyPI, so I started by shooting an email to the maintainers and posting it on Hacker News.
While I'm not part of the security community, I think everyone who finds something like this should be able to report it. There is no point in gatekeeping the reporting of serious security vulnerabilities.
> If you've identified a security issue with a project hosted on PyPI: log in to your PyPI account, then visit the project's page on PyPI. At the bottom of the sidebar, click "Report project as malware".
The fork-bomb part still seems really weird to me: a pretty sophisticated payload, tripped up by a single missing `-S` flag in a subprocess call.
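If I understand the mechanism correctly, `-S` tells the Python interpreter to skip `import site` at startup, and the `site` module is what runs any `sitecustomize.py` hook on the path. A payload planted in such a hook that spawns `python` without `-S` re-triggers itself in every child, hence the fork bomb. A safe way to see the flag's effect (this just inspects `sys.flags`, it is not the payload):

```python
import subprocess
import sys

# Without -S, a child interpreter runs `import site` at startup, which
# executes any sitecustomize.py it finds. A startup hook that spawns
# `python` without -S therefore re-triggers itself in every child.
# With -S, sys.flags.no_site is 1 and the hook never runs.
out = subprocess.run(
    [sys.executable, "-S", "-c", "import sys; print(sys.flags.no_site)"],
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # 1 → site import (and sitecustomize) skipped
```

So a subprocess call that forgot the `-S` is exactly the kind of thing that turns a startup hook into unbounded recursion.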
I’ve found Claude in particular to be very good at this sort of thing. As for whether it’s a good thing, I’d say it’s a net positive - your own reporting of this probably saved a bigger issue!
We wrote up the why and the what on our blog twice; the second post covers the LiteLLM issue:
https://grith.ai/blog/litellm-compromised-trivy-attack-chain
It's a signal-vs-noise thing. Most of the grief comes from bottom feeders shoveling over anything they can squint at and call a vulnerability, then asking for money. Maybe once a month someone would run a free tool and blindly send snippets of the output, promising the rest in exchange for payment. Or they'd email the CFO and the General Counsel after being politely asked to come back with higher-quality information, and then be ignored until they did.
Your report, on the other hand, was high quality. I read every report that came my way, and the good ones were fast-tracked for fixes: I'd fix or mitigate them immediately if I had a way to do so without stopping business, and I'd go to the CISO, the CTO, and the corresponding engineering manager if it mattered enough for an immediate response.
I like the presentation <3.
(also beautifully presented!)
The 46-minute window here is telling. If your CI/CD pipeline happens to run during that window, you're exposed. A simple policy of "no package updates within 24h of release" would have completely avoided this, and it costs nothing to implement.
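A "no package updates within 24h of release" gate is cheap to sketch because PyPI's JSON API reports per-file upload timestamps (`urls[].upload_time_iso_8601` in `https://pypi.org/pypi/<name>/<version>/json`). A minimal sketch of the age check, assuming the timestamp has been fetched elsewhere (the helper name and thresholds are my own):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def old_enough(upload_time: str,
               min_age: timedelta = timedelta(hours=24),
               now: Optional[datetime] = None) -> bool:
    """Return True if a release is at least `min_age` old.

    `upload_time` is an ISO-8601 timestamp as PyPI reports it,
    e.g. "2026-01-02T11:14:00Z".
    """
    released = datetime.fromisoformat(upload_time.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return now - released >= min_age

# A wheel live for ~46 minutes fails the gate; a day-old one passes.
check_time = datetime(2026, 1, 2, 12, 0, tzinfo=timezone.utc)
print(old_enough("2026-01-02T11:14:00Z", now=check_time))  # False
print(old_enough("2026-01-01T10:00:00Z", now=check_time))  # True
```

Wire that into CI before `pip install` and the 46-minute window in this incident never reaches your builds.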
The client side tooling needs work, but that's a major effort in and of itself.
PyPI doesn't block package uploads awaiting security scanning - that would be a bad idea for a number of reasons, most notably (in my opinion) that it would be making promises that PyPI couldn't keep and lull people into a false sense of security.
PyPI has paid organization accounts now which are beginning to form a meaningful revenue stream: https://docs.pypi.org/organization-accounts/pricing-and-paym...
Plus a small fee wouldn't deter malware authors, who would likely have easy access to stolen credit cards - which would expose PyPI to the chargebacks and fraudulent transactions world as well!
If PyPI charges money, Python libraries will suddenly have a lot of "you can `uv add git+https://github.com/project/library`" instead of `uv add library`.
I also don't think it would stop this attack, where a token was stolen.
If someone is generating PyPI package releases from CI, they'll register a credit card on their account and let CI charge it automatically. When the CI token is stolen, the attacker can push an update on the real package owner's dime, not their own, so it's no deterrent.
Also, the iOS App Store is a decent counterexample: it charges $100/year for a developer account, yet still has its share of malware (certainly more than the totally free Debian software repository).
(software supply chain security is a component of my work)
I agree that would be a bad idea, since security scanning is inherently a cat-and-mouse game.
Let's say, hypothetically, that PyPI did block uploads that failed a security scan. The attacker simply creates their own PyPI test package ahead of time, uploads sample malicious payloads with additional layers of obfuscation until one passes the scan, and then uses that payload in the real attack.
PyPI would also probably open source any security scanning code it adds to the upload path (as it should), so the attacker could even just run it locally.
("slow is smooth, smooth is fast")
A pattern that worked for us is treating package supply-chain events as a governance problem as much as a technical one: a short, pre-written policy playbook (who gets paged, what evidence to collect, what to quarantine, etc.), plus an explicit decision record for "what did we do and why." Even a lightweight template prevents panic-driven actions like an ad-hoc "just reinstall everything."
On the flip side, waiting N days before adopting new versions helps, but it's brittle for agent systems because they tend to pull dependencies dynamically and often run unattended. The more robust control is pin + allowlist, with an internal "permission to upgrade" gate where upgrades to execution-critical deps require a person to sign off (or at least a CI check that includes provenance/signature verification and a diff of new files). It's boring, but it turns "oops, compromised wheel" into a contained event rather than an unbounded blast radius.
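The pin + allowlist gate above can be sketched in a few lines; this is a minimal illustration with hypothetical file contents (any real gate would also verify signatures/provenance, which is omitted here). The build fails if the lockfile contains any `name==version` pair a human hasn't approved:

```python
def parse_pins(lines):
    """Parse simple `name==version` requirement lines into a set of pairs."""
    pins = set()
    for line in lines:
        line = line.split("#")[0].strip()  # drop trailing comments
        if line and "==" in line:
            name, version = line.split("==", 1)
            pins.add((name.strip().lower(), version.strip()))
    return pins

def unapproved(lockfile_lines, allowlist_lines):
    """Return pins present in the lockfile but absent from the allowlist."""
    return parse_pins(lockfile_lines) - parse_pins(allowlist_lines)

# Hypothetical example: an agent bumped litellm without human sign-off.
lock = ["requests==2.32.3", "litellm==1.99.0  # just bumped by the agent"]
allow = ["requests==2.32.3", "litellm==1.98.0"]
print(unapproved(lock, allow))  # {('litellm', '1.99.0')}
```

A non-empty result fails CI, so an unattended agent can propose an upgrade but can't ship it.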
See how the AI points you in the "right" direction:
> What likely happened:
>
> The exec(base64.b64decode('...')) pattern is not malware — it's how Python tooling (including Claude Code's Bash tool) passes code snippets to python -c while avoiding shell escaping issues.
Any base64 string passed to python on the command line should be considered highly suspicious by default, as should anything executed from /tmp, /var/tmp, or /dev/shm. (This payload exfiltrated data to https://models.litellm.cloud/, encrypted with RSA.)
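As a toy heuristic (not any real scanner, and trivially evadable, per the cat-and-mouse point above), you can flag command lines that feed a long base64 blob into an `exec`/`b64decode` call:

```python
import base64
import re

# Match b64decode('<long base64 literal>'); 40+ chars filters out the
# short strings that legitimate tooling tends to pass inline.
PATTERN = re.compile(r"b64decode\(['\"]([A-Za-z0-9+/=]{40,})['\"]\)")

def suspicious(cmdline: str) -> bool:
    """Flag command lines that exec a long base64-decoded blob."""
    return bool(PATTERN.search(cmdline) and "exec(" in cmdline)

payload = base64.b64encode(b"print('hi')" * 10).decode()
print(suspicious(
    f"python -c \"exec(__import__('base64').b64decode('{payload}'))\""
))  # True
print(suspicious("python -c 'print(1)'"))  # False
```

The catch is exactly the one quoted above: legitimate tooling uses the same pattern, so this is a triage signal, not a verdict.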
If the OP had had Lulu or Little Snitch installed, they would probably have noticed (and blocked) suspicious outbound connections from unexpected binaries. Having said this, uploading a binary to Claude for analysis is a different story.
The author may be right to praise Claude's research capabilities for some issues. Selecting an Iranian school as a target would be a counterexample.
But the generative parts augmented by Claude are a huge and unconditional net negative.