
Posted by dot_treo 18 hours ago

Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised (github.com)
About an hour ago, new versions were deployed to PyPI.

I was just setting up a new project, and things behaved weirdly. My laptop ran out of RAM; it looked like a fork bomb was running.

I investigated and found that a base64-encoded blob had been added to proxy_server.py.

It decodes and writes another file, which it then runs.

I'm in the process of reporting this upstream, but wanted to give everyone here a heads-up.

It is also reported in this issue: https://github.com/BerriAI/litellm/issues/24512
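For anyone who wants to check their own environment: a quick way to scan source files for this kind of injected blob is to look for unusually long base64 runs. A rough sketch (the 200-character threshold and the function name are my choices, not anything from the actual malware):

```python
import base64
import re
from pathlib import Path

def find_base64_blobs(path, min_len=200):
    """Return (line_number, preview) pairs for suspiciously long base64 runs."""
    pattern = re.compile(r"[A-Za-z0-9+/=]{%d,}" % min_len)
    hits = []
    text = Path(path).read_text(errors="replace")
    for lineno, line in enumerate(text.splitlines(), 1):
        for match in pattern.finditer(line):
            blob = match.group()
            try:
                base64.b64decode(blob, validate=True)  # keep only runs that really decode
            except Exception:
                continue
            hits.append((lineno, blob[:40] + "..."))
    return hits
```

Long base64 literals do show up legitimately (embedded assets, test fixtures), so treat hits as leads to inspect, not proof.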

565 points | 395 comments
6thbit 16 hours ago|
The title is a bit misleading.

The package was directly compromised, not “by supply chain attack”.

If you use the compromised package, your supply chain is compromised.

dlor 13 hours ago|
It's both. They got compromised by another supply chain attack on Trivy initially.
mohsen1 16 hours ago||
If it hadn't been spinning up so many Python processes and overwhelming the system with them (friends noticed it was eating CPU just from the fan noise!), it would have been much more successful. Similar to the xz attack in that respect.

It does a lot of CPU-intensive work:

    spawn background python
    decode embedded stage
    run inner collector
    if data collected:
        write attacker public key
        generate random AES key
        encrypt stolen data with AES
        encrypt AES key with attacker RSA pubkey
        tar both encrypted files
        POST archive to remote host
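The AES/RSA steps in that outline are standard envelope encryption: encrypt the bulk data with a fresh symmetric key, then wrap that key with the attacker's RSA public key so only their private key can recover it. A rough sketch of the pattern using the `cryptography` package (illustrative only, not the actual malware code):

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def envelope_encrypt(data: bytes, rsa_pubkey):
    """Encrypt data with a fresh AES key, then wrap that key with an RSA public key."""
    aes_key = AESGCM.generate_key(bit_length=256)  # random per-run AES key
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(aes_key).encrypt(nonce, data, None)
    # Only the holder of the matching RSA private key can unwrap the AES key.
    wrapped_key = rsa_pubkey.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return ciphertext, wrapped_key
```

This is why the exfiltrated archive contains two encrypted files: the data blob and the wrapped key.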
franktankbank 15 hours ago||
I can't tell which part of that is expensive unless many Python processes are spawned at the same time. Are any of the payloads particularly large?
mathis-l 7 hours ago||
CrewAI (uses litellm) pinned it to 1.82.6 (the last good version) 5 hours ago, but the commit message doesn't say anything about a potential compromise. This seems weird. Is it a coincidence? Shouldn't users be warned about a potential compromise?

https://github.com/crewAIInc/crewAI/commit/8d1edd5d65c462c3d...

mathis-l 7 hours ago|
DSPy is handling it openly: https://github.com/stanfordnlp/dspy/issues/9500
rgambee 16 hours ago||
Looking forward to a Veritasium video about this in the future, like the one they recently did about the xz backdoor.
johanyc 7 hours ago||
I don't expect one. This kind of attack is pretty common nowadays. The xz attack was special for how long the attacker worked on it and how severe it could have been.
stavros 16 hours ago||
That was massively more interesting, this is just a straight-up hack.
datadrivenangel 8 hours ago||
This, along with some other issues, makes me consider ejecting and building my own LLM shim. The model providers are bespoke enough, even within litellm, that it sometimes seems like a lot of hassle for not much benefit.

Also, the repo is so active that it's very hard to understand the state of issues and PRs, and the 'day 0' support for GPT-5.4-nano took over a week. Still, a tough situation for the maintainers who got hacked.

noobermin 12 hours ago||
I have to say, the long line of comments from obvious bots thanking the opener of the issue is a bit too on the nose.
zahlman 8 hours ago|
It doesn't need to be subtle if the goal is just to drown out actual discussion.
getverdict 5 hours ago||
Supply chain compromises in AI tooling are becoming structural, not exceptional. We've seen similar patterns in the last 6 months — Zapier's npm account (425 packages, Shai Hulud malware) and Dify's React2Shell incident both followed the same vector: a trusted package maintainer account as the entry point. The blast radius keeps growing as these tools get embedded deeper into production pipelines.
saharhash 5 hours ago||
Easy tool to check whether you or other repos were exposed: https://litellm-compromised.com
gaborbernat 6 hours ago||
I recommend reading this related blog post: https://bernat.tech/posts/securing-python-supply-chain
cpburns2009 14 hours ago|
Looks like litellm is no longer in quarantine on PyPI, and the compromised versions (1.82.7 and 1.82.8) have been removed [1].

[1]: https://pypi.org/project/litellm/#history
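Even with the releases pulled, they may still be sitting in existing environments. A quick sketch of an installed-version check (the bad-version set is just the two pulled releases mentioned in this thread; no claim it's exhaustive):

```python
from importlib import metadata

# The two releases pulled from PyPI per this thread (assumption: list is complete).
BAD_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(package="litellm", bad=frozenset(BAD_VERSIONS)):
    """True if the installed version of `package` is one of the known-bad releases."""
    try:
        return metadata.version(package) in bad
    except metadata.PackageNotFoundError:
        return False  # not installed in this environment
```

Note this only checks the current environment; lockfiles, containers, and CI caches need their own sweep.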
