Posted by dot_treo 15 hours ago

Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised (github.com)
About an hour ago, new versions were deployed to PyPI.

I was just setting up a new project when things started behaving weirdly. My laptop ran out of RAM; it looked like a forkbomb was running.

I investigated and found that a base64-encoded blob has been added to proxy_server.py.

It decodes and writes out another file, which it then runs.

I'm in the process of reporting this upstream, but wanted to give everyone here a heads-up.

It is also reported in this issue: https://github.com/BerriAI/litellm/issues/24512
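For anyone who wants to check their own environment, a crude way to spot this class of injection is to scan installed sources for unusually long base64 literals. The regex and the 200-character threshold below are my own illustrative assumptions, not the exact payload's signature:

```python
import re
from pathlib import Path

# Long runs of base64 characters inside a string literal are rare in
# legitimate source; 200+ characters is a crude but useful threshold.
B64_LITERAL = re.compile(r"""["'][A-Za-z0-9+/=]{200,}["']""")

def scan_tree(root):
    """Yield (path, line_number) for every suspicious base64 literal."""
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for i, line in enumerate(text.splitlines(), start=1):
            if B64_LITERAL.search(line):
                yield path, i
```

Point it at your site-packages, e.g. `list(scan_tree("/path/to/site-packages"))`, and eyeball anything it flags.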

487 points | 371 comments
cedws 13 hours ago|
This looks like the same TeamPCP that compromised Trivy. Notice how the issue is full of bot replies. It was the same in Trivy’s case.

This threat actor seems to be very quickly capitalising on stolen credentials, wouldn’t be surprised if they’re leveraging LLMs to do the bulk of the work.

varenc 4 hours ago|
What is the rationale for the attacker spamming the relevant issue with bot replies? Does it benefit them? Maybe it makes discussion impossible, confusing maintainers and delaying the time to a fix?
bratao 14 hours ago||
Looks like the founder and CTO's account has been compromised: https://github.com/krrishdholakia
jadamson 14 hours ago||
Most of his recent commits are small edits claiming responsibility on behalf of "teampcp", which was the group behind the recent Trivy compromise:

https://news.ycombinator.com/item?id=47475888

soco 13 hours ago||
I was just wondering why the Trivy compromise hit only npm packages, thinking that bigger stuff should appear sooner or later. Here we go...
franktankbank 14 hours ago||
[flagged]
shay_ker 13 hours ago|||
A general question - how do frontier AI companies handle scenarios like this in their training data? If they train their models naively, then training data injection seems very possible and could make models silently pwn people.

Do the labs tag code versions that have an associated CVE as compromised (telling the model what NOT to do)? Do they build adversarial RL environments to teach what's good/bad? I'm very curious, since it's inevitable that some pwned code ends up as training data no matter what.

tomaskafka 13 hours ago||
Everyone's approach (well, except Anthropic's; they seem to have preserved a bit of taste) is "the more data the better", so the databases of stolen content (erm, models) are memorizing crap.
datadrivenangel 13 hours ago|||
This was apparently a compromise of the library owners' GitHub accounts, so it's not really a scenario of dangerous code in the training data.

I assume most labs don't do anything to deal with this, and just hope that it gets trained out because better code should be better rewarded in theory?

Imustaskforhelp 13 hours ago||
I am pretty sure that such measures aren't taken by AI companies, though I may be wrong.
alansaber 13 hours ago||
The API/online model inference definitely runs through some kind of edge safeguarding models which could do this.
nickvec 13 hours ago||
Looks like all of the LiteLLM CEO’s public repos have been updated with the description “teampcp owns BerriAI” https://github.com/krrishdholakia
f311a 12 hours ago||
Their previous release would be easily caught by static analysis. PTH is a novel technique.

Run all your new dependencies through static analysis and don't install the latest versions.

I implemented a static analysis tool for Python that detects close to 90% of such injections.

https://github.com/rushter/hexora
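For a flavour of what AST-based detection can look like (a toy sketch of the general idea, not how hexora actually works), you can walk the tree and flag `exec`/`eval` calls that feed from `base64.b64decode`:

```python
import ast

SINKS = {"exec", "eval"}

def flag_decode_exec(source):
    """Return line numbers of exec()/eval() calls whose arguments
    contain a b64decode call -- a common obfuscation pattern."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in SINKS):
            # Look for b64decode anywhere inside the call's arguments.
            for inner in ast.walk(node):
                if (isinstance(inner, ast.Call)
                        and isinstance(inner.func, ast.Attribute)
                        and inner.func.attr == "b64decode"):
                    hits.append(node.lineno)
                    break
    return hits
```

Real tools obviously track many more sources and sinks (and data flow between them), which is where the remaining ~10% goes.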

ting0 8 hours ago||
And it's easily bypassed by an attacker who knows about your static analysis tool and can iterate on their exploit until it no longer gets flagged.
fernandotakai 6 hours ago||
the main things are:

1. pin dependencies with sha signatures
2. mirror your dependencies
3. only update when truly necessary
4. at first, run everything in a sandbox
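Point 1 can be done with stock pip plus pip-tools; a minimal sketch of the workflow (your top-level deps go in requirements.in):

```shell
# Generate a fully pinned, hash-locked requirements file
# (pip-tools provides pip-compile; uv has an equivalent "uv pip compile").
pip install pip-tools
pip-compile --generate-hashes requirements.in -o requirements.txt

# --require-hashes makes pip reject any dependency whose sha256
# is missing from, or doesn't match, the lock file.
pip install --require-hashes -r requirements.txt
```

With hashes locked, a silently re-published or tampered artifact fails the install instead of running.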

samsk 12 hours ago||
Interesting tool, will definitely try it. Just curious: is there a tool (a hexora checker) that ensures hexora itself and its dependencies are not compromised? And of course, if there is one, I'll need another one for the hexora checker...
f311a 11 hours ago|||
There is no such tool, but you can use other static analyzers. Datadog also has one, but it's not AST-based.
hmokiguess 10 hours ago|||
https://xkcd.com/2044/
tom_alexander 13 hours ago||
Only tangentially related: Is there some joke/meme I'm not aware of? The github comment thread is flooded with identical comments like "Thanks, that helped!", "Thanks for the tip!", and "This was the answer I was looking for."

Since they all seem positive, it doesn't seem like an attack but I thought the general etiquette for github issues was to use the emoji reactions to show support so the comment thread only contains substantive comments.

incognito124 13 hours ago||
In the thread:

> It also seems that attacker is trying to stifle the discussion by spamming this with hundreds of comments. I recommend talking on hackernews if that might be the case.

nickvec 13 hours ago|||
Ton of compromised accounts spamming the GH thread to prevent any substantive conversation from being had.
tom_alexander 13 hours ago||
Oh wow. That's a lot of compromised accounts. Guess I was wrong about it not being an attack.
vultour 13 hours ago|||
These have been popping up on all the TeamPCP compromises lately
jbkkd 13 hours ago|||
Those are all bots commenting, and now exposing themselves as such.
Imustaskforhelp 13 hours ago||
Bots to flood the discussion to prevent any actual conversation.
syllogism 11 hours ago|||
Maintainers need to keep a wall between the package publishing and public repos. Currently what people are doing is configuring the public repo as a Trusted Publisher directly. This means you can trigger the package publication from the repo itself, and the public repo is a huge surface area.

Configure the CI to make a release with the artefacts attached. Then have an entirely private repo that can't be triggered automatically as the publisher. The publisher repo fetches the artefacts and does the pypi/npm/whatever release.
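For reference, the pattern being criticised here is the standard OIDC trusted-publisher setup, where a tag push on the public repo is enough to ship a release. A minimal sketch of that common GitHub Actions workflow (names and versions are the usual conventions, not this project's actual config):

```yaml
# .github/workflows/release.yml -- publishes straight from the public repo
on:
  push:
    tags: ["v*"]

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # OIDC token for PyPI trusted publishing
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Anyone who can push a tag (or edit this workflow) on the public repo can cut a release, which is the surface area the split-repo approach tries to shrink.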

anderskaseorg 10 hours ago||
The point of trusted publishing is supposed to be that the public can verifiably audit the exact source from which the published artifacts were generated. Breaking that chain via a private repo is a step backwards.

https://docs.npmjs.com/generating-provenance-statements

https://packaging.python.org/en/latest/specifications/index-...

saidnooneever 11 hours ago||
this kind of compromise is why a lot of orgs keep internal mirrors of repos or package sources, so they can stay a few versions behind latest and avoid a fresh compromise. seen it with internal pip repos, apt repos, etc.

some will even audit each package in there (kind of a crap job, but it works fairly well as a mitigation)

syllogism 11 hours ago||
Just keeping a lockfile and updating it weekly works fine for that too, yeah
eoskx 13 hours ago||
This is bad, especially from a downstream-dependency perspective. DSPy and CrewAI also import LiteLLM, so you might not be using LiteLLM as a gateway but still be importing it via those libraries for agents, etc.
nickvec 13 hours ago||
Wow, the postmortem for this is going to be brutal. I wonder just how many people/orgs have been affected.
eoskx 13 hours ago||
Yep, I think the worst impact is going to be on libraries that were using LiteLLM just as an upstream LLM provider library rather than as a model gateway. Hopefully, CrewAI and DSPy can get on top of it soon.
benatkin 13 hours ago||
I'm surprised to see nanobot uses LiteLLM: https://github.com/HKUDS/nanobot

LiteLLM wouldn't be my top choice, because it installs a lot of extra stuff. https://news.ycombinator.com/item?id=43646438 But it's quite popular.

flux3125 12 hours ago||
I completely removed nanobot after I found that. Luckily, I only used it a few times and inside a docker container. litellm 1.82.6 was the latest version I could find installed, not sure if it was affected.
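Checking what you have installed is straightforward with the stdlib; the helper below only compares against the two versions named in this thread (1.82.7 and 1.82.8), since earlier versions have not been reported as affected:

```python
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # versions named in this thread

def is_compromised(version: str) -> bool:
    """True if the given litellm version string is one of the bad releases."""
    return version in COMPROMISED

def installed_litellm_version():
    """Version string of the installed litellm, or None if absent."""
    try:
        return metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return None
```

Usage: `v = installed_litellm_version()` then `is_compromised(v or "")`; remember to check every virtualenv and container image, not just your main environment.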
macNchz 10 hours ago|||
Was curious: a good number of projects out there have an un-pinned LiteLLM dependency in their requirements.txt (628 matches): https://github.com/search?q=path%3A*%2Frequirements.txt%20%2...

or pyproject.toml (not possible to filter based on absence of a uv.lock, but at a glance it's missing from many of these): https://github.com/search?q=path%3A*%2Fpyproject.toml+%22%5C...

or setup.py: https://github.com/search?q=path%3A*%2Fsetup.py+%22%5C%22lit...

santiagobasulto 12 hours ago|
I blogged about this last year[0]...

> ### Software Supply Chain is a Pain in the A*

> On top of that, the room for vulnerabilities and supply chain attacks has increased dramatically

AI is not about fancy models, it's about plain old software engineering. I strongly advised our team of "not-so-senior" devs not to use LiteLLM or LangChain or anything like that, and to just stick to `requests.post('...')`.

[0] https://sb.thoughts.ar/posts/2025/12/03/ai-is-all-about-soft...
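The `requests.post` approach really is small; a sketch against an OpenAI-style chat endpoint (the URL and payload shape assume the standard chat-completions API, and `requests` is the only dependency):

```python
import os

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(model, messages, api_key):
    """Assemble headers and JSON body for an OpenAI-style chat call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": messages}
    return headers, body

def chat(model, messages):
    import requests  # third-party; the only dependency this approach needs
    headers, body = build_chat_request(
        model, messages, os.environ["OPENAI_API_KEY"]
    )
    resp = requests.post(API_URL, headers=headers, json=body, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

One pinned dependency and a dozen lines versus the transitive closure of an abstraction library; that's the whole supply-chain argument in miniature.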

eoskx 12 hours ago|
Valid, but for all the crap that LangChain gets it at least has its own layer for upstream LLM provider calls, which means it isn't affected by this supply chain compromise (unless you're using the optional langchain-litellm package). DSPy uses LiteLLM as its primary way to call OpenAI, etc. and CrewAI imports it, too, but I believe it prefers the vendor libraries directly before it falls back to LiteLLM.
More comments...