
Posted by dot_treo 1 day ago

Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised (github.com)
About an hour ago, new versions were deployed to PyPI.

I was just setting up a new project, and things behaved weirdly. My laptop ran out of RAM; it looked like a fork bomb was running.

I investigated and found that a base64-encoded blob had been added to proxy_server.py.

It decodes and writes another file, which it then runs.

I'm in the process of reporting this upstream, but wanted to give everyone here a heads-up.

It is also reported in this issue: https://github.com/BerriAI/litellm/issues/24512
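A rough sketch of how you might scan a source file for the kind of embedded blob described above. The length threshold and regex are guesses for illustration, not taken from the actual payload:

```python
# Hypothetical sketch: flag long, decodable base64 literals in Python source.
# The 200-character threshold is an arbitrary illustrative choice.
import base64
import re

B64_RE = re.compile(r"[A-Za-z0-9+/=]{200,}")  # long runs of base64 alphabet

def find_base64_blobs(source: str) -> list[str]:
    """Return substrings that look like (and actually decode as) base64."""
    hits = []
    for match in B64_RE.finditer(source):
        blob = match.group(0)
        try:
            base64.b64decode(blob, validate=True)
        except Exception:
            continue  # long run of those characters, but not valid base64
        hits.append(blob)
    return hits
```

Running something like this over a vendored copy of proxy_server.py would surface the blob, though it obviously won't catch payloads that are obfuscated differently.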

735 points | 438 comments
bfeynman 1 day ago|
pretty horrifying. I only use it as a lightweight wrapper and will most likely move away from it entirely. Not worth the risk.
dot_treo 1 day ago|
Even just having an import statement for it is enough to trigger the malware in 1.82.8.
sudorm 17 hours ago||
are there any timestamps available for when the malicious versions were published on PyPI? All I can find is that the last "good" version was published on March 22.
sudorm 17 hours ago|
according to articles, the first malicious version was published at roughly 8:30 UTC and the PyPI package was taken down at ~11:25 UTC.
homanp 22 hours ago||
How were they compromised? Phishing?
gkfasdfasdf 1 day ago||
Someone needs to go to prison for this.
claudiug 21 hours ago||
LiteLLM's SOC2 auditor was Delve :))
danielvaughn 23 hours ago||
I work with security researchers, so we've been on this since about an hour ago. One pain I've really come to feel is the complexity of Python environments. They've always been a pain, but in an incident like this you need to find out whether an exact version of a package has ever been installed on your machine. All I can say is: good luck.

The Python ecosystem provides too many nooks and crannies for malware to hide in.
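Checking the *current* environment is at least straightforward, even if the history problem the comment describes remains. A minimal sketch, using the compromised version numbers from the post:

```python
# Check only the currently installed version in this environment. Per the
# comment above, past installs are much harder to audit than this.
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # versions named in the post

def is_compromised(package: str = "litellm") -> bool:
    """True if the installed version of `package` is a known-bad release."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return False  # not installed in this environment
    return installed in COMPROMISED
```

Note this says nothing about other virtualenvs, Docker images, or a version that was installed and later upgraded away.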

TZubiri 1 day ago||
Thank you for posting this, interesting.

I hope that everyone's course of action will be uninstalling this package permanently, and avoiding the installation of packages similar to this.

To reduce supply chain risk, you need to evaluate not only the vendor (even if it's gratis and open source) but also the advantage it provides.

Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself for is-odd, or whatever this is, is not worth it.

Remember that you are programmers and you can just program, you don't need a framework, you are already using the API of an LLM provider, don't put a hat on a hat, don't get killed for nothing.

And even if you weren't using this specific dependency, check your deps; you might have shit like this in your requirements.txt and were merely saved by chance.

An additional note: the dev will probably post a post-mortem (what was learned, how it was fixed) and maybe downplay the thing. Ignore that. The only reasonable step after this is closing the repo, but there's no incentive to do that.

xinayder 1 day ago||
> Remember that you are programmers and you can just program, you don't need a framework, you are already using the API of an LLM provider, don't put a hat on a hat, don't get killed for nothing.

Programming against different LLM APIs is a hassle; this library made it easy by exposing one single API you call, and behind the scenes it handled all the different API calls you need for different LLM providers.
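To make the trade-off concrete, here is a hypothetical hand-rolled adapter in the spirit of the parent's "you can just program" argument: one message format mapped to two provider payload shapes. Field names are illustrative and not exhaustive:

```python
# Hypothetical sketch of a hand-rolled provider adapter. Payload shapes are
# simplified; real APIs have many more options than shown here.
def build_request(provider: str, model: str, messages: list[dict]) -> dict:
    if provider == "openai":
        # OpenAI-style chat completions body: system prompt stays in the list
        return {"model": model, "messages": messages}
    if provider == "anthropic":
        # Anthropic takes the system prompt as a separate top-level field
        # and requires max_tokens; 1024 is an arbitrary illustrative default
        system = [m["content"] for m in messages if m["role"] == "system"]
        rest = [m for m in messages if m["role"] != "system"]
        body = {"model": model, "messages": rest, "max_tokens": 1024}
        if system:
            body["system"] = system[0]
        return body
    raise ValueError(f"unknown provider: {provider}")
```

Whether twenty lines like this beat a dependency is exactly the dispute in this subthread: the sketch is trivial, but streaming, tool calls, retries, and provider drift are where the hassle actually lives.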

hrmtst93837 20 hours ago|||
One wrapper cuts API churn, but it also widens the supply-chain blast radius you own.
rcleveng 14 hours ago||||
I think almost everyone supports the openai api anyway (even Gemini). Not entirely sure why there needs to be a wrapper.
dragonwriter 13 hours ago||
Most do, but Anthropic indicates that theirs "is not considered a long-term or production-ready solution for most use cases" [0]. In any case, where the OpenAI-compatible API isn't the native API, both for cloud vendors other than OpenAI and for self-hosting software, the OpenAI-compatible API is often limited. That's both because the native API offers features that don't map to the OpenAI API (which a wrapper that presents an OpenAI-compatible API is not going to solve) and because the vendor often lags in implementing support for features in the OpenAI-compatible API, including new OpenAI endpoints that may support features the native API already supports (e.g., adding support for chat completions when completions were the norm, or responses when chat completions were). A wrapper that used the native API and did its own mapping to OpenAI could, in principle, address that.

[0] https://platform.claude.com/docs/en/api/openai-sdk

TZubiri 22 hours ago||||
>Programming for different LLM APIs is a hassle

That's what they pay us for

I'd get it if it were a hassle that could be avoided, but it feels like you are trying to avoid the very work you are being paid for, like a McDonald's employee trying to pay a kid with Happy Meal toys to work the burger stand.

Another red flag, although a bit more arguable: by 'abstracting' the API into a more generic one you achieve vendor neutrality, yes, but you also integrate much more loosely with your vendors, possibly lose unique features (or can only access them through even more 'hassle' via custom options), and, strategically, your end product will veer into commodity territory, which is not a place you usually want to be.

otabdeveloper4 1 day ago|||
There are only two different LLM APIs in practice (Anthropic and everyone else), and the differences are cosmetic.

This is like a couple hours of work even without vibe coding tools.

dragonwriter 13 hours ago||
> There's only two different LLM APIs in practice (Anthropic and everyone else), and the differences are cosmetic.

There are more than that (even if most other systems also provide an OpenAI-compatible API, which may or may not expose either all features of the platform or all features of the OpenAI API), and the differences are not cosmetic. But since LiteLLM itself just presents an OpenAI-compatible API, it can't be providing access to other vendor features that don't map cleanly to that API, and I don't think it's likely to be using the native API for each vendor and being more complete in its OpenAI-compatible implementation, even for the features that map naturally, than the first-party OpenAI-compatibility APIs.

circularfoyers 1 day ago|||
Comparing this project to is-odd seems very disingenuous to me. My understanding is this was the only way you could use llama.cpp with Claude Code for example, since llama.cpp doesn't support the Anthropic compatible endpoint and doing so yourself isn't anywhere near as trivial as your comparison. Happy to be corrected if I'm wrong.
jerieljan 22 hours ago||
That's a correct example, and I agree: it is disingenuous to trivially call this an `is-odd` project.

Back in the days of GPT-3.5, LiteLLM was one of the projects that provided a reliable adapter for communicating across AI labs' APIs, and when things drifted ever so slightly despite being "OpenAI-compatible", LiteLLM made it much easier for developers than reinventing and debugging such nuances themselves.

Nowadays, that gateway of theirs isn't just a funnel for centralizing API calls; it also serves other purposes, like applying guardrails consistently across all connections, tracking key spend on tokens, dispensing keys without having to do so on the main platforms, etc.

There's more to LiteLLM than being an inference gateway, too: it's also a package used by other projects. If you had a project that needed to support multiple endpoints as fallbacks, there's a chance LiteLLM is powering that.

Hence: supply chain attack. The GitHub issue is full of mentions from other projects, because they're being urged to pin to safe versions since they depend on it.
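Pinning away from the bad releases is mechanical enough to sketch. A toy checker for requirements-style lines, using the version numbers from the post (the parsing here is deliberately simplistic, so extras and environment markers would need real handling):

```python
# Sketch: classify requirements.txt-style lines for a package with known-bad
# releases. Version set from the post; parsing is intentionally naive.
BAD_VERSIONS = {"1.82.7", "1.82.8"}

def check_requirement(line: str, package: str = "litellm") -> str:
    line = line.strip()
    if not line.startswith(package):
        return "other"      # some other dependency
    if "==" not in line:
        return "unpinned"   # a range could resolve to a compromised release
    version = line.split("==", 1)[1].strip()
    return "bad" if version in BAD_VERSIONS else "pinned"
```

Downstream projects mentioned in the issue are effectively doing the "pinned" case by hand.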

Blackthorn 23 hours ago||
Edit: ignore this silliness, as it sidesteps the real problem. Leaving it here because we shouldn't remove our own stupidity.

It's pretty disappointing that safetensors has existed for multiple years now but people are still distributing pth files. Yes it requires more code to handle the loading and saving of models, but you'd think it would be worth it to avoid situations like this.

cpburns2009 23 hours ago|
safetensors is just as vulnerable to this sort of exploit as a pth file, since it's itself a Python package.
Blackthorn 23 hours ago||
Yeah, fair enough; the problem here is that the credentials were stolen. The fact that the exploit was packaged into a .pth is just an implementation detail.
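For context on why pickle-based .pth checkpoints come up at all in this subthread: unpickling can execute arbitrary code via `__reduce__`, which is the risk safetensors was designed to avoid. A minimal, harmless illustration (the `eval` call stands in for real malware; never load untrusted pickles):

```python
# Illustration of pickle's code-execution hazard. __reduce__ tells pickle to
# call eval(...) on load; a malicious checkpoint would call something worse.
import pickle

class Payload:
    def __reduce__(self):
        # Executed at unpickling time, not at class-definition time
        return (eval, ("'code ran during load'",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # runs eval and returns its result
```

As the comment says, though, this is orthogonal to the actual incident: a compromised package on PyPI runs code at install or import time regardless of which serialization format it ships.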
somehnguy 18 hours ago||
Perhaps I'm missing something obvious - but what's up with the comments on the reported issue?

Hundreds of downvoted comments like "Worked like a charm, much appreciated.", "Thanks, that helped!", and "Great explanation, thanks for sharing."

kamikazechaser 18 hours ago|
Compromised accounts. The malware targeted ~/.git-credentials.
chillfox 1 day ago|
Now I feel lucky that I switched to just using OpenRouter a year ago, because LiteLLM was incredibly flaky and kept causing outages.