Posted by dot_treo 13 hours ago
I was just setting up a new project, and things behaved weirdly. My laptop ran out of RAM; it looked like a fork bomb was running.
I investigated and found that a base64-encoded blob had been added to proxy_server.py.
It writes and decodes another file which it then runs.
I'm in the process of reporting this upstream, but wanted to give everyone here a heads-up.
It is also reported in this issue: https://github.com/BerriAI/litellm/issues/24512
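For anyone triaging their own environments: the loader pattern described (a base64 blob that gets decoded, written out, and executed) is cheap to grep for. A minimal, hypothetical heuristic sketch - not a real scanner, just a starting point for a quick audit:

```python
from pathlib import Path

# Hypothetical triage heuristic: flag Python files that both decode
# base64 and contain an exec-like call, the loader pattern described
# in this incident. Expect false positives; it's a first pass only.
EXEC_MARKERS = (b"exec(", b"eval(", b"os.system", b"subprocess")

def flag_suspicious(root="."):
    hits = []
    for path in Path(root).rglob("*.py"):
        data = path.read_bytes()
        if b"b64decode" in data and any(m in data for m in EXEC_MARKERS):
            hits.append(str(path))
    return sorted(hits)
```

Anything it flags still needs a human to read the file; legitimate code uses base64 too.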
1. Looks like this originated from the trivy used in our CI/CD - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09
2. If you're on the proxy Docker image, you were not impacted. We pin our versions in requirements.txt.
3. The package is in quarantine on PyPI - this blocks all downloads.
We are investigating the issue, and seeing how we can harden things. I'm sorry for this.
- Krrish
- Impacted versions (v1.82.7, v1.82.8) have been deleted from PyPI
- All maintainer accounts have been changed
- All keys for GitHub, Docker, CircleCI, pip have been deleted
We are still scanning our project to see if there are any more gaps.
If you're a security expert and want to help, email me - krrish@berri.ai
What about the compromised accounts (as in your main account)? Are they completely unrecoverable?
Were you not aware of this in the short time frame in which it happened? How come credentials were not rotated to mitigate the trivy compromise?
It is so much better than, you know... "We regret any inconvenience and remain committed to recognising the importance of maintaining trust with our valued community and following the duration of the ongoing transient issue we will continue to drive alignment on a comprehensive remediation framework going forward."
Kudos to you. Stressful times, but I hope it helps to know that people are reading this appreciating the response.
After my suggestion, the developer made a new GitHub account, linked it from their Hacker News profile, and linked their Hacker News "about" from the GitHub account, to verify that the new account is legitimate.
Worth following this thread as they mention that: "I will be updating this thread, as we have more to share." https://github.com/BerriAI/litellm/issues/24518
EDIT: no, it's compromised, see proxy/proxy_server.py.
Was your account completely compromised? (Judging from the commit made by TeamPCP on your accounts)
Are you in contact with all the projects that use litellm downstream, and do you know whether they are safe? (I am assuming not.)
I also don't understand how your account itself was compromised via the trivy exploit in CI/CD.
We have deleted all our pypi publishing tokens.
Our accounts had 2FA, so it's a compromised token here.
We're reviewing our accounts, to see how we can make it more secure (trusted publishing via jwt tokens, move to a different pypi account, etc.).
In CI they could easily have moved `trivy` to its own dedicated worker that had no access to the PyPI secret, which should be isolated to the publish command and only the publish command.
Token in CI could've been way too broad.
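For illustration, that split could look like the following (GitHub Actions syntax for familiarity - the project's actual pipeline appears to involve CircleCI, and all job, step, and secret names here are hypothetical):

```yaml
jobs:
  scan:                        # trivy runs here, with no secrets at all
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: trivy fs .        # hypothetical invocation

  publish:                     # only this job ever sees the PyPI token
    needs: scan
    runs-on: ubuntu-latest
    environment: release       # gated environment, scoped secret
    steps:
      - uses: actions/checkout@v4
      - run: python -m build && twine upload dist/*
        env:
          TWINE_USERNAME: __token__
          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
```

The point is just that a scanner compromise in the `scan` job would have nothing to exfiltrate, because the token never enters that job's environment.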
Write a detailed postmortem, share it publicly, continue taking responsibility, and you will come out of this having earned an immense amount of respect.
From a security standpoint, you would rather pull in a library that is compromised and run a credential stealer? It seems like this is the exact intended and best behavior.
More strangely (to me), this is often addressed by adding loads of fallible/partial caching (in e.g. CICD or deployment infrastructure) for package managers rather than building and publishing temporary/per-user/per-feature ephemeral packages for dev/testing to an internal registry. Since the latter's usually less complex and more reliable, it's odd that it's so rarely practiced.
No one initially knows how much is compromised
That said, I'm sorry this is being downvoted: it's unhappily observing facts, not arguing for a different security response. I know that's toeing the rules line, but I think it's important to observe.
i'd much rather see a million open PRs than a single malicious PR sneak through due to lack of thorough review.
Software people could (mostly) trust each other's OSS contributions because we could trust the discipline it took in the first place. Not any more.
Which sounds great, but the way things work now tend to be the exact opposite of that, so there will be no trustable platform to run the untrusted code in. If the sandbox, or the operating system the sandbox runs in, will get breaking changes and force everyone to always be on a recent release (or worse, track main branch) then that will still be a huge supply chain risk in itself.
> This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.
For the most part you can. Just version-pin slightly-stale versions of dependencies, after ensuring there are no known exploits for that version. Avoid the latest updates whenever possible, and keep aware of security updates and affected versions. Don't just update every time the dependency project updates. Update specifically for security issues, new features, and specific performance benefits, and even then avoid the latest version when possible.
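Version pinning alone wouldn't have caught this attack (the malicious release had a valid version number); hash pinning would have, since a replaced artifact can't match a recorded digest. pip supports this natively. A sketch, with a placeholder digest rather than any real package's hash:

```
# requirements.txt -- exact version plus expected artifact digest
# (the hash below is a placeholder, not a real digest)
litellm==1.81.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Installing with `pip install --require-hashes -r requirements.txt` then refuses any artifact whose hash doesn't match what you recorded at pin time.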
To really run code without trust would need something more like a microkernel that is the only thing in my system I have to trust, and everything running on top of that is forced to behave and isolated from everything else. Ideally a kernel so small and popular and rarely modified that it can be well tested and trusted.
This is the problem with software progressivism. Some things really should just be what they are, you fix bugs and security issues and you don't constantly add features. Instead everyone is trying to make everything have every feature. Constantly fiddling around in the guts of stuff and constantly adding new bugs and security problems.
This is kind of papering over the interesting part. What kind of UI or other mechanisms would help here? There's no shortcut to detecting and crashing on "something bad". The adversary can test against your sandbox as well.
https://github.com/calebfaruki/tightbeam https://github.com/calebfaruki/airlock
This is literally the thing I'm trying to protect against.
You can run it in a chroot jail and it wouldn't have accessed SSH keys outside of the jail...
There are many more similar technologies, and they've existed for decades.
Doing it on a per-language basis is not ideal: any new language would have to reinvent the wheel.
Better to do it at the system level, with the already-existing tooling.
OpenBSD has pledge/unveil; Linux has chroot, namespaces, cgroups; FreeBSD has Capsicum; etc. There are many of these things.
(I am not sure how well they play within these scenarios, but just triggering on the sandboxing comment. There are plenty of ways to do it as far as I can tell...)
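A crude sketch of the cheapest end of this, using nothing but POSIX process primitives from Python: scrub the environment so the child inherits no tokens, and cap resources so a fork bomb or CPU spin dies quickly. (Real isolation layers chroot/namespaces/seccomp on top, and some of those need root; this is illustration, not a full sandbox.)

```python
import os
import resource
import subprocess
import sys

def run_untrusted(cmd):
    """Run cmd with a scrubbed environment and hard resource caps.

    Minimal containment only: no inherited secrets, limited CPU time,
    and a cap on process count so runaway forking dies quickly.
    """
    def cap_resources():
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))      # 5s CPU max
        resource.setrlimit(resource.RLIMIT_NPROC, (64, 64))  # cap forks
    return subprocess.run(
        cmd,
        env={"PATH": "/usr/bin:/bin"},  # nothing inherited from parent
        preexec_fn=cap_resources,
        capture_output=True, text=True, timeout=10,
    )

os.environ["FAKE_AWS_SECRET"] = "hunter2"  # present in the parent...
r = run_untrusted([sys.executable, "-c",
                   "import os; print('FAKE_AWS_SECRET' in os.environ)"])
print(r.stdout.strip())  # ...but invisible to the child
```

The same pattern extends naturally to the OS facilities mentioned above; the scrubbed-env-plus-rlimits part is just the portion that needs no privileges at all.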
The problem is that programs can be entire systems, so "doing it at the system level" still means that you'd have to build boundaries inside a program.
You can use OS APIs to isolate the thing you want to use just fine.
And yes, if you mix privilege levels in a program by design, then you will have to design your program for that.
This is simple logic.
A programming language cannot decide for you who and what you trust.
For the sake of the argument, what if I wanted to isolate numpy from scipy?
Would you run numpy in a separate process from scipy? How would you share data between them?
Yes, you __can__ do all of that without programming language support. However, language support can make it much easier.
We should have sandboxing in Rust, Python, and every language in between.
The problems you mentioned resonated a lot with me and why I'm building it, any interest in working to solve that together?: https://github.com/smol-machines/smolvm
- A security-focused project should NOT train people to install by piping to bash by default. If I try previewing the install script in the browser, it forces a download instead of showing plain text. The first thing I see is an argument
# --prefix DIR Install to DIR (default: ~/.smolvm)
which later in the script is passed to an rm -rf that deletes a lib folder. So if I accidentally pick a directory containing ANY lib folder, it will be deleted.
- I'm not sure what the comparison to colima with krunkit machines is, except that you don't use VM images; how this works, or why it is better, is not 100% clear.
- Just a minor thing, but people don't have much attention, and I saw aws and fly.io in the description and nearly closed the project. It needs to be clearer that this is a local sandbox built on libkrun, NOT a wrapper for a remote sandbox like so many of the projects out there.
Will try reaching you on some channel; would love to collaborate, especially on devX. I would be very interested in something more reliable and a bit more lightweight in place of colima, once libkrun can fully replace vz.
1. In comparison with colima with krunkit, I ship smolvm with custom built kernel + rootfs, with a focus on the virtual machine as opposed to running containers (though I enable running containers inside it).
The customizations are also opensource here: https://github.com/smol-machines/libkrunfw
2. Good call on that description!
I've reached out to you on linkedin
It can be dedicated to a single service (or a full OS), runs a real BSD kernel, and provides strong isolation.
Overall, it fits into the "VM is the new container" vision.
Disclaimer: I'm following iMil (the developer of smolBSD and a contributor to NetBSD) through his Twitch streams, and I truly love what he is doing. I haven't actually used smolBSD in production myself since I don't have a need for it (but I participated in his live streams by installing and running his previews), and my answer might be somewhat off-topic.
More here <https://hn.algolia.com/?q=smolbsd>
At a glance, it's a matter of compatibility, most software has first class support for linux. But very interesting work and I'm going to follow it closely
I worked at AWS, specifically with Firecracker in the container space, for 4 years - we had a very long onboarding doc for developing on Firecracker for containers... so I made sure to focus on ease of use here.
> We just can't trust dependencies and dev setups.
In one of my vibe-coded personal projects (a Python and Rust project) I'm actually getting rid of most dependencies and vibe coding replacements that do just what I need. I think we'll see far fewer dependencies in future projects. Also, I typically only update dependencies when either an exploit is known in the current version or I need a feature present in a later version, and even then not to the absolute latest version if possible. I do this for all my projects under the many-eyes principle. Finding exploits takes time; new updates are riskier than slightly-stale versions.
Though, if I'm filing a bug with a project, I do test and file against the latest version.
No free lunch. LLMs are capable of writing exploitable code and you don’t get notifications (in the eg Dependabot sense, though it has its own problems) without audits.
On a personal note, I have been developing and talking to a clanker (runs inside) to get my day-to-day work done. I can have multiple instances of my project using worktrees, have them share some common dependencies, and monitor all of them in one place. I plan to open-source this framework soon.
Making this work on a per-library level … seems a lot harder. The cost for being very paranoid is a lot of processes right now.
(Yes, they should better configure which CI job has which permissions, but this should be the default or it won't always happen)
But mention that on HN and watch it get downvoted into oblivion: the war against general computation, walled gardens, devices locked down against their owners...
I made this tool for macOS that helps detect when a package accesses something it shouldn't. It's a tiny Go binary (less than 2k LOC) with no dependencies that mounts a WebDAV filesystem (no root) or NFS (root required) with fake secrets, and sends you a notification when anything accesses them. Very stupid simple. I've always really liked the canary/honeypot approach, and this at least may give some folks a chance to detect (similar to LittleSnitch) when something strange is going on!
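The canary idea itself is tiny to demonstrate. The commenter's tool is filesystem-based and written in Go; here is a minimal sketch of the same concept over localhost HTTP in Python (everything here - the path, the fake key - is made up for illustration):

```python
import http.server
import threading
import urllib.request

hits = []  # record of anything that touched the canary

class CanaryHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        hits.append(self.path)  # nothing legitimate should ever hit this
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"aws_secret_access_key": "AKIAFAKEFAKEFAKE"}')
    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), CanaryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate a credential stealer finding the bait:
url = f"http://127.0.0.1:{server.server_address[1]}/credentials"
urllib.request.urlopen(url).read()
server.shutdown()

print(hits)  # ['/credentials'] -- time to investigate
```

In a real deployment you'd replace the `print` with an alert (notification, webhook), and make the bait discoverable where stealers look: env vars, dotfiles, config paths.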
Next time the attack may not have an obvious performance issue!
I would expect a better spam detection system from GitHub. This is hardly acceptable.
Worked like a charm, much appreciated.
This was the answer I was looking for.
Thanks, that helped!
Thanks for the tip!
Great explanation, thanks for sharing.
This was the answer I was looking for.
The platform just has to avoid being spammed so badly that advertisers leave, and I think they sort of succeed at that.
Think about it: if Facebook shows you AI-slop ragebait or rage-inducing comments from bots designed to farm attention (or for malicious purposes in general), and you engage with it so it can show you ads, do you think it has an incentive to take a stance against that form of spam?
Oh boy, supply chain integrity will be an agent governance problem, not just a devops one. If you send out an agent that can autonomously pull packages, write code, or access creds, then the blast radius of a compromise widens. That's why I think there's an argument for least-privilege by default: agents should have scoped, auditable authority over what they can install and execute, with approval required for anything outside the boundaries.
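The "scoped, auditable authority" part can be as simple as a gate the agent must pass through before any install runs. A toy sketch (the allowlist contents and function name are hypothetical):

```python
# Hypothetical least-privilege gate for an autonomous agent:
# installs are allowed only for pre-approved packages; everything
# else is denied and left for human sign-off.
ALLOWED_PACKAGES = {"requests", "numpy"}

def approve_install(requirement: str) -> bool:
    """Check a pip-style requirement (e.g. 'numpy==1.26.4')
    against the scoped allowlist. Deny by default."""
    name = requirement.split("==")[0].strip().lower()
    return name in ALLOWED_PACKAGES

print(approve_install("numpy==1.26.4"))    # True
print(approve_install("litellm==1.82.7"))  # False: needs human approval
```

A real gate would also log every decision (the auditable part) and pin/hash-check the approved versions rather than trusting the name alone.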
Even still though, we can't really trust any open-source software any more that has third party dependencies, because the chains can be so complex and long it's impossible to vet everything.
It's just too easy to spam out open-source software now, which also means it's too easy to create thousands of infected repos with sophisticated and clever supply chain attacks planted deeply inside them. Ones that can be surfaced at any time, too. LLMs have compounded this risk 100x.
This is why software written in Rust scares me. Almost all Rust programs have such deep dependency trees that you really can't vet them all. The Rust and Node ecosystems are the worst for this, but Python isn't much better. IMO it's language-specific package managers that end up causing this problem because they make it too easy to bring in dependencies. In languages like C or C++ that traditionally have used system package managers the cost of adding a dependency is high enough that you really avoid dependencies unless they're truly necessary.
JS/TS: screams aloud "never do `npm install [package containing the entire world as a dependency]`".
Rust: just import everything, since Rust fixes everything.
When you design your package management and its doctrine like de facto JavaScript, you have failed like JavaScript.
However, the broader idea of supply chain attacks remains challenging and AI doesn’t really matter in terms of how you should treat it. For example, the xz-utils back door in the build system to attack OpenSSH on many popular distros that patched it to depend on systemd predates AI and that’s just the attack we know about because it was caught. Maybe AI helps with scale of such attacks but I haven’t heard anyone propose any kind of solution that would actually improve reliability and robustness of everything.
[1] Fully Countering Trusting Trust through Diverse Double-Compiling https://arxiv.org/abs/1004.5534
The kernel is not just open source, it's a very fast-moving codebase. That's how we win all wars against AI-authored exploits. While the LLM trains on our internal APIs, we change the APIs — by hand. When the agent finally submits its pull request, it gets lost in unfamiliar header files and falls into a state of complete non-compilability. That is the point. That is our strategy.
The possibilities of a capable threat actor could be catastrophic, especially if we assume nation-states are interested in sponsoring hacking attacks (which many nations already do) to attack enemy nations or gain leverage. We are looking at damage in the trillions at that point.
But I would assume Linux is safe for now; it might be the most-reviewed codebase out there, and that definitely counts for something.
LLVM might be a bit more interesting, as it might go a little less noticed, but hopefully the people working on LLVM are well funded enough to look at everything carefully and not have such a slip-up.
I've been through SOC2 certifications in a few jobs and I'm not sure it makes you bullet proof, although maybe there's something I'm missing?
But all it means in the end is you can read up on how a company works and have some level of trust that they're not lying (too much).
It makes absolutely zero guarantees about security practices, unless the documented processes make those guarantees.
Basically it fork-bombed `grep -r rpcuser\rpcpassword` processes trying to find crypto wallets or something. I saw that they spawned from the harness, and killed it.
Got lucky: no backdoor installed here, from what I could make out of the binary.
how do you do that? have Activity Monitor up at all times?