Posted by jakelsaunders94 6 days ago
And maybe updating container images with a mechanism like Renovate's `minimumReleaseAge` set to "7 days", or something similar!?
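As a sketch, assuming Renovate's `minimumReleaseAge` option (the successor to the older `stabilityDays`), a renovate.json could hold fresh image tags back for a week:

$ cat > renovate.json <<'EOF'
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "minimumReleaseAge": "7 days"
    }
  ]
}
EOF

Note that for Docker images this only helps where the registry exposes usable release timestamps.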
Then you need to run things with as little privilege as possible. Sadly, Docker and containers in general are an anti-pattern here because they're about convenience first, security second. So the OP should have run the containers as read-only, with tight resource limits, and ideally with IP restrictions on access if it's not a public service.
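As a rough sketch of that kind of lockdown (the image name, port, and limits are all placeholders):

# read-only rootfs, no capabilities, tight resource limits, loopback-only publish
$ docker run -d --read-only --tmpfs /tmp \
    --cap-drop ALL --security-opt no-new-privileges \
    --memory 256m --cpus 0.5 --pids-limit 100 \
    -p 127.0.0.1:8080:8080 \
    example/app:1.2.3

Binding to 127.0.0.1 and putting a reverse proxy in front is one way to get the IP restrictions mentioned above.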
Another thing you can do is use Tailscale, or something like it, to keep things behind a zero-trust, encrypted access model. Not suitable for public services, of course.
And a whole host of other things.
Easier said than done, I know.
Podman makes it easier to be more secure by default than Docker. OpenShift does too, but that's probably taking things too far for a simple self hosted app.
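As one concrete example of that default (image and port are placeholders): rootless Podman runs with no root daemon, so in-container root maps to an unprivileged UID on the host.

# runs entirely as a normal user; a container escape lands in your user namespace, not host root
$ podman run -d --name web --read-only -p 8080:8080 \
    docker.io/library/busybox:1.36 httpd -f -p 8080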
Re: the Internet.
Re: Peer-to-peer.
Re: Video streaming.
Re: AI.
Internet: The Cuckoo's Egg (nation state, more so than criminals maybe, but it's a blurry distinction)
AI: Elon Musk YouTube ads (often cryptocurrency scams)
Video streaming: that one was more obviously an exclusively-porn thing
https://www.statista.com/statistics/420400/spam-email-traffi...
That's the point: it's private by design, and unless they tell you, nobody will ever know how much they use it and for what. The true hacker spirit.
If you bother to look past news headlines, you will find a vibrant community of people paying for legal goods who value privacy over FUD and ignorance.
This kind of fearmongering is already leading us towards a cashless society, because "only criminals use it". This is Hacker News, not Facebook or Congress, so it should be obvious to everybody here what the end result of criminalizing/demonizing non-KYC payments will be (hint: look at China).
As opposed to email?
If I accepted your version of events, then you need to accept that you posted a link stating spam makes up 45% of email traffic. 45% is neither half nor the majority. I suggest you actually read your "hard data" before posting it.
You have to be a fool to think there is less spam two years later, but thank you for proving once again your clear bias and inability to read.
So... crypto is illegal, so anyone using it is de facto a criminal by definition.
Also, for this particular instance, this is the best bug bounty program I've ever seen. Running a Monero miner that hits your daily budget cap is not that bad... It could be way worse, like stealing your DB credentials and selling them to the highest bidder... So crypto actually made this better.
Luckily for me, the software I had installed[1] was in an LXC container running under Incus, so the intrusion never escaped the application environment, and the container itself was configured with low CPU priority so I didn't even notice it until I tried to visit the page and it didn't load.
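For reference, a rough sketch of that kind of containment in Incus (the instance name and limits are placeholders):

$ incus config set webapp limits.cpu.allowance 20%   # cap CPU time so a miner can't saturate the host
$ incus config set webapp limits.cpu.priority 0      # lowest scheduling priority when the host is under load
$ incus config set webapp limits.memory 512MiB       # hard memory cap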
I looked around a bit and it seemed like an SSH key had been added under the root user, and there were remote management agents of some kind installed. This container was running Alpine, so it was pretty easy to identify which processes didn't belong from a simple ps output of the remaining processes after shutting down the actual web application.
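For anyone who ends up in the same spot, a rough sketch of those checks from the host (instance name is a placeholder; Alpine's busybox ps is assumed):

$ incus exec webapp -- cat /root/.ssh/authorized_keys      # any keys you didn't put there?
$ incus exec webapp -- ps -o pid,user,args                 # anything running besides your app?
$ incus exec webapp -- ls -la /etc/crontabs /etc/periodic  # persistence via cron?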
In the end, I just scrapped the container, though I saved it in case I ever feel like digging around (probably not). I did learn some useful things:
- It's a good idea to assume your system will get taken over, so ensure it's isolated and suitably resource constrained (looking at you, pay-as-you-go cloud users).
- Make sure you have snapshots and backups; in my case I do daily ZFS snapshots in Incus, which makes rolling back to before the intrusion a breeze (see the sketch after this list).
- While ideally anything compromised should be scrapped, rolling back, locking it down and upgrading might be OK depending on the threat.
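A sketch of the snapshot side, assuming the current Incus CLI (instance name, schedule, and snapshot name are placeholders):

$ incus config set webapp snapshots.schedule @daily   # take a snapshot every day
$ incus config set webapp snapshots.expiry 14d        # prune snapshots after two weeks
$ incus snapshot list webapp                          # see what you can roll back to
$ incus snapshot restore webapp snap3                 # roll back to before the intrusion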
Regarding the miner itself:
- From what I could see in its configuration, it hadn't actually been configured correctly, so it's possible they run some kind of benchmark and just leave the system silently compromised if it's not "worth it"; they still have a way in to use it for other purposes.
- No attempt had been made at filesystem obfuscation, which is probably the only reason I discovered it at all. There were literally folders lying around in /root with the word "monero" in them; this could easily have been hidden.
- If they hadn't installed a miner and had just silently compromised the system, leaving whatever was running on it alone (or even doing a better job with CPU priority), I probably never would have noticed this.
- Hetzner firewall (because ufw doesn't work well with Docker, which writes its own iptables rules) to only allow public access to port 443.
- Self-hosted OpenVPN to access all private ports. I also self-host an additional WireGuard instance as a backup VPN.
- Cloudflare Access to protect `*.coolifydomain.com` by default. This would have helped protect the OP's Umami setup, since only the OP can access the Umami dashboard. Bypass rules can be created in Cloudflare Access for systems that need broader access, keyed on IP or domain.
- Cloudflare Access rules to only allow access to internal admin paths such as /wp-admin/ through my VPN IP (or via email OTP to specified email IDs).
- Traefik labels on docker-compose files in Coolify to add basic auth to internal services that can't be behind Cloudflare Access, such as self-hosted Prefect. This would also have put a login screen in front of Umami before an attacker saw the app.
- I host frontends only on Vercel or Cloudflare Workers and host the backend API on the Coolify server. Both of these were confirmed to never have been affected, thanks to the decoupling of application routing.
- Finally, a bash cron script running on the server every 5 minutes that monitors resources and sends me an alert via Pushover when usage is above defined thresholds (a minimal sketch follows this list). Monitoring and alerting are needed as well; security measures alone are not enough.
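A minimal sketch of such a monitor, assuming Pushover's message API; the thresholds, paths, and token/user values are placeholders:

#!/usr/bin/env bash
# Run from cron: */5 * * * * /usr/local/bin/resource-alert.sh
set -eu

LOAD_LIMIT=4    # alert when the 1-minute load average exceeds this
DISK_LIMIT=90   # alert when / usage exceeds this percentage

load=$(cut -d' ' -f1 /proc/loadavg)
disk=$(df --output=pcent / | tail -n1 | tr -dc '0-9')

alert() {
    curl -s --form-string "token=YOUR_APP_TOKEN" \
            --form-string "user=YOUR_USER_KEY" \
            --form-string "message=$1" \
            https://api.pushover.net/1/messages.json
}

# compare the (float) load average against the threshold
awk -v l="$load" -v max="$LOAD_LIMIT" 'BEGIN { exit !(l > max) }' \
    && alert "Load average $load exceeds $LOAD_LIMIT"
[ "$disk" -gt "$DISK_LIMIT" ] && alert "Disk usage ${disk}% exceeds ${DISK_LIMIT}%"
exit 0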
Even with all these steps, there will always be edge cases. That's the caveat of self-hosting but at the same time it's very liberating.
Meanwhile, companies think the cloud is looking after it... Anyhow, it is a real problem.
AWS explicitly spells this out in their Shared Responsibility Model page [0].
It is not your cloud provider's responsibility to protect you if you run outdated and vulnerable software. It's not their responsibility to prevent crypto-miners from running on your instances. It's not even their responsibility to run a firewall, though the major players at least offer one in some form (i.e., AWS Security Groups and ACLs).
All of that is on the customer. The provider guarantees the security *of* the cloud; the customer is responsible for security *in* the cloud.
[0] https://aws.amazon.com/compliance/shared-responsibility-mode...
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing
$ sudo ufw allow ssh
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw enable
As a user of iptables, this order makes me anxious. I locked myself out of servers many times by blocking everything first and then adding exceptions. I can see that it's different here, since the last command is what actually commits the rules.

Search engines try to fight slop results, with collateral damage mostly in small or even personal websites. Restaurants are happy to be on one platform only: Google Maps. Who needs an expensive website if you're on there and someone posts your menu as one of the pictures? (Ideally an old version, so the prices seem cheaper and you can't be pinned down for false advertising.)

Open source communities use GitHub, sometimes GitLab or Codeberg, instead of setting up a Forgejo (I host a ton of things myself but notice that the community effect is real, and I also moved away from self-hosting a forge). The cherry on top is when projects use Discord chats as documentation and a bug reporting "form". Privacy people use Signal en masse, while Matrix is still as niche as it was when I first heard of it. The binaries referred to as open source just because they're downloadable can be found on Hugging Face; even the big players use that exclusively, AFAIK. Some smaller projects may be hosted on GitHub, but I have yet to see a self-hosted one. Static websites go on (e.g. GitHub) Pages, and back-ends are put on Firebase.

Instead of a NAS, individuals as well as small businesses use a storage service like OneDrive or iCloud. Some more advanced users may put their files on Backblaze B2. Those who newly dip their toes into self-hosting increasingly use a relay server to reach their own network, not because they need it but to avoid dealing with port forwarding or setting up a way to privately reach internal services. Security cameras are another good example of this: you used to install one, set a password, and forward the port so you could watch it from outside the home. Nowadays people expect that it "just works" on their phone when they plug it in, no matter where they are. That this relies on Google/Amazon, and that those companies can watch all the feeds, is acceptable for the convenience.

And that's all not even mentioning the death of the web: people no longer use websites the way they were meant to be used (as hyperlinked pages) but instead treat an LLM as their one-stop shop.
Not that the increased convenience, usability, and thus universal accessibility of e.g. storage and private chats is necessarily bad, but the trend doesn't seem to be heading the way you seem to think it is.
I can't think of any example of something that became more often self-hosted, rather than less, across the last 1, 5, or 10 years.
If you see a glimmer of hope for the distributed internet, do share, because I increasingly feel like the last person among my friends who hosts their own stuff.
I've been on the receiving end of attacks that were reported to be more than 10 Tbps. I couldn't imagine how I would deal with that if I didn't have a third party providing such protection; it would require millions of dollars a year in transit contracts alone.
There is an increasing amount of software that attempts to reverse this, but as someone from https://thingino.com/ said: open source is riddled with developers who starved to death (nobody donates to open source projects).
Yikes. I would still recommend a server rebuild. That is _not_ a safe configuration in 2025, whatsoever. You very likely have a much better-engineered persistent infection on that system.
The right thing to do is to roll out a new server (you have a declarative configuration, right?), migrate pure data (or better, restore it from the latest backup), and take the attacked machine off the internet to do a full audit: both to learn what compromises there are for the future, and to inform users of the IoT platform if their data has been breached. In some countries you are even required by law to report breaches. IANAL, of course.
Examples like this are why I don't run a VPS.
I could definitely do it (I have, in the past), but I’m a mediocre admin, at best.
I prefer paying folks that know their shit to run my hosting.