
Posted by jakelsaunders94 6 days ago

I got hacked: My Hetzner server started mining Monero (blog.jakesaunders.dev)
604 points | 409 comments
3np 6 days ago|
> I also enabled UFW (which I should have done ages ago)

I recommend against UFW.

firewalld is a much better pick these days and will not grow unmaintainable the way UFW rules can.

    firewall-cmd --permanent --set-default-zone=block
    firewall-cmd --permanent --zone=block --add-service=ssh
    firewall-cmd --permanent --zone=block --add-service=https
    firewall-cmd --permanent --zone=block --add-port=80/tcp
    firewall-cmd --reload
Configuration is backed by XML files in /etc/firewalld and /usr/lib/firewalld instead of the brittle pile of sticks that is ufw's rules files. Use the nftables backend unless you have your own reasons for needing legacy iptables.

Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.

Newer versions of firewalld give an easy way to configure this via StrictForwardPorts=yes in /etc/firewalld/firewalld.conf.
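For reference, the whole change is one setting plus a restart (assuming a firewalld release recent enough to ship the option):

    # /etc/firewalld/firewalld.conf
    StrictForwardPorts=yes

    systemctl restart firewalld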

dizhn 6 days ago||
If you can, do not expose ports like "8080:8080"; do "192.168.0.1:8080:8080" instead, so it's bound to a private IP. Then use any old method to expose only what you want to the world.
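In compose terms it's just the host-IP prefix on the port mapping (a sketch; the image name and IP are placeholders):

    services:
      app:
        image: myapp:latest            # hypothetical image
        ports:
          - "192.168.0.1:8080:8080"    # only reachable via that private IP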

In my own use I have 10.0.10.11 on the VM that hosts my docker stuff. It doesn't even have its own public IP, meaning I could actually expose on 0.0.0.0 if I wanted to, but things might change in the future, so it's a precaution. That IP is only accessible via wireguard and by the other machines that share the same subnet, so reverse proxying with caddy on a public IP is super easy.

andix 6 days ago|||
It's really a trap. I'm surprised they never changed the default to 127.0.0.1 instead of 0.0.0.0, so you would need to explicitly specify it if you want to bind to all interfaces.
dizhn 6 days ago||
The reason is convenience. There would be a lot more friction if they didn't do it like this for everything other than local development.

Docker also has more traps, and not quite as obvious as this one. For example, it can change the private IP block it's using without telling you. I got hit by this once due to a clash with a private block I was using for some other purpose. There's a way to fix it in the config, but it won't affect already-created containers.

By the way. While we're here. A public service announcement. You probably do NOT need the userland-proxy and can disable it.

/etc/docker/daemon.json

{ "userland-proxy": false }

hypeatei 6 days ago||
Is there a guide that lists some common options / gotchas in Docker like this?

Some quick searching yields generic advice about keeping everything updated or running in rootless mode.

dizhn 6 days ago||
Not that I'm aware of. Sorry. Here's one of my daemon.json files though. It tames the log file size and sets its format, and fixes the IP block so it won't change like I mentioned above.

  {
    "log-driver": "json-file",
    "log-opts": {
      "labels": "production_status",
      "tag": "{{.ImageName}}|{{.Name}}|{{.ImageFullID}}|{{.FullID}}",
      "env": "os,customer",
      "max-size": "10m"
    },
    "bip": "172.17.1.1/24",
    "default-address-pools": [
      {"base": "172.17.0.0/16", "size": 24}
    ]
  }
zwnow 6 days ago|||
Yup, the regular "8080:8080" bind resulted in a ransom note in my database on day 1. Bound it to localhost only now.
andix 6 days ago|||
I had the same experience (postgres/postgres on default port). It took me a few days to find out, because the affected database was periodically re-built from another source. I just noticed that for some periods the queries failed until the next rebuild.
zwnow 6 days ago||
Yea, plenty of bots scouting the internet for these kinds of vulnerabilities. Good learning experience, won't happen again :D
szszrk 6 days ago|||
One thing I always forget about is that you have a whole network of 127.0.0.0/8, not just one IP.

So you can create multiple addresses with multiple separate "domains" mapped statically in /etc/hosts, and allow multiple apps to listen on "the same" port without conflicts.
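For example (hostnames made up):

    # /etc/hosts
    127.0.0.10  app-one.local
    127.0.0.11  app-two.local

Anything bound to 127.0.0.10:8080 answers on app-one.local:8080, and a second app can take app-two.local:8080 without any conflict.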

dietr1ch 4 days ago|||
Unlike IPv6, where localhost is just the [::1] address. I'm not sure if you can abuse IPv4-in-IPv6 to do the same.
chasd00 5 days ago|||
I never thought of using localhost like that; I'm surprised that works, actually. Typically, if you want a private /8 you would use 10.0.0.0/8, but the standard 192.168.0.0/16 gives you a lot of address space (2^16 - 2 usable IPs) too.

..actually this is very weird. Are you saying you can bind to 127.0.0.2:80 without adding a virtual IP to the NIC? So the concept of "localhost" is really an entire class A network? That sounds like a network stack bug to me heh.

edit: yeah my route table on osx confirms it. very strange (at least to me)

szszrk 5 days ago|||
That was deliberate. Works on Linux and Windows as well. I think this is the current RFC: https://datatracker.ietf.org/doc/html/rfc5735

You can do:

    python3 -m http.server -b 127.0.0.1 8080
    python3 -m http.server -b 127.0.0.2 8080
    python3 -m http.server -b 127.0.0.3 8080

and all will be available.

Private network ranges don't really have the same purpose: they can be routed, you always have to consider conflicts, and so on. But here with 127/8 you are in your own world and you don't worry about anything. You can also do tests where you need to expose more than 65k ports :)

You also have to remember these things were established likely before DNS was even a thing; IP space was considered so big that anyone could have a huge chunk of it, and it was mostly managed manually.

dizhn 5 days ago|||
I didn't really know the mechanism of how this worked, but if you check your resolv.conf you might find that the nameserver IP for your localhost is 127.0.0.53. It is so in recent Linux distros. (It's a systemd-resolved thing.)
exceptione 6 days ago|||

  > Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. 
Like I said in another comment, drop Docker, install podman.
kh_hk 6 days ago|||
I keep reading comments by podman fans asking people to drop Docker, and yet every time I have tried to use podman it has failed on me miserably. IMHO it would be better if podman was not designed and sold as a docker drop-in replacement but as its own thing.
exceptione 6 days ago||
That sucks, I never had any problem running a Dockerfile in podman. I don't know what I do differently, but as a principle I would filter out any container that messes with stuff like docker-in-docker. Podman doesn't need those kinds of shenanigans.

Also, the Docker Compose tool is a well-known exception to the compatibility story. (There is an unofficial podman-compose tool, but it is not feature-complete, and quadlets are better anyway.)

I agree with approaching podman as its own thing though. Yes, you can build a Dockerfile, but buildah lets you build an efficient OCI image from scratch without needing root. For those interested, this document¹ explains how buildah compares to podman and docker.

1. https://github.com/containers/buildah/tree/main/docs/contain...

Nelkins 6 days ago|||
There's a real dearth of blog posts explaining how to use quadlets for the local dev experience, and actually most guides I've seen seem to recommend using podman/Docker compose. Do you use quadlets for local dev and testing?
3np 3 days ago||
Quadlets aren't what I'd personally use for local dev. They are good for running a local headless persistent service. So I wouldn't use it for your service-under-test but they can be a good fit for supporting dev tools like a local package registry, proxy or VPN gateway.

The docs you need for quadlets are basically here: https://docs.podman.io/en/latest/markdown/podman-systemd.uni...

The one gotcha I can think of not mentioned there is that if you run it as a non-root user and want it to run without logging in as that user, you need to: `sudo loginctl enable-linger $USER`.

If you don't vibe with quadlets, it's equally fine to do a normal systemd .service file with `ExecStart=podman run ...`, which quadlets are just convenience sugar for. I'd start there and then return to quadlets if/when you find that becomes too messy. Don't add new abstraction layers just because you can if they don't help.
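For a concrete picture, here's a minimal quadlet sketch (podman >= 4.4; the name and image are made up) dropped into ~/.config/containers/systemd/registry.container:

    [Unit]
    Description=Local dev package registry

    [Container]
    Image=docker.io/library/registry:2
    PublishPort=127.0.0.1:5000:5000

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target

After a `systemctl --user daemon-reload`, it shows up as a generated registry.service you can start and enable like any other unit.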

If you have a more complex service consisting of multiple containers you want to schedule as a single unit, it's also totally fine to combine systemd and compose by having `ExecStart=podman compose up ...`.

Do you want it to run silently in the background with control over autorestarts and log to system journal? Quadlets/systemd.

Do you want to have multiple containers scheduled together (or just prefer it)? Compose.

Do you want to manually invoke it and have the output in a terminal by default? CLI run or compose.

kh_hk 5 days ago||||
A side-effect of running rootless and daemonless is that containers stop on user logout, and I can't believe a newcomer is expected to parse all of this. Because I thought the whole point of containers in production was for them to keep running when you log out.

Of course, when you think about it, nobody expects a command to just survive logging out, but coming from docker, you still have that expectation. And I wonder, am I supposed to be running this in a tmux like the old days? No, you need to do a bunch of systemd/linger/stuff. So, being that we are already in systemd land, you keep searching and end up at quadlets, which are a new-ish thing with (last I checked) bad docs, replacing whatever was used before (which has good docs). Docs, it must be said, that give you k8s PTSD. Quadlets, podlet and pods.

It seems that when podman deviates from docker, it does so in the least ergonomic way possible. Or maybe I have been lobotomized by years and years of using docker, or maybe my patience threshold is very low nowadays. But this has been my experience. I felt very stupid when I deployed something and it stopped after 5 minutes. I was ready to use podman, because it worked locally. And then it failed in production. Thanks, no.

tremon 5 days ago|||
> A side-effect of running rootless and daemonless is that containers stop on user log out

This is not a side effect of running rootless, it's a side effect of running systemd (or rather, systemd-logind).

exceptione 5 days ago|||

  loginctl enable-linger
kh_hk 5 days ago||
Yes, but I want it to only apply to podman, not any running task.

    systemctl --user enable podman.socket
    loginctl enable-linger <USER>

?
exceptione 5 days ago||
You should compare it imho to ssh. If you break your connection, your session is gone. So if you only want certain parts of your session to survive, which ones should? Because maybe your container depends on avahi on the host, or cups, or...?

Just a random thought, but if you can create a user on the host that has only the minimal set of systemd services your container needs enabled, you could apply it to that user.

But still, on a server that wouldn't make much sense imho, as the default user is usually the service user with a minimal set of services enabled. On a desktop, your default user is logged in anyway. So I think this isn't a real problem tbh.

osigurdson 6 days ago|||
I just use Podman's Kubernetes YAML support as a compose substitute when running everything locally. This way it is fairly similar to production. Docker compose seems very niche to me now.
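It's roughly this (a sketch; the pod spec is whatever minimal YAML fits your stack):

    # pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: devstack
    spec:
      containers:
        - name: db
          image: docker.io/library/postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: dev

    podman kube play pod.yaml    # bring it up
    podman kube down pod.yaml    # tear it down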
dns_snek 6 days ago||||
podman is not a drop-in replacement for Docker. You can replace it with podman but expect to encounter minor inconsistencies and some major differences, especially if you use Docker Compose or you want to use podman in rootless mode. It's far from just being a matter of `alias docker=podman`.

The only non-deprecated way of having your Compose services restart automatically is with Quadlet files, which are systemd unit files with extra container-specific options. You need to manually translate your docker-compose.yml into one or more Quadlet files. Documentation for those leaves a lot to be desired too; it's just one huge itemized man page.

3np 6 days ago||||
This affects podman too.
jsheard 6 days ago||
Not if you run it in rootless mode, which is more of a first class citizen in Podman compared to Docker.
3np 6 days ago||
> Not if you run it in rootless mode.

Same as for docker, yes?

https://docs.docker.com/engine/security/rootless/

wasmitnetzen 6 days ago|||
Rootless exists in Docker, yes, but as OP said, it's not first-class. The setup process is clunky and things break more often. In podman it just works, and podman is leading with features like quadlets, which make container services just services like any other.
newsoftheday 5 days ago||
No one wants, nor asked for, quadlets.
exceptione 6 days ago|||
nope. You should look at https://docs.docker.com/engine/network/

Networking is just better in podman.

joshuaissac 6 days ago||
> nope. You should look at https://docs.docker.com/engine/network/

That page does not address rootless Docker, which can be installed (not just run) without root, so it would not have the ability to clobber firewall rules.

newsoftheday 5 days ago||||
Nothing in the article talked about podman or podman vs docker. Umami, with its Next.js and React CVE vulnerability, was the issue. BTW, I use Docker because it works extremely well, and because there is so much astroturfing from the podman gang I wouldn't use it if my life depended on it until that shit calms down.
figassis 6 days ago||||
In docker, simply define the interface (IP) and port explicitly. It can be 0.0.0.0:80, for example. No bypass happens.
newsoftheday 5 days ago|||
No, I'm happy with Docker, Docker works very well.
gus_ 6 days ago|||
It doesn't matter what netfilter frontend you use if you allow outbound connections from any binary.

In order to stop these attacks, restrict outbound connections from unknown / not-allowed binaries.

This kind of malware in particular requires outbound connections to the mining pools. Others download scripts or binaries from remote servers, or try to communicate with their C2 servers.

On the other hand, removing exec permissions from /tmp, /var/tmp and /dev/shm is also useful.
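For the record, the noexec part is just a couple of mount options (a sketch assuming a tmpfs-backed /tmp; note some software expects exec on these paths, so test first):

    # /etc/fstab
    tmpfs  /tmp      tmpfs  defaults,noexec,nosuid,nodev  0 0
    tmpfs  /dev/shm  tmpfs  defaults,noexec,nosuid,nodev  0 0

    # or, without a reboot:
    mount -o remount,noexec,nosuid,nodev /tmp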

bostik 6 days ago|||
> On the other hand, removing exec permissions to /tmp, /var/tmp and /dev/shm is also useful.

Sadly that's more of a duct tape or plaster fix, because any serious malware will launch its scripts with the proper '/bin/bash /path/to/dropped/payload' invocation. A non-exec mount works reasonably well only against actual binaries dropped into those paths, because it's much less common to launch them with the lesser-known '/bin/ld.so /path/to/my/binary' stanza.

I've also at one time suggested that Debian installer should support configuring a read-only mount for /tmp, but got rejected. Too many packaging scripts depend on being able to run their various steps from /tmp (or more correctly, $TMPDIR).

gus_ 6 days ago||
I agree. That's why I said that it's also useful. It won't work in all scenarios, but in most of the cryptomining attacks, files dropped to /tmp are binaries.
crote 6 days ago||||
It is really unfortunate that a lot of services expect to have write access to their config files, so you can tweak settings with a web UI.

If this weren't the case, plenty of containers could probably have a fully read-only filesystem.

3abiton 6 days ago||||
Is there an automated way of doing this?
3np 6 days ago|||
Two paths:

- Configuration management (ansible, salt, chef, puppet)

- Preconfigured images (NixOS, packer, Guix, atomic stuff)

For a one-off: pssh

gus_ 6 days ago|||
Restricting outbound connections by binary: OpenSnitch.

You can also restrict outbound connections to cryptomining pools and malicious IPs, for example by using IOCs from VirusTotal or URLhaus (urlhaus.abuse.ch).

Aeolun 6 days ago|||
Wasn’t there that npm malware thing a while ago that trashed your home folder if it couldn’t phone home?
mort96 5 days ago|||
I strongly disagree: firewall-cmd is way too complicated. I mean, it's probably fine if your main job is being a firewall administrator, but for those of us who just need to punch a hole in a firewall as a tiny necessary prerequisite for what we actually want to do, it's just too much.

On ufw systems, I know what to do: if a port I need open is blocked, I run 'ufw allow'. It's dead simple, even I can remember it. And if I don't, I run 'ufw --help' and it tells me to run 'ufw allow'.

Firewall-cmd though? Any time I need to punch a hole in the firewall on a Fedora system, I spend way too long reading the extremely over-complicated output of 'firewall-cmd --help' and the monstrous man page, before I eventually give up and run 'sudo systemctl disable --now firewalld'. This has happened multiple times.

If firewalld/firewall-cmd works for you, great. But I think it's an absolutely terrible solution for anyone whose needs are, "default deny all incoming traffic, open these specific ports, on this computer". And it's wild that it's the default firewall on Fedora Workstation.

peanut-walrus 6 days ago|||
Personally I find just using nftables.conf straightforward enough that I don't really understand the need for anything additional. With iptables, it was painful, but iptables has been deprecated for a while now.
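A whole workstation policy fits on one screen. A minimal sketch (adjust the ports to taste):

    #!/usr/sbin/nft -f
    flush ruleset

    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;
            iif lo accept
            ct state established,related accept
            tcp dport { 22, 80, 443 } accept
            icmp type echo-request accept
            icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert } accept
        }
    }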
DaSHacka 6 days ago||
Same here. I'm surprised most Linux users I know like to install firewalld, UFW, or some other overlaying firewall rather than just editing the nftables config directly. It's not very difficult, although I've never really dug deep into the weeds of iptables. I suspect many people who used iptables long ago assume nftables is similar and avoid interacting with it directly out of habit.
arein3 5 days ago||
With nftables you need to learn a lot before you can be even partially sure of what you're doing.

With the ufw GUI you need a single checkbox - block incoming connections.

DaSHacka 5 days ago||
Not sure what you find difficult about it, but I just took the "workstation" config from the gentoo wiki and used it on my laptop.

Perhaps if you're doing more complicated things like bridging interfaces or rerouting traffic it would be more difficult to use than the alternatives, but for a simple whitelist it's extremely easy to configure and modify.

lloydatkinson 6 days ago|||
> Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.

This sounds like great news. I followed some of the open issues about this on GitHub and it never really got a satisfactory fix. I found some previous threads on this "StrictForwardPorts": https://news.ycombinator.com/item?id=42603136.

PeterStuer 6 days ago|||
Hetzner has a free firewall service outside of your machine. You can use that as the first line of defence.
newsoftheday 5 days ago|||
It's a good idea. At OCI, I have the VCN firewall enabled and the ufw firewall enabled within my VPSes.
nvarsj 6 days ago||||
The problem with Hetzner's firewall service is that it nukes network performance, especially on IPv6.
addandsubtract 6 days ago||
It also killed my docker networking, so portainer stopped working.
reddalo 6 days ago|||
That's what I use. Is it enough? Or should I also install a firewall on my machine?
ps 6 days ago|||
Do both. Using the provider's firewall service adds another level of defence. But hiccups may occur and firewall rules may briefly disappear (sync issues, upgrades, VM mobility issues), and your services may then become exposed. Happened to me in the past; I was "lucky" enough that no damage was done.
bardsore 6 days ago|||
Security in layers, I'd do both.
sph 6 days ago|||
The problem with firewalld is that it has the worst UX of any program I know. Completely unintuitive options, the program itself doesn't provide any useful help or hints if you get anything wrong, and the documentation is so awful you have to consult the Red Hat manuals that have thankfully been written for those companies that pay thousands per month for support.

It’s not like iptables was any better, but it was more intuitive as it spoke about IPs and ports, not high-level arbitrary constructs such as zones and services defined in some XML file. And since firewalld uses iptables/nftables underneath, I wonder why do I need a worse leaky abstraction on top of what I already know.

I truly hate firewalld.

bingo-bongo 6 days ago|||
Coming from FreeBSD and pf, all Linux firewalls I've tried feel clunky _at best_ UX-wise.

I’d love a Linux firewall configured with a sane config file and I think BSD really nailed it. It’s easy to configure and still human readable, even for more advanced firewall gateway setups with many interfaces/zones.

I have no doubt that Linux can do all the same stuff feature-wise, but oh god, the UX :/

adrian_b 6 days ago|||
I completely agree.

I have been using both Linux and FreeBSD for many decades, on many kinds of computers.

When comparing Linux with FreeBSD, I probably do not find anything more annoying on Linux than its networking configuration tools.

While I am using Linux on my laptops and desktops and on some servers with computational purposes, on the servers that host networking services I much prefer FreeBSD, for the ease of administration.

ptman 6 days ago||||
nftables is configured like that: https://wiki.nftables.org/wiki-nftables/index.php/Simple_rul...
Hendrikto 6 days ago|||
Have you tried nftables? It is so much nicer than iptables.
bingo-bongo 5 days ago||
Yeah, I'm already using nftables and I agree that it's better than e.g. iptables (or the numerous frontends for iptables) and probably the best bet we have at this point - but honestly, it's still far from the UX I get from pf, unfortunately :/
ps 6 days ago|||
Hate it as well. Why should I bother learning about zones that abstract away ports, addresses, interfaces etc., only to find out pretty soon that my bare-metal server actually always needs fine-grained rules, at least from firewalld's point of view.
lofaszvanitt 5 days ago|||
Why do people use these abstractions over nftables? Use nftables: it's easy to learn, it's effective, and if you know it you know everything about firewalls.
crote 6 days ago|||
Do either of them work for container-to-container traffic?

Imagine container A, which exposes tightly-coupled services X and Y. Container B should be able to access only X; container C should be able to access only Y.

For some reason there just isn't a convenient way to do this with Docker or Podman. Last time I looked into it, it required manually juggling the IP addresses assigned to the container and having the service explicitly bind to them - which is just needlessly complicated. Can firewalls solve this?

itintheory 5 days ago||
I can't answer your question about Docker or Podman, but in Kubernetes there is the NetworkPolicy API, which is designed for exactly this use-case. I'm sure it uses Linux-native tooling (iptables, nftables, etc.) under the hood, so it's at least within the realm of the feasible that those tools can be used for this purpose.
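For illustration, a NetworkPolicy along these lines (labels and port are hypothetical) would let only B reach X on container A:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-b-to-x-only
    spec:
      podSelector:
        matchLabels:
          app: container-a
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: container-b
          ports:
            - protocol: TCP
              port: 8080    # service X's port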
arminiusreturns 6 days ago|||
UFW and Firewall-CMD both just use iptables in that context though. The real upgrade is in switching to nftables. I know I'm going to need to learn eBPF as the next step too, but for now nftables is readable and easy to grok, especially after you rip out the iptables stuff - though technically nftables is still using netfilter.

And ufw supports nftables, btw. I think the real lesson is: write your own firewalls and make them non-permissive - then just template that shit with CaC.

arminiusreturns 1 day ago||
edit: Apparently because I haven't used firewall-cmd in a long time, I was wrong on this: firewall-cmd does indeed use nftables by default.
skirge 6 days ago|||
also docker bypasses ufw
Ey7NFZ3P0nzAe 6 days ago|||
You might be interested in ufw-docker: https://github.com/chaifeng/ufw-docker
newsoftheday 5 days ago|||
I've not used firewalld but I have used ufw on my desktops and servers going back to 2006 and can guarantee I have no plans to change from it.
rglover 6 days ago|||
One of those rare HN comments that's just pure gold.
newsoftheday 5 days ago||
The "don't use ufw, use firewalld" comment? I disagree. I've used ufw since 2006 and have no plans to change. Works great.
denkmoon 6 days ago|||
I'll just mention Foomuuri here. It's a bit of a spiritual successor to Shorewall and has firewalld emulation to work with tools compatible with firewalld.
3np 6 days ago|||
Thanks! Would be cool to have it packaged for Alpine, since firewalld requires D-Bus. There is awall, but that's still on iptables and IMO a bit clunky to set up.
egberts1 6 days ago|||
Foomuuri is ALMOST there.

I mean, there are some payload-over-payload cases like GRE VPE/VXLAN/VLAN or IPsec that need to be written in raw nft if using Foomuuri, but it works!

But I love the Shorewall approach, and its configuration gracefully encapsulates the Shorewall mechanics.

Disclaimer: I maintain the vim-syntax-nftables syntax highlighter repo on GitHub.

arein3 5 days ago|||
Does it have a GUI?
kunley 6 days ago||
..backed by xml?
esaym 6 days ago||
So this is part of the "React2Shell" CVE-2025-55182 issue? I find it interesting that this seems to get so little publicity. Almost like the issue is normal or expected. And it looks like the affected versions go back a little over a year. So if you've deployed anything with Next.js over the last 12 months, your web app is now probably part of a million-node botnet. And everyone's advice is just "use docker" or "install a firewall".

I'm not even sure what to say, or think, or even how to feel about the frontend ecosystem at this point. I've been debating leaving the whole "web app" ecosystem as my main employment venture and applying to some places requiring C++. C++ seems much easier to understand than whatever the latest frontend fad is. /rant

syhol 6 days ago||
Frontend churn has chilled out so much over the last few years. The default webapp stack today has been the same for 5 years now: Next.js (9yo), React (12yo), Tailwind (8yo), Postgres (36yo). I'm not endorsing this stack, it just seems to be the norm now.

Compare that to what we had in the late 00's and early 10's: we went through prototype -> mootools -> jquery -> backbone -> angularjs -> ember -> react, all in about 6 years. That's a new recommended framework every year. If you want to complain about fads and churn, hop on over to AI development; they have plenty.

lobsterthief 5 days ago||
I remember that. To be honest it was exhausting. Fun, but exhausting. It’s nice now to have found a stack that is “just fine” for most things.
hypeatei 6 days ago|||
You can write web apps without touching the hottest JS framework of the week. I've never touched these frameworks that try to blur the line between frontend and backend.

Pick a solid technology (.NET, Java, Go, etc...) for the backend and use whatever you want for your frontend. Voila, fewer CVEs and less churn!

reedlaw 6 days ago|||
I had a Pangolin instance compromised by this: https://github.com/orgs/fosrl/discussions/2014
h33t-l4x0r 6 days ago|||
I'm hearing about it like crazy because I deployed around 100 Next frontends in that time period. I didn't use server components though so I'm not affected.
mnahkies 6 days ago||
My understanding of the issue is that even if you don't use server components, you're still vulnerable.

Unless you're running a static HTML export - e.g. not running the Next.js server, but serving through nginx or similar.

abustamam 5 days ago||
Yeah, crucially it says

> If your app’s React code does not use a server, your app is not affected by this vulnerability. If your app does not use a framework, bundler, or bundler plugin that supports React Server Components, your app is not affected by this vulnerability.

https://react.dev/blog/2025/12/03/critical-security-vulnerab...

So if you have a backend that supports RSC, even if you don't use it, you can still be vulnerable.

GP said they only shipped front ends but that can mean a lot.

Edit: link

azemetre 5 days ago||
They might be referring to another Vercel vulnerability that allowed anyone to bypass their auth with relative ease due to poor engineering practices:

https://nvd.nist.gov/vuln/detail/CVE-2025-29927

That plus the most recent React one, and you have a culture that does not care about its customers but rather chases fads to further greedy careers.

newsoftheday 5 days ago|||
For my Java based sites, I use HTML/CSS/JS (vanilla js), no frameworks.
miohtama 5 days ago||
Use Svelte? (:
tgtweak 6 days ago||
Just a note - you can very much limit CPU usage on docker containers by setting --cpus="0.5" (or cpus: 0.5 in docker compose) if you expect it to be a very lightweight container. This isolation can help prevent one rowdy container from hitting the rest of the system, regardless of whether it's crypto-mining malware, a DDoS attempt, or a misbehaving service/software.
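In compose that's just (a sketch; the image is a placeholder):

    services:
      app:
        image: myapp:latest    # hypothetical
        cpus: "0.5"            # at most half a core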
tracker1 6 days ago||
Another is running containers in read-only mode, assuming they support this configuration... will minimize a lot of potential attack surface.
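Roughly like this (a sketch; the image is a placeholder, and many apps still need a writable /tmp):

    docker run --rm --read-only \
      --tmpfs /tmp:rw,noexec,nosuid,size=64m \
      myapp:latest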
3eb7988a1663 6 days ago||
Never looked into this. I would expect the majority of images would fail in this configuration. Or am I unduly pessimistic?
hxtk 6 days ago|||
Many fail if you do it without any additional configuration. In Kubernetes you can mostly get around it by mounting `emptyDir` volumes to the specific directories that need to be writable, `/tmp` being a common culprit. If they need to be writable and have content that exists in the base image, you'd usually mount an emptyDir to `/tmp` and copy the content into it in an `initContainer`, then mount the same `emptyDir` volume to the original location in the runtime container.

Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].

I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.

1: https://github.com/kubernetes/kubernetes/issues/48912

flowerthoughts 6 days ago||||
Readonly and rootless are my two requirements for Docker containers. Most images can't run readonly because they try to create a user in some startup script. Since I want my UIDs unique to isolate mounted directories, this is meaningless. I end up having to wrap or copy Dockerfiles to make them behave reasonably.

Having such a nice layered buildsystem with mountpoints, I'm amazed Docker made readonly an afterthought.

subscribed 6 days ago||
I like steering docker runs with docker-compose, especially with .env files - easy to store in repositories, easy to customise and have sane defaults.
flowerthoughts 6 days ago||
Yeah agreed. I use docker-compose. But it doesn't help if the Docker images try to update /etc/passwd, or force a hardcoded UID, or run some install.sh at runtime instead of buildtime.
tracker1 5 days ago||||
It's hit or miss... you sometimes have to make /tmp or another data directory writable, and some images just don't operate right because of initialization steps that happen on first run. It depends, but a lot of your own apps can definitely be made to work with a limited, or even no, writable surface.
s_ting765 6 days ago|||
Depends on the specific app's use case. Nginx doesn't work with it, but Valkey will.
freedomben 6 days ago|||
This is true, but it's also easy to set at one point and then later introduce a bursty endpoint that ends up throttled unnecessarily. Always a good idea to be familiar with your app's performance profile but it can be easy to let that get away from you.
moebrowne 6 days ago|||
While this is a good idea, I wonder if doing this could allow the intrusion to go undetected for longer - how many people/monitoring systems would notice a small increase in CPU usage compared to all CPUs being maxed out?
miladyincontrol 6 days ago|||
Soft and hard memory limits are worth considering too, regardless of container method.
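e.g. with docker run (the values are just examples):

    # hard cap at 512 MiB, soft reservation at 256 MiB
    docker run --memory=512m --memory-reservation=256m myapp:latest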
jakelsaunders94 6 days ago|||
This is a great shout actually. Thanks for pointing it out!
fragmede 6 days ago||
The other thing to note is that docker is, for the most part, stateless. So if you're running something that has to deal with questionable user input (images and video, or more importantly PDFs), stick it on its own VM, cycle the docker container every hour and the VM every 12, and then still be worried about it getting hacked and leaking secrets.
Koffiepoeder 6 days ago|||
If I can get in once, I can do it again an hour later. I'd be inclined to believe that dumb recycling is not very effective against a persistent attacker.
Saris 4 days ago||
I wonder if a crypto miner like this was a person doing the work, or just an automated thing someone wrote to scan IPs for known vulnerabilities and exploit them automatically.
tgtweak 6 days ago|||
Most of this is mitigated by running docker in an LXC container (like Proxmox does), which grants a lot more isolation than docker on its own - closer in nature to running separate VMs.
butvacuum 6 days ago||
Too bad it straight-up doesn't work without heavy mods in PVE 9.
tgtweak 5 days ago||
Illumos had a really nice stack for running containers inside jails and zones... I wonder if any of that ever made it into the linux world. If you broke out of the container you'd just be inside a jail which is even more hardened.
cyphar 5 days ago||
SmartOS constructed a container-like environment using LX-branded zones, they didn't create an in-kernel equivalent to Linux's namespaces which it then nested in a zone. You're probably thinking of the KVM port to Solaris/illumos, which does run in a zone internally to provide additional protection.

While LX-branded zones were a really cool tech demo, maintaining compatibility with Linux long-term would be incredibly painful and you're bound to find all sorts of horrific bugs in production. I believe that Oxide uses KVM to run their Linux guests.

Linux has always supported nested namespaces and you can run Docker containers inside LXC (or Incus) fairly easily. Note that while it does add some additional protection (in particular, it transparently adds user namespaces which is a critical security feature most people still do not enable in Docker) it is still the same technology as containers and so kernel bugs still pose a similar risk.

tgtweak 5 days ago||
Yes it was SmartOS - bcantrill worked on it post-oracle. I remembered Illumos since it was the precursor.
danparsonson 6 days ago||
No firewall! Wow that's brave. Hetzner will let you configure one that runs outside of the box so you might want to add that too, as part of your defense in depth - that will cover you if you make a mistake with ufw. Personally I keep SSH firewalled only to my home address in this way; if I'm out and about and need access, I can just log into Hetzner's website and change it temporarily.
tete 6 days ago||
Firewalls in the majority of cases don't get you much. Yes, they're a last line of defense if you do something really stupid and don't even know where or what you configure your services to listen on, but if you don't, the difference between running a firewall and not is minuscule.

There are way more important things, like actually knowing that you are running software with a widely known RCE that doesn't even use established mechanisms to sandbox itself, it seems.

The way the author describes it, docker being the savior appears to be sheer luck.

danparsonson 6 days ago|||
The author mentioned they had other services exposed to the internet (Postgres, RabbitMQ) which increases their attack surface area. There may be vulnerabilities or misconfigurations in those services for example.

Good security is layered.

seszett 6 days ago||
But if they have to be exposed then a firewall won't help, and if they don't have to be exposed to the internet then a firewall isn't needed either; just configure them not to listen on non-local interfaces.
spoaceman7777 6 days ago||
This sounds like an extremely effective foot gun.

Just use a firewall.

seszett 6 days ago|||
I'm not sure what you mean, what sounds dangerous to me is not caring about what services are listening to on a server.

The firewall is there as a safeguard in case a service is temporarily misconfigured, it should certainly not be the only thing standing between your services and the internet.

newsoftheday 5 days ago||
A firewall is a safeguard, period. Like the firewall between the driver and engine in a car.
vultour 6 days ago|||
If you're at a point where you are exposing services to the internet but you don't know what you're doing, you need to stop. Choosing what interface to listen on is one of the first configuration options in pretty much everything; if you're putting in 0.0.0.0 because that's what you read on some random blogspam "tutorial", then you are nowhere near qualified to have a machine exposed to the internet.
hermannj314 6 days ago||
"Don't do anything until you are an expert" is excellent gatekeeping; fortunately this is Hacker News, so we can ignore the gatekeepers!

I suggest people fuck around and find out, just limit your exposure. Spin up a VPS with nothing important, have fun, and delete it.

At some point we are all unqualified to use the internet and we used it anyway.

No one is going to die because your toy project got hacked and you are out $5 in credits, you probably learned a ton in the process.

lgvld 4 days ago||
Absolutely. Thank you.
monster_truck 6 days ago|||
extremely loud incorrect buzzer noise, what are you going to say next "bastion servers are a scam"
Nextgrid 6 days ago|||
But the firewall wouldn't have saved them if they're running a public web service or need to interact with external services.

I guess you can have the appserver fully firewalled and have another bastion host acting as an HTTP proxy, both for inbound as well as outbound connections. But it's not trivial to set up especially for the outbound scenario.

danparsonson 6 days ago||
No you're right, I didn't mean the firewall would have saved them, but just as a general point of advice. And yes a second VPS running opnSense or similar makes a nice cheap proxy and then you can firewall off the main server completely. Although that wouldn't have saved them either - they'd still need to forward HTTP/S to the main box.
Nextgrid 6 days ago||
A firewall blocking outgoing connections (except those whitelisted through the proxy) would’ve likely prevented the download of the malware (as it’s usually done by using the RCE to call a curl/wget command rather than uploading the binary through the RCE) and/or its connection to the mining server.
denkmoon 6 days ago|||
How many people do proper egress filtering though, even when running a firewall?
drnick1 6 days ago|||
In practice, this is basically impossible to implement. As a user behind a firewall you normally expect to be able to open connections with any remote host.
metafunctor 6 days ago||
Not impossible at all with a policy-filtering HTTPS proxy. See https://laurikari.github.io/exfilguard/

In this model, hosts don’t need any direct internet connectivity or access to public DNS. All outbound traffic is forced through the proxy, giving you full control over where each host is allowed to connect.

It’s not painless: you must maintain a whitelist of allowed URLs and HTTP methods, distribute a trusted CA certificate, and ensure all software is configured to use the proxy.

danw1979 6 days ago|||
The only time I have ever had a machine compromised in 30 years of running Linux is when I ran something exposed to the internet on a well known port.

I know port scanners are a thing but the act of using non-default ports seems unreasonably effective at preventing most security problems.

rainonmoon 6 days ago|||
This is very, very, very bad advice. A non-standard port is not a defence. It’s not even slightly a defence.
danw1979 5 days ago|||
Did I at any point in my previous comment say that using non-standard ports was my only line of defence?

It's security through obscurity, which puts you out of view of the vast majority of the chaos of the internet. It by no means protects you from all threats.

bostik 6 days ago|||
Correct. From what I understand, Shodan has had for years a search feature in their paid plans to query for "service X listening on non-standard port". The only sane assumption is that any half-decent internet-census[tm] tool has the same as standard by now.
tonyplee 5 days ago||||
If you do any npm install, pip install ..., docker pull ... / docker run ..., etc. on Linux, it is very easy to get compromised.

I did docker pull a few times based on some web post (which looked reasonable) and detected apps/scripts from inside the docker container connecting to some .ru sites, immediately or a few days later...

jraph 6 days ago|||
I do this too, but I think it should only be a defense-in-depth thing; you still need the other measures.
jwrallie 6 days ago|||
Password auth being enabled is also very brave. I don't think fail2ban is necessary personally, but it's popular enough that it always comes up.
t0mk 6 days ago|||
I don't whitelist IPs for ssh anymore, but I always run sshd on a randomly selected port, in order to not get noticed by port scanners.

I've been doing it for a really long time already, and I'm still not sure whether it has any benefit or it's just an umbrella in a sideways storm.

lordnacho 6 days ago|||
As long as you understand it's security by obscurity, rather than by cryptography.

I don't think it's wrong, it's just not the same as e.g. using a YubiKey.

forbiddenlake 5 days ago|||
This won't hide you completely, but it will reduce log spam.

My sshd only listens on the VPN interface

dizhn 6 days ago|||
I have SSH blocked altogether and use wireguard to access the server. If something goes wrong I can always go to the dashboard and reenable SSH for my IP. But ultimately your setup is just as secure. Perhaps a tiny bit less convenient.
xetera 6 days ago|||
For the record this is only available for their VPS offering and not dedis. If you rent a dedi through their server auction you still need to configure your own firewall.
danparsonson 6 days ago||
Dedicated servers can configure external firewalls too; there's a tab for it on the server config. It's basic but functional.
figassis 6 days ago|||
Yup. All my servers are behind Tailscale. The only thing I expose is a load balancer that routes TCP (email) and HTTP. That balancer is running docker, fully firewalled (incl. docker bypasses). Every server is behind Hetzner's firewall in addition to the internal firewall.

App servers run docker, with images that run a single executable (no OS, no shell), and strict CPU and memory limits. Most of my apps only require very limited temporary storage, so usually there's no need to mount anything. So good luck executing anything in there.

I used to run Wordpress sites, way back in the day. They would get hacked monthly, in every possible way. Learned so much, including the fact that often your app is your threat. With Wordpress, every plugin is a vector. Also, the ability to easily hop into an instance and rewrite running code (looking at you, scripting languages incl. JS) is terrible. This motivated my move to Go. The code I compiled is what will run. Period.

3abiton 6 days ago||
Honestly, fail2ban is amazing. I might do a write-up on the countless attempts on my servers.
dizhn 6 days ago||
The only way I've envisioned fail2ban being of any use at all is if you gather IPs from one server and use them on your whole fleet, and I had it running like this for a while. Ultimately I decided that all it does is give you a cleaner log file, since by definition it's working on logs of attacks/attempts that did not succeed. We need to stop worrying about attempts we see in the logs and let the software do its job.
V__ 6 days ago||
> The Reddit post I’d seen earlier? That guy got completely owned because his container was running as root. The malware could: [...]

Is that the case, though? My understanding was that even if I run a docker container as root and the container is 100% compromised, there would still need to be a vulnerability in docker for it to "attack" the host, or am I missing something?

d4mi3n 6 days ago||
While this is true, the general security stance on this is: Docker is not a security boundary. You should not treat it like one. It will only give you _process level_ isolation. If you want something with better security guarantees, you can use a full VM (KVM/QEMU), something like gVisor[1] to limit the attack surface of a containerized process, or something like Firecracker[2] which is designed for multi-tenancy.

The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.

1. https://gvisor.dev/

2. https://firecracker-microvm.github.io/
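For a sense of the effort involved: once gVisor's runsc is installed and registered as a Docker runtime (per the gvisor.dev docs), switching is a one-flag change:

    docker run --rm --runtime=runsc alpine uname -a
    # should report gVisor's emulated kernel, not the host's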

tgsovlerkhgsel 6 days ago|||
I hear the "Docker is not a security boundary." mantra all the time, and IIRC it was the official stance of the Docker project a long time ago, but is this really true?

Of course if you have a kernel exploit you'd be able to break out (this is what gvisor mitigates to some extent), nothing seems to really protect against rowhammer/memory timing style attacks (but they don't seem to be commonly used). Beyond this, the main misconfigurations seem to be too wide volume bindings (e.g. something that allows access to the docker control socket from inside the container, or an obviously stupid mount like mounting your root inside the container).

Am I missing something?

hsbauauvhabzb 6 days ago||||
Virtual machines are treated as a security boundary despite the fact that with enough R&D they are not. Hosting minecraft servers in virtual machines is fine, but not a great idea if they’re cohosted on a machine that has billions of dollars in crypto or military secrets.

Docker is pretty much the same but supposedly more flimsy.

Both have non-obvious configuration weaknesses that can lead to escapes.

kevinrineer 5 days ago|||
> Virtual machines are treated as a security boundary despite the fact that with enough R&D they are not. Hosting minecraft servers in virtual machines is fine, but not a great idea if they’re cohosted on a machine that has billions of dollars in crypto or military secrets.

While I generally agree with the technical argument, I fail to see the threat model here. Is it that some external threat would have prior knowledge that an important target is in close proximity to a less hardened one? It doesn't seem viable to me for nation states to spend the expensive R&D to compromise hobbyist-adjacent services in a hope that they can discover more valuable data on the host hypervisor.

Once such expensive malware is deployed, there's a huge risk that all the R&D money is spent on potentially just reconnaissance.

hsbauauvhabzb 5 days ago||
Yes. Docker too.
hoppp 6 days ago|||
Yeah, but why would somebody co-host military secrets or billions of dollars? It's a bit of a stretch.
hsbauauvhabzb 6 days ago||
I think you’re missing the point, which was that high value targets adjacent to soft targets make escapes a legitimate target, but in low value scenarios vm escapes aren’t worth the R&D
z3t4 6 days ago||
but if you can do it at scale it might still be worth it, like owning thousands of machines
socalgal2 6 days ago||||
That's a really good point... but I think 99% of docker users believe it is a sandbox and treat it as such.
freedomben 6 days ago|||
And not without cause. We've been pitching docker as a security improvement for well over a decade now. And it is a security improvement, just not as much as many evangelists implied.
fragmede 6 days ago||
Must depend on who you've been talking to. Docker's not been pitched for security in the circles I run in, ever.
TacticalCoder 6 days ago||||
Not 99%. Many people run an hypervisor and then a VM just for Docker.

Attacker now needs a Docker exploit and then a VM exploit before getting to the hypervisor (and, no, pwning the VM ain't the same as pwning the hypervisor).

windexh8er 6 days ago|||
Agreed - this is actually pretty common in the Proxmox realm of hosters. I segment container nodes using LXC, and in some specific cases I'll use a VM.

Not only does it allow me to partition the host for workloads but I also get security boundaries as well. While it may be a slight performance hit the segmentation also makes more logical sense in the way I view the workloads. Finally, it's trivial to template and script, so it's very low maintenance and allows for me to kill an LXC and just reprovision it if I need to make any significant changes. And I never need to migrate any data in this model (or very rarely).

briHass 6 days ago|||
'Double-bagging it' was what we called it in my day.
dist-epoch 6 days ago|||
it is a sandbox against unintentional attacks and mistakes (sudo rm -rf /)

but will not stop serious malware

michaelt 6 days ago|||
Firstly, the attacker just wants to mine Monero with CPU, they can do that inside the container.

Second, even if your Docker container is configured properly, the attacker gets to call themselves root and talk to the kernel. It's a security boundary, sure, but it's not as battle-tested as the isolation of not being root, or the isolation between VMs.

Thirdly, in the stock configuration processes inside a docker container can use loads of RAM (causing random things to get swapped to disk or OOM killed), can consume lots of CPU, and can fill your disk up. If you consider denial-of-service an attack, there you are.

Fourthly, there are a bunch of settings that disable the security boundary, and a lot of guides online will tell you to use them. Doing something in Docker that needs to access hot-plugged webcams? Hmm, it's not working unless I set --privileged - oops, there goes the security boundary. Trying to attach a debugger while developing and you set CAP_SYS_PTRACE? Bypasses the security boundary. Things like that.

cyphar 6 days ago|||
You really need to use user namespaces to get this kind of security protection -- running as root inside a container without user namespaces is not secure. Yes, breakouts often require some other bug or misconfiguration but the margin for error is non-existent (for instance, if you add CAP_SYS_PTRACE to your containers it is trivial to break out of them and container runtimes have no way of protecting against that). Almost all container breakouts in the past decade were blocked by user namespaces.

Unfortunately, user namespaces are still not the default configuration with Docker (even though the core issues that made using them painful have long since been resolved).
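For anyone who wants to flip it on, it's one daemon setting plus a dockerd restart. One caveat (hedged): existing containers and images won't be visible under the remapped storage namespace, so read up before enabling it on a box with state.

/etc/docker/daemon.json

    { "userns-remap": "default" }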

trhway 6 days ago|||
>there still would need to be a vulnerability in docker for it to “attack” the host, or am I missing something?

Not necessarily a vulnerability per se. A bridged adapter, for example, lets you do a lot - a few years ago there was a story of how a guy got root in a container, and because the container used a bridged adapter he was able to intercept traffic containing account info updates on GCP.

easterncalculus 6 days ago|||
If the container is running in privileged mode you can just talk to the docker socket to the daemon on the host, spawn a new container with direct access to the root filesystem, and then change anything you want as root.
CGamesPlay 6 days ago||
Notably, if you run docker-in-docker, Docker is probably not a security boundary. Try this inside any dind container (especially devcontainers): docker run -it --rm --pid=host --privileged -v /:/mnt alpine sh

I disagree with other commenters here that Docker is not a security boundary. It's a fine one, as long as you don't disable the boundary, which is as easy as running a container with `--privileged`. I wrote about secure alternatives for devcontainers here: https://cgamesplay.com/recipes/devcontainers/#docker-in-devc...

flaminHotSpeedo 6 days ago||
Containers are never a security boundary. If you configure them correctly, avoid all the footguns, and pray that there's no container escape vulnerabilities that affect "correctly" configured containers then they can be a crude approximation of a security boundary that may be enough for your use case, but they aren't a suitable substitute for hardware backed virtualization.

The only serious company that I'm aware of which doesn't understand that is Microsoft, and the reason I know that is because they've been embarrassed again and again by vulnerabilities that only exist because they run multitenant systems with only containers for isolation

vel0city 6 days ago||
Virtual machines are never a security boundary. If you configure them correctly, avoid all the footguns, and pray that there's no VM escape vulnerabilities that affect "correctly" configured VMs then they can be a crude approximation of a security boundary that may be enough for your use case, but they aren't a suitable substitute for entirely separate hardware.

Its all turtles, all the way down.

flaminHotSpeedo 6 days ago||
Yeah, in some (rare) situations physical isolation is a more appropriate level of security. Or if you want to land somewhere in between, you can use VMs with single-tenant NUMA nodes.

But for a typical case, VMs are the bare minimum to say you have a _secure_ isolation boundary, because the attack surface is way smaller.

vel0city 6 days ago||
Yeah, so secure.

https://support.broadcom.com/web/ecx/support-content-notific...

https://nvd.nist.gov/vuln/detail/CVE-2019-5183

https://nvd.nist.gov/vuln/detail/CVE-2018-12130

https://nvd.nist.gov/vuln/detail/CVE-2018-2698

https://nvd.nist.gov/vuln/detail/CVE-2017-4936

In the end you need to configure it properly and pray there's no escape vulnerabilities. The same standard you applied to containers to say they're definitely never a security boundary. Seems like you're drawing some pretty arbitrary lines here.

TheRealPomax 6 days ago|||
Docker containers with root have root-ish rights on the host machine too, because the user ID will just be 0 for both. So if you have, say, a bind mount that you play fast and loose with, the docker user can create 0777 files outside the docker container, and now we're almost done. Even worse if, "just to make it work", someone runs the container with --privileged and then makes the terminal mistake of exposing that container to the internet.
V__ 6 days ago||
Can you explain this a bit further? Wouldn't that 0777 file outside docker be still executed inside the container and not on the host?
necovek 6 days ago||
I believe they meant you could create an executable that is accessible outside the container (maybe even as setuid root one), and depending on the path settings, it might be possible to get the user to run it on the host.

Imagine naming this executable "ls" or "echo" and someone having "." in their path (which is why you shouldn't): as long as you do "ls" in this directory, you've ran compromised code.

There are obviously other ways to get that executable to be run on the host, this just a simple example.

marwamc 6 days ago||
Another example is they would enumerate your directories and find the names of common scripts and then overwrite your script. Or to be even sneakier, they can append their malicious code to an existing script in your filesystem. Now each time you run your script, their code piggybacks.

OTOH, if I had written such a script for Linux, I'd be looking to grab the contents of $(history), $(env), $(cat /etc/{group,passwd})... then enumerate /usr/bin/, /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually there.

The $HOME/.{aws,docker,claude,ssh}

Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root access container.

tracker1 6 days ago||
If your chosen development environment supports it, look into distroless or empty base containers, and run as --read-only if you can.

Go and Rust tend to lend themselves to these more restrictive environments a bit better than other options.
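A minimal multi-stage sketch of the pattern (Go assumed; image tags are examples from the distroless project):

    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    FROM gcr.io/distroless/static-debian12
    COPY --from=build /app /app
    USER nonroot
    ENTRYPOINT ["/app"]

No shell, no package manager, nothing for a dropped payload to exec.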

Nextgrid 6 days ago|||
Container escapes exist. Now the question is whether the attacker has exploited it or not, and what the risk is.

Are you holding millions of dollars in crypto/sensitive data? Better assume the machine and data is compromised and plan accordingly.

Is this your toy server for some low-value things where nothing bad can happen besides a bit of embarrassment even if you do get hit by a container escape zero-day? You're probably fine.

This was just a large-scale automated attack designed to mine cryptocurrency; it's unlikely any human ever actually logged into your server. So cleaning up the container is most likely fine.

Havoc 6 days ago|||
I think a root container can talk to the docker daemon and launch additional containers... with volume mounts of additional parts of the file system, etc. Not particularly confident about that one though.
minitech 6 days ago|||
Unintentional vulnerabilities in Docker and the kernel aside, it can only do that if it has access to the Docker API (usually through a bind mount of the Unix socket). Having access to the Docker API is equivalent to having root on the host.
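
To make "equivalent to root" concrete, the classic trick looks like this (assuming the docker CLI is present inside the container):

    # from inside a container with /var/run/docker.sock bind-mounted:
    docker run --rm -it -v /:/host alpine chroot /host sh
    # ...and that's a root shell on the host's filesystem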
czbond 6 days ago||
Well $hit. I have been using Docker for installing NPM modules in interactive projects I was testing out. I believed Docker blocked access to the underlying host (my computer).

Thanks for mentioning it - but now... how does one deal with this?

minitech 6 days ago|||
If you didn’t mount docker.sock or any directory above it (i.e. / or /run by default) or run your containers as --privileged, you’re probably fine with respect to this angle. I’d still recommend rootless containers under unprivileged users* or VMs for extra comfort. Qubes (https://www.qubes-os.org/) is good, even if it’s a little clunkier than it could be.

* but if you’re used to bind-mounting, they’ll be a hassle

Edit: This is by no means comprehensive, but I feel compelled to point it out specifically for some reason: remember not to mount .git writable, folks! Write access to .git is arbitrary code execution as whoever runs git.
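
A sketch of why: anyone who can write to .git can plant a hook that runs as you.

    # attacker with write access to .git plants a hook...
    cat > .git/hooks/post-checkout <<'EOF'
    #!/bin/sh
    # arbitrary code here, runs as whoever invokes git
    EOF
    chmod +x .git/hooks/post-checkout
    # ...which fires on the owner's next `git checkout`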

3np 6 days ago||||
As sibling mentioned, unless you or the runtime explicitly mount the docker socket, this particular scenario shouldn't affect you.

You might still want to tighten things up. Just adding on the "rootless" part - running the container runtime as an unprivileged user on the host instead of root - you also want to run npm/node as an unprivileged user inside the container. I still see many setups defaulting to running as root inside the container, since that's the default of most images. OP touches on this.

For rootless podman, this will run as a user with your current uid and map ownership of mounts/volumes:

    podman run -u "$(id -u)" --userns=keep-id <image>
jcgl 5 days ago|||
Podman makes this easier to do safely by default. I'd suggest checking that out.
ronsor 6 days ago|||
There would be, but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape.

Also, if you've been compromised, you may have a rootkit that hides itself from the filesystem, so you can't be sure of a file's existence through a simple `ls` or `stat`.
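
One cheap sanity check for that (a rough sketch; small transient differences are normal, a persistent gap is suspicious):

    # process-hiding rootkits often lie to ps but forget /proc, or vice versa
    ps -e --no-headers | wc -l
    ls -d /proc/[0-9]* | wc -l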

miladyincontrol 6 days ago||
> but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape

Honestly, citation needed. Very rare unless you're literally giving the container access to write to /usr/bin or other binaries the host is running, the ability to reconfigure your entire /etc, access to sockets like docker's, or some other insane level of overreach that I doubt even the least educated docker user would commit.

While of course they should be scoped properly, people act like some elusive 0-day container escape will get used on their minecraft server or personal blog that otherwise has sane mounts, non-admin capabilities, etc. You aren't that special.

cyphar 6 days ago|||
As a maintainer of runc (the runtime Docker uses), if you aren't using user namespaces (which is the case for the vast majority of users) I would consider your setup insecure.

And a shocking number of tutorials recommend bind-mounting docker.sock into the container without any warning (some even tell you to mount it "ro" -- which is even funnier since that does nothing). I have an HN comment from ~8 years ago complaining about this.
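
For anyone wanting to act on the userns point with stock Docker: it's roughly one line in /etc/docker/daemon.json, plus matching subordinate ID ranges in /etc/subuid and /etc/subgid and a daemon restart (see the userns-remap docs for the details):

    {
      "userns-remap": "default"
    }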

vultour 6 days ago||||
Half the vendor software I come across asks you to mount devices from the host, add capabilities, or run the container in privileged mode, because their outsourced lowest-bidder developers barely even know what a container is. I doubt even the smallest minority of their customers protests against this; apparently the place I work at is always the first one to have a problem with it.
fomine3 6 days ago|||
I've seen many articles with `-v /var/run/docker.sock:/var/run/docker.sock` without a scary warning
boomlinde 6 days ago||
What would the intended use case for that be?
jp191919 5 days ago||
Diun (Docker Image Update Notifier), for example - it watches the Docker API so it can tell you when your containers' images have updates.
Onavo 6 days ago|||
Either docker or a kernel level exploit. With non-VM containers, you are sharing a kernel.
croemer 6 days ago||
Not proofread by a human. It claims more than once that the vulnerability was related to Puppeteer. Hallucination!

"CVE-2025-66478 - Next.js/Puppeteer RCE)"

loloquwowndueo 6 days ago|
TFA mentions it’s mostly a transcript of a Claude session literally in the first paragraph.
themafia 6 days ago|||
That was added as an edit. It does not cover the inaccuracies contained within. It should more realistically say "this article was generated by an LLM and may contain several errors which I didn't bother to find or correct."
loloquwowndueo 6 days ago||
That’s fair!
croemer 6 days ago|||
That doesn't excuse publishing errors.
grugdev42 6 days ago||
Roughly 20 years ago, PHP sites were getting hacked everywhere.

Now, JS sites everywhere are getting hacked.

JS has turned into the very thing it strived to replace.

It's good to see developers haven't changed. ;)

urban_alien 5 days ago|
It's almost like... the problem wasn't the tool at all :P
heavyset_go 6 days ago||
I wouldn't trust that boot image or storage again, I'd nuke it for peace of mind.

That said, do you have an image of the box or a container image? I'm curious about it.

jakelsaunders94 6 days ago|
Yeah, I did consider just killing it. I'm going to keep an eye on it for a few days with a gun to its head, just in case.

I was lucky in that my DB backups were working, so all my persistence was backed up to S3. I think I could stand up another one in an hour.

Unfortunately I didn't keep an image no. I almost didn't have the foresight to investigate before yeeting the whole box into the sun!

muppetman 6 days ago||
Enable connection tracking (if it's not already) and keep looking at the conntrack entries. That's a good way to spot random things doing naughty stuff.
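
With the conntrack CLI installed, that's roughly:

    # dump the current connection-tracking table
    sudo conntrack -L
    # or stream events as new flows are created
    sudo conntrack -E -e NEW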
marwamc 6 days ago||
Hahaha OP could be in deep trouble depending on what types of creds/data they had in that container. I had replied to a child comment but I figure best to reply to OP.

From the root container, depending on volume mounts and capabilities granted to the container, they would enumerate the host directories and find the names of common scripts and then overwrite one such script. Or to be even sneakier, they can append their malicious code to an existing script in the host filesystem. Now each time you run your script, their code piggybacks.

OTOH, if I had written such a script for linux, I'd be looking to grab the contents of $(history) $(env) $(cat /etc/{group,passwd})... then enumerate /usr/bin/, /usr/local/bin/, and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here, along with $HOME/.{aws,docker,claude,ssh}. Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root-access container.

cobertos 6 days ago||
Luckily umami in docker is pretty compartmentalized. All data is in the DB, and the DB runs in another container. The biggest thing is the DB credentials. The default config requires no volume mounts, so no worries there. It runs unprivileged with no extra capabilities. IIRC the container doesn't even have bash - a few of the exploits that tried to run failed because the scripts they ran needed bash.

Deleting and remaking the container will blow away all state associated with it. So there isn't a whole lot to worry about after you do that.
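
Roughly, assuming the compose service is named umami:

    docker compose pull umami
    docker compose up -d --force-recreate umami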

simulator5g 5 days ago||
You could just chain this with another exploit; just because it doesn't run as root by default doesn't mean it's not a big deal.
jakelsaunders94 6 days ago||
Nothing in that container luckily, just what Umami needed to run, so no creds at all. Thanks for the info though!
grekowalski 6 days ago|
Recently, those Monero miners were installing themselves everywhere that had a vulnerable React 19. I had exactly the same problem.
tgsovlerkhgsel 6 days ago||
I love mining malware - it's reasonably visible and causes almost no damage. Essentially, it's like a bug bounty program that you don't have to manage, doesn't generate costly bullshit reports, and only costs you a few bucks of electricity when a vulnerability is found.

If you have decent network or process level monitoring, you're likely to find it, whereas you might never notice the vulnerable software itself or some stealthier, more dangerous malware exploiting it.
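
Even basic checks tend to catch it, e.g.:

    # miners show up as sustained ~100% CPU from a process you don't recognize
    ps -eo pid,user,comm,%cpu --sort=-%cpu | head
    # and as long-lived outbound connections to a pool you never configured
    ss -tnp state established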

qingcharles 6 days ago||
I had to nuke my Oracle Cloud box that runs my Umami server. It got hit. Was a good excuse to upgrade version and upgrade all my backup systems etc. Lost a few hours of data while it was returning 500 errors.