Posted by untrimmed 10/27/2025
Allowing eight-hundred-gigabyte containers is gross incompetence. Trying to fix it by scaling the node disk from 2 TB to 2.5 TB is further evidence of incompetence. Understanding that you need to build a hard cap, but not concluding with action items to actually build one - instead just building monitoring for image size - is a clear sign to stay away.
It boggles my mind that the author can understand copy-on-write filesystem semantics but can't imagine how to engineer actual size limits on said filesystem. How is that possible?
... oh right, the blog post is LLM slop. So nobody knows what the author actually learned.
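To be concrete, a hard cap is not exotic engineering. A minimal sketch, assuming the overlay2 storage driver backed by xfs mounted with the pquota option (the post never says what they actually run, so the names and numbers below are illustrative):

    # Cap this container's writable layer at 20 GB. The daemon rejects the
    # flag unless overlay2 sits on xfs mounted with pquota (or you use a
    # driver like devicemapper/btrfs/zfs that supports per-container quotas).
    docker run -d --storage-opt size=20G some-image:latest

    # On Kubernetes the analogous knob is an ephemeral-storage limit in the
    # pod spec; pods whose writable layer plus logs exceed it get evicted.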
Log rotation and disk-consuming logs are a tale as old as time.
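For the default json-file logging driver the usual fix is a couple of flags (values here are illustrative):

    # Rotate this container's logs: keep at most five 50 MB files before the
    # oldest is dropped. The same options can be set globally under
    # "log-opts" in /etc/docker/daemon.json.
    docker run -d \
      --log-driver json-file \
      --log-opt max-size=50m \
      --log-opt max-file=5 \
      some-image:latest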
https://sealos.io/_next/image?url=.%2Fimages%2Fcontainerd-hi...
https://sealos.io/_next/image?url=.%2Fimages%2Fbloated-conta...
Either way, I hope the user was contacted or at least alerted to what's going on.
At the same time, someone said that 800 GB container images are a problem in and of themselves no matter the circumstances, and they got downvoted for saying so - yet I mostly agree.
Most of mine are 50-250 MB at most, and even if you need big ones with software that's gigabytes in size, you will still be happier if you treat them as largely immutable. I've never had odd issues with them thanks to this. If you really care about data persistence, use volumes/bind mounts; if you don't, just throw things into tmpfs.
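Roughly what I mean, with made-up names:

    # Persistent state lives in a named volume that outlives any container.
    docker volume create appdata
    docker run -d --name app -v appdata:/var/lib/app some-image:latest

    # Throwaway state lives in tmpfs: RAM-backed, gone when the container stops.
    docker run -d --name scratch --tmpfs /tmp:size=256m some-image:latest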
I'm not sure whether treating containers as something long-lived with additional commits/layers is a great idea, but if it works for other people, then good for them. It must be a pain to run something so foundational for your clients, though, because you'll be exposed to most of the edge cases imaginable sooner or later.
For stuff like security keys you should typically add them as build secrets, not build args, and not as content in the image.
Build args are content in the image: https://docs.docker.com/reference/build-checks/secrets-used-...
Do not use build arguments for anything secret. The values are committed into the image layers.
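For completeness, a sketch of the BuildKit way, which keeps the key out of every layer; the id and file names here are made up:

    # Write a Dockerfile that mounts the secret only for the duration of one
    # RUN step; nothing is committed to a layer, unlike a --build-arg value.
    cat > Dockerfile <<'EOF'
    # syntax=docker/dockerfile:1
    FROM alpine:3.20
    RUN --mount=type=secret,id=api_key \
        wc -c /run/secrets/api_key
    EOF

    # Pass the secret at build time from a local file.
    DOCKER_BUILDKIT=1 docker build --secret id=api_key,src=./api_key.txt -t secret-demo .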
The thing here is that they're using Docker container images as if they were VM disks, and they end up with images with almost 300 layers, as in this case. I think LXC or VMs would be a better fit for this (but I don't know whether they've tested that, or why they're using Docker).
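If anyone wants to check how far down that road a given image is, a quick way (image name is a placeholder):

    # Number of filesystem layers in the image; hundreds usually means the
    # container is being committed over and over instead of rebuilt.
    docker image inspect --format '{{len .RootFS.Layers}}' some-image:latest

    # Per-layer sizes and the commands that created them.
    docker history some-image:latest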
2 GB is the expected and default size for a Docker image. It's a bit bloated, even.
It's № 1, which I could not have guessed at or gone for myself. Good write-up, love the transparency.