Posted by zacwest 8 hours ago

A decade of Docker containers (cacm.acm.org)
213 points | 156 comments
INTPenis 8 hours ago|
I thought it launched in 2014? The article says the command-line interface hasn't changed since 2013.
avsm 7 hours ago|
We first submitted the article to the CACM a while ago. The review process takes some time and "Twelve years of Docker containers" didn't have quite the same vibe.
heraldgeezer 5 hours ago||
I still haven't learned it. Being in IT, it's so embarrassing. Yes, I know about the 2-3h YouTube tutorials, but just...
1970-01-01 5 hours ago||
I now wonder if we'll end up switching it all back to VMs so the LLMs have enough room to grow and adapt.
skybrian 5 hours ago|
Maybe, but the install will often be done using a Dockerfile.
callamdelaney 4 hours ago||
The fact that Docker still, in 2026, will silently overwrite iptables rules to expose containers to external requests is, frankly, fucking stupid.
netrem 3 hours ago|
Indeed. I've had even experienced sysadmins be surprised that their ufw setup will be ignored.
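For anyone bitten by this: ufw filters on the INPUT chain, but Docker's published ports are handled by NAT rules in Docker's own chains, so the traffic never hits ufw's rules. One common mitigation is binding published ports to loopback explicitly (`docker run -p 127.0.0.1:8080:80 ...`); another is making loopback the default bind address daemon-wide in `/etc/docker/daemon.json` (a sketch, not the only option):

```json
{
  "ip": "127.0.0.1"
}
```

There is also `"iptables": false`, which stops Docker from touching iptables entirely, but then container networking breaks unless you write the NAT rules yourself.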
brcmthrowaway 7 hours ago||
I don't use Dockerfiles. Am I slumming it?
vvpan 6 hours ago|
Probably? How do you deploy?
rglover 5 hours ago||
Just pull a tarball from a signed URL, install deps, and run from systemd. Rolls out in 30 seconds, remarkably stable. Initial bootstrap of deps/paths is maybe 5 minutes.
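The tarball-plus-systemd flow described above can be sketched as a unit file like the following (all names, paths, and the service itself are hypothetical, chosen for illustration):

```ini
# /etc/systemd/system/myapp.service -- illustrative unit for a
# tarball deploy; "myapp" and /opt/myapp are made-up names
[Unit]
Description=myapp (deployed from a signed tarball)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=myapp
WorkingDirectory=/opt/myapp/current
ExecStart=/opt/myapp/current/bin/myapp
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target
```

A rollout is then roughly: fetch and verify the tarball, extract it into a fresh versioned directory, flip the `current` symlink, and `systemctl restart myapp`, which matches the ~30-second rollout claimed above.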
user3939382 7 hours ago||
It solves a practical problem that’s obvious. And on one hand the practical where-we’re-at-now is all that matters; that’s a legitimate perspective.

There’s another one, at least IMHO, that this entire stack from the bottom up is designed wrong and every day we as a society continue marching down this path we’re just accumulating more technical debt. Pretty much every time you find the solution to be, “ok so we’ll wrap the whole thing and then…” something is deeply wrong and you’re borrowing from the future a debt that must come due. Energy is not free. We tend to treat compute like it is.

Maybe I’m in a big club but I have a vision for a radically different architecture that fixes all of this and I wish that got 1/2 the attention these bandaids did. Plan 9 is an example of the theme if not the particular set of solutions I’m referring to.

forrestthewoods 5 hours ago||
I am so thoroughly convinced that Docker is a hacky-but-functional solution to an utterly failed userspace design.

Linux user space decided to try and share dependencies. Docker obliterates this design goal by shipping dependencies, but stuffing them into the filesystem as-if they were shared.

If you’re going to do this then a far far far simpler solution is to just link statically or ship dependencies adjacent to the binary. (Aka what windows does). Replicating a faux “shared” filesystem is a gross hack.

This is a distinctly Linux problem. Windows software doesn’t typically have this issue. Because programs ship their dependencies and then work.

Docker is one way to ship dependencies. So it’s not the worst solution in the world. But I swear it’s a bad solution. My blood boils with righteous fury anytime anyone on my team mentions they have a 15 minute docker build step. And don’t you damn dare say the fix to Docker being slow is to add more layers of complexity with hierarchical Docker images ohmygodiswear. Running a computer program does not have to be hard I promise!!
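For readers unfamiliar with the layer tricks being dismissed here: slow Docker builds are usually mitigated by ordering instructions so the build cache survives most commits. A minimal sketch of that pattern (Node chosen arbitrarily; file names are illustrative):

```dockerfile
# Illustrative only: the layer-caching pattern the comment objects to.
# Dependency manifests are copied first so the expensive install layer
# is reused until they change; only the later COPY rebuilds per commit.
FROM node:20-slim
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
CMD ["node", "dist/server.js"]
```

Whether this is good engineering or, as argued above, complexity layered on complexity is exactly the disagreement in this thread.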

ahnick 4 hours ago|
Okay, so what's the best solution? What's even just a better solution than Docker? I mean really truly lay out all the details here or link to a blog post that describes in excruciating detail how they shipped a web application and maintained it for years and was less work than Docker containers. Just saying "a far far simpler solution is to just link statically or ship dependencies adjacent to the binary" is ignoring huge swaths of the SDLC. Anyone can cast stones, very few can actually implement a better solution. Bring the receipts.
forrestthewoods 3 hours ago||
The first half of my career was spent shipping video games. There is no such thing as shipping a game in Docker. Not even on Linux. You depend on minimum version of glibc and then ship your damn dependencies.

The more recent half of my career has been more focused on ML and now robotics. Python ML is an absolute clusterfuck. It is close to getting resolved with uv and Pixi. The trick there is to include your damn dependencies… via symlink to a shared cache.

Any program or pipeline that relies on whatever arbitrary ass version of Python is installed on the system can die in a fire.

That’s mostly about deploying. We can also talk about build systems.

The one true build system path is a monorepo that contains your damn dependencies. Anything else is wrong and evil.

I’m also spicy and think that if your build system can’t cross-compile then it sucks. It’s trivial to cross-compile for Windows from Linux because Windows doesn’t suck (in this regard). It’s almost impossible to cross-compile to Linux from Windows because Linux userspace is a bad, broken, failed design. However, Andrew Kelley is a patron saint and Zig makes it feasible.

Use a monorepo, pretend the system environment doesn’t exist, link statically/ship adjacent so/dll.

Docker clearly addresses a real problem (that Linux userspace has failed). But Docker is a bad hack. The concept of trying to share libraries at the system level has objectively failed. The correct thing to do is to not do that, and don’t fake a system to do it.

Windows may suck for a lot of reasons. But boy howdy is it a whole lot more reliable than Linux at running computer programs.

gogasca 5 hours ago||
Something I've recently explored is optimizing Docker layers and startup time for large containers. Shared storage, tar-layer preloading, and overlayBD (https://github.com/codeexec/overlaybd-deploy) are things I'd like to see supported more natively. Great article.
a_t48 2 hours ago|
This is neat. I’m about to dive into snapshotters myself, any pitfalls to watch out for?
tsoukiory 3 hours ago||
I don't speak English.