
Posted by codesmash 9/5/2025

I ditched Docker for Podman(codesmash.dev)
1123 points | 654 comments
daitangio 9/6/2025|
I do not know: the lack of proper docker compose support is a problem for me. On security: gVisor's adoption failure inside Google is proof that containerization cannot be enforced easily, and containers will always be less secure than a VM.

If you want proper security, go with Firecracker [^1]. Podman is the "RedHat/IBM docker-way" but I see very little benefit overall; nevertheless, if it works for you, great, go with it!

[^1]: https://firecracker-microvm.github.io

vrotaru 9/6/2025||
There is a podman-compose which works almost as a drop-in replacement.

Almost, because the most common commands work, but I have not checked them all.

And almost, because for some docker-compose.yaml files that you downloaded or an LLM generated, you may need to prepend `docker.io/` to the image name.
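For example, a hypothetical compose fragment that avoids the problem fully qualifies every image name (service names and tags here are illustrative):

```yaml
# Fully qualifying the registry avoids podman's short-name resolution
# prompt for images that docker would silently pull from Docker Hub.
services:
  db:
    image: docker.io/library/postgres:16
  cache:
    image: docker.io/library/redis:7
```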

virgoerns 9/6/2025||
Podman 4.7 supports both the ordinary compose (the Go implementation) and the older Python podman-compose. But personally I moved to quadlets and didn't look back.
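For context, a quadlet is just a systemd unit file with a `[Container]` section that podman translates into a service; a minimal sketch (image and port are illustrative) might look like:

```ini
# ~/.config/containers/systemd/web.container (rootless example)
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, podman generates a `web.service` unit from this file, so the container is managed like any other systemd service.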
CraigJPerry 9/5/2025||
Docker is falling into that trap where they feel the need to try (and mostly fail, so far) to add net-new value streams (e.g. the MCP catalogue, plus a bunch of other stuff I immediately turned off the last time I installed it) rather than focus on the core value.

It's not the case that they've maximised the utility of the core build/publish-container or acquire/run-container workflows, but they're prioritising fluff around the edges of the core problem.

Podman, for all its various issues, is a whole lot more focussed.

colechristensen 9/5/2025||
The core of docker needs to be free. The docker registry can charge corporate customers for storage and such, but besides being the default registry, there's not much money there because it's a commodity service, not anything unique.

There's just not much money to be made there, especially considering that docker is a pretty thin wrapper built on top of a bunch of other free technology.

When somebody can make a meaningful (though limited) clone of your core product in 100 lines of bash, you can't make much of a business on top of it [1].

Docker suffers from being useful in the open source space but having no reasonable method to make revenue.

[1]: https://github.com/p8952/bocker
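To make the point concrete: the essence of such a clone is a handful of kernel primitives. A rough, root-only sketch (assuming an image filesystem already extracted to `./rootfs`; cgroups and image layering omitted):

```shell
# New PID/mount/network/UTS namespaces, then chroot into the image's
# filesystem. Namespaces, chroot, and cgroups are the kernel features
# doing the real work; docker is largely orchestration around them.
sudo unshare --pid --mount --net --uts --fork chroot ./rootfs /bin/sh
```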

jen20 9/6/2025||
They have the second problem that `container` [1] will eat up a ton of their business when it ships properly in a month or two.

[1]: https://github.com/apple/container

mdaniel 9/6/2025||
uh-huh, maybe for the corps that have a predictable hardware refresh schedule but I've never worked in such a place https://github.com/apple/container#:~:text=is%20supported%20...
jen20 9/6/2025||
Why does the hardware refresh schedule matter? What you appear to be referring to is the _operating system_ refresh schedule, which should be every year even at enterprises where corporate IT is problematic.
hyperpape 9/5/2025||
I have a few links saved from my joyful experience using podman with Fedora (and therefore selinux). Iirc, I tried using podman because Fedora shipped cgroups v2, which didn't work with Docker (in my own ignorance, I would've thought coordinating with major dev tools would be important, but distros often have other ideas).

- https://www.redhat.com/en/blog/user-namespaces-selinux-rootl... - https://www.redhat.com/en/blog/sudo-rootless-podman

I'd summarize these posts as "very carefully explaining how to solve insane problems."

Kerbiter 9/5/2025|
Fedora is rather aggressive in pushing Podman. They have their Cockpit control panel for Fedora Server, and they simply made the Cockpit Docker plugin unavailable while it was still working fine, because "use the Podman integration plugin instead".
evertheylen 9/5/2025||
To add to the article: systemd integration works in the other way too! Running systemd in a Docker container is a pain. It's much easier in Podman: https://developers.redhat.com/blog/2019/04/24/how-to-run-sys...

(Most people use containers in a limited way, where they should do just one thing and shouldn't require systemd. OTOH I run them as isolated developer containers, and it's just so much easier to run systemd in the container as the OS expects.)
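Concretely, the flag that makes this easy is `--systemd` (enabled by default when the container's command is systemd or init); a sketch, assuming an init-enabled image such as Red Hat's ubi-init:

```shell
# podman detects systemd as the entrypoint and sets up /run,
# /sys/fs/cgroup, and signal handling the way systemd expects;
# --systemd=always forces this behaviour explicitly.
podman run -d --name devbox --systemd=always registry.access.redhat.com/ubi9/ubi-init
podman exec devbox systemctl is-system-running
```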

kodama-lens 9/5/2025||
I tried podman multiple times. Normal testing & sandbox stuff just works, and you really can do alias docker=podman. But as soon as I added networking, it broke for me. And for me it is really just a tool, and I need my tools working. So I switched back.

Recently I did the GitLab Runner migration for a company and switched to rootless docker. Works perfectly: none of the devs noticed that all their runs now use rootless docker and BuildKit for builds. All thanks to RootlessKit. No podman problems, more secure, and no workflow change needed.

rglover 9/5/2025||
I may be the odd man out, but after getting unbelievably stressed out by containers, k8s, etc., I couldn't believe how zen just spinning up a new VPS and bootstrapping it with a bash script was. That combined with systemd scripts can get you relatively far without all of the (cognitive) overhead.

The best part? Whenever there's an "uh oh," you just SSH in to a box, patch it, and carry on about your business.
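For that style of deployment, a systemd unit can be very small; a sketch with hypothetical names and paths:

```ini
# /etc/systemd/system/myapp.service -- illustrative app service for a
# VPS bootstrapped with a bash script; systemd handles restarts and boot.
[Unit]
Description=My app
After=network.target

[Service]
ExecStart=/opt/myapp/bin/server
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Enable it once with `systemctl enable --now myapp` and the box largely takes care of itself.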

madeofpalk 9/5/2025||
> you just SSH in to a box, patch it

Oh god. I can’t imagine how I could reliably build software if this is what I was doing. How do you know what “patches” are needed to run your software?

rglover 9/5/2025||
A staging server?
TrainedMonkey 9/5/2025|||
Containers and container orchestrators are complex tools. The constant cost of using them is pretty high compared to bash scripts. However, the scale/maintenance factor is significantly lower, so for 100 boxes the simplicity of bash scripts might still win out over containers. At 1000 machines it is highly likely that the simplest and least-maintenance solution overall will be an orchestrator.
rglover 9/5/2025||
That's what I found out, though: the footprint doesn't matter. I did have to write a simple orchestration system, but it's literally just me provisioning a VPS, bootstrapping it with deps, and pulling the code/installing its dependencies. Short of service or hardware limits, this can work for an unlimited number of servers.

I get why most people think they need containers, but they really seem suited only for hyper-complex (ironically, Google) deployments with thousands of developers pushing code simultaneously.

chickensong 9/5/2025|||
> it really seems only suited for hyper-complex (ironically, Google) deployments with thousands of developers pushing code simultaneously

There are many benefits to be had for individuals and small companies as well. The peace of mind that comes with immutable infrastructure is incredible.

While it's true that you can often get quite far with the old cowboy ways, particularly for competent solo devs or small teams, there's a point where it starts to unravel, and you don't need to be a hyper-complex mega-corp to see it happen. Once you stray from the happy path or have common business requirements related to operations and security, the old ways become a liability.

There are reasons ops people will accept the extra layers and complexity to enable container-based architecture. They're not thrilled to add more infrastructure, but it's better than the alternative.

mixmastamyk 9/8/2025|||
Sounds like you reinvented ansible?
sroerick 9/6/2025|||
I couldn't agree more.

It's really not that hard; folks are just trading Linux knowledge for CI/CD knowledge.

It's React, but for DevOps.

lotyrin 9/5/2025||
Well... yeah? If you exist as an individual, or as part of a group which is integrated (shared trust, goals, knowledge, etc.), then yeah, obviously you do not have the problem (tragedy of the commons) that splitting things up (into containers, but really any kind of boundary) solves for.

The container split is often introduced because you have product-a, product-b and infrastructure operations teams/individuals that all share responsibility for an OS user space (and therefore none are accountable for it). You instead structure things as: a host OS and container platform for which infra is responsible, and then product-a container(s) and product-b container(s) for which those teams are responsible.

These boundaries are placed (between networks, machines, hosts and guests, namespaces, users, processes, modules, etc.) when needed due to trust, or when useful due to knowledge sharing and goal alignment.

When they are present in single-user or small highly-integrated team environments, it's because they've been cargo-culted there, yes, but I've seen an equal number of environments where effective and correct boundaries were missing as I've seen ones where they were superfluous.

GrumpyGoblin 9/5/2025||
Podman networking is extremely unreliable. Our company made an effort to switch to get away from Docker Enterprise. We had to kill the effort because multiple people had random disconnects and packet drops with a range of services including K8S, Kafka, and even basic applications, both internal and on the host network.

```
> kubectl port-forward svc/argocd-server -n argocd 8080:443
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
E0815 09:12:51.276801 27142 portforward.go:413] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod 87b32b48e6c729565b35ea0cefe9e25d8f0211cbefc0b63579e87a759d14c375, uid : failed to execute portforward in network namespace "/var/run/netns/cni-719d3bfa-0220-e841-bd35-fe159b48f11c": failed to connect to localhost:8080 inside namespace "87b32b48e6c729565b35ea0cefe9e25d8f0211cbefc0b63579e87a759d14c375", IPv4: dial tcp4 127.0.0.1:8080: connect: connection refused IPv6 dial tcp6 [::1]:8080: connect: connection refused
error: lost connection to pod
```

People had other issues also. It looks nice and I would love to use it, but it just currently isn't mature/stable enough.

dpkirchner 9/5/2025|
I've had similar issues using kubectl to access some tools that made a lot of requests (polling, which is something argocd does I believe).

Setting this environment variable helped a lot: KUBECTL_PORT_FORWARD_WEBSOCKETS=true

Note: because Google's quality is falling you won't be able to find this variable using their search, but you can read about it by searching Bing or asking an LLM.
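For reference, using the setting (available in recent kubectl releases) is just:

```shell
# Opt in to the WebSocket-based port-forward transport, which tends to
# be more robust than the older SPDY transport for long-lived or
# polling-heavy connections like ArgoCD's UI.
export KUBECTL_PORT_FORWARD_WEBSOCKETS=true
kubectl port-forward svc/argocd-server -n argocd 8080:443
```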

pbd 9/6/2025||
I keep seeing Podman mentioned as a Docker alternative, but I'm unclear on when the juice is worth the squeeze. For someone doing typical web development (Node.js/Python services, Postgres, Redis), what specific problems would Podman solve that Docker doesn't? Is this more about security/compliance or are there developer experience benefits too?
arcfour 9/6/2025||
At a high level, Docker and Podman implement the same standard for containers, but my understanding is that Podman implements more of said standard (more/newer features) and in a more standards-compliant way.

This can be a good or a bad thing—good because it's better, but bad because the popularity of Docker sometimes means things aren't compatible and require some tweaking to get running.

pbd 9/6/2025||
fair enough. thanks.
mechanicalpulse 9/6/2025||
Podman is daemonless while docker is a client/server pair. Podman also shipped with support for rootless containers, though Docker now has that capability as well.

The podman CLI is nearly a drop-in replacement for docker such that `alias docker=podman` works for many of the most common use cases.

If you don't care about the security implications of running containers as root via a client/server protocol, then by all means keep using Docker. I've switched to podman and I'm happy with my decision, but to each their own.

drzaiusx11 9/5/2025||
Still happily using Colima as a Docker Desktop for Mac replacement. It even allows mixed-architecture containers in the same compose stack. What does podman gain me besides a half-baked Docker compose implementation?
osigurdson 9/5/2025|
Keep using docker, who cares. The article is concerned about CVEs, etc, but this doesn't matter for development very much.

If you use k8s for anything, podman might help you avoid remembering yet another IaC format.

cpuguy83 9/5/2025|||
Concerned about CVEs, but doesn't pay attention to the massive list of CVEs for rootless setups, which have a much broader scope/impact.
drzaiusx11 9/5/2025|||
My laptop isn't exposing any ports outside localhost, so all I care about is validation of my containers for local-only testing (similar use case to Docker desktop.)

Colima would/should never be used in production for a number of reasons, but yeah it's great for local only development on a laptop.

danousna 9/5/2025|
I use both Podman and Docker at work, specifically I had to use the same docker images / container setup in a RHEL deployment and it worked great.

A huge pain was when I used "podman-compose" with a custom podman system storage location: twice it ended up corrupted when doing an "up", and I had to completely wipe my podman system storage.

I must have missed something though ...
