Posted by codesmash 9/5/2025

I ditched Docker for Podman (codesmash.dev)
1123 points | 654 comments | page 4
delduca 9/5/2025|
I have ditched Docker Desktop on macOS for OrbStack.
chrisweekly 9/5/2025|
OrbStack looks pretty nice, BUT an $8/mo/user subscription? Blech.
frje1400 9/5/2025|||
OrbStack is worth every penny. It's simply amazingly solid compared to Podman on macOS (as of a year ago at least; I don't know if Podman has improved since). We migrated 100+ devs to OrbStack and it was like a collective sigh of relief that we finally had something that actually worked.
otterley 9/5/2025||||
Useful software that makes our lives more convenient is worth paying for--after all, it pays most of our salaries, doesn't it?

It feels a little hypocritical for us to feed our families through our tech talent and then complain that someone else is doing the same.

chrisweekly 9/5/2025||
It's the subscription model that chafes. For SaaS? Ok, sure. But for a desktop app, I just don't like it. It might not be entirely rational. shrug
bzzzt 9/5/2025||||
It doesn't only look prettier; it also starts and works a lot faster. I switched a few years ago; at that time Docker Desktop had a known issue of continually using 5% CPU on Mac, which they didn't fix for years.
osigurdson 9/5/2025||
I don't understand why people need a gui for docker/podman.
elliottr1234 9/5/2025|||
Take a look at https://docs.orbstack.dev/

It's much more than a GUI: it supports running k8s locally, managing custom VM instances, resource monitoring of containers, built-in local domain support with SSL (mycontainer.orb), a debug shell that lets you install packages that aren't available in the image by default, much better and automated volume mounting with the ability to view every container in Finder, the ability to query logs, an amazing UI, plus it is much, much faster and more resource-efficient.

I'm normally with you that the terminal is usually enough, but the above features really do make it worth it, especially when you're using existing services that have complicated failure logs or are resource-intensive (Redis, Postgres, LiveKit, etc.), or when you have a lot of ports in use and want to call your service without having to remember port numbers or fight complicated Docker network configuration.
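
For example, the local-domain feature means something like this just works, with no port mapping or network setup (container name is whatever you choose; if I remember the docs right the suffix is .orb.local):

    # start a container as usual, then reach it by name instead of a published port
    docker run -d --name web nginx
    curl http://web.orb.local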

grep_name 9/5/2025|||
I use OrbStack, but I never look at it; it just opens when I start up the computer. I used to use Docker Desktop, which I never looked at either. The Docker daemon has always just been broken on Mac for as long as I've been trying to work with it (about 4 years, at least as far as Mac environments go).

Idk what the problem is, but it's ugly. I switched to OrbStack because there was something like a memory leak happening with Docker Desktop: it was just using waaaaay too many resources all the time, and sometimes it would just crash. I started using Docker Desktop from the get-go because when I came on I had multiple people with more experience say 'oh, you're coming from Linux? Don't even try to use the Docker daemon, just download Docker Desktop'.

osigurdson 9/5/2025||
On Windows, the easiest thing is to just use podman without podman desktop. It installs easily as a winget package and works in your current shell without having to first start WSL (it does that behind the scenes).
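
Roughly this, from memory (double-check the package ID with winget search podman):

    winget install -e --id RedHat.Podman
    podman machine init      # sets up the WSL2 backend the first time
    podman machine start
    podman run --rm quay.io/podman/hello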

On Linux, for development, podman and docker are pretty similar, but I prefer the k8s YAML approach over compose, so I tend to use podman.

I don't think Apple really cares about dev use cases anymore so I haven't used a Mac for development in a while (I would for iOS development of course if that ever came up).

delduca 9/5/2025||||
Trust me. It's worth every cent.
jbverschoor 9/5/2025|||
vs $11 for docker? blech
elliottr1234 9/5/2025||
Honestly, the debug shell alone is worth a good amount of $. You can remotely run shell commands on your deployed Docker container and install packages that are not available in the base image, without modifying the base image, which can be a lifesaver.

https://docs.orbstack.dev/features/debug

Let alone the local resource monitor, increased performance, automated local domains (no more complicated Docker network settings to get your app working with localhost), and more.
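
Going from memory of those docs (the linked page has the exact invocation, so treat this as a sketch):

    # attach a debug shell to a running container; installs go into an overlay,
    # not into the image itself
    orb debug mycontainer
    # then inside: apt-get install curl / apk add curl / whatever you need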

jbverschoor 9/6/2025|||
Check out my tool https://github.com/jrz/container-shell

I basically use (OrbStack) Docker containers as lightweight VMs, easily accessible through multiple shells, and they shut down when nothing is running anymore.

I use them for development isolation, or when I need to run some tool. It mounts the current directory, so your container is chrooted to that project.
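
The underlying idea, minus the multi-shell and auto-shutdown conveniences the tool adds, is roughly:

    # throwaway shell scoped to the current project directory
    docker run -it --rm -v "$PWD":/work -w /work ubuntu:24.04 bash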

chrisweekly 9/5/2025|||
it does sound pretty compelling
jbverschoor 9/6/2025||
See my other reply
vbezhenar 9/5/2025||
I made numerous attempts to switch from docker to podman. The latest one worked, and so far I haven't felt the need to go back to docker. There was only one issue: a huge uid didn't work in podman (like 1000000 I think), but I fixed the dockerfile and the rest worked fine for me. podman-compose does not work well in my experience, but I don't use it anymore.
wyoung2 9/5/2025||
> huge uid didn't work in podman (like 1000000 I think)

You're running into the `/etc/sub[ug]id` defaults. The old default was to start your normal user at 100000 plus 64k additional sub-IDs per user, but that changed recently when people at megacorps with hundreds of thousands of employees defined in LDAP and the like ran into ID conflicts. Sub-IDs now start at 2^19 on RHEL10 for this reason.
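
If anyone hits this, the fix is to check and widen your ranges; roughly (username and exact numbers are just examples):

    grep "$USER" /etc/subuid /etc/subgid      # default is something like alice:100000:65536
    # add another ~1.1M sub-IDs so a container uid around 1000000 has somewhere to map
    sudo usermod --add-subuids 2000000-3099999 --add-subgids 2000000-3099999 "$USER"
    podman system migrate                     # have podman pick up the new ranges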

osigurdson 9/5/2025|||
Instead of using compose, you can create Kubernetes-like YAMLs and run them with podman play kube.

Of course if you have really large / complex compose files or just don't feel like learning something else / aren't using k8s, stick with docker.
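
A minimal sketch of what that looks like (names and image are placeholders):

    cat > web-pod.yaml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:stable
        ports:
        - containerPort: 80
          hostPort: 8080
    EOF
    podman play kube web-pod.yaml    # newer releases also spell it: podman kube play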

r_lee 9/5/2025||
Have you tried nerdctl? It's basically just the containerd CLI, which is close to what k8s uses. Not a for-profit thing, just following the containerd spec.
0xbadcafebee 9/5/2025||
If "security" is the reason you're switching to Podman, I have some bad news.

Linux gets a new privilege escalation exploit like once a month. If something can break out of the Docker daemon, it will break out of your own user account just fine. Using a non-root app does not make you secure, regardless of whatever containerization feature claims to add security in your own user namespace. On top of all that, Docker has a rootless mode. https://docs.docker.com/engine/security/rootless/
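
(Rootless setup is roughly the following on Debian/Ubuntu; the linked page covers the prerequisites for other distros:)

    sudo apt-get install -y uidmap docker-ce-rootless-extras
    dockerd-rootless-setuptool.sh install
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
    docker run --rm hello-world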

The only things that will make your system secure are 1) hardening every component in the entire system, or 2) virtualization. No containers are secure. That's why cloud providers all use mini-VMs to run customer containers (e.g. AWS Fargate) or force the customer to manage their own VMs that run the containers.

amclennon 9/5/2025|
> That's why cloud providers all use mini-VMs to run customer containers (e.g. AWS Fargate) or force the customer to manage their own VMs that run the containers.

This is only partially true. Google's runtime (gvisor) does not share a kernel with the host machine, but still runs inside of a container.

s_ting765 9/5/2025|||
Google cloud dropped gVisor in favor of micro VMs.

https://cloud.google.com/blog/products/serverless/cloud-run-...

carwyn 9/5/2025|||
Second generation moved away from gVisor:

https://cloud.google.com/blog/products/serverless/cloud-run-...

amclennon 9/5/2025||
Ah, today I learned
jchw 9/6/2025||
I primarily use Podman but it is worth noting a few things:

- Podman is usually used "rootless", but it doesn't have to be. It can also be used with rootful containers. It's still daemonless, though it can shim the Docker socket (very useful for e.g. Docker Compose; rough commands at the end of this comment).

- Docker can be used in a rootless fashion too. It will still run a daemon, but it can be a user service using user namespaces. Personally I think Podman does much better here.

Podman also has some other interesting advantages, like better systemd integration. Sometimes Kubernetes just isn't necessary; Podman + systemd works well in a lot of those cases. (Note though that I have yet to try Quadlets.) Though unfortunately I don't think even the newer Quadlets integration has some basic niceties that Kubernetes has (like simple ways to do zero-downtime deployments).
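
For the socket shim specifically, enabling it is roughly:

    systemctl --user enable --now podman.socket
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
    docker compose up -d    # compose talks to Podman through that socket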

bjt 9/5/2025||
In the beginning, Docker DID have a "standalone mode" where it would launch just one container as a child process. That was actually an easier way to manage the mounts and cgroups necessary to stand up a container. I made a ticket to bring it back after they removed it, and it was closed with a wontfix. The cynic in me says it was done more for commercial reasons (they wanted a more full-featured daemon on the server doing things they could charge for) than to keep it a little shim that just did one thing.
ktosobcy 9/5/2025||
Tried to migrate (M1 MBP) a couple of times and it wasn't working well, resulting in reverting to Docker...
tristor 9/5/2025||
Podman is really painful if you do anything interesting inside a container. It's great and simple if all you're doing is running nginx or a scripting-language runtime or something in a container, but for folks who write actual software that gets compiled to target a system and uses syscalls, running in Podman is a pain in the ass unless you disable most of the "benefits" (see the flags below). Docker, on the other hand, pretty much just works.
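
For anyone hitting this, "disabling the benefits" in local dev usually ends up looking something like the following (image name is a placeholder; obviously don't ship these flags to production):

    podman run -it --rm \
      --security-opt seccomp=unconfined \
      --cap-add SYS_PTRACE \
      myimage:dev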
tomrod 9/5/2025||
Most of my containers end up on k8s clusters as pods. What else would one use podman or docker for beyond local dev or maybe running a local containerized service?
jeffhuys 9/5/2025||
For a while we used it for scalable preview environments: specify the branch, hit deploy, and have a QA-able environment with a full (anonymized) database ready to go in 15 minutes (the DB was the time bottleneck).

We ditched it for EC2s which were faster and more reliable while being cheaper, but that's beside the point.

Locally I use OrbStack by the way, much less intrusive than Docker Desktop.

spicyusername 9/5/2025||
EC2 and containers are orthogonal technologies, though.

Containers are the packaging format, EC2 is the infrastructure. (Docker, CRI-O, Podman, Kata, etc. are the runtimes.)

When deploying on EC2, you still need to deploy your software, and when using containers you still need somewhere to deploy to.

jeffhuys 9/5/2025||
True; I conflate the two often. The EC2s run on an AMI, same as production does, which before was a Docker image.
spicyusername 9/5/2025||
Arguably it would still be beneficial to use container images when building your AMIs (vs installing via apt or copying your binaries), since using container images still solves the "How do I get my software to the destination?" and the "How do I run my software and give it the parameters it needs?" problems in a universal way.
jeffhuys 9/5/2025||
In what way do you mean this? I've built two jobs for the preview envs: DeployEnvironment (runs the terraform stuff that starts the ec2/makes s3 buckets/creates api gateway/a lot of other crap) and then ProvisionEnvironment (zips the local copy of the branch and rsyncs it to the environment, and some other stuff). I build the .env file in ProvisionEnvironment, which accounts for the parameters. I'd love to get your point of view here!
spicyusername 9/5/2025||
Using a container image as your "artifact" is often a good approach to distributing your software.

    zips the local copy of the branch and rsyncs it to the environment, and some other stuff
This would happen in your Dockerfile, and then the process of actually "installing" your application is just docker run (or kubectl apply, etc), which is an industry standard requiring no specialized knowledge about your application (since that is abstracted away in your Dockerfile).

You're basically splitting the process of building and distributing your application into: write the software, build the image, deploy the image.

Everyone who uses these tools, which is most people by this point, will understand these steps. Additionally, any framework, cloud provider, etc. that speaks container images (ECS, Kubernetes, Docker Desktop, and so on) can manage your deployments for you. Also, the API of your container image (e.g. the environment variables, entrypoint flags, and mounted volumes it expects) communicates to those deploying your application what you expect them to provide during deployment.

Without all this, whoever or whatever is deploying your application has to know every little detail and you're going to spend a lot of time writing custom workflows to hook into every different kind of infrastructure you want to deploy to.
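
Concretely, the whole pipeline collapses to something like this (registry and image names made up):

    docker build -t registry.example.com/myapp:preview-123 .
    docker push registry.example.com/myapp:preview-123
    # on whatever runs it (an EC2 host, ECS task, k8s pod, ...):
    docker run -d --env-file .env -p 8080:8080 registry.example.com/myapp:preview-123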

anticorporate 9/5/2025|||
There are many SMB application use cases that sit somewhere on the spectrum between "self-hosted" and "enterprise" where docker/podman hit the sweet spot in terms of complexity and cost versus reliability. Containers have become a handy application packaging format (just don't tell yourself the isolation provides meaningful security on its own).
sc68cal 9/5/2025||
Someone has to manage your kubernetes environment. Depending on the nature of your workload, it may not be worth running kubernetes and instead just run everything via podman on your hosts. It really depends on how much investment you have in Kubernetes YAMLs.
devjab 9/5/2025||
I suspect a lot of places pour them into Azure Kubernetes Service and Azure Container Apps for this exact reason. I assume other cloud providers have similar services.

Though as someone who's used a lot of Azure infrastructure-as-code with Bicep and also done the K8s YAMLs, I'm not sure which is more complicated at this point, to be honest. I suspect that depends on your k8s setup, of course.

sc68cal 9/9/2025||
This is very true. My circumstances are a little unusual: using Azure cost more than running it internally on VMs, and running k8s or an equivalent didn't really add much value since I would have had to manage that. My workload is uniform, with each VM running the same services, so just running a podman pod was easier. There was no need for dynamic scheduling; scaling would just be launching more VMs and running more podman pods, and the entire deployment is just an Ansible playbook that preps the VM after boot and then launches the containers. It didn't make sense to have another kind of YAML file to deploy the containers.
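
For reference, what the playbook ends up running per VM is basically a couple of commands like these (names and images are placeholders, not my actual services):

    podman pod create --name app -p 8080:8080
    podman run -d --pod app --name web   docker.io/library/nginx:stable
    podman run -d --pod app --name cache docker.io/library/redis:7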
manbitesdog 9/5/2025||
We were using Podman for certain deployments to AWS recently. However, it was in an EC2 instance and the overhead was unnecessary, so we ended up pasting Bocker[1] into an AI and stripping out anything unnecessary until only the few isolation features we needed were left.

[1] https://github.com/p8952/bocker/tree/master

ZeroConcerns 9/5/2025|
I would love to love Podman, but the fact that it regularly just fails to work on my Windows laptop (the WSL2 instance seems fine, but can't be connected to, the UI just says 'starting', and none of the menu options do anything) and that I can't figure out how to make IPv6 networking work on any platform means that Docker isn't going anywhere for the foreseeable future, I'm afraid...