Posted by codesmash 9/5/2025

I ditched Docker for Podman (codesmash.dev)
1123 points | 654 comments
ttul 9/5/2025|
Back in 2001/2002, I was charged with building a WiFi hotspot box. I was a fan of OpenBSD and wanted to slim down our deployment, which was running on Python, to avoid having to copy a ton of unnecessary files to the destination systems. I also wanted to avoid dependency-hell. Naturally, I turned to `chroot` and the jails concept.

My deployment code worked by running the software outside of the jail environment and monitoring the running processes using `ptrace` to see what files it was trying to open. The `ptrace` output generated a list of dependencies, which could then be copied to create a deployment package.

This worked brilliantly and kept our deployments small and immutable and somewhat immune to attack -- not that being attacked was as big a concern in 2001 as it is today. When Docker came along, I couldn't help but recall that early work and wonder whether anyone has done something similar to monitor file usage within Docker containers and trim them down to size after observing actual use.
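
I haven't tried it against containers myself, but the same trick today would look roughly like this, with strace standing in for the raw ptrace work (the traced command and paths are just placeholders):

    # run the workload under strace and log every file it opens
    strace -f -e trace=open,openat -o /tmp/opens.log ./myapp --exercise-everything

    # keep only the opens that succeeded (drop the ENOENT misses),
    # pull out the quoted paths, and de-duplicate
    grep -v ENOENT /tmp/opens.log | grep -oE '"[^"]+"' | tr -d '"' | sort -u > /tmp/deps.txt

    # copy just those files into a staging root for the jail/image
    while read -r f; do
      [ -f "$f" ] && install -D "$f" "/srv/jailroot$f"
    done < /tmp/deps.txt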

sroerick 9/5/2025||
The best CI/CD pipeline I ever used was my first freelance deployment using Django. I didn't have a clue what I was doing and had to phone a friend.

We set up a git post-receive hook which built static files and restarted httpd on each receive. Deployment was just 'git push live master'.
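
The hook boils down to something roughly like this (paths and service names here are stand-ins, not the real ones):

    #!/bin/sh
    # post-receive hook on the bare repo: check out the pushed code,
    # rebuild static files, restart the web server
    GIT_WORK_TREE=/var/www/myapp git checkout -f master
    cd /var/www/myapp || exit 1
    python manage.py collectstatic --noinput
    sudo service httpd restart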

While I've used Docker a lot since then, that remains the single easiest deployment I've ever had.

I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?

rcv 9/5/2025|||
> I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

Sounds great if you're only running a single web server or whatever. My team builds a fairly complex system that's composed of ~45 unique services. Those services are managed by different teams with slightly different language/library/etc needs and preferences. Before we containerized everything it was a nightmare keeping everything in sync and making sure different teams didn't step on each other's dependencies. Some languages have good tooling to help here (e.g. Python virtual environments) but it's not so great if two services require a different version of Boost.

With Docker, each team is just responsible for making sure their own containers build and run. Use whatever you need to get your job done. Our containers get built in CI, so there is basically a zero percent chance I'll come in in the morning and not be able to run the latest head of develop because someone else's dev machine is slightly different from mine. And if it runs on my machine, I have very good confidence it will run on production.

sroerick 9/6/2025|||
OK, this seems like an absolutely valid use case. Big enterprise microservice architecture, I get it. If you have islands of dev teams, and a dedicated CI/CD dev ops team, then this makes more sense.

But this puts you in a league with some pretty advanced deployment tools, like high-level K8s, Ansible, and cloud orchestration work, and nobody thinks those tools are really that appropriate for the majority of dev teams.

People are out here using docker for like... make install.

AlphaSite 9/6/2025|||
Having a reproducible dev environment is great when everyone’s laptop is different and may be running different OSes, libraries, runtimes, etc.

Also docker has the network effect. If there were a good lightweight tool that was enough of an improvement, people would absolutely use it.

But it doesn’t exist.

In an ideal world it wouldn’t exist, but we don’t live there.

a012 9/6/2025|||
> Having a reproducible dev environment is great when everyone’s laptop is different and may be running different OSes, libraries, runtimes, etc.

Docker and other containerization tools solved the “it works on my machine” issue

em-bee 9/6/2025||
almost. there is still an issue with selinux. i just had that case. because the client develops with selinux turned off, the docker containers don't run on my machine if i have selinux turned on.
znpy 9/6/2025||
you're missing an intermediate environment (staging, pre-prod, canary, whatever you want to call it) with selinux turned on.
em-bee 9/6/2025|||
i don't. the customer does. and they don't seem to care. turning selinux off works for them and they are not paying me to fix that or work around it.
YJfcboaDaJRDw 9/6/2025|||
[dead]
adastra22 9/6/2025|||
Docker is that lightweight tool, isn’t it? It doesn’t seem that complex to me. Unfamiliar to those who haven’t used it, but not intrinsic complexity.
rglullis 9/6/2025||||
Imagine you have a team of devs, some using macOS, some using Debian, some using NixOS, some on Windows + WSL. Go ahead and try to make sure that everyone's development environment comes up correctly by simply running "git pull" and "make dev".
Fanmade 9/6/2025||
Ha, I've written a lot of these Makefiles and the "make dev" command even became a personal standard that I added to each project. I don't know if I read about that, or if it just developed into that because it just makes sense. In the last few years, these commands very often started a docker container, though. I do tend to work on Windows with WSL and most of my colleagues use macOS or Linux, so that's definitely one of the reasons why docker is just easier there.
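
The target itself is usually tiny; something like this (whatever the project actually needs sits behind the compose file):

    # Makefile sketch: "make dev" just wraps the container tooling
    # (the recipe line below must be indented with a tab)
    .PHONY: dev
    dev:
    	docker compose up --build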
em-bee 9/6/2025|||
15 years ago i had a customer who ran a dozen different services on one machine, php, python, and others. a single dev team. upgrading was a nightmare. you upgraded one service, it broke another. we hadn't yet heard about docker, and used proxmox. but the principle is the same. this is definitely not just big enterprise.
johnisgood 9/6/2025||
That is wild. I have been maintaining servers with many services and upgrading never broke anything, funnily enough: on Arch Linux. All the systems where an upgrade broke something were Ubuntu-based ones. So perhaps the issue was not so much about the services themselves, but the underlying Linux distribution and its presumably shitty package manager? I do not know the specifics so I cannot say, but in my case that was always the issue. Since then I do not touch any distribution that is not pacman-based; in fact, I use Arch Linux exclusively, with OpenBSD here and there.
em-bee 9/6/2025||
i used "broken" generously. it basically means that, for example, for multiple php based services we had to upgrade them all at once, which led to a long downtime until everything was up and running again. services in containers meant that we could deal with them one at a time and dramatically reduce the downtime and complexity of the upgrade process.
curt15 9/6/2025|||
Would there still have been a problem if you were able to install multiple php versions side-by-side? HPC systems also have to manage multiple combinations of toolchains and environments and they typically use Modules[1] for that.

[1] https://hpc-wiki.info/hpc/Modules

em-bee 9/6/2025||
probably not, but it wasn't just php, and also one of the goals was the ability to scale up. and so having each service in its own container meant that we could move them to different machines and add more machines as needed.
johnisgood 9/6/2025|||
Oh, I see what you mean now, okay, that makes sense.

I would use containers too, in such cases.

SkiFire13 9/6/2025||||
> so there is basically a zero percent chance I'll come in in the morning and not be able to run the latest head of develop because someone else's dev machine is slightly different from mine.

It seems you never had to deal with timezone-dependent tests.

sroerick 9/6/2025||
What are timezone-dependent tests? Sounds like a bummer
SkiFire13 9/7/2025||
I once had to work with a legacy Java codebase and they hardcoded the assumption their software would run in the America/New_York timezone, except some parts of the codebase and tests used the _system_ timezone, so they would fail if run in a machine with a different timezone.
const_cast 9/5/2025|||
[flagged]
latentsea 9/6/2025|||
I like how you didn't even ask for any context that would help you evaluate whether or not their chosen architecture is actually suitable for their environment before just blurting out advice that may or may not be applicable (though you would have no idea, not having enquired).
const_cast 9/8/2025||
Much like parallel programming, distributed systems have a very small window of requirement.

Really less than 1% of systems need to be distributed. Are you Google? No? Then you probably don't need it.

The rest is just for fun. Or, well, pain. Usually pain.

latentsea 9/11/2025||
I like how you didn't even enquire as to what size organisation they worked at in order to determine if it might actually be applicable in their case.
const_cast 9/12/2025||
I never said it was applicable to them, in fact I said the opposite:

> Obviously that ship has sailed for you, but I mean in the general sense.

In the general sense, no, you don't need a distributed system. Even if you have billions of dollars worth of revenue - no, you don't need a distributed system. I know, because I've worked on monoliths that service hundreds of thousands of users and generate billions in revenue.

If you're making YouTube, maybe you need a distributed system. Are you making YouTube? Probably not.

You can, of course, choose to make a distributed system anyway. If you want to decrease your development velocity 1000x and introduce unbelievable amounts of complexity and risk.

latentsea 9/14/2025||
Were there at least 1000 engineers working on that system you worked on?
const_cast 9/16/2025||
Yes, 1500 or so.
Bnjoroge 9/5/2025|||
What they described is a fairly common setup in damn near most enterprises
const_cast 9/8/2025|||
Yeah, most enterprise software barely works and is an absolute maintenance nightmare because they're sprawling distributed systems.

Ask yourself: how does an enterprise with 1000 engineers manage to push a feature out 1000x slower than two dudes in a garage? Well, processes, but also architecture.

Distributed systems slow down your development velocity by many orders of magnitude, because they create extremely fragile systems and maintenance becomes extremely high risk.

We're all just so used to the fragility and risk we might think it's normal. But no, it's really not, it's just bad. Don't do that.

nazgul17 9/5/2025||||
Both can be true
fulafel 9/6/2025||||
Enterprises are frequently antipattern zoos. If you have many teams you can use the modular monolith pattern instead of microservices, that way you have the separation but not the distributed system.
sroerick 9/6/2025|||
Wherefore art thou IBM
bolobo 9/5/2025||||
> I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

For me, as an ex-ops person, the value proposition is being able to package a complex stack made of one or more DBs, several services and tools (ours and external), plus describe the interface of these services with the system in a standard way (env vars + mount points).

It massively simplifies the onboarding experience, makes updating the stack trivial, and also allows devs, CI, and prod to run the same versions of all the libraries and services.

sroerick 9/6/2025||
OK, I completely agree with this.

That said, I'm not a nix guy, but to me, intuitively NixOS wins for this use case. It seems like you could either

A. Use declarative OS installs across deployments, or

B. Put your app into a container (which sometimes deploys its own kernel and sometimes doesn't), push it to a third-party cloud registry or set up your own registry, and then run that container on a random Ubuntu host or cloud hosting site where you basically don't administer or do any ops; you just kind of use it as an empty vessel which exists to run your Docker container.

I get that in practice, these are basically the same, and I think that's a testament to the massive infrastructure work Docker, Inc has done. But it just doesn't make any sense to me

sterlind 9/6/2025|||
you can actually declare containers directly in Nix. they use the same config/services/packages machinery as you'd use to declare a system config. and then you can embed them in the parent machine's config, so they all come online and talk to each other with the right endpoints and volumes and such.

or you can use `build(Layered)Image` to declaratively build an OCI image with whatever inside it. I think you can mix and match the approaches.

but yes, I'm personally a big fan of Nix's solution to the "works on my machine" problem. all the reproducibility without the clunkiness of having to shell into a special dev container, particularly great for packaging custom tools or weird compilers or other finicky things that you want to use, not serve.
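
for anyone curious, the two approaches look roughly like this (names and packages are just examples, not a working config):

    # 1. a declarative NixOS container inside the host's configuration.nix
    containers.web = {
      autoStart = true;
      config = { pkgs, ... }: {
        services.nginx.enable = true;
        system.stateVersion = "24.05";
      };
    };

    # 2. an OCI image built with dockerTools, no Dockerfile involved
    pkgs.dockerTools.buildLayeredImage {
      name = "my-app";
      contents = [ pkgs.python3 ];
      config.Cmd = [ "${pkgs.python3}/bin/python3" "-m" "http.server" ];
    }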

bolobo 9/6/2025|||
The end result will be the same but I can give 3 docker commands to a new hire and they will be able to set up the stack on their MacBook or Linux or Windows system in 10 minutes.

Nix is, as far as I know, not there and we would probably need weeks of training to get the same result.

Most of the time the value of a solution is not in its technical perfection but in how many people already know it, its documentation, and, more importantly, all the dumb tooling that's around it!

Shog9 9/5/2025||||
Reproducibility? No.

Not having to regularly rebuild the whole dev environment because I need to work on one particular Python app once a quarter and its build chain reliably breaks other stuff? Priceless.

janjongboom 9/5/2025|||
This false sense of reproducibility is why I funded https://docs.stablebuild.com/ some years ago. It lets you pin stuff in Dockerfiles that is normally unpinnable, like OS package repos, Docker Hub tags and random files on the internet. So you can go back to a project a year from now and actually get the same container back again.
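
Plain Docker only gets you part of the way there: you can pin the base image exactly by digest, but anything fetched at build time still floats. A sketch of the gap (the digest is a placeholder):

    # the base image can be pinned exactly...
    FROM debian:bookworm-slim@sha256:<digest>

    # ...but this resolves against whatever the package repo serves *today*,
    # so rebuilding a year later can pull in different versions
    RUN apt-get update && apt-get install -y curl ca-certificates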
jselysianeagle 9/5/2025||
Isn't this problem usually solved by building an actual image for your specific application, tagging that and pushing it to some docker repo? At least that's how it's been at places I've worked at that used docker. What am I missing?
lmm 9/6/2025|||
What do you do when you then actually need to make a change to your application (e.g. a 1-liner fix)? Edit the binary image?
macNchz 9/6/2025|||
Build and tag internal base images on a regular cadence that individual projects then use in their FROM. You’ll have `company-debian-python:20250901` as a frozen-in-time version of all your system level dependencies, then the Dockerfile using it handles application-level dependencies with something that supports a lockfile (e.g. uv, npm). The application code itself is COPY’d into the image towards the end, such that everything before it is cached, but you’re not relying on the cache for reproducibility, since you’re starting from a frozen base image.

The base image building can be pretty easily automated, then individual projects using those base images can expect new base images on a regular basis, and test updating to the latest at their leisure without getting any surprise changes.
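
Concretely, a project Dockerfile in this setup ends up looking something like this (a sketch using the illustrative names from above, with uv as the lockfile tool and uv assumed to be baked into the base image):

    # frozen-in-time company base image, rebuilt on a regular cadence
    FROM company-debian-python:20250901

    WORKDIR /app

    # application-level dependencies from the lockfile, cached as their own layer
    COPY pyproject.toml uv.lock ./
    RUN uv sync --frozen

    # application code last, so everything above stays cached between builds
    COPY . .
    CMD ["uv", "run", "python", "-m", "myapp"]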

lmm 9/7/2025||
At that point you're doing most of the work yourself, and the value add from Docker is pretty small (although not zero) - most of the gains are coming from using a decent language-level dependency manager.
xylophile 9/6/2025||||
You can always edit the file in the container and re-upload it with a different tag. That's not best practice, but it's not exactly sorcery.
lmm 9/6/2025||
It's not, but at that point you're giving up on most of the things Docker was supposed to get you. What about when you need to upgrade a library dependency (but not all of them, just that one)?
jselysianeagle 9/7/2025||
I'm not sure what the complication here is. If application code changes, or some dependency changes, you build a new docker image as needed, possibly with an updated Dockerfile as well if that's required. The Dockerfile is part of the application repo and versioned just like everything else in the repo. CICD helps build and push a new image during PRs, or tag creation, just like you would with any application package / artifact. Frequent building and pushing of docker images can over time start taking up space of course but you can take care of that by maybe cleaning out old images from time to time if you can determine they're no longer needed.
zmmmmm 9/6/2025|||
you append it to the end of the Dockerfile so that the previous image is still valid with its cached build steps
lmm 9/6/2025||
And just keep accreting new layers indefinitely?
jselysianeagle 9/7/2025||
Docker re-uses layers as needed and can detect when a new layer needs to be added. It's not like images grow in size without bound each time something is changed in the Dockerfile.
jamwil 9/5/2025||||
Perhaps more focused on docker-based development workflows than final deployment.
fcarraldo 9/5/2025|||
Builds typically aren’t retained forever.
northzen 9/5/2025||||
Use pixi or uv to maintain this specific environment and separate it from the global one
sroerick 9/6/2025|||
I know this pain, and Docker absolutely makes sense for this use case, but I feel like we would both agree that this is a duct tape and bubble gum solution? Though a totally justifiable one
Shog9 9/6/2025||
Oh sure. 20 years ago I used VMs and that was also a duct tape solution. I'd have hoped for a proper solution by now, but a lighter hack works too
roozbeh18 9/5/2025||||
Someone wrote a PHP7 script to generate some of our daily reports a while back that nobody wants to touch. Docker happily runs the PHP7 code in the container and generates the reports on any system. It's portable, and it doesn't require upkeep.
kqr 9/6/2025||||
Docker in and of itself does not do you much good. Its strength comes from the massive amounts of generic tooling that is built around the container as the standard deployable unit.

If you want to handle all your deployments the same way, you can basically only choose between Nix and containers. Unfortunately, containers are far more popular and have more tooling.

sroerick 9/6/2025||
I think this is accurate. It just feels like a lot of "we use Docker because everybody uses Docker. That's just the way we do it."

But if you actually add up the time we spend using docker, I'm really not sure it saves that many cycles

antihero 9/6/2025||||
> Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?

Yes. Everything on my box is ephemeral and can be deleted and recreated or put on another box with little-to-no thought. Infrastructure-as-code means my setup is immutable and self-documented.

It's a little more time to set up initially, but now I know exactly what is running.

I don't really understand the 24/7 comment, now that it is set up there's very very little maintenance. Sometimes an upgrade might go askew but that is rare.

Any change to it is recorded as a git commit, I don't have to worry about logging what I've done ever because it's done for me.

Changes are handled by a GitHub action, all I have to do to change what is running is commit a file, and the infra will update itself.

I don't use docker-compose, I use a low-overhead microk8s single-node cluster that I don't think about at all really, I just have changes pushed to it directly with Pulumi (in a real environment I'd use something like ArgoCD) and everything just works nicely. Ingress to services is done through Cloudflare tunnels so I don't even have to port-forward or think about NAT or anything like this.

To update my personal site, I just do a git commit/push, then its CI/CD builds a container and updates the Pulumi config in the other repo to point to the latest hash, which then kicks off an action in my infra repo to do a Pulumi apply.

Currently it runs on Ubuntu but I'm thinking of using Talos (though it's still nice to be able to just SSH to the box and mess around with files).

I'm not sure why people struggle so much with this, or with seeing the benefits of this approach. It seems like a lot of complexity if you're inexperienced, but if you've been working with computers for a long time, it isn't particularly difficult—there are far more complicated things that computers do.

I could throw the box (old macbook) in a lake and be up and running with every service on a new box in an hour or so. Or I could run it on the cloud. Or a VPS, or metal, or whatever really, it's a completely portable setup.

tasuki 9/6/2025||||
> We set up a git post receive hook which built static files and restarted httpd on a git receive. Deployment was just 'git push live master'.

I still do that for all my personal projects! One of the advantages of docker is that you don't have to rebuild the thing on each deployment target.

bonzini 9/6/2025||||
QEMU used a similar CI for its website before switching to Gitlab pages:

https://gist.github.com/bonzini/1abbbdec739e77503945a3605e0e...

ctkhn 9/6/2025||||
Just for my home server, I have more than 10 containers for home assistant, vpn, library management for movies/tv/music, photos backup, password manager, and a notes server. I started without knowing what docker was, and in less than a year realized running services directly on my OS was more hassle than I wanted, both with compatibility between services' dependencies, networking setup for them, and configuring reboots and upgrades. I would say the reproducibility and configurability is easily worth the slight overhead, and in my experience it even reduced it.
ownagefool 9/6/2025||||
Forget docker for a second.

Suddenly you're in a team with 2-3 people and one of them likes to git push broken code and walk off.

Okay, let's make this less about working with a jackass: same setup, but every 5 minutes of downtime costs you millions of dollars. One of your pushes works locally but doesn't work on the server.

The point of a more structured / complex CI/CD process is to eliminate failures. As the stakes become higher, and the stack becomes more complex, the need for the automation grows.

Docker is just a single part of that automation that makes other things possible / lowers a specific class of failures.

IanCal 9/5/2025||||
Managing and running some containers is really easy though. And running daemons? Don’t we all have loads of things running all the time?

I find it easier to have the same interface for everything, where I can easily swap around ports.

strzibny 9/6/2025||||
I know well what you are talking about since I did something similar, but I finally moved to Docker with Kamal (except one project I still have to move). The advantage of Docker's reproducibility is peace of mind when it comes to rollbacks and running exact versions, system dependencies included. If anyone is curious, I wrote Kamal Handbook to help people adopt Kamal, which I think brings all the niceness to Docker deployment so it's not annoying.
twelvedogs 9/6/2025||||
> Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?

Why wouldn't it be? Containers are super easy to manage, dockerd uses bugger-all resources in dev (on Linux anyway), and docker compose files are the simplest setup scripts I've ever used

I like docker because it's easy and I'm lazy

rollcat 9/8/2025||||
> God forbid, OpenBSD

What exactly is your problem with OpenBSD? Shaming it completely out of context is kinda mean - they're the upstream for OpenSSH and LibreSSL.

throwmeaway222 9/5/2025|||
> I didn't have a clue what I was doing and had to phone a friend.

> I genuinely don't understand what docker brings to the table.

I think you invalidated your own opinion here

sroerick 9/6/2025||
Sorry, sir, I didn't realize nobody should ever spend any time learning anything or, failing that, describe what happened to them during that time. I'm no neckbeard savant but I do have a dozen years of deploying web apps and also using Docker during that time, so I think I'm allowed to have an opinion. Go drink some warm milk, you will feel better.
throwmeaway222 9/6/2025||
Saying you have 12 years of deployment experience, some of that using Docker, would have been a more useful thing to include in your OC. I was literally just pointing out that your argument was pretty weak - this context would have made it stronger.
bmgoau 9/5/2025|||
First result on Google, 22k stars https://github.com/slimtoolkit/slim
ttul 9/6/2025|||
Super cool looking project. I always thought this concept was useful and wondered why base Docker did not incorporate the same idea.
KuhlMensch 9/7/2025|||
Well, this is elegant/cool.
champtar 9/5/2025|||
In OpenWrt there is ujail: you give it an ELF (or several) to run, it'll parse them to find all the libraries they need, then it creates a tmpfs and bind-mounts the required files read-only. https://github.com/openwrt/procd/blob/dafdf98b03bfa6014cd94f...
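
You can approximate that discovery step by hand with ldd, which is more or less what the ELF parsing automates (the binary path is just an example):

    # list the shared libraries an ELF needs; ujail then bind-mounts
    # each of them read-only into the jail's tmpfs
    ldd /usr/sbin/uhttpd | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }' | sort -u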
kqr 9/6/2025|||
I interviewed for a startup that does exactly this, except also for syscalls etc. They're mainly focused on security and not size. https://bifrostsec.com/

(I ended up taking another offer but I still think they're onto something.)

t43562 9/5/2025||
To provide 1 contrary opinion to all the others saying they have a problem:

Podman rocks for me!

I find docker hard to use and full of pitfalls and podman isn't any worse. On the plus side, any company I work for doesn't have to worry about licences. Win win!

nickjj 9/5/2025||
> On the plus side, any company I work for doesn't have to worry about licences. Win win!

Was this a deal breaker for any company?

I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.

If you have a dev team of 10 people and are extremely profitable to where you need licenses you'd end up paying $9 a year per developer for the license. So $90 / year for everyone, but if you have US developers your all-in payroll is probably going to be over $200,000 per developer or roughly $2 million dollars. In that context $90 is practically nothing. A single lunch for the dev team could cost almost double that.

To me that is a bargain, you're getting an officially supported tool that "just works" on all operating systems.

csours 9/5/2025|||
Companies aren't monoliths, they're made of teams.

Big companies are made of teams of teams.

The little teams don't really get to make purchasing decisions.

If there's a free alternative, little teams just have to suck it up and try to make it work.

---

Also consider that many of these expenses are borne by the 'cost center' side of the house, that is, the people who don't make money for the company.

If you work in a cost center, the name of the game is saving money by cutting expenses.

If technology goes into the actual product, the cost for that is accounted for differently.

citizenpaul 9/5/2025||
It always amazes me how hostile most large companies are to paying for developer tools that have a trivial cost. Then they will approve the budget for some "yay quarterly profits" party no one cares about that costs $100k for the venue rental alone.

I do understand that this is mostly because management wants staff to be replaceable and disposable; having specialty tools suggests that a person can be unique.

flyinglizard 9/5/2025||
No, it's not because of that. It's because:

1. You want to control spend - there are budgets.

2. You want to control accounting - minimize the number of vendors you work with. Each billing needs to come with an invoice, these need to be managed, when a developer leaves you need to cancel their seat, etc. It's a pain.

3. You want to control compliance - are these tools safe? Are they accessing sensitive data? Are they audited?

4. You want to control interoperability between teams. Can't have it become a zoo of bring-your-own stuff.

So free tools get around all of these, you can just wing it under the radar and if the tool becomes prominent enough then you go fight the war to have it adopted. Once there's spend, you need to get into line. And that line makes a lot of sense when you're into 30 developers, let alone hundreds.

strken 9/6/2025||
If you've got 30 developers then you've probably got, what, five or six teams? Your tech leads/senior engineers/whoever provides tech leadership at a team level are operating at a scale where they can go to the pub with your head of engineering/CTO/each other/the dude from finance who has a credit card and fit around a table.

I've worked at companies that size and the "war" involved putting time in the calendar of the head of engineering, asking how his son was, demoing the product we wanted for about two minutes and explaining the pain point it solved, then promising to get our legal team and the one security person to review it after he put the credit card in and before we used it in prod. When I worked somewhere larger it was much more difficult.

citizenpaul 9/7/2025||
Obviously all of this is far from universal. I've worked at places that just gave me a card with a $500-$1000/mo limit for whatever I need. I've worked at a place where, when I asked for $100 of hard drive space to test something outside of production, they said find another way.
akerl_ 9/5/2025||||
The problem isn’t generally the cost, it’s the complexity.

You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.

nickjj 9/5/2025|||
A large company who is buying licenses for tools has to deal with this for many different things. Docker is not unique here.

An IT department for a company of that size should have ironed out workflows and automated ways to keep tabs on who has what and who needs what. They may also be under various compliance requirements that expect due diligence to happen every quarter to make sure everything is legit from a licensing perspective.

Even if it's not automated, it's normal for a team to email IT / HR with new hire requirements. Having a list of tools that need licenses in that email is something I've seen at plenty of places.

I would say there's lots of other tools where onboarding is more complicated from a license perspective because it might depend on if a developer wants to use that tool and then keeping tabs on if they are still using it. At least with Docker Desktop it's safe to say if you're on macOS you're using it.

I guess I'm not on board with this being a major conflict point.

Aurornis 9/5/2025|||
> An IT department for a company of that size should have ironed out workflows and automated ways to keep tabs on who has what and who needs what. They may also be under various compliance requirements that expect due diligence to happen every quarter to make sure everything is legit from a licensing perspective.

Correct, but every additional software package and each additional license adds more to track.

Every new software license requires legal to review it.

These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)

Then they start investigating how often people use software packages and realize most people aren't actually using most software they have seats for. This happens because when software feels 'free' people request it for one-time use for a thing or to try it out and then forget about it, so you have low utilization across the board.

So they start making it harder to add new software. They start auditing usage. They may want reports on why software is still needed and who uses it.

It all adds up. I understand you don't think it should be this way, but it is at big companies. You're right that the $24/user per month isn't much, but it's one of dozens of fees that get added, multiplied by every employee in the company, and now they need someone to maintain licenses, get them reviewed, interact with the rep every year, do the negotiation battles, and so on. It adds up fast.

oooyay 9/5/2025|||
> Correct, but every additional software package and each additional license adds more to track.

This is going to differ company to company but since we're narrowing it to large companies I disagree. Usually there's a TPM that tracks license distribution and usage. Most companies provide that kind of information as part of their licensing program (and Docker certainly does.)

> Every new software license requires legal to review it.

Yes, but this is like 90% of what legal does - contract review. It's also what managers do but more on the negotiation end. Most average software engineers probably don't realize it but a lot of cloud services, even within a managed cloud provider like AWS, require contract and pricing negotiation.

> These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)

As I said earlier, I can't speak for other companies but at large companies I've worked at this just simply isn't true. There's metrics for when the software isn't being used because the corporation is financially incentivized to shrink those numbers or consolidate on software that achieves similar goals. They're certainly individually tracked fairly far up the chain even if they do appear as a big number somewhere.

Eduard 9/5/2025|||
That is all the most basic bookkeeping; I cannot take the argument "$x/user × every employee adds up" seriously.

Also, by 20 employees or computers at the latest, someone in charge of IT (sysadmin, IT department) would decide to use a software asset management tool (aka software inventory system) to automatically track, roll out, uninstall, and monitor vetted software. Anything else is just unprofessional.

akerl_ 9/5/2025||||
Idk what to tell you other than that it is.

Large companies do have ways to deal with this: they negotiate flat rates or true-up cadences with vendors. But now you’ve raised the bar way higher than “just use podman”.

dec0dedab0de 9/5/2025||||
It becomes a pain point when the IT team never heard of docker, all new licenses need to be approved by the legal department, and your manager is afraid to ask for any extra budget.

Also, I don't want to have to troubleshoot why the docker daemon isn't running every time I need it

regularfry 9/5/2025|||
I'll see your "IT team never heard of docker" and raise you "security want to ban local containers because they allow uncontrolled binaries onto corporate hardware". But that's not something podman solves...
mgkimsal 9/5/2025||
Every single developer is running 'uncontrolled source code' on corporate hardware every single day.
cyberpunk 9/5/2025|||
The defence isn't against malicious developers writing evil code, but some random third party container launched via a curl | bash which mounts ~/ into it and posts all your ssh keys to some server in china... Or whatever.

Or so I was told when I made the monumental mistake of trying to fight such a policy once.

So now we just have a don't ask don't tell kind of gig going on.

I don't really know what the solution is, but dev laptops are goldmines for haxxors, and locking them down stops them from really being dev machines. shrug

zmmmmm 9/6/2025||
> some random third party container launched via a curl | bash which mounts ~/ into it and posts all your ssh keys to some server in china

it's pretty stupid because the same curl | bash that could have done that could have just posted the same contents directly to the internet without the container. The best chance you actually have is to do as much development as possible inside a sealed environment like ... a container, where at least you have some way to limit partially trusted code's visibility of your file system.

regularfry 9/6/2025|||
And some in the security space regard this as an existential problem which cannot be permitted to persist.
reaperducer 9/5/2025||||
It becomes a pain point when the IT team never heard of docker

Or when your IT department is prohibited from purchasing anything that doesn't come from Microsoft or CDW.

0cf8612b2e1e 9/5/2025||||
I have personally given up trying to get a $25 product purchased through official channels. The process can make everything painful.
johannes1234321 9/5/2025|||
Congrats, the process fulfilled its purpose. Another small cost saved :)
0cf8612b2e1e 9/5/2025||
Trust me, the thought crossed my mind. They definitely beat me.
regularfry 9/6/2025|||
It can be easier to spend £100K than £100.
axlee 9/5/2025|||
>It becomes a pain point when the IT team never heard of docker

Where do you work? Is that even possible in 2025?

cyberpunk 9/5/2025|||
'corp IT' in a huge org is typically all outsourced MCSEs who are seemingly ignorant of every piece of technology outside of Azure.

Or so it seems to me whenever I have to deal with them. We ended up with Microsoft defender on our corp Macs even.. :|

anakaine 9/5/2025||||
It's absolutely possible. We've also had them unaware of GitHub, and had them label Amazon S3 as a risk specifically because it wasn't Microsoft.

There is no bottom to the barrel, and incompetence and insensitivity can rise quite high in some cases.

dec0dedab0de 9/5/2025||||
I work at a cool place now that is well aware of it, but in 2023 I worked at a very large insurance company with over a thousand people in IT. Some of the gatekeepers were not aware of docker. Luckily another team had set up Openshift, but then the approval process for using it was a nightmare.
tracker1 9/5/2025|||
Apparently they work in the past...
unethical_ban 9/5/2025||||
>An IT department for a company of that size should have ironed out workflows

I'm in IT consulting. If most companies could even get the basic best practices of the field implemented, I wouldn't have a job.

reaperducer 9/5/2025||||
An IT department for a company of that size should have ironed out workflows

The business world is full of things that "should" be a certain way, but aren't.

For the technology world, double the number.

We'd all like to live in some magical imaginary HN "should" world, but none of us do. We all work in companies that are flawed, and sometimes those flaws get in the way of our work.

If you've never run into this, buy a lottery ticket.

Dennip 9/5/2025||||
Not sure on Docker Desktop's specifics, but usually large companies have enterprise/business licencing available and specifically do not deal with this, and do not want to manually deal with this, because they can use SSO & dynamically assign licenses to user groups etc.
nullify88 9/6/2025||
Or use Microsoft MyAccess to have the users allocate a license themselves.
stronglikedan 9/5/2025||||
Or just use Podman and don't worry about licenses, since it's just as good but sooo much easier.
reaperducer 9/5/2025||
Some day I hope to work for a company small enough that I can "just" use any software I feel like for whatever reasons I want.

But I have to feed my family.

worik 9/5/2025||
> I can "just" use any software I feel like for whatever reasons I want.

What could possibly go wrong?

nullify88 9/6/2025||
For my day job, installing software / admin access is reserved for those who work in IT / software development. The rest of the business needs to go through a vetted software library.
eptcyka 9/6/2025||||
A large company has to deal with many different things, some of the things are intrinsic to the business, some are not. When push comes to shove, business will try to relieve itself of the latter so it can focus on the former.
itsdrewmiller 9/5/2025||||
You're arguing against a straw man here - no one but you used the term "dealbreaker" or "major" conflict point. It can be true that it is not a dealbreaker but still a downside.
zbrozek 9/5/2025||||
Yeah all of that is a huge pain and fantastic to avoid.
worik 9/5/2025|||
Not just large companies

OT because not docker

In the realm of artistic software (thinking Ableton Live and the Adobe suites) licensing hell is a real thing. In my recent experience it sorts the amateurs from the pros, in favour of the amateurs

The time spent learning the closed system includes hours and dollars wrestling licenses. Pain++. Not just the unaffordable price, but time that could be spent creating

But for an aspiring professional it is the cost of entry. These tools must be mastered (if not paid for, ripping is common) as they have become a key part of the mandated tool chains, to the point of enshittification

The amateur is able to just get on with it, and produce what they want when they want with a dizzying array of possible tools

weberc2 9/5/2025||||
I'm of the opinion that large companies should be paying for the software they use regardless of whether it's open source or not, because software isn't free to develop. So assuming you're paying for the software you use, you still have the problem that you are subject to your internal procurement processes. If your internal procurement processes make it really painful to add a new seat, then maybe the processes need to be reformed. Open source only "fixes" the problem insofar as there's no enforcement mechanism, so it makes it really easy for companies to stiff the open source contributors.
akerl_ 9/5/2025|||
So, I'm of two thoughts here:

1. As parallel commenters have pointed out, no. Plenty of open source developers exist who aren't interested in getting paid for their open source projects. You can tell this because some open source projects sell support or have donation links or outright sell their open source software and some do not. This line of thinking seems to come out of some utopian theoretical world where open source developers shouldn't sell their software because that makes them sell-outs but users are expected to pay them anyways.

2. I do love the idea of large companies paying for open source software they use because it tends to set up all kinds of good incentives for the long term health of software projects. That said, paying open source projects tends to be comically difficult. Large companies are optimized for negotiating enterprise software agreements with a counterparty that is primed to engage in that process. They often don't have a smooth way to like, just feed money into a Donate form, or make a really big Github or Patreon Sponsorship, etc. So even people in large companies that really want to give money to open source devs struggle to do so.

weberc2 9/10/2025||
I think I fully agree, although to expound on (1) I don't think that is the kind of software that any company should want to depend on for anything remotely important. I'm sure there are counter examples where you get a high quality project that doesn't require or accept donations, but I think these will be exceedingly few and far between. It seems like it's in the company's best interest to make sure the development for a dependency isn't going to go away for lack of funding?
bityard 9/5/2025||||
"stiff the open source contributors"

I'm not sure you realize that "open source" means anyone anywhere is free to use, modify, and redistribute the software in any way they see fit? Maybe you're thinking of freeware or shareware which often _do_ come with exceptions for commercial use?

But anyway, as an open source contributor, I have never felt I was being "stiffed" just because a company uses some software that I helped write or improve. I contribute back to projects because I find them useful and want to fix the problems that I run into so I don't have to maintain my own local patches, help others avoid the same problems, and because making the software better is how I give back to the open source community.

pferde 9/6/2025||
[flagged]
t43562 9/7/2025||
....and yet those tech bros all use open source software themselves.
rlpb 9/5/2025|||
> so it makes it really easy for companies to stiff the open source contributors

I don't think there's any stiffing going on, since the open source contributors knowingly contributed with a license that specifically says that payment isn't required. It is not reasonable for them to take the benefits of doing that but then expect payment anyway.

devjab 9/5/2025||||
> You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.

I don't quite get this argument. How is that different from any piece of software that an employee will want in any sort of enterprise setting? From an IT operations perspective it is true that Docker Desktop on Windows is a little more annoying than something like an Adobe product, because Docker Desktop users need their local user to be part of their local docker security group on their specific machine. Aside from that I would argue that Docker Desktop is by far one of the easiest developer tools (and do note that I said developer tools) to track licenses for.

In non-enterprise setups I can see why it would be annoying but I suspect that's why it's free for companies with fewer than 250 people and 10 million in revenue.

akerl_ 9/5/2025|||
I touched on this in my parallel reply, but to expand on it:

The usual way that procurement is handled, for the sake of everybody's sanity, is to sign a flat-rate / tiered contract, often with some kind of true-up window. That way the team that's trying to buy software licenses doesn't have their invoices swinging up/down every time headcount or usage patterns shifts, and they don't have to go back to the well every time they need more seats.

This is a reasonably well-oiled machine, but it does take fuel: setting up a new enterprise agreement like that takes humans and time, both of which are not free. So companies are incentivized to be selective in when they do it. If there's an option that requires negotiating a license deal, and an option that does not, there's decent inertia towards the latter.

All of which is a long way to say: many large enterprises are "good" at knowing how many of their endpoints are running what software, either by making getting software a paperwork process or by tracking with some kind of endpoint management (though it's noteworthy that there are also large enterprises that suck at endpoint management and have no clue what's running in their fleet). The "hard" part (where "hard" means "requires the business to expend energy they'd rather not) is getting a deal that doesn't involve the license seat counter / invoice details having to flex for each individual.

Aurornis 9/5/2025||||
You're right that it's no different than other software, but when you reach the point where the average employee has 20-30 different licenses for all the different things they might use, managing it all becomes a job for multiple people.

Costs and management grow in an O(n*m) manner where n is the number of employees and m is the number of licenses per employee. It seems like nothing when you're small and people only need a couple of licenses, but a few years in, the aggregate bills are eye-popping and you realize the majority of people don't use most of the licenses they've requested (it really happens).

Contrast this with what it takes for an engineer to use a common, free tool: They can just use it. No approval process. No extra management steps for anyone. Nothing to argue that you need to use it every year at license audit time. Just run with it.

devjab 9/7/2025||
> Contrast this with what it takes for an engineer to use a common, free tool: They can just use it. No approval process.

As far as IT operations goes, it's usually easier to get approval for paid products since they come with support and are viewed as more "trustworthy". At least in my experience.

I've never worked in a 300+ organisation where you could "just use" things. I have worked in places where they gave some of us local admins (I've been a domainadmin in a few places too), but there is usually a large bureaucracy around software regardless of licenses. Where I work right now, licensing is a minor part of it for companies with good payment systems (like Docker) where it'll automatically go on the books and be EU tax deducted. Compare that to GitKraken where you need to create an IT owner account inside their system, and then distribute the annual licenses manually after you pay for them with a credit card that you will then need to manually submit for tax deduction.

maigret 9/5/2025|||
> How is that different from any piece of software that an employee will want in any sort of enterprise setting?

Open source is different in exactly that, no procurement.

Finance makes procurement annoying so people are not motivated to go through it.

devjab 9/7/2025|||
Around here it's usually a lot harder to get open source software approved with IT because they tend to dislike products where they can't call a company. Licensing is easier of course, but for a lot of software licensing is virtually automatic. With Docker it's billed by the number of people in the Docker AD group, and it's flagged as EU tax deductible automatically.

Not that this should be an argument for Docker. The idea that having someone to call makes a piece of software "safer" is as ridiculous as it sounds. Especially if you've ever tried "calling" a company you buy 20 licenses from, and when I say call what I really mean is talking with a chatbot and then waiting a month for them to get back to you via email. But IT's gonna IT.

mgkimsal 9/5/2025|||
That assumes that you can, in fact, install that software in the first place. "Developers" sometimes get a bit of a pass, but I've been inside more than a few companies where... no one could install anything at all, regardless of whether there was a cost. Requesting some software would usually get someone with too much time on their hands (who would also complain about being overworked) asking what you need, why you need it, why you didn't try something else, do you really need it, etc. In some scenarios the 'free' works against you, because "there's no support". I was seeing this as late as 2019 at a company - it felt like being back in 1997.
nightpool 9/5/2025||
Cool. Then keep using Docker Desktop if you want to. That's not the situation most of the people in this thread are talking about though.
thinkingtoilet 9/5/2025||||
Are you complaining about buying 5 licenses? It seems extremely easy to handle. It feels like sometimes people just want to complain.
almosthere 9/5/2025|||
Everything is hard in a large company, and they have hired teams to manage procurement, so this is just you overthinking.
malnourish 9/5/2025|||
How often have you dealt with large org procurement processes? I've spent weeks waiting on the one person needed to approve something that cost less than something I could readily buy on my T&E card.
dboreham 9/5/2025||||
Typically the team they hired is focused on you not procuring things.
akerl_ 9/5/2025||
I think a lot of this boils down to Procurement's good outcome generally being quite different than the good outcome for each team that wants a purchase.

To draw a parallel: imagine a large open source project with a large userbase. The users interact with the project and a bunch of them have ideas for how to make it better! So they each cut feature requests against the project. The maintainers look at them. Some of the feature requests they'll work on, some of them they'll take well-formed pull requests. But some they'll say "look, we get that this is helpful for you, but we don't think this aligns with the direction we want the project to go".

A good procurement team realizes that every time the business inks a purchase agreement with a vendor, the company's portfolio has become incrementally more costly. For massive deals, most of that cost is paid in dollars. For cheaper software, the sticker price is low but there's still the cost of having one more plate to juggle for renewals / negotiations / tracking / etc.

So they're incentivized to be polite but firm and push back on whether there's a way to get the outcome in another way.

(this isn't to suggest that all or even most procurement teams are good, but there is a kernel of sanity in the concept even though it's often painful for the person who wants to buy something)

akerl_ 9/5/2025|||
What a strangely hostile reply.
ejoso 9/5/2025||||
This math sounds really simple until you work for a company that is "profitable" yet constantly turning over every sofa cushion for spare change. Which describes most publicly traded companies.

It can be quite difficult to get this kind of money for such a nominal tool that has a lot of free competition. Docker was very critical a few years ago, but “why not use podman or containerd or…” makes it harder to stand up for.

wiether 9/5/2025||||
> If you have a dev team of 10 people and are extremely profitable to where you need licenses you'd end up paying $9 a year per developer for the license.

It doesn't quite change your argument, but where have you seen $9/year/dev?

The only way I see a $9 figure is the $9/month for Docker Pro with a yearly sub, so it's 12*$9=$108/year/dev or $1080/year for your 10 devs team.

Also it should be noted that Docker Pro is intended for individual professionals, so you don't have collaboration features on private repos and you have to manage each licence individually, which, even for only 10 licences, implies a big overhead.

If you want to work as a team you need to take the Docker Team licence, at $15/month/dev on a yearly sub, so now you are at $1800/year for your 10 devs team.

Twenty times more than your initial figure of $90/year. Still, $1800 is not that much in the grand scheme of things, but then you still have to add a usual Atlassian sub, an Office365/GWorkspace sub, an AI sub... You can end up paying +$200/month/dev just in software licences, without counting the overhead of managing them.

nickjj 9/6/2025||
I can't speak for all companies but a few I've dealt with bought licenses exclusively for Docker Desktop access. They're not using private repos since they were invested in private registries through their cloud provider.
dice 9/5/2025||||
> Was this a deal breaker for any company?

It is at the company I currently work for. We moved to Rancher Desktop or Podman (individual choice, both are Apache licensed) and blocked Docker Desktop on IT's device management software. Much easier than going through finance and trying to keep up with licenses.

regularfry 9/5/2025||
Deal breaker for us too, now in my second org where that's been true.

It's not just that you need a licence now, it's that even if we took it to procurement, until it actually got done we'd be at risk of them turning up with a list of IP addresses and saying "are you going to pay for all of these installs, then?". It's just a stupid position to get into. The Docker of today might not have a record of doing that, but I wouldn't rule out them getting bought by someone like Oracle who absolutely, definitely would.

SushiMon 9/5/2025||
Were there any missing/worse functional capabilities that drove you over to Podman/alternatives? Or just the licensing / pricing?
regularfry 9/6/2025||
No, it was entirely a business decision in both cases.
orochimaaru 9/5/2025||||
If you're in an enterprise with a large engineering team that isn't a software company, you are a cost center. So anything related to developer tools is rarely funded. It will mostly be: use the free stuff and suck it up.

Either that or you have a massive process to acquire said licenses, with multiple reporting requirements. So, your manager doesn't need the headache and says just use the free stuff and move on.

I used to use docker. I use podman now. Are there teams in my enterprise who have docker licenses - maybe. But tracking them down and dealing with the process of adding myself to that “list” isn’t worth the trouble.

bongodongobob 9/5/2025||||
I work for a $2 billion/yr company and we need three levels of approval for a Visio license. I've never been at a large corp where you could just order shit like that. You'll have to fill out forms, have a few meetings about it, do business justification spreadsheets, etc., then get told it's not in the budget.
troyvit 9/5/2025||||
> I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.

It is for now, but I can't think of a player as large as Docker that hasn't pulled the rug out from under deals like this. And for good reason, that deal is probably a loss leader and if they want to continue they need to convert those free customers into paying.

codesmash 9/5/2025||||
The problem is not the cost. It's complexity. From a buyer's perspective, fighting with the procurement team is literally a nightmare.

And usually the need comes from someone below C-level. So you have to: convince your manager and their manager, convince the procurement team that it has to be in a budget (and it's usually much easier to convince someone to pay for a dinner), then you have the procurement team itself, then you need to go through the vendor review process (or at least chase its execution).

This is reality in all big companies that this rule applies to. It's at least a quarter-long project.

Once I tried to buy a $5k/yr software license. The Sidekiq founder told me (after two months of back and forth) that he was done and I had to pay by credit card (which, as a miserable team lead, I didn't have).

zer00eyz 9/5/2025||||
> you'd end up paying $9 a year per developer for the license

It's only 9 bucks a year, it's only 5 bucks a month, it's less than a dollar a day.

Docker, IDE, ticketing system, GitHub, Jira, Salesforce, email, office suite, Figma... all of a sudden you're spending 1000 bucks a month per staff member for a small 10-person office.

Meanwhile AWS is charging you $0.01xxxx for bandwidth, disk space, CPU time, S3 buckets, databases. All so Tencent-based AI clients from China can hammer your hardware and run up your bill...

The rent seeking has gotten out of hand.

j45 9/5/2025||
The loaded cost is truly something else, and best understood by people who have had to find a way to pay for it all, or who have paid for it all on behalf of others.

The majority of businesses in the world, (and the majority of jobs) are created and delivered by small business, not big.

And then there are the issues when a service goes down and takes everyone else down with it.

taormina 9/5/2025||||
Yep! What startup has the goal of making less than $10 million in annual revenue? That sentence was absolutely a deal breaker for the CEO and CTO of our last company.

And since when has Docker Desktop "just worked"?

nickjj 9/6/2025||
I've been using Docker since before Docker Desktop.

Never really had any major problems with Docker Desktop on Windows. I run it and it allows me to run containers through WSL 2. Volume performance is near native Linux speeds and the software itself doesn't crash, even on my 10 year old machine.

I also use it on macOS on a work laptop for a lot of different projects and it works. There are more issues around volume mount performance here, but it's not unusably slow. Also, given that the volume performance is mostly due to OS-level file system things, I'm skeptical Podman would resolve that. I remember trying Colima for something and it made no difference there.

firesteelrain 9/5/2025||||
We only run Podman Desktop, if anything, because Docker Desktop is cost-prohibitive for large companies. We also found that most people don't need *Desktop at all. The command line works fine.
DerArzt 9/5/2025||||
I work at a Fortune 250, and the cost of the licence was the given reason for moving to Podman for the whole org.
t43562 9/5/2025||||
I don't particularly care if it's worth it or not. I don't need to do it. Getting money for things is not easy in all companies.
jandrese 9/5/2025||||
> Was this a deal breaker for any company?

It's not the money, it's the bureaucracy. You can't just buy software, you need a justification, a review board meeting, marketplace survey with explanations of why this particular vendor was chosen over others with similar products, sign off from the management chain, yearly re-reviews for the support contract, etc...

And then you need to work with the vendor to do whatever licensing hoops they need to do to make the software work in an offline environment that will never see the Internet, something that more often than not blows the minds of smaller vendors these days. Half the time they only think in the cloud and situations like this seem like they come from Mars.

The actual cost of the product is almost nothing compared to the cost of justifying its purchase. It can be cheaper to hire a full time engineer to maintain the open source solutions just to avoid these headaches. But then of course you get pushback from someone in management that goes "we want a support contract and a paid vendor because that's best practices". You just can't win sometimes.

k4rli 9/5/2025||||
Docker Desktop is also (imo) useless and helps people stay ignorant.

Most Mac users I see using it struggle to see the difference between "image" and "container". Complete lack of understanding.

All the same stuff can easily be done from cli.

com2kid 9/5/2025|||
> Most Mac users I see using it struggle to see the difference between "image" and "container". Complete lack of understanding.

Because they just want their software package to run and they have been given some magic docker incantation that, if they are lucky, actually launches everything correctly.

The first time I used Docker I had so many damn issues getting anything to work I was put off of it for a long time. Heck even now I am having issues getting GPU pass through working, but only for certain containers, other containers it is working fine for. No idea what I am even supposed to do about that particular bit of joy in my life.

> All the same stuff can easily be done from cli.

If a piece of technology is being forced down users' throats, they just want it to work and get out of their way so they can get back to doing their actual job.

johnmaguire 9/5/2025||||
I don't believe it's possible to run Docker on macOS without Docker Desktop (at least not without something like lima.) AFAIUI, Docker Desktop contains not just the GUI, but also the hypervisor layer. Is my understanding mistaken?
cduzz 9/5/2025||
It's pretty easy to run docker on macos -- colima[1] is just a brew command away...

It runs qemu under the hood if you want to run x86 (or sparc or mips!) instead of arm on a newer mac.

[1]https://formulae.brew.sh/formula/colima

mdaniel 9/6/2025|||
To split hairs, one can choose to use qemu or Virtualization.framework https://lima-vm.io/docs/config/vmtype/vz/ (I'm aware that's a link to Lima docs but ... <https://github.com/abiosoft/colima/blob/v0.8.4/config/config...>)
lmm 9/6/2025|||
> colima[1] is just a brew command away...

Which would be great if it worked reliably, or had any documentation at all for when it breaks. But it doesn't and it doesn't.

cduzz 9/6/2025||
First, I guess I'll just invoke Sturgeon's law[1] -- almost all software, especially if you don't really understand it, is crap, and probably the software you understand is also crap, you're just used to it. Good software is pretty tricky to make.

But second -- I use colima lots, on my home macs and my work macs, and it mostly just works. The profiles stuff is kinda annoying and I find myself accidentally running arm when I want x86, or other tedious config issues crop up. But it actually has been easier to live with than docker desktop where I'd run out of space and things would fall apart.

Docker on macOS is broadly going to work poorly relative to Docker on Linux, just from having to run the Docker stuff in a Linux VM that's hiding somewhere behind the scenes.

If you find too much friction with any of these, probably it's easier to just run a linux vm on the mac and interact with docker in the 'native' environment. I've found UTM to be quite a bit easier to live with than virtualbox.

[1] https://en.wikipedia.org/wiki/Sturgeon%27s_law

lmm 9/7/2025||
> almost all software, especially if you don't really understand it, is crap, and probably the software you understand is also crap, you're just used to it. Good software is pretty tricky to make.

Most software has issues, but Colima is noticeably worse than most software I've used. And the complete lack of documentation is definitely not normal.

j45 9/5/2025||||
Not everyone uses software the same way.

Not everyone starts out as a beginner with software the same way, or in the one way we happen to see.

dakiol 9/5/2025||||
I cannot run docker in macos without docker desktop. I use the cli to manage images, containers, and everything else.
lucyjojo 9/5/2025||||
For reference, a dev in Japan will be paid around $50,000. Most of the world will probably be in the $10k-50k range, except a few places (Switzerland, Luxembourg, USA?).

Atlassian and Google and Okta and GHE and this and that (Claude Code?). That eventually starts to stack up.

throwaway0236 9/5/2025||
I think you are underestimating the salaries in other "developed" countries, but you are right that US salaries are much higher than in any other country (especially in Silicon Valley).

You have a valid point in that many HN commentators seem to live in a bubble where spending thousands of dollars on a developer for "convenience" is seen as a no-brainer. They often work in companies that don't make a profit, but are funded by huge VC investments. I don't blame them, as it is a valid choice given the circumstances. If you have the money, why not? But they may start thinking differently if the flow of VC money slows down.

It's similar to how some wealthy people buy a private jet. Their time is valuable, and the cost seems justified (at least if you don’t care about the environmental impact).

I believe that frugality is actually the default mode of business, but many companies in SV are protected from the consequences by the VCs.

arunc 9/5/2025||||
$90 is also like 1.5 hours of work that I would've spent debugging podman anyway. And I've spent more than a few hours every time podman breaks, to be honest.
maxprimer 9/5/2025||||
Even large companies with thousands of developers have budgets to manage, and oftentimes when the CTO/CIO sees free as an option, that's all that matters.
tecleandor 9/5/2025||||
I've seen a weird thing on their service agreement:

Use Restrictions. Customer and its Users may not and may not allow any third party to: [...] 10. Access the Service for the purpose of developing or operating products or services intended to be offered to third parties in competition with the Services[...]

Emphasis mine on 'operating'.

So I cannot use Docker Desktop to operate, for example: ECR, GCR or Harbor?

chuckadams 9/5/2025||
I think the Service in question is services like Docker Hub that they don't let you use as the infrastructure for your competing site.
fkyoureadthedoc 9/5/2025||||
At my job going through procurement for something like Docker Desktop when there are free alternatives is not worth it.

It takes forever, so long that I'll forget that I asked for something. Then later when they do get around to it, they'll take up more of my time than it's worth on documentation, meetings, and other bullshit (well to me it's bullshit, I'm sure they have their reasons). Then when they are finally convinced that yes a Webstorm license is acceptable, they'll spend another inordinate amount of time trying to negotiate some deal with Jetbrains. Meanwhile I gave up 6 months ago and have been paying the $5 a month myself.

smileysteve 9/5/2025||||
To bring up AI and the eventual un-subsidizing of costs: if $9 a year is too much for Docker, then even the $20/mo (June) price tag is too high for AI, much less $200 (August), or $2000 (post-subsidizing)?
pmontra 9/5/2025||||
I don't think I've ever seen somebody using Docker Desktop. I saw containers being run from the command line everywhere, but maybe I just didn't notice. No licenses for the command line tools, right?
akerl_ 9/5/2025|||
On a Mac or Windows machine, you generally need something to get you a Linux environment on which to run the containers.

You can run your own VM via any number of tools, or you can use WSL now on Windows, etc etc. But Docker Desktop was one of the first push-button ways to say "I have a Mac and I want to run Docker containers and I don't want to have to moonlight as a VM gardener to do it."

chuckadams 9/5/2025||||
The command-line tools on a Mac usually come from Docker Desktop. The homebrew version of docker is bare-bones and requires the virtualbox-based docker-machine package, whereas Desktop is using Apple's Virtualization Framework. Nobody runs the homebrew version as far as I can tell.

On Windows, you can use the docker that's built in to the default WSL2 image (ubuntu), and Docker Desktop will use it if available, otherwise it uses its own backend (probably also Hyper-V based).

I use Orbstack myself, but that's also a paid product.

throwaway0236 9/5/2025|||
I sometimes use Docker Desktop on my Mac to view logs. It's more convenient.
xyzzy_plugh 9/5/2025||||
It's a deal breaker because it was previously free to use, and frankly it's not worth $1 a month given there are better paid alternatives, let alone better free alternatives.
papageek 9/5/2025||||
You need a compliance department and attorneys to look over licenses and agreements. It's a real hassle and not really related to cost of the license itself.
tclancy 9/5/2025||||
Yes. I worked for a company with a few thousand developers and we swapped away from Docker one week with almost no warning. It was a memorable experience.
smokel 9/5/2025||||
Reading through the comments here, it looks like there is an opportunity for a startup to streamline software licensing. Just a free tip.
adolph 9/5/2025|||
Yeah, at a big enterprise the larger challenge ahead of even payment is the legal arrangements. They typically sign some "master license" agreement with an aggregator like CDW. Those places don't seem well set up for software redistribution though. Setting up a Steam or AppStore clone for various utility-ware would go a long way to enabling people to access the software an enterprise doesn't mind paying for if the legal and financial stuff wasn't applying friction.
eehoo 9/5/2025|||
There are already software licensing providers such as 10Duke that do exactly that. Pretty much all of the licensing related problems mentioned here would either disappear or at the very least get dramatically simpler if more companies used 10Duke Enterprise as their licensing solution to issue and manage licenses. There is a better way, but sadly most businesses overlook licensing.

(the company I work for uses them, our licensing used to be a mess similar to what's described here)

racecar789 9/6/2025||||
> you'd end up paying $9 a year per developer for the license

Correction: Docker Desktop is $9/month (not $9/year).

m463 9/5/2025||||
I hated the Docker Desktop telemetry. I remember it kicked in in the macOS installer even before you got any dialog box.
patmcc 9/5/2025||||
It's not the cost, it's the headache. Do I need to worry about setting up SSO, do I need to work with procurement, do I need to do something in our SOC2 audit, do I need to get it approved as an allowed tool, etc.

Whether it's $100/year or $10k/year it's all the same headache. Yes, this is dumb, but it's how the process works at a lot of companies.

Whereas if it's a free tool that just magically goes away. Yes, this is also dumb.

bastardoperator 9/5/2025||||
Docker has persuaded several big shops to purchase site licenses.
secondcoming 9/5/2025||||
Yes. Our company no longer allows use of Docker Desktop
phaedrix 9/5/2025||||
You are off by a factor of 12.

It's $9 per month not year.

nickjj 9/6/2025||
Thanks, I can't believe I missed that!

$90 vs $1,080 would be the difference annually.

debarshri 9/5/2025||||
You can always negotiate the price
johannes1234321 9/5/2025||
In other words: you can always make the buying process more complex and expensive.

For some products that might be worth it. For others, not.

But whatever the outcome: you still have to track license compliance afterwards and renew licenses. (Which also works better when you're tracking internal usage, since you know your needs.)

flerchin 9/5/2025|||
"officially supported" is not a value.

It's not the price, it's that there is one. One penny would be too much, because it prevents composability of dev workstations.

Izmaki 9/5/2025|||
None of your companies need to worry about licenses. Docker ENGINE is free and open source. Docker DESKTOP is a software suite that requires you to purchase a license to use in a company.

But Docker Engine, the core component which works on Linux, Mac and Windows through WSL2, is completely and 1000% free to use.

xhrpost 9/5/2025|||
From the official docs:

>This section describes how to install Docker Engine on Linux, also known as Docker CE. Docker Engine is also available for Windows, macOS, and Linux, through Docker Desktop.

https://docs.docker.com/engine/install/

I'm not an expert, but everything I read online says that Docker runs on Linux, so on Mac you need a virtual environment like Docker Desktop, Colima, or Podman to run it.

LelouBil 9/5/2025|||
Docker desktop will run a virtual machine for you. But you can simply install docker engine in wsl or in a VM on mac exactly like you would on linux (you give up maybe automatic port forwarding from the VM to your host)
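Roughly, inside WSL2 or the VM, that's just the usual Linux install (Docker's convenience script shown here; the group change is optional and the service command varies by setup):

    curl -fsSL https://get.docker.com | sh
    sudo usermod -aG docker $USER    # optional: run docker without sudo (log out/in after)
    sudo service docker start       # WSL2 without systemd; use systemctl elsewhere
    docker run --rm hello-world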
rovr138 9/5/2025|||
> But you can simply install docker engine in wsl or in a VM on mac exactly like you would on linux (you give up maybe automatic port forwarding from the VM to your host)

and sharing files from the host, ide integration, etc.

Not that it can't be done. But doing it is not just, 'run it'. Now you manage a vm, change your workflow, etc.

mpyne 9/5/2025||
Of course, but that's the value-add of Docker Desktop. But you don't have to tie yourself to it, or even if you do use it for a bit to get going faster, you have a migration path open to doing it yourself should you need it.
linuxftw 9/5/2025|||
This. I run docker in WSL. I also do 100% of my development in WSL (for work, anyway). Windows is basically just my web browser.
CuriouslyC 9/5/2025||
Ironic username. As a die hard, WSL aint bad though. I just can't deal with an OS that automatically quarantines bittorrent clients, decides to override local administrator policies via windows updates and pops up ad notifications.
mmcnl 9/5/2025|||
I personally use Windows + WSL2 and for work use macOS. I prefer Windows + WSL2 by a long shot. It just "works". macOS never "just works" for me. Colima is fine but requires a static memory allocation for the VM; it doesn't have the level of polish that WSL2 has. Brew is awful compared to apt (which you get with WSL2 because it's just Linux).

And then there's the windowing system of macOS that feels like it's straight from the 90s. "System tray" icons that accumulate over time and are distracting, awful window management with clunky animations, the near-useless dock (clicking on VS Code shows all my 6 IDEs, why?). Windows and Linux are much more modern in that regard.

The Mac hardware is amazing, well worth its price, but the OS feels like it's from a decade ago.

linuxftw 9/5/2025||||
All my personal machines run linux. At work my choices are Mac or Windows. If Macs were still x86_64 I might choose that and run a VM, but I have no interest in learning the pitfalls of cross arch emulation or dealing with arm64 linux distro for a development machine.
chuckadams 9/5/2025||
I never notice the difference between arm64 and x86 environments, since I'm flipping between them all the time just because the arm boxes are so much cheaper. The only time it matters to me is building containers, and then it's just a matter of passing `--platform=linux/amd64,linux/arm64` to `docker buildx`.

If you're building really arch-specific stuff, then I could see not wanting to go there, but Rosetta support is pretty much seamless. It's just slower.
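For reference, the multi-arch build being described is roughly this (image name is a placeholder; multi-platform output generally has to be pushed to a registry rather than loaded locally):

    docker buildx create --use        # one-time: a builder that can do multi-platform builds
    docker buildx build --platform linux/amd64,linux/arm64 \
        -t myorg/myimage:latest --push .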

croon 9/5/2025|||
+1

I use WSL for work because we have no linux client options. It's generally fine, but both forced windows update reboots as well as seemingly random wsl reboots (assuming because of some component update?) can really bite you if you're in the middle of something.

iainmerrick 9/5/2025|||
If you're already paying for Macs, is paying for Docker Desktop really a big problem?
chrisweekly 9/5/2025||
I think the point is that Docker Desktop for macOS is bad.
chuckadams 9/5/2025|||
It's not all that bad these days ever since they added virtio support. Orbstack is well worth paying for as an alternative, but that won't solve anyone's procurement headaches either.
iainmerrick 9/6/2025|||
Oh! I wasn’t trying to make a big point except that paying for software isn’t necessarily a bad thing, and if you’re already invested in Macs you’re presumably OK with paying good money for good products.

Having used Docker Desktop on a Mac myself, it seems... fine? It does the job well enough, and it’s part of the development rather than production flow so it doesn’t need to be perfect, just unobtrusive.

matsemann 9/5/2025||||
If you've installed Docker on Windows you've most likely done that by using Docker Desktop, though.
Izmaki 9/5/2025|||
That's just one way. The alternative is WSL 2 with Docker Engine.
GrantMoyer 9/5/2025||||
Docker Engine without Docker Desktop is available through winget as "Docker CLI"[1].

[1]: https://github.com/microsoft/winget-pkgs/tree/master/manifes...

mmcnl 9/5/2025||||
I just follow the official Linux instructions on the Docker website. It just works.
t43562 9/5/2025|||
Right, we were using macs - same story.
t43562 9/5/2025||||
Those companies use docker desktop on their dev's machines.
connicpu 9/5/2025|||
There's no need if all your devs use desktop Linux as their primary devices like we do where I work :)
t43562 9/5/2025||
On Mac we just switched to podman and didn't have anything to worry about.
krferriter 9/5/2025|||
I am using MacOS and like a year ago I uninstalled docker and docker desktop, installed podman and podman-compose, and have changed literally nothing else about how I use containers and docker image building/running locally. It was a drop-in replacement for me.
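For anyone wanting to try the same swap, it's roughly this (assuming Homebrew; the alias is only needed if scripts or muscle memory still call `docker`):

    brew install podman podman-compose
    podman machine init       # creates the Linux VM Podman needs on macOS
    podman machine start
    alias docker=podman       # optional shim for existing scripts/habits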
nickthegreek 9/5/2025||||
Anyone have opinions on OrbStack for mac over these other alternatives?
elliottr1234 9/5/2025|||
It's well worth it. It's much more than a GUI: it supports running k8s locally, managing custom VM instances, resource monitoring of containers, built-in local domain name support with SSL (mycontainer.orb), a debug shell that gives you the ability to install packages that are not available in the image by default, much better and more automated volume mounting, viewing every container in Finder, the ability to query logs, and an amazing UI. Plus it is much, much faster and more resource efficient.

The above features really do make it worth it, especially when using existing services that have complicated failure logs or are resource intensive (like redis, postgres, livekit, etc.), or when you have a lot of ports running and want to call your service without having to worry about remembering port numbers or complicated Docker network configuration.

Check it out https://docs.orbstack.dev/

chuckadams 9/5/2025||
Docker Desktop also supports a local Kubernetes stack, but it takes several minutes to start up, and I think in the end it's just minikube? Haven't tried Orbstack's k8s stack myself since I'm good with k3d. I did have cause though to spin up a VM a while back, and that was buttah.
johncoltrane 9/5/2025||||
I tried all the DD alternatives (on macOS) and I think OrbStack is the easiest to use and least invasive of them all.

But it is not cross-platform, so we settled on Podman instead, which came (distant) second in my tests. The UI is horrible, IMO but hey… compromises.

I use OrbStack for my personal stuff, though.

fernandotakai 9/5/2025||||
orbstack is absolutely amazing. not only does the docker side work much better than docker desktop, but their lightweight linux vms are just beyond great.

i've been using an archlinux vm for all my development over the past year and a half and i couldn't be happier.

veidr 9/5/2025||||
Yes, Orbstack is significantly better than Docker Desktop, and probably also better than any other Docker replacement out there right now (for macOS), if you aren't bothered by the (reasonable) pricing.

It costs about $100/year per seat for commercial use, IIRC. But it is significantly faster than Docker Desktop at literally everything, has a way better UI, and a bunch of QoL features that are nice. Plus Linux virtualization that is both better and (repeating on this theme) significantly more performant than Parallels or VMWare Fusion or UTM.

karlshea 9/5/2025|||
Been using it for a year or so now and it’s amazing. Noticeably faster than DD and the UI isn’t Electron or whatever’s going on there.
lmm 9/6/2025||||
Really? We switched 6+ months ago and I'm still dealing with all the little broken corners that keep cropping up.
allovertheworld 9/5/2025|||
Can't imagine being forced to use a Linux PC for work lmao
connicpu 9/5/2025||
I happily embraced it, to each their own I guess. There are folks who mainly work on their mac/windows laptops and just ssh into their workstation, but IT gives us way more freedom (full sudo access) on Linux so I can customize a lot more which makes me a lot happier.
Almondsetat 9/5/2025|||
That's their completely optional prerogative
firesteelrain 9/5/2025|||
Podman is inside the Ubuntu WSL image. No need for docker at all
kordlessagain 9/5/2025||
This is not correct, at least when looking at my screen:

(base) kord@DESKTOP-QPLEI6S:/mnt/wsl/docker-desktop-bind-mounts/Ubuntu/37c7f28..blah..blah$ podman

Command 'podman' not found, but can be installed with:

sudo apt install podman

firesteelrain 9/5/2025||
Hmm maybe it’s what our admins provided to us then. I actually have never run it at home only airgapped
goldman7911 9/5/2025|||
You only have to worry about licences if you use Docker DESKTOP. Why not use RANCHER Desktop?

I have been using it for years. Tested it on Win11 and Linux Mint. I can even have a local Kubernetes.

lmm 9/6/2025|||
Low-quality UX (e.g. you have to switch tabs and switch back if you ever want to see the current state of your containers, because it loads it once when you open the tab and never updates, and doesn't even give you a button to refresh it), lack of documentation, behavioural changes that happen silently (e.g. it autoupdates which changes the VM hostname, so the thing that was working yesterday doesn't work today and you have no idea why) and general flakiness.
mpawelski 9/6/2025||||
I concur. My company is using Rancher Desktop on Windows machines. No problems. As long as you don't care about the GUI and just use the CLI commands ("docker", "docker compose").
seabrookmx 9/6/2025|||
Why not use Docker Engine/CE on Linux so you don't have to run a VM?
xedrac 9/5/2025|||
I vastly prefer Podman over Docker. No user/group fuss, no security concerns over a root process. No having to send data to a daemon.
anakaine 9/5/2025|||
On a few machines now I've had Podman's Windows uninstaller fail to remove all its components and cause errors on startup due to podman not being found. Even manually removing leftover services and startup items didn't fix the issue. It's a constant source of annoyance.
ac130kz 9/5/2025||
It works great until you need that one option from Docker Compose that is missing in Podman Compose (which is written in Python for whatever reason, yeah...).
carwyn 9/5/2025||
You can use the real compose (Go) with Podman now. The Python clone is not your only option.
ac130kz 9/5/2025|||
Well, is this Podman's "service mode" also fully compatible with Docker Compose file functionality though?
bigbong 9/6/2025||
Looks like the compose `watch` option is not yet supported[1]. Huge blocker for adoption in local development.

[1]:https://github.com/containers/podman-compose/issues/792

jcotton42 9/5/2025|||
What do you mean by "the real compose"?
ac130kz 9/6/2025||
I assume Docker Compose v2 from Docker.
xrd 9/5/2025||
I love podman, and, like others have said here, it does not always work with every container.

I often try to run something using podman, then find strange errors, then switch back to docker. Typically this is with some large container, like gitlab, which probably relies on the entirety of the history of docker and its quirks. When I build something myself, most of the time I can get it working under podman.

This situation where any random container does not work has forced me to spin up a VM under incus and run certain troublesome containers inside that. This isn't optimal, but keeps my sanity. I know incus now permits running docker containers and I wonder if you can swap in podman as a replacement. If I could run both at the same time, that would be magical and solve a lot of problems.

There definitely is no consistency regarding GPU access in the podman and docker commands and that is frustrating.

But, all in all, I would say I do prefer podman over docker and this article is worth reading. Rootless is a big deal.

nunez 9/5/2025||
I presume that the bulk of your issues are with container images that start their PID 1s as root. Podman is rootless by default, so this causes problems.

What you can do if you don't want to use Docker and don't want to maintain these images yourself is have two Podman machines running: one in rootful mode and another in rootless mode. You can, then, use the `--connection` global flag to specify the machine you want your container to run in. Podman can also create those VMs for you if you want it to (I use lima and spin them myself). I recommend using --capabilities to set limits on these containers namespaces out of caution.
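A rough sketch of that two-machine setup (machine names here are placeholders, and the exact connection names Podman registers are best checked with `podman system connection list`):

    podman machine init work              # rootless machine (the default mode)
    podman machine init --rootful big     # second machine, running rootful
    podman machine start work
    podman machine start big
    podman system connection list         # typically shows NAME and NAME-root per machine
    podman --connection big-root run --rm docker.io/library/nginx   # run on the rootful machine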

Podman Desktop also installs a Docker compatibility layer to smooth over these incompatibilities.

xrd 9/5/2025|||
This is terrific advice and I would happily upvote a blog post on this! I'll look into exactly this.
bsder 9/5/2025|||
Is there a blog post on this somewhere? I'd really love to read more about it beyond just the official documentation.
nunez 9/8/2025||
I made a blog post some years ago about how to create your own VMs with Lima: https://blog.carlosnunez.me/post/docker-desktop-alternative-...

You can also use this to create a VM for Podman that runs on Fedora, rootful by default: https://github.com/carlosonunez/bash-dotfiles/blob/main/lima...

If you go the Lima approach, use `podman system connection add` to add rootful and rootless VMs, then use the `--connection` flag to specify which you want to use. You can alias them to make that easier; for instance, use `alias podman=podman` for rootless stuff (assuming the rootless VM is your default) and `alias rpodman='podman --connection rootful'` for rootful stuff. I'll write a post describing how to set all of that up soon!

gorjusborg 9/5/2025|||
> I love podman, and, like others have said here, it does not always work with every container.

Which is probably one of the motivations for the blog post. Compatibility will only be there once a large enough share of users use podman that it becomes something that is checked before publish.

firesteelrain 9/5/2025|||
Weird, we run GitLab server and runners all on podman. Honestly I wish we would switch to putting the runners in k8s. But it works well. We use Traefik.
xrd 9/5/2025||
Yeah, I had it running using podman, but then had some weird container restarts. I switched back to docker and those all went away. I am sure the solution is me learning more and troubleshooting podman, but I just didn't spend the time, and things are running well in an isolated VM under docker.

That's good to know it works well for you, because I would prefer not to use docker.

dathinab 9/5/2025||
in my experience (at least rootless) podman does enforce resource limits much better/more strictly

we had some similar issues and it was due to containers running out of resources (mainly RAM/memory, by a lot, but only for a small amount of time). And it happens that in rootless this was correctly detected and enforced, but non-rootless docker (in that case on a Mac dev laptop) didn't detect these resource spikes and hence "happened to work" even though it shouldn't have.

k_roy 9/5/2025||
I use a lot of `buildx` stuff. It ostensibly works in podman, but in practice, I haven't had much luck
awoimbee 9/5/2025||
The main issue is podman support on Ubuntu. Ubuntu ships outdated podman versions that don't work out of the box. So I use podman v5, GitHub actions uses podman v3, and my coworkers on Ubuntu use docker. So now my script must work with old podman, recent podman and docker
rsyring 9/5/2025||
Additionally, there aren't even any trusted repos out there building/publishing a .deb for it. The ones that I could find when I searched last were all outdated or indicated they were not going to keep moving forward.

I could get over this. But, IMO, it lends itself to asking the "why" question. Why wouldn't Podman make installing it easier? And the only thing that makes sense to me is that RedHat doesn't want their dev effort supporting their competitor's products.

That's a perfectly reasonable stance, they owe me nothing. But, it does make me feel that anything not in the RH ecosystem is going to be treated as a second-class citizen. That concerns me more than having to build my own debs.

gucci-on-fleek 9/5/2025|||
They publish statically-linked binaries on GitHub [0], so to install it, you just need to download and unpack a single file. But you don't get any automatic updates like you would if they provided an apt repository.

[0]: https://github.com/containers/podman/releases

throwaway127482 9/6/2025|||
Aren't the statically linked binaries just the remote client? i.e. they can't run containers on their own, right?

In the past I think I wound up using https://github.com/mgoltzsche/podman-static because I could not get those podman static binaries to work

rsyring 9/5/2025||||
Wow! I can't believe I missed that. Thanks.
rsyring 9/7/2025||
Apparently, I didn't miss anything after all. :(

See sibling comment above about them being just remote binaries.

Eduard 9/5/2025|||
how come there is no podman Linux installer?
gucci-on-fleek 9/5/2025|||
Well all the downstream distros have their own installers (apt, dnf, pacman, etc.). If you're compiling from source, then "make install" [0] should work as expected, and if you're downloading the pre-built binaries from GitHub [1], you just need to copy a single statically-linked binary into "/usr/local/bin".

[0]: https://github.com/containers/podman/blob/c8183c50/Makefile#...

[1]: https://github.com/containers/podman/releases

xylophile 9/6/2025|||
You mean curl?
dathinab 9/5/2025||||
> Why wouldn't Podman make installing it easier?

What else can they do than having a package for every distro?

https://podman.io/docs/installation#installing-on-linux

Including instructions to build from source (including for Debian and Ubuntu):

https://podman.io/docs/installation#building-from-source

I don't know about this specific case, but Debian and/or Ubuntu having outdated software is a common Debian/Ubuntu problem which is nearly always caused by Debian/Ubuntu itself (funnily, if it's outdated in Ubuntu that doesn't mean it's outdated in Debian, and the other way around ;=) ).

rsyring 9/5/2025||
> What else can they do...

They can do what Docker and many other software providers do that are committed to cross OS functionality. They could build packages for those OSes. Example:

https://docs.docker.com/engine/install/ubuntu/#install-using...

The install instructions you link to are relying on the OS providers to build/package Podman as part of their OS release process. But that is notoriously out-of-date.

You could argue, "Not Podman's Problem", and, in one sense, you'd be right. But, again, it leads to the question "Why wouldn't they make it their problem like so many other popular projects have?" and I believe I answered that previously.

dathinab 9/5/2025||
> build/package Podman as part of their OS release process. But that is notoriously out-of-date.

providing duplicate/additional non-official builds for other OSes:

- undermines the OSes' package curation

- confuses users

- costs additional developer time, which for most OSS is fairly limited

- for non-vendorable system dependencies, this additional dev-time cost can be way higher in all kinds of surprising ways

- obfuscates whether a Linux distro is incapable of properly maintaining its packages

- leads to a splitting of the target-OS-specific ecosystem of software using this as a dependency

etc.

It's a lose-lose-lose for pretty much everyone involved.

So as long as you don't have a monetary reason that you must do it (like e.g. Docker has), it's in my personal opinion a very dumb thing to do.

I apologize for being a bit blunt but in the end why not use a Linux distribution which works with modern software development cycles?

Blaming others for problems with the OS you decided to use when there are alternatives seems not very productive.

rsyring 9/5/2025||
> cost additional developer time, which for most OSS is fairly limited

Mostly agree. But something like Podman w/ RedHat behind it is unlikely to be limited in the same way a lot of community OSS projects are.

Unfortunately, I disagree with just about every other point you made but don't think it's worth responding point-by-point. In short, I think a project having dedicated builds for popular OSes is a win-win for just about everyone, except that it does take effort, sometimes a considerable amount, to support those cross-OS builds. Additionally, there are now options like Snap/Flatpak/AppImage that can be targets instead of the OS itself, although there is admittedly a tradeoff there as well.

For some projects, say something like ripgrep, just using what is in the OS repo is fine because having the latest and greatest features/bug-fixes is unlikely to matter to most people using the tool.

But, on something like Podman, where there are so many pieces, it's a relatively new technology, and the interaction between kernel, OS, and user space is so high, being stuck with a non-current OS-provided release for a couple of years is a non-starter.

> why not use a Linux distribution which works with modern software development cycles?

Because I like my OS to be stable, widely supported, and I also like some of my applications to be evergreen. I find Ubuntu is usually a really good mix that way and I'm going on 15+ years of use. There are other solutions for that that I could use, but I'm mostly happy where I am and don't want to spend the kind of time it would take to adopt a different OS and everything that would follow from that.

That leads _me_ to avoid Podman currently. I can appreciate that you have a different opinion, I just think you are overplaying your perspective a bit in the comment above.

dathinab 9/5/2025|||
> like Snap/Flatpack/AppImage that can be targets instead of the OS itself [..] ripgrep

sure I agree that where it's easily doable (like e.g. ripgrep) having non distro specific builds is a must have

But sadly this doesn't fully work for podman AFAIK, as it involves a lot of subtle interactions with things which aren't consistently set up across Linux distros, with probably the worst offender being the Linux security module system (e.g. SELinux, AppArmor, etc.). But thinking about it, sooner or later you probably could have a mostly OS-independent podman setup (limited to newer OS versions). Or to be more specific, three: one with SELinux, one with AppArmor, and one with neither, so I guess maybe not :/

ibejoeb 9/6/2025||||
>something like Podman w/ RedHat behind it is unlikely to be limited in the same way a lot of community OSS projects are

exactly. I've built podman for debian. It's not an esoteric target. It gets a little hairy with all of the capabilities stuff and selinux, but it's feasible. Give me, I don't know, $10k a quarter and I'd probably do it.

xylophile 9/6/2025|||
Trying to be genuinely helpful here:

After many years of "I want stability and evergreen", I finally realized that this is Fedora. Each release is very stable, and they arrive more often than once an eon.

c-hendricks 9/6/2025||||
Is there something wrong with the version in homebrew?

https://formulae.brew.sh/formula/podman

kiney 9/5/2025|||
debian trixie has podman 5 packages in official repos. Good chance that those work on ubuntu
gm678 9/5/2025|||
Also on Ubuntu 25.04, which I updated a homeserver to, despite it not being LTS, just for the easy access to Podman 5. Once Ubuntu 26.04 comes out the pain described by some sibling comments should end. Podman 4 is a workable version, but 5.0 is where I'd say it really became a complete replacement for Docker and quadlets fully matured.
bityard 9/5/2025|||
Not a good idea: https://wiki.debian.org/DontBreakDebian

(It's titled "Don't Break Debian" but might also be called "Don't Break Ubuntu" as it applies there just as well.)

alyandon 9/5/2025|||
Yeah, the lack of an official upstream .deb that is kept up to date (like the official Docker .deb repos) for Ubuntu really kills using podman for most of my internal use cases.
troyvit 9/5/2025|||
This is my biggest problem too, and it's not just my problem but Podman's problem. Lack of name recognition is big for sure compared to Docker, but to me this version mismatch problem is higher on the list and more sure to keep Podman niche. Distros like Ubuntu always ship with older versions of software, it's sadly up to the maintainer to release newer versions, and Podman just doesn't seem interested in doing that. I don't know if it was their goal but it got me to use some RedHat derivative on my home server just to get a later version.
ramon156 9/5/2025|||
One of the reasons I don't use Ubuntu/debian is because it's just too damn slow with updates. I'm noticing that to this day it's still an issue.

Yes, I could use Flatpak on Ubuntu, however I feel like this is partly something Ubuntu/Debian should provide out-of-the-box.

alyandon 9/5/2025|||
LTS in general being slow to uptake new versions of software is a feature not a bug. It gives predictability at the cost of having to deal with older versions of software.

With Ubuntu at least, some upstreams publish official PPAs so that you aren't stuck on the rapidly aging versions that Canonical picks when they cut an LTS release.

Debian I found out recently has something similar now via "extrepo".

skydhash 9/5/2025||||
I use debian specifically for things to be kept the same. Once I got things setup, I don’t really want random updates to come and break things.
rsyring 9/5/2025|||
Ubuntu is committed to the Snap ecosystem and there is a lot of software that you can get from a snap if you need it to be evergreen.
bityard 9/5/2025|||
Since Podman is open source, Ubuntu (and others) are able to package it and update it themselves if they choose. But I think it's understandable that Red Hat would not want to pay their development teams to package their software for a direct competitor.
t0mk 9/6/2025|||
Yes! I really like the idea of podman, but after 4 hours trying to make it work on 24.04, I reverted to Docker and compose.

There is some dissonance in presenting Podman as a plug-in replacement for Docker, and making it so damn hard to install on (some category's) most popular contemporary LTS Linux distro.

physicles 9/6/2025|||
I had this problem on the latest Pop OS LTS (I know, I know). Took about 4 hours to find the magic incantation that would let me install podman >= 4 without breaking anything else.
chanux 9/6/2025|||
You just effortlessly summarized one of the major headaches in tech in the real world!
ac130kz 9/5/2025||
That's an Ubuntu issue though; they ship lots of outdated software. Nginx, PHP, PostgreSQL, Podman, etc.: the critical software that must be updated ASAP all requires a PPA to stay properly updated, even for stable versions.
mrighele 9/5/2025||
> If your Docker Compose workflow is overly complex, just convert it to Kubernetes YAML. We all use Kubernetes these days, so why even bother about this?

I find Kubernetes YAML a lot more complex than docker compose. And while I do use it, no, not everybody uses Kubernetes.

esseph 9/5/2025||
Having an LLM function as a translation layer from docker compose to k8s yaml works really well.

On another note, podman can generate k8s yaml for you, which is a nice touch and easy way to transition.
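For reference, that round trip is roughly this (container name is a placeholder; older Podman versions spell it `podman generate kube` / `podman play kube`):

    podman kube generate mycontainer > pod.yaml   # export a running container/pod as Kubernetes YAML
    podman kube play pod.yaml                     # ...and play that YAML back with Podman itself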

politelemon 9/5/2025|||
"Use an LLM" is not a solution. It's effectively telling you to switch your brain off and hope nothing goes wrong in the future. In reality things do go wrong, and any conversion should be done with a good understanding of the system involved.
hallway_monitor 9/5/2025|||
While I agree with this concept, I don't think it is applicable here. Docker compose files and k8s yaml are basically just two different syntaxes, saying the same thing. Translating from one syntax to another is one of the best use cases for an LLM in my opinion. Like anything else you should read it and understand it after the machine has done the busy work.
catlifeonmars 9/5/2025|||
I bet there’s already a conversion library for it. Translating from one syntax to another _reliably_ should be done with a dedicated library. That being said, I don’t disagree that using an LLM can be helpful to generate code to do the same.
KronisLV 9/5/2025|||
> Translating from one syntax to another is one of the best use cases for an LLM in my opinion.

Have a look at https://kompose.io/ as well.

brennyb 9/5/2025||||
"Using an IDE is not a solution" same arguments, same counter arguments. An abstraction being leaky does not mean it's useless. You will always need to drop down a layer occasionally, but there's value in not having to live on the lower layer all the time.
lmm 9/6/2025||
The difference being that when your IDE makes a mistake you can understand and debug it, and maybe even patch it to fix it (or failing that at least understand what triggers it and work around it).
SoftTalker 9/5/2025|||
When things go wrong, you just ask the LLM about that too. It's 2025.

/s

pvtmert 9/5/2025||||
Both (K8s and Compose) are well-defined schemas, hence the conversion is mere mapping via search & replace. A bunch of `sed` statements could do that; an LLM is overkill for the job.

Meanwhile, kompose.io exists, which does exactly that (but with Go templates as far as I can tell).
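For reference, kompose usage is roughly this (file name is a placeholder; it writes out one Deployment/Service manifest per compose service, which you can then `kubectl apply`):

    kompose convert -f docker-compose.yml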

physicles 9/6/2025||||
For toy projects, sure. For production, the probability that the LLM would sneak in some subtle typo is just too high.
IHLayman 9/5/2025|||
You don’t need an LLM for this. Use `kubectl` to create a simple pod/service/deployment/ingress/etc, run `kubectl get -o yaml > foo.yaml` to bring it back to your machine in yaml format, then edit the `foo.yaml` file in your favorite editor, adding the things you need for your service, and removing the things you don’t, or things that are automatically generated.
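A quick sketch of that flow (names and image are placeholders; `--dry-run=client -o yaml` skips the cluster round trip entirely):

    kubectl create deployment web --image=nginx
    kubectl get deployment web -o yaml > foo.yaml    # pull it back down as YAML to edit
    # or, without touching the cluster at all:
    kubectl create deployment web --image=nginx --dry-run=client -o yaml > foo.yaml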

As others have said, depending on an LLM for this is a disaster because you don’t engage your brain with the manifest, so you aren’t immediately or at least subconsciously aware of what is in that manifest, for good or for ill. This is how bad manifest configurations can drift into codebases and are persisted with cargo-cult coding.

[edit: edit]

esseph 9/5/2025|||
> You don't need an LLM for this

I guess that depends on how many you need to do

BTW, I'm talking about docker/compose files. kubectl doesn't have a conversion there. When converting from podman, it's super simple.

Docker would be wise to release their own similar tool.

compose syntax isn't that complex, nor would it take advantage of many k8s features out of the box, but it's a good start for a small team looking to start to transition platforms

(Have been running k8s clusters for 5+ years)

pvtmert 9/5/2025||
Why should Docker create such a tool in the first place? It's the job of the target/destination to provide a compatibility layer. In this case, Kubernetes already does, with the kompose.io tool.

kompose: https://kubernetes.io/docs/tasks/configure-pod-container/tra...

Also, technically docker-compose was the first orchestration tool, predating Kubernetes. Expecting the former to provide a translation layer for the latter is rather unorthodox. It is usually the latter tool that provides certain compatibility features for former tools...

esseph 9/5/2025||
Why would docker?

Because it's not very useful by itself for running production infra, but it's great for helping to develop it.

Otherwise you're going to see more and more move to podman (and podman desktop) / OCI containers over time, as corps won't have to pay the docker tax and will get better integration with their existing k8s platform.

pvtmert 9/6/2025||
Docker is useful and running in production. (Well, it was the only option before containerd got separated out entirely and became usable directly from K8s, around 1.16.)

What you say is absolutely correct. If Docker keeps creating compatibility layers for its competitors, it makes it easier for everyone to switch to a competitor. In this case, the competitor is Kubernetes, as it runs in production at a much larger scale (enterprise workloads) compared to Podman et al.

Hence, it's the job of Podman, Kubernetes, et. al. to write their compatibility layer to provide a value-add for their customers.

hamdingers 9/5/2025|||
This assumes everyone who wants to run containers via podman has kubectl and a running cluster to create resources in, which is a strange assumption.
vbezhenar 9/5/2025|||
I disagree with you on that. Kubernetes YAML is on the same level of complexity as docker compose and sometimes even easier.

But verbosity - yeah, kubernetes is absolutely super-verbose. Some 100-line docker-compose could easily end up as 20 yamls of 50 lines each. kubectl really needs some sugar to convert yamls from simple form to verbose and back.

Kovah 9/6/2025|||
> Kubernetes YAML is on the same level of complexity as docker compose

> Some 100-line docker-compose could easily end up as 20 yamls of 50 lines each.

Yeah, nothin to add here.

physicles 9/6/2025|||
We have about 30 services running in 4 environments, including dev. I desperately want a better kustomize that removes most of the boilerplate and adds linting (like, every process should have a ram limit, but no cpu limit). I estimate about 75% of the lines of YAML are redundant.
ThatFave 9/6/2025||
Have you thought about using jsonnet? It has a good library for k8s (https://jsonnet-libs.github.io/k8s-libsonnet/). I like how I don’t need to worry about white spaces and how I can use functions to reduce boilerplate. For an example environment: https://github.com/ThatFave/homelab
physicles 9/6/2025||
Is that intended to be a good example? There's still tons of duplication between the environments.

Kustomize eliminates the vast majority of the duplication (i.e. a unique fact about the cluster being expressed in more than one place), it's just the boilerplate that's annoying.

ThatFave 9/6/2025||
Not a good one, no. I am currently in the process of rewriting this, so that I eliminate duplicate code. The language has the potential, I am still in the process of learning. I also dislike boilerplate, but I think my example is still better than pure yaml.
osigurdson 9/5/2025||
I don't know how to create a compose file, but I do know how to create a k8s yaml. Therefore, compose is more "complex" for me.
0_gravitas 9/5/2025|||
This is a conflation of "Simple" and "Easy" (rather, "complex" and "hard"). 'Simple vs Complex' is more or less objective, 'Easy vs Hard' is subjective, and changes based on the person.

And of course, Easy =/= Simple, nor the other way around.

hamdingers 9/5/2025|||
I'm a CKA and use docker compose exclusively in my homelab. It's simpler.
raquuk 9/5/2025||
The "podman generate systemd" command from the article is deprecated. The alternative are Podman Quadlets, which are similar to (docker-)compose.yaml, but defined in systemd unit files.
stingraycharles 9/5/2025||
Which actually makes a lot of sense, to hand over the orchestration / composing to systemd, since it’s not client <> server API calls (like with docker) anymore but actual userland processes.
Cyph0n 9/5/2025|||
Yep. It works even better on a declarative distro like NixOS because you can define and extend your systemd services (including containers) from a single config.

Taking this further (self-plug), you can automatically map your Compose config into a NixOS config that runs your Compose project on systemd!

https://github.com/aksiksi/compose2nix

solarkraft 9/5/2025|||
It totally does! On the con side, I find systemd unit files a lot less ergonomic to work with than compose files that can easily be git-tracked and colocated.
mariusor 9/5/2025||
What makes a systemd service less ergonomic? I guess it needs a deployment step to place it into the right places where systemd looks for them, but is there anything else?
broodbucket 9/5/2025||
With almost no documentation, mind
raquuk 9/5/2025|||
I find the man page fairly comprehensive: https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
tux1968 9/5/2025||
Is linking to a 404 page meant to highlight the lack of docs, or is there some mistake?
raquuk 9/5/2025||
Apparently the documentation was just updated. The new location is https://docs.podman.io/en/latest/markdown/podman-quadlet.7.h...
mdaniel 9/5/2025|||
I do believe you about the "updated" part, and that's a constant hazard with linking to "latest" or "main" of anything. But I don't know why you'd then change the actual file in the URL, since the original comment was citing "podman-systemd.unit.5.html" <https://docs.podman.io/en/v5.6.1/markdown/podman-systemd.uni...> and you've chosen to cite quadlet.7
stryan 9/5/2025||
Not OP but "podman-systemd.unit.5" used to be the primary Quadlet documentation (a remnant of when it was podman-generate-systemd perhaps?) with every Quadlet file type (.container, .volume, .network, etwc) documented on one page.

The new docs split that out into separate podman-container/volume/etc.unit(5) pages, with quadlet.7 being the index page. So they're still linking to the same documentation, just the organization happened to change underneath them.

If you must see what they linked to originally, the versions docs are still the original organization (i.e. all on one page): https://docs.podman.io/en/v5.6.0/markdown/podman-systemd.uni...

pvtmert 9/5/2025||
Not a podman user (but currently trying to install it to give it a shot); this comment stream shows how even the documentation "randomly disappears" on a project that claims to be in a production-ready or stable state. (Or the lack thereof.)

On the contrary, Docker documentation *is* stable. I have bookmarks from 10 years ago to the *latest* editions that still work today. The final link may have changed, but at least there is a redirect (or a text noting it has moved) instead of a plain 404/not-found.

This is a crucial part of the quality applications offer. There have probably been hundreds of podmans since Docker launched more than 10 years ago, but none came close to maintaining high-quality documentation and user interface (i.e. CLI commands, switches), especially in a backward-compatible way.

stryan 9/6/2025||
The Podman reference section, which is what OP linked to, is a direct web version of the man pages. The main method of accessing it, the man pages, has not changed.

It's a different style of documentation organization: if you want to link to a specific version you should link to the specific version not latest. I won't argue it's necessarily a better way of doing things than Docker, but knowing it's the same thing as what's with the package is nice.

pvtmert 9/6/2025||
You can tell users that "you are holding it wrong" or fix the actual problem that exists in the first place. Good luck telling millions of people to not to bookmark certain version, instead use this or that... Maybe, add just a redirect, a simple page with the link that says "Hey, this documentation has moved to there, click here".

Just put this thread into a whatever LLM. I overall see 2 major themes here. Compatibility and stability issues, all over the place. Not just documentation, but with other tools. Compose schema v2 does not match the current/latest one, missing functionality (although this one is acceptable at certain level), etc.

Also, as soon as the docs were "posted", they became obsolete/useless/deprecated. I mean, what sort of quality are we talking about here?

vaylian 9/5/2025|||
You can also look the documentation up locally: `man quadlet`
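For anyone who hasn't seen one yet, here is a minimal quadlet sketch (the file name, image and port are made up); for a rootless setup it goes under ~/.config/containers/systemd/:

    # ~/.config/containers/systemd/whoami.container
    [Unit]
    Description=Example container managed by systemd via quadlet

    [Container]
    Image=docker.io/traefik/whoami:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

Quadlet then generates a whoami.service, so `systemctl --user daemon-reload && systemctl --user start whoami.service` is all that's left.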
diarrhea 9/5/2025||
One challenge I have come across is mapping multi-UID containers to a single host user.

By default, root in the container maps to the user running the podman container on the host. Over the years, applications have adopted patterns where containers run as non-root users, for example www-data aka UID 33 (Debian) or just 1000. Those no longer map to your own user on the host, but to subordinate IDs. I wish there was an easy way to just say "ALL container UIDs map to a single host user". The uidmap and userns options did not work for me (crun failed executing those containers).

I don’t see the use case for mapping to subordinate IDs. It means those files are orphaned on the host and do not belong to anyone, when used via volume mapping?
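For the common "container runs as www-data" case, a partial workaround that I believe works on Podman 4.3+ (rootless) is keep-id with an explicit UID; it maps your host user to that one container UID instead of a subordinate ID, though it doesn't collapse ALL container UIDs into one. A sketch, with the directory and image as placeholders:

    # run as UID/GID 33 in the container, and map that container UID back to your own host user
    podman run --rm \
      --user 33:33 \
      --userns=keep-id:uid=33,gid=33 \
      -v "$PWD/data":/data:Z \
      docker.io/library/alpine touch /data/hello
    # /data/hello should now show up on the host owned by you, not by a subordinate ID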

mixedbit 9/5/2025||
If I understand things correctly, this is a Linux namespaces limitation, so tools like Docker or Podman will not be able to support such a mapping without support from Linux. But I'm afraid the requirement for UIDs to be mapped 1:1 is fundamental. Otherwise, say two container users, 1000 and 0, are mapped to the same host user 1000: who should then be displayed in the container as the owner of a file that is owned by user 1000 on the host?
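Roughly what the kernel works with ("alice" and UID 1000 are placeholders; the subordinate range is the typical distro default):

    # /etc/subuid on the host: the subordinate UID range delegated to the user
    alice:100000:65536

    # /proc/self/uid_map inside a default rootless container
    # (container UID, host UID, length) -- each range maps strictly 1:1
    0      1000     1
    1      100000   65536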
privatelypublic 9/5/2025|||
Have you looked at idmapped mounts? I don't think it'll fix everything (only handles FS remapping, not kernel calls that are user permissioned)
diarrhea 9/5/2025||
I have not, thanks for the suggestion though.

A second challenge with the particular setup I’m trying is peer authentication with Postgres, running bare metal on the host. I mount the Unix socket into the container, and on the host Postgres sees the Podman user and permits access to the corresponding DB.

Works really well, but only if the container user is root so it maps natively. I ended up patching the container image, which was the path of least resistance.
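For anyone trying the same thing, the rough shape of it (the user/database names are placeholders, and I'm assuming the Debian-style socket directory):

    # host Postgres does peer auth against the UID it sees on the socket; for a
    # rootless container that is the user running podman, when the container
    # process is root and therefore maps natively to that user
    podman run --rm \
      -v /var/run/postgresql:/var/run/postgresql \
      docker.io/library/postgres:16 \
      psql -h /var/run/postgresql -U myuser -d mydb -c 'select 1'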

teekert 9/5/2025|||
This. And then some way to just be “yourself” in the container as well. So logs just show “you”.
lights0123 9/5/2025||
ignore_chown_errors will allow mapping root to your user ID without any other mappings required.
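If I remember the knob correctly, it lives in storage.conf; a sketch assuming the overlay driver:

    # ~/.config/containers/storage.conf
    [storage.options.overlay]
    ignore_chown_errors = "true"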
miki123211 9/5/2025||
I've been dealing with setting up Podman for work over the last week or so, and I wouldn't wish that on my worst enemy.

If you use rootless Podman on a Redhat-derived distribution (which means Selinux), along with a non-root user in your container itself, you're in for a world of pain.

Nextgrid 9/5/2025||
I've never seen the benefit of rootless.

Either the machine is a single security domain, in which case running as root is no issue, or it's not and you need actual isolation in which case run VMs with Firecracker/Kata containers/etc.

Rootless is indeed a world of pain for dubious security promises.

mbreese 9/5/2025|||
One of the major use cases was multi-user HPC systems. Because they can be complicated, it’s not uncommon for bioinformatics data analysis programs to be distributed as containers. Large HPC clusters are multi-tenant by nature, so running these containers needs to be rootless.

There are existing tools that fill this gap (Singularity/Apptainer). But there is always friction when you have to use a specialized tool versus the default. For me, this is a core use case for rootless containers.

For the reduced feature set we need from containers in bioinformatics, rootless is pretty straightforward. You could get largely the same benefits from chroots.

Where I think the issues start is when you start to use networking, subuids, or other features that require root-level access. At this level, rootless becomes a tedious exercise in configuration that probably isn’t worth the effort. The problem is, the features I need will be different from the features you need. Satisfying all users in a secure way may not be worth it.

lmm 9/6/2025||
> Large HPC clusters are multi-tenant by nature, so running these containers needs to be rootless.

I can't see how any kind of sensible security evaluation process would reach that conclusion. If you trust your users you don't need rootless, if you don't trust your users rootless containers aren't good enough. I suspect people do rootless because it seems easy and catches a few accidental mistakes rather than it being a legitimate security measure.

mbreese 9/6/2025||
Think R1 research university or government lab level HPC clusters…

These are almost always multi-tenant with differing levels of trust and experience between users. The data processed here can often have data access agreements or laws that limit who can see what data. You can’t have a poorly configured container exposing data, for example. So, the number of people who have root access is very limited. Normal users running workflows would all be required to run code rootless.

bbkane 9/5/2025||||
I see your point but I wouldn't let the perfect be the enemy of the good.

If I just want to run a random Docker container, I'm grateful I can get at least "some security" without paying as much in setup/debugging/performance.

Of course, ideally I wouldn't have to choose and the thing that runs the container would be able to run it perfectly securely without me having to know that. But I appreciate any movement in that direction, even if it's not perfect.

pkulak 9/5/2025|||
Rootless is nice because if you mount some directory in, all the files don't end up owned by root. You can get around that by custom building every image so the user has your user id, but that's a pain.
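The usual pattern if you do go the custom-image route looks something like this (all names are arbitrary):

    # Containerfile
    FROM docker.io/library/debian:bookworm-slim
    ARG UID=1000
    ARG GID=1000
    RUN groupadd -g "${GID}" app && useradd -m -u "${UID}" -g "${GID}" app
    USER app

    # build it with your own IDs:
    #   podman build --build-arg UID=$(id -u) --build-arg GID=$(id -g) -t myimage .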
jwildeboer 9/5/2025|||
Sure. Constructing a case where you shoot yourself in the foot is not a big problem. But in reality things mostly just work. I’m happily running a bunch of services behind an (nginx) reverse proxy as rootless containers: Forgejo, the forgejo runner to build stuff, uptime-kuma and more, on a bunch of RHEL10 machines with SELinux enabled.
preisschild 9/5/2025||
Do you do OCI/container builds inside your forgejo-runner container?
mfenniak 9/5/2025||
People having trouble getting this configured is a common issue for self-hosting Forgejo Runner. As a Forgejo contributor, I'm currently polishing up new documentation to try to support people with configuring this; here's the draft page: https://forgejo.codeberg.page/@docs_pull_1421/docs/next/admi...

(Should live at https://forgejo.org/docs/v12.0/admin/actions/docker-access/ once it is finished up, if anyone runs into the comment after the draft is gone.)

preisschild 9/5/2025||
I'm not hosting a Forgejo instance (yet), but I self-host GitLab with gitlab-runner in Kubernetes, so I was wondering how you solved this.

I'm using dind too, but this requires privileged runners...

marcel_hecko 9/5/2025|||
I have done the same. It's not too bad - just don't rely on LLMs to design your quadlets or systemd unit files. Read the docs for the exact podman version you use and it's pretty okay.
znpy 9/5/2025|||
Your issue is selinux then, not podman. It’s not correct to blame it on podman.
zelphirkalt 9/6/2025|||
Although it would be podman's job to explain how to set up SELinux to avoid issues with it (and not just offer "disable it" as the answer). That is, if they list themselves as available for an OS with SELinux.
master_crab 9/6/2025|||
It’s always selinux. I’m surprised parent didn’t figure that out
prmoustache 9/5/2025|||
How so? I have been using podman exclusively on Fedora for most of the last 7 years or so.
goku12 9/5/2025||
That surprises me too. Podman is spearheaded by Redhat and Fedora/RHEL was one of the earliest distros to adopt it and phase out docker. Why wouldn't they have the selinux config figured out?
znpy 9/5/2025||
They have.

Most likely GP is having issues with volumes and hasn’t figured out how to mix the :z and :Z attributes on bind mounts. Or the containers are trying to do something that security-wise is a big no-no.

In my experience SELinux defaults have been much wiser than me and every time i had issues i ended up learning a better way to do what i wanted to do.

Other than that… it essentially just works.

zelphirkalt 9/6/2025||
I personally like the verbose notation for docker volumes in docker compose files, where source and target are separate attributes in the YAML file, not all munged into one long string that can't specify the type of mount explicitly. But that notation does not support stating the :z or :Z. I run Debian most of the time to develop and had no issue with the docker bind mounts, but on Fedora SELinux messed things up and I would get strange permission-denied errors in the container for bind-mounted config files. So I would have to change my docker compose file just for Fedora and SELinux. I think I even tried it with one of :z or :Z, but SELinux still interfered. At some point I had the choice of burning many more hours into configuring SELinux, disabling SELinux, or reinstalling docker as root. Since the Fedora OS is merely a VM, I chose to install Docker as root.

My point is: If figuring things out with podman is similar to my experience, I understand why people don't want to do that. Do they have a definitive page dedicated to setting up Selinux for podman, that is well maintained and guaranteed to solve all Selinux issues, and allows me to use bind mounts with readonly permission?

Insanity 9/5/2025|||
We went through an org-wide Docker -> Podman migration and it went _relatively_ smoothly. Some hiccups along the way but nothing that the SysDev team couldn't overcome.
YorickPeterse 9/5/2025|||
Meanwhile it works perfectly fine without any fuss on my two Fedora Silverblue setups. This sounds less like a case of "Podman is suffering by definition" and more a case of a bunch of different variables coming together in a less than ideal way.
sigio 9/5/2025|||
I've set up a few podman machines (on Debian), and generally liked it. I've been struggling for 2 days now to get a k8s server up, but that's not giving me any joy. (It doesn't like our nftables setup.)
jimjimwii 9/5/2025|||
My anecdote: I've been using rootless podman on Ubuntu in production environments in multiple organizations (both startup and enterprise) for years without encountering a single issue related to podman itself.

I'm sure what you wrote here is true but I can't fathom how. Maybe it's an RH-specific issue? (Like how Ubuntu breaks rootless bwrap by default.)

ThatMedicIsASpy 9/5/2025|||
SELinux has good errors and all I usually need is :z and :Z on mounts
gm678 9/5/2025||
Can confirm, have been doing exactly what GP says is a world of pain with no problems as soon as I learned what `:z` and `:Z` do and why they might be needed.

A good reference answer: https://unix.stackexchange.com/questions/651198/podman-volum...

TL;DR: lowercase if a file from the host is shared with a container or a volume is shared between multiple containers. Uppercase in the same scenario if you want the container to take an exclusive lock on the volumes/files (very unlikely).
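In practice that's just the following (directories and image are made up):

    # shared with the host or other containers -> lowercase z
    podman run --rm -v "$PWD/shared":/data:z docker.io/library/alpine ls /data

    # private to this one container -> uppercase Z
    podman run --rm -v "$PWD/private":/data:Z docker.io/library/alpine ls /data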

zelphirkalt 9/6/2025||
How do I make it :ro then? For example it is a good practice to mount config files as readonly. But if I have to use :z, I think I cannot use :ro?
gausswho 9/6/2025||
:ro,z
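e.g. (path is made up):

    podman run --rm -v "$PWD/app.conf":/etc/app.conf:ro,z docker.io/library/alpine cat /etc/app.conf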
zelphirkalt 9/6/2025||
Ah, I never knew this is possible. Thank you!
mixmastamyk 9/5/2025|||
Sounds like you need to grant the user sufficient permissions. What else might go wrong?
marcel_hecko 9/5/2025|||
It's mostly the subuid/subgid mapping of IDs between guest and host, which is non-trivial to understand in rootless environments. Add SELinux into the mix....
galangalalgol 9/5/2025|||
What actual issues do you run into? We have selinux and rootless and I didn't notice the transition from docker as a user.
strbean 9/5/2025|||
> subgid subuid mapping

trigger warning please D:

iTokio 9/5/2025|||
Mounting volumes and dealing with FS permissions.

There are many different workarounds, but it’s a known pain point.

zamalek 9/5/2025|||
As a huge fan of podman this is definitely one of my disappointments. In the event that you're still struggling with this, the answer is using a --user systemd quadlet. You'll need to use machinectl (machinectl shell <user>@.host) for systemd commands to work, and you'll want to enable linger for that user.

One thing which just occurred to me, maybe it's possible to have a [container] and a [service].user in a quadlet?
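Roughly (the "deploy" user and unit name are placeholders):

    # let the user's services run without an interactive login
    sudo loginctl enable-linger deploy

    # a real login-like session where `systemctl --user` talks to the right bus
    sudo machinectl shell deploy@.host

    # with the quadlet at ~/.config/containers/systemd/myapp.container:
    systemctl --user daemon-reload
    systemctl --user start myapp.service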

thyristan 9/5/2025||
Yes, but the reason for that pain is SElinux. The first, second and third law of RedHat sysadmin work is "disable SElinux".
preisschild 9/5/2025||
> The first, second and third law of RedHat sysadmin work is "disable SElinux".

Must not be a good sysadmin then. SELinux improves security, and software like podman can relatively easily be made to work with it.

I use podman on my Fedora Workstation with selinux set to enforce without issues

zelphirkalt 9/6/2025||
And now comes the part where you link your guide for how you set it up, please! I would like to try exactly that setup and OS. I have a Fedora VM here where I recently struggled with docker and SELinux.
thyristan 9/6/2025||
Docker != podman. Entirely different.

With podman, RedHat made an effort to make SElinux work. With Docker, as third-party software, no proper SElinux config was ever written. With Docker, there is no hope at all that you'd get SElinux to work.

With podman, there is hope, as long as all your containers and usecases are simple, "well-behaved" and preferrably also RedHat-based and SElinux-aware. In the easy cases, podman + SElinux will just work. But unfortunately, containers are the means to get crappy software running, where the developers were too lazy to do proper packaging/installation/configuration/integration. So most cases are not easy and will not work with SElinux, if you don't have infinite time to write your own config...

Tajnymag 9/5/2025||
I've wanted to migrate multiple times. Unfortunately, it failed in multiple places.

Firstly, podman had a much worse performance compared to docker on my small cloud vps. Can't really go into details though.

Secondly, the development ecosystem isn't really fully there yet. Many tools that utilize Docker via its socket fail to work reliably with podman, either because the API differs or because of permission limitations. Sure, the tools could probably work around those limitations, but they haven't, and podman isn't a direct 1:1 drop-in replacement.

bonzini 9/5/2025||
> podman had a much worse performance compared to docker on my small cloud vps. Can't really go into details though.

Are you using rootless podman? Then network redirection is done using user-mode networking, which has two modes: slirp4netns is very slow, while pasta is the newer, faster one.

Docker is always set up from the privileged daemon; if you're running podman from the root user there should be no difference.
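Selecting pasta explicitly looks something like this (the containers.conf key is from memory, so double-check it against your version):

    # per container
    podman run --rm --network=pasta docker.io/library/alpine wget -qO- http://example.org

    # or as the rootless default, in ~/.config/containers/containers.conf
    [network]
    default_rootless_network_cmd = "pasta"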

Tajnymag 9/5/2025||
Well, yes, but rootless is basically the main selling point of podman. Once you start using daemons and privileged containers, you can just keep using docker.
bonzini 9/5/2025||
No, the main selling point is daemonless. For example, you put podman in a systemd unit and you can stop/start with systemctl without an external point of failure.

Comparing root docker with rootless podman performance is apples to oranges. However, even for rootless pasta does have good performance.

curt15 9/5/2025||
Some tools talk to docker not using the docker CLI but directly through its REST API. Podman also exposes a similar REST API[1]. Is Podman with its API server switched on substantially different from the docker daemon?

[1]. https://docs.podman.io/en/latest/markdown/podman-system-serv...

xylophile 9/6/2025|||
Docker daemon runs as root, and runs continuously.

If you're running rootless Podman containers then the Podman API is only running with user privileges. And, because Podman uses socket activation, it only runs when something is actively talking to it.
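Concretely:

    # socket-activated, user-level API endpoint; no always-on root daemon
    systemctl --user enable --now podman.socket

    # Docker-API clients can then be pointed at it
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock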

eriksjolund 9/6/2025||
Sometimes it's possible to not use the Podman API at all. Convert the compose file to quadlet files with the command-line tool podlet and start the container with "systemctl --user start myapp.service". Due to the fork/exec architecture of podman, the container can then be started without using the Podman API.
bonzini 9/6/2025||
Yes, either quadlet or handwritten podman CLI in .service files is the way to go. I don't like using generate-systemd because it hides the actual configuration of the container, I see no point in being stateful...
bonzini 9/5/2025|||
Yes because the API server is stateless, unlike the docker daemon. If you kill it you can still operate on containers, images, etc. by other means, whereas if you kill the docker daemon the CLI stops working too.
anilakar 9/5/2025|||
SELinux-related permission errors are an endless nuisance with podman and quadlet. If you want to sandbox just about anything, it's easier to create a pod with full host permissions and the necessary /dev/ files mounted, running a simple program that exposes minimal functionality over an isolated container network.
Aluminum0643 9/5/2025||
Udica, plus maybe ausearch | audit2allow -C, makes it easy to generate SELinux policies for containers (works great for me on RHEL10-like distros)

https://www.redhat.com/en/blog/generate-selinux-policies-con...
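Roughly the workflow from that post (container and policy names are placeholders; udica prints the exact semodule command for loading what it generates):

    # create a container, then generate a tailored policy from its configuration
    podman run -d --name web docker.io/library/nginx
    podman inspect web | udica web_policy

    # load the generated .cil module as udica instructs, then run confined:
    podman run -d --security-opt label=type:web_policy.process docker.io/library/nginx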

seemaze 9/5/2025||
That's funny, podman had better performance and less resource usage on my resource-constrained system. I chalked it up to crun vs runc, though both docker and podman support configuring alternate runtimes. Plus no daemon.
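For what it's worth, switching runtimes is fairly painless in both; treat this as a sketch (the crun path is the usual distro location):

    # podman: pick the runtime per invocation (or set it in containers.conf)
    podman --runtime crun run --rm docker.io/library/alpine true

    # docker: /etc/docker/daemon.json, then restart the daemon
    {
      "runtimes": { "crun": { "path": "/usr/bin/crun" } },
      "default-runtime": "crun"
    }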
vermaden 9/5/2025|
I ditched Docker and Podman for FreeBSD Jails :)

More here:

- https://vermaden.wordpress.com/2023/06/28/freebsd-jails-cont...

- https://vermaden.wordpress.com/2025/04/11/freebsd-jails-secu...

- https://vermaden.wordpress.com/2025/04/08/are-freebsd-jails-...

- https://vermaden.wordpress.com/2024/11/22/new-jless-freebsd-...

cheema33 9/5/2025||
Can you run MS SQL Server inside a FreeBSD jail? Or any of the thousands of other ready to run docker containers?

Whatever you gain by running FreeBSD comes at a high cost. And that high cost is keeping FreeBSD jails from taking over.

chuckadams 9/5/2025|||
That's ... a lot of setup. Does FreeBSD have anything similar to containerd?
vermaden 9/8/2025||
Check BastilleBSD https://bastillebsd.org/ along with Rocinante - https://bastillebsd.org/rocinante/rocinante/ from here.
udev4096 9/5/2025|||
How is that any different than running VMs on a linux host?
vermaden 9/8/2025||
Because containers are not virtual machines - you can run 1000 FreeBSD Jails on your laptop while you will not be able to run 1000 VMs - Jails are a lot lighter on resources than VMs.
matrix12 9/5/2025||
Very distro specific however.