Posted by codesmash 9/5/2025
It feels a little hypocritical for us to feed our families through our tech talent and then complain that someone else is doing the same.
It's much more than a GUI: it supports running k8s locally, managing custom VM instances, resource monitoring of containers, built-in local domain names with SSL (mycontainer.orb), a debug shell that lets you install packages not available in the image by default, much better and more automated volume mounting (you can browse every container in Finder), the ability to query logs, an amazing UI, plus it's much, much faster and more resource-efficient.
I'm normally with you that the terminal is usually enough, but the features above really do make it worth it, especially when you're using existing services with complicated failure logs or heavy resource use (Redis, Postgres, LiveKit, etc.), or when you have a lot of ports in play and want to call your service without remembering port numbers or fiddling with Docker network configuration.
Idk what the problem is, but it's ugly. I switched to OrbStack because there was something like a memory leak happening with Docker Desktop: it was just using waaaay too many resources all the time, and sometimes it would just crash. I had started using Docker Desktop from the get-go because when I came on I had multiple people with more experience say "oh, you're coming from Linux? Don't even try to use the docker daemon, just download Docker Desktop".
On Linux, for development, podman and docker are pretty similar, but I prefer the k8s YAML approach over Compose, so I tend to use podman.
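For anyone curious what that workflow looks like: podman can run a Kubernetes-style Pod spec directly, no cluster needed. A minimal, hypothetical example (names and ports are made up):

```yaml
# pod.yaml - a plain k8s Pod spec that podman can run as-is
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
          hostPort: 8080
```

Then `podman kube play pod.yaml` (older versions spell it `podman play kube`) brings the pod up, and the same YAML is a starting point if you ever do move it to a real cluster.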
I don't think Apple really cares about dev use cases anymore so I haven't used a Mac for development in a while (I would for iOS development of course if that ever came up).
https://docs.orbstack.dev/features/debug
Let alone the local resource monitor, increased performance, automated local domains (no more complicated Docker network settings to get your app working on localhost), and more.
I basically use (OrbStack) docker containers as lightweight VMs, easily accessible through multiple shells, and they shut down when nothing is running anymore.
I use them for development isolation, or when I need to run some tool. It mounts the current directory, so your container is effectively chrooted to that project.
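With plain docker the same pattern is a one-liner; this is just a sketch (image and mount point are placeholders, not anything OrbStack-specific):

```shell
# Throwaway dev shell scoped to the current project:
# --rm discards the container on exit, -v mounts the project read-write,
# -w makes it the working directory inside the container.
docker run --rm -it \
  -v "$PWD":/work \
  -w /work \
  ubuntu:24.04 bash
```

Everything you install inside is gone when you exit, but changes under /work land in the real project directory.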
You're running into the `/etc/sub[ug]id` defaults. The old default was to start your normal user at 100000 with 64k additional sub-IDs per user, but that changed recently when people at megacorps with hundreds of thousands of employees defined in LDAP and similar ran into ID conflicts. Sub-IDs now start at 2^19 on RHEL 10 for this reason.
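To make the conflict concrete, here's a small sketch that parses `/etc/subuid`-style `user:start:count` lines and flags overlapping ranges. The sample entries are hypothetical; real files have one line per user, allocated by `useradd`:

```python
def parse_subid(lines):
    """Parse 'user:start:count' lines into (user, start, count) tuples."""
    entries = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        user, start, count = line.split(":")
        entries.append((user, int(start), int(count)))
    return entries

def find_overlaps(entries):
    """Return pairs of users whose sub-ID ranges intersect."""
    overlaps = []
    for i, (u1, s1, c1) in enumerate(entries):
        for u2, s2, c2 in entries[i + 1:]:
            # Half-open ranges [s, s+c) intersect iff each starts
            # before the other ends.
            if s1 < s2 + c2 and s2 < s1 + c1:
                overlaps.append((u1, u2))
    return overlaps

# Old defaults: start at 100000, 65536 IDs per user. A botched
# allocation or manual edit can hand two users the same range:
sample = [
    "alice:100000:65536",
    "bob:165536:65536",
    "carol:165536:65536",  # conflicts with bob
]
print(find_overlaps(parse_subid(sample)))  # -> [('bob', 'carol')]
```

With hundreds of thousands of LDAP users at 64k IDs each, the old scheme simply runs out of 32-bit ID space, which is the pressure behind the higher starting offset.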
Of course if you have really large / complex compose files or just don't feel like learning something else / aren't using k8s, stick with docker.
Linux gets a new privilege escalation exploit like once a month. If something can break out of the Docker daemon, it will break out of your own user account just fine. Using a non-root app does not make you secure, regardless of whatever containerization feature claims to add security in your own user namespace. On top of all that, Docker has a rootless mode. https://docs.docker.com/engine/security/rootless/
The only things that will make your system secure are 1) hardening every component in the entire system, or 2) virtualization. No containers are secure. That's why cloud providers all use mini-VMs to run customer containers (e.g. AWS Fargate) or force the customer to manage their own VMs that run the containers.
This is only partially true. Google's runtime (gVisor) gives each workload its own user-space kernel instead of sharing the host kernel directly, but it still runs inside of a container rather than a full VM.
https://cloud.google.com/blog/products/serverless/cloud-run-...
- Podman is usually used "rootless", but it doesn't have to be; it can also run rootful containers. It's still daemonless, though it can shim the Docker socket (very useful for e.g. Docker Compose).
- Docker can be used in a rootless fashion too. It will still run a daemon, but it can be a user service using user namespaces. Personally I think Podman does much better here.
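The socket shim mentioned above is just a systemd user socket plus an environment variable; a sketch, assuming a systemd user session (unit and path names per the podman docs):

```shell
# Expose Podman's Docker-compatible API socket for the current user
systemctl --user enable --now podman.socket

# Point Docker clients (docker CLI, docker compose, SDKs) at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# Now standard tooling talks to podman instead of a Docker daemon
docker compose up -d
```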
Podman also has some other interesting advantages, like better systemd integration. Sometimes Kubernetes just isn't necessary; Podman + systemd works well in a lot of those cases. (Note though that I have yet to try Quadlets.) Though unfortunately I don't think even the newer Quadlets integration has some basic niceties that Kubernetes has (like a simple way to do zero-downtime deployments).
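For reference, a Quadlet is just a unit-file fragment that systemd turns into a service; a minimal hypothetical example (image and names are placeholders):

```ini
# ~/.config/containers/systemd/web.container
# systemd generates a "web" service from this file;
# start it with: systemctl --user start web
[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

You get restart policies, logging via journald, and dependency ordering for free, which covers a lot of single-host use cases without a cluster.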
We ditched it for EC2s which were faster and more reliable while being cheaper, but that's beside the point.
Locally I use OrbStack by the way, much less intrusive than Docker Desktop.
Containers are the packaging format, EC2 is the infrastructure. (Docker, CRI-O, Podman, Kata, etc. are the runtimes.)
When deploying on EC2, you still need to deploy your software, and when using containers you still need somewhere to deploy to.
zips the local copy of the branch and rsyncs it to the environment, and some other stuff
This would happen in your Dockerfile, and then the process of actually "installing" your application is just docker run (or kubectl apply, etc.), which is an industry standard requiring no specialized knowledge about your application (since that is abstracted away in your Dockerfile). You're basically splitting the process of building and distributing your application into: write the software, build the image, deploy the image.
Everyone who uses these tools, which is most people by this point, will understand these steps. Additionally, any framework, cloud provider, etc. that speaks container images, like ECS, Kubernetes, Docker Desktop, etc., can manage your deployments for you. Also, the API of your container image (e.g. the environment variables, entrypoint flags, and mounted volumes it expects) communicates to those deploying your application what you expect them to provide during deployment.
Without all this, whoever or whatever is deploying your application has to know every little detail and you're going to spend a lot of time writing custom workflows to hook into every different kind of infrastructure you want to deploy to.
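The write → build → deploy split might look like this minimal sketch (the app, file names, and env var are hypothetical, not anyone's actual setup):

```dockerfile
# Build step: bake the app and its dependencies into an image
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# The image's "API": a port it listens on and env vars it expects
EXPOSE 8000
CMD ["python", "server.py"]
```

From there, `docker build -t myapp:1.0 .` produces the artifact, and whoever deploys it only needs the standard contract: `docker run -p 8000:8000 -e DATABASE_URL=... myapp:1.0`, or the equivalent ECS task definition or k8s manifest.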
Though as someone who's used a lot of Azure infrastructure-as-code with Bicep and also done the K8s YAMLs, I'm not sure which is more complicated at this point, to be honest. I suspect that depends on your k8s setup, of course.