Posted by mpweiher 10/28/2024
Then the BORG came and assimilated us. Our deploys easily take 45+ minutes to really start shifting traffic.
See https://stackify.com/kestrel-web-server-asp-net-core-kestrel... - comparison table 3/4 of the way down.
- On commit/push, your build runs once and the output is stored as an artifact. If nothing has changed, don't rebuild; reuse it.
- Your build gets packed into a Docker image once and pushed to a remote registry. If nothing has changed, don't rebuild; reuse it (see the buildx sketch after this list).
- Every test and subsequent stage uses the same build artifact and/or container. Again, this is as simple as pulling a binary or image. Within the same pipeline workspace, it's a file on disk shared between jobs.
- Using a self-hosted CI/CD runner on the same network and provider as your artifact/container registry means extremely low-latency, high-bandwidth file transfers. And because it's self-hosted, you don't have to wait for a runner to be allocated: it's already waiting for you, connected to remotely and used immediately. K8s runners on autoscaling clusters make it easy to scale jobs in parallel.
- Having each pipeline step use a prebuilt Docker container, and not having each step do a bunch of repetitive stuff (like installing tools, downloading deps...) when it doesn't need to, is essential. If every single job is doing the same network transfer and same tool install every time, optimize it.
- A Kubernetes deploy to production should absolutely take its time cycling out old pods for new ones. Half the point of K8s is to avoid interrupting production traffic and to let it automatically resolve issues before they become larger problems, which means leaning on health checks and ramping traffic for safety. But actually running the deploy should be nearly instantaneous: a `kubectl apply` or `helm upgrade` should take seconds (see the rollout sketch after this list).
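A minimal sketch of that build-once step, using Docker layer caching stored in the registry; the image name and cache tag are placeholders, not anything from this thread:

    #!/bin/sh
    # Build exactly once per commit; later stages pull $IMAGE:$GIT_SHA
    # instead of rebuilding. Unchanged layers are reused from the
    # registry-side cache, so a no-change build is nearly free.
    IMAGE=ghcr.io/example/app          # placeholder image name
    GIT_SHA=$(git rev-parse --short HEAD)

    docker buildx build \
      --cache-from type=registry,ref=$IMAGE:buildcache \
      --cache-to type=registry,ref=$IMAGE:buildcache,mode=max \
      --push -t "$IMAGE:$GIT_SHA" .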
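And the deploy itself, per the last bullet: the apply returns in seconds, while the rolling update behind it is gated by readiness probes. The manifest path and deployment name here are made up for the example:

    # Applying the manifests takes seconds.
    kubectl apply -f deploy/

    # Optionally block until the new pods are healthy and serving traffic.
    kubectl rollout status deployment/app --timeout=5m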
The only exception to all this is if you (rightly) have a very large test suite that takes a while to go through. You can still optimize the hell out of tests for speed and parallelize them a lot, though.
I prefer to run tests locally whenever possible, for instance as a git hook, rather than in a CI instance. If you need auditability for something like PCI, that approach probably won't work, but I think the small web (i.e. most of the web) can do just fine with it.
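For example, a minimal pre-push hook; `make test` is just a stand-in for whatever your project's test command is:

    #!/bin/sh
    # Save as .git/hooks/pre-push and chmod +x it.
    # Runs the test suite locally before anything leaves the machine;
    # a non-zero exit code aborts the push.
    make test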
How is this a good excuse? Is it really that difficult for a developer to spend an afternoon understanding GitHub Actions and Docker, at least at a superficial level so they can understand what they're looking at?
In the words of Jake the Dog, "Sucking at something is the first step to getting good at something."
Many ways to skin the cat. This is just one of them.
On my VM, keep this running:

    while true; do ssh pipe.pico.sh sub deploy-app; docker compose pull && docker compose up -d; done

On my local machine:

    docker buildx build --push -t ghcr.io/abc/app . && ssh pipe.pico.sh pub deploy-app -e
https://pipe.pico.sh/

    docker buildx build --push -t ghcr.io/abc/app . && ssh myvm 'cd /my/app/path && docker compose pull && docker compose up -d'
That might work if you have a single VM, but it's a little more complicated when you have an app on multiple instances.
pipe is a multicast pubsub which means you can have many subscribers.
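Concretely, each instance just runs the same subscriber loop from above, and a single publish from the local machine fans out to all of them:

    # On every VM/instance:
    while true; do
      ssh pipe.pico.sh sub deploy-app
      docker compose pull && docker compose up -d
    done

    # One publish wakes all subscribers at once:
    ssh pipe.pico.sh pub deploy-app -e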
I can see a count of readers per day on each post. It also shows counts of devices, browsers, countries, and referrers. Here's what it looks like: https://herman.bearblog.dev/public-analytics/
You also just lost all your guardrails and collaborative controls, and created a dependency on all engineers being equally capable.
In other words, unless you are DHH and don't have to scale (both in terms of workload and in terms of the company), this scenario doesn't apply in the real world.