In this way I’m able to set up AWS EC2 instances or DigitalOcean droplets, have a bunch of game servers spin up, and have them report their existence back to a backend game services API. So far it’s working, but this part of my project is still in development.
I used to aim for containerizing my apps, which adds complexity, but in AWS I often have to care about VMs as resources anyway (e.g. AWS GameLift requires me to spin up VMs, same with AWS EKS). I’m still going back and forth between containerizing and using systemd: having a local stack easily spun up via docker compose is nice, but with systemd what I write locally is basically what runs in the prod environment, and there’s less waiting for container builds and such.
I share all of this in case there’s a gray beard wizard out there who can offer opinions. I have a tendency to explore and research (it’s fuuun!) so I’m not sure if I’m on a “this is cool and a great idea” path or on a “nobody does this because <reasons>” path.
The closer you get to 100% resource utilization, the more regular your workload has to become. If you can queue requests and latency isn't a problem, no problem, but then you have a batch process and not a live one (and that's obviously not an option for games).
The reason is that live work doesn't come in regular beats; it comes in clusters that scale in a fractal way. If your long-term mean is one request per second, what actually happens is you get five requests in one second, three seconds with one request each, one second with two requests, and five seconds with zero requests (you get my point). Call it "fractal burstiness".
You have to have free resources to handle the spikes at all scales.
Also, very many systems suffer from the processing time for a single request increasing as overall system load increases: "queuing latency blowup".
So what happens? You get a spike, get behind, and never ever catch up.
https://en.wikipedia.org/wiki/Network_congestion#Congestive_...
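To make the "get behind and never catch up" effect concrete, here's a tiny single-queue sketch in Go (illustrative only, not tied to any particular game server): Poisson arrivals into one server with a fixed service time, at a few utilization levels. The mean wait blows up as utilization approaches 100%.

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // Lindley recursion for a single queue: each job waits for the previous
    // job's wait plus its service time, minus the gap until the next arrival
    // (never below zero).
    func meanWait(rho float64, n int, rng *rand.Rand) float64 {
        const service = 1.0 // deterministic service time of 1 "tick"
        lambda := rho / service
        wait, total := 0.0, 0.0
        for i := 0; i < n; i++ {
            interarrival := rng.ExpFloat64() / lambda // Poisson (bursty) arrivals
            wait = math.Max(0, wait+service-interarrival)
            total += wait
        }
        return total / float64(n)
    }

    func main() {
        rng := rand.New(rand.NewSource(1))
        for _, rho := range []float64{0.50, 0.75, 0.90, 0.95, 0.99} {
            fmt.Printf("utilization %.0f%%: mean wait ~ %.1f service times\n",
                rho*100, meanWait(rho, 5_000_000, rng))
        }
    }

Even with a perfectly regular server, the randomness of the arrivals alone is enough to make waits grow without bound as you approach full load.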
The cycle time impact of variability of a single-server/single-queue system at 95% load is nearly 25x the impact on the same system at 75% load, and there are similar measures for other process queues.
As the other comment notes, you should really work from an assumption that 80% is max loading, just as you'd never aim to have a swap file or swap partition of exactly the amount of memory overcommit you expect.
Questions I imagine a thorough multiplayer solutions engineer would be curious about, the kind of person who's trying to squeeze as much juice out of the hardware specs as possible.
Even if that weren't the case, lead times for tasks will always increase with more utilization; see e.g. [1]: If you push a system from 80% to 95% utilization, you have to expect a ~4.75x increase in lead time for each task _on average_: (0.95/0.05) / (0.8/0.2)
Note that all except the term containing ρ in the formula are defined by your system/software/clientele, so you can drop them for a purely relative comparison.
[1]: https://en.wikipedia.org/wiki/Kingman%27s_formula
Edit: Or, to try to picture the issue more intuitively: if you're on a highway nearing 100% utilization, you're likely standing in a traffic jam. And if that's not (yet) strictly the case, the probability of a small hiccup creating one increases exponentially.
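To put the same comparison in code form, here's Kingman's approximation evaluated at 80% vs 95% (the c_a², c_s², and service-time values below are made up for illustration; only the ratio matters for the relative comparison):

    package main

    import "fmt"

    // Kingman's approximation for mean queueing delay in a G/G/1 queue:
    //   E[W] ~ (rho/(1-rho)) * ((ca2 + cs2)/2) * tau
    // where ca2/cs2 are the squared coefficients of variation of the
    // interarrival and service times, and tau is the mean service time.
    func kingmanWait(rho, ca2, cs2, tau float64) float64 {
        return rho / (1 - rho) * (ca2 + cs2) / 2 * tau
    }

    func main() {
        // Illustrative values: exponential-ish arrivals (ca2=1), fairly
        // regular service (cs2=0.5), 1 ms mean service time.
        const ca2, cs2, tau = 1.0, 0.5, 1.0
        w80 := kingmanWait(0.80, ca2, cs2, tau)
        w95 := kingmanWait(0.95, ca2, cs2, tau)
        fmt.Printf("wait at 80%%: %.2f ms, at 95%%: %.2f ms, ratio: %.2fx\n",
            w80, w95, w95/w80)
    }

The ratio comes out to (0.95/0.05) / (0.8/0.2) = 4.75, regardless of what you plug in for the variability and service-time terms.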
There are OS tunables, and these tunables will have some measure of impact on the overall system performance.
But the things that make high-utilization systems so bad for cycle time are inherent aspects of a queue-based system; you cannot escape them through better tuning, because they were never caused by a lack of tuning in the first place.
If you can tune a system so that what previously would have been 95% loading is instead 82% loading that will show significant performance improvements, but you'd erase all those improvements if you just allowed the system to go back up to 95% loaded.
That's a great article you link and it basically notes up front what the throughput requirements are in terms of cores per player, which then sets the budget for what the latency can be for a single player's game.
Now, if you imagine for a second that they managed to get it so that the average game will just barely meet their frame time threshold, and try to optimize it so that they are running right at 99% capacity, they have put themselves in an extremely dangerous position in terms of meeting latency requirements.
Any variability in hitting that frame time would cause a player to bleed over into the next player's game, reducing the amount of time the server had to process that other player's game ticks. That would percolate down the line, impacting a great many players' games just because of one tiny little delay in handling one player's game.
In fact, it's for reasons like this that they started off with a flat 10% fudge adjustment to account for OS/scheduling/software overhead. By doing so they've in principle already baked in a 5-8% reduction in capacity usage compared to theoretical.
But you'll notice in the chart that they show from recent game sessions in 2020 that the aggregate server frame time didn't hang out at 2.34 ms (their adjusted per-server target), it actually tended to average at 2.0 ms, or about 85% of the already-lowered target.
And that same chart makes clear why that is important, as there was some pretty significant variability in each day's aggregate frame times, with some play sessions even going above 2.34 ms on average. Had they been operating at exactly 2.34 ms they would definitely have needed to add more server capacity.
But because they were in practice aiming at 85% usage (of a 95% usage figure), they had enough slack to absorb the variability they were seeing, and stay within their overall server expectations within ±1%.
Statistical variability is a fact of life, especially when humans and/or networks are involved, and systems don't respond well to variability when they are loaded to maximum capacity, even if it seems like that would be the most cost-effective.
Typically this only works where it's OK to ignore variability of time, such as in batch processing (where cost-effective throughput is more valuable than low-latency).
The engineering time, the risks of decreased performance, and the fragility of pushing the limit at some point become not worth the benefits of reaching some higher utilization metric. That optimum trade-off point is somewhere, even if it's not where you are now.
Still, I can see the draw for independent devs to use docker compose. For teams and orgs, though, it makes sense to use podman and systemd for the smaller stuff or dev, and then literally export the config as Kubernetes YAML.
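For anyone who hasn't seen it, podman can do that export directly. Roughly (assuming a running container named "gameserver"; the name is just a placeholder):

    $ podman generate kube gameserver > gameserver.yaml
    $ podman kube play gameserver.yaml    # or hand the YAML to a real cluster via kubectl apply -f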
Meanwhile, with Docker (or rootful podman, which isn't recommended), you can have centralized management of multiple machines with a tool like Portainer.
I like the idea of podman, but this has been a head-scratcher for me.
podman is really only suitable for a single node, but they may have added things I have missed.
You provide us a docker image, and we unpack it, turn it into a VM image and run as many instances as you want side-by-side with CPU affinity and NUMA awareness. Obviating the docker network stack for latency/throughput reasons - since you can
They had tried nomad, agones and raw k8s before that.
As a hobbyist part of me wants the VM abstracted completely (which may not be realistic). I want to say “here’s my game server process, it needs this much cpu/mem/network per unit, and I need 100 processes” and not really care about the underlying VM(s), at least until later. The closest thing I’ve found to this is AWS fargate.
Also holy smokes if you were a part of the team that architected this solution I’d love to pick your brain.
At a previous job, we used azure container apps - it’s what you _want_ fargate to be. AIUI, Google Cloud Run is pretty much the same deal but I’ve no experience with it. I’ve considered deploying them as lambdas in the past depending on session length too…
> As a hobbyist part of me wants the VM abstracted completely (which may not be realistic). I want to say “here’s my game server process, it needs this much cpu/mem/network per unit, and I need 100 processes” and not really care about the underlying VM(s), at least until later. The closest thing I’ve found to this is AWS fargate.
You can’t have on demand usage with no noisy neighbours without managing the underlying VMs.
I used hathora [0] at my previous job, (they’ve expanded since and I’m not sure how much this applies anymore) - they had a CLI tool which took a dockerfile and a folder and built a container and you could run it anywhere globally after that. Their client SDK contained a “get lowest latency location” that you could call on startup to use. It was super neat, but quite expensive!
By making it an “us” problem to run the infrastructure at a good cost, it became cheaper for us to run than AWS, meaning we could take no profit on cloud VMs, making us cost competitive as hell.
OK, the idea was that what we really wanted was "ease of use" and "cost effectiveness".
In game development (at least the gamedev I did) we didn't really want to care about managing fleets or versions, we just wanted to publish a build and then request that same build to run in a region.
So, the way I originally designed AccelByte's managed gameservers was that we treat docker containers as the distribution platform (if it runs in Docker it'll have all the bundled dependencies, after all) and then you just submit that to us. We reconcile the docker image into a baked VM image on the popular cloud providers, and you pay per minute that they're actively used. The reason to do it this way is that cloud providers are really flexible with the sizes of their machines.
So, the next issue, cost!
If we're using extremely expensive cloud VMs, then the cloud providers can undercut us by offering managed gameservers; worse, people don't compare the devex of those things (though it was important to me when I was at AB); so we need to offer things at basically a neutral cost. It has to be the same price (or, ideally, cheaper) to use AccelByte's managed gameservers over trying to do it yourself on a cloud provider. That way we guarantee the cloud providers don't undercut us: they wouldn't cannibalise their own margins on a base product like VMs to offer them below market rate.
So, we turn to bare metal. We can build a fleet of globally managed servers; we can even rent them to begin with. By making it our problem to get good costs (because that pays for development), we are forced to make good decisions about CapEx vs OpEx, and it becomes "our DNA" to actually run servers, something most companies don't want to think about - but cloud costs are exorbitant and you need specialists to run them (I am one of those).
The bursty nature of games seems like it fits best in a cloud, but you'll often find that games naturally don't like to ship next to each other, and the first weeks are the "worst" weeks in terms of capacity. If you have a live service game that sustains its own numbers, that's a rarity, and in those cases it's even easier to plan capacity.
But if you build for a single burst and you're a neutral third party, you have basically built for every burst, and the more people who use you, the more profit you can make on the same hardware. And the more we buy, the better volume discounts we get, and the better we get at running things, etc., etc.
Anyway, in order to make effective use of bare metal, I wrote a gvisor clone that had some supervisor functionality. The basic gist was that the supervisor could export statistics about the gameserver, such as the number of connections to the designated GS port (which is a reasonable default for player count) and whether it had completed loading (I only had two ways of knowing this: one was the Agones RPC, the other was looking for a flag on disk... I was going to implement more), as well as ensuring the process is alive and lifecycling the process on death (collect up logs, crash dumps, any persistence, and send it to the central backend to be processed). It was also responsible for telling the kernel to jail the process to the resources that the game developer had requested. (So, if they asked for 2 vCPU and 12G of RAM, then that's what they get.)
It was also looking at NUMA awareness and CPU affinity, so some cores would have been wasted (Unreal Engine's server, for example, ideally likes to have 2 CPU cores, where 1 is doing basically everything and the other is about 20% used). Theoretically you can binpack that second thread onto a CPU core, but my experience on The Division made me think that I really hate when my computer lies to me, and plenty of IaaS providers have abstractions that lie.
I wrote the supervisor in Rust and it had about a 130KiB (static) memory footprint and I set myself a personal budget of 2 ticks per interval, but I left before achieving that one.
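If it helps picture it, the connection-counting piece boils down to something like this (a rough Go sketch rather than my Rust, TCP/IPv4 only, and the port number is made up; a real version would also handle /proc/net/tcp6, UDP flows, etc.):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // countEstablished counts ESTABLISHED TCP connections whose local port
    // matches the given port, by parsing /proc/net/tcp.
    func countEstablished(port uint64) (int, error) {
        f, err := os.Open("/proc/net/tcp")
        if err != nil {
            return 0, err
        }
        defer f.Close()

        count := 0
        sc := bufio.NewScanner(f)
        sc.Scan() // skip the header line
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) < 4 {
                continue
            }
            local := strings.Split(fields[1], ":") // "HEXIP:HEXPORT"
            if len(local) != 2 {
                continue
            }
            p, err := strconv.ParseUint(local[1], 16, 32)
            if err != nil || p != port {
                continue
            }
            if fields[3] == "01" { // 01 = TCP_ESTABLISHED
                count++
            }
        }
        return count, sc.Err()
    }

    func main() {
        // 7777 is just a placeholder for the designated GS port.
        n, err := countEstablished(7777)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("players (approx, by established connections): %d\n", n)
    }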
I could go into a lot of detail about it; I was really happy when I discovered that they continued my design. It didn't make me popular when I came up with it, I think: they wanted something simple, and despite being developers, developing something instead of taking something off the shelf is never simple.
Also, they were really in bed with AWS. So anything that took away from AWS dominance was looked at with suspicion.
- Go-based supervisor daemon runs as a systemd service on the host. I configure it to know about my particular game server and expected utilization target.
- The supervisor is responsible for reconciling my desired expectations (a count, or % of cpu/mem/etc so far) with spinning up game servers managed by systemd (since systemd doesn’t natively support this sort of dynamism + Go code is super lean); rough sketch of that loop below.
- If I want more than one type of game server I imagine I could extend this technique to spinning up more than one supervisor, but I’m keeping that in my back pocket for now.
- I haven’t thought up a reason to, but my Go supervisor might want to read the logs of my game servers through journald.
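Roughly, the reconcile loop looks something like this (a simplified sketch, not my actual code: the templated unit name "gameserver@.service", the fixed count, and plain shelling-out to systemctl are all placeholders; a real version would read the target from config and also scale down):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    const unitTemplate = "gameserver@%d.service"

    // isActive asks systemd whether instance i of the templated unit is running.
    func isActive(i int) bool {
        // `systemctl is-active --quiet` exits 0 only if the unit is active.
        return exec.Command("systemctl", "is-active", "--quiet",
            fmt.Sprintf(unitTemplate, i)).Run() == nil
    }

    // reconcile starts any missing instances up to the desired count
    // (scaling down is omitted in this sketch).
    func reconcile(desired int) {
        for i := 1; i <= desired; i++ {
            if isActive(i) {
                continue
            }
            unit := fmt.Sprintf(unitTemplate, i)
            if out, err := exec.Command("systemctl", "start", unit).CombinedOutput(); err != nil {
                fmt.Printf("start %s failed: %v: %s\n", unit, err, strings.TrimSpace(string(out)))
            }
        }
    }

    func main() {
        const desired = 4 // would come from config / the utilization target
        for {
            reconcile(desired)
            time.Sleep(15 * time.Second)
        }
    }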
For my purposes I’m not making a generic solution for unknown workloads like your Rust supervisor, which probably helps reduce complexity.
My workstation uses systemd so I can see my supervisor working easily. Real heckin’ neat.
My only advice is to capture stdout of the child process (gameserver) from the supervisor instead of taking your own dependency on journald: everyone speaks stdout, you can later enrich your local metrics with it if it's structured well, and you can forward it centrally.
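In Go that's just a pipe off the child process; a minimal sketch (the "./gameserver" path is a placeholder):

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("./gameserver")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(stdout)
        for sc.Scan() {
            line := sc.Text()
            // Enrich/forward here: parse structured lines into metrics,
            // ship them centrally, or just re-emit on our own stdout.
            fmt.Println("[gameserver]", line)
        }
        if err := cmd.Wait(); err != nil {
            fmt.Println("gameserver exited:", err)
        }
    }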
What you’ve stated so far is interesting. I’ve been reading through some of your blog content too. I sent a LinkedIn connection request (look for Andrew).
Probing question: knowing what you know do you have any opinions (strong or otherwise) on what a solo dev might want to pursue? Just curious how you respond.
Why not both? Systemd allows you to make containers via nspawn, which are defined in just about the same way as a regular systemd service. Best of both worlds.
That would be portable[1] services.
https://adamgradzki.com/lightweight-development-sandboxes-wi...
https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
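For reference, a Quadlet unit is about this small (the image, port, and file name below are made up for the example). Drop it in ~/.config/containers/systemd/ and podman's generator turns it into an ordinary systemd service:

    # ~/.config/containers/systemd/gameserver.container (rootless user unit)
    [Unit]
    Description=Example game server via Quadlet

    [Container]
    Image=docker.io/example/gameserver:latest
    PublishPort=7777:7777/udp

    [Install]
    WantedBy=default.target

After a `systemctl --user daemon-reload` you manage it like any other unit, e.g. `systemctl --user start gameserver`.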
(In fact, nothing prevents anyone from extracting and repackaging the sysvinit generator, now that I think of it).
I’ve also done Microsoft Orleans clusters and still recommend the single-PID, multiple containers/processes approach. If you can avoid Orleans and Kubernetes and all that, all the better. It just adds complexity to this setup.
I’m starting to appreciate simplicity away from containers; that’s why I’m even exploring systemd. I bet big on containers and developed plenty of skills, especially with k8s. I never stopped to appreciate that I’m partly in the business of making processes run on OSes, and it kinda doesn’t matter if the pid is a container or running ‘directly’ on the hardware. I’ll probably layer it back in, but for now I’m kinda avoiding it as an exercise.
E.g. if I’m testing a debug ready build locally and want to attach my debugger, I can do that in k8s but there’s a ceremony of opening relevant ports and properly pointing to the file system of the container. Not a show stopper since I mostly debug while writing testing/production code in dev… But occasionally the built artifact demands inspection.
This all probably speaks to my odd prioritization: I want to understand and use. I’ve had to step back and realize part of the fun I have in pursuing these projects is the research.
Bare metal is the only actually good option, but you often have to do a lot yourself. Multiplay did offer it last time I looked, but I don’t know what’s going on with them now.
systemd-networkd now implements a resolve hook for its internal DHCP
server, so that the hostnames tracked in DHCP leases can be resolved
locally. This is now enabled by default for the DHCP server running
on the host side of local systemd-nspawn or systemd-vmspawn networks.
Hooray.local
All the services you forgot you were running for ten whole years will fail to launch someday soon.
Because last time I wrote systemd units it looked like a job.
Also, it's way over-complex for anything but a multi-user, multi-service server. The kind you're paid to maintain.
Why wouldn't you want unit files instead of much larger init shell scripts which duplicate logic across every service?
It also enabled a ton of event driven actions which laptops/desktops/embedded devices use.
Indeed, that criticism makes no sense at all.
> It also enabled a ton of event driven actions which laptops/desktops/embedded devices use.
Don't forget VMs. Even in server space, they use hotplug/hotunplug as much as traditional desktops.
> Don't forget VMs. Even in server space, they use hotplug/hotunplug as much as traditional desktops.
I was doing hot plugging of hardware two+ decades ago when I still administered Solaris machines. IBM mainframes have been doing it since forever.
Even on Linux udevd existed before systemd did.
The futzing around with resolv.conf(5) for one.
I take to setting the immutable flag on the file, given all the shenanigans that "dynamic" elements of desktop-y system software pull with it, when I want the thing to never change after I install the server. (If I do need to change something, which is almost never, I'll remove/re-add the flag via the attr option of Ansible's file module.)
Of course nowadays "init system" also means "network settings" for some reason, and I often have to fight between systemd-networkd and NetworkManager on some distros. I was very happy with interfaces(5), also because once I set the thing up on install on a server, I hardly ever have to change it, and the dynamic-y stuff is an anti-feature.
SystemD as init replacement is "fine"; SystemD as kitchen-sink-of-the-server-with-everything-tightly-coupled can get annoying.
The server and desktop have a lot more disk+RAM+CPU than the embedded device, to the point that running systemd on the low end of "just enough to run Linux" would be a pain.
Outside embedded, though, it probably works uniformly enough.
I recently set up a "modern" systemd based Ubuntu server in a VM and it used closer to 1 G before I installed any service.
I just checked a random debian 12 system (with systemd) running a bunch of services at home, and here's what I see:
    $ free -m
                   total        used        free      shared  buff/cache   available
    Mem:            3791         320        2235           1        1313        3471
    Swap:             99           0          99
Seems like usage is pretty much on par with your expectation. The largest consumers are systemd-journal which is storing logs in RAM, and filebeat which is relatively wasteful w/ memory. systemd itself (without the journal buffer log) consumes maybe 20-30 MB.
On one (coreelec) systemd has 7M of resident data, of which 5.5M are shared libraries; by comparison, the numbers for sshd are respectively 6M and 3.5M.
On an OpenWRT machine without systemd (it's using procd) there are 700k of resident data. So the "cost" of systemd seems to be ~5M. Certainly I wouldn't run systemd on an old router with 16MB of RAM, but the cost is two orders of magnitude less than 1G-300M.
Fascinating. Last time I wrote a .service file I thought how much easier it was than a SysV init script.
I vastly prefer #1.
Systemd gives you both worlds.
Commercially I would do my best to implement this using systemd features, then finally give up and throw in a script, but with extensive comments describing why it has to be a script instead of a systemd unit file.
TIL. Didn't know I can get paid to maintain my PC because I have a background service that does not run as my admin user.
[Service]
Type=simple
ExecStart=/usr/bin/my-service
If this is a hard job for you, well, maybe get another career, mate. Especially now with LLMs.
The thing to me is that services sometimes do have cause to be more complex, or more secure, or to be better managed in various ways. Over time we might find (for ex) that oh, actually, waiting for this other service to be up and available first helps.
And if you went to run a service in the past, you never knew what you were going to get. Each service that came with (for ex) Debian was its own thing. Many forked off from one template or another, but often forked long ago, with their own idiosyncratic threads woven in over time. Complexity emerged, and it wasn't contained, and it certainly wasn't normalized complexity across services: there would be dozens of services, each one requiring careful staring at an init script to understand, with slightly different operational characteristics and nuance.
I find the complaints about systemd being complex almost always look at the problem in isolation. "I just want to run my (3 line) service, but I don't want to have to learn how systemd works & manages units: this is complex!" But that ignores the sprawl of what's implied: that everyone else was out there doing whatever, and that you stumble in blind to all manner of bespoke homegrown complexity.
Systemd offers a gradient of complexity that begins with the extremely simple (but still offering impressive management and oversight) and lets services wade into more complexity as they need. I think it is absolutely humbling, and to some people an affront, to see man pages with so so so many options, so it's natural to say: I don't need this, this is complex. But given how easy it is, how much visibility into the state of the world we get that SysV never offered, given the standard shared culture and tools, and given the divergent evolutionary chaos of everyone muddling through init scripts themselves, systemd feels vastly more contained, learnable, useful, concise, and less complex than the nightmares of old. And it has simple starting points, as shown at the top, that you can add onto and embellish as you find cause to move further along the gradient of complexity, and you can do so in a simple way.
It's also incredibly awesome how many amazing tools for limiting process access, for sandboxing and securing services systemd has. The security wins can be enormous.
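For a taste, here's a generic hardening block you can drop into a unit's [Service] section (not tuned to any particular service; tighten or loosen per workload):

    [Service]
    DynamicUser=yes
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    PrivateDevices=yes
    ProtectKernelTunables=yes
    RestrictAddressFamilies=AF_INET AF_INET6
    CapabilityBoundingSet=
    SystemCallFilter=@system-service

And `systemd-analyze security <unit>` will score how exposed a given service still is.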
> Because last time I wrote systemd units it looked like a job
Last, an LLM will be able to help you with systemd, since it is common knowledge with common practice. If you really dislike having to learn anything.
I struggle to figure out what it is that the systemd haters club actually struggles with, what the hard parts actually are. I do in fact sometimes use a 3-line .service file and it works fine. It feels like there is a radically conservative, anti-progress, anti-learning, anti-trying force that is extremely vocal and shows up all the time, everywhere, in any thread, to protest against doing anything or learning anything. I really, really am so eager to find the learnable lessons, to find the hard spots, but it's almost entirely the same low-grade discursive trashing with no constructive or informative input.
It feels like you use emotional warfare rather than reason. The culture I am from is powerless against that if that's all you bring, but I also feel no respect for a culture that is so unable to articulate what the fuck its problems actually are. IMO we all need a social defense against complaints that are wildly vacuous and unspecific. IMO you are not meeting any baseline for taking your whinges seriously.
... or doesn't care to discuss it any more. RedHat's push was successful, Linux is not a hobby OS any more, you won.
I can agree with you that linux needed something better than sysv init.
I can't agree with you that this monolithic solution that takes over more and more services is better.
Oh, you want a specific complaint?
Why the fuck does systemd lock up the entire startup process for 2 minutes if you start a desktop machine without network connectivity?
Also, it's entirely contained within a program that creates systemd .service files. It's super easy to extract it into a separate project. I bet someone will do it very quickly if there's a need.
However, it is not easy figuring out which of those scripts are actually SysVInit scripts and which simply wrap systemd.
* https://en.wikipedia.org/wiki/Jamie_Zawinski#Zawinski's_Law
:)
Make an `smtp.socket`, which calls `smtp.service`, which receives the mail and prints it on standard output, which goes to a custom journald namespace (thanks `LogNamespace=mail` in the unit) so you can read your mail with `journalctl --namespace=mail`.
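Roughly like this (port and namespace are illustrative, and note that cat doesn't actually speak SMTP, so this only captures whatever the peer pipes at the port):

    # smtp.socket
    [Socket]
    ListenStream=2525
    Accept=yes

    [Install]
    WantedBy=sockets.target

    # smtp@.service (one instance per connection)
    [Service]
    ExecStart=/usr/bin/cat
    StandardInput=socket
    StandardOutput=journal
    LogNamespace=mail

With Accept=yes, systemd spawns an smtp@ instance per connection, hands the socket to cat on stdin, and cat copies it into the "mail" journal namespace, readable with `journalctl --namespace=mail`.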
Breaking systemd was a thorn in the side of distributions trying to use musl.
Probably no biggie to google the necessary copypasta to launch stuff from .service files instead. Which, being custom, won't have their timeout set back to "infinity" with every update. Unlike the existing rc.local wrapper service. Which, having an infinity timeout, and sometimes deciding that whatever was launched by rc.local can't be stopped, can cause shutdown hangs.
v259? [cue https://youtu.be/lHomCiPFknY]
> Required minimum versions of following components are planned to be raised in v260:
> * Linux kernel >= 5.10 (recommended >= 5.14),
Don't these two statements contradict each other?