Top
Best
New

Posted by vquemener 3 days ago

Mount Mayhem at Netflix: Scaling Containers on Modern CPUs (netflixtechblog.com)
47 points | 22 comments
yjftsjthsd-h 2 hours ago|
Okay, I'll ask the dumb question: Couldn't you also reduce the number of layers per container? Sure, if you can reuse layers you should, but unless you've done something very clever like 1 package per layer I struggle to think that 50 is really useful?
gucci-on-fleek 2 hours ago||
> unless you've done something very clever like 1 package per layer I struggle to think that 50 is really useful?

1 package per layer can actually be quite nice, since it means that any package updates will only affect that layer, meaning that downloading container updates will use much less network bandwidth. This is nice for things like bootc [0] that are deployed on the "edge", but less useful for things deployed in a well-connected server farm.

[0]: https://bootc-dev.github.io/bootc/
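A toy sketch of why this saves bandwidth, assuming layers are content-addressed by digest (as in OCI registries; the simulation itself is purely illustrative, not how any real client is implemented):

```python
import hashlib

def digest(layer: bytes) -> str:
    """Content-address a layer, the way OCI registries do."""
    return hashlib.sha256(layer).hexdigest()

# Image v1: one package per layer.
v1 = {"openssl": b"openssl-3.0.1", "curl": b"curl-8.5", "app": b"app-1.0"}
# Image v2: only openssl was updated.
v2 = {"openssl": b"openssl-3.0.2", "curl": b"curl-8.5", "app": b"app-1.0"}

cached = {digest(layer) for layer in v1.values()}  # layers already on disk
to_pull = [name for name, layer in v2.items() if digest(layer) not in cached]

print(to_pull)  # only the updated package's layer gets re-fetched
```

With one big "all dependencies" layer instead, any single package update changes that layer's digest and forces a re-download of everything in it.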

yjftsjthsd-h 1 hour ago|||
Yes, my intended meaning was that if you're doing that or something similar then I totally get having lots of layers because it's useful. Mostly I've only seen it come up with nix, but I can see how bootc would have a similar deal. That said, most container images I've ever seen aren't doing anything that clever and probably should be like... 2-3 layers? (One base layer, one with all your dependencies shoved in, and maybe one on top for the actual application.)
seabrookmx 1 hour ago|||
It doesn't really work this way, though?

It's called a layer because each layer on top depends on the layers below.

If you change the package defined in the bottom-most layer, all 49 above it are invalidated and need to be re-pulled or re-built.

gucci-on-fleek 1 hour ago|||
> If you change the package defined in the bottom-most layer, all 49 above it are invalidated and need to be re-pulled or re-built.

I also initially thought that that was the case, but some tools are able to work around that [0] [1] [2]. I have no idea how it works, but it works pretty well in my experience.

[0]: https://github.com/hhd-dev/rechunk/

[1]: https://coreos.github.io/rpm-ostree/container/#creating-chun...

[2]: https://coreos.github.io/rpm-ostree/build-chunked-oci/

minitech 1 hour ago|||
That’s mostly a Dockerism (and even Docker has `COPY --link` these days). The underlying tech supports independent layers.
solatic 45 minutes ago|||
This is Netflix, they have thousands of engineers. So you have two approaches to solve the problem: either write enforced policy-as-code to prevent people from deploying images with too-high layer count (and pray they never need to rollback to an image from before the policy was written), thus incurring political alignment costs around the new policy and forcing non-compliant teams to adapt (which is time not spent on features); or, solve the problem entirely at the infrastructure level.

It's hardly surprising that companies consider infrastructure-level solutions to be better.

ActorNightly 1 hour ago|||
It's not a dumb question. It seems like with these supposedly high-tech enterprise solutions, they burn so much effort on something very complex and impressive, like investigating CPU architecture performance for kernel-level operations and figuring out the kernel specifics that are causing slowdowns. They could instead put that talent into just writing software, without containers, that can max out any EC2 instance in terms of delivering streamed content, and then you don't have to worry about why your containers are taking so long to load.
hvb2 1 hour ago|||
I have seen these comments quite a bit but they gloss over a major feature of a large company.

In a large company you can have thousands of developers just coding away at their features without worrying about how any of it runs. You can dislike that, but that's how that goes.

From a company perspective this is preferable, as those developers are supposedly focused on building the things that make the company money. It also allows you to hire people who might be good at that but have no idea how the deployment actually works or how to optimize it. Meanwhile, with all code running in roughly the same way, the operations side gets easier.

When the company grows and you're dealing with thousands of people contributing code, these optimizations might save a lot of money and time. But those savings might be peanuts compared with every 10 devs coming up with their own deployment, and the ops overhead of that.

Hikikomori 7 minutes ago|||
Content is not streamed from these containers.
redanddead 50 minutes ago||
Here’s an even dumber question: why didn’t they make a documentary instead of an article?
rixed 2 hours ago||
I am not familiar with the nitty-gritty of the container image build process, so maybe I'm just not the intended audience, but this part is particularly unclear to me:

  > To avoid the costly process of untarring and shifting UIDs for every container, the new runtime uses the kernel’s idmap feature. This allows efficient UID mapping per container without copying or changing file ownership, which is why containerd performs many mounts
Why does using idmap require performing more mounts?
nineteen999 1 hour ago||
The costly process probably explains why they just started injecting ads in my plan where there previously weren't any.

And also explains why rather than be leveraged into a more expensive plan to help them pay for their containers, I cancelled my subscription. Not like there's more than 1% content there worth paying for these days anyway.

martijnvds 1 hour ago||
This kind of id mapping works as a mount option (it can also be used on bind mounts). You give it a mapping of "id in filesystem on disk" to "id to return to filesystem APIs" and it's all translated on the fly.
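A sketch of what that on-the-fly translation looks like, using the same (inside, outside, count) triple format the kernel uses for /proc/&lt;pid&gt;/uid_map — the function and the Python model are mine for illustration; the real translation happens in the VFS:

```python
def map_uid(uid: int, mappings: list[tuple[int, int, int]]) -> int:
    """Translate an on-disk (host) uid to the uid the container sees.

    Each mapping is (inside_start, outside_start, count).
    """
    for inside, outside, count in mappings:
        if outside <= uid < outside + count:
            return inside + (uid - outside)
    return 65534  # kernel convention: unmapped ids show up as "nobody"

# Host uids 100000..165535 appear inside the container as 0..65535.
mappings = [(0, 100000, 65536)]
print(map_uid(100000, mappings))  # host 100000 -> container root (0)
print(map_uid(100042, mappings))  # host 100042 -> container uid 42
```

Because the mapping is attached per mount, many containers can share one set of image files on disk, each seeing the ownership it expects — no untarring or chown pass needed.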
ViktorRay 4 hours ago||
Articles like this are pretty cool. It’s so interesting to see the behind the scenes that happens whenever we watch a Netflix movie.
haneul 4 hours ago||
Interesting, another case of disabling HT improving performance. Reminds me of doing that on Intel CPUs from a few gens ago.
parliament32 4 hours ago||
It took them this long to move from docker to containerd?
DeathArrow 1 hour ago||
So using the "old" container architecture could have been better than wasting time implementing the new architecture, dealing with the performance issues and wasting more time fixing the issues?
vivzkestrel 5 hours ago||
- can someone kindly explain why there are 2 websites that both claim to be the Netflix tech blog?

- website 1 https://netflixtechblog.medium.com/

- website 2 https://netflixtechblog.com/

hhh 1 hour ago||
The second one is a hosted custom domain for the medium blog iirc
geodel 4 hours ago||
I mean, Netflix is dealing with big, important things like container scaling, creating a million microservices talking to each other, and so on. Having multiple tech blogging platforms on Medium is not something they have a spare moment to think about.
owenthejumper 4 hours ago|
Why is this so badly AI written? Netflix can surely pay for writers.

At this point I refuse to read any content in the AI format of:

- The problem

- The solution

- Why it matters