Posted by bumbledraven 15 hours ago
But I don't want to be either of those customers. It means the whole system has an extra layer of abstraction, so they can juggle VMs around. It's why you need slow EBS instead of just getting a flash drive in the same case as the CPU, with 0.01x the latency.
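The latency gap is easy to check for yourself. A minimal stdlib-only sketch (file path and sample counts are arbitrary) that times small durable writes, i.e. write 4 KiB then fsync; run it once on an EBS-backed volume and once on a local NVMe drive and compare the medians:

```python
import os
import statistics
import tempfile
import time

def fsync_latency_us(path, writes=200, size=4096):
    """Median latency of a 4 KiB write followed by fsync, in microseconds."""
    buf = os.urandom(size)
    samples = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(writes):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # force the write through to the device
            samples.append((time.perf_counter() - t0) * 1e6)
    finally:
        os.close(fd)
    return statistics.median(samples)

# Point this at a file on whichever disk you want to probe.
with tempfile.TemporaryDirectory() as d:
    lat = fsync_latency_us(os.path.join(d, "probe"))
    print(f"median fsync'd 4 KiB write: {lat:.0f} us")
```

This is a crude probe, not a benchmark (a tool like fio does it properly), but it is enough to see order-of-magnitude differences between network-attached and local storage.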
The key to scaling up is having big-enough hardware on the backend. If Hetzner is renting out bare-metal instances, they can only rent out the sizes they have. If a cloud provider invests in really big single systems, it can offer fractions of those systems to multiple tenants, some of whom scale up to use the entire system and some of whom don't. I think that is a win-win.
A fractional VM is also a fungible VM. If the tenant calls to spin up a certain size VM, then the backend can find suitable hardware for it from a menu of sizes. Smaller VMs can slot in anywhere there is room, not just on a designated bare-metal system.
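That fungibility is essentially a bin-packing problem: place each VM request on any host with enough free capacity. A toy first-fit sketch (host names and core counts are made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    free_cores: int
    vms: list = field(default_factory=list)

def place(vm_name, cores, hosts):
    """First-fit: put the VM on the first host with enough free cores."""
    for h in hosts:
        if h.free_cores >= cores:
            h.free_cores -= cores
            h.vms.append(vm_name)
            return h.name
    return None  # no capacity anywhere: time to rack another big box

hosts = [Host("rack1-a", 64), Host("rack1-b", 64)]
print(place("tenant-small", 8, hosts))   # -> rack1-a (slots in anywhere)
print(place("tenant-whole", 64, hosts))  # -> rack1-b (takes a whole host)
```

Real schedulers weigh RAM, disk, network, and anti-affinity too, but the shape of the problem is the same: small VMs fill the gaps that big tenants leave behind.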
A cloud provider is always going to want to maximize their rack space, wattage/heat, and resource usage. So they will invest in high-density systems at every chance. On the other hand, cloud tenants will have diverse needs, including some fraction of those big computers.
A service offering VMs for $20 is a long way from AWS, but I see how it makes sense as a first step. AWS also started with EC2, but in a completely different environment with no competition.
if we go back to the principle that modern computers are really fast, and SSDs are crazy fast,
and we remove the extra cruft of abstractions, software will be easier to develop, and we wouldn't have people shilling 'agents' as the path to faster development.
ultimately the bottleneck is our own thinking.
simple primitives, simpler thinking.
For that money I can get 5 big bare metal boxes on OVH with fast SSDs, put k0s on them, fast deploy with kluctl, cloudflare tunnels for egress. Backups to a cheap S3 bucket somewhere. I'll never look at another cloud provider.
(Percentages cited above are tongue-in-cheek, actual numbers are probably different)
Jokes aside: - k8s is an insane piece of software. The right tool for a big problem. Not for your toys. Yes, it is crazy difficult to set up and manage. Then what?
- the cloud has bad, slow disks. BS. They have perfectly fast NVMe.
Something else? That’s it.
Why am I so confident? I spent 2 years setting up and managing Kubernetes, so I have some experience. Do I use it anymore? Nope. Not the right tool for me. Ansible with some custom Linux tools fits me better.
I am also building my own cloud. Or, to put it more modestly: hosting to run the websites for https://playcode.io. Yes, it is hard and full of compromises. Networking, for example: I want VMs to talk to each other across regions. Or disks and reliability. What about snapshots? And many bare-metal providers give you only 1 Gbit/s, which is not enough, or they charge far more for a 10 Gbit uplink. So it is easy to end up building something limited, unreliable, or non-scalable.
52.35.87.134 <- Amazon Technologies Inc. (AT-88-Z)
Our exe.dev web UI still runs on AWS. We also have a few users left on our VM hosts there, since when we launched in December we were still considering building on AWS. Now almost all customer VMs are on other bare-metal providers or on machines we are racking ourselves. We built our own GLB with the help of another vendor's anycast network; you can see it if you try any of the exe.xyz names generated for user VMs.
We would move exe.dev too, but a few compliance-sensitive customers go through it, so we need to get the compliance story right on our own hardware first. It is a little annoying being tied to AWS just for that, but very little of our traffic goes through them, so in practice it works.
Hey wait a minute!
Cloud is bad?