Posted by antov825 8/31/2025
That being said, I've lost a lot of VMs on EC2 and had entire regions go down in GCP and AWS in the last 3 years alone, so going to the public cloud isn't a solve-it-all solution. Knock on wood, the colo we've been using hasn't been down once in 12+ years.
I think in general that expectation is NOT acceptable though, especially around data loss, because the non-engineering stakeholders don't believe it is.
Engineers don't make decisions in a vacuum. If you can manage the expectations, good for you. But in most cases that's very much an uphill battle, one which might make you look incompetent because you cannot guarantee no data loss.
Even colocation is often fraught with issues. I shall not mention the plethora of dead hardware from datacenter electricity failures. Ironically, my home has more stable electricity than some datacenters lol.
Unless you're running a business where a few minutes of downtime will cost you millions, most companies can literally run their own servers from their basements. I often see how much people overestimate their need for 99.999% uptime, or their bandwidth requirements.
It's not like colocation is that much cheaper. The electricity prices you're paying are often more expensive than even business/home electricity. That leaves only internet/fiber, and there's a plethora of commercial fiber these days.
Years ago the minimum quoted price I could get for 1Gbit business fiber was 2k (not including install costs). Now in some countries you get 5 or 10Gbit business fiber for 100 Euro.
When my colo contract runs out in a couple years, I may seriously consider it, especially since they're already talking about offering bigger bandwidth packages.
How many HTTP requests/second can a single machine handle? (2024) - https://news.ycombinator.com/item?id=45085446 - Aug 2025 (32 comments)
If you go this route, you’ve got to build out your own stack for security, global delivery, databases, storage, orchestration, networking ... the whole deal. That means juggling a bunch of different tools, patching stuff, fixing breakage at 3 a.m., and scaling it all when things grow. Pretty soon you need way more engineers, and the “cheap” servers don’t feel so cheap anymore.
Early Google rejected big iron and built fault tolerance on top of commodity hardware. WhatsApp used to run their global operation employing only 50 engineering staff. Facebook ran on Apache+PHP (they even served index.php as plain text on one occasion). You can build enormous value through simple means.
Suddenly I learned why my employer was willing to spend so much on OpenStack and Active Directory.
lol, why was this the defining moment? She wasn't too keen on hearing the high pitch wwwwhhhhuuuuurrrrrrr of the server fans?
You absolutely can, and it has been the most common practice for scaling them for decades.
That’s obviously possible and common.
What I meant was actually butchering the monolith into separate pieces and deploying it, which is - by the definition of monolith - impossible.
There is no limit or cost to deploying 10000 lines over 1000 lines.
If that’s possible (regardless of what was deployed to the two machines) then the app just isn’t a true monolith.
Yes you can. It's called having multiple application servers. They all run the same application, just more of them. Maybe they connect to the same DB, maybe not, maybe you shard the DB.
Of course you can. I've done it.
Identical binary on multiple servers with the load balancer/reverse proxy routing specific requests to specific instances.
The practical result is indeed "running different aspects of the monolith on different servers".
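The pattern described above can be sketched in a few lines: identical copies of the app behind a round-robin balancer, where any instance can serve any request. This is a minimal illustration, not a real proxy; the instance names are made up.

```python
# Minimal sketch of horizontally scaling a monolith: the same binary
# runs on every server, and a round-robin balancer spreads requests.
from itertools import cycle

INSTANCES = ["app-1", "app-2", "app-3"]  # hypothetical identical servers

_rr = cycle(INSTANCES)

def route(request_path: str) -> str:
    """Pick the next instance round-robin; any instance can serve any path."""
    return next(_rr)

# Six requests spread evenly over three identical servers
hits = [route(f"/page/{i}") for i in range(6)]
print(hits)
```

In practice the `route` function is your reverse proxy (nginx, HAProxy, a cloud load balancer); the point is that nothing about the monolith itself has to change.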
You can also publish libraries as NuGet packages (privately if necessary) to share code across repos if you want the new app in its own repo.
I've worked on projects with multiple frontends, multiple backends, lots of separately deployed Azure functions, etc. It's no problem at all to make significant structural changes as long as the code isn't a big ball of mud.
I always start with a monolith, we can easily make these changes when necessary. No point complicating things until you actually have a reason to.
Using cloud and box storage on Hetzner is more expensive than the dedicated server, 4x the cost of owning the hardware and paying the power bill. AWS and Azure are just nuts, >100x the price, because they charge so much for storage even with hard drives. Neither Contabo nor Netcup can do this; it's too much storage for them.
Every time I look at this I come to the same basic conclusion: the overhead of renting someone else’s machine is quite high compared to the hardware and power cost, and it would be a worse solution than having that performance on the local network for bandwidth and latency. The problem isn't so much the compute performance, which is relatively fairly priced; it's the storage costs and data transfer that bite.
Not really what the article was necessarily about, but cloud is sort of meant to be good for low-end hardware, and it's actually kind of not: the storage costs are just too high, even with a Hetzner Storage Box.
Here is a fun one ...
https://www.reddit.com/r/selfhosted/comments/1dqq3h8/my_12x_...
Dude is running 12x AMD 6600HS with a power draw between 300 and 400W. The compute alone is easily 3x an equivalent Hetzner 48c server. And that's not even mentioning the included 768GB of memory (people underestimate how much high-capacity RDIMMs draw in power).
The main issue with Hetzner is that, as long as you only use their base configuration servers, they are very competitive. But if you start to step even a little bit out of line, the prices simply skyrocket. Try adding more storage to some servers, or memory, or a higher interconnect between your servers (limited to 10Gbit).
Even basic consumer hardware comes with 2.5Gbit, yet Hetzner is in the stone age with 1Gbit. I remember the time when Hetzner introduced 1Gbit. Hetzner was innovation and progression, but that has been slowly vanishing; Hetzner has been getting more and more lazy. You see the issue also with their cloud offering's storage. Look at Netcup, even Strato, etc. Hetzner barely introduces anything new anymore, and when something does come it's often less competitive or broken, like the whole S3 offering costing Backblaze price levels and having non-stop issues.
You can tell they were the only company that ever pushed consumer hardware hosting at mass scale, which made them a small monopoly in the market. And it shows if you're an old customer and know their history. Hey, do people remember the price increases for the auction hardware because of the Ukraine invasion? "Do not worry folks, when the electricity prices go down, we will adjust them down." Oh, we've been at pre-war prices for almost 2 years now. Where are those promised price drops? Hehehe ...
But then you need to buy newer, more expensive hardware, which pushes your initial price up (divide by the amount of time you'll need to host the server to get the monthly equivalent, then add power/connectivity/maintenance and compare to Hetzner).
Btw, the reason homelabbers flock to legacy enterprise hardware is that it generally gives you a good amount of compute for a cheap price, if you don't mind the increased power cost. This is actually fine, as a lot of homelab usage is bursty and the machine can be powered off outside of those times.
Unless you bought a 50~64 core server (and the power bill to match), you're often way cheaper off with consumer-level hardware. Older server hardware's advantage is more in the total amount of memory you can install, or the number of PCIe lanes.
The cheapest enterprise CPUs (AMD as an example) are currently Zen2; the moment you want Zen3, the prices go up a lot for anything 32-core or higher.
I have seen so many homelabs that ran ancient hardware, only for their owners to realize that they could do the same or more on modern mini-PCs or similar hardware, often at a fraction of the power draw.
The reason why so many people loved to run enterprise hardware was that in the US you had electricity prices in the low single digits or barely in the teens. When you get some 35 cent/kWh prices, people tend to do a bit of research and find out it's not the 2010s anymore.
I ran multiple enterprise servers with 384GB memory; things idled at 120W+ (and that was with temperature-controlled fans, because those drain a ton of power). Second PSU? There goes your idle, to 150W+.
Ironically, just get a few mini-PCs with the same memory capacity spread across them, and you're doing 50W (often less). That's the advantage of using laptop CPUs. I have had 8-core Zen3 systems idling at 4.5W.
And yes, you can turn off enterprise hardware but you can also sleep minipcs. And they do not tend to sound like jet engines when waking up ;)
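The idle-wattage difference compounds over a year of 24/7 operation. A back-of-the-envelope sketch using the wattages above and the 35 ct/kWh figure from earlier in the thread (assumed rates, not a quote):

```python
# Annual idle electricity cost: old enterprise server vs a few mini-PCs.
HOURS_PER_YEAR = 24 * 365          # ignoring leap years
PRICE_EUR_PER_KWH = 0.35           # the 35 ct/kWh figure mentioned above

def annual_cost_eur(idle_watts: float) -> float:
    """Cost of running a machine at the given idle draw, 24/7 for a year."""
    return idle_watts / 1000 * HOURS_PER_YEAR * PRICE_EUR_PER_KWH

enterprise = annual_cost_eur(120)  # enterprise server at idle
minipcs = annual_cost_eur(50)      # a handful of mini-PCs at idle

print(round(enterprise))  # ~368 EUR/year
print(round(minipcs))     # ~153 EUR/year
```

At those rates the power bill alone pays for a mini-PC within a couple of years.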
I have a Minisforum ITX board next to me with a 16c Zen4, cost 380 Euro. Idles at 17W. Beats any 32c Zen2 enterprise server. Even something like an AMD EPYC 7C13 (64c) will be only ~40% faster and still costs 600 Euro from China. It will do better on actual multithreaded workloads where you really do have tons of processes, but that's 400 bucks vs 600 plus a 400 Euro motherboard.
Just saying, enterprise hardware has its uses, like in enterprise environments, but for most people, especially homelabbers, it's often overkill.
How much waste is there from all this meta-software?
In reality, I host more on Raspberry Pis with USB SSDs than some people host on hundred-plus watt Dells.
At the same time, people constantly compare colo and hardware costs with the cost per month of cloud and say cloud is "cheaper". I don't even bother to point out the broken thinking that leads to that. In reality, we can ignore gatekeepers and run things out of our homes, using VPSes for public IPs when our home ISPs won't allow certain services, and we can still have excellent uptimes, often better than cloud uptimes.
Yes, we can consolidate many, many services into one machine because most services aren't resource heavy constantly.
Two machines on two different home ISP networks backing each other up can offer greater aggregate uptime than a single "enterprise" (a misnomer, if you ask me, if you're talking about most x86 vendors) server in colo. A single five-minute reboot of a Dell a year drops uptime from 100% to 99.999%.
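The arithmetic behind both claims is worth spelling out. A sketch, assuming independent failures for the two-machine case (a simplification; correlated outages like a regional power cut break the assumption):

```python
# Uptime arithmetic: one annual 5-minute reboot vs two redundant machines.
MINUTES_PER_YEAR = 365 * 24 * 60

def uptime_pct(downtime_minutes: float) -> float:
    """Uptime percentage for a given total annual downtime."""
    return 100 * (1 - downtime_minutes / MINUTES_PER_YEAR)

single_colo = uptime_pct(5)  # one 5-minute reboot per year

# Two machines, each only 99% available; if failures are independent,
# both are down simultaneously just 1% * 1% of the time.
p_down = 0.01
pair = 100 * (1 - p_down * p_down)

print(round(single_colo, 3))  # 99.999
print(round(pair, 2))         # 99.99
```

So two mediocre home connections in failover get within a hair of a single "five nines" colo box, which is the point being made above.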
Cloud is such bullshit that it's exhausting even just engaging with people who "but what if" everything, showing they've never even thought about it for more than a minute themselves.
My biggest takeaway was to have my core database tables (user, subscription, etc.) backed up every 10 minutes, and the rest every hour, and to test their restoration. (When I shut down the site it was 1.2TB.) A script to quickly provision a new node, in case I ever needed it, could have something up within 8 minutes of hitting enter.
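That tiered cadence is simple to express as scheduling logic. A minimal sketch, with made-up table names standing in for the real schema:

```python
# Tiered backup schedule: core tables every 10 minutes, the rest hourly.
# Table names here are hypothetical, for illustration only.
CORE_TABLES = ["user", "subscription"]
OTHER_TABLES = ["audit_log", "analytics"]

def tables_due(minute_of_day: int) -> list[str]:
    """Which tables should be dumped at this minute of the day."""
    due = []
    if minute_of_day % 10 == 0:
        due += CORE_TABLES    # 10-minute tier
    if minute_of_day % 60 == 0:
        due += OTHER_TABLES   # hourly tier
    return due

print(tables_due(30))  # core tables only
print(tables_due(60))  # both tiers
```

In practice this would be two cron entries driving per-table dumps; the important part of the original advice is the restore testing, which no schedule gives you for free.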
When I compare this to the startups I’ve consulted for, who choose k8s because it’s what Google uses yet they only push out 1000s of database queries per day with a handful of background jobs and still try to optimize burn, I shake my head.
I’d do it again. Like many of us I don’t have the need for higher-complexity setups. When I did need to scale, I just added more vCPUs and RAM.
The technical bits aren’t all there, though, and there’s a plethora of noise and misinformation. Happy to talk via email though.
My main concern is that at the current cloud premium rates, I will be better off even if I need to hire someone specifically to manage the local infra.