Posted by antov825 8/31/2025

Use One Big Server (2022) (specbranch.com)
350 points | 323 comments
synack 9/1/2025|
The complexity you introduce trying to achieve 100% uptime will often undermine that goal. Most businesses can tolerate an hour or two of downtime or data loss occasionally. If you set this expectation early on, you can engineer a much simpler system. Simpler systems are more reliable.
tgtweak 9/1/2025||
We had single-datacenter resiliency (meaning N+1 on power, cooling, network + ISP, servers) and it was fine. You still need an offsite DR strategy here - this is one of the things a hybrid cloud is great for: you can replicate your critical workloads like databases and services to the cloud in no-load standby, or delta-copy your backups to a cheap cloud provider for simplified recovery in a disaster scenario (i.e. the entire datacenter gets taken out). The cost of this is relatively low, since data into the cloud is free and you're only really incurring costs in a disaster recovery scenario. Most virtualized platforms (Veeam etc.) support offsite secondary incremental backups with relative ease, and recovery is also pretty straightforward.

That being said, I've lost a lot of VMs on EC2 and had entire regions go down in GCP and AWS in the last 3 years alone, so going to the public cloud isn't a solve-it-all solution - knock on wood, the colo we've been using hasn't been down once in 12+ years.
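
A minimal sketch of the delta-copy idea mentioned above, assuming an S3-compatible bucket reachable with boto3; the bucket name, paths, and manifest file are illustrative, not the setup described in the comment:

    # Sketch: delta-copy local backup files to an S3-compatible bucket for DR.
    # Assumes boto3 is installed and credentials/endpoint are configured;
    # bucket name and paths are illustrative.
    import json
    from pathlib import Path

    import boto3

    BACKUP_DIR = Path("/var/backups")                  # local incremental backups
    MANIFEST = Path("/var/backups/.dr-manifest.json")
    BUCKET = "dr-backups"                              # hypothetical bucket name

    s3 = boto3.client("s3")                            # pass endpoint_url for non-AWS providers

    # What we shipped last time: relative path -> [size, mtime].
    seen = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}

    for path in BACKUP_DIR.rglob("*"):
        if not path.is_file() or path == MANIFEST:
            continue
        stat = path.stat()
        fingerprint = [stat.st_size, int(stat.st_mtime)]
        key = str(path.relative_to(BACKUP_DIR))
        if seen.get(key) != fingerprint:               # only upload new or changed files
            s3.upload_file(str(path), BUCKET, key)
            seen[key] = fingerprint

    MANIFEST.write_text(json.dumps(seen))

In practice most shops would lean on the backup tool's own offsite/copy job rather than rolling this by hand; the point is just that only the deltas cross the wire.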

hvb2 9/1/2025||
Much less expensive too.

I think that expectation is generally NOT acceptable though, especially around data loss, because the non-engineering stakeholders don't believe it is.

Engineers don't make decisions in a vacuum; if you can manage the expectations, good for you. But in most cases that's very much an uphill battle, which might make you look incompetent because you cannot guarantee zero data loss.

gethly 9/1/2025||
I run on VPSs as well. I ditched cloud a long time ago. Once my project starts making money, I will definitely buy my own hardware and colocate. Cloud is like dating apps: we had fun for a decade, but it's time to get serious, get some things actually done, and be productive again.
benjiro 9/1/2025|
> I will definitely buy my own hardware and colocate.

Even colocation is often fraught with issues. I shall not mention the plethora of dead hardware from datacenter electricity failures. Ironically, my home has more stable electricity than some datacenters lol.

Unless you're running a business where a few minutes of downtime will cost you millions, most companies can literally run their own servers from their basements. I often see how much people overestimate their need for 99.999% uptime, or their bandwidth requirements.

It's not like colocation is that much cheaper. The electricity prices you're paying are often higher than even business/home electricity. That leaves only internet/fiber, and there's a plethora of commercial fiber available these days.

I used to get a minimum quoted price of 2k for 1Gbit business fiber years ago (not including install costs). Now, in some countries, you can get 5 or 10Gbit business fiber for 100 Euro.

HankStallone 9/2/2025||
Sometimes I wonder why I'm not running my servers from my home, considering my 1Gb fiber has 3ms latency, and a good UPS would get me through all but a couple of the longest power outages I've had in the last 15 years. As long as I'm hosting small business web sites or something like that, and not critical banking or hospital systems, there's no reason it wouldn't be fine.

When my colo contract runs out in a couple years, I may seriously consider it, especially since they're already talking about offering bigger bandwidth packages.

gethly 9/2/2025||
In the past, ISPs forbade this for home connections, but nowadays it looks like it should not be an issue for basic websites.
dang 8/31/2025||
Related ongoing thread:

How many HTTP requests/second can a single machine handle? (2024) - https://news.ycombinator.com/item?id=45085446 - Aug 2025 (32 comments)

rthnbgrredf 9/1/2025||
Bare-metal servers sound super cheap when you look at the price tag, and yeah, you get a lot of raw power for the money. But once you’re in an enterprise setup, the real cost isn’t the hardware at all, it’s the people needed to keep everything running.

If you go this route, you’ve got to build out your own stack for security, global delivery, databases, storage, orchestration, networking ... the whole deal. That means juggling a bunch of different tools, patching stuff, fixing breakage at 3 a.m., and scaling it all when things grow. Pretty soon you need way more engineers, and the “cheap” servers don’t feel so cheap anymore.

rollcat 9/1/2025||
A single, powerful box (or a couple, for redundancy) may still be the right choice, depending on your product / service. Renting is arguably the most approachable option: you're outsourcing the most tedious parts + you can upgrade to a newer generation whenever it becomes operationally viable. You can add bucket storage or CDN without dramatically altering your architecture.

Early Google rejected big iron and built fault tolerance on top of commodity hardware. WhatsApp used to run their global operation employing only 50 engineering staff. Facebook ran on Apache+PHP (they even served index.php as plain text on one occasion). You can build enormous value through simple means.

amluto 9/1/2025|||
If you use a cloud, you still need a solution for security (ever heard of "shared responsibility"?); global delivery (a big cloud will host you all over, and this requires extra effort on your part, kind of like how having multiple rented or owned servers requires extra effort); storage (okay, I admit that S3 et al are nice and that non-big-cloud solutions are a bit lacking in this department); orchestration (the cloud handles only the lowest level - you still need to orchestrate your stuff on top of it); fixing breakage at 3 a.m. (the cloud can swap you onto a new server, subject to availability; so can a provider like Hetzner - you still need to fail over to that server successfully); and patching stuff (other than firmware, the cloud does not help you here).
msgodel 9/1/2025||
I used to say "oh yeah, just run qemu-kvm" until my girlfriend moved in with me and I realized you do legitimately need some kind of infrastructure for managing your "internal cloud" if anyone involved isn't 100% on the same page - and then that starts to be its own thing you really do have to manage.

Suddenly I learned why my employer was willing to spend so much on OpenStack and Active Directory.
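
For anyone who hasn't hit this wall yet, a minimal sketch of what the "just run qemu-kvm" stage looks like - one hand-rolled command per VM, tracked by nothing but your own memory. The image path and port are made up:

    # Sketch: ad-hoc VM launch with qemu-kvm. Fine for one person,
    # unmanageable once anyone else needs VMs too. Paths/ports illustrative.
    import subprocess

    def start_vm(disk: str, mem_mb: int = 4096, cpus: int = 2, ssh_port: int = 2222):
        """Boot an existing qcow2 image with SSH forwarded to the host."""
        cmd = [
            "qemu-system-x86_64",
            "-enable-kvm",
            "-m", str(mem_mb),
            "-smp", str(cpus),
            "-drive", f"file={disk},format=qcow2,if=virtio",
            "-nic", f"user,hostfwd=tcp::{ssh_port}-:22",
            "-nographic",
        ]
        return subprocess.Popen(cmd)

    if __name__ == "__main__":
        start_vm("/var/lib/vms/dev.qcow2").wait()

Everything OpenStack (or even plain libvirt) adds is the bookkeeping around this: who owns which VM, image catalogs, IP/port allocation, quotas, and cleanup.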

ahdanggit 9/1/2025||
> until my girlfriend moved in with me

lol, why was this the defining moment? She wasn't too keen on hearing the high-pitched wwwwhhhhuuuuurrrrrrr of the server fans?

msgodel 9/1/2025||
She was another software engineer and needed VMs too so I thought I'd just let her use some of my spare compute.
alkonaut 8/31/2025||
Microservices vs not is (almost) orthogonal to N servers vs one. You can make 10 microservices and rent a huge server and run all 10 services. It's more an organizational thing than a deployment thing. You can't do the opposite though, make a monolith and spread it out on 10 servers.
marcosdumay 8/31/2025||
> You can't do the opposite though, make a monolith and spread it out on 10 servers.

You absolutely can, and it has been the most common practice for scaling them for decades.

alkonaut 9/1/2025||
That's just _duplicating_ the nodes horizontally, which wasn't what I meant.

That's obviously possible and common.

What I meant was actually butchering the monolith into separate pieces and deploying them separately, which is - by the definition of a monolith - impossible.

doganugurlu 9/1/2025||
What would be the point of actually butchering the monolith?

There is no limit or cost to deploying 10000 lines over 1000 lines.

alkonaut 9/1/2025|||
I meant in the sense of "machine A only manages authentication" and "machine B only manages orders".

If that’s possible (regardless of what was deployed to the two machines) then the app just isn’t a true monolith.

const_cast 8/31/2025||
> You can't do the opposite though, make a monolith and spread it out on 10 servers.

Yes you can. It's called having multiple application servers. They all run the same application, just more of them. Maybe they connect to the same DB, maybe not, maybe you shard the DB.

alkonaut 9/1/2025||
That’s obviously not what I meant. I meant running different aspects of the monolith on different servers.
lelanthran 9/1/2025|||
> I meant running different aspects of the monolith on different servers.

Of course you can. I've done it.

Identical binary on multiple servers with the load balancer/reverse proxy routing specific requests to specific instances.

The practical result is indeed "running different aspects of the monolith on different servers".
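
A minimal sketch of that routing idea, written in Python for illustration (in practice it would usually be an nginx/HAProxy config); the path prefixes and upstream ports are hypothetical:

    # Sketch: path-based routing in front of identical monolith instances.
    # /auth/* goes to one instance, /orders/* to another, the rest to a default.
    # (Error handling for non-2xx upstream responses is omitted.)
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPSTREAMS = {
        "/auth": "http://127.0.0.1:8001",    # instance reserved for auth traffic
        "/orders": "http://127.0.0.1:8002",  # instance reserved for orders
    }
    DEFAULT = "http://127.0.0.1:8000"

    class Router(BaseHTTPRequestHandler):
        def do_GET(self):
            base = next((u for prefix, u in UPSTREAMS.items()
                         if self.path.startswith(prefix)), DEFAULT)
            with urllib.request.urlopen(base + self.path) as resp:
                body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type", resp.headers.get("Content-Type", "text/plain"))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), Router).serve_forever()

Each upstream runs the exact same binary; only the traffic it sees differs, so hot paths can be scaled or isolated without splitting the codebase.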

sfn42 9/1/2025||||
That's not a problem in a well-designed ASP.NET project. Just create a new web API project, move a controller into it, copy/paste the necessary boilerplate into Program.cs, set up config etc. for it, and configure CI/CD to deploy it separately - there you go, less than a day's work.

You can also publish libraries as NuGet packages (privately if necessary) to share code across repos if you want the new app in its own repo.

I've worked on projects with multiple frontends, multiple backends, lots of separately deployed Azure Functions, etc. It's no problem at all to make significant structural changes as long as the code isn't a big ball of mud.

I always start with a monolith; we can easily make these changes when necessary. No point complicating things until you actually have a reason to.

PaulKeeble 8/31/2025||
I often wonder if my home NAS/server would be better off put onto a rented box or a cloud server somewhere, especially since I now have 1Gbit/s internet. Even now, 20TB of drive space and 6 cores with 32GB on a Hetzner dedicated server is about twice the price of buying the hardware over a 5-year period. I suspect the hardware will actually last longer than that, and it's the same level of redundancy (RAID) on a rented dedicated server, so the backup is the same cost between the two.

Using cloud and box storage on Hetzner is more expensive than the dedicated server - 4x owning the hardware and paying the power bill. AWS and Azure are just nuts, >100x the price, because they charge so much for storage even with hard drives. Neither Contabo nor Netcup can do this; it's too much storage for them.

Every time I look at this I come to the same basic conclusion: the overhead of renting someone else's machine is quite high compared to the hardware and power cost, and it would be a worse solution than having that performance on the local network in terms of bandwidth and latency. The problem isn't so much the compute performance, which is relatively fairly priced; it's the storage costs and data transfer that bite.

Not really what the article was necessarily about, but cloud is sort of meant to be good for low-end hardware and it's actually kind of not; the storage costs are just too high, even on a Hetzner storage box.

Nextgrid 8/31/2025||
It really depends on your power costs. In certain parts of Europe, power is so expensive that Hetzner actually works out cheaper (despite them providing you the entire machine and datacenter-grade internet connection).
benjiro 9/1/2025||
Trust me, even with 35 cent/kWh (Germany), it's easy to make it work. Just do not buy enterprise hardware. People are obsessed with running racks full of often-obsolete hardware that is not designed around energy efficiency.

Here is a fun one ...

https://www.reddit.com/r/selfhosted/comments/1dqq3h8/my_12x_...

Dude is running 12x AMD 6600HS with a power draw between 300 and 400W. The compute alone is easily 3x that of an equivalent Hetzner 48-core server. We shall not mention that this includes 768GB of memory (people underestimate how much high-capacity RDIMMs draw in power).

The main thing with Hetzner is: as long as you only use their base configuration servers, they are very competitive. The issue is, if you start to step even a little bit outside that line, the prices simply skyrocket. Try adding more storage or memory to some servers, or needing a higher interconnect between your servers (limited to 10Gbit).

Even basic consumer hardware comes with 2.5Gbit, yet Hetzner is in the stone age with 1Gbit. I remember the time when Hetzner introduced 1Gbit - Hetzner was innovation and progress. But that has been slowly vanishing; Hetzner has been getting more and more lazy. You see the issue also with their cloud offering's storage. Look at Netcup, even Strato, etc. They barely introduce anything new anymore, and when something does come, it's often less competitive or broken - the whole S3 offering costing Backblaze price levels and having non-stop issues.

You can tell they are the only company that ever pushed consumer-hardware hosting at mass scale, which made them a small monopoly in the market. And it shows if you're an old customer and know their history. Hey, do people remember the price increases on the auction hardware because of the Ukraine invasion? "Do not worry folks, when the electricity prices go down, we will adjust them down." Oh, we've been at pre-war prices for almost 2 years. Where are those promised price drops? Hehehe ...

Nextgrid 9/1/2025||
> Just do not buy enterprise hardware

But then you need to buy newer, more expensive hardware, which pushes your initial price up (divide by the amount of time you'll need to host the server to get the monthly equivalent, then add power/connectivity/maintenance and compare to Hetzner).

Btw, generally the reason homelabbers flock to legacy enterprise hardware is that it gives you a good amount of compute for a cheap price, if you don't mind the increased power cost. This is actually fine, as a lot of homelab usage is bursty and the machine can be powered off outside of those times.
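
Plugging illustrative numbers into that formula (every figure below is an assumption for the sake of the arithmetic, not a quote):

    # Back-of-the-envelope: monthly cost of owning consumer hardware vs renting.
    # All numbers are illustrative assumptions.
    hardware_eur = 900          # consumer box: CPU, 64GB RAM, NVMe
    lifetime_months = 48        # how long you expect to run it
    avg_watts = 30              # modern consumer hardware at light load
    power_eur_per_kwh = 0.35    # e.g. German residential rate
    connectivity_eur = 20       # share of a home/business fiber line
    maintenance_eur = 10        # disks, fans, the occasional part

    power_monthly = avg_watts / 1000 * 24 * 30 * power_eur_per_kwh   # ~7.56 EUR
    own_monthly = (hardware_eur / lifetime_months + power_monthly
                   + connectivity_eur + maintenance_eur)             # ~56 EUR

    rented_monthly = 55         # assumed price of a comparable rented dedicated box

    print(f"own:    {own_monthly:.2f} EUR/month")
    print(f"rented: {rented_monthly:.2f} EUR/month")

With those assumptions the two land in the same ballpark, which is why the answer swings on your actual power price and how long you keep the hardware.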

benjiro 9/1/2025||
Why do you need to buy more expensive hardware? The thing is, take an old Xeon server and compare its performance against modern consumer-level hardware.

Unless you bought a 50-64 core server (and the power bill to match), you're often way cheaper with consumer-level hardware. Older server hardware's advantage is more in the amount of total memory you can install or the number of PCIe lanes.

The cheapest enterprise CPUs (AMD, for example) are currently Zen 2; the moment you want Zen 3, the prices go up a lot for anything 32-core or higher.

I have seen so many homelabs running ancient hardware, only for their owners to realize that they can do the same or more on modern mini PCs or similar hardware, often at a fraction of the power draw.

The reason so many people loved to run enterprise hardware was that in the US you had electricity prices in the low single digits, or barely in the teens, of cents per kWh. When you're facing 35 cent/kWh prices, people tend to do a bit of research and find out it's not the 2010s anymore.

I ran multiple enterprise servers with 384GB of memory; they idled at 120W+ (and that was with temperature-controlled fans, because those drain a ton of power). Second PSU? There goes your idle, up to 150W+.

Ironically, just get a few mini PCs with the same memory capacity spread across them and you're doing 50W or less - the advantage of using laptop CPUs. I have had 8-core Zen 3 systems doing 4.5W at idle.

And yes, you can turn off enterprise hardware, but you can also put mini PCs to sleep. And they do not tend to sound like jet engines when waking up ;)

I have a Minisforum ITX board next to me with a 16-core Zen 4; it cost 380 Euro and idles at 17W. It beats any 32-core Zen 2 enterprise server. Even something like an AMD EPYC 7C13 (64 cores) will only be ~40% faster and still costs 600 Euro from China. It will do better on actual multithreaded workloads where you really can have tons of processes, but that's 400 bucks vs 600 plus 400 for a motherboard.

Just saying, enterprise hardware has its uses, like in enterprise environments, but for most people, especially homelabbers, it's often overkill.

bpye 8/31/2025||
I think I've settled on both being the answer - Hetzner is affordable enough that I can have a full backup of my NAS (using ZFS snapshots and incremental backups), and as a bonus I can host some services there instead of at home. My home network still has much lower latency and so is preferable for, e.g., my Lightroom library.
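
A minimal sketch of the incremental snapshot-shipping idea, assuming a local dataset, an SSH-reachable remote that can run zfs receive, and a previous snapshot that already exists on both ends; all names are placeholders:

    # Sketch: ship an incremental ZFS snapshot offsite over SSH.
    # Dataset, snapshot, and host names are illustrative.
    import subprocess
    from datetime import date

    DATASET = "tank/data"
    REMOTE = "backup@offsite-host"
    REMOTE_DATASET = "backup/data"

    prev = f"{DATASET}@2025-08-31"                 # last snapshot already replicated
    cur = f"{DATASET}@{date.today().isoformat()}"

    subprocess.run(["zfs", "snapshot", cur], check=True)

    # Equivalent to: zfs send -i <prev> <cur> | ssh <remote> zfs receive -F <dataset>
    send = subprocess.Popen(["zfs", "send", "-i", prev, cur], stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", REMOTE_DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()

Only the blocks that changed since the previous snapshot cross the wire, which is what makes a full offsite copy of a large NAS feasible over a home uplink.
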
johnklos 8/31/2025||
These days we have more meta-software than software. Instead of Apache with virtual hosts, we have a VM running Docker containers, each with an nginx of its own, all fronted by a separate nginx container acting as a proxy.

How much waste is there from all this meta-software?

In reality, I host more on Raspberry Pis with USB SSDs than some people host on hundred-plus watt Dells.

At the same time, people constantly compare colo and hardware costs with the cost per month of cloud and say cloud is "cheaper". I don't even bother to point out the broken thinking that leads to that. In reality, we can ignore gatekeepers and run things out of our homes, using VPSes for public IPs when our home ISPs won't allow certain services, and we can still have excellent uptimes, often better than cloud uptimes.

Yes, we can consolidate many, many services into one machine because most services aren't resource-heavy constantly.

Two machines on two different home ISP networks backing each other up can offer greater aggregate uptime than a single "enterprise" (a misnomer, if you ask me, if you're talking about most x86 vendors) server in colo. A single five-minute reboot of a Dell a year drops uptime from 100% to 99.999%.
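
A quick back-of-the-envelope check of those two claims; the big assumption in the second one is that the two home setups fail independently:

    # 1) One five-minute reboot per year vs. five nines.
    minutes_per_year = 365 * 24 * 60
    uptime = 1 - 5 / minutes_per_year
    print(f"single box, 5 min/yr down: {uptime:.5%}")      # ~99.99905%

    # 2) Two machines backing each other up, assumed independent.
    # Give each modest home setup 99.9% on its own (~8.8 h/yr down).
    a = b = 0.999
    combined = 1 - (1 - a) * (1 - b)
    print(f"two independent 99.9% boxes: {combined:.4%}")  # 99.9999%

The catch, of course, is the independence assumption and whatever failover glue sits between the two machines.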

Cloud is such bullshit that it's exhausting even just engaging with people who "but what if" everything, showing they've never even thought about it for more than a minute themselves.

joshmn 8/31/2025||
I did this (well, a large-r VPS for $120/month) for my Rails-based sports streaming website. I had a significant amount of throughput too, especially at peak (6-10pm ET).

My biggest takeaway was to have my core database tables (user, subscription, etc.) backed up every 10 minutes, and the rest every hour, and to test their restoration. (When I shut down the site it was 1.2TB.) A script to quickly provision a new node - in case I ever needed it - would have something up within 8 minutes of hitting enter.
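
A minimal sketch of that tiered-backup idea, assuming Postgres with pg_dump on the PATH; the database and table names are placeholders, not the site's actual schema:

    # Sketch: dump the critical tables often, the whole database less often.
    # Names are placeholders; drive it from cron or a systemd timer.
    import subprocess
    from datetime import datetime, timezone

    DB = "app_production"
    CORE_TABLES = ["users", "subscriptions"]       # hypothetical critical tables

    def dump(path, tables=None):
        """Custom-format dump (-Fc) so pg_restore can restore selectively."""
        cmd = ["pg_dump", "-Fc", "-f", path]
        for t in tables or []:
            cmd += ["-t", t]                       # restrict the dump to these tables
        cmd.append(DB)
        subprocess.run(cmd, check=True)

    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

    # Every 10 minutes: core tables only. Every hour: everything.
    dump(f"/var/backups/core-{stamp}.dump", CORE_TABLES)
    # dump(f"/var/backups/full-{stamp}.dump")

    # And the part people skip: actually exercise the restore, e.g.
    #   pg_restore --dbname=restore_test /var/backups/core-<stamp>.dump

The restore test is the half that matters; an untested backup is just a hope.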

When I compare this to the startups I've consulted for, who choose k8s because it's what Google uses, yet only push out 1000s of database queries per day with a handful of background jobs, and still try to optimize burn, I shake my head.

I’d do it again. Like many of us I don’t have the need for higher-complexity setups. When I did need to scale, I just added more vCPUs and RAM.

vevoe 8/31/2025|
Is there somewhere I can read more about your setup/experience with your streaming site? I currently run a (legal :) streaming site but have it hosted on AWS and have been exploring moving everything over to a big server. At this point it just seems like more work to move it than to just pay the cloud tax.
joshmn 8/31/2025||
Do a search for HeheStreams on your favorite search engine.

The technical bits aren't all there, and there's a plethora of noise and misinformation. Happy to talk via email though.

vevoe 8/31/2025||
Will do, thank you!
uhura 9/1/2025||
I've been having those discussions with friends for the last 3 or 4 years. The downside of having local infra is pretty much needing someone who has the experience to do it right. While this article covered the higher end, the math on the lower end tends to work out at about 1 year of ownership, depending on what you already have, because you will probably already have a small rack and some networking gear.

My main concern is that at the current cloud premium rates, I will be better off even if I need to hire someone specifically for managing the local infra.

simonw 8/31/2025|
This was written in 2022, but it looks like it's mostly still relevant today. Would be interesting to see updated numbers on the expected costs of various hosting providers.