So well put, my good sir, this describes exactly my feelings with k8s. It always starts off all good with just managing a couple of containers to run your web app. Then before you know it, the devops folks have decided that they need to put a gazillion other services and an entire software-defined networking layer on top of it.
After spending a lot of time "optimizing" or "hardening" the cluster, cloud spend has doubled or tripled. Incidents have also doubled or tripled, as has downtime. Debugging effort has doubled or tripled as well.
I ended up saying goodbye to those devops folks, nuked the cluster, booted up a single VM with Debian, enabled the firewall, and used Kamal to deploy the app with Docker. Despite having only a single VM rather than a cluster, things have never been more stable and reliable from an infrastructure point of view. Costs have plummeted as well; it's so much cheaper to run. It's also so much easier and more fun to debug.
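For anyone curious, the Kamal workflow is roughly this (a minimal sketch; your service name, image, and server IP go into the generated config):

    gem install kamal
    kamal init      # generates config/deploy.yml for you to fill in
    kamal setup     # installs Docker on the host and does the first deploy
    kamal deploy    # subsequent deploys, cycling containers behind a health check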
And yes, a single VM really is fine; you can get REALLY big VMs, which is plenty for most business applications like ours. Most business applications only have hundreds to thousands of users. The cloud provider (Google in our case) manages hardware failures. If we need to upgrade with downtime, we spin up a second VM next to it, provision it, and update the IP address in Cloudflare. No need for a load balancer.
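That cutover can even be scripted. A rough sketch against the Cloudflare DNS API (the zone/record IDs, hostname, and IP here are placeholders; lower the record TTL ahead of time):

    curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
      -H "Authorization: Bearer $CF_API_TOKEN" \
      -H "Content-Type: application/json" \
      --data '{"type":"A","name":"app.example.com","content":"203.0.113.42","ttl":60,"proxied":true}'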
People use Kubernetes for way too small things, and it sounds like you don't have the scale for actually running Kubernetes.
My app is a fairly simple Node process with some sidecar worker processes. k8s enables me to deploy it 30 times for 30 PRs, trivially, in a standard way, with standard cleanup.
Can I do that without k8s? Yes. To the same standard with the same amount of effort? Probably not. Here, I'd argue the k8s APIs and interfaces are better than trying to do this on AWS (or your preferred cloud provider).
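The per-PR pattern above is basically namespace-per-PR. A rough sketch, assuming a Helm chart and hypothetical names:

    # one namespace per PR; cleanup = delete the namespace
    PR=123
    helm upgrade --install "myapp-pr-$PR" ./chart \
      --namespace "pr-$PR" --create-namespace \
      --set image.tag="pr-$PR"
    # when the PR closes:
    helm uninstall "myapp-pr-$PR" --namespace "pr-$PR"
    kubectl delete namespace "pr-$PR"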
Where things get complicated is that k8s itself is borderline cloud-provider software. So teams who were previously good at using a managed service are now owning more of the stack, and these random devops heroes aren't necessarily making good decisions everywhere.
So you really have three obvious use cases:
a) You're doing something interesting with the k8s APIs that isn't easy to do on a cloud provider. Essentially, you're a power user.

b) You want a cloud abstraction layer because you're multi-cloud or you want a lock-in bargaining chip.

c) You want cloud semantics without being on a cloud provider.
However, if you're a single developer with a single machine, or a very small team that's happy working through contended static environments, you can pretty much just put a process on a box and call it done. k8s is overkill here, though not as much as people claim, until the devops heroes start their work.
So having everyone use the same deployment model (and that's typically k8s) saves effort. I don't like it, for sure.
This is certainly the case in all the third-person accounts I hear online. I've never actually met a single one like that; if anything, those same people are the first to tell me about their Hetzner setups.
The trouble is that we are literally expected to do this everywhere we go. I've personally advocated for approaches which use say, a pair of dedicated servers, or VMs as in GPs example. If you want it outside of AWS/GCP/Azure, you're regarded as a crazy person. If you don't adopt "best practices" (as defined by vendors) then management are scared. Management very often trust the sales and marketing departments of big vendors more than their own staff. Many of us have given up fighting this, because what it comes down to is a massive asymmetry of information and trust.
The challenge is convincing people that "golden images" and containers share a history, and that kubernetes didn't invent containers: they just solved load balancing and storage abstraction for stateless message architectures in a nice way.
If you're doing something highly stateful, or that requires a heavy deployment (game servers are typically tens of GB with rich dynamic configuration, in my experience), then Kubernetes starts to become round-peg-square-hole. But people buy into it because the surrounding tooling is just so nice; and like GP says: those cloud sales guys are really good at their jobs, and Kubernetes is so difficult to run reliably yourself that it gets you hooked on cloud.
There's a literal army of highly charismatic, charming people who are economically incentivised to push this technology, and it can be made to work, so the odds, as they say, are against you.
I think this is the crux of the matter. "Everybody is doing it, so they must be right" is also a very common way of thinking amongst this population.
Around the time of the pandemic, a company wanted to make some Javascript code do a kind of transformation over a large number of web pages (a billion or so, fetched as WARC files from the web archive). Their engineers suggested setting up SmartOS VMs and deploying Manta (which would have allowed the use of the Javascript code in a totally unmodified way -- map-reduce from the command line, scaling with the number of storage/processing nodes), which should have taken a few weeks at most.
After a bit of googling and meeting, the higher ups decided to use AWS Lambdas and Google Cloud Functions, because that's what everyone else was doing, and they figured that this was a sensible business move because the job-market must be full of people who know how to modify/maintain Lambda/GCF code.
Needless to say, Lambda/GCF were not built for this kind of workload, and they could not scale. In fact, the workload was so out-of-distribution that the GCP folks moved the instances (if you can call them that) to a completely different data center, because the workload was causing performance problems for _other_ customers in the original data center.
Once it became clear that this approach could not scale to a billion or so web pages, it was decided -- no, not to deploy Manta or an equivalent -- to build a custom "pipeline" from scratch that would do this. This system was in development for 6 months or so, and never really worked correctly/reliably.
This is the kind of thing that happens when non-engineers can override or veto engineering decisions -- and the only reason they can do that is because the non-engineers sign the paychecks (it does not matter how big the paycheck is, because the market will find a way to extract all of it).
One of the fallacies of the tech industry (I do not mean to paint with too broad a brush; there are obviously companies out there that know what they are doing) is the idea that there are trade-offs to be made between business decisions and engineering decisions. I think this is more a kind of psychological distortion or a false choice (forcing an engineering decision on the basis of what the job market will be like some day in the future -- during a pandemic, no less -- is practically delusional). Also, if such trade-offs are true trade-offs, then maybe the company is not really an engineering company (which is fine, but that is kind of like a shoe store having a few podiatrists on staff -- it is wasteful, but they can now walk around in white lab coats and pretend to be a healthcare institution instead of a shoe store).
Personally, I believe that the tech industry sustains itself via technical debt, much like the real economy sustains itself on real debt. In some sense, everyone is trying to gaslight everyone else into incurring as much technical debt as possible, so that a way to service the debt can be sold. Most of the technical debt is not necessary, and if people were empowered to just not incur it, I suspect it would orient tech companies towards making things that actually push the state of the art forward.
They're getting kickbacks from cloud vendors. Prove me wrong.
But yeah, let's just spin up a shadow IT VM with Debian like GP said, it's easy!
That’s literally how they sold AWS in the beginning.
Cloud won not because of costs or flexibility but because it allowed teams to provision their own machines from their budget instead of going through all the red tape with their IT departments, creating… a bunch of shadow IT VMs!
Everything old is new again, except it works on an accelerated ten year cycle in the IT industry.
If you know Kubernetes, you know not to use it. I say that as someone who used to do consulting for it.
The reality is that yet again "making money" completely collides with efficient, quality, sane productive work.
For me, one of the main reasons to leave that space was that I couldn't really deal with the fact that my work collided with a client's success. That said, I have helped companies get off that stuff, and off other things they thought they needed, that just wasted time and money. It just feels odd going into a company that hired you to consult on a topic only to end up telling them "the best approach for you is not doing that at all". Often, people thought "well, what if we have hundreds of thousands or even millions of users", and the reality was that even in those scenarios, if you moved away from the abstract thought and discussed a hypothetical based on their actual product, they realized they'd still be better off without it. Besides, that hypothetical usually sat so far in the future that they admitted they'd likely have a completely different setup by then, so preparing for it didn't even make sense.
I think a big thing related to that was/is the microservice craze, where people move to a complex architecture for not many good reasons and then increase complexity way faster than what they actually deliver in terms of product, because it somehow feels good. I know it does; I've been there. When in reality the outcome is often just a complex mess of what could have been a relatively simple monolith. And these monoliths do work. And in the vast majority of cases they are easy to scale, because your problem switches from "how do we best allocate that huge number of very different services across our infrastructure" to (for the most part) "how do we spin up our monolith on one more server", which tends to be a much easier problem to tackle.
And nothing stops you from still using everything else if you want. Just because it's a monolith doesn't mean you need to skip any of the cloud offerings, etc. For some reason there seems to be this idea that if you write a monolith you are somehow barred from using modern tooling, infrastructure, services, etc. Not sure where that comes from.
I'm not surprised even in the slightest that DevOps workers will slap k8s on everything, to show "real industry experience" in a job market where the resume matches the tools.
I mean, I worked with people who were surprised that you can run more applications inside an EC2 VM than just 1 app.
To be fair though, that's true for every profession or skill.
> I mean, I worked with people who were surprised that you can run more applications inside an EC2 VM than just 1 app.
I've seen something similar, where people were surprised that you can use object storage (so, effectively, "make HTTP requests") from any server.
But if its use was confined to this use case, pretty much nobody would be using it (except as a customer of some organization's infra), and people would barely be talking about it (much like there isn't much talk about Borg).
The reason k8s is a thing in the first place is because it's being used by way too many people for their own good. (Most of us who have worked in startups have met too many architecture astronauts in our lives.)
If I had to bet, I'd wager that 99% of k8s users are in the "spin up a few containers to run your web app" category (for the simple reason that for every billion-dollar tech business using it for legit reasons, there are many thousands of early startups who do not).
Teams are free to use EKS internally.
But to quote someone: "you are not Google".
This is why you get many folks over-thinking the solution, picking the most hyped technologies, and using them to solve the wrong problems without thinking about what they are selling.
You don't need K8s + AWS EC2 + S3 just to host a web app. That tells me they like lighting money on fire, bankrupting the company, and moving on to the next one.
Maybe those devops folks only pay attention to k8s clusters and you're flying under their radar with your single Debian VM + Kamal. But the same thinking that results in an overly complex, impossible-to-debug, expensive-to-run k8s cluster can absolutely result in the same with regular VMs unless, again, you are just left to your own devices because their policies don't apply to VMs, yet.
The problem usually is you're one mistake away from someone shoving their nose in it. "What are you doing again? What about HA and redundancy? Slow rollout and rollback? You must have at least 3 VMs (ideally 5) and can't expose all VMs to the internet, of course. You must define a virtual network with policies that we can control, and no, WireGuard isn't approved. You must split the internet-facing load balancer from the backend resources and assign different identities with proper scoping to them. Install these 4 different security scanners, these 2 log processors, this watchdog and this network monitor. Are you doing mTLS between the VMs on the private network? What if an attacker gains access to your network? What if your proxy is compromised? Do you have visibility into all traffic on the network? Everything must flow through this appliance."
The irony is that "DevOps" was supposed to be a culture and a set of practices, not a job title. The tools that came with it (=Kubernetes) turned out to be so complex that most developers didn't want to deal with them and the DevOps became a siloed role that the movement was trying to eliminate.
That's why I have an ick when someone uses DevOps as a job title. Just say "System Admin" or "Infrastructure Engineer". Admit that you failed to eliminate the silos.
I am primarily a backend developer but I do a lot of ops / infra work because nobody else wants to do it. I stay as far away from k8s as possible.
Scale vertically until you can't; you're unlikely to hit a limit, and if you do, you'll have enough money to pay someone else to solve it.
Docker is amazing development tooling but it makes for horrible production infrastructure.
Docker Compose is good for running things on a single server as well (see the sketch below).
Docker Swarm and Hashicorp Nomad are good for multi-server setups.
Kubernetes is... enterprise and I guess there's a scale where it makes sense. K3s and similar sort of fill the gap, but I guess it's a matter of what you know and prefer at that point.
Throw Portainer on a server and the DX is pretty casual (when it works and doesn't have weird networking issues).
Of course, there's also other options for OCI containers, like Podman.
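To make the single-server Compose option above concrete, a minimal sketch (the image name and ports are made up):

    cat > compose.yml <<'EOF'
    services:
      web:
        image: ghcr.io/me/myapp:latest   # hypothetical image
        ports:
          - "80:3000"
        restart: unless-stopped
    EOF
    docker compose up -d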
And I'm building and happily using Uncloud (https://github.com/psviderski/uncloud) for this (inspired by Kamal). It makes multi-machine setups as simple as a single VM: it creates a zero-config WireGuard overlay network and uses the standard Docker Compose spec to deploy to multiple VMs. There is no orchestrator or control-plane complexity. Start with one VM, then add another when needed; you can even mix cloud VMs and on-prem.
Radboud University recently announced they're rolling it out for managing containers across the faculty, which is the most "serious install" I know about, but there could be others: https://cncz.science.ru.nl/en/news/2026-04-15_uncloud/
https://uncloud.run/docs/getting-started/install-cli/#instal...
All of this just adds so much extra complexity. If I'm running Amazon.com then sure, but your average app is just fine on a single VM.
FROM scratch
COPY my-static-binary /my-static-binary
ENTRYPOINT ["/my-static-binary"]
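And the usage, for completeness (the tag and port mapping are arbitrary; this assumes my-static-binary is a fully static build):

    docker build -t my-static-app .
    docker run --rm -p 8080:8080 my-static-app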
Having multiple processes inside one container is a bit of an anti-pattern imo
There are situations where a single VM, no matter how powerful it is, just can't do the job.
If you have an actual need to deploy a few dozen services all talking to each other, k8s isn't a bad way to do it. It has its problems, but it allows your devs to mostly self-service their infrastructure needs vs. having to file a ticket for each VM and firewall rule they need. I'm saying that from the perspective of having migrated from "the old way" to a 14-node actual-hardware k8s cluster.
It does make debugging harder, as you pretty much need a central logging solution, but at that scale you want a central logging solution anyway, so it isn't a big jump, and developers like it.
The main problem with k8s is frankly nothing technical, just the "ooh shiny" problem developers have, where they see tech and want to use tech regardless of anything.
I'm not familiar with kubernetes, but doesn't it already do SDN out of the box?
Yes and no. Kubernetes defines a specification for network behavior (in the form of CNI), but it contains no actual implementation. You have to install a network plugin basically as the first setup step.
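For example, on a bare kubeadm cluster you'd apply a CNI plugin's manifest before nodes go Ready. A sketch using Flannel (check the project's docs for the current manifest URL):

    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
    kubectl get nodes   # nodes move from NotReady to Ready once the CNI is up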
I use k3s/Rancher with Ansible and dedicated VMs on various providers. Flannel with WireGuard connects them all together.
This, I think, is a reasonable solution, as the main problem with cloud providers is that they are just price gouging.
Most companies aren't "web scale"™ and don't need an orchestrator built for Google-level elasticity; they need a VM autoscaling group, if anything.
Most apps don't need such granular control over fs access, network policies, root access, etc.; they need `ufw allow 80 && ufw enable`.
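(One caveat to that one-liner: allow SSH before enabling, or you'll lock yourself out of a remote box. A minimal sketch, assuming Ubuntu/Debian's ufw app profiles:)

    ufw allow OpenSSH    # don't skip this on a remote box
    ufw allow 80/tcp
    ufw allow 443/tcp
    ufw enable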
Most apps don't need a 15-stage, docker-layer-caching-optimized, archive-promotion build pipeline that takes 30 minutes to get a copy change shipped to prod; they need `git clone me@github.com:me/mine.git release_01 && ln -s release_01 /var/www/me/mine/current`.
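The releases-plus-symlink pattern spelled out (same hypothetical paths and repo as above); rollback is just repointing the symlink:

    git clone me@github.com:me/mine.git /var/www/me/mine/releases/release_02
    ln -sfn /var/www/me/mine/releases/release_02 /var/www/me/mine/current
    # rollback:
    ln -sfn /var/www/me/mine/releases/release_01 /var/www/me/mine/current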
This is coming from someone who has had roles both as a backend product engineer and as a devops/platform engineer, who has been around long enough to remember when "deploy" to prod meant Eclipse FTPing PHP files straight to the prod server on file save. I manage clusters for a living for companies that went full k8s and never should have gone full k8s. ECS would have worked for 99% of these apps, if they even needed that.
Just like the JS ecosystem went batshit insane until things started to swing back towards sanity and people started to trim the needless bloat, the same is coming, or overdue, for the overcomplexity of devops/backend deployments.
Do you pair it with some orchestration (to spin up the necessary VM)?
It's obvious to you, me, and the other 2 presumably techie people who've responded within 15 mins that you shouldn't have been using Kubernetes. But you probably work in a company full of techie people, who ended up using Kubernetes.
We have HN, an environment full of techie people here who immediately recognise not to use k8s in 99% of cases, yet in actually paid professional environments, in 99% of cases, the same techie people will tolerate, support and converge on the idea they should use k8s.
I feel like there's an element of the emperor's new clothes here.
That is not what kube is designed for.
As a devops/cloud engineer coming from a pure sysadmin background (you've got a cluster of n machines running RHEL and that's it) i feel this.
The issues i see however are of different nature:
1. résumé-driven development (people get a higher-paying job if they have the buzzwords in their CV)
2. a general lack of core-linux skills. people don't actually understand how linux and kubernetes work, so they can't build the things they need, so they install off-the-shelf products that do 1000 things including the single one they need.
3. marketing, trendy stuff and FOMO... that tell you that you absolutely can't live without product X or that you must absolutely be doing Y
to give you an example of 3: fluxcd/argocd. they're large and clunky, and we're getting pushed to adopt that for managing the services that we run inside the cluster (not developer workloads, but mostly-static stuff like the LGTM stack and a few more things - core services, basically). they're messy, they add another layer of complexity, other software to run and troubleshoot, more cognitive load.
i'm pushing back on that, and frankly for our needs i'm fairly sure we're better off using terraform to manage kubernetes stuff via the kubernetes and helm provider. i've done some tests and frankly it works beautifully.
it's also the same tool we use to manage infrastructure, so we get to reuse a lot of skills we already have.
also it's fairly easy to inspect... I'm doing some tests using https://pkg.go.dev/github.com/hashicorp/hcl/v2/hclparse and i'm building some internal tooling to do static analysis of our terraform code and automated refactoring.
i still think kubernetes is worth the hassle, though (i mostly run EKS, which by the way has been working very well for me)
> Traditional Cloud 1.0 companies sell you a VM with a default of 3000 IOPS, while your laptop has 500k. Getting the defaults right (and the cost of those defaults right) requires careful thinking through the stack.
I wish them a lot of luck! I admire the vision and am definitely a target customer, I'm just afraid this goes the way things always go: start with great ideals, but as success grows, so must profit.
Cloud vendor pricing often isn't based on cost. Some services they lose money on, others they profit heavily from. These things are often carefully chosen: the type of costs that only go up when customers are heavily committed—bandwidth, NAT gateway, etc.
But I'm fairly certain OP knows this.
Using fio:

Hetzner (CX23, 2 vCPU, 4 GB): ~3900 IOPS (read/write), ~15.3 MB/s, avg latency ~2.1 ms, 99.9th percentile ≈ 5 ms, max ≈ 7 ms

DigitalOcean (SFO1, 2 GB RAM, 30 GB disk): ~3900 IOPS (same!), ~15.7 MB/s (same!), avg latency ~2.1 ms (same!), 99.9th percentile ≈ 18 ms, max ≈ 85 ms (!!)

Using sequential dd:

Hetzner: 1.9 GB/s. DO: 850 MB/s.

These are the low-end plans on both, but the Hetzner is 4 euro and the DO instance is $18.
RS 1000 G12: AMD EPYC™ 9645, 8 GB DDR5 RAM (ECC), 4 dedicated cores, 256 GB NVMe.

Costs 12,79 €.
Results with the following command:
    fio --name=randreadwrite \
        --filename=testfile \
        --size=5G \
        --bs=4k \
        --rw=randrw \
        --rwmixread=70 \
        --iodepth=32 \
        --ioengine=libaio \
        --direct=1 \
        --numjobs=4 \
        --runtime=60 \
        --time_based \
        --group_reporting
IOPS: read 70.1k, write 30.1k (~100k total)

Throughput: read 274 MiB/s, write 117 MiB/s

Latency: read avg 1.66 ms, P99.9 2.61 ms, max 5.644 ms; write avg 0.39 ms, P99.9 2.97 ms, max 15.307 ms
IOPS: read 325k, write 139k
Throughput: read 1271MB/s, write 545MB/s
Latency: read avg 0.3ms, P99.9 2.7ms, max 20ms; write: 0.14ms, P99.9 0.35ms max 3.3ms
So roughly 100 times the IOPS and throughput of the cloud VMs.
Using a Netcup VPS 1000 G12 is more comparable.
read: IOPS=18.7k, BW=73.1MiB/s
write: IOPS=8053, BW=31.5MiB/s
Latency Read avg: 5.39 ms, P99.9: 85.4 ms, max 482.6 ms
Write avg: 3.36 ms, P99.9: 86.5 ms, max 488.7 ms
Here are some "Regular Performance" shared resource stats
Hetzner CPX11 (Ashburn, 2 CPUs, 2GB, 5.49€ or $6.99/month before VAT)
read: IOPS=36.7k, BW=144MiB/s, avg/p99.9/max 2.4/6.1/19.5ms
write: IOPS=15.8k, BW=61.7MiB/s, avg/p99.9/max 2.4/6.1/18.7ms
Hetzner CPX22 (Helsinki, 2 CPUs, 4GB, 7.99€ or $9.49/month before VAT)
read: IOPS=48.2k, BW=188MiB/s, avg/p99.9/max 1.9/5.7/10.8ms
write: IOPS=20.7k, BW=80.8MiB/s, avg/p99.9/max 1.8/5.8/10.9ms
Hetzner CPX32 (Helsinki, 4 CPUs, 8GB, 13.99€ or $16.49/month before VAT)
read: IOPS=48.3k, BW=189MiB/s, avg/p99.9/max 1.9/6.2/36.1ms
write: IOPS=20.7k, BW=81.0MiB/s, avg/p99.9/max 1.8/6.3/36.1ms
If that's true, I wonder if this is a deliberate decision by cloud providers to push users towards microservice architectures with proprietary cloud storage like S3, so you can't do on-machine dbs even for simple servers.
Instead they make the default "meager IOPS" and then charge more to the people who need more.
Edit: I posted this before reading, and these two are the same ones he points out.
And yes, IO typically happens in 4 KB blocks, so you need a decent amount of IOPS to get the full bandwidth.
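You can sanity-check that against the CPX22 numbers above:

    48,200 read IOPS × 4 KiB/op = 48,200 × 4,096 B/s ≈ 197 MB/s ≈ 188 MiB/s

which matches the reported 188 MiB/s read bandwidth exactly.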
Business 101 teaches us that pricing isn't based on cost. Call it top-down vs bottom-up pricing, but the first-principles approach of "it costs me $X to make a widget, so I sell it for $Y = 1.y * $X" is not how pricing works in practice.
The price is what the customer will pay, regardless of your costs.
For example I calculated the cost of a solar install to be approximately: Material + Labour + Generous overhead + Very tidy profit = 10,000€
In practice I keep getting offers for ~14,000€, which will be reduced to 10,000€ with a government subsidy and my request for an itemized invoice is always met with radio silence.
Which it won't be, if at every turn you choose the hyperscaler.
It kinda is, but obscured by GP's formula.
More simply; if it costs you $X to produce a product and the market is willing to pay $Y (which has no relation to $X), why would you price it as a function of $X?
If it costs me $10 to make a widget and the market is happy to pay $100, why would I base my pricing on $10 * 1.$MARGIN?
But that is an equilibrium result, and famously does not apply to monopolies, where elasticity of substitution will determine the premium over the rental rate of capital.
There is already so much software out there which isn't used by anyone. Just take a look at any app store. I don't understand why we are so obsessed with cranking out even more, whereas the obvious use case for LLMs should be to write better software. Let's hope the focus shifts from code generation to something else. There are many ways LLMs can assist in writing better code.
I believe right now we are still in the phase of “how can AI help engineers write better software”, but are slowly shifting to “how can engineers help AI write better software.” This will bring in a new herd of engineers with completely different views on what software is, and how to best go about building computer interactions.
Vibe coding or LLM accelerated development is going to turn this on its head. Everyone will be able to afford custom software to fit their specific needs and preferences. Where Salesforce currently has 150,000 customers, imagine 150,000 customers all using their own customised CRM. The scope for software expansion is unbelievably large right now.
I honestly think this is ideal. Video games aside, I think one day we'll look back and realize just how insane it was that we built software for millions or even billions of users to use. People can now finally build the software that does exactly what they've wanted their software to do without competing priorities and misaligned revenue models working against them. One could argue this kind of software, by definition, is higher quality.
I could see maybe more customization of said software, but not totally fresh. I do agree that people will invent more one-off throwaway software, though.
Jevons paradox would be if, despite software becoming cheaper to produce, the total spend on producing software increased because the increase in production outruns the savings.
Jevons paradox applies when demand is very elastic, i.e. small changes in price cause large changes in quantity demanded. It's a property of the market.
My view is actually the opposite. Software should now be cattle, not pets. We should use one-offs. We should use micro-scale snippets. Speaking natural language should be equivalent to programming. (I know, it's a bit of a pipe dream.)
In that sense, exe.dev (and Tailscale) are a bit like pet-driven projects.
As for the average quality: it’s unclear.
My intuition is that agents lift up the floor to some degree, but at the same time will lead to more software being produced that’s of mediocre quality, with outliers of higher quality emerging at a higher rate than before.
If you're doing anything complicated, Excel just doesn't make sense anymore. It'll still be the data exchange format (at least, something more advanced than CSV), but it's no longer the only frontend.
"No one uses" is no longer the insult it once was. I don't need or want to make software for every last person on the world to use. I have a very very small list of users (aka me) that I serve very well with most of the software that I generate these days outside of work.
It certainly is for lots of businesses, otherwise they go out of business.
There is something called 'revenue' which they need to make from customers which are their 'users', and that revenue pays for the 'operating costs' which includes payroll, office rent, infrastructure etc.
This just means it is more important than ever to know what to build, just as much as how it is built. It is unrealistic for a business to disregard that, build anything they want, and end up with zero users.
No users, No revenue. No revenue, No business.
If you wanted to, you could make an argument about the principal-agent problem: that as hunter-gatherers or subsistence farmers, our quality versus quantity decisions only affected us, whereas in a market economy, one person's quality versus quantity decision affects someone else.
But dismantling capitalism will not solve this problem. It just moves the decision-making to a different group of people. Those people will face the same trade-offs and the same incentives. After the Revolution, even the most loyal comrade will have to contend with the fact that they can choose to provide the honourable working class with more of a thing if they drop the quality.
I agree there is opportunity in making LLM development flows smooth, paired with the flexibility of root-on-a-Linux-machine.
> Time and again I have said “this is the one” only to be betrayed by some half-assed, half-implemented, or half-thought-through abstraction. No thank you.
The irony is that this is my experience of Tailscale.
Finally, networking made easy. Oh god, why is my battery doing so poorly. Oh god, it's modified my firewall rules in a way that's incompatible with some other tool, and the bug tracker is silent. Now I have to understand their implementation, oh dear.
No thank you.
I hope this wasn't interpreted towards exe.dev. That really is a cool service!
Tags permanently erase the user identity from a device and disable things like Taildrop. When I tried to assign a tag for ACLs, I found that I could not then remove it, and had to endure a very laborious process to re-register a Tailscale device that I had added for the express purpose of remote access.
Could you rephrase that / elaborate on that? Isn't Tailscale's selling point precisely that they do identity-based networking?
EDIT: Never mind, now I see the sibling comment to which you also responded – I should have reloaded the page. Let's continue there!
Everything cloud companies provide just costs so much. My own Postgres, running in an HA setup with backups, has cost me 1/10th the price of an RDS or CloudSQL service, running in production over 10 years with no downtime.
I directly autoscale instances off of the metrics harvested from Grafana, and it works fine for us; we have the autoscaler configured via webhooks. Very simple, and it has never failed us.
I don't know why I would ever use GCP or AWS anymore.
All my services are fully HA, and backups work like a charm every day.
Does a regular 20-something software engineer still know how to turn some eBay servers & routers into a platform for hosting a high-traffic web application? Because that is still a thing you can do! (I did it last year to build a 50PiB+ data store.) I'm genuinely curious how popular it is for medium-to-big projects.
And Hetzner gives you almost all of that economic upside while taking away much of the physical hassle! Why are they not kings of the hosting world, rather than turning over a modest €367M (2021)?
I find it hard to believe that the knowledge to manage a bunch of dedicated servers is that arcane that people wouldn't choose it for this kind of gigantic saving.
Managing servers is fine. Managing servers well is hard for the average person. Many hand-rolled hosting setups I've encountered include fun gems such as:
- undocumented config drift.
- one unit of availability (downtime required for offline upgrades, resizing or maintenance)
- very out of date OS/libraries (usually due to the first two issues)
- generally awful security configurations. The easiest configuration being open ports for SSH and/or database connections, which probably have passwords (if they didn't you'd immediately be pwned)
Cloud architecture might be annoying and complex for many use cases, but if you've ever been the person who had to pick up someone else's "pet" and start making changes, or just maintain it, you'll know why it can be nice to have cloud arch put some constraints on how infra is provisioned, and be willing to pay for it.
Whether or not cloud is viable for a company is very individual. It's very hard to pinpoint a size or a use case that always makes cloud the "correct" choice.
But I came across Mythic Beasts (https://www.mythic-beasts.com/) yesterday, similar idea, UK based. Not used them yet but made the account for the next VPS.
OP is not saying they push new versions at such a high frequency they need checks every one minute.
The choice of one minute vs 15 minutes is an implementation detail and, when architected like this, costs nothing.
I hope that helps. Again this is my own take.
It is like 4 lines of config for Postgres; the only line you need to change is the path where Postgres should store the data.
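A minimal sketch of that on a Debian-ish box, assuming a volume mounted at /mnt/data (paths and Postgres version are hypothetical):

    sudo systemctl stop postgresql
    sudo rsync -a /var/lib/postgresql/ /mnt/data/postgresql/
    # then point data_directory at the new location in postgresql.conf:
    #   data_directory = '/mnt/data/postgresql/16/main'
    sudo systemctl start postgresql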
Maybe change the filesystem?
You can use block storage if data matters to you.
Many services do not need to care about data reliability or can use multiple nodes, network storage or many other HA setups.
But there is a middle ground in the form of a VPS, where the hardware is managed by the provider. It's still way, way cheaper than some cloud magic service.
I am sure it's luck, but we have a few Hetzner VPSes in both German locations, and in the last 5 years, afaik, they've never been down. On our HTTP monitoring service they show 100s of days of uptime, broken only when we restarted them ourselves.
An employee is going to cost anywhere between 8k and 50k per month. Hiring an employee to save 200/month on servers by using a shitty VPS provider is not saving you any money.
I ended up buying a cheap auctioned Hetzner server and using my self-hostable Firecracker orchestrator on top of it (https://github.com/sahil-shubham/bhatti, https://bhatti.sh) specifically because I wanted the thing he’s describing — buy some hardware, carve it into as many VMs as I want, and not think about provisioning or their lifecycle. Idle VMs snapshot to disk and free all RAM automatically. The hardware is mine, the VMs are disposable, and idle costs nothing.
The thing that, although obvious, surprised me most is that once you have memory-state snapshots, everything becomes resumable. I make a browser sandbox, get Chromium to a logged-in state, snapshot it, and resume copies of that session on demand. My agents work inside sandboxes, I run docker compose in them for preview environments, and when nothing’s active the server is basically idle. One $100/month box does all of it.
Out of interest, what sandboxing solution do you use?
`ssh you/repo/branch@box.clawk.work` → jump directly into Claude Code (or Codex) with your repo cloned and credentials injected. Firecracker VMs, 19€/mo.
POC, please be kind.
At 19€/mo, are you subsidizing it, given the sharp rise of LLM costs lately?

Or are you heavily restricting model access? Surely there is no Opus?
Not sure we can move away from CPU/memory/IO budgeting towards total metal saturation, because code isn't what it used to be: no one handles malloc failure any more, we just crash OOM.
The key point is the partner companies. Almost nobody is actually running their own clouds the way they would with various 365 products, AWS or Azure. They buy the cloud from partners, similar to how they used to (and still do) buy solutions from Microsoft partners. So if you want to "sell cloud" you're probably going to struggle unless you get some of these onboard. Which again would probably be hard because I imagine a lot of what they sell is sort of a package which basically runs on VM's setup as part of the package that they already have.
International visitors might tell us more about the benefits of companies without an EU, US, or UK nexus, legally and rights-wise.