
Posted by pavel_lishin 12/20/2025

Go ahead, self-host Postgres (pierce.dev)
674 points | 396 comments
mittermayr 12/20/2025|
Self-hosting is more a question of responsibility, I'd say. I run a couple of SaaS products and self-host them, at much better performance and a fraction of the cost of running this on AWS. It's amazing and it works perfectly fine.

For client projects, however, I always try and sell them on paying the AWS fees, simply because it shifts the responsibility of the hardware being "up" to someone else. It does not inherently solve the downtime problem, but it allows me to say, "we'll have to wait until they've sorted this out, Ikea and Disney are down, too."

Doesn't always work like that and isn't always a tried-and-true excuse, but generally lets me sleep much better at night.

With limited budgets, however, it's hard to accept the cost of RDS (and we're talking at least one staging environment as well) when comparing it to a very tight 3-node Galera cluster running on Hetzner at barely a couple of bucks a month.

Or Cloudflare, the titan out front, being down again today and intermittently over the past two days, after also being down a few weeks ago and earlier this year as well. I also had SQS queues time out several times this week; they picked up again shortly after, but it's not like those things ...never happen on managed environments. They happen quite a bit.

mattmanser 12/20/2025||
Over 20 years I've had lots of clients on self-hosted setups, even self-hosting SQL on the same VM as the webserver, as you used to do in the long distant past for low-usage web apps.

I have never, ever, ever had a SQL box go down. I've had a web server go down once. I had someone who probably shouldn't have had access to a server accidentally turn one off once.

The only major outage I've had (2-3 hours) was when the box was also self-hosting an email server and I accidentally caused it to flood itself with failed delivery notices with a deploy.

I may have cried a little in frustration and panic but it got fixed in the end.

I actually find using cloud hosted SQL in some ways harder and more complicated because it's such a confusing mess of cost and what you're actually getting. The only big complication is setting up backups, and that's a one-off task.

paulryanrogers 12/20/2025||
Disks go bad. RAID is nontrivial to set up. Hetzner had a big DC outage that led to data loss.

Off site backups or replication would help, though not always trivial to fail over.

alemanek 12/20/2025|||
As someone who has set this up while not being a DBA or sysadmin:

Replication and backups really aren't that difficult to set up properly with something like Postgres. You can also expose metrics around this and set up alerting if replication lag goes beyond a threshold you set or a backup didn't complete. You do need to periodically test your backups, but that is also just good practice.
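
For reference, a minimal sketch of what that lag alerting can look like (connection string, threshold, and the alerting hook are made up; assumes psycopg2):

    import psycopg2  # hypothetical choice of driver

    LAG_THRESHOLD_SECONDS = 30  # pick whatever threshold suits you

    conn = psycopg2.connect("dbname=app host=replica.internal")  # hypothetical DSN
    with conn.cursor() as cur:
        # On a streaming replica this reports how far behind the primary we are, in seconds.
        cur.execute("""
            SELECT COALESCE(EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()), 0)
        """)
        lag = cur.fetchone()[0]

    if lag > LAG_THRESHOLD_SECONDS:
        print(f"ALERT: replication lag is {lag:.0f}s")  # wire this into whatever alerting you already have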

I am not saying something like RDS doesn't have value, but you are paying a huge premium for it. Once you get to a more steady state, owning your database totally makes sense. A cluster of $10-20 VPSes with NVMe drives can get really good performance and will take you a lot farther than you might expect.

tormeh 12/21/2025|||
I think the pricing of the big three is absurd, so I'm on your side in principle. However, it's the steady state that worries me: when the box has been running for 4 years and nobody who works there has any (recent) experience operating Postgres anymore. That shit makes me nervous.
ldng 12/21/2025||||
More than that, it's easier than it ever was to set up, but we live in the post-truth world where nobody wants to own their shit (both figuratively and literally) ...
andersmurphy 12/20/2025||||
Even easier with sqlite thanks to litestream.
westurner 12/21/2025||
datasette and datasette-lite (WASM w/pyodide) are web UIs for SQLite with sqlite-utils.

For read only applications, it's possible to host datasette-lite and the SQLite database as static files on a redundant CDN. Datasette-lite + URL redirect API + litestream would probably work well, maybe with read-write; though also electric-sql has a sync engine (with optional partial replication) too, and there's PGlite (Postgres in WebAssembly)

bg24 12/21/2025|||
Yes. Also you can have these replicas of Postgres across regions.
mattmanser 12/20/2025||||
So can the cloud, and cloud has had more major outages in the last 3 months than I've seen on self-hosted in 20 years.

Deploys these days take minutes so what's the problem if a disk does go bad? You lose at most a day of data if you go with the 'standard' overnight backups, and if it's mission critical, you will have already set up replicas, which again is pretty trivial and only slightly more complicated than doing it on cloud hosts.
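
For what it's worth, a minimal sketch of what "setting up a replica" amounts to on a self-hosted box (host name, data directory, role, and the Debian-style pg_ctlcluster call are assumptions, not a recipe):

    import subprocess

    PRIMARY = "10.0.0.1"                      # hypothetical primary address
    PGDATA = "/var/lib/postgresql/18/main"    # hypothetical data directory on the standby

    # Assumed already done on the primary: a replication role plus a pg_hba.conf entry for it
    # (CREATE ROLE replicator REPLICATION LOGIN PASSWORD '...').

    # On the standby: take a base backup. -R writes standby.signal and primary_conninfo,
    # so the node comes up as a streaming replica.
    subprocess.run([
        "pg_basebackup",
        "-h", PRIMARY,
        "-U", "replicator",
        "-D", PGDATA,
        "-R",             # write recovery configuration
        "-X", "stream",   # stream WAL while the backup runs
        "--checkpoint=fast",
    ], check=True)

    # Start the standby; it connects to the primary and begins replaying WAL.
    subprocess.run(["pg_ctlcluster", "18", "main", "start"], check=True)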

paulryanrogers 12/21/2025||
> ...you will have already set up replicas, which again is pretty trivial and only slightly more complicated than doing it on cloud hosts.

Even on PostgreSQL 18 I wouldn't describe self-hosted replication as "pretty trivial". On RDS you can get an HA replica (or cluster) by clicking a radio button.

fabian2k 12/20/2025||||
For this kind of small scale setup, a reasonable backup strategy is all you need for that. The one critical part is that you actually verify your backups are done and work.

Hardware doesn't fail that often. A single server will easily run many years without any issues, if you are not unlucky. And many smaller setups can tolerate the downtime to rent a new server or VM and restore from backup.
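
A minimal sketch of that verification step (table name, paths, and the scratch database are hypothetical; assumes the standard Postgres client tools on the box):

    import subprocess

    DUMP_PATH = "/backups/latest.dump"   # hypothetical path to the newest pg_dump -Fc file
    SCRATCH_DB = "restore_check"         # throwaway database used only for verification

    subprocess.run(["dropdb", "--if-exists", SCRATCH_DB], check=True)
    subprocess.run(["createdb", SCRATCH_DB], check=True)
    subprocess.run(["pg_restore", "--no-owner", "-d", SCRATCH_DB, DUMP_PATH], check=True)

    # Sanity check: make sure the restored data is recent, not an empty or stale dump.
    # "orders" is a stand-in for whatever table you expect to see fresh rows in.
    out = subprocess.run(
        ["psql", "-tA", "-d", SCRATCH_DB, "-c",
         "SELECT count(*) FROM orders WHERE created_at > now() - interval '2 days'"],
        capture_output=True, text=True, check=True,
    )
    assert int(out.stdout.strip()) > 0, "backup looks stale -- page someone"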

j45 12/20/2025||||
Not as often as you might think. Hardware doesn’t fail like it used to.

Hardware also monitors itself reasonably well because the hosting providers use it.

It’s trivial to run mirrored containers on two separate Proxmox nodes because hosting providers use the same kind of stuff.

Offsite backups and replication? Also point and click and trivial with tools like Proxmox.

RAID is actually trivial to set up if you don’t compare it to doing it manually yourself from the command line. Again, tools like Proxmox make it point and click, plus 5 minutes of watching a YouTube video.

If we want to find a solution, our brain will find it. If we don’t, we can find reasons not to.

tempest_ 12/20/2025||
> if you don’t compare it to doing it manually yourself

Even if you do, ZFS makes this pretty trivial as well.

j45 12/23/2025||
Ah, ZFS. Really underrated, and unfortunate about the unrelated history around it; the tech is quite solid.
mcny 12/21/2025||||
One thing that will always stick in my mind is one time I worked at a national Internet service provider.

The log disk was full or something. That's not the shameful part though. What followed was a mass email saying everyone needed to update their connection string from bla bla bla 1 dot foo dot bar to bla bla bla 2 dot foo dot bar.

This was inexcusable to me. I mean this is an Internet service provider. If we can't even figure out DNS, we should shut down the whole business and go home.

PunchyHamster 12/21/2025||||
They do, and it isn't; cloud providers also go bad.

> Off site backups or replication would help, though not always trivial to fail over.

You want those regardless of where you host

znpy 12/21/2025|||
> RAID is nontrivial to set up.

Skill issue?

It's not 2003; modern volume-managing filesystems (e.g. ZFS) make creating and managing RAID trivial.
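
To put a picture on "trivial", a minimal sketch of creating and watching a ZFS mirror (device and pool names are hypothetical):

    import subprocess

    # Create a two-disk mirror named "tank"; any single-disk failure is survivable.
    subprocess.run(["zpool", "create", "tank", "mirror", "/dev/sda", "/dev/sdb"], check=True)

    # Periodic health check: "zpool status -x" only complains about pools with problems
    # and prints an "is healthy" message otherwise.
    health = subprocess.run(["zpool", "status", "-x", "tank"], capture_output=True, text=True)
    if "is healthy" not in health.stdout:
        print(f"ALERT: {health.stdout.strip()}")  # hook into whatever alerting you already have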

arwhatever 12/20/2025|||
Me: “Why are we switching from NoNameCMS to Salesforce?”

Savvy Manager: “NoNameCMS often won’t take our support calls, but if Salesforce goes down it’s in the WSJ the next day.”

dilyevsky 12/20/2025|||
This ignores the case when BigVendor is down for your account and your account only and support is MIA, which is not that uncommon IME.
yunwal 12/21/2025|||
It doesn’t ignore that case, it simply allows them to shift blame whereas the no name vendor does not.
zelphirkalt 12/21/2025|||
So in the end it's not better for the users at all, it's just for non-technical people to shift blame. Great "business reasoning".
WJW 12/21/2025|||
Nobody in this thread ever claimed it was better for the users. It's better for the people involved in the decision.
zelphirkalt 12/21/2025|||
Yes, you are correct. But actually, I am not claiming someone claimed it :) What I am trying to get at is the idea the "business people" usually bring up: that they are looking after the user's/customer's interest and that others don't have the "business mind", while in fact, when it comes to this kind of decision making, all of that goes out the window, because they want to shift the blame.

Taking a few more steps back, most of the services we use are not so essential that we cannot bear them being down for a couple of hours over the course of a year. We have seen that over and over again with the Cloudflare and AWS outages. The world continues to revolve. If we were a bit more reasonable with our expectations and realistic about required uptime guarantees, there wouldn't be much worry about something being down every now and then, and we wouldn't need to worry about our livelihood if we need to reboot a customer's database server once a year, or about their impression of the quality of the system we built if such a thing happens.

But even that is unlikely if we set things up properly. I have worked at a company where we self-hosted our platform, and it didn't have the most complex fail-safe setup ever. Just have good backups and make sure you can restore them, and 95% of the worries go away for such non-essential products; our outages were less frequent than the trouble with AWS or Cloudflare.

It seems that either way, you need people who know what they are doing, whether you self-host or buy some service.

growse 12/21/2025||||
And this speaks to the lack of alignment between what's good for the decision makers vs. what's good for the customer.
PunchyHamster 12/21/2025|||
It's not though: they have workers they're paying who aren't making money, all while footing a bigger bill for the "pleasure".
notKilgoreTrout 12/21/2025||
That's more a small-business-owner perspective. For a middle manager, rattling some cages during a week of IBM downtime is adequate performance, while it is unclear how much performative response is necessary if the mom & pop vendor is down for a day.
oconnor663 12/21/2025||||
You have to consider the class of problems as a whole, from the perspective of management:

- The cheap solution would be equally good, and it's just a blame shifting game.

- The cheap solution is worse, and paying more for the name brand gets you more reliability.

There are many situations that fall into the second category, and anyone running a business probably has personal memories of making the second mistake. The problem is, if you're not up to speed on the nitty gritty technical details of a tradeoff, you can't tell the difference between the first category and the second. So you accept that sometimes you will over-spend for "no reason" as a cost of doing business. (But the reason is that information and trust don't come for free.)

dilyevsky 12/21/2025||||
This excuse only works for one or maybe two such outages in most orgs
nwallin 12/21/2025|||
> non-technical people

It's also better for the technical people. If you self-host and the DB goes down at 2am on a Sunday morning, all the technical people are gonna get woken up and they will be working on it until it's fixed.

If us-east goes down a technical person will be woken up, they'll check downdetector.com, and they'll say "us-east is down, nothin' we can do" and go back to sleep.

ajmurmann 12/21/2025|||
"Nobody has ever been fired for buying IBM"
psychoslave 12/21/2025||
https://www.forbes.com/sites/duenablomstrom1/2018/11/30/nobo...
balex 12/23/2025|||
JFrog, is that you?
TheNewsIsHere 12/21/2025|||
Just wait until you end up spending $100,000 for an awful implementation from a partner who pretends to understand your business needs but delivers something that doesn’t work.

But perhaps I’m bitter from prior Salesforce experiences.

madeofpalk 12/20/2025|||
> but it allows me to say, "we'll have to wait until they've sorted this out, Ikea and Disney are down, too."

From my experience your client’s clients don’t care about this when they’re still otherwise up.

tjwebbnorfolk 12/20/2025|||
Yes but the fact that it's "not their fault" keeps the person from getting fired.

Don't underestimate the power of CYA

api 12/20/2025|||
This is a major reason the cloud commands such a premium. It’s a way to make downtime someone else’s problem.

The other factor is eliminating the “one guy who knows X” problem in IT. What happens if that person leaves or you have to let them go? But with managed infrastructure there’s a pool of people who know how to write Terraform or click buttons and manage it, and those people are more interchangeable than someone’s DIY deployment. Worst case, the cloud provider might sell you premium support and help. Might be expensive, but you’re not down.

Lastly, there’s been an exodus of talent from IT. The problem is that anyone really good can become a coder and make more. So finding IT people at a reasonable cost who know how to really troubleshoot and root cause stuff and engineer good systems is very hard. The good ones command more of a programmer salary which makes the gap with cloud costs much smaller. Might as well just go managed cloud.

pdimitar 12/21/2025|||
I never understood the argument that a senior IT person's salary competes with the cloud expenses. In my contracting and consulting career I have done all of programming, monitoring and DevOps many times; the cost of my contract is amortized over multiple activities.

The way you present it makes sense of course. But I have to wonder whether there really are such clear demarcation lines between responsibilities. At least over the course of my career this was very rarely the case.

01HNNWZ0MV43FF 12/20/2025||||
That is called "bus factor" or "lottery factor". If the one IT guy gets hit by a bus or wins the lottery and quits, what happens? You want a bus factor of two or more: two people would have to get hit by a bus for the company to have a big problem.
growse 12/21/2025||
There's a bus factor equivalent with the cloud, too. The power to severely disrupt your service (either accidentally, or on purpose) rests with a single org (and often, a single compliance department within that org).

Ironically, this becomes more of a concern the larger the supplier. AWS can live with firing any one of their customers - a smaller outfit probably couldn't.

6LLvveMx2koXfwn 12/21/2025|||
Surely 'the other factor' is no factor at all as IaC can target on-prem just as easily as cloud?
TheNewsIsHere 12/21/2025||
Many people do inaccurately equate IaC with “cloud native” or cloud “only”.

It can certainly fit into a particular cloud platform’s offerings. But it’s by no means exclusive to the cloud.

My entire stack can be picked up and redeployed anywhere where I can run Ubuntu or Debian. My “most external” dependencies are domain name registries and an S3-API compatible object store, and even that one is technically optional, if given a few days of lead time.

HPsquared 12/20/2025|||
That's real microeconomics.
blitz_skull 12/21/2025|||
From my experience, this completely absolves you of an otherwise reputation-damaging experience.
vb-8448 12/20/2025|||
You can still outsource up to the VM level and handle everything else on your own.

Obviously it depends on the operational overhead of the specific technology.

bossyTeacher 12/20/2025|||
> Self-hosting is more a question of responsibility I'd say. I am running a couple of SaaS products and self-host at much better performance at a fraction of the cost of running this on AWS

It is. You need to answer the question: what are the consequences of your service being down for, let's say, 4 hours, or of a security patch not being properly applied, or of not having followed security best practices? Many people are technically unable, or lack the time or resources, to confidently address that question, hence paying for someone else to do it.

Your time is money though. You are saving money but giving up time.

Like everything, it is always cheaper to do it (it being cooking at home, cleaning your home, fixing your own car, etc) yourself (if you don't include the cost of your own time doing the service you normally pay someone else for).

bigstrat2003 12/20/2025|||
> Like everything, it is always cheaper to do it (it being cooking at home, cleaning your home, fixing your own car, etc) yourself (if you don't include the cost of your own time doing the service you normally pay someone else for).

In a business context the "time is money" thing actually makes sense, because there's a reasonable likelihood that the business can put the time to a more profitable use in some other way. But in a personal context it makes no sense at all. Realistically, the time I spend cooking or cleaning was not going to earn me a dime no matter what else I did, therefore the opportunity cost is zero. And this is true for almost everyone out there.

_superposition_ 12/20/2025||
Lol this made me laugh, there's a reasonable likelihood that time will be filled with meetings.
bigstrat2003 12/21/2025||
Heh, true. Although in fairness I said the business can repurpose the time to make money, not that they will. I'm splitting hairs, but it seems in keeping with the ethos here. ;)
PunchyHamster 12/21/2025||||
You can pay someone else to manage your hardware stack; there are literal companies that will just keep it running while you deploy your apps on it.

> It is. You need to answer the question: what are the consequences of your service being down for, let's say, 4 hours, or of a security patch not being properly applied, or of not having followed security best practices?

There is one advantage a self-hosted setup has here: if you set up a VPN, only your employees have access, and you can have the server not accessible from the internet at all. So even in the case of a zero day that WILL make a SaaS company leak your data, you can be safe(r) with a self-hosted solution.

> Your time is money though. You are saving money but giving up time.

The investment compounds. Setting up infra to run a single container for some app takes time, and there is a good chance it won't pay for itself.

But the 2nd service? Cheaper. The 5th? At that point you've probably automated it enough that it's just pointing it at a Docker container and tweaking a few settings.

> Like everything, it is always cheaper to do it (it being cooking at home, cleaning your home, fixing your own car, etc) yourself (if you don't include the cost of your own time doing the service you normally pay someone else for).

It's cheaper even if you include your own time. You pay a technical person at your company to do it. A SaaS company does that too, then pays sales and PR people to sell it, then pays taxes on it, and then it also needs to "pay" investors.

Yeah, making a service for 4 people in a company can be more work than just paying $10/mo to a SaaS company. But 20? 50? 100? It quickly gets to the point where self-hosting (whether actually "self", or by using dedicated servers, or by using cloud) actually pays off.

jbverschoor 12/20/2025|||
Yea I agree.. better outsource product development, management, and everything else too by that narrative
nemothekid 12/20/2025|||
Unironically - I agree. You should be outsourcing things that aren't your core competency. I think many people on this forum have a certain pride about doing this manually, but to me it wouldn't make sense in any other context.

Could you imagine accountants arguing that you shouldn't use a service like Paychex or Gusto and just run payroll manually? After all it's cheaper! Just spend a week tracking taxes, benefits and signing checks.

Self-hosting, to me, doesn't make sense unless you are 1) doing something not offered by the cloud or have a pathological use case, 2) running a hobby project, or 3) in maintenance mode on the product. Otherwise your time is better spent on your core product - and if it isn't, you probably aren't busy enough. If your RDS cluster is so expensive relative to your traffic, you probably aren't charging enough or your business economics really don't make sense.

I've managed large database clusters (MySQL, Cassandra) on bare metal hardware in managed colo in the past. I'm well aware of the performance that's being left on the table and what the cost difference is. For the vast majority of businesses, optimizing for self hosting doesn't make sense, especially if you don't have PMF. For a company like 37signals, sure, product velocity probably is very high, and you have engineering cycles to spare. But if you aren't profitable, self hosting won't make you profitable, and your time is better spent elsewhere.

belorn 12/20/2025|||
You can outsource everything, but outsourcing critical parts of the company may also put the existence of the company in the hands of a third party. Is that an acceptable risk?

Control and risk management cost money, be that by self hosting or contracts. At some point it is cheaper to buy the competence and make it part of the company rather than outsource it.

nemothekid 12/20/2025|||
I think you and I simply disagree about your database being a core/critical part of your stack. I believe RDS is good enough for most people, and the only advantage you would have in self hosting is shaving 33% off your instance bill. I'd probably go a step further and argue that Neon/CockroachDB Serverless is good enough for most people.
dolmen 12/21/2025||
Access control to your (customer's) data may also be a concern that rules out managed services like RDS.
nemothekid 12/21/2025||
I'm not sure what is meaningfully different about RDS that wouldn't rule out the cloud in general if that was a concern.
solatic 12/21/2025|||
I'm totally with you on the core vs. context question, but you're missing the nuance here.

Postgres operations are part of the core of the business. It's not a payroll management service where you should comparison shop once the contract comes up for renewal and haggle on price. Once Postgres is the database for your core systems of record, you are not switching away from it. The closest analog is how difficult it is/was for anybody who built a business on top of an Oracle database to switch away from Oracle. But Postgres is free ^_^

The question at heart here is whether the host for Postgres is context or core. There are a lot of vendors for Postgres hosting: AWS RDS and CrunchyData and PlanetScale etc. And if you make a conscious choice to outsource this bit of context, you should be signing yearly-ish contracts with support agreements and re-evaluating every year and haggling on price. If your business works on top of a small database with not-intense access needs, and can handle downtime or maintenance windows sometimes, there's a really good argument for treating it that way.

But there's also an argument that your Postgres host is core to your business as well, because if your Postgres host screws up, your customers feel it, and it can affect your bottom line. If your Postgres host doesn't react in time to your quick need for scaling, or if tuning Postgres settings (that a Postgres host refuses to expose) could make a material impact on either customer experience or the financial bottom line, that is indeed core to your business. That simply isn't a factor when picking a payroll processor.

nemothekid 12/21/2025||
Ignoring the assumption that you will automatically have as good or better uptime than a cloud provider, I just feel like you simply aren't being thoughtful enough with the comparison. In what world is payroll not as important as your DBMS - if you can't pay people, you don't have a business!

If your payroll processor screws up and you can't pay your employees or contractors, that can also affect your bottom line. This isn't a hypothetical - this is a real thing that happened to companies that used Rippling.

If your payroll processor screws up and you end up owing tens of thousands to ex-employees because they didn't accrue vacation days correctly, that can squeeze your business. These are real things I've seen happen.

Despite these real issues that have jammed up businesses before, rarely do people suggest moving payroll in-house. Many companies treat payroll like cloud, with no need for multi-year contracts; Gusto lets you sign up monthly with a credit card, and you can easily switch to Rippling or Paychex.

What I imagine is you are innately aware of how a DBMS can screw up, but not how complex payroll can get. So in your world view payroll is a solved problem to be outsourced, but DBMS is not.

To me, the question isn't whether or not my cloud provider is going to have perfect uptime. The assumption that you will achieve better uptime and operations than cloud is pure hubris; it's certainly possible, but there is nothing inherent about self-hosting that makes it more resilient. The question is whether your use case is differentiated enough that something like RDS doesn't make sense. If it's not, your time is better spent focused on your business - not on setting up dead man's switches to ensure your database backup cron is running.
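
(For what it's worth, the dead man's switch being waved off here is small: the backup job pings a monitoring URL only on success, and the monitor alerts if the ping stops arriving. A sketch, with a hypothetical script and URL:)

    import subprocess
    import urllib.request

    # Run the (hypothetical) backup script; check=True raises if it fails,
    # so the ping below is only sent on success.
    subprocess.run(["/usr/local/bin/backup.sh"], check=True)
    urllib.request.urlopen("https://hc.example.com/ping/backup-prod", timeout=10)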

solatic 12/22/2025||
> Like in what world is payroll not as important as your DBMS - if you can't pay people you don't have a business!

Most employees, contractors, and vendors are surprisingly forgiving of one-time screw-ups. Hell, even the employees who are most likely to care the most about a safe, reliable paycheck - those who work for the US federal government - weren't paid during the recent shutdown, and not for the first time, and still there wasn't some massive wave of resignations across the civil service. If your payroll processor screws up that badly, you fire them and switch processors.

If your DBMS isn't working, your SaaS isn't working. Your SLA starts getting fucked and your largest customers are using that SLA as a reason to stop payments. Your revenue is fucked.

Don't get me wrong, having working payroll is pretty important. But it's not actually critical the way the DBMS is, and if it was, then yeah you'd see more companies run it in-house.

nemothekid 12/23/2025||
>Most employees, contractors, and vendors are surprisingly forgiving of one-time screw-ups.

If you are a new business that isn't true. Your comparison to the US federal government is not apt at all - the USG is one of the longest-running, most stable organizations in the country; people will have plenty of patience for the USG, but they won't have it for your incorporated-last-month business.

Secondly, I could make the same argument for AWS. AWS has plenty of downtime - way more than the USG has shutdowns - and there has never been a massive wave of customers off of AWS.

Finally, as a small business, if your payroll gets fucked, your largest assets will use that to walk out the door! The second you miss payroll is the second your employees start seeing the writing on the wall; it's very hard to recover morale after that. Imagine being Uber and not paying drivers on time; they will simply drive more often for a competitor.

That said, I still see the parallels with the hypothetical "Accountant forums". The subject matter experts believe their shiny toy is the most critical to the business and the other parts aren't. Replace "US federal government" with "Amazon Web Services", and you will have your "Accountant forums" poster arguing why payroll should be done in house and SLA doesn't matter.

zbentley 12/20/2025|||
That’s pretty reductive. By that logic the opposite extreme is just as true: if using managed services is just as bad as outsourcing everything else, then a business shouldn’t rent real estate either—every business should build and own their own facility. They should also never contract out janitorial work, nor should they retain outside law firms—they should hire and staff those departments internally, every time, no nuance allowed.

You see the issue?

Like, I’m all for not procuring things that it makes more sense to own/build (and I know most businesses have piss-poor instincts on which is which—hell, I work for the government! I can see firsthand the consequences of outsourcing decision making to contractors, rather than just outsourcing implementation).

But it’s very case-by-case. There’s no general rule like “always prefer self hosting” or “always rent real estate, never buy” that applies broadly enough to be useful.

gopher_space 12/20/2025|||
I'll be reductive in conversations like this just to help push the pendulum back a little. The prevailing attitude seems (to me) like people find self-hosting mystical and occult, yet there's never been a better time to do it.

> But it’s very case-by-case. There’s no general rule like “always prefer self hosting” or “always rent real estate, never buy” that applies broadly enough to be useful.

I don't know if anyone remembers that irritating "geek code" thing we were doing a while back, but coming up with some kind of shorthand for whatever context we're talking about would be useful.

zbentley 12/20/2025||
No argument here, that’s a fair and thoughtful response, and you’re not wrong regarding the prejudice against self-hosting (and for what it’s worth I absolutely come from the era where that was the default approach, have done it extensively, like it, and still do it/recommend it when it makes sense).

> "geek code" thing we were doing a while back

Not sure what you’re referring to. “Shibboleet”, perhaps? https://xkcd.com/806/

gopher_space 12/20/2025||
> The Geek Code, developed in 1993, is a series of letters and symbols used by self-described "geeks" to inform fellow geeks about their personality, appearance, interests, skills, and opinions. The idea is that everything that makes a geek individual can be encoded in a compact format which only other geeks can read. This is deemed to be efficient in some sufficiently geeky manner.

https://en.wikipedia.org/wiki/Geek_Code

foo42 12/21/2025||
geek code is worthy of its own hn submission
jama211 12/20/2025|||
So well said. I like the technique of taking their logic and turning it around; never seen that before, but it’s smart.
antihipocrat 12/21/2025||
In my experience it only ends well on the Internet and with philosophically inclined friends.
jama211 12/21/2025||
Anything ending well on the internet is like a mythical unicorn though
Thaxll 12/20/2025||
That argument does not hold when there is AWS serverless Postgres available, which costs almost nothing for low traffic and is vastly superior to self-hosting regarding observability, security, integration, backups, etc...

There is no reason to self-manage Postgres for dev environments.

https://aws.amazon.com/rds/aurora/serverless/

starttoaster 12/20/2025|||
"which cost almost nothing for low traffic" you invented the retort "what about high traffic" within your own message. I don't even necessarily mean user traffic either. But if you constantly have to sync new records over (as could be the case in any kind of timeseries use-case) the internal traffic could rack up costs quickly.

"vastly superior to self hosting regarding observability" I'd suggest looking into the cnpg operator for Postgres on Kubernetes. The builtin metrics and official dashboard is vastly superior to what I get from Cloudwatch for my RDS clusters. And the backup mechanism using Barman for database snapshots and WAL backups is vastly superior to AWS DMS or AWS's disk snapshots which aren't portable to a system outside of AWS if you care about avoiding vendor lock-in.

jread 12/20/2025||||
This was true for Aurora Serverless v1, which scaled to 0 but is no longer offered. V2 requires a minimum 0.5 ACU hourly commitment ($40+/mo).
cobolcomesback 12/20/2025||
V2 scales to zero as of last year.

https://aws.amazon.com/blogs/database/introducing-scaling-to...

It only scales down after a period of inactivity though - it’s not pay-per-request like other serverless offerings. DSQL looks to be more cost effective for small projects if you can deal with the deviations from Postgres.

jread 12/20/2025||
Ah, good to know, I hadn't seen that V2 update. Looks like a min 5m inactivity to auto-pause (i.e., scale to 0), and any connection attempt (valid or not) resumes the DB.
maccard 12/20/2025||||
Aurora serverless requires provisioned compute - it’s about $40/mo last time I checked.
snovv_crash 12/21/2025||
The performance disparity is just insane.

Right now from Hetzner you can get a dedicated server with a 6c/12t Ryzen 3600, 64GB RAM and 2x512GB NVMe SSD for €37/mo.

Even if you just served files from disc, no RAM, that could give 200k small files per second.

From RAM, and with 6 dedicated cores, network will saturate long before you hit compute limits on any reasonably efficient web framework.

gonzo41 12/21/2025|||
Just use a Postgres container on a VM; cheap as chips, and you can do anything to 'em.
molf 12/20/2025||
> I'd argue self-hosting is the right choice for basically everyone, with the few exceptions at both ends of the extreme:

> If you're just starting out in software & want to get something working quickly with vibe coding, it's easier to treat Postgres as just another remote API that you can call from your single deployed app

> If you're a really big company and are reaching the scale where you need trained database engineers to just work on your stack, you might get economies of scale by just outsourcing that work to a cloud company that has guaranteed talent in that area. The second full freight salaries come into play, outsourcing looks a bit cheaper.

This is funny. I'd argue the exact opposite. I would self host only:

* if I were on a tight budget and trading an hour or two of my time for a cost saving of a hundred dollars or so would be a good deal; or

* at a company that has reached the scale where employing engineers to manage self-hosted databases is more cost effective than outsourcing.

I have nothing against self-hosting PostgreSQL. Do whatever you prefer. But to me outsourcing this to cloud providers seems entirely reasonable for small and medium-sized businesses. According to the author's article, self hosting costs you between 30 and 120 minutes per month (after setup, and if you already know what to do). It's easy to do the math...

Nextgrid 12/20/2025||
> employing engineers to manage self-hosted databases is more cost effective than outsourcing

Every company out there is using the cloud and yet still employs infrastructure engineers to deal with its complexity. The "cloud" reducing staff costs is and was always a lie.

PaaS platforms (Heroku, Render, Railway) can legitimately be operated by your average dev without having to hire a dedicated person; those cost even more, though.

Another limitation of both the cloud and PaaS is that they are only responsible for the infrastructure/services you use; they will not touch your application at all. Can your application automatically recover from a slow/intermittent network, a DB failover (that you can't even test because your cloud provider's failover and failure modes are a black box), and so on? Otherwise you're waking up at 3am no matter what.
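
As a concrete example, this is the kind of reconnect logic the application itself has to carry no matter who hosts the database (DSN, timeouts, and retry counts are made up; assumes psycopg2):

    import time
    import psycopg2

    DSN = "host=db.internal dbname=app user=app"  # hypothetical connection string

    def run_with_retry(sql, params=(), attempts=5):
        delay = 0.5
        for attempt in range(attempts):
            try:
                conn = psycopg2.connect(DSN, connect_timeout=3)
                try:
                    with conn, conn.cursor() as cur:  # commits (or rolls back) the transaction
                        cur.execute(sql, params)
                        return cur.fetchall()
                finally:
                    conn.close()
            except psycopg2.OperationalError:
                # The primary is failing over or the network blipped: back off and retry.
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)
                delay *= 2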

molf 12/20/2025|||
> Every company out there is using the cloud and yet still employs infrastructure engineers

Every company beyond a particular size, surely? For many small and medium-sized companies, hiring an infrastructure team makes just as little sense as hiring kitchen staff to make lunch.

spwa4 12/20/2025|||
For small companies things like vercel, supabase, firebase, ... wipe the floor with Amazon RDS.

For medium sized companies you need "devops engineers". And in all honesty, more than you'd need sysadmins for the same deployment.

For large companies, they split up AWS responsibilities into entire departments of teams (for example, all clouds have made auth so damn difficult that most large companies have not one but multiple departments just dealing with authorization before you so much as start your first app).

add-sub-mul-div 12/20/2025||||
You're paying people to do the role either way; if it's not dedicated staff, then it's taking time away from your application developers so they can play the role of underqualified architects, sysadmins, and security engineers.
scott_w 12/20/2025|||
From experience (because I used to do this), it’s a lot less time than a self-hosted solution, once you’re factoring in the multiple services that need to be maintained.
pinkgolem 12/20/2025||
As someone who has done both... I disagree. I find self-hosting, to a degree, much easier and much less complex.

Local reproducibility is easier, and performance is often much better

scott_w 12/20/2025||
It depends entirely on your use case. If all you need is a DB and Python/PHP/Node server behind Nginx then you can get away with that for a long time. Once you throw in a task runner, emails, queue systems, blob storage, user-uploaded content, etc. you can start running beyond your own ability or time to fix the inevitable problems.

As I pointed out above, you may be better served mixing and matching so you spend your time on the critical aspects but offload those other tasks to someone else.

Of course, I’m not sitting at your computer so I can’t tell you what’s right for you.

pinkgolem 12/20/2025||
I mean, fair, we are of course offloading some of that... email being one of those, LLMs being another thing.

Task runner/queue: at least for us, Postgres works for both cases.
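
For the curious, the Postgres-as-queue pattern amounts to roughly this (a hypothetical "jobs" table; assumes psycopg2). Workers claim one row at a time with SKIP LOCKED, so they never block each other:

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN

    def claim_and_run_one_job():
        with conn:  # one transaction per job: the row lock is held until commit
            with conn.cursor() as cur:
                cur.execute("""
                    SELECT id, payload FROM jobs
                    WHERE status = 'pending'
                    ORDER BY id
                    LIMIT 1
                    FOR UPDATE SKIP LOCKED
                """)
                row = cur.fetchone()
                if row is None:
                    return False      # nothing to do
                job_id, payload = row
                # ... do the actual work with `payload` here ...
                cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))
                return True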

We also self-host an S3-compatible storage and allow user-uploaded content within strict boundaries.

flomo 12/20/2025|||
Yeah, and nobody is looking at the other side of this. There just are not a lot of good DBA/sysop types who even want to work for some non-tech SMB. So this either gets outsourced to the cloud, or some junior dev or desktop support guy hacks it together. And then who knows if the backups are even working.

Fact is a lot of these companies are on the cloud because their internal IT was a total fail.

Nextgrid 12/20/2025||
If they just paid half of the markup they currently pay for the cloud, I'm sure they'd be swimming in qualified candidates.
strken 12/21/2025|||
Our AWS spend is something like $160/month. Want to come build bare metal database infrastructure for us for $3/day?
Nextgrid 12/21/2025|||
When you need to scale up and don't want that $160 to increase 10x to handle the additional load, the numbers start making more sense: three months' worth of the projected increase upfront (going from ~$160 to ~$1,600/mo is an extra ~$1,440/mo, so around $4.3k) is good money for a few days' work for the setup/migration, and it remains a good deal for you since you break even after 3 months and keep pocketing the savings indefinitely from that point on.

Of course, my comment wasn't aimed at those who successfully keep their cloud bill in the low 3-figures, but the majority of companies with a 5-figure bill and multiple "infrastructure" people on payroll futzing around with YAML files. Even half the achieved savings should be enough incentive for those guys to learn something new.

solatic 12/21/2025||
> few days' work

But initial setup is maybe 10% of the story. The day-2 operations of monitoring, backups, scaling, and failover still need to happen, and they still require expertise.

If you bring that expertise in house, it costs much more than 10x ($3/day -> $30/day = $10,950/year).

If you get the expertise from experts who are juggling you along with a lot of other clients, you get something like PlanetScale or CrunchyData, which are also significantly more expensive.

Nextgrid 12/21/2025||
> monitoring

Most monitoring solutions support Postgres and don't actually care where your DB is hosted. Of course this only applies if someone was actually looking at the metrics to begin with.

> backups

Plenty of options to choose from depending on your recovery time objective, from scheduled pg_dumps to WAL shipping to disk snapshots, and any combination of them at any schedule you desire. Just ship them to your favorite blob storage provider and call it a day.
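
A minimal sketch of the pg_dump flavour of that (database name, bucket, endpoint, and paths are made up; assumes boto3 and the Postgres client tools):

    import datetime
    import subprocess
    import boto3

    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    dump_path = f"/tmp/app-{stamp}.dump"

    # Custom-format dump: compressed, and restorable table-by-table with pg_restore.
    subprocess.run(["pg_dump", "-Fc", "-d", "app", "-f", dump_path], check=True)

    # Ship it to any S3-compatible blob store.
    s3 = boto3.client("s3", endpoint_url="https://objects.example.com")
    s3.upload_file(dump_path, "db-backups", f"app/{stamp}.dump")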

> scaling

That's the main reason I favor bare-metal infrastructure. There is no way anything on the cloud (at a price you can afford) can rival the performance of even a mid-range server, so scaling is effectively never an issue; if you're outgrowing that, the conversation we're having is not about getting a bigger DB but about using multiple DBs and sharding at the application layer.

> failover still needs to happen

Yes, get another server and use Patroni/etc. Or just accept the occasional downtime and up to 15 mins of data loss if the machine never comes back up. You'd be surprised how many businesses are perfectly fine with this. Case in point: two major clouds had hour-long downtimes recently and everyone basically forgot about it a week later.

> If you bring that expertise in house

Infrastructure should not require continuous upkeep/repair. You wouldn't buy a car that requires you to have a full-time mechanic in the passenger seat at all times. If your infrastructure requires this, you should ask for a refund and buy from someone who sells more reliable infra.

A server will run forever once set up unless hardware fails (and some hardware can be redundant with spares provisioned ahead of time to automatically take over and delay maintenance operations). You should spend a couple hours a month max on routine maintenance which can be outsourced and still beats the cloud price.

I think you're underestimating the amount of tech all around you that is essentially *nix machines that somehow just... work, despite having zero upkeep or maintenance. Modern hardware is surprisingly reliable, and most outages are caused by operator error when people are (potentially unnecessarily) messing with stuff, rather than by the hardware failing.

snovv_crash 12/21/2025|||
At $160/mo you are using so little you might as well host off of a Raspberry Pi on your desk with a USB 3 SSD attached. Maintenance and keeping a hot backup would take a few hours to set up, and you're more flexible too. And if you need to scale, rent a VPS or even a dedicated machine from Hetzner.

An LLM could set this up for you, it's dead simple.

strken 12/22/2025|||
I'm not going to put customer data on a USB-3 SSD sitting on my desk. Having a small database doesn't mean you can ignore physical security and regulatory compliance, particularly if you've still got reasonable cash flow. Just as one example, some of our regulatory requirements involve immutable storage - how am I supposed to make an SSD that's literally on my desk immutable in any meaningful way? S3 handles this in seconds. Same thing with geographically distributed replicas and backups.

I also disagree that the ongoing maintenance, observability, and testing of a replicated database would take a few hours to set up and then require zero maintenance and never ping me with alerts.

snovv_crash 12/22/2025||
The lede I buried there is whether all of this theater actually gives you better security and availability than 'toy' hardware.

Looking at all the recent AWS, Azure and Cloudflare outages, I posit that it doesn't.

flomo 12/22/2025|||
Nice troll. But TFA is about corporate IT so hopefully you get whatever.
flomo 12/20/2025|||
For companies not heavily into tech, lots of this stuff is not that expensive. Again, how many DBAs are even looking for a 3 hr/month sidegig?
barnabee 12/21/2025||||
It depends very much what the company is doing.

At my last two places it very quickly got to the point where the technical complexity of deployments, managing environments, dealing with large piles of data, etc. meant that we needed to hire someone to deal with it all.

They actually preferred managing VMs and self hosting in many cases (we kept the cloud web hosting for features like deploy previews, but that’s about it) to dealing with proprietary cloud tooling and APIs. Saved a ton of money, too.

On the other hand, the place before that was simple enough to build and deploy using cloud solutions without hiring someone dedicated (up to at least some pretty substantial scale that we didn’t hit).

scott_w 12/20/2025||||
> Every company out there is using the cloud and yet still employs infrastructure engineers to deal with its complexity. The "cloud" reducing staff costs is and was always a lie.

This doesn’t make sense as an argument. The reason the cloud is more complex is because that complexity is available. Under a certain size, a large number of cloud products simply can’t be managed in-house (and certainly not altogether).

Also your argument is incorrect in my experience.

At a smaller business I worked at, I was able to use these services to achieve uptime and performance that I couldn’t achieve self-hosted, because I had to spend time on the product itself. So yeah, we’d saved on infrastructure engineers.

At larger scales, what your false dichotomy suggests also doesn’t actually happen. Where I work now, our data stores are all self-managed on top of EC2/Azure, where performance and reliability are critical. But we don’t self-host everything. For example, we use SES to send our emails and we use RDS for our app DB, because their performance profiles and uptime guarantees are more than acceptable for the price we pay. That frees up our platform engineers to spend their energy on keeping our uptime on our critical services.

pinkgolem 12/20/2025|||
>At a smaller business I worked at, I was able to use these services to achieve uptime and performance that I couldn’t achieve self-hosted, because I had to spend time on the product itself. So yeah, we’d saved on infrastructure engineers.

How sure are you about that one? All of my Hetzner VMs reach an uptime of 99.9-something percent.

I could see more than one small business stack fitting onto a single one of those VMs.

scott_w 12/20/2025|||
100% certain because I started by self hosting before moving to AWS services for specific components and improved the uptime and reduced the time I spent keeping those services alive.
pinkgolem 12/20/2025||
What was the work you spent configuring those services and keeping them alive? I am genuinely curious...

We have a very limited set of services, but most have been very painless to maintain.

scott_w 12/20/2025||
A Django+Celery app behind Nginx back in the day. Most maintenance would be discovering a new failure mode:

- certificates not being renewed in time

- Celery eating up all RAM and having to be recycled

- RabbitMQ getting blocked requiring a forced restart

- random issues with Postgres that usually required a hard restart of PG (running low on RAM maybe?)

- configs having issues

- running out of inodes

- DNS not updating when upgrading to a new server (no CDN at the time)

- data centre going down, taking the provider’s email support with it (yes, really)

Bear in mind I’m going back a decade now, my memory is rusty. Each issue was solvable but each would happen at random and even mitigating them was time that I (a single dev) was not spending on new features or fixing bugs.

pinkgolem 12/20/2025||
I mean, going back a decade might be part of the reason?

Configs having issues is like the number 1 reason I like the setup so much...

I can configure everything on my local machine and test here, and then just deploy it to a server the same way.

I do not have to build a local setup, and then a remote one

scott_w 12/20/2025||
Er… what? Even in today’s world with Docker, you have differences between dev and prod. For a start, one is accessed via the internet and requires TLS configs to work correctly. The other is accessed via localhost.
chasd00 12/21/2025|||
Just FYI, you can put whatever you want in /etc/hosts; it gets hit before the resolver. So you can run your website on localhost with your regular host name over HTTPS.
scott_w 12/21/2025||
I’m aware, I just picked one example, but there are others, like using the console instead of a mail server, or having a CDN.
pinkgolem 12/21/2025|||
I use HTTPS for localhost; there are a ton of options for that.

But yes, the cert is created differently in prod and there are a few other differences.

But it's much closer than in the cloud.

squeaky-clean 12/20/2025|||
Just because your VM is running doesn't mean the service is accessible. Whenever there's a large AWS outage it's usually not because the servers turned off. It also doesn't guarantee that your backups are working properly.
pinkgolem 12/20/2025||
If you have a server where everything is on the server, the server being on means everything is online... There is not a lot of complexity going on inside a single server infrastructure.

I mean just because you have backups does not mean you can restore them ;-)

We do test backup restoration automatically and also manually on a quarterly basis, but you should do the same with AWS.

Otherwise, how do you know you can restore system A without impacting its other dependencies, D and C?

kijin 12/20/2025|||
Yes, mix-and-match is the way to go, depending on what kind of skills are available in your team. I wouldn't touch a mail server with a 10-foot pole, but I'll happily self-manage certain daemons that I'm comfortable with.

Just be careful not to accept more complexity just because it is available, which is what the AWS evangelists often try to sell. After all, we should always make an informed decision when adding a new dependency, whether in code or in infrastructure.

scott_w 12/20/2025||
Of course AWS are trying to sell you everything. It’s still on you and your team to understand your product and infrastructure and decide what makes sense for you.
aranelsurion 12/20/2025||||
> still employs infrastructure engineers

> The "cloud" reducing staff costs

Both can be true at the same time.

Also:

> Otherwise you're waking up at 3am no matter what.

Do you account for frequency and variety of wakeups here?

Nextgrid 12/20/2025||
> Do you account for frequency and variety of wakeups here?

Yes. In my career I've dealt with way more failures due to unnecessary distributed systems (that could have been one big bare-metal box) rather than hardware failures.

You can never eliminate wake-ups, but I find bare-metal systems have far fewer moving parts, which means you eliminate a whole bunch of failure scenarios, so you're only left with actual hardware failure (and HW is pretty reliable nowadays).

wredcoll 12/20/2025||
If this isn't the truth. I just spent several weeks, on and off, debugging a remote hosted build system tool thingy because it was in turn made of at least 50 different microservice type systems and it was breaking in the middle of two of them.

There was, I have to admit, a log message that explained the problem... once I could find the specific log message and understand the 45 steps in the chain that got to that spot.

spiralpolitik 12/20/2025||||
In-house vs Cloud Provider is largely a wash in terms of cost. Regardless of the approach, you are going to need people to maintain stuff, and people cost money. Similarly, compute and storage cost money, so what you lose on the swings, you gain on the roundabouts.

In my experience you typically need less people if using a Cloud Provider than in-house (or the same number of people can handle more instances) due to increased leverage. Whether you can maximize what you get via leverage depends on how good your team is.

US companies typically like to minimize headcount (either through accounting tricks or outsourcing) so usually using a Cloud Provider wins out for this reason alone. It's not how much money you spend, it's how it looks on the balance sheet ;)

matthewmacleod 12/20/2025||||
I don’t think it’s a lie, it’s just perhaps overstated. The number of staff needed to manage a cloud infrastructure is definitely lower than that required to manage the equivalent self-hosted infrastructure.

Whether or not you need that equivalence is an orthogonal question.

Nextgrid 12/20/2025|||
> The number of staff needed to manage a cloud infrastructure is definitely lower than that required to manage the equivalent self-hosted infrastructure.

There's probably a sweet spot where that is true, but because cloud providers offer more complexity (self-inflicted problems) and use PR to encourage you to use them ("best practices" and so on), in all the cloud-hosted shops I've been in over a decade of experience I've always seen multiple full-time infra people being busy with... something?

There was always something to do, whether to keep up with cloud provider changes/deprecations, implementing the latest "best practice", debugging distributed systems failures or self-inflicted problems and so on. I'm sure career/resume polishing incentives are at play here too - the employee wants the system to require their input otherwise their job is no longer needed.

Maybe in a perfect world you can indeed use cloud-hosted services to reduce/eliminate dedicated staff, but in practice I've never seen anything but solo founders actually achieve that.

freedomben 12/20/2025|||
Exactly. Companies with cloud infra often still have to hire infra people or even an infra team, but that team will be smaller than if they were self-hosting everything, in some cases radically smaller.

I love self-hosting stuff and even have a bias towards it, but the cost/time tradeoff is more complex than most people think.

riedel 12/20/2025||||
Working in a university lab, self-hosting is the default for almost anything. While I would agree that costs are quite low, I sometimes would be really happy to throw money at problems to make them go away. Without having had the chance, and thus being no expert, I really do see the opportunity of scaling (up and down) quickly in the cloud. We ran a Postgres database of a few hundred GB with multiple read replicas and we managed somehow, but really hit the limits of our expertise at some point. At some point we stopped migrating to newer database schemas because it was just such a hassle to keep availability. If I had the money as a company, I guess I would have paid for a hosted solution.
AYBABTME 12/21/2025||||
The fact that just as many engineers are on payroll doesn't mean that "cloud" is not an efficiency improvement. When things are easier and cheaper, people don't do less or buy less. They do more and buy more until they fill their capacity. The end result is the same number (or more) of engineers, but they deal with a higher level of abstraction and achieve more with the same headcount.
strken 12/21/2025||||
I can't talk about staff costs, but as someone who's self-hosted Postgres before, using RDS or Supabase saves weeks of time on upgrades, replicas, tuning, and backups (yeah, you still need independent backups, but PITRs make life easier). Databases and file storage are probably the most useful cloud functionality for small teams.

If you have the luxury of spending half a million per year on infrastructure engineers then you can of course do better, but this is by no means universal or cost-effective.

erulabs 12/20/2025|||
Well sure you still have 2 or 3 infra people but now you don’t need 15. Comparing to modern Hetzner is also not fair to “cloud” in the sense that click-and-get-server didn’t exist until cloud providers popped up. That was initially the whole point. If bare metal behind an API existed in 2009 the whole industry would look very different. Contingencies Rule Everything Around Me.
cardanome 12/20/2025|||
You are missing that most services don't have high availability needs and don't need to scale.

Most projects I have worked on in my career have never seen more than a hundred concurrent users. If something goes down on Saturday, I am going to fix it on Monday.

I have worked on internal tools where I just added a Postgres DB to the Docker setup and that was it. 5 minutes of work and no issues at all. Sure, if you have something customer-facing, you need to do a bit more and set up a good backup strategy, but that really isn't magic.
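
That setup is essentially the following (sketch only; image tag, names, and password are placeholders; assumes the Docker SDK for Python and a local Docker daemon):

    import docker

    client = docker.from_env()
    client.containers.run(
        "postgres:17",
        name="internal-tool-db",
        environment={"POSTGRES_PASSWORD": "change-me", "POSTGRES_DB": "tool"},
        # Named volume so the data survives container re-creation.
        volumes={"tool_pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
        ports={"5432/tcp": 5432},
        restart_policy={"Name": "unless-stopped"},
        detach=True,
    )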

lucideer 12/21/2025|||
> at a company that has reached the scale where employing engineers to manage self-hosted databases is more cost effective than outsourcing.

This is the crux of one of the most common fallacies in software engineering decision making today. I've participated in a bunch of architecture / vendor evaluations that concluded managed services are more cost effective almost purely because they underestimated (or even discarded entirely) the internal engineering cost of vendor management. Black box debugging is one of the most time-consuming engineering pursuits, & even when it's something widely documented & well supported like RDS, it's only really tuned for the lowest common denominator - the complexities of tuning someone else's system at scale can really add up to only marginally less effort than self-hosting (if there's any difference at all).

But most importantly - even if it's significantly less effort than self-hosting, it's never effectively costed when evaluating trade-offs - that's what leads to this persistent myth about the engineering cost of self-hosting. "Managing" managed services is a non-zero cost.

Add to that the ultimate trade-off of accountability vs availability (internal engineers care less about availability when it's out of their hands - but it's still a loss to your product either way).

bastawhiz 12/21/2025||
> Black box debugging is one of the most time-consuming engineering pursuits, & even when it's something widely documented & well supported like RDS, it's only really tuned for the lowest common denominator - the complexities of tuning someone else's system at scale can really add up to only marginally less effort than self-hosting (if there's any difference at all).

I'm really not sure what you're talking about here. I manage many RDS clusters at work. I think in total, we've spent maybe eight hours over the last three years "tuning" the system. It runs at about 100kqps during peak load. Could it be cheaper or faster? Probably, but it's a small fraction of our total infra spend and it's not keeping me up at night.

Virtually all the effort we've ever put in here has been making the application query the appropriate indexes. But you'd do that no matter how you host your database.

Hell, even the metrics that RDS gives you for free make the thing pay for itself, IMO. The thought of setting up grafana to monitor a new database makes me sweat.

solatic 12/21/2025|||
> even the metrics that RDS gives you for free make the thing pay for itself, IMO. The thought of setting up grafana to monitor a new database makes me sweat.

CloudNative PG actually gives you really nice dashboards out-of-the-box for free. see: https://github.com/cloudnative-pg/grafana-dashboards

bastawhiz 12/21/2025||
Sure, and I can install something to do RDS performance insights without querying PG stats, and something to schedule backups to another region, and something to aggregate the logs, and then I have N more things that can break.
lucideer 12/23/2025|||
> Could it be cheaper or faster? Probably

Ultimately, it depends on your stack & your bottlenecks. If you can afford to run slower queries then focusing your efforts elsewhere makes sense for you. We run ~25kqps average & mostly things are fine, but when on-call pages come in query performance is a common culprit. The time we've spent on that hasn't been significantly different from self-hosted persistence backends I've worked with (probably less time spent, but far from orders of magnitude - certainly not worthy of a bullet point in the "pros" column when costing application architectures).

bastawhiz 12/23/2025||
> query performance is a common culprit

But that almost certainly has to do with index use and configuration, not whether you're self hosting or not. RDS gives you essentially all of the same Postgres configuration options.

convolvatron 12/20/2025|||
It's not. I've been in a few shops that use RDS because they think their time is better spent doing other things.

Except now they are stuck trying to maintain and debug Postgres without the visibility and agency they would have if they hosted it themselves. The situation isn't at all clear.

Nextgrid 12/20/2025|||
One thing unaccounted for if you've only ever used cloud-hosted DBs is just how slow they are compared to a modern server with NVMe storage.

This leads developers to do all kinds of workarounds and reach for more cloud services (and then integrate them and - often poorly - try to ensure consistency across them) because the cloud-hosted DB is not able to handle the load.

On bare-metal, you can go a very long way with just throwing everything at Postgres and calling it a day.

andersmurphy 12/20/2025|||
100% this. Directly connected NVMe is a massive win - often several orders of magnitude.

You can take it even further in some contexts if you use SQLite.

I think one of the craziest ideas of the cloud decade was to move storage away from compute. It's even worse with things like AWS lambda or vercel.

Now vercel et al are charging you extra to have your data next to your compute. We're basically back to VMs at 100-1000x the cost.

NewJazz 12/20/2025||||
Yeah, our cloud DBs all have abysmal performance and high recurring costs, even compared to metal we didn't even buy for hosting DBs.
briHass 12/20/2025|||
This is the reason I manage SQL Server on a VM in Azure instead of their PaaS offering. The fully managed SQL has terrible performance unless you drop many thousands a month. The VM I built is closer to 700 a month.

Running on IaaS also gives you more scalability knobs to tweak: SSD IOPS and bandwidth, multiple drives for logs/partitions, memory-optimized VMs, and there are a lot of low-level settings that aren't accessible in managed SQL. Licensing costs are also horrible with managed SQL Server, where it seems like you pay Enterprise-level pricing, but running it yourself offers lower-cost editions like Standard or Web.

molf 12/20/2025|||
Interesting. Is this an issue with RDS?

I use Google Cloud SQL for PostgreSQL and it's been rock solid. No issues; troubleshooting works fine; all extensions we need already installed; can adjust settings where needed.

convolvatron 12/20/2025||
It's more of a general condition - it's not that RDS is somehow really faulty, it's just that when things do go wrong, it's not really anybody's job to introspect the system because RDS is taking care of it for us.

In the limit I don't think we should need DBAs, but as long as we need to manage indices by hand, think for more than 10 seconds about the hot queries, manage replication, tune the vacuum, track updates, and all the other rot - then actually installing PG on a node of your choice is really the smallest of the problems you face.

prisenco 12/20/2025|||
| self hosting costs you between 30 and 120 minutes per month

Can we honestly say that cloud services taking a half hour to two hours a month of someone's time on average is completely unheard of?

SatvikBeri 12/20/2025|||
I handle our company's RDS instances, and probably spend closer to 2 hours a year than 2 hours a month over the last 8 years.

It's definitely expensive, but it's not time-consuming.

prisenco 12/21/2025||
Of course. But people also have high uptime servers with long-running processes they barely touch.
esseph 12/20/2025|||
Very much depends on what you're doing in the cloud, how many services you are using, and how frequently those services and your app needs updates.
fhcuvyxu 12/21/2025|||
Self hosting does not cost you that much at all. It's basically zero once you've got backups automated.
npn 12/21/2025|||
I also encourage people to just use managed databases. After all, it is easy to replace such people. Heck actually you can fire all of them and replace the demand with genAI nowadays.
anal_reactor 12/20/2025|||
The discussion isn't "what is more effective". The discussion is "who wants to be blamed in case things go south". If you push the decision to move to self-hosted and then one of the engineers fucks up the database, you have a serious problem. If the same engineer fucks up a cloud database, it's easier to save your own ass.
jrochkind1 12/21/2025|||
Agreed. As someone in a very tiny shop, all us devs want to do as little context switching to ops as possible. Not even half a day a month. Our hosted services are in aggregate still way cheaper than hiring another person. (We do not employ an "infrastructure engineer").
arevno 12/21/2025||
> trading an hour or two of my time

pacman -S postgresql

initdb -D /pathto/pgroot/data

grok/claude/gpt: "Write a concise Bash script for setting up an automated daily PostgreSQL database backup using pg_dump and cron on a Linux server, with error handling via logging and 7-day retention by deleting older backups."

ctrl+c / ctrl+v

Yeah that definitely took me an hour or two.
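
For reference, the script it spits out looks roughly like this - a sketch, with paths and database name as placeholders (and yes, it dumps to a local directory):

    #!/usr/bin/env bash
    # Daily pg_dump backup with logging and 7-day retention.
    set -euo pipefail

    BACKUP_DIR=/var/backups/postgres   # placeholder path
    DB=myapp                           # placeholder database name
    LOG="$BACKUP_DIR/backup.log"

    mkdir -p "$BACKUP_DIR"
    if pg_dump -Fc "$DB" > "$BACKUP_DIR/$DB-$(date +%F).dump" 2>>"$LOG"; then
        echo "$(date -Is) backup ok" >> "$LOG"
    else
        echo "$(date -Is) backup FAILED" >> "$LOG"
        exit 1
    fi
    # Drop dumps older than 7 days.
    find "$BACKUP_DIR" -name "$DB-*.dump" -mtime +7 -delete

Plus one line in /etc/cron.d/pg-backup:

    0 3 * * * postgres /usr/local/bin/pg-backup.sh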

solatic 12/21/2025||
So your backups are written to the same disk?

> datacenter goes up in flames

> 3-2-1 backups: 3 copies on 2 different types of media with at least 1 copy off-site. No off-site copy.

Whoops!

ZeroConcerns 12/20/2025||
So, yeah, I guess there's much confusion about what a 'managed database' actually is? Because for me, the table stakes are:

-Backups: the provider will push a full generic disaster-recovery backup of my database to an off-provider location at least daily, without the need for a maintenance window

-Optimization: index maintenance and storage optimization are performed automatically and transparently

-Multi-datacenter failover: my database will remain available even if part(s) of my provider are down, with a minimal data loss window (like, 30 seconds, 5 minutes, 15 minutes, depending on SLA and thus plan expenditure)

-Point-in-time backups are performed at an SLA-defined granularity and with a similar retention window, allowing me to access snapshots via a custom DSN, not affecting production access or performance in any way

-Slow-query analysis: notifying me of relevant performance bottlenecks before they bring down production

-Storage analysis: my plan allows for #GB of fast storage, #TB of slow storage: let me know when I'm forecast to run out of either in the next 3 billing cycles or so

Because, well, if anyone provides all of that for a monthly fee, the whole "self-hosting" argument goes out of the window quickly, right? And I say that as someone who absolutely adores self-hosting...

thedougd 12/20/2025||
It's even worse when you start finding you're staffing specialized skills. You have the Postgres person, and they're not quite busy enough, but nobody else wants to do what they do. But then you have an issue while they're on vacation, and that's a problem. Now I have a critical service but with a bus factor problem. So now I staff two people who are now not very busy at all. One is a bit ambitious and is tired of being bored. So he's decided we need to implement something new in our Postgres to solve a problem we don't really have. Uh oh, it doesn't work so well, the two spend the next six months trying to work out the kinks with mixed success.
arcbyte 12/20/2025|||
Slack is a necessary component in well functioning systems.
zbentley 12/20/2025|||
And rental/SaaS models often provide an extremely cost effective alternative to needing to have a lot of slack.

Corollary: rental/SaaS models provide that property in large part because their providers have lots of slack.

thedougd 12/20/2025|||
Of course! It should be included in the math when comparing in-housing Postgres vs using a managed service.
satvikpendem 12/20/2025|||
This would be a strange scenario because why would you keep these people employed? If someone doesn't want to do the job required, including servicing Postgres, then they wouldn't be with me any longer; I'll find someone who does.
sixdonuts 12/21/2025||
No doubt. Reading this thread leads me to believe that almost no one wants to take responsibility for anything anymore, even hiring the right people. Why even hire someone who isn't going to take responsibility for their work and be part of a team? If an org is worried about the "bus factor" they are probably not hiring the right people and/or the org management has poor team building skills.
satvikpendem 12/21/2025||
Exactly, I just don't understand the grandparent's point, why have a "Postgres person" at all? I hire an engineer who should be able to do it all, no wonder there's been a proliferation of full stack engineers over specialized ones.

And especially having worked in startups, I was expected to do many different things, from fixing infrastructure code one day to writing frontend code the next. If you're in a bigger company, maybe it's understandable to be specialized, but especially if you're at a company with only a few people, you must be willing to do the job, whatever it is.

stackskipton 12/21/2025||
Because, now working at a company that used to be startup-sized, not having an X person leads to really bad technical debt problems: the person handling X was not really skilled enough to be doing so, but it gave the illusion of success. Those technical debt problems are causing us massive issues now and costing the business real money.
marcosdumay 12/20/2025|||
IMO, the reason to self-host your database is latency.

Yes, I'd say backups and analysis are table stakes for hiring it out, and multi-datacenter failover is a relevant nice-to-have. But the reason to do it yourself is that it's literally impossible to get anything as good on somebody else's computer as what you can build yourself.

andersmurphy 12/20/2025||
Yup, often orders of magnitude better.
satvikpendem 12/20/2025|||
If you set it up right, you can automate all this as well by self-hosting. There is really nothing special about automating backups or multi-region failover.
awestroke 12/20/2025||
But then you have to check that these mechanisms work regularly and manually
satvikpendem 12/20/2025||
One thing I learned working in the industry, you have to check them when you're using AWS too.
awestroke 12/20/2025||
Really? You're saying RDS backups can't be trusted?
satvikpendem 12/20/2025|||
Trusted in what sense, that they'll always work perfectly 100% of the time? No, therefore one must still check them from time to time, and it's really no different when self hosting, again, if you do it correctly.
awestroke 12/20/2025||
What are some common ways that RDS backups fail to be restored?
satvikpendem 12/20/2025||
Why are you asking me this? Are you trying to test whether I've actually used RDS before? I'm sure a quick search will find you the answer to your question.
SoftTalker 12/20/2025|||
No backup strategy can be blindly trusted. You must verify it, and also test that restores actually work.
odie5533 12/20/2025|||
Self-host things the boss won't call at 3 AM about: logs, traces, exceptions, internal apps, analytics. Don't self-host the database or major services.
cube00 12/20/2025||
Depending on your industry, logs can be very serious business.
wahnfrieden 12/20/2025|||
Yugabyte open source covers a lot of this
graemep 12/20/2025|||
Which providers do all of that?
BoorishBears 12/20/2025|||
I don't know which don't?

The defaults I've used on Amazon and GCP (RDS, Cloud SQL) both do.

jeffbee 12/21/2025|||
GCP Alloy DB
dangoodmanUT 12/20/2025||
There should be no data loss window with a hosted database
andersmurphy 12/20/2025|||
From what I remember, if AWS loses your data they basically give you some credits and that's it.
jeremyjh 12/20/2025||||
That requires synchronous replication, which reduces availability and performance.
xboxnolifes 12/20/2025|||
Why is that?
tgtweak 12/21/2025||
As someone who self hosted mysql (in complex master/slave setups) then mariadb, memsql, mongo and pgsql on bare metal, virtual machines then containers for almost 2 decades at this point... you can self host with very little downtime and the only real challenge is upgrade path and getting replication right.

Now with pgbouncer (or whatever other flavor of sql-aware proxy you fancy) you can greatly reduce the complexity involved in managing conventionally complex read/write routing and sharding to various replicas to enable resilient, scalable production-grade database setups on your own infra. Throw in the fact that copy-on-write and snapshotting is baked into most storage today and it becomes - at least compared to 20 years ago - trivial to set up DRS as well. Others have mentioned pgBackRest and that further enforces the ease with which you can set up these traditionally-complex setups.
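
For the pooling piece specifically, a minimal pgbouncer.ini is roughly this (hosts, auth and pool sizes are placeholders, not a recommendation):

    ; /etc/pgbouncer/pgbouncer.ini (sketch)
    [databases]
    app = host=10.0.0.5 port=5432 dbname=app

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = scram-sha-256
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 20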

Beyond those two significant features there aren't many other reasons you'd need to go with hosted/managed pgsql. I've yet to find a managed/hosted database solution that doesn't have some level of downtime to apply updates and patches, so even if you go fully hosted/managed it's not a silver bullet. The cost of a managed DB is also several times that of the actual hardware it's running on, so there is a cost factor involved as well.

I guess all this to say it's never been a better time to self-host your database and the learning curve is as shallow as it's ever been. Add to all of this that any garden-variety LLM can hand-hold you through the setup and management, including any issues you might encounter on the way.

donatj 12/20/2025||
The author brings up the point, but I have always found it surprising how much more expensive managed databases are than a comparable VPS.

I would expect a little bit more as a cost of the convenience, but in my experience it's generally multiple times the expense. It's wild.

This has kept me away from managed databases in all but my largest projects.

orev 12/20/2025||
Once they convince you that you can’t do it yourself, you end up relying on them, but didn’t develop the skills you would need to migrate to another provider when they start raising prices. And they keep raising prices because by then you have no choice.
zbentley 12/20/2025|||
There is plenty of provider markup, to be sure. But it is also very much not a given that the hosted version of a database is running software/configs that are equivalent to what you could do yourself. Many hosted databases are extremely different behind the scenes when it comes to durability, monitoring, failover, storage provisioning, compute provisioning, and more. Just because it acts like a connection hanging off a postmaster service running on a server doesn’t mean that’s what your “psql” is connected to on RDS Aurora (or many of the other cloud-Postgres offerings).
aranelsurion 12/20/2025||
> Just because it acts like a connection hanging off

If anything that’s a feature for ease of use and compatibility.

citizenpaul 12/20/2025||||
I have not tested this in real life yet, but it seems like the whole vendor lock-in argument can be addressed if you bite the bullet and learn basic Kubernetes administration. Kubernetes is FOSS and there are countless Kubernetes-as-a-service providers.

I know there are other issues with Kubernetes but at least its transferable knowledge.

ch2026 12/20/2025|||
Wait, are you talking about cloud providers or LLMs?
nrhrjrjrjtntbt 12/21/2025||
Yes, if the DB is 5x the VM and the VM is 10x the dedicated server from, say, OVH, then you are paying 50x.
heipei 12/20/2025||
I still don't get how folks can hype Postgres with every second post on HN, yet there is no simple batteries-included way to run a HA Postgres cluster with automatic failover like you can do with MongoDB. I'm genuinely curious how people deal with this in production when they're self-hosting.
franckpachot 12/20/2025||
It's largely cultural. In the SQL world, people are used to accepting the absence of real HA (resilience to failure, where transactions continue without interruption) and instead rely on fast DR (stop the service, recover, check for data loss, start the service). In practice, this means that all connections are rolled back, clients must reconnect to a replica known to be in synchronous commit, and everything restarts with a cold cache.
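
(To make the "synchronous commit" part concrete: on the primary this is typically just a couple of postgresql.conf lines - the standby names below are placeholders and have to match the standbys' application_name.)

    synchronous_commit = on
    synchronous_standby_names = 'ANY 1 (replica1, replica2)'   # quorum of one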

Yet they still call it HA because there's nothing else. Even a planned shutdown of the primary to patch the OS results in downtime, as all connections are terminated. The situation is even worse for major database upgrades: stop the application, upgrade the database, deploy a new release of the app because some features are not compatible between versions, test, re-analyze the tables, reopen the database, and only then can users resume work.

Everything in SQL/RDBMS was designed for a single-node instance, not counting replicas. It's not HA because there can be only one read-write instance at a time. They even claim to be more ACID than MongoDB, but the ACID properties are guaranteed only on a single node.

One exception is Oracle RAC, but PostgreSQL has nothing like that. Some forks, like YugabyteDB, provide real HA with most PostgreSQL features.

About the hype: many applications that run on PostgreSQL accept hours of downtime, planned or unplanned. Those who run larger, more critical applications on PostgreSQL are big companies with many expert DBAs who can handle the complexity of database automation. And use logical replication for upgrades. But no solution offers both low operational complexity and high availability comparable to MongoDB's.

franckpachot 12/20/2025|||
Beyond the hype, the PostgreSQL community is aware of the lack of "batteries-included" HA. This discussion on the idea of a Built-in Raft replication mentions MongoDB as:

>> "God Send". Everything just worked. Replication was as reliable as one could imagine. It outlives several hardware incidents without manual intervention. It allowed cluster maintenance (software and hardware upgrades) without application downtime. I really dream PostgreSQL will be as reliable as MongoDB without need of external services.

https://www.postgresql.org/message-id/0e01fb4d-f8ea-4ca9-8c9...

abrookewood 12/20/2025||
"I really dream PostgreSQL will be as reliable as MongoDB" ... someone needs to go and read up on Mongo's history!

Sure, the PostgreSQL HA story isn't what we all want it to be, but the reliability is exceptional.

computerfan494 12/20/2025||
Postgres violated serializability on a single node for a considerable amount of time [1] and used fsync incorrectly for 20 years [2]. I personally witnessed lost data on Postgres because of the fsync issue.

Database engineering is very hard. MongoDB has had both poor defaults as well as bugs in the past. It will certainly have durability bugs in the future, just like Postgres and all other serious databases. I'm not sure that Postgres' durability stacks up especially well with modern MongoDB.

[1] https://jepsen.io/analyses/postgresql-12.3

[2] https://archive.fosdem.org/2019/schedule/event/postgresql_fs...

abrookewood 12/21/2025||
Thanks for adding that - I wasn't aware.
mfalcao 12/20/2025|||
The most common way to achieve HA is using Patroni. The easiest way to set it up is using Autobase (https://autobase.tech).

CloudNativePG (https://cloudnative-pg.io) is a great option if you’re using Kubernetes.

There’s also pg_auto_failover which is a Postgres extension and a bit less complex than the alternatives, but it has its drawbacks.

franckpachot 12/20/2025||
Be sure to read the Myths and Truths about Synchronous Replication in PostgreSQL (by the author of Patroni) before considering those solutions as cloud-native high availability: https://www.postgresql.eu/events/pgconfde2025/sessions/sessi...
da02 12/21/2025||
What is your preferred alternative to Patroni?
tresil 12/20/2025|||
If you’re running Kubernetes, CloudNativePG seems to be the “batteries included” HA Postgres cluster that’s becoming the standard in this area.
franckpachot 12/20/2025|||
CloudNativePG is automation around PostgreSQL, not "batteries included", and not the idea of Kubernetes where pods can die or spawn without impacting availability. Unfortunately, naming it Cloud Native doesn't transform a monolithic database into an elastic cluster.
monus 12/21/2025|||
We recently had a disk failure on the primary and CloudNativePG promoted a replica to primary, but it wasn't zero downtime. During the transition, several queries failed. So something like PgBouncer with transaction-level pooling (no prepared statements) is still needed, which has a performance penalty.
_rwo 12/21/2025||
> So something like pgBouncer together with transactional queries

FYI - it's already supported by cloudnativepg [1]

I was playing with this operator recently and I'm truly impressed - it's a piece of art when it comes to postgres automation; alongside Barman [2] it does everything I need and more

[1] https://cloudnative-pg.io/docs/1.28/connection_pooling [2] https://cloudnative-pg.io/plugin-barman-cloud/
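
For anyone curious, a minimal Cluster manifest is roughly this (name and size are placeholders):

    # cluster.yaml (sketch) - kubectl apply -f cluster.yaml
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: pg-main
    spec:
      instances: 3        # one primary + two replicas, failover handled by the operator
      storage:
        size: 20Gi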

jknoepfler 12/20/2025|||
I use Patroni for that in a k8s environment (although it works anywhere). I get an off-the-shelf declarative deployment of an HA postgres cluster with automatic failover with a little boiler-plate YAML.
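
The boiler-plate really is small; a per-node patroni.yml is roughly this sketch (addresses, credentials and DCS endpoints are placeholders, plus a bootstrap section for first-time init):

    # /etc/patroni/patroni.yml (sketch)
    scope: pg-cluster
    name: node1

    restapi:
      listen: 0.0.0.0:8008
      connect_address: 10.0.0.11:8008

    etcd3:
      hosts: 10.0.0.21:2379,10.0.0.22:2379,10.0.0.23:2379

    postgresql:
      listen: 0.0.0.0:5432
      connect_address: 10.0.0.11:5432
      data_dir: /var/lib/postgresql/16/main
      authentication:
        superuser:
          username: postgres
          password: change-me
        replication:
          username: replicator
          password: change-me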

Patroni has been around for a while. The database-as-a-service team where I work uses it under the hood. I used it to build database-as-a-service functionality on the infra platform team I was at prior to that.

It's basically push-button production PG.

There's at least one decent operator framework leveraging it, if that's your jam. I've been living and dying by self-hosting everything with k8s operators for about 6-7 years now.

tempest_ 12/20/2025||
We use patroni and run it outside of k8s on prem, no issues in 6 or 7 years. Just upgraded from pg 12 to 17 with basically no downtime and no issues either.
baobun 12/20/2025||
Yo I'm curious if you have any pointers on how you went about this to share? Did you use their provided upgrade script or did you instrument the upgrade yourself "out of band"? rsync?

Currently scratching my head on what the appropriate upgrade procedure is for a non-k8s/operator spilo/patroni cluster for minimal downtime and risk. The script doesn't seem to work for this setup, erroring on mismatching PG_VERSION when attempting. If you don't mind sharing it would be very appreciated.

tempest_ 12/21/2025||
I did not use a script (my environment is bare metal running ubuntu 24).

I read these and then wrote my own scripts that were tailored to my environment.

https://pganalyze.com/blog/5mins-postgres-zero-downtime-upgr...

https://www.pgedge.com/blog/always-online-or-bust-zero-downt...

https://knock.app/blog/zero-downtime-postgres-upgrades

Basically

- Created a new cluster on new machines

- Started logically replicating (rough sketch of the commands after this list)

- Waited for that to complete and then left it there replicating for a while until I was comfortable with the setup

- We were already using haproxy and pgbouncer

- Then I did a cut over to the new setup

- Everything looked good so after a while I tore down the old cluster

- This was for a database 600gb-1tb in size

- The client application was not doing anything overly fancy which meant there was very little to change going from 12 to 17

- Additionally I did all of the above in a staging environment first to make sure it would work as expected
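
For reference, the logical replication step boils down to the standard publication/subscription pair - roughly this, with names and hosts as placeholders:

    # on the old 12 primary
    psql -d app -c "CREATE PUBLICATION upgrade_pub FOR ALL TABLES;"

    # on the new 17 cluster, after copying the schema over with pg_dump --schema-only
    psql -d app -c "CREATE SUBSCRIPTION upgrade_sub CONNECTION 'host=old-primary dbname=app user=replicator' PUBLICATION upgrade_pub;"

    # note: sequences are not copied by logical replication, so sync them before cutover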

Best of luck.

baobun 12/22/2025||
Thank you! o7

Going to have to do some more figuring out of what's up with Spilo - turns out that running it outside of k8s is rare and not much documented. But it's still Patroni, so this is very helpful.

wb14123 12/21/2025|||
Yeah, I'm also wondering that. I've been looking to self-host PostgreSQL after Cockroach changed their free tier license, but found the HA part of PostgreSQL really lacking. I tested Patroni, which seems to be a popular choice, but found some pretty critical problems (https://www.binwang.me/2024-12-02-PostgreSQL-High-Availabili...). I tried to explore some other solutions, but found that the lack of a high-level design makes HA for PostgreSQL really hard, if not impossible. For example, without the necessary information in the WAL, it's hard to enforce a single primary node even with an external Raft/Paxos coordinator. I wrote some of this down in this blog (https://www.binwang.me/2025-08-13-Why-Consensus-Shortcuts-Fa...), especially in the sections "Highly Available PostgreSQL Cluster" and "Quorum".

My theory of why Postgres is still getting the hype is that either people don't know about the problem, or it's acceptable on some level. I've worked on a team that maintained the in-house database cluster (even though we were using MySQL instead of PostgreSQL) and the HA story was pretty bad. But there were engineers manually recovering lost data and resolving data conflicts, either as part of incident recovery or from customer tickets. So I guess that's one way of doing business.

forinti 12/21/2025|||
I love PostgreSQL simply because it never gives me any trouble. I've been running it for decades without issue.

OTOH, Oracle takes most of my time with endless issues, bugs, unexpected feature modifications, even on OCI!

dpedu 12/21/2025|||
This is my gripe with Postgres as well. Every time I see comments extolling the greatness of Postgres, I can't help but think "ah, that's a user, not a system administrator" and I think that's a completely fair judgement. Postgres is pretty great if you don't have to take care of it.
forinti 12/22/2025||
I manage PostgreSQL and the thing I really love about it is that there's not much to manage. It just works. Even setting up streaming replication is really easy.
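
On a recent version it's pretty much just this (host, role and paths are placeholders):

    # on the primary: a replication role plus a pg_hba.conf entry
    createuser --replication replicator
    #   host  replication  replicator  10.0.0.0/24  scram-sha-256

    # on the standby: clone the primary, which also writes primary_conninfo and standby.signal
    pg_basebackup -h primary.example -U replicator -D /var/lib/postgresql/16/main -R -X stream -P
    # then start the standby as usual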
dpedu 12/22/2025||
Initial setup is rarely the hard part of any technology.
christophilus 12/20/2025|||
I’ve been tempted by MariaDB for this reason. I’d love to hear from anyone who has run both.
paulryanrogers 12/20/2025||
IMO Maria has fallen behind MySQL. I wouldn't choose it for anything my income depends on.

(I do use Maria at home for legacy reasons, and have used MySQL and Pg professionally for years.)

danaris 12/20/2025||
> IMO Maria has fallen behind MySQL. I wouldn't choose it for anything my income depends on.

Can you give any details on that?

I switched to MariaDB back in the day for my personal projects because (so far as I could tell) it was being updated more regularly, and it was more fully open source. (I don't recall offhand at this point whether MySQL switched to a fully paid model, or just less-open.)

Seattle3503 12/21/2025|||
SKIP LOCKED was added in 10.6 (~2021), years after MySQL had it (~2017). My company was using MariaDB at the time, and was trailing a version or two behind, and it made implementing a queue very painful.
chuckadams 12/20/2025||||
One area where Maria lags significantly is JSON support. In MariaDB, JSON is just an alias for LONGTEXT plus validation: https://mariadb.com/docs/server/reference/data-types/string-...
paulryanrogers 12/20/2025|||
IME MariaDB doesn't recover or run as reliably as modern versions of MySQL, at least with InnoDB.
dangoodmanUT 12/20/2025|||
Patroni, Zalando operator on k8s
groundzeros2015 12/20/2025|||
Because that’s an expensive and complex boondoggle almost no business needs.
paulryanrogers 12/20/2025||
RDS provides some HA. HAProxy or PGBouncer can help when self hosting.
notaseojh 12/20/2025||
It's easy to throw names out like this (pgBackRest is also useful...) but getting them set up properly in a production environment is not at all straightforward, which I think is the point.
zbentley 12/20/2025|||
…in which case, you should probably use a hosted offering that takes care of those things for you. RDS Aurora (Serverless or not), Neon, and many other services offer those properties without any additional setup. They charge a premium for them, however.

It’s not like Mongo gives you those properties for free either. Replication/clustering related data loss is still incredibly common precisely because mongo makes it seem like all that stuff is handled automatically at setup when in reality it requires plenty of manual tuning or extra software in order to provide the guarantees everyone thinks it does.

paulryanrogers 12/20/2025|||
Yeah my hope is that the core team will adopt a built in solution, much as they finally came around on including logical replication.

Until then it is nice to have options, even if they do require extra steps.

isuckatcoding 12/20/2025||
Take a look at https://github.com/vitabaks/autobase

In case you want to self host but also have something that takes care of all that extra work for you

runako 12/20/2025||
Thank you, this looks awesome.
satvikpendem 12/20/2025|||
I wonder how well this plays with other self-hosted open source PaaS options - is it just a Docker container we can run, I assume?
yakkomajuri 12/20/2025||
Just skimmed the readme. What's the connection pooling situation here? Or is it out of scope?
yoan9224 12/21/2025||
I've been self-hosting Postgres for production apps for about 6 years now. The "3 AM database emergency" fear is vastly overblown in my experience.

In reality, most database issues are slow queries or connection pool exhaustion - things that happen during business hours when you're actively developing. The actual database process itself just runs. I've had more AWS outages wake me up than Postgres crashes.

The cost savings are real, but the bigger win for me is having complete visibility. When something does go wrong, I can SSH in and see exactly what's happening. With RDS you're often stuck waiting for support while your users are affected.

That said, you do need solid backups and monitoring from day one. pgBackRest and pgBouncer are your friends.
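
If it helps anyone getting started, the pgBackRest side boils down to roughly this (stanza name, paths and retention are placeholders):

    # /etc/pgbackrest/pgbackrest.conf
    [global]
    repo1-path=/var/lib/pgbackrest
    repo1-retention-full=2

    [main]
    pg1-path=/var/lib/postgresql/16/main

    # postgresql.conf
    archive_mode = on
    archive_command = 'pgbackrest --stanza=main archive-push %p'

    # one-time init, then schedule regular backups from cron
    pgbackrest --stanza=main stanza-create
    pgbackrest --stanza=main --type=full backup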

petterroea 12/20/2025||
I have run (read: helped with infrastructure) a small production service using PSQL for 6 years, with up to hundreds of users per day. PSQL has been the problem exactly once, and it was because we ran out of disk space. Proper monitoring (duh) and a little VACUUM would have solved it.

Later I ran a v2 of that service on k8s. The architecture also changed a lot, hosting many smaller servers sharing the same psql server (not really microservice-related, think more "collective of smaller services run by different people"). I have hit some issues relating to maxing out the max connections limit, but that's about it.

This is something I do in my free time so SLA isn't an issue, meaning I've had the ability to learn the ropes of running PSQL without many bad consequences. I'm really happy I have had this opportunity.

My conclusion is that running PSQL is totally fine if you just set up proper monitoring. If you are an engineer that works with infrastructure, even just because nobody else can/wants to, hosting PSQL is probably fine for you. Just RTFM.

kunley 12/21/2025||
psql (lowercase) is the name of the terminal SQL client for PostgreSQL. As a general abbreviation we'd rather use "Pg".
petterroea 12/21/2025||
Good catch, thx
reilly3000 12/20/2025||
But it’s 1500 pages long!
petterroea 12/21/2025||
Good point. I sure didn't read it myself :D

I generally read the parts I think I need, based on what I read elsewhere like Stackoverflow and blog posts. Usually the real docs are better than some random person's SO comment. I feel that's sufficient?

devin 12/20/2025|
What irks me about so many comments in this thread is that they often totally ignore questions of scale, the shape of your workloads, staffing concerns, time constraints, stage of your business, whether you require extensions, etc.

There is a whole raft of reasons why you might be a candidate for self-hosting, and a whole raft of reasons why not. This article is deeply reductive, and so are many of the comments.

groundzeros2015 12/20/2025|
Engineers almost never consider any of those questions. And instead deploy the maximally expensive solution their boss will say ok to.
RadiozRadioz 12/20/2025|||
Bad, short-sighted engineers will do that. An engineer who is not acting solely in the best interests of the wider organisation is a bad one. I would not want to work with a colleague who was so detached from reality that they wouldn't consider all GP's suggested facets. Engineering includes soft/business constraints as well as technical ones.
groundzeros2015 12/20/2025|||
We are saying similar things.
RadiozRadioz 12/20/2025||
Ah, you are implying that most engineers are bad, I see. In that case I agree too
groundzeros2015 12/21/2025||
I don’t know if they are bad engineers, but they have poor judgment.
npn 12/21/2025|||
I bet you also believe database is the single source of truth, right?
WackyFighter 12/21/2025|||
I find it is the opposite way around. I come up with <simple solution> based on open source tooling and I am forced instead to use <expensive enterprise shite> which is 100% lock in proprietary BS because <large corporate tech company> is partnered and is subsidising development. This has been a near constant throughout my career.
groundzeros2015 12/21/2025||
I agree, my statement is too coarse. There can be a lot of organizational pressure to produce complexity and it’s not fair to just blame engineers.

I’ve given a lot of engineers tasks only to find they are “setting up kubernetes cluster so I can setup automated deployments with a dashboard for …”

And similarly in QA I rarely see a cost/benefit consideration for a particular test or automation. Instead it’s we are going to fully automate this and analyze every possible variable.
