Posted by turtles3 11 hours ago
I agree that managing lots of databases can be a pain in the ass, but trying to make Postgres do everything seems like a problem as well. A lot of these are genuinely different problems, and trying to make Postgres solve all of them seems likely to lead to similar if not worse outcomes than having separate dedicated services.
I understand that people were too overeager to jump on the MongoDB web scale nosql crap, but at this point I think there might have been an overcorrection. The problem with the nosql hype wasn't that they weren't using SQL, it's that they were shoehorning it everywhere, even in places where it wasn't a good fit for the job. Now this blog post is telling us to shoehorn Postgres everywhere, even if it isn't a good fit for the job...
What I like about the "just use PostgreSQL" idea is that, unfortunately, most people don't use Redis well anyway. They just use it as a cache, which IMHO doesn't even scratch the surface of all the amazing things Redis can do.
As we all know, it's all about tradeoffs. If you are only using Redis as a cache, does the performance improvement you get by using it outweigh the complexity of another system dependency? Maybe? Depends...
Side note: If you are using Redis for caching and queue management, those are two separate considerations. Your cache and queues should never live on the same Redis instance because they should have different max-memory policies! </Side note>
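To make that concrete, here's roughly what the split looks like in redis.conf (the memory limits are illustrative; only the policy directives matter): a cache can safely evict old entries, while a queue must never silently drop jobs under memory pressure.

```
# redis-cache.conf -- entries can be recomputed, so evict the least-recently-used
maxmemory 2gb
maxmemory-policy allkeys-lru

# redis-queue.conf -- eviction would silently drop jobs; error loudly instead
maxmemory 2gb
maxmemory-policy noeviction
```

With `noeviction`, writes fail with an error once the limit is hit, which your queue producers can detect; with `allkeys-lru`, a full cache just quietly sheds cold keys.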
The newest versions of Rails have really got me thinking about the simplicity of a PostgreSQL only deployment, then migrating to other data stores as needed down the line. I'd put the need to migrate squarely into the "good problems" to have because it indicates that your service is growing and expanding past the first few stages of growth.
All that being said, man, I think Redis is sooooo cool. It's the hammer I'm always looking for a nail to use on.
That may be its point, but frankly I didn't see much of it used in the wild. You might argue that those systems then didn't need Redis in the first place, and I'd agree, but note that that is exactly the point tigerdata makes.
edit: it's not about serious uses, it's about typical uses, which are sad (and same with Kafka, Elastic, etc, etc)
> Tell me you don't understand Redis point is data structures without telling me you don't understand Redis point is data structures.
regardless of the author, I think slop of that sort belongs on reddit, not HN.
If you follow this advice naively, you might try to implement two or more of these other-kind-of-DB simulacra data models within the same Postgres instance.
And it’ll work, at first. Might even stay working if only one of the workloads ends up growing to a nontrivial size.
But at scale, these different-model workloads will likely contend with one another, starving each other of memory or disk-cache pages; or you’ll see an “always some little thing happening” workload causing a sibling “big once-in-a-while” workload to never be able to acquire the table/index locks it needs to do its job (or vice versa — the big workloads stalling the hot workloads); etc.
And even worse, you’ll be stuck when it comes to fixing this with instance-level tuning. You can only truly tune a given Postgres instance to behave well for one type-of-[scaled-]workload at a time. One workload-type might use fewer DB connections and depend for efficiency on them having a higher `work_mem` and `max_parallel_workers` each; while another workload-type might use many thousands of short-lived connections and depend on them having small `work_mem` so they’ll all fit.
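As a sketch of how those two tunings diverge in postgresql.conf (all values are illustrative, not recommendations; the real numbers depend on your hardware and workload):

```
# pg-analytics.conf -- few connections, each running big parallel queries
max_connections = 50
work_mem = 256MB
max_parallel_workers_per_gather = 8

# pg-oltp.conf -- thousands of short-lived connections doing small queries
max_connections = 5000
work_mem = 4MB
max_parallel_workers_per_gather = 0
```

Note that `work_mem` is allocated per sort/hash operation per connection, which is exactly why the two settings can't coexist safely on one instance: the analytics-friendly value would let the OLTP connection herd exhaust RAM.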
But! The conclusion you should draw from being in this situation shouldn’t be “oh, so Postgres can’t handle these types of workloads.”
No; Postgres can handle each of these workloads just fine. It’s rather that your single monolithic do-everything Postgres instance maybe won’t be able to handle this heterogeneous mix of workloads with very different resource and tuning requirements.
But that just means that you need more Postgres.
I.e., rather than adding a different type-of-component to your stack, you can just add another Postgres instance, tuned specifically to do that type of work.
Why do that, rather than adding a component explicitly for caching/key-values/documents/search/graphs/vectors/whatever?
Well, for all the reasons TFA outlines. This “Postgres tuned for X” instance will still be Postgres, and so you’ll still get all the advantages of being able to rely on a single query language, a single set of client libraries and tooling, a single coherent backup strategy, etc.
Where TFA’s “just use Postgres” in the sense of reusing your Postgres instance only scales if your DB is doing a bare minimum of that type of work, interpreting “just use Postgres” in the sense of adding a purpose-defined Postgres instance to your stack will scale nigh-on indefinitely. (To the point that, if you ever do end up needing what a purpose-built-for-that-workload datastore can give you, you’ll likely be swapping it out for an entire purpose-defined PG cluster by that point. And the effort will mostly serve the purpose of OpEx savings, rather than getting you anything cool.)
And, as a (really big) bonus of this approach, you only need to split PG this way where it matters, i.e. in production, at scale, at the point that the new workload-type is starting to cause problems/conflicts. Which means that, if you make your codebase(s) blind to where exactly these workloads live (e.g. by making them into separate DB connection pools configured by separate env-vars), then:
- in dev (and in CI, staging, etc), everything can default to happening on the one local PG instance. Which means bootstrapping a dev-env is just `brew install postgres`.
- and in prod, you don’t need to pre-build with new components just to serve your new need. No new Redis instance VM just to serve your so-far-tiny KV-storage needs. You start with your new workload-type sharing your “miscellaneous business layer” PG instance; and then, if and when it becomes a problem, you migrate it out.
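A minimal sketch of that env-var indirection (the variable naming scheme and the fallback default are my assumptions, not from TFA): each workload resolves its own connection string, and all of them default to the same local instance until you override one in prod.

```python
import os

# Default used in dev/CI: every workload lands on the one local Postgres.
LOCAL_PG = "postgresql://localhost:5432/app"

def dsn_for(workload: str) -> str:
    """Resolve the connection string for a workload, e.g. 'CACHE' or 'QUEUE'.

    In dev nothing is set, so every workload shares LOCAL_PG; in prod you
    point a single env var (hypothetical name: <WORKLOAD>_DATABASE_URL) at
    a purpose-tuned Postgres instance to split that workload out, with no
    code changes required.
    """
    return os.environ.get(f"{workload}_DATABASE_URL", LOCAL_PG)

# Each subsystem builds its own connection pool from its own DSN:
cache_dsn = dsn_for("CACHE")
queue_dsn = dsn_for("QUEUE")
search_dsn = dsn_for("SEARCH")
```

The day your KV workload starts causing contention, you set `CACHE_DATABASE_URL` on the production deploy and migrate that table over; nothing else in the codebase knows or cares.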
1. Downtime doesn't matter. 2. You pay someone else (e.g. AWS) to manage redundancy and fail-over.
It just feels crazy to me that Postgres still doesn't have a native HA story; I last battled with this well over a decade ago.
Same with MySQL and many other "traditional" databases. It tends to work out because these failures are rare and you can get pretty close with external leader election and fencing, but Postgres is NOT easy (likely impossible) to operate as a CP system according to the CAP theorem.
There are various attempts at fixing this (Yugabyte, Neon, Cockroach, TiDB, ...) which all come with various downsides.
[1]: Someone tried it with Patroni and failed miserably, https://www.binwang.me/2024-12-02-PostgreSQL-High-Availabili...