
Posted by turtles3 11 hours ago

It's 2026, Just Use Postgres (www.tigerdata.com)
472 points | 287 comments
sheerun 9 hours ago|
Postgres is in a class of its own; other solutions can eventually be incorporated into it by someone or some organization, and that's it
tombert 10 hours ago||
Meh.

I agree that managing lots of databases can be a pain in the ass, but trying to make Postgres do everything seems like a problem as well. A lot of these things are different things and trying to make Postgres do all of them seems like it will lead to similar if not worse outcomes than having separate dedicated services.

I understand that people were too overeager to jump on the MongoDB web scale nosql crap, but at this point I think there might have been an overcorrection. The problem with the nosql hype wasn't that they weren't using SQL, it's that they were shoehorning it everywhere, even in places where it wasn't a good fit for the job. Now this blog post is telling us to shoehorn Postgres everywhere, even if it isn't a good fit for the job...

cyanydeez 10 hours ago|
to be fair, Postgres can do basically everything Mongo can, and just as well.
tombert 9 hours ago||
Ok, I'm a little skeptical of that claim but let's grant it. I still don't think Postgres is going to do everything Kafka and Redis can do as well as Kafka or Redis.
cpursley 8 hours ago||
pgmq gets very close for a lot of Kafka use cases
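For concreteness, a minimal pgmq session looks roughly like this (queue name and payload are invented for illustration; sketch based on pgmq's SQL API):

```sql
-- install the extension and create a queue
CREATE EXTENSION IF NOT EXISTS pgmq;
SELECT pgmq.create('orders');

-- produce: enqueue a JSON payload, returns the new msg_id
SELECT * FROM pgmq.send('orders', '{"order_id": 42}');

-- consume: read up to 1 message, hidden from other readers for 30s
SELECT * FROM pgmq.read('orders', 30, 1);

-- ack: archive (or pgmq.delete) the message once processed
SELECT pgmq.archive('orders', 1);
```

What you don't get is Kafka's partitioned, replayable log, so it covers queue-style use cases more than streaming ones.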
ablob 9 hours ago||
I really wonder how "It's year X" could establish itself as such a popular argument.
gherkinnn 1 hour ago|
I remember $CURRENT_YEAR taking off around 2016 where it was used in the innocent days of the culture wars alongside terms like "dongle" and "shitlord".
fitsumbelay 9 hours ago||
fair points made but I use sqlite for many things because sometimes you just need a tent
antirez 10 hours ago||
The point of Redis is data structures and algorithmic complexity of operations. If you use Redis well, you can't replace it with PostgreSQL. But I bet you can't replace memcached either for serious use cases.
hmcfletch 9 hours ago||
As someone who is a huge fan of both Redis and Postgres, I wholeheartedly agree with the "if you are using Redis well, you can't replace it with PostgreSQL" statement.

What I like about the "just use PostgreSQL" idea is that, unfortunately, most people don't use Redis well. They are just using it as a cache, which IMHO doesn't even scratch the surface of all the amazing things Redis can do.

As we all know, it's all about tradeoffs. If you are only using Redis as a cache, does the performance improvement you get by using it outweigh the complexity of another system dependency? Maybe? Depends...

Side note: If you are using Redis for caching and queue management, those are two separate considerations. Your cache and queues should never live on the same Redis instance because they should have different maxmemory policies! </Side note>
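To make the side note concrete, a sketch of the two configs (the memory values are arbitrary; `maxmemory-policy` is the real Redis directive at issue):

```conf
# cache instance: under memory pressure, evict least-recently-used keys
maxmemory 2gb
maxmemory-policy allkeys-lru

# queue instance: never evict -- reject writes instead of silently dropping jobs
maxmemory 2gb
maxmemory-policy noeviction
```

One policy trades completeness for availability, the other the reverse, and a single instance can only pick one.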

The newest versions of Rails have really got me thinking about the simplicity of a PostgreSQL only deployment, then migrating to other data stores as needed down the line. I'd put the need to migrate squarely into the "good problems" to have because it indicates that your service is growing and expanding past the first few stages of growth.

All that being said, man I think Redis is sooooo cool. It's the hammer I am always looking for a nail to use on.

abtinf 10 hours ago|||
“well” is doing a lot of heavy lifting in your comment. Across a number of companies using Redis, I’ve never seen it used correctly. Adding it to the tech stack is always justified with hand waving about scalability.
PunchyHamster 9 hours ago|||
well, redis is a bit of a junk bin of random, barely related tools. It's just very likely that any project of non-trivial complexity will need at least some of them, and I wouldn't necessarily advocate trying to jerry-rig most of them into postgresql like the author of the article does. For example, why would anyone want to waste their SQL DB server's performance on KV lookups?
lstodd 10 hours ago|||
There are data structures in Redis?

They may be its point, but I frankly didn't see much use in the wild. You might argue that then those systems didn't need Redis in the first place and I'd agree, but then note that that is the point tigerdata makes.

edit: it's not about serious uses, it's about typical uses, which are sad (and same with Kafka, Elastic, etc, etc)

yalldidwhat 10 hours ago||
Did someone really downvote the creator of Redis?
antirez 10 hours ago|||
All the time here on HN, and I'm proud of it -- happy to have opinions not necessarily aligned with what users want to hear. Also: never trust the establishment without thinking! ;D
WorkerBee28474 10 hours ago||||
IIRC there was a pre-edit version with snark.
antirez 9 hours ago||
Yes, but the downvotes came later too. I edited it to the exact same content but without the asshole that is in me. Still received downvotes.
WorkerBee28474 7 hours ago||
Delaying votes may be one of HN's anti-manipulation tactics.
evil-olive 10 hours ago|||
I was one of the downvoters, and at the time I downvoted it, it was a very different comment. this is the original (copied from another tab that I hadn't refreshed yet):

> Tell me you don't understand Redis point is data structures without telling me you don't understand Redis point is data structures.

regardless of the author, I think slop of that sort belongs on reddit, not HN.

kachapopopow 6 hours ago||
and if you think it doesn't fit your suitcase? just add an extension and you're good to go (ex: timescaledb)
derefr 10 hours ago||
Something TFA doesn’t mention, but which I think is actually the most important distinction of all to be making here:

If you follow this advice naively, you might try to implement two or more of these other-kind-of-DB simulacra data models within the same Postgres instance.

And it’ll work, at first. Might even stay working if only one of the workloads ends up growing to a nontrivial size.

But at scale, these different-model workloads will likely contend with one-another, starving one-another of memory or disk-cache pages; or you’ll see an “always some little thing happening” workload causing a sibling “big once-in-a-while” workload to never be able to acquire table/index locks to do its job (or vice versa — the big workloads stalling the hot workloads); etc.

And even worse, you’ll be stuck when it comes to fixing this with instance-level tuning. You can only truly tune a given Postgres instance to behave well for one type-of-[scaled-]workload at a time. One workload-type might use fewer DB connections and depend for efficiency on them having a higher `work_mem` and `max_parallel_workers` each; while another workload-type might use many thousands of short-lived connections and depend on them having small `work_mem` so they’ll all fit.
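As an illustration of that tension, two instances tuned for opposite workloads might diverge like this (the numbers are made up; the parameter names are real Postgres settings):

```conf
# analytics instance: few long-running connections, big sorts and joins
max_connections = 40
work_mem = 256MB
max_parallel_workers_per_gather = 4

# hot OLTP instance: many short-lived connections, small per-operation memory
max_connections = 500
work_mem = 4MB
max_parallel_workers_per_gather = 0
```

Since `work_mem` is budgeted per sort/hash operation rather than per server, the first profile applied to the second workload could exhaust RAM outright.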

But! The conclusion you should draw from being in this situation shouldn’t be “oh, so Postgres can’t handle these types of workloads.”

No; Postgres can handle each of these workloads just fine. It’s rather that your single monolithic do-everything Postgres instance maybe won’t be able to handle this heterogeneous mix of workloads with very different resource and tuning requirements.

But that just means that you need more Postgres.

I.e., rather than adding a different type-of-component to your stack, you can just add another Postgres instance, tuned specifically to do that type of work.

Why do that, rather than adding a component explicitly for caching/key-values/documents/search/graphs/vectors/whatever?

Well, for all the reasons TFA outlines. This “Postgres tuned for X” instance will still be Postgres, and so you’ll still get all the advantages of being able to rely on a single query language, a single set of client libraries and tooling, a single coherent backup strategy, etc.

Where TFA’s “just use Postgres” in the sense of reusing your Postgres instance only scales if your DB is doing a bare minimum of that type of work, interpreting “just use Postgres” in the sense of adding a purpose-defined Postgres instance to your stack will scale nigh-on indefinitely. (To the point that, if you ever do end up needing what a purpose-built-for-that-workload datastore can give you, you’ll likely be swapping it out for an entire purpose-defined PG cluster by that point. And the effort will mostly serve the purpose of OpEx savings, rather than getting you anything cool.)

And, as a (really big) bonus of this approach, you only need to split PG this way where it matters, i.e. in production, at scale, at the point that the new workload-type is starting to cause problems/conflicts. Which means that, if you make your codebase(s) blind to where exactly these workloads live (e.g. by making them into separate DB connection pools configured by separate env-vars), then:

- in dev (and in CI, staging, etc), everything can default to happening on the one local PG instance. Which means bootstrapping a dev-env is just `brew install postgres`.

- and in prod, you don’t need to pre-build with new components just to serve your new need. No new Redis instance VM just to serve your so-far-tiny KV-storage needs. You start with your new workload-type sharing your “miscellaneous business layer” PG instance; and then, if and when it becomes a problem, you migrate it out.
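A sketch of that env-var split (the variable names and hosts are hypothetical):

```conf
# dev / CI: every "logical datastore" points at the one local Postgres
APP_DB_URL=postgres://localhost/app
QUEUE_DB_URL=postgres://localhost/app
CACHE_DB_URL=postgres://localhost/app

# prod, after the queue workload starts causing contention:
# only this one variable changes; the code is none the wiser
QUEUE_DB_URL=postgres://queue-pg.internal/app
```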

otabdeveloper4 10 hours ago||
No thanks. In 2026 I want HA and replication out of the box without the insanity.
eikenberry 9 hours ago||
Came to say the same thing. Personally I'd only touch Postgres in a couple cases.

1. Downtime doesn't matter.
2. You're paying someone else (e.g. AWS) to manage redundancy and fail-over.

It just feels crazy to me that Postgres still doesn't have a native HA story since I last battled with this well over a decade ago.

nrvn 10 hours ago|||
Exactly my thoughts immediately after reading the word “just”. Also, PITR.
groundzeros2015 9 hours ago||
You exceeded the step of maxing out the best server you can buy?
throwaway7783 9 hours ago||
HA is not about exceeding the limits of a server. It's about still serving traffic when that best server I bought goes offline (or has a failed memory chip, or a disk, or...).
groundzeros2015 8 hours ago||
Replication?
lima 8 hours ago||
Postgres replication, even in synchronous mode, does not maintain its consistency guarantees during network partitions. It's not a CP system - I don't think it would actually pass a Jepsen test suite in a multi-node setup[1]. No amount of tooling can fix this without a consensus mechanism for transactions.

Same with MySQL and many other "traditional" databases. It tends to work out because these failures are rare and you can get pretty close with external leader election and fencing, but Postgres is NOT easy (likely impossible) to operate as a CP system according to the CAP theorem.

There are various attempts at fixing this (Yugabyte, Neon, Cockroach, TiDB, ...) which all come with various downsides.

[1]: Someone tried it with Patroni and failed miserably, https://www.binwang.me/2024-12-02-PostgreSQL-High-Availabili...

JoshPurtell 8 hours ago||
It's 2026, just use Planetscale Postgres
nickmonad 8 hours ago|
Unless you're doing OLTP. Then, TigerBeetle ;)