
Posted by turtles3 6 hours ago

It's 2026, Just Use Postgres (www.tigerdata.com)
382 points | 205 comments
bluepuma77 4 hours ago|
Now we only need easy self-hosted Postgres clustering for HA. Postgres seems to need additional tooling. There is Patroni, which doesn't provide container images. There is Spilo, which provides Postgres images with Patroni, but they are not really maintained. There is a timescaledb-ha image with Patroni, but no documentation on how to use it. It seems the only easy way to host a Postgres cluster is to use CloudNativePG, but that requires k8s.

It would be awesome to have easy clustering built in directly. Similar to MongoDB, where you tell the primary instance to use a replica set, then simply connect two secondaries to the primary, done.
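For reference, the MongoDB flow is roughly this (a hedged sketch from mongosh; the replica set name and hostnames are placeholders, and it assumes three mongod instances already started with the same --replSet name):

```javascript
// on the intended primary, in mongosh
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "db1:27017" },
    { _id: 1, host: "db2:27017" },
    { _id: 2, host: "db3:27017" }
  ]
})
rs.status()  // verify one PRIMARY and two SECONDARY members
```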

lima 3 hours ago|
Postgres is not a CP database, and even with synchronous replication, it can lose writes during network partitions. It would not pass the Jepsen test suite.

This is very hard to fix and requires significant architectural changes (like Yugabyte or Neon have done).

mhh__ 2 hours ago||
I like "just use postgres" but postgres is getting a bit long in the tooth in some ways, so I'm pretty hopeful that CedarDB sticks the landing.

https://cedardb.com/

I suspect it not being open source may prevent a certain level of proliferation unfortunately.

samuelknight 5 hours ago||
Skeptical about replacing Redis with a table serialized to disk. The point of Redis is that it is in memory and you can smash it with hot-path queries while taking a lot of load off the backing DB. Also, that design requires a cron job, which means the table could fill the disk between key purges.
dry_soup 5 hours ago|
From the article, using UNLOGGED tables puts them in memory, not on disk
samuelknight 4 hours ago||
I think the article is wrong. UNLOGGED means the table isn't written to the WAL, which means recovery and rollback guarantees won't hold, since the transaction can finish before the page is synchronized to disk. The table loses integrity as a trade-off for faster writes.

https://www.postgresql.org/docs/current/sql-createtable.html...
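Per the linked docs, UNLOGGED skips the WAL, so the table is fast but is truncated after a crash or unclean shutdown, and is not replicated. A minimal SQL sketch of the cache-table design under discussion (table and column names are hypothetical):

```sql
-- fast, not crash-safe: truncated on crash recovery, skipped by replication
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb,
    expires_at timestamptz NOT NULL
);

-- hypothetical purge, run periodically from cron or pg_cron
DELETE FROM cache WHERE expires_at < now();
```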

lucas1068 5 hours ago||
I've found that Postgres consumes (by default) more disk than, for example, MySQL. And the difference is quite significant. That means more money I have to pay every month. But sure, Postgres seems like a system that integrates a lot of subsystems, and that adds a lot of complexity too. I'm just marking the bad points because you mention the good points in the post. You're also trying to sell your service, which is good too.
benjiro 4 hours ago||
The problem is that Postgres uses something like 24 B of overhead per row. That is not an issue with small tables, but when you have a few billion rows around, each byte starts to add up fast. Then you need link tables that explode that number even more, etc. It really eats a ton of space.

At some point you end up with binary columns and custom encoded values, to save space by reducing row count. Kind of doing away with the benefits of a DB.
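Back-of-the-envelope on that overhead (Postgres's per-tuple header is 23 bytes, padded to 24; the row count here is just illustrative arithmetic):

```python
# rough cost of Postgres's fixed per-tuple header at scale
HEADER_BYTES = 24          # 23-byte header, padded to 8-byte alignment
rows = 5_000_000_000       # hypothetical 5 billion rows
overhead_gib = rows * HEADER_BYTES / 2**30
print(f"{overhead_gib:.0f} GiB of header overhead")  # ~112 GiB
```

That is before any per-column padding or index overhead, which is why people end up packing values into binary columns.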

direwolf20 1 hour ago||
Yeah postgres and mariadb have some different design choices. I'd say use either one until it doesn't work for you. One of the differences is the large row header in postgres.
PunchyHamster 4 hours ago|||
On the flip side, restore from a plain PostgreSQL dump is much, much faster than from a plain MySQL dump. There are alternative strategies for MySQL, but that's extra work.
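For what it's worth, Postgres also has a faster path than the plain-text dump: the custom format can be restored in parallel (sketch only; the database and file names are placeholders, and this needs a running server):

```shell
pg_dump -Fc -f app.dump appdb        # custom-format (compressed) dump
pg_restore -j 8 -d appdb app.dump    # restore with 8 parallel workers
```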
EvanAnderson 5 hours ago||
Some people do Postgres on compressed ZFS volumes to great success.
olavgg 5 hours ago|||
On average I get around 4x compression on PostgreSQL data with zstd-1
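A hedged sketch of that setup (pool/dataset names and mountpoint are placeholders; recordsize is a common tuning for Postgres's 8 KiB pages, not a requirement):

```shell
zfs create -o compression=zstd-1 -o recordsize=16k \
    -o mountpoint=/var/lib/postgresql tank/pgdata
zfs get compressratio tank/pgdata    # check achieved compression
```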
ddtaylor 5 hours ago|||
I am curious if you know anyone using Btrfs for this too. I like ZFS, but if Btrfs can do this it would be easier to use with some distros, etc., as it's supported in-kernel.
riku_iki 4 hours ago||
I do it.

The big problem for me with running a DB on Btrfs is that when I delete large dirs or files (100 GB+), it locks the disk system, and the DB basically stops responding to any queries.

I am very surprised that a FS which is considered prod-grade has this issue.

jlundberg 4 hours ago||
Try XFS if you haven't yet.

Very solid and no such issues.

ddtaylor 3 hours ago|||
I haven't used XFS in almost two decades; does it have compression support in the same way? Also, does it do JBOD stuff? I know it's a bit of a different thing, but I really enjoy the pooling-many-disks-together part of Btrfs, although it has its limitations.
EvanAnderson 3 hours ago||
XFS doesn't have inline compression, nor does it have volume management functionality. It's a nice filesystem (and it's very fast) but it's just a filesystem.
riku_iki 3 hours ago|||
No compression.
jb3689 2 hours ago||
It irks me that these "just use Postgres" posts only talk about feature sets, with no discussion of operations, reliability, real scaling, or even just guard rails and opinions to deter you from making bad design decisions. The author writes about how three nines gets multiplied over several dependencies, but that's not how this shakes out in practice. Your relational database is typically far more vulnerable than distributed alternatives. "Just use Postgres" is fine advice but gets used as a crutch by companies who wind up building everything in-house for no good reason.
the_arun 3 hours ago||
I am looking for a DB that runs on existing json/yaml/csv files, saves data back to those files in a directory, which I can sync using Dropbox or whatever shared storage. Then I can run this DB wherever I am and run the application. Postgres feels like a bit much for my needs.
realslimjd 2 hours ago||
It feels like you want SQLite?
TheRealPomax 3 hours ago||
Why? Why would separate json/yaml/csv files be better than just... syncing using postgres itself? You point `psql` to the host you need, because clearly you have internet access even on the go, and done: you don't need to sync anything, you already have remote access to the database?
kibibu 5 hours ago||
Blog posts, like academic papers, should have to divulge how AI has been used to write them.
pigbearpig 49 minutes ago||
Even blog post is generous. This is an ad.
tallytarik 5 hours ago|||
Yes this is clearly verbatim output from an LLM.

But it's perfect HN bait, really. The title is spicy enough that folks will comment without reading the article (more so than usual), and so it survives a bit longer before being flagged as slop.

ddtaylor 5 hours ago||
Is HN guidelines to flag AI content? I am unsure of how flagging for this is supposed to work on HN and have only ever used the flag feature for obvious spam or scams.
furyofantares 4 hours ago||
It might be wrong, but I have started flagging this shit daily. Garbage articles that waste my time as a person who comes on here to find good articles.

I understand that reading the title and probably skimming the article makes it a good jumping off point for a comment thread. I do like the HN comments but I don't want it to be just some forum of curious tech folks, I want it to be a place I find interesting content too.

ddtaylor 3 hours ago||
I agree. It seems this is kind of a Schelling point right now on HN and there isn't a clear guideline yet. I think your usage of flagging makes sense. Thanks
wmf 4 hours ago|||
People used to write Medium/Linkedin slop by hand and they didn't have to disclose it. Slopping is its own punishment.
itisit 5 hours ago|||
Granted it's often easy to tell on your own, but when I'm uncertain I use GPTZero's Chrome extension for this. Eventually I'll stop doing that and assume most of what I read outside of select trusted forums is genAI.
bitwize 4 hours ago||
You're absolutely right! Let's delve into why blog posts like this highlight the conflict between the speed and convenience of AI and authentic human expression. Because it's not just about fears of hallucination—it's about ensuring the author's voice gets heard. Clearly. Completely. Unmistakably.
irishcoffee 4 hours ago||
If nothing else, I sure got amusement from this.
vb-8448 5 hours ago||
It's the 5th of Feb 2026, and we already get our monthly "just use postgres" thread

btw, big fan of postgres :D

TheAceOfHearts 5 hours ago||
I'll take it one step further and say you should always ask yourself if the application or project even needs a beefy database like Postgres, or if you can get by with SQLite. For example, I've found a few self-hosted services that overcomplicated their setup and deployment because they picked Postgres or MariaDB over SQLite, despite SQLite being a much better self-contained solution.
nikisweeting 2 hours ago||
SQLite is great until you try to do any kind of multi-writer stuff. There's no SELECT FOR UPDATE locking and no parallel write support, so if any of your writes take more than a few ms you end up having to manage queueing at the application layer, which means you end up building your own concurrency-safe multi-writer queue anyway.
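A small sketch of that single-writer behavior with Python's stdlib sqlite3 (the file path, timeouts, and table are arbitrary):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
# isolation_level=None means autocommit, so the explicit BEGINs below work
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

a.execute("BEGIN IMMEDIATE")  # connection a takes the single write lock
a.execute("INSERT INTO kv VALUES ('x', '1')")
try:
    b.execute("BEGIN IMMEDIATE")  # second writer waits, then errors out
    locked = False
except sqlite3.OperationalError:  # "database is locked"
    locked = True
a.execute("COMMIT")

print(locked)  # True: only one write transaction at a time
```

WAL mode improves concurrency for readers, but there is still only ever one writer.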
timeinput 5 hours ago||
I find that if I want to use JSON storage I'm somewhat stuck choosing my DB stack. If I want to use JSON, and change my database from SQLite to Postgres I have to substantially change my interface to the DB. If I use only SQLite, or only Postgres it's not so bad, but the transition cost to "efficient" JSON use in Postgres from a small demo in SQLite is kind of high compared to just starting with an extra docker run (for a Postgres server) / docker compose / k8s yaml / ... that has my code + a Postgres database.

I really like having some JSON storage because I don't know my schema up front all the time, and just shoving every possible piece of potentially useful metadata in there has (generally) not bitten me, but not having that critical piece of metadata has been annoying (that field that should be NOT NULL is NULL because I can't populate it after the fact).
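As a concrete example of the interface gap (table and data here are made up, and this assumes a SQLite build with the JSON1 functions, which is standard in modern builds): the same lookup is json_extract in SQLite but ->> on jsonb in Postgres.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, meta TEXT)")
con.execute("""INSERT INTO events (meta) VALUES ('{"user": "ada", "n": 3}')""")

# SQLite spelling of the lookup...
user = con.execute(
    "SELECT json_extract(meta, '$.user') FROM events"
).fetchone()[0]
print(user)  # ada

# ...vs. the Postgres spelling of the same query (not run here):
#   SELECT meta->>'user' FROM events;   -- with meta as a jsonb column
```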

Olshansky 2 hours ago|
https://github.com/Olshansk/postgres_for_everything/