Posted by pavel_lishin 4 days ago
... but you can do a lot with just "a single VM and robust backups". A PostgreSQL restore is pretty fast, and if you've automated deployment you can stand everything back up in minutes. So if your service can survive 30 minutes of downtime once every 3 years while the DB reloads, "downgrading" to "a single cloud VM" or "a single VM on your own hardware" might not be a big deal.
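For illustration, the recovery path being described is roughly this, assuming a nightly pg_dump shipped to object storage (database name and bucket are made up):

    # Nightly: custom-format dump, copied off the box.
    pg_dump -Fc -d myapp -f /var/backups/myapp.dump
    aws s3 cp /var/backups/myapp.dump s3://my-backups/myapp.dump

    # Disaster recovery on a freshly provisioned VM: fetch and reload.
    aws s3 cp s3://my-backups/myapp.dump /tmp/myapp.dump
    createdb myapp
    pg_restore -d myapp /tmp/myapp.dump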
Disclaimer: there's no silver bullet, yadda yadda. But SQLite in WAL mode and backups using Litestream have worked perfectly for me.
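In case it helps anyone, that setup is only a couple of commands (the bucket URL is hypothetical): put the database in WAL mode once, run Litestream alongside the app, and restore from the replica after a loss.

    # One-time: switch the database to WAL mode.
    sqlite3 app.db 'PRAGMA journal_mode=WAL;'

    # Continuously replicate changes to object storage while the app runs.
    litestream replicate app.db s3://my-bucket/app.db

    # After losing the VM: pull the latest replica back down.
    litestream restore -o app.db s3://my-bucket/app.db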
I was disappointed that Alloy doesn't support TimescaleDB as a metrics endpoint. I'm considering switching to Telegraf just so I can store the metrics in Postgres.
SQLite when prototyping, Postgres for production.
If you need to power a lawnmower and all you have is a 500bhp Scania V8, you may as well just do it.
Even better, stage to a production-like environment early, and then deploy day can be as simple as a DNS record change.
I have switched to using Postgres even for prototyping, once I had prepared some shell scripts for the various setup steps. With Hibernate (Java) or Knex (JavaScript/Node.js), and with unit tests for the code (a test-driven development approach), I feel I've reduced the friction of using Postgres from the beginning.
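The scripts don't have to be fancy. A sketch of the kind of thing I mean, using Docker so the database is as disposable as a SQLite file (container and database names are placeholders):

    # Spin up a throwaway Postgres for the prototype.
    docker run -d --name proto-pg \
      -e POSTGRES_PASSWORD=dev \
      -e POSTGRES_DB=proto \
      -p 5432:5432 \
      postgres:16

    # Tear it down, data and all, when the schema changes shape.
    docker rm -f proto-pg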
Because it's "low effort" to just fire it into SQLite, and it doesn't matter if I have to do ridiculous things to the schema as I footer around working out exactly what I want the database to do.
I don't want to use Node.js if I can possibly avoid it, and you literally could not pay me to even look at Java; there isn't enough money in the world.
Incidentally, you can rsync Postgres dumps as well. That's what I do when testing and when sharing test data with teammates. At times, I load the database dump into a different target system.
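Concretely, something like this (hostnames and database names are made up):

    # Dump once, in custom format (compressed, restorable selectively).
    pg_dump -Fc -d myapp -f myapp.dump

    # Share it with a teammate over rsync.
    rsync -avz myapp.dump teammate-host:/tmp/

    # Or load the local dump straight into a different target system.
    pg_restore -h test-host -d myapp_test myapp.dump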
My reason for sharing: I accepted that I was being lethargic about using postgres, so I just automated certain things as I went along.
For most purposes, PGlite works perfectly fine, but with two main caveats:
1. It is single-user, single-connection (i.e. no MVCC)
2. It doesn't support all Postgres extensions (notably PostGIS), though it does support pgvector
https://github.com/supabase-community/pg-gateway might be a way to use PGlite for prototyping, I guess, but I haven't used it.
An expert will give you thousands of theoretical reasons why self-hosting the DB is a bad idea.
An "expert" will host it, enjoy the cost savings and deal with the once-a-year occurrence of the theoretical risk (if it ever occurs).
If you're a small business, you almost never need replicas or pooling. Postgres is insanely capable on modern hardware, and is probably the fastest part of your stack if your application is written in a slower dynamic language like Python.
I once worked with a company that scaled to 30M in annual revenue and never once needed more than a single dedicated server for Postgres.
And Postgres upgrades are not transparent, so every 6 to 18 months you'll have a 1-2 hour task whose timing you only partly control. That's fine for a lot of people, and completely unthinkable for some others.
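For reference, the non-transparent part is the major-version pg_upgrade dance. Paths below are illustrative (a Debian-style layout), and a real run usually needs a few more config flags:

    # With both clusters stopped, run pg_upgrade from the new binaries.
    /usr/lib/postgresql/16/bin/pg_upgrade \
      --old-bindir /usr/lib/postgresql/15/bin \
      --new-bindir /usr/lib/postgresql/16/bin \
      --old-datadir /var/lib/postgresql/15/main \
      --new-datadir /var/lib/postgresql/16/main \
      --link    # hard-link instead of copying, to shorten the window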