Sacrificing some convenience? Probably. But POSIX shell and coreutils are the last truly stable interface. After ~12 years of doing this, I got sick of tool churn.
FreeBSD with jails is so easy to maintain it's unbelievable.
I have now switched from some SaaS products to self-hosted alternatives:
- Feed reader: Feedly to FreshRSS.
- Photo management/repository: Google Photos to Immich.
- Bookmarks: Mozilla's Pocket to Hoarder.
And so far, my experience has been simple and awesome!
Don't get me wrong, I love some of the software suggested. But this is yet another post that doesn't take backups as seriously as the rest of the self-hosting stack.
Backups are stuck in 2013. We need plug-and-play backups for containers! No more rolling your own with ZFS datasets, backing up data at the filesystem level (using sanoid/syncoid, or any alternative, to manage snapshots).
Each VM or container gets a data mount on a zvol. Containers go on an OS mount, and each OS has its own volume (so most VMs end up with two volumes attached).
Fully automated, incremental, verified backups, and restoring is one click of a button.
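For the record, the roll-your-own sanoid/syncoid flow referred to above is short; here's a minimal sketch assuming a pool named tank, a dataset per VM, and a backup host reachable over SSH (all names hypothetical):

    # sanoid takes/prunes snapshots per its policy file (run from cron or a systemd timer)
    sanoid --take-snapshots --prune-snapshots
    # syncoid replicates snapshots incrementally to the backup host over SSH
    syncoid tank/vms/web-data backup@backuphost:tank/backups/web-data

It's automated and incremental, but verification and one-click restore are still yours to build, which I think is the point.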
I find it strange that Docker, which already knows your volumes, app data, and config, can't automatically back up and restore databases and configs. Jeez, they could have built it right into Docker.
One could set up a Docker Compose service that uses rclone to gzip and back up your Docker volumes to something durable, as sketched below. An even more advanced version would automate testing the backups by restoring them into a clean environment and running some tests with BATS or whatever testing framework you want.
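A minimal sketch of that idea, assuming a pre-configured rclone remote named "b2" (remote and bucket names are made up):

    #!/bin/sh
    # Archive each named Docker volume and push it to the rclone remote.
    set -eu
    for vol in $(docker volume ls -q); do
        docker run --rm -v "$vol":/data:ro -v "$PWD":/backup alpine \
            tar czf "/backup/$vol-$(date +%F).tar.gz" -C /data .
        rclone copy "$PWD/$vol-$(date +%F).tar.gz" "b2:my-backups/$vol/"
    done

Run that from cron and you have the 80% version; the restore-and-test half is the part nobody gets around to.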
Meaning you'd need to upload full snapshots on a fixed interval.
For most apps, the answer is usually "use a database that correctly saves data."
I'm curious as to what issues you might be alluding to!
Nix (and I recently adopted deploy-rs to ensure I keep SSH access across upgrades, for rollbacks or other troubleshooting) makes experimenting a breeze. Rolling back to a working environment becomes trivial, which frees you up to just try stuff. Plus, things are reproducible, so you can try something on a different set of machines before going to "prod" if you want.
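Day to day that workflow looks roughly like this (host name is hypothetical):

    # deploy-rs activates the new generation and rolls back automatically
    # ("magic rollback") if it can't re-establish the SSH connection
    deploy .#myserver
    # manual rollback to the previous NixOS generation, if needed
    nixos-rebuild switch --rollback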
I haven't needed to tune self-hosted databases. They do fine for low load on cheap hardware from 10 years ago.
So if you're using filesystem snapshots as the source of backups for the database, then I agree, you _should_ be good. The regular pg_dumps are a workaround for other cases for me.
I remember spending time on this as a teenager, but I haven't touched my MariaDB config in probably a decade now. Ah no, one time a few years ago I temporarily turned off fsyncing to do a huge batch of insertions (it helped a lot with QPS, especially on the HDD I used at the time), but that's not something to leave enabled permanently, so not really tuning it for production use.
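For anyone curious, that fsync trick is one InnoDB setting (credentials and file names here are made up):

    # relax per-commit flushing for a one-off bulk load, then restore it
    mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 0;"
    mysql mydb < huge_batch_of_inserts.sql
    mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1;"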
IIRC the default shared_buffers is 128MB, and the usual recommendation is around 25% of system RAM (with effective_cache_size set to 50-75%).
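For example, on a box with 16GB of RAM you might do something like this (values are illustrative, not a recommendation for your workload):

    psql -U postgres -c "ALTER SYSTEM SET shared_buffers = '4GB';"
    psql -U postgres -c "ALTER SYSTEM SET effective_cache_size = '12GB';"
    # shared_buffers only takes effect after a restart
    systemctl restart postgresql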
Bitnami PostgreSQL Helm chart - https://github.com/bitnami/charts/tree/main/bitnami/postgres...
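The classic install workflow for that chart, for reference (release name is made up):

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install my-postgres bitnami/postgresql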
Not sure about "easy" backups besides just running pg_dump on a cron, but it's not very space-efficient (each backup is a full backup; there's no incremental).
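The cron approach, for what it's worth (database name and paths are placeholders; -Fc gives you a compressed custom-format dump, but each run is still a full copy):

    # crontab entry: nightly dump at 03:00 (note % must be escaped in cron)
    0 3 * * * pg_dump -Fc mydb > /backups/mydb-$(date +\%F).dump

If you want real incrementals, you're looking at WAL archiving or a tool like pgBackRest rather than pg_dump.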
(Docker Swarm only for now, though I’m thinking about adding k8s later this year)
Cloudflare Tunnels is a step in the right direction, but it's not end-to-end encrypted.
The question is then, how to secure self hosted apps with minimal configuration, in a way that is almost bulletproof?
It's easy to manage and reason about.
If the software you host constantly has vulnerabilities and something like "apt install unattended-upgrades" doesn't resolve them, maybe the software simply isn't fit for hosting, no matter what team you put on it. That hired team might as well spend its time making the software secure rather than "keeping on top of vulnerabilities".
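For reference, on Debian/Ubuntu that's the whole setup (the dpkg-reconfigure step just switches on the periodic-update config):

    apt install unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades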
The solution is secure software in front. It could be WireGuard, but sometimes you don't know your users, or they don't want to install anything.
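The WireGuard route really is just a couple of commands on the server side (paths assume the usual wg-quick layout):

    # generate a server keypair
    wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
    # bring up the tunnel described in /etc/wireguard/wg0.conf
    wg-quick up wg0

But yes, it still means every user installs a client and gets a key, which is exactly the friction being described.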
While it's not nearly as powerful as say DataDog, it provides the core essentials of CPU, memory, disk, network, temperature and even GPU monitoring (via agent only).