
Posted by finnlab 4/1/2025

Self-Hosting like it's 2025 (kiranet.org)
221 points | 229 comments | page 2
cullumsmith 4/1/2025|
Still running everything from my basement using FreeBSD jails and shell scripts.

Sacrificing some convenience? Probably. But POSIX shell and coreutils are the last truly stable interface. After ~12 years of doing this, I got sick of tool churn.

Gud 4/1/2025||
Same. Why add the complexity of Docker/Kubernetes?

FreeBSD and jails are so easy to maintain it's unbelievable.

bitsandboots 4/1/2025||
Not just stable - also easy to understand when, if ever, something goes wrong. There's very little magic and very few layers of complexity.
pentagrama 4/2/2025||
As a non-developer, I find it difficult to step into the self-hosting world. What I recommend for people like me is the service PikaPods [1], which takes care of the hard part of self-hosting.

I have now switched from some SaaS products to self-hosted alternatives:

- Feed reader: Feedly to FreshRSS.

- Photo management/repository: Google Photos to Imich or Imgich (don't remember).

- Bookmarks: Mozilla's Pocket to Hoarder.

And so far, my experience has been simple and awesome!

[1] https://www.pikapods.com

smjburton 4/1/2025||
This is a great introduction to self-hosting, good job OP. As some of the other comments mentioned, discussion about self-hosted security and the importance of back-ups would be good to include. Also, you link to some great resources for discovering self-hosted applications, but it would be interesting to hear some of the software you enjoy self-hosting outside of core infrastructure. As I'm sure you're aware, self-hosters are always looking for new ideas. :)
FloatArtifact 4/1/2025||
Self-Hosting like it's 2025...uhhgg...

Don't get me wrong, I love some of the software suggested. However, this is yet another post that doesn't take backups as seriously as the rest of the self-hosting stack.

Backups are stuck in 2013. We need plug-and-play backups for containers! No more rolling your own with ZFS datasets, backing up data at the filesystem level (using sanoid/syncoid to manage snapshots, or any alternatives).

nijave 4/1/2025||
Why not zfs snapshots? Besides using Hyper-V machine snapshots, that's been the easiest way, by far, for me. No need to worry about the 20 different proprietary tools that go with each piece of software.

Each VM or container gets a data mount on a zvol. Containers go on the OS mount, and each OS has its own volume (so most VMs end up with 2 volumes attached).

FloatArtifact 4/3/2025||
Well, one argument not to use ZFS is simply the resources it takes: it eats up a lot of RAM. Also, I'm under the impression that one should never live-snapshot a database without risking corruption.
marceldegraaf 4/1/2025|||
Best decision of last year for my homelab: run everything in Proxmox VMs/containers and back up to a separate Proxmox Backup Server instance.

Fully automated, incremental, verified backups, and restoring is one click of a button.

FloatArtifact 4/1/2025||
Yes, I'm considering that if I can't find a plug-and-play solution for containers that's independent of the OS and filesystem. Although I don't mind something abstracting on top of ZFS, its snapshot paradigm carries mental overhead that can lead to its own complexities. A traditional backup-and-restore front end would be great.

I find it strange that Docker, which already knows your volumes, app data, and config, can't automatically back up and restore databases and configs. Jeez, they could have built it right into Docker.

nunez 4/1/2025||
rclone is great for this.

One could set up a Docker Compose service that uses rclone to gzip and back up your docker volumes to something durable to get this done. An even more advanced version of this would automate testing the backups by restoring them into a clean environment and running some tests with BATS or whatever testing framework you want.
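A minimal sketch of that idea, as a Compose sidecar (the volume name, backup command, and "remote:" target are illustrative, not from the comment; assumes an rclone remote is already configured in rclone.conf):

```yaml
# docker-compose.yml fragment: one-shot backup sidecar (illustrative names)
services:
  backup:
    image: rclone/rclone
    volumes:
      - appdata:/data:ro                             # the volume your app already uses
      - ./rclone.conf:/config/rclone/rclone.conf:ro  # default config path in this image
    entrypoint: ["/bin/sh", "-c"]
    # tar+gzip the volume, then copy the archive to the configured remote
    command: >
      tar czf /tmp/appdata.tar.gz -C /data .
      && rclone copy /tmp/appdata.tar.gz remote:backups/
volumes:
  appdata:
```

Run it on a schedule with cron or a systemd timer (docker compose run backup); it exits after the copy completes.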

nijave 4/1/2025||
Rclone won't take a consistent snapshot, so you either need to shut down the thing or use some other tool to export the data first.
auxym 4/1/2025||
zfs/btrfs snapshot and then rclone that snapshot?
nijave 4/1/2025||
I think that'd break deleting incremental snapshots unless you tried uploading a gigantic blob of the entire filesystem, wouldn't it?

Meaning you'd need to upload full snapshots on a fixed interval

hankchinaski 4/1/2025||
The only thing that holds me back from self-hosting is Postgres. Has anyone managed to get a rock-solid self-managed Postgres setup? Backups + tuning?
homebrewer 4/1/2025||
Put it on a zfs dataset and back up data on the filesystem level (using sanoid/syncoid to manage snapshots, or any of their alternatives). It will be much more efficient compared to all other backup strategies with similar maintenance complexity.
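For reference, a minimal sketch of what that looks like with sanoid (the dataset name and retention counts are illustrative): sanoid snapshots on a schedule per its config, and syncoid replicates the snapshots elsewhere.

```conf
# /etc/sanoid/sanoid.conf -- dataset name is illustrative
[tank/pgdata]
        use_template = production

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

Replication is then a single cron line along the lines of: syncoid tank/pgdata backuphost:tank/backups/pgdata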
candiddevmike 4/1/2025||
Filesystem backups may not be consistent and may lose transactions that haven't made it to the WAL. You should always try to use database backup tools like pg_dump.
tpetry 4/1/2025|||
Transactions that haven't been written to the WAL yet are also lost when the server crashes or when you run pg_dump. Stuff not in the WAL isn't safe by any means; it's still a transaction in progress.
nijave 4/1/2025||||
If a filesystem backup isn't consistent, the app isn't using sync correctly and needs a bug report. No amount of magic can work around an app that wants to corrupt data.

For most apps, the answer is usually "use a database" that correctly saves data.

nz 4/1/2025|||
Entire companies have been built around synchronizing the WAL with ZFS actions like snapshot and clone (i.e. Delphix and probably others). Would be cool to have `zpgdump` (single-purpose, ZFS aware equivalent).
lytedev 4/1/2025|||
I self-host Postgres at home and am probably screwing it up! I do at least have daily backups, but tuning is something I have given very little thought to. At home, traffic doesn't cause much load.

I'm curious as to what issues you might be alluding to!

Nix (and I recently adopted deploy-rs to ensure I keep SSH access across upgrades for rolling back or other troubleshooting) makes experimenting really just a breeze! Rolling back to a working environment becomes trivial, which frees you up to just try stuff. Plus things are reproducible so you can try something with a different set of machines before going to "prod" if you want.

swizzler 4/1/2025|||
I was using straight filesystem backups for a while, but I knew they could be inconsistent. Since then, I've set up https://github.com/prodrigestivill/docker-postgres-backup-lo..., which regularly dumps a snapshot to the filesystem, which regular filesystem backups can consume. The README has restore examples, too.

I haven't needed to tune selfhosted databases. They do fine for low load on cheap hardware from 10 years ago.

nijave 4/1/2025||
Inconsistent how? Postgres can recover from a crash or loss of power which is more-or-less the same as a filesystem snapshot
pedantsamaritan 4/1/2025||
Getting my backup infrastructure to behave the way I'd want with filesystem snapshots (e.g. ZFS or btrfs snapshots) was not trivial. (I think the hurdle was my particularity about the path prefix that was getting backed up.) Write-once pg_dumps could still have race conditions, but considerably fewer.

So, if you're using filesystem snapshots as the source of backups for the database, then I agree, you _should_ be good. The regular pg_dumps are a workaround for other cases for me.

Aachen 4/1/2025|||
Why would tuning be necessary for a regular setup, does it come with such bad defaults? Why not upstream those tunes so it can work out of the box?

I remember spending time on this as a teenager, but I haven't touched my MariaDB config in probably a decade now. Ah no, one time a few years ago I turned off fsyncing temporarily to do a huge batch of insertions (it helped a lot with qps, especially on the HDD I used at the time), but that's not something to leave permanently enabled, so not really tuning for production use.

zrail 4/1/2025||
PostgreSQL defaults (last I looked, it's been a few years) are/were set up for spinning storage and very little memory. They absolutely work for tiny things like what self-hosting usually implies, but for production workloads tuning the db parameters to match your hardware is essential.
nijave 4/1/2025||
Correct, they're designed for maximum compatibility. Postgres doesn't even do basic adjustments out of the box, and the defaults are designed to work on tiny machines.

IIRC the default shared_buffers is 128MB, and it's usually recommended to set it to around 25% of system RAM.
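For scale, a sketch of the handful of settings tuning guides usually touch. The values are illustrative, for a hypothetical 16 GB machine, following the common ~25%-of-RAM rule of thumb for shared_buffers:

```conf
# postgresql.conf -- illustrative values for a 16 GB machine
shared_buffers = 4GB          # default is 128MB; ~25% of RAM is a common rule
effective_cache_size = 12GB   # planner hint: RAM available as OS page cache
work_mem = 64MB               # per sort/hash operation; scales with concurrency
maintenance_work_mem = 1GB    # vacuum, index builds
```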

nodesocket 4/1/2025|||
I run a few PostgreSQL instances in containers (Kubernetes via the Bitnami Helm chart). I know running stateful databases this way is generally not best practice, but for development/homelab tinkering it works great.

Bitnami PostgreSQL Helm chart - https://github.com/bitnami/charts/tree/main/bitnami/postgres...

nijave 4/1/2025|||
https://pgtune.leopard.in.ua/ is a pretty good start. There's a couple other web apps I've seen that do something similar.

Not sure about "easy" backups beyond just running pg_dump from cron, but that's not very space-efficient (each backup is a full backup; there's no incremental).
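A hedged sketch of that cron approach, with a simple retention helper so full dumps don't pile up forever (the database name, paths, and retention count are illustrative):

```shell
# prune_old_dumps: keep only the newest $2 dump files under directory $1
prune_old_dumps() {
    dir=$1
    keep=$2
    # list dumps newest-first, then delete everything past the first $keep
    ls -1t "$dir"/*.dump 2>/dev/null | tail -n +"$((keep + 1))" | while IFS= read -r f; do
        rm -- "$f"
    done
}

# Illustrative nightly job (crontab), assuming a database named mydb:
# pg_dump -Fc mydb > /var/backups/pg/mydb-$(date +%Y%m%d).dump
# prune_old_dumps /var/backups/pg 7
```

The -Fc (custom) format is compressed and restorable with pg_restore, which helps a bit with the space cost even though each dump is still a full backup.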

mfashby 4/1/2025|||
I've got an OpenBSD server, Postgres installed from the package manager, and a couple of apps running with that as the database. My backup process just stops all the services, backs up the filesystem, then starts them again. Downtime is acceptable when you don't have many users!
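That stop/backup/start flow can be sketched roughly as follows (service names and paths are hypothetical; rcctl is assumed since the parent mentions OpenBSD):

```shell
#!/bin/sh
# cold_backup.sh -- stop services, archive data, restart (illustrative sketch)
set -eu

services="myapp postgresql"   # hypothetical service list

for s in $services; do rcctl stop "$s"; done

# Any filesystem backup works here while everything is quiesced
tar czf "/var/backups/site-$(date +%Y%m%d).tar.gz" /var/postgresql /var/www

for s in $services; do rcctl start "$s"; done
```

Stopping the database first is what makes the plain filesystem copy consistent; the tradeoff is the brief downtime the comment mentions.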
candiddevmike 4/1/2025||
What is your RTO/RPO?
nijave 4/1/2025|||
RTO: best effort. RPO: wait, you guys have backups and test them??
orthoxerox 4/1/2025|||
1s/0s
qudat 4/1/2025||
I’ve been self hosting small web apps using an SSH service https://tuns.sh and it helped me replace my DO droplet pretty successfully. What’s nice is it also has built-in usage analytics, alert notifications when a tunnel goes down, and fully manageable with a remote TUI which means I don’t have to install anything to use it.
notpushkin 4/1/2025||
I feel like I’ve been plugging it way too many times... but if you’re looking for a more humane alternative to Portainer, check out my project, Lunni: https://lunni.dev/

(Docker Swarm only for now, though I’m thinking about adding k8s later this year)

finnlab 4/1/2025|
this looks really cool, I love to see some competition in this space
notpushkin 4/1/2025||
Thank you so much!
aborsy 4/1/2025||
I can self host many applications, but their security must be outsourced to a company. I don’t have time to keep on top of vulnerabilities.

Cloudflare Tunnels is a step in the right direction, but it’s not end to end encrypted.

The question is then, how to secure self hosted apps with minimal configuration, in a way that is almost bulletproof?

interloxia 4/1/2025||
I don't need public access to my stuff, so my strategy is to use ZeroTier, taking care that services are only reachable over the virtual network.

It's easy to manage and reason about.

Aachen 4/1/2025||
> security must be outsourced to a company. I don’t have time to keep on top of vulnerabilities.

If the software you host constantly has vulnerabilities and something like apt install unattended-upgrades doesn't resolve them, maybe the software simply isn't fit for hosting, no matter what team you put on it. That hired team might as well spend some time making it secure rather than "keeping on top of vulnerabilities".

aborsy 4/1/2025|||
The concern is zero days. There are probably lots of easy zero days, patched across a host of software, once discovered in one.

The solution is a secure software in front. It could be Wireguard, but sometimes you don’t know your users or they don’t want to install anything.

nijave 4/1/2025|||
There's only a handful of web apps packaged in the OS repo. Even for wildly popular software like WordPress and Drupal, you need to use their built-in update facilities or apply updates manually, outside the OS update manager.
nodesocket 4/1/2025|
I recently discovered Beszel (https://github.com/henrygd/beszel) for monitoring all my homelab servers. It's quick, easy, and has a very clean and intuitive interface. Works especially well running inside of containers on hosts. I also wrote a very quick guide on getting it running inside of Kubernetes at https://github.com/henrygd/beszel/discussions/431.

While it's not nearly as powerful as, say, Datadog, it provides the core essentials of CPU, memory, disk, network, temperature, and even GPU monitoring (via agent only).

More comments...