Posted by jandeboevrie 7 days ago
https://kubernetes-csi.github.io/docs/developing.html
There are four gRPC calls listed in the overview; that literally is all you need.
When restoring from backup I went with Rook (a Kubernetes operator that wraps Ceph) instead, and it's been much more stable, even able to recover (albeit with some manual intervention) from a total node hardware failure.
So far things are running well, but I can't shake this fear that I am in for a rude awakening and I lose everything. I have backups, but the recovery will be painful if I ever have to do it.
I will have to take a look at Rook; I'm not that committed yet (I've only moved two things over), so switching is still on the table.
I have a 15TB volume for video storage, and it can't complete any replica rebuilds. It always fails at some point and then tries to restart.
I think I will likely keep most of my storage set up with a StorageClass that uses my NFS server as backing storage, with Longhorn reserved for the things that need to be faster, like databases. I moved Jellyfin over to Longhorn and it went from borderline unusable while metadata was being grabbed to actually working well.
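The NFS-backed StorageClass described above can be sketched roughly like this. This is a hedged example, not the commenter's actual config: it assumes the community nfs-subdir-external-provisioner is already installed, and the class name, provisioner string, and parameters are placeholders to verify against your own install.

```shell
# Hypothetical example: an NFS-backed StorageClass for bulk storage.
# Assumes the nfs-subdir-external-provisioner is already deployed; the
# provisioner name below is that chart's common default, but check yours.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-bulk
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
  onDelete: retain          # keep data on the NFS share when the PVC goes away
reclaimPolicy: Retain
EOF
```

Latency-sensitive workloads like the databases would then simply request the Longhorn class instead (e.g. `storageClassName: longhorn` in the PVC spec), while everything bulky lands on NFS.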
I can't imagine my biggest volume being more than 100GB, and even that is likely a major overestimation on my part.
You need a separate storage LAN, and a seriously beefy one at that, to use Longhorn. Even 25Gbit was not enough to keep volumes from being corrupted.
When rebuilds take too long, Longhorn fails, crashes, hangs, etc.
We will never make the mistake of using Longhorn again.
Allowing anyone to delete all your data is not great. When I found this I gave up on Longhorn and installed Ceph.
It's only been in development for, what, five years at this point? =) I have no horse in this race, but it seems to me OpenEBS will close the gap sooner.
[0] https://lobste.rs/s/vmardk/longhorn_kubernetes_native_filesy...
[1] https://github.com/democratic-csi/democratic-csi
(a lot of us distrust distributed 'POSIX-like' filesystems for good reasons)
You're going to have to open the image and then go to the third image. I thought it was interesting that OCI pegs Lustre at 8Gb/s while their high-performance FS is rated much higher than that: 20-80Gb/s.
Most resource requirements for Ceph assume you're going for a decently sized cluster, not something homelab sized.
> All you need is a machine, virtual or physical, with two CPU cores, 4GB RAM, and at least two or three disks (plus one disk for the operating system).
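At that homelab scale, a minimal single-node Rook cluster can be sketched roughly as below. This is a hedged sketch, not a production config: it assumes the Rook operator is already running in a `rook-ceph` namespace, and the image tag and field values are assumptions to check against the current Rook docs.

```shell
# Hypothetical minimal CephCluster for a small/homelab Rook setup.
# Assumes the Rook operator is already deployed in the rook-ceph namespace.
kubectl apply -f - <<'EOF'
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18    # pin a real tag from the Rook release notes
  dataDirHostPath: /var/lib/rook
  mon:
    count: 1                        # a single mon is fine for a lab, but no HA
    allowMultiplePerNode: true
  storage:
    useAllNodes: true
    useAllDevices: true             # Rook claims every empty, unformatted disk
EOF
```

With two or three data disks on one box this matches the quoted minimum; the usual caveat is that `mon.count: 1` and a single node give you Ceph's features but none of its redundancy.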