Posted by ibobev 12/19/2025
The assumption Garage makes, which is well-documented, is that of 3 replica nodes, only 1 will be in a crash-like situation at any time. With 1 crashed node, the cluster is still fully functional. With 2 crashed nodes, the cluster is unavailable until at least one additional node is recovered, but no data is lost.
In other words, Garage makes a very precise promise to its users, and that promise is fully respected. Database corruption upon power loss falls under the definition of a "crash state", just like a node being offline due to a lost internet connection. We recommend making metadata snapshots so that recovery of a crashed node is faster and simpler, but it's not strictly required: Garage can always start over from an empty database and recover the data from the remaining copies in the cluster.
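For reference, the snapshot part looks roughly like this (command and option names are as I remember them from recent Garage versions, so check the reference manual for yours):

    # one-off snapshot of the metadata database on every node
    garage meta snapshot --all

    # or let Garage do it on a schedule, in garage.toml:
    #   metadata_auto_snapshot_interval = "6h"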
To talk about concrete scenarios: if you have 3 replicas in 3 different physical locations, the assumption of at most one crashed node is pretty reasonable; it's quite unlikely that 2 of the 3 locations will be offline at the same time. As for data corruption on power loss, the probability of losing power at 3 distant sites at the exact same time, with the same data sitting in the write buffers, is extremely low, so I'd say it's not a problem in practice.
Of course, this all implies a Garage cluster running with 3-way replication, which everyone should do.
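For anyone setting this up, a 3-zone layout is only a few commands, roughly (node IDs, zone names and capacities are placeholders, and the exact flags may differ between versions):

    # garage.toml: replication_factor = 3, then assign one node per zone
    garage layout assign -z zone-a -c 1T <node-id-1>
    garage layout assign -z zone-b -c 1T <node-id-2>
    garage layout assign -z zone-c -c 1T <node-id-3>
    garage layout apply --version 1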
The possible outcomes of an unclean shutdown, roughly from best to worst:
- you lose whatever writes to s3 haven't finished yet, if any
- the local node will need to repair itself a bit after rebooting
- the local node is now trashed and will have to copy all data back over
- all the nodes are now trashed and it's restore from backup time
I've been kicking the tyres for a bit and I think Garage lands in the happy case above, but lots of software out there falls apart completely on crashes, so it's not generally a safe assumption. From what I've seen, sqlite on zfs doesn't care about the power cable being pulled; lmdb is a bit further down the list.
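Which metadata engine you test is just a config switch, by the way; something like this, assuming the db_engine option in garage.toml (verify the option name for your version):

    # switch the metadata engine, then restart the node before re-testing
    sed -i 's/^db_engine = .*/db_engine = "sqlite"/' /etc/garage.toml   # or "lmdb"
    systemctl restart garage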
If it's just the write buffer at risk, that's fine. But the chance of overlapping power loss across multiple sites isn't low enough to risk all the existing data.
It's downright stupid if you build a system that loses all existing data when all nodes go down uncleanly, not even simultaneously but just overlapping. What if you just happen to input a shutdown command the wrong way?
I really hope they meant to just say the write buffer gets lost.
Again, I'm not concerned for new writes, I'm concerned for all existing data from the previous months and years.
And getting into this situation only takes one wide outage or one bad push that takes down the cluster. Even if that's stupid, it's a common enough kind of stupid that you should never risk your data on the certainty that you won't make that mistake.
You can't protect against everything, but you should definitely protect against unclean shutdown.
Also, Garage gives you the ability to automatically snapshot the metadata, along with advice on how to do the snapshotting at the filesystem level and how to restore from it.
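On ZFS, for example, that's a one-liner per node (the dataset name here is made up):

    # snapshot the metadata dataset periodically or before risky operations
    zfs snapshot tank/garage-meta@nightly
    # if a node comes back with a corrupted metadata DB, roll it back and
    # let Garage resync whatever changed since the snapshot
    zfs rollback tank/garage-meta@nightly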
How do filesystem-level snapshots help if nodes can get corrupted by power loss? Booting from a snapshot looks exactly the same to a node as booting after a power loss event. Or are you implying that it does always recover from power loss, and you're defending a flaw it doesn't even have?
Previously I used LocalStack S3, but I ultimately didn't like that persistence isn't available in the OSS version. MinIO OSS is apparently no longer maintained? I also looked at SeaweedFS and RustFS, but from a quick read this one was the easiest to set up.
Just run "weed server -s3 -dir=..." to have an object store.
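Then point any S3 client at it, e.g. (assuming SeaweedFS's default S3 port of 8333; you may need to configure credentials first):

    aws --endpoint-url http://localhost:8333 s3 mb s3://test
    aws --endpoint-url http://localhost:8333 s3 cp ./file.txt s3://test/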
I've spent a mostly pleasant day seeing whether I can reasonably use garage + rclone as a replacement for NFS, and the answer appears to be yes. Not really a recommended thing to do. Garage setup was trivial, somewhat reminiscent of wireguard. Rclone setup was a nuisance: it accumulated a lot of arguments to get latency down, and I think the 1.6 in trixie is buggy.
Each node has rclone's fuse mount layer on it, with garage as the backing store. Writes are slow and a bit async; debugging shows that to be wholly my fault for putting rclone in front of it. Reads are fast, whether it's pretending to be a filesystem or not.
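For anyone curious, the mount ends up looking roughly like this; the remote name and the flag values are just what I settled on, not a recommendation:

    rclone mount garage:mybucket /mnt/garage \
      --daemon \
      --vfs-cache-mode writes \
      --vfs-write-back 1s \
      --dir-cache-time 10s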
Yep, I think I'm sold. There will be better use cases for this than replacing NFS. Thanks for sharing :)
This is the reliability question, no?
https://archive.fosdem.org/2024/schedule/event/fosdem-2024-3...
Slides are available here:
https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/4efc8...
Conditional writes: no, we can't do them with CRDTs, which are the core of Garage's design.
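(For context, a "conditional write" is the If-None-Match / If-Match style PUT that S3 now supports; against AWS it looks roughly like the sketch below, with the CLI flag name quoted from memory, so treat it as an assumption. With CRDTs there's no single point where that compare-and-set could be decided atomically.)

    # succeeds only if the key does not already exist,
    # otherwise fails with 412 Precondition Failed
    aws s3api put-object --bucket mybucket --key lockfile \
      --body ./lockfile --if-none-match '*'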
https://dd.thekkedam.org/assets/documents/publications/Repor... http://www.bailis.org/papers/ramp-sigmod2014.pdf
https://garagehq.deuxfleurs.fr/blog/2022-ipfs/
Let's talk!