Posted by hajtom 12/3/2025
There are two ways open source projects continue.
1. The creator has a real, solid way to make money (React by Facebook, Go by Google).
2. The project is extremely popular (Linux, PostgreSQL).
Is it possible for people to reliably keep working for ~free? In theory yes, but if you expect that, you have a very bad understanding of 98% of human behavior.
There's also a tonne of open source that isn't as popular but serves niche communities. It's definitely harder but not impossible. An open-source core plus paid hosting with bells and whistles has proven to be a sustainable model.
Redis, Elasticsearch, Terraform, MongoDB, CockroachDB have all changed their OSS licenses in recent years.
Also, Debian has been around for a few decades, although I do admit that - like the Linux kernel - it wouldn't have been possible without a lot of companies contributing back to the ecosystem.
Need to start reconsidering the approach now and looking for alternatives
https://garagehq.deuxfleurs.fr/
Edit: jeez, three of us all at once...
rclone serve s3 path/to/buckets --addr :9000 --auth-key <key-id>,<secret>
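You can then point any S3 client at it from another terminal, e.g. with the AWS CLI (same key pair as above; which buckets show up depends on what's under path/to/buckets):

AWS_ACCESS_KEY_ID=<key-id> AWS_SECRET_ACCESS_KEY=<secret> \
  aws --endpoint-url http://localhost:9000 s3 ls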
`Be wary that an OSD, whether based on a physical device or a file, is resource intensive.`
Can anyone quantify "resource intensive" here? Is it "takes an entire Raspberry Pi to run the minimum set" or is it "takes 4 cores per OSD"?
Edit: This is the specific doc page https://canonical-microceph.readthedocs-hosted.com/stable/ho...
MinIO was also suited for some smaller use cases (e.g. running a partially S3-compatible store for integration tests). Ceph isn't really good for that.
But if you ran large MinIO clusters in production, Ceph might be a very good alternative.
I haven't tried it though. Seems simple enough to run.
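For the integration-test case mentioned above, the usual pattern is a throwaway container, something like this (image and port are just an example):

docker run --rm -p 9000:9000 quay.io/minio/minio server /data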
Am forced to use MinIO for certain products now but will eventually move to something better. Garage is high on my list of alternatives.
Similar to what Broadcom did with VMware: hiking prices astronomically for their largest clients and basically killing the SME offering.
Anyone have any suggestions?
SeaweedFS is more mature and has many interfaces (S3, WebDAV, SFTP, REST, FUSE mount). It's most appropriate for storing lots of small files.
I prefer the command line interface and data/synchronization model of Garage, though. It's easier to manage, probably because the developers aren't biting off more than they can chew.
Like in the old MinIO days, an S3 object is a file on the filesystem, not some replicated blocks. You could always rebuild the full object store content with a few rsync runs. I appreciate the simplicity.
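To illustrate what I mean (paths are made up, and this assumes a one-object-per-file layout rather than chunked blocks):

rsync -aH /srv/objectstore/buckets/ backup-host:/srv/objectstore/buckets/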
My main concern was that you couldn't configure it easily through files; you had to use the CLI, which wasn't very convenient. I hope this has changed.
Configuration is still through the CLI, though it's fairly simple. If your use case is similar to the way that the Deuxfleurs organization uses it -- several heterogeneous, geographically distributed nodes that are more or less set-it-and-forget-it -- then it's probably a good fit.
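For reference, the whole thing boils down to a handful of commands roughly like these (exact flags and subcommands vary a bit between Garage versions, so treat this as a sketch):

garage status                                  # list known nodes and their IDs
garage layout assign -z dc1 -c 100G <node-id>  # put a node in zone dc1 with 100G capacity
garage layout apply --version 1                # commit the new cluster layout
garage bucket create my-bucket
garage key create my-app-key
garage bucket allow --read --write my-bucket --key my-app-key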
My use case is relatively common: I want small S3-compatible object stores that can be deployed in Kubernetes without manual intervention. The CLI part was a bit in the way last time; it could have been automated, but it wasn't straightforward.
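Automating it would look roughly like a post-install job running something along these lines (namespace, pod name, zone and flags are placeholders from memory; double-check against your Garage version):

NODE_ID=$(kubectl -n storage exec garage-0 -- garage node id -q)
kubectl -n storage exec garage-0 -- garage layout assign -z dc1 -c 10G "$NODE_ID"
kubectl -n storage exec garage-0 -- garage layout apply --version 1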