Posted by hajtom 10 hours ago
https://github.com/versity/versitygw
Unlike other options like Garage or MinIO, it doesn't have any clustering, replication, erasure coding, etc.
Your S3 objects are just files on disk, and Versity exposes them. I gather it exists to provide an S3 interface on top of their other project (ScoutFS), but it seems like it should work on any old filesystem.
He also mentioned that the MinIO-to-Versity migration is a straightforward process. Apparently, you just read the data from MinIO's shadow filesystem and set it as an extended attribute on your file.
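I haven't done that migration, but mechanically it would look something like this (the attribute name and metadata path here are assumptions for illustration, not Versity's actual keys):

# Hypothetical sketch: attach MinIO's per-object metadata to the plain
# file as an extended attribute ("0s" is setfattr's base64-value syntax).
OBJ="mybucket/photos/cat.jpg"
META="/mnt/minio/${OBJ}/xl.meta"   # MinIO's per-object metadata file
setfattr -n user.versity.meta \
         -v "0s$(base64 -w0 "$META")" \
         "/mnt/data/${OBJ}"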
Thanks for posting this, as it's the first I've come across their work.
It's a "Misc" endpoint in the Garage docs here: https://garagehq.deuxfleurs.fr/documentation/reference-manua...
Thanks for pointing that out.
If the answer is the latter, SeaweedFS is an option:
https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#qu...
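If memory serves from their quick start, a single binary gets you an S3 endpoint; flags here are from memory, so verify against the README:

weed server -dir=/data -s3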
Possibly of interest: s3gw[1] is a modified version of Ceph's radosgw that allows it to run standalone. It's geared towards Kubernetes (notably part of Rancher's storage solution), but should work as a standalone container.
[0] https://github.com/gaul/s3proxy [1] https://github.com/s3gw-tech/s3gw
It's not a fully featured S3-compatible service like MinIO, but we used it to great success as a local on-prem S3 read/write cache with AWS as the backing S3 store. This avoided expensive network egress charges, as we wanted to process data both in AWS and in a non-AWS GPU cluster (i.e. a neocloud).
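For anyone curious, the basic Docker setup from the s3proxy README looks roughly like this (env var names are from memory of those docs; credentials are placeholders):

docker run --publish 9000:80 \
    --env S3PROXY_AUTHORIZATION=none \
    --env JCLOUDS_PROVIDER=aws-s3 \
    --env JCLOUDS_IDENTITY=AKIA... \
    --env JCLOUDS_CREDENTIAL=... \
    andrewgaul/s3proxy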
It is useful to remember that one may fork at the commit before a license change.
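In git terms that's just the following (repo URL and branch name illustrative):

# Find the commit that changed the license, then branch from its parent.
git clone https://github.com/minio/minio.git && cd minio
git log --oneline -- LICENSE              # spot the license-change commit
git checkout -b community-fork '<commit>^'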
I assume forks, and software that uses them, will be held to the same requirements.
https://opensource.google/documentation/reference/using/agpl...
You create a proprietary piece of software. You license it to Google and negotiate terms. You then negotiate different terms with Microsoft. Nothing so far prevents you from doing this. You can't yank the license from Google unless your contract allows that, but maybe it does. You can then, in theory, go and release it under a different license to the public. If that license is perpetual and non-revocable, then presumably I can keep using it after you decide to stop offering that license. But if the license is non-transferable, then I can't pass your software on to someone else, either by giving them a flash drive with it or by releasing it under a different license.
Several open source projects have been re-licensed. The main obstacle is that in a popular open source or copyleft project you have many contributors, each of whom holds the copyright to their patches. So now you have a mess of trying to relicense only some parts of your codebase and replacing others for the people resisting the change or those you can't reach. It's a messy process. For example, check out how the OpenStreetMap data got relicensed and what that took.
My understanding of what they meant by "retroactively apply a restrictive license" is to apply a restrictive license to previous commits that were already distributed using a FOSS license (the FOSS part being implied by the new license being "restrictive" and because these discussions are usually around license changes for previously FOSS projects such as Terraform).
As allowing redistribution under at least the same license is usually a requirement for a license to be considered FOSS, you can't really change the license of an existing version as anyone who has acquired the version under the previous license can still redistribute it under the same terms.
Edit: s/commit/version/, added "under the same terms" at the end, add that the new license being "restrictive" contributes to the implication that the previous license was FOSS
Copyright law is also a complex matter which differs by country, and I am not a lawyer, so take this with a grain of salt, but there seem to be "edge cases" where the license can be revoked, as seen in the Stack Exchange page below.
See:
https://lists.opensource.org/pipermail/license-discuss_lists...
https://opensource.stackexchange.com/questions/4012/are-lice...
That got backlash, so now it's just getting dropped entirely?
People get to do whatever they want, but it's a bit jarring to go from "this is worth something people will pay for" to maintenance mode in quick succession.
That's literally what the commit shows that they're doing?
> *This project is currently under maintenance and is not accepting new changes.*
> For enterprise support and actively maintained versions, please see MinIO SloppyAISlop (not actual name)
Start open source to get free advertising and community programmers, then dump it all for commercial licensing.
I think n8n is next because they finished the release candidate for version 2.0, but there are no changelogs.
quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z
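If you want to pin to that tag, the usual invocation still works (ports and data path illustrative):

docker run -p 9000:9000 -p 9001:9001 \
    -v /mnt/data:/data \
    quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z \
    server /data --console-address ":9001"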
Here's a simple script that builds it from source automagically (you'll need Go installed):
> build-minio-ver.sh
#!/bin/bash
set -e
VERSION=$(git ls-remote --tags https://github.com/minio/minio.git | \
    grep -Eo 'RELEASE\.[0-9T-]+Z' | sort | tail -n1)
echo "Building MinIO $VERSION ..."
rm -rf /tmp/minio-build
git clone --depth 1 https://github.com/minio/minio.git /tmp/minio-build
cd /tmp/minio-build
git fetch --tags
git checkout "$VERSION"
echo "Building minio..."
CGO_ENABLED=0 go build -trimpath \
    -ldflags "-s -w \
    -X github.com/minio/minio/cmd.Version=$VERSION \
    -X github.com/minio/minio/cmd.ReleaseTag=$VERSION \
    -X github.com/minio/minio/cmd.CommitID=$(git rev-parse HEAD)" \
    -o "$OLDPWD/minio"
echo " Binary created at: $(realpath "$OLDPWD/minio")"
"$OLDPWD/minio" --versionIts like GET <namespace>/object, PUT <namespace>/object. To me its the most obvious mapping of HTTP to immutable object key value storage you could imagine.
It is bad that the control plane responses can be malformed XML (e.g. keys are not escaped right if you put XML control characters in object paths), but that can be forgiven as an oversight.
It's not perfect, but I don't think it's a strange API at all.
My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before the type starts getting awfully small on the page.
Yes, somewhere in there is a simple API with simple capabilities struggling to get out, but pointing that out is merely the first step on the thousand-mile journey of determining what, exactly, it is. "Everybody uses 10% of Microsoft Word; the problem is, they all use a different 10%," basically. If you sat down with even 5 relevant stakeholders and tried to define that "simple API", you'd be shocked by what you discover and how badly Hyrum's Law bites you even at that scale.
> My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before the type starts getting awfully small on the page.
idk why you link to Go SDK docs when you can link to the actual API reference documentation: https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operatio... and its PDF version: https://docs.aws.amazon.com/pdfs/AmazonS3/latest/API/s3-api.... (just 3874 pages)
And many S3-compatible alternatives (probably most, aside from the big ones like Ceph) don't implement all of the features.
For example, for lifecycles, Backblaze has a completely different JSON syntax.
3500 pages to describe upload and download, basically. That is pretty strange in my book.
Now, with the trivial part off the table, let's consider storage classes, security and ACLs, lifecycle management, events, etc.
I’ve seen a lot of bad takes and this is one of them.
Listing keys is weird (is it V1 or V2?).
The authentication relies on an obtuse and idiosyncratic signature algorithm.
And S3 in practice responds with malformed XML, as you point out.
Protocol-wise, I have trouble liking it over WebDAV. And that's depressing.
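On the signature point, deriving just the signing key is a four-deep HMAC chain; this is the documented SigV4 derivation, with an illustrative shell helper and made-up values:

# SigV4 signing-key derivation: HMAC chained over date, region, service.
hmac() { openssl dgst -sha256 -mac HMAC -macopt "$1" -binary | xxd -p -c 256; }
kDate=$(printf '%s' "20250101" | hmac "key:AWS4${SECRET_KEY}")
kRegion=$(printf '%s' "us-east-1" | hmac "hexkey:${kDate}")
kService=$(printf '%s' "s3" | hmac "hexkey:${kRegion}")
kSigning=$(printf '%s' "aws4_request" | hmac "hexkey:${kService}")
# kSigning then signs a hash of the canonical request, which has its
# own canonicalization rules -- a lot of machinery for one GET.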
I think it's only gotten as complicated as it has as new features have been organically added. I'm sure there are good use cases for everything, but it does beg the question -- is a better API possible for object storage? What's the minimal API required? GET/POST/DELETE?
For example, did you know that date filtering in S3 is based on string prefix matching against an ISO 8601/RFC 3339-style string representation? Want all objects created between 2024-01-01 and 2024-06-30? You'll need to construct six YYYY-MM prefixes (one per month) for the datetime and add them as filter array elements.
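In aws-cli terms, assuming keys that embed the date (bucket and key layout are made up for illustration), the range query turns into the same six-prefix dance:

for m in 2024-01 2024-02 2024-03 2024-04 2024-05 2024-06; do
    aws s3api list-objects-v2 --bucket mybucket --prefix "logs/${m}"
done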
As a result the service abbreviation is also incorrect these days. Originally the first S stood for "Simple". With all the additions they've had to bolt on, S2 would be far more appropriate a name.
S3 has 3 independent permissions mechanisms.
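Presumably bucket policies, IAM policies, and the legacy ACLs. A bucket policy, for instance (bucket name made up):

aws s3api put-bucket-policy --bucket mybucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::mybucket/*"
  }]
}'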
It's storing a [utf8-string => bytes] mapping with some very minimal metadata. But the bytes can be whatever you want: JSON, CBOR, XML, actual document formats, etc.
And its default encoding for listing, management operations, and the like is XML...
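A PUT with user metadata shows just how minimal that metadata is (names made up):

aws s3api put-object --bucket mybucket --key docs/a.json \
    --body a.json --content-type application/json \
    --metadata owner=alice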
> but I feel like we missed an opportunity here for a standardized interface.
except S3 _is_ the de-facto standard interface which most object storage systems speak
but I agree it's kinda a pain
and commonly done partially (both feature-wise and partially wrong). E.g. S3 stores UTF-8 strings, not UTF-8 file paths (like e.g. MinIO does); getting that wrong seems fine but can lead to a lot of problems (not just being incompatible for some applications but also having unexpected perf characteristics for others), making it only partially S3-compatible. Similarly, implementations missing features like bulk delete, or support for the `If-Match`/`If-None-Match` headers, can also be S3-incompatible for some use cases (see the sketch below).
So yeah, a new external standard that makes it clear what has to be supported to count as compliant would be nice.
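To make the `If-Match`/`If-None-Match` point concrete: these are plain HTTP conditional headers, and whether a given implementation honors them varies (auth omitted, endpoint made up):

# Create-only PUT: fails if the key already exists (where supported).
curl -X PUT -H 'If-None-Match: *' --data-binary @f.bin https://s3.example.com/bucket/key
# Conditional GET: 304 Not Modified if the ETag still matches.
curl -H 'If-None-Match: "<etag>"' https://s3.example.com/bucket/key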
My only blocker for a fork: it has to maintain compatibility and a path to upgrade from earlier versions.
At the $1 billion valuation from the previous round, achieving a successful exit requires a buyer with deep pockets. Right now, Nvidia is probably the most suitable buyer for MinIO, which might explain all the recent movements from them. Dell, Broadcom, NetApp, etc., are not going to buy them.