Posted by hajtom 12/3/2025

MinIO is now in maintenance-mode(github.com)
511 points | 322 comments
st3fan 12/3/2025|
What a story. EOL the open source foundation of your commercial product, to which many people contributed, to turn it into a closed source "A-Ff*ing-I Store" .. seriously what the ...
nikeee 12/3/2025||
Didn't contribute to MinIO, but if they accepted external contributions without making them sign a CLA, they cannot change the license without asking every external contributor for consent to the license change. As it is AGPL, they still have to provide the source code somewhere.

IANAL, of course

lima 12/3/2025|||
They required a "Community Contribution License" in each PR description, which licensed each contribution under Apache 2 as an inbound license.

Meanwhile, MinIO's own contributions and the distribution itself (outbound license) were AGPL licensed.

It's effectively a CLA, just a bit weaker, since they're still bound by the terms of Apache 2 vs. a full license assignment like most CLAs.

NewsaHackO 12/3/2025||
People underestimate the amount of fakeness a lot of these "open-core/source" orgs have. I guarantee that from day one of starting the MinIO project they had eyes on future commercialization, and of course made contributors sign away their rights knowing full well they were going to go closed source.
sieabahlpark 12/3/2025||
[dead]
smsm42 12/3/2025|||
Well, you can not have a product without having "AI" somewhere in the name anymore. It's the law.
orphea 12/3/2025|||
https://youtu.be/-qbylbEek-M?t=33
alex-aizman 12/4/2025|||
back in 2018, it didn't feel this way
daveguy 12/3/2025|||
This is why I don't bother with AGPL released by a company (use or contribute).

Choosing AGPL with contributors giving up rights is a huge red flag for "hey, we are going to rug pull".

Just AGPL by companies without even allowing contributor rights is saying, "hey, we are going to attempt to squeeze profit out and don't want competition on our SaaS offering."

I wish companies would stop trying to get free code out of the open source community. There have been so many rug pulls it should be expected now.

btian 12/3/2025|||
What's the problem? Surely people will fork it
binsquare 12/3/2025||
I still don't understand what the difference is.

What is an AI Stor (e missing on purpose because that is how it is branded: https://www.min.io/product/aistor)

everfrustrated 12/3/2025|||
Might be because of this other storage product named that https://github.com/NVIDIA/aistore
singhrac 12/3/2025||
Does anyone use this? I was setting it up a few months ago but it felt very complicated compared to MinIO (or alternatives). Is there a sort of minikube-like tool I could use here?
56kbr 12/3/2025||
There's a development/playground deployment for local K8s (e.g. Minikube, KinD): https://github.com/NVIDIA/aistore/tree/main/deploy/dev/k8s.

For production you'd need a proper cluster deployed via Helm, but for trying it out locally that setup is easy to get running.

paulddraper 12/3/2025||||
It can store things for AI workloads (and non-AI workloads, but who’s counting…)
bigbuppo 12/3/2025||||
About a billion dollars difference in valuation up until the bubble pops.
ljm 12/3/2025|||
Looks like AI slop

    Replication

    A trusted identity provider is a
    key component to single sign on.
Uh, what?

It’s probably just Minio but it costs more money.

bananapub 12/3/2025||
for those looking for a simple and reliable self-hosted S3 thing, check out Garage. It's much simpler - no web UI, no fancy RS coding, no VC-backed AI company, just some French nerds making a very solid tool.

fwiw while they do produce Docker containers for it, it's also extremely simple to run without that - it's a single binary and running it with systemd is unsurprisingly simple[1].

0: https://garagehq.deuxfleurs.fr/

1: https://garagehq.deuxfleurs.fr/documentation/cookbook/system...

colesantiago 12/3/2025|
How do you sustain yourselves while developing this project?

What if the sponsorships run out?

prmoustache 12/3/2025||
What if a company changes its license, drops the project, or goes bankrupt?

You shouldn't expect guarantee of any kind.

colesantiago 12/4/2025||
> What if a company changes its license, drops the project, or goes bankrupt?

You can always fork the project, but then the question of sponsorship still remains.

Ghostty recently became a nonprofit, which means it is guaranteed not to turn into a for-profit and rug-pull like MinIO has done.

prmoustache 12/4/2025||
That doesn't guarantee the devs stay motivated either.

In the end, open source allows motivated people to take over a project if you aren't willing to do it yourself, but projects can also die from a lack of motivated/paid resources.

jdoe1337halo 12/3/2025||
I use this image on my VPS; it was the last update before they neutered the community version

quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z

spapas82 12/3/2025||
That is a way too old version. You should use a newer one instead, by downloading the source and building the binaries yourself.

Here's a simple script that does it automagically (you'll need golang installed):

> build-minio-ver.sh

  #!/bin/bash
  set -e

  # Find the newest release tag without cloning the whole repo.
  VERSION=$(git ls-remote --tags https://github.com/minio/minio.git | \
    grep -Eo 'RELEASE\.[0-9T-]+Z' | sort | tail -n1)

  echo "Building MinIO $VERSION ..."

  rm -rf /tmp/minio-build
  # Shallow-clone directly at the release tag. (A plain --depth 1 clone
  # only fetches the default branch tip, so checking out a tag
  # afterwards may not work.)
  git clone --depth 1 --branch "$VERSION" \
    https://github.com/minio/minio.git /tmp/minio-build

  cd /tmp/minio-build

  echo "Building minio..."

  CGO_ENABLED=0 go build -trimpath \
    -ldflags "-s -w \
    -X github.com/minio/minio/cmd.Version=$VERSION \
    -X github.com/minio/minio/cmd.ReleaseTag=$VERSION \
    -X github.com/minio/minio/cmd.CommitID=$(git rev-parse HEAD)" \
    -o "$OLDPWD/minio"

  echo "Binary created at: $(realpath "$OLDPWD/minio")"

  "$OLDPWD/minio" --version
NietTim 12/3/2025||
Same here, since I'm the only one using my instance. But you should be aware that there is a CVE in that version that allows any account level to escalate its own permissions to admin, so it's inherently unsafe
tiernano 12/3/2025||
Is this not the best thing that could happen? Like, now that it's in maintenance, it can be forked without any potential license change in the future, or any new features that would be under that changed license... This allows anyone to continue working on it, right? Or did I miss something?
jagged-chisel 12/3/2025||
> ... it can be forked without any potential license change in the future ...

It is useful to remember that one may fork at the commit before a license change.

phoronixrly 12/3/2025||
It is also useful to remember that MinIO has historically held to an absurd interpretation of the AGPL -- that it spreads (again, according to them) to software that communicates with MinIO via the REST API/CLI.

I assume forks, and software that uses them, will be held to the same requirements.

ahepp 12/3/2025|||
As long as I'm not the one who gets sued over this, I think it would be wonderful to have some case law on what constitutes an AGPL derivative work. It could be a great thing for free software, since people seem to be too scared to touch the AGPL at all right now.
NegativeK 12/3/2025||||
They're not the only ones to claim that absurdity.

https://opensource.google/documentation/reference/using/agpl...

createaccount99 12/4/2025|||
I thought that literally was the point of the AGPL. If not, what's the difference between it and GPLv3?
lukaslalinsky 12/4/2025||
AGPL changes what it means to "distribute" the software. With GPL, sending copies of the software to users is distribution. With AGPL, letting users access it over a network also counts as distribution. The implication is that if you run a custom version of MinIO, you need to open source it.
Weryj 12/3/2025||
Pretty sure you can’t retroactively apply a restrictive license, so that was never a concern.
IgorPartola 12/3/2025||
You can, sort of, sometimes. Copyleft is still based on copyright. So in theory you can do a new license as long as all the copyright holders agree to the change. Take open source/free/copyleft out of it:

You create a proprietary piece of software. You license it to Google and negotiate terms. You then negotiate different terms with Microsoft. Nothing so far prevents you from doing this. You can't yank the license from Google unless your contract allows that, but maybe it does. You can in theory then go and release it under a different license to the public. If that license is perpetual and non-revocable, then presumably I can use it after you decide to stop offering that license. But if the license is non-transferable, then I can't pass on your software to someone else, either by giving them a flash drive with it or by releasing it under a different license.

Several open source projects have been re-licensed. The main obstacle is that in a popular open source or copyleft project you have many contributors, each of whom holds the copyright to their patches. So now you have a mess of trying to relicense only some parts of your codebase and replace others for the people resisting the change or those you can't reach. It's a messy process. For example, check out how the OpenStreetMap data got relicensed and what that took.

bilkow 12/3/2025||
I think you are correct, but you probably misunderstood the parent.

My understanding of what they meant by "retroactively apply a restrictive license" is to apply a restrictive license to previous commits that were already distributed using a FOSS license (the FOSS part being implied by the new license being "restrictive" and because these discussions are usually around license changes for previously FOSS projects such as Terraform).

As allowing redistribution under at least the same license is usually a requirement for a license to be considered FOSS, you can't really change the license of an existing version, as anyone who has acquired that version under the previous license can still redistribute it under the same terms.

Edit: s/commit/version/, added "under the same terms" at the end, add that the new license being "restrictive" contributes to the implication that the previous license was FOSS

IgorPartola 12/3/2025||
Right, but depending on the exact license, can the copyright holder revoke your right to redistribute?
bilkow 12/3/2025||
It's probable that licenses that explicitly allow revocation at will would not be approved by the OSI or the FSF.

Copyright law is also a complex matter which differs by country and I am not a lawyer so take this with a grain of salt, but there seem to be "edge cases" where the license can be revoked as seen in the stackexchange page below.

See:

https://lists.opensource.org/pipermail/license-discuss_lists...

https://opensource.stackexchange.com/questions/4012/are-lice...

Havoc 12/3/2025||
I thought they were pivoting towards closing it up and trying to monetize it?

That got backlash, so now it's just getting dropped entirely?

People get to do whatever they want, but it's a bit jarring to go from "this is worth something people will pay for" to maintenance mode in quick succession

embedding-shape 12/3/2025||
> I thought they were pivoting towards close it and trying to monetize this?

That's literally what the commit shows that they're doing?

> *This project is currently under maintenance and is not accepting new changes.*

> For enterprise support and actively maintained versions, please see MinIO SloppyAISlop (not actual name)

this_user 12/3/2025|||
Their marketing had been shifting toward pushing an AI angle for some time now. For an established project or company, that's usually a sign that things aren't going well.
ocdtrekkie 12/3/2025||
They cite a proprietary alternative they offer for enterprises. So yes they pivoted to a monetized offering and are just dropping the open source one.
itopaloglu83 12/3/2025||
So they’re pulling an OpenAI.

Start open source to get free advertising and community programmers, then dump it all for commercial licensing.

I think n8n is next because they finished the release candidate for version 2.0, but there are no changelogs.

candiddevmike 12/3/2025||
It sucks that S3 somehow became the de facto object storage interface; the API is terrible IMO. Too many headers, too many unknowns with support. WebDAV isn't any better, but I feel like we missed an opportunity here for a standardized interface.
tlarkworthy 12/3/2025||
?

It's like GET <namespace>/object, PUT <namespace>/object. To me it's the most obvious mapping of HTTP to immutable object key-value storage you could imagine.

It is bad that the control plane responses can be malformed XML (e.g. keys are not escaped right if you put XML control characters in object paths), but that can be forgiven as an oversight.

It's not perfect, but I don't think it's a strange API at all.
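The happy-path mapping described above really is just path construction. A minimal sketch, assuming path-style addressing (`object_url` is a hypothetical helper, not any SDK's API):

```python
from urllib.parse import quote

def object_url(endpoint: str, bucket: str, key: str) -> str:
    """Path-style URL for GET/PUT of an object: <endpoint>/<bucket>/<key>.

    Each key segment is percent-encoded; "/" is kept as the segment
    separator.
    """
    return f"{endpoint}/{quote(bucket, safe='')}/{quote(key, safe='/')}"

print(object_url("https://s3.example.com", "my-bucket", "logs/2024/app.log"))
# → https://s3.example.com/my-bucket/logs/2024/app.log

# Keys may legally contain characters that later trip up the XML
# listing responses mentioned above:
print(object_url("https://s3.example.com", "my-bucket", "weird key<1>"))
# → https://s3.example.com/my-bucket/weird%20key%3C1%3E
```

The URL is fine either way; it's the server's XML listing of such keys that historically came back malformed.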

jerf 12/3/2025|||
That may be what S3 is like, but what the S3 API is is this: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3

My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before the type starts getting awfully small on the page.

Yes, there's a simple API with simple capabilities struggling to get out there, but pointing that out is merely the first step on the thousand-mile journey of determining what, exactly, that is. "Everybody uses 10% of Microsoft Word, the problem is, they all use a different 10%", basically. If you sat down with even 5 relevant stakeholders and tried to define that "simple API" you'd be shocked what you discover and how badly Hyrum's Law will bite you even at that scale.

zokier 12/3/2025|||
> That may be what S3 is like, but what the S3 API is is this: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3

> My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before it starts getting to awfully small type on the page.

idk why you link to Go SDK docs when you can link to the actual API reference documentation: https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operatio... and its PDF version: https://docs.aws.amazon.com/pdfs/AmazonS3/latest/API/s3-api.... (just 3874 pages)

tlarkworthy 12/4/2025|||
It's better to link to a leading S3 compatible API docs page. You get a better measure of the essential complexity

https://developers.cloudflare.com/r2/api/s3/api/

It's not that much; most of the weirder S3 APIs are optional, orthogonal APIs, which is good design.

jerf 12/4/2025|||
Because it had the best "on one HTML page" representation I found in the couple of languages I looked at.
eproxus 12/3/2025|||
That page crashes Safari for me on iOS.
PunchyHamster 12/3/2025||||
It gets complex with ACLs for permissions, lifecycle controls, header controls, and a bunch of other features that are needed at S3 scale but not at smaller-provider scale.

And many S3-compatible alternatives (probably most, other than the big ones like Ceph) don't implement all of the features.

For example, for lifecycles Backblaze has a completely different JSON syntax

perbu 12/3/2025||||
Last I checked the user guide to the API was 3500 pages.

3500 pages to describe upload and download, basically. That is pretty strange in my book.

nine_k 12/3/2025||
Even download and upload get tricky once you consider stuff like serving buckets as static sites, or signed upload URLs.

Now with the trivial part off the table, let's consider storage classes, security and ACLs, lifecycle management, events, etc.

candiddevmike 12/3/2025||||
Everything uses poorly documented, sometimes inconsistent HTTP headers that read like afterthoughts/tech debt. An S3-standard implementation has to have Amazon branding all over it (x-amz), which is gross.
drob518 12/3/2025|||
I suspect they learned a lot over the years and the API shows the scars. In their defense, they did go first.
christina97 12/3/2025|||
I mean… it’s straight up an Amazon product, not like it’s an IETF standard or something.
paulddraper 12/3/2025||||
!!!

I’ve seen a lot of bad takes and this is one of them.

Listing keys is weird (is it V1 or V2?).

The authentication relies on an obtuse and idiosyncratic signature algorithm.

And S3 in practice responds with malformed XML, as you point out.

Protocol-wise, I have trouble liking it over WebDAV. And that's depressing.
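To make the "obtuse and idiosyncratic signature algorithm" concrete: SigV4 derives a signing key through a chain of HMAC-SHA256 steps, then signs a canonicalized "string to sign". A sketch of just the key-derivation half (the canonical-request construction, which is the genuinely fiddly part, is omitted, and the string-to-sign below is a placeholder):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signature(secret_key: str, date: str, region: str,
                    service: str, string_to_sign: str) -> str:
    """Chain four HMAC-SHA256 steps into a signing key, then sign."""
    k_date = _hmac(("AWS4" + secret_key).encode(), date)  # date as YYYYMMDD
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)                  # "s3" for object storage
    k_signing = _hmac(k_service, "aws4_request")
    return hmac.new(k_signing, string_to_sign.encode(),
                    hashlib.sha256).hexdigest()

sig = sigv4_signature("EXAMPLE-SECRET", "20240101", "us-east-1", "s3",
                      "AWS4-HMAC-SHA256\n20240101T000000Z\n...")
print(sig)  # a 64-character hex signature
```

Every S3-compatible server (MinIO included) has to reimplement this scheme bit-for-bit, which is part of why "S3-compatible" is such a moving target.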

KaiserPro 12/3/2025|||
HTTP isn't really a great backplane for object storage.
ssimpson 12/3/2025|||
I thought the OpenStack Swift API was pretty clean, but I'm biased.
giancarlostoro 12/3/2025|||
To be fair. We still have an opportunity to create a standardized interface for object storage. Funnily enough when Microsoft made their own they did not go for S3 compatible APIs, but Microsoft usually builds APIs their customers can use.
mbreese 12/3/2025|||
It was better. When it first came out, it was a pretty simple API, at least simpler than alternatives (IIRC, I could just be thinking with nostalgia).

I think it's only gotten as complicated as it has as new features have been organically added. I'm sure there are good use cases for everything, but it does beg the question -- is a better API possible for object storage? What's the minimal API required? GET/POST/DELETE?

bostik 12/3/2025|||
I suspect there is no decent "minimal" API. Once you get to tens of millions of objects in a given prefix, you need server side filtering logic. And to make it worse, you need multiple ways to do that.

For example, did you know that date filtering in S3 is based on string prefix matching against an ISO8601/RFC3339 style string representation? Want all objects created between 2024-01-01 and 2024-06-30? You'll need to construct six YYYY-MM prefixes (one per month) for datetime and add them as filter array elements.
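The prefix construction described above can at least be generated mechanically. A sketch under that description's assumptions (the function name is mine):

```python
from datetime import date

def month_prefixes(start: date, end: date) -> list[str]:
    """YYYY-MM prefixes covering [start, end], one per calendar month,
    for string-prefix date filtering as described above."""
    out = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        out.append(f"{y:04d}-{m:02d}")
        m += 1
        if m == 13:           # roll over into the next year
            y, m = y + 1, 1
    return out

print(month_prefixes(date(2024, 1, 1), date(2024, 6, 30)))
# → ['2024-01', '2024-02', '2024-03', '2024-04', '2024-05', '2024-06']
```

Note the prefixes over-match at the edges: a month prefix can include objects outside a range that starts or ends mid-month, so you still have to filter exact timestamps client-side.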

As a result the service abbreviation is also incorrect these days. Originally the first S stood for "Simple". With all the additions they've had to bolt on, S2 would be far more appropriate a name.

everfrustrated 12/3/2025|||
Like everything, it started off simple, but with every feature added over 19 years, Simple Storage it is not.

S3 has 3 independent permissions mechanisms.

dathinab 12/3/2025||
S3 isn't JSON

it's storing a [utf8-string => bytes] mapping with some very minimal metadata. But the bytes can be whatever you want: JSON, CBOR, XML, actual document formats, etc.

And its default encoding for listing, management operations, and similar is XML...

> but I feel like we missed an opportunity here for a standardized interface.

except S3 _is_ the de-facto standard interface which most object storage systems speak

but I agree it's kind of a pain

and it's commonly implemented only partially (both feature-wise and partially wrong). E.g. S3 stores utf8 strings, not utf8 file paths (which is what e.g. MinIO stores); getting that wrong seems fine but can lead to a lot of problems (not just being incompatible with some applications but also having unexpected perf. characteristics for others), making an implementation only partially S3-compatible. Similarly, implementations missing features like bulk delete or support for `If-Match`/`If-None-Match` headers can be S3-incompatible for some use cases.

So yeah, a new external standard that makes it clear what you should expect to be supported would be nice.

hintymad 12/3/2025||
I suspect that ClickHouse will go down the same path. They already changed their roadmap a bit two years ago [1], and had good reasons: if the open-source version does too well, it will compete with their cloud business.

[1] https://news.ycombinator.com/item?id=37608186

jsiepkes 12/4/2025||
There is also Ambry ( https://github.com/linkedin/ambry ) as an alternative: a blob store created, open-sourced, and maintained by LinkedIn. It also has an S3-compatible interface.

I think it is about 10 years old now and it is really stable.

ncrmro 12/3/2025||
As a note, Ceph (Rook on Kubernetes), which is distributed block storage, has built-in S3 endpoint support
Joel_Mckay 12/3/2025|
Like many smart people, they focused on telling people the "how", and assumed visitors to their wall of "AI"/hype text already understand the use-case "why".

1. I like that it is written in Go

2. I saw nothing above what Apache Spark+Hadoop with _consistent_ object stores already offers on Amazon (S3), Google Cloud (GCS), and/or Microsoft (Azure Storage, ADLS Gen2)

Best of luck, maybe folks should look around for that https://donate.apache.org/ button before the tax year concludes =3

PunchyHamster 12/3/2025|
> I saw nothing above what Apache Spark+Hadoop with _consistent_ object stores already offers on Amazon (S3), Google Cloud (GCS), and/or Microsoft (Azure Storage, ADLS Gen2)

it was very simple to set up, and even if you just leased a bunch of servers off, say, OVH, it was far, FAR cheaper to run your own than paying any of the big cloud providers.

It also had pretty low requirements; Ceph can do all that, but the setup is more complex and the RAM requirements far, far higher

Joel_Mckay 12/3/2025||
MinIO still makes no sense to me, as Ceph is fundamentally already RADOS at its core (fully compatible with the S3 API).

For a proper Ceph setup, even the 45drives budget configuration is still not "hobby" grade.

I will have to dive into the MinIO manual at some point, as the value proposition still seems like a mystery. Cheers =3

PunchyHamster 12/3/2025|||
MinIO is far less complex than getting the same functionality out of the Ceph stack.

But that's kind of an advantage only in the small-company and hobbyist market; a big company either has enough needs to run a big Ceph cluster, or buys it as a service.

MinIO is literally "point it at storage(s), done". And at far smaller RAM usage.

Ceph is mon servers, OSD servers, then a RADOS Gateway server on top of that.

Joel_Mckay 12/3/2025||
It sounds a lot like SwiftOnFile with GlusterFS, but I would need to look at it more closely on personal time. =3
dardeaup 12/3/2025|||
"Ceph is fundamentally already RADOS at its core (fully compatible with S3 API.)"

Yes, Ceph is RADOS at its core. However, RADOS != S3. Ceph provides an S3 compatible backend with the RADOS Gateway (RGW).

Joel_Mckay 12/3/2025||
My point was even 45drives virtualization of Ceph host roles to squeeze the entire setup into a single box was not a "hobby" grade project.

I don't understand yet exactly what MinIO would add on top of that to make it relevant at any scale. I'll peruse the manual on the weekend, because their main site was not helpful. Thanks for trying though ¯\_(ツ)_/¯

dardeaup 12/3/2025||
What I tried to say (perhaps not successfully) was that core Ceph knows nothing about S3. One gets S3 endpoint capability from the radosgw, which is not a required component of a Ceph cluster.
Joel_Mckay 12/3/2025||
That's the risk of mixing different subjects in one thread. Cheers =3

https://docs.ceph.com/en/latest/radosgw/s3/
