Posted by mfreed 10/29/2025
“The storage device driver exposes Fluid Storage volumes as standard Linux block devices mountable with filesystems such as ext4 or xfs. It...allows volumes to be resized dynamically while online.”
Yet an xfs filesystem cannot be shrunk at all, and an ext4 filesystem cannot be shrunk without first unmounting it.
Are you simply doing thin provisioning of these volumes, so they appear to be massive but aren’t really? I see later that you say you account for storage based on actual consumption.
They can be used with, for example, the listed file systems.
No one claimed the listed file systems would (usefully) cooperate with (all aspects of) the block device's resizing.
Put differently, there is no point in being able to shrink a volume if you can’t safely shrink the filesystem that uses it.
The usual solution to this problem is thin provisioning, where you put a translation layer between the blocks the filesystem thinks it’s using and the actual underlying blocks. With thin provisioning you can allocate only, say, 1GB of physical storage, while the block device presents itself as much larger than that, so you can pretend to create a 1PB filesystem on top of it.
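For anyone who hasn't run into thin provisioning before, here's a toy sketch of the idea in Python (purely illustrative: the block size, names, and lazy-allocation scheme are simplifications for this comment, not how dm-thin, LVM thin pools, or Fluid Storage are actually implemented):

    BLOCK_SIZE = 4096

    class ThinVolume:
        """A block device that advertises a huge virtual size but only
        allocates physical blocks when they are first written."""

        def __init__(self, virtual_size_bytes):
            self.virtual_size = virtual_size_bytes  # what the filesystem sees
            self.mapping = {}                       # virtual block -> physical block index
            self.physical = []                      # actually allocated blocks

        def write(self, vblock, data):
            # Allocate backing storage lazily, on first write only.
            if vblock not in self.mapping:
                self.physical.append(bytearray(BLOCK_SIZE))
                self.mapping[vblock] = len(self.physical) - 1
            self.physical[self.mapping[vblock]][:] = data

        def read(self, vblock):
            # Never-written blocks read back as zeros and consume nothing.
            if vblock in self.mapping:
                return bytes(self.physical[self.mapping[vblock]])
            return bytes(BLOCK_SIZE)

        def physical_usage(self):
            return len(self.physical) * BLOCK_SIZE

    vol = ThinVolume(virtual_size_bytes=1 << 50)  # presents itself as 1 PiB
    vol.write(123, b"x" * BLOCK_SIZE)
    print(vol.physical_usage())                   # 4096 bytes actually consumed

The filesystem on top sees the full virtual size, while physical usage only grows as blocks are actually written, which would also line up with accounting for storage based on actual consumption.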
We just launched a bunch of features around “Postgres for Agents” [0]:
forkable databases, an MCP server for Postgres (with semantic + full-text search over the PG docs), a new BM25 text search extension (pg_textsearch), pgvectorscale updates, and a free tier.
To my eye, seeing "Agentic Postgres" at the top of the page, in yellow, is not persuasive; it comes across as bandwagony. (About me: I try to be open but critical about new tech developments, and I try out various agentic tooling often.)
But I'm not dismissing the product. I'm just saying this part is what I found persuasive:
> Agents spin up environments, test code, and evolve systems continuously. They need storage that can do the same: forking, scaling, and provisioning instantly, without manual work or waste.
That explains it clearly in my opinion.
* Seems to me, there are taglines that only work after someone is already "on board". I think "Agentic Postgres" is that kind of tagline. I don't have a better suggestion in mind at the moment, though, sorry.
Our existing Postgres fleet, which uses EBS for storage, still serves thousands of customers today; nothing has changed there.
What’s new is Fluid Storage, our disaggregated storage layer that currently powers the new free tier (while in beta). In this architecture, the compute nodes running Postgres still access block storage over the network. But instead of that being AWS EBS, it’s our own distributed storage system.
From a hardware standpoint, the servers that make up the Fluid Storage layer are standard EC2 instances with fast local disks.
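To make "block storage over the network" a bit more concrete, here's a toy sketch of a disaggregated read/write path (purely illustrative: the node names, routing scheme, and in-memory stand-in for the network RPC are made up for this comment, not our actual protocol):

    import hashlib

    BLOCK_SIZE = 4096
    STORAGE_NODES = ["storage-a", "storage-b", "storage-c"]

    # Stand-in for each storage node's fast local disk.
    node_disks = {node: {} for node in STORAGE_NODES}

    def node_for_block(volume_id, block_no):
        # Deterministically route a (volume, block) pair to a storage node.
        key = f"{volume_id}:{block_no}".encode()
        h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
        return STORAGE_NODES[h % len(STORAGE_NODES)]

    def write_block(volume_id, block_no, data):
        # In a real system this is a network RPC to the storage node,
        # which persists the block on its local disk; here it's a dict write.
        node_disks[node_for_block(volume_id, block_no)][(volume_id, block_no)] = data

    def read_block(volume_id, block_no):
        disk = node_disks[node_for_block(volume_id, block_no)]
        return disk.get((volume_id, block_no), bytes(BLOCK_SIZE))

    write_block("vol-1", 42, b"y" * BLOCK_SIZE)
    print(read_block("vol-1", 42)[:1])  # b'y'

The compute node running Postgres only ever sees a block device; where those blocks physically live is the storage layer's concern rather than the compute node's.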
I'm curious whether you evaluated solutions like ZFS or Gluster. Also, did you look at Oracle Cloud, given their faster block storage?