Posted by Krontab 11 hours ago
It's hard to even find new PC builds using SATA drives.
SATA was phased out many years ago. The primary market for SATA SSDs is upgrading old systems or maybe the absolute lowest cost system integrators at this point, but it's a dwindling market.
You could even get more drives using a PCIe NVMe expansion card, since it's all over PCIe anyways.
E.g. going to the suggested U.2 still leaves you hunting for free PCIe lanes for it.
More to the point, most builds only need one drive. ODDs have been dead for at least 10 years, and most people never need another internal drive at all.
And SATA SSDs do make sense: they are significantly more cost effective than NVMe and trivial to expand. Compare the simplicity, ease, and cost of building an array/pool of many disks out of either 2.5" SATA SSDs or M.2 NVMe, and get back to me when you have a solution that scales to 8, 14, or 60 disks as easily and cheaply as the SATA option does. There are many cases where the performance of SSDs going over AHCI (or SAS) is plenty and you don't need to pay the cost of dedicating full PCIe lanes per disk.
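To put rough numbers on the "lanes per disk" point, here's a back-of-envelope sketch. The lane budget, HBA width, and port counts are illustrative assumptions, not specs from any particular board or card:

```python
# Rough sketch of why SATA/SAS scales out more easily than per-drive NVMe.
# All numbers below are illustrative assumptions, not vendor specs.

PCIE_LANES_AVAILABLE = 24   # spare lanes on a typical consumer platform (assumption)
LANES_PER_NVME_DRIVE = 4    # one x4 link per M.2/U.2 drive
LANES_PER_HBA = 8           # one x8 SAS/SATA HBA (assumption)
PORTS_PER_HBA = 16          # e.g. a 16-port HBA; expanders push this higher (assumption)

def max_nvme_drives(lanes):
    # Every NVMe drive needs its own dedicated lanes (ignoring PCIe switches).
    return lanes // LANES_PER_NVME_DRIVE

def max_sata_drives(lanes):
    # Each HBA consumes a fixed number of lanes but fans out to many ports.
    return (lanes // LANES_PER_HBA) * PORTS_PER_HBA

if __name__ == "__main__":
    print("NVMe drives, direct-attached:", max_nvme_drives(PCIE_LANES_AVAILABLE))
    print("SATA/SAS drives behind HBAs: ", max_sata_drives(PCIE_LANES_AVAILABLE))
```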
That doesn't seem to be what the vendors think, and they're probably in a better position to know what's selling well and how much it costs to build.
We're probably reaching the point where the up-front cost of qualifying new NAND with old SATA SSD controllers, and updating the firmware to properly manage that NAND, can't be recouped by a year or two of sales of an updated SATA SSD.
SATA SSDs are a technological dead end that's no longer economically important for consumer storage or large scale datacenter deployments. The one remaining niche you've pointed to (low-performance storage servers) is not a large enough market to sustain anything like the product ecosystem that existed a decade ago for SATA SSDs.
You can buy cheap add-in cards to use PCIe slots as M.2 slots, too.
If you need even more slots, there are add-in cards with PCIe switches which allow you to install 10+ M.2 drives into a single M.2 slot.
Likely we'd need a different protocol to make scaling up the number of high-speed SSDs in a single box work well.
Going forward, SAS should just replace SATA where NVMe/PCIe is for some reason a problem (e.g. price), even on the consumer side, as it would still support existing legacy SATA devices.
Storage-related interfaces (I'm aware there's some overlap here, but the point is there are already plenty of options and lots of nuances to deal with; let's not add to them without good reason):
- NVMe PCIe
- M.2 and all of its keys/lengths/clearances
- U.2 (SFF-8639) and U.3 (SFF-TA-1001)
- EDSFF (which is a very large family of things)
- FibreChannel
- SAS and all of its permutations
- Oculink
- MCIO
- Let's not forget USB4/Thunderbolt supporting Tunnelling of PCIe
Obligatory: https://imgs.xkcd.com/comics/standards_2x.png
The main problem is proper translation of device-management features, e.g. getting SMART diagnostics and the like back to the host. But from a performance perspective, it seems reasonable to switch to USB once you are multiplexing drives over the same limited IO channels from the CPU to expand capacity rather than bandwidth.
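For example, whether SMART survives the trip over USB mostly comes down to the bridge chip implementing SAT (SCSI/ATA Translation). A minimal sketch, assuming smartmontools is installed and using a placeholder device path:

```python
# Minimal sketch: reading SMART data from a USB-attached SATA SSD via SAT passthrough.
# Assumes smartmontools is installed and the USB bridge actually implements the
# translation; /dev/sdX is a placeholder device path.
import subprocess

def read_smart(device="/dev/sdX"):
    # "-d sat" asks smartctl to use SCSI/ATA Translation through the USB bridge;
    # bridges that don't implement it will simply fail here.
    result = subprocess.run(
        ["smartctl", "-a", "-d", "sat", device],
        capture_output=True, text=True, check=False,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    print(read_smart())
```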
Once you get out of this smallest consumer expansion scenario, I think NAS takes over as the most sensible architecture for small office/home office settings.
Other SAN variants really only make sense in datacenter architectures where you are trying to optimize for very well-defined server/storage traffic patterns.
Is there any drawback to going towards USB for multiplexed storage inside a desktop PC or NAS chassis too? It feels like the days of RAID cards are over, given the desire for host-managed, software-defined storage abstractions.
Does SAS still have some benefit here?
I wonder what it would take to get the same behavior out of USB as for other "internal" interconnects, i.e. treat it as attached storage and retry/reconnect instead of deciding any ephemeral disconnect is a "removal event"...?
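As far as I know you can't just flip that in the kernel, but a userspace shim could at least paper over it. A sketch assuming the pyudev package; it only watches block events and reacts, it doesn't change the kernel's removal semantics:

```python
# Userspace sketch only: watch udev block events so an ephemeral USB disconnect
# could trigger a pause/remount instead of being treated as a permanent removal.
# Assumes the pyudev package is installed; device names are whatever udev reports.
import pyudev

def watch_usb_storage():
    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by("block")
    for device in iter(monitor.poll, None):
        if device.action == "remove":
            print(f"{device.device_node}: gone - pause IO, keep state, wait for it")
        elif device.action == "add":
            print(f"{device.device_node}: back - re-mount / resume here")

if __name__ == "__main__":
    watch_usb_storage()
```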
FWIW, I've actually got a 1 TB Samsung "pro" NVMe/M.2 drive in an external case, currently attached to a spare Ryzen-based Thinkpad via USB-C. I'm using it as an alternate boot drive to store and play Linux Steam games. It performs quite well. I'd say it's qualitatively like the OEM internal NVMe drive when doing disk-intensive things, but maybe that is bottlenecked by the Linux LUKS full-disk encryption?
Also, this is essentially a docked desktop setup. There's nothing jostling the USB cable to the SSD.
I don't really know how one would get numbers for any of the above one way or the other though.
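The crude way would be timing big sequential reads on each drive and comparing MB/s. A rough sketch (the mount points are placeholders, and the page cache will flatter the numbers unless the test file is much bigger than RAM or caches are dropped first):

```python
# Crude sequential-read benchmark: read a large file in big chunks and report MB/s.
# File paths are placeholders for wherever each drive is mounted.
import time

CHUNK = 16 * 1024 * 1024  # 16 MiB reads

def sequential_read_mbps(path):
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    for label, path in [("internal NVMe", "/mnt/internal/testfile"),
                        ("USB-C NVMe", "/mnt/usb/testfile")]:
        print(f"{label}: {sequential_read_mbps(path):.0f} MB/s")
```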
I am almost never IO blocked where the performance difference between the two matters. I guess when I do the initial full backup image of my drive, but after that, everything is incremental.
This doesn't make sense as written. I suspect you meant to say "SATA SSDs" (or just "SATA") in the first sentence instead of "SSDs", and M.2 instead of NVMe in the second sentence. This kind of discussion is much easier to have when it isn't polluted by sloppy misnaming.
Even then, I suppose how the M.2 vs 2.5" SATA mounting works out depends on the specific system. E.g. on this PC the main NVMe slot is above the GPU, but mounting a 2.5" SSD means 4 screws on a custom sled plus cabling once mounted. If it were the other way around, with the NVMe slot screw-in only below the GPU while the SSD had an easy mount, it might be a different story.
https://wccftech.com/no-samsung-isnt-phasing-out-of-the-cons...
Tech news has been quite the bummer in the last few months. I'm running out of things to anticipate in my nerd hobby.
We've already seen the typical number of SATA ports on a consumer desktop motherboard drop from six to four or two. We'll probably go through a period where zero is common but four is still an option on some motherboards with the same silicon, before SATA gets removed from the silicon.
It's called a PCIe disk controller, and you've just grown accustomed to having one built into the southbridge.
I want to build a mini-PC-based, 3D-printed NAS box with a SATA backplane using that exact NVMe connector adapter setup!
https://makerworld.com/en/models/1644686-n5-mini-a-3d-printe...
The reality is, as long as you have PCIe you can do pretty much whatever you want, and it's not a big deal.
SATA SSD still seems like the way you have to go for a 5 to 8 drive system (boot disk + 4+ raid6).
DWPD: Between the random teamgroup drives in the main NAS and the WD Red Pro HDDs in the backup, the write limits are actually about the same, with the bonus that reads are unlimited on the SSDs, so things like scheduled ZFS scrubs don't count as 100 TB of usage across the pool each time (see the rough numbers after the PLP note below).
Heat: Actually easier to manage than the HDDs. The drives are smaller (so denser for the same wattage), but the peak wattage is lower than the idle spinning wattage of the HDDs, and there isn't a large physical buffer between the hot parts and the airflow. My normal case airflow keeps them at <60 C under sustained benching of all of the drives raw, and more like <40 C given ZFS doesn't like to go over 8 GB/s in this setup anyways. If you pick $600 top-end SSDs with high-wattage controllers shipping with heatsinks you might have more of a problem; otherwise it's maybe 100 W max for the 22 drives and easy enough to cool.
PLP: More problematic if this is part of your use case, as NVMe drives with PLP will typically lead you straight into enterprise pricing. Personally my use case is more "on-demand large file access" with extremely low-churn data regularly backed up for the long term, and I'm not at a loss if I have an issue and need to roll back to yesterday's data, but others who use things more as an active drive may have different considerations.
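The endurance sanity check mentioned above, with made-up but plausible numbers (the TBW rating and daily write volume are assumptions, not my actual drives):

```python
# Back-of-envelope endurance check with assumed numbers: compare the pool's rated
# write budget against actual writes, and note that ZFS scrubs are reads,
# so they don't eat into SSD endurance at all.
DRIVES = 22
TBW_PER_DRIVE = 1200      # rated terabytes written per drive (assumption)
DAILY_WRITES_TB = 0.5     # new data written per day across the pool (assumption)
YEARS = 5

pool_write_budget_tb = DRIVES * TBW_PER_DRIVE
writes_over_period_tb = DAILY_WRITES_TB * 365 * YEARS

print(f"Pool write budget:       {pool_write_budget_tb} TB")
print(f"Writes over {YEARS} years:     {writes_over_period_tb} TB")
print(f"Budget used:             {100 * writes_over_period_tb / pool_write_budget_tb:.1f}%")
# Weekly ~100 TB scrubs would add ~26,000 TB of *reads* over 5 years, which would
# matter for HDD workload ratings but doesn't count against SSD TBW.
```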
The biggest downsides I ran across were:
- Loading up all of the lanes on a modern consumer board works in theory but can be buggy as hell in practice: anything from the boot becoming EXTREMELY long, to just not working at all sometimes, to PCIe errors during operation. Used Epyc in a normal PC case is the way to go instead.
- It costs more, obviously
- Not using a chassis designed for massive numbers of drives with hot-swap access can make installation and troubleshooting quite a pain.
The biggest upsides (other than the obvious ones) I ran across were:
- No spinup drain on the PSU
- No need to worry about drive powersaving/idling <- pairs with -> whole solution is quiet enough to sit in my living room without hearing drive whine.
- I don't look like a struggling fool trying to move a full chassis around :)
It's always one of the two: M.2 but PCIe/NVMe, or SATA but not M.2.
Used multiport SATA HBA cards are inexpensive on eBay. Multiport NVMe cards are either passive (relying on bifurcation, giving you 4x x4 in an x16 slot) or active and very expensive.
I don't see how you get to 16 m.2 devices on a consumer socket without lots of expense.
In practice you can put 4 drives in the x16 slot intended for a GPU, 1 drive each in any remaining PCIe slots, plus whatever is available onboard. 8 should be doable, but I doubt you can go beyond 12.
I know there are some $2000 PCIe cards with onboard switches so you can stick 8 NVMe drives on there - even with an x1 upstream connection - but at that point you're better off going for a Threadripper board.
Even that gives you one m.2 slot, and 8/8/8/16 on the x16 slots, if you have the right CPU. Assuming those can all bifurcate down to x4 (which is most common), that gets you 10 m.2 slots out of the 40 lanes. That's more than you'd get on a modern desktop board, but it's not 16 either.
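Spelling out that lane math (slot widths as described above, bifurcation to x4 per device assumed):

```python
# Lane arithmetic for a 40-lane HEDT part with one CPU-attached M.2 slot,
# assuming every x16/x8 slot can bifurcate down to x4 per device.
SLOT_WIDTHS = [16, 8, 8, 8]   # electrical widths of the physical x16 slots
ONBOARD_M2 = 1                # M.2 slot already wired to the CPU
LANES_PER_DRIVE = 4

drives_in_slots = sum(width // LANES_PER_DRIVE for width in SLOT_WIDTHS)
total = drives_in_slots + ONBOARD_M2
print(f"M.2 drives via bifurcation: {drives_in_slots}")  # 4 + 2 + 2 + 2 = 10
print(f"Total including onboard:    {total}")            # 11, still well short of 16
```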
For home use, you're in a tricky spot; can't get it in one box, so horizontal scaling seems like a good avenue. But in order to do horizontal scaling, you probably need high speed networking, and if you take lanes for that, you don't have many lanes left for storage. Anyway, I don't think there's much simple software to scale out storage over multiple nodes; there's stuff out there, but it's not simple and it's not really targeted towards a small node count. But, if you don't really need high speed, a big array of spinning disks is still approachable.
Building a new system with that in 2025 would be a bit silly.
It's the end of an era.
If you care even remotely about speed, you'll get an NVMe drive. If you're a data hoarder who wants to connect 50 drives, you'll go for spinning rust. Enterprise will go for U.3.
So what's left? An upgrade for grandma's 15-year-old desktop? A borderline-scammy pre-built machine where the listed spec is "1TB SSD" and they used the absolute cheapest drive they can find? Maybe a boot drive for some VM host?
I would think an SSD is going to be better than a spinning disk, even with the limits of SATA, if you want to archive things or work with larger data or whatever.
4 M.2 NVMe drives is quite doable, and you can put 8TB drives in each. There are very few people who need more than 32TB of fast data access, who aren't going to invest in enterprise hardware instead.
Pre-hype, for bulk storage SSDs are around $70/TB, whereas spinning drives are around $17/TB. Are you really willing to pay that much more for slightly higher speeds on that once-per-month access to archived data?
In reality you're probably going to end up with a 4TB NVMe drive or two for working data, and a bunch of 20TB+ spinning drives for your data archive.
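Putting those $/TB figures against a concrete (made-up) capacity split shows why:

```python
# Cost comparison using the $/TB figures quoted above; the capacity split is an
# assumed example, not anyone's actual build.
SSD_PER_TB = 70   # pre-hype bulk SSD pricing from the comment above
HDD_PER_TB = 17   # spinning-disk pricing from the comment above

working_tb = 8    # e.g. two 4 TB NVMe drives for working data (assumption)
archive_tb = 60   # e.g. three 20 TB spinning drives for the archive (assumption)

mixed = working_tb * SSD_PER_TB + archive_tb * HDD_PER_TB
all_flash = (working_tb + archive_tb) * SSD_PER_TB
print(f"NVMe working set + HDD archive: ${mixed}")
print(f"All-flash equivalent:           ${all_flash}")
print(f"Premium for going all-flash:    ${all_flash - mixed}")
```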
I have a couple of 2TB USB-C SSDs. I haven't bought a separate SATA drive in well over a decade. My last home built PC broke around 2013.
(SSDs are "fine", just playing devil's advocate.)
There's probably a similarly priced USB-C solution these days, and I use a USB adapter if I'm not at my desktop, but in general I like the format.
Actually that's a really common use - I've bought a half dozen or so Dell rack mount servers in the last 5 years or so, and work with folks who buy orders of magnitude more, and we all spec RAID0 SATA boot drives. If SATA goes away, I think you'll find low-capacity SAS drives filling that niche.
I highly doubt you'll find M.2 drives filling that niche, either. 2.5" drives can be replaced without opening the machine, too, which is a major win - every time you pull the machine out on its rails and pop the top is another opportunity for cables to come out or other things to go wrong.
#1 is all NVMe. It's dominated by laptops, and desktops (which are still 30% or so of shipments) are probably at the high end of the performance range.
#2 isn't a big market, and takes what they can get. Like #3, most of them can just plug in SAS drives instead of SATA.
#3 - there's an enterprise market for capacity drives with a lower per-device cost overhead than NVMe - it's surprisingly expensive to build a box that will hold dozens of NVMe drives - but SAS is twice as fast as SATA, and you can re-use the adapters and mechanicals that you're already using for SATA. (pretty much every non-motherboard SATA adapter is SAS/SATA already, and has been that way for a decade)
#4 - cloud uses capacity HDDs and both performance and capacity NVMe. They probably buy >50% of the HDD capacity sold today; I'm not sure what share of the SSD market they buy. The vendors produce whatever the big cloud providers want; I assume this announcement means SATA SSDs aren't on their list.
I would guess that SATA will stay on the market for a long time in two forms:
- crap SSDs, for the die-hards on HN and other places :-)
- HDDs, because they don't need the higher SAS transfer rate for the foreseeable future, and for the drive vendor it's probably just a different firmware load on the same silicon.