Posted by motiejus 4/19/2025

Synology Lost the Plot with Hard Drive Locking Move (www.servethehome.com)
653 points | 403 comments
Renaud 4/22/2025|
Synology isn't about the NAS hardware and OS. Once set up, it doesn't really matter as long as your config is reliable and fast, so there are many competitive options to move to.

The killer feature for me is the app ecosystem. I have a very old 8-bay Synology NAS and set it up in just a few clicks to back up my Dropbox, my MS365 accounts and my Google business accounts, do redundant backups to an external drive, and back up important folders to the cloud, and it was also doing automated torrent downloads of TV series.

These apps, and more (like family photos, video server, etc), make the NAS a true hub for everything data-related, not just for storing local files.

I can understand Synology going this way, it puts more money in their pocket, and as a customer in a professional environment, I'm OK paying a premium for their approved drives if it gives me an additional level of warranty and (perceived) safety.

But enforcing this across models used by home or SOHO users is dumb and will affect the goodwill of so many like me, who both used to buy Synology for home and were also recommending/purchasing the brand at work.

This is a tech product, don't destroy your tech fanbase.

I would rather Synology kept a list of drives to avoid based on user experience, and offered their Synology-specific drives with a generous warranty for pro environments. Hell, I would be OK with sharing stats about drive performance so they could build a useful database for all.

The way they reduce the performance of their system to penalise non-Synology rebranded drives is basically a slap in the face of their customers. Make it a setting and let the user choose to use the NAS they bought to its full capabilities.

sersi 4/22/2025||
On the other hand, they have also slowly destroyed their app ecosystem. The photo solution is much worse than it used to be, both in terms of features and the now-removed support for media codecs. Video Station has pretty much been dead for years.

At this point, I'm not that convinced that there's anything Synology offers that isn't handled much better by an app running on Docker. This wasn't true 10 years ago.

nolok 4/22/2025||
The photo app is great because their photo backup app on Android is great, and it's the only thing that works as well as Google Photos to ensure all your photos and videos are saved, untouched, with no duplicates and no missed media.

That's it. For the actual viewing/sorting/albums you need something like Immich or PhotoPrism; the Photos app itself actually sucks.

Video Station was removed in the latest minor update, not even a major one; they just took it out, no warning, no replacement. But then again it was not that good; Jellyfin is the way to go for me.

Their crown jewels are Active Backup, Hyper Backup and Synology Office. That's where they own their space.

qsi 4/22/2025|||
This is sad... I've been using Synology for a very long time (over 15 years?) and have been pretty happy with my experience. The one time I needed their tech support also left me with a good impression...

This however is a deal breaker for me as I'd hate to be locked in to their drives for all the reasons in TFA but also as a matter of principle.

I hope Synology will reconsider!

InsideOutSanta 4/22/2025||
Yeah, same. I have had three Synology boxes over the last 20 or so years, and they have been super reliable, easy to use, and easy to update. That last point is important to me because I would, over time, add more disks and, when the drive bays were all full, replace smaller disks with larger ones.

The first one I bought is still in service at my parents' place, silently and reliably backing up their cloud files and laptops.

I was fully expecting to buy more in the future, but this is a dealbreaker. If a disk goes bad, I want to go to the local store, pick one up, and have the problem fixed half an hour later. I do not want to figure out where I can get approved disks, what sizes are available, how long it will take to ship them, etc.

I've recently installed Unraid on an old PC, and the experience has been surprisingly good. It's not as nice as a Synology, but it's not difficult, either. It's just a bit more work. I've also heard that HexOS plans to support heterogeneous disks, and I plan to check it out once that is available.

So that's the direction I'll be going in instead.

timcobb 4/22/2025||
> I have had three Synology boxes over the last 20 or so years

Sounds like this is the problem with Synology... How are they going to make money when their products are so good!

horsawlarway 4/22/2025||
Honestly, seems like they got roughly one hardware sale to him every 6 years or so.

Which is along the same trend line I'm seeing for my purchases.

That's pretty solid for hardware sales.

My guess is that they've over invested in things like their "drive" office software suite, and don't know how to monetize it or recoup costs.

I like Synology, but locking me to their drives is a hard "no thanks" from me.

Next NAS won't be from them if that's their play...

timcobb 4/22/2025||
Every six years is enough for Apple and other companies who have other sources of revenue and have staked out this high-quality niche. But Android phones, as an example, have more of an average 3-year lifespan if I'm not mistaken, which is closer to what Synology would probably want to achieve but cannot.
close04 4/22/2025|||
The comparison to phones is shaky here. Phones bring substantial performance and feature improvements over 6 years, HW and SW. Synology on the other hand still uses a 5-6 year old CPU and 1Gbps connectivity in their home "plus" line. The OS development is mostly security updates with substantial feature releases few and far between. I expect this from a NAS but it's not at all comparable to a phone.

Forcing their drives is a tax on top of an already existing tax. Synology already charges a premium for lower-end specs than the competition. If that's not enough to compensate for the longer upgrade cycles, and they want a hand in every cookie jar, it's just going to be a hard pass for me.

I upgraded my Synology box every few years and this is exactly the time I was looking to go to the next model. And I'd pull the trigger and buy a current model before they implement the policy but the problem is now I don't trust that they won't retroactively issue an update that cripples existing models somehow. QNAP or the many alternative HW manufacturers that support an arbitrary OS are starting to be that much more attractive.

horsawlarway 4/22/2025|||
I don't think mobile is the right comparison. Those ecosystems are explicitly operating on the assumption that they will profit through the software ecosystem (app store revenue).

Synology seems to have gone entirely the other direction here. Most of their software is given away for free, but the hardware is being monetized.

Additionally - the hardware has different operating constraints. I think the big deal for Synology is that they probably assumed that storage need growth would equate to sales growth.

E.g., Synology may have assumed that if I needed to store 1TB in 2010, and 5TB in 2015, that would equate to me buying additional NAS hardware.

But often, HDD size increases mean that I can keep the same number of bays and just bump drive size.

Which... is great for me as a user, but bad for Synology (this almost single handedly explains this move, as an aside - I just think it's a bad play).

---

I'd rather they just charged for the software products they're blowing all their money on, or directly tie upgrades to the software products to upgrading hardware.

apetrovic 4/22/2025|||
What are the "competitive options"? It's a genuine question. Before Synology, I had a DIY server in a Fractal Design case, and noise and, to be honest, bulk were a problem. Maintaining the server wasn't fun either.

I switched to Synology about six years ago (918+). The box is small, quiet, and easy to put in the rack together with the network gear. I started with 4TB drives and gradually switched to 8TB over time (drive by drive). I don't use many of their apps (mostly Download Station, backup, and their version of Docker to run Syncthing, plus Tailscale). But the box acts like an appliance - I basically don't need to maintain it at all; it just works.

I don't like all this vendor lock-in stuff, so when the time comes to replace the box, what alternatives are on par with the experience and quality I currently have with Synology?

sersi 4/22/2025|||
The problem is that a lot of competitors don't necessarily have great software. For example, QNAP on the hardware side is supposed to be good, and you get more bang for the buck in terms of performance, but they've had several major CVEs that really call their security practices into question. I have a friend who is running Unraid on QNAP and is happy, though.
tiew9Vii 4/22/2025||||
The new Chinese NASes due to hit the market look extremely promising.

- Minisforum N5 Pro NAS

- AOOSTAR WTR MAX

Good compute power as they know users will be running Docker and other services on them, using the NAS as a mini server.

OS agnostic, allowing users to install TrueNAS, Unraid, or their favourite Linux distro of choice.

The Minisforum and AOOSTAR look to be adding all the features power users and enthusiasts are asking for.

If you just want a NAS as a NAS and nothing else, the new Ubiquiti NAS looks great value as well.

adam_th 4/22/2025||||
Unraid is brilliant if you're interested in BYO hardware. It can be set up with mix-and-match drives, and supports Docker and virtual machines. Realistically it's a bit more work than Synology to get up and running, but once it is, the only thing you really need to do is update the software from time to time.
j45 4/22/2025|||
I don't mind the idea of BYO hardware, especially if it's an old server with hot-swap drives and hot-swap power built in.

Increasingly, with the limited time I have for the things that interest me, I just want storage and a bit of compute to be like a home appliance, reasonably set-and-forget, and to keep my messing around to a USFF computer.

Hamuko 4/22/2025|||
I've heard things about Unraid not being that performant due to the design of the disk array solution.
bayindirh 4/22/2025|||
You can add a cache SSD to keep hot data and reduce access times, and why do you need that much throughput to begin with?
Zekio 4/22/2025|||
You can run ZFS without the Unraid disk array in Unraid these days.
Hamuko 4/22/2025||
Doesn't that get rid of one of the biggest benefits of Unraid, where you can mix and match drives, just like in a Synology hybrid RAID?
dml2135 4/22/2025|||
I think this is just the tradeoff you need to make. I’m not aware of a solution where you can mix-and-match drives but also get the write performance of a traditional RAID array.
Zekio 4/22/2025|||
That is true, but you can make one fast pool using ZFS and one slower one using Unraid's disk array, if you want to, or just use the ZFS part as a cache for performance.
Scene_Cast2 4/22/2025||||
I have an old Helios4 board. Too bad they don't make them anymore - it's tiny, has ECC, and was purpose built to be a NAS.
transpute 4/22/2025||
Marvell CN913x in QNAP TS435XeU NAS is the SoC successor to Armada A388 on Helios4. Still available, building on Linux support for Armada.
kxrm 4/22/2025||||
Kind of surprising, I went the other way. I started out with ReadyNAS 15 years ago and after that product faded due to lack of support I no longer wanted to be tied down to a manufacturer. I built a custom solution using a U-Nas chassis. Found FreeNAS back in the day and have stuck with it ever since. Maintenance is fairly minimal.

If you heavily rely on apps/services, I've just gone to self-managed Docker environments for things like that. A very simple script runs updates.

j45 4/22/2025||||
I over-purchased a NAS and ended up with QNAP, even though Synology provided a better power (lower electricity use) to performance ratio.

In hindsight, buying a QNAP that was more capable than the Synology equivalent felt like a good idea, but I didn't really get into it quickly enough.

I also got burned by Western Digital's scandal of selling WD Red drives that really weren't suited for NAS use, which got them caught in a class action lawsuit. Can't see myself buying them again.

hjgjhyuhy 4/22/2025||||
Apart from the form factor, my custom-built machine with Unraid pretty much works like what you describe. Almost two years of use without major issues.
EVa5I7bHFq9mnYK 4/22/2025||||
I have WD MyCloud NAS. It has Transmission to pirate movies and Twonky DLNA server to send them to my TVs. Not much, but honest work.
m4rtink 4/22/2025|||
Some Intel N100/N105 board from AliExpress with Fedora or Debian on top should be fine, and much more flexible if you decide you want more than just a file server.
nijave 4/22/2025|||
Or throw on TrueNAS or UnRAID if you want a GUI
dsego 4/22/2025|||
Anecdotally, I quickly gave up on their value-add apps, they didn't seem well thought out and had many missing features. My impression was that they were mostly there to tick all the boxes for their marketing material. It's been a few years since I looked at them so I can't give specific examples unfortunately.
jjkaczor 4/22/2025|||
Yes, it is the overall ease of configuration and operation - but also, for me, the app ecosystem.

Well, my Synology NAS is from... 2013 (I have upgraded the drives three times), so... it is/was time to replace it, and I can tell you that it won't be with another Synology device...

I won't go back to QNAP, which is what I had before Synology, because during an OS update it wiped all my data (yes, there was a warning, but the whole purpose of having a RAID NAS is safe, reliable data storage).

May check-out a custom hardware build, combined with Xpenology.

j45 4/22/2025||
Important story to note - it's not a backup if you don't have more than one copy of it (beyond multiple copies on one NAS).
jjkaczor 5/6/2025||
I have a stack of old hard drives in external enclosures - as I upgrade (non-failed) drives, I buy another enclosure and then do backups that way. So far, over 30 years, it has proven to be more reliable than burned media (I have even had to help clients whose project source code succumbed to "bit-rot" on their physical burned media, but I still had a project HD tucked away from 15 years prior).
ycombinatrix 4/22/2025|||
Fwiw I don't use a single one of their apps. I bought it for their hybrid raid feature.
ChrisMarshallNY 4/22/2025||
Same here.

At one time, Drobo was the only manufacturer that did that, but I have had very bad luck with Drobos.

I’ve been running a couple of Synology DS cages for over five years, with no issues.

shantara 4/22/2025|||
I’m running two Synology NAS devices, and I wouldn’t consider their app ecosystem to be their strong point. I started by trying to take advantage of the built-in Synology apps when I first got my NAS, but quickly realized how limited they are. Their bi-directional synchronization solution is so slow and archaic compared to Syncthing! And the same is true for most of their software offerings. At this point, I’m happy with having Docker support, and don’t particularly care about the rest of their apps.

I still appreciate how easy and maintenance-free their implementation of the core NAS functionality was. I do have a Linux desktop for experiments and playing around, but I prefer to have all of my actually important data on a separate rock-solid device. Previously, Synology fulfilled this role and was worth paying for, but if this policy goes live, I wouldn’t consider them for my next NAS.

InsideOutSanta 4/22/2025|||
I would count supported third-party apps like Syncthing as part of the app ecosystem. You can add the SynoCommunity repository to your Synology and install Syncthing directly, which is pretty nice.

It's a bit more convenient than how other solutions, like Unraid, handle this, where you manually configure a Docker container.

shantara 4/22/2025||
That’s true, but it’s only relevant for the initial setup. I wouldn’t think twice about giving up something so minor compared with the sheer anticompetitive nature of Synology locking down the devices.
op00to 4/22/2025|||
Agree. I have a few Synologies and the apps are crapware.
HenriTEL 4/22/2025|||
Yes, in the end, no matter how polished your apps are, a NAS is a tech product sold to tech people. Tech people want to choose their hard drive.
j45 4/22/2025||
Synology did a good job of being relatively turnkey.

QNAP has more configurability for better and worse.

Curious to hear what other manufacturers can compare to them out of the box.

Self-configuring something is a different thing.

I simply do not care anymore to rebuild RAIDs and manually swap drives under duress when something is going down. I just replace existing drives with new ones well before they die, once they've hit enough years. Backblaze's report is incredibly valuable for this.

nerdjon 4/22/2025|||
How much of a market is there really for those apps? They are competing against most consumers accepting the ease (and significantly cheaper) of cloud based storage.

We (in the tech space) can scream privacy and risks of the cloud all day long but most consumers seem to just not care.

I have 2 Synology NAS and the only app that I actually use is Synology Drive thanks to the sync app, but there are open source alternatives that would work better and not require a client on the NAS side to work.

I can't imagine any enterprise would be using these features.

Been in the market for a new NAS myself, and I am going to be looking into TrueNAS or keeping an eye on what Ubiquiti is doing in this space (but it's a no-go until they add the ability to communicate with a UPS).

bluGill 4/22/2025||
Without the apps they have even less market though.
nerdjon 4/22/2025||
While true, that assumes that the engineering effort is worth whatever extra market they are getting from it.

I just can't imagine there are that many people who would bother with a "private cloud" who don't already have a use case for a NAS at home for general data storage.

j45 4/22/2025|||
Other manufacturers like Qnap also have this app ecosystem.

It doesn’t address the mandatory nature of the drives, when at most Dell and HP have put their part numbers on drives.

lazide 4/22/2025||
The issue is QNAP has terrible quality/stability at the OS level compared to Synology (also with Apps).

The number of times I’ve broken things on QNAP systems doing what should be normal functionality, only to find out it’s because of some dumb implementation detail is over a dozen. Synology, maybe 1-2.

Roughly the same number of systems/time in use too.

transpute 4/22/2025|||
> QNAP has terrible quality/stability at the OS level

Some QNAP devices can be coaxed into running Debian.

https://wiki.qnap.com/wiki/Debian_Installation_On_QNAP

https://wiki.debian.org/InstallingDebianOn/QNAP

m000 4/22/2025|||
Mind that these are ancient models that are dog-slow for anything more than serving files. Not that they are fast at serving files...

I did the procedure on my (now 15yo) TS-410, mostly because the vendored Samba is not compatible with Windows 11 (I had turned off all secondary services years ago). It took a few days to back up around 8TB of data to external drives. And AROUND 2 WEEKS to restore it (USB2 CPU overhead + RAID5 writes == SLOOOOOW).

Even to get the time down to 2 weeks, I really had to experiment with different modes of copying. My final setup was HDD <-USB3-> RPi4 <-GbE-> TS-410. This relieved the TS-410's CPU of the overhead of running the USB stack. I also had to use the rsync daemon on the TS-410 to avoid the overhead of running rsync over SSH.
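For anyone curious, daemon-mode rsync looks roughly like this - the module name, paths, and uid are illustrative guesses, not the actual TS-410 configuration:

```shell
# On the NAS: a minimal rsync daemon config (module name and path are
# made up for illustration).
cat > /etc/rsyncd.conf <<'EOF'
[restore]
    path = /share/MD0_DATA
    read only = no
    uid = root
EOF
rsync --daemon    # listens on TCP 873; no SSH encryption on the data path

# On the client: the double colon selects daemon mode.
rsync -a /mnt/external/ nas::restore/

# Compare the SSH transport, which burns CPU on both ends:
#   rsync -a -e ssh /mnt/external/ admin@nas:/share/MD0_DATA/
```

Skipping the SSH layer matters a lot on a CPU as weak as the TS-410's, since encryption would otherwise bottleneck the transfer.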

So, it's definitely not for the faint of heart, but if you go through the trouble, you can keep the box alive as off-site backup for a few more years.

Having said that, I have to commend QNAP for providing security updates for all this time. The latest firmware update for the TS-410 is dated 2024-07-01 [1]. This is really going above and beyond when it comes to supporting consumer-level devices.

[1] https://www.qnap.com/en/download?model=ts-410&category=firmw...

frontlodjkgi 4/22/2025||||
Wouldn't it be cheaper to just build any NAS and chuck Debian on it if you didn't care about the OS and vendor software to begin with?
transpute 4/22/2025|||
e.g. QNAP has a rare hardware combo of a half-depth 1U low-power Arm NAS w/ mainline Linux support, 32GB ECC RAM, dual NVMe, 4x hotswap SATA, 2x10G SFP, 2x2.5G copper, and hardware support for ZFS encryption, https://news.ycombinator.com/item?id=40868855.

In theory, one could fit an Arm RK3588 SBC with NVME-to-PCIe-to-HBA or NVME-to-SATA into half-depth JBOD case. That would give up 2x10G SFP, 2xNVME and ECC RAM.

frontlodjkgi 4/22/2025||
Maybe it's just me, but rare hardware isn't something I'd look for in a reliable storage system unless I had a really special need that general hardware just couldn't be made to meet.
transpute 4/22/2025|||
Per sibling comment, "unique" is a better descriptor than "rare". The NAS is made in Taiwan and has been readily available from Amazon or QNAP store.

The Marvell CN913x SoC has been shipping for 5 years, following the predecessor Armada SoC family released 10 years ago and used in multiple consumer NAS products, https://linuxgizmos.com/marvell-lifts-curtain-on-popular-nas.... Mainline Linux support for this SoC has benefited from years of contributions, while Marvell made incremental hardware improvements without losing previous Linux support.

j45 4/22/2025||
This is spot on. I'd like to add that, in hardware, unique often means not forcing people to buy a few times before getting it right, especially first-time buyers.
j45 4/22/2025||||
Rare here means more a unique combination of common hardware parts - other manufacturers don't put all of the features into one piece of hardware like QNAP or others might, so as to keep people buying more devices to get what they want, or buying a device that is way overkill for their needs.
Scene_Cast2 4/22/2025|||
"Rare" in this case is referring to a unique offering, not to the availability of that particular part.

As I understand, migrating to other hardware wouldn't be an issue if availability becomes an issue.

lazide 4/22/2025||||
I ended up doing that with a larger QNAP I had. It did have some odd bugs that I needed to track down, but otherwise was a good (albeit overly expensive) NAS. I used zfs.
j45 4/22/2025|||
Storage should be an appliance, or you're the appliance repair man always on call.
nehal3m 4/22/2025|||
Sure, but don't you lose the app ecosystem then?
transpute 4/22/2025|||
Hacker flexibility or consumer take-it-or-leave-it, pick one.

Debian offers flexibility and control, at the cost of time and effort. PhotoSync mobile apps will reliably sync mobile devices with NAS over standard protocols, including SSH/SFTP. A few mobile apps do work with self-hosted WebDAV and CalDAV. XPenology attempts to support Synology apps on standard Linux, without excluding standard Debian packages.

tankenmate 4/22/2025|||
Debian's software repo is about 500 times bigger than Synology's.
Tepix 4/22/2025||||
FWIW, I haven't had any real issues with QNAP in 10 years or so, but I'm pretty much only using basic features.

Once, I added a 4th drive to a RAID 5 set and was impressed that it performed the operation online. Neat.

Oh, there was one issue: a while ago my Time Machine backups were unreliable, but I haven't had that issue in three years or so.

j45 4/22/2025|||
It's news to me, maybe I haven't touched mine much out of leaving it absolutely stock.

Were you installing things manually or just using the app store?

freeAgent 4/22/2025|||
As far as lists of drives to avoid, Synology could certainly do that, but we also already have Backblaze’s reports on their own failure rates. Synology also uses multiple vendors to produce “Synology” branded drives, so as the article states this may also lead to confusion about which Synology branded drives are “good” vs. “bad” in the future, even with seemingly identical specs.
op00to 4/22/2025||
The idea is not so much about which drives fail or whatever. It’s more that certain consumer drives have firmware that doesn’t work well with NAS workloads. A desktop drive’s long timeouts could be treated as a drive failure rather than a transient error, for example.
freeAgent 4/22/2025||
I’d argue that anyone who is buying a NAS for personal use probably does enough research to figure out that NAS-focused/appropriate drives are a thing, though. And if they contact Synology support, it should be very easy for them to identify bad drive types. On top of that, Synology can (and has) warned about problematic drives.
m463 4/22/2025|||
Maybe this will backfire like the keurig DRM coffee pods?

https://www.theverge.com/2015/2/5/7986327/keurigs-attempt-to...

razakel 4/22/2025|||
>kept a list of drives to avoid based on user experience

Well, that sounds like a great way to get sued.

franga2000 4/22/2025|||
On what grounds exactly? You tested something, it turned out to perform below average, so you say you don't recommend buying it. Where's the crime?
tolien 4/22/2025|||
Seems to have worked well enough for Backblaze for years and years now. Another major vendor publicly announcing that make X model Y has shitty reliability is as much pressure on the storage duopoly as we're likely to get.
j45 4/22/2025|||
There's no reason to be scared to share your experience with hardware.
bambax 4/22/2025||||
Just do it in reverse: a list of drives that they have tested and can confirm work well; at the end of the list they just mention that they cannot recommend any other.
p_ing 4/22/2025|||
You would need to account for every drive firmware revision.
xmodem 4/22/2025|||
This is in fact standard practice for many software vendors.
j45 4/22/2025|||
How would it be a great way to get sued?

Backblaze publishes a great report.

https://www.backblaze.com/blog/backblaze-drive-stats-for-202...

msh 4/22/2025|||
QNAP seems to have a similar app ecosystem, or is there a quality difference? I have only used QNAP NAS devices, so I don't know.
j45 4/22/2025||
QNAP's ecosystem is decent. There is a third party store by a former QNAP employee that has a lot more selection in it.

Getting a lower-powered Intel Celeron QNAP NAS basically lets you run anything you want software- or app-wise, including Docker that just works, instead of hunting for ARM64 binaries for anything that is not available off the shelf.

doanchu 4/22/2025||
TrueNAS can do all that stuff for you.
coolgoose 4/22/2025|||
No it can't. Let's be honest, Synology's OS covers more than just storage, and no, spinning up a lot of third-party Docker containers that you need to maintain, secure, and manage isn't as easy.
brynx97 4/22/2025||
What can't TrueNAS do that was listed in the parent comment?

I'd rather have the flexibility offered by TrueNAS, in addition to the robust community. Yes, Synology hardware is convenient in some use cases, but you can generally build yourself a more powerful and versatile home server with TrueNAS Scale. There is a learning curve, so it is not for everyone.

cheschire 4/22/2025||
And for the learning curve folks there’s HexOS
NexRebular 4/22/2025|||
Or OmniOS with Napp-It
dostick 4/19/2025||
Synology became so bad, they measure disk space in percent, and thresholds cannot be configured lower than 5%. This may have been okay when volume sizes were in gigabytes, but now with multi-TB drives, 5% is a lot of space. The result is a NAS in a permanent alarm state because less than 5% of the space is free. And this makes it less likely for the user to notice when an actual alarm happens, because they are desensitised to warnings. I submitted this to them at least four times, and they reply that this is fine, it’s already decided to be like that, so they will not change it. Another stupid thing is that notifications about low disk space are sent to you via email and push only until about 30 GB is free. Then free space goes below 30 GB and reaches zero, yet notifications are not sent anymore. My multiple reports about this issue were always answered along the lines of “it’s already done like that, so we will not change it”.
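For comparison, a monitor that caps the relative threshold with an absolute floor avoids exactly this problem. A rough sketch of the idea - the 50 GiB floor and the /volume1 path are illustrative, not anything Synology actually ships:

```shell
#!/bin/sh
# Hypothetical free-space monitor: alarm on whichever floor is SMALLER,
# 5% of the volume or a fixed 50 GiB, so a multi-TB volume isn't stuck
# in a permanent "low space" state.
vol="${1:-/volume1}"

total_kb=$(df -Pk "$vol" | awk 'NR==2 {print $2}')
free_kb=$(df -Pk "$vol" | awk 'NR==2 {print $4}')

pct_floor_kb=$(( total_kb / 20 ))       # 5% of total capacity
abs_floor_kb=$(( 50 * 1024 * 1024 ))    # 50 GiB expressed in KiB

# take the smaller of the two floors
floor_kb=$pct_floor_kb
if [ "$abs_floor_kb" -lt "$floor_kb" ]; then
    floor_kb=$abs_floor_kb
fi

if [ "$free_kb" -lt "$floor_kb" ]; then
    echo "LOW SPACE on $vol: ${free_kb} KiB free (floor ${floor_kb} KiB)"
fi
```

On a 500 GB volume the 5% rule still wins (25 GB), but on a 20 TB volume the alarm trips at 50 GiB instead of a full terabyte.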

Most modern, especially software companies, choose not to fix relatively small but critical problems, yet they actively employ sometimes hundreds of customer support yes-people whose job seems to be defusing customer complaints. Nothing is ever fixed anymore.

kbolino 4/19/2025||
I think preventing alarm fatigue is a very good reason to fix issues.

But 5% free is very low. You may want to use every single byte you feel you paid for, but allocation algorithms really break down when free space gets so low. Remember that there's not just a solid chunk of that 5% sitting around at the end of the space. That's added up over all the holes across the volume. At 20-25% free, you should already be looking at whether to get more disks and/or deciding what stuff you don't actually need to store on this volume. So a hard alarm at 5% is not unreasonable, though there should also be a way to set a soft alarm before then.

j1elo 4/19/2025|||
5% of my 500 GB is 25 GB, which is already a lot of space but understandable. Not many things would fit in there nowadays.

But 5% of a 5 TB volume is 250 GB, that's the size of my whole system disk! Probably not so understandable by the lay person.

kbolino 4/19/2025||
This is partly why SSDs just lie nowadays and tell you they only have 75-90% of the capacity that is actually built into them. You can't directly access that excess capacity but the drive controller can when it needs to (primarily to extend the life of the drive).

Some filesystems do stake out a reservation but I don't think any claim one as large as 5% (not counting the effect of fixed-size reservations on very small volumes). Maybe they ought to, as a way of managing expectations better.

For people who used computers when the disks were a lot smaller, or who primarily deal in files much much smaller than the volumes they're stored on, the absolute size of a percentage reservation can seem quite large. And, in certain cases, for certain workloads, the absolute size may actually be more important than the relative size.

But most file systems are designed for general use and, across a variety of different workloads, spare capacity and the impact of (not) keeping it open is more about relative than absolute sizes. Besides fragmentation, there's also bookkeeping issues, like adding one more file to a directory cascading into a complete rearrangement of the internal data structures.

fc417fc802 4/22/2025||
> spare capacity and the impact of (not) keeping it open is more about relative than absolute sizes

I don't think this is correct. At least btrfs works with slabs in the 1 GB range IIRC.

One of my current filesystems is upwards of 20 TB. Reserving 5% of that would mean reserving 1 TB. I'll likely double it in the near future, at which point it would mean reserving 2 TB. At least for my use case, those numbers are completely absurd.

kbolino 4/22/2025||
We're not talking about optical discs or backup tapes which usually get written in full in a single session. Hard drive storage in general use is constantly changing.

As such, fragmentation is always there; absolute disk sizes don't change the propensity for typical workloads to produce fragmentation. A modern file system is not merely a bucket of files, it is a database that manages directories, metadata, files, and free space. If you mix small and large directories, small and large files, creation and deletion of files, appending to or truncating from existing files, etc., you will get fragmentation. When you get close to full, everything gets slower. Files written early in the volume's life and which haven't been altered may remain fast to access, but creating new files will be slower, and reading those files afterward will be slower too. Large directories follow the same rules as larger files, they can easily get fragmented (or, if they must be kept compact, then there will be time spent on defragmentation). If your free space is spread across the volume in small chunks, and at 95% full it almost certainly will be, then the fact that the sum of it is 1 TB confers no benefit by dint of absolute size.

Even if you had SSDs accessed with NVMe, fragmentation would still be an issue, since the file system must still store lists or trees of all the fragments, and accessing those data structures still takes more time as they grow. But most NAS setups are still using conventional spinning-platter hard drives, where the effects of fragmentation are massively amplified. A 7200 RPM drive takes 8.33 ms to complete one rotation. No improvements in technology have any effect on this number (though there used to be faster-spinning drives on the market). The denser storage of modern drives improves throughput when reading sequential data, but not random seek times. Fragmentation increases the frequency of random seeks relative to sequential access. Capacity issues tend to manifest as performance cliffs, whereby operations which used to take e.g. 5 ms suddenly take 500 or 5000. Everything can seem fine one day and then not the next, or fine on some operations but terrible on others.

Of course, you should be free to (ab)use the things you own as much as you wish. But make no mistake, 5% free is deep into abuse territory.

Also, as a bit of an aside, a 20 TB volume split into 1 GB slabs means there's 20,000 slabs. That's about the same as the number of 512-byte sectors in a 10 MB hard drive, which was the size of the first commercially available consumer hard drive for the IBM PC in the early 1980s. That's just a coincidence of course, but I find it funny that the numbers are so close.

Now, I assume the slabs are allocated from the start of the volume forward, which means external slab fragmentation is nonexistent (unless slabs can also be freed). But unless you plan to create no more than 20,000 files, each exactly 1 GB in size, in the root directory only, and never change anything on the volume ever again, then internal slab fragmentation will occur all the same.
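The internal-fragmentation point can be sketched in a few lines of Python. This is a toy first-fit model, not how btrfs actually allocates, and all the file sizes are invented for illustration:

```python
# Toy model: files packed into fixed-size 1 GiB slabs, first-fit.
SLAB = 1 << 30  # 1 GiB in bytes

def internal_fragmentation(file_sizes, slab=SLAB):
    """Return (slabs_used, wasted_bytes) after greedily packing files.
    Leftover tail space in each slab that no later file fits into is waste."""
    slabs = []  # remaining free bytes per slab already opened
    for size in file_sizes:
        # files larger than a slab fill whole slabs first
        while size > slab:
            slabs.append(0)      # a completely full slab
            size -= slab
        for i, free in enumerate(slabs):
            if free >= size:     # first-fit into an existing slab
                slabs[i] -= size
                break
        else:
            slabs.append(slab - size)  # open a fresh slab
    return len(slabs), sum(slabs)

# Mixed small/large workload: 700 MiB, 700 MiB, 300 MiB, 2 GiB.
sizes = [700 * 1024**2, 700 * 1024**2, 300 * 1024**2, 2 * 1024**3]
used, wasted = internal_fragmentation(sizes)
# used == 4 slabs, wasted == 348 MiB of unusable tail space
```

Even this tiny workload leaves 348 MiB stranded across slab tails; on a real volume with millions of mixed-size files the same effect compounds.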

fc417fc802 4/22/2025||
Yes thank you I am aware of what fragmentation is.

There are two sorts of fragmentation that can occur with btrfs. Free space and file data. File data is significantly more difficult to deal with but it "only" degrades read performance. It's honestly a pretty big weakness of btrfs. You can't realistically defragment file data if you have a lot of deduplication going on because (at least last I checked) the tooling breaks the deduplication.

> If your free space is spread across the volume in small chunks, and at 95% full it almost certainly will be

Only if you failed to perform basic maintenance. Free space fragmentation is a non-issue as long as you run the relevant tooling when necessary. Chunks get compacted when you rebalance.

Where it gets dicey is that the btrfs tooling is pretty bad at handling the situation where you have a small absolute number of chunks available. Even if you theoretically have enough chunks to play musical chairs and perform a rebalance the tooling will happily back itself into a corner through a series of utterly idiotic decisions. I've been bitten by this before but in my experience it doesn't happen until you're somewhere under 100 GB of remaining space regardless of the total filesystem size.

kbolino 4/22/2025||
If compaction (= defragmentation) runs continuously or near-continuously, it results in write amplification of 2x or more. For a home/small-office NAS (the topic at hand) that's also lightly used with a read-heavy workload, it should be fine to rely on compaction to keep things running smoothly, since you won't need it to run that often and you have cycles and IOPS to spare.

If, under those conditions, 100 GB has proven to be enough for a lot of users, then it might make sense to add more flexible alarms. However, this workload is not universal, and setting such a low limit (0.5% of 20 TB) in general will not reflect the diverse demands that different people put on their storage.

kimixa 4/22/2025||||
Also, Synology uses btrfs, a copy-on-write filesystem - that means there are operations that you might not expect to require allocation of new blocks - like any write, even if overwriting an existing file's data.

And "unexpected" failure paths like that are often poorly tested in apps.

leptons 4/22/2025||||
No matter how many TB of online HD storage I have, hard disks are just a temporary buffer for my tape drives.
buserror 4/22/2025|||
At home I have a 48xLTO5 changer with 4 drives (picked up for a song a while back! I actually don't need it but heck, it has a ROBOT ARM), and at work I'm currently provisioning a dual-rack setup with 96 LTO-9 tape drives, with 640 tapes available :-)

I'm a STRONG believer in tapes!

Even LTO 5 gives you a very cheap 1.5TB of clean, pretty much bulletproof storage. You can pick up a drive (with a SAS HBA card) for less than $200, there are zero driver issues (SCSI, baby); the Linux tape changer code has been stable since 1997 (with a port to VMS!).

Tape FTW :-)

leptons 4/23/2025||
I don't have one but I'd definitely take a tape changer if it weren't too expensive. It would be amazing to have 72TB of storage just waiting to be filled, without needing to go out into my garage to load a tape up.

LTO tapes have really changed my life, or at least my mental health. Easy and robust backup has been elusive. DVD-R was just not doing it for me. Hard drives are too expensive and lacked robustness. My wife is a pro photographer so the never-ending data dumps had filled up all our hard drives, and spending hundreds of dollars more on another 2-disk mirror RAID, and then another, and another was just stupid. Most of the data will only need to be accessed rarely, but we still want to keep it. I lost sleep over the mountains of data we were hoarding on hard drives. I've had too many hard drives just die, including RAIDs being corrupted. LTO tape changed all of that. It's relatively cheap, and pretty easy and fast compared to all the other solutions. It's no wonder it's still being used in data centers. I love all the data center hand-me-downs that flood eBay.

And I do love hearing the tapes whir, it makes me smile.

worthless-trash 4/22/2025|||
This is an area that I'm quickly growing into. What are you currently using, and what should I stay away from?
leptons 4/22/2025||
I got a used internal LTO5 tape drive on eBay for about $150, and then an HBA card to connect it to for about $25 or $30. I bought some LTO5 tapes, and typically I pay about $3.50/TB on eBay for new/used tapes. Many sellers charge far more for tapes, but occasionally I find a good deal. Most tapes are not used very much and have lots of life left in them (they have a chip inside the tape that tracks usage).

Then I scored another 3 used LTO5 tape drives on eBay for about $100; they all worked. I mainly use 1 tape drive. I have it running on an Intel i5 system with an 8-drive RAID10 array (cheap used drives, with a $50 9260-8i hardware RAID card), which acts as my "offsite" backup out in my detached garage - it's off most of the time (cold storage?) unless I'm running a backup. I can lose up to 2 drives without losing any data, and it's been running really well for years. I have 3 of these RAID setups in 3 different systems; they work great with the cheapest used drives from Amazon. I'm not looking for high performance, I just need redundancy. I've had to replace maybe 3 drives across all 3 systems due to failure over the last 7 years.

On Windows the tape drive with LTFS was not working well, I think due to Windows Defender trying to scan the files as it was writing them, causing a lot of "shoeshining" of the tape, but I think Windows Defender can be disabled. But I bought tape backup software from https://www.iperiusbackup.com - it just works and makes backups simple to set up and run. I always verify the backup. If something is really important I'll back up to at least 2 tapes. For some really important stuff I will generate parity files (with WinPar) and put those on tape too. Non-encrypted, the drive runs at the full 140MB/s, but with encryption it runs at about 60MB/s, because I guess the tape drive is doing the encryption.

I love it, it has changed my data-hoarding life. At $3.50/TB and 140MB/s and 1.5TB per tape, it can't be beat by DVD-R or hard drives for backup. Used LTO5 is really in a sweet spot right now on eBay, but LTO6 is looking good too recently (2.5TB/tape). LTO6 drives can read LTO5 tapes, so there's a pretty easy upgrade path. I also love that there is a physical write-protect switch on the tapes, which hard drives don't have. If you plug in a hard drive to an infected system, that hard drive could easily be compromised if you don't know your system is infected.
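Back-of-the-envelope numbers from the figures above (140 MB/s native throughput, 1.5 TB per LTO-5 tape, roughly $3.50/TB for used media; these are the commenter's figures, not official specs):

```python
def tape_stats(capacity_tb=1.5, throughput_mb_s=140, price_per_tb=3.50):
    """Hours to fill one tape at sustained native speed, plus media cost."""
    capacity_mb = capacity_tb * 1_000_000          # decimal TB -> MB
    hours = capacity_mb / throughput_mb_s / 3600
    cost = capacity_tb * price_per_tb
    return round(hours, 2), round(cost, 2)

hours, cost = tape_stats()
# ~2.98 hours to fill one LTO-5 tape, ~$5.25 of media per tape
```

So a full tape is roughly an overnight job per 1.5 TB, at a media cost hard drives can't touch for cold data.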

worthless-trash 4/24/2025||
Thank you for the detailed write up, you've started me on the path!
sitkack 4/22/2025||||
[flagged]
runamok 4/19/2025|||
100%. Those disks are likely working much harder moving the head all over the place to find those empty spaces when it writes.
gambiting 4/22/2025||
....do you think the drive doesn't know where the empty space actually is?
MrDrMcCoy 4/22/2025|||
Drives blindly store and retrieve blocks wherever you tell them, with no awareness of how or if they relate to one another. It's a filesystem's job to keep track of what's where. Filesystems get fragmented over time, and especially as they get full. The more full they get, the more seeking and shuffling they have to do to find a place to write stuff. This will be the case even after the last spinning drive rusts out, as even flash eventually has to contend with fragmentation. Heck, even RAM has to deal with fragmentation. See the discussion from the last few weeks about the ongoing work to figure out a contiguous memory allocator in Linux. It's one of the great unsolved problems in general computing; you and your descendants would be set for life if you could solve it.
akx 4/22/2025||
Not quite, AFAIK? Drive controllers may internally remap blocks to physical disk blocks (e.g. when a bad sector is detected; see the SMART attribute Reallocated Sector Count).
kbolino 4/22/2025||
Logical Block Addressing (LBA) by its very nature provides no hard guarantees about where the blocks are located. However, the convention that both sides (file systems and drive controllers) recognize is that runs of consecutive LBAs generally refer to physically contiguous regions of the underlying storage (and this is true for both conventional spinning-platter HDDs as well as most flash-based SSDs). The protocols that bridge the two sides (like ATA, SCSI, and NVMe) use LBA runs as the basic unit of accessing storage.

So while block remapping can occur, and the physical storage has limits on its contiguity (you'll eventually reach the end of a track on a platter or an erasable page in a flash chip), the optimal way to use the storage is to put related things together in a run of consecutive LBAs as much as possible.
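The "runs of consecutive LBAs" point is easy to make concrete: given the logical blocks a file occupies, the number of separate transfers the drive must service is the number of contiguous runs, not the number of blocks. A toy sketch (ignoring remapping entirely):

```python
def lba_runs(blocks):
    """Coalesce logical block addresses into (start, length) extents."""
    runs = []
    for b in sorted(blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            # extends the previous run: still one sequential transfer
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

# A contiguous file: one run, one sequential transfer.
assert lba_runs([100, 101, 102, 103]) == [(100, 4)]

# The same four blocks fragmented: three runs, so on a spinning disk
# potentially three seeks (each costing up to a full ~8.33 ms rotation
# on a 7200 RPM drive) instead of one.
assert lba_runs([100, 101, 500, 900]) == [(100, 2), (500, 1), (900, 1)]
```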

MrDrMcCoy 4/22/2025||
Sure, but bad block tracking and error correction are pretty different from the implied file/volume awareness I was responding to.
kbolino 4/22/2025||
Yes, to be clear, the drive controller generally (*) has no concept of volumes or files, and presents itself to the rest of the computer as a flat, linear collection of fixed-size logical blocks. Any additional structure comes from software running outside the drive, which the drive isn't aware of. The conventional bias that adjacent logical blocks are probably also adjacent physical blocks merely allows the abstraction to be maintained while also giving the file system some ability to encourage locality of related data.

* = There are some exceptions to this, e.g. some older flash controllers were made that could "speak" FAT16/32 and actually know if blocks were free or not. This particular use was supplanted by TRIM support.

genewitch 4/22/2025|||
I think you'll find that the word "find" doesn't mean "has to search", like one can find their nose in the middle of their face, if one desires.

Change the word to "seek" and it may make more sense.

fc417fc802 4/22/2025||
It makes more sense but it's not true for the modern CoW filesystems that I'm familiar with. Those allocate free space in slabs that they write to sequentially.
kbolino 4/22/2025|||
Also, CoW isn't some kind of magic. There are two meanings I can think of here:

A) When you modify a file, everything including the parts you didn't change is copied to a new location. I don't think this is how btrfs works.

B) Allocated storage is never overwritten, but modifying parts of a file won't copy the unchanged parts. A file's content is composed of a sequence (list or tree) of extents (contiguous, variable-length runs of 1 or more blocks) and if you change part of the file, you first create a new disconnected extent somewhere and write to that. Then, when you're done writing, the file's existing extent limits are resized so that the portion you changed is carved out, and finally the sequence of extents is set to {old part before your change}, {your change}, {old part after your change}. This leaves behind an orphaned extent, containing the old content of the part you changed, which is now free. From what evidence I can quickly gather, this is how btrfs works.

Compared to an ordinary file system, where changes that don't increase the size of a file are written directly to the original blocks, it should be fairly obvious that strategy (B) results in more fragmentation, since both appending to and simply modifying a file causes a new allocation, and the latter leaves a new hole behind.
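Strategy (B) can be sketched as a toy model: a file is a list of (device offset, length) extents, and an in-place overwrite carves the changed range out into a freshly allocated extent, orphaning the old blocks. This is illustrative only, not actual btrfs code, and it assumes the overwrite falls inside a single extent:

```python
def cow_overwrite(extents, file_off, length, new_dev_off):
    """Replace file range [file_off, file_off+length) with a new extent
    allocated at new_dev_off. Returns (new_extents, freed_extent)."""
    out, freed, pos = [], None, 0
    for dev, ln in extents:
        if pos <= file_off and file_off + length <= pos + ln:
            head = file_off - pos                     # unchanged prefix
            tail = (pos + ln) - (file_off + length)   # unchanged suffix
            if head:
                out.append((dev, head))
            out.append((new_dev_off, length))         # the new write
            if tail:
                out.append((dev + head + length, tail))
            freed = (dev + head, length)              # orphaned old blocks
        else:
            out.append((dev, ln))
        pos += ln
    return out, freed

# One file stored contiguously in 100 blocks at device block 1000;
# overwrite blocks 40..60 with data freshly written at device block 5000.
new, hole = cow_overwrite([(1000, 100)], 40, 20, 5000)
# new == [(1000, 40), (5000, 20), (1060, 40)]  -> 1 extent became 3
# hole == (1040, 20)                           -> a new free-space hole
```

One overwrite turned a contiguous file into three extents and punched a 20-block hole into free space, which is the fragmentation pressure described above.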

While strategy (A) with contiguous allocation could eliminate internal (file) fragmentation, it would also be much more sensitive to external (free space) fragmentation, requiring lots of spare capacity and/or frequent defrag.

Either way, the use of CoW means you need more spare capacity, not less. It's designed to allow more work to be done in parallel, as fits modern hardware and software better, under the assumption that there's also ample amounts of extra space to work with. Denying it that extra space is going to make it suffer worse than a non-CoW file system would.

fc417fc802 4/22/2025||
Which is exactly why you periodically do maintenance to compact the free space. Thus it isn't an issue in practice unless you have a very specific workload in which case you should probably be using a specialized solution. (Although I've read that apparently you can even get a workload like postgres working reasonably well on zfs which surprises me.)

If things get to the point where there's over 1 TB of fragmented free space on a filesystem that is entirely the fault of the operator.

kbolino 4/22/2025||
What argument are you driving at here? The smaller the free space, the harder it is to run compaction. The larger the free space, the easier it is. There are some confounding forces in certain workloads, but the general principle stands.

"Your free space shouldn't be very fragmented when you have such large amounts free!" is exactly why you should keep large amounts free.

kbolino 4/22/2025|||
If you delete files, or append to existing files, then the promises of the initial allocation strategy go out the window.
pixelesque 4/22/2025||
Not defending them in any way, but I know with my Infrant (then Netgear unfortunately, who last year killed the products) ReadyNASs which also used mdadm to configure BTRFS with RAID5 in a similar way to Synology and QNAP, the recommendation was that you don't want your BTRFS filesystem to run low on space, because then it runs out of metadata space, and if it does that it becomes read-only, and can become unstable.

Basically, the recommendation was to always have 5% free space, so this isn't just Synology saying this.

izacus 4/22/2025|||
The recommendation of 5% free space comes from arrays in sizes of 100s of GB, not tens of TB.

This is the same kind of issue that Linux root filesystems had - a % based limitation made sense when disks were small, but now they don't make a lot of sense anymore when they restrict usage of hundreds of GB (which are not actually needed by the filesystem to operate).
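The mismatch is easy to put in numbers (5% is the figure discussed upthread; the helper itself is just illustrative):

```python
def reserved_gb(volume_gb, pct=5):
    """Space set aside under a fixed-percentage reservation policy."""
    return volume_gb * pct / 100

# 5% of a 250 GB disk: a tolerable 12.5 GB.
assert reserved_gb(250) == 12.5
# 5% of a 20 TB array: a full terabyte held back.
assert reserved_gb(20_000) == 1_000.0
# 5% of 40 TB: two terabytes, more than many whole drives.
assert reserved_gb(40_000) == 2_000.0
```

The percentage stays constant while the absolute cost grows linearly with capacity, which is exactly the complaint about applying old %-based rules to modern arrays.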

pixelesque 4/22/2025||||
Actually, reading the BTRFS docs, they recommend keeping 5-10% free space:

https://archive.kernel.org/oldwiki/btrfs.wiki.kernel.org/ind...

MrDrMcCoy 4/22/2025||
Yup. After having dealt with that, I err on the side of caution and don't even let it merge small files into inodes for space savings. I still love btrfs for the CoW, snapshots, and compression, but you really gotta give that metadata a wide berth.
magicalhippo 4/22/2025|||
ZFS has a similar limitation.

It tries to mitigate this by reserving some space for metadata, to be used in emergencies, but apparently it's possible to exhaust it and get your filesystem into a read-only state.

There was some talk about increasing the reservations to prevent this but can't recall if changes were made.

jjcob 4/22/2025||
There is one argument for Synology doing this: there have been cases where hard drive companies misled their customers. I personally fell victim to this when Western Digital started selling SMR drives as WD Red, without labelling them as SMR drives.

So lots of customers thought they were buying a drive that's perfect for NAS, only to discover that the drives were completely unsuitable and took days to restore, or failed altogether. Synology had to release updates to their software to deal with the fake NAS drives, and their support was probably not happy to deal with all the angry customers who thought the problem was with Synology, and not Western Digital for selling fake NAS drives.

If you buy a drive from Synology, you know it will work, and won't secretly be a cheaper drive that's sold as NAS compatible even though it is absolutely unsuitable for NAS.

franga2000 4/22/2025||
That's a great argument for selling drives, not for locking your devices down to practically require them
spease 4/22/2025|||
> That's a great argument for selling drives, not for locking your devices down to practically require them

The counterargument is, people won’t listen and then blame Synology when their data is affected. At which point it may be too late for anything but attempting data recovery.

Sufficiently nontechnical users may blame the visible product (the NAS) even if the issue is some nuance to the parts choice made by a tech friend to keep it within their budget.

Synology is seen as the premium choice in the consumer NAS argument, so vertically integrating and charging a premium to guarantee “it just works” is not unprecedented.

There are definitely other NAS options as well, if someone is willing to take on more responsibility for understanding the tech.

yurishimo 4/22/2025|||
I don’t think anyone would care if Synology gave priority to their own drives. A checkbox during setup that says “Yes, I know I’m using these drives that have not been validated blah blah blah” would be plenty. That’s not what Synology did however and that’s the main reason everyone is pissed.
netruk44 4/22/2025|||
Funny you mention that…

I have a DS1515+ which has an SSD cache function that uses a whitelisted set of known good drives that function well.

If you plug in a non whitelisted ssd and try to use it as a cache, it pops up a stern warning about potential data loss due to unsupported drives with a checkbox to acknowledge that you’re okay with the risk.

So…there’s really no excuse why they couldn’t have done this for regular drives.

spease 4/22/2025||
That assumes that the person setting up the NAS is the same person using it, which is not going to be the case for non-tech-savvy users.

Everyone will understand it costing more, fewer people will understand why the NAS ate their data without the warning it was supposed to provide, because cheap drives that didn’t support certain metrics were used.

If Synology wants to have there be only one way that the device behaves, they have to put constraints on the hardware.

brookst 4/22/2025||||
But those people would call support when the array couldn’t rebuild. And many of them would blame Synology, and demand warranty replacement of the “defective” device, and generally cost money and stress.

As long as Synology is up front in the requirement and has a return policy for users who buy one and are surprised, I think they’re well within their rights to decide they’re tired of dealing with support costs from misbehaving drives.

As long as they don’t retroactively enforce this policy on older devices I don’t understand the emotionality here. Haven’t you ever found yourself stuck supporting software / systems / etc that customers were trying to cheap out on, making their problems yours?

kstrauser 4/22/2025||
It’s not that I don't understand. It’s that as an end user I don't give a shit.

Toyota might have great reasons for opening a chain of premium quality gas stations, but the second they required me to use them, I'd never buy another Toyota for as long as I lived.

I want to bring my own drives, just as I have since I bought my first DS-412+ 13 years ago.

numpad0 4/22/2025|||
People are going to ignore that and leave bad reviews online, which will have compounding effects. SMR drives work in RAID until the CMR buffer regions are depleted, and then the RAID starts falling apart. This will undoubtedly create the wrong impression that Synology products, not the drives, are untrustworthy.
Melatonic 4/22/2025||
What if the device had a minimum benchmark feature that would test any new drive ? And fail the worst ones ?
numpad0 4/23/2025||
SMR drives work like SSDs: writes are buffered to a CMR zone, consolidated into SMR track data, copied into onboard cache RAM, and written to the SMR zone. SMR tracks have sizes of 128MB or so, and can be written or erased in a track-at-once manner, with the head half-overwriting data like moving a broad whiteboard marker slowly outward on a pottery wheel, rather than giving each ring of data enough separation. This works because the heads have higher resolution in the radial direction for reads than for writes; the marker tip is broader than what the disk's eyes can see.

This copy operation is done either while the disk is idling, or is forced by stopping responses to read and write operations when the CMR buffer zone is depleted and data has to be moved off. RAID software cannot handle the latter scenario and considers the disk faulty.

You can probably corner a disk into this depleted state to expose a drive as SMR-based, but I don't know if that works reliably or if it's the right solution. This is roughly all I know about the technical side of this problem anyway.
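The buffer-depletion behaviour can be caricatured in a few lines: writes land in a fixed CMR buffer that drains only during idle time, and once it's full a write stalls for a long destage instead of completing quickly, which is what RAID timeouts punish. All numbers here are invented for illustration, not taken from any real drive:

```python
def simulate(writes_mb, buffer_mb=20_000, idle_drain_mb=0):
    """Classify each write in a burst as 'fast' (absorbed by the CMR
    buffer) or 'stalled' (forced destage because the buffer is full)."""
    used, log = 0, []
    for w in writes_mb:
        used = max(0, used - idle_drain_mb)  # background destage while idle
        if used + w <= buffer_mb:
            used += w
            log.append("fast")
        else:
            used = w            # drive stops responding while it destages,
            log.append("stalled")  # then buffers the pending write
    return log

# A sustained burst (e.g. a RAID rebuild) blows through the buffer:
burst = [5_000] * 6             # 30 GB of back-to-back writes
# simulate(burst) -> ['fast', 'fast', 'fast', 'fast', 'stalled', 'fast']

# With enough idle time between writes the buffer always drains,
# so desktop-style workloads never see the stall:
# simulate(burst, idle_drain_mb=5_000) -> ['fast'] * 6
```

This is why DM-SMR drives look fine in light desktop use and then "fail" mid-rebuild.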

littlestymaar 4/22/2025||||
> The counterargument is, people won’t listen and then blame Synology when their data is affected

I see this kind of argument ("X had to do Y otherwise customers would complain") a lot every time a company does something shady and some contrarian wants to defend them, but it really isn't as smart as you think it is: the company doesn't care if people complain, otherwise they wouldn't be making this kind of move either, because it raises a lot more complaints. Companies only care if it affects their bottom line, that is, if they can be held liable in court, or if the problem is big enough to drive customers away. There's no way this issue would do any of those (at least not as much as what they are doing right now, by a very large margin).

It's just yet another case of an executive making a shady move for short-term profits; there's no grand reasoning behind it.

brookst 4/22/2025||
Your absolute conviction is misplaced. Support is expensive to provide, especially on hardware that’s expensive to ship around.

This may be a bad move, and you’re certainly right that Synology expects to make more profit with this policy than without it, but it’s a more complex system than you understand. Irate customers calling support and review-bombing for their own mistakes are a real cost.

I don’t blame Synology for wanting to sell fewer units at higher prices to more professional customers. Hobbyists are an attractive market but, well, waves hands at the comments in this thread.

freeAgent 4/22/2025|||
The thing is, that more professional market would never make the mistake of putting SMR drives in a RAID array anyway and they are also (I hope) good enough at doing their own research to filter out reviews from uneducated retail consumers. So, again, we’re left with trying to find a justification for this move other than Synology’s profits.

And when this issue happened with WD drives, I don’t remember a backlash against Synology at all. WD, on the other hand, deserved and received plenty of blame.

devilbunny 4/24/2025|||
Synology is in that "prosumer" space, though, where maybe you don't really want to sell to hobbyists - but the bulk of your hobbyists in this space are tech-savvy and will recommend your products at work if you don't alienate them.
moooo99 4/22/2025||||
> The counterargument is, people won’t listen and then blame Synology when their data is affected. At which point it may be too late for anything but attempting data recovery.

Is it though? Most (consumer) NAS systems are probably sold without the drives, which are bought separately. When there is an issue with a drive and it breaks, I'm pretty sure most people technical enough to consider the need for a NAS would attribute that failure to the manufacturer of the drives, not to the manufacturer of the computer they put their drives into.

brookst 4/22/2025||
I know a photographer who needs tech support for really anything, and who has bought drives and upgraded his NAS himself. I don’t think that’s unusual, but of course n=1.
Melatonic 4/22/2025||||
I see your argument here and this could also be solved by some type of somewhat difficult flag that Synology could implement.

Meaning that by default it could require a Synology drive that is at minimum going to work decently.

Want to mess around more and are more technical ? Make it a CLI command or something the average joe is going to be wary about. With a big warning message.

Personally I only like to buy very reliable enterprise class drives (probably much better than whatever Synology will officially sell) and this is my main concern.

freeAgent 4/22/2025|||
Nobody blamed Synology when that WD SMR issue happened. Come on, let’s get real here. Locking the devices down so they only work with drives bearing Synology branding is about Synology’s profits.
formerly_proven 4/22/2025|||
For a BYOD product it would be fine to add a blacklist of DM-SMR drives imho. Or have a big red banner and a taint flag.

But that's not what Synology did.

Also, if the image on the Synology page is accurate, they are relabeled Toshiba drives. Which doesn't really seem a good choice for SMB/SOHO NAS devices, because the Toshiba "Machine Gun" MGxx drives are the loudest drives on the market.

sersi 4/22/2025|||
I had some 12TB WD Ultrastars that I happily replaced with some Toshiba MG09ACA18TE. To my ears at least, the Toshibas sound significantly more bearable than the Ultrastars did (lower pitch, so less disturbing). Due to living in a small apartment my NAS is in my living room, so noise matters.

That said, I've since added an SSD and moved almost everything to it (docker, the database, and all apps) and it's much nicer in terms of noise.

m000 4/22/2025||||
TBF, the picture in TFA only shows rack-mounted Synology devices, where noise is not really a concern.

Synology SMB/SOHO NAS devices should not be affected by the drive lockdown (for now).

sokoloff 4/22/2025|||
Per the article, it’s all “Plus” models from 2025 on, which definitely includes desktop 2-8 bay units.

I have been a happy enough Synology user since 2014, even though I had to hardware repair my main DS1815+ twice in that time (Intel CPU bug and ATX power-on transistor replacement).

Other than two hardware failures in 10 years (not good), the experience was great, including two easy storage expansions and the replacement of one failed drive. These were all BYOD, and the current drives are shucked WD reds with a white sticker (still CMR).

I happily recommended them prior to this change and now will have to recommend against.

Melatonic 4/22/2025||
So basically buy one of the older models now ?

Will new firmware updates to everything before this require the Synology branded drives ?

sokoloff 4/22/2025||
Seemingly not, at least not for the moment, based on the experiences of people migrating arrays [successfully] from older units and the massive backlash that would result if they trashed currently working arrays.

If you’re going to buy a DSM unit, I’d definitely buy a 2024 or earlier model. But even as an overall happy user under their old go-to-market approach, I can’t recommend them now.

Heliosmaster 4/22/2025|||
but that's the proposed change, that their Plus lineup which is generally targeted for SMB/SOHO/enthusiast market, will be working only with their drives
m000 4/22/2025||
Then the photo with the rack-unit servers is obvs. a distraction. Thanks for the clarification.
numpad0 4/22/2025|||
It can't be programmatically identified because manufacturers actively hide it. There are ATA commands for that, but DM-SMR drives lie to them.

I'm not sure why Toshiba M-prefixed 7K2 drives would be bad for NAS use cases. They're descendants of what was used in high-performance SPARC servers. Hot, dense, obnoxious, but that's just Fujitsu. They're plenty reliable, performant, and perfect for your all-important on-line/near-line data! You just have to look away from the bills (/s).

0: https://www.techpowerup.com/265841/some-western-digital-wd-r...

1: https://www.techpowerup.com/265889/seagate-guilty-of-undiscl...

2: https://www.smartmontools.org/ticket/1313

wetbaby 4/22/2025|||
Which WD drives specifically were misleading customers?

It's fine to have 'Synology supported drives' which guarantee compatibility, but requiring them is absolute bollocks.

jjcob 4/22/2025||
WD Red drives. They released a new version of their WD Red drive, and the only difference they stated in the specs was that it had more cache. So I thought, great, this is their updated model with more cache, it's going to be faster.

After some time, people started to post about problems with the new WD Red drives. People had trouble restoring failed drives, and I had a problem where I think the drives never stopped rewriting sectors (you could hear the hard drives clicking 24/7 even when everything was idle).

Then someone figured out that WD had secretly started selling SMR drives instead of CMR drives. The "updated" models with more cache were much cheaper disks, and they only added more cache to try and cover up the fact that the new disks suffered from catastrophic slowdowns during certain workloads (like rebuilding a NAS volume).

This was a huge controversy. But apparently selling SMR drives is profitable, so WD claims the problem is just that NAS software needs to be made compatible with SMR drives, and all is well. They are still selling SMR drives in their WD Red line.

Edit: Here's a link to one of the many forum threads where people discovered the switch: https://community.synology.com/enu/forum/1/post/127228

fc417fc802 4/22/2025|||
> the problem is just that NAS software needs to be made compatible with SMR drives

That's half of it ... maybe? Last time I looked drives that offer host managed SMR still weren't available to regular consumers. In theory that plus a compatible filesystem would work flawlessly. In practice you can't even buy the relevant hardware.

toast0 4/22/2025||||
> This was a huge controversy. But apparently selling SMR drives is profitable, so WD claims the problem is just that NAS software needs to be made compatible with SMR drives, and all is well. They are still selling SMR drives in their WD Red line.

Well, SMR lets you store more stuff on the same platter (more or less); fewer platters reduce costs, etc.

WD's claims about it being a software problem would be more reasonable if they were providing guidance about what the software needs to do to perform well with these drives, and probably that would involve having information about the drive available to the OS/filesystem rather than hidden.

MortyWaves 4/22/2025|||
For future reference which WD drives are actually suitable for NAS? I remember someone saying you need to look for a specific more expensive type.
m000 4/22/2025|||
WD Red Plus -> CMR technology, suitable for NAS

WD Red -> SMR technology, slightly cheaper, not suitable for NAS

kstrauser 4/22/2025||
…but still marketed for NAS users, alas.
m000 4/22/2025||
I should have been more specific: not suitable for RAID NAS

RAID and NAS used to go together when drive capacities were lower. E.g. I had a 9TB NAS with RAID5 at times when 8TB drives were >$500 a pop. These days, NAS does not necessarily imply having a RAID setup. I see a new "build your SFF/RPi NAS" article every week, and it rarely involves RAID.

This is because a NAS setup with a single high-capacity drive and an online backup subscription (e.g. Backblaze) is more cost-effective and perfectly adequate for a lot of users, who have no interest in playing the sysadmin. In such a setup, you just need a drive that can withstand continuous operation, and SMR should work fine.

kstrauser 4/22/2025||
That's an interesting point I hadn't considered. To me, NAS implies RAID. You might be right that this is no longer true.
kstrauser 4/22/2025|||
Frankly, I haven’t bought a single WD since. I no longer trust them as a brand, and trust is critical for this class of things.
DHolzer 4/22/2025|||
Buying HDDs these days feels a little like navigating the dark web.
izacus 4/22/2025|||
I heard it's about the EU CYA directive, which requires IoT companies to deploy security patches in firmware within X months of those patches being released.

This would explain why they'd only want to support HDD models the Synology OS can flash firmware updates to.

(It's also convenient to get more margins.)

kstrauser 4/22/2025||
Nah. That wouldn't require them to patch someone else's hardware that an end user installed after the fact. By analogy, Samsung isn't obligated to patch the firmware of a pair of AirPods connected to one of its phones.
j45 4/22/2025|||
Western Digital misrepresented their Red drives as being suitable for NAS use, which ended with them facing a class action lawsuit.

Luckily all of the settings can be searched for and verified.

Seagate and Hitachi have seemed to treat me well over the years and I was giving WD a chance.

Next drives to buy will be from this list: https://www.backblaze.com/blog/backblaze-drive-stats-for-202...

mappu 4/22/2025|||
I have a mix of SMR and CMR drives in my NAS and it works perfectly (on mdadm). I understand ZFS hates it but what is the problem on Synology?
Hikikomori 4/22/2025|||
Afaik it's not much of a problem until you need to rebuild the raid.
Arrowmaster 4/22/2025|||
My rough understanding is that an SMR drive has a small CMR section where data lands first when you do a write. Then, when the drive is idle, it moves the data to the SMR zones, because SMR writes are slow. If you fill up the small CMR section, the drive starts writing directly to SMR and you see a huge performance loss. Without adaptation for SMR drives, a lot of systems interpreted this slowdown as a failure and would halt a rebuild. Even with that corrected for, you are now looking at 10x the rebuild time of a CMR drive, which increases the odds of another drive failing during the rebuild.
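That cache-fill cliff is easy to see in a toy model. Every number below is invented for illustration (cache size, throughputs, capacity are not from any real drive's datasheet); the point is only the shape of the behavior:

```python
# Toy model of an SMR drive with a small CMR write cache. All numbers
# here are made up for illustration, not taken from any datasheet.

def simulate_sustained_write(total_mb, cache_mb, cmr_mb_s, smr_mb_s):
    """Average throughput (MB/s) of one long sequential write."""
    fast_mb = min(total_mb, cache_mb)   # portion absorbed by the CMR cache
    slow_mb = total_mb - fast_mb        # remainder spills to shingled zones
    seconds = fast_mb / cmr_mb_s + slow_mb / smr_mb_s
    return total_mb / seconds

# A short burst fits inside the cache and looks like a normal drive...
burst = simulate_sustained_write(10_000, cache_mb=20_000,
                                 cmr_mb_s=180, smr_mb_s=30)
# ...but a RAID rebuild rewrites the whole disk and blows through it.
rebuild = simulate_sustained_write(4_000_000, cache_mb=20_000,
                                   cmr_mb_s=180, smr_mb_s=30)
print(f"burst: {burst:.0f} MB/s, rebuild: {rebuild:.0f} MB/s")
```

Benchmarks that write a few GB never leave the fast path, which is how the switch went unnoticed at first; only whole-disk workloads like a rebuild expose the slow path.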
gwbas1c 4/22/2025|||
A friend of mine who moved from Communist Russia to the US once explained to me the "tyranny of choice." He explained that, in the US, sometimes we get overwhelmed with many, many options.

To be quite blunt: after choosing my NAS, the act of choosing hard drives was actually harder and somewhat overwhelming. Honestly, knowing that I can choose from a narrower set of drives that are guaranteed to work correctly is going to really tip the scale in favor of Synology the next time I'm in the market for a NAS.

wiether 4/23/2025||
Locking out non-approved drives does absolutely nothing for your case, though.

They can sell "guaranteed to work" drives for people like you who don't want to go through the whole picking process, while still letting everyone else put in any drive they want.

kmacdough 4/22/2025||
This is an argument in favor of all anti-competitive walled garden moves. But we've seen time and again the degrading service and price gouging that ultimately comes. People have a right to use the things they own as they please. Companies don't have a right to protect their image that supersedes this basic human right of independence.
PeterStuer 4/19/2025||
I currently run 2 Synology NAS's in my setup. I am very satisfied with their performance, but nevertheless I will be phasing them out because their offerings are not evolving in line with customer satisfaction but with profit maximization through segmentation and vertical lock-in.
joshstrange 4/19/2025||
Do you have a plan on what you’re going to move to?

I’ve used (and still use) UnRaid before but switched to Synology for my data a while back due to both the plug-and-play nature of their systems (it’s been rock solid for me) and easily accessible hard drive trays.

I’ve built 3 UnRaid servers and while I like the software, hardware was always an issue for me. I’d love a 12-bay-style Synology hardware device that I could install whatever I wanted on. I’m just not interested in having to halfway deconstruct a tower to get at 1 hard drive. Hotswap bays are all I want to deal with now.

disambiguation 4/20/2025|||
Not OP but TrueNAS is a good alternative - both the software and their all in one NAS builds.

I have an Unraid on a USB stick somewhere in my rack, but over time it started feeling limited, and when they began changing their license structure I decided it was time to switch, though I run TrueNAS on a Dell R720xd instead of one of their builds (my only complaint is the fan noise - I think the R730 and up are better in this regard).

Proxmox was also on my short list for hypervisors if you dont want TrueNAS.

chme 4/22/2025||
I also have a TrueNAS, but because of its limitations (read-only root file system), I came to the conclusion that if I ever need to reinstall it, I will switch to Proxmox and install TrueNAS as one virtual client, next to the other clients in my home lab.

I have found workarounds for the read-only root file system. But they aren't great. I have installed Gentoo with a prefix inside the home directory, which provides me with a working compiler and I can install and update packages. This sort of works.

For running services, I installed jailmaker, which starts an LXC Debian container with docker-compose. But I'm not so happy about that, because I would rather have an atomic system there. I couldn't figure out how to install Fedora CoreOS inside an LXC container, or whether that is even possible. Maybe NixOS would be another option.

But, as I said, for those services I would rather just run them in Proxmox and only use the TrueNAS for the NAS/ZFS management. That provides more flexibility and better system utilization.

GeertJohan 4/22/2025|||
I use TrueNAS Scale as the root OS and have it run a Linux VM, which is easily done via their 'Virtualization' feature. No need for Proxmox. AFAIK it works a lot better to give ZFS direct access to the underlying HDDs. TrueNAS also has an 'Apps' feature, which is basically glorified Helm chart installs on k3s that TrueNAS does for you. But I prefer more control, so I have k8s on the Linux VM. What's also great is that the k8s on the Linux VM can use the TrueNAS storage via democratic-csi.

https://github.com/democratic-csi/democratic-csi

chme 4/22/2025|||
I was using Truecharts before k8s was deprecated.

The deprecation caused me to move to something more neutral: I stayed away from all of TrueNAS's 'native' apps and migrated to ordinary docker-compose, because that seemed the most approachable.

I was also looking into running a Talos k8s cluster, but that didn't seem to be as approachable to me and a bit overkill for a single-node setup.

sokoloff 4/22/2025|||
I run Proxmox on the bare metal and pass the HBA through to the TrueNAS VM (so it gets direct access to the attached drives).
therein 4/22/2025|||
> I also have a TrueNAS, but because of its limitations (read-only root file system)

It isn't really the case. TrueNAS wants you to look at it as an appliance so they make it work that way out of the box.

On the previous release, they had only commented out the apt repos but you could write to the root filesystem.

On the latest release, they went a little further and did lock the root filesystem by default but using a single command (`sudo /usr/local/libexec/disable-rootfs-protection`), root becomes writable and the commented out apt repos are restored. It just works.

chme 4/22/2025||
But AFAIK, updates will overwrite everything, so installing anything is just temporary.
therein 4/22/2025||
I have both of these releases running side by side for multiple years by now. It will not auto-update between releases anyway similarly to how nobody would do a dist-upgrade on you automatically. Neither have ever overwritten my changes to enable rootfs rw + apt repo fix and other changes to the filesystem, no more than a normal Debian would. Enabling apt actually gets you a more up to date system than you'd get otherwise.
ellen364 4/22/2025||||
I've been mostly happy with a TerraMaster DAS attached to a mini PC running Unraid. The bays are hotswappable and overall it's been solid.

I say "mostly" happy because I almost returned it. The USB connection between the mini PC and the TerraMaster would be fine for a few days, and then during intense operations like parity checks it would disconnect and look like a parity error/disk failure, except the disks were fine. Eventually I realised the DAS draws power from the USB port as well as the adapter plug, and the mini PC wasn't supplying enough. Since attaching a powered USB hub it's been perfect.

Explanation of symptoms and solution, in case anyone is considering one or has the same problem: https://forum.terra-master.com/en/viewtopic.php?t=5830

webstrand 4/22/2025||
I had the same issue, same solution worked for me too.

It works well, but the USB connection could be faster, and it bogs down when doing writes with soft RAID. I've been thinking about a DAS connected directly via SAS instead. Still musing about which enclosure to use, though.

asteroidburger 4/22/2025||||
I haven't used them personally so I can't vouch for them, but UGREEN's NAS line is the same form factor as a Synology unit and lets you run any OS. I'd probably put straight Debian on mine and handle it all manually as I do now. I wouldn't be surprised if you could put Unraid on it.
donatzsky 4/22/2025||
Pretty sure Asustor also allows installing any other Linux you want.
Marsymars 4/22/2025||||
I use a qnap TL-D800S for 8 bays connected to my home server. You could use as many as you have available PCIe ports.
PeterStuer 4/20/2025||||
No plans yet. My current NAS setup should be fine for another 2 years.

If it were now, I'd probably look deeper into Asus, QNAP or a DIY TrueNAS build.

freddie_mercury 4/22/2025|||
There are a lot of NAS cases you can buy. I have a Jonsbo that I've been pretty happy with for a year or so.
rpdillon 4/19/2025|||
I'm in a similar position. I'm on my second NAS in the last 12 years. I've been very satisfied with their performance, but this kind of behavior is just completely unacceptable. I guess I'll need to look into QNAP or some other brand. Also, I think my four-disk setup is RAID 5, but it might be Synology's proprietary version, so I'll need to figure out how to migrate off of that. I don't think I'll be able to just insert the drives into a different NAS and have it work.
kalleboo 4/19/2025||
Even Synology's "proprietary" RAID is just Linux mdadm, and they have instructions on their website on how to mount it under Linux. One of the reasons I preferred Synology in the first place was their openness about stuff like that!
rpdillon 4/20/2025|||
Awesome to know! I'll read up on mdadm, appreciate the pointer!
1oooqooq 4/22/2025|||
it's migrating to btrfs raid 1 now. and their docs just say to wipe the drives in case of issues lol.
chrisandchris 4/22/2025|||
That was my first thought too. I am currently a very happy Synology customer and am selling them to B2B customers for storage.

I have yet to come across anything like Hyper Vault for backup and Drive for storage that works (mostly) smoothly. I would be happy to self-host, but the No Work Needed (tm) products of Synology are just great.

Sad to see them taking this road.

johntitorjr 4/19/2025||
[dead]
kotaKat 4/19/2025||
I'm going to buck the nerds and say I wish Drobo was back. I love my 5N, but had to retire it as it began to develop Type B Sudden Drobo Death Syndrome* and switch out to QNAP.

It was simple, it just worked, and I didn't have to think about it.

* TB SDDS - a multi-type phenomenon of Drobo units suddenly failing. There were three 'types' of SDDS that I and a colleague discovered: "Type A" power management IC failures, "Type B" unexplainable lockups and catatonia, and "Type C" failed batteries. Type B units' SoCs have power and clock going in and nothing coming out.

romanhn 4/22/2025||
My 2nd generation Drobo that I got back in 2008 is still chugging along. Haven't had to replace a hard drive in 10-12 years either. I love it even though it's super slow by today's standards. Been meaning to retire it for years, but it's been so rock solid I rarely have to think about it.
mig39 4/22/2025||
I still have two Drobo 5N2 NAS boxes going strong. One is the backup for the other. I really wish someone would take up the Drobo-like simplicity and run with it.
TabTwo 4/22/2025||
I'm not sure what customers Synology is targeting. Small office/home office (SoHo) was their original market, but these customers won't be willing to pay high prices per drive. Medium-sized businesses? They mostly move their infrastructure to the cloud, which probably means low sales volumes, and they're very price-sensitive too. Large enterprises and corporations? That's the domain of established providers like NetApp. Synology might dream about the prices those major storage vendors can charge, but this market is difficult to enter without many years of proven reliability in hardware and service.

I don't think this will work the way Synology imagines it.

npunt 4/22/2025||
Something I haven't seen emphasized enough with this move is that by obfuscating a drive's vendor and manufacture date, you won't know if your drives are from the same batch. This is important because if there's any manufacturing defects in a given batch of drives, the failures are likely to happen around the same time, greatly increasing the chance of losing data.

Basically, Synology drives are not only more expensive, they're also statistically speaking less reliable when building a RAID with them, negating the very purpose of the product. What a dumb move.
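A rough Monte Carlo sketch of that intuition (every number here is made up; only the contrast matters): in RAID 5, data is lost if a second drive dies inside the rebuild window after the first failure. Clustering failure times, as a bad batch does, makes that dramatically more likely.

```python
# Hypothetical model: each drive's lifetime is a shared per-batch
# component plus per-drive scatter. A narrow scatter ("same batch")
# bunches failures together; a wide one spreads them out.
import random

def p_second_failure_during_rebuild(batch_spread_h, trials=20_000,
                                    drives=4, rebuild_h=24.0,
                                    mean_life_h=40_000.0, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        batch = rng.expovariate(1.0 / mean_life_h)  # shared batch lifetime
        lives = sorted(batch + rng.uniform(0, batch_spread_h)
                       for _ in range(drives))
        # Data loss if the 2nd failure lands inside the rebuild window.
        if lives[1] - lives[0] < rebuild_h:
            hits += 1
    return hits / trials

mixed = p_second_failure_during_rebuild(batch_spread_h=20_000)  # varied batches
same  = p_second_failure_during_rebuild(batch_spread_h=200)     # one bad batch
print(f"varied batches: {mixed:.3f}, same batch: {same:.3f}")
```

With these invented parameters the same-batch array is orders of magnitude more likely to lose a second drive mid-rebuild, which is exactly why people deliberately mix vendors and manufacture dates, and why hiding that information is a problem.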

dredmorbius 4/22/2025|
I still love the example some here may recall, when HN itself was hit by this: two disks within a RAID array AND THE BACKUP SERVER all failed within hours of one another:

<https://news.ycombinator.com/item?id=32048148>

Resulting in, FWIW, my top-rated-ever HN comment, I think:

<https://news.ycombinator.com/item?id=32048149>

Shank 4/22/2025||
I personally think that in 2025, you should treat the NAS as a purely storage product and buy your hardware from that perspective. TrueNAS or UniFi’s new NAS product fulfill that goal. From there, supplement your NAS with a Mac Mini or other mini-PC for storage-adjacent tasks.

Synology’s whole business model (arguably QNAP’s too) depends on you wanting more drive bays than 2 and wanting to host apps and similar services. The premium they ask is substantial. You can spec out a beefy Dell PowerEdge with a ton of drive bays for cheap and install TrueNAS, and you’ll likely be much happier.

But the fundamental suggestion I make is to consider a NAS a storage-only product. If you push it to be an app and VM server too, you're dependent on these relatively closed ecosystems and subject to the whims of the ecosystem owner. Synology choosing to lock out drives is just one example. Their poor encryption support (arbitrary limitations on filenames or strange full-disk encryption choices) is another. If you dive into any system like Synology long enough, you'll find warts that you ultimately wouldn't face if you just used more specialized software than what the NAS world provides.

beaviskhan 4/22/2025||
> The premium they ask is substantial. You can spec out a beefy Dell PowerEdge with a ton of drive bays for cheap and install TrueNAS, and you’ll likely be much happier.

Yeah, but then you have a PowerEdge with all the noise and heat that goes along with it. I have an old Synology 918 sitting on my desk that is so quiet I didn't notice when the AC adapter failed. I noticed only because my (docker-app-based) cloud backups failed and alerted me.

Unless Synology walks back this nonsense, I'll likely not buy another one, but I do think there is a place for this type of box in the world.

Shank 4/22/2025||
> Unless Synology walks back this nonsense, I'll likely not buy another one, but I do think there is a place for this type of box in the world.

I would recommend a mini-ITX NAS enclosure or a prebuilt system from a vendor that makes TrueNAS boxes. iXsystems does sell prebuilt systems, but they're still pricey.

timcobb 4/22/2025||
This is what I need to research. I don't understand why we need NAS hardware at all; aren't the controllers and drivers in an average box running Linux enough for the same software RAID that Synology does?
abofh 4/22/2025|||
You don't - similarly, you don't need a MacBook to run OS X (technically at least). You buy the full-fledged NAS or MacBook because it's a bundled, supportable quantity that reduces your cognitive load in exchange for money. Synology for an SMB or home lab is pretty good stuff, and you don't spend (as much) of your time editing smb.conf or configuring the core backup services or whatever. Some clickops and you're done - and you can do a lot. Under the hood it's still Linux (or at least mine is); you can SSH in and do damage -- the hardware isn't "special", and it's not necessarily measurably better than another system that could handle a similar number of drives.

I have a synology because I got tired of running RAID on my personal linux machines (had a drobo before that for the same reasons) - but as things like drive locking occur and arguably better OSS platforms available, I'm not sure I'd make the same decision today.

Shank 4/22/2025|||
I perceive the main benefit to the NAS hardware to be ease-of-management in terms of RAID via a software stack, and of course, physical hardware slots for holding lots of disks. You can easily build a NAS with pure Linux/FreeBSD and a case with disks inside.
jbverschoor 4/22/2025||
Synology sells hardware, and you get the software without a yearly license.

Investors want bigger returns. They know that at this point they won't get away with switching to a monthly license; a large percentage of customers would simply stop buying.

What other options do you have for recurring revenue? Cloud storage, but I don't think that's a great success.

And then... yes, harddisks. They are consumable devices with a limited lifespan. Label them as your own and charge a hefty fee.

The disks in a (larger) NAS setup cost more than the NAS itself. They want a piece of that pie by limiting your options.

No more syno for me in the future

mgsouth 4/19/2025|
I've no experience with Synology and have no opinion regarding their motivations, execution, or handling of customers.

However...

Long long ago I worked for a major NAS vendor. We had customers with huge NAS farms [1] and extremely valuable data. We were, I imagine, very exposed from a reputation or even legal standpoint. Drive testing and certification was A Very Big Deal. Our test suites frequently found fatal firmware bugs, and we had to very closely track the fw versions in customer installations. From a purely technical viewpoint there's no way we wanted customers to bring their own drives.

[1] Some monster servers had triple-digit GBs of storage, or even a TB! (#getoffmylawn)

thomasjudge 4/22/2025||
For an entertaining/terrifying perspective on firmware, obligatory Bryan Cantrill talk "Zebras All the Way Down" https://www.youtube.com/watch?v=fE2KDzZaxvE
gh02t 4/22/2025|||
Synology is consumer and SMB focused, though. For high-end storage, that level of integration makes sense, but for Synology it's just not something most of their customers care about or want.
cm2187 4/22/2025||
That being said, there aren't many major HDD manufacturers anymore, nor do they have many models. Synology is using vanilla linux features like md and lvm. You don't think those manufacturers have tested their drives against vanilla linux?