Posted by motiejus 4/19/2025
The killer feature for me is the app ecosystem. I have a very old 8-bay Synology NAS and set it up in just a few clicks to back up my Dropbox, my MS365 accounts, and my Google business accounts, do redundant backup to an external drive, back up important folders to the cloud, and it was also doing automated torrent downloads of TV series.
These apps, and more (like family photos, video server, etc), make the NAS a true hub for everything data-related, not just for storing local files.
I can understand Synology going this way, it puts more money in their pocket, and as a customer in a professional environment, I'm OK paying a premium for their approved drives if it gives me an additional level of warranty and (perceived) safety.
But enforcing this across models used by home or SOHO users is dumb and will affect the goodwill of so many like me, who both used to buy Synology for home and were also recommending/purchasing the brand at work.
This is a tech product, don't destroy your tech fanbase.
I would rather Synology kept a list of drives to avoid based on user experience, and offered their Synology-specific drives with a generous warranty for pro environments. Hell, I would be OK with sharing stats about drive performance so they could build a useful database for all.
The way they reduce the performance of their system to penalise non-Synology rebranded drives is basically a slap in the face of their customers. Make it a setting and let the user choose to use the NAS they bought to its full capabilities.
At this point, I'm not that convinced that there's anything Synology offers that isn't handled much better by an app running on Docker. This wasn't true 10 years ago.
That's it. For the actual viewing / sorting / albums you need something like Immich or PhotoPrism; the Photos app actually sucks.
Video Station has been removed in the latest minor update (not even a major update); they just took it out, no warning, no replacement. But then again it was not that good; Jellyfin is the way to go for me.
Their crown jewels are Active Backup, Hyper Backup, and Synology Office. That's where they own their space.
This however is a deal breaker for me as I'd hate to be locked in to their drives for all the reasons in TFA but also as a matter of principle.
I hope Synology will reconsider!
The first one I bought is still in service at my parents' place, silently and reliably backing up their cloud files and laptops.
I was fully expecting to buy more in the future, but this is a dealbreaker. If a disk goes bad, I want to go to the local store, pick one up, and have the problem fixed half an hour later. I do not want to figure out where I can get approved disks, what sizes are available, how long it will take to ship them, etc.
I've recently installed Unraid on an old PC, and the experience has been surprisingly good. It's not as nice as a Synology, but it's not difficult, either. It's just a bit more work. I've also heard that HexOS plans to support heterogeneous disks, and I plan to check it out once that is available.
So that's the direction I'll be going in instead.
Sounds like this is the problem with Synology... How are they going to make money when their products are so good!
Which is along the same trend line I'm seeing for my purchases.
That's pretty solid for hardware sales.
My guess is that they've over-invested in things like their "Drive" office software suite, and don't know how to monetize it or recoup costs.
I like Synology, but locking me to their drives is a hard "no thanks" from me.
Next NAS won't be from them if that's their play...
Forcing their drives is a tax on top of an already existing tax. Synology already charges a premium for lower-end specs than the competition. If that's not enough to compensate for the longer upgrade cycles and they want a hand in every cookie jar, it's just going to be a hard pass for me.
I upgraded my Synology box every few years and this is exactly the time I was looking to go to the next model. And I'd pull the trigger and buy a current model before they implement the policy but the problem is now I don't trust that they won't retroactively issue an update that cripples existing models somehow. QNAP or the many alternative HW manufacturers that support an arbitrary OS are starting to be that much more attractive.
Synology seems to have gone entirely the other direction here. Most of their software is given away for free, but the hardware is being monetized.
Additionally - the hardware has different operating constraints. I think the big deal for Synology is that they probably assumed that storage need growth would equate to sales growth.
E.g., Synology may have assumed that if I need to store 1TB in 2010 and 5TB in 2015, that would equate to me buying additional NAS hardware.
But often, HDD size increases mean that I can keep the same number of bays and just bump drive size.
Which... is great for me as a user, but bad for Synology (this almost single handedly explains this move, as an aside - I just think it's a bad play).
---
I'd rather they just charged for the software products they're blowing all their money on, or directly tie upgrades to the software products to upgrading hardware.
I switched to Synology about six years ago (918+). The box is small, quiet, and easy to put in the rack together with the network gear. I started with 4TB drives, gradually switched to 8TB over time (drive by drive). I don't use much of their apps (mostly download station, backup, and their version of Docker to run Syncthing, plus Tailscale). But the box acts like an appliance - I basically don't need to maintain it at all; it just works.
I don't like all this stuff with vendor lock-in, so when the time comes for replacing the box, what are alternatives on par with the experience and quality I currently have with Synology?
- Minisforum N5 Pro NAS
- AOOSTAR WTR MAX
Good compute power as they know users will be running Docker and other services on them, using the NAS as a mini server.
OS agnostic, allowing users to install TrueNAS, Unraid, or their favourite Linux distro of choice.
The Minisforum and AOOSTAR look to be adding all the features power users and enthusiasts are asking for.
If you just want a NAS as a NAS and nothing else, the new Ubiquiti NAS looks great value as well.
Increasingly, with the limited time I have for the things that interest me, I just want storage and a bit of compute to be like a home appliance: reasonably set-and-forget, with my messing around kept to a USFF computer.
If you heavily rely on apps/services: I've just gone to self-managed Docker environments for things like that. A very simple script runs updates (see the sketch below).
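A minimal sketch of what that script can look like (the /srv layout is just an example, adjust to wherever your compose files live):

  #!/bin/sh
  # Pull fresh images for every compose project under /srv, recreate any
  # container whose image changed, then prune unreferenced images.
  set -eu
  for dir in /srv/*/; do
    [ -f "$dir/docker-compose.yml" ] || continue
    docker compose --project-directory "$dir" pull
    docker compose --project-directory "$dir" up -d
  done
  docker image prune -f

Stick it in cron or a systemd timer and you get most of what the package-centre auto-updates gave you.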
In hindsight, buying a QNAP that cost more than the Synology equivalent felt like a good idea, but I didn't really get into it quickly enough.
I also got burned by Western Digital's scandal of selling WD Red drives that really weren't, which got them caught in a class action lawsuit. Can't see myself buying them again.
Well, my Synology NAS is from... 2013 (I have upgraded the drives 3 times), so... it is/was time to replace it, and I can tell you that it won't be with another Synology device...
I won't go back to QNAP, which is what I had before Synology, because during an OS update it wiped all my data (yes, there was a warning, but the whole purpose of having a RAID NAS is safe, reliable data storage).
May check out a custom hardware build, combined with Xpenology.
At one time, Drobo was the only manufacturer that did that, but I have had very bad luck with Drobos.
I’ve been running a couple of Synology DS cages for over five years, with no issues.
I still appreciate how easy and maintenance-free their implementation of the core NAS functionality was. I do have a Linux desktop for experiments and playing around, but I prefer to have all of my actually important data on a separate rock-solid device. Previously, Synology fulfilled this role and was worth paying for, but if this policy goes live, I wouldn't consider them for my next NAS.
It's a bit more convenient than how other solutions, like Unraid, handle this, where you manually configure a Docker container.
QNAP has more configurability for better and worse.
Curious to hear what other manufacturers can compare to them out of the box.
Self-configuring something is a different thing.
I simply do not care anymore to rebuild RAIDs and manually swap drives under duress when something is going down. I just replace existing drives with new ones well before they die, after they've hit enough years. Backblaze's report is incredibly valuable.
We (in the tech space) can scream privacy and risks of the cloud all day long but most consumers seem to just not care.
I have 2 Synology NAS units and the only app that I actually use is Synology Drive, thanks to the sync app, but there are open source alternatives that would work better and not require a client on the NAS side to work.
I can't imagine any enterprise would be using these features.
Been in the market for a new NAS myself and I am going to be looking into TrueNAS or keep an eye on what Ubiquiti is doing in this space (but it's a no-go until they add the ability to communicate with a UPS).
I just can't imagine there is that many people that would bother with a "private cloud" that may not already have a use case for a NAS at home for general data storage.
It doesn't address the mandatory nature of the drives, when Dell and HP have, at most, put their own part numbers on drives.
The number of times I’ve broken things on QNAP systems doing what should be normal functionality, only to find out it’s because of some dumb implementation detail is over a dozen. Synology, maybe 1-2.
Roughly the same number of systems/time in use too.
Some QNAP devices can be coaxed into running Debian.
I did the procedure on my (now 15yo) TS-410, mostly because the vendored Samba is not compatible with Windows 11 (I had turned off all secondary services years ago). It took a few days to back up around 8TB of data to external drives. And AROUND 2 WEEKS to restore them (USB2 CPU overhead + RAID5 writes == SLOOOOOW).
Even to get the time down to 2 weeks, I really had to experiment with different modes of copying. My final setup was HDD <-USB3-> RPi4 <-GbE-> TS-410. This relieved the TS-410 CPU from the overhead of running the USB stack. I also had to use an rsync daemon on the TS-410 to avoid the overhead of running rsync over SSH.
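Roughly, the daemon end of that looks like the following (the module name, paths and hostname here are just placeholders, not my exact config):

  # /etc/rsyncd.conf on the TS-410
  uid = admin
  [restore]
      path = /share/MD0_DATA
      read only = no

  # start the daemon on the TS-410
  rsync --daemon --config=/etc/rsyncd.conf

  # on the RPi4: push from the USB3 disk over GbE, no SSH encryption overhead
  rsync -a --progress /mnt/usbdisk/backup/ rsync://ts-410/restore/

Going through the rsync:// protocol instead of rsync-over-SSH is what freed up the CPU.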
So, it's definitely not for the faint of heart, but if you go through the trouble, you can keep the box alive as off-site backup for a few more years.
Having said that, I have to commend QNAP for providing security updates for all this time. The latest firmware update for the TS-410 is dated 2024-07-01 [1]. This is really going above and beyond in supporting your product when it comes to consumer-level devices.
[1] https://www.qnap.com/en/download?model=ts-410&category=firmw...
In theory, one could fit an Arm RK3588 SBC with NVMe-to-PCIe-to-HBA or NVMe-to-SATA into a half-depth JBOD case. That would give you 2x10G SFP, 2xNVMe and ECC RAM.
The Marvell CN913x SoC has been shipping for 5 years, following the predecessor Armada SoC family released 10 years ago and used in multiple consumer NAS products, https://linuxgizmos.com/marvell-lifts-curtain-on-popular-nas.... Mainline Linux support for this SoC has benefited from years of contributions, while Marvell made incremental hardware improvements without losing previous Linux support.
As I understand, migrating to other hardware wouldn't be an issue if availability becomes an issue.
Debian offers flexibility and control, at the cost of time and effort. PhotoSync mobile apps will reliably sync mobile devices with NAS over standard protocols, including SSH/SFTP. A few mobile apps do work with self-hosted WebDAV and CalDAV. XPenology attempts to support Synology apps on standard Linux, without excluding standard Debian packages.
Once I added a 4th drive to a RAID 5 set and I was impressed that it performed the operation online. Neat.
Oh, there was one issue: a while ago my Time Machine backups were unreliable, but I haven't had that issue in three years or so.
Were you installing things manually or just using the app store?
https://www.theverge.com/2015/2/5/7986327/keurigs-attempt-to...
Well, that sounds like a great way to get sued.
Backblaze publishes a great report.
https://www.backblaze.com/blog/backblaze-drive-stats-for-202...
Getting a lower-powered Intel Celeron QNAP NAS basically lets you run anything you want software- or app-wise, including Docker that just works, instead of hunting for ARM64 binaries for anything that is not available off the shelf.
I'd rather have the flexibility offered by TrueNAS, in addition to the robust community. Yes, Synology hardware is convenient in some use cases, but you can generally build yourself a more powerful and versatile home server with TrueNAS Scale. There is a learning curve, so it is not for everyone.
Most modern companies, especially software companies, choose not to fix relatively small but critical problems, yet they actively employ sometimes hundreds of customer support yes-people whose job seems to be defusing customer complaints. Nothing is ever fixed anymore.
But 5% free is very low. You may want to use every single byte you feel you paid for, but allocation algorithms really break down when free space gets so low. Remember that there's not just a solid chunk of that 5% sitting around at the end of the space. That's added up over all the holes across the volume. At 20-25% free, you should already be looking at whether to get more disks and/or deciding what stuff you don't actually need to store on this volume. So a hard alarm at 5% is not unreasonable, though there should also be a way to set a soft alarm before then.
But 5% of a 5 TB volume is 250 GB, that's the size of my whole system disk! Probably not so understandable by the lay person.
Some filesystems do stake out a reservation but I don't think any claim one as large as 5% (not counting the effect of fixed-size reservations on very small volumes). Maybe they ought to, as a way of managing expectations better.
For people who used computers when the disks were a lot smaller, or who primarily deal in files much much smaller than the volumes they're stored on, the absolute size of a percentage reservation can seem quite large. And, in certain cases, for certain workloads, the absolute size may actually be more important than the relative size.
But most file systems are designed for general use and, across a variety of different workloads, spare capacity and the impact of (not) keeping it open is more about relative than absolute sizes. Besides fragmentation, there's also bookkeeping issues, like adding one more file to a directory cascading into a complete rearrangement of the internal data structures.
I don't think this is correct. At least btrfs works with slabs in the 1 GB range IIRC.
One of my current filesystems is upwards of 20 TB. Reserving 5% of that would mean reserving 1 TB. I'll likely double it in the near future, at which point it would mean reserving 2 TB. At least for my use case those numbers are completely absurd.
As such, fragmentation is always there; absolute disk sizes don't change the propensity for typical workloads to produce fragmentation. A modern file system is not merely a bucket of files, it is a database that manages directories, metadata, files, and free space. If you mix small and large directories, small and large files, creation and deletion of files, appending to or truncating from existing files, etc., you will get fragmentation. When you get close to full, everything gets slower. Files written early in the volume's life and which haven't been altered may remain fast to access, but creating new files will be slower, and reading those files afterward will be slower too. Large directories follow the same rules as larger files, they can easily get fragmented (or, if they must be kept compact, then there will be time spent on defragmentation). If your free space is spread across the volume in small chunks, and at 95% full it almost certainly will be, then the fact that the sum of it is 1 TB confers no benefit by dint of absolute size.
Even if you had SSDs accessed with NVMe, fragmentation would still be an issue, since the file system must still store lists or trees of all the fragments, and accessing those data structures still takes more time as they grow. But most NAS setups are still using conventional spinning-platter hard drives, where the effects of fragmentation are massively amplified. A 7200 RPM drive takes 8.33 ms to complete one rotation. No improvements in technology have any effect on this number (though there used to be faster-spinning drives on the market). The denser storage of modern drives improves throughput when reading sequential data, but not random seek times. Fragmentation increases the frequency of random seeks relative to sequential access. Capacity issues tend to manifest as performance cliffs, whereby operations which used to take e.g. 5 ms suddenly take 500 or 5000. Everything can seem fine one day and then not the next, or fine on some operations but terrible on others.
Of course, you should be free to (ab)use the things you own as much as you wish. But make no mistake, 5% free is deep into abuse territory.
Also, as a bit of an aside, a 20 TB volume split into 1 GB slabs means there are 20,000 slabs. That's about the same as the number of 512-byte sectors in a 10 MB hard drive, which was the size of the first commercially available consumer hard drive for the IBM PC in the early 1980s. That's just a coincidence of course, but I find it funny that the numbers are so close.
Now, I assume the slabs are allocated from the start of the volume forward, which means external slab fragmentation is nonexistent (unless slabs can also be freed). But unless you plan to create no more than 20,000 files, each exactly 1 GB in size, in the root directory only, and never change anything on the volume ever again, then internal slab fragmentation will occur all the same.
There are two sorts of fragmentation that can occur with btrfs. Free space and file data. File data is significantly more difficult to deal with but it "only" degrades read performance. It's honestly a pretty big weakness of btrfs. You can't realistically defragment file data if you have a lot of deduplication going on because (at least last I checked) the tooling breaks the deduplication.
> If your free space is spread across the volume in small chunks, and at 95% full it almost certainly will be
Only if you failed to perform basic maintenance. Free space fragmentation is a non-issue as long as you run the relevant tooling when necessary. Chunks get compacted when you rebalance.
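For concreteness, the maintenance in question is a filtered balance, something along these lines (the usage thresholds and mount point are just common starting points, not anything official):

  # Compact mostly-empty data/metadata chunks back into fewer chunks,
  # returning the freed chunks to the unallocated pool.
  btrfs balance start -dusage=50 -musage=50 /mnt/pool

  # Check how much space is truly unallocated vs. stranded in sparse chunks.
  btrfs filesystem usage /mnt/pool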
Where it gets dicey is that the btrfs tooling is pretty bad at handling the situation where you have a small absolute number of chunks available. Even if you theoretically have enough chunks to play musical chairs and perform a rebalance the tooling will happily back itself into a corner through a series of utterly idiotic decisions. I've been bitten by this before but in my experience it doesn't happen until you're somewhere under 100 GB of remaining space regardless of the total filesystem size.
If, under those conditions, 100 GB has proven to be enough for a lot of users, then it might make sense to add more flexible alarms. However, this workload is not universal, and setting such a low limit (0.5% of 20 TB) in general will not reflect the diverse demands that different people put on their storage.
And "unexpected" failure paths like that are often poorly tested in apps.
I'm a STRONG believer in tapes!
Even LTO 5 gives you a very cheap 1.5TB of clean, pretty much bulletproof storage. You can pick up a drive (with a SAS HBA card) for less than $200, there are zero driver issues (SCSI, baby); the Linux tape changer code has been stable since 1997 (with a port to VMS!).
Tape FTW :-)
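If anyone wants to see how little is involved, the basic workflow is just mt plus tar against the st device; a rough sketch (device name and paths are examples):

  # rewind, then write an archive to the tape (non-rewinding device node)
  mt -f /dev/nst0 rewind
  tar -C / -cvf /dev/nst0 srv/photos

  # rewind and verify by comparing the tape contents against the source files
  mt -f /dev/nst0 rewind
  tar -C / -dvf /dev/nst0

  # unload the tape when done
  mt -f /dev/nst0 offline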
LTO tapes have really changed my life, or at least my mental health. Easy and robust backup has been elusive. DVD-R was just not doing it for me. Hard drives are too expensive and lacked robustness. My wife is a pro photographer so the never-ending data dumps had filled up all our hard drives, and spending hundreds of dollars more on another 2-disk mirror RAID, and then another, and another was just stupid. Most of the data will only need to be accessed rarely, but we still want to keep it. I lost sleep over the mountains of data we were hoarding on hard drives. I've had too many hard drives just die, including RAIDs being corrupted. LTO tape changed all of that. It's relatively cheap, and pretty easy and fast compared to all the other solutions. It's no wonder it's still being used in data centers. I love all the data center hand-me-downs that flood eBay.
And I do love hearing the tapes whir, it makes me smile.
Then I scored another 3 used LTO5 tape drives on eBay for about $100; they all worked. I mainly use 1 tape drive. I have it running on an Intel i5 system with an 8-drive RAID10 array (cheap used drives, with a $50 9260-8i hardware RAID card), which acts as my "offsite" backup out in my detached garage - it's off most of the time (cold storage?) unless I'm running a backup. I can lose up to 2 drives without losing any data, and it's been running really well for years. I have 3 of these RAID setups in 3 different systems; they work great with the cheapest used drives from Amazon. I'm not looking for high performance, I just need redundancy. I've had to replace maybe 3 drives across all 3 systems due to failure over the last 7 years.
On Windows the tape drive with LTFS was not working well, I think due to Windows Defender trying to test the files as it was writing them, causing a lot of "shoeshining" of the tape, but I think Windows Defender can be disabled. But I bought tape backup software from https://www.iperiusbackup.com - it just works and makes backups simple to set up and run. I always verify the backup. If something is really important I'll back up to at least 2 tapes. Some really important stuff I will generate parity files for (with WinPar) and put those on tape too. Non-encrypted, the drive runs at the full 140MB/s, but with encryption it runs at about 60MB/s, because I guess the tape drive is doing the encryption.
I love it, it has changed my data-hoarding life. At $3.50/TB and 140MB/s and 1.5TB per tape, it can't be beat by DVD-R or hard drives for backup. Used LTO5 is really in a sweet spot right now on eBay, but LTO6 is looking good too recently (2.5TB/tape). LTO6 drives can read LTO5 tapes, so there's a pretty easy upgrade path. I also love that there is a physical write-protect switch on the tapes, which hard drives don't have. If you plug in a hard drive to an infected system, that hard drive could easily be compromised if you don't know your system is infected.
So while block remapping can occur, and the physical storage has limits on its contiguity (you'll eventually reach the end of a track on a platter or an erasable page in a flash chip), the optimal way to use the storage is to put related things together in a run of consecutive LBAs as much as possible.
* = There are some exceptions to this, e.g. some older flash controllers were made that could "speak" FAT16/32 and actually know if blocks were free or not. This particular use was supplanted by TRIM support.
Change the word to "seek" and it may make more sense.
A) When you modify a file, everything including the parts you didn't change is copied to a new location. I don't think this is how btrfs works.
B) Allocated storage is never overwritten, but modifying parts of a file won't copy the unchanged parts. A file's content is composed of a sequence (list or tree) of extents (contiguous, variable-length runs of 1 or more blocks) and if you change part of the file, you first create a new disconnected extent somewhere and write to that. Then, when you're done writing, the file's existing extent limits are resized so that the portion you changed is carved out, and finally the sequence of extents is set to {old part before your change}, {your change}, {old part after your change}. This leaves behind an orphaned extent, containing the old content of the part you changed, which is now free. From what evidence I can quickly gather, this is how btrfs works.
Compared to an ordinary file system, where changes that don't increase the size of a file are written directly to the original blocks, it should be fairly obvious that strategy (B) results in more fragmentation, since both appending to and simply modifying a file causes a new allocation, and the latter leaves a new hole behind.
While strategy (A) with contiguous allocation could eliminate internal (file) fragmentation, it would also be much more sensitive to external (free space) fragmentation, requiring lots of spare capacity and/or frequent defrag.
Either way, the use of CoW means you need more spare capacity, not less. It's designed to allow more work to be done in parallel, as fits modern hardware and software better, under the assumption that there's also ample amounts of extra space to work with. Denying it that extra space is going to make it suffer worse than a non-CoW file system would.
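If you want to watch this happen, filefrag makes it visible; a quick illustration (file name and sizes are arbitrary, and the exact extent layout will depend on mount options like compression or nodatacow):

  # create a 512 MB file, then rewrite 16 MB in the middle of it in place
  dd if=/dev/urandom of=testfile bs=1M count=512
  dd if=/dev/urandom of=testfile bs=1M count=16 seek=128 conv=notrunc
  sync

  # on a CoW filesystem the rewritten range shows up as new, separate extents
  filefrag -v testfile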
If things get to the point where there's over 1 TB of fragmented free space on a filesystem, that is entirely the fault of the operator.
"Your free space shouldn't be very fragmented when you have such large amounts free!" is exactly why you should keep large amounts free.
Basically, the recommendation was to always have 5% free space, so this isn't just Synology saying this.
This is the same kind of issue that Linux root filesystems had - a % based limitation made sense when disks were small, but now they don't make a lot of sense anymore when they restrict usage of hundreds of GB (which are not actually needed by the filesystem to operate).
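(For comparison, the classic Linux case is the ext2/3/4 root reservation, which is percentage-based but trivially tunable; for example, with the device name being a placeholder:)

  # show the current reserved block count on an ext4 data volume
  tune2fs -l /dev/sdb1 | grep -i reserved

  # drop the root reservation from the default 5% to 1%
  tune2fs -m 1 /dev/sdb1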
https://archive.kernel.org/oldwiki/btrfs.wiki.kernel.org/ind...
It tries to mitigate this by reserving some space for metadata, to be used in emergencies, but apparently it's possible to exhaust it and get your filesystem into a read-only state.
There was some talk about increasing the reservations to prevent this but can't recall if changes were made.
So lots of customers thought they were buying a drive that's perfect for NAS, only to discover that the drives were completely unsuitable and took days to restore, or failed altogether. Synology had to release updates to their software to deal with the fake NAS drives, and their support was probably not happy to deal with all the angry customers who thought the problem was with Synology, and not Western Digital for selling fake NAS drives.
If you buy a drive from Synology, you know it will work, and won't secretly be a cheaper drive that's sold as NAS compatible even though it is absolutely unsuitable for NAS.
The counterargument is, people won’t listen and then blame Synology when their data is affected. At which point it may be too late for anything but attempting data recovery.
Sufficiently nontechnical users may blame the visible product (the NAS) even if the issue is some nuance to the parts choice made by a tech friend to keep it within their budget.
Synology is seen as the premium choice in the consumer NAS argument, so vertically integrating and charging a premium to guarantee “it just works” is not unprecedented.
There are definitely other NAS options as well, if someone is willing to take on more responsibility for understanding the tech.
I have a DS1515+ which has an SSD cache function that uses a whitelisted set of known good drives that function well.
If you plug in a non-whitelisted SSD and try to use it as a cache, it pops up a stern warning about potential data loss due to unsupported drives, with a checkbox to acknowledge that you're okay with the risk.
So…there’s really no excuse why they couldn’t have done this for regular drives.
Everyone will understand it costing more; fewer people will understand why the NAS ate their data without the warning it was supposed to provide, because cheap drives that didn't support certain metrics were used.
If Synology wants the device to behave in only one way, they have to put constraints on the hardware.
As long as Synology is up front in the requirement and has a return policy for users who buy one and are surprised, I think they’re well within their rights to decide they’re tired of dealing with support costs from misbehaving drives.
As long as they don’t retroactively enforce this policy on older devices I don’t understand the emotionality here. Haven’t you ever found yourself stuck supporting software / systems / etc that customers were trying to cheap out on, making their problems yours?
Toyota might have great reasons for opening a chain of premium quality gas stations, but the second they required me to use them, I'd never buy another Toyota for as long as I lived.
I want to bring my own drives, just as I have since I bought my first DS-412+ 13 years ago.
This copy operation is done either while the disk is idling, or is forced by the drive ceasing to respond to read and write operations if the CMR buffer zone is depleted and data has to be moved off. RAID software cannot handle the latter scenario and considers the disk faulty.
You can probably corner a disk into this depleted state to expose a drive as being SMR-based, but I don't know if that works reliably or if it's the right solution. This is roughly all I know on the technical side of this problem anyway.
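The usual way people try to corner a drive like that is sustained random writes, e.g. with fio; a rough sketch, with the device name and parameters being guesses rather than a proven recipe:

  # Hammer the raw disk with sustained random writes long enough to exhaust
  # the CMR cache zone; on a drive-managed SMR disk the write latency
  # eventually collapses, on CMR it stays roughly flat.
  # WARNING: this destroys any data on /dev/sdX.
  fio --name=smr-probe --filename=/dev/sdX --rw=randwrite \
      --bs=64k --iodepth=16 --ioengine=libaio --direct=1 \
      --runtime=1800 --time_based --group_reporting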
I see this kind of argument ("X had to do Y otherwise customers would complain") a lot every time a company does something shady and some contrarian wants to defend them, but it really isn't as smart as you think it is: the company doesn't care if people complain, otherwise they wouldn't be making this kind of move either, because it raises a lot more complaints. Companies only care if it affects their bottom line, that is, if they can be liable in court, or if the problem is big enough to drive customers away. There's no way this issue would do any of those (at least not as much as what they are doing right now, by a very large margin).
It's just yet another case of an executive making a shady move for short-term profits; there's no grand reasoning behind it.
This may be a bad move, and you’re certainly right that Synology expects to make more profit with this policy than without it, but it’s a more complex system than you understand. Irate customers calling support and review-bombing for their own mistakes are a real cost.
I don't blame Synology for wanting to sell fewer units at higher prices to more professional customers. Hobbyists are an attractive market but, well, *waves hands at the comments in this thread*.
And when this issue happened with WD drives, I don’t remember a backlash against Synology at all. WD, on the other hand, deserved and received plenty of blame.
Is it though? Most (consumer) NAS systems are probably sold without the drives, which are bought separately. When there is an issue with a drive and it breaks, I'm pretty sure most people technical enough to consider the need for a NAS would attribute that failure to the manufacturer of the drives, not to the manufacturer of the computer they put their drives into.
Meaning that by default it could require a Synology drive that is at minimum going to work decently.
Want to mess around more and are more technical? Make it a CLI command or something the average Joe is going to be wary about. With a big warning message.
Personally I only like to buy very reliable enterprise class drives (probably much better than whatever Synology will officially sell) and this is my main concern.
But that's not what Synology did.
Also, if the image on the Synology page is accurate, they are relabeled Toshiba drives. Which doesn't really seem like a good choice for SMB/SOHO NAS devices, because the Toshiba "Machine Gun" MGxx drives are the loudest drives on the market.
That said, I've since added an SSD and moved almost everything to the SSD (Docker, the database and all apps) and it's much nicer in terms of noise.
Synology SMB/SOHO NAS devices should not be affected by the drive lockdown (for now).
I have been a happy enough Synology user since 2014, even though I had to hardware repair my main DS1815+ twice in that time (Intel CPU bug and ATX power-on transistor replacement).
Other than two hardware failures in 10 years (not good), the experience was great, including two easy storage expansions and the replacement of one failed drive. These were all BYOD, and the current drives are shucked WD reds with a white sticker (still CMR).
I happily recommended them prior to this change and now will have to recommend against.
Will new firmware updates to everything before this require the Synology-branded drives?
If you’re going to buy a DSM unit, I’d definitely buy a 2024 or earlier model. But even as an overall happy user under their old go-to-market approach, I can’t recommend them now.
I'm not sure why Toshiba M-prefixed 7K2 drives would be bad for NAS use cases. They're descendants of what was used in high-performance SPARC servers. Hot, dense, obnoxious, but that's just Fujitsu. They're plenty reliable, performant, and perfect for your all-important online/nearline data! You just have to look away from the bills (/s).
0: https://www.techpowerup.com/265841/some-western-digital-wd-r...
1: https://www.techpowerup.com/265889/seagate-guilty-of-undiscl...
It's fine to have 'Synology supported drives' which guarantee compatibility, but requiring them is absolute bollocks.
After some time, people started to post about problems with the new WD Red drives. People had trouble restoring failed drives; I had a problem where I think the drives never stopped rewriting sectors (you could hear the hard drives clicking 24/7 even when everything was idle).
Then someone figured out that WD had secretly started selling SMR drives instead of CMR drives. The "updated" models with more cache were much cheaper disks, and they only added more cache to try and cover up the fact that the new disks suffered from catastrophic slowdowns during certain workloads (like rebuilding a NAS volume).
This was a huge controversy. But apparently selling SMR drives is profitable, so WD claims the problem is just that NAS software needs to be made compatible with SMR drives, and all is well. They are still selling SMR drives in their WD Red line.
Edit: Here's a link to one of the many forum threads where people discovered the switch: https://community.synology.com/enu/forum/1/post/127228
That's half of it ... maybe? Last time I looked drives that offer host managed SMR still weren't available to regular consumers. In theory that plus a compatible filesystem would work flawlessly. In practice you can't even buy the relevant hardware.
Well, SMR lets you store more stuff on the same platter (more or less); fewer platters reduces costs, etc.
WD's claims about it being a software problem would be more reasonable if they were providing guidance about what the software needs to do to perform well with these drives, and probably that would involve having information about the drive available to the OS/filesystem rather than hidden.
WD Red -> SMR technology, slightly cheaper, not suitable for NAS
RAID and NAS used to go together when drive capacities were lower. E.g. I had a 9TB NAS with RAID5 at times when 8TB drives were >$500 a pop. These days, NAS does not necessarily imply having a RAID setup. I see a new "build your SFF/RPi NAS" article every week, and it rarely involves RAID.
This is because a NAS setup with a single high-capacity drive and an online backup subscription (e.g. Backblaze) is more cost-effective and perfectly adequate for a lot of users, who have no interest in playing the sysadmin. In such a setup, you just need a drive that can withstand continuous operation, and SMR should work fine.
This would explain why they'd only want to support HDD models the Synology OS can flash firmware updates to.
(It's also convenient to get more margins.)
Luckily all of the settings can be searched for and verified.
Seagate and Hitachi have seemed to treat me well over the years and I was giving WD a chance.
Next drives to buy will be from this list: https://www.backblaze.com/blog/backblaze-drive-stats-for-202...
To be quite blunt: After choosing my NAS, the act of choosing hard drives is actually harder and somewhat overwhelming. To be quite honest, knowing that I can choose from a narrower set of drives that are guaranteed to work correctly is going to really tip my scale in favor of Synology the next time I'm in the market for a NAS.
They can sell "guaranteed to work" drives for people like you who don't want to go through the whole picking process, while leaving other people the choice to put in any drive they want.
I’ve used (and still use) UnRaid before but switched to Synology for my data a while back due to both the plug-and-play nature of their systems (it’s been rock solid for me) and easily accessible hard drive trays.
I’ve built 3 UnRaid servers and while I like the software, hardware was always an issue for me. I’d love a 12-bay-style Synology hardware device that I could install whatever I wanted on. I’m just not interested in having to halfway deconstruct a tower to get at 1 hard drive. Hotswap bays are all I want to deal with now.
I have an Unraid install on a USB stick somewhere in my rack, but over time it started feeling limited, and when they began changing their license structure I decided it was time to switch, though I run it on a Dell R720xd instead of one of their builds (my only complaint is the fan noise - I think the R730 and up are better in this regard).
Proxmox was also on my short list for hypervisors, if you don't want TrueNAS.
I have found workarounds for the read-only root file system. But they aren't great. I have installed Gentoo with a prefix inside the home directory, which provides me with a working compiler and I can install and update packages. This sort of works.
For running services, I installed jailmaker, which starts an LXC Debian container, with docker-compose. But I am not so happy about that, because I would rather have an atomic system there. I couldn't figure out how to install Fedora CoreOS inside an LXC container, or whether that is even possible. Maybe NixOS would be another option.
But, as I said, for those services I would rather just run them in Proxmox and only use the TrueNAS for the NAS/ZFS management. That provides more flexibility and better system utilization.
The deprecation caused me to move to something more neutral and stay away from all the 'native' TrueNAS apps; I migrated to ordinary docker-compose, because that seems to be the most approachable.
I was also looking into running a Talos k8s cluster, but that didn't seem as approachable to me, and a bit overkill for a single-node setup.
It isn't really the case. TrueNAS wants you to look at it as an appliance so they make it work that way out of the box.
On the previous release, they had only commented out the apt repos but you could write to the root filesystem.
On the latest release, they went a little further and did lock the root filesystem by default but using a single command (`sudo /usr/local/libexec/disable-rootfs-protection`), root becomes writable and the commented out apt repos are restored. It just works.
I say "mostly" happy because I almost returned it. The USB connection between mini PC and Terramaster would be fine for a few days and then during intense operations like parity checks would disconnect and look like parity error/disk failure, except the disks were fine. Eventually realised the DAS uses power from the USB port as well as the adapter plug and the mini PC wasn't supplying enough power. Since attaching a powered USB hub it's been perfect.
Explanation of symptoms and solution, in case anyone is considering one or has the same problem: https://forum.terra-master.com/en/viewtopic.php?t=5830
It works well, but USB connection could be faster, and it bogs down when doing writes with soft-raid. I've been thinking about going for a DAS solution connected directly via SAS, instead. Still musing about what enclosure to use, though.
If it were now, I'd probably look deeper into Asus, QNAP or a DIY TrueNAS.
I have yet to come across something like Hyper Vault for backup and Drive for storage that works (mostly) smoothly. I would be happy to self-host, but the No Work Needed (tm) products of Synology are just great.
Sad to see them taking this road.
It was simple, it just worked, and I didn't have to think about it.
* TB SDDS - a multi-type phenomenon of Drobo units suddenly failing. There were three 'types' of SDDS I and a colleague discovered - "Type A" power management IC failures, "Type B" unexplainable lockups and catatonia, and "Type C" failed batteries. Type B units' SoCs have power and clock going in and nothing coming out.
I don't think this will work the way Synology imagines it.
Basically, Synology drives are not only more expensive, they're also statistically speaking less reliable when building a RAID with them, negating the very purpose of the product. What a dumb move.
<https://news.ycombinator.com/item?id=32048148>
Resulting in, FWIW, my top-rated-ever HN comment, I think:
Synology’s whole business model (arguably QNAP’s too) depends on you wanting more drive bays than 2 and wanting to host apps and similar services. The premium they ask is substantial. You can spec out a beefy Dell PowerEdge with a ton of drive bays for cheap and install TrueNAS, and you’ll likely be much happier.
But the fundamental suggestion I make is to consider a NAS a storage-only product. If you push it to be an app and VM server too, you're dependent on these relatively closed ecosystems and subject to the whims of the ecosystem owner. Synology choosing to lock out drives is just one example. Their poor encryption support (arbitrary limitations on filenames or strange full-disk encryption choices) is another. If you dive into any system like Synology long enough, you'll find warts that ultimately you wouldn't face if you just used more specialized software than what the NAS world provides.
Yeah, but then you have a PowerEdge with all the noise and heat that goes along with it. I have an old Synology 918 sitting on my desk that is so quiet I didn't notice when the AC adapter failed. I noticed only because my (docker-app-based) cloud backups failed and alerted me.
Unless Synology walks back this nonsense, I'll likely not buy another one, but I do think there is a place for this type of box in the world.
I would recommend a mini-ITX NAS enclosure or a prebuilt system from a vendor that makes TrueNAS boxes. iXSystems does sell prebuilt objects but they’re still pricey.
I have a Synology because I got tired of running RAID on my personal Linux machines (had a Drobo before that for the same reasons) - but as things like drive locking occur and arguably better OSS platforms become available, I'm not sure I'd make the same decision today.
Investors want bigger returns. They know they will not get away at this point by selling a monthly license. A large percentage would not buy anymore.
What other options do you have for recurring revenue? Cloud storage, but I don't think that's a great success.
And then... yes, hard disks. They are consumable devices with a limited lifespan. Label them as your own and charge a hefty fee.
The disks in a (larger) NAS setup cost more than the NAS itself. They want a piece of that pie by limiting your options.
No more Syno for me in the future.
However...
Long long ago I worked for a major NAS vendor. We had customers with huge NAS farms [1] and extremely valuable data. We were, I imagine, very exposed from a reputation or even legal standpoint. Drive testing and certification was A Very Big Deal. Our test suites frequently found fatal firmware bugs, and we had to very closely track the fw versions in customer installations. From a purely technical viewpoint there's no way we wanted customers to bring their own drives.
[1] Some monster servers had triple-digit GBs of storage, or even a TB! (#getoffmylawn)