Posted by motiejus 4/19/2025
It starts with "Synology's storage systems have been transitioning to a more appliance-like business model." As a long-time user, all of this collectively moves Synology from "highly recommended" to "avoid."
It sounds like only certain features will be unavailable for non-Synology drives:
> Additionally, certain features such as volume-wide deduplication, lifespan analysis, and automatic firmware updates for third-party devices will be disabled.
It sounds like you can still use non-Synology drives just fine, but not do certain advanced things with them?
So why is this being called "locking"? I use Synology at home just as very basic RAID. Am I correct that this wouldn't affect me at all?
And are there any reasons why this is justifiable (e.g. hard drive manufacturers lying about health information) or is it just a cash grab?
Disabling filesystem features depending on which drives you use is insane.
What's next — no encryption if you're using a Seagate?
https://kb.synology.com/en-me/DSM/help/DSM/StorageManager/vo...
Maybe it's more about performance: they'll require their own drives with custom firmware that performs better?
That's what I'm trying to understand here. Is Synology really removing important basic necessary features, or is this more about high-end consistency and performance?
I'm wondering if anybody has any better recommendations given the requirement of being able to add storage capacity without having to completely recreate the FS.
Snapshots are available, but a little more work to deal with since you have to learn about subvolumes. It's not that hard.
Edit: TIL, SHR is just mdadm + btrfs.
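A quick way to see those layers for yourself, as a sketch, assuming SSH access to the Synology (array and volume names will differ per box):

    # the mdadm arrays SHR builds across the disks
    cat /proc/mdstat

    # the LVM layer pooling those arrays together
    sudo pvs
    sudo lvs

    # the btrfs filesystem sitting on top
    mount -t btrfs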
First, add the new disk as a physical volume and extend the volume group:

    pvcreate /dev/sdz
    vgextend NAS /dev/sdz

Now we want to add additional space to an existing LV "backup":

    lvextend --size +128G --resizefs NAS/backup

*note: --resizefs only works for filesystems supported by 'fsadm'. Its man page says: "fsadm utility checks or resizes the filesystem on a device (can be also dm-crypt encrypted device). It tries to use the same API for ext2, ext3, ext4, ReiserFS and XFS filesystem."

If using BTRFS inside the LV, and the LV "backup" is mounted at /srv/backup, tell it to use the additional space using:

    btrfs filesystem resize max /srv/backup

However, I did notice that the performance was substantially worse when using heterogeneous drives, which makes SHR somewhat less valuable to me.
The only drawback, if I can call it that, is that syncs are done on-demand, so the data is technically unprotected between syncs. But for my use case this is acceptable, and I actually like the flexibility of being in control of when this is done. Automating that with a script would be trivial, in any case.
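As an example, a minimal sketch of that automation, assuming the tool in question is SnapRAID (the comment doesn't name it) and a nightly cron job:

    #!/bin/sh
    # hypothetical nightly job: update parity, then scrub a small
    # slice of the array to catch silent corruption early
    snapraid sync >> /var/log/snapraid.log 2>&1
    snapraid scrub -p 5 >> /var/log/snapraid.log 2>&1

Wired up with a cron entry like "0 3 * * * /usr/local/bin/nightly-sync.sh", the unprotected window shrinks to at most a day.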
I haven't used either, but these were two options that came up when I was researching a few years ago.
I don't need 100% of my bytes to be instantly available to me on my network. The most important stuff is already available. I can wait a day for arbitrary media to thaw out for use. Local caching and pre-loading of read-only blobs is an extremely obvious path for smoothing over remote storage.
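One sketch of that pattern, assuming rclone in front of an S3-compatible remote (remote and path names are made up):

    # serve the remote through a mount with a persistent local cache,
    # so recently used blobs are read at local-disk speed
    rclone mount remote:media /mnt/media --daemon \
      --vfs-cache-mode full \
      --vfs-cache-max-size 200G \
      --vfs-cache-max-age 720h

    # pre-load ("thaw") a file you know you'll want by reading it once
    cat /mnt/media/movie.mkv > /dev/null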
Other advantages should be obvious. There are no limits to the scale of storage and unless you are a top 1% hoarder, the cost will almost certainly be more than amortized by the capex you would have otherwise spent on all that hardware.
20TB, which you can keep in a cute little 2-bay NAS, will cost you $4k USD / year on S3's infrequent access tier in APAC (where I am). So the "payback time" of local hardware is just 6 months vs S3 IA. That's before you pay for any data transfers.
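Rough arithmetic behind that figure, assuming ~$0.016/GB-month for S3 Standard-IA in an APAC region (an illustrative rate; check current regional pricing):

    # 20 TB * 1024 GB/TB * $/GB-month * 12 months
    awk 'BEGIN { printf "$%.0f/year\n", 20 * 1024 * 0.016 * 12 }'
    # -> $3932/year, roughly the $4k quoted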
For my use-case I'm OK with un-hedged risk and dollars staying in my pocket.
This is the same product.
> 20TB
I think we might be pushing the 1% case here.
Just because we can shove 20TB of data into a cute little NAS does not mean we should.
For me, knowledge that the data will definitely be there is way more important than having "free" access to a large pool of bytes.
I'm the last person I know who buys DVDs, and they're two-thirds of the reason I need more space. The last third is photography: 45.7 megapixels at 20 FPS adds up quickly.
S3's cost is extreme when you're talking in the tens of terabytes range. I don't have the upstream to seed the backup, and if I'm going outside of my internal network it's too slow to use as primary storage. Just the NAS on gigabit ethernet is barely adequate to the task.
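To put numbers on that, assuming ~55 MB per lossless-compressed 45.7-megapixel raw frame (an assumption; real sizes vary by codec and scene):

    # burst write rate vs. what gigabit ethernet tops out at
    awk 'BEGIN { printf "%d MB/s burst vs ~125 MB/s wire speed\n", 55 * 20 }'
    # -> 1100 MB/s burst vs ~125 MB/s wire speed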
Until Amazon inexplicably deletes your AWS account because your Amazon.com account had an expired credit card and was trying and failing to renew a subscription.
Ask me how I know.
Confusingly "Glacier" is both its own product, which stores data in "vaults", and a family of storage tiers on Amazon S3, which stores data in "buckets". I think Glacier the product is deprecated though, since accessing the Glacier dashboard immediately recommends using Glacier the S3 storage tiers instead.
Okay, I'm curious now. When you were talking about "a bunch of local disks", what size disk did you have in mind?
Right now the best price per TB is found on disks in the 14-24TB range.
There are no recurring costs to this setup except electricity. I don't think S3 can beat that.
This is not the perspective of actors working on longer timescales. For a number of agencies, preserving some encrypted data is worthwhile, because it may become recoverable in N years, whether through classical cryptanalytic improvements, bugs found in key generators, or advances in quantum computing.
Very few people here will be that interesting, but... worth keeping in mind.
You can’t be serious.
Bandwidth to get all of that back down to your system is much pricier, depending on how much you use that data.
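Ballpark for that, assuming ~$0.09/GB for S3 egress to the internet (rates vary by region and volume):

    # one full download of a 20 TB archive
    awk 'BEGIN { printf "$%.0f\n", 20 * 1024 * 0.09 }'
    # -> $1843 for a single complete restore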
I think self-built is the best bang for buck you're going to get and not have any annoying limitations.
There are plenty of motherboards with integrated CPUs (the same N100 as the cheaper Ugreen models) for roughly 100 Euros. Buy a decent PSU and get an affordable case. For my configuration with a separate AMD CPU I'm looking at right around 400 Euros, but I get total control.
And as far as software is concerned, I find setting up a modern OS like TrueNAS to be about as difficult as an integrated one from Ugreen.
For a NAS, I don't think I'd need more than 1-2 lanes for any single device. That sounds fine.
Shoutout to openmediavault. Just yesterday I installed it on my DXP8800 and now it works like a charm. But to install another OS you have to deactivate the watchdog timer in the BIOS, otherwise it resets the NAS every three minutes. Press CTRL + F12 to get into the BIOS and look for something like "watchdog" and disable it.
Last but not least, they seem to have Docker support, which was restricted to the more powerful Synology models; that's a nice bonus for self-hosting nowadays.
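Once Docker is available, a typical self-hosted service is one command away; Jellyfin here is just an arbitrary example, and the paths are made up:

    # media server bound to the NAS's storage on port 8096
    docker run -d --name jellyfin \
      -v /srv/media:/media \
      -p 8096:8096 \
      jellyfin/jellyfin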
Ended up buying a TerraMaster DAS instead and connected it over USB to my NUC.
Also considered a NAS enclosure with an N110 mini-ITX board, which would allow you to upgrade it in the future.
I had a Solaris ZFS filer that I ran for a long time (for historical reasons: I jumped on OpenSolaris when it came out and never had a chance to move off Oracle's lineage). I moved to Synology about three years ago because I was sick and tired of managing my own file server. Yet at this point the cons of Synology are starting to outweigh the manageability advantages that drew me in.
[1] https://www.reddit.com/r/synology/comments/1feqy62/synology_...
Some of us are using that with great success to eliminate the locking situation.
They really have to sell it by minimising the price differential and reducing the lead time.
This is the same old tired argument Apple made about iPhone screens - complain about inferior aftermarket parts while doing everything in their power to not make the original parts available anywhere but AASPs. Except here we have the literal same parts with only a difference in the firmware vendor string.
On the other hand, an NVMe drive from Crucial that lied about syncing data caused a write hole in ZFS, and the associated pool broke to the point where we could only mount it in read-only mode with a lot of flags.
The problem is, I've formatted my drives with SHR (Synology Hybrid RAID, essentially another exclusive lock-in), and this would mean a rather painful transition to the new drives, since it now involves getting a whole new set of drives to format and move data to, rather than a simple lift-and-drop.
Ugh.
Not sure why people are saying SHR is proprietary in some of the comments I read; it's effectively a wrapper around mdadm, though I suppose the GUI itself could be called proprietary.
Is there something with 6-8 drive slots on which I could install whatever OS I want? Ideally with a small form factor. I don't want to have a giant desktop again for my NAS purposes.