Posted by motiejus 4/19/2025

Synology Lost the Plot with Hard Drive Locking Move(www.servethehome.com)
653 points | 403 comments
nichos 4/19/2025|
I wish one or more HDD manufacturers would get together and sell a NAS that runs TrueNAS on it. Or even an existing NAS manufacturer (UGreen, etc.) could.

All these NAS manufacturers are spending time developing their own OS when TrueNAS is well established.

Kirby64 4/19/2025|
TrueNAS isn’t nearly friendly enough for the average user. HexOS may fit that bill, although it seems rather immature. It runs on top of TrueNAS.
ryao 4/22/2025||
My doctor was able to switch from Synology to TrueNAS after I advised him to replace his failing Synology NAS with a TrueNAS box and I gave him a link to the TrueNAS documentation. He is fairly average in my opinion.
queenkjuul 4/22/2025|||
I think the fact they're a doctor puts them well above the average person's intelligence (or for your sake, at least, I hope so).
Kirby64 4/22/2025|||
Like the other poster said, your anecdote says more about the doctor than the average user. TrueNAS documentation isn't the worst by any means, but it has far too many extremely low level controls accessible. It would be overwhelming to most users unless you stay very within a small window of basic functionality. If you're just using it for network storage, maybe it's fine... anything beyond that though is going to trip up many folks.
yupyupyups 4/22/2025||
They should've designed a model with open source firmware, written useful technical articles about storage, done PR campaigns, and sold premium versions of said hardware. I imagine that it would create more good will and be a better business move.

They could've even sold their own branded drives on the side and explained why they are objectively better, but still let customers choose.

asdswe 4/22/2025||
I can hardly see the point of devices like this. If you are tech-savvy enough to host your own NAS locally, you might as well build your own NAS and install whatever user-friendly OS you wish (Unraid or OpenMediaVault, for example). No vendor lock-in whatsoever that way. If you aren't tech-savvy enough, then you should probably use cloud storage anyway.
sumtechguy 4/22/2025||
I am very capable of building my own, but the plug-and-play of it is really nice. I basically popped in some drives and had a share up and running in a couple of hours. The same thing with DIY would be: put together a parts-picker list and build it (1-2 days), then pop the drives in, then figure out how to configure it correctly (another day or so, because I do it very rarely). At that point, yeah, for what I use it for they are equivalent, other than now I get to keep track of the CVEs that Synology is handling for me with the occasional patch. I could put a stripped-down Linux distro or TrueNAS or something like that on it and get the same. Possibility and ease of use are also a spectrum to weigh against each other. When you are young you have all the time you need. When you are older you just want to put it together, serve some files, and do something else, because I have done this 6 times already.

But there is actually one reason I am going DIY next time. 'uname -a'. They ship with very old kernels. I suspect the other utilities are in the same shape. They have not updated their base system in a long time. I suspect they have basically left all of the amazing changes the kernels have had over the past decade out. They are randomly cherry picking things. Which is fine. But it has to be creating a 'fun' support environment.

jitl 4/22/2025|||
Time is money, I would rather buy a NAS these days even though in the past I ran my own FreeBSD ZFS server NAS. Much cheaper use of my time to pay a 2-3x premium on hardware if it means I spend 4 hours on build and 1 hour on admin per year vs 12 hours build, 6 hours admin per year.
InfinityByTen 4/22/2025|||
Getting a machine and setting it up for local use as storage accessible over your home network is very different from having to install a bunch of duct-taped software and hoping it reliably works all the time without fail.

I'm a full-time dev, and even having my Home Assistant break every time I think of upgrading it is annoyance enough. My home lights and whatnot are down for two hours while I'm mostly reinstalling HA from scratch and recovering from the backup that I've started taking since the last collapse.

A NAS is a far more critical device, and I don't want to lose my data or need to spend two weeks recovering it under an anxiety attack because I hastily did one upgrade.

saltysalt 4/22/2025|||
Thanks for the OS recommendations! I'm a soon-to-be ex-Synology user looking for a new home (their killing of Video Station also irked me).
michaelt 4/22/2025||
Honestly they got into the NAS business when cloud offerings were different, and many Internet connections were a lot slower.

Synology’s market is the intersection of:

People who have lots of pirated media, who want generous storage with torrent capabilities.

People who want to store CCTV feeds.

People who find the cloud too expensive, or feel it isn’t private enough.

People with a “two is one, one is none” backup philosophy, for whom the cloud alone is not enough.

Tiny businesses that just need a windows file share.

bhouston 4/22/2025||
I used to be a Synology fan, owning many of their devices for work and the office. But I've moved on to TrueNAS now and haven't looked back. TrueNAS with ZFS is just amazing.
Ylpertnodi 4/22/2025|
It would be interesting to know the negatives of Synology and why you moved away from them. Gaining nothing from your comment, it could just as well be that your boss was pissed his ex got a job in security there.
bhouston 4/22/2025||
My biggest reason is flexibility in machine configuration. I found a lot of value in bringing my own machine to the table: I can put any mix of HDD, SSD, NVMe, or RAM in my machines. With Synology or any other premade box I am stuck with a specific config.

For example, my current TrueNAS box at home has 128GB of RAM, 32TB of NVMe (live data) and 72TB of HDD (archive data), with significant CPU and GPU (well, compared to a Synology box; I am running a Ryzen 5700G) and 10G networking.

It didn't start out this way, but because TrueNAS works with your own hardware, I can evolve it incrementally as hardware costs and needs change.

It is a beast of a machine and it is no work to maintain - TrueNAS makes it an appliance.

I’ve written up previously on my home setup here: https://benhouston3d.com/blog/home-network-lessons

system2 4/22/2025||
We've deployed Synology across multiple client environments for almost a decade, and it's been an incredibly reliable platform. Barring the usual hard drive failures (which are inevitable over time), we've had zero issues with SSO integration, expanding arrays with expansion units, seamless hardware upgrades, or the application layer. It just works.

That’s why I’m hoping Synology rethinks its position. Swapping out trusted, validated drives for unknowns introduces risk we’d rather avoid, especially since most of our client setups now run on SSDs. If compatibility starts breaking, that’s a serious operational concern.

NKosmatos 4/22/2025||
As an owner of two Synology boxes (and maintainer of a couple more), I'm not happy about this news. Synology needs to rethink its user base, upgrade its hardware offerings, and avoid being led by profit-hunting board members.

I hope someone high-ranking at Synology reads all the comments on this post (and many others from https://mariushosting.com ) and makes the right decisions. Please don't let Synology become like the other greedy companies out there.

sylens 4/22/2025||
Bought a Synology in 2020, been using it for backups and Plex since then. Only recently started doing a little more with it (Immich, Kavita, etc.)

Sometimes I brush up against its limitations and it's annoying to me; other times I like the convenience it provides (Cloud Sync, Hyper Backup). Even before this announcement, I thought that when this thing bites the dust, I would likely build something myself and run Unraid or TrueNAS.

IMO what they really needed to do was improve the QuickConnect service to function similarly to Cloudflare Zero Access/Tunnels, or integrate better with that. That's really the missing link in turning your NAS into a true self-hosted service that can compete with big-tech cloud services, in that you won't expose your home IP and won't need to fiddle around with a reverse proxy yourself.

bitwize 4/22/2025||
Realistically, this will be par for the course among NAS vendors in 10 years, and there will be vigorous defenses of it on Hackernews. "If you want an open NAS, build one yourself. I choose a Yoyodyne NAS because it integrates with the apps I use. And the Yoyodyne disk restriction is no big deal since I know I'll be using something supported and compatible."

Any asshole thing a company does, provided they remain solvent enough to stick to their guns for enough time, becomes an accepted industry practice. BonziBuddy generated outrage back in the day because it was spyware. Now Microsoft builds BonziBuddy tech right into Windows, and people -- professional devs, even -- are like "Yeah, I don't understand why anyone would want the hassle of desktop Linux when there's WSL2."

kyrofa 4/22/2025||
> When a drive fails, one of the key factors in data security is how fast an array can be rebuilt into a healthy status. Of course, Amazon is just one vendor, but they have the distribution to do same-day and early morning overnight parts to a large portion of the US. Even overnighting a drive that arrives by noon from another vendor would be slower to arrive than two of the four other options at Amazon.

In a way this is a valid point, but it also feels a bit silly. Do people really make use of devices like this and then try to overnight a drive when something fails? You're building an array-- you're designing for failure-- but then you don't plan on it? You should have spare drives on hand. Replenishing those spares is rarely an emergency situation.

gambiting 4/22/2025||
>>You should have spare drives on hand.

I've never heard of anyone doing that for a home nas. I have one and I don't keep spare drives purely because it's hard to justify the expense.

itiwaru 4/22/2025|||
The drives I have obtained ahead of failure, and only after the existing drives have been used past the 8-year mark, are for a rebuild. I would hardly call them spares.

I did end up with a spare once, at the 3-year mark, but the bathtub curve of failure has held true, and now that so-called spare is 6 years old, unused, too small a drive, and never planned to be used in any way.

The conventional wisdom is that you should not store drives that rarely get spun up, so what does it mean to have spares, unless you are spinning them up once a month and expecting them to last any longer once actually used?

topspin 4/22/2025||||
I do. Also, I have an unopened 990 EVO Plus ready to drop into whatever machine needs it.

I'm not made of money. I just don't want to make excuses over some $90 bit of junk. So I have spare wifi, a headset, an ATX PSU, input devices, and a low-cost "lab" PSU to replace any dead wall wart. That last one was a life saver: the SMPS for my ISP's "business class" router died one day, so I cut and stripped the wires, set the volts and amps, and powered it that way for a few days while they shipped a replacement.

Hamuko 4/22/2025||||
I had a hot spare in the form of a backup drive. It was a 12 TB external WD that I'd already burned in and had as a backup target for the NAS. Then when one of the drives in the NAS failed, I broke the HDD out of the enclosure and used it to replace the broken drive. It hadn't been in use for many months, and I'd rather sacrifice some backups than the array. I also technically had offsite backups that I could restore in an emergency.
1oooqooq 4/22/2025||||
Always run the previous drive generation's capacity.

I budget $300 each, for 2 or 3 drives. That has always been the sweet spot: get the largest enterprise model for exactly that price.

That was 2TB 10 years ago, and 10TB 5 years ago.

So 5 years ago I rebuilt storage on those 10TB drives but only using 2TB volumes (could have been 5, but I was still keeping the last-gen size as the data hadn't grown). Now my old drives are spares/monthly off-machine copies. I used one while getting a warranty replacement for a failed new 10TB one, btw.

Now I can get 20TB drives for that price; I will probably still only increase the volumes to 10TB at most and keep two spares.

kyrofa 4/22/2025|||
Heh, I suppose you've heard of one now. Fair enough, I could be in the minority here.
TiredOfLife 4/22/2025||
Yeah. If you don't have a couple of spare 100TB SSD NASes you can turn on in the event of failure, you are doing it wrong.
tiew9Vii 4/22/2025|||
A lot of these are home power users.

They build the array to survive a drive failure, but as home power users without unlimited funds they don't have a hot spare or a store room they can run to. It's completely reasonable to order a spare on failure unless it's mission-critical data needing 24/7 uptime.

They completely planned for it. They've planned so that if there is a failure, they can get a new drive within 24 hours, which for home power users is generally enough, especially since they'll likely get a warning before complete failure.

cm2187 4/22/2025|||
I agree, I don't buy spares, but when I have a drive failure, the first thing I do is an incremental backup, so that I know my data is safe regardless, while I am waiting for a drive.

Also worth noting that I don't think I've experienced hard fails; it's usually the unrecoverable error count shooting up in more than one event, which tells me it's time to replace. So I don't wait for the array to be degraded.

But I guess that's an important point: monitor your drives. Synology will do that for you, but you should monitor all your other drives too. I have a script that uploads all the SMART data from all my drives across all my machines to a central location, to keep an eye on SSD wear levels, SSD bytes written (sometimes you have surprises), free disk space, and SMART errors.

nicolas_t 4/22/2025||
Do you have a link to your script? Mostly I'd love to have a good dashboard for that data.
cm2187 4/22/2025||
Not the full script but can share some pointers.

I use smartctl to extract the SMART data, as it works so well.

Generally "smartctl -j --all -l devstat -l ssd /dev/sdXXX". You might need to add "-d sat" to capture certain devices on linux (like drive on an expansion unit on synology). By the way, synology ships with an ancient version of smartctl, you can use a xcopy newer version on synology. "-j" export to json format.

Then you need to do a bit of magic to normalise the data. For example, some wear levels are expressed as health (starts at 100) and others as percent used (starts at 0). There are also different versions of the SMART data; "-l devstat" outputs a much more useful set of stats, but older SSDs won't support it.
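
For illustration only (this is not the actual script; the helper name and parameters are made up), the two wear conventions can be folded into a single number like this:

  // illustrative sketch: normalise "health remaining" (counts down from 100)
  // and "percent used" (counts up from 0) into one percent-used figure
  static int? NormaliseWearPercentUsed(int? percentUsed, int? healthRemaining)
  {
   if (percentUsed != null)
   {
    return percentUsed; // already counts up from 0
   }
   if (healthRemaining != null)
   {
    return 100 - healthRemaining; // counts down from 100
   }
   return null; // drive reports neither
  }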

Host writes are probably the messiest part, because sometimes they are expressed in blocks, or units of 32MB, or something else. My logic is:

  // NVMe: health info log reports data units written; convert to bytes
  if (nvme_smart_health_information_log != null)
  {
   return nvme_smart_health_information_log.data_units_written * logical_block_size * 1000;
  }
  // SAS/SCSI: the write error counter log reports gigabytes processed
  if (scsi_error_counter_log?.write != null)
  {
   // should be 1000*1000*1000
   return (long)(double.Parse(scsi_error_counter_log.write.gigabytes_processed) * 1024 * 1024 * 1024);
  }
  // ATA: prefer the device statistics log ("-l devstat") when the drive supports it
  var devstat = GetAtaDeviceStat("General Statistics", "Logical Sectors Written");
  if (devstat != null)
  {
   return devstat.value * logical_block_size;
  }
  // Otherwise fall back to vendor-specific SMART attributes, which report writes in assorted units
  if (ata_smart_attributes?.table != null)
  {
   foreach (var att in ata_smart_attributes.table)
   {
    var name = att.name;
    if (name == "Host_Writes_32MiB")
    {
     return att.raw.value * 32 * 1024 * 1024;
    }
    if (name == "Host_Writes_GiB" || name == "Total_Writes_GB" || name == "Total_Writes_GiB")
    {
     return att.raw.value * 1024 * 1024 * 1024;
    }
    if (name == "Host_Writes_MiB")
    {
     return att.raw.value * 1024 * 1024;
    }
    if (name == "Total Host Writes")
    {
     return att.raw.value;
    }
    if (name == "Total LBAs Written" || name == "Total_LBAs_Written" || name == "Cumulative Host Sectors Written")
    {
     return att.raw.value * logical_block_size;
    }
   }
  }
and even that fails in some cases where the logical block size is 4096.

I think you need to test it against your own drive estate. My advice: just store the raw JSON output from smartctl centrally, and re-parse it as you improve your logic for all these edge cases based on your own drives.
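
For the collection side, here is a rough sketch of what such a dump can look like (not my actual script; the device list and output path are placeholders):

  // rough sketch: dump raw smartctl JSON per drive to a central share,
  // so it can be re-parsed later as the normalisation logic improves
  using System;
  using System.Diagnostics;
  using System.IO;

  string[] devices = { "/dev/sda", "/dev/sdb", "/dev/nvme0" }; // placeholder device list
  foreach (var dev in devices)
  {
   var psi = new ProcessStartInfo("smartctl", $"-j --all -l devstat -l ssd {dev}")
   {
    UseShellExecute = false,
    RedirectStandardOutput = true
   };
   using var proc = Process.Start(psi);
   var json = proc.StandardOutput.ReadToEnd();
   proc.WaitForExit();
   // one file per machine/device/day, stored as-is for later re-parsing
   var file = $"{Environment.MachineName}_{Path.GetFileName(dev)}_{DateTime.UtcNow:yyyyMMdd}.json";
   File.WriteAllText(Path.Combine("/srv/smart", file), json); // placeholder central location
  }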

sersi 4/22/2025|||
My Synology NAS is for my own use. I do not keep spare drives on hand; I would just go to the shop that's 20 minutes away from me to get a new drive. They wouldn't have Synology-branded drives, but they have the Toshiba MG series, Western Digital, and Seagate.

Within my NAS, I have 2 different pools. One is for important data: 2 hard disks in SHR1, replicated to an offsite NAS. The other pool is for less important data (movies, etc.): SHR1 with 5 hard disks, 75TB total capacity, none of the hard disks from the same batch or production date. Not having the data immediately is not a problem. Losing that data would suck, but I'd rebuild, so I'm fine not having a spare drive on hand.

snowwrestler 4/22/2025||
Failures should be rare, which means a spare HD might be sitting in a drawer without spinning for years, which HDs don’t like to do.

When you need to replace a drive, it's better to purchase a new one: it was manufactured recently and hasn't been sitting for very long.

AlexandrB 4/22/2025||
> a spare HD might be sitting in a drawer without spinning for years, which HDs don’t like to do.

How so? Does this imply drives "age out" while sitting at distribution warehouses too?

ziml77 4/19/2025|
Hopefully they don't start pushing this change to older products. I don't want to have to replace my NAS, but if I ever do, it certainly won't be with another Synology product, even if they walk this decision back.