Posted by motiejus 4/19/2025
All these NAS manufacturers are spending time developing their own OS when TrueNAS is well established.
They could've even sold their own branded drives on the side and explained why they are objectively better, but still let customers choose.
But there is actually one reason I am going DIY next time: 'uname -a'. They ship with very old kernels, and I suspect the other utilities are in the same shape. They have not updated their base system in a long time, so they have basically missed out on all of the great changes the kernel has had over the past decade, randomly cherry-picking things instead. Which is fine, but it has to create a 'fun' support environment.
I'm a full-time dev, and even Home Assistant breaking every time I think of upgrading it is annoyance enough. My home lights and whatnot go down for two hours while I mostly reinstall HA from scratch and recover from the backups I've started taking since the last collapse.
A NAS is a far more critical device, and I don't want to lose my data or spend two weeks recovering it in a state of anxiety because I did one hasty upgrade.
Synology’s market is the intersection of:
People who have lots of pirated media, who want generous storage with torrent capabilities.
People who want to store CCTV feeds.
People who find the cloud too expensive, or feel it isn’t private enough.
People with a “two is one, one is none” backup philosophy, for whom the cloud alone is not enough.
Tiny businesses that just need a Windows file share.
For example, my current TrueNAS at home has 128GB of RAM, 32TB of NVMe (live data) and 72TB of HDD (archive data), with significant CPU and GPU power (well, compared to a Synology box; I am running a Ryzen 5700G) and 10G networking.
It didn't start out this way, but because TrueNAS works with your own hardware I can evolve it incrementally as hardware costs and needs change.
It is a beast of a machine and it is no work to maintain - TrueNAS makes it an appliance.
I’ve written up previously on my home setup here: https://benhouston3d.com/blog/home-network-lessons
That’s why I’m hoping Synology rethinks its position. Swapping out trusted, validated drives for unknowns introduces risk we’d rather avoid, especially since most of our client setups now run on SSDs. If compatibility starts breaking, that’s a serious operational concern.
I hope someone high ranking at Synology reads all the comments from this post (and many others from https://mariushosting.com ) and makes the right decisions. Please don't let Synology become like the other greedy companies out there.
Sometimes I brush up against its limitations and it's annoying; other times I like the convenience it provides (Cloud Sync, Hyper Backup). Even before this announcement, I figured that when this thing bites the dust, I would likely build something myself and run Unraid or TrueNAS.
IMO what they really needed to do was improve the QuickConnect service to function like Cloudflare Zero Trust (Access/Tunnels), or integrate better with it. That's really the missing link in turning your NAS into a true self-hosted service that can compete with big-tech cloud services: you wouldn't expose your home IP and wouldn't need to fiddle around with a reverse proxy yourself.
Any asshole thing a company does, provided they remain solvent enough to stick to their guns for enough time, becomes an accepted industry practice. BonziBuddy generated outrage back in the day because it was spyware. Now Microsoft builds BonziBuddy tech right into Windows, and people -- professional devs, even -- are like "Yeah, I don't understand why anyone would want the hassle of desktop Linux when there's WSL2."
In a way this is a valid point, but it also feels a bit silly. Do people really make use of devices like this and then try to overnight a drive when something fails? You're building an array-- you're designing for failure-- but then you don't plan on it? You should have spare drives on hand. Replenishing those spares is rarely an emergency situation.
I've never heard of anyone doing that for a home nas. I have one and I don't keep spare drives purely because it's hard to justify the expense.
I did end up with a spare at the three-year mark, but the bathtub curve of failure has held true, and now that so-called spare is six years old, unused, too small a drive, and will never be used in any way.
The conventional wisdom is that you shouldn't leave drives sitting in storage without spinning them up every so often, so what does it even mean to have spares, unless you're spinning them up once a month and expecting them to last any longer once actually used?
I'm not made of money. I just don't want to make excuses over some $90 bit of junk. So I have spare wifi, headset, ATX PSU, input devices, and a low-cost "lab" PSU to replace any dead wall wart. That last one was a life saver: the SMPS for my ISP's "business class" router died one day, so I cut and stripped the wires, set the volts and amps, and powered it that way for a few days while they shipped a replacement.
i budget 300usd each, for 2 or 3 drives. that has always been the sweet spot. get the largest enterprise model for exactly that price.
that was 2tb 10yrs ago, and 10tb 5yrs ago.
so 5yrs ago i rebuilt storage on those 10tb drives but only using 2tb volumes (could have gone to 5, but i was still keeping the last-gen size as my data hadn't grown). now my old drives are spares/monthly off-machine copies. i used one while waiting on a warranty replacement for a failed new 10tb drive, btw.
now i can get 20tb drives for that price. i will probably still only increase the volumes to 10tb at most and keep two spares.
They build the array to survive a drive failure, but as home power users without unlimited funds they don't have a hot spare or a storeroom they can run to. It's completely reasonable to order a spare on failure unless it's mission-critical data needing 24/7 uptime.
They planned for it completely: if there is a failure, they can get a new drive within 24 hours, which for home power users is generally enough, especially since you'll likely get a warning before complete failure.
Also worth noting that I don't think I've experienced hard failures; it's usually the unrecoverable error count shooting up across more than one event that tells me it's time to replace. So I don't wait for the array to be degraded.
But I guess that's the important point: monitor your drives. Synology will do that for you, but you should monitor all your other drives too. I have a script that uploads the SMART data from all my drives across all my machines to a central location, to keep an eye on SSD wear levels, SSD bytes written (sometimes you get surprises), free disk space and SMART errors.
I use smartctl to extract the SMART data, as it works so well.
Generally "smartctl -j --all -l devstat -l ssd /dev/sdXXX". You might need to add "-d sat" to capture certain devices on linux (like drive on an expansion unit on synology). By the way, synology ships with an ancient version of smartctl, you can use a xcopy newer version on synology. "-j" export to json format.
Then you need to do a bit of magic to normalise the data. For example, some wear levels are expressed as health (starting at 100) and others as percent used (starting at 0). There are different versions of SMART data; "-l devstat" outputs a much more useful set of stats, but older SSDs won't support it.
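As a tiny illustration of that normalisation (a hypothetical helper, not lifted from my script), folding both conventions into one percent-used number:

    // Collapse both conventions into a single "percent used" figure
    // (0 = brand new, 100 = fully worn).
    static int WearPercentUsed(int rawValue, bool reportedAsHealthRemaining)
    {
        // "health remaining" counts down from 100; "percent used" counts up from 0.
        return reportedAsHealthRemaining ? 100 - rawValue : rawValue;
    }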
Host writes are probably the messiest part, because sometimes they are expressed in blocks, or units of 32MB, or something else. My logic is:
    // NVMe: the health log reports "data units written"; scale by logical block size * 1000
    // (see the caveat below about 4K block sizes).
    if (nvme_smart_health_information_log != null)
    {
        return nvme_smart_health_information_log.data_units_written * logical_block_size * 1000;
    }

    // SCSI/SAS: the error counter log reports gigabytes processed.
    if (scsi_error_counter_log?.write != null)
    {
        // should be 1000*1000*1000
        return (long)(double.Parse(scsi_error_counter_log.write.gigabytes_processed) * 1024 * 1024 * 1024);
    }

    // ATA device statistics ("-l devstat"): logical sectors written, scaled by block size.
    var devstat = GetAtaDeviceStat("General Statistics", "Logical Sectors Written");
    if (devstat != null)
    {
        return devstat.value * logical_block_size;
    }

    // Fall back to vendor-specific SMART attributes, each with its own unit.
    if (ata_smart_attributes?.table != null)
    {
        foreach (var att in ata_smart_attributes.table)
        {
            var name = att.name;
            if (name == "Host_Writes_32MiB")
            {
                return att.raw.value * 32 * 1024 * 1024;
            }
            if (name == "Host_Writes_GiB" || name == "Total_Writes_GB" || name == "Total_Writes_GiB")
            {
                return att.raw.value * 1024 * 1024 * 1024;
            }
            if (name == "Host_Writes_MiB")
            {
                return att.raw.value * 1024 * 1024;
            }
            if (name == "Total Host Writes")
            {
                return att.raw.value;
            }
            if (name == "Total LBAs Written" || name == "Total_LBAs_Written" || name == "Cumulative Host Sectors Written")
            {
                return att.raw.value * logical_block_size;
            }
        }
    }
And even that fails in some cases where the logical block size is 4096. I think you need to test it against your own drive estate. My advice: just store the raw JSON output from smartctl centrally, and re-parse it as you improve your logic for all these edge cases based on your own drives.
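To make the "store raw, re-parse later" point concrete, here is a small sketch that walks a directory of saved smartctl JSON with System.Text.Json; the /srv/smart path is a placeholder, and the property names are the same ones the -j output uses in the logic above:

    using System;
    using System.IO;
    using System.Text.Json;

    // Sketch: re-parse stored smartctl JSON as the normalisation logic evolves.
    class SmartReparse
    {
        static void Main()
        {
            foreach (var file in Directory.GetFiles("/srv/smart", "*.json")) // placeholder central location
            {
                using var doc = JsonDocument.Parse(File.ReadAllText(file));
                var root = doc.RootElement;

                // Default to 512-byte blocks when the field is missing.
                long blockSize = root.TryGetProperty("logical_block_size", out var bs) ? bs.GetInt64() : 512;

                // NVMe example: the same field the host-writes logic above starts with.
                if (root.TryGetProperty("nvme_smart_health_information_log", out var nvme) &&
                    nvme.TryGetProperty("data_units_written", out var duw))
                {
                    Console.WriteLine($"{Path.GetFileName(file)}: ~{duw.GetInt64() * blockSize * 1000} bytes written");
                }
            }
        }
    }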
Within my NAS, I have two different pools. One is for important data: two hard disks in SHR-1, replicated to an offsite NAS. The other pool is for less important data (movies, etc.): SHR-1 with five hard disks, 75TB total capacity, none of them from the same batch or production date. Not having that data immediately available is not a problem; losing it would suck, but I'd rebuild, so I'm fine not having a spare drive on hand.
When you need to replace a drive, it's better to purchase a new one at that point: it will have been manufactured recently and won't have been sitting around for long.
How so? Does this imply drives "age out" while sitting at distribution warehouses too?