SamirD
Golden Member
These are solid points for anyone considering a long-term solution. People forget that if you have a drive, you have to put the drive in something, and if the drive doesn't die but the something does, what do you do? The focus is usually on drive failure, but hardly anyone thinks about raid controller failure, and it truly is a point of failure too.

Here are some advantages of a ZFS solution. First, the raid is not locked to your vendor; you can replace the hardware around the disks at any time (a big advantage when trying to recover data after your NAS died). Second, you can use the actual machine running the raid, so you don't need a dedicated box, which matters when doing power calculations. Third, if you want just a file server and nothing more, you can use low-power parts (and modern parts use less power than 10-year-old technology). Last, maintenance is not that difficult if you do a few things, like labeling your disks so it's easy to figure out which one died.

ZFS also has a big advantage over older file systems by incorporating protection against bit rot. It isn't enough to just be consistent across the raid; you have to actually know which copy is the valid one. One of the biggest problems with disks is their ability to silently zero out bad blocks, which can create holes in the data. This is something the ZFS per-block hashes are very good at detecting and fixing.
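To make the "which copy is valid" point concrete, here's a minimal Python sketch of the idea behind ZFS's per-block checksums. The data and variable names are made up for illustration; the real detail that matters is that ZFS stores each block's checksum in the parent block pointer rather than next to the data, which is what lets it pick the good side of a mirror instead of just noticing a mismatch:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A checksum is recorded when the block is written (ZFS keeps it in
# the parent block pointer, separate from the data itself).
good = b"family photo, 2012"
recorded_checksum = sha256(good)

# Two mirror copies; one silently rots (a flipped byte, or a block
# the drive zeroed out during a bad-sector remap).
copy_a = good
copy_b = b"family photo, 2#12"

# Because the expected checksum is known independently, we don't just
# see that the mirrors disagree -- we know which copy is the valid one
# and can rewrite the bad side from it (what a ZFS scrub does).
valid = [c for c in (copy_a, copy_b) if sha256(c) == recorded_checksum]
print(len(valid))          # -> 1 (only one copy still matches)
print(valid[0] == good)    # -> True (and it's the uncorrupted one)
```

A plain two-way mirror without checksums can only tell you the copies differ; it has no way to break the tie.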
-
Anyway, the choice is the buyer's; they should just be aware of all the options. To be honest, it's #1 and #2 that will keep me from ever buying a nas box. I don't need the extra hardware, and I don't need to worry about trying to reassemble the data if the actual nas box dies.
And this is where being standards-based and staying away from proprietary formats helps. Most nas units are ext3/4 based, so you can read the drives using almost any linux live cd. But the exotic synology and qnap proprietary formats can leave you with only one recovery path--theirs.
I treat a nas unit as a multi-point-failure device with no recovery, i.e., like a drive that has already failed. If a nas unit fails, assume the data is gone. So I have multiple nas units from multiple vendors, all with the same data replicated. But this doesn't address bit rot, which is damn real.
I first experienced bit rot during yearly comparisons of my photography archive. A single file out of hundreds of thousands would no longer compare correctly against the other two copies of the data, even though all three had been copied directly from the source and verified at the time. As areal densities have increased, the error rate has basically remained the same, and that means bit rot will increasingly become a problem.
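That yearly comparison is basically a hash-and-diff pass over each copy of the archive. Here's a hedged sketch of that kind of check in Python (`hash_tree` and `compare` are hypothetical helper names, not any real tool):

```python
import hashlib
from pathlib import Path

def hash_tree(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def compare(copies: list[dict[str, str]]) -> list[str]:
    """Return the relative paths whose digests differ across copies."""
    bad = []
    for rel in copies[0]:
        digests = {c.get(rel) for c in copies}
        if len(digests) > 1:     # copies disagree (or a file is missing)
            bad.append(rel)
    return bad

# Usage: hash each replica once, then diff the digest maps.
# rotted = compare([hash_tree(Path("/mnt/nas1/photos")),
#                   hash_tree(Path("/mnt/nas2/photos")),
#                   hash_tree(Path("/mnt/nas3/photos"))])
```

Note the limitation the post runs into: with three copies and no recorded "known good" checksum, this tells you *that* a file rotted, but not *which* copy is still good unless two of the three agree.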
ZFS is the only file system I know of that can actively and automatically deal with bit rot. I still wouldn't trust it 100%, so I'd keep multiple copies of the data, but knowing that this problem is being addressed now will make for a much better solution down the road, when the problem is a lot larger too.
And the nice thing about making your own solution is that older, essentially free, working hardware that is just 'around' can be put to use. I disagree with all the whining over power, because no one seems to remember that residential wiring has 220/240v 30a circuits, and the ac, oven, and dryer on them use far more power than a little server will, since cpu usage is quite low even on a 95w cpu like the i7-2600. The difference in power costs between a 25w nas and a 7w one over the course of a year is the cost of a single meal. And yet no one thinks twice about how much energy cooking at 400F for 35 minutes uses, or how much it costs to dry clothes--even when each of these appliances is drawing well over 100W during operation.
