I currently maintain a home file server with 15TB of storage: a spanned partition across three 4TB HDDs and one 3TB HDD. I maintain two independent backups, synced weekly, so my data is "reasonably" secure. I'm planning to upgrade the server soon to 20TB of storage, using as many 4TB drives as the chosen method requires.
This isn't a "which RAID level should I use" thread, though it is related to RAID. I would like some level of cost-effective redundancy purely to buy time to sync to my latest backup. Downtime isn't an issue once that is achieved.
My thinking is that I would use RAID 5 purely as a stop-gap measure, with no intention of rebuilding should a drive fail. For example, let's say I use 6 x 4TB drives to build a 20TB RAID 5 array. At some point one of the HDDs dies and the array becomes degraded. All I'm looking for is time to update my backups. Afterward, I will replace the drive (probably days later), reinitialize the array, and restore from backup instead of going through the rebuild process. I'm thinking this removes some of the risk of rebuild failure, since I won't need to successfully read every sector of every surviving drive. I will need to rely on my backups at that point, but that is why I have two.
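For concreteness, if this were Linux software RAID, the "reinitialize and restore" path I'm describing would look roughly like the sketch below (mdadm, with placeholder device names — I haven't settled on a platform, so treat this as an assumption):

```shell
# Create a fresh 6-drive RAID 5 array (device names /dev/sd[b-g] are examples).
# mdadm performs the initial parity sync in the background, so the array is
# writable immediately while the sync runs.
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]

# Watch initialization (or rebuild) progress and estimated finish time:
cat /proc/mdstat

# The rebuild path, for comparison: add a replacement disk to the degraded
# array and let md reconstruct its contents from the surviving disks' parity.
mdadm --manage /dev/md0 --add /dev/sdh
```

The key difference is that the rebuild must read every sector of every surviving disk, while create-and-restore only has to write what the backup actually contains.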
In all, what I'm looking for are real-world experiences of how long it takes to initialize a RAID 5 array versus rebuilding it. If reinitializing isn't noticeably faster than rebuilding, then it's not really worth the hit in write performance or the time investment; I'll just stripe the drives and make sure I'm consistent with my backups.
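As a back-of-the-envelope check on my own question: both an initial parity sync and a rebuild have to make a full pass over each disk, so a sequential full pass of one drive sets the lower bound either way. The 150 MB/s sustained throughput below is an assumption, not a measured figure:

```python
# Lower-bound estimate for a full pass of one drive, which bounds both
# RAID 5 initialization and rebuild time. Throughput figure is assumed.
DRIVE_TB = 4            # capacity of one member drive
THROUGHPUT_MB_S = 150   # assumed sustained sequential rate

drive_bytes = DRIVE_TB * 1000**4                      # 4 TB in bytes
seconds = drive_bytes / (THROUGHPUT_MB_S * 1000**2)   # time for one full pass
hours = seconds / 3600
print(f"Full pass of one {DRIVE_TB} TB drive: ~{hours:.1f} hours")
```

So either operation is in the several-hour range at best, and a rebuild on a degraded array typically runs slower than that because it competes with normal I/O.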
I don't mind working without a net (RAID 0, JBOD), but running this many drives makes me wonder about my odds of a failure. My other thought is to use something like FlexRAID to create a dedicated parity drive. Perhaps that is the better way to go regardless.
Opinions?