I currently have a bunch of data spread across different NASs. My FreeNAS box has three 2TB RAID5 arrays built on an Adaptec 31205 controller, and I have another 2TB on a separate NAS. I need more space, and I want to consolidate all of my NASs. All together I have 16 750GB drives, with just over 6TB of actual data across all the arrays and NASs. I just bought an Adaptec 52445 28-port SAS/SATA controller to put in my FreeNAS box as part of this process.
I don't have a way to back this data up to tape or anything like that; it costs way too much. (Disk is cheaper than tape, but disk is still expensive for this amount of data, and I don't have a tape drive anyway.) So this is what I am planning to do:
1. Buy four 1.5 TB drives (6TB total space)
2. Move data from all systems to these 4 drives
3. Put all 16 750GB drives into the FreeNAS box on the new controller.
4. Build a RAID6 array from those 16 disks: 10.5TB of usable space (quick capacity check below).
5. Move the data from the 1.5TB drives to the new array.
6. Turn the 1.5TB drives into a new 4.5TB RAID5 array.
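Here's a quick sanity check on those usable-capacity figures - a minimal Python sketch, assuming the controller reserves nothing beyond the parity drives:

def usable_tb(drives, size_tb, parity_drives):
    # Usable capacity = (total drives - parity drives) * drive size
    return (drives - parity_drives) * size_tb

print(usable_tb(16, 0.75, 2))  # step 4: 16 x 750GB in RAID6 (2 parity) -> 10.5
print(usable_tb(4, 1.5, 1))    # step 6: 4 x 1.5TB in RAID5 (1 parity)  -> 4.5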
Good:
Cost for this solution (disk) would be $520, limited to the four 1.5TB drives.
Problems/concerns:
Moving the data will take forever, and during that time each 1.5TB drive is a single point of failure: if one fails, I lose 1.5TB of data. Possible solution: get five 1.5TB drives and build an array on the new card first. The issues with that are that my mainboard has two PCIe x16 slots but I don't know whether both will accept non-video cards (the old and the new controller), it costs an extra ~$130 for the fifth drive, and my case only has 20 hot-swap trays while that would be 21 drives total (16 750GB drives plus five 1.5TB drives).
Is there any reason I shouldn't create a 10.5TB array across 16 disks? It needs to be redundant since I don't do backups (a calculated risk), and with 16 drives the chance of another disk failing while rebuilding from a failed drive gets a lot larger. The risk I'm currently willing to accept is 4 disks in a RAID5 array, which is why I am switching to RAID6 (dual parity).
Another solution I can think of would be this:
1. Get 10 1.5TB drives
2. Copy the data from one of the 2TB arrays into whatever free space is available on the other three 2TB arrays (rough feasibility check after this list).
3. Pull those 4 disks out of the server.
4. Install the 10 1.5TB drives (now 10+8 drives in the 20-bay server).
5. Build a 12TB RAID6 array.
6. Copy the data from all the other arrays and NASs over to the new 12TB array.
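Before pulling the four disks in step 3, here's a rough way to check step 2 in Python - placeholder numbers only, substitute the real free-space figures from each box:

def step2_fits(free_tb_on_other_volumes, data_tb_on_array_being_emptied):
    # Step 2 only works if the emptied array's data fits into the
    # combined free space of the other three volumes.
    return free_tb_on_other_volumes >= data_tb_on_array_being_emptied

print(step2_fits(2.0, 1.8))   # assumed figures -> True, safe to pull the 4 disks
print((10 - 2) * 1.5)         # step 5: 10 x 1.5TB in RAID6 -> 12.0 TB usable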
Good:
Have new drives in the server, all the same size.
Problems/concerns:
Cost for this solution is $1,300 for the 10 1.5TB disks. Possible solution: sell the 16 750GB drives to recoup some of the outlay. However, they'll only fetch $40-$50 a disk after shipping and all that good stuff, and selling them takes time I don't have. That would put the net cost at around $600, plus 5-6 hours of my time to sell the disks.
By the way, if you want to calculate MTBF / total-loss rates for an array:
RAID0 - MTBF of a single disk / total disks
RAID5 - (MTBF of a single disk / total disks) divided by the chance of a second failure during the rebuild, i.e. rebuild time / (MTBF of a single disk / (total disks - 1))
RAID6 - losing one disk turns the array into a RAID5, so take the RAID5 failure chance and scale it by one more single-disk failure term against the MTBF of a single disk
So a 4-disk RAID5 of Seagate drives has an array MTBF of 750,000 hours / 4 = 187,500 hours, or about 7,812 days to lose a disk from the array. Assuming a 24-hour rebuild, the probability of total loss (assuming instant replacement) would be 0.0096%. That's high but acceptable, especially since it will actually take 5-7 days to get a replacement disk if you order one online, and the 24-hour rebuild time is best case. So over an 8-day exposure window, total failure has a 0.0768% chance with only 4 drives. Sixteen drives in the array raises that to 0.384%, a 5-fold increase in the odds. Sixteen drives in RAID6 would have a total-failure probability of 0.0000512% - back to safer levels, and 7,500 times less likely to lose everything than 16 drives in RAID5. Of course, this covers disk failure only, not the controller, memory, CPU, or anything else that can cause corruption, which only a backup can protect against.
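Here is the same RAID5 arithmetic as a small Python sketch, assuming a 750,000-hour per-disk MTBF and independent failures (the same simplification as the formulas above):

DISK_MTBF_H = 750_000.0

def raid5_loss_probability(total_disks, exposure_hours):
    # Chance that a second disk fails while the array is degraded;
    # exposure_hours runs from the first failure until the rebuild completes.
    return exposure_hours * (total_disks - 1) / DISK_MTBF_H

print(DISK_MTBF_H / 4 / 24)                          # ~7812 days to the first failure in a 4-disk array
print(f"{raid5_loss_probability(4, 24):.4%}")        # 0.0096%  (4 disks, 24h rebuild)
print(f"{raid5_loss_probability(4, 8 * 24):.4%}")    # 0.0768%  (4 disks, 8-day replace + rebuild window)
print(f"{raid5_loss_probability(16, 8 * 24):.4%}")   # 0.3840%  (16 disks, 8-day window)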
Whitepaper on RAID failure calculations: http://www.google.com/url?sa=t...LAsQ4qkAgomcB6HkZ0eIdw
What do you think about the process I have planned? Any reason I shouldn't create an array that big?