
need raid 5 explanation

pinhead

Senior member
if you had 3 36gb drives in a raid 5 configuration, with an 8gb utility partition which is c:, what should the space of d: be if it's the only other drive partitioned? I have a dell server with that scenario and my thinking is it should be the 36 minus the 8 which would be around 27-28 gb's. It shows d: as having about 60 gb's of space which means I don't understand raid 5 or they didn't configure it right. I'm guessing the former. Thanks.
 
Originally posted by: pinhead
if you had 3 36gb drives in a raid 5 configuration, with an 8gb utility partition which is c:, what should the space of d: be if it's the only other drive partitioned? I have a dell server with that scenario and my thinking is it should be the 36 minus the 8 which would be around 27-28 gb's. It shows d: as having about 60 gb's of space which means I don't understand raid 5 or they didn't configure it right. I'm guessing the former. Thanks.

After creating a RAID 5 container you would start with 72 GB. Subtract 8 GB for the C: partition, which would leave you with 64 GB. Of course these numbers will vary slightly depending on how the drives are formatted (NTFS vs. FAT, etc.). The total space of a RAID 5 container can be computed with the following formula:
(N-1) x M GB = Total GB

N = Total number of hard disk in RAID array (in your case 3 HDD)
M = Total disk space on 1 hard disk (in your case 36 GB)

(3-1) x 36 GB = 72 GB
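That formula is simple enough to sketch in a few lines of Python (the function name is just for illustration, not from any real RAID tool):

```python
def raid5_capacity_gb(num_disks: int, disk_size_gb: float) -> float:
    """Usable RAID 5 capacity: one disk's worth of space goes to parity."""
    if num_disks < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (num_disks - 1) * disk_size_gb

total = raid5_capacity_gb(3, 36)   # 3 x 36 GB drives -> 72 GB usable
d_drive = total - 8                # minus the 8 GB C: utility partition
print(total, d_drive)              # 72 64
```

Which matches the numbers above: 72 GB total, roughly 64 GB left for D: after the utility partition.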

techfuzz
 
Starting to make sense. I've been looking around a little, and my understanding is that part of the space goes to parity, which allows a disk to go down without bringing the whole array down. With it set up like that, you can just swap the drive and it will rebuild the data on it, correct? Thanks!
 
Correct. The equivalent of one drive's worth of space contains the parity. The parity space is distributed (striped) along with the data.
 
The parity is like a mathematical equation. It's kinda like solving for "x" if a drive dies. A simple example:

2 + 5 = 7
2 + x = 7

Hardware RAID is the preferred method since there is a dedicated processor on board that handles the calculations for striping and parity. OSes like Windows (Server editions) and various *nix also allow for software RAID, which basically offloads the work onto your computer's CPU. That's usually a waste, since your expensive processors should be doing meaningful work, not crunching parity like a lowly IDE or IDE RAID adapter such as your conventional Promise or Highpoint cards. Recently hardware ATA RAID adapters have become popular, and companies like 3Ware and Adaptec have released several versions. Below is an example of a hardware SCSI and a hardware ATA RAID adapter (0, 1, 3, 5, 0+1):
Mylex Acceleraid 352 Dual Channel U160 SCSI w/ 64MB ECC PC100
Adaptec 2400A ATA100 IDE w/ 128MB ECC PC100

RAID5 has better read speeds in the event of a disk failure compared to RAID3 which keeps parity on a dedicated drive. Suppose we have the following data: ABCDEFGHIJKL

RAID3 on a 4 drive array

A B C X
D E F X
G H I X
J K L X

Suppose drive 3 dies and "CFIL" are now lost. The controller will need to do extra work on every read to rebuild the data from the lost drive using the parity. Again, kinda like solving for "x" in an algebra problem.

RAID5 can potentially mitigate that problem because the parity is spread across all the drives in the array.

RAID5 on a 4 drive array

A B C X
D E X F
G X H I
X J K L

Now suppose drive 3 dies again but this time "CXHK" is lost. Reading "ABX" will result in number crunching to get back "C" but on the next read, all the data is intact. "DEF" can be read just fine and the parity is a non-issue.
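You can simulate that degraded-read difference in a few lines. This is just a toy model of the tables above (drives 0-indexed here, XOR math omitted): a stripe only needs reconstruction if the dead drive held data for it, not parity.

```python
def stripes_needing_rebuild(parity_positions, dead_drive):
    """Count stripes that need parity math when `dead_drive` is gone.
    parity_positions[i] = which drive holds the parity for stripe i.
    If the dead drive only held a stripe's parity, that stripe reads clean."""
    return sum(1 for p in parity_positions if p != dead_drive)

# 4-drive, 4-stripe arrays matching the tables above; drive index 2
# ("drive 3") dies in both cases.
raid3 = [3, 3, 3, 3]   # RAID 3: parity always on the dedicated last drive
raid5 = [3, 2, 1, 0]   # RAID 5: parity rotates across the drives
print(stripes_needing_rebuild(raid3, 2))  # 4 -- every read hits the math
print(stripes_needing_rebuild(raid5, 2))  # 3 -- stripe 2 lost only parity
```

So even in this tiny example RAID 5 dodges the reconstruction cost on one of the four stripes, and the advantage grows with more drives since each drive carries proportionally less data.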

Next we have the issue of rebuilds. Rebuilding a critical array again requires lots of number crunching. Usually you set the processor on the RAID adapter to split its available CPU cycles between rebuilding the array and handling read/write requests, depending on how much load you expect.

Other considerations include "hot spares," which are extra drives that are running but do nothing until a drive fails. When a drive fails, the rebuild automatically begins on the hot spare. This protects against the unlikely event of multiple drive failures and gets critical arrays rebuilt ASAP. It also saves you from having to get out of bed at 3AM to make sure a drive is swapped in.

That is my crappy 1:30AM explanation of RAID5

Windogg
 