RAID5 with partitions? is it possible?

bobdazzla

Junior Member
Mar 14, 2006
7
0
0
Computer gurus:

Going to be setting up a software RAID5 system via linux. I have a number of different drives here with different capacities (3x200GB, 2x300GB), so I was wondering: is it possible to partition each of those drives into 100GB partitions, so that I won't have to shrink the 300GB drives down to match the 200GB drives?

I can think of two concerns off the top of my head...

1) If an entire drive dies using this method, it would wipe out 2-3 partitions in the process, so I wouldn't be able to recover at all. Or are entire-drive failures very rare, and is it much more likely that only one partition would fail at a time, giving me the opportunity to swap out the entire drive?

2) If a partition DOES fail... how would you replace the one partition? Swapping out a drive would mean, again, replacing 2-3 partitions in the process... Would I manually have to copy the remaining valid partitions on the failing drive to the new drive, THEN add this drive to the array to let auto-rebuild of the failed partition take place?

I think I might have just answered my own question, and that it can't be done, but if anybody can provide clarification I'd appreciate it.
 

TGS

Golden Member
May 3, 2005
1,849
0
0
Yes you can, but the smallest amount of space you designate on any disk is the largest amount you can stripe across all of the volumes. I.e., for your example of 100GB per disk, all disks will have 100GB allocated for the RAID 5. The remainder of the space can still be allocated as non-RAID storage, or as any number of other RAID volumes. Note, though, that in a high-I/O environment you will see contention for the disk when you read/write to either logical volume.


1) No, if the drive fails, all of its partitions will go tango-uniform. Logical corruption should only affect the single partition where the corruption occurred.

2) If a partition fails, that's pretty much it for the data unless you have recovery tools; again, this is logical corruption. If a disk fails, your volume should enter some sort of degraded state, but you will still be able to access the data. When you replace the failed drive, the new disk will rebuild from the remaining data and the RAID 5 parity. You should be able to access your data both in the degraded state and during the rebuild, and the rebuild will automatically restore your entire disk's worth of data.
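For reference, that drive-swap flow looks roughly like this with Linux's mdadm (a sketch only; the device names /dev/sda1 through /dev/sde1 are hypothetical 100GB partitions, one per physical disk):

```shell
# Create a 5-member RAID 5 from one 100GB partition per physical drive
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# If a whole drive (say sda) dies, the array runs degraded. After
# swapping the disk and recreating the partition, re-add it and
# mdadm rebuilds that member from data + parity automatically:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md0 --add /dev/sda1

# Watch the rebuild progress:
cat /proc/mdstat
```

These commands need real block devices and root, so treat them as a template rather than something to paste verbatim.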
 

bobdazzla

Junior Member
Mar 14, 2006
7
0
0
TGS, thanks for the detailed advice. I was wondering if I could get a little more clarification:

So what you are saying is: if I take the 5 drives (again, 3x200GB and 2x300GB) and make 10x100GB partitions from them, I can't add them all into one RAID array (and even if that WAS possible, I'd run into problems 1) and 2) from my original post)?

Using the 100GB partitions, I would have to make 3 arrays from the 5 drives:
array A: 400 GB (5x100GB partitions across the 5 disks minus 1x100GB parity)
array B: 400 GB (5x100GB partitions across the 5 disks minus 1x100GB parity)
array C: 100 GB (2x100GB partitions across the 2 300GB disks minus 1x100GB parity)

I think this is what you are saying, I just want to make sure.
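A quick sanity check of the arithmetic on those three arrays (RAID 5 usable space = number of members minus one, times the member size):

```shell
# Usable capacity per array, in GB
member_gb=100
a=$(( (5 - 1) * member_gb ))   # array A: 5 members across 5 disks
b=$(( (5 - 1) * member_gb ))   # array B: 5 members across 5 disks
c=$(( (2 - 1) * member_gb ))   # array C: 2 members on the 300GB disks
echo "A=${a}GB B=${b}GB C=${c}GB total=$(( a + b + c ))GB"
# → A=400GB B=400GB C=100GB total=900GB
```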

Thanks!


 

bobdazzla

Junior Member
Mar 14, 2006
7
0
0
Hi RebateMonger,
I would be doing this RAID 5 via software through Linux. Does this make either of the setups I described in the 2nd post of this thread possible?
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Yes, you would run into problems, because one physical failure (a single drive dying) would wipe out multiple partitions/members of the RAID.

You could make multiple RAID volumes, like you show above. One thing to remember with Linux is that you can mount partitions/RAID devices/LVMs wherever they are needed. Also, you cannot boot from a RAID 5 array, only RAID 1 or non-RAID. I would figure out what you need space for and get creative with mount points and LVM. I would make the 3x200GB drives a RAID 5 array, and break the 2x300GB drives into: a 10GB RAID 1 for /, 512MB-1GB on each for swap (don't RAID the swap), and the rest chunked out as needed for /var, /home (RAID 1, or maybe your RAID 5), etc.
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Also realize you can combine RAID arrays into logical volumes to maximize space. So have a RAID 5 (3x200GB) and a RAID 1 (2x300GB) and make an LVM of 700 gigs.
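A sketch of that combination with the LVM2 tools, assuming the RAID 5 came up as /dev/md0 and the RAID 1 as /dev/md1 (device, volume group, and logical volume names are all hypothetical):

```shell
# Mark each md array as an LVM physical volume
pvcreate /dev/md0 /dev/md1

# Pool both arrays into one volume group
vgcreate vg_storage /dev/md0 /dev/md1

# Carve a single logical volume spanning all free space
lvcreate -l 100%FREE -n data vg_storage

# Put a filesystem on it and mount it in one place
mkfs.ext3 /dev/vg_storage/data
mount /dev/vg_storage/data /mnt/data
```

Again, this needs real md devices and root; it's the shape of the setup, not a script to run as-is.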
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: nweaver
Also realize you can combine RAID arrays into logical volumes to maximize space. So have a RAID 5 (3x200GB) and a RAID 1 (2x300GB) and make an LVM of 700 gigs.

While this is true, that logical volume would have... odd... performance characteristics, since part of it would be a RAID 5 and part of it would be a RAID 1. :p But it would work fine if all you care about is having one big 700GB logical disk and performance is not an issue.

Using the 100GB partitions, I would have to make 3 arrays from the 5 drives:
array A: 400 GB (5x100GB partitions across the 5 disks minus 1x100GB parity)
array B: 400 GB (5x100GB partitions across the 5 disks minus 1x100GB parity)

You should be able to do this via software RAID (it's possible there are some hardware controllers that could handle something like this, but not any of the consumer ones I've seen). And in this case, a single drive failure will only take out one partition per array, so you'll still only lose data if you have two disks fail.

array C: 100 GB (2x100GB partitions across the 2 300GB disks minus 1x100GB parity)

That would normally be called a "RAID1" array. :p There's no point to doing RAID5 with two drives.
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
Sorry, but I don't know that much about Linux and its software arrays.

If possible, it'd seem like creating a RAID 1 boot array (using 100GB partitions from the two 300GB drives) and then creating a RAID 5 array (with 200GB partitions from each of the five drives) might make sense. You'd get your OS array (and incidental data) spread safely across two drives, and your RAID 5 array would be evenly spread across the five drives, which is how RAID 5 is intended to work.

You'd end up with a 100GB RAID 1 boot array and an 800GB RAID 5 data array.
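The arithmetic behind those figures (RAID 1 usable space = one member's size, since the other is a mirror; RAID 5 usable space = members minus one, times member size):

```shell
raid1_gb=100                     # mirror of two 100GB partitions -> 100GB usable
raid5_gb=$(( (5 - 1) * 200 ))    # five 200GB partitions, one's worth lost to parity
echo "boot=${raid1_gb}GB data=${raid5_gb}GB"
# → boot=100GB data=800GB
```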
 

bobdazzla

Junior Member
Mar 14, 2006
7
0
0
Thank you all for the advice.

I think I'm going to go with the three-arrays option I described in the 2nd post, and then add them all into one LVM like nweaver suggested (I'm very glad you pointed that out; I wasn't aware Linux could do that). I'll drop a little 40GB drive into the system to boot from.

I am planning on making this system into a file server for my MP3s, to stream over a network, so throughput and performance aren't too important; I just want a lot of space with convenient access (which is why the LVM is such a tantalizing solution... I don't want to have to break up my music over 3 separate arrays. Being able to address them all under one drive letter sounds much easier).

Thanks again!