RAID-5 Under Windows XP Pro?

jcony

Junior Member
Jul 6, 2004
8
0
0
I have six 250GB drives that I want to put in RAID-5 for redundancy. Hardware RAID is out of the question - too much $$$ - which leads to software RAID. Microsoft states that under Windows 2000 Professional or any flavor of XP, software RAID-5 is not supported; only the server versions can do it. BUT, I looked around and found a way to modify Windows XP/2000 to enable software RAID-5. It is a simple procedure that I found:

HERE

Anyhow, I attempted this method. I didn't run into any issues, but the option to use RAID-5 never appeared for me. Has anyone gotten this method to work? Any advice, other ways to do the modification, or other possible solutions?

P.S. I would consider using the server version as my OS to get RAID-5 support, but that doesn't go well with the gaming side of what my machine is used for. FPS performance takes a huge hit.
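
For reference, here is the rough capacity math on an array like this, sketched in plain Python. Nothing in it is specific to the XP hack or to any particular controller; it's just the standard RAID-5 arithmetic (one drive's worth of space goes to parity).

def raid5_capacity_gb(num_drives, drive_gb, hot_spares=0):
    """Usable space in a RAID-5 set: one drive-equivalent is spent on parity,
    and any hot spares sit idle until a rebuild is needed."""
    data_drives = num_drives - hot_spares - 1
    if data_drives < 2:
        raise ValueError("RAID-5 needs at least 3 active drives")
    return data_drives * drive_gb

print(raid5_capacity_gb(6, 250))                # 1250 GB usable, 250 GB "lost" to parity
print(raid5_capacity_gb(6, 250, hot_spares=1))  # 1000 GB usable with one drive kept as a spare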
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
I've never tried that, but I just thought I'd inform you that disk performance, write performance especially, will likely be horrible with software RAID-5.
 

horhey

Member
Dec 23, 2003
102
0
0
Originally posted by: Sunner
I've never tried that, but I just thought I'd inform you that disk performance, write performance especially, will likely be horrible with software RAID-5.

I agree with Sunner. I would definitely look into the hardware side, like this card review.
 

BatmanNate

Lifer
Jul 12, 2000
12,444
2
81
Originally posted by: horhey
Originally posted by: Sunner
I've never tried that, but I just thought I'd inform you that disk performance, write performance especially, will likely be horrible with software RAID-5.

I agree with Sunner. I would definitely look into the hardware side, like this card review.


I have this card and can vouch for it being truly excellent. :)
 

jcony

Junior Member
Jul 6, 2004
8
0
0
Thanks for the advice. $250 is a bit high. I suppose that since I've gone this far, if it's the only option, I'll have to go with it.

However, is there any reason I shouldn't go with a 4-channel RAID-5 adapter (4 channels x 2 drives = 8 drives max)? Will this not work, or is it just not recommended? Why?

For example:
HighPoint RocketRAID 454 RAID-5 host adapter (retail)

TIA
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
Considering what you've dropped on drives, $250 really isn't all that much...

If that's too much, then yes, the 4-channel RocketRAID card would be a good alternative.

Even under Windows Server I generally try to avoid software RAID, simply because the performance is by nature pitiful. You've got yourself a nice set of drives there; get a controller that will give them their due.

-Erik
 

jcony

Junior Member
Jul 6, 2004
8
0
0
Well, my big question is: what am I NOT getting with the 4-channel one? Will there be any performance gain/loss? Will my data be better protected? At first glance, I don't see a reason to go with the 6-channel over the 4. This is a one-shot deal, so I want to make sure it goes down right ;o. But if there is no gain from the more expensive one, I don't know why I should go that route.

TIA
 

Pandamonium

Golden Member
Aug 19, 2001
1,628
0
76
The 4-channel alternative you're considering is likely not a true hardware RAID card. While not scientific, every true hardware RAID card I've seen is fairly large. The more scientific difference is that the $91 alternative will force you to put 4 drives on 2 channels, whereas the Promise controller will allow one channel per drive. Also, IIRC, that Promise card is something like 64-bit/66MHz instead of 32-bit/33MHz, which gives it much higher headroom for theoretical throughput.
 

Bookmage

Member
Feb 19, 2002
176
0
0
The RocketRAID is only software RAID and offers slightly better performance than Windows RAID. If you want hardware RAID, go 3ware, or Promise if you must. I run 6 x 200GB on my Promise SX6000 and it's been great for the most part. Software RAID-5 under Windows should not really be considered. I would recommend a 3ware 8-port controller. It is pricey, but well worth the cost if you plan on doing any type of RAID-5. The alternative is to set up RAID-5 under Windows and see how bad it is. Or it may just be all that you need...
 

jose

Platinum Member
Oct 11, 1999
2,079
2
81
Like everyone stated, go with a hardware-based solution if you're doing RAID-5. I prefer 3ware because of support/compatibility across different OSes.
Getting more channels (4 vs. 6 vs. 8, etc.) allows you to have additional hot spares. I always have at least one spare.

What are you going to do with a RAID-5 array? Business? If you need RAID-5, then get a true RAID-5 controller; otherwise just set up your drives as regular individual drives.

Regards,
Jose
 

redbeard1

Diamond Member
Dec 12, 2001
3,006
0
0
I followed a similar (or maybe the same) guide a while ago and was able to build a software RAID-5 using 3 SCSI drives in XP. What I did notice is that XP now takes 5 to 10 minutes to boot, basically like a server. Even after I removed the array it still takes forever to boot.
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
Originally posted by: Sunner
I've never tried that, but I just thought I'd inform you that disk performance, write performance especially, will likely be horrible with software RAID-5.

I dunno about Windows software RAID 5, but in Linux, software RAID 5 performance is very good. CPU utilization is high, but that will happen with any software RAID solution. Most of the low-cost cards out there are really just a translation layer that lets software RAID 5 be done without an OS anyway, so I don't think you'd see a huge performance advantage in going to a POS RAID card, unless the Windows software RAID 5 really sucks, and I kind of doubt that. Some of the mid-priced RAID 5 cards have a hardware parity calculator to offload some CPU usage, but everything else is software (the Promise SXx000 cards are an example of this).

With my Escalade, my write performance is MUCH MUCH better in software RAID 5 than in hardware RAID 5. Like 100+ MB/sec vs. less than 50. (Yes, I did say that correctly: SOFTWARE RAID 5 write performance is faster. I think this is more an issue with the 3ware cards than anything else; it seems the 4-channel cards especially have slow RAID 5 write performance.) Read performance is similar; I see sustained read throughput of ~140 MB/sec or so either way. CPU utilization is, of course, much lower when using hardware RAID 5.

As far as the 4-channel cards go, I think they're only built to run one drive per channel, so running 8 drives isn't really an option. The issue is that the IDE interface is not built to deal with multiple drives on the same channel being accessed at exactly the same time; if one is being accessed, the other cannot be. This obviously creates significant performance issues.
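
For anyone wondering what that parity calculation actually involves: RAID-5 parity is a byte-wise XOR across the stripe, and a small write turns into two extra reads plus two writes. Here's a rough Python sketch of the math (purely illustrative, not how any particular driver or card implements it):

def xor_blocks(a, b):
    """Byte-wise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

def stripe_parity(chunks):
    """Parity of a full stripe is just the XOR of all its data chunks."""
    parity = chunks[0]
    for chunk in chunks[1:]:
        parity = xor_blocks(parity, chunk)
    return parity

# Full-stripe write: parity comes straight from the new data (the cheap case).
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD", b"EEEE"]   # 5 data chunks, e.g. a 6-drive array
parity = stripe_parity(data)

# Small write (the expensive case): new_parity = old_parity XOR old_chunk XOR new_chunk.
# That means 2 reads (old chunk, old parity) + 2 writes (new chunk, new parity) per small write,
# which is why RAID-5 writes suffer so much more than reads, hardware or software.
old_chunk, new_chunk = data[2], b"XXXX"
new_parity = xor_blocks(xor_blocks(parity, old_chunk), new_chunk)
data[2] = new_chunk
assert stripe_parity(data) == new_parity       # sanity check against a full recompute

# Degraded read / rebuild: XOR of the survivors regenerates the missing chunk.
missing = 1
survivors = [c for i, c in enumerate(data) if i != missing] + [new_parity]
assert stripe_parity(survivors) == data[missing]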
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Ya.

Software RAID is fine (in a lot of cases it's faster than higher-end hardware RAID cards, or so I've read) unless you're going to be using the box for gaming or some other CPU-intensive setup. It will eat CPU cycles.

It was an issue when computers were 400MHz, but if you have a 1.5GHz machine you have plenty of spare CPU cycles to burn 99.8% of the time.

If you have high CPU needs, probably the best option would be a dual-CPU box and a software RAID setup (like a Mac).

For best performance, go buy two PCI IDE controller cards with two channels each, then run each IDE drive on its own channel so you avoid any master/slave conflicts. That way you can run 4 drives in RAID-5 plus one spare, so if one of your drives fails it can switch over.


But beyond that you may run into problems. Our outdated x86 32-bit/33MHz PCI bus will max out at 127MB/s or so (of course this wouldn't be an issue if you had a Mac or a PCI-X machine or whatnot, but that's still expensive). The problem as I see it is that with all the software RAID going on, you're going to be moving lots of bits around during high-speed file transfers and such. Each drive can do an average maximum of 40-50MB/s, so it probably wouldn't take much more to have your PCI bus completely saturated.
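
To put rough numbers on that (ballpark theoretical peaks only; real-world PCI efficiency is a fair bit lower, and the per-drive figure is just the 40-50MB/s estimate above):

def pci_peak_mb_s(bus_bits, clock_mhz):
    """Theoretical peak of a parallel PCI bus: width in bytes times clock rate."""
    return (bus_bits / 8) * clock_mhz

plain_pci = pci_peak_mb_s(32, 33.33)   # ~133 MB/s theoretical; ~100-127 MB/s in practice
wide_pci  = pci_peak_mb_s(64, 66.66)   # ~533 MB/s for a 64-bit/66MHz slot

drives = 6
per_drive = 45                          # midpoint of the 40-50 MB/s per-drive figure
aggregate = drives * per_drive          # 270 MB/s of raw disk bandwidth

print(f"32-bit/33MHz PCI peak: {plain_pci:.0f} MB/s")
print(f"64-bit/66MHz peak:     {wide_pci:.0f} MB/s")
print(f"{drives} drives streaming:    {aggregate} MB/s")
# 270 MB/s of disk traffic easily swamps plain PCI, which is the argument for
# keeping the parity work (and its extra reads/writes) on the card itself.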

Which is probably why you still want to have hardware RAID. All that bit shifting and whatnot is kept inside the card itself and doesn't go over your PCI bus; only the information being transferred to the different parts of your computer does.

So unless you're setting up your machine as a dedicated file server, hardware RAID will probably still be best. Software RAID wouldn't be too hot on a gaming machine, or at least not optimal, since you want all the CPU cycles you can get.

Or something like that. Not too sure; I've never run software RAID myself, although I probably will in the near future. Those hard drives are getting damn cheap.

edit:

Plus I have no clue if WinXP even supports this stuff.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: Concillian
Originally posted by: Sunner
I've never tried that, but I just thought I'd inform you that disk performance, write performance especially, will likely be horrible with software RAID-5.

I dunno about Windows software RAID 5, but in Linux, software RAID 5 performance is very good. CPU utilization is high, but that will happen with any software RAID solution. Most of the low-cost cards out there are really just a translation layer that lets software RAID 5 be done without an OS anyway, so I don't think you'd see a huge performance advantage in going to a POS RAID card, unless the Windows software RAID 5 really sucks, and I kind of doubt that. Some of the mid-priced RAID 5 cards have a hardware parity calculator to offload some CPU usage, but everything else is software (the Promise SXx000 cards are an example of this).

With my Escalade, my write performance is MUCH MUCH better in software RAID 5 than in hardware RAID 5. Like 100+ MB/sec vs. less than 50. (Yes, I did say that correctly: SOFTWARE RAID 5 write performance is faster. I think this is more an issue with the 3ware cards than anything else; it seems the 4-channel cards especially have slow RAID 5 write performance.) Read performance is similar; I see sustained read throughput of ~140 MB/sec or so either way. CPU utilization is, of course, much lower when using hardware RAID 5.

As far as the 4-channel cards go, I think they're only built to run one drive per channel, so running 8 drives isn't really an option. The issue is that the IDE interface is not built to deal with multiple drives on the same channel being accessed at exactly the same time; if one is being accessed, the other cannot be. This obviously creates significant performance issues.

Have you tried doing a complete Bonnie++ run on that array?
I've used software RAID under Windows, Linux, and Solaris, and I've never seen a software setup that could even touch any decent hardware setup.

I had a box with 12 x 18GB 10K RPM disks in a RAID-5; it had a hard time breaking 5 MB/Sec using software RAID-5, but could do upwards of 75 MB/Sec using a hardware controller.
And yes, those were old drives, so that's why even the "high" 75 MB/Sec seems low :)
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
What I quoted were bonnie++ benchmarks of read and write performance. The drives were 4x Seagate 7200.7. I believe I used a bonnie++ filesize of either 20x or 40x RAM size, which would've put it at 5-10GB (256MB RAM). I was mostly checking filesystem performance, and tried out jfs, xfs, reiserfs and ext3. XFS with an external journal was the winner when looking at a balance of large and mid-size reads, with some importance on write performance (XFS was #1 in reads at those sizes and #2 in write performance).

Card was in a 64 bit / 66MHz PCI slot, so PCI bus throughput was not a limitation.

I'll see if I still have the results somewhere, but I think I was using the PostIt filing system at the time, and I'm reasonably certain I threw them out when I moved.

I ended up with hardware RAID 5 because of the CPU usage, but I remember that the performance of the software RAID 5 was not far off, definitely over 100MB/sec read performance.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: Concillian
What I quoted were bonnie++ benchmarks of read and write performance. The drives were 4x Seagate 7200.7. I believe I used a bonnie++ filesize of either 20x or 40x RAM size, which would've put it at 5-10GB (256MB RAM). I was mostly checking filesystem performance, and tried out jfs, xfs, reiserfs and ext3. XFS with an external journal was the winner when looking at a balance of large and mid-size reads, with some importance on write performance (XFS was #1 in reads at those sizes and #2 in write performance).

Card was in a 64 bit / 66MHz PCI slot, so PCI bus throughput was not a limitation.

I'll see if I still have the results somewhere, but I think I was using the PostIt filing system at the time, and I'm reasonably certain I threw them out when I moved.

I ended up with hardware RAID 5 because of the CPU usage, but I remember that the performance of the software RAID 5 was not far off, definitely over 100MB/sec read performance.

Yeah, I've never had a problem with read performance, which of course makes perfect sense :)
Write performance has been absolutely abysmal on every software RAID-5 implementation I've ever tried though.
The most extreme example would be a poor Ultra Enterprise 250 with a 400 MHz US-II.
In a best-case scenario, it could sustain about 1.6 MB/Sec write using 12 x 10K RPM SCSI drives.

Of course that's an old box, very slow by today's standards, but still, it's not exactly a 486 ;)