Hardware v. Software RAID

mmaki

Member
Dec 27, 2000
77
0
0
I'm contemplating the purchase of a new file server for our office. I have about 100 users. I plan on getting a box with 3x34GB SCSI drives and 3 more empty bays for expansion, probably somewhere around a 900MHz processor and 512MB RAM. The primary purpose of this server is to serve files and maybe a few printers; that's about it. It will not run any apps. My network is switched 100Mb. I'd like to install Linux. I have it up and running on another machine serving some apps and other files: Red Hat 7.0 with Linux software RAID 0 installed.

My question is: in this configuration, is there an advantage to hardware RAID over software RAID? Please comment on the amount of RAM and processor speed too.

Thanks!
 

rlism

Golden Member
Feb 1, 2001
1,461
0
0
Hardware RAID has much less overhead. Software RAID tends to occupy your memory and hog your CPU cycles. Especially since this is a file server, I'm guessing there is going to be a lot of disk activity; that will definitely hinder the performance of the machine because of the CPU work required to do the RAID'ing. Software RAID is probably a bad solution for a file server.

I don't know how important your company's files are, but I certainly wouldn't want to risk my butt with RAID 0. I hope you have some other backup system working. If performance, downtime, and fault tolerance are concerns, I would suggest going with a hardware solution, and not using RAID 0.

edit:
Actually, I wouldn't use software RAID 0 even if fault tolerance weren't a factor. Would it even give you a performance boost? Anyone have any experience with this? You're probably better off without RAID than using software RAID 0.
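If anyone wants to actually measure it, a quick-and-dirty check (assuming the array shows up as /dev/md0) would be something like:

    # compare raw read throughput of the stripe vs. one member disk
    hdparm -tT /dev/md0
    hdparm -tT /dev/sda

If the striped device isn't clearly faster than a single member, the software RAID 0 isn't buying you anything.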
 

holden j caufield

Diamond Member
Dec 30, 1999
6,324
10
81
I've used software RAID on our Windows 2000 server. It's OK, not great or anything, but we never really have 100 users logged on and using it at one time, so software is fine for us. Yeah, RAID 0 won't let you sleep too well; with 3x34GB SCSI you're a good candidate for RAID 5, software or hardware. CPU usage on our dual-CPU box isn't too bad; utilization is not as high as some would think. Good luck. :)
 

Zach

Diamond Member
Oct 11, 1999
3,400
1
81
I've run software RAID in Red Hat 7 and seen no CPU hit, but no real performance gain either. One of the drives is pretty damn old though, which could be the bottleneck.

Not tested under load, however.
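For reference, the setup with the stock raidtools on Red Hat 7 looks roughly like this; device names here are just examples, adjust for your disks:

    # /etc/raidtab -- two-disk software RAID 0 stripe
    raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            32
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1

    mkraid /dev/md0          # build the array described in /etc/raidtab
    mke2fs /dev/md0          # create an ext2 filesystem on it
    mount /dev/md0 /data     # mount it wherever you like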

Anyway, I'd get a Mylex card if I were you. I've heard good things about them, and for an established business it's not enough money to worry about. AMI has been suggested too. If you can't get a hardware card you could always just use the drives as-is, with no RAID. RAID 0 will risk your data, RAID 5 will cost you one drive's worth of capacity (34GB here) and might slow you down; the middle option might be no RAID at all.
 

mmaki

Member
Dec 27, 2000
77
0
0
Thanks for all the comments.

Doesn't RAID 0 offer a bit of a performance advantage for reads and/or writes? I also like having one large virtual partition. I have a good backup system (a DDS-4 autoloader), and I can afford about a day to reinstall if I should ever get a failure. Does anyone have any experience replacing a failed drive in a RAID 5 config? How did it go?

Thanks again!
 

Jhereg

Senior member
Jan 23, 2000
260
0
0
I don't know why you would want to run RAID 0 on a production server; RAID 5 is much more prudent.
Changing a drive on a RAID 5 system is easy: just plug in the new drive and the controller will automatically rebuild it, reconstructing the data from the parity info. The process can take a while depending on the size of the drives, the amount of data, and CPU speed.
 

LeeBear

Member
Jan 23, 2001
51
0
0
RAID 0 will give you higher sustained transfer rates during reads and writes because each drive only has to read/write a portion of the data. The downside is that your access time is usually a bit slower (not by much) because the array is only as fast as the slowest drive; even if all the drives are the same make and model, there are still slight performance differences between them. From your description of this server's purpose, mainly as a file and print server, I think you'd be better off without the striping. Especially since you're only using a 100Mbit connection, you'll never reach the maximum transfer rate of even one of the drives in non-RAID 0 mode.
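To put rough numbers on it:

    100 Mbit/s ÷ 8 bits/byte = 12.5 MB/s theoretical wire speed
    minus Ethernet/TCP overhead  ->  maybe 8-11 MB/s in practice

A single current 10,000 RPM SCSI drive can sustain somewhere on the order of 20-30 MB/s, so the network saturates long before even one drive does, never mind a stripe set.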

-LeeBear
 

Zach

Diamond Member
Oct 11, 1999
3,400
1
81
I did it (replacing a RAID 5 drive) in Red Hat Linux, with software RAID. Not bad. I wanted to see what would happen, so I unplugged the drive while the system was running. The SCSI bus reset, but no data was lost. Within a few seconds, the system printed that there were problems with that drive. I plugged it back in, ran something like raidhotadd (the raidtools hot-add command), and it came back online. It took about 6 hours to re-sync though; in RAID 5 the system has to sync the parity with the drives' contents.
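If anyone wants the actual commands, it was along these lines with raidtools (device names from memory, so don't take them literally):

    cat /proc/mdstat                   # shows the array state and the failed member
    raidhotremove /dev/md0 /dev/sdb1   # drop the dead disk from the array
    raidhotadd /dev/md0 /dev/sdb1      # add the replacement; the resync kicks off
    cat /proc/mdstat                   # watch the resync progress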

And both RAID 5 and RAID 0 offer a performance benefit, more so with RAID 0, since there's no parity overhead. RAID 5 can slow things down a lot too, if you have a poor RAID controller or if the software RAID isn't up to par. Personally, I've backed up data with large IDE hard drives. 45-60GB 5400RPM IDE drives cost a lot less than tape, are faster, and are easier for most people to deal with. You just have to figure out how you want to go about backing up; ghosting maybe? I'm doing a cp -a via cron in Linux, since using tar makes a file that's too big for the filesystem.
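The cron entry itself is nothing fancy; something like this in /etc/crontab, assuming the backup drive is mounted at /backup:

    # nightly copy at 2am, preserving permissions, ownership, and symlinks
    0 2 * * * root cp -a /home /backup/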
 

esung

Golden Member
Oct 13, 1999
1,063
0
0
RAID 5 offers not just performance but fault tolerance as well, whereas RAID 0 offers the best performance with no safety margin. And the risk factor is actually 3x that of JBOD, since any one drive failure will cause data loss across the whole subsystem.
 

LordOfAll

Senior member
Nov 24, 1999
838
0
0
Hmmm where to start.

First, I would never run RAID 0 on a server. Every drive you add to the stripe multiplies the chances of a catastrophic failure, so with three drives you've tripled them.

Second, you left out the option of a RAID 0+1 or 1+0 setup. For the cost of some extra drives you never go down due to a single drive failure. This can really make your life easier, and you still get the performance of the RAID 0 setup.

Third, striping-with-parity setups (RAID 3 through, er, 7 I think) are a good choice if and only if you have a good hardware setup with a caching subsystem. Otherwise you won't get performance even equivalent to one drive under most circumstances. Also, this type is good for reads but not so hot at writes, so it depends on what you use it for.

Fourth, hardware RAID will let you run a hot-swap or hot-spare setup, provided you use a backplane. This may or may not be important to you.

The guys over at Storage Review have a nice lil primer on RAID, if you are interested.

Storage review

Edit: fixed some spelling
 

Zach

Diamond Member
Oct 11, 1999
3,400
1
81
Lots of hardware RAID cards do hot spare and hot swap; maybe not some of the IDE ones, but it's a common enough feature.
 

mmaki

Member
Dec 27, 2000
77
0
0
Thanks for all the great replies and the link to StorageReview.com. I'm definitely leaning towards hardware RAID 5. I have a question out of curiosity though: is there any way in Linux, without RAID, to make multiple disks appear as one large disk? I guess, in other words, without RAID each disk needs its own partition, right?
 

esung

Golden Member
Oct 13, 1999
1,063
0
0
Zach: JBOD means Just a Bunch Of Disks, which means the drives are not in a RAID configuration... how can one drive failure cause data loss on the others? (The disks are totally unrelated.)
 

Zach

Diamond Member
Oct 11, 1999
3,400
1
81
But the data is still spanned, and we don't know how the RAID card keeps track of the disks. If you killed half of your hard drive, most of the data might be fine, but what if it damages the partition? If the FAT is gone, no more files. Likewise for JBOD: since we don't know how it keeps track of which drive is first, there's no way to know whether losing one drive could kill all the data.
 

Zach

Diamond Member
Oct 11, 1999
3,400
1
81
In Linux, a common way to "partition" is to mount drive partitions as directories. There's no C:, D:, and E:; you have some drive holding / (the root directory), and you can mount other drives as the other directories (the same is now possible in Win2K, where your Program Files directory could be a second hard drive). One drive could have the /boot partition to hold the boot files, along with maybe /root for the root user's home directory and the swap partitions (Linux doesn't often use swap files). Other drives could be /home, /usr, /var, /etc and so on.

It makes everything look like one solid filesystem, until you run 'df' to check free disk space and see all the drive references, like /dev/sda1 (on the first SCSI disk) being / and /dev/hda1 (on the first IDE disk) being /home. Also makes for a good bad pun at the end... etc.
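In /etc/fstab it ends up looking something like this (devices purely for illustration):

    /dev/sda1   /       ext2   defaults   1 1
    /dev/sda2   swap    swap   defaults   0 0
    /dev/sdb1   /home   ext2   defaults   1 2
    /dev/sdc1   /var    ext2   defaults   1 2

Each line maps a partition to a directory, and the whole thing still reads as one tree.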
 

esung

Golden Member
Oct 13, 1999
1,063
0
0
JBOD: I agree that the implementation of the spanning matters, but from what I saw, the software put a full copy of the FAT on each drive to avoid the scenario where the partition/file record is damaged. But again, this is completely situation dependent.
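For what it's worth, the Linux md driver can also do this kind of spanning itself: "linear" mode just concatenates disks end-to-end with no striping or parity, which gets you one big virtual disk without "real" RAID. A raidtab sketch (example devices again):

    raiddev /dev/md1
        raid-level            linear
        nr-raid-disks         2
        persistent-superblock 1
        device                /dev/sdb1
        raid-disk             0
        device                /dev/sdc1
        raid-disk             1

Same caveat as any spanning setup though: lose one disk and at least everything that lived on it is gone.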