Software SCSI RAID in Linux?

Louie1961a

Member
Sep 19, 2001
146
0
0
I was given a bunch of old SCSI hard drives for free out of a surplus server from work. I have two 4.3GB IBM Ultrastar 2ES U/W drives and two Quantum 22752 2.25GB W drives. Yesterday I picked up an Adaptec 2940 U/W controller for $20. Everything works fine, and I formatted all the disks to wipe them clean. My question is, can I use any or all of these drives and this adapter to set up software RAID in Linux? If so, will it give me any speed advantage over my Maxtor DiamondMax 40 Plus ATA hard drive?
 

bevancoleman

Golden Member
Jun 24, 2001
1,080
0
0
I don't think they will be faster than your 40GB; they are fairly old drives.

Linux will support the SCSI card, though I'm not sure about S/W RAID. I don't recommend software RAID anyway; it's not as reliable as HW, and if something does go wrong it makes it heaps harder to get functional again. However, I would check the search engine at linuxdoc.org for software RAID and see what comes up.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
The controller and drives should work with no problems. Read up on md and LVM, the two types of Linux software RAID, or the Software-RAID-HOWTO. Linux software RAID supports hot spares, RAID 5, etc., almost everything hardware RAID does, and RAID 0 will get pretty much the full speed out of every drive, so depending on how fast the drives are, they may be faster than your IDE disk.
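For a rough idea, a RAID 0 /etc/raidtab for the two Ultrastars might look something like this (the /dev/sd* names are just a guess; check dmesg for what your system actually calls them):

    raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              32
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1

Then mkraid /dev/md0 builds the array and mke2fs /dev/md0 puts a filesystem on it. Set the partition type to fd (Linux raid autodetect) if you want the kernel to start the array by itself at boot.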
 

Buddha Bart

Diamond Member
Oct 11, 1999
3,064
0
0
Can you boot off a software RAID setup? In theory it's impossible, but is there any hackjob solution?

bart
 

Louie1961a

Member
Sep 19, 2001
146
0
0
Here was my thought. I have the Maxtor IDE drive and the two SCSI Ultrastars at 4.3GB each. Since I only use a 3GB "/" partition, I figured I could create the RAID 0 array and partition it into my swap and "/" partitions, hopefully picking up some speed in the process. I could always leave my boot partition on the IDE drive.

If this really won't get me any speed, I may just use the drives for extra space, or perhaps partition each one out for a different OS, say put LFS on one and a BSD on another. I may set up the RAID just for the experience as well. This is not a mission-critical machine, just one of my toys that I experiment on.

I always read that SCSI had lower overhead, allowed for command queueing in Linux, etc., so even though these drives are older, I thought that a RAID 0 array might get me something. Is there a good benchmarking tool for Linux to test the speed of the setup versus the IDE drive? How would hdparm -t work? Is that a reasonable measure?
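What I had in mind was something along these lines, md0 being wherever the array ends up:

    hdparm -t /dev/md0
    hdparm -t /dev/hda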

 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
hdparm isn't very good; for real disk performance you want to run something like bonnie++ or dbench.
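For example (the mount point is just a placeholder, and the -s size should be a couple of times your RAM so the cache doesn't skew the numbers):

    bonnie++ -d /mnt/raid -s 512 -u nobody
    dbench 10

dbench's argument is the number of simulated clients.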
 

Zach

Diamond Member
Oct 11, 1999
3,400
1
81


<< Can you boot off a software RAID setup? In theory it's impossible, but is there any hackjob solution?

bart
>>



It's supported out of the box; there's no problem in theory. The easiest way to do it is to set up the RAID partition while you install. You put LILO in /dev/md0. md0 is multi-device 0, as in RAID volume.
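In lilo.conf it comes out something like this (device names are just examples, and note the array holding the kernel has to be RAID 1, since each half of a mirror looks like an ordinary filesystem to LILO, which isn't true of a stripe):

    boot=/dev/md0
    root=/dev/md1
    image=/boot/vmlinuz
        label=linux
        read-only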

Louie1961a: I was playing with Mandrake once and used 4 different-sized hard drives. I made them all RAID partitions and built a RAID 0 array from them all. No wasted space; it works. The software RAID driver is very intelligent! The small drives were striped with the big ones until they filled up, then striping continued across the leftover space on the larger drives, so anything beyond what could be striped still got used like a single drive. Cool stuff. Slow, though! Even RAID 0 can take a hit; it's faster than a single drive, but you can tell there's a little overhead. RAID 5 will really show on slow systems too. At best you will get an archiving system; it'll be good for network storage, though, and SCSI drives should last forever (compared to their IDE counterparts of the same age).
 

Zach

Diamond Member
Oct 11, 1999
3,400
1
81


<< I don't think they will be faster than your 40GB; they are fairly old drives.

Linux will support the SCSI card, though I'm not sure about S/W RAID. I don't recommend software RAID anyway; it's not as reliable as HW, and if something does go wrong it makes it heaps harder to get functional again. However, I would check the search engine at linuxdoc.org for software RAID and see what comes up.
>>



Yes, raidhotremove /dev/md0 /dev/sda1 is so tough. And raidhotadd /dev/md0 /dev/sda1. Whoo. Man.

I have a server with 4 or 5 9.1GB SCSI drives. I was playing with it before putting it into the real world, unplugging drives' power and/or SCSI connections. They weren't hot-swap drives, either; two were Fujitsu, one was Seagate, one was an ancient full-height Quantum. No problems!

The RAID 5 algorithm will use spare CPU cycles to repair its parity after you mess with it, and the whole software RAID subsystem (if you can call it that) reads the RAID superblocks on the drives to see where they go, instead of relying on the /etc/raidtab file. So if the boot drive holding /etc dies, you can still live. I put it all on one big RAID 5 array, though, with a RAID 1 array for /boot and 4 swap partitions (one per drive, or maybe it's 5).
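Swapping a dead disk out is about two commands (device names here are just examples):

    raidhotremove /dev/md0 /dev/sdb1    # drop the failed disk from the array
    raidhotadd /dev/md0 /dev/sdb1       # after replacing it; resync starts on its own
    cat /proc/mdstat                    # watch the rebuild progress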
 

Louie1961a

Member
Sep 19, 2001
146
0
0
I tried it with Red Hat and SuSE. SuSE wouldn't work, no way, no how; I kept getting weird glitches and system hangs. SuSE doesn't work with the latest Adaptec 29xx driver either, but it would work with the "old" 29xx driver. Weird.

Red Hat installed it and ran it flawlessly. The only problem was that even in a RAID 0 config, these drives scored about half of what my IDE drive did in hdparm (I know it is inaccurate, but it is a rough comparison). I could really feel it in performance too; the SCSIs just can't hold their own. I am going to throw them in my spare P166 box and put them in a RAID 1 configuration. I will share that box via Samba and just map it to all of the boxes (Windows and Linux) as a network drive to store junk on. What the hey, it was a fun experiment, and I can't argue with the extra 10GB or so for free.
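The share itself should only take a few lines in smb.conf (the share name and path here are made up, obviously):

    [junk]
        path = /mnt/raid
        writable = yes
        guest ok = yes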
 

Zach

Diamond Member
Oct 11, 1999
3,400
1
81
How fast is your system? My P200 showed faster times with RAID 0 on two 6.4s than with them separately.
 

Louie1961a

Member
Sep 19, 2001
146
0
0
You are correct, they showed faster speeds than each alone, by about 50%, but the problem is they are just such slow drives that my 7200 RPM Maxtor blew them away, even arrayed in a RAID 0 configuration.

My system is nothing special: PIII 550 Coppermine OC'ed to 733, 256MB RAM, Asus V7100 GeForce2 MX... mostly the usual stuff.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
<< drives scored about half of what my IDE drive did in hdparm (I know it is inaccurate, but it is a rough comparison). >>

hdparm isn't even a rough comparison with RAID devices; it was designed for single disks only. Try something like dbench or bonnie++ for a better rough comparison.
 

Zach

Diamond Member
Oct 11, 1999
3,400
1
81


<< You are correct, they showed faster speeds than each alone, by about 50%, but the problem is they are just such slow drives that my 7200 RPM Maxtor blew them away, even arrayed in a RAID 0 configuration.

My system is nothing special: PIII 550 Coppermine OC'ed to 733, 256MB RAM, Asus V7100 GeForce2 MX... mostly the usual stuff.
>>



That's the nature of the hard drive: RAID just helps data transfer rates. Seek times matter too, as you now know.