In a RAID 5 array...

Vegito

Diamond Member
Oct 16, 1999
yes...

I have 1 TB using 250 GB Maxtors on 3Ware IDE RAID, and they pretty much suck: slow, no onboard cache, crappy software.

The SCSI RAID setups are usually 10K RPM drives with lots of cache, which helps with RAID 5.

I did a test before, going from a 3-drive up to a 6-drive RAID 5. Read/write speed does increase with each drive added, but not by much, maybe a 5-7% increase per drive, which isn't bad. But a 10-drive RAID 5 will start to saturate the overall bus bandwidth.

Currently I have a 7-drive RAID 5 at 10K RPM for storage, so I'm not too worried about speed.

 

Fullmetal Chocobo

Moderator, Distributed Computing
May 13, 2003
forcesho, are you using SATA or IDE drives? I'm using 5 Hitachi 250 GB SATA drives, and RAID 5 is flying, although I have them on a BroadCom 4852 and not a 3Ware controller...
Tas.
 

Vegito

Diamond Member
Oct 16, 1999
Originally posted by: tasburrfoot78362
forcesho, are you using SATA or IDE drives? I'm using 5 Hitachi 250 GB SATA drives, and RAID 5 is flying, although I have them on a BroadCom 4852 and not a 3Ware controller...
Tas.

I've got both: an 8-drive SATA Maxtor MaxLine II array on the 3Ware controller. RAID 5 is fast, but rebuilding and everything else is slow.

My main system is using 7 Fujitsu MAP3147NP drives in dual-channel mode on Ultra320, so basically I've been spoiled by U320 speeds.
 

Fullmetal Chocobo

Moderator, Distributed Computing
May 13, 2003
Cool. I want something fast for my scratch disk, so I'm deciding whether to add more disks to my RAID 5 array or grab some Raptors and throw them in RAID 0. I don't want to go SCSI, although this system would be perfect for it. I don't know, though; I might do some research. I only need about 100 GB for my scratch disks anyway... But yeah, initializing a RAID 5 array takes forever...
Tas.
 

Fullmetal Chocobo

Moderator, Distributed Computing
May 13, 2003
Originally posted by: Tick
Try to talk myself out of getting a raptor raid 5 array.

Do you want us to? Or did you already, or are you currently doing that??
Tas.
 

imported_Tick

Diamond Member
Feb 17, 2005
Originally posted by: tasburrfoot78362
hehehehe. Why bother? What is the array going to be for?
Tas.

Well, a desktop. Originally I wanted RAID 5 for large storage, for all teh warez, plus fault tolerance, since I can't be trusted to make backups. Then the idea of a Raptor RAID 5 array got me.
 

Fullmetal Chocobo

Moderator, Distributed Computing
May 13, 2003
Ah. Well, Raptors wouldn't be ideal for that, I don't think, since you aren't doing it for performance. You'd be much better off getting larger drives... Raptors are great for performance, but they suck for storage. I'd grab 3 or 4 250s and RAID 5 'em...
Tas.
 

imported_Tick

Diamond Member
Feb 17, 2005
Yeah. I found that it's possible to get 250s for $120, which is good. I intend to get 5, which gives better storage efficiency and 1 TB of usable storage.
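
For reference, RAID 5 usable capacity works out to (number of drives - 1) x drive size, which is why five drives beat three or four on efficiency. A quick back-of-the-envelope sketch in Python (the 250 GB figure is just the drives being discussed here):

def raid5_capacity(num_drives, drive_gb):
    # One drive's worth of space goes to parity; the rest is usable.
    usable = (num_drives - 1) * drive_gb
    return usable, usable / (num_drives * drive_gb)

for n in (3, 4, 5):
    usable, eff = raid5_capacity(n, 250)
    print(f"{n} x 250 GB -> {usable:.0f} GB usable ({eff:.0%} efficiency)")
# 3 x 250 GB -> 500 GB usable (67% efficiency)
# 4 x 250 GB -> 750 GB usable (75% efficiency)
# 5 x 250 GB -> 1000 GB usable (80% efficiency)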
 

Fullmetal Chocobo

Moderator, Distributed Computing
May 13, 2003
Yep. That's exactly what I'm doing right now. Awesome performance too, actually. I get to research what I'm going to do for my scratch disk though... Yeehah... What controller are you going to be using?
Tas.
 

imported_Tick

Diamond Member
Feb 17, 2005
4,682
1
0
Originally posted by: tasburrfoot78362
Cool... Did the RAID 5 array take forever to initialize on yours too?
Tas.

Don't know, haven't set it up yet. But that seems to be the norm.
 

TGS

Golden Member
May 3, 2005
Well, since the basis for the array is not I/O performance, it would have been cheaper to go the software route. Unless you already have the card, buying a hardware card just for fault tolerance is a bit over the top.

RAID 5 is one of the slowest array types for write speed, due to parity. Read speeds are decent and improve with additional drives in the array.
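
To make the parity point concrete, here is a rough Python sketch (illustrative only; the tiny 4-byte "blocks" and the 4-drive stripe are made up, nothing like a real controller's stripe size) of how RAID 5 XOR parity provides fault tolerance and why every data write also means a parity write:

from functools import reduce

def xor_blocks(blocks):
    # XOR equal-length byte blocks together, column by column.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe across a hypothetical 4-drive RAID 5: three data blocks plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)            # stored on the fourth drive

# If one drive dies, XOR the survivors with parity to rebuild the missing block.
lost = 1
survivors = [blk for i, blk in enumerate(data) if i != lost]
assert xor_blocks(survivors + [parity]) == data[lost]

# Any data write forces the parity block to be rewritten as well,
# which is the source of the RAID 5 write penalty mentioned above.
data[0] = b"ZZZZ"
parity = xor_blocks(data)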

The reason 10-15K RPM SCSI drives are faster is, of course, partly the faster rotational speed, though command queueing also helps by reducing actuator movement: drives with command queueing will "intelligently" reorder requests so the actuator makes the best use of the area it is hovering above.
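
Roughly, command queueing lets the drive service outstanding requests out of order rather than first-come-first-served. A toy "shortest seek first" reorder in Python gives the flavor (real NCQ/TCQ firmware also weighs rotational position, so this is only a sketch with made-up block addresses):

def reorder_requests(head_pos, pending_lbas):
    # Greedy shortest-seek-first: always service the request closest to the head.
    order, pos, remaining = [], head_pos, list(pending_lbas)
    while remaining:
        nxt = min(remaining, key=lambda lba: abs(lba - pos))
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return order

def total_seek_distance(head_pos, order):
    dist, pos = 0, head_pos
    for lba in order:
        dist += abs(lba - pos)
        pos = lba
    return dist

queue = [9000, 120, 8700, 300, 8900, 150]                        # hypothetical block addresses
print(total_seek_distance(200, queue))                           # arrival order: ~52,000 units of actuator travel
print(total_seek_distance(200, reorder_requests(200, queue)))    # reordered: ~9,000 units of actuator travel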

For a simple home storage system, software RAID is fine. RAID 5 is almost always the recommendation there because of the available disk space you get with that type of array, but it is not recommended for high-I/O-dependent environments.

For home use:

7200 rpm > 10K rpm
RAID 5 > * for available storage

You won't even notice the difference between 7200 and 10K, or between RAID 5 and anything else, unless you pull out a stopwatch. Your I/O usage just will not stress the array.
 

Vegito

Diamond Member
Oct 16, 1999
Also, cache on the hardware controller plays a big role, especially in RAID 5.

I have hardware RAID 5 on U320 SCSI. It's not over the top if you have over 1 TB of data; if I had less, I'd probably use software RAID 1.
 

TGS

Golden Member
May 3, 2005
Actually, the XOR processor does most of the work in a hardware setup; the cache just keeps you from becoming I/O-limited during large bursts. Both of those matter in a high-I/O environment, which the OP's is not going to be.

edit:

I have hardware RAID 5 on U320 SCSI. It's not over the top if you have over 1 TB of data; if I had less, I'd probably use software RAID 1.

It really shouldn't matter how large the array is, except for parity purposes on very large arrays (i.e., number of drives, as opposed to amount of data). The basis for hardware vs. software is that you will choke the processor with XOR parity calculations in a high-I/O scenario, whereas with hardware the controller's onboard XOR processor handles all those transactions without taking CPU time away from more critical OS functions. The purpose of the cache is to minimize the parity write penalty: it provides a buffer where data can be held while the parity calculations go on in the background. Typically, smaller controllers can be set to write back immediately or to stage the data off once the cache reaches a certain fill level.
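
To put rough numbers on that write penalty and what write-back caching buys, here is a simplified Python model (purely illustrative I/O counting, not any real controller's behavior): a small RAID 5 write is a read-modify-write of the data block and the parity block, and a write-back cache lets the controller acknowledge the host before those four disk I/Os actually happen:

def raid5_small_write_ios(num_block_writes):
    # Classic small-write penalty: read old data + read old parity
    # + write new data + write new parity = 4 disk I/Os per block updated.
    return 4 * num_block_writes

class WriteBackCache:
    # Acknowledge host writes as soon as they land in cache; flush the
    # dirty blocks (and pay the 4-I/O penalty) in the background once a
    # fill threshold is crossed, keeping the penalty off the host's critical path.
    def __init__(self, capacity_blocks, flush_threshold=0.75):
        self.threshold = int(capacity_blocks * flush_threshold)
        self.dirty = 0
        self.disk_ios = 0

    def host_write(self, blocks=1):
        self.dirty += blocks          # host sees the acknowledgement here
        if self.dirty >= self.threshold:
            self.flush()

    def flush(self):
        self.disk_ios += raid5_small_write_ios(self.dirty)
        self.dirty = 0

cache = WriteBackCache(capacity_blocks=256)
for _ in range(1000):
    cache.host_write()
cache.flush()
print(cache.disk_ios)   # still ~4000 disk I/Os in total, just deferred and batched

A controller with enough cache can go further and coalesce dirty blocks into full-stripe writes, which avoids the read-modify-write entirely, but the basic trade-off is the same.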
 

imported_Tick

Diamond Member
Feb 17, 2005
Originally posted by: TGS
Well, since the basis for the array is not I/O performance, it would have been cheaper to go the software route. Unless you already have the card, buying a hardware card just for fault tolerance is a bit over the top.

RAID 5 is one of the slowest array types for write speed, due to parity. Read speeds are decent and improve with additional drives in the array.

The reason 10-15K RPM SCSI drives are faster is, of course, partly the faster rotational speed, though command queueing also helps by reducing actuator movement: drives with command queueing will "intelligently" reorder requests so the actuator makes the best use of the area it is hovering above.

For a simple home storage system, software RAID is fine. RAID 5 is almost always the recommendation there because of the available disk space you get with that type of array, but it is not recommended for high-I/O-dependent environments.

For home use:

7200 rpm > 10K rpm
RAID 5 > * for available storage

You won't even notice the difference between 7200 and 10K, or between RAID 5 and anything else, unless you pull out a stopwatch. Your I/O usage just will not stress the array.


Thanks, that's about what I thought. I have a Raptor for the boot disk, so I'm not too concerned about speed. I'm interested in huge capacity and some fault tolerance. I'm thinking hardware RAID, though, as I don't want software RAID using up my system resources. If I wanted I/O, I'd probably go RAID 50 or 10. I don't want to just have a bunch of separate drives, as I really will never make reliable backups. Also, I plan to get SATA drives with NCQ to help with drive efficiency.