RAID 5 with Western Digital Green

Copenhagen69

Diamond Member
Feb 8, 2005
I'm in the middle of buying some 1.5TB Western Digital Green drives for RAID 5. My buddy told me this was a horrible choice and that I'd have lots of downtime trying to fix everything when my system kicks a drive out of the array...


thoughts? :confused::confused::confused:
 

Emulex

Diamond Member
Jan 28, 2001
Correct. If you put them in a Drobo or QNAP it may work, since those use custom firmware to handle cheap drives. The usual issue is that consumer drives like the Greens lack TLER (time-limited error recovery), so when one stalls recovering a bad sector, a RAID controller assumes the drive is dead and kicks it out of the array.
 

Blain

Lifer
Oct 9, 1999
RAID 5 is about performance with some redundancy
Green drives are not about performance, but capacity and low power usage

RAID 5 + Green HDs = What's the point? :\
 

velis

Senior member
Jul 28, 2005
RAID 5 is NOT about performance. For that, you go with RAID 0. If you want reliability too, you go for RAID 10. RAID 5 kills write performance by design, and greatly enhances read performance, since it's effectively striping for reads. RAID 6 kills write performance even more.
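As a rough sketch of why that is (toy byte values, not any real controller's on-disk layout): RAID 5 parity is just the XOR of the data blocks in a stripe, and every small write has to go through a read-modify-write cycle to keep that parity current:

```python
# Toy model of RAID 5 single-parity math. Parity is the XOR of the
# data blocks in a stripe; these are illustrative values only.

def parity(blocks):
    """XOR all data blocks in a stripe to produce the parity block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
p = parity(stripe)                     # parity block on the fourth drive

# A small write must update parity too: read old data and old parity,
# XOR the old data out and the new data in -- that read-modify-write
# cycle is the RAID 5 write penalty.
old, new = stripe[1], b"XXXX"
p = bytes(pb ^ ob ^ nb for pb, ob, nb in zip(p, old, new))
stripe[1] = new

# Reads, by contrast, stripe across the data drives with no parity work.
assert parity(stripe) == p
```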

That said: I have 3x 1.5TB 5400 RPM Samsungs (HD153UI) on my 790GX chipset doing RAID 5 just fine, for more than a year now.

Also, I assembled a server some years back when the Athlon 64 was all the rage. I used an Adaptec (an 8502, I think) with six 200GB drives, and the array works like a charm (though slow, IMO) to this day. The server itself is going into retirement now, and the array with it. Shame the controller has a 2TB array limit :(
But I have heard that some controllers have real issues with low-end consumer drives, so watch out if you plan to go the hardware route.
 

Jeff7

Lifer
Jan 4, 2001
My own RAID 5 experience comes from a Promise SX4000 and then a HighPoint card. A 2300, perhaps?

The SX4000 gave very good sustained read/write speeds compared to a standalone hard drive. It had some level of hardware acceleration for parity calculation, which likely helped with write speeds. I also had the board outfitted with a 256MB stick of ECC RAM.
I don't remember what the 2300 managed; I don't think it has any onboard buffer memory.

As for downtime when a drive dies... I don't know why there would be much of any. I had a drive drop dead on the SX4000. The RAID controller gave a warning about a missing drive, but the array ran normally, perhaps with a bit of a performance hit. I got the replacement drive, plugged it into the array, and the controller automatically regenerated the drive's contents. No real issues at all.
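That matches how single parity works: any one missing block is the XOR of everything else in the stripe, so a degraded array can keep serving reads while the replacement drive is regenerated. A minimal sketch with toy values:

```python
# What a rebuild does, conceptually: recompute the lost drive's blocks
# from the XOR of the surviving blocks (data + parity) in each stripe.
from functools import reduce

def xor_blocks(blocks):
    """XOR corresponding bytes across a list of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byts) for byts in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]       # toy data blocks
par = xor_blocks(data)                   # parity is the same XOR operation
lost = data.pop(1)                       # one drive drops dead
assert xor_blocks(data + [par]) == lost  # contents regenerated onto the spare
```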


I'll have to check what kind of drives I've got when I get home. The SX4000 was running 200GB Western Digital drives, and the 2300 is running 1TB Seagates; that's about all I remember.

Edit: OK, so the SX4000 was also running Seagate drives: 200GB Barracuda 7200.7, ST3200822A.

The HighPoint 2300 is running 1TB Seagate Barracudas: 7200.12, ST31000528AS.
 

Emulex

Diamond Member
Jan 28, 2001
Yeah, it is a bad idea, man - please trust us on this one. Use those drives as JBOD and learn how to use mount points :) That is all.
 

Zxian

Senior member
May 26, 2011
Simple rule: don't do RAID 5 unless you're on a dedicated hardware RAID controller. Otherwise your system performance is going to be simply terrible.

As for using the GP drives in a RAID array: it's doable, and it can definitely save you some cash if you decide to go that route. I have a 3ware 9650SE-8LPML in my server, which until recently had 8x WD10EACS drives in RAID 6. Performance was plenty for my needs, and I had no issues with the drives or the array. I've since moved all my storage to 4x WD20EARS in RAID 5, since I'll likely need to expand soon enough.
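As a side note, the usable space works out the same before and after that migration, going by the nominal drive sizes (simple arithmetic, not measured capacity):

```python
# Usable capacity before and after the migration, using nominal sizes:
# WD10EACS = 1TB, WD20EARS = 2TB.
def usable_tb(n_drives, size_tb, parity_drives):
    """Capacity left after subtracting the parity drives' worth of space."""
    return (n_drives - parity_drives) * size_tb

print(usable_tb(8, 1.0, 2))  # 8x 1TB in RAID 6 -> 6.0 TB
print(usable_tb(4, 2.0, 1))  # 4x 2TB in RAID 5 -> 6.0 TB
```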

I think people forget what the original acronym stands for: a Redundant Array of Inexpensive Disks. ;)
 

Emulex

Diamond Member
Jan 28, 2001
Well, don't forget RAID 10 :) It gives you a fightin' chance at survival for hardly any cost gain, and none of RAID 5's doubled-up parity reads and writes.
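Back-of-envelope numbers on that write penalty, assuming an illustrative ~80 random IOPS per 5400 RPM drive (a ballpark figure, not a benchmark):

```python
# Rough small-write throughput comparison; per-drive IOPS is assumed.
drives, per_drive_iops = 4, 80

# RAID 10: each logical write lands on a mirror pair -> 2 physical I/Os.
raid10_write_iops = drives * per_drive_iops / 2

# RAID 5: read old data + read old parity + write data + write parity
# -> 4 physical I/Os per logical write.
raid5_write_iops = drives * per_drive_iops / 4

print(raid10_write_iops, raid5_write_iops)  # 160.0 vs 80.0
```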
 

Zxian

Senior member
May 26, 2011
Honestly, I think people tend to overestimate the risk of a second spindle failing before the first is restored. Maybe this was more of a concern way back when, but I'd question other things if two drives from a single array die before you have a chance to swap one and rebuild. Also, RAID 10 has 50% storage overhead regardless of array size; that's what makes RAID 5/6 far more appealing.
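The overhead math bears that out: RAID 10 stays pinned at 50% no matter how many drives you add, while the RAID 5/6 parity cost shrinks as the array grows:

```python
# Usable fraction of raw capacity by RAID level and drive count.
for n in (4, 6, 8):
    print(f"{n} drives: RAID10 {0.5:.0%}, "
          f"RAID5 {(n - 1) / n:.0%}, RAID6 {(n - 2) / n:.0%}")
# 4 drives: RAID10 50%, RAID5 75%, RAID6 50%
# 6 drives: RAID10 50%, RAID5 83%, RAID6 67%
# 8 drives: RAID10 50%, RAID5 88%, RAID6 75%
```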

The other thing people tend to forget is what is often the biggest killer of hard drives: not heat, but vibration from other hard drives. These things (even at the enterprise level) are not built perfectly, and they vibrate due to slightly unbalanced loads. When you've got two or more drives vibrating out of sync, that puts considerably more strain on the spindle bearings.

I went so far as to suspend my hard drives in my server to minimize the vibrations they put out. You can see my previous setup here and here.