
RAID 5: Hardware vs. Semi-Hardware

NightFalcon

Senior member
(Long post ahead, you've been warned 🙂)

I have this problem: I'm building a file server which will need to be very reliable and be able to serve over 2TB of data at fairly high rates (not all at once, of course 😛). I have already decided on most of the components, but when it comes to the actual physical storage, I'm in a bit of trouble in terms of what to choose.

My basic requirements are as follows: 1) must be a RAID 5 implementation, 2) must support 8 hard drives and over 2TB of total space, 3) must have online capacity expansion. Beyond that, it's just a matter of picking the most reliable one with the most other features. I should say right now that there are two controllers I'm seriously looking at: the Broadcom RAIDCore BC4852 and the 3ware 9500S-8.

A few days ago, I was pretty much set on the 3ware card, which goes for about $500 on Newegg and is a full hardware implementation. Today, however, I found this article, and my opinion changed considerably.

So, I figured I might stand a better chance of making a good decision if I ask around a bit. My first question is this: what happens in a full hardware RAID setup if the controller itself dies? From what I understand, you have to replace it with the exact same model or else the whole array is lost. Is that correct? If so, since the RAIDCore is not full hardware RAID, is it still affected by this, or can I put another controller in its place if it should fail and have my array continue to function?

My second question is actually more about the interface between the motherboard and the controller. Right now I'm set on getting the Supermicro X6DVL-EG motherboard, which comes with two 64-bit/66MHz PCI-X slots. The 3ware card is made for a PCI 2.2-compliant 64-bit/66MHz slot. Are those two compatible or not? As for the RAIDCore, that one is made for a 133MHz PCI-X slot, but from what I've read, it should run fine in the slower 66MHz slot, correct?

Ok, now that we got those things out of the way, what would be your general recommendation? Price is not an issue; I don't mind paying $500 for a good controller as long as it will keep my data safe and give me very good performance. I realize that the RAIDCore uses the CPU for its XOR calculations, but for the most part, the CPUs in my system (especially since there will be two of them) won't be too heavily loaded, so I don't see any problem with the RAID card using the CPU. Are there any advantages or disadvantages to this model besides the increased CPU usage?
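For the curious, the XOR math itself is simple enough to sketch in a few lines of Python. This is purely illustrative; a real controller or driver works on raw disk blocks, not Python bytes:

# RAID 5 parity is the byte-wise XOR of the data blocks in a stripe.
# Any single lost block can be rebuilt by XOR-ing the parity with the survivors.
def xor_blocks(blocks):
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# A stripe on a 3-drive array: 2 data blocks + 1 parity block.
d0, d1 = b"AAAA", b"BBBB"
parity = xor_blocks([d0, d1])

# Simulate losing d1: rebuild it from the surviving block plus parity.
assert xor_blocks([d0, parity]) == d1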

Also, how would you rank these cards in terms of features? I'm still reading the article above, and it seems like the 3ware might be lacking quite a bit, even though it was recommended to me as a good SATA RAID controller (though it doesn't seem to have native SATA support).

Any comments on this topic appreciated. Though I must ask that if you recommend something, please give good reasons for doing so. Thanks 🙂
 
Care to elaborate? I will actually start off with around 500GB of space (depending on what initial drive size I pick). My initial setup will have only 3 drives, either 250GB each, 300GB, or even more; I haven't decided yet. From there, I will expand the array as needed, so when I have all 8 slots filled, I can theoretically have over 2TB. Might not happen soon, but I don't want to have to upgrade my controller later down the road. Have to plan ahead 🙂
 
np

Also wanted to add this, can an NCQ SATA drive work with a controller that has no native NCQ support? That seems to be the case with both of those controllers, and it seems that Seagate now produces mostly NCQ drives. Just want to make sure that they will still work, even though I won't see the advantage of NCQ.
 
You will have no compatibility problems with a standard HBA and drives.

Rebuilding a RAID 5 with that much space is rather risky. If another drive bombs while you're in the middle of a rebuild, you've had it! Believe me, I've had it happen before, and it was with SCSI drives! Of course, they were beaten to death, but still.
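To put a rough number on it, here's a back-of-the-envelope estimate in Python, assuming a quoted unrecoverable read error (URE) rate of 1 per 1e14 bits (a common desktop SATA spec sheet figure) and treating errors as independent:

URE_RATE = 1e-14              # assumed probability of an error per bit read
drives_read = 7               # rebuilding an 8-drive RAID 5 reads the 7 survivors
drive_size_bits = 250e9 * 8   # 250GB drives

bits_read = drives_read * drive_size_bits
p_clean = (1 - URE_RATE) ** bits_read
print(f"Chance of hitting a URE during rebuild: {1 - p_clean:.1%}")  # ~13%

And that's with the smallest drives under consideration; the odds only get worse as the array grows.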
 
Well, tbh, in the many years I've been working with computers I've only had a single bad drive, and that was an old WD 40GB with nothing on it. Can't say I even want to plan for the failure of 2 drives at the same time, because with every other RAID level I lose a lot more space, and with some I also won't be able to start with 3 drives (so greater initial cost). My hope is that I can start with 3 drives, get a fourth one in a few weeks when I get my next paycheck to use as a spare, and then start buying more when I start running out of the initial space. I honestly doubt that I will ever be in a situation where two drives fail at the same time. They will be running in a well-cooled environment, and the server will be attached to a UPS that can power it for at least 12 minutes (enough to shut down). Even if there is a possibility that two drives can fail at the same time, I don't see those chances as being worth the extra cost and reduced overall space.

Since no one seems to be able (or to want 😛) to answer some of the other questions, I might as well say that at this point I am 90% sure I will be going with the RAIDCore card. 3ware seems a little out of date at this point, and I'd rather get more features and sacrifice some CPU, especially since it's a dual 2.8GHz system that won't be too heavily used. If I may ask a different question, though: as I said, the server will be on a UPS, but theoretically, what can happen to my array if the power goes out without a proper shutdown? The fact that the controller uses main memory for cache and can be writing to the drives when the power goes out... does that present a problem?
 
The fact that the controller uses main memory for cache and can be writing to the drives when the power goes out... does that present a problem?

Not significantly worse than a controller that uses on-card memory for cache and can be writing to drives when the power goes out 🙂

I don't know pricing, availability, or how the finished product turned out, but while this was in development, it was clearly (on paper) a superior product to all other SATA RAID5 solutions.
 
Servers should never go down while the possibility of unprotected buffers (i.e., dirty cache) exists. That is not healthy for your filesystem, and data integrity can be compromised.

You do NOT need to have two disks fail at the same time. If a RAID 5 array goes degraded and you don't know about it for a while, and then another drive drops off, you're in trouble. Make sure you have alarms, and make sure they work!

In the SCSI realm where our servers live, this is all too well known. Like listening to an old ELP 8 track, I can practically hear the screamer before the error happens! The sooner it gets corrected the better things will be. I've been down the rough road, fell over the edge and thought it would be cool to kick my feet while hanging on with my fingernails. Then I fell. Needless to say I won't do it again.

If budget permits, consider SCSI. The HBAs are far better, and the drives are too. It's expensive and may stretch beyond the boundaries of your budget. I'm not real familiar with the RAIDCore products outside of the stuff I've read on the web. They look OK, certainly better than Highpoint, probably on the same line as Promise and a notch below 3ware. 3ware products are practically enterprise level on the higher end. LSI makes SATA controllers that were basically converted from their popular MegaRAID SCSI lines, but they do have problems. They are getting better, though.
 
At any rate, I think I see the reason for the RAIDCore card preferring a 133MHz PCI-X slot. Because it's using the CPU for its XOR calculations, it needs the extra bandwidth and reduced latency in order to compete speed-wise with the other cards on a 66MHz PCI-X bus. I would venture a guess that in a 66MHz slot you're going to lose some speed, but I doubt it's going to be enough to make you wish you'd gotten the other card. As far as NCQ goes, it appears that NCQ slows things down in desktop environments but speeds things up in server environments. Also, for the record, even the fastest SATA drive (the 74GB Raptor w/TCQ) absolutely gets slaughtered by SCSI drives in fileserver benchmarks. Even the Seagate Cheetah 10K.6 and 10K.7 drives score 35-50 I/Os higher. I agree with the above poster: try to go SCSI if at all possible. Even last-generation or generation-before-last SCSI is faster in a server environment than any SATA drive.
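For reference, the raw bus math behind that guess (theoretical peaks; real-world throughput is a good deal lower):

# Peak PCI/PCI-X transfer rate: bus width (in bytes) times clock rate
def pci_peak_mb_s(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz

print(pci_peak_mb_s(64, 66.66))    # 64-bit/66MHz slot:   ~533 MB/s
print(pci_peak_mb_s(64, 133.33))   # 64-bit/133MHz PCI-X: ~1066 MB/s

Eight fast SATA drives streaming sequentially can get within range of the 66MHz figure, which is presumably why the faster slot is recommended.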
 
Well, regarding the drive-failure notification, I will certainly set things up so that I get an e-mail as soon as a drive fails. Though actually, until all 8 bays are taken up by the RAID 5 array, I can keep the 4th drive connected and mark it as a hot spare, so that as soon as any drive fails, the controller will begin rebuilding instantly.
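Something along these lines is what I have in mind for the notification, roughly sketched in Python and meant to run from cron. The raidcli status command is just a placeholder; whatever management CLI actually ships with the controller would go there:

# Rough sketch of a degraded-array e-mail alert.
# STATUS_CMD is a placeholder; substitute the controller's real CLI.
import smtplib
import subprocess
from email.message import EmailMessage

STATUS_CMD = ["raidcli", "status"]   # hypothetical command name

def array_degraded():
    out = subprocess.run(STATUS_CMD, capture_output=True, text=True).stdout
    return "DEGRADED" in out.upper()

def send_alert():
    msg = EmailMessage()
    msg["Subject"] = "RAID array degraded on fileserver"
    msg["From"] = "fileserver@example.com"
    msg["To"] = "admin@example.com"
    msg.set_content("A drive has dropped out; check the array and the hot spare.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    if array_degraded():
        send_alert()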

Regarding SCSI, I've considered it, but honestly can't justify the price. That extra boost of speed will most likely go unnoticed in my environment (which is currently 100Mbps, but will be upgraded to gigabit soon enough), and considering how much it would cost to get even close to the same amount of space as I can get using SATA, it's just not worth it. Personally, I don't see SCSI staying at the top for too long, especially with SATA II. One on one, yes, SCSI might still win, but when RAID is in the picture I consider SATA a much more "healthy" approach (and don't get me started on the cables lol). For enterprise servers, SCSI might still be the way to go; in my case it won't be worth it.

Either way, I've already ordered the enclosure, which should be arriving tomorrow 🙂 8 hot-swap SATA bays in the front; I doubt I'll regret that decision.
 