Hardware choices for "hardware-RAID5"

BonzaiDuck

Lifer
Jun 30, 2004
16,784
2,115
126
It's now six months since I posted here saying I would build a Core 2 Duo system around the ASUS P5WDG2-WS Pro motherboard. I've looked at various options, including the 680i boards, and I keep coming back to this one.

In addition to its stability, proven overclocking, and other features, I wanted the Pro board because of its PCI-X slots. I don't plan to build an SLI or CrossFire system (although the Pro supports CrossFire) -- a high-end nVidia 8800 GTX, or even a GTS, is more than I need.

The idea was to get past the single-drive SATA2 bottleneck as much as possible with hardware RAID5. I've looked at the sustained-throughput results for PCI-X RAID5 controllers and for the individual drives I want to use in a four-drive array; the array should be almost as fast as one built from WD Raptors.
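
As a rough sanity check, here's the back-of-envelope math I'm working from, as a quick Python snippet. The per-drive sustained rates are assumptions pulled from reviews, not measurements:

# Back-of-envelope RAID5 sequential-read estimate.
# Per-drive numbers are assumptions -- substitute figures from
# reviews of the actual drives.
drives = 4                # drives in the RAID5 array
per_drive_mb_s = 70.0     # assumed sustained MB/s per 7200 rpm drive
raptor_mb_s = 85.0        # assumed sustained MB/s per WD Raptor

# Large sequential reads stripe across all drives, but one drive's
# worth of each stripe is parity, so roughly (n - 1) drives deliver data.
print(f"4-drive RAID5 read estimate: {(drives - 1) * per_drive_mb_s:.0f} MB/s")
print(f"Same array built from Raptors: {(drives - 1) * raptor_mb_s:.0f} MB/s")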

But I also see that there are PCI-E RAID controllers available, including four-port cards that run at PCI-E x4. I'm pretty sure that running such a card in the Pro's second slot would drop the video slot to x8, so I'd have to pick either an ASUS "Wall Street Quartet" or a 680i motherboard to use a PCI-E RAID card.

How much of an improvement in disk throughput -- if any -- would I see from a PCI-E RAID controller over a PCI-X one?

 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Neither PCIe nor PCI-X is likely to be a significant bottleneck, assuming the PCI-X slot is working properly and not throttling down to standard PCI.
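
Rough theoretical peak numbers make the point; this little Python sketch ignores protocol overhead, so real-world figures are noticeably lower:

# Theoretical peak bus bandwidth in MB/s (protocol overhead ignored).
buses = {
    "PCI 32-bit / 33 MHz":    4 * 33.3,    # ~133 MB/s, shared by all devices
    "PCI-X 64-bit / 133 MHz": 8 * 133.3,   # ~1067 MB/s
    "PCIe 1.x x4":            4 * 250.0,   # 250 MB/s per lane, per direction
}
for name, mb_s in buses.items():
    print(f"{name:24s} ~{mb_s:.0f} MB/s")

Either of the faster two is well above what a four-drive array can deliver.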

What matters more in such comparisons are the secondary issues. For consumer builds, you're generally better off going with PCIe, because you'll get vastly more choice in motherboards, feature sets, pricing, and forward compatibility. For server-class builds, expect to pay a lot more, get fewer choices, and potentially hit secondary requirements such as ECC RAM, server-class CPUs, and EPS12V power supplies, along with issues such as lack of driver support for new consumer OSs. But you also gain the potential secondary benefit of better pricing on old PCI-X controllers, or even entire server builds, on eBay. Newer controllers may appear in both formats, but going forward, PCIe will tend to dominate.

A useful secondary benefit of PCI-X is that it's generally backwards-compatible with plain PCI. That gives PCI-X controllers a big edge in flexibility, as long as you're willing to tolerate the lower performance of a shared PCI bus -- which gets especially bad with a gigabit NIC on the same bus.
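
To put illustrative numbers on that fallback case (the figures are assumptions; a plain 32-bit/33 MHz PCI bus is shared by every device on it):

# Plain PCI fallback: the whole ~133 MB/s bus is shared.
pci_bus_mb_s = 133.0     # 32-bit / 33 MHz PCI, theoretical peak
gige_nic_mb_s = 125.0    # worst case: gigabit NIC saturated at wire speed

print(f"Left over for the RAID card: ~{pci_bus_mb_s - gige_nic_mb_s:.0f} MB/s")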

The controller and the RAID setup will matter much more than the interface. For the controller, check driver support, performance, and features: RAID modes, stripe options, cache, battery backup, migration and expansion, stability, and vendor support.

Separating the storage from the workstation is often a good idea, and overclocking your data server is not. Overclocking flirts with system instability -- at least while you're dialing it in -- and you don't want to risk data and file-system corruption. Similar issues apply to hardware and software changes over time: enthusiasts' desktops tend to change a lot, and that churn is not good for file servers / large data stores.

Having a separate file server does introduce a significant bottleneck: the network. Gigabit is almost mandatory for performance, but you typically can't do better than that, and at that point worrying about saturating x4 PCIe or 133/64 PCI-X becomes pointless, at least with consumer loads.
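
Putting illustrative numbers on the whole chain (all three figures are assumptions), the client only ever sees the slowest link:

# The client-visible throughput is the minimum of the chain.
array_mb_s   = 210.0    # estimated RAID5 sequential read
bus_mb_s     = 1000.0   # x4 PCIe or 133/64 PCI-X, roughly
network_mb_s = 110.0    # realistic gigabit Ethernet throughput

print(f"Client sees: ~{min(array_mb_s, bus_mb_s, network_mb_s):.0f} MB/s (network-bound)")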
 

imported_Goo

Member
Oct 4, 2005
181
0
0
How many HDs are you planning to put in the RAID 5? If it's 4, an external solution may be your better bet -- something like a USR or freedom9 4-bay RAID tower. They're cheap, easy, and fast.
 

bob4432

Lifer
Sep 6, 2003
11,727
46
91
What kind of speeds are you expecting out of the array?

Just remember that getting a board with PCI-X means a workstation board = lots of $$$$, while you can get PCI-E x4 slots on just about anything now. I would go with a PCI-E SAS card for that reason alone -- so you can keep using it in the future.