RAID card or new motherboard??

alizee

Senior member
Aug 11, 2005
501
0
86
I would like RAID 5 support, and my current motherboard doesn't support it. I'm in a quandary: should I get a RAID card, or a new motherboard that supports RAID 5? I will be using it with Windows, and while I don't want horrible performance, the main reasons I want RAID 5 support are redundancy and one large volume.

A new motherboard with RAID 5 support is about $100, while a Highpoint RocketRAID 2310 is about $150. I know there are cheaper RAID cards, but they are either PCI instead of PCIe, or they are natively PCI with a bridge chip for PCIe.

Does anybody have a recommendation for a RAID card? Should I just buy a new motherboard?

I mainly want the largest volume I can get with redundancy. Performance is a secondary concern; I don't want less performance than I would get out of a single drive, but if I don't get much more than that, it's ok. I will be using this RAID in my file/media server, which is being backed up separately. The redundancy will make it easier to recover from single drive failures, but I do have that backup if there's more than that.

Thanks for the help.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
That Highpoint looks to be a quasi-HW RAID card (check out the reviews on Google Shopping), so I'm not sure how much better it would be than Intel's. For performance, you would likely be better off with RAID 10, also. Less efficient use of total array space (4 drives, 1/2 capacity, 1 drive redundancy), but performance should be fine.

You can find decent Dell SAS controllers (OEM LSI MegaRAID) for OK prices ($200 or so, where the branded cards are typically $300+), but I have read of them not working with desktop mobos. It could be a case of PCIe x16 BIOS problems (the desktop mobo turns off the IGP when a card is plugged in, when you really want to use the IGP anyway), but I don't know for certain, as I have not encountered any problems with them. OTOH, for that kind of cost, you could get a new mobo and an extra drive, or a cheaper card, and just use RAID 10.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Do not use RAID 5; it is a bad idea. It's a bad idea in the datacenter, and it's an even worse idea at home.

There's a reason folks came up with RAID 6, a reason folks use RAID 50, and a reason folks prefer RAID 10 or 10E.

Stick with LSI chipsets, man. Trust me on that one. Preferably with a battery- or flash-backed write cache.
 

alizee

Senior member
Aug 11, 2005
501
0
86
That Highpoint looks to be a quasi-HW RAID card (check out the reviews on Google Shopping), so I'm not sure how much better it would be than Intel's. For performance, you would likely be better off with RAID 10, also. Less efficient use of total array space (4 drives, 1/2 capacity, 1 drive redundancy), but performance should be fine.

Why would you say RAID 10 over RAID 1? Performance?


Do not use RAID 5; it is a bad idea. It's a bad idea in the datacenter, and it's an even worse idea at home.

There's a reason folks came up with RAID 6, a reason folks use RAID 50, and a reason folks prefer RAID 10 or 10E.

Stick with LSI chipsets, man. Trust me on that one. Preferably with a battery- or flash-backed write cache.

Can you expand on that? Why exactly is it a bad idea? Performance and data integrity come to mind. You mention RAID 6 and RAID 50, so I imagine you mean recovery from multiple drive failures, as well. Also, would you still say I need battery backup on a RAID card if I have a UPS for the whole system?

I think I need to expand on what I'm using the storage for, and I'll update my original post, too. I mainly want the largest volume I can get with redundancy. Performance is a secondary concern; I don't want less performance than I would get out of a single drive, but if I don't get much more than that, it's ok. I will be using this RAID in my file/media server, which is being backed up separately. The redundancy will make it easier to recover from single drive failures, but I do have that backup if there's more than that.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I've had nothing but good experiences w/ 3Ware as well (I rather like their diag/management utilities), and now they're part of LSI. Battery backup and such generally eat budgets right on up. If the OP needs 2TB or less, and is on a budget, I'd just go with a whole-system backup for data protection (RAID is not a backup, people; I learned this the hard way! :)), and maybe RAID 1 for availability. For performance as well, RAID 10 makes much more sense than any other common RAID implementation, and is plenty fast enough without an added computer on a card. RAID 10's performance is also about as good w/o a hardware RAID controller as it is with one, barring heavy random loads, and even then, it will not be bad.

I might also add: avoid Adaptec. When they work, they work... but they can be PITAs sometimes, compared to LSI and 3Ware. I haven't touched anything with Highpoint in the name for about 10 years, so I can't say how they are now, but the linked card is hardly a good value, unless you're short on SATA ports, or need assurance that you can move the array between very different hardware (one advantage of using an add-in card), both of which can be dealt with more cheaply.

----------

Added via edit:
Why would you say RAID 10 over RAID 1? Performance?
Performance, and total space, if that's an issue. RAID 1 gets you increased read speeds for a minor latency hit, and slightly worse write performance under heavy load (w/ SATA and PCI-e, not enough to worry about, usually). RAID 1 also only gives you one drive's worth of space. RAID 10 gets you more space and approaches the performance of RAID 0, but you need more drives, and you don't get more worst-case redundancy than RAID 5. RAID 10 also bypasses the RAID 5 write hole, which can be a source of data corruption, if a rare one (and you're kind of asking for it w/o battery backup with parity-based RAID levels).
I don't want less performance than I would get out of a single drive
RAID 5 and 6 will give you less performance than a single drive, unless you get a nice controller with plenty of cache. Even then, random write loads will still perform worse.

20 or so MB/s of sequential writes is typical for RAID 5 done in SW. HW can get you more, but even with a card that can give you 50+ MB/s sequential writes, small random loads will bring it to its knees. That's not much of a concern for a file server, but a controller good enough to do it will likely cost more than the extra drives needed to implement RAID 10, especially for home use (i.e., not getting SAS drives, a 'real' server, and all that mess), so it's hard to recommend.
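
If it helps to see the space/redundancy trade-offs as numbers, here's a rough back-of-the-envelope sketch (illustrative rules of thumb only, nothing measured from a real controller):

```python
# Rough usable-capacity / worst-case fault-tolerance math for the RAID levels
# discussed above, assuming n identical drives of size_tb each. Illustrative only.
def raid_summary(level, n, size_tb):
    if level == "RAID 1":      # mirror: one drive's worth of space
        return size_tb, n - 1
    if level == "RAID 5":      # one drive's worth of parity
        return (n - 1) * size_tb, 1
    if level == "RAID 6":      # two drives' worth of parity
        return (n - 2) * size_tb, 2
    if level == "RAID 10":     # striped mirrors: half the raw space,
        return (n // 2) * size_tb, 1   # only 1 failure guaranteed survivable
    raise ValueError(level)

for level, n in [("RAID 1", 2), ("RAID 5", 4), ("RAID 6", 4), ("RAID 10", 4)]:
    usable, tol = raid_summary(level, n, 2)   # e.g. 2TB drives
    print(f"{level}: {n} drives -> {usable}TB usable, survives {tol} failure(s) worst case")
```

With four 2TB drives that works out to 6TB usable for RAID 5 vs 4TB for RAID 10; that space penalty is what you trade for the better write behavior and the absence of the write hole.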
 
Last edited:

bigi

Platinum Member
Aug 8, 2001
2,490
156
106
Go with a hardware RAID card. I recommend LSI/3ware.

I used to have RAID 5, but I upgraded my controller and am now running RAID 6. RAID 6 can take two drive failures; RAID 5 only one.

The rebuild process will take some time, depending on one's setup. It can actually run a few days, and during this time all drives are stressed very heavily.
If you are running RAID 5 and yet another drive fails during the array rebuild, the array is gone.
With RAID 6, your array can still take it. You still have your data, and that is all RAID needs to provide.
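
To put a very rough number on the rebuild time (the rates here are assumptions for illustration, not figures from my setup): the replacement drive has to be completely rewritten, so even before background load slows things down you're looking at hours per drive, and on a loaded array with big disks that easily stretches into days.

```python
# Back-of-the-envelope rebuild time: the replacement drive must be completely
# rewritten, so time >= drive size / sustained rebuild rate.
# Rebuild rates below are assumed for illustration; real rates vary widely,
# especially while the array is serving other I/O.
drive_tb = 2.0
for rebuild_mb_s in (30, 60, 100):
    hours = drive_tb * 1e6 / rebuild_mb_s / 3600
    print(f"{drive_tb}TB drive at {rebuild_mb_s} MB/s -> ~{hours:.0f} h minimum rebuild")
```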

Many go for speed when RAIDing; I go for data availability and array uptime.

A good setup of this kind will run around $1000: controller, BBU, drives, cages, etc.
You may want to invest in backups instead.

YMMV.
 

alizee

Senior member
Aug 11, 2005
501
0
86
Should I just go JBOD for my large-volume needs and not worry about RAID making single-drive failures easier to handle? Or maybe even go all out for performance with RAID 0 and skip any sort of local redundancy?

Like I said, I do have a network backup solution and it works great, and I definitely could recover from multiple drive failures, whether those drives are in a RAID array or not.
 

nk215

Senior member
Dec 4, 2008
403
2
81
I tried Intel RAID 5 and a Dell PERC, and finally decided on the Dataoptic SPM393. The reason I picked the SPM393 is that I can scale to 6 hardware RAID 5 arrays (~50TB) at minimal cost. The SPM393 lets me connect a RAID 5 array (five 2TB disks) to a single SATA port on the motherboard. In the BIOS, that port shows up as having one 8TB HDD connected to it.

It gets me 200+ MB/s reads and around 80 MB/s writes, which is significantly better than the RAID 5 support in my motherboard (X58m).

My data is also backed up, so I don't really need RAID 5 if data loss is the concern. The reason for me to go RAID 5 is to minimize downtime: restoring 5TB from external sources takes days on a Gbit connection.

When I have some time, I'll do a full review of the SPM393. Needless to say, it works well for my need (which is to saturate a Gbit connection on my server).
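
For anyone checking the capacity figures above, the arithmetic is just RAID 5 losing one disk per array to parity (nothing SPM393-specific here):

```python
# Just the arithmetic behind the numbers above; nothing SPM393-specific.
disks_per_array = 5
disk_tb = 2
arrays = 6

usable_per_array = (disks_per_array - 1) * disk_tb   # RAID 5: one disk lost to parity
total = arrays * usable_per_array
print(f"{usable_per_array}TB usable per array, ~{total}TB across {arrays} arrays")
# -> 8TB per array, ~48TB total (the "~50TB" mentioned above)
```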