RAID controllers

Assoul

Member
Apr 13, 2013
32
0
61
I've been thinking about building a new rig, and wanted to know if there are any low- to medium-end RAID controller cards I should consider. I would like to build it with 4 x 256 GB SSDs in RAID 10 and 4 x 1 TB HDDs in RAID 10. Is that possible?

Thx.
 
Feb 25, 2011
16,994
1,622
126
You can get used enterprise-grade hardware on eBay, usually for not too much money. The Dell "PERC" series ones are alright.

If you get a RAID controller with SAS ports, you can plug SATA drives into it, but not the other way around.
 

Assoul

Member
Apr 13, 2013
32
0
61
You can get used enterprise-grade hardware on eBay, usually for not too much money. The Dell "PERC" series ones are alright.

If you get a RAID controller with SAS ports, you can plug SATA drives into it, but not the other way around.

Nerd,

Is there a specific PERC model/series you recommend?
 
Feb 25, 2011
16,994
1,622
126
Not really. Whatever fits in your budget and has the features you need.

Some of the newer ones like the 710 support things like SSD cache drives and whatnot.

They're just a series I'm a little familiar with (I use 'em at work) and a quick eBay search returns a lot of them under the $100 mark.

RocketRAID is also pretty popular, and the IBM ServeRAID M1015 has a pretty good reputation with FreeNAS people.
 

Assoul

Member
Apr 13, 2013
32
0
61
But also wondering, if the controller card fails, do I have to get the same exact one to get my data back? Or, can I plug the drives into a raid-enabled motherboard while I get a replacement card?

I don't know if these cards have a high tendency to fail, but I did previously do work on an old Dell server whose RAID 1 card malfunctioned. Thankfully, though, it was a simple matter of plugging one of the cards into the motherboard and everything was OK.
 
Feb 25, 2011
16,994
1,622
126
But also wondering, if the controller card fails, do I have to get the same exact one to get my data back?

Typically, yes. Assuming the card failure didn't damage the disks somehow.

Or, can I plug the drives into a raid-enabled motherboard while I get a replacement card?
Probably not.

I don't know if these cards have a high tendency to fail, but I did previously do work on an old Dell server whose RAID 1 card malfunctioned. Thankfully, though, it was a simple matter of plugging one of the cards into the motherboard and everything was OK.
They don't have a particularly high tendency to fail, but everything breaks eventually. You should have backups of your RAID array just in case.
 

Assoul

Member
Apr 13, 2013
32
0
61
OK, so I'm guessing a single disk will suffice for backup? And I can boot from that if it fails...
 

nk215

Senior member
Dec 4, 2008
403
2
81
The M1015 is popular because it can be used in IT mode. Basically, the controller passes all the drives straight through to the OS. The user (myself included) then uses software RAID to build the array.
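For what it's worth, here's a minimal sketch of that software RAID step on Linux with mdadm (purely illustrative: mdadm, root privileges, and the /dev/sdb../dev/sde device names are all assumptions about your setup):

```python
# Rough sketch: build a 4-drive Linux software RAID 10 with mdadm.
# Assumes mdadm is installed, this runs as root, and the four drives
# behind the HBA show up as /dev/sdb..sde - adjust for your system.
import subprocess

members = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=10",           # RAID 10 (striped mirrors)
     "--raid-devices=4",     # four member drives
     *members],
    check=True,
)
```

From there the OS just sees /dev/md0 as one block device to partition and format.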

Server hardware usually comes with a passive heatsink because servers already have high airflow through the case to begin with (the Nvidia GRID card is a good example). These cards can run hot in a typical desktop setup. Just put a small fan on the heatsink and it should last a long time.

I've not seen a controller go bad yet. Come to think of it, motherboards mostly go bad because of capacitors. Controllers have fewer of those, and probably higher-quality ones, since they don't have to compete as aggressively on price.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
On the Windows platform, Intel onboard RAID is very good - much better than many hardware RAID and FakeRAID add-on cards. Most cheap cards are FakeRAID: the actual RAID work is done by a binary blob on the host system, so it is roughly equivalent to software RAID. True hardware RAID is something like an Areca card with an Intel IOP processor, which genuinely offloads all RAID functions.

But beware - hardware RAID can become the bottleneck pretty quickly with SSDs. For example, the original 500 MHz Intel IOP chip used on the Areca ARC-1220 and similar cards tops out at around 70,000 IOPS, which a single SSD can already saturate. Your host CPU can go much higher, so software RAID is vastly superior in many cases.

If this concerns Linux/BSD, then do not buy a RAID card - buy an HBA instead. The IBM M1015 in IT mode is one option, but there are others. Your chipset SATA ports are always the best choice: lowest latency, and AHCI is usually preferable because of TRIM support. On BSD, TRIM works on ZFS and on software RAID (GEOM) - I am not really certain about Linux, though.

Also note that you can have a single software RAID array with some disks on the chipset SATA controller and others on an add-on controller like the IBM M1015 - they do not all have to be on the same controller.
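A rough sketch of both points on FreeBSD/FreeNAS with ZFS - a stripe of two mirrors is the ZFS equivalent of RAID 10, and the member devices can hang off different controllers (the pool name and /dev/ada* device names are assumptions):

```python
# Rough sketch: create the ZFS equivalent of RAID 10 (a stripe of two mirrors).
# Assumes FreeBSD/FreeNAS with ZFS available and root privileges.
import subprocess

subprocess.run(
    ["zpool", "create", "tank",            # "tank" is just an example pool name
     "mirror", "/dev/ada0", "/dev/ada1",   # first mirror pair (e.g. chipset SATA)
     "mirror", "/dev/ada2", "/dev/ada3"],  # second mirror pair (e.g. on the M1015)
    check=True,
)
```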
 

bigi

Platinum Member
Aug 8, 2001
2,490
156
106
Older RAID controllers that have been superb with HDDs may not work too well with SSDs.

You can get a 3ware 9650-series controller for a very good price now. It is solid with regular HDDs, but many people have reported problems with SSDs.

Also, features such as TRIM for individual array members aren't supported.

I would strongly suggest getting a RAID controller that was specifically designed to work with and support SSDs.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
TRIM on Intel RAID 0 is supported (starting from Z68, I think) - not sure about RAID 10, though. TRIM on Linux software RAID is supported, I believe, and BSD's GEOM is the best RAID engine and I/O framework designed thus far, with TRIM (BIO_DELETE) available on nearly every GEOM layer. BSD also has TRIM support on ZFS pools.
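If you want to see whether discards (TRIM) actually reach a device or array on Linux, a quick sketch - lsblk's --discard view shows non-zero DISC-GRAN/DISC-MAX values when discard is supported; the /dev/md0 name is an assumption:

```python
# Rough sketch: print the discard (TRIM) capabilities the kernel reports.
# Non-zero DISC-GRAN / DISC-MAX columns mean discard requests get through.
import subprocess

subprocess.run(["lsblk", "--discard", "/dev/md0"], check=True)  # device name is an assumption
```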

Buying hardware RAID for RAID 10 or RAID 0 seems pointless to me. Software RAID or even onboard RAID (FakeRAID) is just fine for that. Because it uses your host CPU and memory, you often get higher IOPS, too, and latency should also be better.
 

simas

Senior member
Oct 16, 2005
412
107
116
The point of the warning about onboard RAID is that you have seriously tied your own hands when it comes to things like BIOS firmware upgrades (a BIOS upgrade will wipe your config if your RAID was BIOS-dependent). A decent hardware card is well worth the money in such cases.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
A BIOS upgrade will not wipe the config - the configuration is stored on the disks themselves, in the last sector of each hard drive (the 'metasector'). That is why you can move the disks to another motherboard of the same class, or with the same brand of chipset, and it should work.

With Intel onboard RAID, the boot ROM often gets updated together with the BIOS. It is possible that newer versions of the boot ROM no longer support the older metadata - I believe this is what you are referring to. I am aware this happened in the past, but I am not aware of it still being an issue. Regardless, downgrading should resolve it.
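As an aside, the same idea can be seen directly with Linux software RAID: a quick sketch that dumps the superblock a member drive carries on-disk (the /dev/sdb device name is an assumption):

```python
# Rough sketch: show the RAID metadata stored on the drive itself.
# Because the array description lives on the members, the array can be
# reassembled on different hardware. Assumes mdadm and root privileges.
import subprocess

subprocess.run(["mdadm", "--examine", "/dev/sdb"], check=True)
```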
 

simas

Senior member
Oct 16, 2005
412
107
116
Thank you, CiPHER - yes, I had personal experience of using motherboard-supplied RAID (in the BIOS), doing a BIOS firmware upgrade, and losing the RAID (it reset to the default config). Grouping exactly the same drives together again in exactly the same RAID brought them up without data - lesson learned, and I am wary of BIOS-provided non-default RAID. It was an Asus P8H77 motherboard, if you're interested.

I am also of the opinion that for most real consumer use cases, RAID is greatly oversold and of limited utility compared to having a true backup. I rely on a combination of data backups, file history, and full system imaging (with bare-metal restore capability) for recovery.