RAID 10 experience

ehume

Golden Member
Nov 6, 2009
1,511
73
91
I just helped a friend upgrade his income-producing system to RAID 10. Dan uses his rig to edit videos that he shoots, so he wanted faster storage and faster editing than what he had. But what really prompted the change was that his Adobe Premiere hardware acceleration (CUDA cores) could no longer be hacked to support his Nvidia 260. He had to move up to a 400-series video board, so he got a 470. He decided that since he was upgrading video, he would upgrade his storage too, since that was long overdue.

Dan decided on a RAID 10 so he could have both speed and redundancy. He has been getting nervous about losing his projects -- you can't go back and re-shoot a wedding, for example. He backed up frequently, but if a drive went down in the middle of something, he was cooked. He was living on borrowed time and he knew it.

Dan got an Adaptec 6405e RAID controller, four 2-TB WD drives (Caviar Black) and a used Nvidia 460. I got in on the project when he asked me if I knew whether his airflow would cool his drives. I suggested we get together and see. If his case couldn't cool the drives, I had several cases he could use instead. So he brought his gear over and we brought his case to the garage.

First thing we did was use some fireplace matchsticks to immobilize his fans, then used a MetroVac ED500 to blow out all the dust. Then we went down to the basement and vacuumed the bits we hadn't gotten with the blower. Took the ED500 and blew off some last bits, and we were ready to go.

First we removed the pair of RAID 1 drives that had been connected to the motherboard for a motherboard-controlled RAID. Then we put the four identical 2-TB drives in the HD cage. There was somewhere between 1/8 inch and 0.5 cm of space between the drives. Not sure that would be enough, but they sat right up against the front fan, so maybe. We plugged in the SATA power plugs from the PSU.

Then we installed the Adaptec 6405e card. It has a SAS breakout cable with four SATA III connectors attached . . . and one lead that terminated in what looked like the kind of plug you see on cables running from the front-panel USB 2.0 ports of a case. The plug would have fit beautifully into one of the motherboard's USB 2.0 headers. So we looked it up in the Adaptec documentation, both print and PDF. Nary a word nor a diagram. Finally we decided to roll it up and ignore it. But this was the first documentation lapse.

Plugged the four SATA III connectors into the four drives and turned on the rig. POSTed fine. As described, the Adaptec card had its own POST, so we pressed Ctrl-A to get into the 6405e's BIOS. We followed the instructions in the BIOS to set up the RAID 10. Straightforward. Not a problem.

But this involved Adaptec's second documentation lapse: although RAID 10 was mentioned in the marketing materials, the paper setup instructions discussed RAID 1 and RAID 5, but not RAID 10. Perhaps this is because RAID 10 is really RAID 1+0, but it did cause us confusion. Worse: Dan was afraid he would not be able to set up the RAID 10 the way he wanted.

Third lapse: Adaptec's setup documentation said you could start using the RAID array immediately, while it was still building. R-i-i-g-h-t. Windows never saw it at all until a number of steps later.

Fourth and fifth lapses: Adaptec's setup documentation was silent on how long it takes to build an array. And there was no mention of how to monitor your progress. We spent a lot of time trying to find the array in Windows before I dug into the array-management functions and found that the progress could be displayed as a two-digit percentage. It took a long time to change from 14% to 15%, so we went to bed and let it work overnight. Bottom line: building an 8-TB RAID is an overnight job.

Next day we were still unable to see the array in Windows, neither with Adaptec's software nor with Windows Disk Management. So, remembering the Bad Old Days(TM), I went into Device Manager and installed the drivers. Now Windows could see the array. Adaptec's sixth lapse. (BTW -- the documentation did say there would be a Windows message asking for drivers. We never saw one. Also, if you boot from their DVD, the mouse driver is hypersensitive, making it hard to click on anything. And you can make a driver floppy, but not a thumb drive. Luckily there are other ways to get drivers, like the web.)

So now we turned to the Adaptec RAID Manager. It could finally see the drive controller, and correctly identified it as an Adaptec 6405e, but the array was not yet ready. Seventh lapse: no hint on what to do next.

We went into Windows Disk Management. Found the array. We clicked on the drive. There was an option to create a simple volume, but when we did that there was a 2-TB limit; 1.7 TB was unusable. Eighth Adaptec documentation lapse: how do you get all 3.7 TB of the array?

Luckily, the Web gave the answer: convert it to a GPT disk. But right-clicking on the volume did not give us that option. It turns out you must right-click to the left, on the "Disk" label in the description section. Ninth Adaptec lapse.
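For what it's worth, the 2-TB ceiling we hit is the classic MBR partition-table limit, not anything Adaptec-specific: MBR stores sector counts in 32-bit fields. A quick back-of-the-envelope check (plain Python, nothing to do with the card's own tooling):

```python
# MBR partition tables hold LBA start/length as 32-bit values,
# so with standard 512-byte sectors the largest addressable
# partition is 2^32 sectors:
SECTOR_BYTES = 512
MBR_MAX_SECTORS = 2**32

limit_bytes = MBR_MAX_SECTORS * SECTOR_BYTES
print(limit_bytes / 1000**4)  # ~2.2 "marketing" TB -- hence the 1.7 TB left over
```

GPT uses 64-bit sector addresses, which is why converting the disk exposes the full array.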

We initialized the array as a GPT disk. Bingo. Formatted it with a quick format, opened it up and stored a mini-project on it. Edited with Adobe Premiere. It worked. Glory Hallelujah!

Now we shut down, pulled the Nvidia 260 and replaced it with an Nvidia 470. Worked fine.

Now Dan has an i7 920 with a fully populated bank of RAM. He does not overclock, since this is a production machine. He does run Malwarebytes and AVG. The OS is on a Samsung 840 connected via SATA II, since that is all this X58 motherboard has.

We ran the Windows Experience Index. The score was 7.4, and that was from the CPU; everything else was higher. And it felt snappy as we ran the rig through its paces.

The hard drives felt reasonably cool -- no hotter than the low-to-mid 30s Celsius.

Overall, the installation would have gone much more quickly if we could have known the answers to our questions in advance. Adaptec documentation really leaves a lot out. If you decide to install one of their controllers, read the entire PDF manual. The tenth Adaptec lapse is that the paper document tells you how to install a Windows OS, as if we would all put our OS on the RAID, instead of using the RAID as a data volume. Adaptec's paper guide says nothing about installing the controller when your OS is already in place. A single sentence would have done it.

Well, I had an opportunity to have an extended visit with an old friend. But we had many unnecessary worries. The Adaptec controller may be a good piece of hardware, but their instructions are woefully inadequate.
 

Railgun

Golden Member
Mar 27, 2010
1,289
2
81
I had a 10 array for a while. Nowhere near as complicated as you make it out to be.

And if he is looking to speed his editing, why isn't he using an SSD or an array of SSDs as his working disk?
 

MrX8503

Diamond Member
Oct 23, 2005
4,529
0
0
His working disk should definitely be an SSD. Even with RAID, mechanical drives can't compare.
 

Goros

Member
Dec 16, 2008
107
0
0
Hardware RAID is not worth the investment for the home/small-business user.

Better off going with software RAID and adding parity drives.
 
Feb 25, 2011
16,988
1,619
126
I had a 10 array for a while. Nowhere near as complicated as you make it out to be.

And if he is looking to speed his editing, why isn't he using an SSD or an array of SSDs as his working disk?

Because the main bottleneck in video editing is sequential throughput, not random IOPS. You can bolt a bunch of spinners together in RAID, get near-SSD-level sequential I/O, add in device redundancy/fail tolerance, and you still end up with 4TB storage for about the cost of a single 512GB SSD.

I second the sentiment about just using software RAID though.
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
Should I retitle the thread, "Issues with documentation?" or some such? In retrospect, setting up a RAID was not complicated. But because neither of us had done it before, and because neither of us wished to mess up, and because both of us started in computers where you could really mess up badly, we had questions. And those questions were not answered in the documentation. And that's how you make an easy job hard.

Now that we have done it once, either of us could do it again, quite simply.

And where's a good reference for a "software RAID"?
 

Railgun

Golden Member
Mar 27, 2010
1,289
2
81
Because the main bottleneck in video editing is sequential throughput, not random IOPS. You can bolt a bunch of spinners together in RAID, get near-SSD-level sequential I/O, add in device redundancy/fail tolerance, and you still end up with 4TB storage for about the cost of a single 512GB SSD.

I second the sentiment about just using software RAID though.

You're going to need a large number to match that of a few SSDs. My 4x 120G Vertex 3s will push 4GB sequential. My 4x 1TB spinpoints...not even close.
 

Dari

Lifer
Oct 25, 2002
17,133
38
91
How does he have 3.7TB of available space if it's a RAID10? I thought RAID10 was striped and mirrored, which means you have a redundant striped (RAID0) array? Therefore, he should only have 2TB of space to work with, right?
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
How does he have 3.7TB of available space if it's a RAID10? I thought RAID10 was striped and mirrored, which means you have a redundant striped (RAID0) array? Therefore, he should only have 2TB of space to work with, right?
It's 4 2TB HDDs, so 4 marketing TB, minus FS overhead, for the effective "RAID 0" part, mirrored.
 

Dari

Lifer
Oct 25, 2002
17,133
38
91
It's 4 2TB HDDs, so 4 marketing TB, minus FS overhead, for the effective "RAID 0" part, mirrored.

I don't understand. Does that mean it's 4TB of usable space? That does not make sense. I used to have a simple RAID1 setup and it only showed one drive's worth of space as available, even though there were (obviously) 2 in there.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I don't understand. Does that mean that it's 4TB of usable space? That does not make sense. I used to have a simple RAID1 system and it only showed 1 total HDD space as available and there were (obviously) 2 in there.
Each RAID 1, made up of 2 2TB HDDs offers 2TB of usable space. The RAID 0 of the 2 arrays offers 4TB.
 

Goros

Member
Dec 16, 2008
107
0
0
And where's a good reference for a "software RAID"?

I don't know that there is really a reference for it, outside of program specific guides.

I'd say the big 3 software raid apps are unRAID, ZFS, and FlexRAID.

Personally, I use FlexRAID to present an infinite raid structure with parity to my WMC7 x64 Ultimate media server. I use a snapshot RAID structure but you can set it up for live RAID as well.

I run 10 7200rpm HDD's with a total available storage of 25TB, but I use 2 of my 3TB drives as parity drives (a-la RAID 6) giving me usable space of around 17TB. Since they are all being used to store media, I formatted them in 64k clusters using GPT tables.

I can add an unlimited number of drives, mount them as NTFS folders, decide which to add to my parity disks and which to my storage pool (you have to use your largest drive size for parity), and it allows you to validate and verify on a set schedule. You can also set it up to SMART-monitor the drives connected via an HBA and SAS expander.

Personally I use Hard Disk Sentinel to monitor all the drives in the system.

Once my current media storage is filled (or close) I will be adding a Norco 4224 with 24 additional HDD's of space and going to a RAID 60 style system with more parity drives as my total pool increases in size. Literally you just plug the drives in, activate them in disk manager, format them and mount them as folders, and add them to your drive or parity in FlexRAID.

I can't speak to unRAID or ZFS as I haven't used them.
 

Dari

Lifer
Oct 25, 2002
17,133
38
91
Each RAID 1, made up of 2 2TB HDDs offers 2TB of usable space. The RAID 0 of the 2 arrays offers 4TB.

Right, each RAID1 is 2TB of usable space. Therefore, a striped array of two of those RAID1s would still be 2TB of usable space, since the striped array is there for performance while the mirroring is there for redundancy.
 

Dari

Lifer
Oct 25, 2002
17,133
38
91
Anyway, I guess I'm wrong on this, but I just assumed that was the way it'd work. I'm actually surprised that you would get more than 2TB of space from 4 2TB HDDs (8TB raw)...
 

MrX8503

Diamond Member
Oct 23, 2005
4,529
0
0
Drive A - RAID1: 2x2TB = 2TB usable
Drive B - RAID1: 2x2TB = 2TB usable

Drive C - RAID0: DriveA + DriveB = 4TB usable.

In short RAID10 of 4x 2TB drives results in 4TB of usable space.
 

mrpiggy

Member
Apr 19, 2012
196
12
81
Know why the Adaptec 6405e's are so cheap relative to other RAID controllers like the LSIs or Arecas? Because they SUCK. While there are controllers that can approach SSDs with properly set-up hard drives, the Adaptec 6405e ain't one of them.

It's a 1-lane PCIe Gen2 device with a max theoretical 500MB/s transfer rate, which means actual real-world transfers are always much lower. The best use of this crappy card is slower-transfer redundant bulk storage. Using this card for speedier video editing, regardless of the RAID type, is counterproductive because its throughput is pure crap.

Here's the specs from Adaptec's site if you don't believe me:

Bus System Interface


  • 6805E: 4-Lane PCIe Gen2
  • 6405E: 1-lane PCIe Gen2
You'd be just as fast just RAIDing off the motherboard ports.

FYI: the standard Windows Experience Index (in the Control Panel) disk score uses the OS drive (in this case the SSD). It doesn't measure non-OS drives unless explicitly run from the command line. I can assure you that if you ran WEI on that RAID setup with that slow RAID card, your index would be much lower.
 

Goros

Member
Dec 16, 2008
107
0
0
I use an IBM M1015 flashed to LSI 9207-8i firmware in IT mode... basically just presenting all connected drives as SATA 6Gb/s drives, no RAID hardware interfering with the UEFI BIOS on the motherboard.

Cost $117 with the bracket and shipping included.
 
Feb 25, 2011
16,988
1,619
126
You're going to need a large number to match that of a few SSDs. My 4x 120G Vertex 3s will push 4GB sequential. My 4x 1TB spinpoints...not even close.

No, but they probably will match a single Vertex 3, which was my point. It's video editing, not a benchmark contest.

And as fun as a benchmark contest is, when there's a usage model or other bottleneck, then there's a point of adequacy where further speed doesn't offer a productivity return. And a working rig for a business is all about ROI.

(p.s.: 4GB/sec from 4x SATA-3 SSDs in RAID-0? You might want to check the math on that - theoretical max is only ~2.4.)
 
Feb 25, 2011
16,988
1,619
126
Anyway, I guess I'm wrong on this but I just assumed it was the way it'd work. I'm actually surprised that you would get more than 2TB of space if you have 8TB of 4 2TB HDDs...

Assuming n drives, all of capacity c, RAID array capacity is:

RAID-0 = n*c
RAID-1 = (n*c)/2
RAID-5 = (n-1)*c
RAID-6 = (n-2)*c
RAID-10 = (n*c)/2

RAID-5/6 suffer compute penalties for parity calculations.

RAID-0 has no drive failure tolerance.
RAID-1 & 5 will continue operating with one failed drive.
RAID-6 with two failed drives.
RAID-10 with between one and n/2 failed drives, depending on which drives fail. (You'd have to be very very lucky.)

Of course failed drives should be replaced immediately and rebuilt.
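Those capacity rules are easy to sanity-check in code. A minimal sketch (the function name is mine; capacities are assumed to all be in the same unit):

```python
def raid_capacity(level, n, c):
    """Usable capacity of n identical drives of capacity c."""
    sizes = {
        "raid0": n * c,          # pure stripe, no redundancy
        "raid1": (n * c) / 2,    # everything mirrored
        "raid5": (n - 1) * c,    # one drive's worth of parity
        "raid6": (n - 2) * c,    # two drives' worth of parity
        "raid10": (n * c) / 2,   # stripe of mirrors
    }
    return sizes[level]

# Dan's array: four 2 TB drives in RAID 10
print(raid_capacity("raid10", 4, 2))  # 4.0 TB raw -- ~3.7 TB after formatting
```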
 

MrX8503

Diamond Member
Oct 23, 2005
4,529
0
0
No, but they probably will match a single Vertex 3, which was my point. It's video editing, not a benchmark contest.

And as fun as a benchmark contest is, when there's a usage model or other bottleneck, then there's a point of adequacy where further speed doesn't offer a productivity return. And a working rig for a business is all about ROI.

(p.s.: 4GB/sec from 4x SATA-3 SSDs in RAID-0? You might want to check the math on that - theoretical max is only ~2.4.)

2x WD Blacks in RAID 0 would be about 300MB/s r/w at best. This is probably the speed the OP is getting from his RAID10.

A modern SSD is 500MB/s. When working with videos, you'll notice a difference considering how large videos can get. You would have to have 4x in RAID 0 to compare to 1 SSD in speed.
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
2x WD Blacks in RAID 0 would be about 300MB/s r/w at best. This is probably the speed the OP is getting from his RAID10.

A modern SSD is 500MB/s. When working with videos, you'll notice a difference considering how large videos can get. You would have to have 4x in RAID 0 to compare to 1 SSD in speed.

Remember, the SSD is connected with SATA II, not SATA III. I haven't done a drive test, but I imagine the speed is nowhere near 500MB/s.
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
You really don't need SSD performance for video editing. I work in a facility that has roughly 50 HD edit seats, all connected via 1Gb Ethernet to the editing SANs. This is 50-megabit 720p, but the same would work just fine for higher bitrates/resolutions. Everything is edited in place on the SAN.

Would SSD be nice? Sure. If costs were equal of course I would love to have a full SSD SAN to connect all these machines to. But with half a petabyte of storage in our current system and already bursting at the seams SSD is simply not an option.

It's even less so for the typical home/semi-pro user.

Viper GTS
 
Feb 25, 2011
16,988
1,619
126
You really don't need SSD performance for video editing. I work in a facility that has roughly 50 HD edit seats, all connected via 1Gb Ethernet to the editing SANs. This is 50-megabit 720p, but the same would work just fine for higher bitrates/resolutions. Everything is edited in place on the SAN.

Would SSD be nice? Sure. If costs were equal of course I would love to have a full SSD SAN to connect all these machines to. But with half a petabyte of storage in our current system and already bursting at the seams SSD is simply not an option.

It's even less so for the typical home/semi-pro user.

Viper GTS

Wouldn't upgrading to 10Gb Ethernet help you more in that case?
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
Wouldn't upgrading to 10Gb Ethernet help you more in that case?

This entire system is due for replacement next year, we'll do 10Gb in the core but there's no need for anything more than 1Gb to the editor. Total video + audio bandwidth per stream is under 10 MB/s, so 10Gb is just massive overkill at the edit level. For very complex edits with many layers and higher bitrates it could become an issue but not with this relatively light codec. Well, I say light by pro standards but it's still MUCH higher quality than most home users will ever see.

Viper GTS