Originally posted by: Pariah
In both RAID 10 and 0+1 all the drives in the array are synced, not acting on their own.
RAID Level 10
"All drives must move in parallel to proper track lowering sustained performance"
I think they mean "Within each RAID1, each drive must move in parallel to [the] proper track, lowering sustained performance." If they mean the whole array has to seek in sync, well, they're wrong (although their product may work this way, which would be dumb). I don't know what more to say. Think about it. You have stripes arranged something like this:
Disk1: 0 - 2 - 4 - 6
Disk2: 0 - 2 - 4 - 6
Disk3: 1 - 3 - 5 - 7
Disk4: 1 - 3 - 5 - 7
Operations on stripes 1, 3, 5, and 7 are completely independent from those on 0, 2, 4, and 6. If you read from or write to stripe 0, there's no reason for Disks 3 or 4 to be involved at all, or for you to have to 'sync up' with them before you can do your operation. Now, if you do an operation spanning two stripes, yes, you have to wait for the slower disks to finish their part of the job. But there's no reason you should have to wait if the operation fits into a single stripe.
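To make that concrete, here's a tiny Python sketch of the four-drive layout above. The function name and the even/odd stripe math are mine, purely for illustration, not anything a real controller exposes:

def disks_for_stripe(stripe):
    # Even stripes live on the Disk1/Disk2 mirror, odd ones on Disk3/Disk4.
    return ("Disk1", "Disk2") if stripe % 2 == 0 else ("Disk3", "Disk4")

# Reading stripe 0 touches only Disk1 and Disk2; Disks 3 and 4 are free
# to service a completely independent request on stripe 1 at the same time.
print(disks_for_stripe(0))   # ('Disk1', 'Disk2')
print(disks_for_stripe(5))   # ('Disk3', 'Disk4')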
After a bit more searching I dug up an Intel page that had a 5-drive RAID 10 array. So now I don't know what to think. It was always my understanding that RAID 10 needed an even number of drives.
Not really; each RAID1 can have a different number of mirrors if you feel like it. Normally you'd have the same number of drives on each, though.
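For what it's worth, a 5-drive layout could work by just giving one leg an extra mirror. A quick sketch of that idea (the split here is hypothetical; the Intel page didn't say how they actually arrange it):

# One leg triple-mirrored, the other a normal pair.
MIRROR_SETS = [
    ["Disk1", "Disk2", "Disk3"],   # even stripes: three copies
    ["Disk4", "Disk5"],            # odd stripes: two copies
]

def disks_for_stripe(stripe):
    return MIRROR_SETS[stripe % len(MIRROR_SETS)]

print(disks_for_stripe(0))   # ['Disk1', 'Disk2', 'Disk3']
print(disks_for_stripe(1))   # ['Disk4', 'Disk5']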
Regardless, beyond 3Ware and maybe a couple of Promise controllers, no ATA RAID controllers support RAID 10, so it doesn't matter either way.
It's definitely more of an enterprise feature, something you find more often on high-end SCSI RAID cards. It's also less popular now that RAID5 is more widespread: you get better space utilization, most of the performance (writes are a little slower in RAID5), and you can still tolerate a single drive failure (RAID1+0 can sometimes tolerate two).
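The space-utilization point is easy to check with quick arithmetic; the four 100 GB drives here are just example numbers:

n, size_gb = 4, 100
raid5_usable = (n - 1) * size_gb     # 300 GB: one drive's worth goes to parity
raid10_usable = n * size_gb // 2     # 200 GB: half the space goes to mirrors
print(raid5_usable, raid10_usable)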