Originally posted by: EricMartello
The act of creating a raid 0 array of 2 or more drives does not increase the individual drive's probability of failure. So whether you have 2 drives or 5 in a raid 0 configuration, the probability of you losing all data due to disk failure is the same as if you had 1 in a non-raid configuration.
The first part, "does not
increase the
individual drive's probability of failure" is correct, the conclusion is not, because any single failure is a system failure.
System MTBF for n components is calculated as follows:
1 / MTBF(system) = 1 / MTBF(1) + 1 / MTBF(2) + ... + 1 / MTBF(n)
For n devices with equal MTBF, this simplifies to:
MTBF(system) = MTBF(1) / n
So a 2-drive RAID 0 array has half the system MTBF of a single drive, not counting the controller.
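If you want to plug in your own numbers, here's a minimal Python sketch of that formula (the function name is mine, just for illustration):

def system_mtbf(mtbfs):
    # RAID 0: any single drive failure fails the array, so failure
    # rates (1 / MTBF) add across drives.
    return 1.0 / sum(1.0 / m for m in mtbfs)

print(system_mtbf([1e6, 1e6]))  # two 1,000,000-hour drives -> 500000.0 hours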
http://www.relexsoftware.com/customers/cs/versatel.asp
http://www.storagereview.com/guide2000/ref/hdd/perf/raid/concepts/relRel.html
However, if we take the published MTTF figures at face value (1,000,000 hours for the Maxline III), the net results aren't so significant: for a 2-drive array, I calculate about a 1% increased chance of failure in 1 year, and 8% in 10 years. Assuming half that MTTF, I get 2% and 12% respectively. Assuming half the Maxline MTTF and a 4-drive array, I get 5% in the 1st year, 21% in 5 years, and 34% in 10 years.
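For anyone who wants to check those figures, here's the arithmetic as a rough Python sketch. I'm assuming a constant failure rate (exponential model), i.e. P(failure within t hours) = 1 - exp(-t / MTTF), and reporting the array's failure probability minus a single drive's. That model is my assumption, and it reproduces the numbers above to within a point or so:

from math import exp

HOURS_PER_YEAR = 8760

def p_fail(mttf_hours, years, n_drives=1):
    # Exponential model: the RAID 0 array fails if any of the n drives
    # fails, so the effective failure rate is n / MTTF.
    t = years * HOURS_PER_YEAR
    return 1 - exp(-n_drives * t / mttf_hours)

for mttf, n, years in [(1e6, 2, 1), (1e6, 2, 10),
                       (5e5, 2, 1), (5e5, 2, 10),
                       (5e5, 4, 1), (5e5, 4, 5), (5e5, 4, 10)]:
    increase = p_fail(mttf, years, n) - p_fail(mttf, years)
    print(f"MTTF {mttf:,.0f} h, {n} drives, {years:2d} y: +{increase:.0%}")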
But this is really off-topic in this thread -- we still don't know why the OP is getting such high performance numbers. They are high enough that they probably indicate problems with the benchmarks, which matters because people often use the same benchmarks elsewhere.
I tried two current Maxtor DiamondMax 10 300 GB / 16 MB cache SATA II drives on NForce 430, and didn't get anywhere near those transfer rates, nor those seek times. These drives are also supposed to have NCQ, though perhaps not with the same tuning / sophistication as the Maxline III.
I don't know the reasons, but I wouldn't just take those numbers at face value. Simple transfer tests might give more reliable figures, e.g. timing copies from memory to drive -- via applications, or via file copies where the source is known to be cached. If the seek times are really as good as the benchmarks say, then copying to and from the same array should also give very good transfer rates.
I find Sysinternals' CacheSet useful for such tests. (It can clear the file cache, eliminating its effect on repeated tests.)
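For what it's worth, the memory-to-drive timing test I mean can be scripted in a few lines of Python. This is only a sketch -- the path and size are placeholders for your own setup, and the file should be much larger than the drives' caches so the result isn't flattered by them:

import os, time

# Rough sequential-write throughput test: write a large in-memory buffer
# to the target drive and time it.
PATH = "D:\\throughput_test.bin"   # placeholder: a file on the array under test
SIZE_MB = 2048                     # placeholder: well beyond the drive caches
CHUNK = b"\0" * (1024 * 1024)      # 1 MB of zeros, already in memory

start = time.time()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())           # force the data out of the OS write cache
elapsed = time.time() - start

print(f"{SIZE_MB / elapsed:.1f} MB/s sequential write")
os.remove(PATH)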