anikhtos

Senior member
May 1, 2011
As far as I know, RAID 0 is meant to increase speed, performance, and I/O, right?
So my system is:
CPU: Athlon II 250e
Mobo: Asus A78TD EVO
Host adapter: LSI 9211-8i

I ran the HD Tune 4.01 trial.
With a single drive, according to HD Tune, I get:
1 MB transfers: 166 IOPS and 166 MB/s
With two drives in RAID 0:
1 MB transfers: 323 IOPS and 323 MB/s

So far so good; those values are with the motherboard's controller.
But when I look at the HD Tune numbers for the 512-byte, 4K, and 64K transfer rates, I get the same numbers
on a single drive and in the RAID 0 configuration!?

Using the LSI card in RAID 0, at 1 MB I see an increase from 166 to 231, but again at 512 bytes, 4K, and 64K the numbers are the same.

Going to a four-disk RAID 0 configuration,
the motherboard RAID gives 598 at 1 MB and the LSI gives 450,
and both of them show the same numbers at 512 bytes, 4K, and 64K.

1) Shouldn't I see an increase in those numbers as well?
2) Is it logical for the LSI to perform so much worse than the motherboard's RAID controller? (At the 1 MB transfer it is 25% slower.)
 

anikhtos

Senior member
May 1, 2011
/thread

:thumbsdown:
Are you a troll???

I have never had a RAID setup before, so I checked some numbers and I am asking whether these numbers are logical.
So what is with this attitude????
If we all knew everything, this forum would have no reason to exist,
would it?
And I think your attitude is impolite, and furthermore it hurts the image of the forum overall.
If you have nothing to offer to someone's question, then keep your mouth shut and do not ruin such a good forum,
because the forum is its members and how they treat each other.
 

FishAk

Senior member
Jun 13, 2010
The data must all be read from or written to only one disk, no matter how many disks are joined together in a RAID 0 array, until you go past the stripe size. For instance, my stripe size is 128K, so any read or write smaller than 128K goes to or comes from only a single disk.
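
Here is a quick Python sketch of that striping math (the 128K stripe and the two-disk count are just example numbers, not any particular controller's implementation):

Code:
# RAID 0 striping sketch -- hypothetical parameters for illustration.
STRIPE_SIZE = 128 * 1024   # 128K stripe, as in the example above
NUM_DISKS = 2

def disks_touched(offset, length):
    """Return the set of member disks a request of `length` bytes
    starting at byte `offset` has to touch."""
    first = offset // STRIPE_SIZE
    last = (offset + length - 1) // STRIPE_SIZE
    return {i % NUM_DISKS for i in range(first, last + 1)}

print(disks_touched(0, 4 * 1024))      # 4K read  -> {0}, one disk only
print(disks_touched(0, 1024 * 1024))   # 1MB read -> {0, 1}, both disks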
 

anikhtos

Senior member
May 1, 2011
The data must all be read from or written to only one disk, no matter how many disks are joined together in a RAID 0 array, until you go past the stripe size. For instance, my stripe size is 128K, so any read or write smaller than 128K goes to or comes from only a single disk.
So the speed improvement in a RAID 0 configuration only happens at the stripe size and above.
If that is the case, would using a smaller stripe, say 32K or 64K, increase the speed for files of that size and up?
And if I use a smaller stripe, say 64K, will the speed increase I have seen at the 1 MB size stay the same?
How does the stripe size affect the speed of a RAID 0 configuration?
 

dac7nco

Senior member
Jun 7, 2009
You stated you are running an LSI RAID-0 array on an Athlon dual-core, and I immediately thought of a laughable post from a few weeks back. On the LSI controller I use 128K stripe sizes, like FishAk said, and RAID-0 will only help you sequentially, not in random access. I get 1.019 GB/s read and 788 MB/s write using four Corsair Force 120GB drives on an LSI 9280. Random read isn't much better than the 160GB Intel G2 in my laptop.

Again, my apologies.

Daimon
 

greenhawk

Platinum Member
Feb 23, 2011
Going to repeat FishAk a bit, but as mentioned: below the stripe size, speeds do not change, because only one drive is doing the work (reading/writing).

Above that, two or more drives can work together to reach higher speeds. But while sustained transfer speeds increase, the drives' access times stay the same by design, so random access times should not change (in fact, due to small differences between drives, overall access times can increase fractionally).

As for the other point, motherboard vs. the LSI card: other things come into play at higher speeds. The most noticeable one is the bandwidth from the drive to the controller (i.e., which version of SATA). RAID cards generally lag a while behind, so motherboards get a little leg up in some situations on that point.

Then there is the connection between the controller and the rest of the computer. RAID cards used to be far better in this regard, since onboard controllers sat on a single PCIe lane or on plain PCI, so a RAID card had the better connection. Now all motherboard controllers have a high-speed connection, and RAID cards have not moved on very much at all (depending on price range). That can be a reason for those who want absolute speed to skip hardware RAID.

The next issue with true hardware RAID cards is the processor on them. A lot of lower-end cards just do not have the processing power to move data from the drives to the computer (or vice versa), and that is ignoring the XOR work for RAID 5 and beyond. I remember when a good hardware RAID card with a 400 MHz processor was considered top of the line; now you see high-end cards using processors over 1 GHz. In short, even with fast drives and a good connection to the motherboard (i.e., a high PCIe lane count), there is still room for a bottleneck.
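
As a back-of-the-envelope check with this thread's numbers (I'm assuming each drive sustains what the single-drive test showed, and the link speeds are assumed round figures for a SATA 3Gb/s port and an x8 PCIe 2.0 slot):

Code:
# Rough bottleneck arithmetic -- assumed round figures, not measurements.
per_drive = 166      # MB/s, the single-drive HD Tune result above
drives = 4
sata_port = 300      # MB/s per SATA 3Gb/s port (one drive per port)
pcie_x8_gen2 = 4000  # MB/s for an x8 PCIe 2.0 slot

ideal = drives * min(per_drive, sata_port)  # 664 MB/s ceiling
print("ideal 4-disk RAID 0: %d MB/s" % ideal)
print("mobo: 598 MB/s = %.0f%% of ideal" % (598 / ideal * 100))  # ~90%
print("LSI : 450 MB/s = %.0f%% of ideal" % (450 / ideal * 100))  # ~68%
# Neither the SATA ports nor the PCIe slot is anywhere near saturated,
# so the shortfall looks like controller/firmware overhead, not bus
# bandwidth.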

Though it would be interesting to see how a RAID 5 compared between the motherboard and the LSI. I suspect the LSI would lose less of its speed to overhead than the motherboard would.
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
0
71
So the speed improvement in a RAID 0 configuration only happens at the stripe size and above.
If that is the case, would using a smaller stripe, say 32K or 64K, increase the speed for files of that size and up?
And if I use a smaller stripe, say 64K, will the speed increase I have seen at the 1 MB size stay the same?
How does the stripe size affect the speed of a RAID 0 configuration?

Going partly from memory here.

Shrinking the stripe size can help, as you suggest, for smaller file sizes, but only while the stripe is larger than the HDD's sector size. Using a stripe smaller than a HDD's sector size means the HDD reads more data than it returns to you, so performance can be expected to nosedive at that point.

With the newer 4K-sector drives around, that puts a floor of 4K on the stripe size.

The downside is that for larger files, more time is spent issuing requests. For 1 MB of data, instead of reading 8 times (at a 128K stripe), the controller ends up reading 256 times (at a 4K stripe). That seriously eats into peak read/write speeds.
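
A quick sketch of that arithmetic (the per-request overhead is an assumed round figure, just to show how it scales):

Code:
# Request-count overhead sketch -- the 0.1 ms per request is assumed.
transfer = 1024 * 1024   # 1 MB read
overhead_ms = 0.1        # assumed fixed cost per request issued

for stripe in (128 * 1024, 4 * 1024):
    requests = transfer // stripe
    print("%3dK stripe -> %3d requests, %4.1f ms of pure overhead"
          % (stripe // 1024, requests, requests * overhead_ms))
# 128K stripe ->   8 requests,  0.8 ms of pure overhead
#   4K stripe -> 256 requests, 25.6 ms of pure overhead
# Streaming 1 MB at ~166 MB/s only takes ~6 ms, so at a 4K stripe the
# request overhead alone would dominate the transfer time.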

How much varies with the drives' access times and throughput, and it may even be affected by whether the drive is smart enough to read ahead (so that every now and then the next request is already in the HDD's buffer).

So I suspect a cheap / slow / small-cache / green drive will suffer sooner in this situation than a high-performance drive like a WD Black or equivalent.

If you want better 4K response times, RAID is not the way to go. Some implementations of RAID 1 (or RAID 1+0, or whatever some motherboard manufacturers call it) might help on the read side, but not write performance at that file size.

The way to go is fast access times, so either a VelociRaptor or an SSD.

So for responsiveness you look for good access times; for throughput you can look at RAID. Then you just need to match each task to the system/method that best suits it.

I.e., fast access times for OS drives, fast transfer rates for data/programs.