RAID 10 (0+1) performance issue/question

iMPLiCiiT

Junior Member
Aug 4, 2011
I just bought 4x Seagate 7200.12 31000524AS 1 TB hard drives and put them in RAID 10 (0+1). I was wondering, though, whether this is the performance I should expect out of these hard drives, as I thought I should get ~4x read performance and ~2x write performance (obviously a bit under), but they only seem to be doing about 2.5x better. I have a P5Q (ICH10) and am using the on-board RAID controller. Are there any settings or issues anyone can think of that could be affecting my performance?

Info:
http://img405.imageshack.us/img405/9712/44925958.jpg

Benchmarks (they seem rather inconsistent; is this normal?):
http://img191.imageshack.us/img191/8108/upload1z.png
http://img695.imageshack.us/img695/9223/upload2q.png
 

owensdj

Golden Member
Jul 14, 2000
I'm not sure if that performance is normal for that RAID controller, but you won't get performance from it as high as you would from a good dedicated RAID card.
 

greenhawk

Platinum Member
Feb 23, 2011
Depends on how the motherboard is implementing the RAID 1 part of the setup. If done correctly, both drives are read and the data is checked against each other to make sure it is valid. In that case, getting only double the performance is to be expected, as only the RAID 0 part of it will be doing any speeding up. RAID 1 giving better-than-single-drive performance is mostly down to some manufacturers running their own modified RAID 1 and skipping the data integrity checks. Intel, I am pretty sure, does not do this.

Write speed will be what you expected (double).
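To put that in rough numbers, here is a back-of-the-envelope sketch (my own, assuming perfectly linear scaling and a ~125 MB/s single-drive figure, which real arrays and real 7200.12s will only approximate):

```python
# Rough model of 4-drive RAID 10 throughput from a single-drive baseline.
# Assumes perfectly linear scaling, which real arrays never quite reach.

def raid10_read(single_mbps, mirrors_read_in_parallel):
    """4-drive RAID 10: a 2-way stripe where each member is a 2-drive mirror."""
    stripe_width = 2
    per_mirror = 2 if mirrors_read_in_parallel else 1
    return single_mbps * stripe_width * per_mirror

def raid10_write(single_mbps):
    """Writes go to both halves of every mirror, so only the stripe helps."""
    return single_mbps * 2

single = 125  # MB/s, roughly what a 7200.12 manages on its outer tracks

print(raid10_read(single, mirrors_read_in_parallel=False))  # 250 -> controller reads one mirror half
print(raid10_read(single, mirrors_read_in_parallel=True))   # 500 -> controller staggers mirror reads
print(raid10_write(single))                                 # 250 -> write ceiling either way
```

Which of the two read numbers you actually see comes down to whether the controller staggers reads across the mirror halves.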

The first performance graph looks OK, but did you check the drives individually before putting them in RAID, to get a feel for their performance and to check for any issues?

As for the second, it drops a lot. Is the OS on it, or might you have an iffy drive in the set?
 

iMPLiCiiT

Junior Member
Aug 4, 2011
No, I did not test each drive individually. Is there any way to test the drives individually without having to reinstall? Possibly just plugging them into another computer?
 

iMPLiCiiT

Junior Member
Aug 4, 2011
Just did the Long Generic test, the Short Self Test, and the SMART test with Seagate SeaTools, and everything checked out. I also ran an HDTune test on each drive separately and came up with these results:

http://img839.imageshack.us/img839/1372/93871606.png

I only included one image because every drive performed exactly the same. Am I wrong to be expecting almost 4x the read speed from a RAID 0+1 setup, though? Mine seems a little slow.
 

Seero

Golden Member
Nov 4, 2009
I just bought 4x Seagate 7200.12 31000524AS 1 TB hard drives and put them in RAID 10 (0+1). I was wondering, though, whether this is the performance I should expect out of these hard drives, as I thought I should get ~4x read performance and ~2x write performance (obviously a bit under), but they only seem to be doing about 2.5x better. I have a P5Q (ICH10) and am using the on-board RAID controller. Are there any settings or issues anyone can think of that could be affecting my performance?

First, HDDs never scale perfectly: 1 drive, 100%; 2 drives in RAID 0, 150%; 3 drives in RAID 0, 175%; 4 drives in RAID 0, 187.5%. RAID 0+1 works out to about 150%. However, response time goes up, as there are more mechanical movements.
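For what it's worth, those percentages follow a simple halving-returns rule (each extra drive adds half of what the previous one did). A small sketch of that rule, just to show where the numbers come from, not anything measured on ICH10R:

```python
# Diminishing-returns model behind the percentages above:
# scale(n) = 2 - 2**(1 - n), so each added drive contributes half
# of what the previous one did. A model, not measured data.

def raid0_scaling(n_drives):
    return 2 - 2 ** (1 - n_drives)

for n in range(1, 5):
    print(f"{n} drive(s): {raid0_scaling(n) * 100:.1f}% of a single drive")
# 1 drive(s): 100.0%
# 2 drive(s): 150.0%
# 3 drive(s): 175.0%
# 4 drive(s): 187.5%
```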

Also, RAID <> safe. HDDs, defects aside (those would show up within 3 months of use), usually die from external causes (like kicking the PC or a bad power supply). Having RAID drives sitting together means they are likely to die together; when one drive dies, the others aren't far from going. So unless it is critical data storage, a proper backup is the way to go.

Having said all that, SSDs rock in RAID setups. They scale almost perfectly and response time doesn't increase.
 

greenhawk

Platinum Member
Feb 23, 2011
Is there any way to test the drives individually without having to reinstall?

None that I know of.

But since I am late posting, that makes this a pointless reply anyway.

I only included one image because every drive performed exactly the same.

I don't think getting 4x is likely, but it doesn't hurt to try to get as close as possible.

Looking at the single drive, it is very smooth, so the jitter in the original benchmarks (the second of the two) feels a little off to me. But then the last large RAID array I did was 3x 15K SCSI drives in RAID 0; since then it has been 5-drive software RAID 5 via SATA port multipliers (heavily capped max speeds).

Next question: what power supply is in the system (size/brand/model)? What are the other main power users? Are the drives powered from two separate power cables, or do they all share one (with or without power splitters)?

I ask because it is an issue with large arrays: all the drives wanting power at the same time puts a larger instantaneous demand on the power supply. That spike can pull down a voltage rail (normally the 12V one, IIRC) and cause a HDD to restart itself or delay its data fetch. So while the system can handle the 4 drives accessed individually, accessing them together needs more power to cover that instantaneous demand, especially if the PSU is at the limit of its capacity or all the drives are on the same power cord (depending on the thickness of the wires).

You could try a 2-drive RAID 0, benchmark it, then compare it to the RAID 10. At least then you know what the RAID 1 part is doing to the system.
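If you want a quick, crude way to compare the two layouts, a throwaway sequential-read timer like the sketch below works (my own example, not a real benchmarking tool; point it at a large existing file on the volume, and note that the OS page cache will inflate the result unless the file is much bigger than RAM):

```python
import os
import sys
import time

# Crude sequential-read benchmark: reads an existing file in 1 MiB chunks
# and reports throughput. Only useful for comparing runs on the same box.

def sequential_read_mbps(path, chunk_size=1024 * 1024):
    size_bytes = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(chunk_size):
            pass
    elapsed = time.perf_counter() - start
    return (size_bytes / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    # usage: python seqread.py <path-to-large-file>
    print(f"{sequential_read_mbps(sys.argv[1]):.1f} MB/s")
```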

Checking the motherboard manual to see how the SATA controller is connected to the rest of the system might show a possible bottleneck, but in that case there is not much you can do to fix it without replacing the board or getting a good add-on card.
 

iMPLiCiiT

Junior Member
Aug 4, 2011
Just did a 4-drive RAID 0 test and got what I was expecting: 495 MB/s max, 407 MB/s average. That is approximately 4x the performance. Am I wrong to expect the same 4x performance from my RAID 10 (0+1) setup?
 

Emulex

Diamond Member
Jan 28, 2001
You need a 4x PCI Express card (I have a few $50 LSI cards with 8-port SAS/SATA fanout included, shipped). RAID 10 is pairing two drives, then striping the pairs; done optimally, a REAL RAID controller will read from all 4 drives to achieve 4x performance. But remember that ICH soft RAID is just that: there is no dedicated processor, so if your CPU gets busy, scales itself down, or has its affinity moving around, you can expect degradation.

Set your CPU to 100% all the time, and see if playing with affinity to move apps to other cores affects speed. I don't use software RAID for anything; it's too risky to rely on a software stack that can crash when companies like LSI have been making SOLID RAID controllers for the last 20 years that (with good drivers) are bulletproof. If you want a good RAID controller, search the VMware hardware compatibility list; they don't put junk cards on that list (or they boot them off it). If it works with ESXi 4.1, you've got a good card.

BTW, VelociRaptors suck compared to SAS Savvio 10K drives in real-life tests.
 

Seero

Golden Member
Nov 4, 2009
Just did a 4-drive RAID 0 test and got what I was expecting: 495 MB/s max, 407 MB/s average. That is approximately 4x the performance. Am I wrong to expect the same 4x performance from my RAID 10 (0+1) setup?
LoL. Care to show us what benchmarking program you used and the results?
 

Vadatajs

Diamond Member
Aug 28, 2001
Just did a 4-drive RAID 0 test and got what I was expecting: 495 MB/s max, 407 MB/s average. That is approximately 4x the performance. Am I wrong to expect the same 4x performance from my RAID 10 (0+1) setup?

Yes. Performance should be more like 2x because you've only got 2 stripes. It's not guaranteed that your mirrors will stagger reads, and they probably don't.

Also RAID 10 != RAID 0+1
 

FishAk

Senior member
Jun 13, 2010
Also RAID 10 != RAID 0+1

Technically, RAID 1+0 is a stripe of mirrors, while RAID 0+1 is a mirror of stripes. People refer to both as RAID 10, but they are substantially different from each other. For instance, RAID 1+0 has a 66 percent chance of surviving a second disk failure, while the chance of survival with 0+1 is 33 percent. The OP is correct that the ICH10R chip does 0+1.
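Those survival figures are easy to check by enumerating which second failure kills a 4-disk array. A short sketch of that enumeration (it assumes disks 0-1 and 2-3 form the two groups, which is just a labelling choice, not anything about the OP's setup):

```python
from itertools import permutations

# Enumerate every ordered pair of disk failures on a 4-disk array and count
# how often the array survives the second failure.
# RAID 1+0: stripe across two mirrored pairs {0,1} and {2,3};
#           the array dies only when both disks of one pair are gone.
# RAID 0+1: mirror of two stripes {0,1} and {2,3}; a single failure kills
#           its whole stripe, so the array survives only while one stripe
#           still has no failed disks.

GROUPS = [{0, 1}, {2, 3}]

def survives_raid10(failed):
    return not any(pair <= failed for pair in GROUPS)       # no pair fully failed

def survives_raid01(failed):
    return any(not (stripe & failed) for stripe in GROUPS)  # one stripe untouched

for name, survives in (("RAID 1+0", survives_raid10), ("RAID 0+1", survives_raid01)):
    ok = sum(survives({a, b}) for a, b in permutations(range(4), 2))
    print(f"{name}: survives the second failure {ok}/12 = {ok / 12:.1%}")
# RAID 1+0: survives the second failure 8/12 = 66.7%
# RAID 0+1: survives the second failure 4/12 = 33.3%
```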
 

Emulex

Diamond Member
Jan 28, 2001
You want striped pairs, so disks 0 and 1 (stripe 0) are in separate cages with separate power and dual-ported (separate wires). That way you can lose a cage, power to a cage, an SFF-8087 cable, or, if you have two RAID controllers, a whole RAID controller, and keep on trucking. No single point of failure is the general idea.
 

iMPLiCiiT

Junior Member
Aug 4, 2011
Well, I was reading around on the internet, and apparently the ICH10R chipset doesn't support reading across the mirrors, so you only get the 2x read benefit. Kind of shitty; I guess I need a RAID controller.
 

Cr0nJ0b

Golden Member
Apr 13, 2004
So is RAID 10 better than RAID 5?

It's hard to answer such a qualitative question without some additional data, like what you are using it for, what the goal is, what apps, block size, etc.

In general I would say RAID 10 is "safer" because you have mirrored data sets and essentially a spare copy of every production bit. It's slower than RAID 0, but faster than RAID 1. It's also likely faster than RAID 5, but that is somewhat dependent on the use case.

RAID 5 has parity protection, but you are only using about 25% of the capacity for parity bits, so it's somewhat less protected... but at this point we are talking higher math and double-bit-error statistics.
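To make the capacity side of that trade-off concrete, here is a small sketch of usable-capacity fractions using the standard formulas for equal-size drives (the ~25% parity figure above matches a 4-drive RAID 5; the 80-85% utilization mentioned further down matches 5-6 drive sets):

```python
# Usable-capacity fraction for common RAID levels with n equal-size drives.
# Standard formulas only; real controllers reserve a little extra.

def usable_fraction(level, n_drives):
    if level == "raid0":
        return 1.0                          # no redundancy at all
    if level in ("raid1", "raid10"):
        return 0.5                          # everything mirrored once
    if level == "raid5":
        return (n_drives - 1) / n_drives    # one drive's worth of parity
    if level == "raid6":
        return (n_drives - 2) / n_drives    # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

for level, n in (("raid10", 4), ("raid5", 4), ("raid5", 6), ("raid6", 6)):
    print(f"{level} on {n} drives: {usable_fraction(level, n):.0%} usable")
# raid10 on 4 drives: 50% usable
# raid5 on 4 drives: 75% usable
# raid5 on 6 drives: 83% usable
# raid6 on 6 drives: 67% usable
```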

In my case I use:

RAID 0 -- Temp data that I back up all the time or don't care about. I want speed, and it's best at that.

RAID 1 -- Important data where I don't necessarily need the speed or added capacity that RAID 10 gives. This also gives somewhat better fault tolerance and resiliency, since a drive loss will not impair production performance the way it would with RAID 5 or 0.

RAID 10 -- I use it like RAID 1, when I have lots of disks that I want to join together into big, very well protected sets... and I don't mind the added drives and lower effective usable capacity.

I back both of these up regularly in case of file system/OS corruption.

RAID 5 -- I use it when I have a dedicated RAID controller and I want to get better utilization. I would set up 5-6 drive sets, giving roughly 80-85% utilization. The dedicated controller usually does pretty well for performance, but I know I'm losing something over the alternatives... but that's OK, because I'm somewhat more protected. I use this for data sets and some archive sets, usually with smaller FC or SAS drives.

RAID 6 -- I use it when I have big, fat drives that I want to protect from double drive errors. Generally you will lose more capacity to parity, but you gain protection in the event of a double failure. This is best for SATA-type drives and archive use cases.

Again, this is what I do. Which approach I use depends largely on the use case.