4 SATA Rev2 SSDs - RAID 0 on P67

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
I did not see this information posted anywhere on the web after some searching. I was always curious how the P67 would handle my 4-drive SATA Rev2 RAID 0 array. X58 (with the ICH10R southbridge) had a throughput limit of around 666 MB/s, which I was able to confirm on my own X58 system.

So I was at a crossroads a few days ago when it came time to upgrade my computer. Do I keep the RAID array with the X58 system when I give it to my dad and purchase two SATA Rev3 SSDs, or do I migrate the array to my new P67 setup and buy him a single SSD? I was torn, but it basically came down to this: 2x SATA Rev3 offers roughly the same throughput as 4x SATA Rev2. Since my Intel SSDs have never given me a problem, I opted to keep them. I was hoping for 1 GB/s sequential reads...

Test Setup:
2500K @ 4.6GHz
ASRock Extreme4 Gen3 P67
4x Intel X25-M 80GB drives in a RAID 0 array, 128KB stripe
Secure erased all 4 drives prior to building the RAID
Write-back caching enabled via Intel Rapid Storage Technology
4000MB test in CrystalDiskMark

Sequential Read = 745 MB/s
Sequential Write = 280 MB/s
Random 4K Read QD1 = 19 MB/s (this doesn't scale with RAID)
Random 4K Write QD1 = 170 MB/s

As a gamer, I primarily care about read speeds; a combination of fast random reads and sequential reads is what I am after. I will admit I am a bit disappointed with the results, considering that when I use only 3 drives in the array I get the same performance. Each of these drives on its own is fully capable of 250 MB/s sequential reads. They scale perfectly up to 3 drives, and the 4th drive adds nothing. Rather than overhead being the problem, I firmly believe there is a bottleneck somewhere in the P67 chipset with SATA Rev2 communication, much like the ICH10R's throughput limit of roughly 666 MB/s. I have seen reports of 2x SATA Rev3 drives hitting 1 GB/s sequential reads, which indicates that P67 can in fact handle it, but perhaps the routing of SATA Rev2 versus SATA Rev3 is different, or SATA Rev2 is artificially capped.
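To make the scaling argument concrete, here is a rough sketch (not the OP's tooling) that compares measured array throughput against ideal RAID 0 scaling. The 1-, 3-, and 4-drive figures come from this thread; the 2-drive figure is an assumption of ideal scaling for illustration.

```python
# Hypothetical sketch: spot a chipset ceiling by comparing measured
# RAID 0 throughput against ideal linear scaling.
# Per-drive figure is the X25-M's ~250 MB/s sequential read.

PER_DRIVE_MBPS = 250
measured_mbps = {1: 250, 2: 500, 3: 745, 4: 745}  # 2-drive value assumed

for n, mbps in sorted(measured_mbps.items()):
    ideal = n * PER_DRIVE_MBPS
    print(f"{n} drive(s): {mbps} MB/s measured vs {ideal} MB/s ideal "
          f"({mbps / ideal:.0%} scaling)")

# When adding a 4th drive leaves throughput flat, the limit sits
# upstream of the drives: ~745-750 MB/s here on P67's 3Gb/s ports,
# vs ~666 MB/s on ICH10R.
```

If the drives themselves were the bottleneck, the 4-drive result would still improve over 3 drives; a flat line at ~745 MB/s points at the controller.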

Bottom line: with 4 SATA Rev2 drives, the following results can be expected:

P67 = 750 MB/s
X58 (ICH10R) = 666 MB/s

I'll run the tests again with some additional software and update this thread. I no longer have my X58 system (Dad has it); however, there are threads out there where the ICH10R southbridge throughput limit is well documented.

Note: I have run RAID 0 with X25-M SSDs for well over 2 years. The array has never failed at any time, and I have been extremely aggressive with my overclocks. For people who say RAID 0 is unreliable, there may be some truth to it, but the fact of the matter is that Murphy's law applies to a single drive just as much as to a RAID array. So if your data isn't backed up, you shouldn't feel any more secure with a single drive than with a RAID 0 array, because Murphy's law is gonna nail you...

I'll see about updating this thread with some screenshots later. I don't have access to them currently. I will also attempt to run some other benchmarks when I have time and update with my results further.
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
You have something wrong with your config there, or some other board limitation going on (have you tried disabling all C-states in the BIOS to test?). The 6-series chipset can easily reach 1 GB/s (I've literally seen hundreds do it) with just two faster SATA3 drives on the main ports. When you start spanning the SATA3/SATA2 ports, they will obviously all downclock to SATA2 specs even when running SATA3 drives across that span.

Gigabyte's boards are among the best I've seen so far for spanned speeds, and many others have already seen 1300-1400 MB/s on that chipset too. Here's a fairly old post from Doorules showing what they can achieve for max throughput.
http://forums.tweaktown.com/gigabyte/42636-ud7-p67-raid-0-a.html

And I've played with OS volumes having really high sequentials compared to ones with more low-end grunt (random speeds). I would take 4 drives (with 40 flash channels available in your case) running at SATA2 over 2 faster drives running at SATA3 any day for an OS volume. It's actually a night-and-day difference when pushed really hard. Key words: "really hard".

lol.. kinda funny on the reread there. Anyhow.. you get the right picture I hope.
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
groberts101 said: "lol.. kinda funny on the reread there. Anyhow.. you get the right picture I hope."

Yep, I definitely know that P67 is capable of much more throughput, but something is capping my speeds and it isn't the drives. I will try updating my Intel chipset driver. I think I used the ones off the CD; I don't normally do that, but I did in this case, so I figure I will update them.
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Cables the same?

Unless I am mistaken, both original SATA and SATA Rev2 cables are rated for the same throughput. I believe the SATA Rev2 era merely added the metal retention clips. In any case, yes, they are the same.

I went back and retested and came up with the same results. I am using the latest drivers from Intel. There is definitely a cap when using SATA Rev2. Whether it applies only to my motherboard, I am not sure.

If I run the 50MB test (silly, I know) I come up with 950 MB/s, which suggests the small test is being served from cache. The 4000MB and 2000MB tests stay between 745 and 750 MB/s.
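A small test size fitting in the write-back/system cache would explain the inflated 50MB number. A rough model (hypothetical helper and illustrative cache figures, not measured values):

```python
# Hypothetical sketch: blended read throughput when part of a benchmark
# run is served from cache. Time-weighted, so the result is the harmonic
# blend of cached and on-disk speeds.

def effective_read_mbps(test_mb, cache_mb, cache_mbps, array_mbps):
    """Throughput seen by a test of test_mb when the first cache_mb
    of it is served from cache and the rest from the array."""
    cached = min(test_mb, cache_mb)
    on_disk = test_mb - cached
    total_time = cached / cache_mbps + on_disk / array_mbps
    return test_mb / total_time

# Illustrative numbers only: 512 MB cached at ~3000 MB/s,
# true array speed 745 MB/s.
print(effective_read_mbps(50, 512, 3000, 745))    # fully cached -> cache speed
print(effective_read_mbps(4000, 512, 3000, 745))  # mostly on disk -> near 745
```

With a 4000MB run the cached fraction barely moves the average, which is why the larger tests converge on the real array speed.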

Does anyone else have 4 SSDs they can test with Intel P67 RAID? Just use the 4 Intel SATA2 ports.

For those wondering, the setup is this:

Samsung 830 on Port 0 (SATA 3 speed)
DVD-ROM Drive on Port 1 (SATA 1 speed)
Intel X25-M on Port 2 (SATA 2 speed)
Intel X25-M on Port 3 (SATA 2 speed)
Intel X25-M on Port 4 (SATA 2 speed)
Intel X25-M on Port 5 (SATA 2 speed)

The Samsung 830 and the DVD drive are not members of the RAID. All the Intel drives are set to a 128KB RAID stripe, although Intel recommends a 16KB stripe for SSDs. I tried it both ways and performance was pretty much the same in all respects.
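One reason stripe size may make little difference here: what it mostly changes is how many member drives a single request touches, and a small aligned 4K read lands on one drive either way. A rough sketch (hypothetical helper, aligned offsets assumed):

```python
# Hypothetical sketch: count how many RAID 0 member drives one request
# spans for a given stripe size, assuming stripe-aligned start offsets.

def drives_touched(io_kb, stripe_kb, n_drives, offset_kb=0):
    first_stripe = offset_kb // stripe_kb
    last_stripe = (offset_kb + io_kb - 1) // stripe_kb
    return min(last_stripe - first_stripe + 1, n_drives)

for io in (4, 128, 1024):
    for stripe in (16, 128):
        print(f"{io} KB IO, {stripe} KB stripe: "
              f"{drives_touched(io, stripe, 4)} drive(s)")
```

A 4K read touches 1 drive at either stripe size, while large sequential reads span all 4 drives regardless, so both workloads end up largely stripe-size-insensitive on a 4-drive array.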

The Marvell ports are disabled.
 
Last edited:

slow_poke

Junior Member
Dec 26, 2011
22
0
0
4 Crucial C300s on a P67, a year ago:

Capture1-9.png
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
ArchAngel777 said:
"This guy has the closest thing to my setup. I also see that Groberts is mentioned? Is that you?

http://www.ocztechnologyforum.com/f...AII-amp-SATAIII-ports-(Vertex-2-amp-Agility-3)

I am going to run ATTO on it instead of CrystalDiskMark with the same settings to see what I get."
yeah.. that's me. I beta-test hardware for those guys.


Here's the thing: the bulk of the gain you'll see with larger SSD arrays is the low-end grunt and the added Intel caching benefits. It will NOT be the larger array's sequential speeds that make the most perceivable difference in actual usage, UNLESS that usage happens to involve a RAIDed HDD storage volume of about 8 drives in RAID 0 that can actually exercise them. Even then it's more of a "wow! look how fast that transfer went!" than a gain during app usage. Transferring to/from a ramdisk will give you the full view of the potential performance to be had with faster storage.

OR you have freakishly heavy multitasking needs that consist of reading/writing to/from multiple storage volumes all at the same time. The main reason for the latter is that bandwidth is best described as a pie: how large it is, and how aggressively each storage volume's needs devour it at any particular time, dictate whether it's ever really utilized. Most who RAID SSDs are falsely led to believe that the sequentials are what makes the difference in perceived speed for normal usage. That is incorrect for the most part in most typical hardware environments (such as those using non-raided storage while waiting for single-HDD-speed transfers).

Latency and random data performance (ESPECIALLY random writes) are what make an OS volume feel faster. Even more so when you push it harder in multitasking/multiuser environments.
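To put a number on that, the OP's 19 MB/s 4K random-read figure converts to per-request latency like this (a rough sketch; treats 1 MB as 1024 KB and assumes one outstanding request, so it is an approximation of what the benchmark reports):

```python
# Hypothetical sketch: turn a 4K random-read MB/s figure into IOPS and
# average per-request latency, assuming queue depth 1 and 1 MB = 1024 KB.

read_mbps = 19   # 4K random read from the OP's CrystalDiskMark run
block_kb = 4

iops = read_mbps * 1024 / block_kb   # requests completed per second
avg_latency_ms = 1000 / iops         # ms per request at one in flight

print(f"{iops:.0f} IOPS, ~{avg_latency_ms:.2f} ms per 4K read")
```

Roughly 0.2 ms per small read is why the OS feels snappy; sequential MB/s barely enters into it for typical desktop access patterns.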

Even the ICH10R is no slouch for what it will allow when fully populated with 6 faster SSDs. Would I trade the random performance for 1 GB/s sequentials? Not a chance. And that's coming from someone who has an 8-drive HDD array to make use of that higher sequential performance. lol

Notice the latency of RAIDed SandForce arrays.


I was waiting for Ivy Bridge to get the best of both worlds, but I probably won't want to wait for the tock of Ivy Bridge-E to get here. Looking like X79 may be my next hardware stop for a while. :)
 
Last edited: