I think I have reached the max of the ICH10R

Elganja

Platinum Member
May 21, 2007
4 M4s in RAID 0 - 32 KB stripe size

[attached benchmark screenshot: GTUUi.png]
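For anyone new to striping, here's roughly how a 32 KB stripe spreads sequential I/O across four drives. This is a sketch of the mapping only, not the actual RST implementation, and the helper name is mine:

```python
# How a 32 KB stripe spreads I/O across a 4-drive RAID 0.
# Illustrative mapping only; the real logic lives in the RST option ROM/driver.
STRIPE = 32 * 1024   # assumed 32 KB stripe size
N_DRIVES = 4

def drive_for_offset(byte_offset):
    """Return which member drive services a given array byte offset."""
    return (byte_offset // STRIPE) % N_DRIVES

# A 128 KB sequential read touches each of the four drives exactly once,
# which is why sequential speed should scale with drive count:
for off in range(0, 128 * 1024, STRIPE):
    print(f"offset {off // 1024:>3} KB -> drive {drive_for_offset(off)}")
```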

moriz

Member
Mar 11, 2009
That's odd; it should be about twice that for sequential read. Everything else looks correct, though.

Elganja

Platinum Member
May 21, 2007
moriz said: That's odd; it should be about twice that for sequential read. Everything else looks correct, though.

I think it is a limitation of the ICH10R ... there just isn't the bandwidth to do more with it.

Hopefully groberts101 or Rubycon will chime in.

jrocks84

Member
Mar 18, 2010
I do believe I saw someone say that there was a bandwidth limit of around 600 MB/s for the ICH10R, which is in line with what you are seeing.
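For context, the usual explanation is that the ICH10R hangs off the DMI link, commonly cited at roughly 1 GB/s per direction; after overhead that lands in the mid-600s. A back-of-envelope sketch (the 35% overhead figure is an assumption, not a measured value):

```python
# Back-of-envelope for the ICH10R ceiling.
# Assumptions (not measurements): the ICH10R's DMI uplink is commonly
# cited at ~1000 MB/s per direction, and I'm guessing ~35% is lost
# to protocol and arbitration overhead.
dmi_raw_mb_s = 1000
overhead = 0.35

usable = dmi_raw_mb_s * (1 - overhead)
print(f"Estimated usable bandwidth: ~{usable:.0f} MB/s")
# -> ~650 MB/s, the same ballpark as the ~600 MB/s limit mentioned above
```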

groberts101

Golden Member
Mar 17, 2011
It just depends on the rig's hardware, the amount of OC, and the number of drives used. "Average" top R/W speeds are usually around 700/675 MB/s for the ICH10R.

A couple of tips for the power-hungry types.

1. RAM speed does make a difference.
2. QPI frequency does make a difference.
3. PCI-E overclocking does make a difference (see the sketch after this list).
4. Disabling all CMOS- and OS-related power-saving options does make a difference.
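If the ceiling really sits in the PCI-E/DMI clock domain, raising the base clock should scale usable bandwidth roughly linearly. A minimal sketch of that assumption (the 650 MB/s baseline is the estimate from the earlier post, not a spec value):

```python
# Rough linear-scaling model for PCI-E base-clock overclocking.
# Assumption: the usable ICH10R/DMI bandwidth scales with the PCI-E
# base clock, starting from an estimated ~650 MB/s at the stock 100 MHz.
baseline_mb_s = 650
stock_mhz = 100

for clock_mhz in (100, 105, 108, 110):
    scaled = baseline_mb_s * clock_mhz / stock_mhz
    print(f"{clock_mhz} MHz base clock -> ~{scaled:.0f} MB/s usable")
```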

With all the tweaks known to man being used on my X58, I've passed the 900/780 MB/s mark with ATTO, and passed 740 MB/s reads with benchmarks like AS SSD and CDM3.

So, factory limits can be raised quite a bit if you try hard enough.

PS: Yeah, that AS SSD score could probably be tweaked a bit more with some of the above, but you'd likely get better results from 6 x 128GB drives than from 4 of the faster 256GB models. That's mainly due to SATA channel restrictions, more channels of NAND overall, and increased RAM-caching benefits from a wider array.
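The 6 x 128GB vs. 4 x 256GB point boils down to where the bottleneck sits: sequential throughput is roughly the smaller of (drive count x per-drive speed) and the controller ceiling. A toy model of that trade-off (the per-drive speeds are illustrative guesses, not M4 measurements):

```python
# Toy model: sequential throughput = min(drives * per-drive speed, controller cap).
# The per-drive speeds below are illustrative, not measured M4 figures.
CONTROLLER_CAP_MB_S = 700  # assumed ICH10R ceiling after tweaks

def array_throughput(n_drives, per_drive_mb_s):
    return min(n_drives * per_drive_mb_s, CONTROLLER_CAP_MB_S)

print("4 x 256GB @ ~500 MB/s each:", array_throughput(4, 500), "MB/s")
print("6 x 128GB @ ~350 MB/s each:", array_throughput(6, 350), "MB/s")
# Both saturate the controller for sequential reads; the wider array's edge
# shows up in small-block and mixed loads, where more SATA ports and more
# NAND channels are in flight at once.
```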

I surely wouldn't kick that array out of my rig, though. Even funnier is the fact that going from all 4 drives down to a single-drive config makes EVERYTHING seem slower, regardless of all the "testing and theories" floating around out there. Seeing is believing... and a 4-drive-wide array will make you believe.

Hope that helps.

Rubycon

Madame President
Aug 10, 2005
Out of all of those, PCI-E o/c makes the largest difference.
Of course, going too far is going to hurt, sometimes badly (your data, silent corruption, etc.).

I'd keep it under 110 MHz; some hosts, notably the Areca 1880ix series, have a hard shut-off at approximately 108 MHz. Go to 109 and the BIOS is skipped on POST!

I had 1680 and 1280 controllers that would go as high as 115 MHz, but single-bit ECC trips were fairly common on I/O-intense tasks. I would NEVER advocate such a high PCI-E o/c on production hardware. (I always twist arms until something gives, just to see what the actual limits are, however.)
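To put those clocks in perspective: 108 MHz is only 8% over the 100 MHz PCI-E spec, and 115 MHz is 15% over, which is about where signal margins run out and ECC trips start. A quick calculation (the 250 MB/s-per-lane figure assumes PCIe 1.x signaling, and linear scaling with the base clock is my assumption):

```python
# How far over the 100 MHz spec those base clocks are, and the implied
# per-lane rate. Assumption: PCIe 1.x signaling, i.e. 250 MB/s per lane
# at the stock 100 MHz base clock, scaling linearly with the clock.
SPEC_MHZ = 100
LANE_MB_S_AT_SPEC = 250

for clock_mhz in (108, 110, 115):
    over_pct = (clock_mhz - SPEC_MHZ) / SPEC_MHZ * 100
    lane = LANE_MB_S_AT_SPEC * clock_mhz / SPEC_MHZ
    print(f"{clock_mhz} MHz: {over_pct:.0f}% over spec, ~{lane:.0f} MB/s/lane")
```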