
Horrible RAID Performance! (pic)

Waffen

Member
Why is my RAID so slow????
Hey guys, today I was benchmarking my system, and when I finally ran the hard drive benchmark it returned a rather low score. What is going on?

HDD.JPG



This is with 2x IBM 60GXP 40GB hard drives in RAID 0 on a Soyo K7V Dragon+. In the RAID setup utility for the Promise controller it is set to UDMA Mode 5, which is UDMA100. I don't know what to do!
 
Well, before this system I played on a system with Ultra 160 SCSI, and it was MUCH faster than my system is now HDD-wise. Shouldn't I be loading HL faster or something?
 


<< How do you have the drives hooked up to the MB? >>


The drives are hooked to the IDE3 port on my Soyo K7V Dragon+. They are on the same ribbon cable.
 
One drive should be on IDE 3 and one on IDE 4, at least on the HighPoint controllers. Both drives are set to master in a RAID 0 config.
 
Um, I didn't think that was how it worked, but can anybody else verify this? Does this go for the Promise controller?
 
Heh, I just posted about an identical setup here a couple days back. Lots of people have been getting very strange scores with RAID arrays using Sandra; something very flaky is going on there. Try using Iometer (get it at http://developer.intel.com/design/servers/devtools/iometer/index.htm) and comparing your scores to those at Storage Review here:

http://www.storagereview.com/comparison.html

Note that your scores will also depend on the size of the test file that you assign to Iometer; a smaller test file seems to perform better than a large one, which makes direct comparison a little tricky. Understand also that it takes a lot more work to run Iometer benches than Sandra benches. Hopefully, more people around here will run Iometer benches and post results for comparison. For reference, my result (using the Medium load Workstation pattern) on the remaining 28 GB of my NTFS partition was 235 I/Os/sec. In comparison, an 80GB D740X scored 138 and a 19GB Cheetah X15 scored 272. So 235 seems appropriate.

Please, if someone here has experience with Iometer, let me know if I should set up the tests differently or expect different results.
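Iometer itself is a native Windows tool with its own access patterns and queue depths, so the numbers above can't be reproduced in a few lines. Purely as an illustration of what an "I/Os per second" figure measures, here is a toy Python sketch (the function name and parameters are made up; it ignores queue depth, OS caching, and read/write mix, all of which Iometer controls):

```python
import os
import random
import tempfile
import time


def measure_random_read_iops(path, file_size=4 * 1024 * 1024,
                             block_size=4096, n_reads=500):
    """Rough random-read I/Os-per-second estimate over a scratch file.

    Toy stand-in for a real tool like Iometer: single-threaded,
    no queue depth, and the OS page cache will inflate the result.
    """
    # Create a scratch file filled with random data.
    with open(path, "wb") as f:
        f.write(os.urandom(file_size))

    # Pick random block-aligned-ish offsets to read.
    offsets = [random.randrange(0, file_size - block_size)
               for _ in range(n_reads)]

    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block_size)
    elapsed = time.perf_counter() - start
    return n_reads / elapsed


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        scratch = tmp.name
    try:
        print(f"~{measure_random_read_iops(scratch):.0f} reads/sec")
    finally:
        os.remove(scratch)
```

The point of the sketch is only that an IOPS number depends heavily on block size, test-file size, and caching, which is exactly why the Sandra and Iometer results above disagree.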
 


<< Um, I didn't think that was how it worked, but can anybody else verify this? Does this go for the Promise controller? >>



Yes, both drives are set as masters on IDE 3 and 4. The manual clearly states this (RTFM!).
 
Yes, they need to be on different channels; otherwise it somewhat defeats the point, and that is 85% of the reason why it seems so horrid. The next is that Sandra isn't a very good disk I/O benchmark. Someone posted a link to StorageReview; yes, go there, great info, and they use the best benchmark I have seen for disk I/O: Intel's Iometer. Connect the drives the way everyone has suggested and performance should improve. Good luck.
-neural
 
It's only been said around 1000 times before, but RAID 0 has very little advantage for general system operation. Its strength is working with large files such as A/V movies, large images, and sound files. Performance can be worse when working with small files like OS .dll and .vxd files. With that in mind, the RAID array should not be the C: (boot) drive. You can make RAID 0 interleave small files better by reducing the stripe size, but then access times go up. You also need to set the cluster size so it divides evenly into the stripe size. Benchmarks will generally look better when small stripe sizes are used, as they usually read small file chunks. Try reformatting your array to a 4K stripe / 4K cluster or an 8K stripe / 4K cluster if you want to use RAID on the boot drive.
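The stripe-size point above can be made concrete with a small sketch. RAID 0 deals out fixed-size stripes across the member drives round-robin, so a file smaller than one stripe sits entirely on a single drive and gets no parallelism. The helper names below are made up for illustration:

```python
def raid0_map(offset, stripe_size, n_drives):
    """Map a byte offset on a RAID 0 array to (drive, offset_on_drive)."""
    stripe_index = offset // stripe_size
    drive = stripe_index % n_drives          # stripes dealt round-robin
    row = stripe_index // n_drives           # which "row" of stripes
    return drive, row * stripe_size + (offset % stripe_size)


def drives_touched(file_offset, file_size, stripe_size, n_drives):
    """How many member drives a contiguous file actually spans."""
    first = file_offset // stripe_size
    last = (file_offset + file_size - 1) // stripe_size
    return len({s % n_drives for s in range(first, last + 1)})


# A 4 KB file on a 2-drive array with 64 KB stripes lives on one drive;
# a 128 KB file spans both, so only the big file reads in parallel.
print(drives_touched(0, 4 * 1024, 64 * 1024, 2))    # -> 1
print(drives_touched(0, 128 * 1024, 64 * 1024, 2))  # -> 2
```

This is also why shrinking the stripe spreads small files across more drives but costs extra seeks, and why the cluster size should divide the stripe size evenly: a cluster that straddles a stripe boundary forces two drives to service one small read.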
 