
My RAID-0 HDTach looks like a seismograph...

edit: Thought I'd bump this for post 18 in case anyone wanted to see it. I plan to try this again when I get my hands on some 8MB-cache drives. Would like to know what the issue is (if it's with AMD's SB750).

Frowntown

Why?
 
1. How much cache does your controller have?
2. Is it set to write-back, e.g. 50% read/50% write?
3. Is the DRIVE CACHE enabled too? (Dangerous, since there's no battery on drives [yet].)

Intel Matrix RAID usually defaults to the drive cache disabled, and has no controller cache at all.

You should use a more thorough test like SQLIO or the standard VMware IOMeter workload.

They give you a more realistic picture of what's going on.
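
For flavor, here is a minimal Python sketch of the kind of sweep HDTach approximates, and that tools like SQLIO and IOMeter do far more rigorously: timed sequential reads at spaced offsets. The device path and drive capacity are assumptions; raw device reads on Windows need admin rights and sector-aligned offsets and lengths.

```python
import time

# Sketch of an HDTach-style read sweep. SQLIO and IOMeter also exercise
# random I/O, queue depth, and mixed read/write, which this does not.
PATH = r"\\.\PhysicalDrive1"   # hypothetical device; requires admin rights
BLOCK = 8 * 1024 * 1024        # 8 MB per timed read (sector-aligned size)
SAMPLES = 64                   # measurement points across the disk
SIZE = 160 * 10**9             # assumed drive capacity; query it in real use

step = (SIZE // SAMPLES) // 4096 * 4096   # keep every offset sector-aligned

with open(PATH, "rb", buffering=0) as f:
    for i in range(SAMPLES):
        f.seek(i * step)
        t0 = time.perf_counter()
        f.read(BLOCK)
        dt = time.perf_counter() - t0
        print(f"offset {i * step / 1e9:6.1f} GB: {BLOCK / dt / 1e6:7.1f} MB/s")
```

A healthy drive or array prints a gently declining staircase from outer to inner tracks; the seismograph shows up as big swings between neighboring offsets.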
 
Originally posted by: soccerballtux
Frowntown

Why?

That does look unhappy.

A couple quick things to check: is the array empty, or are you benching on an array that contains data?

Antivirus software disabled while running the bench?

Is the array (drive) set up in Windows for search indexing? If so, try disabling the indexing.

Is the array set up for auto-defrag? If so, try disabling auto-defrag on the drive/array.

Also, give HD Tune 2.55 a try and see if you get the same Fremont aftershocks that HDTach is showing.

Originally posted by: Emulex
1. How much cache does your controller have?
2. Is it set to write-back, e.g. 50% read/50% write?
3. Is the DRIVE CACHE enabled too? (Dangerous, since there's no battery on drives [yet].)

Intel Matrix RAID usually defaults to the drive cache disabled, and has no controller cache at all.

Regardless of whether the various cache settings are enabled or disabled, the read/write performance across the array should be smooth and continuous unless there is some serious firmware monkey business going on.

The items you bulleted here would be expected to impact performance in a consistent manner, moving the entire graph up or down on the y-axis; they would not be expected to produce read/write rates that swing between sky-high and doldrums in neighboring sectors.

Originally posted by: Emulex
You should use a more thorough test like SQLIO or the standard VMware IOMeter workload.

They give you a more realistic picture of what's going on.

Again this is true, but entirely beside the point: the OP is asking why his HDTach bench looks like crap compared to how others' HDTach benches look. Telling him HDTach itself is just not a good bench does not resolve the root cause of the inconsistent array read bandwidth.
 
Yes.

My only thought was that it could perhaps be remapped sectors; however, these drives are brand new, so I wouldn't expect any bad sectors on the first format(?).
 
Is it an option to un-RAID the drives and test them in a standard SATA config to confirm it is truly a RAID-induced phenomenon?
 
Originally posted by: soccerballtux
I've got a 3rd one I could connect and do that with. Should I put data on it?

64KB stripe; the RAID controller is the SB750.

Do it as an empty partition first. Then, if there's no seismograph in HDTach, go ahead and start adding some data to it and retest to see if you can replicate the observations.
 
Drive benchmarks should be run on empty, unpartitioned volumes for best results. Sometimes it's very smooth and sometimes it looks like yours. If the drive is clean and not in use, then something is fishy. There are other, real-world ways to test as well, to see if the transfer rate is all over the place.

HD Tach can look like that on larger stripe sizes and smoother on smaller stripe sizes as well.
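
To make the stripe-size point concrete, here is a small Python sketch (sizes are illustrative; the 64KB stripe matches the arrays in this thread) of which member drives a single benchmark request touches:

```python
# Which RAID-0 members service one read request, for a given stripe size.
STRIPE = 64 * 1024   # 64 KB stripe, as on the OP's array
DRIVES = 2

def members_hit(offset, length, stripe=STRIPE, drives=DRIVES):
    """Return the set of member drives one request of `length` bytes touches."""
    first = offset // stripe                  # first stripe in the request
    last = (offset + length - 1) // stripe    # last stripe in the request
    return {s % drives for s in range(first, last + 1)}

# A 64 KB request lands on a single member at a time, so small-block
# benchmarks swing with whichever drive happens to answer:
print(members_hit(0, 64 * 1024))             # {0}
print(members_hit(64 * 1024, 64 * 1024))     # {1}

# An 8 MB request spans 128 stripes and always averages across both members,
# which may be why larger benchmark block sizes tend to graph smoother:
print(members_hit(0, 8 * 1024 * 1024))       # {0, 1}
```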
 
Will do.

What other "real world" tests could I do? Copying a file? I don't have anything fast enough to keep up.
64k was the smallest stripe size I could set up. Wondering if the 2MB cache could be it.
 
Depends on the source and destination. File copy works if you have enough drives to create a pair of striped logical drives (four needed). Before single drives reached the gigabit-per-second level, we used to FTP files across the network and watch the network throughput per second to see how smooth or jagged the transfer was. The key with the network test was to use FTP, not SMB. I regularly see 98% utilization on a gigabit line with Vista/7 to Server 2008 R2 with teamed adapters, etc.
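
A rough Python sketch of that kind of real-world check: a chunked copy with a once-per-second rate printout. The file names are hypothetical; pointed at a network share (or wrapped around an FTP transfer) the same loop shows the smooth-versus-jagged behavior described above.

```python
import time

SRC, DST = "bigfile.bin", "copy.bin"   # hypothetical test files
CHUNK = 1024 * 1024                    # 1 MB per read/write

moved, t0 = 0, time.perf_counter()
with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)
        moved += len(buf)
        now = time.perf_counter()
        if now - t0 >= 1.0:            # report roughly once per second
            print(f"{moved / (now - t0) / 1e6:7.1f} MB/s")
            moved, t0 = 0, now
```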
 
http://img294.imageshack.us/i/nodata.png/
http://img170.imageshack.us/i/withdata.png/

So if I have 2 of these drives in RAID-0, and the read speed drops at the same part of each platter/drive, it could be that the poor performance is stacking.

Wonder if it could be anything related to the cache. My WD (16MB cache and 166GB platters), with data on it, reads smooth all the way.

Mechanically, does anyone know why the read speed zig-zags so predictably (in the empty-drive picture)? My WD does that too.
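
On the stacking idea, here is a toy Python model (all speeds are made-up numbers, not measurements) of why a slow region at the same platter position on each member would survive intact in RAID-0: logical offset x is read from a member offset of roughly x/2, so both drives enter their slow zone at the same moment and neither can mask the other.

```python
def member_speed(offset_gb):
    """Hypothetical single-drive curve: steady, with a dip mid-platter."""
    return 60 if 40 <= offset_gb <= 50 else 100   # MB/s, illustrative only

def raid0_speed(logical_gb, drives=2):
    # Both members read in parallel at the same member-relative offset,
    # so array throughput is the sum of the members at offset/drives.
    return sum(member_speed(logical_gb / drives) for _ in range(drives))

for gb in range(0, 320, 40):
    print(f"{gb:3d} GB: {raid0_speed(gb)} MB/s")
# Output dips from 200 to 120 MB/s around 80 to 100 GB logical, the same
# relative position as the single-drive dip, just never averaged away.
```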
 
The mild zig-zag on the single drive is normal in my experience, but it shouldn't get excessive when the drives are stacked in RAID-0 like you have them.

I'm out of ideas to test, though; you've pretty much exhausted my list. If you ever do find an answer, please let me know; I will be forever curious about this.
 
Whoops! I mis-tested last night. In the with-data shot uploaded earlier, you can see I had tested the RAID there. Here's the standalone drive with data on it:

http://img269.imageshack.us/i/withdata.png/

Looks good.
Wonder if we're dealing with a RAID controller issue here?
I'll have to connect the drives to the integrated Gigabyte RAID and see if it's any better or worse. With single drives it was worse, so I threw everything onto the AMD chipset and forgot about it.
 
These first two sections of AMD RAID-0 and Gigabyte RAID-0 results are drives I raided together and benched. They are not my C: drive.

AMD RAID-0:
2x160GB, 64k stripe, unformatted
2x160GB, 128k stripe, unformatted
2x160GB, 64k stripe, 4k cluster
LOL @ this one--3x160GB, 64k stripe, unformatted
HD Tune: 3x160GB, 64k stripe, unformatted, 1MB block size
HD Tune: 3x160GB, 64k stripe, unformatted, 2MB block size
HD Tune: 3x160GB, 64k stripe, unformatted, 4MB block size
HD Tune: 3x160GB, 64k stripe, unformatted, 8MB block size

Gigabyte RAID-0:
2x160GB, 64k stripe, 32k cluster size
2x160GB, 64k stripe, 4k cluster size
2x160GB, 64k stripe, 512-byte cluster size


RAID-0, C: drive, multiple results for comparison: [you guys have seen this data already]
http://img141.imageshack.us/im.../6179/cdrive64k4k2.png
http://img517.imageshack.us/img517/9471/cdrive64k4k.png

Storage drive:
http://img35.imageshack.us/img35/3990/amdwd5000aaks.png

I _think_ it's faster than my old single-drive system, but my computer was so fast already that I couldn't really tell. At this point it's just for bragging rights, I guess, so I really don't care if it's slower.
 
Bump in case anyone finds these results interesting. I would not be surprised if AMD had a slapped-together RAID controller. I don't mind, though. Some RAID is more fun than no RAID.
 
Interesting indeed. I'm almost certain that the problem is the SB750, as RAID-0 on the ICH10R is notably steadier.
 