Post your Roadkill HDD Results!

Page 3 - AnandTech Forums

bob4432

Lifer
Sep 6, 2003
11,727
46
91
Originally posted by: EricMartello
The RaptorX is a single drive, I'm running two 3200KS using WinXP disk spanning...essentially single drive. My system is an Athlon X2 4800+, 2 GB of Corsair 3500LL memory, Asus A8N32 Deluxe MB - all optimized by me for max performance and stability.

I also defrag regularly - it really does help. :)

these results seem very off - the raptor x has a sataI port....how are you getting 158MB/s? are you sure it is a single drive? the results just don't make sense...

anyway for me:
older 36GB hp u320 15K - 18163,
seagate 74GB 15k.5 - 27381

old maxtor 30GB 7.2K - 3851
hitachi 60GB 7.2k "deathstar" - 5070
wd 120GB 7.2k - 6262
maxtor maxline pro 500 500GB - 8819
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: Minerva
File copy results (especially in Vista) cannot be trusted as a benchmark due to caching.

File copies are the best tests, if you simply use large enough files (and have suitable source and destination and path to work with). Nothing else is a real test. A large enough test file will completely swamp the available RAM, so there's nothing that the system can do to invalidate the results significantly, regardless of how smart the caching is.

That's why I mentioned 8 GB -- that should be big enough for most conventional systems.

I recall a benchmark recommending files at least 4x as big as the available RAM on the system.
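Madwand1's recipe - write a file much larger than RAM, copy it, and time the copy - can be sketched in a few lines. This is only an illustration of the approach; the paths, the function name, and the size are placeholders, and per the thread you would pick a size around 4x system RAM on separate source and destination drives.

```python
import os
import shutil
import time

def measure_copy_str(src_path, dst_path, size_bytes):
    """Create a test file of size_bytes at src_path, copy it to dst_path,
    and return throughput in MB/s. Pick size_bytes several times larger
    than system RAM (the thread suggests 4x) so the OS file cache cannot
    inflate the result."""
    # Write the source file in chunks so we never hold it all in memory.
    chunk = os.urandom(1 << 20)  # 1 MiB of incompressible data
    with open(src_path, "wb") as f:
        for _ in range(size_bytes // len(chunk)):
            f.write(chunk)

    start = time.perf_counter()
    shutil.copyfile(src_path, dst_path)
    elapsed = time.perf_counter() - start
    return (size_bytes / 1e6) / elapsed
```

Note the copy measures a read plus a write on the same path unless source and destination are on different drives, which is why Madwand1 stresses having a "suitable source and destination" to work with.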
 

Gautama2

Golden Member
Jun 13, 2006
1,461
0
0
Using a WD SE 160gb SATA2 drive..I'm getting 5675? Seems kinda low? I just defragged with Diskeeper Lite as well...


Linear Read
MBs read 271 MBs
Elapsed Time 5 seconds
Speed 54.333333 MB/sec


Random Read
MBs Read 13.7188 MBs
Elapsed Time 5.005 Seconds
Speed 2.7411 MB/sec

Access Time
Accesses 514
Elapsed Time 5.011
Access Time 9.75 ms

Overall-5675
 

Minerva

Platinum Member
Nov 18, 1999
2,134
25
91
Originally posted by: Madwand1
Originally posted by: Minerva
File copy results (especially in Vista) cannot be trusted as a benchmark due to caching.

File copies are the best tests, if you simply use large enough files (and have suitable source and destination and path to work with). Nothing else is a real test. A large enough test file will completely swamp the available RAM, so there's nothing that the system can do to invalidate the results significantly, regardless of how smart the caching is.

That's why I mentioned 8 GB -- that should be big enough for most conventional systems.

I recall a benchmark recommending files at least 4x as big as the available RAM on the system.

Even if your data sets exceed the size of your cache the cache still has an overall effect. Probably the easiest way to limit the windows cache is to use MAXMEM switches in BOOT.INI just as long as the OS stays out of paging. With a hardware controller it's easy to turn off cache.

The Roadkill mark factors in access time as well but it seems to have difficulty with interpreting some controller instructions to improve desktop file system performance. With a name of roadkill and an icon of a cat you have to wonder what was going through the author's mind. :Q

Either way it's fun to run these programs and try to interpret the results. :)

With drag racing it's so much easier with the chronograph. It doesn't lie. ;)
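For reference, the /MAXMEM switch Minerva mentions is appended to the OS entry in BOOT.INI; the value is in megabytes. This is a hypothetical entry capping Windows at 512 MB - the device path and description will differ per system, and as Minerva notes you must leave enough RAM that the OS doesn't start paging:

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /MAXMEM=512
```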

 

gerwen

Senior member
Nov 24, 2006
312
0
0
Samsung 40GB IDE in a Celeron 2.4 - 4472
Samsung 80GB SATA in an Athlon 3500+ - 5959

Maxtor 60GB IDE in Core2 E6400 - 5499
Seagate 320GB 7200.10 in same Core2 E6400 - 11057

 

EricMartello

Senior member
Apr 17, 2003
910
0
0
My system is pretty quick in "real world" situations, like loading apps and games almost instantly with minimal disk access. I don't think file copying really provides some kind of insight into how the disk will work overall, because different drives work better with different access patterns. The IO pattern for a game is different than that of a web server, for instance. All that aside, benchmark results should ALWAYS be taken with a grain of salt.
 

Highly Likely

Junior Member
Feb 23, 2006
12
0
0
samsung SP2514N 250GB ATA Physical 0 = 8551

Numbers go down when testing my partitions; C: is highest, F: is lowest.

 

bob4432

Lifer
Sep 6, 2003
11,727
46
91
Originally posted by: EricMartello
My system is pretty quick in "real world" situations, like loading apps and games almost instantly with minimal disk access. I don't think file copying really provides some kind of insight into how the disk will work overall, because different drives work better with different access patterns. The IO pattern for a game is different than that of a web server, for instance. All that aside, benchmark results should ALWAYS be taken with a grain of salt.

mine is pretty quick too, but when you are somehow exceeding interface maximums such as sataI with a 10K rpm hdd something is definitely wrong, especially when others with the same drives are all pretty equal or photoshop is involved...if these numbers are real why don't you enlighten us on what you did to exceed the interface speeds?

what do your hdtach #'s look like?
 

EricMartello

Senior member
Apr 17, 2003
910
0
0
Originally posted by: bob4432
mine is pretty quick too, but when you are somehow exceeding interface maximums such as sataI with a 10K rpm hdd something is definitely wrong, especially when others with the same drives are all pretty equal or photoshop is involved...if these numbers are real why don't you enlighten us on what you did to exceed the interface speeds?

what do your hdtach #'s look like?

I'm not exceeding any maximums. SATA150 is 1.5 Gbps, which translates to about 187 MB/s.

Ran this bench once more and got an even higher score:
RaptorX Test 2 - 73,208

Notice that it never exceeds the 187 MB/s interface limit. What benchmark programs don't really test (can't test?) is how effective the particular drive's logic board is at managing its queue and controlling the access patterns based on requested data.

Now let's look at a better benchmark called ATTO:
RaptorX ATTO Test

In that scenario the drive's baseline performance is much more accurately depicted...but even so, it offers throughput close to what most people get with Raid 0 in addition to having very low seek times. Seek times diminish with Raid 0 quite a bit.

There's a lot more to HD performance than merely transfer rates and access times. If you only pay attention to those figures, it's like rating a CPU by only considering its FSB speed and Core speed.
 

bob4432

Lifer
Sep 6, 2003
11,727
46
91
Originally posted by: EricMartello
Originally posted by: bob4432
mine is pretty quick too, but when you are somehow exceeding interface maximums such as sataI with a 10K rpm hdd something is definitely wrong, especially when others with the same drives are all pretty equal or photoshop is involved...if these numbers are real why don't you enlighten us on what you did to exceed the interface speeds?

what do your hdtach #'s look like?

I'm not exceeding any maximums. SATA150 is 1.5 Gbps, which translates to about 187 MB/s.

Ran this bench once more and got an even higher score:
RaptorX Test 2 - 73,208

Notice that it never exceeds the 187 MB/s interface limit. What benchmark programs don't really test (can't test?) is how effective the particular drive's logic board is at managing its queue and controlling the access patterns based on requested data.

Now let's look at a better benchmark called ATTO:
RaptorX ATTO Test

In that scenario the drive's baseline performance is much more accurately depicted...but even so, it offers throughput close to what most people get with Raid 0 in addition to having very low seek times. Seek times diminish with Raid 0 quite a bit.

There's a lot more to HD performance than merely transfer rates and access times. If you only pay attention to those figures, it's like rating a CPU by only considering its FSB speed and Core speed.

so sata150 is not 150MB/s?
 

EricMartello

Senior member
Apr 17, 2003
910
0
0
No, SATA 1.5 is 1.5 Gbps, which is 187 MB/s, and SATA 3.0 is double that. The proper naming conventions for SATA include SATA 1.5 and SATA 3.0. There is no such thing as "SATA II", but that name is commonly applied to SATA 3.0. SATA II was more like a code name for SATA 3.0 while it was still being developed.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: EricMartello
Now let's look at a better benchmark called ATTO:
RaptorX ATTO Test

In that scenario the drive's baseline performance is much more accurately depicted...but even so, it offers throughput close to what most people get with Raid 0 in addition to having very low seek times. Seek times diminish with Raid 0 quite a bit.

Somewhat surprisingly, it looks like ATTO comes close to getting this right. 82 MB/s is much more reasonable than 177 MB/s.

I have no idea what you're getting at about RAID 0 performance here. It really isn't complicated -- most drives, including some very inexpensive ones, peak somewhere around 60-70 MB/s. Stripe that properly with 2 drives, and you get around 120-140 MB/s, which is quite a bit higher than the 80 MB/s that the Raptor does.

Nobody's arguing here that the Raptor's not a nice drive or that it doesn't have nice access performance. What we're discussing is that the STR benchmark results are wrong. 177 MB/s STR for a single drive clearly indicates a faulty benchmark or bad test. Trying to defend or justify it or even argue about that... is not a good use of energy.
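Madwand1's back-of-envelope RAID 0 math can be written out as a toy model. The numbers are the ones from the thread, and the model is deliberately naive: ideal striped throughput scales linearly with drive count until some interface or bus limit caps it, and real arrays fall short of the ideal.

```python
def ideal_stripe_str(per_drive_mbs, n_drives, interface_mbs):
    """Idealized RAID 0 sequential transfer rate: drives scale linearly
    until the interface/bus becomes the bottleneck. Real arrays come in
    under this because of stripe-size, alignment, and queueing effects."""
    return min(per_drive_mbs * n_drives, interface_mbs)

# Two ~65 MB/s commodity drives striped, on a bus that isn't limiting:
print(ideal_stripe_str(65, 2, 300))   # 130 -- in Madwand1's 120-140 range
# A single ~80 MB/s Raptor on SATA/150 stays a single-drive number:
print(ideal_stripe_str(80, 1, 150))   # 80
```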
 

EricMartello

Senior member
Apr 17, 2003
910
0
0
Originally posted by: Madwand1
Somewhat surprisingly, it looks like ATTO comes close to getting this right. 82 MB/s is much more reasonable than 177 MB/s.

Why is it surprising that ATTO is more accurate? It's a better program. 177 MB/s is not unreasonable. It's called burst speed...what, did you really buy into that propaganda here that you don't need a better interface because no single drive can max it out? That claim ignores burst transfers and their role in daily computer use.

I still remember everyone here agreeing how they should just stick to ATA66 because ATA133 didn't offer any "improvements". It does, when you consider that cache and burst transfers play a big role in how responsive your system is. I bet that is why some people complain that Vista is slow, while others with modest hardware find it to be fast. The whiners got crappy HDs or are still running PATA and they are experiencing the interface bottleneck...while the others are able to utilize the caching to the fullest and enjoy the benefits of burst transfers that are close to the max of the interface bandwidth limit.

I have no idea what you're getting at about RAID 0 performance here. It really isn't complicated -- most drives, including some very inexpensive ones, peak somewhere around 60-70 MB/s. Stripe that properly with 2 drives, and you get around 120-140 MB/s, which is quite a bit higher than the 80 MB/s that the Raptor does.

You think that striping drives doubles all performance across the board? No, it doesn't. It actually slows down responsiveness in a desktop environment because Raid 0 really only benefits access patterns that require large chunks of data being read or written. It sucks with small seeks and random access. Most desktop access patterns, including games, read smaller chunks of data...this means that Raid 0 increases the effective seek times and slows the system down.

Not just that, but the ghetto Raid most people here run is a joke. There is no real controller, there is no caching...and now they have Raid 5 which is computationally intensive. :p Too funny! Raid has its place, and it's not on the desktop.

Nobody's arguing here that the Raptor's not a nice drive or that it doesn't have nice access performance. What we're discussing is that the STR benchmark results are wrong. 177 MB/s STR for a single drive clearly indicates a faulty benchmark or bad test. Trying to defend or justify it or even argue about that... is not a good use of energy.

The benchmark results are what they are. They seem to be reflective of burst transfer...and yeah, if my system is still bursting with THAT kind of load then that's just great. Like I said, I RARELY hear my HDs because of very effective caching, and my system is very responsive.

If you want the best performance, you buy the best stuff available...if you don't have a RaptorX then you don't have the best (in EVERY category) SATA drive currently available. Nobody needs a benchmark to tell them that their drive is a slow POS. They should have known that when they decided to cheap out instead of buying the best parts. :)
 

Computerguy102

Junior Member
Feb 19, 2007
12
0
0
WD2500KS Here is what I got: 6505

Significant improvement in performance over my old IDE drive. 308.25 MBs for Linear Read.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: Minerva
Even if your data sets exceed the size of your cache the cache still has an overall affect. Probably the easiest way to limit the windows cache is to use MAXMEM switches in BOOT.INI just as long as the OS stays out of paging. With a hardware controller it's easy to turn off cache.

You're right in theory. However, in practice with very large files, the impact of the cache is minimal -- it tends to be swamped by the data volume, and the behavior smooths out. This is what I've observed and measured with many tests; I'm not making this stuff up from theory. Note also that local file transfers involve a read + write. While you can optimize reads a lot, the writes aren't nearly as optimized -- there's no "uh huh, same data, don't have to write it to disk!" behavior that I've ever seen. At best there's a bit left in the cache and the app will say "all done", while the last bit is being flushed from cache to disk. However, this effect is relatively very small when you're dealing with a lot of data.

There's a place for both of course, decent benchmarks are useful and when used properly can tell you additional things. But, with the amount of variability that we can observe in disk benchmarks, we need a reference for STR correctness, and for me that's actual large file transfers. It's sound -- copying files is an application.

Of course it's not the only application, and there are other access patterns, etc. -- for those we have different applications and benchmarks.
 

bob4432

Lifer
Sep 6, 2003
11,727
46
91
Originally posted by: EricMartello
Originally posted by: Madwand1
Somewhat surprisingly, it looks like ATTO comes close to getting this right. 82 MB/s is much more reasonable than 177 MB/s.

Why is it surprising that ATTO is more accurate? It's a better program. 177 MB/s is not unreasonable. It's called burst speed...what, did you really buy into that propaganda here that you don't need a better interface because no single drive can max it out? That claim ignores burst transfers and their role in daily computer use.

I still remember everyone here agreeing how they should just stick to ATA66 because ATA133 didn't offer any "improvements". It does, when you consider that cache and burst transfers play a big role in how responsive your system is. I bet that is why some people complain that Vista is slow, while others with modest hardware find it to be fast. The whiners got crappy HDs or are still running PATA and they are experiencing the interface bottleneck...while the others are able to utilize the caching to the fullest and enjoy the benefits of burst transfers that are close to the max of the interface bandwidth limit.

I have no idea what you're getting at about RAID 0 performance here. It really isn't complicated -- most drives, including some very inexpensive ones, peak somewhere around 60-70 MB/s. Stripe that properly with 2 drives, and you get around 120-140 MB/s, which is quite a bit higher than the 80 MB/s that the Raptor does.

You think that striping drives doubles all performance across the board? No, it doesn't. It actually slows down responsiveness in a desktop environment because Raid 0 really only benefits access patterns that require large chunks of data being read or written. It sucks with small seeks and random access. Most desktop access patterns, including games, read smaller chunks of data...this means that Raid 0 increases the effective seek times and slows the system down.

Not just that, but the ghetto Raid most people here run is a joke. There is no real controller, there is no caching...and now they have Raid 5 which is computationally intensive. :p Too funny! Raid has its place, and it's not on the desktop.

Nobody's arguing here that the Raptor's not a nice drive or that it doesn't have nice access performance. What we're discussing is that the STR benchmark results are wrong. 177 MB/s STR for a single drive clearly indicates a faulty benchmark or bad test. Trying to defend or justify it or even argue about that... is not a good use of energy.

The benchmark results are what they are. They seem to be reflective of burst transfer...and yeah, if my system is still bursting with THAT kind of load then that's just great. Like I said, I RARELY hear my HDs because of very effective caching, and my system is very responsive.

If you want the best performance, you buy the best stuff available...if you don't have a RaptorX then you don't have the best (in EVERY category) SATA drive currently available. Nobody needs a benchmark to tell them that their drive is a slow POS. They should have known that when they decided to cheap out instead of buying the best parts. :)

it still strikes me as interesting that others with raptors don't get what you get, that is what i am saying. why is yours higher? i don't care about benches but just find your results way out of the ordinary

and according to wiki:
First-generation SATA interfaces, also known as SATA/150 or SATA 1, run at 1.5 gigabits per second (Gbit/s). Serial ATA uses 8B/10B encoding at the physical layer. This encoding scheme has an efficiency of 80%, resulting in an actual data transfer rate of 1.2 Gbit/s, or 150 megabytes per second (MB/s) (or 143.05 MiB/s). The relative simplicity of a serial link and the use of LVDS allow both the use of longer drive cables and an easier transition path to higher speeds.

is this information incorrect?

here is the actual WD documentation stating 150MB/s
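The arithmetic in the Wikipedia passage bob4432 quotes can be checked in a few lines. 8b/10b encoding carries 8 payload bits per 10 line bits, which is where the 80% efficiency and the 150 MB/s figure come from:

```python
def sata_payload_mbs(line_rate_gbps):
    """Payload bandwidth of a SATA link in MB/s (decimal megabytes).
    8b/10b encoding sends 10 line bits for every 8 data bits, so only
    80% of the raw line rate is payload."""
    data_bits_per_sec = line_rate_gbps * 1e9 * 8 / 10
    return data_bits_per_sec / 8 / 1e6

print(sata_payload_mbs(1.5))   # 150.0 MB/s -- SATA/150, not 187
print(sata_payload_mbs(3.0))   # 300.0 MB/s
print(round(sata_payload_mbs(1.5) * 1e6 / 2**20, 2))  # 143.05 MiB/s
```

The 187 MB/s figure comes from dividing 1.5 Gbps by 8 without accounting for the encoding overhead.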
 

Minerva

Platinum Member
Nov 18, 1999
2,134
25
91
Originally posted by: Madwand1
Originally posted by: Minerva
Even if your data sets exceed the size of your cache the cache still has an overall affect. Probably the easiest way to limit the windows cache is to use MAXMEM switches in BOOT.INI just as long as the OS stays out of paging. With a hardware controller it's easy to turn off cache.

You're right in theory. However, in practice with very large files, the impact of the cache is minimal -- it tends to be swamped by the data volume, and the behavior smooths out. This is what I've observed and measured with many tests; I'm not making this stuff up from theory. Note also that local file transfers involve a read + write. While you can optimize reads a lot, the writes aren't nearly as optimized -- there's no "uh huh, same data, don't have to write it to disk!" behavior that I've ever seen. At best there's a bit left in the cache and the app will say "all done", while the last bit is being flushed from cache to disk. However, this effect is relatively very small when you're dealing with a lot of data.

There's a place for both of course, decent benchmarks are useful and when used properly can tell you additional things. But, with the amount of variability that we can observe in disk benchmarks, we need a reference for STR correctness, and for me that's actual large file transfers. It's sound -- copying files is an application.

Of course it's not the only application, and there are other access patterns, etc. -- for those we have different applications and benchmarks.

I know my setup isn't typical, but going from the stock 256MB cache to 2048MB made large batch copying of compressed video much faster from a striped target to a pass through disk. The files are MUCH larger than the cache in both cases. The pass through disk still has the benefits of the controller's onboard write back cache.

As far as SATA 150 goes, a RAID0 pair will have a (theoretical) burst speed of 300 MB/s because each port has 150 and the sum of the ports (in striping mode) will be the end result. Most times the burst is around 230-260 with a configuration like that.

With a dedicated hardware controller and fast IOP, the burst speed is limited by (in most cases) the speed of the interface itself. PCI-X 133MHz is actually a bottleneck to the ARC line, so PCI-E 8X is better. When you have arrays with 20 disks it makes a difference!

EDIT: All this fancy 'quipment and I still have a 15 year old keyboard and it shows! :Q

 

EricMartello

Senior member
Apr 17, 2003
910
0
0
Originally posted by: bob4432
it still strikes me as interesting that other with raptors don't get what you get, that is what i am saying. why is yours higher? i don't care about benches but just find your results way out of the ordinary

and according to wiki:

First-generation SATA interfaces, also known as SATA/150 or SATA 1, run at 1.5 gigabits per second (Gbit/s). Serial ATA uses 8B/10B encoding at the physical layer. This encoding scheme has an efficiency of 80%, resulting in an actual data transfer rate of 1.2 Gbit/s, or 150 megabytes per second (MB/s) (or 143.05 MiB/s). The relative simplicity of a serial link and the use of LVDS allow both the use of longer drive cables and an easier transition path to higher speeds.

is this information incorrect?

Wiki info is hit-or-miss. Not wrong, but not always 100% correct. I think the fact that this bench shows 177 MB/s contradicts the 80% efficiency generalization. At 1.5 Gbps, 20% overhead is very high.

Why are my results higher? Computers I build always run faster than what other people build. :D Seriously tho, my MB is the Asus A8N32 Deluxe (latest bios), and the drives are connected to the nvidia SATA with raid disabled in the bios. Running XP Pro with MS drivers for the drives. Maybe someone else with a similar setup will chime in and post their results.


Actually no...it's stating:

SATA Hard Drives
150 GB, 1.5 Gb/s, 16 MB Cache, 10,000 RPM

Maybe you misread 150 GB as 150 MB/s?
 
Oct 4, 2004
10,515
6
81
Seagate ST380011A 7200.7 80GB IDE
---Linear Read: 54.9218 MB/s
---Random Read: 3.105 MB/s
---Access Time: 8.97ms
---Overall Score: 6232
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: Minerva
I know my setup isn't typical, but going from the stock 256MB cache to 2048MB made large batch copying of compressed video much faster from a striped target to a pass through disk. The files are MUCH larger than the cache in both cases. The pass through disk still has the benefits of the controller's onboard write back cache.

We're talking about a couple of different things here. First, I was talking about the file system cache, and you're talking about the controller cache. Second, I was concerned about caches' effect on artificially increasing throughput beyond STR (A), and you seem to be talking about cache's ability to improve STR (B). It can be confusing, but these can be balanced -- good test design / not so much caching that the results are artificially enhanced, and enough cache so that the performance is good.

If you have a large enough data set and a good test (*), caches simply cannot artificially increase throughput performance materially. A "good test" in this context is simple -- don't access a limited data set over and over again; process the large data set linearly. We should be clear on this point. There's just no way that e.g. a linear 10 GB test can be significantly artificially enhanced within a 2 GB system RAM configuration. And if you have a ton of RAM so that the RAM to test size ratio starts coming close, then simply increase the test size further. Again, one benchmark specifically advised 4x system RAM -- this is reasonable.

On (B), a large cache can help the system be more efficient -- by enabling a RAID controller to do a good amount of read-ahead in excess of the application's request -- but it cannot exceed the capabilities of the underlying drive system given (*). I believe this is closely related to why your system gets great throughput with this benchmark, but some others don't. The controller's internal algorithm takes advantage of the cache space, and implements a large read-ahead. Other implementations may be more conservative in read-ahead in order to conserve RAM. At 512 MB/s, a 256 MB cache would be exhausted in 1/2 a second. It's hard to understand what time duration is meaningful, but 1/2s is not long, and it's obvious that if the controller tried to keep stuff in cache longer for subsequent access, it'd benefit from more space.

Roadkil's maximum request size is 64k. This is a trivial amount that's well under what you need to keep the average RAID array's drives active concurrently. Here's where the read-ahead of the Areca comes in, and helps it meet the STR that its drives are capable of, while other systems, such as the one I provided benches for earlier, function under their STR capability because they don't have enough concurrent data demand when they don't implement their own aggressive read-ahead into system RAM.

Even Vista's lowly Explorer can request 1 MB at a time from the drive. This is why Roadkil's current benchmark does not adequately measure STR on some RAID arrays.
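Madwand1's point about request size can be made concrete with a toy model: a sequential request of R KB against an array with S KB stripes touches roughly ceil(R/S) consecutive stripes, capped at the number of member drives. The function name and the 4-drive/64 KB-stripe example are illustrative, not anything the thread specifies, and the model ignores alignment and queueing.

```python
import math

def drives_touched(request_kb, stripe_kb, n_drives):
    """Roughly how many member drives a single sequential request keeps
    busy: the request spans ceil(R/S) consecutive stripes, capped at the
    drive count. Toy model -- ignores alignment and queueing."""
    return min(n_drives, math.ceil(request_kb / stripe_kb))

# Roadkil's 64 KB requests against a 4-drive array with 64 KB stripes:
print(drives_touched(64, 64, 4))     # 1 -- three drives sit idle
# Explorer-style 1 MB requests engage the whole array:
print(drives_touched(1024, 64, 4))   # 4
```

This is why controller read-ahead matters here: without it, small requests leave most of the array idle no matter how fast the individual drives are.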
 

Minerva

Platinum Member
Nov 18, 1999
2,134
25
91
It was about 350GB of data and the larger cache did cut down the copy time. Remember the array has a sustained STR of 700 MB/s and the target about 72 MB/s. The activity monitors (each drive has its own) were showing much LESS activity with 2GB cache. It was more cyclic than continuous.

As far as the RK benchmark goes, it seems to show fairly consistent results amongst desktop systems. The passthrough disk benchmarks stratospherically high (1.7 GB/s R/W) in ATTO because the data is in the cache, but RK shows about 10,000. Weird stuff. But run RK over and over and the activity gets reduced and the seek time goes to zero because it's in the cache. Like I said, like a RAM disk without the Air Force One budget. ;)
 

EricMartello

Senior member
Apr 17, 2003
910
0
0
Originally posted by: Madwand1
Again, one benchmark specifically advised 4x system RAM -- this is reasonable.

On (B), a large cache can help the system be more efficient -- by enabling a RAID controller to do a good amount of read-ahead in excess of the application's request -- but it cannot exceed the capabilities of the underlying drive system given (*). I believe this is closely related to why your system gets great throughput with this benchmark, but some others don't. The controller's internal algorithm takes advantage of the cache space, and implements a large read-ahead. Other implementations may be more conservative in read-ahead in order to conserve RAM. At 512 MB/s, a 256 MB cache would be exhausted in 1/2 a second. It's hard to understand what time duration is meaningful, but 1/2s is not long, and it's obvious that if the controller tried to keep stuff in cache longer for subsequent access, it'd benefit from more space.

Even Vista's lowly Explorer can request 1 MB at a time from the drive. This is why Roadkil's current benchmark does not adequately measure STR on some RAID arrays.

Caching is an integral part of disk subsystem performance, and ignoring it and ONLY focusing on the raw performance of the drive is interesting but ultimately pointless.

As I mentioned before, the logic chipset that controls the HD itself plays a big role in how "fast" it is in real world situations, much more so than peak transfer rates. Did you know that the peak transfer rate of the drive varies drastically with the type of access pattern? For example, when booting WinXP even a drive like the RaptorX will peak around 8-9 MB/s! That's a far cry from the raw benchmark results that show 75-80 MB/s...

Benchmarks are supposed to measure performance, and I think that saying cache invalidates a given benchmark is off the mark. If the cache and burst transfers can provide the data at near interface speeds, that means the disk cache is doing its job. It's supposed to be that fast...furthermore, how are people saying that a benchmark is supposed to give some idea of "real world" performance if they choose to ignore the effects of caching and bursting? Do you all run your OSes with caching disabled?

Hard drives are still the main bottleneck in a modern computer, but they've come a long way since the days of PATA. I don't think that looking at transfer rates really says much about the drive's actual performance. You need to evaluate the package, which includes factors like caching effectiveness, on-drive logic and processing as well as bursting.