Originally posted by: Tostada
Originally posted by: Pariah
How many people have 250GB of frequently accessed data? Any home user? You're too closed-minded. You don't have to leave the drive empty to benefit from short-stroking. Years ago people came up with a way to "partition" drives. Using this secret method, you can confine your frequently accessed data to a small partition at the start of the drive while creating one or more additional partitions on the rest of it for infrequently used data. That way you can guarantee your important data doesn't drift all over the platters and stays on the fastest part of the drive, yet you can still use the whole thing. Pretty amazing, huh?
You can argue all you want, and theoretically you have a point, but the benchmarks prove you wrong.
Yes, you can partition off the beginning of your 250GB drive, but it just doesn't matter. Most of the benchmarks already focus on the beginning of the drive, and they prove it doesn't make much of a difference.
I don't know what else to say. It is unfortunate that multiple people here are giving out bad information with nothing to back it up. As an old member with almost 5000 posts, you should be particularly ashamed. It's pathetic for you to call me "closed-minded" for looking at the hard numbers. Even if I were to ignore the benchmarks, there are just as many arguments in favor of a lower platter count, like improved access times, improved reliability, and less heat.
The benchmarks show nothing if you don't understand what it is they're testing. SR's benchmarks only test peak performance under ideal conditions. Their results won't show real-world performance issues like file system fragmentation or having to use the slower parts of the drive, which anyone who actually uses a hard drive runs into. They use IOmeter to test server performance; if you know how IOmeter tests drive performance, you know why platter count is completely irrelevant to it, and why SR has stopped using it for workstation (single-user) benchmarking.

Their workstation benchmarks, again, test ideal-scenario performance. How much space do you think their OS installation and benchmarks take up? 10GB tops? How is using that much space on a 400GB vs. a 250GB drive, where more than 95% of each drive is empty and every file is perfectly contiguous, going to mimic real-world usage that would show the effects of adding platters? If they partitioned the drive and tested each section separately, I guarantee you would see noticeable differences in performance. Even that still wouldn't account for fragmentation and having to seek across different portions of the platter, which is practically impossible to replicate. Easily repeatable "real world" hard drive benchmarks are practically impossible to create, which is why almost all we have are low-level benchmarks that can't tell the whole story.
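To make it concrete, here's the kind of quick-and-dirty test I'm talking about. This is just a rough Python sketch, not SR's methodology: it times sequential reads at a few offsets across a raw disk to show how much slower the inner tracks are. The /dev/sdX path is a placeholder, you'd need root, and for clean numbers you'd also want to flush or bypass the OS page cache before each run.

# Rough sketch: compare raw sequential read speed at different points on a disk.
# Assumes Linux and root access; /dev/sdX is a placeholder device, not a real path.
# Note: the OS page cache can skew results, so drop caches (or use direct I/O) first.
import os
import time

DEVICE = "/dev/sdX"         # placeholder: replace with the drive you want to test
CHUNK = 1024 * 1024         # read 1 MiB at a time
SAMPLE = 256 * CHUNK        # read 256 MiB at each test point

fd = os.open(DEVICE, os.O_RDONLY)
disk_size = os.lseek(fd, 0, os.SEEK_END)   # total size of the device in bytes

for fraction in (0.0, 0.25, 0.5, 0.75, 0.95):
    offset = int(disk_size * fraction) // CHUNK * CHUNK   # align to 1 MiB
    os.lseek(fd, offset, os.SEEK_SET)
    start = time.time()
    done = 0
    while done < SAMPLE:
        buf = os.read(fd, CHUNK)
        if not buf:          # hit the end of the device
            break
        done += len(buf)
    elapsed = time.time() - start
    print(f"{fraction:>4.0%} into the disk: {done / elapsed / 1e6:.1f} MB/s")

os.close(fd)

On any modern drive the numbers at 75% and 95% come out well below the ones at the start, which is exactly the part of the drive the "ideal conditions" benchmarks never touch.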

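And on the short-stroking point: here's a back-of-the-envelope sketch of why confining activity to a small partition at the start of the drive cuts seek distances. It assumes requests are spread uniformly over the active span, which real workloads aren't, so treat it as an illustration only.

# Illustration only: estimate the average distance between two random head
# positions when activity is confined to the first span_fraction of the drive.
# Assumes uniformly random requests, which real workloads are not.
import random

def avg_seek_distance(span_fraction, trials=200_000):
    """Average distance between two random positions inside the first
    span_fraction of the drive (full drive width normalized to 1.0)."""
    total = 0.0
    for _ in range(trials):
        a = random.random() * span_fraction
        b = random.random() * span_fraction
        total += abs(a - b)
    return total / trials

full = avg_seek_distance(1.0)    # random access over the whole drive
short = avg_seek_distance(0.2)   # activity confined to the first 20%

print(f"full stroke  : {full:.3f} of the drive width on average")
print(f"20% partition: {short:.3f} of the drive width on average")
print(f"that's {short / full:.0%} of the full-stroke seek distance")

With uniformly random requests the average seek distance works out to roughly a third of whatever span the head has to cover, so a partition covering the first 20% of the drive drops the average seek to about 20% of the full-stroke distance. That's the whole point of keeping the hot data in a small partition up front, and it's exactly the kind of effect a benchmark run on a mostly empty drive will never show.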