The Math Behind Short Stroking HDDs

MrK6

Diamond Member
Aug 9, 2004
I ordered two new 1TB WD Caviar Blacks to replace my current two-year-old 320GB Seagates. I'm excited for the performance increase, but even more I want to make sure I get the most performance out of these new hard drives. I've been short-stroking my hard drives for a while, but I was wondering if anyone had ever mathematically worked out the optimum configuration/percentages/setup for short-stroking. Googling has turned up nothing.

If not, I imagine it would be relatively simple to do some quick calculus to figure out what would be optimal. Basically, one would figure out the rate of speed penalty as one moves from the outer edge of the platter toward the innermost edge. Next, one would figure out how much space one gains moving from the outer edge of the platter to the innermost edge. Analyze the curves and you have your answer for how large a partition to create. However, it might not be so simple, and I don't have the background in HDD technology to verify my hypothesis. First, is HDD space directly proportional to area across the platter, or are there denser and less dense zones? Also, is there any way to make sure you're writing to the outer edge of the disk with your first partition, or does Windows do that automatically every time? Is there a program where you can actually "see" where your partitions physically sit on an HDD?

If my hypothesis is correct, I want to write up some quick equations based on proportionality, and then one can easily multiply by platter density and number of platters to figure out the final "optimal" partition size for maximum speed (or where and how much give and take you can have).
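
To make that concrete, here's a minimal sketch of the kind of model I have in mind. The assumptions are mine, purely for illustration: constant RPM and constant areal density (so transfer rate scales with track radius and capacity with swept platter area), with an inner recording radius around 40% of the outer radius.

```python
# Idealized short-stroking model: constant RPM, constant areal density.
# My assumptions (for illustration only): sequential transfer rate is
# proportional to track radius, capacity is proportional to swept platter
# area, and the inner recording radius is ~40% of the outer radius.
# Real drives use zoned recording, so treat this as a rough sketch.

import math

INNER_RADIUS_RATIO = 0.4  # r_inner / r_outer, a guess for a 3.5" platter


def slowest_relative_rate(capacity_fraction: float) -> float:
    """Transfer rate at the innermost track used, relative to the outer
    edge, when only the outermost `capacity_fraction` of the drive is
    partitioned."""
    ri = INNER_RADIUS_RATIO
    # Capacity swept from the edge down to normalized radius r:
    # (1 - r^2) / (1 - ri^2) = capacity_fraction, solved for r
    r = math.sqrt(1.0 - capacity_fraction * (1.0 - ri ** 2))
    return r  # rate scales with radius, so this is the fraction of peak


for pct in (10, 13, 25, 50, 100):
    print(f"outer {pct:3d}% of capacity -> slowest track at "
          f"{slowest_relative_rate(pct / 100):.0%} of peak transfer rate")
```

With those assumptions, the outer 10-13% of capacity never drops below roughly 94-96% of peak throughput, while the full drive bottoms out around 40% of peak, which looks like the curve shape HDTune produces.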

Thanks :thumbsup:
 

Billb2

Diamond Member
Mar 25, 2005
Ultimate Defrag will not only tell you where files are on an HDD, but will let you move them around too.
 

MrK6

Diamond Member
Aug 9, 2004
Originally posted by: Billb2
Ultimate Defrag will not only tell you where files are on an HDD, but will let you move them around too.
Cool, I'll check it out. However, I still would like to use partitions due to fragmentation/drift/etc.


Originally posted by: Idontcare
What would the math tell you that you wouldn't be likely to surmise by observing the DTR for the drive as a function of capacity?

http://images.anandtech.com/re...aunch/wd150hdtuneS.jpg
Well, extrapolating from HDTune would be an approximation (given how widespread and variable the data is). The flip side, of course, is that the formulas can only be as good as the data used to create them, so it might all be in vain anyway. Looking at that specific chart, it looks like there's little to no performance loss up to about 10-13%, but is that normal or a fluke, and is it the same for all drives? You could run HDTune 5 times on each of 100 different drives to arrive at an average and even derive a formula curve from that average. Instead of actually doing those 500 test runs and then averaging them (the statistical approach), you could arrive at the same conclusion with a little bit of calculus, never mind that it could then be applied to all hard drives of the same speed and form factor (7200RPM 3.5", for example).
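
Something like the sketch below is what I mean by the statistical approach: average a handful of runs and fit a curve. Every number in it is invented just to show the shape of the idea; real values would come straight out of the benchmark.

```python
# Sketch of the "statistical approach": average a few HDTune-style runs and
# fit a simple curve. All numbers below are invented purely to illustrate
# the method; real values would come from the benchmark itself.

import numpy as np

positions = np.array([0, 10, 25, 50, 75, 100])  # % of capacity tested
runs = np.array([                               # sustained MB/s, per run
    [110, 108, 102, 88, 70, 55],
    [111, 109, 101, 87, 69, 54],
    [109, 107, 103, 89, 71, 56],
])

mean_rate = runs.mean(axis=0)                     # average the repeated runs
coeffs = np.polyfit(positions, mean_rate, deg=2)  # quadratic fit
fit = np.poly1d(coeffs)

print("fitted coefficients:", np.round(coeffs, 4))
print("predicted rate at 13% of capacity:", round(float(fit(13)), 1), "MB/s")
```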
 

Intelman07

Senior member
Jul 18, 2002
My math tells me that I have seen this on another forum. Strange how the same posts pop up everywhere.
 

Idontcare

Elite Member
Oct 10, 1999
Originally posted by: MrK6
Originally posted by: Idontcare
What would the math tell you that you wouldn't be likely to surmise by observing the DTR for the drive as a function of capacity?

http://images.anandtech.com/re...aunch/wd150hdtuneS.jpg
Well, extrapolating from HDTune would be an approximation (given how widespread and variable the data is). The flip side, of course, is that the formulas can only be as good as the data used to create them, so it might all be in vain anyway. Looking at that specific chart, it looks like there's little to no performance loss up to about 10-13%, but is that normal or a fluke, and is it the same for all drives? You could run HDTune 5 times on each of 100 different drives to arrive at an average and even derive a formula curve from that average. Instead of actually doing those 500 test runs and then averaging them (the statistical approach), you could arrive at the same conclusion with a little bit of calculus, never mind that it could then be applied to all hard drives of the same speed and form factor (7200RPM 3.5", for example).

You would generate the data from the specific drives of interest, of course. The underlying physical parameters (track width, linear bit density, etc.) all affect the end results but vary for every drive out there, so there is no reason to expect a generic equation to have pre-multiplier constants applicable to anything broader than a given product family from the same manufacturer.

My point was that HDTune (or any equivalent "bandwidth versus track" benchmark) already gives you the empirical data about your specific drive that you need to make your decision. The data is already there; now you just need to decide exactly what your question is.

Questions could be "what capacity do I use as my cutoff point for short-stroking the drive while still retaining >80% of peak bandwidth on the last track of my short-stroked drive?", etc.

You would ask this question whether you had an analytical expression of the data or the actual empirical data itself. Since you can readily generate the empirical data and answer your question directly, why introduce error into your answer by resorting to an analytical expression that would no doubt be missing some terms (you want a simple model for the sake of sheer usability, no doubt) and would therefore be a mere approximation of the actual drive's capabilities? The empirical data, by contrast, is exactly what it is.
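
For instance, a few lines of script over the benchmark's own output answer that cutoff question directly. The (capacity %, MB/s) pairs below are placeholders, not measurements; you would substitute whatever HDTune reports for your specific drive.

```python
# Answering the cutoff question straight from the benchmark's own output.
# The (capacity %, MB/s) pairs are placeholders; substitute the numbers
# your benchmark reports for your specific drive.

samples = [
    (0, 110), (10, 108), (20, 104), (30, 99), (40, 93),
    (50, 87), (60, 80), (70, 73), (80, 66), (90, 59), (100, 52),
]


def short_stroke_cutoff(min_fraction_of_peak: float) -> int:
    """Largest tested capacity percentage whose measured rate still meets
    the requested fraction of the drive's peak rate."""
    peak = max(rate for _, rate in samples)
    threshold = min_fraction_of_peak * peak
    return max(pct for pct, rate in samples if rate >= threshold)


print("capacity usable at >=80% of peak:", short_stroke_cutoff(0.80), "%")
```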
 

chrisf6969

Member
Mar 16, 2009
I've seen the figure of 10% thrown around for maximum-performance short stroking of a hard drive.

The smaller the percentage, the further out toward the edge you stay and the higher the overall performance. But at a certain point you hit diminishing returns in performance at a big cost in storage space.

If you look at HD Tach and most other benchmarks, performance really only drops off noticeably around the 50% mark. So you could increase performance decently just by trimming off the slowest 50% of the storage space: partition your 1TB drives down to 500GB and you get increased performance without sacrificing too much space.

Realize that the outermost tracks hold more information because the circumferences of the rings at the outer edge are probably about 3x longer than the innermost rings.

So by cutting it to 50% of the storage space, you're probably using only the outermost third or so of the radius.
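
Here's a quick back-of-the-envelope check of that, assuming capacity scales with platter area and the inner recording radius is about a third of the outer radius (both assumptions are mine):

```python
# Back-of-the-envelope check of the "50% of capacity = outer third of the
# radius" rule of thumb. Assumptions (mine): capacity scales with platter
# area and the inner recording radius is about 1/3 of the outer radius.

import math

r_inner = 1.0 / 3.0       # normalized inner radius (outer radius = 1.0)
keep_fraction = 0.5       # keep only the fastest half of the capacity

# Innermost radius reached by the outer half of the capacity:
r_cut = math.sqrt(1.0 - keep_fraction * (1.0 - r_inner ** 2))

recorded_band = 1.0 - r_inner      # radial width that actually holds data
used_depth = 1.0 - r_cut           # how far in the partition reaches

print(f"partition reaches in to r = {r_cut:.2f} of the outer radius")
print(f"that is the outer {used_depth / recorded_band:.0%} of the recorded band")
```

It comes out to a bit more than a third of the recorded band, so the rough rule of thumb holds up.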

http://www.tomshardware.com/re...stroking-hdd,2157.html
Short stroking aims to minimize performance-eating head repositioning delays by reducing the number of tracks used per hard drive. In a simple example, a terabyte hard drive (1,000 GB) may be based on three platters with 333 GB storage capacity each. If we were to use only 10% of the storage medium, starting with the outer sectors of the drive (which provide the best performance), the hard drive would have to deal with significantly fewer head movements.

 

MrK6

Diamond Member
Aug 9, 2004
Originally posted by: Idontcare
Originally posted by: MrK6
Originally posted by: Idontcare
What would the math tell you that you wouldn't be likely to surmise by observing the DTR for the drive as a function of capacity?

http://images.anandtech.com/re...aunch/wd150hdtuneS.jpg
Well, extrapolating from HDTune would be an approximation (given how widespread and variable the data is). The flip side, of course, is that the formulas can only be as good as the data used to create them, so it might all be in vain anyway. Looking at that specific chart, it looks like there's little to no performance loss up to about 10-13%, but is that normal or a fluke, and is it the same for all drives? You could run HDTune 5 times on each of 100 different drives to arrive at an average and even derive a formula curve from that average. Instead of actually doing those 500 test runs and then averaging them (the statistical approach), you could arrive at the same conclusion with a little bit of calculus, never mind that it could then be applied to all hard drives of the same speed and form factor (7200RPM 3.5", for example).

You would generate the data from the specific drives of interest, of course. The underlying physical parameters (track width, linear bit density, etc.) all affect the end results but vary for every drive out there, so there is no reason to expect a generic equation to have pre-multiplier constants applicable to anything broader than a given product family from the same manufacturer.

My point was that HDTune (or any equivalent "bandwidth versus track" benchmark) already gives you the empirical data about your specific drive that you need to make your decision. The data is already there; now you just need to decide exactly what your question is.

Questions could be "what capacity do I use as my cutoff point for short-stroking the drive while still retaining >80% of peak bandwidth on the last track of my short-stroked drive?", etc.

You would ask this question whether you had an analytical expression of the data or the actual empirical data itself. Since you can readily generate the empirical data and answer your question directly, why introduce error into your answer by resorting to an analytical expression that would no doubt be missing some terms (you want a simple model for the sake of sheer usability, no doubt) and would therefore be a mere approximation of the actual drive's capabilities? The empirical data, by contrast, is exactly what it is.
Very interesting; I didn't know there was that much variation within each "class" of HDDs beyond just platter number and density (class being 3.5" 7200RPM drives, for instance). The original question was, and still is, where the optimal trade-off between speed and storage space lies when setting up a drive for short-stroking. Granted, that is probably at the discretion of the user, but I was hoping to find a "sweet spot" along the curves. However, given what you've told me (thanks, I'm not that well versed in the tech), it does make more sense to keep approximating from HDTune. I was just looking for something more precise, but it seems like that's the easiest way to go, and actually calculating it might not be more accurate anyway.
Originally posted by: chrisf6969
I've seen the figure of 10% thrown around for maximum-performance short stroking of a hard drive.

The smaller the percentage, the further out toward the edge you stay and the higher the overall performance. But at a certain point you hit diminishing returns in performance at a big cost in storage space.

If you look at HD Tach and most other benchmarks, performance really only drops off noticeably around the 50% mark. So you could increase performance decently just by trimming off the slowest 50% of the storage space: partition your 1TB drives down to 500GB and you get increased performance without sacrificing too much space.

Realize that the outermost tracks hold more information because the circumferences of the rings at the outer edge are probably about 3x longer than the innermost rings.

So by cutting it to 50% of the storage space, you're probably using only the outermost third or so of the radius.

http://www.tomshardware.com/re...stroking-hdd,2157.html
Short stroking aims to minimize performance-eating head repositioning delays by reducing the number of tracks used per hard drive. In a simple example, a terabyte hard drive (1,000 GB) may be based on three platters with 333 GB storage capacity each. If we were to use only 10% of the storage medium, starting with the outer sectors of the drive (which provide the best performance), the hard drive would have to deal with significantly fewer head movements.
Tom's suggested it and I've seen 10% thrown around elsewhere as well. I'm thinking that on my new 1TB drives I might try 130GB (about 13%) just to give myself some breathing room in case I ever need it. The HDD should write to the outer edge of the partition first, so I should only end up using the last "slow part" if I almost completely fill the drive, right?
 

SickBeast

Lifer
Jul 21, 2000
I tend to just create a partition that will hold Windows plus whatever applications I'm going to install. Instead of wasting the remaining space, I create another partition and store all of my archival stuff on it that I rarely access.

Typically the maximum sustained transfer rate will drop off substantially after about the 50% mark. Access times will always benefit from a smaller partition.
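
A toy way to see the access-time side of it is below. It only shows the direction of the effect: seek time isn't linear in distance, and a capacity fraction isn't quite the same as a stroke fraction, since the outer tracks hold more data.

```python
# Toy illustration of the access-time side: confine random seeks to the
# outermost fraction of the stroke and the average seek *distance* shrinks
# in proportion. Seek *time* is not linear in distance (settle time and
# acceleration dominate short seeks), so this only shows the direction of
# the effect, not real latency numbers.

import random


def avg_seek_distance(stroke_fraction: float, trials: int = 100_000) -> float:
    """Average distance between two random head positions, both confined to
    the outermost `stroke_fraction` of the full stroke (full stroke = 1.0)."""
    return sum(
        abs(random.uniform(0, stroke_fraction) - random.uniform(0, stroke_fraction))
        for _ in range(trials)
    ) / trials


for frac in (1.0, 0.5, 0.15):
    print(f"partition spans {frac:.0%} of the stroke -> "
          f"average seek distance ~{avg_seek_distance(frac):.3f} of full stroke")
```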
 

SunSamurai

Diamond Member
Jan 16, 2005
I got the same drive a few weeks ago and this was on my mind as well. I did 15% of the drive for the OS and apps. I have a dozen games and full PS installed with 30 gigs left (around 130GB on the OS partition). Performs well.

Access times drop off after about 15%, but not by a lot, and with a 1TB drive you will need a lot of apps and games installed to go much over 130GB. The bulk of the space is taken up by all the documents. I would go up to 200GB but no more, unless you like installing everything under the sun.

Swap file should go on a separate drive.
 

taltamir

Lifer
Mar 21, 2004
As far as I can tell, only the drive makers have the data needed to perform such a calculation. You could set up a calculus equation, but you would not be able to actually solve it for a specific drive.