Originally posted by: sindows
Alright, I give up now. All I know is that you want the lowest possible access time with the highest transfer rate. That's pretty much how I understand it now...
So which of the following would be faster overall?
1. Same transfer speed (say 80MB/s) but different access times (13ms vs 16ms)
2. Same access time but different transfer speeds (using the same numbers as in example 1)?
The answer depends entirely on the size of the files you are accessing.
If you are accessing 1GB files then the access time of either 13ms or 16ms is nothing compared to the 12.8 seconds it will take to read the file (1GB / 80MB/s = 12.8 seconds).
The total time it takes your hard drive to read the 1GB file (assuming it is defragmented) is 13ms + 12.8s = 12.813 seconds.
Had you been using the crappy higher-access-time drive (but still with 80MB/s bandwidth) then your total time for reading the file (again assuming it is defragmented) is 16ms + 12.8s = 12.816 seconds.
Notice how the access time only changes the digit in the third decimal place?
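If it helps, here's that arithmetic as a quick Python sketch (nothing but the example numbers from this thread plugged into access time + size/bandwidth, with 1GB treated as 1024MB like above):

# Time to read one large, defragmented 1GB file at 80MB/s,
# comparing a 13ms and a 16ms access time (example numbers only).
MB = 1024 * 1024
GB = 1024 * MB

bandwidth = 80 * MB                      # bytes per second
transfer_time = (1 * GB) / bandwidth     # 12.8 s

for access_time in (0.013, 0.016):       # 13ms vs 16ms, in seconds
    total = access_time + transfer_time
    print(f"{access_time * 1000:.0f}ms drive: {total:.3f} s")

# Prints:
# 13ms drive: 12.813 s
# 16ms drive: 12.816 s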
Now let's say you aren't interested in doing things with 1GB files, but rather you've got bunches of 20KB files you like to read. In this case your access time remains the same (13 or 16ms) but the time it takes your drive to read the file is drastically lower, as 20KB / 80MB/s = 0.000244 seconds...that is a really small number, as in 0.244 ms or 244 µs.
Thus in this case the total time it takes your hard drive to read the 20KB file (assuming it is defragmented) is 13ms + 0.000244s = 0.013244 seconds = 13.244 ms.
In other words the read time is now almost entirely dominated by the access time. A higher-access-time drive would markedly increase the time it takes to read the file, although the total is still on the order of milliseconds, so you likely won't notice the difference between 13ms and 16ms for a single file.
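Same sketch for one small file (again, just the example numbers):

# Time to read one 20KB file at 80MB/s, 13ms vs 16ms access time.
KB = 1024
MB = 1024 * KB

bandwidth = 80 * MB                      # bytes per second
transfer_time = (20 * KB) / bandwidth    # ~0.000244 s = 244 µs

for access_time in (0.013, 0.016):       # 13ms vs 16ms, in seconds
    total_ms = (access_time + transfer_time) * 1000
    print(f"{access_time * 1000:.0f}ms drive: {total_ms:.3f} ms per file")

# Prints:
# 13ms drive: 13.244 ms per file
# 16ms drive: 16.244 ms per file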
But if you transfer 1GB worth of 20KB files (~52,429 files) then the total time to read that 1GB of small files is now 52,429 x 13.244 ms = 694.37 seconds with the 13ms drive but rises to 52,429 x 16.244 ms = 851.66 seconds with the 16ms drive. Now the access time makes a huge difference but the bandwidth does not.
For 1GB worth of 20KB files, going from an 80MB/s 13ms drive to an 800MB/s 13ms drive only improves the total to 682.85 seconds, even with 10x the bandwidth.
But take that 1GB worth of 20KB files and transfer them with a crappy 30MB/s 0.01ms Intel SSD and suddenly your total time is a mere 34.66 seconds.
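And here is the whole small-file comparison in one go (the drive specs are just the hypothetical numbers used above, including that imaginary 800MB/s drive):

# Total time to read 1GB worth of 20KB files on a few example drives.
KB = 1024
MB = 1024 * KB
GB = 1024 * MB

file_size = 20 * KB
n_files = GB / file_size                 # ~52,429 files

drives = {                               # name: (bandwidth B/s, access time s)
    "80MB/s, 13ms HDD":    (80 * MB, 0.013),
    "80MB/s, 16ms HDD":    (80 * MB, 0.016),
    "800MB/s, 13ms drive": (800 * MB, 0.013),
    "30MB/s, 0.01ms SSD":  (30 * MB, 0.00001),
}

for name, (bandwidth, access_time) in drives.items():
    per_file = access_time + file_size / bandwidth
    print(f"{name}: {n_files * per_file:.2f} s total")

# Prints:
# 80MB/s, 13ms HDD: 694.37 s total
# 80MB/s, 16ms HDD: 851.66 s total
# 800MB/s, 13ms drive: 682.85 s total
# 30MB/s, 0.01ms SSD: 34.66 s total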
Am I making this any clearer or am I just muddying the waters with this example?