Are the "good" SSD's being held back by SATA-II?

wraxen

Junior Member
Jun 17, 2009
1
0
0
First, some assumptions; if they aren't accurate, then the whole question asplodes.

1) When we measure performance, we are measuring it averaged over time, and there are peaks and valleys; I am sure you have seen those spiky, zoomed-in graphs of game frame rates, etc.
2) SATA-II caps out at 300 MB/s.
3) The good (Intel/OCZ) SSDs of the day are reported as having a max sequential read throughput of about 250 MB/s in tests.
4) Sequential read (or write) speed is not as important to real-world benefit as random read/write performance.

With those assumptions: are we seeing lower sequential and random numbers on the current crop of nice SSDs because they are capped during certain parts of the cycle? Are there short, localized periods where the drives are bandwidth-capped, lowering the averages?

If it were occurring, would random read results show this effect? Would we ever expect it to happen to write speeds, which are slower and less likely to hit the cap?

One thought: any time the on-drive RAM cache is being hit up, the interface is definitely slower than the drive, so any cache-friendly IO is definitely being slowed down.

The recently ratified next SATA revision doubles the throughput, so it should hit 600 MB/s. Would we expect to see a performance jump purely from the transition to the new interface alone?
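
Here's a toy model of what I mean, combining assumptions 1 and 2. Every number in it is made up; the point is just to show how clipping the peaks at the interface cap drags the measured average down:

```python
# Toy model: does clipping instantaneous throughput at the interface
# cap lower the measured average? All numbers are invented for
# illustration; the real distribution of an SSD's instantaneous
# throughput isn't public.
import random

SATA2_CAP_MBS = 300   # usable SATA-II payload bandwidth
MEAN_MBS = 250        # hypothetical uncapped drive average
SWING_MBS = 150       # hypothetical peak-to-valley swing

random.seed(42)
samples = [MEAN_MBS + random.uniform(-SWING_MBS, SWING_MBS)
           for _ in range(100_000)]

uncapped_avg = sum(samples) / len(samples)
capped_avg = sum(min(s, SATA2_CAP_MBS) for s in samples) / len(samples)

print(f"uncapped average: {uncapped_avg:.1f} MB/s")
print(f"capped average:   {capped_avg:.1f} MB/s")
# Any sample that would have exceeded 300 MB/s gets clipped, so the
# capped average comes out lower even though the uncapped average
# sits below the interface limit.
```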

Curious for your thoughts.
 

Forumpanda

Member
Apr 8, 2009
181
0
0
It seems to me that when interface bandwidth gets increased, eventually tech that utilizes it follows.

I don't think there is any technological reason why SSDs couldn't have more chips in parallel for faster read/write speeds if the interface supported it, and the memory chips used could eventually become faster as well. (SSDs seem to be the first application where speed is weighted at least equally with capacity.)
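
Back-of-the-envelope version of that argument; the per-chip throughput figure below is purely an invented placeholder:

```python
# Aggregate read speed is roughly channels * per-chip speed, until
# the interface cap kicks in. 25 MB/s per chip is a made-up number.
PER_CHIP_MBS = 25
SATA2_CAP_MBS = 300
SATA3_CAP_MBS = 600

for channels in (4, 8, 10, 16, 24):
    raw = channels * PER_CHIP_MBS
    print(f"{channels:2d} channels: raw {raw:3d} MB/s -> "
          f"SATA-II {min(raw, SATA2_CAP_MBS)} MB/s, "
          f"SATA-III {min(raw, SATA3_CAP_MBS)} MB/s")
```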
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
This is an excellent topic.

Two things come to mind. First, rest assured that the engineers tasked with creating these products know what the boundary conditions are.

If they know the device is going to operate behind a SataII interface, then they know not to bother creating a device that can internally operate at bandwidths exceeding the rate-limiting step in the overall topology of the data transfer (from RAM to physical storage media).

The dual-port Acard is an example where the engineers knew their device would have much higher intrinsic bandwidth capability (it did not add cost to have that intrinsic capability) so they made two SataII ports in an attempt to leverage that capability.

Are SSDs held back by the bandwidth limits? Probably not, but no doubt they are engineered to "sit right inside" that upper limit. And no doubt that, were the upper limit increased, the next round of products would be engineered to use it as well.

(meaning I do not think an X25-M or Vertex, as currently engineered, is capable of >300MB/s if you were to theoretically swap out the SataII interface for SataIII)

The second topic you are touching on is that we really have no idea what the intrinsic latency is for the SataII or SataIII protocols themselves. 600MB/s is the max theoretical bandwidth for SataIII, but at what file size? Can the protocol (when implemented in hardware) support 600MB/s of 4KB file writes, or does the intrinsic latency of the topology itself come into play, so that the max theoretical bandwidth for small files is markedly lower owing to limitations on the computer's side of the SataIII interface (RAM, northbridge, etc.) above and beyond the latency limitations of the drive on the other side of the interface?
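
To put a toy number on the small-file worry (the fixed per-command overhead below is a pure guess on my part; I have no idea what the real figure is):

```python
# Back-of-the-envelope: how a fixed per-command cost eats into
# bandwidth for small transfers. The overhead value is an assumption,
# not a measured figure; the point is the shape of the curve.
LINK_MBS = 600        # SATA-III usable payload bandwidth
OVERHEAD_S = 100e-6   # assumed fixed cost per command (a guess)

for size_kb in (4, 64, 1024, 16 * 1024):
    size_mb = size_kb / 1024
    wire_time = size_mb / LINK_MBS              # seconds on the wire
    effective = size_mb / (wire_time + OVERHEAD_S)
    print(f"{size_kb:6d} KB transfers: {effective:6.1f} MB/s effective")
# Small transfers are dominated by the fixed per-command cost, so a
# "600 MB/s" link delivers only a fraction of that at 4KB sizes.
```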
 

TemjinGold

Diamond Member
Dec 16, 2006
3,050
65
91
Originally posted by: Idontcare
This is an excellent topic.

Two things come to mind. First, rest assured that the engineers tasked with creating these products know what the boundary conditions are.

If they know the device is going to operate behind a SataII interface, then they know not to bother creating a device that can internally operate at bandwidths exceeding the rate-limiting step in the overall topology of the data transfer (from RAM to physical storage media).

The dual-port Acard is an example where the engineers knew their device would have much higher intrinsic bandwidth capability (it did not add cost to have that intrinsic capability) so they made two SataII ports in an attempt to leverage that capability.

Are SSDs held back by the bandwidth limits? Probably not, but no doubt they are engineered to "sit right inside" that upper limit. And no doubt that, were the upper limit increased, the next round of products would be engineered to use it as well.

(meaning I do not think an X25-M or Vertex, as currently engineered, is capable of >300MB/s if you were to theoretically swap out the SataII interface for SataIII)

The second topic you are touching on is that we really have no idea what the intrinsic latency is for the SataII or SataIII protocols themselves. 600MB/s is the max theoretical bandwidth for SataIII, but at what file size? Can the protocol (when implemented in hardware) support 600MB/s of 4KB file writes, or does the intrinsic latency of the topology itself come into play, so that the max theoretical bandwidth for small files is markedly lower owing to limitations on the computer's side of the SataIII interface (RAM, northbridge, etc.) above and beyond the latency limitations of the drive on the other side of the interface?

Not that I disagree or anything, but just to play Devil's Advocate: how come improving the interface didn't do the same for mechanical spindle drives, then? Even with SATAII, we don't really have anything that can saturate ATA-100...
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: TemjinGold
Not that I disagree or anything, but just to play Devil's Advocate: how come improving the interface didn't do the same for mechanical spindle drives, then? Even with SATAII, we don't really have anything that can saturate ATA-100...

We are talking about SSDs, for which the bandwidth can easily be dialed in (or up, as it were) by striping across as many discrete flash chips as the engineers care to pack into the device. As more chips mean more cost, there are naturally some cost-management decisions that come into play here.

For hard-drives this is also technically possible when it comes to stacking platters into the drive and striping across the platters. But for hard-drives, the mass limitations of the spindle motor restrict the number of platters that can practically and cost-effectively be placed into the drive.

(engineering the necessary reliability simply begins to dominate the device's market cost, making it uncompetitive in the $/GB category; it's cheaper/easier to reduce platter count and size and just increase the spindle speed instead)

However, we did see the impact of improved SATA interface bandwidth in the burst transfer rates from the drive's cache.
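
As a rough illustration of why burst benchmarks see the interface but sustained ones don't (cache size and platter rate below are just ballpark figures for drives of the era):

```python
# A hard drive can only "burst" above its platter speed for as long
# as its on-drive cache holds out. Numbers are illustrative.
CACHE_MB = 32      # typical drive cache size (assumed)
PLATTER_MBS = 100  # sustained media rate (assumed)
SATA2_MBS = 300    # interface burst rate

burst_ms = CACHE_MB / SATA2_MBS * 1000
print(f"a {CACHE_MB} MB cache drains in ~{burst_ms:.0f} ms at {SATA2_MBS} MB/s")
print(f"after that the drive falls back to ~{PLATTER_MBS} MB/s from the platters")
```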
 

Absolution75

Senior member
Dec 3, 2007
983
3
81
For the record, it should be noted that SATA I/II both have overhead due to the way the data is encoded (8b/10b, which costs 20% of the line rate, if I remember right), which is why 3 Gbit/s works out to 300 MB/s usable rather than 375 MB/s. As far as I can tell, SATA 6Gbit/s keeps the same encoding, so its usable rate is 600 MB/s.

I couldn't find the original site where I read this, but the Wikipedia page mentions it.
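
The arithmetic, for reference, assuming 8b/10b encoding on all three generations:

```python
# 8b/10b encoding sends every 8 data bits as 10 bits on the wire,
# so usable payload is line_rate / 10 bytes per second.
for name, line_gbps in (("SATA 1.5Gb/s", 1.5),
                        ("SATA 3Gb/s", 3.0),
                        ("SATA 6Gb/s", 6.0)):
    payload_mbs = line_gbps * 1e9 / 10 / 1e6
    print(f"{name}: {payload_mbs:.0f} MB/s payload")
```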
 
Nov 26, 2005
15,178
394
126
Originally posted by: Absolution75
For the record, it should be noted that SATA I/II both have overhead due to the way the data is encoded (8b/10b, which costs 20% of the line rate, if I remember right), which is why 3 Gbit/s works out to 300 MB/s usable rather than 375 MB/s. As far as I can tell, SATA 6Gbit/s keeps the same encoding, so its usable rate is 600 MB/s.

I couldn't find the original site where I read this, but the Wikipedia page mentions it.

That's the first thing that came to mind.
 

SunSamurai

Diamond Member
Jan 16, 2005
3,914
0
0
SATA bandwidth is per port, right? Say I'm raiding a few Intels. 500MB/s throughput is doable, as per port it's at 250, and the pipe is like a few GB/s, isn't it?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: TemjinGold
Originally posted by: Idontcare
<snip>

Not that I disagree or anything, but just to play Devil's Advocate: how come improving the interface didn't do the same for mechanical spindle drives, then? Even with SATAII, we don't really have anything that can saturate ATA-100...

The VelociRaptor is actually held back by SATA1... not by a lot, but at the upper end... So I don't know what you mean by not having anything that can saturate ATA-100 (as many drives not as fast as the VelociRaptor can do so).
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: SunSamurai
SATA bandwidth is per port, right? Say I'm raiding a few Intels. 500MB/s throughput is doable, as per port it's at 250, and the pipe is like a few GB/s, isn't it?

Correct. The practical upper limit, even with dedicated raid cards that cost >$1k, is around 1.4-1.6GB/s of throughput regardless of how many drives you raid together to get to that performance tier.

The controller on the raid card itself becomes the rate-limiting step in the topology.

For on-mobo raid solutions (ICH10R, etc.) the practical upper limit appears to be around 800MB/s, presumably for the same reason (the ICH10R controller maxes out).
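
Napkin math on where the scaling flattens out; the per-drive figure and both ceilings are the rough numbers from this thread, not measured values:

```python
# Raid-0 sequential throughput scales linearly with drive count
# until the controller becomes the bottleneck.
PER_DRIVE_MBS = 250           # X25-M class sequential read (rough)
ICH10R_CEILING_MBS = 800      # rough on-mobo limit mentioned above
RAID_CARD_CEILING_MBS = 1500  # rough dedicated-card limit (1.4-1.6GB/s)

for n in range(1, 9):
    raw = n * PER_DRIVE_MBS
    print(f"{n} drives: raw {raw:4d} MB/s -> "
          f"ICH10R ~{min(raw, ICH10R_CEILING_MBS)} MB/s, "
          f"raid card ~{min(raw, RAID_CARD_CEILING_MBS)} MB/s")
```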
 

Mango1970

Member
Aug 26, 2006
195
0
76
So on a related topic... if I wanted to raid my 2 Vertex 60GB drives (actually I just bought a 3rd one -- one for the OS and two in raid 0 for games etc.)... SHOULD I get a dedicated raid card? If so, other than the built-in features of my P5Q-Pro mobo, what would be considered a decent buy, raid-card-wise? The whole idea with SSD is that with just the regular SATA port on most mobos, SSD shines and shows its improvements over regular drives. Do I really need to go and spend a boatload more money on a dedicated hardware raid card to see the true potential of the SSD phenomenon?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: Mango1970
Do I really need to go and spend a boatload more money on a dedicated hardware raid card to see the true potential of the SSD phenomenon?

No you don't.

Where dedicated raid cards and SSDs crossed paths in the realm of "conventional forum wisdom" was in the JMicron controller days, when you wanted to avoid the piss-poor IOPS and latency associated with small-file random writes by effectively masking them with the cache on the discrete raid controller.

Vertex and Intel SSDs don't need such a crutch to deliver on their customers' expectations of sustained performance and a consistent computing experience.
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
Originally posted by: Idontcare
The dual-port Acard is an example where the engineers knew their device would have much higher intrinsic bandwidth capability (it did not add cost to have that intrinsic capability) so they made two SataII ports in an attempt to leverage that capability.

Are SSDs held back by the bandwidth limits? Probably not, but no doubt they are engineered to "sit right inside" that upper limit. And no doubt that, were the upper limit increased, the next round of products would be engineered to use it as well.

So... how come no one has created an SSD that does the same thing: raid0 with itself to increase bandwidth to the system? If they could "turn up" bandwidth at will, why haven't they?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: Denithor
Originally posted by: Idontcare
The dual-port Acard is an example where the engineers knew their device would have much higher intrinsic bandwidth capability (it did not add cost to have that intrinsic capability) so they made two SataII ports in an attempt to leverage that capability.

Are SSDs held back by the bandwidth limits? Probably not, but no doubt they are engineered to "sit right inside" that upper limit. And no doubt that, were the upper limit increased, the next round of products would be engineered to use it as well.

So... how come no one has created an SSD that does the same thing: raid0 with itself to increase bandwidth to the system? If they could "turn up" bandwidth at will, why haven't they?

Cost. Have you seen the prices on the Acard?