Will SSDs any time soon exceed the bandwidth of SATA2?

Eeqmcsq

Senior member
Jan 6, 2009
407
1
0
Many SSDs are already bumping into the SATA 2 limit, mostly on sequential read operations.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
And according to my 1+1=2 math, the OCZ Colossus should max out a SATA3 port on sequential reads

*edit*
OCZ Colossus X4... and wow, the standard ones are already on the market; I never even knew about that
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The only reason the OCZ Colossus hasn't exceeded SATA gen 3 speed (which is 2x the speed of SATA gen 2) is the limitation of its built-in RAID0 controller, which tops out just a hair under the max speed of SATA gen 2. As SATA gen 3 becomes more mainstream over the next few years, you can expect them to ship it with a better internal RAID0 controller (controllers that exceed SATA gen 3 by quite a lot already exist; the Colossus isn't using one because the extra speed would be wasted given SATA gen 2 limitations).
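A rough back-of-envelope sketch of that bottleneck argument; the 280MB/s internal-controller ceiling below is an assumption for illustration ("just a hair under" SATA gen 2), not an OCZ spec:

```python
# Interface limits quoted in this thread (MB/s).
SATA2_LIMIT = 300
SATA3_LIMIT = 600

def effective_sequential_read(internal_controller_mbps, interface_limit_mbps):
    """Sequential throughput is bounded by the slowest stage in the path."""
    return min(internal_controller_mbps, interface_limit_mbps)

# Assumed ceiling of the drive's built-in RAID0 controller (illustrative figure).
internal_raid0 = 280

print(effective_sequential_read(internal_raid0, SATA2_LIMIT))  # 280: SATA2 is not yet the bottleneck
print(effective_sequential_read(internal_raid0, SATA3_LIMIT))  # still 280: the internal controller is
```

Which is why a faster internal controller only pays off once the host interface moves past SATA gen 2.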
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
Is this a trick question? SATA 2 SSDs won't exceed SATA 2 bandwidth today, in three years, or in five years. PCIe SSDs can already exceed SATA 2 bandwidth, and when SATA 3 arrives in force and drives move to the SATA 3 standard, SATA 3 SSDs will start to exceed SATA 2 bandwidth.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
As mentioned, we're already butting against the SATA2 limit on sequential reads with several drives. We'll be doing the same for SATA3 whenever the manufacturers develop wider controllers. Getting random small file reads and writes to bump against SATA2 limits is going to take a while.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Thanks for the answers.

Does anyone know when SATA 3 will appear on mobos?
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Isn't SATA half duplex and PCI Express full duplex?

I wouldn't sweat peak read/write unless you're a benchmark queen.

There's a reason FC is still mostly 4Gb and 8Gb is only now gaining traction (though FCoE over 10GbE will be a challenger). Most workloads that aren't purely sequential don't get anywhere near peak utilization in real life.
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
SATA is half, SAS is full, PCIe is full.

Peak is still really useful. I was less impressed by the performance of the Vertex (I do like the quiet) versus the SAS RAID 5 array. It seems like the 800+MB/s peak on the SAS array hid a lot of the access-time disadvantages of the 2.5" 15k rpm drives. On the other hand, this was a desktop workstation, not a server running heavy transaction loads. Then again, for non-benchmark queens (desktop users) it's really a mix of sequential and random, large and small transfers that determines performance...
 

Ayah

Platinum Member
Jan 1, 2006
2,512
1
81
Originally posted by: ilkhan
As mentioned, we're already butting against the SATA2 limit on sequential reads with several drives. We'll be doing the same for SATA3 whenever the manufacturers develop wider controllers. Getting random small file reads and writes to bump against SATA2 limits is going to take a while.

Can't even break ATA133 with random small writes. :(
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Just learning
Thanks for the answers.

Does anyone know when SATA 3 will appear on mobos?

SATA 3 is currently available via expansion cards.
It will appear on mobos whenever Intel or AMD decides to put it there, since each company now has a monopoly on chipset production for its own CPUs.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: pjkenned
Or a motherboard manufacturer could add an onboard, off-chipset controller. Like the tons that use JMicron SATA controllers to give more than 6 SATA ports.

Yes... but it will be one SATA3 port, with most ports still being SATA2.
The problem is supplying it with enough bandwidth via PCIe links. Early on they were going to do it, but that fell through when they realized a single PCIe gen1 link just doesn't have enough bandwidth to make a SATA3 port any faster than SATA2.
Now there are some mobos about to come out that use another chip to combine two PCIe gen1 links (250MB/s each) into one PCIe gen2 link (500MB/s), which then feeds a SATA3 controller.

So instead of SATA3 built into the southbridge, giving: southbridge <SATA3> SSD
you have: southbridge <2x PCIe gen1> bridge chip <PCIe gen2> SATA3 controller <SATA3, limited to 500MB/s> SSD

SATA2 = 300 MB/s
SATA3 = 600 MB/s
PCIe gen1 = 250MB/s
PCIe gen2 = 500MB/s

By doing that they are also using up PCIe connections on the southbridge, adding extra chips, and getting a subpar SATA3 port limited to less than its rated bandwidth... we need an actual new southbridge design that handles SATA3 natively.
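A quick sketch of that bottleneck math, using only the per-link figures listed above (the helper function is just for illustration):

```python
# Per-link figures quoted in the post (MB/s).
SATA2 = 300
SATA3 = 600
PCIE_GEN1_LANE = 250
PCIE_GEN2_LANE = 500

def path_throughput(*hops_mbps):
    """A chained path can never run faster than its narrowest hop."""
    return min(hops_mbps)

# Native SATA3 in the southbridge: southbridge <SATA3> SSD
print(path_throughput(SATA3))                                  # 600 MB/s

# Bridged design: 2x PCIe gen1 combined -> 1x PCIe gen2 -> SATA3
two_gen1_lanes = 2 * PCIE_GEN1_LANE                            # 500 MB/s
print(path_throughput(two_gen1_lanes, PCIE_GEN2_LANE, SATA3))  # 500 MB/s, short of full SATA3

# Plain SATA2 for comparison.
print(path_throughput(SATA2))                                  # 300 MB/s
```

So the bridged port still beats SATA2, but it tops out at the 500MB/s of the PCIe gen2 hop rather than the full 600MB/s of SATA3.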
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
I figured someone would take something like the extra PCIe lanes on the X58 and, instead of providing x1 and x4 connectors, provide onboard SATA 3 and save a few cents on the slots. The X58 has 40 PCIe 2.0 links (4 usually go to the ICH10R).

Also, don't forget the X58 has a second QPI link at 12.8GB/s, so if you really needed more PCIe links... you could add a ton, albeit at a cost-prohibitive price :)

Actually, it would be pretty cool to have a dual X58 board, with one X58 just providing bandwidth to tons of SATA 3 ports. You know someone, somewhere, would hook up a ton of SSDs to it just to get a 12GB/s ATTO/CrystalMark/HD Tach screenshot.
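A rough sanity check on those numbers, using the figures quoted above (about 500 MB/s per PCIe 2.0 lane; the lane counts are the ones from this thread, not verified chipset specs):

```python
import math

# Figures as quoted in the thread; illustrative only, not datasheet values.
PCIE2_LANE_MBPS = 500
TOTAL_LINKS = 40
LINKS_TO_ICH10R = 4
SATA3_MBPS = 600

spare_links = TOTAL_LINKS - LINKS_TO_ICH10R
aggregate_mbps = spare_links * PCIE2_LANE_MBPS
print(spare_links, "links ->", aggregate_mbps / 1000, "GB/s aggregate")  # 36 links -> 18.0 GB/s

# SSDs needed (each saturating SATA3) to hit a 12GB/s benchmark screenshot
# on a hypothetical dual-X58 board:
target_mbps = 12000
print(math.ceil(target_mbps / SATA3_MBPS), "SSDs at 600 MB/s each")      # 20 SSDs
```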
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
you could add a ton, albeit at a cost-prohibitive price :)
That is a significant issue. It's cost-prohibitive to add something that currently isn't used by anything on the market... a lot of money for "future compatibility".

If you are willing to pay the cost, you can have them TODAY.
But they will not be ubiquitous for some time to come.
 

rivan

Diamond Member
Jul 8, 2003
9,677
3
81
Given that drives will very soon be able to far exceed the bandwidth capacity of today's drive controllers, why maintain that mode of operation?

Is there a significant downside to ditching drive controllers as they exist today and just using the PCI-E bus as the new de facto storage interface?

Obviously there's the issue of having many drives but fewer available slots, but that's really not all that common outside power users around places like this.
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
Pretty much everything sits on PCIe right now. RAID cards are PCIe, and the ICH10R hangs off DMI, which basically uses 4 PCIe lanes... In reality, if you have 2x PCIe 2.0 x16 cards even on an X58, you run out of PCIe lanes quickly (4 DMI for the ICH10R, 32 for graphics, 4 remaining).

That means on the X58, with two PCIe 2.0 x16 slots in use, you have 4 lanes left over, which is 2GB/s of bandwidth for anything else. (Remember the ICH10R has 2GB/s through DMI.)

On the P55, in contrast, you have 16 lanes from the processor (for the GPU) plus DMI (2GB/s) to the P55. That means even if you replaced the P55 with a SATA 3 monster, you aren't getting more than 2GB/s. Also, if you added PCIe drives off the CPU lanes, your GPU drops to x8 bandwidth since you are splitting the x16 channel.

I think a big reason we still have cabled drives is that PCB costs would be fairly high if you moved to PCIe drives. For those with one drive it isn't an issue, but for those who want multiple drives, you quickly need more slots, which means expanding board size. SATA/SAS connectors are small!!!
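A small sketch of that lane-budget arithmetic, using the numbers quoted in this thread (x16 slots, ~500 MB/s per PCIe 2.0 lane, 2GB/s DMI); it is only meant to show where the leftover bandwidth goes:

```python
# Numbers as quoted in the thread; illustrative only.
PCIE2_LANE_MBPS = 500
DMI_MBPS = 2000

def x58_leftover(total_links=40, dmi_links=4, gpu_lanes=32):
    """Lanes left for storage once DMI and two x16 GPUs are accounted for."""
    spare = total_links - dmi_links - gpu_lanes
    return spare, spare * PCIE2_LANE_MBPS

spare_lanes, spare_mbps = x58_leftover()
print("X58:", spare_lanes, "spare lanes ->", spare_mbps, "MB/s for storage")  # 4 lanes -> 2000 MB/s

# P55: anything behind the chipset is capped by the 2GB/s DMI link.
print("P55 storage ceiling behind DMI:", DMI_MBPS, "MB/s")
```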