Whatever happened to SATA 12Gb/s?

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
We have SAS 12Gb/s but no SATA 12Gb/s. Why is this? SATA Express never really caught on (because M.2 seemed to make it irrelevant), and while PCIe/M.2 storage is nice, it isn't as convenient as SATA. Most boards only have one M.2 slot, or not enough PCIe lanes for storage. I think a faster SATA and the traditional 2.5/3.5-inch form factors still have a place in computing that should be appropriately filled.
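For a sense of the gap, here are rough back-of-the-envelope numbers (a sketch only; the Python function is just for illustration). SATA and 12Gb/s SAS both use 8b/10b encoding, so usable payload bandwidth is 80% of the line rate:

# Usable payload bandwidth from line rate, assuming 8b/10b encoding
# (used by SATA and by 12Gb/s SAS); ignores protocol overhead.
def usable_mb_per_s(line_rate_gbps, enc_num=8, enc_den=10):
    return line_rate_gbps * 1e9 * enc_num / enc_den / 8 / 1e6

print(usable_mb_per_s(6))   # SATA 6Gb/s -> 600.0 MB/s
print(usable_mb_per_s(12))  # a hypothetical SATA 12Gb/s -> 1200.0 MB/s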

Discuss.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
Personally, I love the size of traditional SATA connectors and would love to see a new, faster revision in the same compact form factor.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The reason is that SATA is dying out fast. With technologies like NVMe, it's simply not possible to get anything useful out of a faster SATA either, and HDDs don't exactly need anything near that bandwidth.

There was also a 16Gb/s SATA proposed at one point.

SAS is a tad different due to the often shared backplane.
 
Feb 25, 2011
16,800
1,474
126
Except in the case of port multipliers (which aren't all that common), SATA is a single port per drive, and most hard drives can't come close to saturating it, while SSDs are moving to PCIe, M.2, and other faster interfaces. So there's really no need for faster SATA on the desktop.

SAS is used to connect dozens or hundreds of hard drives (with chaining) to a server or SAN over a single cable, and that server may have hundreds of clients, so MOAR SPEEDS IS BETTERS! :D
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
I do believe there is still a need for SATA SSDs. I find the 2.5-inch form factor better because throttling is a non-issue, and 2.5-inch drives are cheaper, likely due to the use of less dense NAND. RAID is also much easier to set up and manage with SATA. There's no reason NVMe couldn't be carried over a SATA-style connector either. Finally, M.2 doesn't offer hotplug capability, which is something I use quite often with my eSATA ports, and for that matter there isn't an actual external PCIe interface besides Thunderbolt.

With all that being said, if SATA had bandwidth similar to a 2- or 4-lane PCIe link, I think it would be the far superior choice for most non-portable applications, and there is no technical reason this isn't possible.
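To put numbers on that comparison (a quick sketch; PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, and the function is just for illustration):

def pcie3_mb_per_s(lanes):
    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
    return lanes * 8e9 * 128 / 130 / 8 / 1e6

print(pcie3_mb_per_s(2))  # ~1969 MB/s
print(pcie3_mb_per_s(4))  # ~3938 MB/s
# versus ~600 MB/s for SATA 6Gb/s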
 

zir_blazer

Golden Member
Jun 6, 2013
1,179
441
136
Pins are the technical reason. If you have to rearrange them to give the SATA connector PCIe lanes, it is not SATA-compatible anymore. That's why SATA Express was a composite of two standard SATA ports plus an extra connector carrying two PCIe lanes. And I doubt it's worth it to tunnel NVMe inside AHCI, which is what you see in some other protocols for some type of compatibility.
Also, SATA Express got totally outclassed by U.2, which is SATA Express done "right". Or at least, something that has an actual product using it, as the 2.5'' Intel 750 SSDs use the U.2 connector. And yes, you need 2.5'' SSDs for standard racks, and U.2 should work there if you push the PCIe lane issue a bit.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Yep, U.2 is the replacement for SATA on 2½" drives.

Time simply ran out for SATA.
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
Interesting, I forgot all about U.2. It does look like a good replacement for SATA; however, I get the feeling it's going to be limited to the enterprise segment, given the marketing I've seen and the fact that it uses the SFF-8639 connector, which is derived from the SFF-8482 connector from the SAS enterprise space. I still feel SATA Express is just a silly-looking connector, and I'm not sure whether you can hotplug it or not.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
Running out of bandwidth, one way or another. NVLink comes to the rescue, ha!

Yep, U.2 is the replacement for SATA on 2½" drives.
Not a single full-ATX mobo with that SFF-8639 connector is out yet. I have zero incentive to upgrade from Haswell. Talking about mass adoption, adapters my ass.

Way too many connectors and standards, and not enough bandwidth on the mainstream LGA 1151 platform. Might as well skip all that mess for now.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
Interesting, I forgot all about U.2. It does look like a good replacement for SATA; however, I get the feeling it's going to be limited to the enterprise segment, given the marketing I've seen and the fact that it uses the SFF-8639 connector, which is derived from the SFF-8482 connector from the SAS enterprise space. I still feel SATA Express is just a silly-looking connector, and I'm not sure whether you can hotplug it or not.

Asus believes U.2 is the way things are heading; M.2 is only a stopgap brought over from laptops. That's why they don't generally put more than one M.2 slot on their motherboards, but some boards come bundled with a PCIe card that has an M.2 slot as well as a U.2 socket.
 

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
Running out of bandwidth, one way or another. NVLink comes to the rescue, ha!


Not a single full-ATX mobo with that SFF-8639 connector is out yet. I have zero incentive to upgrade from Haswell. Talking about mass adoption, adapters my ass.

Way too many connectors and standards, and not enough bandwidth on the mainstream LGA 1151 platform. Might as well skip all that mess for now.

LGA 1151 seems pretty pointless as an upgrade for 1150 users. It's time Intel gave the consumer mid-range chipset some serious upgrades, like the ability to handle two video cards at PCIe x16 while still having PCIe lanes to spare for other things.

The problem with faster storage interfaces is that they require either more expensive cabling, or that the device be attached more directly to the chipset/CPU (either by limiting cable lengths or by using a socket/slot directly attached to the motherboard). Long, cheap cables introduce interference and attenuation, which limits the bandwidth that can be achieved. This is why you don't attach your RAM, video card, etc. with cables but plug them directly into the mobo. Direct attachment also works better than cables in mobile devices, which are more important these days. So cables are probably going away.
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
handle two video cards at PCIe x16

I haven't seen any benchmarks that indicate that two GPUs are bottlenecked by PCIe x8. I think I have seen bottlenecks at x4, though. Do you have something more recent that makes you think two GPUs at x16 is necessary?
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,692
136
Do you have something more recent that makes you think two GPUs at x16 is necessary?

For me the issue isn't two GPUs, but the fact that you can't have an x16 slot at full bandwidth unless you run everything else through the DMI link. Which, as we all know, is a simple PCIe 3.0 x4 interface with a few bells and whistles. This will become a serious limitation in a few years as 10Gbit USB 3.1, PCIe storage, 2.5/5/10Gbit Ethernet, and perhaps TB3 take off. Heck, TB3 is capable of fully saturating the DMI link already (40Gbit/s vs 32Gbit/s available).
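To spell those numbers out (a rough sketch; DMI 3.0 is electrically a PCIe 3.0 x4 link, and the Python is just for illustration):

dmi_raw_gbps = 4 * 8                        # DMI 3.0 = PCIe 3.0 x4: 32 Gbit/s raw
dmi_usable_gbps = dmi_raw_gbps * 128 / 130  # ~31.5 Gbit/s after 128b/130b encoding
tb3_gbps = 40                               # Thunderbolt 3 link rate
print(tb3_gbps > dmi_usable_gbps)           # True: TB3 alone can oversubscribe DMI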

What Intel should do is simply make a second PCIe x4 link available from the CPU, exclusively for PCIe storage. So you'd have an x16/x4/x4(DMI) configuration instead of the current x16/x4(DMI). In other words, keep the PCH to handle all the legacy interfaces, but give the option of a high-speed interface straight from the CPU.

Hence, my next upgrade will not be desktop Skylake/Kaby Lake/Cannon Lake, but Skylake-E. Consumer or Xeon will depend on what models Intel launches.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
LGA 1151 seems pretty pointless as an upgrade for 1150 users. It's time Intel gave the consumer mid-range chipset some serious upgrades, like the ability to handle two video cards at PCIe x16 while still having PCIe lanes to spare for other things.

The problem with faster storage interfaces is that they require either more expensive cabling, or that the device be attached more directly to the chipset/CPU (either by limiting cable lengths or by using a socket/slot directly attached to the motherboard). Long, cheap cables introduce interference and attenuation, which limits the bandwidth that can be achieved. This is why you don't attach your RAM, video card, etc. with cables but plug them directly into the mobo. Direct attachment also works better than cables in mobile devices, which are more important these days. So cables are probably going away.
Makes a lot of sense, yeah :thumbsup:

Hence, my next upgrade will not be desktop Skylake/Kaby Lake/Cannon Lake, but Skylake-E. Consumer or Xeon will depend on what models Intel launches.
Leaning towards that option as well.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
For me the issue isn't two GPUs, but the fact that you can't have an x16 slot at full bandwidth unless you run everything else through the DMI link. Which, as we all know, is a simple PCIe 3.0 x4 interface with a few bells and whistles. This will become a serious limitation in a few years as 10Gbit USB 3.1, PCIe storage, 2.5/5/10Gbit Ethernet, and perhaps TB3 take off. Heck, TB3 is capable of fully saturating the DMI link already (40Gbit/s vs 32Gbit/s available).

What Intel should do is simply make a second PCIe x4 link available from the CPU, exclusively for PCIe storage. So you'd have an x16/x4/x4(DMI) configuration instead of the current x16/x4(DMI). In other words, keep the PCH to handle all the legacy interfaces, but give the option of a high-speed interface straight from the CPU.

Hence, my next upgrade will not be desktop Skylake/Kaby Lake/Cannon Lake, but Skylake-E. Consumer or Xeon will depend on what models Intel launches.

I am sure we'll get all storage moved directly to the CPU eventually, but we're looking at a time frame of maybe 3-5 years, if we exclude the HEDT series.

I still feel SATA Express is just a silly-looking connector, and I'm not sure whether you can hotplug it or not.

SATA Express is already dead. I've got one on my mITX board, and I honestly wonder what went wrong for MSI in that matter.
 

daniel1952

Junior Member
Sep 3, 2019
1
0
6
With PCIe 5.0 offering 4x the bandwidth of PCIe 3.0, I expect both Intel and AMD to start pushing towards the first slot being x8 electrical, x16 physical (for backward compatibility).
That would allow x8/x8/x4/x4, or x8/x4/x4/x4/x4, or other combinations. Since SLI is gone, two x16 physical slots are no longer needed.
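The lane math behind that (a quick sketch; per-lane rates double each generation, from 8 GT/s in PCIe 3.0 to 32 GT/s in PCIe 5.0, both with 128b/130b encoding):

def pcie_gb_per_s(gt_per_s, lanes):
    # PCIe 3.0 and later use 128b/130b encoding
    return gt_per_s * 1e9 * lanes * 128 / 130 / 8 / 1e9

print(pcie_gb_per_s(8, 16))  # PCIe 3.0 x16 -> ~15.8 GB/s
print(pcie_gb_per_s(32, 8))  # PCIe 5.0 x8  -> ~31.5 GB/s, double a 3.0 x16 slot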