Back when the Haswell platform was released, the way it homogenized Chipset I/O made Motherboard topologies mature and stable in a curious way. Intel provided Flex I/O, which on Z87 allowed a maximum of 18 I/O Ports made from a mix of 4-6 USB 3.0 Ports, 4-6 SATA-III Ports and 6-8 PCIe 2.0 Lanes (with 18 maximum among them).
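Just to visualize how constrained that mix is, here is a quick Python sketch (my own toy example, nothing official) enumerating the combinations those ranges allow when the full 18-port budget is used:

```python
from itertools import product

# Z87 Flex I/O ranges quoted above: 4-6 USB 3.0 Ports, 4-6 SATA-III Ports
# and 6-8 PCIe 2.0 Lanes, sharing a pool of 18 ports in total.
for usb3, sata, pcie in product(range(4, 7), range(4, 7), range(6, 9)):
    if usb3 + sata + pcie == 18:  # combinations that use the full budget
        print(f"{usb3}x USB 3.0 + {sata}x SATA-III + {pcie}x PCIe 2.0 Lanes")
```

Only six combinations use the whole pool, which is part of why Z87 boards ended up looking so much alike.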
Since normally you would have 3 PCIe Slots attached to the Processor's PCIe Controller, at most the Chipset would need to handle 4 PCIe 2.0 1x/4x Slots, which you had to feed with up to 7 Chipset PCIe 2.0 Lanes (since out of the 8, one was mandatorily eaten by the integrated NIC). This allowed arrangements like 4x/1x/1x/1x, though I don't think I ever actually saw one, since the Chipset Lanes were mostly used to feed third party controllers for extra SATA or USB Ports, making it impossible.
However, at the end of the day, the topology of Haswell generation Motherboards was usually rather consistent and predictable. The only exception was when Thunderbolt was added to the mix, since the Thunderbolt Controller ate 4 Chipset PCIe Lanes all by itself. But normally the PCIe Lanes, while limited, were enough.
This changed with the introduction of the new hybrid connectors meant to allow for faster SSDs, which are turning a situation that was rather standardized on the Haswell generation into a total mess on H97/Z97 and Skylake.
The hybrid connectors usually use both SATA and PCIe Lanes. However, since the number of Lanes varies by connector and they can be 2.0 or 3.0, the theoretical maximum bandwidth varies depending on the specific Motherboard implementation. It gets worse because it also depends on whether the Lanes come from the Chipset or the Processor itself. I would say that Lanes coming from the Processor should perform better than Chipset ones even if both are 3.0, simply because data doesn't have to travel an extra hop to get to the Processor (nor risk getting bottlenecked by the Chipset DMI).
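To put rough numbers on it, here is a back-of-the-envelope Python sketch of the theoretical maximums per variant (using the standard 8b/10b and 128b/130b encoding overheads; real world throughput will always be lower):

```python
# Usable bandwidth per Lane, after encoding overhead:
# PCIe 2.0: 5 GT/s with 8b/10b    ->  500 MB/s per Lane.
# PCIe 3.0: 8 GT/s with 128b/130b -> ~985 MB/s per Lane.
PER_LANE_MBS = {"2.0": 5e9 * (8 / 10) / 8 / 1e6,
                "3.0": 8e9 * (128 / 130) / 8 / 1e6}

variants = [("SATA Express", "2.0", 2),
            ("M.2", "2.0", 2), ("M.2", "2.0", 4),
            ("M.2", "3.0", 2), ("M.2", "3.0", 4)]

for name, gen, lanes in variants:
    print(f"{name} {lanes}x {gen}: ~{lanes * PER_LANE_MBS[gen]:.0f} MB/s")
```

So the exact same looking M.2 Slot can mean anything from ~1000 to almost 4000 MB/s depending on the board.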
First, we have SATA Express, an ugly, PATA-like connector that fuses two standard SATA Ports with 2 PCIe Lanes. It was supposed to be the future high end Desktop SSD connector. However, one year after its introduction, I have yet to see a single SATA Express device. At this point, I would say that the industry wants to quietly abort it and forget that it ever happened.
Second, we have M.2, which was sort of a SATA Express for Notebooks. However, the connector is intended for more than just storage; it actually carries a whole bunch of additional interfaces. Suddenly it became widely popular on Desktop Motherboards, and now you see one on most of them.
The problem is that M.2 comes in several versions, since the slot itself is keyed and supports a whole bunch of interfaces. For SSDs, it can provide either 2 or 4 PCIe Lanes, which can be 2.0 or 3.0 and come from either the Chipset or the Processor. There are simply too many variations, so it's rather annoying.
When the Broadwell era Chipsets (H97/Z97) were introduced, they provided for either one SATA Express or a 2 Lane M.2. Either way, those 2 PCIe Lanes came out of the maximum of 8. Thankfully, some well thought out implementations preferred to take the M.2 from the Processor instead and provide 4 PCIe 3.0 Lanes, like this Motherboard.
The good thing is that neither SATA Express nor a 2x 2.0 M.2 could saturate the Haswell/Broadwell platform's DMI 2.0 (effectively 4 PCIe 2.0 Lanes connecting Processor and Chipset), so at least back then, you could see a Motherboard with a near identical arrangement to a Z87 Motherboard with Thunderbolt, BUT with a 4x 3.0 M.2 hanging off the Processor in an 8x/4x/4x mode. That wasn't a bad idea as long as you were running a single Video Card (nVidia requires at least 8x/8x for SLI).
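A quick sanity check of those numbers in Python (same approximate per-Lane figures as before):

```python
# DMI 2.0 behaves like 4 PCIe 2.0 Lanes: ~2000 MB/s of usable bandwidth.
DMI_2_0 = 4 * 500
devices = {"SATA Express (2x 2.0)": 2 * 500,
           "M.2 (2x 2.0)":          2 * 500,
           "M.2 (4x 3.0)":          4 * 985}

for name, mbs in devices.items():
    verdict = "fits behind" if mbs < DMI_2_0 else "would saturate"
    print(f"{name}: ~{mbs} MB/s -> {verdict} DMI 2.0 (~{DMI_2_0} MB/s)")
```

Which is exactly why the 4x 3.0 M.2 belongs on the Processor side and not behind the Chipset.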
Finally, Intel also released their SSD 750 line. You have the famous PCIe AIC version, and also a 2.5'' SSD version. The latter has an UGLY way of plugging into the rest of the computer, since it requires a special SAS-based connector. The SSD side connector is SFF-8639, recently renamed to U.2, which includes both data and power (the power component can be fed straight from a SATA Power connector of the Power Supply through a sort of Y cable). It provides 4 PCIe Lanes instead of SATA Express's 2. However, the connector that goes to the Motherboard is a SFF-8643 miniSAS, which can't be plugged in directly, so you need an adapter, and that adapter needs a 4x 3.0 M.2 to work at its fullest. M.2-to-miniSAS adapters are like this. And if you want to troll, I THINK you could get a PCI Express 4x-to-M.2 card, plug in that M.2-to-miniSAS adapter, then finally plug in the U.2 SSD. Now you get why I love the simplicity of the Intel 750 PCIe version, don't you?
There is also USB 3.1, which is usually implemented with a third party controller fed by two Chipset PCIe Lanes.
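For reference, a rough Python check of why the Lane generation behind that controller matters, assuming a single 10 Gbps port running flat out:

```python
# USB 3.1 Gen 2 signals at 10 Gb/s with 128b/132b encoding -> ~1212 MB/s usable.
usb31 = 10e9 * (128 / 132) / 8 / 1e6
uplinks = {"2x PCIe 2.0": 2 * 500,   # H97/Z97 style Chipset Lanes
           "2x PCIe 3.0": 2 * 985}   # Skylake style Chipset Lanes

for name, mbs in uplinks.items():
    verdict = "is enough" if mbs >= usb31 else "is already a bottleneck"
    print(f"{name} (~{mbs} MB/s) {verdict} for one port (~{usb31:.0f} MB/s)")
```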
Then we enter Skylake. The Skylake platform includes a new Flex I/O system, providing 6-10 USB 3.0 Ports (still a maximum of 14 USB Ports including the USB 2.0 ones, as with Z87/Z97), 0-6 SATA-III Ports, a 1 Gbps NIC, and 7-20 PCIe 3.0 Lanes (the 20 assuming you sacrifice all but 6 USB 3.0 Ports).
While the Skylake platform upgraded the Processor-to-Chipset connection to DMI 3.0 (equivalent to 4 PCIe 3.0 Lanes), and all the Chipset PCIe Lanes are now 3.0 too, you suddenly face a lot more possible bottlenecks, since all the new SSD oriented connectors use PCIe Lanes heavily. For example, you can now hang a 4x 3.0 M.2 off the Chipset Lanes, but a single such SSD could saturate the DMI 3.0 all by itself, so on Skylake it would still make sense to use the Processor's 8x/4x/4x arrangement, the same as on the Haswell platform.
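The arithmetic there is painfully exact, as this little Python check shows (same approximate figures):

```python
# DMI 3.0 behaves like 4 PCIe 3.0 Lanes, and so does a 4x 3.0 M.2 SSD,
# so a single Chipset-attached drive can fill the entire uplink.
DMI_3_0 = 4 * 985
m2_4x_gen3 = 4 * 985

print(f"Headroom left for SATA/USB/NIC traffic: {DMI_3_0 - m2_4x_gen3} MB/s")
```

Everything else behind the Chipset (SATA, USB, the NIC) has to share whatever that SSD leaves over.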
Then you also have a possible third party controller for USB 3.1, and maybe a premium Thunderbolt Controller eating 4 Lanes. Assuming you are using 10 USB 3.0 Ports and the 6 SATA-III Ports, you're down to the minimum of 7 Chipset PCIe Lanes, and just by having Thunderbolt you're left with 3 PCIe Lanes for maybe 3 PCIe 1x Slots. Suddenly, Thunderbolt and a 4x M.2 don't seem to fit together unless you sacrifice something else.
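A toy Python budget for that maxed-out scenario makes the squeeze obvious (the consumer list is just my hypothetical example):

```python
# Flex I/O figures quoted above: 10 USB 3.0 + 6 SATA-III leaves 7 PCIe Lanes.
lanes_left = 7
consumers = [("Thunderbolt Controller", 4),
             ("USB 3.1 Controller", 2),
             ("M.2 Slot (4 Lanes)", 4)]

for name, lanes in consumers:
    if lanes <= lanes_left:
        lanes_left -= lanes
        print(f"{name}: fits ({lanes_left} Lanes left)")
    else:
        print(f"{name}: does NOT fit (only {lanes_left} Lanes left)")
```

Thunderbolt and USB 3.1 fit, then the 4x M.2 is left knocking at the door with a single Lane remaining.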
If you read all this, I think you get what I mean. Today you get a lot of Motherboards with a plethora of I/O that can't all really work together: if you use the M.2 Slot, maybe you lose some of the SATA Ports, or a PCI Express Slot gets disconnected because its Lanes are being redirected to the M.2.
The topologies are ridiculously complex at the moment; there are just TOO MANY different ways to do things compared to the Haswell platform. This is not exactly a good thing. To me it feels more like walking through a labyrinth.
Wouldn't it have been simpler if, instead of a dedicated M.2 Slot on Desktop Motherboards (usually at the cost of a PCIe Slot), you got a 4x PCIe Slot and THEN used a PCIe-to-M.2 adapter like this? Aside from the fact that you usually lose PCIe Slots to Dual Slot Video Cards, I think that SATA Express and M.2 are a bad idea on the Desktop, and that such adapters would be more flexible than making Motherboards so overly complex.