What companies design SAS SSD controllers?

Mar 27, 2009
12,968
36
106
#1
Here are some I found so far:

1. Seagate (LSI).

2. Xilinx

3. Samsung (not confirmed, but I can't imagine them not designing their own SAS SSD controllers).

4. Toshiba. Example SAS SSD here.

The PM5 SAS SSD and CM5 NVMe SSD are based on a new generation of SSD controllers that use the same architecture for both NVMe and SAS, allowing the two product families to share several key features.
(Interestingly, I found out this company has a value SAS SSD meant to replace SATA SSDs: https://business.toshiba-memory.com/en-us/company/tma/news/2018/06/storage-20180619-1.html)

P.S. Not sure yet who makes the controller for WD/Hitachi SAS SSDs. An example of their SAS SSDs here.
 
Last edited:
Aug 25, 2001
44,495
813
126
#2
SAS is 12Gbit/sec, right? So, assuming it doesn't need a special connector, what would the premium be for a budget NVMe controller versus a budget SAS controller, if there ever were such a thing?

Thinking about it, SAS 12Gbit/sec could double maximum SATA6G SSD speeds, so it would be close or equal to entry-level NVMe controller speeds. If cost were similar, and the 2.5" size kept, then perhaps a SAS 12Gbit/sec SSD might become a budget performance solution? But NVMe is reaching critical mass, and very few consumer desktop motherboards include SAS controller support, although my older K9A2 Platinum had a Promise SAS controller on board. Mobo makers could do that again, if they wanted to, for some higher-speed 12Gbit/sec links. In fact, since the native Intel chipset's own lanes get used up by the existing NVMe PCI-E lanes, they could provide some SAS controller(s) too, to bulk up the limited SAS/SATA port count.

I still think that it was a real mistake, not allowing SATA to grow to 12Gbit/sec, especially since we've already now got USB3.1 Gen2 10Gbit/sec. An external interface FASTER than the "internal" storage interface? That doesn't make much sense, since Windows (the consumer versions, anyway) is designed specifically to be installed to "internal" storage, and not to a USB3.1 Gen2 external SSD, for example.

Adding controller support for a SAS 12Gbit/sec internal interface would right that wrong, IMHO, assuming SSD manufacturers actually produced and marketed 12Gbit/sec 2.5" SAS SSDs to consumers or prosumers as well.
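As a quick sanity check on those speed comparisons, here's some back-of-the-envelope arithmetic (my own sketch, assuming the standard 8b/10b line encoding for SATA and 12 Gbps SAS and 128b/130b for PCIe 3.0, and ignoring protocol overhead above the physical layer):

```python
# Effective payload bandwidth per link, in MB/s, after line encoding.
# Assumptions (mine): 8b/10b for SATA/SAS up through 12 Gbps,
# 128b/130b for PCIe 3.0; no protocol overhead counted.

def effective_mb_s(line_rate_gbps, encoding_efficiency):
    """Payload bandwidth in MB/s for a given raw line rate."""
    return line_rate_gbps * 1000 * encoding_efficiency / 8

sata_6g  = effective_mb_s(6.0,  8 / 10)     # ~600 MB/s
sas_12g  = effective_mb_s(12.0, 8 / 10)     # ~1200 MB/s
pcie3_x1 = effective_mb_s(8.0,  128 / 130)  # ~985 MB/s

print(f"SATA 6G:   {sata_6g:.0f} MB/s")
print(f"SAS 12G:   {sas_12g:.0f} MB/s")
print(f"PCIe3 x1:  {pcie3_x1:.0f} MB/s")
```

So a single 12 Gbps SAS lane would indeed double SATA 6G and land a bit above a single PCIe 3.0 lane, which is roughly entry-level NVMe territory.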
 
Mar 27, 2009
12,968
36
106
#3
Thinking about it, SAS 12Gbit/sec could double maximum SATA6G SSD speeds, so it would be close or equal to entry-level NVMe controller speeds. If cost were similar, and the 2.5" size kept, then perhaps a SAS 12Gbit/sec SSD might become a budget performance solution?
Yep, that would be true for a single-lane SAS 12 Gbps SSD.

With that noted, SAS can also use up to 4 lanes; when used this way it is called a "wide port".

Unfortunately, all the current SAS 12 Gbps SSDs that I know of have only a 2-lane wide port. (For example, this Seagate Nytro SAS 12 Gbps SSD reaches 2100 MB/s sequential read using 2-lane SAS 12 Gbps.)

However, the following article does mention that a 4-lane SAS wide port is possible. Such an SSD (using SAS 24 Gbps) would have 21.5% more bandwidth than a 4-lane PCIe 4.0 (i.e., PCIe 4.0 x4) NVMe SSD.

SIDE NOTE: SAS 12 Gbps uses PCIe 3.0 lanes and the upcoming SAS 24 Gbps uses PCIe 4.0 lanes.

https://searchstorage.techtarget.co...roup-claims-new-SAS-has-pluses-over-NVMe-PCIe

"He said the SCSI Trade Association hopes to hold its first plugfest for so-called “24 Gbps” SAS in mid-2017. He expects host bus adapters (HBAs), RAID cards, and expanders to support the new SAS technology in 2018, with server OEM products to follow in 2019.

Kutcipal claimed the 19.2 Gbps bandwidth would have a 21.5% per-lane performance advantage over non-volatile memory express (NVMe) running on top of PCI Express (PCIe) 4.0. The maximum bandwidth for single-lane PCIe 4.0 is 15.8 Gbps, he said.

SAS typically uses one lane to the drive, and enterprise NVMe SSDs typically use four-lane PCIe, Kutcipal acknowledged. Four-lane PCIe would obviously be faster than single-lane SAS.

But Kutcipal said, “The lanes are not free. [They’re] actually very expensive, so the comparison has to be per lane. SAS can go x2 or x4 [lanes] to the drive.”
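To check the article's per-lane arithmetic, using exactly the figures quoted there (19.2 Gbps effective for "24 Gbps" SAS, 15.8 Gbps max for a single PCIe 4.0 lane):

```python
# Reproducing the per-lane comparison from the TechTarget article.
# Both numbers are taken straight from the quote above, not derived.

sas24_per_lane_gbps = 19.2   # effective bandwidth quoted for "24 Gbps" SAS
pcie4_per_lane_gbps = 15.8   # max single-lane PCIe 4.0 bandwidth quoted

advantage_pct = (sas24_per_lane_gbps / pcie4_per_lane_gbps - 1) * 100
print(f"Per-lane advantage: {advantage_pct:.1f}%")  # ~21.5%
```

So the 21.5% figure does follow from the two quoted per-lane numbers.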
 
Last edited:
Mar 27, 2009
12,968
36
106
#4
Altera: https://www.intel.com/content/www/u...ology/transceiver/protocols/pro-sata-sas.html

Intel® FPGA SATA and SAS
Intel developed SATA and SAS solutions based on the latest FPGAs with transceivers. Stratix® V GX, Stratix IV GX, Arria® V, Arria II GX, Arria II GZ, Cyclone® V, and Cyclone IV GX FPGAs support the electrical and signal requirements for SATA and SAS (see Table 1). Intel FPGAs, coupled with SATA and SAS intellectual property (IP), offer a solution for developing storage interfaces on a single chip.
OpenSSD also uses an FPGA in one of their designs:

http://openssd.io/

The OpenSSD Project is an initiative to promote research and education on the recent solid-state drive (SSD) technology by providing easy access to OpenSSD platforms on which open source SSD firmware can be developed. Currently, we offer an OpenSSD platform based on an FPGA board called Cosmos OpenSSD, whose hardware and software designs are fully modifiable.
 
Aug 4, 2015
167
27
101
#5
SIDE NOTE: SAS 12 Gbps uses PCIe 3.0 lanes and the upcoming SAS 24 Gbps uses PCIe 4.0 lanes.
No, SAS lanes are not PCIe lanes. Some devices might have a SERDES connected to both SAS and PCIe MACs, but on the wire SAS signals don't resemble PCIe signals beyond simply both being high-speed differential serial links.
 
Mar 27, 2009
12,968
36
106
#6
SIDE NOTE: SAS 12 Gbps uses PCIe 3.0 lanes and the upcoming SAS 24 Gbps uses PCIe 4.0 lanes.
No, SAS lanes are not PCIe lanes. Some devices might have a SERDES connected to both SAS and PCIe MACs, but on the wire SAS signals don't resemble PCIe signals beyond simply both being high-speed differential serial links.
I was referring to the host bus adapter.

For example, a SAS 12 Gbps HBA can't be plugged into a PCIe 2.0 slot and operate at full speed in the same way a SATA 6 Gbps card can't operate at full speed when plugged into a PCIe 1.0 slot.

Likewise, a port on a SAS 24 Gbps HBA won't work at full speed unless it has PCIe 4.0 lanes available.
 
Last edited:
Aug 4, 2015
167
27
101
#7
For example, a SAS 12 Gbps HBA can't be plugged into a PCIe 2.0 slot and operate at full speed in the same way a SATA 6 Gbps card can't operate at full speed when plugged into a PCIe 1.0 slot.
HBAs aren't a one-to-one mapping of PCIe lanes to SAS lanes. A 12Gbps SAS HBA in a PCIe 2.0 x8 slot can keep about three SAS ports saturated (assuming the disks are fast enough). Almost all SAS HBAs are unable to saturate all of their SAS ports simultaneously even when operating with the fastest and widest PCIe link they support.
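Rough math behind that "~three saturated ports" estimate (my own sketch, assuming 8b/10b encoding on both PCIe 2.0 and 12 Gbps SAS, and no protocol overhead):

```python
# Sanity check: how many 12 Gbps SAS ports can a PCIe 2.0 x8 uplink
# keep saturated? Assumes 8b/10b encoding on both link types and
# ignores protocol overhead (my assumptions).

pcie2_lane_mb_s = 5.0 * 1000 * (8 / 10) / 8   # 500 MB/s per PCIe 2.0 lane
host_mb_s = pcie2_lane_mb_s * 8               # x8 slot -> 4000 MB/s
sas12_port_mb_s = 12.0 * 1000 * (8 / 10) / 8  # 1200 MB/s per SAS lane

saturated_ports = host_mb_s / sas12_port_mb_s
print(f"Saturated 12 Gbps SAS ports: {saturated_ports:.1f}")  # ~3.3
```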
 
Mar 27, 2009
12,968
36
106
#8
HBAs aren't a one-to-one mapping of PCIe lanes to SAS lanes.
Look at how the following is written (particularly the last sentence of the first quote):

https://searchstorage.techtarget.co...roup-claims-new-SAS-has-pluses-over-NVMe-PCIe

He said the SCSI Trade Association hopes to hold its first plugfest for so-called “24 Gbps” SAS in mid-2017. He expects host bus adapters (HBAs), RAID cards, and expanders to support the new SAS technology in 2018, with server OEM products to follow in 2019.

Kutcipal claimed the 19.2 Gbps bandwidth would have a 21.5% per-lane performance advantage over non-volatile memory express (NVMe) running on top of PCI Express (PCIe) 4.0. The maximum bandwidth for single-lane PCIe 4.0 is 15.8 Gbps, he said.

SAS typically uses one lane to the drive, and enterprise NVMe SSDs typically use four-lane PCIe, Kutcipal acknowledged. Four-lane PCIe would obviously be faster than single-lane SAS.

But Kutcipal said, “The lanes are not free. [They’re] actually very expensive, so the comparison has to be per lane. SAS can go x2 or x4 [lanes] to the drive.”

And from this article:

https://searchstorage.techtarget.co...rise-in-popularity-in-tandem-with-speed-boost

Industry momentum appears to be strong for NVMe-based PCIe SSDs. However, the SCSI Trade Association says the SAS protocol offers some advantages, including faster per-lane performance of 12 Gbps SAS over PCIe 3.0, inherently greater scalability than NVMe, hot-pluggable drives and time-tested failover capabilities.
 
Last edited:
Aug 4, 2015
167
27
101
#9
That kind of per-lane comparison is something you'll usually only hear from people getting paid to promote SAS over NVMe. If you're trying to provision a system to support a large number of drives and be able to guarantee full bandwidth to any one drive at a time, then it can be cheaper to prefer SAS expanders over PCIe switches. But in reality, all the SAS HBAs support exactly 8 PCIe lanes and somewhere between 8 and 24 SAS ports, so SAS doesn't really have any advantage in aggregate bandwidth achievable across your whole array.
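To put numbers on that oversubscription point, here's a sketch for a hypothetical 16-port 12 Gbps HBA behind a PCIe 3.0 x8 uplink (the port count and encoding assumptions are mine, picked from the ranges mentioned above):

```python
# Oversubscription illustration for a hypothetical SAS HBA:
# 16 ports of 12 Gbps SAS behind a PCIe 3.0 x8 uplink.
# Assumes 128b/130b encoding for PCIe 3.0, 8b/10b for 12 Gbps SAS.

pcie3_lane_mb_s = 8.0 * 1000 * (128 / 130) / 8   # ~985 MB/s per lane
uplink_mb_s = pcie3_lane_mb_s * 8                # ~7877 MB/s total
sas_ports = 16
sas_port_mb_s = 12.0 * 1000 * (8 / 10) / 8       # 1200 MB/s per port

demand_mb_s = sas_ports * sas_port_mb_s          # 19200 MB/s if all busy
oversub = demand_mb_s / uplink_mb_s
print(f"Oversubscription: {oversub:.1f}x")       # ~2.4x
```

So with all ports busy at once, the drive side can demand well over twice what the host link can carry, which is exactly the "can't saturate all ports" situation described above.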
 
Mar 27, 2009
12,968
36
106
#10
But in reality, all the SAS HBAs support exactly 8 PCIe lanes and somewhere between 8 and 24 SAS ports, so SAS doesn't really have any advantage in aggregate bandwidth achievable across your whole array.
1. Could it be that the current SAS HBA designs are limited by the controller?

2. If Rick Kutcipal is saying that PCIe lanes are expensive (and I agree they are, since PCIe lanes add processor die area, increase power consumption, etc.), could the claim of greater aggregate bandwidth come from SAS allowing more bandwidth per PCIe lane, in the same way that AMD's Infinity Fabric allows more bandwidth* per PCIe lane?

*According to this article, Infinity Fabric has 32.5% more bandwidth (i.e., 10.6 GT/s vs. 8 GT/s) than PCIe when both run over a PCIe 3.0 x16 link.

Each Zeppelin die can create two PCIe 3.0 x16 links, which means a full EPYC processor is capable of eight x16 links totaling the 128 PCIe lanes presented earlier. AMD has designed these links such that they can support both PCIe at 8 GT/s and Infinity Fabric at 10.6 GT/s
So based on that, maybe a 21.5% bandwidth boost over PCIe (for SAS) isn't too extreme?
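Checking both quoted ratios side by side (the numbers are taken straight from the two articles; this just verifies the percentages):

```python
# Two quoted "more bandwidth per lane" claims, recomputed:
#  - Infinity Fabric at 10.6 GT/s vs PCIe 3.0 at 8 GT/s (AnandTech quote)
#  - SAS-4 at 19.2 Gbps effective vs PCIe 4.0 at 15.8 Gbps (STA quote)

if_advantage  = (10.6 / 8.0 - 1) * 100    # 32.5%
sas_advantage = (19.2 / 15.8 - 1) * 100   # ~21.5%

print(f"Infinity Fabric vs PCIe 3.0: {if_advantage:.1f}%")
print(f"SAS-4 vs PCIe 4.0 per lane:  {sas_advantage:.1f}%")
```

Both quoted percentages check out against their own source numbers, so the SAS claim is at least internally consistent, whatever one thinks of per-lane comparisons.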
 
Last edited:
Mar 27, 2009
12,968
36
106
#11
Info from the Broadcom link on this webpage indicates a SAS array is limited by the standard bandwidth of PCIe:

(snip)

 
Last edited:
Mar 27, 2009
12,968
36
106
#12
Thanks, Billy Tallis, for your input in this thread. You were right about the throughput and about SAS lanes not mapping one-to-one onto PCIe lanes.
 
Last edited:
Mar 27, 2009
12,968
36
106
#13
I still think that it was a real mistake, not allowing SATA to grow to 12Gbit/sec, especially since we've already now got USB3.1 Gen2 10Gbit/sec.
Apparently they kept it at 6 Gbps because the power draw for SATA 12 Gbps would have been too high.
 
Mar 27, 2009
12,968
36
106
#14
Thinking more about the future of SAS SSD controllers, I suspect the success of Seagate's multi-actuator hard drives will play strongly into this.

If high-actuator-count (including dual-pillar) SAS hard drives become a reality, then I would expect demand for SAS SSD controllers to grow faster than it otherwise would.
 
Last edited:

