Is 60MB/s the maximum performance I could expect from a PCI-to-SATA adapter card, or am I just being bottlenecked by the inexpensive Syba add-on card that I am using?
http://www.amazon.com/gp/product/B000BU7XNG?psc=1&redirect=true&ref_=oh_aui_detailpage_o01_s00
Is there a different 2-4 port PCI-to-SATA adapter card that will get me over 100MB/s on my connected drives?
Details
I recently added a PCI-to-SATA controller to my FlexRAID server. I hooked my parity drive to it, leaving my data drives connected to the onboard SATA ports and a PCIe SATA adapter. My thinking was that since the parity drive is only really used for the nightly snapshot updates and the weekly/monthly parity checks, it should be fine. In general it works, except that it has slowed my parity checks to a crawl: FlexRAID is reporting throughput of about 49MB/s, roughly half of what it normally is.
I benchmarked the parity drive (3TB Toshiba ABA) with HDTune, and it was topping out at about 60MB/s when connected to the PCI adapter. I shut the system down, moved the drive to the PCIe adapter card that I use for a couple of data drives, and re-ran the benchmark: a peak of 191MB/s and sustained throughput in the mid 100s.
I thought the PCI bus was rated at 133MB/s, so I figured it would have adequate bandwidth for the job, but it looks like I am getting about half of that in the real world. I have nothing else in the PCI slots that would be adding any congestion, and I double-checked that the drivers for the card are the latest.
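For reference, the 133MB/s figure comes straight from the conventional PCI spec (32-bit bus at 33 MHz), but that is a theoretical peak for the whole shared bus. A quick back-of-the-envelope sketch; the 50% efficiency factor below is my own assumption to illustrate why ~60MB/s is plausible, not a number from any spec:

```python
# Conventional PCI: 32-bit bus clocked at 33.33 MHz, shared by all
# devices on the bus. Peak bandwidth = bus width in bytes * clock rate.
BUS_WIDTH_BITS = 32
CLOCK_MHZ = 33.33

theoretical_mb_s = BUS_WIDTH_BITS / 8 * CLOCK_MHZ  # ~133 MB/s peak

# Assumed efficiency factor (hypothetical): protocol overhead, bus
# arbitration, and cheap bridge chips typically eat a large fraction
# of the theoretical peak in sustained transfers.
assumed_efficiency = 0.5

realistic_mb_s = theoretical_mb_s * assumed_efficiency
print(f"Theoretical: {theoretical_mb_s:.0f} MB/s")
print(f"With assumed 50% overhead: {realistic_mb_s:.0f} MB/s")
```

Under that assumed overhead, ~66MB/s sustained lines up closely with the ~60MB/s HDTune result, which suggests the bus itself, not just the Syba card, may be the limit.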
I realize the easy solution is to just dump the PCI card and get a PCIe one, but if it is possible to use the PCI slots for the parity drive's SATA adapter, that would be ideal, because it keeps the PCIe slots open for data drive expansion in the future.