Maybe someone has some insight into this.
My server system has three available PCI-E slots. I used the x1 slot for an Intel "Pro" NIC because I didn't like the onboard option, leaving an x16 and an x8. I wanted to disable the onboard (nForce) SATA controller and replace it with a PCI-E unit. I didn't want to spend the big bucks again on a hardware controller capable of RAID 5/6, so I found this little item and bought it through the Egg:
There are only a few reviews. With a little insight into "small-sample" methods, it becomes pretty obvious that the folks who had trouble with these were running Linux distributions such as Ubuntu. Even so, the documentation is only a couple of pages and offers no illuminating guidance.
But after a few days of trouble in which I thought I had a "defective SATA port," I discovered that the standard MS AHCI drivers need to be reinstalled if you cable a drive to the unit that was formatted under "less than" AHCI compliance. This can lead to a choice of the wrong drivers (the unit's own RAID drivers) when you want the native AHCI. So much for that.
The unit features "port-multiplier" support on one of its ports; the remaining three are each limited to a single drive. Using the port multiplier requires a "splitter" or "breakout" cable, which I'd previously priced at above $50, and they are not that easy to find.
So rather than cable seven disks to one controller with the expensive cable, I chose to buy two controllers, one for each available slot. There are now six drives evenly split between the two controllers.
It seemed to me that this made sense: at $75 per item, that's $150 for two controllers versus a single controller plus the expensive cable.
It's all working just fine now. But "how would YOU do it?" The devices are not choked for PCI-E lanes; the only limitation is the motherboard's PCI-E 1.0 spec. It made more sense to me to buy two (actually three) such controllers for prospective use in other household systems than to buy the port-multiplier cable.
And here's another question that could be either stupid or naïve. In RAID configurations other than RAID 0 with three or more drives, if one experiences a drive failure, how do YOU identify which of the physical drives has failed? To address this problem, I used to cable one drive at a time to the controller, check the BIOS before configuration, and then "label the cable" or "label the drive."
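The one-drive-at-a-time labeling routine boils down to simple bookkeeping: record each drive's serial number (from the BIOS screen, or a tool like smartmontools) against the physical bay or cable label at install time, then when the controller reports a failed member by serial, look up which drive to pull. A minimal sketch of that lookup (all serial numbers and bay labels below are invented for illustration):

```python
# Install-time record: drive serial number -> physical location.
# Serials and labels here are made up; yours come from the BIOS or smartctl.
DRIVE_MAP = {
    "WD-WCC4N1234567": "bay 1 (controller A, port 0)",
    "WD-WCC4N7654321": "bay 2 (controller A, port 1)",
    "S1D5NSAF123456":  "bay 3 (controller B, port 0)",
}

def locate_failed_drive(failed_serial: str) -> str:
    """Return the physical location of the drive the controller flagged as failed."""
    try:
        return DRIVE_MAP[failed_serial]
    except KeyError:
        # A serial not in the map means the record is stale -- time to relabel.
        raise KeyError(f"serial {failed_serial!r} not in the install-time map")

if __name__ == "__main__":
    print(locate_failed_drive("WD-WCC4N7654321"))
```

The point is that the serial number is the one identifier the controller's alert and the physical drive's label share, so recording it once at install time spares the cable-one-drive-at-a-time dance after a failure.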