Are the additional GbE and SATA ports on nForce 4 motherboards as fast as the ones provided by the chipset itself? For instance, the Asus A8N-SLI Premium comes with two GbE ports and eight SATA ports. The nForce 4 chipset provides one of the GbE ports and four of the SATA ports, but the other GbE port is provided by a Marvell 88E81001 chip and the other four SATA ports by a Silicon Image 3114R chip.
My questions are:
1) Does the aforementioned Marvell chip run off the PCI bus, thus limiting its maximum potential bandwidth to ~132 MB/s? If so, does this mean it will be unable to match the performance of the nForce 4's GbE (given the overhead inherent in the PCI bus)?
2) Does the aforementioned Silicon Image chip also run off the PCI bus? Do all four SATA ports get their own PCI bandwidth, or do they share it? Does that mean that if I have four HDDs in RAID0 (each with a maximum sustained transfer rate of 50 MB/s, for argument's sake), the array would not reach its maximum speed? (See the back-of-envelope numbers after this list.)
3) If the Marvell and Silicon Image chips share the same PCI bus, does that mean transferring large files from the RAID0 array to another networked system (which also has GbE and, for argument's sake, a ramdisk as its storage) would be agonisingly slow?
4) Finally, do the PCI slots share the same bus as the Marvell and Silicon Image chips (and hence contend for the same bandwidth)?
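To put some rough numbers behind questions 1-3, here is a quick back-of-envelope sketch in Python. The ~100 MB/s figure for practical PCI throughput and the assumption that both add-on controllers sit on the same 32-bit/33 MHz PCI bus are my own assumptions for illustration, not anything confirmed by the Asus manual:

# Back-of-envelope comparison, assuming a classic 32-bit/33 MHz PCI bus
# shared by both add-on controllers (and the PCI slots).
PCI_THEORETICAL_MBPS = 133   # 32-bit x 33 MHz ~= 133 MB/s, shared by all devices on the bus
PCI_PRACTICAL_MBPS = 100     # rough real-world figure after arbitration/protocol overhead (assumed)
GBE_WIRE_SPEED_MBPS = 125    # 1 Gbit/s ~= 125 MB/s before TCP/IP overhead

DISK_SUSTAINED_MBPS = 50     # per-drive figure used in question 2
RAID0_DISKS = 4

raid0_target = DISK_SUSTAINED_MBPS * RAID0_DISKS  # 200 MB/s the array could deliver in theory

print(f"PCI bus (shared):      ~{PCI_PRACTICAL_MBPS}-{PCI_THEORETICAL_MBPS} MB/s")
print(f"GbE wire speed:        ~{GBE_WIRE_SPEED_MBPS} MB/s")
print(f"4-disk RAID0 target:   ~{raid0_target} MB/s")

# If the SiI3114 RAID controller and the Marvell NIC both hang off the same PCI bus,
# a RAID0 -> network copy moves every byte across that bus twice (disk -> RAM, RAM -> NIC),
# so usable throughput would be roughly half the shared bus bandwidth.
copy_estimate = PCI_PRACTICAL_MBPS / 2
print(f"Worst-case copy rate if both controllers share PCI: ~{copy_estimate:.0f} MB/s")

In other words, under these assumptions the shared PCI bus would cap the RAID0 array well below 200 MB/s, and a disk-to-network copy could fall to roughly 50 MB/s before any other overhead.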
I have looked at the Asus website, including the manuals, but have not found the answer to my questions. I am not specifically targeting the nForce 4 chipset; I have just used it as an example, as this question could just as easily apply to VIA and other chipsets on other motherboards.
I have also done several searches in various fora but have not been able to find the answer. Apologies if I have missed it.