
Gigabit and PCI

alizee

Senior member
I know you can't fully realize gigabit bandwidth over PCI, although it seems like it might come close (PCI is 133MB/s and GbE is 128MB/s, minus overhead). But, because all of the PCIe slots on my storage server are filled with storage controllers, if I have two PCI NICs and use link aggregation/NIC bonding, would I increase network bandwidth over one PCI NIC or decrease because they both have to share the same PCI bandwidth?
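For reference, the raw numbers in the question work out roughly like this (a back-of-the-envelope sketch; it assumes a standard 32-bit/33 MHz PCI bus and quotes theoretical maxima before any overhead):

```python
# Theoretical peak bandwidth figures, before protocol/bus overhead.

# Standard PCI: 32-bit bus at 33 MHz -> bytes per second
pci_bus_mb_s = 32 / 8 * 33.33  # ~133 MB/s, shared by every device on the bus

# Gigabit Ethernet: 1000 Mb/s line rate -> bytes per second
gbe_mb_s = 1000 / 8  # 125 MB/s per link

print(f"PCI bus:  ~{pci_bus_mb_s:.0f} MB/s (shared)")
print(f"GbE link: {gbe_mb_s:.0f} MB/s")
```

So a single GbE NIC nearly saturates the bus on its own, which is why the question of a second NIC on the same bus is worth asking.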
 
A regular peer-to-peer network rarely goes over 70-80 Mb/sec.

So you are basically wasting your time/money.

In addition, Windows client OSes (as opposed to a real server OS like Win2008) do not support bonding.



😎
 

Thanks. I'm not really using Windows, and I wanted to use the server for iscsi. Would it still be a waste?
 

Depends on the drive arrays. One PCI slot = max 133MB/sec. If those 2 PCI slots share the same bus, the PCI bus limit for both together is also 133MB/sec. So assuming your disk array can actually push more than 133MB/sec (not common at the home level except with SSDs), your aggregate link will likely top out at 133MB/sec minus some overhead, and that is assuming you have MPIO set up properly on both ends.
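The shared-bus arithmetic above can be sketched in a few lines (assuming both NICs sit on the same ~133 MB/s PCI bus; the function name and figures are illustrative, and protocol overhead is ignored):

```python
# Effective ceiling for N bonded GbE NICs on a shared PCI bus:
# the bottleneck is whichever is smallest -- total link capacity,
# bus capacity, or what the disks can actually deliver.

def bonded_throughput(n_nics, link_mb_s=125, bus_mb_s=133, disk_mb_s=float("inf")):
    """Theoretical ceiling in MB/s, ignoring protocol overhead."""
    return min(n_nics * link_mb_s, bus_mb_s, disk_mb_s)

print(bonded_throughput(1))  # 125 -> one NIC is limited by the link itself
print(bonded_throughput(2))  # 133 -> two NICs are limited by the bus, not 250
```

In other words, the second NIC buys at most ~8 MB/s of headroom on paper, and less in practice once overhead is counted.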

The fact that you are asking about aggregating 2 cables rather than using MPIO suggests that you don't have it set up to use that bandwidth, and the only way you will see more than one iSCSI session running at a time is if you are supporting multiple servers using that target.

So really: 'Maybe?' Even with aggregation on the PCI bus, I wouldn't be surprised if you never saw more than 110MB/s after accounting for overhead (i.e. never exceeding one link because of the PCI bus limit). That is assuming your disks can push that; a whole pile of 7200 RPM SATA disks are going to have a hard time doing it.
 

Maybe was kind of what I was thinking. I might just do it; the Intel PCI NICs are only $30 or so, and it will give me a little better reliability for not much money or effort.
 