
NICs: PCI or PCI-E?

conlan

Diamond Member
I've decided on getting an Intel Pro 1000 NIC to take some of the load off my CPU, and need to decide on PCI or PCI-E. The only benefit I can see for PCI-E is getting the NIC off the PCI bus and away from any possible storage controller conflicts. Any other opinions, please?
 
Originally posted by: JackMDS
There is No Functional difference.

you sure?

I thought the problem with PCI is that it only offers a theoretical throughput of 1.056Gbps, or 132MBps... and this is shared among all devices on it, since it only uses one bus.

So, if you have a Gigabit NIC running at 1000Mbps, you are using about 95% of the available PCI bus bandwidth, basically maxing out the PCI bus and taking usable bandwidth away from the other devices on the bus.
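
Rough math on that claim (treating 132MBps as the theoretical ceiling of a 32-bit/33MHz PCI bus; real-world PCI efficiency is lower still):

```python
# Back-of-the-envelope: how much of a shared 32-bit/33 MHz PCI bus
# a gigabit NIC can consume at line rate.
pci_bus_mbytes = 33e6 * 4 / 1e6   # 32-bit bus at 33 MHz -> ~132 MB/s, shared
gige_mbytes = 1000 / 8            # gigabit Ethernet line rate -> 125 MB/s

share = gige_mbytes / pci_bus_mbytes
print(f"PCI bus ceiling: {pci_bus_mbytes:.0f} MB/s")
print(f"GbE at line rate: {gige_mbytes:.0f} MB/s ({share:.0%} of the bus)")
# -> roughly 95%, leaving almost nothing for other devices on the bus
```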
 
I agree with jlazarro. I would definitely go for the PCI-E version. You mentioned contention with the storage controller. The SATA controller is usually built into the southbridge these days and does not sit on the PCI bus. This is only a concern if you use a PCI storage card such as a SCSI controller, or if you use ports from a peripheral controller like the JMicron chip built into some of the Asus and Abit motherboards.
 
You probably wouldn't see a difference from PCI's bandwidth limitations. PCI's maximum theoretical bandwidth is shared, but unless you're taxing your PCI bus, you theoretically wouldn't see a difference. Chances are that whatever you're connecting to over Gigabit Ethernet is going to be slowing things down, not the PCI bus. As for conflicts, I highly doubt you would have any...

What would I buy? A PCI Express NIC, because it's the "future" and 5-10 years from now you might not have PCI on a mobo at all, just like what happened to ISA, etc. before it. But this only matters if you will reuse the part years down the road; otherwise go for whatever is cheapest with good reviews, etc.
 
If the OP is referring to a regular computer using a Client OS:

There is No Functional difference.

 
Thanks for the speedy replies 🙂

I would prefer PCI-E just to get it off the PCI bus (and because it's new 😉), but availability and price are becoming an issue. I'm running an Audigy2 ZS in a PCI slot already and was concerned about conflicts between it and the storage controller.
 
PCI-E has higher bandwidth but also higher latency vs. PCI, if my understanding is correct.

I have not yet had a chance to benchmark in my lab and see what the real world performance difference looks like.

In a 2006/2007 machine, if you're using your chipset's ATA/SATA controller, it usually hangs off a proprietary internal bus instead of the PCI bus, so you are not sharing bandwidth between storage and NIC. Now, if you put a good hardware RAID controller and a good NIC on the PCI bus together, you could have contention. So for servers the story is a bit different (you need PCI-X or PCI-E).

If I'm not mistaken, the Creative boards do naughty things with the PCI bus; that might make a difference.
 
OK, so no worries with the storage controller anymore; that's a good thing. But how about a NIC and soundcard conflict? I know the Creative soundcards (like mine) have historically caused conflicts with other devices on the PCI bus.
 
Then just go for the PCI Express one if your research says your sound card causes conflicts with other PCI devices. Worst-case scenario, return the PCI NIC if it doesn't work due to a conflict.
 
PCIe generally performs better, and sometimes has the additional benefit of newer chips (which can be a "mixed" blessing with cheapening of designs, etc., but let's try to be positive!). IME, PCIe is generally faster at the high end. Sometimes this can be somewhat mitigated with jumbo frames for PCI, sometimes jumbo frames are not necessary, and often jumbo frames are problematic at one level or another.
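
For a feel of why jumbo frames can ease the load on a PCI NIC (a back-of-the-envelope sketch; it uses standard Ethernet framing overhead only and doesn't model interrupt or bus-transaction costs):

```python
# Fewer, larger frames means fewer per-packet interrupts and PCI bus
# transactions for the NIC at a given data rate.
LINE_RATE_BPS = 1e9  # gigabit Ethernet

def frames_per_second(mtu: int) -> float:
    # Per-frame overhead: preamble 8 + header 14 + FCS 4 + interframe gap 12
    wire_bytes = mtu + 38
    return LINE_RATE_BPS / (wire_bytes * 8)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{frames_per_second(mtu):,.0f} frames/s at line rate")
# MTU 1500 -> ~81,274 frames/s; MTU 9000 -> ~13,831 frames/s
```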

The saving grace of PCI is that often, with single hard drives, etc., it's not really possible to heavily load the NIC, and then, there can be little or no difference between one GbE NIC and another / one interface or another in terms of performance -- as they're all bottlenecked by the drives, etc.

The downside of PCIe is that as add-on cards, they're not as easily available, and are sometimes priced higher. But this is not always true -- I've bought very-nicely-performing PCIe NICs for as little as $5 + shipping via eBay.

Besides that, IMO PCIe / native on-board are the way to go when you have a choice.

Edit: Here are a couple of AT charts that show the sort of PCI vs. native/PCIe performance differences that I mention:

http://www.anandtech.com/mb/showdoc.aspx?i=2860&p=24
http://www.anandtech.com/showdoc.aspx?i=2865&p=10

Note however that PCI performance can vary a lot according to what else is on the system. PCIe performance tends to stabilize more because that also implies a newer implementation; however even with newer implementations, there's often a performance penalty with PCI in practice at the high end, which is shown in AT's charts.
 
I highly doubt you would have a conflict. If it were my PC and my money, I would buy a PCI NIC. If there were a conflict, fixing it may be as simple as manually specifying the IRQ and memory addresses the two cards use, or moving them to different slots; if I still couldn't get it working, I would then return the card. I would buy the card from a store, though, like Best Buy, etc., something "local" rather than having it shipped, if the rare chance of a conflict is a concern.
 
Originally posted by: sieistganzfett
I highly doubt you would have a conflict. If it were my PC and my money, I would buy a PCI NIC. If there were a conflict, fixing it may be as simple as manually specifying the IRQ and memory addresses the two cards use, or moving them to different slots; if I still couldn't get it working, I would then return the card. I would buy the card from a store, though, like Best Buy, etc., something "local" rather than having it shipped, if the rare chance of a conflict is a concern.

Easier said than done. That could easily take a couple of hours at least, not to mention there's a chance you'll think you fixed the problem, only for it to come back several days later, or while you're doing something important. Basically it boils down to whether you're using the PC for anything important, and whether you have a lot of free time to waste.

 
Originally posted by: JackMDS
If the OP is referring to a regular computer using a Client OS:

There is No Functional difference.

So Jack, if I'm transferring a huge file between two "regular computers using Client OS" each with a RAID-0 array of two SATA drives, where's the bottleneck?
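
Back-of-the-envelope, the ceilings in that scenario all land in the same neighborhood (the ~60 MB/s per-drive sequential figure below is an assumption for desktop SATA drives of this era, not a measurement):

```python
# Rough ceilings for a large file copy between two machines, each with a
# two-drive SATA RAID-0 array, over gigabit Ethernet with a PCI NIC.
drive_seq_mbytes = 60                  # assumed per-drive sequential rate
raid0_mbytes = 2 * drive_seq_mbytes    # ~120 MB/s best case on each end
gige_mbytes = 1000 / 8                 # 125 MB/s line rate
pci_bus_mbytes = 132                   # shared theoretical PCI ceiling

bottleneck = min(raid0_mbytes, gige_mbytes, pci_bus_mbytes)
print(f"RAID-0 ~{raid0_mbytes} MB/s, GbE {gige_mbytes:.0f} MB/s, "
      f"PCI bus {pci_bus_mbytes} MB/s -> bottleneck ~{bottleneck} MB/s")
# All three sit close together, so any of them (plus OS/driver overhead)
# could end up being the limiting factor in practice.
```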

 
Originally posted by: docinthebox
Originally posted by: JackMDS
If the OP is referring to a regular computer using a Client OS:

There is No Functional difference.

So Jack, if I'm transferring a huge file between two "regular computers using Client OS" each with a RAID-0 array of two SATA drives, where's the bottleneck?
The Client OS itself is not optimized for very high transfers with Giga Cards.

 
Originally posted by: JackMDS
Originally posted by: docinthebox
So Jack, if I'm transferring a huge file between two "regular computers using Client OS" each with a RAID-0 array of two SATA drives, where's the bottleneck?
The Client OS itself is not optimized for very high transfers with Giga Cards.

Conjecture and myth IMO. If the claim is that client OSs cannot do high-performance transfers and that server OSs can, then it should be substantiated with some figure representing high-performance transfers, and a demonstration on the same hardware of client OSs failing to meet that level and server OSs succeeding.

The links I previously posted showed several AT measurements of high-bandwidth transfers using client OSs, albeit just at the network level. However, I think that's something in itself, and the counter-claim is unsubstantiated.
 
Originally posted by: Madwand1
Conjecture and myth IMO.
Yep, these "Huge" differences as reported in the testing amount to functionally Nothing in a regular working Computer hooked to a Normal peer-to-peer Network.

However, we cannot ignore the human psychological process. If someone spent more money on Hardware and "feels" that it is working better, why not?

Does anyone around here get sustained performance of 946Mb/sec from his Giga network?
 
Originally posted by: JackMDS
Yep, these "Huge" differences as reported in the testing amount to functionally Nothing in a regular working Computer hooked to a Normal peer-to-peer Network.

However, we cannot ignore the human psychological process. If someone spent more money on Hardware and "feels" that it is working better, why not?

Does anyone around here get sustained performance of 946Mb/sec from his Giga network?

If by this you mean that most typical computers might see no difference between PCI and PCIe, etc., then yes, I agree, and I said something like that myself earlier in this thread.

If, however, you're saying that there is no difference at all between PCI and PCIe, such that you cannot see it in a consumer computer as long as a client OS is being used, then I think you're wrong, and might be setting yourself up with lowered expectations.

IMO, it's not the fact that a client OS is being used here; it's the part-by-part "can't do it anyway, so why try" attitude that leads to thinking that PCI, ancient computers, etc. are just fine for file servers, which leads to lower performance, and to the thinking that to get really fast file servers, well, we'd have to spend millions of bucks and write our own OS, wouldn't we?

You don't have to spend tons of money to get good on-board / PCIe networking -- it even comes for free with decent modern motherboards. You do have to get a number of things at least roughly right to get high performance, and IMO, avoiding PCI is sometimes a part of that.
 
I went ahead and ordered the PCI-E version, just to play it safe. The biggest problem was that the only vendor carrying it at a price comparable to the PCI version is... gulp... TD 🙁 Hopefully I'll get it OK. Thanks for all the input, it really helps 🙂
 
JackMDS, I raise my hand. But that's not exactly home / desktop network gear doing it.

I can move 950-975Mb/s synthetically on desktop hardware, no problem. Doing anything useful (e.g., file serving) at that sustained data rate requires a purpose-built setup.
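
For reference, that lines up with the protocol math -- a quick sketch assuming a standard 1500-byte MTU and plain 20-byte IP and TCP headers:

```python
# Theoretical TCP goodput ceiling on gigabit Ethernet at MTU 1500.
mtu = 1500
eth_overhead = 8 + 14 + 4 + 12   # preamble + header + FCS + interframe gap
ip_tcp_headers = 20 + 20         # assumes no TCP options

payload = mtu - ip_tcp_headers   # 1460 data bytes per frame
wire = mtu + eth_overhead        # 1538 bytes on the wire per frame
print(f"Max TCP goodput: ~{1000 * payload / wire:.0f} Mb/s")   # ~949 Mb/s
# With TCP timestamps (12 more bytes per segment) the ceiling drops to
# ~942 Mb/s, so a sustained 946Mb/sec is essentially wire speed.
```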
 
Originally posted by: cmetz
JackMDS, I raise my hand. But that's not exactly home / desktop network gear doing it.
LOL, I knew that was going to come (and probably from you).

The point is that I doubt that home users on a Peer-to-Peer network would get any significant improvement from buying a $50 (or more) NIC.

However, as I said before, feeling Good is important. A PCI-E NIC costs about $40 more. On the other hand, an hour of therapy with a good "Shrink" in NYC currently goes for an average of $175, and there is No guarantee that you would feel better.
 
Originally posted by: JackMDS
Originally posted by: cmetz
JackMDS, I raise my hand. But that's not exactly home / desktop network gear doing it.
LOL, I knew that was going to come (and probably from you).

The point is that I doubt that home users on a Peer-to-Peer network would get any significant improvement from buying a $50 (or more) NIC.

However, as I said before, feeling Good is important. A PCI-E NIC costs about $40 more. On the other hand, an hour of therapy with a good "Shrink" in NYC currently goes for an average of $175, and there is No guarantee that you would feel better.

Actually, the Intel Pro1000 PCI is about $30, while I found the PCI-E version for $35. 🙂 You are right about feeling good; I feel better not having to worry about conflicts with the PCI bus, whether real or imagined, and having one less thing to worry about is a good thing. 😀
 
Originally posted by: conlan
Actually, the Intel Pro1000 PCI is about $30, while I found the PCI-E version for $35. 🙂 You are right about feeling good; I feel better not having to worry about conflicts with the PCI bus, whether real or imagined, and having one less thing to worry about is a good thing. 😀
I agree 😀
 