Speed advantage with onboard NIC?

MidiGuy

Senior member
Jan 14, 2001
416
0
0
Can anyone tell me if there is any advantage - transfer speed or otherwise - to having your NIC built onto your motherboard as opposed to using a PCI card, and why or why not?

Thanks!

-Midi
 

Oaf357

Senior member
Sep 2, 2001
956
0
0
It really depends on the board. If the onboard NIC is using the PCI bus, then there really would be no difference. If it's using a different bus, then the performance improvement could be phenomenal.
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
MidiGuy, right now, all on-board Ethernet controllers that I know of are, from a signalling perspective, attached to the PCI bus. That is, there is no difference between them being on-board and being on an add-in card. Most on-board controllers are actually just PCI Ethernet chips that have been hard-wired, though some are in the south bridge along with other PCI-attached integrated devices.

Intel has announced that a future north bridge chip will have a special bus for attaching a future Intel gigabit Ethernet chip, separate from the PCI bus and presumably optimized for this purpose (Intel's info). This should help GigE performance considerably: the 32-bit/33MHz PCI bus currently can't really sustain a gigabit in one direction, certainly not full-duplex, and going to 64-bit/66MHz helps but doesn't solve some of PCI's architectural problems. Wiring the controller into the north bridge makes a lot more bandwidth available, decreases latency, and in general is just a good thing.
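To put rough numbers on that (my own back-of-the-envelope math in Python, not from Intel's page; all figures are theoretical peaks):

# Peak bandwidth of a parallel bus vs. what gigabit Ethernet demands.
# Theoretical maximums only; real PCI efficiency is lower still.

def bus_mb_per_s(width_bits, clock_mhz):
    """Peak parallel-bus bandwidth in MB/s."""
    return width_bits / 8 * clock_mhz

pci_32_33 = bus_mb_per_s(32, 33.33)    # ~133 MB/s, shared by every device on the bus
pci_64_66 = bus_mb_per_s(64, 66.66)    # ~533 MB/s, also shared

gige_one_way = 1000 / 8                # 125 MB/s
gige_full_duplex = 2 * gige_one_way    # 250 MB/s

print(f"PCI 32/33:        {pci_32_33:.0f} MB/s (shared)")
print(f"PCI 64/66:        {pci_64_66:.0f} MB/s (shared)")
print(f"GigE one-way:     {gige_one_way:.0f} MB/s")
print(f"GigE full-duplex: {gige_full_duplex:.0f} MB/s")
# 32/33 PCI can barely cover one direction even in theory, and bus
# arbitration and protocol overhead mean it can't sustain that in practice.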

Incidentally, if you move GigE and SATA/SAS controllers to special optimized ports on the north bridge, with AGP already there, you've just offloaded the heaviest-utilization devices from the PCI bus, which would breathe years more life into it. The PCI bus is fine for miscellaneous devices, but it's starting to be a performance limitation for those few heavy hitters (AGP already replaced PCI for graphics). The move from ISA to PCI was pretty unpleasant, so avoiding another bus replacement for miscellaneous devices would be good.
 

Lord Evermore

Diamond Member
Oct 10, 1999
9,558
0
76
Actually, every chipset from all the makers for the past 2 or 3 years has had the Ethernet controller integrated in the southbridge, bypassing the PCI bus. Prior to that, the southbridge itself actually sat on the PCI bus, rather than the PCI bus being controlled by the southbridge separately from everything else.

If the motherboard maker used the chipset-integrated Ethernet controller, then in most cases it will bypass the PCI bus; I think some chipsets allow you to specify in the CMOS setup whether to put it on the PCI bus or not, but for the most part that's not an option.

If they used an add-in chip such as a Broadcom controller or an Intel Gigabit controller, then the chip is wired into the PCI bus in the same way that a PCI card controller would be.

If the board uses the integrated controller, then it also needs an external PHY (physical-layer transceiver), such as the Realtek RTL8201L, which converts between the electrical signals on the board and the signaling used on an Ethernet cable. The integrated Ethernet controller is still the actual controller, and therefore is not affected by the PCI bus.

That Intel page doesn't really say that the Gigabit controller will be integrated in the northbridge. That chip is actually a separate Ethernet controller with a new interface standard: instead of being mounted on a PCI adapter card, it would be mounted on another type of card that fits in a slot with a dedicated connection to the northbridge (similar to how an AMR or CNR slot has a dedicated link to the southbridge rather than a PCI connection). Alternatively, and probably the only likely way, it could be integrated on the motherboard like current on-board controllers, but wired to the northbridge's CSA interface.

Intel is really the only company still making chipsets that would need the GigE controller or SATA attached to or integrated in the northbridge. VIA's V-Link is currently at 533MBps, and SiS's MuTIOL is at 1GBps, while Intel's hub architecture is still at 266MBps between the northbridge and southbridge.

266MBps is only just enough to carry GigE full-duplex, assuming both the link and the network ran at full theoretical throughput rates -- if they put GigE into the southbridge, there'd be no bandwidth left for anything else. Intel is also the only one that needs to segment its chipsets so heavily into server/power and consumer versions. If they integrated GigE into the southbridge, they'd have to produce a separate, cheaper version for consumers with only 10/100 integrated. Rather than do that, they can interface GigE directly to the northbridge and keep using the same southbridge for both high- and low-end chipsets. (Springdale and Canterwood seem to be the first with CSA capability, but only the high-end versions will have CSA.)
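Putting numbers on that claim (my arithmetic, using the theoretical peak figures quoted above):

# Chipset interconnect headroom after full-duplex GigE.
# Link speeds as quoted above; everything is a theoretical peak.
links_mb_per_s = {
    "Intel Hub-Link": 266,
    "VIA V-Link":     533,
    "SiS MuTIOL":     1000,
}
gige_full_duplex = 2 * 1000 / 8  # 250 MB/s

for name, link in links_mb_per_s.items():
    print(f"{name}: {link} MB/s, headroom after GigE: {link - gige_full_duplex:.0f} MB/s")
# Intel's 266 MB/s leaves only ~16 MB/s for USB, IDE, audio, and the
# whole PCI bus -- hence hanging GigE off the northbridge via CSA instead.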

Serial ATA of course also adds a kink. Although a single drive can't stream data anywhere near 150MBps, a drive with a large cache can burst near those speeds. Add 2- or 4-port support, and the link between the northbridge and southbridge isn't nearly enough to allow full throughput even with all the drives just streaming, let alone bursting. (I'd imagine Intel's design is already a bottleneck with some Ultra320 SCSI setups.) Intel's SATA implementation may not be very good at first unless they increase the I/O hub interface speed.
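The same sort of arithmetic for SATA (again a sketch; the ~50MBps sustained rate per drive is my own rough assumption for current drives):

# Aggregate SATA demand vs. Intel's 266 MB/s hub link.
sata_link = 150   # MB/s: SATA 1.0 per-port link (burst) rate
stream = 50       # MB/s: rough sustained rate per drive (assumed)
hub_link = 266    # MB/s: Intel Hub-Link

for ports in (2, 4):
    print(f"{ports} drives: ~{ports * stream} MB/s streaming, "
          f"up to {ports * sata_link} MB/s bursting, vs {hub_link} MB/s link")
# Four drives streaming (~200 MB/s) nearly fill the link on their own,
# yet still fall short of 250 MB/s full-duplex GigE; burst traffic
# oversubscribes the link by more than 2x.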

Integrating things into the northbridge does of course reduce latencies in many cases, but it depends on exactly what is being done. CPU-to-GigE traffic would benefit, but if the data is being stored to a hard drive whose controller is still on the southbridge or a PCI/PCI Express bus, then the I/O hub speed is still a bottleneck. Even a 4-drive striped SATA RAID array can't keep up with full-duplex GigE, so true performance matching would require an Ultra320 SCSI connection -- and then we're back to the I/O hub bottleneck. For some reason, Intel still isn't increasing the speed of the Hub-Link design, even though the striped SATA RAID support in ICH5 will be enough to flood it.

VIA is only slightly better off. They could integrate GigE plus 2 SATA ports, but that combination would still slightly exceed V-Link's 533MBps. I expect they'll move to the next version of V-Link pretty soon.

SiS of course could integrate a couple of GigE controllers and 3 SATA ports into the southbridge and still have a little bandwidth left over, and they certainly aren't standing still making new chipsets (even if they do take a while to become available). We can only hope Intel lets them make 800MHz-bus chipsets, and that the license is settled soon enough that we don't have to wait around for chipsets based on it.

As an answer to the question, if the onboard NIC uses the chipset's controller, it can theoretically be faster, and it will avoid slowing down other devices on the PCI bus. If it uses an add-in chip integrated on the motherboard, then it's exactly the same as a PCI adapter.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Lord Evermore said:
As an answer to the question, if the onboard NIC uses the chipset's controller, it can theoretically be faster, and it will avoid slowing down other devices on the PCI bus. If it uses an add-in chip integrated on the motherboard, then it's exactly the same as a PCI adapter.

And in real life the difference is negligible. On a normal 100Mb network you'll max out at ~8MB/s, which won't stress anything but the cable, and on a GigE network you'll need a hefty disk subsystem to keep up before the PCI bus gets strained.
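For reference, here's roughly where that ~8MB/s figure comes from (a sketch; the overhead factor is my own rough assumption):

# Why 100Mb Ethernet tops out around 8-11 MB/s of actual payload.
line_rate = 100 / 8    # 12.5 MB/s raw wire speed
overhead = 0.7         # assumed: Ethernet/IP/TCP headers, interframe gaps,
                       # ACKs, plus cheap-NIC/driver/CPU overhead in practice
print(f"theoretical: {line_rate:.1f} MB/s, typical real-world: ~{line_rate * overhead:.0f} MB/s")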

I would say driver quality matters more than where the controller attaches to the motherboard right now; bad drivers can really f' your performance.