How is PCI-E better than PCI?

pillage2001

Lifer
Sep 18, 2000
14,038
1
81
I've read that it's better and I'm sure it is, but I've been out of the loop for a while now.

The PCI bus runs on a parallel, shared interface, doesn't it? I can see that if multiple PCI cards are accessed at once, they'll contend for the bus and starve it. What about PCI-E? Does it have an independent channel for each slot? Someone told me PCI-E is based on a serial interface; wouldn't that be a problem too, if the first slot to get priority on the bandwidth starved all the remaining ones?

Please clue me in on this. I think I know how it operates, but there are a few things bugging me. Unless each PCI-E slot has a channel to itself, it won't be much better than PCI.
 

Tessel8

Member
Apr 13, 2001
34
0
0
PCI-E is a point-to-point connection, so yes, it has an independent channel to each slot (actually several of them, depending on the width of the slot). The slowdown you mention, with the "first slot" getting priority, might happen but is not likely. Basically, with point-to-point interfaces, the performance limiter would be the router inside the chipset (contention or routing capacity) or the pipe feeding it from the processor or memory. Now, if the "first slot" were requesting more bandwidth than the router can handle, and it had no "fairness" rules, then yes, technically you could probably get the "first slot" to starve the rest. I don't think you will find any chipsets designed this way.
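
If it helps, here's a toy sketch in Python (made-up numbers and a made-up arbiter; no real chipset works exactly like this) of why a fairness rule in the arbiter is what keeps one greedy slot from starving the others:

    # Toy sketch of upstream-link arbitration; all numbers are invented for illustration.
    UPLINK_MB_PER_TICK = 10          # capacity of the pipe into the chipset per time slice

    # Per-device demand per time slice; slot0 asks for far more than the pipe can carry.
    demand = {"slot0": 100, "slot1": 3, "slot2": 3}

    def round_robin(demand, capacity):
        """Grant bandwidth one unit at a time, cycling over devices (a fairness rule)."""
        granted = {dev: 0 for dev in demand}
        remaining = dict(demand)
        while capacity > 0 and any(remaining.values()):
            for dev in demand:
                if capacity == 0:
                    break
                if remaining[dev] > 0:
                    granted[dev] += 1
                    remaining[dev] -= 1
                    capacity -= 1
        return granted

    def strict_priority(demand, capacity):
        """Serve devices in fixed order; an aggressive "first slot" starves the rest."""
        granted = {}
        for dev, want in demand.items():
            got = min(want, capacity)
            granted[dev] = got
            capacity -= got
        return granted

    print("round robin:    ", round_robin(demand, UPLINK_MB_PER_TICK))
    print("strict priority:", strict_priority(demand, UPLINK_MB_PER_TICK))

With round robin, slot1 and slot2 still get everything they asked for; with strict priority, slot0 takes the whole pipe.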

In general, the advantages of PCI-E are: higher bandwidth (even with a 1x link), easier to design for, and smaller & cheaper connectors and adapter designs (unless you are the I/O designer for the adapter chip :).
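
Rough numbers, if you want figures to go with that (assuming first-generation PCI-E, 2.5 GT/s per lane with 8b/10b encoding, against classic 32-bit/33 MHz PCI):

    # Back-of-the-envelope bandwidth comparison (first-generation PCI-E assumed).
    pci_shared = 32 * 33_000_000 / 8 / 1e6   # 32-bit bus at 33 MHz -> ~132 MB/s, shared by ALL slots
    lane_raw_gbps = 2.5                      # PCI-E 1.x signalling rate per lane
    lane_payload = lane_raw_gbps * 8 / 10    # 8b/10b encoding leaves 2 Gb/s of data
    pcie_x1 = lane_payload * 1000 / 8        # -> 250 MB/s per direction, per device

    print(f"PCI (shared bus):    {pci_shared:.0f} MB/s total")
    print(f"PCI-E 1x (per link): {pcie_x1:.0f} MB/s each way")

So even a single 1x link beats the whole shared PCI bus, and it doesn't have to share that figure with anyone.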
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Smaller connectors AND smaller chips, making for smaller cards and less densely packed mainboards. Far fewer traces to route, again benefiting mainboard design.

No buses, just individual links to each device. Links can be bundled in twos, fours, eights or sixteens for high-bandwidth devices.
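
Rough per-direction numbers for the bundled widths, again assuming first-generation 250 MB/s lanes:

    # Per-direction bandwidth by link width, assuming first-generation 250 MB/s lanes.
    for width in (1, 2, 4, 8, 16):
        print(f"{width}x link: {width * 250} MB/s per direction")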

What we'll see is 16x and 4x links coming from chipset northbridges or dedicated HyperTransport tunnel devices on AMD64, and a number of 1x links coming from the chipset southbridge.

With southbridge uplinks at 1 GB/s from SiS and VIA, there's hardly a bottleneck there.
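
Quick sanity check on that, assuming 250 MB/s 1x links and (hypothetically) four of them hanging off the southbridge:

    # Sanity check on a 1 GB/s southbridge uplink, assuming 250 MB/s 1x links.
    uplink_mb = 1000
    x1_links = 4                      # hypothetical number of 1x devices on the southbridge
    peak_demand = x1_links * 250
    print(f"{x1_links} 1x links at full tilt: {peak_demand} MB/s vs {uplink_mb} MB/s uplink")
    # The uplink only becomes a limit if every 1x device streams at its maximum simultaneously.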