
Ivy Bridge PCI-Express Scaling with HD 7970 and GTX 680

Don Karnage

Platinum Member
(Image: relative performance summary chart from the review)


The last time we did an article on PCI-Express scaling was when graphics cards were finally able to saturate the bandwidth of PCI-Express x16. Not only was it a time when PCI-Express 2.0 was prevalent, but also when the first DirectX 11 GPU hit the market, over six years after the introduction of the PCI-Express bus interface. Since 2009, thanks to fierce competition between NVIDIA and AMD, GPU performance has risen at a faster rate than ever, and the latest generation of high-end GPUs launched by the two rivals adds support for the new PCI-Express 3.0 interface. The new interface has prompted new questions from users, like "Do I need a new motherboard to run a PCI-Express 3.0 card?", "Will my new PCI-Express 3.0 card be much slower on an older motherboard?" or "My motherboard supports only x8 for multiple cards, will performance suck?"

Source

http://www.techpowerup.com/reviews/Intel/Ivy_Bridge_PCI-Express_Scaling/1.html
 
It basically means that for any single-GPU setup, PCIe 2.0 x8 onwards doesn't make any difference at all. And even PCIe 2.0 x4 doesn't hurt by a noticeable margin.
 
Probably won't matter too much for dual GPU either, maybe like 5-7% max or so. But tri-GPU setups would be affected unless they go SB-E 🙂
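To put some rough numbers behind those configs, here's a quick sketch of the theoretical one-way bandwidth per slot. These are my own back-of-the-envelope figures using the commonly cited per-lane rates (250/500/~985 MB/s for gen 1/2/3), not numbers from the review, and real throughput is always a bit lower:

```python
# Commonly cited theoretical per-lane rates, per direction, after
# encoding overhead (8b/10b for gen 1/2, 128b/130b for gen 3).
PER_LANE_MBPS = {1: 250, 2: 500, 3: 985}

def bandwidth_mbps(gen, lanes):
    """Theoretical one-way bandwidth for a given PCIe generation and lane width."""
    return PER_LANE_MBPS[gen] * lanes

# The slot configs the review tests, roughly:
for gen, lanes in [(1, 16), (2, 4), (2, 8), (2, 16), (3, 8), (3, 16)]:
    print(f"PCIe {gen}.0 x{lanes:<2}: {bandwidth_mbps(gen, lanes) / 1000:4.1f} GB/s")

# Note PCIe 2.0 x8 and PCIe 1.x x16 both work out to 4.0 GB/s, which is
# why those two configs tend to land on top of each other in the graphs.
```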
 
It basically means that for any single-GPU setup, PCIe 2.0 x8 onwards doesn't make any difference at all. And even PCIe 2.0 x4 doesn't hurt by a noticeable margin.

I wouldn't say that looking at the 680 graph. 88% of full performance is a pretty sizable drop. Not that anyone is going to run a 680 on that weak a system, though.
 
I'm inclined to say, "Because it's faster," but I think realistically it's due to driver differences and how AMD and Nvidia utilize the CPU to feed the GPU with information.

I could be very wrong though.
 
Because it's faster? 😉

Check out the 1600p graphs though. The 680 is only 3% faster at that res and it still takes a big hit as bandwidth is reduced. Maybe the Kepler architecture needs more CPU power to keep it fed?
 
Check out the 1600p graphs though. The 680 is only 3% faster at that res and it still takes a big hit as bandwidth is reduced. Maybe the Kepler architecture needs more CPU power to keep it fed?


I think AMD's drivers have done better with less CPU for some time; I remember seeing some other benchmarks on this in the past.

Basically what I'm seeing is, if you pair your Pentium 4 system with a GPU, you might be better off with a 7970 than the GTX 680... This is one of those articles that satisfies some curiosity, but I don't know that it reflects anything that'll actually happen in real-life use. 🙂
 
When bandwidth is reduced the 680 takes a bigger hit. That just shows that the 680 requires more PCIe bandwidth. It doesn't prove anything one way or the other about CPU usage/requirements.
 
When bandwidth is reduced the 680 takes a bigger hit. That just shows that the 680 requires more PCIe bandwidth. It doesn't prove anything one way or the other about CPU usage/requirements.


I know, I was referring to the other article/review I mentioned. Also, only older motherboards have PCIe x1 (I think), hence my P4 comment.
 
I know, I was referring to the other article/review I mentioned. Also, only older motherboards have PCIe x1 (I think), hence my P4 comment.

Yes. It is interesting from an academic standpoint, but nobody is going to use this card on a system that isn't at least PCIe 2.0 x8.
 
Only time it might come into play is with triple cards running on a motherboard with an x4 PCIe 2.0 slot, but that's going to impact one card of three.
 
Only time it might come into play is with triple cards running on a motherboard with an x4 PCIe 2.0 slot, but that's going to impact one card of three.

This is a problem no more because MSI decided to split the lanes as such:
One card: PCIe 3.0 @ x16
Two cards: PCIe 3.0 @ x8/x8
Three cards: PCIe 3.0 @ x8/x4/x4

And since PCIe 2.0 x8 = PCIe 3.0 x4 in bandwidth, it shouldn't be much of a problem.

There are a few specific games where there is a difference of ~10% or a bit more going from PCIe 2.0 x8 to x16, though.
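For what it's worth, that equivalence checks out on paper. A quick sketch using the usual theoretical per-lane rates (my own numbers, not MSI's or the review's; actual throughput is lower):

```python
# Theoretical one-way per-lane rates after encoding overhead:
# PCIe 2.0 uses 8b/10b, PCIe 3.0 uses the leaner 128b/130b.
GEN2_MBPS_PER_LANE = 500
GEN3_MBPS_PER_LANE = 985

# MSI-style PCIe 3.0 lane split for one, two, and three cards:
splits = {"one card": [16], "two cards": [8, 8], "three cards": [8, 4, 4]}

for config, widths in splits.items():
    per_card = ", ".join(f"x{w} = {w * GEN3_MBPS_PER_LANE / 1000:.1f} GB/s"
                         for w in widths)
    print(f"{config}: {per_card}")

# The gen-3 x4 slots still land within ~2% of a full gen-2 x8 slot:
print(f"PCIe 3.0 x4: {4 * GEN3_MBPS_PER_LANE / 1000:.2f} GB/s")  # 3.94 GB/s
print(f"PCIe 2.0 x8: {8 * GEN2_MBPS_PER_LANE / 1000:.2f} GB/s")  # 4.00 GB/s
```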
 
This is a problem no more because MSI decided to split the lanes as such:
One card: PCIe 3.0 @ x16
Two cards: PCIe 3.0 @ x8/x8
Three cards: PCIe 3.0 @ x8/x4/x4

And since PCIe 2.0 x8 = PCIe 3.0 x4 in bandwidth, it shouldn't be much of a problem.

There are a few specific games where there is a difference of ~10% or a bit more going from PCIe 2.0 x8 to x16, though.

I was thinking of my Asus board, which is 3.0 @ x8/x8 and 2.0 @ x4, AFAIK.
Not that I'm going to go triple GPU.
 
I was thinking of my Asus board, which is 3.0 @ x8/x8 and 2.0 @ x4, AFAIK.
Not that I'm going to go triple GPU.

Yeah. That last 2.0 x4 slot gets its lanes from the Z77 chipset, not the CPU. MSI thought that one through better than the other manufacturers did.

I'm not going triple-GPU, either, though I'm debating whether I'll upgrade to an HD 7950 or GTX 670 this year and I'm leaning in favor of the 670 unless the 7950 becomes $50+ cheaper than the 670.
 