CPU limiting high-end graphics cards?

jshuck3

Member
Nov 23, 2004
I'm trying to figure this out, but I can't find an answer with generic Google searches. Some of the Core 2 reviews are saying that the GPU is once again the limiting factor. Can somebody please explain to me how that is?

I'm actually looking for a technical reason why the CPU was the limitation up until now and suddenly the GPU is the limiting factor again.

Thanks
 

TanisHalfElven

Diamond Member
Jun 29, 2001
Me too. I'd like a good, solid technical explanation.

As far as I know (not very far), it's like this: the CPU has to feed the GPU stuff to process and display. While the CPU doesn't do things like AA, AF, HDR, or lighting, it still has to calculate all the physics, object movement, shadow placement, and so on, and all of that is needed to render a frame too. So if the CPU can't provide this stuff fast enough, the GPU sits idle while the CPU does its work.

Today's gamers like very high resolutions, and the GPU has a lot more to do at those. Since the CPU has to calculate roughly the same amount even at higher res (not sure of this), the limiting factor becomes the GPU, which can't render the given info fast enough.
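
A toy way to see the idle-GPU case (every number below is made up for illustration):

# Toy model of one frame; all costs are invented.
cpu_ms = 12.0   # physics, AI, draw-call setup on the CPU
gpu_ms = 7.0    # rasterization/shading on the GPU

# The GPU can't start useful work until the CPU hands it commands,
# so when the CPU stage is slower, the GPU idles for the difference.
gpu_idle_ms = max(0.0, cpu_ms - gpu_ms)
fps = 1000.0 / max(cpu_ms, gpu_ms)
print(f"GPU idle ~{gpu_idle_ms:.0f} ms per frame, ~{fps:.0f} fps (CPU-limited)")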
 

JAG87

Diamond Member
Jan 3, 2006
Taking into account that you're using the fastest GPU solution available:

Basically what it means is that with Core 2 Duo we've reached the point where, if you play something demanding like BF2, Source games, Oblivion, or FEAR at high resolution (which is what everybody wants to do), your video card limits your framerate before your CPU does. On the other hand, at low resolution, where the video card does not limit your fps, Core 2 Duo shows that it can feed more information than an A64, and that's why you get higher fps with Core 2 Duo at low resolutions.

So the GPU is limiting Core 2 Duo's potential to give you higher FPS at higher resolutions. All clear now?
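
To make the crossover concrete, here's a toy model (every cost below is invented; real games behave differently). CPU work per frame stays roughly constant as resolution rises, while GPU work scales with pixel count:

# Toy model: CPU work per frame is roughly resolution-independent,
# while GPU (fill/shading) work grows with the pixel count.
CPU_MS = 6.0               # hypothetical per-frame CPU cost
GPU_MS_PER_MPIXEL = 4.0    # hypothetical GPU cost per million pixels

for w, h in [(640, 480), (1024, 768), (1600, 1200)]:
    gpu_ms = GPU_MS_PER_MPIXEL * (w * h) / 1e6
    fps = 1000 / max(CPU_MS, gpu_ms)   # the slower stage sets frame time
    limiter = "CPU" if CPU_MS > gpu_ms else "GPU"
    print(f"{w}x{h}: ~{fps:.0f} fps ({limiter}-limited)")

At low res the CPU term dominates, so a faster CPU raises fps; past the crossover the GPU term dominates and the CPU barely matters.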
 

jshuck3

Member
Nov 23, 2004
Not completely. I'm trying to understand exactly how the path works. Is what Tanis said above correct: the CPU feeds the GPU, and the GPU can't handle the raw data being thrown at it? I guess what I'm looking for is which link is too slow. Is it as simple as that?

I'm looking at chipset diagrams, and I see the paths as PCI-E x16 to chipset, chipset to CPU via the FSB, and chipset to memory via its own memory connection. If I'm understanding it right, that's 4 GB/s from PCI-E to chipset, the 1066 FSB from chipset to CPU, and 10.7 GB/s from chipset to memory. Where's the bottleneck?

I like numbers, so I'm trying to understand where all the numbers are.

Thanks!
 

orangat

Golden Member
Jun 7, 2004
Originally posted by: jshuck3
Not completely. I'm trying to understand exactly how the path works. Is what Tanis said above correct: the CPU feeds the GPU, and the GPU can't handle the raw data being thrown at it? I guess what I'm looking for is which link is too slow. Is it as simple as that?

I'm looking at chipset diagrams, and I see the paths as PCI-E x16 to chipset, chipset to CPU via the FSB, and chipset to memory via its own memory connection. If I'm understanding it right, that's 4 GB/s from PCI-E to chipset, the 1066 FSB from chipset to CPU, and 10.7 GB/s from chipset to memory. Where's the bottleneck?

I like numbers, so I'm trying to understand where all the numbers are.

Thanks!

Tanis has basically laid it out correctly. Newer games are typically GPU-limited, meaning the video card is simply the bottleneck.
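
To put rough numbers on the links you listed (I'm assuming PCIe 1.x and dual-channel DDR2-667, which is my guess for that platform), none of them is close to saturated by a game's per-frame traffic. The slow "branch" usually isn't a bus at all; it's the shading and fill work inside the GPU itself, or the game logic inside the CPU:

# Rough link bandwidths for the platform described above.
pcie_x16 = 16 * 250 / 1000            # PCIe 1.x: 250 MB/s per lane each way -> 4.0 GB/s
fsb_1066 = 1066e6 * 8 / 1e9           # "1066 FSB" = 1066 MT/s on a 64-bit bus -> ~8.5 GB/s
ddr2_667_dual = 667e6 * 8 * 2 / 1e9   # dual-channel DDR2-667 -> ~10.7 GB/s

print(f"PCIe x16: {pcie_x16:.1f} GB/s, FSB: {fsb_1066:.1f} GB/s, "
      f"memory: {ddr2_667_dual:.1f} GB/s")

So the bandwidth figures in the diagram are mostly red herrings here: frame time is set by how long each chip takes to chew through its share of the work, not by how fast data moves between them.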
 

sandorski

No Lifer
Oct 10, 1999
What Tanis said sounds about right. It's not just the rendering; all the other stuff has to be presented to the end user in sync with it, so if one part takes longer, it slows the other parts.
 

dunno99

Member
Jul 15, 2005
OK, I'll briefly describe how the workflow of a modern game is structured. Basically, the CPU has to feed the GPU data (primitives, lighting parameters, shaders, etc.) to process (transform, clip, rasterize, run per-vertex/per-fragment programs, etc.). The CPU needs time to do its work, and so does the GPU. A bottleneck arises when one has to wait for the other: if the CPU takes too long, the GPU finishes its work and idles; conversely, if the GPU is busy, the CPU may have to wait on it for synchronization. This failure to finish jobs at the same time is where the bottleneck comes from.

Imagine a producer/consumer scenario: the producer produces goods so that the consumer can consume them. Take the Xbox 360 at launch. MS couldn't produce enough units to satisfy demand (in the US, at least), so it was like a CPU that can't produce enough work for the GPU. The reverse situation is best illustrated by the Japanese Xbox 360 launch, where units sat on shelves because demand was low. The bus and everything in between? Think of that as the shipping routes the trucks and cargo ships take to deliver Xboxes from the factory to the store shelves. If the trucking is really slow, there might be a launch or resupply delay, causing the stores (the GPU) to wait.
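
If it helps to see the analogy in code, here's a tiny producer/consumer sketch. The timings are made up, and a real driver's command queue is far more sophisticated:

import queue
import threading
import time

work = queue.Queue(maxsize=2)   # stands in for the driver's command buffer

def cpu(frames=5, cost=0.010):
    # "Producer": simulate 10 ms of game logic per frame, then hand
    # the frame to the GPU. put() blocks when the queue is full,
    # i.e. when the GPU has fallen behind.
    for i in range(frames):
        time.sleep(cost)
        work.put(i)
    work.put(None)  # sentinel: no more frames

def gpu(cost=0.020):
    # "Consumer": simulate 20 ms of rendering per frame. get() blocks
    # when the queue is empty, i.e. when the CPU has fallen behind.
    while work.get() is not None:
        time.sleep(cost)

start = time.perf_counter()
threads = [threading.Thread(target=cpu), threading.Thread(target=gpu)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the GPU slower than the CPU, total time is roughly frames * 20 ms:
# the run is GPU-limited no matter how fast the producer gets.
print(f"total: {time.perf_counter() - start:.3f} s")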

Now, does that make things a little more clear?