For you to say that adding two more cores neither helps nor hurts gaming performance, you'd need to actually do just that.
Uh, no. I don't think you understand how bottlenecking works. If two cores fully saturate my GTX285 so that it's the primary performance limitation, why do you think an extra two cores will remove that bottleneck? Do you think those extra two cores will make the GPU run faster? Of course they won't.
If I have two lanes merging into a single-lane bridge, that bridge is still only a single lane, even if I increase the number of lanes merging into it to four.
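To put the same point in rough numbers, here's a tiny sketch of the bottleneck argument; the capacities and the helper are made up purely for illustration, not taken from my results:

```python
# Minimal sketch of the bottleneck argument (illustrative numbers only).
# The delivered frame rate is capped by whichever stage is slower:
# the CPU preparing frames or the GPU rendering them.

def frame_rate(cpu_fps_capacity: float, gpu_fps_capacity: float) -> float:
    """The frame rate you actually get can never exceed the slower stage."""
    return min(cpu_fps_capacity, gpu_fps_capacity)

# Hypothetical figures: at my settings the GPU tops out at 45 fps,
# while two cores can already feed it at 70 fps.
print(frame_rate(cpu_fps_capacity=70, gpu_fps_capacity=45))   # 45
# Doubling the cores roughly doubles CPU-side capacity, but the GPU is
# still the single-lane bridge, so nothing changes:
print(frame_rate(cpu_fps_capacity=140, gpu_fps_capacity=45))  # 45
```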
And for the record, my i5 750 also shows practically no performance difference over my E6850 at 3 GHz in exactly the same gaming situations, and that's with four cores turboing to 3.2 GHz. I just haven't gotten around to putting up the results yet.
Upgrading to an i5 750 has largely been a waste of time and money, because it's my GTX285 that is holding me back almost 100%.
Well, at least the upgrade will get the "you're CPU limited with your E6850" people off my back, the sort of people who refuse to believe the reality I show them.
Second, your methodology is to always increase the graphics detail to the very limit of the GPU, either by increasing the resolution or by adding increasing amounts of AA/SSAA, so of course the GPU will be the limit in such a case. I think the goal of most gamers is to always have playable frame rates without sacrificing image quality or resolution.
I'm pretty sure I repeatedly mentioned in that article that the settings I used are the actual settings I play the games at. As such, my goal is exactly the same as the goal of most gamers.
As such, we need to know what FPS you are actually getting in your real-world games, and your video card settings also need to be more standardized.
We already know the real-world FPS, because the settings I used are the actual settings I game at on my GTX285.
Standardizing is just another word for skewing the results to artificially inflate CPU differences. I'm not going to run everything at 1680x1050 with no AA, because I don't play games at those settings. Someone with a 4 GHz i7 won't be playing games at such settings either.
The fact is, if you always configure your games to run at the highest playable settings for your particular GPU, the GPU will bottleneck you by far. I'm pretty sure I mentioned this in the article too.
In other words, there is already a flaw in your methodology once you had to use 1680x1050 for Stalker Clear Sky instead of 1920x1200, since that means the game is unplayable due to either your GPU or your CPU. I assume the former, though.
You're wrong about the first part, but right about the second. Yes, Clear Sky is too slow to run at 1920x1200, but that's exactly the point, and it doesn't make the methodology flawed. Like I said, I presented the highest playable settings for my GTX285, the same settings that I game at.
So when I play Clear Sky, those are exactly the settings I use (1680x1050 with 2xTrSS). The same applies to the rest of the games I tested.
Thus the main difference I'd like to see is not average fps, but minimum fps.
A minimum is by definition a single data point, and as such it's useless without the full frame-rate plot to put it into perspective. I can have a lower minimum at a single point, but if the rest of the benchmark runs at a higher frame rate, that minimum tells you nothing.
That, and in my experience there's usually a strong correlation between average and minimum FPS anyway.
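To show what I mean, here's a quick sketch; the helper and the frame-time numbers are made up purely for illustration, not taken from my benchmark data. Run B has the lower absolute minimum, yet it's faster for practically the entire run, which is exactly why a lone minimum figure is misleading without the rest of the plot:

```python
# Illustrative only: why a single minimum FPS figure needs the whole run for context.

def fps_stats(frame_times_ms):
    """Summarise a run: average FPS, absolute minimum FPS, and the 1st-percentile FPS."""
    fps = sorted(1000.0 / t for t in frame_times_ms)    # per-frame FPS, ascending
    average = sum(fps) / len(fps)
    absolute_min = fps[0]                               # a single data point
    percentile_1 = fps[int(0.01 * len(fps))]            # low end without one-off hitches
    return round(average, 1), round(absolute_min, 1), round(percentile_1, 1)

# Run A: steady ~40 fps, worst frames around 33 fps.
run_a = [25.0] * 990 + [30.0] * 10
# Run B: ~50 fps almost throughout, but a single 20 fps hitch.
run_b = [20.0] * 999 + [50.0]

print(fps_stats(run_a))   # (39.9, 33.3, 40.0)
print(fps_stats(run_b))   # (50.0, 20.0, 50.0)
```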