Uh, no. I don't think you understand how bottlenecking works. If two cores fully saturate my GTX285 so that it's the primary performance limitation, why do you think an extra two cores will remove that bottleneck? Do you think those extra two cores will make the GPU run faster? Of course they won't.
If I have two lanes merging into a single-lane bridge, that bridge is still only a single lane, even if I increase the number of lanes merging into it to four.
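To put the same idea in rough numbers, here's a minimal sketch of the bottleneck in code. The frame times are made up purely for illustration, not measured on my system:

```python
# Toy bottleneck model: every frame has to finish both its CPU work and its
# GPU work, so whichever takes longer sets the frame rate.
# All frame times below are invented for illustration only.

def fps(cpu_frame_ms, gpu_frame_ms):
    frame_ms = max(cpu_frame_ms, gpu_frame_ms)  # the slower component wins
    return 1000.0 / frame_ms

gpu_frame_ms = 25.0  # hypothetical GPU-limited frame time at my settings

print(fps(cpu_frame_ms=12.0, gpu_frame_ms=gpu_frame_ms))  # two cores keeping up: 40 fps
print(fps(cpu_frame_ms=8.0,  gpu_frame_ms=gpu_frame_ms))  # faster CPU / more cores: still 40 fps
```

Shaving the CPU time per frame does nothing as long as the GPU term is the bigger one, which is exactly what the bridge is in the analogy.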
And for the record, my i5 750 also shows practically no performance difference over my E6850 at 3 GHz in exactly the same gaming situations, and that's with four cores turboing to 3.2 GHz. I just haven't gotten around to putting up the results yet.
Upgrading to an i5 750 has largely been a waste of time and money, because it's my GTX285 that is holding me back almost 100%.
The reason I say this is that, according to some benchmarks I've seen, there appear to be differences between a dual core running at 3.6 GHz and a quad core running at 2.4 GHz. Since the quad is still faster despite its lower clock speed, it stands to reason that adding or removing cores can make a difference depending on how the games are programmed. In single-threaded games, clock speed was all that mattered.
Thus I just wanted to see whether, in your results, changing the number of cores would have mattered in any particular game, since it may change how the game performs.
http://www.pcgameshardware.com/aid,...rks-75-percent-boost-for-quad-cores/Practice/
Well, at least the upgrade will get the "you're CPU limited with your E6850" people off my back, the sort of people who refuse to believe the reality I show them.
I'm pretty sure I repeatedly mentioned in that article that the settings I used are the actual settings I play the games at. As such, my goal is exactly the same as the goal of most gamers.
We already know the real-world FPS, because the settings I used are the actual settings I game at on my GTX285.
Standardizing is just another word for skewing to artificially inflate CPU differences. I'm not going to run everything at 1680x1050 with no AA, because I don't play games at those settings. Someone with a 4 GHz i7 won't be playing games at such settings either.
The fact is, if you always configure your games to run at the highest playable settings for your particular GPU, the GPU will bottleneck you by far. I'm pretty sure I mentioned this in the article too.
You're wrong about the first part, but right about the second. Yes, Clear Sky is too slow to run at 1920x1200, but that's exactly the point, and it doesn't make the methodology flawed. Like I said, I presented the highest playable settings for my GTX285, the same settings that I game at.
So when I play Clear Sky, those are exactly the settings I use (1680x1050 with 2xTrSS). The same applies to the rest of the games I tested.
Perhaps I'm not being clear about what I mean by standardizing. When I build a system I'm looking for a certain minimum level of performance, and anything extra is gravy.
That minimum level of performance might be triple 2560x1600 screens at high settings with 2xAA/16xAF at over 60 fps average, or it could be a single 1360x768 screen at medium with 0xAA/0xAF and enough fps to feel smooth. If a game cannot hit that required minimum, some component needs upgrading to play that game. So what I want to know is playability at some fixed standard, which I can extrapolate to the parts I use when building or upgrading.
Thus the problem I have with your test is that it is too variable: it doesn't tell me whether the equipment I buy can perform at the level I want across most games. You are using a subjective test based on what you consider playable. That basically involves sacrificing fps by raising the graphics options as high as they will go until the game becomes unplayable to you, and lowering the resolution or certain settings to make the game playable again.
In other words, playability is your constant, when it should really be the thing you are measuring (in the form of fps at some standard chosen for all games).
Otherwise your test has limited real-world use, and the results seem quite obvious to me. By cranking graphics settings and increasing resolution you are essentially creating more work for the GPU, and since your standard is "highest playable" you do this until the GPU reaches its limit and can barely provide enough fps for your satisfaction. Cranking graphics settings does little to create more work for the CPU, so obviously if you downclock the GPU it will slow down in proportion to the downclock. If it doesn't, that just means you haven't maxed its workload yet.

The amount of work the CPU has to do remains roughly the same at different graphics settings, so you haven't maxed its workload. Thus it is normal for you to see only a small difference in averages if 2.0 GHz is sufficient for the normal workload. Since most of your games are older, this should be expected. However, the result is further skewed because you create a bottleneck in the first place by cranking the graphics settings to the highest playable level, which already limits the amount of work the 3.0 GHz CPU has to do. So what you really prove is that it is possible to bottleneck a good midrange GPU almost as well on a 2.0 GHz CPU as on a 3.0 GHz CPU.
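To make that skew concrete, here's a rough sketch using the same sort of frame-time reasoning. The numbers are invented, not taken from any benchmark:

```python
# Toy model again: frame time = max(CPU time, GPU time).
# All frame times below are invented for illustration only.

def fps(cpu_frame_ms, gpu_frame_ms):
    return 1000.0 / max(cpu_frame_ms, gpu_frame_ms)

cpu_ms_3ghz = 10.0               # hypothetical CPU frame time at 3.0 GHz
cpu_ms_2ghz = cpu_ms_3ghz * 1.5  # the same work at 2.0 GHz takes ~50% longer

# "Highest playable" settings: the GPU is already the limit, so the clocks tie.
print(fps(cpu_ms_3ghz, gpu_frame_ms=28.0), fps(cpu_ms_2ghz, gpu_frame_ms=28.0))  # ~35.7 vs ~35.7

# A standardized lower setting: the GPU work shrinks and the CPU gap appears.
print(fps(cpu_ms_3ghz, gpu_frame_ms=8.0), fps(cpu_ms_2ghz, gpu_frame_ms=8.0))    # 100 vs ~66.7
```

At the maxed settings the two CPUs look identical; at the fixed lower setting the difference that actually matters for a purchase shows up.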
Unfortunately that answers a different question from the one I, and I believe most gamers, want answered, which is: "Can I get playable (subjective to the gamer, but measurable objectively) frame rates at [so and so resolution and settings] in [so and so game] by upgrading my video card, or do I need to upgrade my CPU?"
Your test answers "You can get better image quality and a higher resolution with a better video card than with a new CPU." The flaw is that it neglects playable frame rates. As a gamer I couldn't care less whether the game is beautiful on a 30" monitor or blocky on a 15" screen if they both move at 5 fps.
An improved test that could provide insight into that first question would use a standard setting, or multiple standards, as constants and then show the results at varying CPU and GPU speeds. That method could show someone that their 2.0 GHz CPU could get over 30 fps in Crysis with a new GPU, or that over 30 fps is simply not possible with that CPU in that game.
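As a rough sketch of what I mean, with placeholder numbers only since I obviously don't have your measurements, the hardware names and fps values below are hypothetical:

```python
# Sketch of a standardized test: one fixed setting, one fps target, varying parts.
# The fps values are placeholders for real measurements, not actual data.

TARGET_FPS = 30
STANDARD = "1920x1200, high settings, 2xAA/16xAF"

measured_avg_fps = {
    # (cpu, gpu): hypothetical average fps in Crysis at STANDARD
    ("2.0 GHz quad", "current GPU"): 24,
    ("2.0 GHz quad", "new GPU"):     33,
    ("3.0 GHz quad", "new GPU"):     41,
}

for (cpu, gpu), avg in measured_avg_fps.items():
    verdict = "meets" if avg >= TARGET_FPS else "misses"
    print(f"{cpu} + {gpu}: {avg} fps -> {verdict} the {TARGET_FPS} fps target at {STANDARD}")
```

A table like that tells a buyer straight away whether a GPU upgrade alone gets them over the line, or whether the CPU is the part that has to go.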
A minimum is by definition a single data point, and as such it is useless without the rest of the plot to put it into perspective. I can have a lower minimum at a single point, but if the rest of the benchmark has a higher framerate, that minimum is useless.
That, and in my experience there's usually a strong correlation between average and minimum FPS anyway.
I agree that minimum FPS isn't a perfect indicator of performance. However, more data is better than less data, and a lower minimum can show that the game has become unplayable. Average fps is not without its problems either, since a benchmark can have long segments where little is going on (e.g. a fade-in screen or a panning shot). Such segments obscure the shorter segments where the more intense action happens and where fps matters most. Ideally we'd have something like a video showing the current fps and the on-screen action at the same time, along with the settings used.
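Here's a quick illustration of how an average can hide exactly that, using made-up frame times rather than anything from a real run:

```python
# A long calm segment can mask a short, heavy dip in the average.
# Frame times (ms) are invented purely to show the arithmetic.

calm   = [12.0] * 500  # ~83 fps during a panning shot, 500 frames
action = [45.0] * 60   # ~22 fps during a firefight, 60 frames
frame_times_ms = calm + action

avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
min_fps = 1000.0 / max(frame_times_ms)

print(f"average: {avg_fps:.1f} fps, minimum: {min_fps:.1f} fps")
# The ~64 fps average looks perfectly healthy; the ~22 fps minimum is where
# playability is actually decided.
```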