
Wow! Didn't know CoH was multi-CPU optimized so well?!


Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Originally posted by: munky
n00b: "hey guys, I just upgraded from a 8800gt to a 4870x2 and I don't feel any difference. I used to get 115 fps in counter strike stress test, and now I only get 117. Is my cpu holding me back or what?"

n00b: "hey guys, I just upgraded from a 8800gt to a 4870x2 and I don't feel any difference. I used to get 15 fps in Crysis, WiC, SC, Witcher, Mass Effect, Assassin's Creed, AoC etc. etc., and now I only get 17. Is my gpu holding me back or what?"
:roll:
Reading comprehension FTW


It's obvious you're no longer capable of discussing the topic logically, which isn't a surprise coming from someone who came to the conclusion Vista was the source of poor performance rather than an Opteron showing its age.
Who said anything about an Opteron? You're claiming a modern C2Q is a bottleneck, so cough up some benches to prove your point, and not some lousy 1280x1024 benches nobody cares about.
But I'll leave you with one final bench that should be crystal clear, it even uses similarly slow CPUs as your Opteron:

Digit-Life CPU/GPU Comparison

Under Configuration:

1) Set PC1 Video to HD4850
2) Set PC2 Video to HD4870
3) Set PC1 CPU to Athlon X2 6000+
4) Set PC2 CPU to Athlon X2 4800+

Shocking that the slower 4850 is able to outperform the faster 4870 with a faster CPU......

It's really a great tool; you'll see some other interesting comparisons, like how there's less difference with both CPUs set to the 4800+, but a much greater difference when both are set to the 6000+. Textbook CPU bottlenecking any monkey can understand.
Great, now maybe you can show me how a gtx280 is bottlenecked by a Pentium3?
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
:roll:
Reading comprehension FTW
And even in those cases there are clear examples of bottlenecking between resolutions and solutions.

Who said anything about an Opteron? You're claiming a modern C2Q is a bottleneck, so cough up some benches to prove your point, and not some lousy 1280x1024 benches nobody cares about.
I've already shown examples of CPU bottlenecking with faster CPUs at higher resolutions up to 2560, but you dismissed them due to RAM type/amount and chipset when those factors have little impact on FPS.

But here's another one. You can lay a ruler vertically and see quite clearly all of the fastest solutions fall within the same range.

Great, now maybe you can show me how a gtx280 is bottlenecked by a Pentium3?
And if an 8800GT with a 3GHz C2Q outperformed that set-up, then what would you conclude? The GTX 280 is slower? Or it's faster, but only with 16xTRSSAA @ 2560 in Crysis (but still only 7 FPS)?

I've shown how modern CPUs can and will bottleneck modern single GPUs to the point "faster" GPUs will perform worse than "slower" ones based solely on CPU. Given these same single GPUs can be scaled for potentially 2-4x more performance it's ridiculous to think faster CPUs aren't needed. Surely you're not saying a C2Q @ 3.0GHz is 2-4x faster than a 3.0GHz Athlon 64 X2 or enough to push a solution that can be 2-4x faster than the fastest single GPU.

 

Golgatha

Lifer
Jul 18, 2003
12,400
1,076
126
Originally posted by: chizow
Originally posted by: munky
n00b: "hey guys, I just upgraded from a 8800gt to a 4870x2 and I don't feel any difference. I used to get 115 fps in counter strike stress test, and now I only get 117. Is my cpu holding me back or what?"

n00b: "hey guys, I just upgraded from a 8800gt to a 4870x2 and I don't feel any difference. I used to get 15 fps in Crysis, WiC, SC, Witcher, Mass Effect, Assassin's Creed, AoC etc. etc., and now I only get 17. Is my gpu holding me back or what?"

It's obvious you're no longer capable of discussing the topic logically, which isn't a surprise coming from someone who came to the conclusion Vista was the source of poor performance rather than an Opteron showing its age.

But I'll leave you with one final bench that should be crystal clear, it even uses similarly slow CPUs as your Opteron:

Digit-Life CPU/GPU Comparison

Under Configuration:

1) Set PC1 Video to HD4850
2) Set PC2 Video to HD4870
3) Set PC1 CPU to Athlon X2 6000+
4) Set PC2 CPU to Athlon X2 4800+

Shocking that the slower 4850 is able to outperform the faster 4870 with a faster CPU......

It's really a great tool; you'll see some other interesting comparisons, like how there's less difference with both CPUs set to the 4800+, but a much greater difference when both are set to the 6000+. Textbook CPU bottlenecking any monkey can understand.

Ok, let's back up. The original article I linked to has a 4870 X2 with stock and overclocked C2Q CPUs running upwards of 3.0-3.6Ghz, and the settings included high levels of AA and AF. It was not some outdated dual-core Athlon X2. Not to mention there is still no sign of settings someone would actually use if they had a 4870 X2 video card (or even a single 48x0 card).

My point is that a C2Q at 3.0Ghz vs one at 3.6Ghz with a 4870 X2 shows hardly any improvement at all, so you get diminishing returns by overclocking (again, 3 FPS just isn't worth it unless you're doing something other than gaming with said PC). Now the situation could certainly change with CrossfireX 4870 X2s. However, I think we can safely assume if you have a modern C2Q anywhere near 3.0Ghz, running games with settings someone would actually use, with a 4870 X2 or lower video card, you are not hampered in any real measurable way by your CPU when gaming because you're GPU limited.
 

Golgatha

Lifer
Jul 18, 2003
12,400
1,076
126
Surely you're not saying a C2Q @ 3.0GHz is 2-4x faster than a 3.0GHz Athlon 64 X2 or enough to push a solution that can be 2-4x faster than the fastest single GPU.

In applications optimized for multi-core CPUs, you can bet your ass a C2Q at 3.0Ghz can easily be 2x or more faster than a 3.0Ghz Athlon 64 X2. The whole reason I posted the CoH scores is because it seems to be much better optimized compared to the other games tested.

http://www.tomshardware.com/re...s-rampage,1316-11.html


 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Originally posted by: munky
:roll:
Reading comprehension FTW
And even in those cases there are clear examples of bottlenecking between resolutions and solutions.
Like 4870CF flat-lining at ~120fps in Bioshock across resolutions? Nobody cares when you're cpu-limited at triple digit framerates.

Who said anything about an Opteron? You're claiming a modern C2Q is a bottleneck, so cough up some benches to prove your point, and not some lousy 1280x1024 benches nobody cares about.
I've already shown examples of CPU bottlenecking with faster CPUs at higher resolutions up to 2560, but you dismissed them due to RAM type/amount and chipset when those factors have little impact on FPS.
I dismissed them because you're drawing a conclusion by comparing 2 unrelated articles using 2 different systems.

But here's another one. You can lay a ruler vertically and see quite clearly all of the fastest solutions fall within the same range.
The fastest solutions are all getting close to 100fps, and you're complaining about something?

Great, now maybe you can show me how a gtx280 is bottlenecked by a Pentium3?
And if an 8800GT with a 3GHz C2Q outperformed that set-up, then what would you conclude? The GTX 280 is slower? Or it's faster, but only with 16xTRSSAA @ 2560 in Crysis (but still only 7 FPS)?
I'd conclude the same thing that went right over your head a few posts back - faster video cards require more cpu power in order to reach gpu-bound conditions.

I've shown how modern CPUs can and will bottleneck modern single GPUs to the point "faster" GPUs will perform worse than "slower" ones based solely on CPU. Given these same single GPUs can be scaled for potentially 2-4x more performance it's ridiculous to think faster CPUs aren't needed. Surely you're not saying a C2Q @ 3.0GHz is 2-4x faster than a 3.0GHz Athlon 64 X2 or enough to push a solution that can be 2-4x faster than the fastest single GPU.
But you haven't shown how a modern CPU will bottleneck a modern video card to the point where it causes a noticeable degradation in gaming experience. When you have 1000000 barrels explode in a scene with physics modeling, and your framerate chugs, then you can whine about cpu bottlenecking. If you're just complaining because you don't need a 3-way SLI setup to reach 100fps in UT3, then you're wasting time debating useless trivia.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Golgatha
Ok, let's back up. The original article I linked to has a 4870 X2 with stock and overclocked C2Q CPUs running upwards of 3.0-3.6Ghz, and the settings included high levels of AA and AF. It was not some outdated dual-core Athlon X2. Not to mention there is still no sign of settings someone would actually use if they had a 4870 X2 video card (or even a single 48x0 card).
Yes, the original linked article uses GPU limited settings when trying to show differences between CPUs, which is great for practical purposes but hardly shows evidence of diminishing returns. It simply shows you've hit a GPU bottleneck before the CPU, but that could very easily change once you introduce a 2nd X2 or run more CPU demanding games.

My point is that a C2Q at 3.0Ghz vs one at 3.6Ghz with a 4870 X2 shows hardly any improvement at all, so you get diminishing returns by overclocking (again, 3 FPS just isn't worth it unless you're doing something other than gaming with said PC). Now the situation could certainly change with CrossfireX 4870 X2s. However, I think we can safely assume if you have a modern C2Q anywhere near 3.0Ghz, running games with settings someone would actually use, with a 4870 X2 or lower video card, you are not hampered in any real measurable way by your CPU when gaming because you're GPU limited.
That's not true at all though: you could run 2560 with 24xAAA and get playable frame rates at, say, 70FPS in UT3 and maybe not see any difference between a Core 2 Duo at 2.0GHz and one at 3.6GHz, since any additional FPS increase from the 3.6GHz CPU will be masked by oppressive GPU limitations. Since there's no difference in FPS, you could come to the conclusion there's diminishing returns, but that doesn't mean you'd be right......

In applications optimized for multi-core CPUs, you can bet your ass a C2Q at 3.0Ghz can easily be 2x or more faster than a 3.0Ghz Athlon 64 X2. The whole reason I posted the CoH scores is because it seems to be much better optimized compared to the other games tested.

http://www.tomshardware.com/re...s-rampage,1316-11.html
K, Phenom X4 then...apples to apples :roll:
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
Like 4870CF flat-lining at ~120fps in Bioshock across resolutions? Nobody cares when you're cpu-limited at triple digit framerates.
Try Crysis, Witcher, Assassin's Creed, Oblivion all showing little difference from 1680 to 1920 and even 2560 in some cases at frame rates at or below 60FPS average for the fastest solutions. Oblivion is a particularly good example as an older title that shows you simply can't expect much higher FPS with the same speed C2 CPUs you've been using for 2 years even if you pair it with much faster GPU solutions.

I dismissed them because you're drawing a conclusion by comparing 2 unrelated articles using 2 different systems.
No, you dismissed them because they prove you wrong and clearly satisfy your Quad @ 3GHz vs 4GHz comparisons. Again, if you think chipsets and RAM amounts result in that great of a performance delta prove it with links. I have already and I'd be glad to pull sticks DIMM by DIMM to prove what I already know.

The fastest solutions are all getting close to 100fps, and you're complaining about something?
It's not a complaint, it's an observation that faster CPUs are needed as you increase GPU speed when "slower" solutions are performing the same as newer "faster" ones. Unless this is where we start measuring performance in "free AA"......

I'd conclude the same thing that went right over your head a few posts back - faster video cards require more cpu power in order to reach gpu-bound conditions.
Rofl, that's the first thing you've said that makes sense. Except it contradicts nearly everything you've said, especially your imaginary 3GHz inflection point where CPU speed is enough.

But you haven't shown how a modern CPU will bottleneck a modern video card to the point where it causes a noticeable degradation in gaming experience. When you have 1000000 barrels explode in a scene with physics modeling, and your framerate chugs, then you can whine about cpu bottlenecking. If you're just complaining because you don't need a 3-way SLI setup to reach 100fps in UT3, then you're wasting time debating useless trivia.
I've shown CPU bottlenecks at both the high and low end, I could link more with SLI vs. Tri-SLI but is it even worth it with you? I doubt it. And that barrel modeling demonstration? You do realize that was done frame-by-frame and re-animated in real-time right?

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Originally posted by: munky
Like 4870CF flat-lining at ~120fps in Bioshock across resolutions? Nobody cares when you're cpu-limited at triple digit framerates.
Try Crysis, Witcher, Assassin's Creed, Oblivion all showing little difference from 1680 to 1920 and even 2560 in some cases at frame rates at or below 60FPS average for the fastest solutions. Oblivion is a particularly good example as an older title that shows you simply can't expect much higher FPS with the same speed C2 CPUs you've been using for 2 years even if you pair it with much faster GPU solutions.
If your flatline theory is an indication of a cpu limitation, then tell me, why do different cards flatline at different framerates in games like Assassins Creed, for example? Scenarios like this don't necessarily fall into your simple cookie-cutter cpu limitation theory.

I dismissed them because you're drawing a conclusion by comparing 2 unrelated articles using 2 different systems.
No, you dismissed them because they prove you wrong and clearly satisfy your Quad @ 3GHz vs 4GHz comparisons. Again, if you think chipsets and RAM amounts result in that great of a performance delta prove it with links. I have already and I'd be glad to pull sticks DIMM by DIMM to prove what I already know.
They don't prove anything as far as I'm concerned.

The fastest solutions are all getting close to 100fps, and you're complaining about something?
It's not a complaint, it's an observation that faster CPUs are needed as you increase GPU speed when "slower" solutions are performing the same as newer "faster" ones. Unless this is where we start measuring performance in "free AA"......
They are only "needed" when you can experience a difference in performance. Those framerates are above you monitor's refresh rate anyways, and you'll never see a difference beyond colorful lines on fancy graphs.

I'd conclude the same thing that went right over your head a few posts back - faster video cards require more cpu power in order to reach gpu-bound conditions.
Rofl, that's the first thing you've said that makes sense. Except it contradicts nearly everything you've said, especially your imaginary 3GHz inflection point where CPU speed is enough.
You still don't get it - nobody cares if a game is cpu-limited at triple digit framerates. Take an insanely fast cpu, you can make Quake3 gpu-bound at 1000+ fps. So what? The inflection point varies for each game and video card, it's not some magical 3ghz figure you invented. I guarantee you the people playing Quake3 don't care that their cpu can only push it to 300 fps.

But you haven't shown how a modern CPU will bottleneck a modern video card to the point where it causes a noticeable degradation in gaming experience. When you have 1000000 barrels explode in a scene with physics modeling, and your framerate chugs, then you can whine about cpu bottlenecking. If you're just complaining because you don't need a 3-way SLI setup to reach 100fps in UT3, then you're wasting time debating useless trivia.
I've shown CPU bottlenecks at both the high and low end, I could link more with SLI vs. Tri-SLI but is it even worth it with you? I doubt it. And that barrel modeling demonstration? You do realize that was done frame-by-frame and re-animated in real-time right?
Way to miss the point completely.
 

deadseasquirrel

Golden Member
Nov 20, 2001
1,736
0
0
Very good article. 2 things I'd love to see-- add some other cards too, such as the 4870 and 4850... and test more games. But, overall, a big :thumbsup from me for showing such a variety of CPUs, all clocked at various speeds.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
If your flatline theory is an indication of a cpu limitation, then tell me, why do different cards flatline at different framerates in games like Assassins Creed, for example? Scenarios like this don't necessarily fall into your simple cookie-cutter cpu limitation theory.
It's impossible to say with certainty without looking at FPS vs. time graphs, but looking at the examples that are provided on various sites, it's quite clear what's going on when a slower card is shadowing the faster solutions. A certain speed CPU is going to produce a finite number of frames per second. Each frame and sequence can have varying degrees of rendering difficulty, so for harder frames or sequences the slower solutions will fall behind the faster solutions. In less intensive sequences, frame rates may mirror each other, meaning the overall average FPS may differ only slightly. Another example of using frame vs. time is with SLI/CF, where the multi-GPU solutions boost their average by returning higher highs, but lower lows, compared to single-GPU.
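To put that in concrete terms, here's a rough sketch (Python, all timings invented purely for illustration) of the usual bottleneck model: a frame can't complete faster than the slower of the CPU-side work and the GPU-side work, so past a certain point a faster card simply runs into the CPU's ceiling:

```python
# Rough bottleneck model with made-up numbers: per frame, the CPU has to do its
# work (game logic, draw-call submission) and the GPU has to render; the slower
# of the two sets the effective frame rate.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

cpu_ms = 12.5  # a hypothetical CPU that can prepare ~80 frames per second
cards = {"slower card": 16.0, "faster card": 11.0}  # hypothetical GPU render times (ms)

for name, gpu_ms in cards.items():
    print(name, fps(cpu_ms, gpu_ms), "fps")
# slower card 62.5 fps   (GPU-bound: the card is the limiter)
# faster card 80.0 fps   (capped at the CPU's ~80 fps ceiling, not 1000/11 = ~91)
```

That's why two cards that differ a lot on paper can post nearly identical averages once the CPU ceiling sits below what either card could do on its own.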

They don't prove anything to as far as I'm concerned.
Of course they don't, because they clearly prove you wrong. It's obvious that you'll just attempt to discredit any source that proves you wrong, but we'll try it once more. This is about as definitive and comprehensive as it gets, and all results will be clearly under triple digits. These tests were done with a "modern CPU", a Core 2 @ 2.93GHz.

  • Mass Effect

    You can clearly see results are tiered: as you increase resolution, slower solutions become GPU limited and drop off. Keep your eye on GTX 280 SLI @ 1280 No AA, at 78.8 FPS. In the second and third sets of benches they introduce AA and something similar happens, only sooner, as the slower GPU solutions drop off earlier because the GPU load is greater with the resolution increases + AA. The interesting thing, and the reason why I said keep an eye on GTX 280 SLI, is that even at 1920 8xAA there is virtually no difference in FPS, 74.6 @ 1920x8AA vs. 78.8 @ 1280x0AA. If that's not as clear an example of CPU bottlenecking I don't know what to tell you.

    The reason why it's relevant is that a faster CPU would undoubtedly raise this CPU limited frame rate and cause separation between parts without resorting to oppressive GPU limitations. So in the case of GTX 280 SLI, is it correct to say there's no difference in performance? Or do we say GTX 280 SLI performance rating = +3 resolution + free 8xAA? Or do we simply acknowledge we need faster CPUs to remove CPU bottlenecks for current GPU solutions?
  • World In Conflict

    Another very CPU and GPU intensive game, and it illustrates the logjam at the top even better. At 1280 no AA it's very obvious almost every solution down to a single 8800GT is CPU bottlenecked, as the spread is very flat, 42.5 to 49.8. As you increase resolution and AA, the slower and single solutions again begin to drop off. You'll also see very clearly that there is little to no difference between 4850CF and 4870CF throughout, while the single cards scale as expected with resolution and AA increases.

    By the time you get to 1920x4AA, only 5 solutions manage to hold off GPU bottlenecking: GTX 280 SLI, 4870CF, 4850CF, GTX 260 SLI and 8800 Ultra SLI, ranging from 42.2 to 45.6FPS. Once again, compare the 1280 result at 45.4 and 1920x4AA at 45.6 and it's obvious the results are CPU bottlenecked. What this means is a faster CPU would absolutely boost all scores at lower resolutions, and even at the highest resolution tested, until you reached a GPU bottleneck.
There are a few more benches in there that get the point across; I don't think it gets more obvious than this. I'm sure you'll probably just attempt to discredit yet another source, but the evidence is pervasive as most review sites are using "modern CPUs" @ 3GHz.
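A quick way to sanity-check that reading (using the figures quoted above, assuming those are the numbers in question) is just to look at how little the average moves when the GPU load goes way up; a change of only a few percent under a far heavier GPU setting is the signature of a cap somewhere other than the GPU:

```python
# How much did average FPS change when resolution/AA went way up?
# Figures are the ones quoted above; a tiny change despite a much heavier GPU
# workload suggests the frame rate is being set by the CPU, not the GPU.

def pct_change(light_load_fps, heavy_load_fps):
    return 100.0 * (heavy_load_fps - light_load_fps) / light_load_fps

print(round(pct_change(78.8, 74.6), 1))  # Mass Effect, GTX 280 SLI: -5.3
print(round(pct_change(45.4, 45.6), 1))  # World in Conflict, top solutions: +0.4
```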

They are only "needed" when you can experience a difference in performance. Those framerates are above you monitor's refresh rate anyways, and you'll never see a difference beyond colorful lines on fancy graphs.
No, they're needed if you're going to claim there's no benefit of faster CPUs and then point to GPU limited testing as evidence. I just pointed out 4 titles from AT's review that show bottlenecking below 60FPS, not to mention those are averages anyways so there are going to be sequences below 60FPS unless it's pegged at 60FPS with Vsync enabled.

You still don't get it - nobody cares if a game is cpu-limited at triple digit framarates. Take an insanely fast cpu, you can make Quake3 gpu-bound at 1000+ fps. So what? The inflection point varies for each game and video card, it's not some magical 3ghz figure you invented. I guarantee you the people playing Quake3 don't care that their cpu can only push it to 300 fps.
But we're not talking about triple digits and Quake 3, we're talking about modern titles that absolutely require faster CPUs to take advantage of faster GPUs and multi-GPU solutions. The 3GHz figure isn't invented, it's your "modern CPU", which happens to be a 3-3.2GHz C2 as those are the fastest retail processors available and what the vast majority of reviewers are using. It's also why we are seeing CPU bottlenecking at historically GPU bottlenecked resolutions like 1920 and even 2560. If faster CPUs aren't needed, why bother with faster GPU solutions? For Free AA? If review sites are going to base performance on FPS, then they need to start using parts that remove limitations on performance, plain and simple.


 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
It's impossible to say with certainty without looking at FPS vs. time graphs, but looking at the examples that are provided on various sites, it's quite clear what's going on when a slower card is shadowing the faster solutions. A certain speed CPU is going to produce a finite number of frames per second. Each frame and sequence can have varying degrees of rendering difficulty, so for harder frames or sequences the slower solutions will fall behind the faster solutions. In less intensive sequences, frame rates may mirror each other, meaning the overall average FPS may differ only slightly. Another example of using frame vs. time is with SLI/CF, where the multi-GPU solutions boost their average by returning higher highs, but lower lows, compared to single-GPU.
It's only "clear" when you make assumptions about things you don't understand. There could be several other factors which limit performance in multi-gpu systems, like drivers and pci-e bandwidth, where using a faster cpu wouldn't necessarily increase your performance. But considering additional factors such as rendering load per frame and min/max framerates, it becomes even more obvious that the cpu isn't the only thing that can be holding back gpu performance.

Of course they don't, because they clearly prove you wrong. It's obvious that you'll just attempt to discredit any source that proves you wrong, but we'll try it once more. This is about as definitive and comprehensive as it gets, and all results will be clearly under triple digits. These tests were done with a "modern CPU", a Core 2 @ 2.93GHz.

  • Mass Effect

    You can clearly see results are tiered: as you increase resolution, slower solutions become GPU limited and drop off. Keep your eye on GTX 280 SLI @ 1280 No AA, at 78.8 FPS. In the second and third sets of benches they introduce AA and something similar happens, only sooner, as the slower GPU solutions drop off earlier because the GPU load is greater with the resolution increases + AA. The interesting thing, and the reason why I said keep an eye on GTX 280 SLI, is that even at 1920 8xAA there is virtually no difference in FPS, 74.6 @ 1920x8AA vs. 78.8 @ 1280x0AA. If that's not as clear an example of CPU bottlenecking I don't know what to tell you.

    The reason why it's relevant is that a faster CPU would undoubtedly raise this CPU limited frame rate and cause separation between parts without resorting to oppressive GPU limitations. So in the case of GTX 280 SLI, is it correct to say there's no difference in performance? Or do we say GTX 280 SLI performance rating = +3 resolution + free 8xAA? Or do we simply acknowledge we need faster CPUs to remove CPU bottlenecks for current GPU solutions?
  • World In Conflict

    Another very CPU and GPU intensive game, and it illustrates the logjam at the top even better. At 1280 no AA it's very obvious almost every solution down to a single 8800GT is CPU bottlenecked, as the spread is very flat, 42.5 to 49.8. As you increase resolution and AA, the slower and single solutions again begin to drop off. You'll also see very clearly that there is little to no difference between 4850CF and 4870CF throughout, while the single cards scale as expected with resolution and AA increases.

    By the time you get to 1920x4AA, only 5 solutions manage to hold off GPU bottlenecking: GTX 280 SLI, 4870CF, 4850CF, GTX 260 SLI and 8800 Ultra SLI, ranging from 42.2 to 45.6FPS. Once again, compare the 1280 result at 45.4 and 1920x4AA at 45.6 and it's obvious the results are CPU bottlenecked. What this means is a faster CPU would absolutely boost all scores at lower resolutions, and even at the highest resolution tested, until you reached a GPU bottleneck.
There are a few more benches in there that get the point across; I don't think it gets more obvious than this. I'm sure you'll probably just attempt to discredit yet another source, but the evidence is pervasive as most review sites are using "modern CPUs" @ 3GHz.
Nice try, except you forgot to mention that:
1. ME still runs at almost 100 fps in those benches
2. A single gtx280 beats SLI at lower resolutions in both of those examples
3. This article shows a modern cpu can run WIC at well over 70fps

So, instead of considering if the observed results are caused by Nvidia's craptastic drivers, multi-gpu limitations, or just incompetent testing methods, you keep spewing nonsense how no cpu in existence is fast enough for modern games and video cards.

They are only "needed" when you can experience a difference in performance. Those framerates are above you monitor's refresh rate anyways, and you'll never see a difference beyond colorful lines on fancy graphs.
No, they're needed if you're going to claim there's no benefit of faster CPUs and then point to GPU limited testing as evidence.
That's because for the most part modern games are gpu-limited at settings people actually use.
I just pointed out 4 titles from AT's review that show bottlenecking below 60FPS, not to mention those are averages anyways so there are going to be sequences below 60FPS unless it's pegged at 60FPS with Vsync enabled.
No, you didn't. All those games in the AT review are either gpu-limited below 60fps (Crysis) or run above 60fps with faster video cards. And in all those examples the performance scales in line with faster video cards, except in cases of multi-gpu performance issues.

But we're not talking about triple digits and Quake 3, we're talking about modern titles that absolutely require faster CPUs to take advantage of faster GPUs and multi-GPU solutions. The 3GHz figure isn't invented, it's your "modern CPU", which happens to be a 3-3.2GHz C2 as those are the fastest retail processors available and what the vast majority of reviewers are using. It's also why we are seeing CPU bottlenecking at historically GPU bottlenecked resolutions like 1920 and even 2560. If faster CPUs aren't needed, why bother with faster GPU solutions? For Free AA? If review sites are going to base performance on FPS, then they need to start using parts that remove limitations on performance, plain and simple.
Multi-gpu solutions have their own set of issues and limitations, and the results you saw are not necessarily caused by lack of cpu performance. The performance of single video cards scales in line with expected results in pretty much all the benches I saw. But more importantly, instead of blaming the cpu that a second gtx280 will not double your framerate in games, why don't you consider that 100 fps is more than your monitor can display anyways, and any Joe Schmoe who likes throwing money away on computer HW should have thought of that BEFORE he ordered that 2nd or 3rd video card.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
It's only "clear" when you make assumptions about things you don't understand. There could be several other factors which limit performance in multi-gpu systems, like drivers and pci-e bandwidth, where using a faster cpu wouldn't necessarily increase your performance. But considering additional factors such as rendering load per frame and min/max framerates, it becomes even more obvious that the cpu isn't the only thing that can be holding back gpu performance.
External factors are certainly to be considered but in a review of the same system such variables will not change. In many cases they will not change between reviews done at the same time and are always scrutinized by readers. And lastly, those variables may have little or no impact on performance to begin with, like your assertion that chipset and RAM type would skew results.

Nice try, except you forgot to mention that:
1. ME still runs at almost 100 fps in those benches
2. A single gtx280 beats SLI at lower resolutions in both of those examples
3. This article shows a modern cpu can run WIC at well over 70fps

So, instead of considering if the observed results are caused by Nvidia's craptastic drivers, multi-gpu limitations, or just incompetent testing methods, you keep spewing nonsense how no cpu in existence is fast enough for modern games and video cards.
Wow, shocker, you trying to discredit concrete and blatant evidence proving you wrong. :roll:

1. 76 is almost 100? Let's just round up everything, let's just call 30 FPS 100. Even with an average of 76 FPS you will most likely have frame dips below 60 FPS, which will be below Vsync on a 60Hz monitor unless frames were hard capped ~80 and rarely dropped.

2. Yes, we know that SLI and multi-GPU has additional CPU overhead. This is well documented and mentioned in just about every review and tech site and further emphasizes the need for faster CPU solutions for the fastest GPU solutions.

3. That article is also done at MEDIUM settings, and I'm well aware WIC can score over 70FPS, with a faster CPU of course, as other review sites have shown. Even in your linked review there is clear CPU bottlenecking up to 2560 and very clear scaling with faster CPUs. So thanks for reinforcing my point.

Even funnier how you try to turn this into an Nvidia driver flame LMAO. Did you already forget the tests showing absolutely no difference between 4850CF and 4870CF or 8x "Free AA" in AoC at 1920 with the 4870X2? It's obvious you're not absorbing any of the content being discussed at this point and have just resorted to trolling.

That's because for the most part modern games are gpu-limited at settings people actually use.
Except when they're not, as I've shown time and time again.

No, you didn't. All those games in the AT review are either gpu-limited below 60fps (Crysis) or run above 60fps with faster video cards. And in all those examples the performance scales in line with faster video cards, except in cases of multi-gpu performance issues.
Reading Comprehension FTL
Assassin's Creed, Oblivion, The Witcher are all around 60FPS for even the fastest solutions, and show little difference between 1680 and 1920, sometimes even 2560 for the fastest solutions.

Multi-gpu solutions have their own set of issues and limitations, and the results you saw are not necessarily caused by lack of cpu performance. The performance of single video cards scales in line with expected results in pretty much all the benches I saw. But more importantly, instead of blaming the cpu that a second gtx280 will not double your framerate in games, why don't you consider that 100 fps is more than your monitor can display anyways, and any Joe Schmoe who likes throwing money away on computer HW should have thought of that BEFORE he ordered that 2nd or 3rd video card.
Except those limitations go away once you remove CPU bottlenecks by using a faster CPU. I've already shown examples of CPU bottlenecking below 100FPS so stop trying to use that as a crutch. And no, not every Joe Schmoe would consider these things before spending money on a 2nd or 3rd video card. Hell half the Joe Schmoes fail to even acknowledge the existence or possibility of CPU bottlenecking with current video cards. :roll:
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
External factors are certainly to be considered but in a review of the same system such variables will not change. In many cases they will not change between reviews done at the same time and are always scrutinized by readers. And lastly, those variables may have little or no impact on performance to begin with, like your assertion that chipset and RAM type would skew results.
:roll:
That makes about as much sense as putting skinny tires on a race car and saying it doesn't matter because those are "external factors".

1. 76 is almost 100? Let's just round up everything, let's just call 30 FPS 100. Even with an average of 76 FPS you will most likely have frame dips below 60 FPS, which will be below Vsync on a 60Hz monitor unless frames were hard capped ~80 and rarely dropped.
How about 94, is that almost 100? Or are you going to continue blaming the cpu for multi-gpu inefficiencies and limitations? And because the framerate dips below average, a faster gpu will increase the minimum fps too, unless you want to debate that topic as well, which I'm more than prepared to do.

2. Yes, we know that SLI and multi-GPU has additional CPU overhead. This is well documented and mentioned in just about every review and tech site and further emphasizes the need for faster CPU solutions for the fastest GPU solutions.
How about the fact that nobody needs a second gpu to begin with when a single gpu performs faster, and over 90 fps too?

3. That article is also done at MEDIUM settings, and I'm well aware WIC can score over 70FPS, with a faster CPU of course, as other review sites have shown. Even in your linked review there is clear CPU bottlenecking up to 2560 and very clear scaling with faster CPUs. So thanks for reinforcing my point.
Nobody except you will care about cpu bottlenecking at 70fps.

Even funnier how you try to turn this into an Nvidia driver flame LMAO. Did you already forget the tests showing absolutely no difference between 4850CF and 4870CF or 8x "Free AA" in AoC at 1920 with the 4870X2? It's obvious you're not absorbing any of the content being discussed at this point and have just resorted to trolling.
You should mention WOW, I bet you will also not see any difference between 4850CF and 4870CF. But the people playing the game won't care either way.

That's because for the most part modern games are gpu-limited at settings people actually use.
Except when they're not, as I've shown time and time again.
Except the cases you've shown are already running above your monitor's refresh rate.


Reading Comprehension FTL
Assassin's Creed, Oblivion, The Witcher are all around 60FPS for even the fastest solutions, and show little difference between 1680 and 1920, sometimes even 2560 for the fastest solutions.
Instead of just looking for horizontal lines in graphs, take your own advice and pay more attention to what those tables and graphs actually say.

Except those limitations go away once you remove CPU bottlenecks by using a faster CPU. I've already shown examples of CPU bottlenecking below 100FPS so stop trying to use that as a crutch. And no, not every Joe Schmoe would consider these things before spending money on a 2nd or 3rd video card. Hell half the Joe Schmoes fail to even acknowledge the existence or possibility of CPU bottlenecking with current video cards. :roll:
You haven't shown any article where the "cpu bottleneck" was removed, hence your collection of assumptions and generalizations on the topic.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
:roll:
That makes about as much sense as putting skinny tires on a race car and saying it doesn't matter because those are "external factors".
We've already covered some of those external factors. Show me one review that shows more than 5-10% difference between DDR2 and DDR3 chipsets. Show me one review that shows more than 5% difference in FPS with additional RAM. Show me one review that shows 20% difference with a single driver revision. It's easy to dismiss these factors because review after review shows these changes are marginal and unspectacular. GPUs and CPUs are the tires and engines, those external factors are things like wings, fenders, grilles etc.

How about 94, is that almost 100? Or are you going to continue blaming the cpu for multi-gpu inefficiencies and limitations? And because the framerate dips below average, a faster gpu will increase the minimum fps too, unless you want to debate that topic as well, which I'm more than prepared to do.
And where do you see 94? Oh ya, at 1280. Or are you going to contradict yourself again to try and prove a point? What happened to "resolutions people actually play at". What happens when you increase to those resolutions? The single GPUs fall back as they reach GPU limits. But that doesn't discount the fact that a faster CPU would shift the entire result set forward from top to bottom at CPU bottlenecked resolutions until the GPU becomes the bottleneck at higher resolutions and settings.

And no the "faster" GPU/GPUs will not always increase minimum FPS, as has been shown time and again with multi-GPU that have higher AVG scores. Simply put, they mask lower minimums in more intensive sequences with higher maximums to come out to a higher average. COD4 Comparison GTX 280 and 9800GTX SLI have the same average, but its very clear the SLI solution not only has lower minimums, but also spends much more time at low frame rates, which it makes up with higher frame rates above refresh rate.

How about the fact that nobody needs a second gpu to begin with when a single gpu performs faster, and over 90 fps too?
At 1280 sure, but not at "resolutions and settings people actually play at". You do realize your arguments have crossed over into the nonsensical and are completely contradictory to your earlier points now right?

Nobody except you will care about cpu bottlenecking at 70fps.
But you're still clearly wrong: modern CPUs are bottlenecking modern GPUs, and your own link proved my point even further.

You should mention WOW, I bet you will also not see any difference between 4850CF and 4870CF. But the people playing the game won't care either way.
WOW, from what I hear is actually an incredibly CPU intensive game like most MMOs, so I wouldn't hesitate recommending a faster CPU, especially if someone were considering one of the faster GPU solutions available today.

Except the cases you've shown are already running above your monitor's refresh rate.
Only if Vsync is enabled. You'll still see frame drops below refresh unless you have Vsync enabled or some hard cap or frame rate smoothing enabled.


Instead of just looking for horizontal lines in graphs, take your own advice and pay more attention to what those tables and graphs actually say.
I'm well aware of what they're saying, they say if 50-60FPS is low to you, buying more GPU won't help you, buying a faster CPU will. Either that or you're paying for "free AA" unless you shell out $1300 for a 30" LCD.

You haven't shown any article where the "cpu bottleneck" was removed, hence your collection of assumptions and generalizations on the topic.
Actually I did: the 4GHz Tweaktown review, which still had relevant results relative to itself. They also have a 3GHz vs. 4GHz GTX 280 SLI/Tri-SLI review, as does Guru3D, but there's no need for me to link them, I already know what they say. :)

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
We've already covered some of those external factors. Show me one review that shows more than 5-10% difference between DDR2 and DDR3 chipsets. Show me one review that shows more than 5% difference in FPS with additional RAM. Show me one review that shows 20% difference with a single driver revision. It's easy to dismiss these factors because review after review shows these changes are marginal and unspectacular. GPUs and CPUs are the tires and engines, those external factors are things like wings, fenders, grilles etc.
How about one system having nearly twice the L2 cache as the other? Are you gonna claim that makes no difference either?

And where do you see 94? Oh ya, at 1280. Or are you going to contradict yourself again to try and prove a point? What happened to "resolutions people actually play at". What happens when you increase to those resolutions? The single GPUs fall back as they reach GPU limits. But that doesn't discount the fact that a faster CPU would shift the entire result set forward from top to bottom at CPU bottlenecked resolutions until the GPU becomes the bottleneck at higher resolutions and settings.
When single gpu's fall back is when you buy a second gpu. Nobody in their right mind runs SLI at 1280. Is that too difficult a concept for you to grasp?

And no the "faster" GPU/GPUs will not always increase minimum FPS, as has been shown time and again with multi-GPU that have higher AVG scores. Simply put, they mask lower minimums in more intensive sequences with higher maximums to come out to a higher average. COD4 Comparison GTX 280 and 9800GTX SLI have the same average, but its very clear the SLI solution not only has lower minimums, but also spends much more time at low frame rates, which it makes up with higher frame rates above refresh rate.
Don't tell me about SLI when I'm talking about a faster single gpu. I'm well aware of the inefficiencies in multi-gpu systems.

How about the fact that nobody needs a second gpu to begin with when a single gpu performs faster, and over 90 fps too?
At 1280 sure, but not at "resolutions and settings people actually play at". You do realize your arguments have crossed over into the nonsensical and are completely contradictory to your earlier points now right?
So, if modern cards were cpu-limited at resolutions people actually use, you wouldn't see any benefit from adding a second card. That's obviously not the case.

Nobody except you will care about cpu bottlenecking at 70fps.
But you're still clearly wrong: modern CPUs are bottlenecking modern GPUs, and your own link proved my point even further.
Your ridiculous definition of the term "bottlenecking" is wrong.

You should mention WOW, I bet you will also not see any difference between 4850CF and 4870CF. But the people playing the game won't care either way.
WOW, from what I hear is actually an incredibly CPU intensive game like most MMOs, so I wouldn't hesitate recommending a faster CPU, especially if someone were considering one of the faster GPU solutions available today.
Maybe you should consider that modern gpu's can run WOW in idle mode before recommending anything.

Except the cases you've shown are already running above your monitor's refresh rate.
Only if Vsync is enabled. You'll still see frame drops below refresh unless you have Vsync enabled or some hard cap or frame rate smoothing enabled.
Vsync or not, you'll never see the additional frames as your monitor physically can't display them. All you'll see is screen tearing.


Instead of just looking for horizontal lines in graphs, take your own advice and pay more attention to what those tables and graphs actually say.
I'm well aware of what they're saying, they say if 50-60FPS is low to you, buying more GPU won't help you, buying a faster CPU will. Either that or you're paying for "free AA" unless you shell out $1300 for a 30" LCD.
Nothing in that article supports your ridiculous generalization. What the charts really say is if a single gtx280 only gets 33fps, get a second one and pray you'll reach 60. If that's not enough for you, then you're SOL until the next gen high end cards arrive.

You haven't shown any article where the "cpu bottleneck" was removed, hence your collection of assumptions and generalizations on the topic.
Actually I did: the 4GHz Tweaktown review, which still had relevant results relative to itself. They also have a 3GHz vs. 4GHz GTX 280 SLI/Tri-SLI review, as does Guru3D, but there's no need for me to link them, I already know what they say. :)

And I showed you at least 5 sites where performance increased with faster/more gpu's, which couldn't have happened if the cpu was really the bottleneck.
 

minmaster

Platinum Member
Oct 22, 2006
2,041
3
71
ok so is the point of this trying to say that playing modern games at high res is limited by the CPU, and that high end GPUs are either way ahead of high end CPUs or that CPUs are way too far behind GPUs?
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
How about one system having nearly twice the L2 cache as the other? Are you gonna claim that makes no difference either?
Not nearly as much as clock speed, as shown multiple times over in Penryn vs. Conroe benches. Not to mention the results relative to the same platform are still completely relevant, showing no scaling at 3GHz but performance increases for all platforms up to 1920, and scaling once you use a faster 4GHz CPU. I'm confident you'd see similar in the OP's test if they hadn't used a single oppressive GPU limited setting with games that aren't particularly CPU intensive.

When single gpu's fall back is when you buy a second gpu. Nobody in their right mind runs SLI at 1280. Is that too difficult a concept for you to grasp?
LMAO. Then why are you using 94 FPS @ 1280 as some sort of validation that faster CPUs aren't needed because a single GPU at that setting is faster than SLI? Once again, we all know SLI/CF requires additional CPU cycles in order to scale. In CPU limited situations (or in cases of poor scaling), performance will actually be worse than a single card. This is well documented across every tech and review site that handles multi-GPU. We can rule out poor scaling/driver issues in each of these titles because the SLI solutions do distance themselves from single card as GPU load increases, but the fact FPS do not change from 1280 to 1680 to 1920 even with 2x, 4x, 8x AA tells us a faster CPU would increase those scores across the board as the solution is clearly CPU bottlenecked. How is that so hard to understand?

Don't tell me about SLI when I'm talking about a faster single gpu. I'm well aware of the inefficiencies in multi-gpu systems.
Obviously not if you're going to point to 94 FPS for a single GPU at 1280 and 75 FPS for SLI as validation for your point that CPU bottlenecks don't exist......

So, if modern cards were cpu-limited at resolutions people actually use, you wouldn't see any benefit from adding a second card. That's obviously not the case.
Only if measuring performance comes in the form of levels of "free AA".

Your ridiculous definition of the term "bottlenecking" is wrong.
Bottlecorking, bottlecapping, bottlelimiting, bottletopping, whatever you want to call it you're still wrong. Feel free to look it up on dictionary.com or wiki, I'm quite sure my definition is more accurate than whatever you think the word means.

Maybe you should consider that modern gpu's can run WOW in idle mode before recommending anything.
I'd certainly not tell them to spend 2x as much on 4870CF/X2 over 4850CF or even a fast single card like GTX 280/260 or HD4870 if they were running a slow CPU that might not show any benefit from upgrading at all without upgrading the CPU also. Unless of course, they were willing to spend 2x as much for 24xAAA instead of 16xAAA.

There was also a thread in video sometime back where people weren't happy with the 4870X2's performance with AA enabled. Considering that's the fastest card available today, I don't think it's so easy to dismiss WOW's GPU requirements.

Vsync or not, you'll never see the additional frames as your monitor physically can't display them. All you'll see is screen tearing.
And I'm not talking about frames above refresh, I'm talking about frames below. In the case of 60-80FPS averages, unless the game is capped or Vsync is enabled, frame distribution will undoubtedly drop below 60, which would be reflected even on a 60Hz panel. My point is you can't simply point to 60-80FPS averages and claim there's no point in higher FPS, higher is always better as that means there's fewer minimums for shorter durations below refresh rate.

Nothing in that article supports your ridiculous generalization. What the charts really say is if a single gtx280 only gets 33fps, get a second one and pray you'll reach 60. If that's not enough for you, then you're SOL until the next gen high end cards arrive.
Rofl, everything in those articles supports my generalization. It's spot on accurate; you've already "proved" me right by pointing to AT's articles showing 50-60FPS from 1680 to 1920 and higher scores with single GPU at low resolutions in THG's comparison. I've shown similar examples with 4850CF vs. 4870CF/X2 showing no difference at lower resolutions with slower CPUs in certain games as well. The 4870X2 is as next-gen as you're going to get and it doesn't clearly distance itself from other solutions until 2560. So again, unless you want "free AA" that will cost you 2x, or you want to spend $1300 on a 30" LCD, you might see more "performance" gain when measured in FPS by upgrading your CPU.

And I showed you at least 5 sites where performance increased with faster/more gpu's, which couldn't have happened if the cpu was really the bottleneck.
Uh no. Performance never increased, it just didn't decrease as fast as the other solutions as you increased GPU load. There is a big difference, heh. And yes, it's quite obvious that the CPU no longer becomes the bottleneck when you increase resolution/settings/AA to the point you're GPU bottlenecked.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Originally posted by: munky
How about one system having nearly twice the L2 cache as the other? Are you gonna claim that makes no difference either?
Not nearly as much as clock speed, as shown multiple times over in Penryn vs. Conroe benches. Not to mention the results relative to the same platform are still completely relevant, showing no scaling at 3GHz but performance increases for all platforms up to 1920, and scaling once you use a faster 4GHz CPU. I'm confident you'd see similar in the OP's test if they hadn't used a single oppressive GPU limited setting with games that aren't particularly CPU intensive.
It makes enough difference to invalidate any direct comparison you're making between the two articles, in addition to all the other uncontrolled variables like driver versions.

LMAO. Then why are you using 94 FPS @ 1280 as some sort of validation that faster CPUs aren't needed because a single GPU at that setting is faster than SLI? Once again, we all know SLI/CF requires additional CPU cycles in order to scale. In CPU limited situations (or in cases of poor scaling), performance will actually be worse than a single card. This is well documented across every tech and review site that handles multi-GPU. We can rule out poor scaling/driver issues in each of these titles because the SLI solutions do distance themselves from single card as GPU load increases, but the fact FPS do not change from 1280 to 1680 to 1920 even with 2x, 4x, 8x AA tells us a faster CPU would increase those scores across the board as the solution is clearly CPU bottlenecked. How is that so hard to understand?
So why are you using craptastic SLI scaling at 1280 to claim we need faster cpu's, when in fact nobody needs SLI at 1280 to begin with?

Obviously not if you're going to point to 94 FPS for a single GPU at 1280 and 75 FPS for SLI as validation for your point that CPU bottlenecks don't exist......
Don't blame the cpu for idiotic applications of multi-gpu solutions.

So, if modern cards were cpu-limited at resolutions people actually use, you wouldn't see any benefit from adding a second card. That's obviously not the case.
Only if measuring performance comes in the form of levels of "free AA".
Put some glasses on and take a look at those graphs again at 1920x1200, see if multi-gpu doesn't increase fps over a single gpu.

Bottlecorking, bottlecapping, bottlelimiting, bottletopping, whatever you want to call it you're still wrong. Feel free to look it up on dictionary.com or wiki, I'm quite sure my definition is more accurate than whatever you think the word means.
If I was wrong then you wouldn't see a benefit from SLI at high resolutions.

I'd certainly not tell them to spend 2x as much on 4870CF/X2 over 4850CF or even a fast single card like GTX 280/260 or HD4870 if they were running a slow CPU that might not show any benefit from upgrading at all without upgrading the CPU also. Unless of course, they were willing to spend 2x as much for 24xAAA instead of 16xAAA.
You shouldn't tell them to get CF/SLI for WOW regardless of what cpu they're using.

And I'm not talking about frames above refresh, I'm talking about frames below. In the case of 60-80FPS averages, unless the game is capped or Vsync is enabled, frame distribution will undoubtedly drop below 60, which would be reflected even on a 60Hz panel.
Frame distribution can drop regardless of vsync, it can even be worse with vsync. But that doesn't mean slapping in a faster cpu will eliminate that drop.

My point is you can't simply point to 60-80FPS averages and claim there's no point in higher FPS, higher is always better as that means there's fewer minimums for shorter durations below refresh rate.
That's the mentality of someone who debates useless trivia on a forum, and not someone who actually plays the game for enjoyment or competitively. Just like Canon/Nikon fanboys debating technical garbage of their DSLR when in fact both cameras provide indistinguishable prints at sizes larger than most people would use.

Rofl, everything in those articles supports my generalization. It's spot on accurate; you've already "proved" me right by pointing to AT's articles showing 50-60FPS from 1680 to 1920 and higher scores with single GPU at low resolutions in THG's comparison. I've shown similar examples with 4850CF vs. 4870CF/X2 showing no difference at lower resolutions with slower CPUs in certain games as well. The 4870X2 is as next-gen as you're going to get and it doesn't clearly distance itself from other solutions until 2560. So again, unless you want "free AA" that will cost you 2x, or you want to spend $1300 on a 30" LCD, you might see more "performance" gain when measured in FPS by upgrading your CPU.
You're looking at ink blobs and trying to convince everyone they show clear shapes and figures. "50-60FPS from 1680 to 1920" is meaningless trivia, and you've failed to consider that:
1. Multi-gpu solutions are inefficient by design
2. A single gpu often matches or beats those numbers at low rez
3. A single gpu often falls behind those numbers at high rez.
So, instead of suggesting we need faster single gpu's for high rez gaming, you're blaming the cpu for the limitations of current gpu's and multi-gpu inefficiencies.

Uh no. Performance never increased, it just didn't decrease as fast as the other solutions as you increased GPU load. There is a big difference, heh. And yes, it's quite obvious that the CPU no longer becomes the bottleneck when you increase resolution/settings/AA to the point you're GPU bottlenecked.
Basic arithmetic FTW: at 1680, a single gtx280 gets 37fps, two of them combined get 57. Did the performance increase or not? Or are you gonna suggest that what we really need are faster cpu's? :laugh:
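
For reference, the scaling arithmetic both sides keep citing is easy to lay out explicitly. A minimal Python sketch; the fps figures are the ones quoted above, everything else is just illustration:

single_gpu_fps = 37.0   # one GTX 280 at 1680, figure quoted above
sli_fps = 57.0          # two GTX 280s in SLI, figure quoted above

speedup = sli_fps / single_gpu_fps   # how much faster two cards are than one
efficiency = speedup / 2             # fraction of an ideal 2x gain

print(f"Speedup:    {speedup:.2f}x")    # roughly 1.54x
print(f"Efficiency: {efficiency:.0%}")  # roughly 77% of perfect scaling
# Whether the missing ~23% is CPU overhead, AFR inefficiency, or the engine
# is exactly what the rest of this thread is arguing about.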
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
It makes enough difference to invalidate any direct comparison you're making between the two articles, in addition to all the other uncontrolled variables like driver versions.
Like I said, find an article that shows 20% difference clock for clock between Kentsfield and Yorkfield in actual games. Those reviews were published on the same day, with the same drivers, so again, stop looking for excuses when the answer is obvious.

So why are you using craptastic SLI scaling at 1280 to claim we need faster cpu's, when in fact nobody needs SLI at 1280 to begin with?
I'm not, I was pointing out how ridiculous it was to use 1280 single-gpu results in an attempt to prove your point after hyping "resolutions and settings people actually use." Once again, I'm pointing out the fact there is virtually no difference in FPS, i.e. no GPU bottlenecking, across resolutions up to 1920x8AA with SLI. Is that 1280? No it's not. So once again, if 75FPS is low to you and those inevitable drops below refresh bother you, the only way to increase FPS would be to get a faster CPU as that is the bottleneck. Getting a third card in SLI would do nothing for you, in fact, it may again lower performance due to CPU overhead in a CPU bottlenecked scenario.

Also, stop trying to turn this into an ATI/NV flame war, I've already shown the same phenomenon is happening with CF/X2, if anything, it's even more obvious in cases where 4850CF and 4870CF show no differences in CPU bottlenecked situations and scaling up to 4 RV770 GPUs in X2 CF only yields more "free AA".

Don't blame the cpu for idiotic applications of multi-gpu solutions.
Rofl, of course the CPU is to blame if it's limiting maximum frame rates to the point there is no distinction between single and multi-GPU or varying multi-GPU solutions. Is this a joke? Honestly you need to look at some GTX 280 SLI/TRI-SLI and 4870X2 reviews before commenting further.

Put some glasses on and take a look at those graphs again at 1920x1200, see if multi-gpu doesn't increase fps.
As you so keenly pointed out.....multi-GPU actually has lower results than single GPU at 1280....there's no increasing past 94 or 75 for anything there. The only reason SLI distinguishes itself is because the other solutions' performance decreases as they hit GPU limitations. My point is that SLI never increases FPS relative to itself, it's 75FPS across the board regardless of resolution or AA setting. This is textbook CPU bottlenecking, I don't know how this is not blatantly obvious........

If I was wrong then you wouldn't see a benefit from SLI at high resolutions.
Which doesn't preclude the possibility you would see greater increases from a faster CPU in CPU bottlenecked situations. You've already linked to examples showing this to be true....remember how WiC can get 60FPS+ but clearly flatlines at 48FPS with a 2.93GHz CPU in THG's comparison? We know this to be true because other reviewers have used faster CPUs and shown scaling in CPU intensive games where GPU scaling with slow CPUs does nothing to increase FPS.

You shouldn't tell them to get CF/SLI for WOW regardless of what cpu they're using.
Certainly not if they're using an Opteron, Pentium D or similarly slow CPU and expecting a miracle.

Frame distribution can drop regardless of vsync, it can even be worse with vsync. But that doesn't mean slapping in a faster cpu will eliminate that drop.
Uh, no. If you have a 60FPS Average and Vsync enabled on a 60Hz monitor.....how would frame distribution drop below 60? Honestly stop arguing just for the sake of arguing, you're once again wrong.

That's the mentality of someone who debates useless trivia on a forum, and not someone who actually plays the game for enjoyment or competitively. Just like Canon/Nikon fanboys debating technical garbage of their DSLR when in fact both cameras provide indistinguishable prints at sizes larger than most people would use.
Rofl, useless trivia? Once again I'll point out this example since you obviously glazed over it the first time: COD4 Comparison

GTX 280 - Min: 32 Max: 211 Avg: 62.5
9800 GTX SLI - Min: 11 Max: 256 Avg: 62.8

Not only does the SLI solution have much lower lows, it also spends significant periods of time at much lower frame rates than the GTX 280. Is it just trivia that two solutions can take very different paths to the same average? A higher AVG would push distributions for both higher, reducing the frequency and duration spent below refresh.
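
To make the distribution point concrete, here is a toy Python sketch. Only the ~62.5 average mirrors the figures quoted above; the individual samples are invented purely for illustration:

def summarize(name, fps_samples, refresh=60):
    avg = sum(fps_samples) / len(fps_samples)
    below = sum(1 for f in fps_samples if f < refresh) / len(fps_samples)
    print(f"{name}: min {min(fps_samples)}, avg {avg:.1f}, "
          f"{below:.0%} of samples below {refresh} fps")

# Steady card: stays near its average the whole run.
steady = [55, 58, 60, 62, 63, 65, 66, 64, 62, 70]
# Spiky card: same average, but deep dips and big peaks.
spiky = [12, 20, 35, 90, 110, 95, 88, 70, 45, 60]

summarize("steady", steady)   # min 55, avg 62.5, 20% of samples below 60 fps
summarize("spiky", spiky)     # min 12, avg 62.5, 40% of samples below 60 fps
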
You're looking at ink blobs and trying to convince everyone they show clear shapes and figures. "50-60FPS from 1680 to 1920" is meaningless trivia, and you've failed to consider that:
1. Multi-gpu solutions are inefficient by design
Actually they've shown near 100% scaling in titles and situations that aren't CPU bottlenecked. The newest ATI solutions can actually achieve better than 100% efficiency.

2. A single gpu often matches or beats those numbers at low rez
Of course, due to CPU bottlenecking and additional multi-GPU overhead.

3. A single gpu often falls behind those numbers at high rez.
Wow, slower single GPUs fall behind as they become GPU bottlenecked. Who'd have thunk it. :roll:

So, instead of suggesting we need faster single gpu's for high rez gaming, you're blaming the cpu for the limitations of current gpu's and multi-gpu inefficiencies.
No, I'm saying faster CPUs would shift the entire paradigm forward, particularly in cases where multi-GPU solutions are not showing FPS drops across resolutions and AA settings (i.e. CPU bottlenecked) or overall low/borderline unplayable FPS. This need for faster CPUs should be blatantly obvious considering we're now dealing with single GPU solutions 2-3x faster than G80 that can scale to 3-4 GPUs, yet still using the same once-fast 3GHz Core 2 processors for comparison. Honestly it makes no sense whatsoever.

Basic arithmetic FTW: at 1680, a single gtx280 gets 37fps, two of them combined get 57. Did the performance increase or not? Or are you gonna suggest that what we really need are faster cpu's? :laugh:
Uh, SLI performance does not increase relative to itself, it decreases slightly from 57.7 to 53.7. You're comparing single-GPU to multi-GPU in a situation where the single GPU is already GPU limited, so of course adding a 2nd card would increase performance. :laugh: So again, if you think 57.7 to 53.7 FPS is low, adding more GPUs isn't going to help, only a faster CPU or a retooling of the engine is going to help as the results of 4870CF, 4870X2 and 4870X2 in CF confirm.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Originally posted by: munky
It makes enough difference to invalidate any direct comparison you're making between the two articles, in addition to all the other uncontrolled variables like driver versions.
Like I said, find an article that shows 20% difference clock for clock between Kentsfield and Yorkfield in actual games. Those reviews were published on the same day, with the same drivers, so again, stop looking for excuses when the answer is obvious.
Better yet, find me an article where the reviewer is competent enough to conduct a controlled experiment, and then you can make conclusions based on observed results.

So why are you using craptastic SLI scaling at 1280 to claim we need faster cpu's, when in fact nobody needs SLI at 1280 to begin with?
I'm not, I was pointing out how ridiculous it was to use 1280 single-gpu results in an attempt to prove your point after hyping "resolutions and settings people actually use." Once again, I'm pointing out the fact there is virtually no difference in FPS, i.e. no GPU bottlenecking, across resolutions up to 1920x8AA with SLI. Is that 1280? No it's not. So once again, if 75FPS is low to you and those inevitable drops below refresh bother you, the only way to increase FPS would be to get a faster CPU as that is the bottleneck. Getting a third card in SLI would do nothing for you, in fact, it may again lower performance due to CPU overhead in a CPU bottlenecked scenario.
What's ridiculous is you using SLI scaling at 1280 as any kind of performance metric.

Also, stop trying to turn this into an ATI/NV flame war, I've already shown the same phenomenon is happening with CF/X2, if anything, it's even more obvious in cases where 4850CF and 4870CF show no differences in CPU bottlenecked situations and scaling up to 4 RV770 GPUs in X2 CF only yields more "free AA".
At what, 1280 rez? :laugh: Maybe you can show me how it also only yields "free AA" at 640x480?

Don't blame the cpu for idiotic applications of multi-gpu solutions.
Rofl, of course the CPU is to blame if it's limiting maximum frame rates to the point there is no distinction between single and multi-GPU or varying multi-GPU solutions. Is this a joke? Honestly you need to look at some GTX 280 SLI/TRI-SLI and 4870X2 reviews before commenting further.
Doesn't make multi-gpu at 1280 any less idiotic.

Put some glasses on and take a look at those graphs again at 1920x1200, see if multi-gpu doesn't increase fps.
As you so keenly pointed out.....multi-GPU actually has lower results than single GPU at 1280....there's no increasing past 94 or 75 for anything there. The only reason SLI distinguishes itself is because the other solutions' performance decreases as they hit GPU limitations. My point is that SLI never increases FPS relative to itself, it's 75FPS across the board regardless of resolution or AA setting. This is textbook CPU bottlenecking, I don't know how this is not blatantly obvious........
See my point above...

If I was wrong then you wouldn't see a benefit from SLI at high resolutions.
Which doesn't preclude the possibility you would see greater increases from a faster CPU in CPU bottlenecked situations. You've already linked to examples showing this to be true....remember how WiC can get 60FPS+ but clearly flatlines at 48FPS with a 2.93GHz CPU in THG's comparison? We know this to be true because other reviewers have used faster CPUs and shown scaling in CPU intensive games where GPU scaling with slow CPUs does nothing to increase FPS.
Your flatline cpu limitation theory is based on crap multi-gpu scaling at low rez, which nobody would actually use in real life.

You shouldn't tell them to get CF/SLI for WOW regardless of what cpu they're using.
Certainly not if they're using an Opteron, Pentium D or similarly slow CPU and expecting a miracle.
I said regardless, which includes the fastest possible cpu you can imagine.

Frame distribution can drop regardless of vsync, it can even be worse with vsync. But that doesn't mean slapping in a faster cpu will eliminate that drop.
Uh, no. If you have a 60FPS Average and Vsync enabled on a 60Hz monitor.....how would frame distribution drop below 60? Honestly stop arguing just for the sake of arguing, you're once again wrong.
Vsync only limits the max framerate to your monitor's refresh rate. If your video card can't keep up, then you'll drop to 30fps with vsync. You really have no idea what vsync is, do you?
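
The drop to 30fps described here is the classic double-buffered vsync quantization; a minimal Python sketch of the arithmetic, assuming a plain double-buffered swap chain (real drivers and swap chains have more going on):

import math

def vsynced_fps(render_time_ms, refresh_hz=60):
    # A frame that misses a refresh interval waits for the next one,
    # so the displayed rate snaps to refresh, refresh/2, refresh/3, ...
    interval_ms = 1000.0 / refresh_hz
    return refresh_hz / math.ceil(render_time_ms / interval_ms)

print(vsynced_fps(15))   # 60.0 -- a 15ms frame keeps 60fps
print(vsynced_fps(18))   # 30.0 -- an 18ms frame (a "55fps" card) drops to 30
print(vsynced_fps(35))   # 20.0 -- and so on down the ladder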

Rofl, useless trivia? Once again I'll point out this example since you obviously glazed over it the first time: COD4 Comparison

GTX 280 - Min: 32 Max: 211 Avg: 62.5
9800 GTX SLI - Min: 11 Max: 256 Avg: 62.8

Not only does the SLI solution have much lower lows, it also spends significant periods of time at much lower frame rates than the GTX 280. Is it just trivia that two solutions can take very different paths to the same average? A higher AVG would push distributions for both higher, reducing the frequency and duration spent below refresh.
All the more reason to avoid SLI if there's a single faster card available. What does that have to do with cpu bottlenecking?

You're looking at ink blobs and trying to convince everyone they show clear shapes and figures. "50-60FPS from 1680 to 1920" is meaningless trivia, and you've failed to consider that:
1. Multi-gpu solutions are inefficient by design
Actually they've shown near 100% scaling in titles and situations that aren't CPU bottlenecked. The newest ATI solutions can actually achieve better than 100% efficiency.
LOL. So if it shows 100% scaling then the cpu is not a bottleneck, is it?

2. A single gpu often matches or beats those numbers at low rez
Of course, due to CPU bottlenecking and additional multi-GPU overhead.
Which makes your low rez multi-gpu numbers irrelevant.

3. A single gpu often falls behind those numbers at high rez.
Wow, slower single GPUs fall behind as they become GPU bottlenecked. Who'd have thunk it. :roll:
Apparently not someone who thinks the cpu is a bottleneck for a modern gpu.

So, instead of suggesting we need faster single gpu's for high rez gaming, you're blaming the cpu for the limitations of current gpu's and multi-gpu inefficiencies.
No, I'm saying faster CPUs would shift the entire paradigm forward, particularly in cases where multi-GPU solutions are not showing FPS drops across resolutions and AA settings (i.e. CPU bottlenecked) or overall low/borderline unplayable FPS. This need for faster CPUs should be blatantly obvious considering we're now dealing with single GPU solutions 2-3x faster than G80 that can scale to 3-4 GPUs, yet still using the same once-fast 3GHz Core 2 processors for comparison. Honestly it makes no sense whatsoever.
A faster cpu wouldn't do squat when you're gpu limited and need to resort to multiple gpu's.

Basic arithmetic FTW: at 1680, a single gtx280 gets 37fps, two of them combined get 57. Did the performance increase or not? Or are you gonna suggest that what we really need are faster cpu's? :laugh:
Uh, SLI performance does not increase relative to itself, it decreases slightly from 57.7 to 53.7. You're comparing single-GPU to multi-GPU in a situation where the single GPU is already GPU limited, so of course adding a 2nd card would increase performance. :laugh: So again, if you think 57.7 to 53.7 FPS is low, adding more GPUs isn't going to help, only a faster CPU or a retooling of the engine is going to help as the results of 4870CF, 4870X2 and 4870X2 in CF confirm.
Who cares how SLI scales relative to itself? People play games at their monitor's native rez, and they either need SLI or they don't. Nobody drops down to 1280 after upgrading to SLI and says "gee, I don't see any difference, I guess I need a faster cpu..." :roll:
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
Better yet, find me an article where the reviewer is competent enough to conduct a controlled experiment, and then you can make conclusions based on observed results.
Like I said the first time, even if you think external factors skew results between the two platforms, that doesn't change the fact that, on the same platform, 4850CF and 4870CF show no differences at 3GHz but do show differences at 4GHz.

What's ridiculous is you using SLI scaling at 1280 as any kind of performance metric.
Of course 1280 is relevant when comparing scaling to higher resolutions....that's the whole point of using lower resolutions to test CPU bottlenecking and scaling. This methodology has not changed ever since games and hardware were benchmarked........you raise resolution/settings/AA to test the GPU and lower them to test the CPU.
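
A rough Python sketch of the methodology being described; the benchmark numbers below are hypothetical, not taken from any of the linked reviews:

def looks_cpu_limited(fps_by_resolution, tolerance=0.05):
    # If average fps barely changes as resolution (GPU load) goes up,
    # the GPU isn't the limiting factor -- something upstream is.
    values = list(fps_by_resolution.values())
    spread = (max(values) - min(values)) / max(values)
    return spread <= tolerance

flat_results = {"1280x1024": 76, "1680x1050": 75, "1920x1200": 75}
scaled_results = {"1280x1024": 94, "1680x1050": 71, "1920x1200": 52}

print(looks_cpu_limited(flat_results))    # True  -> flatlined, CPU/engine bound
print(looks_cpu_limited(scaled_results))  # False -> fps falls with GPU load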

At what, 1280 rez? :laugh: Maybe you can show me how it also only yields "free AA" at 640x480?
I already showed it at 1920 up to 8xAA and with a 2nd X2 at 2560 up to 8xAA. Your own AoC link to AT shows the exact same thing, but since it's obvious you're just going to argue for the sake of arguing even when you're clearly wrong, I wouldn't expect you to remember what you posted in any of your previous replies.

Doesn't make multi-gpu at 1280 any less idiotic.
Idiotic would be assuming there's no benefit of a faster CPU even at higher resolutions because a single-GPU outperforms multi-GPU at 1280 in a clearly CPU bottlenecked situation.....

See my point above...
Since you didn't understand it the first time.....multi-GPU actually has lower results than single GPU at 1280....there's no increasing past 94 or 75 for anything there. The only reason SLI distinguishes itself is because the other solutions' performance decreases as they hit GPU limitations. The point is that SLI never increases or decreases FPS relative to itself, it's 75FPS across the board regardless of resolution or AA setting. This is textbook CPU bottlenecking, I don't know how this is not blatantly obvious........

Your flatline cpu limitation theory is based on crap multi-gpu scaling at low rez, which nobody would actually use in real life.
Wrong, it exhibits itself with slower CPUs and/or lower resolutions without AA even up to 1920 with single GPU solutions as well. Considering you can't even see the correlation at 1920 with 8xAA compared to 1280 with no AA with multi-GPU, I haven't even bothered to delve into those numbers showing similar bottlenecking at 1680 and 1920 with single-GPU solutions.

I said regardless, which includes the fastest possible cpu you can imagine.
Tell that to the folks that aren't happy with how even a 4870X2 is running AA with AAA enabled. Or the folks who are wondering why they aren't seeing higher FPS in more modern titles like AoC, WIC, ME, Witcher, Assassin's Creed etc. etc......

Vsync only limits the max framerate to your monitor's refresh rate. If your video card can't keep up then you'll drop to 30fps with vsync. You really have no idea what vsync is, do you?
No, you said you would still see FPS drops if your Average was 60 with Vsync enabled on a 60Hz monitor. You are wrong, this is impossible by definition. If your average FPS is 60 with Vsync enabled on a 60Hz monitor you will not have any drops as FPS will absolutely be locked and a graph would show a straight line at 60FPS. LOL @ having no idea what vsync is......not to mention anyone using vsync should enable triple buffering to avoid large incremental frame drops.
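
As a companion to the earlier vsync sketch, a small Python illustration of why triple buffering avoids those large incremental drops, under the same simplified assumptions:

import math

def double_buffered_fps(render_time_ms, refresh_hz=60):
    # GPU stalls until the next refresh, so output snaps to refresh / n.
    interval_ms = 1000.0 / refresh_hz
    return refresh_hz / math.ceil(render_time_ms / interval_ms)

def triple_buffered_fps(render_time_ms, refresh_hz=60):
    # GPU keeps rendering into a spare buffer, so in this idealized model
    # output is only capped at the refresh rate itself.
    return min(refresh_hz, 1000.0 / render_time_ms)

print(double_buffered_fps(18))   # 30.0  -- the large incremental drop
print(triple_buffered_fps(18))   # ~55.6 -- stays near the card's real rate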

All the more reason to avoid SLI if there's a single faster card available. What does that have to do with cpu bottlenecking?
That you're arguing useless trivia because you're clearly wrong. There is meaning and answers in the details, and in this case, higher FPS above 60 or 80 or whatever subjective FPS limit you'll set next are important whether you think they're trivia or not. :)

LOL. So if it shows 100% scaling then the cpu is not a bottleneck, is it?
No, you said multi-GPU is inefficient by design when that is provably false. It's designed to be 100% efficient; external factors often prevent that from happening.

Which makes your low rez multi-gpu numbers irrelevant.
How so when it clearly shows CPU bottlenecking limiting maximum frame rates?

Apparently not someone who thinks the cpu is a bottleneck for a modern gpu.
It's obvious the CPU has become the bottleneck for modern GPUs, so much so that there are clear cases of CPU bottlenecking at historically GPU-bound resolutions and settings. When you don't have any FPS hit at 1920 with 8xAA compared to 1280 with no AA I think it's pretty obvious the CPU is the bottleneck. But not to everyone I suppose. :)

A faster cpu wouldn't do squat when you're gpu limited and need to resort to multiple gpu's.
A faster CPU clearly does improve performance in WiC and it provably does in other titles we've discussed already where using multiple GPUs only enables higher levels of AA. Simply put, how is it possible one review site is getting 60 FPS in WiC at the same settings when Tom's was capped at 48FPS across all settings and resolutions? L2 Cache? RAM amount? RAM type? Chipset? FSB speed? Driver revision? HDD Speed? LOL.

Who cares how SLI scales relative to itself? People play games at their monitor's native rez, and they either need SLI or they don't. Nobody drops down to 1280 after upgrading to SLI and says "gee, I don't see any difference, I guess I need a faster cpu..." :roll:
So again, if people shouldn't care how SLI scales and they're playing one of the intensive games we've discussed, WiC, Assassin's Creed, Witcher, Mass Effect, AoC, etc that hover around 60FPS and want to increase their overall FPS, would you tell them to spend 2, 3x as much money for TRI-SLI or CrossFireX with a pair of 4870X2 knowing they won't actually increase their FPS at all, just allow them to increase AA levels? Go back and look at the graphs, especially the ones at "resolutions and settings people actually play at". Think about this, then reply......its really not a hard concept to grasp.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Like I said the first time, even if you think external factors skew results between the two platforms, that doesn't change the fact that, on the same platform, 4850CF and 4870CF show no differences at 3GHz but do show differences at 4GHz.
Which again is irrelevant because your monitor can't physically display anything faster than 60fps.

Of course 1280 is relevant when comparing scaling to higher resolutions....that's the whole point of using lower resolutions to test CPU bottlenecking and scaling. This methodology has not changed ever since games and hardware were benchmarked........you raise resolution/settings/AA to test the GPU and lower them to test the CPU.
It isn't relevant because anyone getting 80fps at 1280 with a single gpu is not going to buy a second gpu for SLI.

I already showed it at 1920 up to 8xAA and with a 2nd X2 at 2560 up to 8xAA. Your own AoC link to AT shows the exact same thing, but since it's obvious you're just going to argue for the sake of arguing even when you're clearly wrong, I wouldn't expect you to remember what you posted in any of your previous replies.
So you're complaining that you don't get a performance hit from AA with a 4870x2? Or are you assuming that because a game doesn't scale linearly with more gpu's then it must be a cpu problem? Either way, your claims are ridiculous.

Doesn't make multi-gpu at 1280 any less idiotic.
Idiotic would be assuming there's no benefit of a faster CPU even at higher resolutions because a single-GPU outperforms multi-GPU at 1280 in a clearly CPU bottlenecked situation.....
No, idiotic is assuming there's a cpu limit in situations where multiple gpu's are required just to get playable framerates.

Since you didn't understand it the first time.....multi-GPU actually has lower results than single GPU at 1280....there's no increasing past 94 or 75 for anything there. The only reason SLI distinguishes itself is because the other solutions' performance decreases as they hit GPU limitations. The point is that SLI never increases or decreases FPS relative to itself, it's 75FPS across the board regardless of resolution or AA setting. This is textbook CPU bottlenecking, I don't know how this is not blatantly obvious........
What's obvious is you're suggesting a solution in search of a problem.

Your flatline cpu limitation theory is based on crap multi-gpu scaling at low rez, which nobody would actually use in real life.
Wrong, it exhibits itself with slower CPUs and/or lower resolutions without AA even up to 1920 with single GPU solutions as well. Considering you can't even see the correlation at 1920 with 8xAA compared to 1280 with no AA with multi-GPU, I haven't even bothered to delve into those numbers showing similar bottlenecking at 1680 and 1920 with single-GPU solutions.
The "corrolation" you're seeing is the result of not having a fast enough single gpu to run modern games at high rez and high framerates, not some magical cpu-limitation theory you invented.

I said regardless, which includes the fastest possible cpu you can imagine.
Tell that to the folks that aren't happy with how even a 4870X2 is running AA with AAA enabled. Or the folks who are wondering why they aren't seeing higher FPS in more modern titles like AoC, WIC, ME, Witcher, Assassin's Creed etc. etc......
Are they not happy because they used to get 115fps in WOW and now they only get 117? Or because AoC, WIC, ME, Witcher, Assassin's Creed etc. are placing too much load on a single gpu to run at playable framerates?

No, you said you would still see FPS drops if your Average was 60 with Vsync enabled on a 60Hz monitor. You are wrong, this is impossible by definition. If your average FPS is 60 with Vsync enabled on a 60Hz monitor you will not have any drops as FPS will absolutely be locked and a graph would show a straight line at 60FPS. LOL @ having no idea what vsync is......not to mention anyone using vsync should enable triple buffering to avoid large incremental frame drops.
No, I said you'd see fps drops below 60 when your video card can't keep up, regardless of vsync or not. If you have a straight line 60fps then you don't need a faster cpu or video card.

All the more reason to avoid SLI if there's a single faster card available. What does that have to do with cpu bottlenecking?
That you're arguing useless trivia because you're clearly wrong. There is meaning and answers in the details, and in this case, higher FPS above 60 or 80 or whatever subjective FPS limit you'll set next are important whether you think they're trivia or not. :)
Those details don't matter when the person playing the game won't notice a difference.

LOL. So if it shows 100% scaling then the cpu is not a bottleneck, is it?
No, you said multi-GPU is inefficient by design when that is provably false. It's designed to be 100% efficient; external factors often prevent that from happening.
External factors, like the way a game engine shares data between frames? Is that a problem of the game, or is it because a sufficiently fast single gpu doesn't exist to make those factors irrelevant?

Which makes your low rez multi-gpu numbers irrelevant.
How so when it clearly shows CPU bottlenecking limiting maximum frame rates?
No, it shows just as good or better fps with a single gpu, and nobody will use multi-gpu at 1280, hence they're irrelevant.

It's obvious the CPU has become the bottleneck for modern GPUs, so much so that there are clear cases of CPU bottlenecking at historically GPU-bound resolutions and settings. When you don't have any FPS hit at 1920 with 8xAA compared to 1280 with no AA I think it's pretty obvious the CPU is the bottleneck. But not to everyone I suppose. :)
Not having an fps hit is a bad thing? Clearly, you spend much less time playing games than debating pointless trivia on a forum.

A faster CPU clearly does improve performance in WiC and it provably does in other titles we've discussed already where using multiple GPUs only enables higher levels of AA. Simply put, how is it possible one review site is getting 60 FPS in WiC at the same settings when Tom's was capped at 48FPS across all settings and resolutions? L2 Cache? RAM amount? RAM type? Chipset? FSB speed? Driver revision? HDD Speed? LOL.
None of the AT benches I listed support your ridiculous theory. In all 1920x1200 benches there was an improvement going from 1 gpu to multiple ones, and you're whining about being cpu limited... :roll:

So again, if people shouldn't care how SLI scales and they're playing one of the intensive games we've discussed, WiC, Assassin's Creed, Witcher, Mass Effect, AoC, etc that hover around 60FPS and want to increase their overall FPS, would you tell them to spend 2, 3x as much money for TRI-SLI or CrossFireX with a pair of 4870X2 knowing they won't actually increase their FPS at all, just allow them to increase AA levels? Go back and look at the graphs, especially the ones at "resolutions and settings people actually play at". Think about this, then reply......its really not a hard concept to grasp.
The only point of increasing average fps over 60 is to raise the minimum fps, and the only way you will reliably accomplish that is by using a faster single gpu, not by multiple gpu's and sure as hell not by a faster cpu.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
Which again is irrelevant because your monitor can't physically display anything faster than 60fps.
Uh, no. Just because FPS are above refresh doesn't discount the fact there is a benefit from a faster CPU. What if you had a 120Hz CRT, still irrelevant? Your argument is ridiculous.

It isn't relevant because anyone getting 80fps at 1280 with a single gpu is not going to buy a second gpu for SLI.
Huh? Seeing no difference in FPS at 1280 compared to 1920 is a glaring red flag of a CPU bottleneck or engine bottleneck and is absolutely relevant for anyone trying to extract information from such a benchmark. No one's talking about multi-GPU at 1280.

So you're complaining that you don't get a performance hit from AA with a 4870x2? Or are you assuming that because a game doesn't scale linearly with more gpu's then it must be a cpu problem? Either way, your claims are ridiculous.
No, I'm complaining about modern GPUs and even multi-GPU not being able to average more than 51 FPS no matter how many GPUs you throw at it. And before you claim any more BS about "not being able to see the difference due to refresh rate", there are minimums in the 20s in the AoC FPS graphs so don't even bother. I guess my claim that GPU solutions 2-4x faster than previous solutions would require faster CPUs makes no sense whatsoever, right?

No, idiotic is assuming there's a cpu limit in situations where multiple gpu's are required just to get playable framerates.
Idiotic is ignoring the fact "playable framerates" don't change even in situations where multiple GPUs aren't required to reach said playable framerate (GTX 280 @ 1280 example, again). But you wouldn't know if a CPU has any benefit because you're claiming that once you're GPU limited there is no further benefit of a faster CPU. That claim is clearly flawed; as I've said and shown many times, a faster CPU can shift entire result sets forward even in GPU limited situations as they're not mutually exclusive.

  • AT proved this months ago with 8800 Ultra Tri-SLI:

    Crysis does actually benefit from faster CPUs at our 1920 x 1200 high quality settings. Surprisingly enough, there's even a difference between our 3.33GHz and 2.66GHz setups. We suspect that the difference would disappear at higher resolutions/quality settings, but the ability to maintain a smooth frame rate would also disappear. It looks like the hardware to run Crysis smoothly at all conditions has yet to be released.
This was probably the first documented proof that a 3GHz Core 2 was not enough to maximize performance from modern GPU solutions. Crysis is still the most GPU demanding title and now we have GPU solutions 2-4x faster than the Tri-SLI Ultra set-up used. Do you think the same 3.33GHz C2 processor is enough to fully extract that performance from newer solutions? Of course not, as our free AA/60FPS AVG tests show......

What's obvious is you're suggesting a solution in search of a problem.
Yes, I consider review sites using slow CPUs a problem when they clearly have access to faster hardware. This leads to various ignorant posters claiming there is no need for faster CPUs because they can get free AA at 50-60FPS in all of their new titles. Well worth it for 2-3x the price, don't you think?

The "corrolation" you're seeing is the result of not having a fast enough single gpu to run modern games at high rez and high framerates, not some magical cpu-limitation theory you invented.
LMAO. Really? I guess I can't just scale back my level of "Free AA" in Mass Effect at 1920 and get 90.1 FPS to get playable frame rates, which is still higher than the CPU bottlenecked, SLI overhead-lowered performance of GTX 280 SLI at 76.9FPS. Or I can't do the same in WiC at 1920 no AA and get 46.9 FPS with one GTX 280 compared to 45.6 with SLI? Or basically any other title that offers no higher FPS, only "free AA" in newer titles. Considering those are the highest possible FPS with a 2.93 GHz CPU and adding a 2nd card in SLI does nothing to increase FPS, what exactly would you recommend instead of "some magical cpu-limitation theory"? :laugh:

Are they not happy because they used to get 115fps in WOW and now they only get 117? Or because AoC, WIC, ME, Witcher, Assassin's Creed etc. are placing too much load on a single gpu to run at playable framerates?
They're not happy because they're paying 2-3x as much for higher frame rates but only getting free AA beyond a single GPU. And if those games are placing too much load on a single GTX 280 or 4870, what exactly are you running those games at? 640x480 on an EGA monitor?

No, I said you'd see fps drops below 60 when your video card can't keep up, regardless of vsync or not. If you have a straight line 60fps then you don't need a faster cpu or video card.
BS, we were discussing FPS averages of 60-80 which you said was plenty because you'd never see FPS above 60. I said that's clearly not true unless the game was Vsync'd or capped, as you'd undoubtedly see frame distributions below 60FPS with an AVERAGE and no vsync. To which you replied you could still see frame rate drops below 60 with Vsync enabled when averaging 60-80FPS, which is simply WRONG. Basically your assertion that frame rates above 60FPS are useless is incorrect unless you have Vsync enabled and you are averaging 60FPS, which means you have a straight line at 60FPS and cannot have any drops below 60FPS.

And you're not going to see a straight-line 60FPS average unless you have a very fast GPU and CPU solution, you're running less intensive settings and resolutions, or the game is very old. Until you reach that point, it's obvious you'll benefit from both a faster CPU and GPU, which clearly isn't the case if you're only AVERAGING 60-80FPS in a bench run without Vsync. So once again, your claim that frame rate averages above 60 or 80 or 100 or whatever subjective limit you set next are useless is clearly false.

Those details don't matter when the person playing the game won't notice a difference.
I can certainly distinguish FPS drops in the 20-30s, as can most gamers (and humans). Whether you can or not is irrelevant.

External factors, like the way a game engine shares data between frames? Is that a problem of the game, or is it because a sufficiently fast single gpu doesn't exist to make those factors irrelevant?
And? It's still external to multi-GPU, which you claimed is inefficient by design, which is still untrue.

No, it shows just as good or better fps with a single gpu, and nobody will use multi-gpu at 1280, hence they're irrelevant.
Rofl, if it wasn't CPU bottlenecked, the multi-GPU solution would distinguish itself beyond a single GPU, just as it does in higher resolutions/settings when the single GPU starts reaching GPU bottlenecks.

I guess a simpler way to look at it is, do you think WiC FPS is maxed out at 48FPS for all eternity, since that's the maximum it's showing at 1280, even with 1, 2, 3, 4 of the fastest GPUs available today? If you wanted to raise that 48FPS number, what would you change?

Not having an fps hit is a bad thing? Clearly, you spend much less time playing games than debating pointless trivia on a forum.
No, it's not a bad thing. Is it worth 2-3x as much in price to get more AA when all you want are higher FPS? Is it a replacement for higher FPS in games that still dip below refresh? I wouldn't need to spend much time playing games or debating trivia on forums to understand this, these metrics have not changed for nearly a decade with PC games and hardware.

None of the AT benches I listed support your ridiculous theory. In all 1920x1200 benches there was in improvement going from 1 gpu to multiple ones, and you're whining about being cpu limited... :roll:
Sure they do, they improve by 3-4FPS and 2-4xAA, right? When a single GTX 280 is scoring 55-60 between 1680 and 1920 and the SLI performance is 60-62.....ya, great improvement there.

The only point of increasing average fps over 60 is to raise the minimum fps, and the only way you will reliably accomplish that is by using a faster single gpu, not by multiple gpu's and sure as hell not by a faster cpu.
Wrong.....Again. Also, increasing the average isn't only about raising minimums; a higher average shifts the entire distribution between the minimums and the refresh rate, meaning higher lows across the board.

 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Golgatha
Yea, 4 pages of benchmarks showing GPU limited scenarios under settings someone would actually use to game.

http://www.firingsquad.com/har...e8600_review/page5.asp
That review was almost useful up until the point you realize they don't list what GPU is being used..... Think about it, if they used an 8800GTX, what would that tell you that you didn't know 2 years ago? Would that be relevant when compared to modern solutions that are 1.5-2x faster as single GPUs and can scale to 3-4 GPUs?