Originally posted by: munky
Which again is irrelevant because your monitor can't physically display anything faster than 60fps.
Uh, no. Just because FPS is above refresh doesn't change the fact that there's still a benefit from a faster CPU. What if you had a 120Hz CRT, would it still be irrelevant? Your argument is ridiculous.
It isn't relevant because anyone getting 80fps at 1280 with a single gpu is not going to buy a second gpu for SLI.
Huh? Seeing no difference in FPS at 1280 compared to 1920 is a glaring red flag for a CPU or engine bottleneck, and that's absolutely relevant for anyone trying to extract information from such a benchmark. No one's talking about running multi-GPU at 1280.
So you're complaining that you don't get a performance hit from AA with a 4870x2? Or are you assuming that because a game doesn't scale linearly with more gpu's then it must be a cpu problem? Either way, your claims are ridiculous.
No, I'm complaining about modern GPUs, and even multi-GPU setups, not being able to average more than 51 FPS no matter how many GPUs you throw at the game. And before you spout any more BS about "not being able to see the difference due to refresh rate," there are minimums in the 20s in the AoC FPS graphs, so don't even bother. I guess my claim that GPU solutions 2-4x faster than previous ones would require faster CPUs makes no sense whatsoever, right?
No, idiotic is assuming there's a cpu limit in situations where multiple gpu's are required just to get playable framerates.
Idiotic is ignoring the fact that "playable framerates" don't change even in situations where multiple GPUs aren't required to reach them (the GTX 280 @ 1280 example, again). But you wouldn't know whether a CPU has any benefit, because you're claiming that once you're GPU limited there is no further benefit from a faster CPU. That claim is clearly flawed; as I've said and shown many times, a faster CPU can shift the entire result set forward even in GPU-limited situations, since the two limits aren't mutually exclusive (see the quick sketch after the AT quote below).
- AT proved this months ago with 8800 Ultra Tri-SLI:
Crysis does actually benefit from faster CPUs at our 1920 x 1200 high quality settings. Surprisingly enough, there's even a difference between our 3.33GHz and 2.66GHz setups. We suspect that the difference would disappear at higher resolutions/quality settings, but the ability to maintain a smooth frame rate would also disappear. It looks like the hardware to run Crysis smoothly at all conditions has yet to be released.
This was probably the first documented proof that a 3GHz Core 2 was not enough to maximize performance from modern GPU solutions. Crysis is still the most GPU-demanding title, and now we have GPU solutions 2-4x faster than the Tri-SLI Ultra setup used there. Do you think the same 3.33GHz C2 processor is enough to fully extract the performance of the newer solutions? Of course not, as our free-AA/60FPS-average tests show...
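And here's the quick sketch I mentioned, a toy model with made-up numbers (not AT's data or methodology): with CPU and GPU work pipelined, each frame takes roughly as long as the slower of the two, and CPU cost spikes from frame to frame, so even a run that's "GPU limited" on average has frames where the CPU is the long pole, and a faster CPU still lifts the average.

# Toy model, illustrative numbers only: per-frame time ~= max(cpu_ms, gpu_ms).
# CPU cost is spiky (AI, physics, draw calls), so speeding up the CPU still
# raises the average of a nominally "GPU limited" run.
import random

random.seed(0)
gpu_ms = 14.0                                               # ~71 FPS worth of GPU work per frame
cpu_ms = [random.uniform(8.0, 22.0) for _ in range(1000)]   # per-frame CPU cost, spiky

def avg_fps(cpu_speedup):
    frames = [max(c / cpu_speedup, gpu_ms) for c in cpu_ms]
    return 1000.0 / (sum(frames) / len(frames))

print("baseline CPU:   ", round(avg_fps(1.0), 1), "FPS")
print("25% faster CPU: ", round(avg_fps(1.25), 1), "FPS")   # average still goes up

Swap in whatever numbers you like; the point is only that "GPU limited" describes an average, not a per-frame guarantee.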
What's obvious is you're suggesting a solution in search of a problem.
Yes, I consider review sites using slow CPUs a problem when they clearly have access to faster hardware. It leads to various ignorant posters claiming there's no need for faster CPUs because they can get free AA at 50-60 FPS in all of their new titles. Well worth 2-3x the price, don't you think?
The "corrolation" you're seeing is the result of not having a fast enough single gpu to run modern games at high rez and high framerates, not some magical cpu-limitation theory you invented.
LMAO, really? I guess I can't just scale back my level of "free AA" in Mass Effect at 1920 and get 90.1 FPS for playable frame rates, which is still higher than the CPU-bottlenecked, SLI-overhead-lowered 76.9 FPS of GTX 280 SLI. Or I can't do the same in WiC at 1920 with no AA and get 46.9 FPS with one GTX 280 compared to 45.6 with SLI? Or in basically any other newer title that offers no higher FPS, only "free AA". Considering those are the highest possible FPS with a 2.93 GHz CPU, and adding a 2nd card in SLI does nothing to increase FPS, what exactly would you recommend instead of "some magical cpu-limitation theory"? :laugh:
Are they not happy because they used to get 115fps in WOW and now they only get 117? Or because AoC, WIC, ME, Witcher, Assassin's Creed etc. are placing too much load on a single gpu to run at playable framerates?
They're not happy because they're paying 2-3x as much expecting higher frame rates but only getting free AA beyond a single GPU. And if those games are placing too much load on a single GTX 280 or 4870, what exactly are you running those games at? 640x480 on an EGA monitor?
No, I said you'd see fps drops below 60 when your video card can't keep up, regardless of vsync or not. If you have a straight line 60fps then you don't need a faster cpu or video card.
BS, we were discussing FPS averages of 60-80, which you said was plenty because you'd never see FPS above 60. I said that's clearly not true unless the game was vsync'd or capped, because with an average of 60-80 and no vsync you'd undoubtedly see part of the frame distribution below 60 FPS. To which you replied that you could still see frame rate drops below 60 with vsync enabled while averaging 60-80 FPS, which is simply WRONG. Basically, your assertion that frame rates above 60 FPS are useless is only correct if you have vsync enabled and are averaging exactly 60 FPS, which means you have a straight line at 60 FPS and cannot have any drops below it.
And you're not going to see a straight-line 60 FPS average unless you have a very fast GPU and CPU, run less intensive settings and resolutions, or the game is very old. Until you reach that point, it's obvious you'll benefit from both a faster CPU and a faster GPU, and you clearly haven't reached it if you're only AVERAGING 60-80 FPS in a bench run without vsync. So once again, your claim that frame rate averages above 60 or 80 or 100 or whatever subjective cutoff you pick next are useless is clearly false.
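If you want to see why an average of 60-80 without vsync guarantees time spent below 60, here's a quick sketch with made-up frame times (and vsync simplified to a plain 60 FPS cap, not true refresh quantization):

# Illustration only, not real bench data: a run that AVERAGES ~70 FPS still
# spends a chunk of its frames over 16.7 ms, because the easy scenes pull the
# average up. A flat 60 only happens when every frame is already that fast.
import random

random.seed(1)
frame_ms = [max(random.gauss(14.3, 4.0), 4.0) for _ in range(2000)]  # ~70 FPS average, with variance

avg_fps = 1000.0 / (sum(frame_ms) / len(frame_ms))
below_60 = sum(1 for t in frame_ms if t > 1000.0 / 60.0)             # frames slower than 16.7 ms
print(f"average: {avg_fps:.1f} FPS, frames below 60 FPS: {100.0 * below_60 / len(frame_ms):.0f}%")

# The simplified vsync cap only slows the fast frames down; the slow frames
# stay slow, so the dips below 60 don't go anywhere.
vsynced = [max(t, 1000.0 / 60.0) for t in frame_ms]
print(f"vsync'd average: {1000.0 / (sum(vsynced) / len(vsynced)):.1f} FPS")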
Those details don't matter when the person playing the game won't notice a difference.
I can certainly distinguish FPS drops in the 20-30s, as can most gamers (and humans). Whether you can or not is irrelevant.
External factors, like the way a game engine shares data between frames? Is that a problem of the game, or is it because a sufficiently fast single gpu doesn't exist to make those factors irrelevant?
And? It's still external to multi-GPU. Your claim that multi-GPU is inefficient by design is still untrue.
No, it shows just as good or better fps with a single gpu, and nobody will use multi-gpu at 1280, hence they're irrelevant.
Rofl, if it weren't CPU bottlenecked, the multi-GPU solution would distinguish itself from a single GPU, just as it does at higher resolutions/settings once the single GPU starts hitting GPU bottlenecks.
A simpler way to look at it: do you think WiC is maxed out at 48 FPS for all eternity, since that's the maximum it shows at 1280 even with 1, 2, 3, or 4 of the fastest GPUs available today? If you wanted to raise that 48 FPS number, what would you change?
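To make the point concrete, here's a back-of-the-envelope model with illustrative numbers only (a 48 FPS CPU-side cap, ~60 FPS per GPU, imperfect SLI/CF scaling), not measured data:

# If the CPU can only prepare ~48 frames/sec of simulation + draw calls at
# these settings, delivered FPS is roughly min(cpu_limit, gpu_limit). Piling
# on GPUs only moves the second term, never the first.
def delivered_fps(cpu_limit_fps, single_gpu_fps, num_gpus, scaling=0.85):
    gpu_limit = single_gpu_fps * (1 + (num_gpus - 1) * scaling)  # imperfect multi-GPU scaling
    return min(cpu_limit_fps, gpu_limit)

for n in (1, 2, 3, 4):
    print(n, "GPU(s):", round(delivered_fps(48, 60, n), 1), "FPS")   # stuck at 48
print("faster CPU:", round(delivered_fps(60, 60, 2), 1), "FPS")      # the cap finally moves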
Not having an fps hit is a bad thing? Clearly, you spend much less time playing games than debating pointless trivia on a forum.
No, it's not a bad thing, but is it worth 2-3x the price to get more AA when all you want is higher FPS? Is it a replacement for higher FPS in games that still dip below refresh? I don't need to spend much time playing games or debating trivia on forums to understand this; these metrics haven't changed in nearly a decade of PC games and hardware.
None of the AT benches I listed support your ridiculous theory. In all 1920x1200 benches there was an improvement going from 1 gpu to multiple ones, and you're whining about being cpu limited... :roll:
Sure they do; they improve by 3-4 FPS and 2-4x AA, right? When a single GTX 280 scores 55-60 between 1680 and 1920 and SLI scores 60-62... yeah, great improvement there.
The only point of increasing average fps over 60 is to raise the minimum fps, and the only way you will reliably accomplish that is by using a faster single gpu, not by multiple gpu's and sure as hell not by a faster cpu.
Wrong.....
Again. Also, increasing the average isn't only about raising the minimums; raising the average shifts the entire distribution between the minimums and the refresh rate, which means higher lows across the board.