Wow! Didn't know CoH was multi-CPU optimized so well?!


Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Uh, no. Just because FPS are above refresh doesn't discount the fact there is benefit from faster CPU. What if you had a 120Hz CRT, still irrelevant? Your argument is ridiculous.
Not when it comes to increasing framerates.

Huh? Seeing no difference in FPS at 1280 compared to 1920 is a glaring red flag of a CPU bottleneck or engine bottleneck and is absolutely relevant for anyone trying to extract information from such a benchmark. No one's talking about multi-GPU at 1280.
You're comparing multi-gpu at 1280 to 1920. I can't stress enough how ridiculous that is.

No, I'm complaining about modern GPUs and even multi-GPU not being able to average more than 51 FPS no matter how many GPUs you throw at it. And before you claim any more BS about "not being able to see the difference due to refresh rate," there are minimums in the 20s in the AoC FPS graphs, so don't even bother. I guess my claim that GPU solutions 2-4x faster than previous solutions would require faster CPUs makes no sense whatsoever, right?
You may as well throw 10 gpu's at a game with the fastest imaginable cpu and not necessarily get any higher fps. What's your point?

Idiotic is ignoring the fact that "playable framerates" don't change even in situations where multiple GPUs aren't required to reach said playable framerate (GTX 280 @ 1280 example, again). But you wouldn't know if a CPU has any benefit because you're claiming that once you're GPU limited there is no further benefit from a faster CPU. That claim is clearly flawed; as I've said and shown many times, a faster CPU can shift entire result sets forward even in GPU limited situations, as the two aren't mutually exclusive.


  • AT Proved this months ago with 8800 Ultra Tri-SLI:

    Crysis does actually benefit from faster CPUs at our 1920 x 1200 high quality settings. Surprisingly enough, there's even a difference between our 3.33GHz and 2.66GHz setups. We suspect that the difference would disappear at higher resolutions/quality settings, but the ability to maintain a smooth frame rate would also disappear. It looks like the hardware to run Crysis smoothly at all conditions has yet to be released.
This was probably the first documented proof that a 3GHz Core 2 was not enough to maximize performance from modern GPU solutions. Crysis is still the most GPU demanding title and now we have GPU solutions 2-4x faster than the Tri-SLI Ultra set-up used. Do you think the same 3.33GHz C2 processor is enough to fully extract that performance from newer solutions? Of course not, as our free AA/60FPS AVG tests show......
Again, your whole premise is based on the ridiculous idea that if you throw more gpu's at a game and don't get an improvement then it must be cpu limited.

Yes I consider review sites using slow CPUs a problem when they clearly have access to faster hardware. This leads to various ignorant posters claiming there is no need for faster CPUs because they can get free AA at 50-60FPS in all of their new titles. Well worth it for 2-3x the price don't you think?
Until a next gen high end gpu arrives, that may be the only way of reaching 50-60 fps at high rez, and you're claiming that what we really need are faster cpu's... :roll:

LMAO. Really? I guess I can't just scale back my level of "Free AA" in Mass Effect at 1920 and get 90.1 FPS to get playable frame rates, which is still higher than the CPU bottlenecked, SLI overhead-lowered performance of GTX 280 SLI at 76.9 FPS. Or I can't do the same in WiC at 1920 no AA and get 46.9 FPS with one GTX 280 compared to 45.6 with SLI? Or basically any other title that offers no higher FPS, only "free AA" in newer titles. Considering those are the highest possible FPS with a 2.93 GHz CPU and adding a 2nd card in SLI does nothing to increase FPS, what exactly would you recommend instead of "some magical cpu-limitation theory"? :laugh:
I got news for you: nobody playing ME at 76fps is gonna whine about being bottlenecked by anything.

They're not happy because they're paying 2-3x as much for higher frame rates but only getting free AA beyond a single GPU. And if those games are placing too much load on a single GTX 280 or 4870, what exactly are you running those games at? 640x480 on an EGA monitor?
Another news flash: Nobody with half a brain is paying 2-3x as much for SLI/CF when they're getting 60+ fps with a single gpu.

No, I said you'd see fps drops below 60 when your video card can't keep up, regardless of vsync or not. If you have a straight line 60fps then you don't need a faster cpu or video card.
BS, we were discussing FPS averages of 60-80, which you said was plenty because you'd never see FPS above 60. I said that's clearly not true unless the game was Vsync'd or capped, as you'd undoubtedly see frame distributions below 60FPS with an average in that range and no vsync. To which you replied you could still see frame rate drops below 60 with Vsync enabled when averaging 60-80FPS, which is simply WRONG. Basically your assertion that frame rates above 60FPS are useless is incorrect unless you have Vsync enabled and you are averaging 60FPS, which means you have a straight line at 60FPS and cannot have any drops below 60FPS.

And you're not going to see a straight-line 60FPS average unless you have a very fast GPU and CPU solution, you're running less intensive settings and resolutions, or the game is very old. Until you reach that point, it's obvious you'll benefit from both a faster CPU and GPU, and you clearly haven't reached it if you're only AVERAGING 60-80FPS in a bench run without Vsync. So once again, your claim that frame averages above 60 or 80 or 100 or whatever subjective threshold you pick next are useless is clearly false.
If you already had drops below 60 and you enabled vsync, would you still get those drops or not? Vsync has no relevance to the topic, and you had no reason to mention it other than to debate more pointless trivia.

I can certainly distinguish FPS drops in the 20-30s, as can most gamers (and humans). Whether you can or not is irrelevant.
Those drops are not caused by cpu bottlenecking, so what's your point?

External factors, like the way a game engine shares data between frames? Is that a problem of the game, or is it because a sufficiently fast single gpu doesn't exist to make those factors irrelevant?
And? It's still external to multi-GPU; you claimed multi-GPU is inefficient by design, which is still untrue.
Are you the guy who claimed SW has nothing to do with HW? Because what you just said is no less ridiculous.

No, it shows just as good or better fps with a single gpu, and nobody will use multi-gpu at 1280, hence they're irrelevant.
Rofl, if it wasn't CPU bottlenecked, the multi-GPU solution would distinguish itself beyond a single GPU, just as it does in higher resolutions/settings when the single GPU starts reaching GPU bottlenecks.
Which again is irrelevant for the reasons I already mentioned.

I guess a simpler way to look at it is: do you think WiC FPS is maxed out at 48FPS for all eternity, since that's the maximum it's showing at 1280, even with 1, 2, 3, or 4 of the fastest GPUs available today? If you wanted to raise that 48FPS number, what would you change?
Your 48fps theory was already proven wrong, which makes the argument moot.

No, it's not a bad thing, but is it worth 2-3x the price to get more AA when all you want are higher FPS? Is it a replacement for higher FPS in games that still dip below refresh? I wouldn't need to spend much time playing games or debating trivia on forums to understand this; these metrics have not changed for nearly a decade with PC games and hardware.
Nobody with half a brain spends 2-3x as much on multi-gpu unless they need it at high rez because it's the only way of getting playable framerates at the moment.


None of the AT benches I listed support your ridiculous theory. In all 1920x1200 benches there was an improvement going from 1 gpu to multiple ones, and you're whining about being cpu limited... :roll:
Sure they do, they improve by 3-4FPS and 2-4xAA right? When a single GTX 280 is scoring 55-60 between 1680 and 1920 and the SLI performance is 60-62.....ya great improvement there.
Exactly which game improved by 3-4 fps at 1920?

Wrong.....Again. Also, the point of increasing the average is not only to increase minimums; increasing the average shifts the entire distribution between the minimums and the refresh rate, meaning you'll have higher lows across the board.

What happened to shifting the whole paradigm forward? Crysis is gpu-bound to 17fps, and you're rejoicing because your minimum increased from 5 to 12? How about showing something at settings people would actually use?
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: munky
Not when it comes to increasing framerates.
Rofl what? So if framerates are above refresh, a faster CPU can't increase frame rates? LMAO.

You're comparing multi-gpu at 1280 to 1920. I can't stress enough how ridiculous that is.
Yep, it's necessary to find the "inflection point" to clearly demonstrate CPU bottlenecking, which I've done time and time again. What's ridiculous is the fact you still don't understand this simple metric even after nearly a decade of software/hardware reviews which have used CPU/GPU scaling as the standard for performance benchmarking.
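To make the "inflection point" idea concrete, here's a rough sketch of how you'd read a scaling table. The figures are loosely the WiC numbers tossed around in this thread; the helper function and its tolerance are hypothetical, just to illustrate the logic: if FPS barely moves when you drop the resolution, and adding a second GPU doesn't lift it either, the ceiling is being set upstream of the GPU.

    # Rough illustration of reading CPU/GPU scaling results.
    # Figures are roughly the WiC numbers quoted in this thread; the rest is hypothetical.
    results = {
        # (resolution, gpu_count): average FPS
        ("1280", 1): 48.0,
        ("1280", 2): 48.0,
        ("1920", 1): 46.9,
        ("1920", 2): 45.6,
    }

    def looks_cpu_limited(fps_low_res, fps_high_res, fps_multi_gpu, tolerance=0.05):
        # Flat FPS across resolutions AND no gain from a second GPU means the
        # frame rate is being set by something other than the GPU (CPU, engine cap, etc.).
        flat_across_res = abs(fps_low_res - fps_high_res) / fps_high_res < tolerance
        no_multi_gpu_gain = fps_multi_gpu <= fps_low_res * (1 + tolerance)
        return flat_across_res and no_multi_gpu_gain

    print(looks_cpu_limited(results[("1280", 1)],
                            results[("1920", 1)],
                            results[("1280", 2)]))  # True -> the limit isn't the GPU at these settings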

You may as well throw 10 gpu's at a game with the fastest imaginable cpu and not necessarily get any higher fps. What's your point?
Except that's provably not the case, as reviews that do use faster CPUs do show an improvement in FPS. You just choose to ignore the results and want to blame the amount of RAM, the chipset, and basically anything else.

Single card and dual-card are done on a 2.93GHz system. Tri-SLI is done on a 4.2GHz system:
  • Guru3D 280 GTX in SLI and Tri-SLI
    Quake Wars again has the same IQ settings, yet is a little difficult to explain. With the baseline test system I mentioned that the performance of 2-way SLI was slower than a single card due to the game being CPU bound. Two cards need more CPU cycles than one.

    With the overclocked system at 4200 MHz, we rule out that fact and give it much more CPU cycles. It shows clearly with 3way SLI, though Quake Wars still does not scale extremely well. Then again at nearly 90 FPS at 2560x1600 .. you can't really complain either ;)
  • GRAW2

    Again, same image quality settings are applied, everything is set to high. Now this is ridiculous to see, yet in the previous SLI results I already told you that we hit a CPU bottleneck with the 2.9 GHz Core 2 Duo Extreme processor.

    This is the result of the massive overclocked CPU in combo with the power of 3 GPUs, it's just ridiculous how fast that really is ... but it can get even sicker though, let's move towards the game FEAR.
  • UT3
    We again see with all these cards in play that we have a CPU limitation which stops us from moving past that 140ish FPS mark.
Some of the most interesting results show that even with a single GPU, a faster CPU may yield better, or the same results as multi-GPU due to CPU bottlenecking. Feel free to look over the 4870X2 results, you'll see similar results, although most of them are using slow 3GHz CPUs and therefore have to use GPU limited settings like "free AA" in order to show any separation between parts.

Again, your whole premise is based on the ridiculous idea that if you throw more gpu's at a game and don't get an improvement then it must be cpu limited.
Except I've reinforced that premise with examples that show using a faster CPU does yield improvement in previously CPU bottlenecked situations. There can certainly be other limitations, like driver or game engine frame caps in some titles, but it's quite obvious faster CPUs are needed for the current GPU solutions. Either that, or devs are going to have to code better for multi-CPU to take advantage of current CPUs, or make more GPU intensive games.

Until a next gen high end gpu arrives, that may be the only way of reaching 50-60 fps at high rez, an you're claiming that what we really need are faster cpu's... :roll:
Why would we need a faster next gen high end GPU if we don't get faster CPUs and it'll still be bottlenecked anyways? You've already argued how adding a 2nd GPU "always adds more FPS", except when it doesn't (at 1280 or when CPU overhead drops it lower than single), so why would you dismiss multi-GPU or ignore the fact multi-GPU requires more CPU cycles than single in order to scale? I've already tried explaining this to you, if a CPU can produce a certain number of frames per second and modern GPUs are already reaching that limit, how would adding a faster GPU help?
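To put rough numbers on that point, here's a back-of-the-envelope model of my own (all figures hypothetical, not from any review): treat the CPU as able to prepare only so many frames per second and each GPU as able to render only so many, with each extra GPU eating some of the CPU's budget for driver/AFR work. Delivered FPS is capped by whichever side runs out first.

    # Toy model of a frame-rate ceiling. Function name, overhead and scaling
    # factors, and all numbers are hypothetical illustrations.
    def effective_fps(cpu_fps, gpu_fps, num_gpus=1, cpu_overhead=0.10, gpu_scaling=0.85):
        # Each extra GPU needs more CPU cycles to feed it, so the CPU-side
        # ceiling drops as GPUs are added, while the GPU-side ceiling rises.
        cpu_ceiling = cpu_fps * (1 - cpu_overhead * (num_gpus - 1))
        gpu_ceiling = gpu_fps * (1 + gpu_scaling * (num_gpus - 1))
        return min(cpu_ceiling, gpu_ceiling)

    print(effective_fps(cpu_fps=50, gpu_fps=45, num_gpus=1))  # 45.0 -> GPU-bound with one card
    print(effective_fps(cpu_fps=50, gpu_fps=45, num_gpus=2))  # 45.0 -> SLI adds nothing; now CPU-bound
    print(effective_fps(cpu_fps=80, gpu_fps=45, num_gpus=2))  # 72.0 -> a faster CPU lets the 2nd GPU show up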

I got news for you: nobody playing ME at 76fps is gonna whine about being bottlenecked by anything.
So is 76FPS fast enough or not? Once again you contradict yourself in an attempt to try and prove a point without even remembering what you posted in the previous reply. What happened to:

The "corrolation" you're seeing is the result of not having a fast enough single gpu to run modern games at high rez and high framerates, not some magical cpu-limitation theory you invented.

Despite satisfying your subjective 76FPS requirements, I'm quite sure anyone interested in SLI would like to know multi-GPU will only yield "Free AA" beyond a single card due to CPU bottlenecks.
Another news flash: Nobody with half a brain is paying 2-3x as much for SLI/CF when they're getting 60+ fps with a single gpu.
Of course they're not when they're only getting "Free AA" and the same low frame rates.

If you already had drops below 60 and you enabled vsync, would you still get those drops or not? Vsync has no relevance to the topic, and you had no reason to mention it other than to debate more pointless trivia.
Vsync absolutely has relevance. You were claiming 60-80FPS averages "were enough" because of 60Hz refresh, at which point I said that was false, as you would still see frame drops below refresh UNLESS you had Vsync enabled. So again, how would you have frame drops below refresh with Vsync enabled and an average FPS of 60-80? You can't. So once again, unless Vsync is enabled and you are averaging 60FPS, your subjective "60FPS at 60Hz is enough" argument is ridiculously flawed. I've already shown clearly with the COD4 FPS vs. Time graphs that a 62FPS average doesn't mean crap, much less that it's "enough".
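The arithmetic behind that is easy to sanity-check. This is a trivial sketch with made-up samples, not tied to any particular benchmark: with vsync on a 60Hz display no recorded sample can exceed 60, so averaging 60 forces every sample to be exactly 60, and any dip below 60 necessarily drags the average under 60.

    # Sanity check of the averaging argument. Sample values are made up.
    raw_samples = [72, 68, 60, 55, 80, 64]               # what the GPU could have delivered
    vsynced = [min(fps, 60) for fps in raw_samples]      # simplified: vsync never displays more than 60

    print(vsynced)                          # [60, 60, 60, 55, 60, 60]
    print(sum(vsynced) / len(vsynced))      # ~59.2: one dip below 60 already pulls the average under 60
    print(sum([60] * 6) / 6)                # 60.0: the only way to average 60 under a 60 cap is a flat line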

Those drops are not caused by cpu bottlenecking, so what's your point?
My point is that you don't know that with certainty, as the FPS vs. Time graphs showed there is meaning in the detail, to which you replied "Those details don't matter when the person playing the game won't notice a difference." I can certainly see the difference between 20-30FPS and 60FPS. If adding a faster GPU or more GPUs isn't increasing frame rates but adding a faster CPU does, I think it's pretty safe to say the CPU is indeed the bottleneck.

Are you the guy who claimed SW has nothing to do with HW? Because what you just said is no less ridiculous.
Nope, but how does your example have anything to do with what you claimed, namely that multi-GPU is inefficient by design? How does sharing data between frames make single-GPU more efficient than multi-GPU when multi-GPU simply makes a copy for both GPUs in discrete frame buffers? Once again, multi-GPU isn't inefficient by design; it's even been shown to scale beyond 100%. That alone proves your claim is impossibly false.

Which again is irrelevant for the reasons I already mentioned.
And your subjective reasons have no bearing on the objective evidence showing clear CPU bottlenecking. That's what it really boils down to: I show clear evidence of CPU bottlenecking, you claim it doesn't matter because it's above whatever subjective threshold you set for that particular example.

Your 48fps theory was already proven wrong, which makes the argument moot.
It actually furthers my point, just as your last example did (by showing higher FPS with a faster CPU), but thanks. :)

Nobody with half a brain spends 2-3x as much on multi-gpu unless they need it at high rez because it's the only way of getting playable framerates at the moment.
And nobody with half a brain would try to dismiss such clear evidence of CPU bottlenecking whether they think a single GPU is capable of producing playable framerates or not.

Exactly which game improved by 3-4 fps at 1920?
The same titles I mentioned earlier....AC, Witcher, Oblivion....

What happened to shifting the whole paradigm forward? Crysis is gpu-bound to 17fps, and you're rejoicing because your minimum increased from 5 to 12? How about showing something at settings people would actually use?
I said it can shift the entire result set, as it clearly did for WiC. It's even more obvious in cases where a single GPU with a faster CPU is outperforming multi-GPU with a slow CPU. Without seeing the duration of the test and an FPS vs. Time graph it's impossible to say for sure, but one obvious possibility is that Crysis is more GPU intensive than CPU intensive at that setting of Very High, which shouldn't be a real surprise....

And yes, if you can increase your minimum frame rates by 75% that would be reason to be excited, especially if minimums were single digits.....

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
Rofl what? So if framerates are above refresh, a faster CPU can't increase frame rates? LMAO.
Your monitor can't physically display anything faster than the refresh. Think about this before posting more nonsense.

Yep, it's necessary to find the "inflection point" to clearly demonstrate CPU bottlenecking, which I've done time and time again. What's ridiculous is the fact you still don't understand this simple metric even after nearly a decade of software/hardware reviews which have used CPU/GPU scaling as the standard for performance benchmarking.
So what? Reviews have used 800x600 to review cpu performance for a decade; that doesn't make it any more relevant or useful to the person actually using the HW for gaming.

Except that's provably not the case, as reviews that do use faster CPUs do show an improvement in FPS. You just choose to ignore the results and want to blame the amount of RAM, the chipset, and basically anything else.
See here how much difference the L2 cache makes in your favorite game. There's a 16% difference between a Q9300 and a Q9450, and your simpleton theory ignores these factors completely.

Single card and dual-card are done on a 2.93GHz system. Tri-SLI is done on a 4.2GHz system:
  • Guru3D 280 GTX in SLI and Tri-SLI
    Quake Wars again has the same IQ settings, yet is a little difficult to explain. With the baseline test system I mentioned that the performance of 2-way SLI was slower than a single card due to the game being CPU bound. Two cards need more CPU cycles than one.

    With the overclocked system at 4200 MHz, we rule out that fact and give it much more CPU cycles. It shows clearly with 3way SLI, though Quake Wars still does not scale extremely well. Then again at nearly 90 FPS at 2560x1600 .. you can't really complain either ;)
  • GRAW2

    Again, same image quality settings are applied, everything is set to high. Now this is ridiculous to see, yet in the previous SLI results I already told you that we hit a CPU bottleneck with the 2.9 GHz Core 2 Duo Extreme processor.

    This is the result of the massive overclocked CPU in combo with the power of 3 GPUs, it's just ridiculous how fast that really is ... but it can get even sicker though, let's move towards the game FEAR.
  • UT3
    We again see with all these cards in play that we have a CPU limitation which stops us from moving past that 140ish FPS mark.
Some of the most interesting results show that even with a single GPU, a faster CPU may yield better, or the same results as multi-GPU due to CPU bottlenecking. Feel free to look over the 4870X2 results, you'll see similar results, although most of them are using slow 3GHz CPUs and therefore have to use GPU limited settings like "free AA" in order to show any separation between parts.

How does your ridiculous theory explain this scaling? You would assume a 9500gt is cpu-limited in AOC because it performs basically the same between 1024 and 1280, except that it's not, since a 3850 is getting much higher fps with the exact same cpu.

Except I've reinforced that premise with examples that show using a faster CPU does yield improvement in previously CPU bottlenecked situations. There can certainly be other limitations, like driver or game engine frame caps in some titles, but it's quite obvious faster CPUs are needed for the current GPU solutions. Either that, or devs are going to have to code better for multi-CPU to take advantage of current CPUs, or make more GPU intensive games.
That's only an assumption you've made based on your ridiculous theory.

Why would we need a faster next gen high end GPU if we don't get faster CPUs and it'll still be bottlenecked anyways? You've already argued how adding a 2nd GPU "always adds more FPS", except when it doesn't (at 1280 or when CPU overhead drops it lower than single), so why would you dismiss multi-GPU or ignore the fact multi-GPU requires more CPU cycles than single in order to scale? I've already tried explaining this to you, if a CPU can produce a certain number of frames per second and modern GPUs are already reaching that limit, how would adding a faster GPU help?
Except that it's not bottlenecked. Multi-gpu scaling is not necessarily an indicator of single-gpu performance nor a cpu bottleneck.

So is 76FPS fast enough or not? Once again you contradict yourself in an attempt to try and prove a point without even remembering what you posted in the previous reply. What happened to:

The "corrolation" you're seeing is the result of not having a fast enough single gpu to run modern games at high rez and high framerates, not some magical cpu-limitation theory you invented.

Despite satisfying your subjective 76FPS requirements, I'm quite sure anyone interested in SLI would like to know multi-GPU will only yield "Free AA" beyond a single card due to CPU bottlenecks.
Again, nobody playing ME at 76fps is gonna whine about being bottlenecked by anything, much less consider something as dumb as SLI for 1280.

Of course they're not when they're only getting "Free AA" and the same low frame rates.
Why, because you have a monitor capable of displaying 200fps and can tell the difference from 60fps?

Vsync absolutely has relevance. You were claiming 60-80FPS averages "were enough" because of 60Hz refresh, at which point I said that was false, as you would still see frame drops below refresh UNLESS you had Vsync enabled. So again, how would you have frame drops below refresh with Vsync enabled and an average FPS of 60-80? You can't. So once again, unless Vsync is enabled and you are averaging 60FPS, your subjective "60FPS at 60Hz is enough" argument is ridiculously flawed. I've already shown clearly with the COD4 FPS vs. Time graphs that a 62FPS average doesn't mean crap, much less that it's "enough".
So you claim enabling vsync will eliminate frame drops? :laugh:

My point is that you don't know that with certainty, as the FPS vs. Time graphs showed there is meaning in the detail, to which you replied "Those details don't matter when the person playing the game won't notice a difference." I can certainly see the difference between 20-30FPS and 60FPS. If adding a faster GPU or more GPUs isn't increasing frame rates but adding a faster CPU does, I think it's pretty safe to say the CPU is indeed the bottleneck.
You didn't add a faster gpu because they don't exist yet. Don't assume a cpu limit based on poor multi-gpu scaling.

Nope, but how does your example have anything to do with what you claimed, namely that multi-GPU is inefficient by design? How does sharing data between frames make single-GPU more efficient than multi-GPU when multi-GPU simply makes a copy for both GPUs in discrete frame buffers? Once again, multi-GPU isn't inefficient by design; it's even been shown to scale beyond 100%. That alone proves your claim is impossibly false.
A single gpu never has to deal with things like dependent texture reads between frames from another gpu. But of course, someone who sees the world as black and white would never consider these "external factors."

And your subjective reasons have no bearing on the objective evidence showing clear CPU bottlenecking. That's what it really boils down to: I show clear evidence of CPU bottlenecking, you claim it doesn't matter because it's above whatever subjective threshold you set for that particular example.
Your objective evidence has no bearing on the person running a game at 60+fps, and would only concern someone who spends more time extrapolating stupid theories from line graphs and then debating them on a forum.

Your 48fps theory was already proven wrong, which makes the argument moot.
It actually furthers my point, just as your last example did (by showing higher FPS with a faster CPU), but thanks. :)
LOL, at low settings, no AA/AF and 140fps? How many games do you run at that setting?

And nobody with half a brain would try to dismiss such clear evidence of CPU bottlenecking whether they think a single GPU is capable of producing playable framerates or not.
There's no such evidence beyond your dumb theory.

The same titles I mentioned earlier....AC, Witcher, Oblivion....
No they didn't.

What happened to shifting the whole paradigm forward? Crysis is gpu-bound to 17fps, and you're rejoicing because your minimum increased from 5 to 12? How about showing something at settings people would actually use?
I said it can shift the entire result set, as it clearly did for WiC. It's even more obvious in cases where a single GPU with a faster CPU is outperforming multi-GPU with a slow CPU. Without seeing the duration of the test and an FPS vs. Time graph it's impossible to say for sure, but one obvious possibility is that Crysis is more GPU intensive than CPU intensive at that setting of Very High, which shouldn't be a real surprise....

And yes, if you can increase your minimum frame rates by 75% that would be reason to be excited, especially if minimums were single digits.....

LOL. Have fun playing Crysis at 17fps and rejoice that at least you're not cpu bottlenecked :laugh:
 

deadseasquirrel

Golden Member
Nov 20, 2001
1,736
0
0
Originally posted by: chizow
Originally posted by: Golgatha
Yea, 4 pages of benchmarks showing GPU limited scenarios under settings someone would actually use to game.

http://www.firingsquad.com/har...e8600_review/page5.asp
That review was almost useful up until the point you realize they don't list what GPU is being used..... Think about it, if they used an 8800GTX, what would that tell you that you didn't know 2 years ago? Would that be relevant when compared to modern solutions that are 1.5-2x faster in single GPU that scale to 3-4 GPU?

They list that the GPU is a 4870. It's right there on the third page.
 

Golgatha

Lifer
Jul 18, 2003
12,400
1,076
126
Originally posted by: deadseasquirrel
Originally posted by: chizow
Originally posted by: Golgatha
Yea, 4 pages of benchmarks showing GPU limited scenarios under settings someone would actually use to game.

http://www.firingsquad.com/har...e8600_review/page5.asp
That review was almost useful up until the point you realize they don't list what GPU is being used..... Think about it, if they used an 8800GTX, what would that tell you that you didn't know 2 years ago? Would that be relevant when compared to modern solutions that are 1.5-2x faster in single GPU that scale to 3-4 GPU?

They list that the GPU is a 4870. It's right there on the third page.

Yes. Thanks for clearing that up. It's listed in the comments section of the review as well.
 

deadseasquirrel

Golden Member
Nov 20, 2001
1,736
0
0
So, it appears there are various reviews all over the place that one could point to in order to support their position. While I can see valid evidence for both points of view presented here**, I've come to my own conclusion that I won't feel a bit bad at all for grabbing a 4870 and matching it up with my 3800+ X2 at 2.8ghz. (**By both points of view, I'm specifically talking about these 2-- a) games today are still mainly gpu-dependent at 19x12+ resolutions and b) that GPUs of this generation are showing more cpu-limiting at what used to be high resolutions than they have in the past.**)

I have no doubt that I'd be able to push the 4870 a little further, even on my 1920x1080 display, with a faster CPU. But a complete platform upgrade is not in the cards right now. And it's gonna be hard to convince me that keeping my x1900xtx and upgrading to a C2D platform is a better idea than just replacing the video card, even if it's stuck on an old socket939 chipset.

Now, I just need to find a way to afford the $250ish for a 4870.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126

Digit-Life CPU/GPU Comparison

I can't believe you're bringing this up again when we've already been over it: http://forums.anandtech.com/me...215815&highlight_key=y

You keep linking these two fringe examples and make sweeping generalizations about them as if they were fact across the board, but they aren't fact. The fact is most modern games are heavily GPU bottlenecked.

Tweaktown's figures didn't use any AA, and UT3 is a CPU limited game. If we look at some more realistic UT3 benchmarks (http://www.bootdaily.com/index...6&limit=1&limitstart=7) we see that even a Phenom @ 2.2 GHz has no trouble driving a 4870 CF at 1680x1050.

As for Digit-Life, they used slow processors and ran benchmarks at 1680x1050 with no AA. Of course you're going to be CPU limited in such a situation, but again that isn't the norm.

Also Munky is right: just because CF/SLI is not showing a performance gain it does not automatically mean it's CPU limited. You simply can't make that inference as there could be multi-GPU issues at play that obviously wouldn't affect single cards.