Minimum CPU necessary to drive new 28nm high-end GPUs?

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,207
126
Was curious what CPU people think is going to be the minimum to not be bottlenecked with the newest 28nm GPUs when they come out.

Perhaps this question is premature, since they aren't out yet. But it was on my mind.

Would a C2Q @ 3.5 be too slow?

What about two mid-range 28nm GPUs, rather than one high-end one?

Edit: CPU in question is a Q9550 @ 3.5Ghz, 8GB RAM, Win7 64-bit. Currently has two HD6870s in CF.
 
Last edited:

edplayer

Platinum Member
Sep 13, 2002
2,186
0
0
If you want to completely eliminate the cpu bottleneck, you would need the fastest cpu available. Even then, I'm sure cpus will come out later showing that with the same videocards you could get higher fps with certain games/conditions.

As to whether a C2Q at 3.5 would be too slow, that depends on the game
 

alcoholbob

Diamond Member
May 24, 2005
6,380
448
126
There's no answer to that. There are games that depend almost entirely on the CPU and those that are almost completely GPU limited. There's a wide spectrum.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
pretty much what edplayer said. your cpu will certainly give you good performance, but there's no way it will get 100% out of something like a gtx670 or gtx680. the game and settings will determine just how much better a Sandy Bridge 2500/2600 cpu would perform with that level of gpu.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
If you want to completely eliminate the cpu bottleneck, you would need the fastest cpu available. Even then, I'm sure cpus will come out later showing that with the same videocards you could get higher fps with certain games/conditions.

As to whether a C2Q at 3.5 would be too slow, that depends on the game
Look at it this way, a Phenom II 980 BE is not too slow with GTX 580 SLI or HD 6990 TriFire for most games.
:\
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I disagree. Even when I upgraded from Q6600 @ 3.4ghz to i7 860 @ 3.9ghz, the performance increase was significant in some games, and that's just with a single GTX470.

HardOCP found that the difference can be 10-16% by going from Core i7 @ 3.6ghz to a 2600k @ 4.8ghz with their Tri-SLI 580 setup -- and that's at 2560x1600! Xbitlabs found about a 10% difference at 1080P when comparing an i5 760 to an i7 975 with a GTX480.

Core i7 is at least 20-25% faster per clock than a Q6600. So a 3.6GHz i7 is roughly equivalent to a 4.5GHz Q6600. That means a Q6600 @ 3.6GHz is at least 20-25% slower than that system. Therefore, it logically follows that a 2600K @ 4.8GHz would be 30-41% faster with Tri-SLI 580s. However, it's most likely that with the improved architecture and better scaling of 2 cards, dual 680s will be faster than 3 580s, so this performance difference will grow even more.

I wouldn't be surprised if, at 1080P, the Q6600 @ 3.6GHz system was a full 30-40% slower than a 2600K @ 4.8GHz with dual 680s. But then one has to ask: who in the world is going to spend $1000 on a 28nm GPU setup and pair it with a CPU from 2007?

Our forum tends to underestimate the importance of CPU performance. In Starcraft II at 1080P, a Q9400 gets half the framerate of a 2500K. This means you'd need a Q9400 @ 5.2GHz to match a stock 2500K in that game -- and that's with a single GTX460. :eek:
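
To make the "equivalent clock" reasoning above explicit, here is a rough back-of-the-envelope sketch (Python). The IPC gaps and the 2x Starcraft II figure are the ballpark estimates quoted in this post, not measured constants:

Code:
# Treat per-core performance as clock x relative IPC (a crude approximation).
def equivalent_clock(clock_ghz, ipc_advantage):
    """Clock the slower-IPC chip would need to match the faster chip at clock_ghz."""
    return clock_ghz * (1 + ipc_advantage)

# Nehalem i7 assumed 20-25% faster per clock than a Kentsfield Q6600:
print(equivalent_clock(3.6, 0.25))        # ~4.5 GHz Q6600 to match a 3.6 GHz i7

# Stacking HardOCP's 10-16% (i7 3.6GHz -> 2600K 4.8GHz) on top of that 20-25% gap:
print(1.10 * 1.20 - 1, 1.16 * 1.25 - 1)   # ~0.32 to ~0.45, in the same ballpark as the 30-41% above

# Starcraft II example: if a stock 2500K gets ~2x the fps of a Q9400 (2.66 GHz),
# the Q9400 would need roughly double its clock to catch up:
print(2.66 * 2)                           # ~5.3 GHz, close to the 5.2 GHz figure above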

At the end of the day, it's $60 for 8GB of DDR3, $140 for a decent mobo and $220 or so for a 2500K. It makes sense to just sell your setup if you plan on spending $500+ on a next-generation GPU setup. Otherwise, a single mid-range 28nm GPU should easily max out the overclocked Q6600. There are certain games that aren't very sensitive to CPU speed - Crysis and Metro 2033 come to mind. But it's way too easy to conclude that CPU speed doesn't matter if you are already getting 50+ fps in Dirt 3. And then why buy a next-generation GPU if you don't want that 100 fps?
 
Last edited:

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
I disagree. Even when I upgraded from Q6600 @ 3.4ghz to i7 860 @ 3.9ghz, the performance increase was significant in some games, and that's just with a single GTX470.

HardOCP found that the difference can be 10-16% by going from Core i7 @ 3.6ghz to a 2600k @ 4.8ghz with their Tri-SLI 580 setup -- and that's at 2560x1600! Xbitlabs found about a 10% difference at 1080P when comparing an i5 760 to an i7 975 with a GTX480.

On the other hand it did squat for AMD cards.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
On the other hand it did squat for AMD cards.

GTX580 Tri-SLI is faster than HD6990 + 6970, so perhaps the AMD cards were already maxed out with a slower CPU. Also, we know that NV cards are more sensitive to CPU speed. There is just no way that a Q6600 @ 3.6GHz will be fast enough to drive 2 680s at 1080P. In Resident Evil 5, my GTX470 and 4890 were maxed out at only 55 fps at 1080P 8AA/16AF with a Q6600 @ 3.4GHz. In the same benchmark I now get 118 fps with my 6950 @ 6970 speeds -- more than double the framerate with a single card and an i7 860 @ 3.9GHz!

People forget just how old the Q6600 is. My i7 860 at 2.8ghz was as fast as my Q6600 @ 3.4ghz was. You are saying that a 2600k @ 4.5-4.8ghz will have no tangible benefit at 1080P with 2 top-end 28nm GPUs over a Core i7 @ 2.8ghz?

Q6600 vs 2600K: http://www.anandtech.com/bench/Product/53?vs=287 (2600K shows 40-100% improvement in every benchmark)
Q6600 vs i7 950: http://www.anandtech.com/bench/Product/53?vs=100 (i7 950 shows 20-100% improvement in every benchmark)

The gaming benchmarks are done at 1680x1050. Even if we move to 1080P, that's only an 18% increase in pixels on the screen. You figure a GTX680 will be at least 40-50% faster than the 580? Now imagine pairing 2 of those 680s....
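
For reference, the pixel-count arithmetic behind those percentages (Python), assuming nothing else about the settings changes:

Code:
# Pixel counts behind the resolution comparisons above.
px_1680 = 1680 * 1050     # 1,764,000 pixels
px_1080 = 1920 * 1080     # 2,073,600 pixels
px_1600 = 2560 * 1600     # 4,096,000 pixels

print(px_1080 / px_1680)  # ~1.18 -> about 18% more pixels than 1680x1050
print(px_1600 / px_1080)  # ~1.98 -> roughly 2x the pixel load of 1080p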
 
Last edited:
Dec 30, 2004
12,553
2
76
i swear everything turns into amd vs intel

It's going to depend on the game. Assuming you play at 1080p+ resolutions, the CPU isn't going to matter that much. I plan to stick with mine for quite a while. Most games just don't need that much CPU power, and the ones that do are usually multithreaded. I don't play any of Blizzard's asstastically optimized games (notoriously NOT good at spreading work out over >2 cores), so I don't have any problems with my current setup.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
You are saying that a 2600k @ 4.5-4.8ghz will have no tangible benefit at 1080P with 2 top-end 28nm GPUs over a Core i7 @ 2.8ghz?

The OP didn't ask what CPU would be required to run 28nm high-end SLI/Tri-SLI/CF/TriFire/QuadFire. And the [H] i7 was @ 3.6GHz.

Additionally, the trend seems to be that single GPUs with performance similar to last gen's x2 cards are generally much less dependent on the CPU to achieve their max performance.

[Benchmark charts: World in Conflict 1920x1200 and S.T.A.L.K.E.R. 1920x1200]
 
Last edited:

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
I disagree. . . .

With what, and with whom? The OP asks if a C2Q is too slow at 3.5GHz, and i am talking Penryn.
Generally and PRACTICALLY it is not - if you have super-fast graphics and use max details at high resolutions in most PC games - with exceptions generally being RTS games.

Our forum tends to underestimate the importance of CPU performance.
And you may tend to exaggerate it. Practically a Phenom II 980 BE is fine for my GTX 580 SLI and my HD 6990-X3 Tri-Fire. Of course, the faster CPU will often produce faster framerates .. but we are talking *practically*.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Or perhaps the improvement wasn't because of the CPU, but because the 3rd 580 wasn't running in an x4 slot?

Legit Reviews tested this theory. 16x/4x (Asus P8P67) vs. 8x/8x (Asus P8P67 Deluxe) CF with 6970s produced nearly identical results at 1080P. The 4x PCIe 2.0 slot gets 2x the bandwidth off the P67 chipset compared to the P55 generation. So a 4x PCIe 2.0 slot is equivalent to a single PCIe 1.0 8x. That WS Revolution runs at 8x/8x/16x. A 4x PCIe 2.0 slot only carries about a 5% penalty with a GTX480 at 1080P, as found by TechPowerUp.
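
For reference, a rough bandwidth comparison behind that PCIe equivalence (Python), using the nominal per-lane rates and ignoring protocol overhead:

Code:
# Nominal one-direction bandwidth per lane (approximate, before overhead).
PCIE1_PER_LANE_MB = 250    # PCIe 1.x
PCIE2_PER_LANE_MB = 500    # PCIe 2.0

print(8 * PCIE1_PER_LANE_MB)    # PCIe 1.0 x8  -> ~2000 MB/s
print(4 * PCIE2_PER_LANE_MB)    # PCIe 2.0 x4  -> ~2000 MB/s, hence the equivalence above
print(16 * PCIE2_PER_LANE_MB)   # PCIe 2.0 x16 -> ~8000 MB/s, for comparison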

How do you account for the huge performance differences then in Crysis, BF:BC2, F1 2010? Keep in mind that's coming from an i7 @ 3.6ghz which is another 20-25% faster than a Q6600 @ 3.6ghz.

And you may tend to exaggerate it. Practically a Phenom II 980 BE is fine for my GTX 580 SLI and my HD 6990-X3 Tri-Fire. Of course, the faster CPU will often produce faster framerates .. but we are talking *practically*.

If you aren't getting maximum performance due to a CPU limitation, then why even get a 580 SLI / 6990 CF system? My point is you'd be better off with 570 SLI or HD6950 CF. By your logic, you should just always buy the fastest GPU sub-system you can afford, since "practically" CPU performance doesn't matter. You are probably arriving at this conclusion from testing at 2560x1600 4AA/8AA. You know that's 2x the load compared to a 1080P setup.

That would be like me comparing gaming on a 960x540 screen to 1920x1080.

Look, at 1680x1050, an i3-2300 is much faster than a Phenom II X4 955 in games. 1080P isn't that far off from that resolution.

And like I said in my real world testing, Q6600 @ 3.4ghz was already too slow. Look at Far Cry 2, Civ5, Starcraft 2, GTAIV, Resident Evil 5. In all of those games, it's going to be slaughtered by an i5/i7, esp. SB.

If CPU speed didn't matter, why does Intel have > 80% market share? You think most people sit there and count SuperPi times with a stopwatch?

Look, I realize that if you are getting 60 fps+, then getting 180 fps+ may not be that advantageous from a practical standpoint. But then you are perfectly fine with a GTX560 Ti over an HD6990 CF.
But if you were to simply add that HD6990 CF setup to a Q6600 CPU, guess what, you may only go from 60 to 80+ fps and that's it.
 
Last edited:

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
If you aren't getting maximum performance due to a CPU limitation, then why even get a 580 SLI / 6990 CF system? My point is you'd be better off with 570 SLI or HD6950 CF. By your logic, you should just always buy the fastest GPU sub-system you can afford, since "practically" CPU performance doesn't matter. Also, you were likely testing at 2560x1600 4AA. You know that's 2x the load compared to a 1080P setup.

That would be like me comparing gaming on a 960x540 screen to 1920x1080.
There are always limitations. There is also the issue of unnecessarily changing out your MB just to have something a bit faster when PRACTICALLY there is no difference.

And i tested this at 2560x1600 and 5760x1080. No, i would NOT be better off with GTX 570 SLI or HD 6950 CF, and i can prove that also .. upgrading from that to HD 6990-X3 TriFire or to GTX 580 SLI produced some *practical* and meaningful results - FAR MORE than upgrading from a Phenom II 980-X4 or a Penryn C2Q to an i7 2600K with the original graphics cards that you suggested.

And if you are only running at 1080p, you certainly won't need more than a Phenom II X4 or a Penryn C2Q for any game i am aware of with fast graphics.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
There are always limitations. There is also the issue of unnecessarily changing out your MB just to have something a bit faster when PRACTICALLY there is no difference.

And i tested this at 2560x1600 and 5760x1080. No, i would NOT be better off with GTX 570 SLI or HD 6950 CF, and i can prove that also .. upgrading from that to HD 6990-X3 TriFire or to GTX 580 SLI produced some *practical* and meaningful results - FAR MORE than upgrading from a Phenom II 980-X4 or a Penryn C2Q to an i7 2600K with the original graphics cards that you suggested.

You are not making any sense. We are talking about CPU limitation at 1080P. The OP doesn't game at the resolutions you listed. 1920x1080 is only 18% more pixels than the 1680x1050 results I linked, where the CPU bottleneck was significant. "Proving" that more GPU power helped you at 2-3x the pixels of 1080P is just stating the obvious and has nothing to do with the 1080P case being discussed. Of course at 5760x1080 you'll be almost entirely GPU limited - no wonder you felt little difference between a Phenom II X4 and the i7 2600K...but who is arguing that in this thread? No one.

Secondly, you realize HD6990 Tri-Fire costs $2100, right? So you are not willing to upgrade from a 980 BE to a 2600K to get another 20-30% performance increase in games at 1080P, but you are eager to go from an HD6950 CF setup ($500) to a $2100 6990 Tri-Fire setup? :hmm: What kind of enthusiast would get 3x HD6990s and pair them with anything other than a Core i5/i7?

You provided no evidence to support the view that a Q6600 @ 3.6GHz is sufficient to drive dual GTX580s, never mind dual 680s, at 1080P without bottlenecking them.

And if you are only running at 1080p, you certainly won't need more than a Phenom II X4 or a Penryn C2Q for any game i am aware of with fast graphics.

** Shakes head **

I don't think you are understanding what a CPU bottleneck is.

So let's go ahead and get GTX590 in SLI then because CPU bottlenecking doesn't matter according to you.

......and then you are going to get 57 fps at 1920x1200 in Dirt 3, just the same as you would with a single 590.

Oh wait, there is more! You can already get 58 fps with just a single GTX480 in this game when paired with an i7 processor. Yet according to you, go ahead and blow $2100 on an HD6990 Tri-Fire setup because Phenom II X4 / C2Q CPU bottlenecks don't matter. If we listened to you, we would have just spent $2100 on a GPU setup with a Phenom II X4 to get identical framerates to a $300 GTX480.

The lower the resolution, the more your CPU matters. So if you are only gaming at 1080P, then you'll want the fastest CPU you can get your hands on if you are planning to go GTX680 SLI with it.
 
Last edited:

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
I'm pretty sure you can't run three 6990s together.

His tri-fire is 6990+6970 and that is $1000 give or take.

You provided no evidence to support the view that a Q6600 @ 3.6GHz is sufficient to drive dual GTX580s, never mind dual 680s, at 1080P without bottlenecking them.

Read the OP.

What about two mid-range 28nm GPUs, rather than one high-end one?
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
His tri-fire is 6990+6970 and that is $1000 give or take.

My bad on that. But you can see in my previous post that a GTX480 gets identical frame rate to a GTX590 at 1920x1200 4AA in Dirt 3 when paired with a Phenom II X4 processor. In other words, an entire GTX570 GPU is just wasted due to a CPU bottleneck. :eek:

2 mid-range 28nm GPUs will likely be as fast as 2 GTX580s.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Legit Reviews tested this theory. 16x/4x (Asus P8P67) vs. 8x/8x (Asus P8P67 Deluxe) CF with 6970s produced nearly identical results at 1080P. The 4x PCIe 2.0 slot gets 2x the bandwidth off the P67 chipset compared to the P55 generation. So a 4x PCIe 2.0 slot is equivalent to a single PCIe 1.0 8x. That WS Revolution runs at 8x/8x/16x. A 4x PCIe 2.0 slot only carries about a 5% penalty with a GTX480 at 1080P, as found by TechPowerUp.

How do you account for the huge performance differences then in Crysis, BF:BC2, F1 2010? Keep in mind that's coming from an i7 @ 3.6ghz which is another 20-25% faster than a Q6600 @ 3.6ghz.



If you aren't getting maximum performance due to a CPU limitation, then why even get a 580 SLI / 6990 CF system? My point is you'd be better off with 570 SLI or HD6950 CF. *By your logic, you should just always buy the fastest GPU sub-system you can afford, since "practically" CPU performance doesn't matter. You are probably arriving at this conclusion from testing at 2560x1600 4AA/8AA. You know that's 2x the load compared to a 1080P setup.

That would be like me comparing gaming on a 960x540 screen to 1920x1080.

Look, at 1680x1050, an i3-2300 is much faster than a Phenom II X4 955 in games. 1080P isn't that far off from that resolution.

And like I said in my real world testing, Q6600 @ 3.4ghz was already too slow. Look at Far Cry 2, Civ5, Starcraft 2, GTAIV, Resident Evil 5. In all of those games, it's going to be slaughtered by an i5/i7, esp. SB.

If CPU speed didn't matter, why does Intel have > 80% market share? You think most people sit there and count SuperPi times with a stopwatch?

Look, I realize that if you are getting 60 fps+, then getting 180 fps+ may not be that advantageous from a practical standpoint. But then you are perfectly fine with a GTX560 Ti over an HD6990 CF.
But if you were to simply add that HD6990 CF setup to a Q6600 CPU, guess what, you may only go from 60 to 80+ fps and that's it.

Let's not confuse things with more info than we need here to cover what I said.

I was commenting on the increase when [H] went from their X58 setup to the P67 one, and the AMD cards showing no increase. You stated that it might be because the AMD cards were already maxed out with the X58 setup. I just stated that it might have something to do with the 3rd 580 being in an x4 slot. It might also be because the NF200 chip doesn't work efficiently in an unbalanced CrossFire setup, seeing as how nVidia cards don't operate in that configuration. Does running the 3rd 580 in an x4 slot effectively reduce available bandwidth for all 3 cards? Why did performance for the AMD cards drop in AvP with the "faster" CPU?

I'd like to see the tests repeated with an x58 setup that would run the 3rd card at x8, as is the case with most x58 mobos. I'd also like to see them run in a P67 board that doesn't use the NF200 chip.

We only know the outcome with the 2 different setups. We don't have enough information to come to any definitive conclusion though. More testing would need to be done, IMO.

*I never said any such thing. Nor does most of your post have anything to do with what I said.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
You are not making any sense. We are talking about CPU limitation at 1080P. The OP doesn't game at the resolutions you listed. 1920x1080 is only 18% more pixels than the 1680x1050 results I linked, where the CPU bottleneck was significant. "Proving" that more GPU power helped you at 2-3x the pixels of 1080P is just stating the obvious and has nothing to do with the 1080P case being discussed. Of course at 5760x1080 you'll be almost entirely GPU limited - no wonder you felt little difference between a Phenom II X4 and the i7 2600K...but who is arguing that in this thread? No one.

Secondly, you realize HD6990 Tri-Fire costs $2100, right? So you are not willing to upgrade from a 980 BE to a 2600K to get another 20-30% performance increase in games at 1080P, but you are eager to go from an HD6950 CF setup ($500) to a $2100 6990 Tri-Fire setup? :hmm: What kind of enthusiast would get 3x HD6990s and pair them with anything other than a Core i5/i7?

You provided no evidence to support the view that a Q6600 @ 3.6GHz is sufficient to drive dual GTX580s, never mind dual 680s, at 1080P without bottlenecking them.



** Shakes head **

I don't think you are understanding what a CPU bottleneck is.

So let's go ahead and get GTX590 in SLI then because CPU bottlenecking doesn't matter according to you.

......and then you are going to get 57 fps at 1920x1200 in Dirt 3, just the same as you would with a single 590.

Oh wait, there is more! You can already get 58 fps with just a single GTX480 in this game when paired with an i7 processor. Yet according to you, go ahead and blow $2100 on an HD6990 Tri-Fire setup because Phenom II X4 / C2Q CPU bottlenecks don't matter. If we listened to you, we would have just spent $2100 on a GPU setup with a Phenom II X4 to get identical framerates to a $300 GTX480.

The lower the resolution, the more your CPU matters. So if you are only gaming at 1080P, then you'll want the fastest CPU you can get your hands on if you are planning to go GTX680 SLI with it.
i am trying to point out that there is an ENTIRE SPECTRUM of resolutions - beginning with 1080p - where PRACTICALLY it doesn't matter whether you are using a fast Phenom II/Penryn quad or an i7 2600K

When you upgrade your graphics from Fast to Superfast, you get far more *details* and incredible levels of filtering that you simply cannot match by getting a faster CPU. If your minimums are satisfactory - and they are with MOST (non-RTS) games - then upgrading the video opens an entire new world to you - including adding more displays for Eyefinity/Surround or playing in S3D

HD 6990-TriFire costs about a thousand dollars; GTX 580 SLI is less. Surely the next gen single-GPU video card will cost much less - however, looking at SLI and Tri-Fire gives us confidence that our same CPUs will be fine for next gen single-GPU cards.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Let's not confuse things with more info than we need here to cover what I said.

Nor does most of your post have anything to do with what I said.

The 2nd large part of my post was in response to apoppin. I thought you saw me quoting his comment? My response to you was only in regard to the PCIe 4x links to Legit Reviews and TechPowerUp, which both show a minimal performance hit. In other words, even if one of those 580s was only running at PCIe 2.0 4x, the expected performance hit would have been about 5%.

i am trying to point out that there is an ENTIRE SPECTRUM of resolutions - beginning with 1080p - where PRACTICALLY it doesn't matter whether you are using a fast Phenom II/Penryn quad or an i7 2600K

Well, practically, going beyond 4AA/16AF doesn't really matter that much either. So if a game runs at 60 fps with 4AA, I couldn't care less if that HD6990+6970 setup can do 32xAA at 1080P. In other words, a GTX560 Ti SLI setup would net me identical performance to a GTX580 SLI setup because of my CPU bottleneck - exactly my point. So you'd save yourself $400 and go for the cheaper GPU sub-system.

When you upgrade your graphics from Fast to Superfast, you get far more *details* and incredible levels of filtering that you simply cannot match by getting a faster CPU.

If you feel spending $400-600 more is worth it to go from 4AA to 16x super-sampling, then sure. On the anisotropic filtering side, I am confident 16AF is enough for 99% of people. For most people, I don't think this "extra new world of filtering" is worth hundreds of extra dollars. I imagine for most people, as long as they can max out all the sliders within the game and have 4AA/16AF, that's probably sufficient. Most people don't even know the different types of anti-aliasing.

then upgrading the video opens an entire new world to you - including adding more displays for Eyefinity/Surround or playing in S3D

That's going a bit off-topic don't you think? If I was running 3 monitors in 3D vision surround, then sure I might want GTX590 in SLI. But I don't think this situation applies to the OP.

Surely the next gen single-GPU video card will cost much less - however, looking at SLI and Tri-Fire gives us confidence that our same CPUs will be fine for next gen single-GPU cards.

Considering you are using 3x 120Hz monitors at 5760x1080, you are talking about 3x the resolution of a single 1920x1080 monitor with 2x the required framerate of a 60Hz display. So sure, someone like you may think an HD6990 CF setup is required. You realize your "normal" is 6x the average load requirement we are discussing in this thread? Way to skew the viewpoint towards a GPU bottleneck. Just think about what you just said. If you were only gaming on a single 1080P 60Hz monitor, what videocard would you pair with a Q6600 @ 3.6GHz? Is your answer still HD6990+6970?
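
That "6x" figure comes out of simple pixels-times-refresh arithmetic; a minimal sketch (Python), ignoring AA and per-frame CPU cost:

Code:
# Pixels-per-second comparison: triple-wide 120Hz vs. a single 1080p 60Hz display.
single_1080p60 = 1920 * 1080 * 60     # ~124 million pixels/s
triple_120hz   = 5760 * 1080 * 120    # ~746 million pixels/s

print(triple_120hz / single_1080p60)  # ~6.0 -> roughly 6x the rendering load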
 
Last edited:

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
My bad on that. But you can see in my previous post that a GTX480 gets identical frame rate to a GTX590 at 1920x1200 4AA in Dirt 3 when paired with a Phenom II X4 processor. In other words, an entire GTX570 GPU is just wasted due to a CPU bottleneck. :eek:


[Dirt 3 CPU scaling chart]


But the Phenom II X4/X6 is faster than the i7-2600K. *scratches head in confusion*

I think you got confused with the Athlon II X4, which seems to suffer from lack of L3$?

I wonder how much of Intel performance advantage in general comes from their ability to store more stuff on cache.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The 2nd large part of my post was in response to apoppin. I thought you saw me quoting his comment? My response to you was only in regard to PCIe 4x link to Legit Reviews.

Sorry. Was reading while I was responding and the prior quotes are omitted while composing a response. I should have been more careful.

Still, I think you misunderstood my response to mean that there's no improvement from a faster processor. I was only commenting that we'd need more info to conclusively say why there was an improvement for the nVidia cards and none (or even a performance drop in AvP) for the AMD cards.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
You are the one dragging this off course with talk of Q6600. Here is what the OP asked (and notice he did not mention resolution - you restricted it).

Was curious what CPU people think is going to be the minimum to not be bottlenecked with the newest 28nm GPUs when they come out.

Perhaps this question is premature, since they aren't out yet. But it was on my mind.

Would a C2Q @ 3.5 be too slow?

What about two mid-range 28nm GPUs, rather than one high-end one?
Let's distill his question to the essentials:

The OP is asking if a high-end single GPU of next gen will be bottlenecked by a Core2Quad at 3.5GHz

It will not be bottlenecked practically by a 3.5 GHz Penryn Quad if multi-GPU of this generation is anything to go by.

You can throw up as big a smokescreen as you want - but please try to stick to the topic as the OP asked it and not place artificial restrictions on it. :p




 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I think you got confused with the Athlon II X4, which seems to suffer from lack of L3$?

I wonder how much of Intel performance advantage in general comes from their ability to store more stuff on cache.

Yup, I glanced at the graph too fast. But take a look at what happens to the Q6600 when it has cache limitations as well as slower-IPC cores. I actually predict it would do just as poorly in Dirt 3 as those Athlon II X4s did.

On one hand, we talk about how most games are console ports that don't really need high-end GPUs, and on the other hand we are saying that there is no CPU bottleneck with the Q6600 since most of the load goes to the GPU. Both statements can't be correct at the same time for those games.

Take a look at Witcher 2 with a GTX590 at 1920x1080.
Q9550 2.83ghz = 45 fps
i7 930 2.8ghz = 72 fps

Now, I realize they are testing it at LQ. Notice how at HQ 1920x1200 GTX580 SLI gets 99 fps with an i7 920 @ 4.2GHz? That means you aren't going to see any of that benefit no matter how much GPU you throw at Witcher 2 with a Q9550 -- you'll be able to use HQ, but it'll never go above 45 fps at those settings. So you just get a "free" IQ increase at the same framerate. And if you only have an i7 930 at stock speed, you'll see 72 fps and still lose another 27 fps. If we look at the graph, a single GTX590 achieves 78 fps with an i7 @ 4.2GHz. Therefore, you would have wasted the extra $300 on a GTX580 SLI setup only to get similar performance to a $700 GTX590 if you were gaming on a stock 930.

The implication of the CPU bottleneck in Witcher 2 @ 1920x1080:

So if you were to pair an HD6970 with an i7 930 @ 4.2GHz in this particular game, you would get roughly what a GTX580 gets: 49 fps at HQ 1920x1080. A GTX590 is only able to achieve 50 fps when paired with a Phenom II X4 955 @ 3.2GHz at the LQ settings at 1920x1080. With a Q9550, your GTX580 SLI setup would only be running at about half of its potential - 49 vs. 99 fps - and this is one of the most GPU-intensive games out right now.
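
To put the Witcher 2 numbers in terms of what a CPU bottleneck means, here's a minimal "lower of the two ceilings" sketch (Python). The ceilings are the approximate figures quoted above; the 4.2GHz CPU ceiling is assumed to sit above the GPU's limit:

Code:
# Delivered fps is roughly capped by whichever is lower:
# what the CPU can prepare per second vs. what the GPU can render per second.
def delivered_fps(cpu_ceiling, gpu_ceiling):
    return min(cpu_ceiling, gpu_ceiling)

gtx580_sli_hq = 99   # GTX580 SLI potential at HQ with a fast i7 (figure quoted above)

print(delivered_fps(45, gtx580_sli_hq))    # Q9550 @ 2.83GHz       -> ~45 fps, CPU-limited
print(delivered_fps(72, gtx580_sli_hq))    # stock i7 930 @ 2.8GHz -> ~72 fps, CPU-limited
print(delivered_fps(120, gtx580_sli_hq))   # i7 @ 4.2GHz (assumed ceiling) -> ~99 fps, GPU-limited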
 
Last edited:

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Yup, I glanced at the graph too fast. But take a look at what happens to the Q6600 when it has cache limitations as well as slower-IPC cores. I actually predict it would do just as poorly in Dirt 3 as those Athlon II X4s did.

On one hand, we talk about how most games are console ports that don't really need high-end GPUs, and on the other hand we are saying that there is no CPU bottleneck with the Q6600 since most of the load goes to the GPU. Something has to give, no?

Let's get the OP back in here. Is he specifically talking about his Q6600 and 1920x1080? If so, it is slower than Penryn and Phenom II 980 BE in gaming. If he is talking generally, the later C2Qs at 3.5 GHz do fine in modern games with 6970-TriFire and GTX 580 SLI and will also do fine with next gen single-GPU.