CPU or GPU for games? - Intel E6850 Bottleneck Investigation


GaiaHunter

Diamond Member
Jul 13, 2008
3,732
432
126
oh I see. the 5000 X2 is now pitiful in many current games though. again a gtx260 i7 setup would offer better overall playability in many more modern games than a gtx280 5000 X2 setup.

Is a GTX 280 90% faster than a GTX 260? Because an E8x00 at 3.9 GHz should be roughly 95% faster, in clock terms, than one at 2.0 GHz.
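
As a quick sanity check on that clock math (a short Python sketch; raw clock ratio only, ignoring IPC, FSB, and memory effects):

    # Relative clock increase of an E8x00 at 3.9 GHz over one at 2.0 GHz.
    base_ghz, oc_ghz = 2.0, 3.9
    print(f"{(oc_ghz / base_ghz - 1) * 100:.0f}% higher clock")  # prints: 95% higher clock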

How much faster is an i7 compared to an E8x00 in games?

Better yet, how much faster is an i7 compared to an Athlon II X4, a Phenom II X4, or a Q9xxx in games at real-world playing resolutions and IQ settings?

Because in Tom's Hardware's articles the difference was only perceptible in games where core count actually mattered.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Tom's Hardware's article on a good balanced setup showed that you don't need a very powerful CPU for gaming. Of course, we're not talking about a Pentium 4 or a Pentium E series chip like the E21x0, which indeed is a bottleneck for an HD 3870 (I pitted one against my old Pentium M setup with an overclocked HD 3850 512MB and both gave the same gaming performance).

That means a nice midrange CPU like the Core 2 6, 7, or 8 series will be enough for now. There will be some games that are very CPU bound and require a quad, but those are very few; if you're the type of person who doesn't like such games (which aren't the best games on the market anyway), why bother?

Having an i7 solely for gaming doesn't make any sense when a Phenom II X4 provides similar gaming performance. In terms of playability, a nice dual-core processor won't render your games unplayable, and the i7 won't make them more playable or let you turn up the eye candy (except in those rare games we all know about).
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
What's the point of all the bickering in this thread? CPUs should be bought for games the same way graphics cards are. For some reason, it's well known how the available graphics cards perform at different resolutions in many games, but when it comes to CPUs, there's a giant "DER" and all kinds of unreal bullshit and generalizations start flying around.

Different games require different amounts of CPU horsepower. Going to be playing your standard console port? Any decently clocked dual core will do (for reference, a C2D @ 3.0 GHz+). Looking at more performance-intensive titles, e.g. Crysis or Far Cry 2? The higher clocked the CPU, the better. Grabbing a multi-core-optimized game like Dragon Age, Arma 2, or GTA IV? Strap in a quad and clock it. This isn't that complicated, but some of you are making it so.
 

Blue Shift

Senior member
Feb 13, 2010
272
0
76
Ok, so what did we learn? For many first-person shooters released in the last four years, the CPU is not as important as the video card at high resolution with FSAA enabled. I have no trouble believing that.

Now let's mix it up a bit with some RTS, MMOs, sims (flight and racing) including some poor console ports, shall we?

QFT.

I'd like to see Red Faction: Guerrilla and Supreme Commander (hosting vs. not).
Or even the Supreme Commander 2 demo!

Also, I'd like to mention that most of these games didn't run at 60 frames per second with the E6850 even at stock clocks.

Edit: Find a game that's going to hold your interest in the long run, and get what you need to make it look nice and run well. Why does it need to be more complicated? Example: This is my Oblivion rig.
 

aggressor

Platinum Member
Oct 10, 1999
2,079
0
76
I like how a few months later he ended up getting a quad core CPU anyway :)

I just went from an E6400 to an i7 860 and I definitely noticed a difference even with a single HD4870.
 

blanketyblank

Golden Member
Jan 23, 2007
1,149
0
0
Uh, no. I don’t think you understand how bottlenecking works. If two cores fully saturate my GTX285 so that it’s the primary performance limitation, why do you think an extra two cores will remove that bottleneck? Do you think those extra two cores will make the GPU run faster? Of course they won’t.

If I have two lanes merging into a single lane bridge, that bridge is always a single lane, even if I increase the lanes merging into it to four.
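
To make that concrete, here's a toy Python model of the same point: the delivered framerate is capped by whichever stage is slower, so raising the CPU-side cap past the GPU-side cap changes nothing. The numbers are invented for illustration, not taken from the article.

    def effective_fps(cpu_fps_cap, gpu_fps_cap):
        # The delivered framerate is limited by the slower of the two stages.
        return min(cpu_fps_cap, gpu_fps_cap)

    gpu_cap = 45.0        # hypothetical: what the GPU can render at these settings
    dual_core_cap = 70.0  # hypothetical: how fast two cores can feed the GPU
    quad_core_cap = 130.0 # hypothetical: roughly double the CPU-side headroom

    print(effective_fps(dual_core_cap, gpu_cap))  # 45.0 -> GPU-bound
    print(effective_fps(quad_core_cap, gpu_cap))  # 45.0 -> extra cores change nothing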

And for the record, my i5 750 also shows practically no performance difference over my E6850 at 3 GHz in exactly the same gaming situations, and that’s with four cores turboing to 3.2 GHz. I just haven’t gotten around to putting up the results yet.

Upgrading to an i5 750 has largely been a waste of time and money because it’s my GTX285 that is holding me back almost 100%.

The reason I say this is that there appear to be differences between a dual core running at 3.6 GHz and a quad core running at 2.4 GHz, according to some benchmarks I've seen. Since the quad is still faster despite being clocked lower, it stands to reason that adding or subtracting cores can make a difference depending on how the games are programmed; in single-threaded games, clock speed was all that mattered.
Thus I just wanted to see whether, in your results, changing the number of cores would have mattered in any particular game, since it may change how the game performs.

http://www.pcgameshardware.com/aid,...rks-75-percent-boost-for-quad-cores/Practice/

Well, at least the upgrade will get the “you’re CPU limited with your E6850” people off my back, the sorts of people that refuse to believe the reality I show them.


I’m pretty sure I repeatedly mentioned in that article that the settings I used are the actual settings I play the games at. As such my goal is exactly the same as the goal of most gamers.


We already know the real-world FPS because the settings I used are the actual settings I game on my GTX285.

“Standardizing” is just another word for “skewing to artificially inflate CPU differences”. I’m not going to run everything at 1680x1050 with no AA, because I don’t play games at those settings. Someone with a 4 GHz i7 won’t be playing games at such settings either.

The fact is, if you always configure your games to run at the highest playable settings for your particular GPU, the GPU will bottleneck you by far. I'm pretty sure I mentioned this in the article as well.

You’re wrong about the first part, but right about the second. Yes, Clear Sky is too slow to run at 1920x1200 but that’s exactly the point, and it doesn’t make the methodology flawed. Like I said, I presented the highest playable settings for my GTX285, the same settings that I game at.

So when I play Clear Sky, those are exactly the settings I use (1680x1050 with 2xTrSS). The same applies to the rest of the games I tested.

Perhaps I'm not being clear about what I mean by standardizing. When I build a system I'm looking for a certain minimum level of performance, and anything extra is gravy.
That minimum level of performance might be triple 2560x1600 screens at high settings with 2xAA/16xAF at over 60 fps average, or it could be a single 1360x768 screen at medium with no AA/AF that has enough fps to feel smooth. If a game cannot hit the required minimum, that means some component needs upgrading to play that game. So what I want to know is playability at some fixed standard, which I can then extrapolate to the parts I use when building or upgrading.

Thus the problem I have with your test is that it's too variable: it doesn't tell me whether the equipment I buy can perform at the level I want across most games. You are using a subjective test based on what you consider playable. That basically involves sacrificing fps to raise the graphics options as high as they can go until the game becomes unplayable to you, while also lowering resolution or certain settings to keep a game playable.
In other words, playability is your constant, when it really should be the thing you are measuring (in the form of fps at some standard setting chosen for all games).

Otherwise your test has limited real-world use, and the results seem quite obvious to me. By cranking graphics settings and increasing resolution you are essentially creating more work for the GPU, and since your standard is "highest playable," you do this until the GPU reaches its limit and can barely provide enough fps to your satisfaction. So if you downclock the GPU, it will slow down roughly in proportion to the downclock; if it doesn't, that just means you hadn't maxed out its workload yet.

Cranking graphics settings, on the other hand, does little to create more work for the CPU; its workload stays roughly the same across different graphics settings, so you never max it out. It is therefore normal to see only a small difference in averages if 2.0 GHz is sufficient for that workload, and since most of your games are older, this should be expected. The result is further skewed because you created the bottleneck in the first place: cranking the settings to the highest playable level already limited how much work the 3.0 GHz CPU had to do. So what you really prove is that it's possible to bottleneck a good midrange GPU almost as well with a 2.0 GHz CPU as with a 3.0 GHz CPU.

Unfortunately that answers a different question from the one that I, and I believe most gamers, want answered, which is: "Can I get playable (subjective to the gamer, but measurable objectively) frame rates at [such-and-such resolution and settings] in [such-and-such game] by upgrading my video card, or do I need to upgrade my CPU?"
Your test answers: "You can get better image quality and a higher resolution with a better video card than with a new CPU." The flaw is that it neglects playable frame rates. As a gamer, I couldn't care less whether the game is beautiful on a 30" monitor or blocky on a 15" screen if they both move at 5 fps.

An improved test that could provide insight into the first question would use a standard setting (or several standards) as constants and then show the results at varying CPU and GPU speeds. That method could show someone that their 2.0 GHz CPU could get over 30 fps in Crysis with a new GPU, or that over 30 fps simply isn't possible with that CPU in that game.
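
A rough Python sketch of that kind of fixed-settings matrix, reusing the simple "slower stage wins" model from earlier with invented scaling numbers; a real test would substitute measured benchmark runs for the model:

    # Invented baselines for one game at one fixed resolution/IQ setting.
    CPU_FPS_PER_GHZ = 22.0   # hypothetical CPU-limited fps per GHz of CPU clock
    GPU_FPS_PER_GHZ = 55.0   # hypothetical GPU-limited fps per GHz of GPU core clock
    TARGET_FPS = 30.0

    def effective_fps(cpu_ghz, gpu_ghz):
        # Delivered framerate is capped by the slower of the CPU and GPU stages.
        return min(cpu_ghz * CPU_FPS_PER_GHZ, gpu_ghz * GPU_FPS_PER_GHZ)

    for cpu in (2.0, 3.0, 4.0):
        for gpu in (0.576, 0.648, 0.850):  # example GPU core clocks in GHz
            fps = effective_fps(cpu, gpu)
            verdict = "meets" if fps >= TARGET_FPS else "misses"
            print(f"CPU {cpu:.1f} GHz + GPU {gpu * 1000:.0f} MHz -> "
                  f"{fps:.1f} fps ({verdict} the {TARGET_FPS:.0f} fps target)")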

A minimum is by definition a single data point, and as such is useless without context to put it into perspective. I can have a lower minimum at a single point, but if the rest of the benchmark has a higher framerate, that minimum is meaningless.

That and in my experience there’s usually a strong correlation between average and minimum FPS, anyway.

I agree that minimum FPS isn't a perfect indicator of performance. However, more data is better than less data, and a low minimum can show that a game has become unplayable. Average fps is not without its problems either, since a benchmark could have many long segments where little is going on (e.g. a fade-in screen or a panning shot). Such segments would obscure the shorter segments where the more intense action happens and where fps matters most. Ideally we'd have something like a video showing the current fps and the on-screen action at the same time, along with the settings used.
 

scooterlibby

Senior member
Feb 28, 2009
752
0
0
I agree that minimum FPS isn't a perfect indicator of performance. However, more data is better than less data, and a low minimum can show that a game has become unplayable. Average fps is not without its problems either, since a benchmark could have many long segments where little is going on (e.g. a fade-in screen or a panning shot). Such segments would obscure the shorter segments where the more intense action happens and where fps matters most. Ideally we'd have something like a video showing the current fps and the on-screen action at the same time, along with the settings used.

That's a good idea. Just spouting off the top of my head, but maybe some ANOVA-style measures would be more helpful than just min FPS or average FPS. The percentage of time the frame rate spent in its lower quartile, or the size of the standard deviation, could be more informative. Some sort of measure to get at 'choppiness' (didn't some website do an extensive study of microstuttering using some pretty neat measures to capture the choppy feel? I can't remember who it was).

Let's run some regressions dammit! I think, though, the average reader would mash the keyboard with their stubby fingers in anger at seeing anything other than average FPS (higher is better, you see).
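
For what it's worth, measures like these are easy to compute from a frame-time log. A minimal Python sketch over an invented trace (a calm stretch followed by a short heavy-action stutter), assuming per-frame render times in milliseconds:

    import statistics

    # Hypothetical per-frame render times in milliseconds (invented for illustration).
    frame_times_ms = [16.7] * 300 + [18.0] * 100 + [45.0] * 20 + [17.0] * 80

    fps_per_frame = [1000.0 / t for t in frame_times_ms]
    avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)  # time-weighted average
    min_fps = min(fps_per_frame)
    stdev_fps = statistics.stdev(fps_per_frame)

    # Percent of *time* (not frames) spent below a 30 fps threshold.
    slow_time = sum(t for t in frame_times_ms if t > 1000.0 / 30.0)
    pct_time_below_30 = 100.0 * slow_time / sum(frame_times_ms)

    # A rough "1% low": the fps of the frame at the 1st percentile of the sorted list.
    one_percent_low = sorted(fps_per_frame)[max(1, len(fps_per_frame) // 100) - 1]

    print(f"avg {avg_fps:.1f} fps, min {min_fps:.1f} fps, stdev {stdev_fps:.1f}")
    print(f"{pct_time_below_30:.1f}% of the run below 30 fps, 1% low ~{one_percent_low:.1f} fps")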
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
By cranking graphics settings and increasing resolution you are essentially creating more work for the GPU

But does increasing settings in the form of AA actually deliver any true benefit to the human eye? Or are we at a point in 2010, with very small-pixel LCDs, where this setting is useful mostly just for running benchmarks?

Which graphical setting (tessellation vs. AA) will end up delivering more subjective value (to the human eye) per dollar spent? Will video card manufacturers lower the AA capabilities of their video cards to make room for more tessellation power?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I agree that minimum FPS isn't a perfect indicator of performance. However, more data is better than less data, and a low minimum can show that a game has become unplayable. Average fps is not without its problems either, since a benchmark could have many long segments where little is going on (e.g. a fade-in screen or a panning shot). Such segments would obscure the shorter segments where the more intense action happens and where fps matters most. Ideally we'd have something like a video showing the current fps and the on-screen action at the same time, along with the settings used.

You make a good point about minimum FPS. My eye doesn't notice *much* difference between 20 and 40 FPS, but it sure can notice a difference between 10 FPS and 20 FPS.

Minimum FPS is where I think Nvidia could really clean up in the value segment. If the Fermi architecture really does a good job with "load balancing" we might hear about their cards feeling very fast when they finally reach the low end market (even if average FPS happens to be slightly less per dollar).
 

zerocool84

Lifer
Nov 11, 2004
36,041
472
126
But does increasing settings in the form of AA actually deliver any true benefit to the human eye? Or are we at a point in 2010, with very small-pixel LCDs, where this setting is useful mostly just for running benchmarks?

Which graphical setting (tessellation vs. AA) will end up delivering more subjective value (to the human eye) per dollar spent? Will video card manufacturers lower the AA capabilities of their video cards to make room for more tessellation power?

You can't tell the difference between AA and no AA???
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
You can't tell the difference between AA and no AA???

Maybe I need to experiment with more games, but to me the difference isn't exactly earth-shaking, and it's certainly nothing I would want to pay hundreds of dollars for in the form of extra video card cost.

However, I reserve the right to change my opinion when and if more evidence comes my way. It could certainly be that I don't have enough exposure yet to accurately judge the benefits or lack of benefits.
 

zerocool84

Lifer
Nov 11, 2004
36,041
472
126
http://hardforum.com/showthread.php?t=1288175

Here is a thread I just found using a Google search.

It seems other people agree with me that as pixel size decreases, so does the need for anti-aliasing.

Well yeah, that's widely known, but if you can't tell the difference between any AA and no AA then something must be wrong. I moved from 1280x1024 to 1920x1080 and yes, it doesn't need as much AA to look great, but it's not like you can't tell the difference.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Well yeah, that's widely known, but if you can't tell the difference between any AA and no AA then something must be wrong. I moved from 1280x1024 to 1920x1080 and yes, it doesn't need as much AA to look great, but it's not like you can't tell the difference.

When I play Fallout 3 GOTY with AA turned completely off I am not noticing much difference in image quality at 1680 x 1050 native resolution.

Sure there are small "jaggies" when I stop and look very closely at rocks, but this level of detail is really hard for me to notice during normal gameplay.

In fact, this lack of appreciable effect from AA has even made me reconsider a long-standing idea I've had about ATI HD5xxx video cards being bandwidth limited. It might be that memory bandwidth is more crucial at higher levels of AA, but what if a person doesn't need AA? In that case maybe the HD5xxx has much more bandwidth than it needs.
 

Phil1977

Senior member
Dec 8, 2009
228
0
0
If you are worried about min. frames I would look at the CPU or the rest of the system...

I know that i5/i7 systems are very strong when it comes to min. frames...

I had an Athlon II X2 250 and now an i7 860. The AMD was the best mainstream dual core you could buy, and it's really a dog of a gaming chip. Later I looked at benchmarks (AnandTech's benchmark database, for example) and that CPU is usually at the bottom.

I am playing at a low resolution (1366x768) with an ATI 5750, and that CPU was holding me back even in simple games like Mass Effect and Splinter Cell: Double Agent.

My board couldn't take a fast Phenom (it only goes up to 95W), so I got i7 parts instead.

Now everything is snappy and quick. I am glad I made the switch.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
If you are worried about min. frames I would look at the CPU or the rest of the system...

That is true, but the GPU matters also.


I know that i5/i7 systems are very strong when it comes to min. frames...

I had an Athlon II X2 250 and now an i7 860. The AMD was the best mainstream dual core you could buy, and it's really a dog of a gaming chip. Later I looked at benchmarks (AnandTech's benchmark database, for example) and that CPU is usually at the bottom.

I am playing at a low resolution (1366x768) with an ATI 5750, and that CPU was holding me back even in simple games like Mass Effect and Splinter Cell: Double Agent.

My board couldn't take a fast Phenom (it only goes up to 95W), so I got i7 parts instead.

Now everything is snappy and quick. I am glad I made the switch.

Apparently even the Phenom II X4 processors are much worse than the Intel parts when it comes to minimum frame rates (even if the average frame rates are comparable). However, I have seen tests where overclocking the CPU-NB (i.e. the L3 cache clock) really helps this a lot.
 

Phil1977

Senior member
Dec 8, 2009
228
0
0
Yeah, I didn't have many options. I tried overclocking the CPU but that didn't really help.

I could have gone for a Phenom II dual core, but apart from the cache I would have had similar performance, and I wanted something noticeably faster, not just 10%, you know.

The 955 or 965 would have been nice, as they perform similarly to an i5. But I would have had to buy a new mobo, and then I thought that if I reused my DDR2 sticks I might miss out on performance. And it ended up costing the same to go with a Lynnfield system as to get an AM3 board, a 965 CPU, and DDR3 sticks...

In the end the lower power usage was a big factor in my decision... When I went to the shop they had run out of 750s, so I got an 860, hoping that HT will make an impact down the road. Still, the 750 was the CPU to get...
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Will video card manufacturers lower the AA capabilities of their video cards to make room for more tessellation power?

http://alienbabeltech.com/main/?p=14170

I just found this article on Alienbabeltech.

So Fermi will be able to deliver 32xAA, but how will enabling this impact its tessellation performance? I know ATI uses a dedicated tessellator, but apparently Nvidia uses a software based solution to derive tessellation from its general compute hardware? Is this correct?
 

blanketyblank

Golden Member
Jan 23, 2007
1,149
0
0
http://alienbabeltech.com/main/?p=14170

I just found this article on Alienbabeltech.

So Fermi will be able to deliver 32xAA, but how will enabling this impact its tessellation performance? I know ATI uses a dedicated tessellator, but apparently Nvidia uses a software based solution to derive tessellation from its general compute hardware? Is this correct?

Now can you really tell the difference between 16xAA and 32xAA? And how many fps will you have to sacrifice to enable it? I think this is more important than whether or not it will affect tessellation performance, because if it makes no perceptible difference, or it makes the game unplayable on its own, then how it interacts with tessellation no longer matters.

Also, just out of curiosity, does Eyefinity make AA worthless?
Correct me if I'm wrong here, but the basic idea of AA is that it's needed because the scene being rendered has more detail than the monitor can actually display. So if, in theory, you now have 6 x 30" monitors, wouldn't you actually have more pixels than necessary for the image? I've never tried Eyefinity so I'm not sure how it works, but I do know AA becomes less important as resolution increases.
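
For reference, the core idea behind supersampling-style AA can be sketched in a few lines of Python (illustrative only, assuming a plain box filter; not how any particular driver implements it): render at a higher internal resolution, then average blocks of samples down to the display resolution, which turns hard edges into intermediate shades.

    import numpy as np

    def box_downsample(image, factor):
        # Average factor x factor blocks of samples into one output pixel.
        h, w = image.shape[:2]
        h2, w2 = h // factor, w // factor
        blocks = image[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor, -1)
        return blocks.mean(axis=(1, 3))

    # Toy scene: a hard black/white diagonal edge "rendered" at 4x the display
    # resolution, then downsampled. The output contains intermediate grey values
    # along the edge, which is exactly the smoothing AA provides.
    hi_res = np.fromfunction(lambda y, x: (x > y).astype(float), (64, 64))[..., None]
    display = box_downsample(hi_res, 4)
    print(sorted(set(np.round(display.ravel(), 2))))  # shades between 0.0 and 1.0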
 

Blue Shift

Senior member
Feb 13, 2010
272
0
76
Also, just out of curiosity, does Eyefinity make AA worthless?
Correct me if I'm wrong here, but the basic idea of AA is that it's needed because the scene being rendered has more detail than the monitor can actually display. So if, in theory, you now have 6 x 30" monitors, wouldn't you actually have more pixels than necessary for the image? I've never tried Eyefinity so I'm not sure how it works, but I do know AA becomes less important as resolution increases.

AA would seem to become less important as pixel density increases, and it becomes harder to distinguish between individual pixels. EyeFinity wouldn't affect that, so I'd expect that AA would be just as important-- At least on your central screen. It would be awesome if you could disable it on the outer screens in order to gain performance!
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Also, just out of curiosity, does Eyefinity make AA worthless?
Correct me if I'm wrong here, but the basic idea of AA is that it's needed because the scene being rendered has more detail than the monitor can actually display. So if, in theory, you now have 6 x 30" monitors, wouldn't you actually have more pixels than necessary for the image? I've never tried Eyefinity so I'm not sure how it works, but I do know AA becomes less important as resolution increases.

I don't think Eyefinity would have any pro/con effect provided native resolution was used on the monitors.

But what happens if lower, non-native resolutions need to be used on the monitors? Now the person would have larger pixels. In that instance, would enabling AA produce a smoother image?

This is what interests me the most, because one day I can almost imagine multiple bezel-less LCD PC monitors being a money-saving strategy if used in place of a larger dedicated 1080p TV. In fact, three 1080p monitors in portrait orientation produce almost the same aspect ratio as one larger 16:9 display.

Maybe we will see an enterprising manufacturer get the jump on some of the giants for this purpose? According to this article by Andy Marken, smaller TVs are the real money makers.
 

blanketyblank

Golden Member
Jan 23, 2007
1,149
0
0
AA would seem to become less important as pixel density increases, and it becomes harder to distinguish between individual pixels. EyeFinity wouldn't affect that, so I'd expect that AA would be just as important-- At least on your central screen. It would be awesome if you could disable it on the outer screens in order to gain performance!

Well, couldn't you just sit somewhat farther away if there were a way to eliminate the bezels? Of course, I'm sure that powering more monitors is just as taxing on the video card as enabling AA, or more so, but being able to increase the displayed resolution is really the ideal solution, no?
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
and again you have missed the point. most people buy high end gpus for better framerates first and better visuals second. how many threads are started where people are saying they want 8 or 16x AA? yet there are plenty of threads where people are complaining about framerate performance and wanting more though. ideally almost everyone wants great framerates with great visuals of course.
Sure, and a consequence of a poor framerate is reduced game detail levels, reduced resolution, and reduced AA. Almost everyone wants to run at their native resolution, and also wants to increase game detail levels whenever they obtain extra horsepower.

If framerate were the only thing that mattered, everyone would be running all of their games at the lowest detail levels @ 640x480.

in addition to a good video card, most modern games require a decent cpu and some games even require a really good cpu (Prototype, GTA 4, RF Guerrilla, Ghostbusters, ARMA 2, BFBC 2, just to name a few) if you want smooth gameplay. I am playing some of those games that most certainly need a fast cpu.
A decent CPU is a fast dual-core, and most modern games work just fine with one. Plenty of modern games (i.e. 2009 titles) showed no difference on even an E6850 @ 2 GHz in my tests.

you keep playing those other 95% of games though.
I sure will. I’ll also be investing the most money into a graphics card so that 95% of my gaming situations improve, rather than just the 5% fringe titles out there that are still CPU limited even when running at the highest playable settings.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
Also, I'd like to mention that most of these games didn't run at 60 frames per second with the E6850 even at stock clocks.
Yeah, that's because the GPU was holding me back, so a faster processor would do nothing in that situation. If I dropped a 5870 or Fermi into my system, all of those games would show very large performance gains. A quad-core OTOH would show practically no benefit, as I will show soon (I'm typing up the article now).
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Sure, and a consequence of a poor framerate is reduced game detail levels, reduced resolution, and reduced AA. Almost everyone wants to run at their native resolution, and also wants to increase game detail levels whenever they obtain extra horsepower.

If framerate were the only thing that mattered, everyone would be running all of their games at the lowest detail levels @ 640x480.


A decent CPU is a fast dual-core, and most modern games work just fine with one. Plenty of modern games (i.e. 2009 titles) showed no difference on even an E6850 @ 2 GHz in my tests.


I sure will. I’ll also be investing the most money into a graphics card so that 95% of my gaming situations improve, rather than just the 5% fringe titles out there that are still CPU limited even when running at the highest playable settings.
yes most people want to get the most framerate possible at their native res. most people are not concerned with insane levels of AA though. and cranking the AA is certainly no excuse to buy a $400 video card when even a $200 video card would be noticeably bottlenecked by the cpu. overall playability is MUCH more important and whether you accept it or not the cpu plays a huge role in that.

with having your cpu at 2.0, if you mean no change in framerate by "no difference" then you are mistaken or lying. framerate is most certainly affected in most games.

while you are doing that I will try to enjoy ALL my games. also in most of the threads where people are complaining about performance it usually involves one of those "fringe games". of course your advice would be to go play the other 95% of games or just ignore those low framerates and increase the AA until the gpu is the bottleneck. lol
 