
Are CPU tests with non-CPU bottlenecks good indicators of CPU performance?

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Are secondary component bottleneck tests valid measures of CPU performance?

  • Yes

  • No


Results are only viewable after voting.
Because unless your CPU is very old (to the point where nobody includes it in modern benchmarks), if you've got a new-ish CPU and play at 1080p they are all roughly the same (minus Blizzard titles, which tend to heavily favor Intel's architecture).

When 10 CPUs are all equal at 1080p gaming, does that make them equal gaming processors? Take a look at those benchmarks again and tell me with a straight face that the 2500K is just as good as a Bulldozer :\

Also, you're under the assumption that multi-GPU gaming is somehow not a "real life" scenario, or that people don't and won't ever play on anything more than a single 1080p monitor. Clearly they will, given that both nVidia and AMD have offered multi-GPU and multi-monitor gaming. Furthermore, why restrict your options? Want to get a large IPS monitor? Sorry, bud, but you'll never get 60 FPS with that Bulldozer CPU, despite all of those worthless benchmarks showing you that there was no difference at 1080p.

This isn't to say that people should buy the best possible gaming CPU; we're way past the era where the CPU was a big factor. Nowadays the GPU matters far more than it used to, and you see that in any 1080p single-GPU benchmark. If you're going to play at 1080p, then just get a well-rounded system that will cost you less money and offer great performance. If you're looking for something that will last you longer and give you more room to stretch your legs without dumping your platform, then the benchmarks above provide a perfect example of why "equal at ..." isn't equal.

Never said multi-GPU was not a real-life scenario; that's why each person should buy the CPU that best fits his usage needs on his system. But I stand by my point: for someone like myself who has absolutely no desire to go multi-GPU, it's important to know whether he will or will not see any benefit from spending money on a CPU upgrade.
 

Or play at higher resolutions than 1080p. Which brings us to the problem at hand...

Hardware review sites have to bear in mind that other people view their sites. Like for instance, people who aren't you.
 
No, my questions are:

i7-2600k = i7-920 = 1100T = 8150? The graph says it is true. Is it true? (This is a true or false question)

Are you aware that no CPU will ever get more than 41-42 fps in that exact test? The best processor ever made, paired with a chipset supporting PCIe x16 lanes (so that the same video card may be used), will score exactly the same. The CPU could be 100 times faster in every metric that was CPU-bottlenecked, but it will still get 41-42 fps here.

If you dispute the validity of the question, please cite an example. Otherwise, let us clarify here: is it your position that a CPU with the same video card can score higher than 41-42?
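For what it's worth, the 41-42 fps ceiling being described can be sketched as a toy model. The 24 ms GPU frame cost below is a made-up number chosen just to reproduce that cap; real per-frame times vary by game and scene:

```python
# Toy bottleneck model: each frame effectively costs whichever is larger,
# the CPU's per-frame time or the GPU's per-frame time, so the slower
# component caps the frame rate.

def fps(cpu_ms, gpu_ms):
    """Frames per second when the slower of CPU/GPU sets the pace."""
    return 1000.0 / max(cpu_ms, gpu_ms)

GPU_MS = 24.0  # hypothetical GPU cost per frame -> 1000/24 ~ 41.7 fps cap

slow_cpu = fps(cpu_ms=20.0, gpu_ms=GPU_MS)  # ~41.7 fps
fast_cpu = fps(cpu_ms=0.2, gpu_ms=GPU_MS)   # CPU 100x faster, still ~41.7 fps
print(round(slow_cpu, 1), round(fast_cpu, 1))
```

Once the GPU term dominates, any CPU fast enough to stay under it scores identically, which is exactly why such a graph cannot separate the processors.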

Again, loaded question. It is true for the scenario presented in the graph. Doesn't mean it's true for every other scenario, but on the other hand, your tri-sli graph is definitely not true for people using a single gpu.
 
> Or play at higher resolutions than 1080p. Which brings us to the problem at hand...
>
> Hardware review sites have to bear in mind that other people view their sites. Like for instance, people who aren't you.

Or you. You're saying your point is more important? Or are you upset that some site dared to show a use case where a shiny new Core i7 was no better than a Phenom X6?
 
Microsoft used to provide a null driver for Direct3D that took the video card right out of the picture, and benchmarks used to be run using it. Unfortunately, MS no longer provides it.
 
> Again, loaded question. It is true for the scenario presented in the graph. Doesn't mean it's true for every other scenario, but on the other hand, your tri-sli graph is definitely not true for people using a single gpu.


You're back to refusing to answer questions that point out the flaws in your logic. It's easy to hide behind the excuse of "loaded question" any time your view is challenged, but it shows that you know you can't back up your claim.
 

For some of us, the world is not black and white. You know you're asking a loaded question, and seem upset that I'm not falling for it.
 
> Or you. You're saying your point is more important? Or are you upset that some site dared to show a use case where a shiny new Core i7 was no better than a Phenom X6?

No, but would you recommend an x6 because it was cheaper than the i7 for a gaming machine?

Oddly enough, if the person answered that their resolution would remain at 1080p, we'd both say save the money and get a better GPU (so the bottleneck would be pushed up further). But what if they said they'd try Eyefinity or use a 30" IPS panel? Would you stick to that same recommendation? Or take an 8150 vs. a 2500K, where they would actually save money going the Intel route. See my point? 1080p gaming as a CPU benchmark shows nothing other than a GPU bottleneck and is therefore a (near) worthless CPU benchmark.

I don't dislike AMD as I'm typing this on a 955, I just hate Bulldozer and those gaming benchmarks show why 😛
 
> No, my questions are :
>
> i7-2600k = i7-920 = 1100T = 8150? The graph says it is true. Is it true? (This is a true or false question)
>
> Are you aware that no cpu will ever get more than 41-42fps in that exact test?

Yes, it is true for that particular game with those particular settings and video card. It is useful information to have, because there are people out there with that video card who choose to play at very high/extreme settings.

The fact that perhaps no CPU will ever do better than 41-42 fps is just more reason to buy the cheapest possible CPU that can hit those numbers, if that game is your primary buying decision.

Talking about future games and future video cards is really not a very strong argument. Someone has a sig that sums it up perfectly: basically, the most future-proof thing you can have is cash money. Spending extra money to make your CPU more future-proof is a poor strategy most of the time, as you can get a lot more for your money if you wait and upgrade in a few years.



> Hardware review sites have to bear in mind that other people view their sites. Like for instance, people who aren't you.


While this is true, where do you think the majority is? Tri SLI GTX 580 setups, single high end cards, moderate $200 deals, or bargain $100 video cards? Do you think more than 5% of computer gamers have multiple $500 video cards in crossfire or SLI?

It's interesting to see a high end performance article here and there showing what you can get with a $2000 SLI setup, but it's not really a realistic scenario for most readers IMO. The most useful numbers come with more mainstream level video cards.
 
> For some of us, the world is not black and white. You know you're asking a loaded question, and seem upset that I'm not falling for it.


So why do you have an i7, and not something cheaper? They all score the same in GPU benchmarks.

Do you know the definition of a loaded question? Because asking whether you agree that 4 processors are identical is not a loaded question. If you believe it is, what presumption does it put forth? You're confusing "question that exposes the flaw in my logic" with "loaded question".

Secondly, I asked you if you realize that no processor will ever score higher, and if you don't believe it, to back up why. This is not a "loaded question" either.

A well-reasoned, consistent view will not fall apart, despite the question. Do you disagree with this statement?
 
> While this is true, where do you think the majority is? Tri SLI GTX 580 setups, single high end cards, moderate $200 deals, or bargain $100 video cards? Do you think more than 5% of computer gamers have multiple $500 video cards in crossfire or SLI?
>
> It's interesting to see a high end performance article here and there showing what you can get with a $2000 SLI setup, but it's not really a realistic scenario for most readers IMO. The most useful numbers come with more mainstream level video cards.

Yes, but two 7950s isn't exactly an unreasonable setup, is it? Yet it would be enough to choke up some modern CPUs that are marketed as "gaming CPUs." Obviously you don't have to overspend, and [H]'s testing methodology may seem strange to some, but that's what the guys at HardOCP do extremely well: gaming-related benchmarks and reviews. While Tom's stops at "they're the same there, well, whatever," the guys at [H] go the extra step to find out just which one is actually worth your hard earned money.

Furthermore, I'd much rather have gaming benchmarks done that way than at 1080p, despite the ones at 1080p being more relevant to my current setup (I have 3 monitors but game on one). When benchmarking the way the guys at [H] did, it's also far easier to extrapolate certain differences between CPUs that you wouldn't see otherwise, or at least not clearly.

ARMA 2 is not a new game, in fact it is a very old game. ARMA 2 was launched in the summer of 2009, and it is a DX9 game. Why are we including it here then? Simply because it is one of the most CPU demanding games that is out there. We have used this game in the past, but never found it to be much use in terms of testing GPU performance. It had even less significance when it came to testing SLI or CrossFireX, as it just did not push those technologies, relying instead on massive CPU performance.

[Benchmark chart: ARMA 2 CPU comparison]


Now what exactly does this show? It shows that in lightly-threaded games that rely heavily on the CPU you would already be seeing the Bulldozer lag far behind a 2500K. Mind you, this is only at 1920x1200 and would still be very noticeable at 1080p. This isn't something that should be surprising, but seeing that noticeable difference at such a low res is still shocking. If you were to do the same with any Blizzard title or a game like Red Orchestra 2, you would see results like the one above. It doesn't make the Bulldozer a bad chip, but if you're buying a gaming CPU then there's a cheaper and better alternative (unless you live near a Microcenter 😛)

That's what I mean by going the extra mile to show the difference. The guys there know how to test their video cards and CPUs when it comes to gaming and they really are a step above the rest. No offense to AT which is far more technical but less gamer-focused 😛
 
Perhaps you find the question confusing.

I am not asking if the graphs have no value at all (they show that a GPU is at its limit). What I asked is do they give you any meaningful comparison as far as the relative performance of each processor? If so, which, from those graphs, is the fastest processor? Or do you need a different measure to get any idea of that?
 
> So why do you have an i7, and not something cheaper? They all score the same in GPU benchmarks.
>
> Do you know the definition of a loaded question? Because asking if you agree if 4 processors are identical is not a loaded question. If you believe it is, what presumption does it put forth? You're confusing "question that exposes the flaw in my logic" with "loaded question".
>
> Secondly, I asked you if you realize that no processor will ever score higher, and if you don't believe it, to back up why. This is not a "loaded question" either.
>
> A well-reasoned, consistent view will not fall apart, despite the question. Do you disagree with this statement?

Because I do other tasks besides gaming, where an i7 is faster. The 4 processors are obviously not identical; what's your point? Does that disqualify the scenario where the i7 and X6 perform the same? No, it doesn't.
 
But the question is do such graphs give us any meaningful information about which processor is faster, not "does this graph show each processor as equal".

You and I both know that the SB is the fastest processor in the tom's graph, but the graph gives absolutely no indication of that (because it is measuring GPU performance).
 
> No, but would you recommend an x6 because it was cheaper than the i7 for a gaming machine?
>
> Oddly enough, if the person answered that their resolution would remain at 1080p we'd both say save the money and get a better GPU (so the bottleneck would be pushed up further). But what if they'd asked that they'd try eyefinity or use a 30" IPS panel? Would you stick to that same recommendation? Or even a 8150 vs a 2500K where they would actually save money going the Intel route. See my point? 1080p gaming as a CPU benchmark shows nothing other than a GPU bottleneck and is therefore a (near) worthless CPU benchmark.
>
> I don't dislike AMD as I'm typing this on a 955, I just hate Bulldozer and those gaming benchmarks show why 😛

I disagree. I find 1920x1200 gaming benches a hell of a lot more useful than 800x600. Even if they show all the CPUs performing the same, it is a useful metric to me, because that's how they will actually perform in that case. If you want to look at pure CPU-bound cases, then look at benches other than gaming.
 
> But the question is do such graphs give us any meaningful information about which processor is faster, not "does this graph show each processor as equal".
>
> You and I both know that the SB is the fastest processor in the tom's graph, but the graph gives absolutely no indication of that (because it is measuring GPU performance).

Ok, simple question - TRUE or FALSE: when you stick the different CPUs into a high-res gaming scenario such as the one Tom's showed, they will perform the same?
 
Yup, the framerate will be identical, no matter how capable the processor, once the GPU is the bottleneck. That's entirely my point 😉
 
You can't tell me that it's irrelevant that a slow processor and a fast processor perform within a few FPS of each other at resolution X and with video card Y.

If I have a monitor with resolution X and video card Y, I want to know if an upgrade from one processor to another will provide benefit.

Yes, that benchmark is not showing the true performance differential between the processors, but it *IS* showing that the user experience does not improve given certain resolution and GPU limitations.

It tells me that if I have a Ph II X6, and I run at 1920x1200 with a GTS250, that maybe a video card upgrade would be a better purchase than a Core i7.

Data doesn't have faults in and of itself; it is the interpretation of that data that is important.
 


But my specific question is whether it gives you any indication of the relative capabilities of the *CPU*. Does it tell you which is faster or which is slower? Because all I see is "they're all the same", but that's a position that I don't believe anyone would support.
 
> the guys at [H] go the extra step to find out just which one is actually worth your hard earned money.

Well see, that is where we disagree.


The i7 3770 does do better if I were to play those games at 800X600 res with low video settings, thus moving the bottleneck to the CPU.

I don't care, I don't play games at that low of a res.

The i7 3770 does do better if I were to play ARMA 2.

I don't play ARMA 2. For that matter, I haven't heard of ANYONE who plays it.

The i7 3770 does do better if I were to upgrade from my 6970 to quadfire 7970s, thus moving the bottleneck to the CPU.

I don't care, I am not going to spend that much money on video cards for gaming, ever.

The i7 3770 does do better if I were to play some hypothetical imaginary game from 2016, which is CPU limited.

I don't care, I'd be better served by saving my money and upgrading when that game is actually released.

The i7 3770 does do better if I were to play some game on a hypothetical Radeon HD 9970 from the year 2016.

I don't care, I'd be better served by saving my money and upgrading when that video card is actually released.

Even if a product offers huge advantages in numerous situations, it is all irrelevant if none of those situations apply to the consumer in question.



> But my specific question is if it gives you any indication of the relative capabilities of the *cpu*. Does it tell you which is faster or which is slower? Because all I see are "they're all the same", but that's a position that I don't believe anyone would support.


I am not saying they are all the same; I don't think anyone is trying to say that. I am saying the advantages of many high-priced CPUs do not apply to me or many other average consumers. They are the same in all relevant situations. They may be superior in hypothetical situations or in ones that don't apply to me, but in all relevant situations they are equal at best.
 
> But my specific question is if it gives you any indication of the relative capabilities of the *cpu*. Does it tell you which is faster or which is slower? Because all I see are "they're all the same", but that's a position that I don't believe anyone would support.

It does provide some indication of the relative capabilities of the CPU given the constraints of certain configurations of currently available hardware and peripherals.

Is it important to know that CPU A is faster than CPU B if you upgrade your video card?
Yes

Is it important to know that CPU A is faster than CPU B if you have a monster system?
Yes

Is it important to know that CPU A and CPU B are both constrained given certain system limitations?
Yes


I actually think the question you ask in the poll is different from the question you pose in the OP.

Is a GPU-limited scenario a "valid measure of CPU performance"?
It's questionable at best

Does a GPU-limited scenario provide "any meaningful insight in to the performance of that processor, or its capabilities in comparison to another processor"?
Absolutely, yes.

Mainly because it's important to understand what circumstances result in non-CPU bottlenecks, and which don't.
 
One could pull any 2 or 3 shooter or racing game benches at low and high res to demonstrate a point, but it would still be far from conclusive, mainly because different games utilize the hardware differently. RTS games especially can show a greater difference in CPU performance at 1920x1200 than those Dirt 3 or Crysis examples.

[Benchmark chart: CPU comparison]


[Benchmark chart: World of Warcraft at 1920x1200]
 
> Or you. You're saying your point is more important? Or are you upset that some site dared to show a use case where a shiny new Core i7 was no better than a Phenom X6?

Munky, his point is more important in that the test he proposes gives more information. You cannot make a test that caters to specific groups, so you make a test where everyone can take some information home. The test itself may not be instantly applicable to one's specific situation, but it gives you the raw information that you can use with some deduction and interpolation skills.

Now if you see a test that shows differences only with SLI/CF, what is the problem? It is then expected of you to interpolate, with other (GPU) tests, what performance you can roughly expect in all kinds of scenarios (SP, MP, a heavy action scene (CPU), dense foliage with TSSAA (GPU), etc.). No single test can answer all the questions for you.

It's like this:
A CPU test that is not GPU-capped shows the potential of the CPU.
A GPU test that is not CPU-capped shows the potential of the GPU.

Combine both tests to choose your products. If you focus on only one test, you will lack the information to make a good choice for your whole system. It will not give you exact numbers, because you will probably have to pull the test results from different websites with different testing scenes, but that is not so important. No single benchmark scene can give you an accurate representation of the actual performance of hours of gameplay anyway.
It should also be noted that these tests should be very demanding on either side, CPU and GPU respectively. If, for example, you were to benchmark GTA 4 with the integrated benchmark and reached 40 fps there, you would be sorely disappointed with actual gameplay, as the fps there are quite a bit lower.
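The combine-both-tests idea can be illustrated with a rough sketch (all ceiling numbers below are invented, not measurements): take each CPU's uncapped rate from a low-res, CPU-bound test and each GPU's uncapped rate from a CPU-uncapped test, then estimate the in-game rate as the smaller of the two.

```python
# Rough estimate: a system runs at roughly the lower of the CPU ceiling
# (from a low-res, CPU-bound test) and the GPU ceiling (from a CPU-uncapped,
# GPU-bound test). All numbers below are illustrative.

def expected_fps(cpu_ceiling, gpu_ceiling):
    return min(cpu_ceiling, gpu_ceiling)

cpu_ceilings = {"budget CPU": 60, "high-end CPU": 120}           # hypothetical
gpu_ceilings = {"mid-range GPU": 55, "dual high-end GPUs": 110}  # hypothetical

for cpu_name, c in cpu_ceilings.items():
    for gpu_name, g in gpu_ceilings.items():
        print(f"{cpu_name} + {gpu_name}: ~{expected_fps(c, g)} fps")

# With the mid-range GPU both CPUs land at ~55 fps; only the dual-GPU
# setup reveals the CPU difference (60 vs 110 fps).
```

This is exactly why a single GPU-capped test cannot rank the CPUs: both CPU rows collapse to the GPU's number until the GPU ceiling is raised.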
 

That is a great example of why these sorts of benchmarks are terribly worthless. Tests should be done in realistic situations with realistic hardware; done that way, the results will actually match most users' actual gameplay more closely.
 

So that isn't realistic hardware... You should really choose your words more carefully. Instead of "realistic," you should go with "more applicable," because then nobody will find fault in it.

One can't test every game that way, but it's best to pick out certain games that give you a good estimate of how well the chip will perform. If they wanted to be heavily Intel-biased, they'd have picked a Blizzard title like they do here at AT, but would that represent a neutral gaming benchmark? No, not in the least.

The point is to stress the CPU until it reaches its ceiling, because you need to factor out the GPU as the bottleneck. With the current SB chips, that ceiling is higher than it is with Bulldozer. So which one is the better gaming CPU? Quite clearly the 2500K. Does that mean the Bulldozer chips (or other chips that perform worse) aren't gaming CPUs? No. The point to take away here (one that you guys who inhabit the Anandtech CPU forums should know by now... seriously, wtf?) is that if you're gaming at a GPU-bottlenecking resolution/settings, then there is no point in overspending, and you can get by with a cheaper CPU, because at those particular settings/resolutions they are all roughly the same.

But what would those CPU benchmarks look like? Completely useless and uninteresting. Nobody cares what the 1080p numbers are for a game, because you could have gotten those numbers yourself with some common sense from either the low resolution (like it's done by Anand; again, if you have a problem with why or how it's done this way, I'm sure he'd love to hear your senseless remarks) or the extremely high one (like it's done at HardOCP). There *might* be a difference between those two extremes depending on how a particular game is coded, but generally speaking you'll get roughly the same results (which is why AT omits the extremely high res).

So, 15 CPUs all score the same in a GPU-bottlenecked 1080p scenario, and you guys are trying to convince me, and anyone with half a brain, that those numbers are more relevant than the way established review sites have been doing it for years. They aren't. Next time you see a benchmark at 800x600, remember that that's the benchmark showing you just how many frames that shiny new CPU is putting out, not the one at 1080p, because it's a CPU benchmark and not a GPU one.
 