Why OC'ing CPUs in GPU reviews? (Long Read Warning)

Zenoth

Diamond Member
Jan 29, 2005
5,190
185
106
Hey,

So yeah, before you hit the wall of text too hard...

TL;DR @ BOTTOM

Good? Ok now I'm starting...

Alright, so I wasn't sure under which forum category I should post this, either Video Cards or the CPUs and Overclocking one. But since it relates to OC'ing CPUs more than anything, I opted to post it here (ultimately I feel the answers will relate to OC'ing CPUs rather than to the GPUs being reviewed). So basically I'm just wondering: why is it that in most graphics card reviews I've seen lately, most review sites (and reviewers) opt to test the GPU(s) in question on a heavily over-clocked system? By "heavily" (which of course is quite a subjective adjective) I mean, for example, reviewing... say... a GTX 680 on a system running a Core i7 that went from a non-OC'ed "base" clock of around 3.4GHz to anywhere between 4.0 and as high as 4.7GHz.

If anything, why not run the benchmark tests under both conditions, then? Indeed, why not benchmark under potential bottleneck conditions (which we common gamers can't always be sure of, since we're almost never shown benchmarks from systems at actual stock frequencies) and also under over-clocked conditions? Are there any known review sites doing that? I'm asking because I bought a new GPU literally this morning (a GTX 670, upgrading from my previous GTX 285, which I bought nearly four years ago, back in early Feb. 2009). But before buying it I looked at... well... countless reviews, from known to lesser-known review sites. To be honest I don't remember every single one of them by heart, but I do clearly remember that most of them (my own estimate is something like 8 out of 10) tested their cards under over-clocked conditions (various cards of course, one of which was the GTX 670 I'd been eyeing for a long time).

The only thing I remember reading on this subject, about a year ago, relates to what I mentioned above, namely system bottlenecks. I'm going to be frank here: I'm mostly a gamer, I don't know much about hardware beyond the basics, and what I do know relates to what's needed for gaming in general. All I know is that a bottleneck is when one specific PC component, usually the CPU, isn't fast enough to keep the other component (usually the GPU, but I guess it can be the other way around as well) fully fed with work. The end result (regardless of how well or badly I explained it in layman's terms) is that you lose frame rates simply because one of the two components can't keep up with the other; the faster one is "suffocating", can't deliver its full potential to you the consumer, and is therefore "bottlenecked".
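(If it helps to picture it, here's a toy sketch of that idea; the numbers are made up purely for illustration, not taken from any actual benchmark:)

Code:
# Toy bottleneck model: the frame rate you actually see is capped by
# whichever component is slower for a given game and settings.
def observed_fps(cpu_fps, gpu_fps):
    return min(cpu_fps, gpu_fps)

print(observed_fps(cpu_fps=70, gpu_fps=88))  # 70 -> the CPU is the bottleneck
print(observed_fps(cpu_fps=95, gpu_fps=88))  # 88 -> after a CPU OC, the GPU is the limit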

All of which ultimately explains (and excuses, as well as justifies) why over-clocking a system (the CPU, or perhaps the memory and GPU as well) is done: simply to show you, the potential consumer and buyer of said reviewed components, what those components can deliver under "real", fully-exploited, non-bottlenecked conditions (basically what the hardware can physically do without any issues whatsoever, and what it's designed to deliver on paper, in pure raw-number terms). Well, if that's the case, then I agree... but I also disagree, to some extent. It's more that I don't understand why such review sites don't also take the time to test under "normal", non-overclocked (albeit potentially bottlenecked) conditions, and then simply show us both results on a graph (or on multiple graphs anyway).

Would it be... possible? I mean... ok, yeah, sure it's possible on paper. It's "easy to do", but... then why not do it? By the way, if there are such review sites, please let me know; I'd genuinely like to see some of their work (I just can't recall ever seeing a review site content with benchmarking anything under non-overclocked conditions). The thing is, if you happen to be someone who over-clocks not only out of simple pleasure and curiosity, but because you can no longer be content with a "stock frequencies" system and can only game on an OC'ed system, then review sites doing their work under OC'ed conditions seems completely normal to you. But what if... imagine for just a moment... what if there are gamers out there who don't "want to", nor "care to", over-clock? Let's say I buy a Core i7 2600K like I did and just... you know... install it, set up basic options in the BIOS, and call it a night for the next five or six years?

My "problem"... I guess I can call it this way, is that going on a review site that tells me that for example (purely an example here, keep this in mind) in a very specific game, and in a very specific screen resolution I am "supposed" to get around that exact frame rate with that exact system (being used to test during the review) and that exact GPU, and that exact CPU over-clocked at exactly 4.2Ghz isn't exactly going to be "accurately representing" the product for ME, me being a person who happens not to over-clock (and haven't over-clocked anything in almost three years by now, even though I did over-clock a bit in the past).

So in other words, let's say a review specifies the following system specs for the benchmarks: the CPU (whatever it is) over-clocked @ 4.5GHz; the memory (whatever it is, but usually the super fast and expensive kind, rarely the lower-speed variants) at stock frequencies (though sometimes OC'ed as well, although not as much as CPUs and GPUs are); and the GPUs under test at stock as well (though sometimes they're OC'ed too). Now let's presume only the CPU was OC'ed, and extensively so: not just by 400MHz or so, more like 1GHz+ of so-called "stable OC". Alright, say the benchmark then shows that at exactly 1920 x 1080 with 4xAA this (any) specific GPU spits out a minimum of 76FPS and a maximum of 88FPS in... let's say... Battlefield 3 (a popular game used in most benchmarks since its release).

Alright... that's... cool to know... that under those conditions it delivers that performance. Ok, but... what about the minimum and maximum frames with the CPU at its default 3.2, 3.3 or 3.4GHz? Can I know how many frames I'd get too (fewer, surely, but still... I want to know; that's why we look at benchmarks, yeah)? Say the minimum and maximum would change to something along the lines of 68FPS and maybe 77FPS respectively... I mean, anything that would more accurately represent my system's conditions when playing this or that game, with this or that GPU. Now sure, you might say "but lolz dude CPU at 3.2Ghz is bottleneck! lolz you mad bro? OC your cpu or don't buy PCs get a 360 lulz!". Ok, now the thing is, regardless of the arguments "for" or "against", one fact remains: there are gamers out there who (as I said earlier) simply don't want to, nor care to, over-clock; they're content with buying components, installing them, installing their games, and playing them.

Let's say that under non-OC'ed conditions the GPU(s) being reviewed would then be bottlenecked (to whatever extent). The thing is, even if it is, I won't know the difference between it (the bottlenecked performance I'd get) and the alternative. In other words... if I never over-clock my system, I will never experience that supposed 75+FPS minimum shown in the benchmark, because to get that minimum number of frames I'd need at the very least a CPU of that architecture, probably the exact same model to start with, and certainly running that specific over-clocked frequency to "remove the bottleneck" that would otherwise shave 5+ or 10+ frames off my overall game's performance.

The bottom line remains that ignorance is bliss. If I hadn't known to start with (from benchmarks doing that) what the performance would have been like under OC'ed conditions, then I also wouldn't have known that the performance I do get under "normal" conditions is in fact bottlenecked performance, even if it means my GPU isn't "working at its full potential". The thing is... again... I wouldn't be aware of it, hence unable to care about it, and I'd even think the performance I do get is the performance I'm supposed to get. And I would only notice that "extra" performance appearing "out of nowhere" the next time I happened to upgrade just the CPU itself, which would simply be faster than the previous one by default and would "unlock" that famous performance the GPU had been so eagerly waiting to deliver all that time.

So, what are your opinions on this? Are you bothered by the seemingly standardized over-clocked conditions in most benchmarks out there? Would you like it if we got both sets of conditions just as commonly as we get OC'ed reviews? Have you ever felt "misinformed" by benchmarks that never approach the "system you have" or the "system you'd like to get"? And do you feel you have to "interpret" the performance you'd get, by yourself, for lack of actual tangible data for normal system conditions?

Please do share, I'm just curious basically.

----------

TL;DR

º Over-clocked components in reviews seem to be the standard; does that bother you?

º Would not being aware of potential bottlenecks justify "normalized", non-over-clocked conditions for reviews?

º Are potential bottlenecks the only "useful reason" why GPUs are more often than not reviewed on OC'ed systems (usually OC'ed CPUs)?

º What are consumers "supposed to buy" if they don't over-clock, when GPU purchases seemingly can only be justified by CPU frequencies that don't exist at stock, unless you over-clock just like the review sites do?

º If review sites didn't OC in their reviews, then ignorance-is-bliss would prevail: we wouldn't be aware of actual bottleneck effects at all until upgrading our CPUs later on.

º Please, share your thoughts about this if you want!

----------

Thanks for reading (if you did!).
 

Avalon

Diamond Member
Jul 16, 2001
7,565
150
106
They're trying to better show the differences between the video cards by taking the CPU factor further out of the equation by OC'ing it.
 

Zenoth

Diamond Member
Jan 29, 2005
5,190
185
106
They're trying to better show the differences between the video cards by taking the CPU factor further out of the equation by OC'ing it.

Indeed, but isolating GPUs by taking the CPU factor further out of the equation is only relevant to the consumers/gamers who also happen to over-clock, and to a degree comparable to the review site's own over-clocks. Because, otherwise, a gamer who sees such a review won't recognize the performance difference between cards isolated under OC'ed conditions if their own system's frequencies (for all components) are set at stock to start with.

Basically, it serves its purpose, which I understand and agree with, but that purpose belongs to those who also happen to have systems running at identical or at least similar non-factory-default frequencies. And the consumers who don't are the ones potentially "suffering" from the otherwise standardized reviewing methods and procedures of today's benchmarks.
 

Greenlepricon

Senior member
Aug 1, 2012
468
0
0
Indeed, but isolating GPUs by taking the CPU factor further out of the equation is only relevant to the consumers/gamers who also happen to over-clock, and to a degree comparable to the review site's own over-clocks. Because, otherwise, a gamer who sees such a review won't recognize the performance difference between cards isolated under OC'ed conditions if their own system's frequencies (for all components) are set at stock to start with.

Basically, it serves its purpose, which I understand and agree with, but that purpose belongs to those who also happen to have systems running at identical or at least similar non-factory-default frequencies. And the consumers who don't are the ones potentially "suffering" from the otherwise standardized reviewing methods and procedures of today's benchmarks.

I can see where you're coming from, but as I understand it, the point is to isolate the GPU. It wouldn't be useful if a game came out that everybody said hits 30fps, when in fact it may get 60. This is also why they do CPU game reviews: if a game gets 30fps in the CPU test but scores 60fps in the GPU test, you'll know you're CPU-bottlenecked. Basically, I don't want to know that with this exact setup I'll get x number of frames. I want to know what my max fps is, and where I'll hit my bottleneck first.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Also, think future. People can look it up and see whether they are CPU or GPU limited when planning their next upgrade.

Having two bottlenecks, for example both GPU and CPU, wouldn't make it a GPU review; it would make it a combo, or rather a system, review.
 

lehtv

Elite Member
Dec 8, 2010
11,900
74
91
If they didn't OC the CPU, they might draw wrong conclusions about the GPUs being reviewed, e.g. that a 7950 isn't any faster than a 7870, or that overclocking the GPU by 20% doesn't improve framerates by more than 5%.
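To put rough numbers on that (hypothetical frame-rate ceilings made up for illustration, not measured data), using the same "slower component caps the frame rate" idea:

Code:
def observed_fps(cpu_fps, gpu_fps):
    return min(cpu_fps, gpu_fps)

# Suppose card A could render 80fps and card B 100fps in some scene.
for cpu_fps in (78, 110):  # stock CPU vs. over-clocked CPU
    print(cpu_fps, observed_fps(cpu_fps, 80), observed_fps(cpu_fps, 100))
# 78  -> 78 78   (both cards look identical: wrong conclusion)
# 110 -> 80 100  (the real 25% gap between the cards shows up)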
 

Black Octagon

Golden Member
Dec 10, 2012
1,410
2
81
If they didn't OC the CPU, they might draw wrong conclusions about the GPUs being reviewed, e.g. that a 7950 isn't any faster than a 7870, or that overclocking the GPU by 20% doesn't improve framerates by more than 5%.

Exactly. Using slower-clocked CPUs would allow easier identification of CPU bottlenecks, but it would undermine the main purpose of the review, i.e., to benchmark a GPU and thereby help rank it against others.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
^this

If you have a slower CPU and want to know about "real world performance", you need to look at CPU benchmarks of that game, as well. No single benchmark will tell the whole story, because you are using two components that will get stressed differently in different parts of the game/games.
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
If they didn't OC the CPU, they might draw wrong conclusions about the GPUs being reviewed, e.g. that a 7950 isn't any faster than a 7870, or that overclocking the GPU by 20% doesn't improve framerates by more than 5%.

except that isn't a "wrong conclusion" for the people who aren't going to overclock their CPU and/or might not even have a CPU as good as a stock i5 3570K

I get what the OP is trying to say here, and while there is a place for CPU scaling to be done in these reviews, it doesn't need to be any more extensive than testing stock vs. a healthy max overclock when the review is specifically about the GPU (if it's a CPU review, we could then run a plethora of CPUs at a wide range of clocks to show how scaling goes hand in hand with clock rate).

at any rate, testing only at stock seems silly.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
They certainly should be testing at stock. Without doing so, we don't get to see efficiency differences in the drivers when games go CPU limited. I certainly want to know both data points, and this Russian review site tests different CPUs to show scaling: gamegpu.ru
 

lehtv

Elite Member
Dec 8, 2010
11,900
74
91
except that isn't a "wrong conclusion" for the people who aren't going to overclock their CPU and/or might not even have a CPU as good as a stock i5 3570K

It's either a wrong conclusion or it isn't; it doesn't depend on who's reading. A 7950 is a faster GPU than a 7870, whether or not it is being bottlenecked by the CPU. In the case of a bottleneck, the faster card is simply less utilized.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
I may have the wrong attitude and approach about it, which would make my opinion and perspective irrelevant to the OP and to anyone who reads my post here, but I have always viewed GPU reviews as mere "guides to set expectations on relative performance rankings".

I have never read a GPU review and assumed that just because they show frames per second at a given screen resolution and AA settings, I too would observe the same... and I base that on the fact that there is no way I'd configure my system (including the OS and software apps) the same way the people who did the GPU review configured theirs.

I run anti-virus 24x7 with real-time detection and so forth. I have a myriad of apps installed that are all set to periodically check for updates, and when one is available they take up resources updating themselves. The differences in these background processes are going to make a difference in my gaming experience, and I am OK with that.

What I do take away from a review, though, is the relative rankings. I knew my GTX460 was going to perform below a GTX470 or 480, regardless of how I configured my system, regardless of how fast or slow my processor was. The rankings would be unchanged. As would power use, noise, and cost.

Now with the series 460 -> 470 -> 480 it's easy to know this in advance, thank you Nvidia... but what if I wanted to know whether I should buy a 460 or an AMD HD5770? I need a review to tell me in advance what the relative ranking of those two GPU products is. Including power, including noise, and of course cost.

Nothing is perfect though, naturally; the penalty for having a CPU bottleneck is not the same on an Nvidia card as it is on an AMD card, but the bottleneck penalty isn't going to be so severely unbalanced as to impact the relative ordering in a materially significant way IMO.

GPU reviews are there to show you GPU capabilities; OC'ing the CPU to ensure it is not a limiting factor in the GPU's performance is relevant to that pursuit.

If you need to know whether you should upgrade your CPU because it might be the bigger bottleneck in your gaming experience, that is what CPU reviews are for, and that is why in CPU reviews the reviewers do what they can to make the GPU not be the bottleneck (by turning off AA, setting the resolution really low, and so forth).

The difference you are seeing here is that the reviewers are doing "component review" whereas you are wanting to see "platform/system review".

The reviewers are trying to review an individual component - how good is this GPU? - but you are wanting to see a system-level review - how good is this GPU when combined with this CPU and this ram and this SSD?

System reviews are done too, but you have to know where to look; they are not as common as component reviews for one simple reason - money. Nvidia and AMD are not motivated to spend money supporting a review ecosystem that does system-level reviews showcasing how well their GPUs work when combined with Intel CPUs.

Nor is Intel interested in subsidizing a review ecosystem that is going to dilute the main message ("our CPUs are great") by compounding the article with a bunch of standardized gaming tests. Intel wants a component review that highlights their CPU, Nvidia wants a component review that highlights their GPU.

Now if DELL or HP ships out a completed system for review, a system review, then you are going to see a system-level review that highlights the strengths and weaknesses of the interplay between the CPU and GPU (and ram, ssd, OS, etc).

Don't read component reviews and expect a comprehensive system review (or vice versa). Both types of reviews exist because they serve two different market needs, and neither can substitute for the other, so they will both continue to exist as long as marketing departments have money to allocate towards supporting the existing 3rd-party reviewer ecosystem.

(it takes money to generate review samples, coordinate review timelines, provide support for the reviewers, etc...nothing is free, it costs the vendor real money to have reviews generated for their products even if money doesn't directly change hands from vendor to reviewer)
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
GPU reviews are there to show you GPU capabilities; OC'ing the CPU to ensure it is not a limiting factor in the GPU's performance is relevant to that pursuit.

Correct. Sadly, the converse is becoming less and less common. Far too many reviewers are happy to present GPU-limited benchmarks in CPU reviews, where it's a meaningless datapoint (other than as a binary: fast enough to feed this GPU to its maximum in this scenario, or not).
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
There are plenty of reviews out there that show pretty conclusively that going from 3.8GHz to 4.5GHz (roughly +18%) does almost nothing for framerates (they might go up by 5% on average).

I think reviewers just want to use the absolute fastest hardware possible, so as to remove as much of the CPU bottleneck as possible. I have always wished they would do all video card reviews on at least 3 tiers of CPU. Luckily it is possible to get that info, or at least close enough that a budget enthusiast can make an educated guess as to how a lesser-tier CPU will perform relative to a 4.8GHz 3770K.
 

TakeNoPrisoners

Platinum Member
Jun 3, 2011
2,600
1
81
They are trying to show the cards on their own merits. It wouldn't be a very good review if all the high-end cards got the same average FPS because the reviewer didn't overclock the CPU. Some games, such as Skyrim, are CPU limited to a degree with high-end GPUs anyway; you can see this as they tend to bunch up near the top of the graph.

It would add to the review of a new GPU if the site tested different clock speeds of the CPU, as well as different core counts. That could help people decide whether they should upgrade their CPU for the GPU being reviewed. It is easy enough to disable cores and test a CPU at clock speeds from 2.5GHz to 4.5GHz with the GPU in question. Depending on how this is implemented it could add more time to the review, but it would add an interesting metric that could help people decide how much they should spend on an upgrade.

They don't even need to test other CPUs; a high-end Intel CPU with disabled cores and lowered clock speeds can get pretty close to estimating the performance of a lower-tier Intel CPU. If they want to be even more thorough they can use an AMD CPU, but these days serious gamers don't really use AMD anymore, so it can be safely omitted.
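As a rough sketch (purely hypothetical values, pulled from the ranges suggested above), the whole test matrix stays small:

Code:
# Hypothetical CPU-scaling matrix for a GPU review; each combination
# would be benchmarked with the GPU under review.
clocks_ghz = [2.5, 3.0, 3.5, 4.0, 4.5]
core_counts = [2, 4]

for cores in core_counts:
    for clock in clocks_ghz:
        print(f"bench: {cores} cores @ {clock:.1f}GHz + GPU under review")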
 

Puppies04

Diamond Member
Apr 25, 2011
5,909
17
76
Another thing to keep in mind, OP, is that a graphics card can easily cost double your CPU outlay (or even 4x if you are buying something like a GTX690).

Now anyone spending that amount of money can say to themselves, "ok, it costs a lot, but if I find in a couple of years that my CPU can't keep up with my monster GPU, I can just buy a new one and a new mobo to go with it". It would be awesome if we could get a time machine and bring back a 2015 CPU to test with your top-of-the-range GPU, but unfortunately that won't be happening, so the best we can do is OC the snot out of today's best chips to show what headroom your GPU will have in the years to come.

Make sense?
 

Zenoth

Diamond Member
Jan 29, 2005
5,190
185
106
Thanks to everyone for sharing your thoughts on this! There are very valid points in many posts. I especially have to agree with the simple fact that most benchmarks are done under over-clocked conditions to show the real potential of the components (GPUs mostly). I also completely agree with lehtv's comment.

I guess that - ultimately - I only wish that most known review sites out there would do their benchmarks under both OC'ed and stock conditions, to accommodate more people's "thirst for accurate data" out there (like me, for instance). Because, really, in the end, if they did that (commonly so) then everyone would be happy(ier) and wouldn't wonder why; nor, at least, would I have found a reason to create this thread to start with, hehe.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
I guess that - ultimately - I only wish that most known review sites out there would do their benchmarks under both OC'ed and stock conditions, to accommodate more people's "thirst for accurate data" out there (like me, for instance). Because, really, in the end, if they did that (commonly so) then everyone would be happy(ier) and wouldn't wonder why; nor, at least, would I have found a reason to create this thread to start with, hehe.

The reviewers are doing reviews because they are a business first and foremost. They have a demographic they feel their product (the review) is serving, and they hone that product (the review) in ways they hope will only better serve their target audience.

If reviewers felt that they were under-serving their demographic, or that they were needlessly excluding themselves from the opportunity of serving a wider demographic then you can be sure the reviewers would chase after that demographic ($$$) and adjust their reviews accordingly.

I think what you are seeing in the reviews (the lack of extra data) is proof of what must be the truth: there just aren't enough people out there like yourself, with a thirst for more data, to justify the additional expense of the much lengthier review needed to generate all that extra data.

Reviewers must be able to monetize their products (the reviews), increasing the expense of producing the product without elevating the value of that product commensurately is a dead-end for that business.

I personally feel the way you do, but about another aspect - I deplore the sample sizes used in reviews when they are measuring things that are population-dependent.

Power consumption, for example, is totally sample-to-sample dependent because of process variation, and yet we'll get power numbers from a sample of one.

OC'ing headroom is another one: you'll see one review sample OC'ed, and based on that, conclusions are drawn as to what kind of OC'ing the end-user can expect.

But, again, I go back to the reason the review is created in the first place ($), and I have to conclude that if the target demographic really cared about sample size then so would the reviewer (because at that point neglecting the demographic's care-abouts would be financial suicide; no one bothers to read a crappy review).

I'm guessing you don't see the data you are looking for because you are in the minority in the first place, and reviewers aren't going to cater to the minority if it elevates their cost of business too much to do so.