
Possible GTX 880 benched in 3DMark Firestrike Extreme!

HardOCP isn't trustworthy in any way.
They seem biased toward AMD cards, and they don't have the tools to measure the card's power draw.

TechPowerUp does, however. These are their results for "Peak":


Totally different results from HardOCP; yet another reason why they should not be trusted.

So TPU, using the older 13.11 Beta drivers when everyone else used Catalyst 14.1 Beta in February, is to be taken seriously, but [H] using the latest drivers 6 months later is AMD-biased. 🙄

Edit: Not only that, but they used the latest (at the time) NV driver, 334.69 Beta, for the GTX 750. :whistle:
 
HardOCP isn't trustworthy in any way.
They seem biased toward AMD cards, and they don't have the tools to measure the card's power draw.

TechPowerUp does, however. These are their results for "Peak":


Totally different results from HardOCP; yet another reason why they should not be trusted.

http://tpucdn.com/reviews/ASUS/GTX_750_Ti_OC/images/power_peak.gif

Card power draw is useless, except maybe for showing how CPU-dependent certain GPU architectures are. Total system power is a much more representative number, in my view.
 
The Anandtech review was done in February with Catalyst 14.1 Beta 6. In fact, every review you linked was done in February with Catalyst 14.1 Beta. Hell, the TechPowerUp review used Catalyst 13.11 Beta (the one whose power graph you posted).
The HardOCP review in question was done in July with Catalyst 14.7 Beta.

Also, [H] takes the average power consumption across all games; AT and everyone else take a single game like Crysis 3, or some synthetic.

So, the [H] review uses the latest driver, 6 months after the first reviews. I consider the [H] review more realistic TODAY than those made 6 months ago. 😉

Yet the performance of a castrated Pitcairn card is virtually the same across all those driver revisions.

And great for [H] for averaging the power consumption of all the games they tested. We can average the power consumption across all the different websites I listed, which used Crysis 2, Crysis 3, Unigine Valley, Battlefield 4, and possibly something else, and still easily see that the GTX 750 Ti is 50% more efficient than the R7 265.

It's the entire internet we are talking about. [H] has the skewed results; the rest of the internet does not. Yet you (and 3dVagabond) want to take one review as the holy grail and throw the other 48 reviews out the window.

Interesting numbers there; notice it uses ~80% of the power the 265 uses, and it averages about 20% slower in performance.

Wow. Reading failure? All of the reviews I linked (sans TPU) show TOTAL SYSTEM POWER DRAW. TechPowerUp is the only site I linked that isolates the video card's power draw. Which, again, shows the 750 Ti to be more than 50% more efficient than the R7 265.
 
HardOCP isn't trustworthy in any way.
They seem biased toward AMD cards, and they don't have the tools to measure the card's power draw.

Pure garbage.

[H] is one of the few sites we can trust. Their verdict goes back and forth based on which card actually is better, not on which brand is pushing manipulative reviewing guidelines on them.

The benefit is that when nVidia or AMD is actually delivering better cards, [H] gives readers the straight story. AMD is delivering better cards now; that is down to better cards from AMD, not to [H] bias.

It won't be [H] bias when they tell readers nVidia is delivering the better product either, FWIW, which they are known to do when nVidia is delivering the better product.
 
Wow. Reading failure? All of the reviews I linked (sans TPU) show TOTAL SYSTEM POWER DRAW. TechPowerUp is the only site I linked that isolates the video card's power draw. Which, again, shows the 750 Ti to be more than 50% more efficient than the R7 265.

Anandtech: 184 W vs. 230 W = 0.80
HC: 238 W vs. 301 W = 0.79
TR: 181 W vs. 218 W = 0.83
PcPer: 184 W vs. 225 W = 0.81

My figures are based on TOTAL SYSTEM POWER DRAW.
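For what it's worth, here is the arithmetic behind those ratios as a quick sketch (illustrative only; wall-socket figures fold in the CPU, PSU losses, and the rest of the system):

```python
# Ratio of total system power, GTX 750 Ti system vs. R7 265 system,
# from the wall-socket figures quoted above. Illustrative only.
readings = {
    "Anandtech": (184, 230),
    "HC":        (238, 301),
    "TR":        (181, 218),
    "PcPer":     (184, 225),
}

for site, (w_750ti, w_265) in readings.items():
    print(f"{site}: {w_750ti / w_265:.2f}")  # ~0.79-0.83 across all four sites
```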
 
Yet the performance of a castrated Pitcairn card is virtually the same across all those driver revisions.

And great for [H] for averaging the power consumption of all the games they tested. We can average the power consumption across all the different websites I listed, which used Crysis 2, Crysis 3, Unigine Valley, Battlefield 4, and possibly something else, and still easily see that the GTX 750 Ti is 50% more efficient than the R7 265.

It's the entire internet we are talking about. [H] has the skewed results; the rest of the internet does not. Yet you (and 3dVagabond) want to take one review as the holy grail and throw the other 48 reviews out the window.

Not true. I have acknowledged the fact that when the GTX 750 was released in February 2014 it had very high performance/watt. In fact, nobody here refuted that.
But since February, new game patches have become available and new drivers have been released. New games like Watch Dogs, and Mantle in BF4 multiplayer, were also introduced, something not a single February review had.

All those deltas change the performance/watt of both the GTX 750 Ti and the R7 265, 6 months after the original 750 launch.

We are not taking this review alone; we have acknowledged the first reviews. But because [H] shows different performance/watt 6 months later, with games that were not available in February, you guys dismiss them and make accusations of bias.
 
Not true. I have acknowledged the fact that when the GTX 750 was released in February 2014 it had very high performance/watt. In fact, nobody here refuted that.
But since February, new game patches have become available and new drivers have been released. New games like Watch Dogs, and Mantle in BF4 multiplayer, were also introduced, something not a single February review had.

All those deltas change the performance/watt of both the GTX 750 Ti and the R7 265, 6 months after the original 750 launch.

We are not taking this review alone; we have acknowledged the first reviews. But because [H] shows different performance/watt 6 months later, with games that were not available in February, you guys dismiss them and make accusations of bias.

I'm not clear how you think patches to games should affect assessments of hardware performance.
 
I'm not clear how you think patches to games should affect assessments of hardware performance.

Really? Bench BF4 multiplayer without all the patches and with 2013 drivers, then bench again with all the patches and the latest July 2014 drivers (Mantle or not), and see how the performance changes from card to card.

Edit: [H] used Mantle in BF4 multiplayer in their July 2014 review; that gives the R7 265 very different performance/watt than it would have had in February with D3D and Catalyst 14.1 Beta.
 
Not true. I have acknowledged the fact that when the GTX 750 was released in February 2014 it had very high performance/watt. In fact, nobody here refuted that.
But since February, new game patches have become available and new drivers have been released. New games like Watch Dogs, and Mantle in BF4 multiplayer, were also introduced, something not a single February review had.

All those deltas change the performance/watt of both the GTX 750 Ti and the R7 265, 6 months after the original 750 launch.

We are not taking this review alone; we have acknowledged the first reviews. But because [H] shows different performance/watt 6 months later, with games that were not available in February, you guys dismiss them and make accusations of bias.

The fact that you believe you are making sense has blown my mind. Crazy world we live in.
 
The fact that you believe you are making sense has blown my mind. Crazy world we live in.

Same card, different performance/watt depending on the game patch and driver. Just look at the Core i7-3770K + HD 7950 numbers to understand what I'm talking about.

With D3D it gets 67.9 fps at 302 W peak total system power.
With Mantle it gets 75.7 fps at 299 W peak total system power.
The same card has higher performance/watt with Mantle than with D3D. The same can happen with more optimized drivers and game patches.

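Running those quoted numbers through the arithmetic (a back-of-the-envelope check of the above; note the denominator is total system power at the wall, not isolated GPU power):

```python
# fps per wall watt for the quoted i7-3770K + HD 7950 BF4 numbers
d3d_fps, d3d_w = 67.9, 302        # Direct3D run
mantle_fps, mantle_w = 75.7, 299  # Mantle run

d3d_eff = d3d_fps / d3d_w           # ~0.225 fps/W
mantle_eff = mantle_fps / mantle_w  # ~0.253 fps/W
print(f"Mantle perf/W gain: {(mantle_eff / d3d_eff - 1) * 100:.1f}%")  # ~12.6%
```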
 
I can see how it would make sense in some games that weren't running to their full potential. For instance, if a game was only using 2 cores and a patch was released that allowed it to use 4, power consumption would go up.

Likewise with BF4, I would imagine that the boost in framerates people got from using Windows 8 instead of 7 would also result in more power being used.

But in that case it would most likely mean the graphics card was initially drawing much less than its full TDP; a patch isn't going to push it much past what it's designed to run at.
 
So TPU, using the older 13.11 Beta drivers when everyone else used Catalyst 14.1 Beta in February, is to be taken seriously, but [H] using the latest drivers 6 months later is AMD-biased. 🙄

Edit: Not only that, but they used the latest (at the time) NV driver, 334.69 Beta, for the GTX 750. :whistle:

How can a driver change the power draw of the card? It can't.

Card power draw is useless, except maybe for showing how CPU-dependent certain GPU architectures are. Total system power is a much more representative number, in my view.
Card power draw is actually what's interesting, for several reasons:
1. It shows the efficiency of an architecture vs. previous generations and the other brand.
2. It gives a picture of what to expect from future cards.

Who the hell made up these theories about one architecture being more CPU-dependent than others?
And how does it matter, other than for the power bill? The CPU can't help the GPU with 3D models in games. It can only bottleneck or not bottleneck. Unless it's a CPU-dependent game, in which case the GPU again doesn't matter as much.

Pure garbage.

[H] is one of the few sites we can trust. Their verdict goes back and forth based on which card actually is better, not on which brand is pushing manipulative reviewing guidelines on them.

The benefit is that when nVidia or AMD is actually delivering better cards, [H] gives readers the straight story. AMD is delivering better cards now; that is down to better cards from AMD, not to [H] bias.

It won't be [H] bias when they tell readers nVidia is delivering the better product either, FWIW, which they are known to do when nVidia is delivering the better product.

I've seen far too many biased conclusions in their tests favouring AMD cards for reasons other than performance.
I've seen them test AMD cards at lower settings vs. Nvidia cards at higher settings and present that as the big picture, while making the apples-to-apples comparison (same settings) smaller and pushed all the way down the page.
I've seen them publish FPS results for games that other respectable reviewers find totally different.
They test only a few games, too few to get anywhere near the whole picture of how the cards really stack up against each other.

I haven't read too many reviews from them, I can admit that, but the few I have read I found very unbalanced.
Maybe I'm wrong, but that's what I got from reading them.
 
Card power draw is actually what's interesting, for several reasons:
1. It shows the efficiency of an architecture vs. previous generations and the other brand.
2. It gives a picture of what to expect from future cards.

In addition to these:

3. Modern graphics cards are power-limited, either in how much power they can draw or in how much heat they can exhaust. Better power efficiency means a higher performance ceiling.
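A toy sketch of that point (my own illustration, with made-up numbers, not figures from this thread): if sustained performance is capped by the board power budget, then at a fixed budget a more efficient architecture simply gets more frames out of the same watts.

```python
# Toy model: sustained performance bounded by (perf per watt) x (power budget).
# All numbers are hypothetical, for illustration only.
def max_sustained_fps(fps_per_watt: float, power_limit_w: float) -> float:
    return fps_per_watt * power_limit_w

print(max_sustained_fps(0.8, 60))  # 48.0 "fps" within a 60 W budget
print(max_sustained_fps(1.0, 60))  # 60.0 "fps" from the very same budget
```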
 
How can a driver change the power draw of the card? It can't.

I really don't care to comment on this thread, but rest assured, a driver absolutely can change the power draw of a card.

GPU power is not static. It depends greatly on the software driving the card, and a large piece of that software is the driver. Driver changes can affect the power profile in many ways (boost behavior, throttle behavior, fan profiles, etc.). Even without those changes, the driver could make a power difference in each game you test.

Power depends on many things, but one of them is how active the transistors are for each gate on your chip. If the driver changes how the game is rendered (which happens all the time), then you will likely see a power difference between drivers. How big those changes are is, of course, difficult to determine, but they are there.
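One way to check this empirically, at least on an NVIDIA card: sample the board power that the driver itself reports while running the same workload before and after a driver update. A minimal sketch, assuming `nvidia-smi` is installed and on the PATH:

```python
import subprocess
import time

def average_power_draw(samples: int = 30, interval_s: float = 1.0) -> float:
    """Average board power (W) reported by nvidia-smi over a short window."""
    readings = []
    for _ in range(samples):
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        readings.append(float(out.strip().splitlines()[0]))  # first GPU only
        time.sleep(interval_s)
    return sum(readings) / len(readings)

if __name__ == "__main__":
    # Run this under the same game/benchmark on each driver version.
    print(f"Average board power: {average_power_draw():.1f} W")
```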
 
I really don't care to comment on this thread, but rest assured, a driver absolutely can change the power draw of a card.

GPU power is not static. It depends greatly on the software driving the card, and a large piece of that software is the driver. Driver changes can affect the power profile in many ways (boost behavior, throttle behavior, fan profiles, etc.). Even without those changes, the driver could make a power difference in each game you test.

Power depends on many things, but one of them is how active the transistors are for each gate on your chip. If the driver changes how the game is rendered (which happens all the time), then you will likely see a power difference between drivers. How big those changes are is, of course, difficult to determine, but they are there.

You are thinking of the VBIOS, which resides on the GPU itself. You can change the power target there, but that is absolutely impossible with a driver installed on the computer's hard drive.
Maybe it's possible for a new driver to increase GPU utilization and keep it at 100% when that's not needed, but you can't exceed the power limit with a tweaked driver.
 
You are thinking of the VBIOS, which resides on the GPU itself. You can change the power target there, but that is absolutely impossible with a driver installed on the computer's hard drive.
Maybe it's possible for a new driver to increase GPU utilization and keep it at 100% when that's not needed, but you can't exceed the power limit with a tweaked driver.

I don't see how what you wrote here is a response to what I wrote...

Also, you're attacking [H] for being biased because you don't understand their methodology. You can disagree with their methodology, but calling bias is something completely different for which you have shown no evidence.
 
I really don't care to comment on this thread, but rest assured, a driver absolutely can change the power draw of a card.

GPU power is not static. It depends greatly on the software driving the card, and a large piece of that software is the driver. Driver changes can affect the power profile in many ways (boost behavior, throttle behavior, fan profiles, etc.). Even without those changes, the driver could make a power difference in each game you test.

Power depends on many things, but one of them is how active the transistors are for each gate on your chip. If the driver changes how the game is rendered (which happens all the time), then you will likely see a power difference between drivers. How big those changes are is, of course, difficult to determine, but they are there.

+1000

Drivers can and will affect power draw, for the reasons listed and for more complicated ones.

You have to think about how drivers improve performance. The more work the card does, the harder you push it; this can result in more power consumption. The reverse also happens: there are more elegant approaches that do the work more efficiently, and that too can affect consumption. Every driver that changes the end performance is likely to change power consumption, even if only by a very small amount.

When you change the way your card processes the information, it can affect consumption.
The simplest way I can think of to convey this: picture a bottleneck in the flow. CPU, driver overhead, memory congestion, whatever; the bottleneck causes wasted cycles, with cores sitting idle. Then the driver team finds a way around the issue and performance goes up several times over, because the cores never sit waiting anymore; they are always being fed data. An example like this would have a dramatic effect on performance per watt.

This is happening all the time, though rarely on a dramatic scale. It's actually mostly extremely minor and may not even be detectable. But if you can see how it happens on the exaggerated scale, then you must realize it happens on the small scale too. It is real: drivers have a real impact on performance per watt, and even on overall consumption in a particular app. And even if each individual effect is small, over time, with many driver improvements and changes, the effect on performance per watt can grow larger and larger.
 
Measured power consumption, GTX 750 Ti card:

System at IDLE = 120 W
System wattage with GPU under FULL stress = 202 W
Difference (GPU load) = 82 W
Add average GPU idle wattage, ~10 W
Estimated GPU power consumption = ~92 W

Bear in mind that the system wattage is measured at the wall socket, so there are other variables involved, such as PSU efficiency.

Yep, which is a big one. At 80% PSU efficiency you're down to ~73 W.

Then there is the big issue of the rest of the system, including the CPU. There is a big difference between a CPU at full load and one at idle. You cannot possibly believe the CPU isn't being utilized while gaming. Haha, that is a crazy thought.

The only way to measure the true consumption of a GPU is to measure the consumption of the GPU itself. Measuring system draw is a totally different thing. It can tell you what your electric bill will come to, but it is not the proper way to measure GPU consumption with pinpoint accuracy. You have to account for PSU efficiency changing with load, and for the CPU/RAM/HDD/motherboard/etc. all using more power when the system is stressed and temperatures go up.
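Putting the last two posts' estimate in one place (my sketch; it inherits every caveat above, especially that the CPU and the rest of the system also draw more under load, so this attributes too much to the GPU):

```python
# Rough isolation of GPU power from wall-socket readings, per the posts above.
idle_wall_w = 120.0    # whole system at idle, measured at the wall
load_wall_w = 202.0    # whole system with the GPU fully stressed
gpu_idle_w = 10.0      # assumed average GPU idle draw
psu_efficiency = 0.80  # assumed PSU efficiency at this load

gpu_wall_w = (load_wall_w - idle_wall_w) + gpu_idle_w  # ~92 W at the wall
gpu_dc_w = gpu_wall_w * psu_efficiency                 # ~73.6 W of actual DC draw
print(f"Estimated GPU draw: ~{gpu_dc_w:.1f} W")
```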
 
The only proper way to test perf/W is to measure both performance and power consumption in each respective application. That type of data is not easy to come by at any website. Ideally we need to see not just idle or peak power consumption but also average power consumption (which again is very hard to come by). Also note that GPU perf/W can only be determined when the GPU's power consumption is isolated (i.e. the idle power consumption should be subtracted from the average power consumption to determine the actual additional power consumed while running a GPU-intensive application).
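As a sketch of that methodology (the function and its names are mine, not from any of the reviews): perf/W computed over only the power the workload adds above idle.

```python
def gpu_perf_per_watt(avg_fps: float, avg_system_w: float, idle_system_w: float) -> float:
    """Perf/W using only the power the GPU-intensive workload adds over idle."""
    added_w = avg_system_w - idle_system_w
    if added_w <= 0:
        raise ValueError("average load power must exceed idle power")
    return avg_fps / added_w

# Example with made-up numbers: 60 fps average, 210 W average vs. 120 W idle.
print(f"{gpu_perf_per_watt(60.0, 210.0, 120.0):.2f} fps/W")  # 0.67
```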
 
Same card, different performance/watt depending on the game patch and driver. Just look at the Core i7-3770K + HD 7950 numbers to understand what I'm talking about.

With D3D it gets 67.9 fps at 302 W peak total system power.
With Mantle it gets 75.7 fps at 299 W peak total system power.
The same card has higher performance/watt with Mantle than with D3D. The same can happen with more optimized drivers and game patches.


The question is how the wattage split between CPU and GPU changes. It may well be that the CPU uses less power because it has less work to do, while the GPU renders more fps and thus uses more. Perf/W may therefore be unchanged for the graphics card itself. And that is exactly what is relevant in a discussion of GPUs and architectures, even if it may not be relevant for the end user, since you obviously cannot use a GPU without the rest of the system.
 
The question is how the wattage split between CPU and GPU changes. It may well be that the CPU uses less power because it has less work to do, while the GPU renders more fps and thus uses more. Perf/W may therefore be unchanged for the graphics card itself. And that is exactly what is relevant in a discussion of GPUs and architectures, even if it may not be relevant for the end user, since you obviously cannot use a GPU without the rest of the system.

Except for TPU, every other review uses total system power consumption at the wall to measure performance/watt. Obviously that is not the GPU's power draw alone, but people read the reviews and draw their conclusions from that data.
But my example was not meant to show GPU power draw alone; it was to show that performance/watt is not static. Even within the same game, performance can change over time due to drivers, patches, etc. And obviously performance/watt is not the same in every game.
AT and all the other reviews posted in this topic measured a single game or synthetic. The HardOCP review measured every game they benched and reported the average power draw. Not only that, but [H] reviewed different graphics cards than all the reviews done in February, with current drivers and with games like Watch Dogs and Mantle titles that dramatically affect performance and performance/watt.
That is a big difference in review methodology, something people didn't consider when they made their accusations of biased results, or when they quote performance/watt for a single game benched 6 months ago.
It seems to me that people don't know how to read reviews; they just look at the graphs and that's it. And when one review shows different numbers because it uses different methods or different drivers or something else, it's biased or wrong or whatever.
 
I realize it'll be different for someone coming from 470/480/570/580 though.

Or a GTX 670/GTX 680 or a 5870/6970/7970 owner!

Upgrading from a GTX 670, I'm a customer for a potential GTX 870! If I could garner 10-20 percent more performance than a GTX 780 Ti, with 4 GB of RAM, keep the nVidia ecosystem, and hopefully get some new gaming capabilities, for $399!

That would offer around 65-75 percent more performance!
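For the record, the compound arithmetic behind that estimate (my sketch; the 780 Ti vs. GTX 670 gap is an assumption implied by the poster's figures, not a benchmarked number):

```python
# If a GTX 780 Ti is ~45-50% faster than a GTX 670 (assumed), and a GTX 870
# lands 10-20% above the 780 Ti, the compound gain over the 670 would be:
for ti_over_670 in (1.45, 1.50):
    for x870_over_ti in (1.10, 1.20):
        gain = (ti_over_670 * x870_over_ti - 1) * 100
        print(f"{gain:.0f}%")  # roughly 60-80%, bracketing the 65-75% above
```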
 
Or a GTX 670/GTX 680 or a 5870/6970/7970 owner!

Upgrading from a GTX 670, I'm a customer for a potential GTX 870! If I could garner 10-20 percent more performance than a GTX 780 Ti, with 4 GB of RAM, keep the nVidia ecosystem, and hopefully get some new gaming capabilities, for $399!

That would offer around 65-75 percent more performance!

Agreed. My GTX 670 has been a great card, but I could use a little more performance at 2560x1440. If the GTX 870 is 10-20 percent faster than a GTX 780 Ti, I wonder whether the jump from a GTX 670 would be even greater than my jump from GTX 470 to GTX 670. Good times ahead, I hope.
 
Except for TPU, every other review uses total system power consumption at the wall to measure performance/watt. Obviously that is not the GPU's power draw alone, but people read the reviews and draw their conclusions from that data.
But my example was not meant to show GPU power draw alone; it was to show that performance/watt is not static. Even within the same game, performance can change over time due to drivers, patches, etc. And obviously performance/watt is not the same in every game.
AT and all the other reviews posted in this topic measured a single game or synthetic. The HardOCP review measured every game they benched and reported the average power draw. Not only that, but [H] reviewed different graphics cards than all the reviews done in February, with current drivers and with games like Watch Dogs and Mantle titles that dramatically affect performance and performance/watt.
That is a big difference in review methodology, something people didn't consider when they made their accusations of biased results, or when they quote performance/watt for a single game benched 6 months ago.
It seems to me that people don't know how to read reviews; they just look at the graphs and that's it. And when one review shows different numbers because it uses different methods or different drivers or something else, it's biased or wrong or whatever.

Not wrong, but in a discussion about GPUs, GPU architectures, and their perf/W, it's quite beside the point if the GPU is not measured directly. I would not draw any conclusions from total system power... ever. Too many variables.
 
Not wrong, but in a discussion about GPUs, GPU architectures, and their perf/W, it's quite beside the point if the GPU is not measured directly. I would not draw any conclusions from total system power... ever. Too many variables.

Is it possible to do any measurements at the silicon level?
 