
Maxwell Power Consumption from Tom's Hardware

I think most of the architecture's efficiency gains at maximum constant GPU load without throttling are eaten by the relatively high clocks compared to GK110, which they need to raise performance appreciably. What needs to be clarified is that this scenario doesn't happen during ordinary gaming loads, so it's only slightly relevant for a mid-range chip which is not a compute chip, not with such abysmal DP performance. Teslas will most likely only use big Maxwell. If it can get better battery life during gaming loads then it's a success from an efficiency POV. The chip is much smaller, so to make it appreciably faster they really need those high clocks. At lower clocks and voltages it would draw less power than GK110.
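The "lower clocks and voltages" point can be sketched with the usual first-order CMOS dynamic-power model, P ≈ C·V²·f. The operating points below are illustrative numbers, not measured values:

```python
# First-order dynamic power model: P ~ C * V^2 * f.
# Voltages and clocks below are hypothetical, for illustration only.

def dynamic_power(c_eff, volts, freq_mhz):
    """Relative dynamic power for a given effective capacitance, voltage, clock."""
    return c_eff * volts ** 2 * freq_mhz

C_EFF = 1.0                                # arbitrary units; cancels in the ratio
high = dynamic_power(C_EFF, 1.21, 1126)    # assumed high boost point
low = dynamic_power(C_EFF, 1.00, 900)      # assumed lower clock/voltage point

# Dropping the clock ~20% and the voltage ~17% roughly halves dynamic
# power, far more than the performance lost - the poster's argument.
print(f"relative power at low point: {low / high:.2f}")
```

Since voltage enters squared, even a modest voltage reduction dominates the savings, which is why the same silicon clocked lower looks so much more efficient.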
 
Does anyone know if Tom's card was off the shelf, or did NV send it?

Also:
perfwatt_2560.gif

very fishy...
 
The GB one at TH was factory OCed, and at least one other review showed it running fairly hot (but not by that much). MSI and Asus seem to have stock mildly OCed cards that don't use substantially more power, but if you are marketing it for overclocking, that's likely not going to cut it (and MSI, at least, is likely to later release Lightning versions, that run hot in general).

http://www.guru3d.com/articles_pages/asus_geforce_gtx_970_strix_review,7.html

I suspect Palit just has it stock overvolted a bit, to manage their 10% OC.
 
Right. Tom's numbers are about double, while everyone else's are +0-20%. I think THG really needed to compare known cards on the new testing system, and work out some measuring and statistical kinks, before publishing a review with firm conclusions like they did.

I suspect the difference in the cards is just binning, assuming Tom's speculated 970 is whack, and the stock OC GB card is a bit power hungry (since other reviews have different results with actual 970 cards). That is, while some 970 (and 960?) GPUs are going to be salvaged, many are likely to handle being fully enabled but run a little on the hot side, so they don't make the cut for the stated TDP fully enabled at full speeds.

AT has the 970 consuming 3 W more at idle than its bigger/better sibling, the 980. This is clearly no fluke; even with different cooler configurations you can maybe add or subtract 0.7 W for each fan. Also, Ryan Smith says the idle voltage of both cards is 0.856 V, so it's unlikely to be the GPU in this case.
Something is different about that card, maybe VRMs, maybe VRAM. That could be it: when you compare the die shots, the 980 uses 8 nondescript VRAM modules, while the 970 has just 4, presumably cheaper ones from Samsung.

67930.png
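A quick back-of-envelope check of the fan argument above, using the numbers quoted in the post (the ±0.7 W per fan figure is the poster's own estimate):

```python
# Idle power gap from the AT review, and the per-fan adjustment the
# poster allows for different cooler configurations (both from the post).
gap_watts = 3.0          # 970 idle draw minus 980 idle draw
per_fan_adjust = 0.7     # rough allowance per extra/missing fan

# Even crediting two fans' worth of adjustment, a gap remains,
# so the cooler alone can't explain the full 3 W difference.
residual = gap_watts - 2 * per_fan_adjust
print(f"unexplained idle gap: {residual:.1f} W")
```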
 
That 970 has 8 modules. 4 on the front and another 4 on the back. It should be identical to the RAM on the 980 (7GHz Samsung).
 

Those charts inherently have about ±2-3% variance due to chip-to-chip binning. Different models increase the difference.
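To put that binning tolerance in perspective against the discrepancy discussed earlier in the thread (both deltas are rough figures taken from the posts above):

```python
# Chip-to-chip binning spread the poster allows for in perf/W charts.
binning_spread = 0.03          # +/- 3%

# Tom's load-power delta vs. everyone else's, from earlier in the thread.
toms_delta = 1.0               # "about double" ~ +100%
other_reviews_delta = 0.20     # upper end of the +0-20% range

# Both figures dwarf what binning alone can explain.
print(f"Tom's delta is ~{toms_delta / binning_spread:.0f}x the binning spread")
print(f"other reviews are ~{other_reviews_delta / binning_spread:.0f}x")
```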
 

You're right, they've put another 4 modules on the back, possibly to cram more stuff onto the smaller PCB. I guess we'll never learn why there is a difference, but there is a difference, that much is certain.

Hardware.fr confirms Tom's data, except their Nvidia reference card is the least power hungry. They also differentiate between idle and idle with the screen off (veille écran).
They also measure consumption for the cards separately: Hardware.fr has the Nvidia 980 reference card at just 9 W, while the triple-fan-cooler Gigabyte 980 G1 raises that to 12 W, and the 970 raises it further to 15 W.
Hardware.fr also has the 750 Ti at 7 W (same as Tom's), which is as low as Nvidia idle power can get.
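Laying out the hardware.fr idle figures quoted above makes the spread explicit (the exact 970 model is assumed from context):

```python
# Idle power figures quoted from hardware.fr (watts).
idle_w = {
    "GTX 750 Ti": 7,
    "GTX 980 (NV reference)": 9,
    "GTX 980 G1 (Gigabyte)": 12,
    "GTX 970 (Gigabyte)": 15,    # model assumed from the thread
}

base = idle_w["GTX 980 (NV reference)"]
for card, w in idle_w.items():
    print(f"{card}: {w} W ({w - base:+d} W vs reference 980)")
```

The 970 sitting 6 W above the reference 980 at idle is consistent with the AT observation that something other than the GPU itself differs between the boards.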
 
Reading the iPhone5 review, something occurred to me: why is Tegra K1 still using Kepler SMXes rather than Maxwell SMMs if the latter is so much more efficient? I think the key to the gaming efficiency of desktop Maxwell is something other than the changes in shader arrangement, and that secret sauce is already incorporated in Tegra K1; that's why NV released Tegra K1 based on the Kepler architecture rather than rushing a Maxwell version to market. If the GPU portion of a Maxwell Tegra ends up 30-35% more efficient than its Kepler counterpart rather than 100%, that will mean the above hypothesis is correct. Somehow I find it hard to believe that they can be competitive with a part that is half as efficient as their newest tech.
 

K1 has most likely been in development longer than Maxwell. And to change it, you have to redo a lot and postpone everything: new validation, etc.
 