Kepler/NVidia: Power Control/Dynamic OC etc. is extremely annoying


Eureka

Diamond Member
Sep 6, 2005
3,822
1
81
I think it's funny to blame nvidia cards when amd cards do the same thing. I think there are maybe 2 models of 7950s now that aren't locked down. As long as the card reaches a guaranteed speed then really you got what you paid for. It's to keep the number of blown cards down.
 

flexy

Diamond Member
Sep 28, 2001
8,464
155
106
I think it's funny to blame nvidia cards when amd cards do the same thing. I think there are maybe 2 models of 7950s now that aren't locked down. As long as the card reaches a guaranteed speed then really you got what you paid for. It's to keep the number of blown cards down.

I am not even complaining about the "theory", rather about the implementation and the inconsistencies.

You say dynamic OC is a measure to prevent damaged cards, fine... but then why do I see throttling in scenarios where there is no reason for it - while at the same time I can run some benches/games that push my effing card up to 82C, about to get smoked, and not see any throttling at all? When it really should throttle, it doesn't.

Logic says it can run with max VDDC/core up to a certain TDP and heat level and then throttle consistently once certain thresholds are reached, but the values are all over the place.

In addition, I also found an issue (which I wrote about in the forum yesterday) where applying certain power limit values produces "erroneous" downclocks at settings where the card shouldn't downclock at all. (Those downclocks can be as large as 17%, which I consider significant.) E.g. PL 100% is fine, 101% drops the core by 200MHz, 102% is fine, 103% drops it by 200MHz again... something is just off with how the entire system works.
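
For what it's worth, here is a minimal logging sketch using NVML through the pynvml Python bindings that could be used to document this (illustrative only: it assumes GPU index 0, reports the enforced limit in watts rather than the Afterburner-style percentage, and some queries may simply be unsupported on a given card/driver):

```python
# Minimal clock/power/temperature logger via NVML (pip install nvidia-ml-py).
# Sketch only: start it, run your game/benchmark, change the Power Limit
# slider between runs, and compare the CSV output.
import time
import pynvml as nv

nv.nvmlInit()
gpu = nv.nvmlDeviceGetHandleByIndex(0)

def safe(query, *args):
    """Return a query result, or None if the driver doesn't support it."""
    try:
        return query(gpu, *args)
    except nv.NVMLError:
        return None

print("time_s,sm_clock_mhz,power_w,power_limit_w,temp_c")
start = time.time()
try:
    while True:
        clock = safe(nv.nvmlDeviceGetClockInfo, nv.NVML_CLOCK_SM)
        power = safe(nv.nvmlDeviceGetPowerUsage)          # milliwatts
        limit = safe(nv.nvmlDeviceGetEnforcedPowerLimit)  # milliwatts
        temp  = safe(nv.nvmlDeviceGetTemperature, nv.NVML_TEMPERATURE_GPU)
        row = [f"{time.time() - start:.1f}",
               clock,
               None if power is None else round(power / 1000.0, 1),
               None if limit is None else round(limit / 1000.0, 1),
               temp]
        print(",".join("" if v is None else str(v) for v in row))
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    nv.nvmlShutdown()
```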
 

Rezist

Senior member
Jun 20, 2009
726
0
71
NV does it because it lets them build cards with cheaper and cheaper quality components. The board partners love it as well: if a 670 board is cheaper to make than a 7870 board, and you look at the sale price difference, you can see that this has already paid for itself.

Not to mention that by effectively disabling OCing, RMAs are going to go down as well.
 

willomz

Senior member
Sep 12, 2012
334
0
0
I really don't see the problem, you can still overclock, it's just got easier really.

What people seem to be complaining about is that they can't show off as much any more.

Just enjoy your video card, it doesn't have to be some dick measuring competition.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I think user voltage control may be limited for protection, but dynamic clocks and voltage make a lot of sense from an efficiency point of view.

Why leave performance on the table when many applications don't use all of a fixed TDP at fixed clocks?

I don't think this is dumbed down; it takes a lot of innovation and engineering.
 

ICDP

Senior member
Nov 15, 2012
707
0
0
AMD's boost system is no better IMHO. My MSI 7950 TF3 has a default clock of 960MHz, but when the GPU gets to 65C it throttles back to much lower speeds. The only way to fix it is to use +20% PowerTune, which should be enabled out of the box IMHO.

I can understand that both companies designed their cards this way to reduce RMAs but when default out of the box speeds for both can vary so much it means you aren't always getting the same cards the reviewers got. Both companies get around this issue by saying it is a non-guaranteed boost speed.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
A reference GTX 670 has a 915MHz base clock and can boost up to 980MHz, but my MSI GTX 670 PE consistently boosts to around 1202MHz out of the box -- and garnered close to 1300MHz with OC and OV.

As long as the dynamic clocks are mostly transparent, don't affect smoothness, and try to offer as much performance as possible while keeping the GPU stable over its life, I'm pretty pleased. Happy with 20 percent OC scaling from default. Some may desire more and try to find the card that offers what they want based on subjective tastes, tolerances, thresholds and wallet.
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
A reference GTX 670 has a 915MHz base clock and can boost up to 980MHz, but my MSI GTX 670 PE consistently boosts to around 1202MHz out of the box -- and garnered close to 1300MHz with OC and OV.

As long as the dynamic clocks are mostly transparent, don't affect smoothness, and try to offer as much performance as possible while keeping the GPU stable over its life, I'm pretty pleased. Happy with 20 percent OC scaling from default. Some may desire more and try to find the card that offers what they want based on subjective tastes, tolerances, thresholds and wallet.

The PE (a great card, I had one too) was factory overclocked and factory overvolted. There was a little fiasco with it and I actually don't know what the final outcome was. This was back last summer(ish), or was it fall, and it might have been around the time NV demanded that the card manufacturers remove overvolting.

It still doesn't make the fact they neutered the overvolting any better.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Imho,

Offer views, discuss, vote with your wallet, and if enough sales are lost -- hopefully nVidia may listen and offer more flexible volts for their customers and partners.
 

Eureka

Diamond Member
Sep 6, 2005
3,822
1
81
Avoiding the products alone isn't enough to bring overvolting back... we don't make up nearly enough sales, and it's not a selling point in the first place.

Take note: how many 7870 XTs, 7950s and 7970s sold despite the locked-down voltages? It's nice that AMD offers overvolting in its reference designs, but every board house is now locking down their cards in order to "protect" them.

Which isn't necessarily a bad thing. The 570s would blow up if you even looked at their voltage for too long. Maybe the implementation is kind of bad, especially with the core throttling, but our options are getting more and more limited.
 

Sheninat0r

Senior member
Jun 8, 2007
515
1
81
The argument that throttling/power control etc. is implemented to "save power" doesn't fly for me either.

You are choosing to willfully ignore information that goes against your viewpoint without providing any justification. Why don't you accept the power savings argument?
 

flexy

Diamond Member
Sep 28, 2001
8,464
155
106
You are choosing to willfully ignore information that goes against your viewpoint without providing any justification. Why don't you accept the power savings argument?

If they really wanted to provide throttling for the sake of "power saving" during gaming, they could offer it as an option in the control panel instead of forcing it on the user.

In "theory" I would not even mind the philosophy behind throttling to keep within TDP specs, but as I said already, the problem is how it's implemented and how inconsistent it is.

In addition to that, in my own case the power savings would already be plenty if the card used as little power as possible at idle (which it already does), seeing that I only game maybe 20%-25% of the time.

Even if we assume that dynamic clock throttling makes sense DURING GAMES, I would still want it as an optional feature.

What you're doing is taking what in reality is a limitation imposed by Nvidia and making it sound as if it's a feature. Having LESS control is, for me, a negative, not a positive.

Due to the "faulty" way it works right now, the reality is that you might load your favorite game and never know what performance your card is actually delivering - you might check your clocks in-game and see the card clocked down 17%-20% without justification, which can hardly be in the interest of anyone who spends $300 or so on a new graphics card.
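
If you want to put a number on that, a trivial check against a per-second clock log (for example one produced by a logger like the sketch earlier in the thread) might look like this; the file name and rated-boost figure are hypothetical placeholders:

```python
# Toy analysis of a per-second core-clock log (hypothetical file/values,
# just to quantify how much time the card spends well below its rated boost).
RATED_BOOST_MHZ = 980   # e.g. a reference GTX 670's advertised boost clock
CUTOFF = 0.83           # flag samples more than ~17% below the rated boost

with open("clock_log.csv") as f:   # hypothetical one-value-per-line log
    clocks = [float(line) for line in f if line.strip()]

low = [c for c in clocks if c < RATED_BOOST_MHZ * CUTOFF]
print(f"{len(low)} of {len(clocks)} samples ({len(low) / len(clocks):.0%}) "
      f"were more than {1 - CUTOFF:.0%} below the rated boost clock")
print(f"lowest observed clock: {min(clocks):.0f} MHz")
```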
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I think it's funny to blame nvidia cards when amd cards do the same thing. I think there are maybe 2 models of 7950s now that aren't locked down. As long as the card reaches a guaranteed speed then really you got what you paid for. It's to keep the number of blown cards down.

AMD doesn't lock cards, the board partners do (supposedly). If a company locks their cards and someone wants to OC to get the max performance, they shouldn't buy that company's cards. It's that simple. There are lots of SI models that aren't locked (the claim that only two 7950 models are unlocked is FUD). On the other hand, it is nVidia that locks their cards.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I really don't see the problem, you can still overclock, it's just got easier really.

What people seem to be complaining about is that they can't show off as much any more.

Just enjoy your video card, it doesn't have to be some dick measuring competition.

It is all about enjoying his video card. Nothing at all to do with "dick waving". Belittling the OP's position by making it out to be something it isn't? Poor form.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I think user voltage control may be limited for protection, but dynamic clocks and voltage make a lot of sense from an efficiency point of view.

Why leave performance on the table when many applications don't use all of a fixed TDP at fixed clocks?

I don't think this is dumbed down; it takes a lot of innovation and engineering.

I agree it helps efficiency. It does other things as well though which aren't so good. Like the OP's problem.

AMD's boost system is no better IMHO. My MSI 7950 TF3 has a default clock of 960MHz, but when the GPU gets to 65C it throttles back to much lower speeds. The only way to fix it is to use +20% PowerTune, which should be enabled out of the box IMHO.

I can understand that both companies designed their cards this way to reduce RMAs but when default out of the box speeds for both can vary so much it means you aren't always getting the same cards the reviewers got. Both companies get around this issue by saying it is a non-guaranteed boost speed.

Again, it helps with efficiency. It's designed to keep power consumption down while benchmarking for reviews. It's not done nearly as well as nVidia's, though. Push the power slider to +20% and it's gone. What we need is a way to defeat nVidia's setup to make it go away as well.

Only nVidia sends cards to reviewers that boost beyond the stated boost specs. If an AMD card says it boosts to 950MHz the review samples will only boost to 950MHz. They won't boost to +1200MHz as we see with some 680's.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Avoiding the products alone isn't enough to bring overvolting back... we don't make up nearly enough sales, and it's not a selling point in the first place.

Take note: how many 7870 XTs, 7950s and 7970s sold despite the locked-down voltages? It's nice that AMD offers overvolting in its reference designs, but every board house is now locking down their cards in order to "protect" them.

Which isn't necessarily a bad thing. The 570s would blow up if you even looked at their voltage for too long. Maybe the implementation is kind of bad, especially with the core throttling, but our options are getting more and more limited.

Completely made up. The red part is just out and out false.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
I agree it helps efficiency. It does other things as well though which aren't so good. Like the OP's problem.



Again, it helps with efficiency. It's designed to keep power consumption down while benchmarking for reviews. It's not done nearly as well as nVidia's, though. Push the power slider to +20% and it's gone. What we need is a way to defeat nVidia's setup to make it go away as well.

Only nVidia sends cards to reviewers that boost beyond the stated boost specs. If an AMD card says it boosts to 950MHz the review samples will only boost to 950MHz. They won't boost to +1200MHz as we see with some 680's.
That's because the two vendors' boost systems function differently. You know this, and you are just misrepresenting the truth to put your typical negative slant on it.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Nvidia's boost is totally variable. The simplest way to describe it is that different cards will boost to different clocks: all of them have the same exact maximum voltage, but each performs differently at that voltage while staying below the exact same TDP level. Temperature also plays a role, causing the cards to throttle when a certain temperature threshold is hit. The only control you have over the nvidia cards is setting additional clock speed over and above the boost clock and increasing the TDP by a locked additional margin.

AMD reference cards all run with the same voltage if they are using the same BIOS and have the same maximum clock they can reach. The difference being you can increase voltages, increase TDP and increase clocks of course. You can choose how much power, voltages and clocks you want to make use of.
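
To make that concrete, here is a purely illustrative toy model of the kind of decision being described (the thresholds and numbers are invented for the example; the 13MHz step only loosely mirrors Kepler's boost bins, and it is not Nvidia's actual algorithm):

```python
# Toy model of power/temperature-driven boost: not Nvidia's real algorithm,
# just an illustration of why two "identical" cards land at different clocks.
def boost_clock(base_mhz, max_boost_mhz, power_w, tdp_w, temp_c,
                temp_limit_c=80, step_mhz=13):
    """Pick a clock between base and max boost from power/thermal headroom."""
    clock = max_boost_mhz  # start at the highest bin this chip can reach
    # Step down one bin at a time while over the power target or temp limit.
    while clock > base_mhz and (power_w > tdp_w or temp_c > temp_limit_c):
        clock -= step_mhz
        power_w *= 0.98   # pretend each bin saves ~2% power
        temp_c  -= 0.5    # ...and sheds a little heat
    return clock

# Same settings, different silicon quality, different resulting clocks:
print(boost_clock(915, 1202, power_w=178, tdp_w=170, temp_c=83))  # leaky chip: throttles
print(boost_clock(915, 1202, power_w=165, tdp_w=170, temp_c=70))  # good chip: stays at max
```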

As an overclocker or hardware enthusiast, whatever, it's obvious AMD's method is better than nvidia's. GPU boost's only positive from my view is the variable voltage adjustment when the card is not fully loaded, which keeps temperatures lower. Otherwise it's more an annoyance than anything else. Never mind that it does not always function as advertised in its 2.0 version. Titan has quirks and inconsistencies with how GPU boost 2.0 functions that led a lot of people to use a custom BIOS, because GPU boost 2.0 was keeping the cards from getting the most out of their overclocks.

OP is correct that it raises the question of why it is not simply an option to turn on or off, rather than a mandate. Look at a card like the MSI Lightning 680 or EVGA 680 Classified before nvidia mandated that their voltage/TDP controls be disabled. There obviously is more performance on the table, but unlocking it would entail a more expensive PCB using better components, and it would let overclockers get more out of their hardware rather than spending more for the next card up.

GPU boost just reeks to me of passing something off as a 'feature' that in reality allows selling a card designed as bare-bones as possible to maximize profit.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
That's because the two vendors' boost systems function differently. You know this, and you are just misrepresenting the truth to put your typical negative slant on it.

I wasn't spinning anything. I was correcting this.
but when default out of the box speeds for both can vary so much it means you aren't always getting the same cards the reviewers got.

When AMD says "boost to 1GHz" that's what it boosts to. Your card, the reviewer's card, everyone.

You don't get something like THIS
This means the GPU clock speed could increase from 1006MHz to 1.1GHz or 1.2GHz or potentially even higher. (Kyle saw a GTX 680 sample card reach over 1300MHz running live demos but it could not sustain this clock.)
 

thilanliyan

Lifer
Jun 21, 2005
12,060
2,273
126
It's nice that AMD offers overvolting in its reference designs, but every board house is now locking down their cards in order to "protect" them.

My MSI and HIS non-reference cards would beg to differ. :)
I'm so glad AMD vendors still offer some freedom for voltage changes to the end user. I have never destroyed a card from OCing and am glad I'm given the choice to do what I please without resorting to hardware mods.
 
Last edited:

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Here is more grey area, then, to trigger your review paranoia.
Reading about AMD's boost functionality in the reviews, it was mentioned that AMD shot for worst-case scenarios, so the user would not have to adjust PowerTune to get the correct advertised performance.

The idea that a company could send a golden chip for a review could apply to either vendor, and is probably not happening. In AMD's boost system there should never be any pullback from 1050MHz to 1000MHz, but I've seen user graphs that show exactly that.

http://www.anandtech.com/show/6025/radeon-hd-7970-ghz-edition-review-catching-up-to-gtx-680/2
At the same time however, while AMD isn’t pushing the 7970GE as hard as the GTX 680 they are being much more straightforward in what they guarantee – or as AMD likes to put it they’re being fully deterministic. Every 7970GE can hit 1050MHz and every 7970GE tops out at 1050MHz. This is as opposed to NVIDIA’s GPU Boost, where every card can hit at least the boost clock but there will be some variation in the top clock. No 7970GE will perform significantly better or worse than another on account of clockspeed, although chip-to-chip quality variation means that we should expect to see some trivial performance variation because of power consumption.

Finally, on the end-user monitoring front we have some good news and some bad news. The bad news is that for the time being it’s not going to be possible to accurately monitor the real clockspeed of the 7970GE, either through AMD’s control panel or through 3rd party utilities such as GPU-Z. As it stands AMD is only exposing the base P-states but not the intermediate P-states, which goes back to the launch of the 7970 and is why we have never been able to tell if PowerTune throttling is active (unlike the 6900 series). So for the time being we have no idea what the actual (or even average) clockspeed of the 7970GE is


http://www.anandtech.com/show/6152/amd-announces-new-radeon-hd-7950-with-boost/3
Unfortunately AMD still hasn’t come through on their promise to expose the precise clockspeeds of their Southern Islands cards, which means we’re stuck looking at clockspeeds in a halfway blind manner. We cannot tell when PowerTune throttling has kicked in
[Chart from the article: 7950B clockspeed over time]

So what’s going on? As near as we can tell, the power requirements for boosting are so high that the 7950B simply cannot maintain that boost for any significant period of time. Almost as soon as the 7950B boosts, it needs to go back to its base state in order to keep power consumption in check. The culprit here appears to be the 7950B’s very high boost voltage of 1.25v, which at 0.125v over the card’s base voltage makes the boost state very expensive from a power standpoint.

Review chips would benefit from low leakage, or reviewers and/or users would have to raise the PowerTune setting from 0 for normal functionality.
Does that occur?
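
As a rough sanity check on why that 1.25v boost state is so expensive (my own back-of-envelope, not from the article: it assumes dynamic power scales roughly with f*V^2, and the 850MHz base / 925MHz boost clocks are from memory):

```python
# Back-of-envelope check on the cost of the 7950B's boost state.
# Uses the usual dynamic-power approximation P ~ f * V^2; clock figures
# are from memory and only illustrative.
v_base, v_boost = 1.125, 1.25
f_base, f_boost = 850.0, 925.0

voltage_factor = (v_boost / v_base) ** 2          # ~1.23x from voltage alone
total_factor = (f_boost / f_base) * voltage_factor

print(f"voltage alone: +{(voltage_factor - 1) * 100:.0f}% dynamic power")
print(f"with the clock bump: +{(total_factor - 1) * 100:.0f}% dynamic power")
# Roughly +23% from the voltage and +34% total for only a ~9% clock increase,
# which is consistent with the card falling out of boost almost immediately.
```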
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
It's just a different emphasis -- stability and safety instead of offering some people the fun of seeing how far they can push the card while keeping it semi-stable.

I'd rather spend my time playing games than playing the overclocking "game" so I'm happier with this approach. Buy the card, plug it in, play games.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Here is more grey area, then, to trigger your review paranoia.
Reading about AMD's boost functionality in the reviews, it was mentioned that AMD shot for worst-case scenarios, so the user would not have to adjust PowerTune to get the correct advertised performance.

The idea that a company could send a golden chip for a review could apply to either vendor, and is probably not happening. In AMD's boost system there should never be any pullback from 1050MHz to 1000MHz, but I've seen user graphs that show exactly that.

http://www.anandtech.com/show/6025/radeon-hd-7970-ghz-edition-review-catching-up-to-gtx-680/2






http://www.anandtech.com/show/6152/amd-announces-new-radeon-hd-7950-with-boost/3

[Chart from the article: 7950B clockspeed over time]



Review chips would benefit from low leakage, or reviewers and/or users would have to raise the PowerTune setting from 0 for normal functionality.
Does that occur?

That is not at all what I was responding to. I've said it twice. Reread it if you need to.

There is no paranoia. It's right there in black and white.