SirPauly · Diamond Member · Apr 28, 2009
I think it's funny to blame nvidia cards when amd cards do the same thing. I think there are maybe 2 models of 7950s now that aren't locked down. As long as the card reaches a guaranteed speed then really you got what you paid for. It's to keep the number of blown cards down.
The default specs of a reference GTX 670 are a 915MHz base clock with boost up to 980MHz, but my MSI GTX 670 PE consistently boosts up to around 1202MHz out of the box -- and garnered close to 1300MHz with overclocking and overvolting.
As long as the dynamic clocks are largely transparent, don't affect smoothness, and try to offer as much performance as possible while keeping the GPU stable over its life, I'm pretty pleased. Happy with 20 percent OC scaling from default. Some may desire more and try to find the choice that offers what they want based on subjective tastes, tolerances, thresholds and wallet.
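As a rough illustration of the dynamic-clock idea above, here is a toy model (all numbers, the 13MHz bin size, and the linear power model are invented for illustration; this is not NVIDIA's actual GPU Boost algorithm): the clock steps up in bins until the estimated board power for the running workload would exceed the TDP budget, so a light workload boosts higher than a heavy one.

```python
# Toy sketch of a boost scheme: step the clock up in fixed bins until the
# estimated power for the current workload would exceed the TDP budget.
# Numbers and the linear power model are invented, not vendor data.
def boost_clock(base_mhz, max_boost_mhz, power_at_base_w, tdp_w, step_mhz=13):
    clock = base_mhz
    while clock + step_mhz <= max_boost_mhz:
        # crude model: power grows linearly with clock at a fixed voltage
        est_power_w = power_at_base_w * (clock + step_mhz) / base_mhz
        if est_power_w > tdp_w:
            break
        clock += step_mhz
    return clock

print(boost_clock(915, 1202, 120, 170))  # light load: climbs near the cap
print(boost_clock(915, 1202, 160, 170))  # heavy load: power-limited earlier
```

This also shows why reported boost clocks vary run to run: the ceiling depends on how much power the workload happens to draw, not on a fixed spec.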
The argument that throttling/power control etc. is implemented to "save power" doesn't fly for me either.
You are choosing to willfully ignore information that goes against your viewpoint without providing any justification. Why don't you accept the power savings argument?
I think it's funny to blame nvidia cards when amd cards do the same thing. I think there are maybe 2 models of 7950s now that aren't locked down. As long as the card reaches a guaranteed speed then really you got what you paid for. It's to keep the number of blown cards down.
I really don't see the problem; you can still overclock, it's just gotten easier, really.
What people seem to be complaining about is that they can't show off as much any more.
Just enjoy your video card, it doesn't have to be some dick measuring competition.
I think user voltage control may be locked down for protection, but dynamic clocks and voltages make a lot of sense from an efficiency point of view.
Why leave performance on the table when many applications don't use all of a fixed TDP at fixed clocks?
I don't think this is dumbed down; it takes a lot of innovation and engineering.
AMD's boost system is no better IMHO. My MSI 7950 TF3 has a default clock of 960MHz, but when the GPU reaches 65°C it throttles back to much lower speeds. The only way to fix it is to set PowerTune to +20%, which should be enabled out of the box IMHO.
I can understand that both companies designed their cards this way to reduce RMAs but when default out of the box speeds for both can vary so much it means you aren't always getting the same cards the reviewers got. Both companies get around this issue by saying it is a non-guaranteed boost speed.
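The card-to-card variation described above can be sketched with a toy model (the bins, TDP, and watts-per-MHz figures are hypothetical; real boost binning is far more involved): under the same TDP cap, a leakier chip burns more watts per MHz and therefore tops out at a lower boost bin than an efficient one, which is why the sample a reviewer got may boost higher than the one you bought.

```python
# Toy illustration of chip-to-chip boost variation under one shared TDP cap.
# Assumes power ~ (per-chip watts/MHz) x clock; all numbers are invented.
def top_boost(base_mhz, bins_mhz, power_per_mhz_w, tdp_w):
    """Return the highest boost bin whose estimated power fits the TDP."""
    best = base_mhz
    for clock in bins_mhz:
        if clock * power_per_mhz_w <= tdp_w:
            best = max(best, clock)
    return best

bins = [980, 1058, 1110, 1202]
print(top_boost(915, bins, 0.145, 170))  # leaky chip: stops at a lower bin
print(top_boost(915, bins, 0.130, 170))  # efficient chip: reaches the top bin
```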
Avoiding the products alone isn't enough to remove overvolting... we don't make up nearly enough sales, and it's not a selling point in the first place.
Take note of how many 7870 XTs, 7950s and 7970s sold despite the locked-down voltages. It's nice that AMD offers overvolting in its reference designs, but every board house is now locking down their cards in order to "protect" them.
Which isn't necessarily a bad thing. The 570s would blow up if you even looked at their voltage for too long. Maybe the implementation is kind of bad, especially with core throttling, but our options are getting more and more limited.
That's because the two vendors' boost systems function differently. You know this, and are just misrepresenting the truth to put in your typical negative slant.
I agree it helps efficiency. It does other things as well, though, which aren't so good, like the OP's problem.
Again, it helps with efficiency. It's designed to improve power consumption while benchmarking for reviews. It's not nearly as well done as nVidia's, though. Push the power slider to +20% and it's gone. What we need is a way to defeat nVidia's setup to make it go away as well.
Only nVidia sends cards to reviewers that boost beyond the stated boost specs. If an AMD card says it boosts to 950MHz the review samples will only boost to 950MHz. They won't boost to +1200MHz as we see with some 680's.
That's because the two vendors' boost systems function differently. You know this, and are just misrepresenting the truth to put in your typical negative slant.
but when default out of the box speeds for both can vary so much it means you aren't always getting the same cards the reviewers got.
This means the GPU clock speed could increase from 1006MHz to 1.1GHz or 1.2GHz or potentially even higher. (Kyle saw a GTX 680 sample card reach over 1300MHz running live demos but it could not sustain this clock.)
It's nice that AMD offers overvolting in its reference designs but every board house is now locking down their cards in order to "protect" their cards.
At the same time however, while AMD isn’t pushing the 7970GE as hard as the GTX 680 they are being much more straightforward in what they guarantee – or as AMD likes to put it they’re being fully deterministic. Every 7970GE can hit 1050MHz and every 7970GE tops out at 1050MHz. This is as opposed to NVIDIA’s GPU Boost, where every card can hit at least the boost clock but there will be some variation in the top clock. No 7970GE will perform significantly better or worse than another on account of clockspeed, although chip-to-chip quality variation means that we should expect to see some trivial performance variation because of power consumption.
Finally, on the end-user monitoring front we have some good news and some bad news. The bad news is that for the time being it’s not going to be possible to accurately monitor the real clockspeed of the 7970GE, either through AMD’s control panel or through 3rd party utilities such as GPU-Z. As it stands AMD is only exposing the base P-states but not the intermediate P-states, which goes back to the launch of the 7970 and is why we have never been able to tell if PowerTune throttling is active (unlike the 6900 series). So for the time being we have no idea what the actual (or even average) clockspeed of the 7970GE is
Unfortunately AMD still hasn’t come through on their promise to expose the precise clockspeeds of their Southern Islands cards, which means we’re stuck looking at clockspeeds in a halfway blind manner. We cannot tell when PowerTune throttling has kicked in
So what’s going on? As near as we can tell, the power requirements for boosting are so high that the 7950B simply cannot maintain that boost for any significant period of time. Almost as soon as the 7950B boosts needs to go back to its base state in order to keep power consumption in check. The culprit here appears to be the 7950B’s very high boost voltage of 1.25v, which at 0.125v over the card’s base voltage makes the boost state very expensive from a power standpoint.
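The power cost of that boost state can be ballparked from the quoted voltages. Assuming dynamic power scales roughly with V² × f, and taking the 7950B's 850MHz base / 925MHz boost clocks (the clock figures are an assumption here, not stated in the quote above):

```python
# Rough dynamic-power scaling: P ~ V^2 * f (capacitance treated as constant).
base_v, boost_v = 1.125, 1.25      # boost is 0.125v over base, per the quote
base_mhz, boost_mhz = 850, 925     # assumed 7950B clocks, not from the quote

ratio = (boost_v / base_v) ** 2 * (boost_mhz / base_mhz)
print(f"boost state draws roughly {(ratio - 1) * 100:.0f}% more power")  # ~34%
```

Even under this crude model, the voltage bump alone accounts for most of the increase, which is consistent with the article's point that the boost state is very expensive from a power standpoint.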
Here is more grey area, then, to trigger your review paranoia.
Reading about AMD's boost functionality in the reviews, it was mentioned that AMD shot for worst-case scenarios, so the user would not have to adjust PowerTune to get the correct advertised performance.
The idea that a company can send a golden chip for a review could apply to either vendor, and is probably not happening. In AMD's boost system there should never be any pullback from 1050MHz to 1000MHz, but I've seen user graphs that show exactly that.
http://www.anandtech.com/show/6025/radeon-hd-7970-ghz-edition-review-catching-up-to-gtx-680/2
http://www.anandtech.com/show/6152/amd-announces-new-radeon-hd-7950-with-boost/3
Review chips would benefit from low leakage, or reviewers and/or users would have to raise the PowerTune setting from 0 for normal functionality.
Does that occur?