[H] HD 7970 Dual-X Review

Page 7 - AnandTech Forums

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I agree that GPU Boost should be considered stock. I mean, why didn't AMD think of using something similar? Instead of proclaiming that GPU Boost shouldn't be considered, the blame really rests on AMD for not making a similar type of tech. I'm about 99% sure they'll do the same thing (GPU boost) with their next GPU though.

The only other issue is stock cards varying in terms of max GPU Boost. I've seen this myself with 680s in SLI; one of my cards boosts higher than the other. They are very close though, so it's not really a big issue, although someone on [H] reported a pretty large variance with their SLI setup.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Fair enough, but out of the box it seems to be YMMV for the 680, although probably not a drastic difference between the best and worst.

So how do you make a fair comparison with YMMV muddying it all? There really should be more control of the boost feature.

It's really not worth all the concern. It really isn't. Just like it's not a concern with Intel's Turbo or AMD's Turbo.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
I agree that GPU Boost should be considered stock. I mean, why didn't AMD think of using something similar? Instead of proclaiming that GPU Boost shouldn't be considered, the blame really rests on AMD for not making a similar type of tech. I'm about 99% sure they'll do the same thing (GPU boost) with their next GPU though.

The only other issue is stock cards varying in terms of max GPU Boost. I've seen this myself with 680s in SLI; one of my cards boosts higher than the other. They are very close though, so it's not really a big issue, although someone on [H] reported a pretty large variance with their SLI setup.

Absolutely. It's a smart move to voice your support ahead of time, as I'm certain AMD will follow up with their own version soon. And they have an opportunity to one-up Nvidia by offering direct voltage adjustment. Who knows.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
The main concern is the lack of an OC edition in the vein of SB K series.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
Absolutely. It's a smart move to voice your support ahead of time, as I'm certain AMD will follow up with their own version soon. And they have an opportunity to one-up Nvidia by offering direct voltage adjustment. Who knows.

I think we've seen with the frankencard that hit 1900MHz or whatever that the capability exists for non-reference designs to be more lenient with regard to the voltage they'll let you apply.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Maybe Nvidia will have an improved version in their upcoming GK112 card for this fall? One that improves dynamic clocking even further? Possibly with further dynamic voltage to please enthusiasts. We should get an Nvidia employee here to brief us on that; it would be interesting to hear it from the horse's mouth without the continual marketing spin.

Really though, I'm surprised nobody thought of something like GPU Boost sooner.
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Maybe Nvidia will have an improved version in their upcoming GK112 card for this fall? One that improves dynamic clocking even further? We should get an Nvidia employee here to brief us on that.

Really though, I'm surprised nobody thought of something like GPU Boost sooner.

One never knows what will come about. But not only are Nvidia employees unable to comment on unreleased products, they also aren't allowed to post in any public forums even if they wanted to, unless it's Nvidia's own forums. So we are out of luck.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Nvidia lists the base clock at 1006, so that's stock then, right? :)

No, it's not. Stock performance is what you get out of the box. So if AMD rates the HD7970 at 925MHz and MSI sells you a factory-overclocked 7970 at 1070MHz, should we test the MSI card downclocked to AMD factory defaults?

You make it sound like there is some wild stock variation among GTX680s. It's 1058-1110MHz. Very rarely do they go above that. That's your margin of error. Anything else means someone tweaked the power settings or enabled a manual offset.

@ Computerbase, their factory 680 never even got to 1110MHz, supporting this view.

You can check their review and you'll see the 680 has a 20% lead in some games while the HD7970 leads in other games. Like I said, in games where the 680 leads, a 1280MHz HD7970 still won't catch an overclocked 680. In games where the 7970 leads, the 680 has little chance to overcome the deficit.

Considering 680 has >2x the Tessellation performance and almost 2x the texture performance, if anything 680 will gain over 7970 in next generation games that become more texture and tessellation heavy. Not that this matters since most 680/7970 users will have upgraded anyway.

[Charts: TessMark x32 tessellation and Beyond3D FP16 texture filtering results]


Even from a future-proofing perspective, it's still hard to recommend the 7970.

Then there is PhysX. Most people don't care for it. But if you can get a game to look better, why not right?

Borderlands 2 PhysX

Right now the 680 is:
- Cheaper
- Performs better out of the box / can still overclock another 8-13% to make it competitive against a massively overclocked 7970
- Consumes less power at stock and ~ 90-100W less in overclocked states vs. an overclocked 7970
- Has more features
- EVGA Warranty

There are 2 superior features that count on the 7970 right now:

- GPGPU double precision performance (MilkyWay, Collatz Conjecture, Bitcoin mining, etc.)
- Eyefinity, which allows up to 6 displays in action

It's already been pointed out how CPU turbo boost and GPU Boost are different, i.e. locked vs. variable.

CPU boost is still factory defaults, just like GPU Boost is. If GPU Boost was 1.3GHz for one card and 1.1GHz for another card, it'd be worth talking about. As it stands, at stock speeds, GTX680s boost to 1058MHz and at most 4 banks above that. Anything above that means the card was not run at default power settings. This is because the 680 has 4 power boost banks @ 13MHz steps. Max speed is 1110MHz from the factory @ 100% power setting.

I suggest people read reviews in more detail because there is too much misunderstanding about how 680's boost works at stock 100% power band.
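The stock boost band described above (1058MHz rated boost plus at most four 13MHz banks at the 100% power setting) works out like this; a quick sketch, with all constants taken from the post rather than any official spec:

```python
# Stock GTX 680 boost bins as described above: rated boost 1058 MHz,
# plus at most 4 bins of 13 MHz at the default 100% power target.
# Illustrative only; constants come from the forum post, not NVIDIA docs.

RATED_BOOST = 1058  # MHz, advertised typical boost clock
BIN_STEP = 13       # MHz per boost bin
MAX_BINS = 4        # bins available at the stock power setting

def stock_boost_states():
    """Clock states a stock card can occupy at the 100% power band."""
    return [RATED_BOOST + n * BIN_STEP for n in range(MAX_BINS + 1)]

print(stock_boost_states())  # [1058, 1071, 1084, 1097, 1110]
```

Anything observed above 1110MHz would therefore imply a raised power target or a manual offset, which is the point being made.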

No, but if it's supposed to have 540 WHP, then the test should be run with the 540 HP one that most people are likely to get.

That's impossible and unrealistic. You can't expect someone to dyno each vehicle and downrate it. Cars are guaranteed to have the minimum amount of HP they are rated at. Some cars will for sure produce more HP. This is done so that automakers do not get sued for not delivering enough HP. What you are arguing against is a fact of life: there is always some small variation among most products. In CPUs it's VIDs, in cars it's HP, right now it's GPU Boost. The variation only matters if there is a material impact.

In this case, no one here has shown it to be such.

If you take [H]'s card for example, it was significantly faster than the others being tested. So if they cherry-pick that review and push it everywhere, you are likely not buying what is being advertised.

People are using HardOCP's one-off example and ignoring 20-30 other reviews that still show the GTX680 beating the HD7970.

Here is Hexus.net review where HD7970 was even overclocked to match GTX680's speeds and still lost.

MOST 680's do 1200 now?

Not factory 680s. I said a lot of reference 7970s do 1125-1175MHz, while most 680s easily do 1200MHz with manual overclocking. So it's far rarer to get an HD7970 that can do 1250-1280MHz than it is to achieve a 1200+MHz overclock on a 680. Meaning you are playing overclocking roulette, when pretty much most 680s will do 1200MHz+ with a manual offset of 140-150MHz.

Too many people are taking #'s from outliers like the 1200 @ [H] and calling that a normal stock card, even though it's 5-10% faster than in other reviews out there.

It's 10% faster on average, sure. But in more popular recent games it's more like 15-20% faster. The only reason HD7970 even hangs in there is because of old games such as Metro 2033, AVP and Crysis 1 that offset the average in its favour. The fact that HD7970 costs $40-100 more and consumes more power while requiring a very high 1250+ overclock to beat an overclocked 680 makes it irrelevant right now.

At HardOCP, 70% of hardware enthusiasts picked a reference 680 over a $100 more expensive MSI Lightning.

Our forum is the only forum that keeps having these debates. This is the first time in a long time where the choice couldn't be clearer, since it comes down to price and the games which you play.

Until HD7970 falls in price to $499 for after market versions, it's worse value for most people.
 
Last edited:

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
It's really not worth all the concern. It really isn't. Just like it's not a concern with Intel's Turbo or AMD's Turbo.

I won't lose any sleep over it, lol, but I don't like things one-sided, especially when there is some merit to what some people are saying.

What if a board partner releases a 7970 clocked at, say, 1200MHz and beats a 680, and reviewers claim it is the fastest card? Is that fair? It is out of the box.

Personally I think boost is overclocking and the 7970 should be given a fair chance at that too since we know it was clocked conservatively. I have no idea how the reviews would be made fair though...like I said, boost has muddied the waters a bit.
 
Last edited:

BoFox

Senior member
May 10, 2008
689
0
0
Personally I think boost is overclocking and the 7970 should be given a fair chance at that too since we know it was clocked conservatively. I have no idea how the reviews would be made fair though...like I said, boost has muddied the waters a bit.

Only if the 7970 is overclocked by 50-100 MHz like the "boost" feature of the 680.

Keep in mind the HD 6970's 880MHz clock is actually down-clocked in Metro 2033 with default PowerTune settings. At least Nvidia does not downclock its GTX 680, but rather mildly overclocks it for most games. I'm one of those who appreciates this "upward" rather than "downward" feature for a change.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
It's already been pointed out how CPU turbo boost and GPU Boost are different, i.e. locked vs. variable.

Those who know beg to differ:

http://www.hardocp.com/article/2012/04/04/nvidia_kepler_geforce_gtx_680_overclocking_review

There is a new auto-clocking feature (not overclocking) in the GeForce GTX 680 called GPU Boost. This new auto-clocking feature is exactly that, the video card has smart logic, and clocks itself dynamically. Through hardware monitoring the clock speed is able to be adjusted on the fly to deliver the best performance possible. NVIDIA tells us that 13 different GPU parameters are considered in the clocking and voltage algorithms. GPU Boost is able to raise and lower the GPU voltage as needed to give the GPU higher or lower frequencies, all culminating to the maximum TDP of the video card. It is important to keep in mind that this is not auto-"overclocking" at all, if you think of this as an automatic overclocking technology, you will be mistaken in its intent and function.

GPU Boost will not set a frequency that is outside of the TDP of this video card, it will also only set a voltage that is within a certain limit. The end-user is still able to manually adjust voltage and TDP and GPU offset, equaling total frequency, to actually overclock the GPU beyond factory specifications. While GPU Boost is able to hit a certain frequency on its own, there is certainly more overclocking headroom to be had via voltage tweaking and telling the card to operate over the TDP limit. It sounds complicated, but it is rather easy.

It's that simple...inside rated TDP != overclocking.

Trying to make this out to be any different from Intel's Turbo Boost or AMD's Turbo CORE is false.
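The HardOCP description quoted above (dynamic clocking that stays within the card's TDP, never beyond it) can be pictured as a simple control loop. This is a toy model with made-up numbers, not NVIDIA's actual 13-parameter algorithm:

```python
# Toy model of GPU Boost as HardOCP describes it: step the clock up one
# bin while estimated board power is under TDP, step down when over it,
# and never leave the factory clock range. All constants are illustrative.

TDP_W = 195             # hypothetical rated board power in watts
BIN_STEP = 13           # MHz per boost bin
BASE, CAP = 1006, 1110  # factory base clock and max stock boost

def next_clock(clock_mhz, power_w):
    """One iteration of the boost loop: returns the next clock state."""
    if power_w < TDP_W and clock_mhz + BIN_STEP <= CAP:
        return clock_mhz + BIN_STEP   # headroom left: boost one bin
    if power_w >= TDP_W and clock_mhz - BIN_STEP >= BASE:
        return clock_mhz - BIN_STEP   # over budget: back off one bin
    return clock_mhz                  # hold at the limit

print(next_clock(1058, 150))  # 1071: light load, boosts a bin
print(next_clock(1071, 200))  # 1058: over TDP, backs off a bin
```

Because the loop can never push past the TDP-bounded cap, the quoted article's framing of "auto-clocking, not overclocking" holds in this model.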
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
What if a board partner releases a 7970 clocked at, say, 1200MHz and beats a 680, and reviewers claim it is the fastest card? Is that fair? It is out of the box.

Personally I think boost is overclocking and the 7970 should be given a fair chance at that too since we know it was clocked conservatively. I have no idea how the reviews would be made fair though...like I said, boost has muddied the waters a bit.

You know the answer to this. If a factory-overclocked 7970 @1200MHz out of the box bests a 680 all around, then of course it's the fastest card. Did you believe I would say anything else? As you well know, for the last 4 gens (give or take), fans on the AMD side gave much more credence to price/performance. So "technically" right now, the 680 wins on all metrics that are important according to those past-gen arguments. GPGPU is not important unless you're Bitcoin mining or MilkyWay@Home'ing. ;)
Sure, if an out of the box 7970 is clocked high enough to best a 680, it should have the fastest card badge. I bet it'll have a price tag too.
 
Feb 24, 2009
147
0
0
www.rackmountsales.com
I know we gamers don't really care about this, but it's hard to compare the 7970 and 680 directly. The 7970 isn't geared for gaming only; it simply destroys the 680 in compute performance. The 680 is actually slower than a 580 in many compute scenarios. The architecture is different enough that a respin of code is probably necessary to ensure good performance.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
I know we gamers don't really care about this, but it's hard to compare the 7970 and 680 directly. The 7970 isn't geared for gaming only; it simply destroys the 680 in compute performance. The 680 is actually slower than a 580 in many compute scenarios. The architecture is different enough that a respin of code is probably necessary to ensure good performance.

In what capacity to an end user?
 

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
So "technically" right now, the 680 wins on all metrics that are important according to those past gen arguments.

No argument from me about that. The power consumption is especially attractive for me. I wish it was better in bitcoin mining though, as that is something I use my cards for 24/7 and just leave it running even while gaming actually, which I do very little of nowadays admittedly.

I just wish there was a bit more control of the boost feature so that you could see both sides of the coin with boost off and boost on. Maybe it's the engineer in me talking :).
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
No argument from me about that. The power consumption is especially attractive for me. I wish it was better in bitcoin mining though, as that is something I use my cards for 24/7 and just leave it running even while gaming actually, which I do very little of nowadays admittedly.

I just wish there was a bit more control of the boost feature so that you could see both sides of the coin with boost off and boost on. Maybe it's the engineer in me talking :).

If you *really* want to, you can use a negative offset to lock it down at 1006MHz (but that's going to potentially be a different offset per card).

It's about as real-world as locking a 7970 at 800MHz though (and it's probably a similar performance loss).
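The point that the lock-down offset differs per card follows directly from the arithmetic; a tiny illustration (the sample boost clocks are hypothetical):

```python
# Locking a GTX 680 to its 1006 MHz base clock via a negative offset:
# the offset equals base minus that particular card's observed boost,
# so it varies card to card. Sample boost values are hypothetical.

BASE_CLOCK = 1006  # MHz, advertised base clock

def lock_to_base_offset(observed_boost_mhz):
    """Offset (MHz) that pins this specific card at the base clock."""
    return BASE_CLOCK - observed_boost_mhz

print(lock_to_base_offset(1097))  # -91 for a card that boosts to 1097
print(lock_to_base_offset(1110))  # -104 for a stronger sample
```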
 
Feb 18, 2010
67
0
0
I really think the number of forum threads discussing which card is more powerful between the 680 and the 7970 has gone through the roof. Both of them are great cards, really top of the line from both companies, and yet we sit here and fight over the 10% difference between them. That is rather stupid. My suggestion is:
- 7970 owners should be happy with their cards.
- 680 owners should be happy with theirs.
- for people coming and asking for advice regarding which one of those 2 cards to buy I suggest recommending the cheapest one.
I really don't think anyone will notice a few FPS difference and the majority will not care which consumes more watts than the other or is a compute king.
Overclocking will not be used by 90% of people buying one of these cards so that is pointless as well.
For fanboys, just go with the card of your favorite company.
 

Zargon

Lifer
Nov 3, 2009
12,218
2
76
No, it's not. Stock performance is what you get out of the box. So if AMD rates the HD7970 at 925MHz and MSI sells you a factory-overclocked 7970 at 1070MHz, should we test the MSI card downclocked to AMD factory defaults?
Different situation. The card is advertised as OC'd. We have no problem finding reference-card benches at stock clocks for the 7970 anyway, so it's not at all the same situation. The only issue at hand is people taking [H]'s cards at face "stock card" value and trumpeting them everywhere. It happened a lot in the thread here on it. It doesn't even matter to me as a consumer because, well, I'm going water, so I can expect above-average clocks with low heat.


You make it sound like there is some wild stock variation among GTX680s. It's 1058-1110MHz. Very rarely do they go above that. That's your margin of error. Anything else means someone tweaked the power settings or enabled a manual offset.

@ Computerbase, their factory 680 never even got to 1110MHz, supporting this view.

You can check their review and you'll see the 680 has a 20% lead in some games while the HD7970 leads in other games. Like I said, in games where the 680 leads, a 1280MHz HD7970 still won't catch an overclocked 680. In games where the 7970 leads, the 680 has little chance to overcome the deficit.

Considering 680 has >2x the Tessellation performance and almost 2x the texture performance, if anything 680 will gain over 7970 in next generation games that become more texture and tessellation heavy. Not that this matters since most 680/7970 users will have upgraded anyway.

[Charts: TessMark x32 tessellation and Beyond3D FP16 texture filtering results]


Even from a future-proofing perspective, it's still hard to recommend the 7970.

Then there is PhysX. Most people don't care for it. But if you can get a game to look better, why not right?

Borderlands 2 PhysX

Right now the 680 is:
- Cheaper
- Performs better out of the box / can still overclock another 8-13% to make it competitive against a massively overclocked 7970
- Consumes less power at stock and ~ 90-100W less in overclocked states vs. an overclocked 7970
- Has more features
- EVGA Warranty

There are 2 superior features that count on the 7970 right now:

- GPGPU double precision performance (MilkyWay, Collatz Conjecture, Bitcoin mining, etc.)
- Eyefinity, which allows up to 6 displays in action



CPU boost is still factory defaults, just like GPU Boost is. If GPU Boost was 1.3GHz for one card and 1.1GHz for another card, it'd be worth talking about. As it stands, at stock speeds, GTX680s boost to 1058MHz and at most 4 banks above that. Anything above that means the card was not run at default power settings. This is because the 680 has 4 power boost banks @ 13MHz steps. Max speed is 1110MHz from the factory @ 100% power setting.

I suggest people read reviews in more detail because there is too much misunderstanding about how 680's boost works at stock 100% power band.

I do need to read up on it more, and hope to when I have more time. They are quite impressive cards nonetheless.

That's impossible and unrealistic. You can't expect someone to dyno each vehicle and downrate. Cars are guaranteed to have the minimum amount of HP that they are rated at. Some cars will for sure produce more HP. This is done so that automakers do not get sued for not delivering enough HP. What you are arguing is the fact of life = there is always some small variation among most products. In CPUs its VIDs, in cars, its HP, right now its GPU Boost. The variation only matters if there is a material impact.

In this case, no one here has shown it to be such.



People are using HardOCP's one-off example and ignoring 20-30 other reviews that still show the GTX680 beating the HD7970.

Yes they are, and it's a problem, but how the reviews are being done is making the problem worse. Not that it's not a great review, it is; I merely have this one issue.

Here is Hexus.net review where HD7970 was even overclocked to match GTX680's speeds and still lost.



Not factory 680s. I said a lot of reference 7970s do 1125-1175MHz, while most 680s easily do 1200MHz with manual overclocking. So it's far rarer to get an HD7970 that can do 1250-1280MHz than it is to achieve a 1200+MHz overclock on a 680. Meaning you are playing overclocking roulette, when pretty much most 680s will do 1200MHz+ with a manual offset of 140-150MHz.

Gotcha, that makes more sense with the clarification.

It's 10% faster on average, sure. But in more popular recent games it's more like 15-20% faster. The only reason HD7970 even hangs in there is because of old games such as Metro 2033, AVP and Crysis 1 that offset the average in its favour. The fact that HD7970 costs $40-100 more and consumes more power while requiring a very high 1250+ overclock to beat an overclocked 680 makes it irrelevant right now.

At HardOCP, 70% of hardware enthusiasts picked a reference 680 over a $100 more expensive MSI Lightning.

Our forum is the only forum that keeps having these debates. This is the first time in a long time where the choice couldn't be clearer, since it comes down to price and the games which you play.



Until HD7970 falls in price to $499 for after market versions, it's worse value for most people.
Agree 100%.

I think you have my motives confused. I have no desire to manipulate anything to make the 7970 look better, merely to get benchmarks/reviews to be consistent.

The rest is in bold inside the quotes because it was easier and I am on a time crunch.
 
Last edited:

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
I really don't think anyone will notice a few FPS difference and the majority will not care which consumes more watts than the other or is a compute king.


Where it really makes the most difference is on those edge cases where one card is playable and the other is just barely not. I don't like to dip below 50 fps when I can avoid it, for example.

The other reason we try to look at all the scenarios is because not many of us upgrade every cycle (especially if we're paying $500-$600 a card). The difference between 120 and 140 fps may not be much, but if down the road that turns into 40 vs 50 fps in games in 2 years' time, it's a massive difference.

Sadly, it's often hard to tell. When it was 5870 vs 480, at the time, they looked neck and neck with the 480 being slightly better (for a heat, noise, and lateness tradeoff). But if you look at what happened after they were out a while, you will see which one was clearly the better performing product. We don't have a crystal ball though, so we can never tell, but *usually* the one that seems a little faster has maintained that title with newer games.
 
Feb 18, 2010
67
0
0
Where it really makes the most difference is on those edge cases where one card is playable and the other is just barely not. I don't like to dip below 50 fps when I can avoid it, for example.

The other reason we try to look at all the scenarios is because not many of us upgrade every cycle (especially if we're paying $500-$600 a card). The difference between 120 and 140 fps may not be much, but if down the road that turns into 40 vs 50 fps in games in 2 years' time, it's a massive difference.

Sadly, it's often hard to tell. When it was 5870 vs 480, at the time, they looked neck and neck with the 480 being slightly better (for a heat, noise, and lateness tradeoff). But if you look at what happened after they were out a while, you will see which one was clearly the better performing product. We don't have a crystal ball though, so we can never tell, but *usually* the one that seems a little faster has maintained that title with newer games.

Considering both of them are the highest end from both companies right now, I really think they will last most anyone 2 generations. My last upgrade was to the AMD 4870 from an nVidia 7600, so I know what you're saying.