[H] 1300mhz 7970 Versus GTX580

Page 3 - AnandTech Forums

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
This is bonkers. Why don't they compare a good but easy overclock on both cards?


It seems like HardOCP usually only tests two or three cards at a time, probably so their graphs are easily readable.

I think some of you are missing the point of the review: it covers a non-reference, overclocked version of a 7970. They push the card to what is likely its limit, then compare that to the stock-clocked 7970 and the previous generation's fastest single GPU to give you a better idea of how good the performance is.

That article is not a shootout between an insanely overclocked 580 and a 7970 overclocked as far as it can go... not sure why some of the Nvidia guys keep trying to make a fight where there is none to be made.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
I think this model is the one I may purchase in a month or two.

The thing that really stood out for me was the temperature: 67C at only 34% fan speed. Hell, my system has been up for 1 day 17 hours, with at least 6 hours of BF3 on ultra, and the highest temp on my EVGA GTX 560, according to MSI Afterburner, has been 76C at 60% fan speed :eek:

Never been a big fan of center-mounted fan cards dumping hot air into the case, but that Gigabyte card is an impressive piece, both in performance and thermals.
 

JBT

Lifer
Nov 28, 2001
12,095
1
81
It's too bad the minimums don't change much from the OC'ed card to the stock one. It's great that the averages are up, but if you still drop from 60 fps to 22-30 fps, it's noticeable.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,107
1,260
126
So this is most certainly not a reference PCB. I wonder if a vendor could make a custom card with more robust circuitry that would still fit a design compatible with heatsinks/blocks intended for reference cards.

You can see on the reference 7970 PCB that one power phase is left empty. That could be filled on a custom card, which would make life easier for people looking for custom cards that can use reference-design 7970 blocks.

[image: EBCB2B35-EEB4-48E5-B285AEFB9CDC9618.jpg]
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
It's too bad the minimums don't change much from the OC'ed card to the stock one. It's great that the averages are up, but if you still drop from 60 fps to 22-30 fps, it's noticeable.

If you look at most of the minimums, they happen at a very specific place. The Skyrim one, for example, is just a few frames out of the whole chart. I assume something happens in that transition, like going from inside a building to outside.

This is why it's awesome that they plot the charts. You don't have to speculate... you can SEE what's happening and make a pretty educated guess at the causes.
 

96Firebird

Diamond Member
Nov 8, 2010
5,709
316
126
If you look at most of the minimums, they happen at a very specific place. The Skyrim one, for example, is just a few frames out of the whole chart. I assume something happens in that transition, like going from inside a building to outside.

Or a CPU bottleneck at that instance.
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
This is bonkers. Why don't they compare a good but easy overclock on both cards?

Because they test at the settings someone might actually use on each card.

Sure, it's a little subjective. What they call playable you might not, or what they'd spend their "performance budget" on might differ from what you would, but it does add something tangible that the basic "AVG FPS" rank charts don't really give you.

Sure, card X may be 20% faster than card Z, but does that mean you can run a higher res, or does it still have hiccups that keep you at the same res as the slower card? Who the hell knows from a standard AVG FPS chart review.

The subjectivity in [H] reviews means they'll never be perfect, but IMO it's very useful to have [H] there, not doing what 17 other sites are doing. If EVERY site did things like [H] it would suck, but having one site out there presenting info in a different way is pretty nice.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
It's too bad the minimums don't change much from the OC'ed card to the stock one. It's great that the averages are up, but if you still drop from 60 fps to 22-30 fps, it's noticeable.

good response found elsewhere:

It's because of how the benchmark suites work. Batman: AC and Skyrim both have menus or loading screens capped at 30 fps, and the "in between" screens in Batman: AC are framerate-limited. That obviously skews the final minimum-fps result, but it does not translate into gameplay. Another great example is Metro 2033: if you look at ANY Metro 2033 benchmark, the minimum fps is something stupid like 10 fps. That's because the loading screen at the start always has a transition where there is a framerate cap.

That doesn't show up during actual gameplay; those transition screens, like in the Batman: AC or Metro 2033 benchmarks, are only seen during canned benchmark runs. BF3 is another one: the loading screens are capped at 30 fps, and FRAPS will count that as 30 fps in benchmark tests.

However, if you look at the graph:

[image: 1328588884f5KPiYuIly_5_4.jpg]


You'll see it's a half-second blip where a new area is loaded (and it happens on all GPUs). You can see this yourself: if you run the in-game Batman: AC benchmark in DX11 mode with high tessellation, the minimum fps will always be 30. In DX11 very-high-detail Metro 2033, your minimum fps will always be 10-15 because of the loading transition at the start of the benchmark.
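The loading-screen effect described above is easy to sketch in code: a single capped sample drags the raw minimum down, while a percentile-style minimum ignores the blip. This is only an illustrative sketch with made-up numbers, not [H]'s or FRAPS's actual methodology; `summarize` is a hypothetical helper.

```python
# Sketch (hypothetical numbers): why a raw "minimum fps" can mislead
# when a framerate-capped loading screen sits inside the benchmark run.

def summarize(fps_samples):
    """Return (average fps, raw minimum, 1st-percentile minimum)."""
    ordered = sorted(fps_samples)
    avg = sum(fps_samples) / len(fps_samples)
    raw_min = ordered[0]
    # Percentile-style minimum: skip the bottom 1% of samples, so a
    # half-second loading blip no longer defines the "minimum".
    idx = max(1, round(len(ordered) * 0.01))
    pct_min = ordered[idx]
    return avg, raw_min, pct_min

# 199 seconds of steady 60 fps gameplay plus one 30 fps loading-screen
# sample, like the capped transitions in Batman: AC or Metro 2033.
samples = [60.0] * 199 + [30.0]
avg, raw_min, pct_min = summarize(samples)
# The raw minimum reports the blip; the percentile minimum reports the
# actual gameplay floor.
```

The percentile cutoff (1% here) is arbitrary; the point is only that a one-off capped frame should not be read as in-game performance.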
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Loading screen again?

[image: 1328588884f5KPiYuIly_6_4.jpg]


The average is near 60 fps, but it still spends a large portion of the test well below 60 fps.

1600p still requires CF/SLI. Hopefully Nvidia can bring a product that, once overclocked well past "avg", doesn't still require two cards for that res.

[H] does a good job of marketing for AMD, though. Max playable showed the 580 and 6970 getting almost exactly the same fps (same settings), while in apples-to-apples they did one run with the 580 and another with the 6970, and the 580 just happened to be 15% faster there.
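The point about an average near 60 fps hiding lots of time below 60 fps can be quantified from the same kind of frame-rate trace. A minimal sketch with made-up numbers; `share_below` is a hypothetical helper, not anything from the review.

```python
# Sketch (hypothetical numbers): an average of 60 fps can still hide
# a large share of the run spent below 60 fps.

def share_below(fps_samples, threshold=60.0):
    """Fraction of samples strictly below the threshold."""
    below = sum(1 for f in fps_samples if f < threshold)
    return below / len(fps_samples)

# Half the run at 45 fps and half at 75 fps averages exactly 60 fps,
# yet half the run sits below 60 fps.
samples = [45.0] * 100 + [75.0] * 100
avg = sum(samples) / len(samples)
below = share_below(samples)
```

This is why the plotted frame-rate-over-time charts in the review are more informative than a single average.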
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Is this still a "thing"? I've seen some pretty scathing AMD reviews/articles at HardOCP.

Also, which chart are you referring to in the 2nd part of your post :D (just curious)
 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
I was playing BF3 with an HD6850 and averaging ~35fps with no real complaints (with decent quality settings, but not all ultra).

Just because you think in your head that you need 60fps to enjoy a game doesn't mean it's actually required.

1600p doesn't "require" SLI/Crossfire to be enjoyable.
It requires SLI/Crossfire to maintain ~60fps at all times with maximum settings. They are not even remotely the same thing.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
I love how some people unsuccessfully pick at anything (AMD, the 7970, or [H]) to try to heal their bruised egos. It's a great review and an awesome card; go cry somewhere by yourself and not on the forums. :rolleyes:

I think Gigabyte is offering a nice alternative to the Asus 7970 DirectCU II, and either is an excellent choice if you want ultimate performance with cool and silent operation. Now it'd be nice if any place could keep them in stock. :eek:
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
BF3 is at the bottom; they do two apples-to-apples runs. The top has max playable, which showed almost no difference (no AA), while apples-to-apples was done with 2xAA and produced a 15% performance difference where there was none before.

Still nice to see a real 1300MHz performance review, not the fake ones others tried to pass off as legit.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
BF3 is at the bottom; they do two apples-to-apples runs. The top has max playable, which showed almost no difference (no AA), while apples-to-apples was done with 2xAA and produced a 15% performance difference where there was none before.

Still nice to see a real 1300MHz performance review, not the fake ones others tried to pass off as legit.

The "max playable" usually has different settings for both cards though, right? For instance, in a "max playable" comparison between a 6970 and a GTX 580, they would have similar framerates, except the GTX 580 would have more eye candy turned on. I'm still confused about which graph you're referring to, though o_O
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
No difference:
[image: 1328588884f5KPiYuIly_6_1.gif]


Bam: 2xAA trashes the 2GB 6970, allowing the 1.5GB 580 to open up a near-15% lead over "max playable".

[image: 1328588884f5KPiYuIly_6_4.jpg]

[image: 1328588884f5KPiYuIly_6_5.jpg]



Edit: I believe this is what the "butthurts" are trying to say: http://www.anandtech.com/bench/Product/304?vs=305

The 7970 is a better card for sure: faster, better power/performance, better thermals, good overclocking headroom over stock... What else would you expect going from 40nm down to 28nm, though? That's a huge die shrink.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,107
1,260
126
Oh, get over it. The 7970 is laying waste. You can't pick it apart, no matter how unpalatable you might find it.

I don't get the attacks on [H] because one cannot stomach the stellar performance of an AMD card. They just took AMD to task for CF driver problems in a front-page article. It's silly to attack benchmarks like this because you don't like the results; it's not going to change how well the card performs.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
You're obviously misreading what I'm saying.

But I think most people are, so either I'm doing something wrong... or everyone is missing the entire point.

Either way it doesn't really matter. Your reasoning on why [H] may or may not be in AMD's pocket is really poor, though. Stating things that are blatantly obvious vs. using misleading graphs are two different things. If [H] had never done the i5-2500K vs. 8150 SLI review, would nobody have known how poorly Bulldozer performs? No, of course not... it's common knowledge, just like AMD's poor driver support is. Stating something that is common knowledge isn't detracting from AMD at all; it just makes it easier for them to keep making misleading graphs and doing awful stock-vs-max-overclock reviews.

I could pull up some stock 7970 reviews @ 1080p and easily eclipse that performance by 70% or more, but who would that help?
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Every GTX 580 he reviewed got a gold award, while no 6970s did, IIRC... You're alright in my book, but I can't support you on this one, balla :(
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
Every thread on the 7970 has BallaTheFeared complaining about something...
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
You're obviously misreading what I'm saying.

But I think most people are, so either I'm doing something wrong... or everyone is missing the entire point.

Either way it doesn't really matter. Your reasoning on why [H] may or may not be in AMD's pocket is really poor, though. Stating things that are blatantly obvious vs. using misleading graphs are two different things. If [H] had never done the i5-2500K vs. 8150 SLI review, would nobody have known how poorly Bulldozer performs? No, of course not... it's common knowledge, just like AMD's poor driver support is. Stating something that is common knowledge isn't detracting from AMD at all; it just makes it easier for them to keep making misleading graphs and doing awful stock-vs-max-overclock reviews.

I could pull up some stock 7970 reviews @ 1080p and easily eclipse that performance by 70% or more, but who would that help?


It's not unusual for two mid-range to performance-level cards to outperform an expensive single GPU. Two GTX 460s were often better than a 5870 or a GTX 480, and two 6950s are well faster than a GTX 580. That's nothing new. But multi-GPU isn't the answer for many people: it's generally louder, some people are sensitive to microstutter, and the scaling can be pretty poor at times, or even non-existent. I've had multi-GPU and I liked it, but it's not for everyone, and two performance-level cards beating the fastest single GPU isn't anything new... the fastest single GPU is priced as the fastest for a reason.
 

kami

Lifer
Oct 9, 1999
17,627
5
81
Another "next-gen card beats last-gen card" thread?


This is like someone starting a "GTX 680 vs. HD 6970" thread come April.
 

CKTurbo128

Platinum Member
May 8, 2002
2,702
1
81
So this is most certainly not a reference PCB. I wonder if a vendor could make a custom card with more robust circuitry that would still fit a design compatible with heatsinks/blocks intended for reference cards.

You can see on the reference 7970 PCB that one power phase is left empty. That could be filled on a custom card, which would make life easier for people looking for custom cards that can use reference-design 7970 blocks.

[image: EBCB2B35-EEB4-48E5-B285AEFB9CDC9618.jpg]

Yup. The Gigabyte PCB is even smaller than the standard HD 7970's, measuring 10.5" in length, despite Gigabyte's website listing it at 285mm (~11.2"). Kyle of [H] confirmed this.