AMD Radeon 7000-Series 28nm (Southern Islands) | 7990 7970 7870 7770 | Discussion

Page 3 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
Status
Not open for further replies.

cotak13

Member
Nov 10, 2010
129
0
0
AMD is adopting a smart compute approach. Graphics Core Next is a true MIMD (Multiple Instruction, Multiple Data) architecture. With the new design, the company opted for "fat and rich" processing cores that occupy more die space but can handle more data. So comparing the 2048 SPs of the GCN architecture to VLIW-4 is probably meaningless.

In general, comparing cards on paper can work if you are comparing within the same architecture/generation, but it's almost always meaningless across two entirely different architectures imo. What if GCN is 20-30% more efficient than VLIW-4? What about vastly improved geometry/tessellation performance? NV cleans up in Lost Planet 2, HAWX 2, and Crysis 2, partly due to superior tessellation performance. Also, what if AMD introduces a multi-threaded DX11 driver for the HD 7900 series, allowing it to surpass NV in games like Civ5?

All I am saying is these specs appear to be "only" about 50% faster than the HD 6970 (outside of ROPs), but the performance improvement might be far greater, since the specs don't tell us anything about how much better (or worse) GCN is vs. VLIW-4.
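For what it's worth, the on-paper comparison being dismissed here can be made concrete. A quick sketch of peak-FLOPS arithmetic; the 7970's shader count and clock are rumored figures from the thread, not confirmed specs, and peak FLOPS says nothing about real-world efficiency across architectures:

```python
# Back-of-envelope spec comparison (illustrative only; the 7970 numbers are
# rumors, and cross-architecture FLOPS ratios say nothing about efficiency).

def gflops(shaders, clock_mhz, ops_per_clock=2):
    """Peak single-precision GFLOPS: shaders * clock * 2 (multiply-add)."""
    return shaders * clock_mhz * ops_per_clock / 1000.0

hd6970 = gflops(1536, 880)   # VLIW-4, known specs
hd7970 = gflops(2048, 925)   # rumored GCN specs at the time

print(f"HD 6970 peak: {hd6970:.0f} GFLOPS")
print(f"HD 7970 peak: {hd7970:.0f} GFLOPS")
print(f"On-paper ratio: {hd7970 / hd6970:.2f}x")   # ~1.40x on paper
```

Which is exactly the point: the paper ratio is ~1.4x, but the real gap depends entirely on how GCN's per-FLOP efficiency compares to VLIW-4's.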

I think we have to wait and see. I personally don't expect too much from GCN because it's a new architecture. It could be good or bad; it really is a mystery until it gets reviewed. Frankly, I am more concerned about it underperforming, as GCN is AMD's answer to Fermi on the compute side. For pure graphics it can be argued that VLIW was the right way. And when you build something that has to be good at two things, typically it ends up being less good at either than a chip that's designed for just one kind of operation. I really have this feeling that the new GCN chips are going to lose the kind of power efficiency that the older VLIW chips had over NVIDIA. It's a lot more hardware to do basically the same job.
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
How do you get 64 or 60 ROPs with a 384-bit bus?

The only answer I've got is that the ring bus is back (from Beyond3D).

Or that there are 2 ROPs per CU... (I will search more.)
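The puzzle here is that ROPs have traditionally been tied to the 64-bit memory channels, and a 384-bit bus gives six channels, which 64 doesn't divide evenly. A sketch of the arithmetic behind the speculation (the ROP count is from the rumor, not confirmed):

```python
# Sketch of the mismatch being discussed (numbers from the rumor, not confirmed).
bus_width = 384
channel_width = 64                      # one 64-bit memory controller per channel
channels = bus_width // channel_width   # 384 / 64 = 6 channels

rops = 64
print(rops % channels)  # 64 % 6 = 4: ROPs can't be split evenly per channel
# A non-zero remainder is why people speculated about a crossbar/ring bus
# decoupling the ROPs from the memory controllers.
```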
 

ManBearPig

Diamond Member
Sep 5, 2000
9,173
6
81
Ugh... anyone know how much better the 7950 is than the 6950, how much more it'll cost, or when it'll release? I have a month to return this 6950 2GB if necessary.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Ugh... anyone know how much better the 7950 is than the 6950, how much more it'll cost, or when it'll release? I have a month to return this 6950 2GB if necessary.
Yes, we all know exactly how much better it is, but we decided not to let you know...
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I think we have to wait and see. I personally don't expect too much from GCN because it's a new architecture. It could be good or bad; it really is a mystery until it gets reviewed. Frankly, I am more concerned about it underperforming, as GCN is AMD's answer to Fermi on the compute side. For pure graphics it can be argued that VLIW was the right way. And when you build something that has to be good at two things, typically it ends up being less good at either than a chip that's designed for just one kind of operation. I really have this feeling that the new GCN chips are going to lose the kind of power efficiency that the older VLIW chips had over NVIDIA. It's a lot more hardware to do basically the same job.

The chip just might be much larger in size, as in 430-480mm^2. Through architectural improvements to the ROPs, TMUs and SPs, the efficiency might be just as good as VLIW-4's. But I fully expect the 7970 card to have a TDP of 190-200W. That's fine with most of us, since we are talking about a high-end card. Honestly, if they need to go with a 250W TDP and deliver 80% more performance than a 6970, that's cool too. High-end cards should prioritize performance over efficiency. A 28nm-shrunk 6970 in the form of the 7870 should address efficiency concerns for those who want a 120W or so card. Also, Fermi didn't have a lot of problems providing excellent compute and graphical performance; I think AMD's cards will do just fine in this regard. I just hope the performance is at least 30% faster than a GTX 580. If Kepler is delayed until late 2012, AMD can release a refresh of the HD 7970 series and add another 15% performance to combat Kepler.
 
Feb 19, 2009
10,457
10
76
2x 6970 runs 1080p x 3 monitor BF3 fine if you turn off MSAA, fine as in ~50 fps.

If a single 7970 is ~180% of a 6970, a single 7970 is already overkill for pretty much everything UNLESS you game on 3 monitors; then you'd need CF 7970 3GB and can enjoy games at over 100 fps on 3 monitors... that's some serious power.
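That projection can be put into rough numbers. Everything here is a guess from the thread (the observed CrossFire fps, the ~180% rumor, and an assumed CF scaling factor), not a benchmark:

```python
# Rough fps projection from the post's assumptions (all numbers are guesses
# taken from the thread, not measurements).

cf_6970_fps = 50.0      # 2x 6970, 3x 1080p, no MSAA (poster's observation)
cf_scaling = 1.8        # assumed CrossFire scaling factor for two cards

single_6970 = cf_6970_fps / cf_scaling    # implied single-card fps
single_7970 = single_6970 * 1.8           # "~180% of a 6970" rumor
cf_7970 = single_7970 * cf_scaling        # two 7970s, same scaling assumption

print(f"Projected CF 7970: ~{cf_7970:.0f} fps")  # ~90 fps under these guesses
```

So under these assumptions it lands around 90 fps rather than "over 100"; the conclusion is sensitive to the scaling factor you assume.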

I just hope they don't make them longer than 10.5 inches 'cos it ain't going to fit my case. lol
 

alcoholbob

Diamond Member
May 24, 2005
6,296
342
126
But can it run Crysis?

Depends what you mean by "run."

Minimum fps in Crysis is heavily CPU-dependent, whereas average fps is GPU-dependent. Even a GTX 580 can hit over 35 fps average now.

Minimum fps is pretty much linear with clock speed, and the game only effectively uses 2 cores, so until Intel comes out with a CPU that's clock-for-clock 4 times faster than an i7-9xx, then no, nothing will ever "run" Crysis if you want a minimum fps above 30.
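The "4 times faster" claim reduces to simple linear scaling. A sketch with an assumed baseline minimum fps, since the post gives no actual numbers; the linearity itself is the poster's claim, not a measured fact:

```python
# The "4x clock-for-clock" claim as arithmetic. The baseline minimum fps is
# a hypothetical number (the post gives none); linear scaling is the claim.

base_clock_ghz = 3.33    # i7-975-class clock, for illustration
base_min_fps = 7.5       # hypothetical worst-scene minimum on that CPU

k = base_min_fps / base_clock_ghz   # fps per GHz, if scaling is linear
clock_needed = 30.0 / k             # clock required for a 30 fps minimum
print(f"Clock needed: {clock_needed:.1f} GHz "
      f"({clock_needed / base_clock_ghz:.0f}x faster)")
```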
 

videoclone

Golden Member
Jun 5, 2003
1,465
0
0
2x 6970 runs 1080p x 3 monitor BF3 fine if you turn off MSAA, fine as in ~50 fps.

If a single 7970 is ~180% of a 6970, a single 7970 is already overkill for pretty much everything UNLESS you game on 3 monitors; then you'd need CF 7970 3GB and can enjoy games at over 100 fps on 3 monitors... that's some serious power.

I just hope they don't make them longer than 10.5 inches 'cos it ain't going to fit my case. lol

Yeah, a 7990/7970 combo would probably be a little overkill, and 1x 7990 or 7970 will be more than enough to max out any game.

I guess in a year or so the money saved could be spent on an upgrade to an HD 8970 graphics card. :) hehe
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Depends what you mean by "run."

Minimum fps in Crysis is heavily CPU-dependent, whereas average fps is GPU-dependent. Even a GTX 580 can hit over 35 fps average now.

Minimum fps is pretty much linear with clock speed, and the game only effectively uses 2 cores, so until Intel comes out with a CPU that's clock-for-clock 4 times faster than an i7-9xx, then no, nothing will ever "run" Crysis if you want a minimum fps above 30.

Not sure how you arrived at this conclusion. When I swapped out my GTX 470 @ 760MHz for an HD 6950 2GB @ 6970 speeds, my min fps in Crysis improved tremendously. The game is almost entirely GPU-limited on a modern i3/i5/i7 CPU. Lowering the clock speed of my Core i7 860 from 3.9GHz to 2.8GHz had no effect on performance.

You can see below that Crysis responds extremely well to a faster GPU in regard to min frames:

[chart: Crysis minimum-fps scaling across GPUs]
 

Subyman

Moderator, VC&G Forum
Mar 18, 2005
7,876
32
86
Not sure how you arrived at this conclusion. When I swapped out my GTX 470 @ 760MHz for an HD 6950 2GB @ 6970 speeds, my min fps in Crysis improved tremendously. The game is almost entirely GPU-limited on a modern i3/i5/i7 CPU. Lowering the clock speed of my Core i7 860 from 3.9GHz to 2.8GHz had no effect on performance.

You can see below that Crysis responds extremely well to a faster GPU in regard to min frames:

[chart: Crysis minimum-fps scaling across GPUs]

FWIW, that's Gamer quality (not that Ultra requires more CPU, though).
 

Grooveriding

Diamond Member
Dec 25, 2008
9,110
1,260
126
There might be something to the CPU improving minimums. Per [H]'s CPU scaling in their tri-SLI comparison:

[chart: [H] tri-SLI CPU scaling results]


This could likely be unrelated, though, and more a product of the faster CPU opening up multi-GPU scaling, since there are increases in all the numbers.

My motherboard is finally arriving tomorrow, so I should have my results by the end of the weekend, comparing scaling between my i7 920 and my SB-E.
 

96Firebird

Diamond Member
Nov 8, 2010
5,714
316
126
I just hope they don't make them longer than 10.5 inches 'cos it ain't going to fit my case. lol

This. When I was watching a video about my case before I bought it, they gave an example of how much space was needed for the AMD cards. I was shocked; the reviewer had to take out a fan so the card would fit. I haven't seen one in person, so I was a little surprised. And I know a lot of cases don't have as much room as mine, but then again I suppose many enthusiast cases do.
 

cotak13

Member
Nov 10, 2010
129
0
0
The chip just might be much larger in size, as in 430-480mm^2. Through architectural improvements to the ROPs, TMUs and SPs, the efficiency might be just as good as VLIW-4's. But I fully expect the 7970 card to have a TDP of 190-200W. That's fine with most of us, since we are talking about a high-end card. Honestly, if they need to go with a 250W TDP and deliver 80% more performance than a 6970, that's cool too. High-end cards should prioritize performance over efficiency. A 28nm-shrunk 6970 in the form of the 7870 should address efficiency concerns for those who want a 120W or so card. Also, Fermi didn't have a lot of problems providing excellent compute and graphical performance; I think AMD's cards will do just fine in this regard. I just hope the performance is at least 30% faster than a GTX 580. If Kepler is delayed until late 2012, AMD can release a refresh of the HD 7970 series and add another 15% performance to combat Kepler.

I think we should all wait and see.

I am more of the personal, unsubstantiated opinion that the GCN chips are going to be similar to whatever NVIDIA puts out this generation. And it's likely that NVIDIA is keeping pace with AMD on the new 28nm chips.

I say this because of the rumored Apple switch back to NVIDIA. If NVIDIA had slower chips, or if they were having issues with delivery of their 28nm parts, I doubt one of the most discerning computer makers would pick them over AMD.
 
Feb 19, 2009
10,457
10
76
This. When I was watching a video about my case before I bought it, they gave an example of how much space was needed for the AMD cards. I was shocked; the reviewer had to take out a fan so the card would fit. I haven't seen one in person, so I was a little surprised. And I know a lot of cases don't have as much room as mine, but then again I suppose many enthusiast cases do.

AMD always over-engineers their cards: they cram in extra MOSFETs and make their cards way too long, more than necessary. Their insistence on the blower fan annoyed me when I had a full tower with fans, but now I'm on an mITX build in a tiny case, and it's an awesome design to push all the heat from the GPU out of the case rather than into it. If I had a GTX 570, my rig would melt in summertime here.

I guess it's a compromise.
 
Feb 19, 2009
10,457
10
76
I think we should all wait and see.

I am more of the personal, unsubstantiated opinion that the GCN chips are going to be similar to whatever NVIDIA puts out this generation. And it's likely that NVIDIA is keeping pace with AMD on the new 28nm chips.

I say this because of the rumored Apple switch back to NVIDIA. If NVIDIA had slower chips, or if they were having issues with delivery of their 28nm parts, I doubt one of the most discerning computer makers would pick them over AMD.

It may have something to do with Apple, and especially NV, having first "dibs" on wafers at TSMC; it's a long-standing tradition. Obviously, if 28nm is going to be crap (it is), Apple may feel they have a better chance of their demands being met by NV.

IVB is not coming out for a while; there's time for Kepler and IVB to ship together for a win. Does Apple need 28nm GPUs in early Q1 2012? Prolly not.
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,883
1,096
126
I want to upgrade my 5850.

Just waiting to see if the 7870 or 7950 is the way to go.
 

nanaki333

Diamond Member
Sep 14, 2002
3,772
13
81
As long as it doesn't turn into a Bulldozer of GPUs, I'll be more than happy to upgrade my 6970 CrossFire and my wife's 6970/6950 CrossFire to two 7970s.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
There might be something to the CPU improving minimums. Per [H]'s CPU scaling in their tri-SLI comparison:

Ya, it only takes 3x GTX 580s in SLI to shift the bottleneck to the CPU. :biggrin:
Of course, every game eventually becomes CPU-dependent once you add enough graphics power. But for most of us, 3x $400-500 GPUs are outside our price range, which pretty much keeps games GPU-dependent.
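The point that every game goes CPU-limited once you stack enough GPUs can be sketched with a simple bottleneck model, where frame time is set by whichever of the CPU or the (multi-)GPU side is slower. All the numbers here are illustrative assumptions, not benchmarks:

```python
# Simple bottleneck model: effective fps is limited by the slower of the
# CPU's per-frame cost and the (scaled) GPU per-frame cost. Illustrative only.

def fps(cpu_ms, gpu_ms, n_gpus, scaling=0.9):
    """Effective fps with n GPUs; GPU time shrinks with imperfect scaling."""
    effective_gpu = gpu_ms / (1 + (n_gpus - 1) * scaling)
    return 1000.0 / max(cpu_ms, effective_gpu)

cpu_ms, gpu_ms = 12.0, 30.0   # assumed per-frame costs for a demanding game
for n in (1, 2, 3):
    print(n, round(fps(cpu_ms, gpu_ms, n), 1))
```

With these assumed costs, the first two GPUs scale, but the third runs into the 12 ms CPU wall (capping fps at ~83 regardless of further GPU power), which is the tri-SLI situation in the [H] chart.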

My motherboard is finally arriving tomorrow, so I should have my results by the end of the weekend, comparing scaling between my i7 920 and my SB-E.

:thumbsup: It seems most 3930/3960s max out at 4.6GHz (at least on our forum). I have a feeling you'll be upgrading to IVB-E next year.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I think we should all wait and see.

I am more of the personal, unsubstantiated opinion that the GCN chips are going to be similar to whatever NVIDIA puts out this generation. And it's likely that NVIDIA is keeping pace with AMD on the new 28nm chips.

I say this because of the rumored Apple switch back to NVIDIA. If NVIDIA had slower chips, or if they were having issues with delivery of their 28nm parts, I doubt one of the most discerning computer makers would pick them over AMD.

It appears, writes Feeney, that there has been not a technical slip but a cooling of the relationship between Apple and AMD:

"Contacts confirm that AMD GPU performance is not the issue; this is some sort of a customer-supplier tiff, and Apple is known to be very difficult with its suppliers."
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
It appears, writes Feeney, that there has been not a technical slip but a cooling of the relationship between Apple and AMD:

"Contacts confirm that AMD GPU performance is not the issue; this is some sort of a customer-supplier tiff, and Apple is known to be very difficult with its suppliers."

And they are quoting Charlie as to the why. It never changes. With NVIDIA it's always design problems and inept engineering (either way), and when they get the contracts and gain penetration, it's because they are selling at a loss.
As to the timing of higher-end 28nm beyond mobile (AMD has publicly shown ES samples running a couple of times): I think we will also see an effort to clear out their current offerings, because those would instantly be worth less, as is the conjecture behind the GTX 580 price-cut logic.
 