AMD HD7*** series info


LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Ya, pretty much with some minor changes (HDMI 1.4, UVD3.0, some reduction in unneeded shaders / TMUs). But my main concern is that AMD is simply looking to shift VLIW-4 HD6950/6970 chips to 28nm HD7850 / 7870 chips. Sure, they might get more than a 50% reduction in power consumption, but that means almost no performance improvement. Is that what the consumer wants for discrete desktop GPUs? I think 20-30% more performance in the mid-range at a 170-180W power envelope would have been far preferable, imo. It's good to keep increasing mid-range performance since a lot of buyers want these cards.

I'm sorry, but what are you ranting on about? They're giving people yesterday's Enthusiast card performance at a Performance card price with a Mainstream card power consumption.

The Radeon HD 6970 is 25% faster than the Radeon HD 6870, so it's not "almost no performance improvement". In any case, there's probably still a performance improvement in comparison to the 6970. I'd say around 35% faster than the 6870 would be right.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Not all GTX 460's reach 900MHz, but most samples should. Same for the HD 6850 at 1000MHz. Some 6850s reach 1050MHz, yet you don't see me arguing over them as there are too few of them; same as GTX 460s reaching 950MHz.

Your games, rebates and whatnot argument is meaningless. Like I've said a million times, you still have to pay the upfront price, and whether a game comes with the card depends on the manufacturer.

By our own poll here, only 19% (5/26) (removing Troll Trolling's votes since he just voted for every option :rolleyes:) chose "over 880MHz". So, no, it's not even close; most samples absolutely won't reach 900. 42% (11/26) won't even do 850MHz, which has been touted as virtually automatic. This is an "urban legend". It's been repeated so often that it's just accepted as true, when it's not.

http://forums.anandtech.com/showthread.php?t=2128805&highlight=

The GTX-460 1GB and the HD-6850 are too close to call to choose one over the other on frames/sec alone. Depending on the benchmarks/games chosen, it could be skewed either way. Both will O/C well, but you can't tip it one way or the other there either. They are both good O/C'ers. The 6870 is a faster card. Sure, you might find a golden sample of one of the cheaper cards that will be faster. Our own poll, though, shows that it's by no means something you can bank on.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
By our own poll here, only 19% (5/26) (removing Troll Trolling's votes since he just voted for every option :rolleyes:) chose "over 880MHz". So, no, it's not even close; most samples absolutely won't reach 900. 42% (11/26) won't even do 850MHz, which has been touted as virtually automatic. This is an "urban legend". It's been repeated so often that it's just accepted as true, when it's not.

http://forums.anandtech.com/showthread.php?t=2128805&highlight=

The GTX-460 1GB and the HD-6850 are too close to call to choose one over the other on frames/sec alone. Depending on the benchmarks/games chosen, it could be skewed either way. Both will O/C well, but you can't tip it one way or the other there either. They are both good O/C'ers. The 6870 is a faster card. Sure, you might find a golden sample of one of the cheaper cards that will be faster. Our own poll, though, shows that it's by no means something you can bank on.

The poll isn't really good because of how it's worded.

In any case, nothing really changes. If we're to say most GTX 460s reach 850MHz and most HD 6850s reach 950MHz, the landscape remains the same.

I agree with your overall take on it.
 
Last edited:
Feb 19, 2009
10,457
10
76
http://www.rambus.com/us/news/press_releases/2011/110830.html

Looks like GF will be making the 28nm RAM for the 79xx. I don't know much about the benefits of XDR2 vs GDDR5; would someone care to elaborate?

Guys, stop arguing over old tech; it doesn't matter anymore. Also, your benches aren't correct due to big driver improvements over the months following the 68xx and 69xx releases.

I'm speculating that Tahiti XT will be ~75% of a 6990, or a 50% increase over a 6970. The die size won't be as big as Cayman XT, so there's no chance of a doubling of performance. If the 64 ROPs are true, the 79xx series will be beastly at 1600p or multi-monitor res, and it's totally expected for AMD to focus on high-res performance to continue the trend of the 69xx.

Two 190W 7970s can go into a 7990 and make a very fast card in the 300W TDP range. Hence, we won't have the crazy power use scenario of the 6990, or worse, the 590.
 

james1701

Golden Member
Sep 14, 2007
1,791
34
91
So, with these power figures, what kind of power supplies are we talking about for pushing a couple of 7970s or even a couple of 7990s? 1000 watts or a little less for the 7990s, with a 650 for the 7970s? With SB and everything else using little power, are we looking at the start of buying small power supplies for high-end setups?
 
Last edited:

trollolo

Senior member
Aug 30, 2011
266
0
0
In single card setups, sure. But for running 2 in CrossFire, I imagine we'll still need large PSUs.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
In single card setups, sure. But for running 2 in CrossFire, I imagine we'll still need large PSUs.

Eh, not really. You don't need anything more than a high-quality 650W PSU for running 2x GTX 570s, even if both the cards and the CPU are OCed (as long as it's not a Thuban or LGA 1366 Nehalem, that is).
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
For all of those people who refuse to admit it, or can't understand it, efficiency is very, very important. It matters on many levels. Lower power consumption means smaller PSUs, cheaper cooling, less material, less surrounding structure, cheaper packaging, shipping, and so on. Smaller chips mean better yields and lower overall costs. While the costs on a single computer for you or me might not be too big of a consideration, to an OEM saving $20... $50... $100 on every build is freakin huge. I remember when appliances had 10ft power cords standard. Now they're 6ft. Just that 4ft of wire is enough of a savings (a few cents, maybe) to be taken advantage of.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
I'm with Russian on wanting to see big gains when new cards come out on a new node. The 5870 delivered; it was as fast as a GTX 295, and I think a reasonable expectation is for a new single-GPU card on a new node to be as fast as the last node's dual-GPU cards. There has been nothing exciting released since the 5870, just minor refreshes.

If the 7970 is in 6990 territory it will be good enough for me. As for the 7870, if it is as fast as a 6970/580 and costs $250, it will do what it needs to do: deliver last gen's top-tier performance to the mainstream buyer's market.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'm sorry, but what are you ranting on about? They're giving people yesterday's Enthusiast card performance at a Performance card price with a Mainstream card power consumption.

The Radeon HD 6970 is 25% faster than the Radeon HD 6870, so it's not "almost no performance improvement".

HD6970 is only 15% faster than HD5870. So I don't know how you can be happy with a 28nm GPU with speed ~ HD6970 for ~$200-250. I expect way more since it's been 2 years since the HD5850/5870. AMD should have 75-100% more performance than the HD5850 at $269 around September 2011. A 24-month timeframe is a long time to still be stuck at the HD6970 performance level for mid-range, because HD6970 performance was a sad increase from the HD5870 to begin with!

For all of those people who refuse to admit it, or can't understand it, efficiency is very, very important. It matters on many levels. Lower power consumption means smaller PSUs, cheaper cooling, less material, less surrounding structure, cheaper packaging, shipping, and so on. Smaller chips mean better yields and lower overall costs. While the costs on a single computer for you or me might not be too big of a consideration, to an OEM saving $20... $50... $100 on every build is freakin huge. I remember when appliances had 10ft power cords standard. Now they're 6ft. Just that 4ft of wire is enough of a savings (a few cents, maybe) to be taken advantage of.

Dude, dude!! (from Big Lebowski), do you realize what happened in the last 10 years of GPU progress? Did we not see 75-100% performance increases every 18-24 months? We sure did. I agree that efficiency is very important, but more so for the mobile market. This is the desktop space, where users who buy $350-500 discrete GPUs have solid power supplies and good case airflow to begin with.

Also, I am not saying GPUs should go to 350-400W. HD6970 works perfectly fine with a 250W TDP using a standard reference design cooler that probably costs $30-40 at most.

You are telling me you'd take a 190W HD7970 with 2048 shaders over one with 250W and 2560 ALUs? Why the limit to 190W all of a sudden for their top-end offering? They can focus on lower power consumption for sub $150 parts, but top-end cards should not be artificially limited to such a low TDP as 190W imo.
 
Last edited:

chihlidog

Senior member
Apr 12, 2011
884
1
81
If the only advantage is in power consumption, then I will still have no real reason to upgrade. The only real upgrade for my 2 year old 5850 right now is a 6970. Power consumption doesn't concern me - what's it going to save me, 3 dollars a month?

Based on the 7970 specs, though, if those are true, the advantage isn't only in power consumption by a long shot.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
HD6970 is only 15% faster than HD5870. So I don't know how you can be happy with a 28nm GPU with speed ~ HD6970 for ~$200. I expect way more since it's been 2 years since the HD5850/5870. AMD should have 75-100% more performance than the HD5850 at $269 around September 2011. A 24-month timeframe is a long time to still be stuck at the HD6970 performance level for mid-range, because HD6970 performance was a sad increase from the HD5870 to begin with!

How couldn't I be happy with that? 6970 performance for $200, not to mention lower power consumption and probably higher over-clocking, is great.

Also, we're not in 2008 anymore. The 4800 series was never really a very high-end series, hence we're not gonna see gains comparable to going from that to the 5800 series. The 4890 didn't try to milk everything it could out of the 55nm process node, so huge upgrades followed. The 6970 is the max AMD could do without making a huge die. I don't know why 50-75% higher performance than it would be in any way bad.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
How couldn't I be happy with that? 6970 performance for $200, not to mention lower power consumption and probably higher over-clocking, is great.

You know you could have purchased an HD6950 2GB and unlocked it for $220-230 7 months ago? Also, as chihlidog mentioned, that's not much of an upgrade from an overclocked HD5850 which was $270 2 years ago.

Also, we're not in 2008 anymore. The 4800 series was never really a very high-end series, hence we're not gonna see gains comparable to going from that to the 5800 series. I don't know why 50-75% higher performance than it would be in any way bad.

Let's see,

1) 8500 --> 9700Pro (>100%)
2) 9700Pro/9800Pro --> X800XT/XT PE (75-100%)
3) X850XT --> X1800XT/X1850XT (75-100%)
4) X1900/1950 --> 2900XT (failure)
5) HD3870 --> HD4870/4890 (75-100%)
6) HD4870/4890 --> HD5870 (75-100%)

It would be nice if HD7970 followed this pattern.
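The list above implies roughly 75-100% per full generation; compounding shows what that pattern means over two steps versus stacking a refresh on top. A quick sketch, using the thread's own rates (the helper function is illustrative):

```python
# Compounding generational gains. Rates are the thread's own figures:
# a full-node generation at +75%, a Cayman-style refresh at +15%.

def compound(*gains):
    """Multiply successive relative gains: compound(0.75, 0.15)
    means +75% one generation, then +15% the next."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total

print(compound(0.75, 0.75))  # two full generations: ~3.06x
print(compound(0.75, 0.15))  # one full generation plus a refresh: ~2.01x
```

Either way, two years of the historical cadence would put a 7970 at roughly double an HD5870, which is the expectation being argued for here.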
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Also, we're not in 2008 anymore. The 4800 series was never really a very high-end series, hence we're not gonna see gains comparable to going from that to the 5800 series. The 4890 didn't try to milk everything it could out of the 55nm process node, so huge upgrades followed. The 6970 is the max AMD could do without making a huge die. I don't know why 50-75% higher performance than it would be in any way bad.

300 watt 55nm 4870x2 and gtx295 were about = 225 watt 40nm 5870 and gtx480.
350 watt 40nm gtx590 and 6990 should be = 250 watt 28nm gtx680 and 7970

Correct me if I'm wrong, but isn't the jump from 40nm to 28nm much bigger?
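The node-jump question is easy to sanity-check: in the ideal case, density scales with the square of the linear feature-size ratio. A sketch (ideal scaling only; real processes deliver less than this upper bound):

```python
# Ideal density scaling between process nodes: features shrink
# linearly, so transistors-per-area grows with the square of the
# old/new ratio. Treat these numbers as upper bounds, not promises.

def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

print(round(density_gain(55, 40), 2))  # 55nm -> 40nm: ~1.89x
print(round(density_gain(40, 28), 2))  # 40nm -> 28nm: ~2.04x
```

So yes, 40nm to 28nm is the slightly bigger jump on paper, though only by a little.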
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Dude, dude!! (from Big Lebowski), do you realize what happened in the last 10 years of GPU progress? Did we not see 75-100% performance increases every 18-24 months? We sure did. I agree that efficiency is very important, but more so for the mobile market. This is the desktop space, where users who buy $350-500 discrete GPUs have solid power supplies and good case airflow to begin with.

Also, I am not saying GPUs should go to 350-400W. HD6970 works perfectly fine with a 250W TDP using a standard reference design cooler that probably costs $30-40 at most.

You are telling me you'd take a 190W HD7970 with 2048 shaders over one with 250W and 2560 ALUs? Why the limit to 190W all of a sudden for their top-end offering? They can focus on lower power consumption for sub $150 parts, but top-end cards should not be artificially limited to such a low TDP as 190W imo.

I'm saying that Dell, etc. would love to see 190W. 225W is the upper limit of what they "like". You and I might buy a couple cards between us, but Dell will buy thousands if they can save money on the rest of the build.
 
Feb 19, 2009
10,457
10
76
Let's see,

1) 8500 --> 9700Pro (>100%)
2) 9700Pro/9800Pro --> X800XT/XT PE (75-100%)
3) X850XT --> X1800XT/X1850XT (75-100%)
4) X1900/1950 --> 2900XT (failure)
5) HD3870 --> HD4870/4890 (75-100%)
6) HD4870/4890 --> HD5870 (75-100%)

It would be nice if HD7970 followed this pattern.

There's a reason we are not going to get 75-100% from 6970 -> 7970.

The 6970 is a very big GPU and does not fit the sweet-spot design; it was a compromise product.

IF the 7970 is going to be 360-380mm2, then you may have a good chance of 75-100% performance increases. But I highly doubt it will be that big. A TDP of 190W on a chip running at 1GHz is too low for a huge chip. Yes, I'm aware a node shrink lowers power use, but all things being equal, if you cram more transistors into an area as big as the last node's GPU, the power use will be ~same. It would be a perf/mm2 increase rather than a perf/watt increase, etc. Setting the TDP at 190W means AMD is going for both increases.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
You know you could have purchased an HD6950 2GB and unlocked it for $220-230 7 months ago? Also, as chihlidog mentioned, that's not much of an upgrade from an overclocked HD5850 which was $270 2 years ago.



Let's see,

1) 8500 --> 9700Pro (>100%)
2) 9700Pro/9800Pro --> X800XT/XT PE (75-100%)
3) X850XT --> X1800XT/X1850XT (75-100%)
4) X1900/1950 --> 2900XT (failure)
5) HD3870 --> HD4870/4890 (75-100%)
6) HD4870/4890 --> HD5870 (75-100%)


It would be nice if HD7970 followed this pattern.

There are some problems with the ones I bolded. The HD 3870 was only a Performance market card from the get-go, and the 4800 series was on the same process node. Also, the HD 5870 is 'only' 43% faster than the HD 4890, so it's not the huge upgrade it was made out to be. Remember that the 4890 is comparable to the HD 5830.

Now that I see that, I think my perspective has changed. I think 50-60% more performance than previous gen is more realistic.

300 watt 55nm 4870x2 and gtx295 were about = 225 watt 40nm 5870 and gtx480.
350 watt 40nm gtx590 and 6990 should be = 250 watt 28nm gtx680 and 7970

Correct me if I'm wrong, but isn't the jump from 40nm to 28nm much bigger?

The HD 4870X2 is a bit faster than the HD 5870. Also, the GTX 480 consumes a lot more power than both. Again, don't expect anything even close to HD 6990 performance from a single GPU. Just don't.
 
Last edited:

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
The GTX 480 matched/beat the 4870X2 and used less power.
Yes, it's a dirty secret, but the 4870X2 was a power pig.
And it was very fast, and new single-GPU cards beat it in performance and p/w.
http://www.anandtech.com/show/2977/...x-470-6-months-late-was-it-worth-the-wait-/19
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
The GTX 480 matched/beat the 4870X2 and used less power.
Yes, it's a dirty secret, but the 4870X2 was a power pig.
And it was very fast, and new single-GPU cards beat it in performance and p/w.
http://www.anandtech.com/show/2977/...x-470-6-months-late-was-it-worth-the-wait-/19

Eh, you're right. Still, though, the 480 was a huge power hog. The reason the GTX 480 beat the 4870X2 overall was that CrossFire scaling wasn't very good back then. If we got 85-95% scaling like we do now, it'd be the 480 that came out losing.
 
Last edited:

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
But in a couple situations we got double performance, for same power numbers. Will be nice to see that again. A 4850 used the same power as a 5850 and the 5850 got 2x the fps, at least in Crysis.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
But in a couple situations we got double performance, for same power numbers. Will be nice to see that again. A 4850 used the same power as a 5850 and the 5850 got 2x the fps, at least in Crysis.

This is probably the biggest thing AMD tries to achieve with each new process node: much higher efficiency.

As of now, the HD 6850 and 6950 are the most efficient high-performance graphics cards.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
There is no way we will have 2x the performance (avg) with the next gen cards even if they double the transistor count.

The HD5870 at release day was only 37% faster than the last gen HD4890, and it had more than double the transistors, double the SPs, double almost everything.

http://www.techpowerup.com/reviews/ATI/Radeon_HD_5870/30.html



The GTX480 at release date was 39% faster than the GTX285, but the GTX480 wasn't a full GF100 chip like the GTX580.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_480_Fermi/32.html



I don't expect more than 50% better performance from the 28nm high-end products over last gen high-end cards.
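The gap between spec doubling and delivered performance can be put as a scaling-efficiency number. A rough sketch using the HD4890 -> HD5870 step cited above; the ~2.25x transistor ratio assumes the commonly cited counts (~0.96B for RV790, ~2.15B for Cypress), and the helper name is my own:

```python
# Fraction of a resource increase that actually showed up as
# performance, using the HD4890 -> HD5870 step discussed above.
# Transistor counts (~0.96B -> ~2.15B, i.e. ~2.25x) are commonly
# cited figures, assumed here for illustration only.

def scaling_efficiency(resource_ratio, perf_ratio):
    """(relative perf gain) / (relative resource gain);
    1.0 would mean perfect scaling."""
    return (perf_ratio - 1.0) / (resource_ratio - 1.0)

# ~2.25x transistors for +37% average fps at launch:
print(round(scaling_efficiency(2.25, 1.37), 2))  # ~0.3
```

In other words, only around 30% of the extra transistors showed up as launch-day average fps, which is why a doubled transistor budget on 28nm doesn't promise doubled performance.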


As for the midrange cards, they will get last gen performance with less power.

Midrange will get HD6950/70 performance at the same power as the HD6850/70 (120-150W) and the same price. That means people will be able to buy an HD7870 at ~$200 with the performance of an HD6970 and the power usage of a 6870. I believe that's a nice upgrade ;)