Rumored specifications for HD8850/8870 - Launch January 2013 (?)


aaksheytalwar

Diamond Member
Feb 17, 2012
3,389
0
76
680-level performance at $350 next gen would be crap. A 7950 comes within 15% of it and costs 15% less. A 7970 at $380ish costs $30 over $350 and gives the same performance right now.

IMO a more realistic outcome would be 7970 GHz-level performance at $250-300ish.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
No you have not; you showed highly overclocked custom cards (clocked higher than the GHz Edition) using a bit more than a stock 7970.

Yes I have. Those cards use the Tahiti XT2 GPU and as such are GHz Editions. Additionally, they come with Boost, something the non-GHz Editions do not have. And all the cards I selected were released after AMD's announcement of the GHz Edition. As the partners have free rein, they can clock these cards higher. To my knowledge there is not a single GHz Edition that uses the clocks proposed by AMD; that is not my problem.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I don't know what you are making a fuss about with your long post. The GE partner cards use about as much as the 480 (your 225-238W compared to an averaged 235W for the 480), but they earn it by clocking higher and providing more performance. I see no problem here, honestly.

No they don't.

Total system power consumption at the PSU level with a Core i7 2600K @ 4.4GHz:

GTX680 = 276W
HD7970 GE Vapor-X = 308W (32W more)
http://www.overclockersclub.com/reviews/sapphire_hd7970_vaporx_ghz/14.htm

^ How can an HD7970 GE draw 235W of power when the entire system draws 308W at the PSU level? Stop trolling and spreading false information.
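For anyone who wants to sanity-check a wall reading themselves, here is a rough conversion from a PSU-level (Kill A Watt style) number to an estimated card-level draw. This is only a sketch: the PSU efficiency and the rest-of-system budget are illustrative assumptions, not measured values.

Code:
# Rough card-level power estimate from an AC wall reading.
# Everything except the 308W wall figure is an illustrative assumption.
def estimate_card_power(wall_watts, psu_efficiency=0.87, rest_of_system_dc=110.0):
    total_dc = wall_watts * psu_efficiency  # AC wall draw -> DC delivered by the PSU
    return total_dc - rest_of_system_dc     # what is left over for the GPU

print(estimate_card_power(308))  # ~158W for the GPU under these assumptions

Under those assumptions, a 308W wall reading leaves nowhere near 235W for the card alone.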

An overclocked after-market 1150-1165mhz HD7970 card uses 225-235W of power. After-market HD7970 GE cards use about 190-210W, depending on the model.

You can continue to believe what you want to believe but GTX480 and HD7970 are far apart in terms of power consumption for 3D games. It's not even close.

[Image: power_peak.gif (peak power consumption chart)]


HD7970 Reference = 189W
HD7970 GE after-market = 209W
HD7970 GE reference (can't buy in retail) = 238W
GTX480 = 272W

Yeah, they are exactly the same...

Even my own HD7970 @ 1150MHz draws less than 225W of power, so what you are claiming is not reality.

You have also failed to take into account how much faster an HD7970 @ 1150MHz is than a stock GTX680. You can't focus on the 225-235W power consumption figure while ignoring that at those clocks the 7970 is much faster than a 680.

With the latest drivers and MSAA, the HD7970 GE owns the 680. At 1150-1200MHz (235W of power), an HD7970 would mop the floor with a GTX680 in the latest games:

[Image: IMG0038720.gif (benchmark chart)]


Remember this amazing RPG? The Enhanced Edition came out and the 680 gets beaten by 25%.
[Image: IMG0038651.gif (The Witcher 2: EE benchmark chart)]


210W of power and just 3 fps slower than GTX660 Ti SLI in The Witcher 2: EE. :p
 
Last edited:

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
Again you are ignoring my post where I linked 3 reviews of after-market 7970 GEs. And again you are using only a single data source. Additionally, you are looking at the peak value, not the average. Finally (but I'm not completely sure here) all GHz Editions use at least 1500MHz memory:
http://geizhals.at/de/?cat=gra16_512&xf=132_2048~1440_HD+7970+GHz+Edition#xf_top
The Gigabyte SOC does not, thus it is not a GHz Edition.

Across several reviews the 480 used 235W on average (see the 3DC link I posted), not 272W (peak). Please read my posts and links first next time; that would be helpful.

Edit:
I'm talking power consumption here, nothing else. Why are you so giddy about pointing out that the 7970 GE is faster? What purpose does that serve? If you want to talk partner cards, I linked two overclocked partner cards from Nvidia as well in the post you ignored, cards that use less than or the same amount of power as the reference 680. Now what?
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Sorry, I actually own an after-market HD7970 card, so I know exactly how much power it draws at 1000, 1050, 1100, 1150MHz+.

If you want to talk about power consumption of HD7970 vs. GTX480, please start another thread. This has been beaten to death. Believe what you want. Actual owners that measured HD7970's power consumption using a P3 Kill-a-Watt on a modern system disagree with you.

Back to the original topic: HD8850/8870 cards.

Let's not derail this thread. Please discuss HD8850/8870 and why you think AMD won't enlarge the die to 270-280mm^2, and why it's somehow impossible for AMD to manufacture a 28nm 1792 SP 1100MHz chip that uses 160-170W of actual power and delivers GTX670/680-level performance.
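As a rough sanity check on that rumored configuration, here is the theoretical FP32 math (a sketch; both GCN and Kepler do 2 FLOPs per shader per clock, and the GTX680 line uses its 1006MHz base clock):

Code:
# Theoretical peak FP32 throughput in TFLOPS: shaders x 2 FLOPs x clock.
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(peak_tflops(1792, 1100))  # rumored 1792 SP @ 1100MHz: ~3.94 TFLOPS
print(peak_tflops(1536, 1006))  # GTX680 at base clock: ~3.09 TFLOPS

Theoretical peaks don't translate directly into game performance, but the raw throughput for GTX670/680-level performance is clearly there.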
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
So again you cannot discuss the concrete points I raised. I thought you had more integrity.

I agree, btt.
 

FalseChristian

Diamond Member
Jan 7, 2002
3,322
0
71
It's time we see a monster GPU come out:

1024-bit GDDR6 (2.7 TB/sec memory bandwidth)
32 GB of 2500MHz GDDR6
128 ROPs
320 texture units
5000 CUDA cores running at twice the core clock
2500MHz core clock (320 Gpixels/sec pixel fill-rate)
1.6 Ttexels/sec of texture fill-rate

This card should be out by 2025.
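For fun, the fill-rate math on that wishlist actually hangs together, assuming the texture units run at the doubled shader clock (the per-pin data rate below is back-calculated from the claimed 2.7 TB/sec, not part of the spec):

Code:
# Derived throughput figures for the dream spec above.
core_mhz, rops, tmus, bus_bits = 2500, 128, 320, 1024

print(rops * core_mhz / 1000)       # pixel fill: 320 Gpixels/sec
print(tmus * core_mhz * 2 / 1000)   # texture fill at 2x core clock: 1600 Gtexels/sec
print(2.7e12 * 8 / bus_bits / 1e9)  # ~21 Gbps per pin needed for 2.7 TB/sec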
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
For the price level these cards occupy, that's probably ok though.





I'm not saying you're wrong; you may very well be correct. But I don't think you can say the power savings can't be there simply because Pitcairn was not as rushed to market as Fermi. Fermi was obviously a rush job, no doubt. But this is both AMD's and Nvidia's first go at 28nm; as they learn the process, we may well see some solid improvements.

I never said there wouldn't be power savings. For the same performance you'd see lower power consumption; for higher performance, the same power consumption; and for much higher performance, noticeably higher power consumption.

There will be an increase in performance/watt, but nowhere near as big as going from GF100 to GF110.
 

The Alias

Senior member
Aug 22, 2012
647
58
91
I actually wasn't replying to you. I hope next time your advanced reading comprehension allows you to see that before you question another user's ability to read. It should have been quite easy to see I wasn't implying anyone in particular but the group as a whole; how you missed that is beyond my abilities, though perhaps a mystic can lead us there.

The fact that the 5 series doesn't clock much better when overclocked also says a lot about those transistors. On water, the clock difference between GF100 and GF110 is pretty much nothing. GK110 wasn't so much better if you ignored power (arguably, if you ignore the first batch of Fermi you'd have less of a case as well). My point was that most people today try to say power doesn't matter now. However, the GHz card from AMD uses a sizable amount of extra power vs. the reference 680; it does not, however, come anywhere near as far ahead of the 680 as the 480 was ahead of the 5870.

Yet those same people, who say power consumption numbers from highly clocked 7970s (clocks which are required to beat the 680, mind you, and not by much) don't matter, still attempt to dismiss GF100 for the very same reasons the higher-clocked 7970s are being discounted. The 680 uses more power than the 5870; the 7970 GHz uses a bit less power than a first-batch 480; yet the 7970 is nowhere near as far ahead of its competition as the 480 was. This doesn't even account for the fact that you have to eat a large amount of the 7970's clock headroom just to get to the point where it's slightly faster. GF100 was already well past where the 7970 is without that, and still had a vast wealth of untapped power.
A 125MHz overclock isn't that much compared to how much OC room is left over.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
There will be an increase in performance/watt, but nowhere near as big as going from GF100 to GF110.

Actually, this isn't a given. The 6970 saw a decrease in performance/watt compared to the 5870, although of course that also included a change in architecture.
 

KompuKare

Golden Member
Jul 28, 2009
1,030
980
136
All this talk of power consumption sort of ignores chip quality and leakage. I already mentioned my Asus 7950, which was not a silicon lottery winner (60% ASIC quality) and defaulted to 1.07V for 900MHz. Undervolting it to 0.95V saved almost 30W at load.
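That ~30W is about what the usual quadratic voltage rule predicts. A minimal sketch, assuming dynamic power scales with V^2 at a fixed clock and a ~140W load for the card (the 140W figure is an assumption for illustration):

Code:
# Dynamic power scales roughly with voltage squared at a fixed clock.
def undervolt_savings(load_watts, v_old, v_new):
    return load_watts * (1.0 - (v_new / v_old) ** 2)

print(undervolt_savings(140, 1.07, 0.95))  # ~29.6W, close to the ~30W observed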

I don't think that's atypical, so even if we only take the more scientific sites that measure the actual power drawn by the card, it's easy to see major variations. So next round, if AMD PR is clever, they'll do what Nvidia probably already does: cherry-pick binned parts to send to reviewers. If the 8870/8970 are going to be power hungry, the difference between 90%+ and 60%+ ASIC quality could easily be 30-40W or more.

It's a bit hard to gauge exactly how the 8870 would turn out based on these rumoured specs, but even if they increase the shaders and ROPs, the space used for memory controllers etc. would not have to increase, so you can't make a linear guesstimate of how big a chip with a certain number of shaders will be.
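One way to frame that non-linear scaling: treat the memory controllers and other uncore as fixed area and scale only the shader array with the unit count. A sketch; Pitcairn's 212mm^2 is real, but the 35% uncore split is purely an assumption:

Code:
# Naive die-area estimate: fixed uncore + shader array scaled by unit ratio.
def estimate_area(base_area_mm2, uncore_fraction, unit_ratio):
    uncore = base_area_mm2 * uncore_fraction
    array = base_area_mm2 * (1.0 - uncore_fraction)
    return uncore + array * unit_ratio

# Pitcairn (1280 SPs, 212mm^2) scaled to the rumored 1792 SPs:
print(estimate_area(212, 0.35, 1792 / 1280))  # ~267mm^2

Which happens to land right around the 270-280mm^2 being discussed in this thread.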
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126

What are you even arguing about?

All it takes is 45-50W of extra power to take a $380 HD7970 and at least match a GTX680 OC (and I am being generous here, since at 1600P it would crush a 680 OC by 10-20%). So what's your point exactly? I'm not sure how a GTX480/GTX580 for $500 is related to this discussion.

What you fail to realize is that an HD7970 can be overclocked to 1150-1200MHz and still come in at the GTX580's level of power consumption, while saving $100-120 over the GTX680 and, wait for it, tangibly beating a GTX680 OC in some games. Also, I had your "favourite" Fermi generation, and your amazing GTX470 (in my computer at 760MHz) got pounded into the ground in Crysis 1 by a stock HD6950. This is what happens to the GTX680 in certain games like Crysis 1/Warhead, Anno 2070, Alan Wake, Arma II, Sniper Elite V2, and Dirt Showdown. There are hardly any games where the 680 wins by 10% or more, and yet it costs $100 more. Overpriced.

So really, what are you arguing about? That 130W of extra power consumption and a $130 price premium on the 480 were worth the extra 15-20% performance? Well, that's your opinion, but it has nothing to do with this thread.

At current prices and levels of performance, it's impossible to even make a case for a 680 unless you are an NV fanboy:
1) HD7970 OC ~ GTX680 OC for $100 less
2) HD7970 makes $ bitcoin mining
3) In games where the 680 falls apart, the HD7970 pummels it by 15-30% (you already know these games, no need for us to list them).

The reason power consumption for HD7970 vs. GTX680 doesn't matter is simple - the difference is immaterial on a modern system pulling 300W+:

HD5870 = 143W Peak
GTX680 = 186W Peak
HD7970 GE after-market = 200-210W
HD7970 @ 1150-1165mhz = 225-238W
GTX480 = 273W
http://www.techpowerup.com/reviews/ASUS/GeForce_GTX_660_Direct_Cu_II/25.html

The difference between the GTX480 and HD5870 was about 15-20% on average in games, but the 480 used 130W more power and cost $130 more. How do you not understand this?

HD7970 @ 1150-1165MHz ~ GTX680 @ 1280-1290MHz while using only about 45W more power. The delta between the 7970's and 680's power consumption is minuscule on a modern system, and a 680 OC cannot beat an HD7970 OC despite costing $100 more. :rolleyes:
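Put that delta in system-level terms (a quick sketch using the ~45W difference and an assumed ~300W baseline system):

Code:
# How much a 45W GPU delta moves total system draw.
baseline_watts = 300.0  # assumed total system draw with the 680 OC
delta_watts = 45.0      # extra draw of the HD7970 @ 1150-1165MHz
print(delta_watts / baseline_watts * 100)  # ~15% more at the wall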

Glad to know you still love the 470/480 generation, but the rest of us couldn't care less about those cards. They were great back in the day, but now their time is up:

[Image: i%20am%202560.png (benchmark chart)]


That 15-20% performance advantage the GTX480 held at one point counts for nothing now, since a GTX480 is just as much of a slideshow as an HD5870 in modern titles:

[Image: b3%20ac%201920.png (benchmark chart)]


======================

But please, no more threads about the ridiculous GTX470/480 cards. We don't care to hear how awesome they are.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I keep reading this, but didn't Nvidia spend a long time designing Fermi? If I remember correctly, the estimated release date was many months earlier than when it actually launched. My understanding was that poor design decisions led to the power leakage problems, and they spent a very long time trying to correct them. Hardly what one would consider a rush job.

Is it impossible for Nvidia to make mistakes? Or if they do, does it automatically mean that time was short?

This^^ I don't know how something released months late can be called "rushed".
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
So again you cannot discuss the concrete points I raised. I thought you had more integrity.

I agree, btt.

What concrete points? I checked this thread again and didn't see you link a single retail HD7970 GE card tested for power consumption in a game. You linked 5 reviews that tested the HD7970 GE reference card. You cannot produce any facts that disprove my position. Do you have an HD7970 or HD7970 GE card? Or are you just frantically trying to find information to prove your point that an HD7970 GE uses as much power as a GTX480? What concrete points have you brought so far in this regard, you know, with respect to real-world after-market 7970 GE cards?

Any time an owner of an overclocked 7970 card pulls 225-230W, you don't believe us anyway, and you ignore every review that shows this as well; the onus is on you to prove that our data is fictitious, not on us to prove that our real-world data isn't 'made up'. Many of us have provided the power consumption data and you guys keep ignoring it, insisting we need 1.25V to reach 1050MHz on a 7970. It's like talking to a wall around here. Every time we say stop linking HD7970 GE reference cards, you guys keep doing it. Please, keep wasting more of your and our time. You cannot produce real-world data of an HD7970 @ 1150MHz on 1.175V since you don't have one, and all the reviews that have tested after-market HD7970 cards at 1050MHz or faster contradict your point that an HD7970 @ 1050MHz draws as much power as a GTX480.
I provided at least 3 such cards.

You have also failed to account for the fact that a person can go out, buy an HD7970, and overclock it to 1150MHz at the stock voltage of 1.175V. Again... repeated 10x. In the span of 12 months, as the 28nm node has improved, AMD can achieve 1050MHz clock speeds on HD8870/8950/8970 cards at lower voltages. That's because the 28nm node tends to mature over time. If you see some connection between the HD7970 GE and the HD8870, I am waiting to hear about it, because I can't seem to connect the dots.

The topic of this thread is HD8850/8870, not HD8970 or HD7970 GE vs. GTX680.
 
Last edited:

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
What are you even arguing about?

All it takes is 45-50W of extra power to take a $380 HD7970 and at least match a GTX680 OC (and I am being generous here, since at 1600P it would crush a 680 OC by 10-20%). So what's your point exactly? I'm not sure how a GTX480/GTX580 for $500 is related to this discussion.

What you fail to realize is that an HD7970 can be overclocked to 1150-1200MHz and still come in at the GTX580's level of power consumption, while saving $100-120 over the GTX680 and, wait for it, beating an OC GTX680 anyway.

So really, what are you arguing about? Nothing

1) HD7970 OC ~ GTX680 OC for $100 less
2) HD7970 makes $ bitcoin mining
3) In games where the 680 falls apart, the HD7970 pummels it by 20-30% (you already know these games, no need for us to list them).

The reason power consumption for HD7970 vs. GTX680 doesn't matter is simple:

HD5870 = 143W Peak
GTX680 = 186W Peak
HD7970 GE after-market = 200-210W
HD7970 @ 1150-1165mhz = 225-238W
GTX480 = 273W
http://www.techpowerup.com/reviews/ASUS/GeForce_GTX_660_Direct_Cu_II/25.html

The difference between the GTX480 and HD5870 was about 15-20% on average in games, but the 480 used 130W more power.

HD7970 @ 1150-1165MHz ~ GTX680 @ 1280-1290MHz while using only about 45W more power.

How are you even comparing this? Oh, and the HD7970 costs $380-$450.

Why in the world are you even bringing this into play?

- The delta between the 7970's and 680's power consumption is minuscule on a modern system, and a 680 OC cannot beat an HD7970 OC despite costing $100 more. :rolleyes:

I dunno, you snipped out everything I said; I don't even know what post you're referencing.

1600p, who cares? Niche market doesn't matter, stop pretending it does.

You're not sure how $500 cards relate to the $500 market segment?

You aren't beating the 680 OC with only 1150MHz, get real; the Lightning can't even beat the reference card at stock. Go find a few reviews from some no-name websites to refute this while I laugh.


The performance increase over the 680 is laughable; you can't even change settings. Not to mention any review covering it states that even at lower fps, 680 SLI provides a smoother gameplay experience :rolleyes: You're stuck with a single lackluster 7970 that's not even as fast as 470 SLI. What a joke.

Keep going back to TPU; I heard they have a forum, perhaps you should ask for compensation.

The difference between the 480 and 5870 in DX11 is 35.5% according to your beloved TPU, and in some titles it's more than 100% faster. You know the difference between playable and unplayable, something you can't ever claim with the 7970 heater vs. a 220W TDP 680 OC.

The 680 doesn't need to beat it; it just needs to be comparable and offer the same (or in this case better) gameplay experience. The Nvidia name carries a premium; AMD's name carries a driver stigma. Have they fixed their CF woes yet? Probably not, it's only been nine months.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
IMHO doubling up Pitcairn would be the best way to go. It should outperform 7870 CF without all the hassle that comes with CF. And they should just leave Tahiti for the FireGL market.

I don't think AMD sells enough FirePro cards to justify a dedicated design. Although I can see the logic of not limiting your gaming performance just so your "non-compute" product isn't faster in games.

In reality, Pitcairn isn't very compute-limited. All of the GCN cards do well in the workloads the drivers are optimized for. See CLBenchmark.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
680 doesn't need to beat it, it just need to be comparable and offer the same gameplay (or in this case better) experience. The Nvidia name carries a premium, AMD's name carries a driver stigma, have they fixed their CF woes yet? Probably not, it's only been nine months.

What's the title of this thread? CrossFire vs. SLI? Power consumption of Tahiti XT? Performance of GTX480 vs. HD5870 3 years ago?

Do you have anything you would like to contribute to an HD8850/8870 discussion?
 
Last edited:

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
What's the title of this thread? CrossFire vs. SLI? Power consumption of Tahiti XT? Performance of GTX480 vs. HD5870 3 years ago?

Do you have anything you would like to contribute to an HD8850/8870 discussion?
Just keep reporting him until he's banned again.

Thanks for the news, RS; I'm liking the power specs. Of course price will determine everything, but I'll be interested in what's coming for the flagships :).
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
What's the title of this thread? CrossFire vs. SLI? Power consumption of Tahiti XT? Performance of GTX480 vs. HD5870 3 years ago?

Do you have anything you would like to contribute to an HD8850/8870 discussion?

You're right, RS, CF isn't an option. :whistle:

I never started the 480 discussion; I only commented on what was already here in your rumor mill thread.
 
Last edited:

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
This^^ I don't know how something released months late can be called "rushed".

It was delayed because of issues. They fixed some of them and launched to save face. They later fixed the issues they had chosen to live with at launch. Depending on how you look at it, Fermi was rushed to market.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I've never had to troll this forum, only offer opinions not shared by AMD loyalists; they troll themselves, flip-flopping like fish out of water.

You seem to not grasp 2 different situations:

A $500 GTX480 that used 130W more power than a $369 HD5870 in games but had an average lead of, say, 20% at 1080P (less at 1600P)

vs.

A $380 HD7970 OCed that uses 30-45W more power than an OCed $480 GTX680 but is at least as fast at 1080P and faster at 1600P.

How are these 2 scenarios comparable?

---

Do you mean to say that the HD8850/8870 growing the die to 270-280mm^2 is a negative development because it will raise power consumption from the HD7870's 115W to 170W?

If HD8870 uses 160-170W but has the performance of a GTX670/680 for $349, that's bad?
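Back-of-the-envelope, the rumored numbers hang together. Scale the HD7870's 115W by the rumored unit count and clock; this is only a sketch, since linear scaling in both is a simplifying assumption and a more mature process would trim the result:

Code:
# Crude power scaling from HD7870 to the rumored HD8870 configuration.
hd7870_watts = 115.0
unit_ratio = 1792 / 1280   # rumored SPs vs. Pitcairn's 1280
clock_ratio = 1100 / 1000  # rumored 1100MHz vs. the HD7870's 1000MHz
print(hd7870_watts * unit_ratio * clock_ratio)  # ~177W before process-maturity gains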


Just keep reporting him until he's banned again.

Thanks for the news, RS; I'm liking the power specs. Of course price will determine everything, but I'll be interested in what's coming for the flagships :).

Cheers! Although given the source, it's still just rumors. I am looking forward to the 8900 series specs myself.

I find it a little odd that some weeks ago people claimed NV is just waiting for the right moment to drop a 520-600mm^2, 7-billion-transistor, 15 SMX GTX780 (aka the real flagship Kepler GK110 we have all been waiting for since February 2012), yet based on 4 pages of this thread, AMD's engineers are apparently at a dead end: completely unable to realize any transistor space savings, take advantage of a more mature 28nm node, or increase the die size without blowing way past the GTX480's power consumption. That's what I am hearing.

I suppose NV has the world's top-secret 28nm transistors stashed away at TSMC, since just weeks ago people claimed a GTX780 should have no problem increasing the functional units by 50-87%, despite the GTX680 already using 186W of power at peak. Makes sense...

I also do not understand the artificial constraint that says AMD cannot increase power consumption beyond the HD7870's 115W.
 
Last edited:

chimaxi83

Diamond Member
May 18, 2003
5,649
61
101
You seem to not grasp 2 different situations:

A $500 GTX480 that used 130W more power than a $369 HD5870 in games but had an average lead of, say, 20% at 1080P (less at 1600P)

vs.

A $380 HD7970 OCed that uses 30-45W more power than an OCed $480 GTX680 but is at least as fast at 1080P and faster at 1600P.

How are these 2 scenarios comparable?

---

Do you mean to say that the HD8850/8870 growing the die to 270-280mm^2 is a negative development because it will raise power consumption from the HD7870's 115W to 170W?

If HD8870 uses 160-170W but has the performance of a GTX670/680 for $349, that's bad?

Yes, because it's from AMD, not Nvidia :whistle: