Leaked AMD Catalyst driver codenames for Volcanic Islands GPUs: Hawaii confirmed


Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I don't see how AMD can increase performance 30-35% on the 28nm node. Such a card would beat a Titan. If Titan is a 561mm2 chip, how can AMD increase performance that much on 28nm when their chip is only 365mm2 and they do not make 500mm2 die chips? I am thinking 10-17% is optimistic for their refresh.

Maybe they will increase the die size, hence lots of SKUs from the same die. Rumor says about 40 CUs (2560 SPs), which is 25% more than the 7970. If it is GCN 2.0 and it can keep clocks above 1GHz, then we may very well be looking at a 30%+ improvement over the 7970 GHz! And assuming the ROP bottleneck is removed - a lot more!

The only bad thing about such overkill (10%+ over Titan) would be very high prices from AMD.
My guess is they will release those GPUs a few months after the next-gen consoles, so lots of people will need to upgrade their PCs to meet the demands of next-gen games.
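A rough back-of-the-envelope check of those numbers, assuming the 7970's 2048 SPs as the baseline and a hypothetical ~1.05GHz clock for the rumored part; real gains would also depend on ROPs, bandwidth and the front end:

```python
# Speculative arithmetic only - the 40 CU count is a rumor and the 1.05 GHz clock is an assumption.
baseline_sps, baseline_clock_ghz = 2048, 1.00   # HD 7970 GHz Edition: 32 CUs x 64 SPs at 1 GHz
rumored_sps, assumed_clock_ghz = 2560, 1.05     # rumored 40 CU Hawaii, assumed clock

sp_gain = rumored_sps / baseline_sps - 1
throughput_gain = (rumored_sps * assumed_clock_ghz) / (baseline_sps * baseline_clock_ghz) - 1

print(f"SP increase: {sp_gain:.0%}")                          # 25%
print(f"Raw shader throughput gain: {throughput_gain:.0%}")   # ~31%, before any ROP/front-end changes
```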
 
Last edited:

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
AMD67BE.1 = "HAWAII LE (67BE)"
AMD67B1.1 = "HAWAII PRO (67B1)"
AMD67B0.1 = "HAWAII XT (67B0)"

3 different SKUs from the start might mean it's a big chip. Wouldn't AMD normally release their "LE" a good few months later, in which case it wouldn't show up in the same driver?
 

Fastx

Senior member
Dec 18, 2008
780
0
0
Maybe they will increase the die size, hence lots of SKUs from the same die. Rumor says about 40 CUs (2560 SPs), which is 25% more than the 7970. If it is GCN 2.0 and it can keep clocks above 1GHz, then we may very well be looking at a 30%+ improvement over the 7970 GHz! And assuming the ROP bottleneck is removed - a lot more!

The only bad thing about such overkill (10%+ over Titan) would be very high prices from AMD.
My guess is they will release those GPUs a few months after the next-gen consoles, so lots of people will need to upgrade their PCs to meet the demands of next-gen games.


Well, if they release a VI Hawaii with the possible performance specs you describe above, I know it will tempt me vs. waiting for 20nm (depending on price), but I will still try to hold out for 20nm - tempting as it will be. In my opinion, and no offense to RS's post, if they were/are going to release a 28nm Hawaii with only a 10-17% performance increase, I would think they would have just released it as an HD 7980. If I were AMD I would not release a 28nm VI Hawaii as an HD 8970 with only a 10-17% increase - that would be a joke - but this is just my opinion. :)
 

Greenlepricon

Senior member
Aug 1, 2012
468
0
0
Well, if they release a VI Hawaii with the possible performance specs you describe above, I know it will tempt me vs. waiting for 20nm (depending on price), but I will still try to hold out for 20nm - tempting as it will be. In my opinion, and no offense to RS's post, if they were/are going to release a 28nm Hawaii with only a 10-17% performance increase, I would think they would have just released it as an HD 7980. If I were AMD I would not release a 28nm VI Hawaii as an HD 8970 with only a 10-17% increase - that would be a joke - but this is just my opinion. :)

While they may well do some sort of refresh with some cards, I really think they need a Titan of their own if they want to stand out. I know there isn't anything even remotely in Titan's price bracket, but if AMD released a card that performed at least close to as well for quite a bit less, they should get some business. I think they can do better than 17%, but we'll have to wait and see.
 

Kippa

Senior member
Dec 12, 2011
392
1
81
First of all, AMD said there was going to be an announcement at Computex, which some were guessing or hoping would be a new gfx card, and all they released was talk about a new cooler for the gfx card. Now we have information about supposedly new gfx cards in a driver leak. Call me a sceptic, but I'll wait until there is an official announcement of a new card from the horse's mouth, AMD itself.

I am not AMD bashing; I actually like AMD and their gfx cards, as I have a 7970 myself. I am just not sure whether this is just a PR exercise or whether there really will be a new card coming soon. I'd like to see a new gfx card from them, just gotta wait for some more information that is more concrete/official.
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Maybe they will increase the die size, hence lots of SKUs from the same die. Rumor says about 40 CUs (2560 SPs), which is 25% more than the 7970. If it is GCN 2.0 and it can keep clocks above 1GHz, then we may very well be looking at a 30%+ improvement over the 7970 GHz! And assuming the ROP bottleneck is removed - a lot more!

The only bad thing about such overkill (10%+ over Titan) would be very high prices from AMD.
My guess is they will release those GPUs a few months after the next-gen consoles, so lots of people will need to upgrade their PCs to meet the demands of next-gen games.

AMD is going to address the bottlenecks in Tahiti and increase SP count. More front-end resources - 8 ACEs (same as the PS4), 3 or 4 geometry engines, 3 or 4 raster engines. More back-end resources - 48 ROPs. With improved power management (as found in Bonaire) and chip binning, AMD can fit a 2560 SP, 8 ACE, 3 or 4 geometry and raster engine, 48 ROP GPU running at 1GHz within a 250W TDP. Remember the original HD 7970 ran at 1.175V and was power efficient; the HD 7970 GHz screwed up perf/watt with a 1.25V voltage.
35% more perf over the HD 7970 GHz is not difficult. Remember AMD could fit a Bonaire with 30% higher perf in the same TDP as Cape Verde. Hawaii would be at least 420-440 sq mm even accounting for denser packing of transistors as the 28nm process has matured, but that should definitely be manufacturable on a mature 28nm process.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
Why does Bonaire always come up? Bonaire is not an outlier regarding efficiency; Pitcairn is more efficient. Bonaire only fixed the efficiency of Cape Verde, which was (after Tahiti) the weakest part of AMD's lineup.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
"%AMD67A0.1%" = ati2mtag_NewZealand, PCI\VEN_1002&DEV_67A0
AMD67A0.1 = "HAWAII XTGL (67A0)"

Called New Zealand one place, Hawaii the other?
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
Why does Bonaire always come up? Bonaire is not an outlier regarding efficiency; Pitcairn is more efficient. Bonaire only fixed the efficiency of Cape Verde, which was (after Tahiti) the weakest part of AMD's lineup.

I'm pretty sure raghu was only trying to show how much more performance can be had on the same process node and in the same power envelope, not Bonaire's efficiency.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
I'm pretty sure raghu was only trying to show how much more performance can be had on the same process node and in the same power envelope, not Bonaire's efficiency.

Exactly. Of the three originally launched chips - Tahiti, Pitcairn and Cape Verde - Tahiti was the least efficient, so it stands to reason that Tahiti has the most to gain from addressing the bottlenecks. That, followed by the latest power management algorithm and chip binning, should make Hawaii very competitive with Titan.
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
Hmm, the old "there's no substitute for cubic inches" truism comes into play here somewhat.
While Tahiti 2 does have the clever GCN architecture on a smallish die, it's still a tough ask to compete against a large, complex chip like Titan.
Pretty close to Titan but at $499 would be a game changer, and that's what I'd like to see happen. :thumbsup:
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
If I were AMD I would not release a 28nm VI Hawaii as an HD 8970 with only a 10-17% increase - that would be a joke - but this is just my opinion. :)

GTX780 is only 16-22% faster than HD7970GE, depending on the review, and NV charges $650 for that.
http://www.3dcenter.org/artikel/lau...launch-analyse-nvidia-geforce-gtx-780-seite-2
and
http://www.computerbase.de/artikel/grafikkarten/2013/nvidia-geforce-gtx-770-im-test/4/

That's comparing a 365mm2 chip to a 561mm2 chip with similar power consumption. If the HD 8970 is 17% faster, it would roughly tie the 780, but AMD could price it $100 less. I just don't see how AMD can do 30-35% faster on the same 28nm node. If it were that easy, what have they been doing since June 2012 when the 7970 GE came out?
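A rough sanity check on that "roughly tie" claim, treating the quoted 16-22% range and the hypothetical 17% figure as the only inputs:

```python
# Normalized performance, HD 7970 GHz Edition = 1.00. The HD 8970 number is pure speculation.
gtx780_low, gtx780_high = 1.16, 1.22   # review range quoted above
hd8970_speculative = 1.17              # the 17% scenario discussed above

print(f"HD 8970 vs GTX 780 (low end of range):  {hd8970_speculative / gtx780_low:.2f}x")   # ~1.01x
print(f"HD 8970 vs GTX 780 (high end of range): {hd8970_speculative / gtx780_high:.2f}x")  # ~0.96x
```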
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
With improved power management (as found in Bonaire) and chip binning, AMD can fit a 2560 SP, 8 ACE, 3 or 4 geometry and raster engine, 48 ROP GPU running at 1GHz within a 250W TDP. Remember the original HD 7970 ran at 1.175V and was power efficient; the HD 7970 GHz screwed up perf/watt with a 1.25V voltage.

It's understandable how AMD could reduce the GPU voltage at a similar clock speed, OR reduce the GPU voltage with a larger die but end up with a much lower clock speed (similar to the 780 vs. 680). However, what you are proposing is a much lower voltage than the HD 7970 GE plus a much larger die on the same node, with barely a 50MHz reduction in GPU clock. I am not sure that can be done within 250W of real-world power consumption.

35% more perf over the HD 7970 GHz is not difficult. Remember AMD could fit a Bonaire with 30% higher perf in the same TDP as Cape Verde.

I strongly disagree. 35% more over the 7970 GE would put it faster than the Titan. If AMD could "easily" release such a card (i.e., if it were not difficult), what have they been doing since February 2013 when they already knew Titan was dropping? It's obviously not easy at all, since it's already near the end of June and AMD has no such card in sight.

Also, Bonaire's die size went up 30% to get that 30% increase in performance, while the GPU clock remained at 1GHz like Cape Verde. A 30% increase in die size over 365mm2 is nearly 475mm2, not 420-440mm2.
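Spelling out that scaling argument, using the commonly cited die areas (Cape Verde ~123mm2, Bonaire ~160mm2) and assuming die area scales linearly, which is only a rough approximation:

```python
# Rough die-area scaling sketch; real layouts don't scale perfectly linearly.
cape_verde_mm2, bonaire_mm2, tahiti_mm2 = 123, 160, 365

bonaire_scale = bonaire_mm2 / cape_verde_mm2        # ~1.30, i.e. ~30% more area for ~30% more perf
hawaii_if_scaled_like_bonaire = tahiti_mm2 * bonaire_scale

print(f"Bonaire / Cape Verde area ratio: {bonaire_scale:.2f}")
print(f"Tahiti scaled by the same factor: {hawaii_if_scaled_like_bonaire:.0f} mm^2")  # ~475 mm^2
```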
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
GTX780 uses 30-50W less than the 7970GHz.

Here we go now, Sontin making up NV-biased facts again:
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780/images/power_peak.gif
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780/images/power_average.gif

252 vs. 255W
http://ht4u.net/reviews/2013/nvidia_geforce_gtx_780_review/index51.php

228 vs. 231W
http://www.guru3d.com/articles_pages/asus_geforce_gtx_780_directcu_ii_review,7.html

and
[Power consumption charts embedded from several other reviews]
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Let me guess:
You use performance numbers from sites that "burn in" Boost 2.0 cards and power numbers from sites that use the cards right from the start?

Man RussianSensation, you are always great.
 

Fastx

Senior member
Dec 18, 2008
780
0
0
GTX780 is only 16-22% faster than HD7970GE, depending on the review, and NV charges $650 for that.
http://www.3dcenter.org/artikel/lau...launch-analyse-nvidia-geforce-gtx-780-seite-2
and
http://www.computerbase.de/artikel/grafikkarten/2013/nvidia-geforce-gtx-770-im-test/4/

That's comparing a 365mm2 chip to a 561mm2 chip with similar power consumption. If the HD 8970 is 17% faster, it would roughly tie the 780, but AMD could price it $100 less. I just don't see how AMD can do 30-35% faster on the same 28nm node. If it were that easy, what have they been doing since June 2012 when the 7970 GE came out?

I was a little disappointed by your take on a possible 10-17% increase for the 28nm VI parts, and I was a little extreme when I said it should be released as a 7980. :) I read the possible Curacao XT 28nm specs and the rumored VI specs, and I'm hoping/thinking a 30-35% increase is possible for the 28nm VI Hawaii 8950.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
Really? Why would we do that? The card on its own ain't going to do nothing now, is it?

Because it minimizes other factors that have nothing to do with the consumption of the card itself. A slightly hotter or cooler day, a different power supply or mainboard, or a spike in CPU usage will influence the results.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
Because it minimizes other factors that have nothing to do with the consumption of the card itself. A slightly hotter or cooler day, a different power supply or mainboard, or a spike in CPU usage will influence the results.

It's obviously using more of something (I'm guessing CPU), which maximises the need for better cooling there. Of all the metrics out there, that right there is quite possibly one of the more pointless ones.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
Why pointless? If we want to know how much power a graphics card uses, we measure the graphics card. Plain and simple. This applies to all measurements that take place in a multi-component system.
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
Let me guess:
You use performance numbers from sites that "burn in" Boost 2.0 cards and power numbers from sites that use the cards right from the start?

Man RussianSensation, you are always great.

As far as I see it, RS backed up his post with facts, whereas yours is just baseless. Hard to spin facts, unless you have proof?
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
Why pointless? If we want to know how much power a graphics card uses, we measure the graphics card. Plain and simple. This applies to all measurements that take place in a multi-component system.

The problem is that you can't just measure the card alone... You could perhaps rig up an adapter to measure what passes through the direct power supply links but it draws power from the PCI express slot as well. Who knows what internal limits are placed on a high end card as to the fine details of its power system and where it draws from.

There is a reason you don't see this measurement in many reviews: it is technically near impossible to do reliably. Most reviews that list it subtract off a baseline they estimate for the system alone, which frankly tells you less than the whole-system power measurement.
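A toy illustration of why baseline subtraction is shaky, using made-up wattages: any uncertainty in the "rest of system" estimate lands directly on the number attributed to the card.

```python
# Hypothetical numbers only - shows how the baseline estimate dominates the card's derived figure.
system_under_load_w = 330                 # measured at the wall while gaming
baseline_estimates_w = [95, 105, 115]     # plausible "system without GPU load" guesses

for baseline in baseline_estimates_w:
    card_estimate_w = system_under_load_w - baseline
    print(f"Assumed baseline {baseline} W -> card attributed {card_estimate_w} W")
# A +/-10 W uncertainty in the baseline becomes a +/-10 W swing in the card's reported draw.
```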

As far as other components in a computer system go, it is even more ridiculously difficult to isolate a power consumption number for something like a CPU with a third-party test.

Edit: And I should never assume tech folk won't be resourceful and stubborn when trying to measure things... http://www.behardware.com/articles/781-1/report-the-true-power-consumption-of-73-graphics-cards.html

I was totally off base in assuming it would not be worth the trouble to some folks; I should have known better.
 
Last edited: