Polaris 10 and 11 confirmed to be GDDR5 based

Page 8 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
Yes, we can always find some cases. But lets be honest, its not going to work without the bandwidth. And to say AMD doesn't gain from more memory bandwidth was outright silly.

when your rop's are bottlenecking the card instead of the memory OC'ing the memory wont help neither will by installing a bigger bus this was the problem with the fury line
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
If that was so AMD wouldn't increase the memory speed on 390/390X. And Tahiti wouldn't beat Tonga.

And I am only talking 1080p. 850Mhz 280X(384bit) beating a 918Mhz 285(256bit+new memory compression). Memory bandwidth sure matters!
perfrel_1920.gif

Sorry but R9 285 should be compared to HD7950 or R9 280

Here is R9 380X 256bit vs R9 280X 384bit, well they are equals.

perfrel_1920_1080.png
 

ultima_trev

Member
Nov 4, 2015
148
66
66
980 Ti is probably more like 25-30% over 980 vanilla.

In any case, I recall Robert Hallock saying their new uarch should be compatible with both GDDR5 and HBM. Do we know if GDDR5X is compatible with vanilla GDDR5 memory controllers? I'm assuming it would be.

This is just a conspiracy theory on my part, but this is how I see it:

Polaris 11 = small GPU + GDDR5
Polaris 10 = medium GPU + GDDR5
Vega 11 = Polaris 10 rebrand with HBM
Vega 10 = proper Fiji successor
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
Yes, we can always find some cases. But lets be honest, its not going to work without the bandwidth. And to say AMD doesn't gain from more memory bandwidth was outright silly.

Please do not misrepresent what I said, I said they don't gain much from more bandwidth which is not the same as the bolded bit above.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Please do not misrepresent what I said, I said they don't gain much from more bandwidth which is not the same as the bolded bit above.

280X vs 380X.

GCN 1.0 vs GCN 1.2 (With a 40% higher bandwidth due to memory compression according to AMD).

That's a 14% gain from memory bandwidth. I considering that quite a bit.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
280X vs 380X.

GCN 1.0 vs GCN 1.2 (With a 40% higher bandwidth due to memory compression according to AMD).

That's a 14% gain from memory bandwidth. I considering that quite a bit.

I'm not so sure if memory compression gives you higher bandwidth rather than reduce your need for more bandwidth? I'll let someone else tackle that.
 

ultima_trev

Member
Nov 4, 2015
148
66
66
Having done several benchmarks on my own video cards, bandwidth does make a pretty significant impact to performance. In fact, that's why the R9 390 performs the same as the R9 290X despite having less shaders and texture units. Upping memory from 1250 to 1500 helps it much more than a mere 53 MHz core boost.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Having done several benchmarks on my own video cards, bandwidth does make a pretty significant impact to performance. In fact, that's why the R9 390 performs the same as the R9 290X despite having less shaders and texture units. Upping memory from 1250 to 1500 helps it much more than a mere 53 MHz core boost.

:thumbsup:
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
This is what AMD says.

ColorCompress.png

Again, that too is very different to what you said. 40% higher bandwidth EFFICIENCY is very different to "40% higher bandwidth due to memory compression". The efficiency from the memory compression means you don't need more bandwidth.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Nvidia has to start getting back all the stuff they took out with Kepler and Maxwell to improve perf/watt, in order to regain SP perf and specially DP FP.

Maxwell does fine on single-precision computing. And double-precision support at high rates (~1/3) is almost certainly only going to be in GP100, while the smaller chips will have double-precision at a low rate like 1/16 or 1/32. That's how it has always been done before; no reason to think it will change now.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Pascal will also support half precision. Something Maxwell doesn't. Only Broadwell/Skylake/GCN 1.2 does currently and GCN 1.3 will obviously as well.
 

Trovaricon

Member
Feb 28, 2015
28
43
91
This is what AMD says.

ColorCompress.png
I am aware only of nVidia's PR stunt where they used term "effective bandwidth" which was a buzzword used to claim memory bandwidth higher than theoretically possible for hardware configuration of (afaik) GM206 in one of their presentations. What we see now is probably the outcome of that: Thought that got stuck on (mostly) enthusiast's brains.

Lossless compression of render targets is very far from "compress everything passing memory controller"

Take a look at http://gpuopen.com/dcc-overview/ and you might have an idea why for some renderers the gain could be significant. On the other hand, I can think of several approaches to rendering a scene where the gain would be much less visible (that is an answer for 280 vs. 285/380)
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
That's the reality of how expensive this node is.
Previous nodes had a far more forgiving price per wafer per parametric specs (both logic and memory cell's size) for both companies at the start of said nodes, which allowed totally different die size's spectrum for a new product stack on a new node. The people that can't grasp how this new product stack introduced roughly at next mid year will completely go against past experience because of the innate cost of 14/16nm FF wafers compared to the other nodes' starting prices and electrical behavior at certain parameters of die size, clock and voltage targets.
Because this node is expensive.
Well considering there is no transistor cost savings with 14/16nm

Can someone explain to me why everyone seems to be taking these "FinFET is massively expensive" claims as gospel? Most of that comes from years-old projections by independent "consultants". Those same "doomsday charts" (as Reddit's III-V calls them) projected 20nm as being more expensive than 28nm, yet even in June 2014, Qualcomm was reporting that 20nm was "more cost effective compared to 28nm HKMG processes". In the same article, it's also noted that "around nine critical mask layers were taken out compared to the initial definition of 20nm and 14nm/16nm finFET processes" (this is for TSMC), thus substantially reducing cost. Furthermore, TSMC reports that 16nm "has blown past 28nm in the same time frame on revenues and yields". Of course, this doesn't indicate per-transistor costs to the customers, but good yields usually mean lower costs per usable die.

Some of the old doomsday slides come from an Nvidia marketing presentation, where they were complaining about the cost of 28nm. It was a negotiating move against TSMC, and taking it at face value would be extremely naive. Note the date on this article (March 2012).

I think the talk about foundry FinFET being insanely expensive is just Intel FUD.
 

Timmah!

Golden Member
Jul 24, 2010
1,416
630
136
I for one would be a buyer for a 8GB 125W card with 980TI/Fury X performance ;)

But unless GP104 delivers its hard to imagine. Polaris needs GDDR5X minimum at this point to even have a chance to reach it.

And what GPU you currently run/own?
 

jpiniero

Lifer
Oct 1, 2010
14,571
5,202
136
Some of the old doomsday slides come from an Nvidia marketing presentation, where they were complaining about the cost of 28nm. It was a negotiating move against TSMC, and taking it at face value would be extremely naive. Note the date on this article (March 2012).

Look at the chart again, they are complaining about 20/14 nm pricing and not 28 nm. The lead time on nodes is long; I'm sure nVidia had been given good guidance as to what the costs are going to be. It's not just the manufacturing costs that's a problem but also the design costs are way higher. Apple and Qualcomm might be able to realize some savings simply due to the huge volume but not discrete GPUs in which the market is shrinking.

I think the talk about foundry FinFET being insanely expensive is just Intel FUD.

ARM confirmed it not too long ago when they were promoting new 28 nm designs. This was in 2015.
 

Head1985

Golden Member
Jul 8, 2014
1,863
685
136
980 Ti is probably more like 25-30% over 980 vanilla.

In any case, I recall Robert Hallock saying their new uarch should be compatible with both GDDR5 and HBM. Do we know if GDDR5X is compatible with vanilla GDDR5 memory controllers? I'm assuming it would be.

This is just a conspiracy theory on my part, but this is how I see it:

Polaris 11 = small GPU + GDDR5
Polaris 10 = medium GPU + GDDR5
Vega 11 = Polaris 10 rebrand with HBM
Vega 10 = proper Fiji successor
polaris 11-7770 successor-GTX960 performance
polaris10-7870 successor -390x/GTX980 performance
Vega11-tahiti/7970 successor-compete against GP104 just like 7970 vs GK104
Vega10-hawaii/290x successor-compete against GP100 just like 290x vs GK110

Btw here is Tonga vs tahiti at same clock
http://www.hardware.fr/articles/945-24/tonga-vs-tahiti-round-2.html
 
Last edited:

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
^And the 7870 beat the 6970 at launch, which was well before AMD's Never Settle drivers that really saw GCN take off.

7870 212mm2 vs 6970 389mm2
Polaris 10 232mm2 vs 390X 438mm2

Nearly identical ratios. As long as AMD are not conservative on the clock rates I see no reason Polaris 10 cannot beat the 390X. Maybe it is a long shot to see it match or beat the Fury X even if it is badly designed, but I can hope.
 
Last edited:

ultima_trev

Member
Nov 4, 2015
148
66
66
Between 390 and 390X performance from Polaris 10 sounds about right. AMD would be cannibalizing Nano/Fury/Fury X sales if Polaris 10 was faster, so I doubt that will be the case.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
^And the 7870 beat the 6970 at launch, which was well before AMD's Never Settle drivers that really saw GCN take off.

7870 212mm2 vs 6970 389mm2
Polaris 10 232mm2 vs 390X 438mm2

Nearly identical ratios. As long as AMD are not conservative on the clock rates I see no reason Polaris 10 cannot beat the 390X. Maybe it is a long shot to see it match or beat the Fury X even if it is badly designed, but I can hope.

If we ignore core clock speed.

HD6970=176GB/sec
HD7870=154GB/sec

390X=384GB/sec
Polaris 10=192GB/sec (If leaks are true)

If Polaris 10 got GDDR5X, then its a whole other discussion.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Between 390 and 390X performance from Polaris 10 sounds about right. AMD would be cannibalizing Nano/Fury/Fury X sales if Polaris 10 was faster, so I doubt that will be the case.

Polaris 11 and 10 could replace all current graphics cards from $100 to $650 including Fury-X and especially Nano.