[Ars] AMD confirms high-end Polaris GPU will be released in 2016


MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
Why? AMD have said Polaris was designed to support both GDDR5 and HBM, and which they use will depend on the market segment due to HBM pricing; it's in their videos.

Polaris encompasses multiple dies, so yeah, big ones will support HBM and small ones GDDR5. The only way they could use both with the same die would be to include both a GDDR5 and an HBM memory controller on the die, wasting a huge amount of die space. Even then, if you wanted to use GDDR5 you'd still need to mount the die on a silicon interposer, since the bump density on the bottom of the die needed to support the HBM interface would be greater than is feasible with a standard substrate.
 
Feb 19, 2009
10,457
10
76
@MrTeal
That was my prior understanding of how it works, as integrating a GDDR5 controller on-die would take up a lot of die area and TDP, but the way it was said in the videos made it seem like a simple switch, so it was confusing.

That would mean they are limited in the price segment for harvested Polaris 11 if it's an HBM2 part. Unlikely to have a cheap mid-range product from that chip. This leaves a huge hole in their stack.

They are missing a ~200mm2 mid-range SKU, as Polaris 10 was touted as ~100mm2.

Would be hilarious if they shrunk Hawaii to the new node and made it the mid-range part haha..
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
@MrTeal
That was my prior understanding of how it works, as integrating a GDDR5 controller on-die would take up a lot of die area and TDP, but the way it was said in the videos made it seem like a simple switch, so it was confusing.

That would mean they are limited in the price segment for harvested Polaris 11 if it's an HBM2 part. Unlikely to have a cheap mid-range product from that chip. This leaves a huge hole in their stack.

They are missing a ~200mm2 mid-range SKU, as Polaris 10 was touted as ~100mm2.

Would be hilarious if they shrunk Hawaii to the new node and made it the mid-range part haha..


Why would they even shrink it? 28nm is mature and has great yields. They mentioned only having two FinFET dies in 2016, and they've done this same mix of old and new with the 200 and 300 series. My bet is it just comes down to how close the highest-clocked viable Polaris 10 SKU can get to Tonga/Hawaii performance, and to FinFET yields.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Lorien said:
Pascal is several months (estimated up to 2 quarters) late.

On topic:

Small GPU demoed at CES was a Polaris 10 engineering sample. Will be used in the Raven Ridge APU and an R7 460X discrete GPU that uses less than 50W.
16 CUs, 64 SP per CU = 1024 SP + 64 TMUs. Pitcairn is dead, long live Polaris 10!

Polaris 11 demoed behind closed doors was an R9 480X engineering sample using less than 150W.
48 CUs, 64 SP per CU = 3072 SP + 192 TMUs (pretty much 3x Polaris 10).

Big Polaris, the Fury X replacement, should be double Polaris 11, which works out to 6144 SPs.

No concrete clocks or performance figures, except huge smiles from AMD reps when asked about performance; interpret that however you want :)

P.S.: Forgot about the 490X, it should be 3840 SPs.
P.P.S.: Gotta love how defensive and knee-jerk some people get at a few lines of information lol.

This was posted by someone at [H]. I'd take it as rumor as opposed to speculation.
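For reference, the SP and TMU totals in that rumor follow directly from GCN's usual 64 SPs and 4 TMUs per compute unit. A minimal Python sketch of that arithmetic; the CU counts and SKU names are just the rumored figures above, not confirmed specs:

SP_PER_CU = 64   # stream processors per GCN compute unit
TMU_PER_CU = 4   # texture units per GCN compute unit

rumored_cus = {
    "Polaris 10 (R7 460X?)": 16,
    "Polaris 11 (R9 480X?)": 48,
    "Big Polaris (2x Polaris 11?)": 96,
}

for name, cus in rumored_cus.items():
    print(f"{name}: {cus} CUs -> {cus * SP_PER_CU} SPs, {cus * TMU_PER_CU} TMUs")
# -> 1024 / 3072 / 6144 SPs and 64 / 192 / 384 TMUs respectively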
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
If nV can get away with 3/4+ market share with just 3 dies, AMD might be fine with just 2 until it gets some market share back. This time though the die salvaging needs to get serious, so I agree with salvaging up to 30% fewer shaders. This goes in line with a process that will start with low yields, and it doesn't rule out a third die sometime down the road for Polaris (obviously, for a node expected to run for 4 years we will see a fuckton of dies, but I'm specifically talking about this very "generation" of SKUs).

I would say Polaris 11 is a high-end flagship using HBM2 and will have lots of salvage SKUs to fill the mid-range too. Polaris 10 would be an entry-level GPU, perfect for ultrathin notebooks and low-power desktops.

Here is what I am guessing

Polaris 11 - 4096 sp, GCN 4th gen, 128 ROPs, 8GB HBM2, 2048-bit HBM2 bus, 512 GB/s, 180-200W.

Will have 3 salvage SKUs: 3584, 3072 and 2048 sp. The 2048 sp part would be a heavily salvaged (50% disabled) GPU and would be sold for USD 350. The improved shader efficiency and architectural improvements should bring it close to GTX 980/R9 390X at roughly 100W.

Polaris 10 - 1024 sp, GCN 4th gen, 128-bit GDDR5, 112-128 GB/s (7-8 GHz GDDR5 chips), 50-60W.
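The bandwidth figures in that guess are at least internally consistent: bandwidth is bus width times per-pin data rate divided by 8. A quick Python sketch using the guessed bus widths and nominal data rates (2 Gbps/pin for full-speed HBM2, 7-8 Gbps effective for GDDR5); none of these are confirmed Polaris specs:

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # GB/s = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(2048, 2.0))  # 2048-bit HBM2 at 2 Gbps/pin -> 512.0 GB/s
print(bandwidth_gbs(128, 7.0))   # 128-bit GDDR5 at 7 Gbps -> 112.0 GB/s
print(bandwidth_gbs(128, 8.0))   # 128-bit GDDR5 at 8 Gbps -> 128.0 GB/s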
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
Lorien said:
Pascal is several months (estimated up to 2 quarters) late.

On topic:

Small GPU demoed at CES was a Polaris 10 engineering sample. Will be used in the Raven Ridge APU and an R7 460X discrete GPU that uses less than 50W.
16 CUs, 64 SP per CU = 1024 SP + 64 TMUs. Pitcairn is dead, long live Polaris 10!

Polaris 11 demoed behind closed doors was an R9 480X engineering sample using less than 150W.
48 CUs, 64 SP per CU = 3072 SP + 192 TMUs (pretty much 3x Polaris 10).

Big Polaris, the Fury X replacement, should be double Polaris 11, which works out to 6144 SPs.

No concrete clocks or performance figures, except huge smiles from AMD reps when asked about performance; interpret that however you want

P.S.: Forgot about the 490X, it should be 3840 SPs.
P.P.S.: Gotta love how defensive and knee-jerk some people get at a few lines of information lol.
This was posted by someone at [H]. I'd take it as rumor as opposed to speculation.
If true that would be an absolute naming debacle. The 460X would be (depending on clocks of course) only marginally faster than the 260X with its 896 SPs @ 1.1GHz. At least the 480X would be a big improvement over the current 380X, but that also means there's a 3x performance gap between the 480X and 460X. There's not a lot of room in there to stick another die or harvested parts.
The shader counts aren't outrageous, though you'd hope for something a little closer to the 4k shaders on Fiji if they want a chance to push nVidia this go around.
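To put rough numbers on "marginally faster": theoretical FP32 throughput on GCN is SPs x 2 FLOPs per clock x clock speed. A back-of-envelope sketch in Python; the Polaris 10 clocks here are pure placeholders, since nothing about its clocks is known:

def gflops(sps, clock_ghz):
    return sps * 2 * clock_ghz  # 2 FLOPs per SP per clock (FMA)

print(gflops(896, 1.1))            # R7 260X: ~1971 GFLOPS
for clk in (1.0, 1.1, 1.2):        # hypothetical 1024 SP Polaris 10 clocks
    print(clk, gflops(1024, clk))  # 2048 / ~2253 / ~2458 GFLOPS

Without a healthy clock bump or per-SP gains, a 1024 SP part really would sit only a little above the 260X on paper.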
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
If the specs provided by Lorien are correct, then we could see a single Polaris 10 up to $250, a dual Polaris 10 up to $499, and Polaris 11 from $650 with a dual Polaris 11 up to $1500.

I may buy an R9 390 at $300 today after all.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
AtenRa said:
If the specs provided by Lorien are correct, then we could see a single Polaris 10 up to $250, a dual Polaris 10 up to $499, and Polaris 11 from $650 with a dual Polaris 11 up to $1500.

I may buy an R9 390 at $300 today after all.

Not if they label them 460X and 480X. Also, if he's correct, they are using the Polaris 10 on an APU. I don't see the discrete GPU being that expensive. This is just a rumor though.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
3DVagabond said:
Not if they label them 460X and 480X. Also, if he's correct, they are using the Polaris 10 on an APU. I don't see the discrete GPU being that expensive. This is just a rumor though.

The APU won't release until Q2-Q3 2017, a year after the Polaris 10 release. By then Polaris 10 will have gotten a price cut.

Also, if they use a 16 CU iGPU, I sure hope they use HBM2 on the top models, because not even DDR4 @ 4000MHz will be enough for that beast.

edit: And who says the 2017 APUs will not be more expensive than the current line? With Zen CPU cores and a Polaris 10 iGPU + HBM2 for the top models, it may directly compete against Intel's $300-400 Iris SKUs. Also, with DX12 games by 2017, a 16 CU iGPU CrossFired with a Polaris 10 dGPU will give you the same perf as a current R9 380X or even 390X.
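For context on the bandwidth point: dual-channel DDR4, even at 4000 MT/s, tops out around 64 GB/s and is shared with the CPU cores, versus the ~112 GB/s a small discrete 128-bit GDDR5 card gets to itself. A quick sketch of that arithmetic, assuming a standard dual-channel desktop setup with 64 bits per channel:

def ddr4_bandwidth_gbs(mt_per_s, channels=2, bus_bits_per_channel=64):
    # GB/s = channels * bus width (bits) * transfers per second / 8 bits per byte
    return channels * bus_bits_per_channel * mt_per_s / 8 / 1000

print(ddr4_bandwidth_gbs(4000))  # 64.0 GB/s, shared between CPU and iGPU
print(128 * 7 / 8)               # 112 GB/s for a 128-bit 7 Gbps GDDR5 card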
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
AtenRa said:
If the specs provided by Lorien are correct, then we could see a single Polaris 10 up to $250, a dual Polaris 10 up to $499, and Polaris 11 from $650 with a dual Polaris 11 up to $1500.

I may buy an R9 390 at $300 today after all.

LOL, short of the specs being wrong and Polaris 10 producing 50W from zero-point energy, anyone who pays $250 for a card that will be considerably slower than a $180 4GB 380 deserves to end up with that crap.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
MrTeal said:
LOL, short of the specs being wrong and Polaris 10 producing 50W from zero-point energy, anyone who pays $250 for a card that will be considerably slower than a $180 4GB 380 deserves to end up with that crap.

How easily people forget.
The same thing happened with GCN at 28nm: the HD7850/70 was $100 more expensive than the HD6950/70 but had the same performance on release.
 
Feb 19, 2009
10,457
10
76
You don't need those specs to know there's a huge gap in their lineup.

After a small Polaris 10 die, they'd need a chip twice that size before they even get to a ~300mm2 mid-range-caliber chip.

There's no way they will be stupid enough to do a dual Polaris 10 SKU, because CF is still iffy (you think NV's GimpWorks is going to make it easier for AMD to support CF moving forward?!).

Polaris 11 would have to be harvested twice over to fall into the $300 segment.

Since they stated only 2 Polaris chips, they are missing the real high-end. Their only option would be a dual Polaris 11 SKU, which is sub-optimal due to its reliance on CF support. If I were to go CF again, I would only do so with 2 huge dies, not 2x mid-range.

I guess it's to be expected given the new node woes. AMD trying to produce a massive die on the new node is suicidal.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
You don't need those specs to know there's a huge gap in their lineup.

After a small Polaris 10 die, they'd need a chip twice that size before they even get to a ~300mm2 mid-range-caliber chip.

There's no way they will be stupid enough to do a dual Polaris 10 SKU, because CF is still iffy (you think NV's GimpWorks is going to make it easier for AMD to support CF moving forward?!).

Polaris 11 would have to be harvested twice over to fall into the $300 segment.

Since they stated only 2 Polaris chips, they are missing the real high-end. Their only option would be a dual Polaris 11 SKU, which is sub-optimal due to its reliance on CF support. If I were to go CF again, I would only do so with 2 huge dies, not 2x mid-range.

I guess it's to be expected given the new node woes. AMD trying to produce a massive die on the new node is suicidal.

They didn't say only 2 chips forever, just this year. And the last time they said no further GPUs this year, they released Hawaii. So nothing is engraved in stone.

Remember we have no idea what nVidia is planning this year. We don't know if what AMD has coming is going to be competitive or not at this point.
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
AtenRa said:
How easily people forget.
The same thing happened with GCN at 28nm: the HD7850/70 was $100 more expensive than the HD6950/70 but had the same performance on release.

Not really. The 7870 was still faster than the 6970, by 10% or so at release. The 6970 was closer to the 7850, and even after rebate you didn't see 6970s for $250*. On release the perf/$ of the 7870 wasn't really any better than Cayman, but it wasn't worse. You got the new features and better power consumption as well.

If AMD launched a 1024 shader part at $250, even using $200 for a 4GB 380 you're talking about a card that's 25% more expensive, and given that the 380 has 75% more shaders it'd probably end up being at minimum another 25% faster than a 1024 shader Polaris card. That'd put Polaris 10 at around 64% of the perf/$ of Tonga. Pitcairn wasn't the big step forward in perf/$ that people expected, but it wasn't a massive step back, either.

*Edit: I don't really even remember 6970s under $300, unless it was a BF type deal.
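For anyone checking the 64% figure, it's just relative performance divided by relative price. A minimal sketch using the assumed $250/$200 prices and the assumed 25% performance lead for the 380; neither number is confirmed:

polaris_price, tonga_price = 250, 200  # hypothetical $250 Polaris 10 vs $200 4GB 380
tonga_perf_vs_polaris = 1.25           # assumption: the 380 ends up ~25% faster

perf_per_dollar_ratio = (1 / tonga_perf_vs_polaris) / (polaris_price / tonga_price)
print(round(perf_per_dollar_ratio, 2))  # 0.64 -> ~64% of Tonga's perf/$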
 
Feb 19, 2009
10,457
10
76
3DVagabond said:
They didn't say only 2 chips forever, just this year. And the last time they said no further GPUs this year, they released Hawaii. So nothing is engraved in stone.

Remember we have no idea what nVidia is planning this year. We don't know if what AMD has coming is going to be competitive or not at this point.

Of course, I fully expect a big chip when the node is capable.

But I'm somewhat disappointed already, because I was hoping to upgrade to a big, real next-gen chip this year. :)
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Who cares when they release it. When will you actually be able to buy one?

Remember the Fury X hype train rolling uncontrolled for months in advance of the release? Then remember how just before the official release Nvidia released the 980Ti almost out of the blue which stole all the Fury X thunder? Remember how you could actually buy a 980Ti at launch despite the stealth launch yet you couldn't buy a Fury X in the US for months after its release?

Remember the same thing with the 290X launch? Complete disappointment, AMD supporters crying "wait for the aftermarket cooling solutions, everything will be fixed." Two weeks later, Nvidia releases the 780 Ti and buries the 290X. It takes months for aftermarket 290X cards to be released, dooming the card's reputation, which persists to this day.

7970Ghz (last AMD card I owned) was just an overclocked 7970 designed to try and compete with the GTX 680, which it only partially succeeded in doing. From Anand's review:

"Simply put, the 7970GE is unquestionably hotter and uncomfortably louder than the GTX 680 for what amounts to the same performance. If power and noise are not a concern then this is not a problem, but for many buyers they're going to be unhappy with the 7970GE. It’s just too loud....

The end result is that while AMD has tied NVIDIA for the single-GPU performance crown with the Radeon HD 7970 GHz Edition, the GeForce GTX 680 is still the more desirable gaming card. There are a million exceptions to this statement of course (and it goes both ways), but as we said before, these cards may be tied but they're anything but equal."


Which means we have to go back over 4 years and 4 flagships, to the launch of the original 7970, to find an AMD card that was faster than Nvidia's flagship for more than a couple of weeks, that you could actually buy at launch, and that didn't require huge ergonomic sacrifices.

History does not tell us to be optimistic here on the AMD side. The competition will not be won by who hypes their product the most or who paper launches first. AMD better get this one right. If they don't have significant product to sell at launch, they better not paper launch just to beat Nvidia to the punch, because you can bet that when Nvidia does launch there will be product available.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Pariah said:
Who cares when they release it. When will you actually be able to buy one?

Remember the Fury X hype train rolling uncontrolled for months in advance of the release? Then remember how just before the official release Nvidia released the 980Ti almost out of the blue which stole all the Fury X thunder? Remember how you could actually buy a 980Ti at launch despite the stealth launch yet you couldn't buy a Fury X in the US for months after its release?

Remember the same thing with the 290X launch? Complete disappointment, AMD supporters crying "wait for the aftermarket cooling solutions, everything will be fixed." Two weeks later, Nvidia releases the 780 Ti and buries the 290X. It takes months for aftermarket 290X cards to be released, dooming the card's reputation, which persists to this day.

7970Ghz (last AMD card I owned) was just an overclocked 7970 designed to try and compete with the GTX 680, which it only partially succeeded in doing. From Anand's review:

"Simply put, the 7970GE is unquestionably hotter and uncomfortably louder than the GTX 680 for what amounts to the same performance. If power and noise are not a concern then this is not a problem, but for many buyers they're going to be unhappy with the 7970GE. It's just too loud....

The end result is that while AMD has tied NVIDIA for the single-GPU performance crown with the Radeon HD 7970 GHz Edition, the GeForce GTX 680 is still the more desirable gaming card. There are a million exceptions to this statement of course (and it goes both ways), but as we said before, these cards may be tied but they're anything but equal."

Which means we have to go back over 4 years and 4 flagships, to the launch of the original 7970, to find an AMD card that was faster than Nvidia's flagship for more than a couple of weeks, that you could actually buy at launch, and that didn't require huge ergonomic sacrifices.

History does not tell us to be optimistic here on the AMD side. The competition will not be won by who hypes their product the most or who paper launches first. AMD better get this one right. If they don't have significant product to sell at launch, they better not paper launch just to beat Nvidia to the punch, because you can bet that when Nvidia does launch there will be product available.

No. I don't remember. :rolleyes:
 
Feb 19, 2009
10,457
10
76
@Pariah

Are you dissing the 7970 series based on the reference design review? There were plenty of custom designs.

Folks frequently cite custom 980 Ti cards and claim that "people don't buy reference GPUs"...
 

beginner99

Diamond Member
Jun 2, 2009
5,318
1,763
136
It all makes sense now. AMD jacked up prices for the 300 series, especially the 390(X), so that when they release the next-gen midrange they can sell it at a higher price and lower the 300 series pricing.

So Polaris 10 will be an entry-level sub-$200 part (probably around $150), and the 380X probably gets rebranded to a 470X (the 380X rather than the 390, due to GCN 1.2).

The bigger question is Polaris 11. If it gets Fury +10% performance, why would it be called a 480X? That would leave a huge difference between the 470 and 480, if that leak from [H] is true...

So either there is a huge gap, or it means it'll sit between the 390X and Fury, and hence also use GDDR5, obviously with much lower power consumption. They can sell it at $400-500. And the "real" next gen will only come in 2017 with bigger dies. Makes sense due to 14nm and HBM2 issues.
 

flash-gordon

Member
May 3, 2014
123
34
101
Pariah said:
History does not tell us to be optimistic here on the AMD side. The competition will not be won by who hypes their product the most or who paper launches first. AMD better get this one right. If they don't have significant product to sell at launch, they better not paper launch just to beat Nvidia to the punch, because you can bet that when Nvidia does launch there will be product available.

History tells just one thing about Tahiti and Hawaii: we were all wrong, as both proved to be the superior solutions, and we shouldn't rely only on launch reviews if we're looking to make good use of our money.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Just saw this on the net:

http://videocardz.com/58169/nvidia-to-launch-geforce-gtx-980mx-and-gtx-970mx

NVIDIA GeForce GTX 900MX refresh coming to notebooks

Without a doubt NVIDIA is ruling the battle in the mobile graphics card market with its powerful solutions based on the GM204 GPU. The Radeon R9 M390 series, which is based on Tonga, is struggling to fit into demanding power requirements. It's even hard to find any direct comparisons due to the rarity of those Radeon solutions.
Surprisingly, NVIDIA is planning two more mobile graphics cards to fill the gap between its notebook variant of the GTX 980 and the GTX 980M, plus a slightly faster version of the 970M. Technically it doesn't really make sense to launch those GPUs now, unless NVIDIA is just trying to expand its offer to give notebook manufacturers more flexibility. Or maybe NVIDIA is preparing for new GPUs from AMD?
The GeForce GTX 980MX would sport 1664 CUDA cores, while the GTX 970MX would have 1408. Both cards would also be slightly overclocked compared to their -M predecessors.
The new GeForce 900MX graphics cards are expected in the second quarter of 2016. Are these the last 28nm high-end mobile GPUs?

Ehm, any comments on that? Are those 28nm or 16nm?
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
I like how all of a sudden an R7 260X is now "only" slightly slower than a 1024 shader part, aka the HD7850/R7 265/R7 370.

If you look at the latest TechPowerUp reviews for cards like the GTX 950, the R7 370 is 20% faster than a GTX 750 Ti and 43% faster than an HD7790, which is a downclocked R7 260X:

https://tpucdn.com/reviews/EVGA/GTX_950_SSC/images/perfrel_1920.gif



Also, since Polaris has additional improvements in tessellation and other areas, it will probably be quicker than that, and a bus-powered card too.

That is also assuming AMD has not made any changes to the shaders themselves, let alone the clock speeds, and people need to be careful making assumptions.

A GTX 980 has "only" 2048 shaders but is faster than a 2880 shader GTX 780 Ti.

A GTX 960 with 1024 shaders is in between a 1344 shader GTX 670 and a 1536 shader GTX 680 in many games.

So we don't know what performance level a 1024 shader part will have.
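A quick illustration of that last point: the GTX 980 trails the GTX 780 Ti in both shader count and theoretical FLOPS yet beats it in games, so architecture and clocks matter at least as much as raw shader counts. A rough sketch using the cards' rated boost clocks:

def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6  # 2 FLOPs per shader per clock

print(tflops(2880, 928))   # GTX 780 Ti at boost: ~5.35 TFLOPS
print(tflops(2048, 1216))  # GTX 980 at boost:    ~4.98 TFLOPS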
 