[Ars] AMD confirms high-end Polaris GPU will be released in 2016

Page 8 - AnandTech Forums

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Nvidia was 2-3 steps ahead of AMD the last few years, and as such has been able to do things like time releases to cause the most market damage possible to their competition, while also getting the most money from the market as possible by not cannibalizing their own existing products needlessly. I mean, I want AMD to get their act together if only to force Nvidia to compete and release new technology instead of sitting on things for 8+ months simply because they can.

Basically what Intel has been doing. No need to really bring anything huge to the market if you're only fighting yourself.

Why do I get this weird feeling that AMD (and all of their outspoken supporters) are going to find out that both Intel and NV have been quiet about things because, well, they aren't threatened?

Zen is going to come out swinging, hit (throwing them a slice of pie) Skylake IPC numbers, and Intel is going to laugh and unveil what they've been working on while AMD was playing catch-up. Same with Pascal. It's not like AMD is straying from GCN, so what they've had available has been on the table since 2012.


I will be pleasantly surprised if AMD can catch BOTH Intel and Nvidia with their pants down. But just reading AT's recent Carrizo article makes me realize all the more that AMD has no idea what the hell they are doing. This Carrizo fiasco reminds me of when AMD sat approvingly, nodding at a FreeSync vs G-Sync competition, while the organizers basically sabotaged it.
 

caswow

Senior member
Sep 18, 2013
525
136
116
Nvidia sat on GK110 because the only competition they had was themselves. Their previous-gen cards still held the high-end crown (in everything but 4K, which was and still is a niche market). AMD had yet to release a card that would rival the 780 Ti, let alone the 980, until the Fury X. Nvidia then timed the 980 Ti launch to take all the thunder out of AMD's Fury X announcements by releasing the card while AMD was introducing the Fury X (introducing is the correct name for the event, as the Fury X wouldn't be on the market for another two months after the introduction, while you could in theory buy a 980 Ti in stores), stealing the news cycle that might have given AMD some steam.

Nvidia was 2-3 steps ahead of AMD the last few years, and as such has been able to do things like time releases to cause the most market damage possible to their competition, while also getting the most money from the market as possible by not cannibalizing their own existing products needlessly. I mean, I want AMD to get their act together if only to force Nvidia to compete and release new technology instead of sitting on things for 8+ months simply because they can.

This reminds me of the people who went from AMD to Nvidia and allegedly stole data from AMD... D:

http://www.alphr.com/news/379345/amd-staff-stole-100-000-files-before-moving-to-nvidia

Desai is accused of emailing Kociuk about how to "manipulate and eliminate certain data on her AMD computer", and of copying a database containing confidential product development information.

It lines up so perfectly with what Nvidia was able to do to AMD. It should end with Polaris, because I don't think AMD had proper plans for 16/14nm in this timeframe.
 

maddie

Diamond Member
Jul 18, 2010
5,157
5,545
136
No it doesn't, the enthusiast class (& HALO) products are always going to be expensive especially at launch. The deep(er) price cuts only happen when the competition has a better product at similar or slightly lower price level. Kind of like how the Fury X was rumored to be $750 but ended up $100 short of that number as the 980Ti eventually stole its thunder, AMD will most likely exploit any high margin (low volume) product for as long as they can & they should :thumbsup:
This is a perfect example of a local expert. Did you read anything from the article?

An AMD Corporate VP says that they need to and can make the minimum specs for VR [GTX970/R9 290] more affordable with Polaris, but you know better.
AMD might have decided a greater market share ASAP is needed to support their ecosystem, thus having enticing prices at launch, but you know better.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
This is a perfect example of a local expert. Did you read anything from the article?

An AMD Corporate VP says that they need to and can make the minimum specs for VR [GTX970/R9 290] more affordable with Polaris, but you know better.
AMD might have decided a greater market share ASAP is needed to support their ecosystem, thus having enticing prices at launch, but you know better.

An AMD Corporate VP said this? Then you know it's the wrong thing to do.

"Guys, we're hemorrhaging left and right, let's offer up our newest GPU family at consumer-friendly prices! That should stop the bleeding!"
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
An AMD Corporate VP said this? Then you know it's the wrong thing to do.

"Guys, we're hemorrhaging left and right, let's offer up our newest GPU family at consumer-friendly prices! That should stop the bleeding!"

Well, let's look at what they could theoretically offer.

120 mm2 Die with 256 Bit memory bus, 2048 GCN4 cores with 75W TDP. GTX 750 Ti price bracket
~220 mm2 Die size with >3072 GCN4 cores with who knows what Memory type with 125 W TDP. GTX 960 Price bracket.


Is this bleeding cash, or just a reflection of the process improvement?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I don't think you get 256-bit and 75W at the same time for a GTX 750 Ti price. More likely another round of 128-bit.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
An AMD Corporate VP said this? Then you know it's the wrong thing to do.

"Guys, we're hemorrhaging left and right, let's offer up our newest GPU family at consumer-friendly prices! That should stop the bleeding!"

It could just as easily mean "let's sell every die we can make of our small-die part by pricing it aggressively by midrange-part standards (never mind that it isn't nearly as expensive to make)."
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
Well, let's look at what they could theoretically offer.

120 mm2 Die with 256 Bit memory bus, 2048 GCN4 cores with 75W TDP. GTX 750 Ti price bracket
~220 mm2 Die size with >3072 GCN4 cores with who knows what Memory type with 125 W TDP. GTX 960 Price bracket.


Is this bleeding cash, or just a reflection of the process improvement?
Who knows, no one here would know save for AMD. But if those are the new chips, 2016 is gonna be super exciting.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Guys, I don't know how you cannot understand this. It's a shrink. The GTX 980 will become the GTX 1060, with new features and a lower TDP.

R9 390X performance/core count will be brought into a lower tier of price, power consumption, etc. That is exactly what Roy Taylor implied in his interview, posted a page ago in this thread.
 

maddie

Diamond Member
Jul 18, 2010
5,157
5,545
136
An AMD Corporate VP said this? Then you know it's the wrong thing to do.

"Guys, we're hemorrhaging left and right, let's offer up our newest GPU family at consumer-friendly prices! That should stop the bleeding!"
I hope I should not have to show that Corporate profit is not the same as unit margin/profit. I would imagine that any reasonable person would have the overall profits be more important than the profit on an individual item. In other words, overall sales are important for corporate profits. With a very low market share, AMD increasing sales volumes is as important as unit profit. They are not back in the early 28nm position.
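The margin-vs-volume point can be illustrated with a toy calculation; all numbers below are invented for illustration and are not AMD's actual prices, costs, or volumes:

```python
# Toy illustration: a lower unit margin can still yield higher total profit
# if volume responds strongly enough. All figures are made up.
def profit(units, price, unit_cost):
    return units * (price - unit_cost)

# High unit margin, low volume:
high_margin = profit(units=100_000, price=400, unit_cost=250)   # $15M
# Lower unit margin, but volume more than compensates:
volume_play = profit(units=350_000, price=300, unit_cost=250)   # $17.5M

print(high_margin, volume_play)
```

Whether the volume actually materializes at the lower price is, of course, the whole argument.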
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
Well, let's look at what they could theoretically offer.

120 mm2 Die with 256 Bit memory bus, 2048 GCN4 cores with 75W TDP. GTX 750 Ti price bracket
~220 mm2 Die size with >3072 GCN4 cores with who knows what Memory type with 125 W TDP. GTX 960 Price bracket.


Is this bleeding cash, or just a reflection of the process improvement?
You'll likely get up to 380x level of (absolute) performance with the 120 mm^2 die :D
http://forums.anandtech.com/showpost.php?p=38019704&postcount=26

The 256-bit wide bus isn't necessary for an entry-level card, especially with memory compression & the incoming GDDR5X, so I'd say your estimates are off the mark. The TDP will likely be 100~120W for the full Polaris 10 chip, though the cut-down parts could be competitive with the 750/Ti price-wise & a lot closer to their ~60W TDP, as I don't see AMD leaving any bit of performance on the table with the first-gen Arctic Islands.
 
Last edited:

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
Well, let's look at what they could theoretically offer.

120 mm2 Die with 256 Bit memory bus, 2048 GCN4 cores with 75W TDP. GTX 750 Ti price bracket
~220 mm2 Die size with >3072 GCN4 cores with who knows what Memory type with 125 W TDP. GTX 960 Price bracket.


Is this bleeding cash, or just a reflection of the process improvement?

If AMD offers a 120mm² die with 2048 GCN cores and a 75W TDP, I would literally eat my hat. That would be the same number of shaders as a 280X or 380X in 1/3rd the area, simultaneously using 1/3rd the power. 14nm ain't that good.

If they did pull off that feat, I would further chase my hat with my socks if they offered it at an MSRP of $150.
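The skepticism is easy to quantify with rough numbers. The Tonga (R9 380X) figures below are commonly quoted values, not official specs, so treat them as assumptions:

```python
# Sanity check on the rumor: same shader count as the R9 380X (Tonga) in a
# far smaller, cooler chip. Die size and TDP are commonly quoted figures.
tonga_shaders, tonga_area_mm2, tonga_tdp_w = 2048, 366, 190
rumor_shaders, rumor_area_mm2, rumor_tdp_w = 2048, 120, 75

area_gain = tonga_area_mm2 / rumor_area_mm2    # ~3.05x density improvement
power_gain = tonga_tdp_w / rumor_tdp_w         # ~2.53x efficiency improvement

print(f"density gain {area_gain:.2f}x, efficiency gain {power_gain:.2f}x")
```

A simultaneous ~3x density and ~2.5x efficiency jump from one node transition is what the hat-eating offer is pricing in.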
 
Last edited:

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
I hope I should not have to show that Corporate profit is not the same as unit margin/profit. I would imagine that any reasonable person would have the overall profits be more important than the profit on an individual item. In other words, overall sales are important for corporate profits. With a very low market share, AMD increasing sales volumes is as important as unit profit. They are not back in the early 28nm position.
I sure hope AMD doesn't listen to this kind of advice. The dGPU market is shrinking annually & is getting eroded by the IGP, i.e. bottom-of-the-barrel & entry-level GPUs are the hardest hit. It isn't a pure volume play anymore; in fact the max revenue & profits, for Nvidia & AMD, are coming from GPUs worth $300 or above. If anything, AMD should make all the profits they possibly can, until they're forced into cutting prices by Nvidia.
 
Last edited:

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
If AMD offers a 120mm² die with 2048 GCN cores and a 75W TDP, I would literally eat my hat. That would be the same number of shaders as a 280X or 380X in 1/3rd the area, simultaneously using 1/3rd the power. 14nm ain't that good.

If they did pull off that feat, I would further chase my hat with my socks if they offered it at an MSRP of $150.

The 380X is pretty inefficient with die space, and there is a chance that 28nm was that bad compared to 14nm with finfets. If there's any process change that's going to be able to do it, this is it.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
You'll likely get up to 380x level of (absolute) performance with the 120 mm^2 die :D
http://forums.anandtech.com/showpost.php?p=38019704&postcount=26

The 256-bit wide bus isn't necessary for an entry-level card, especially with memory compression & the incoming GDDR5X, so I'd say your estimates are off the mark. The TDP bracket will likely be 80~120W for the full Polaris 10 chip, though the cut-down parts could be competitive with the 750/Ti price-wise & a lot closer to their ~60W TDP, as I don't see AMD leaving any bit of performance on the table with the first-gen Arctic Islands.

AMD staff said that 70% of the efficiency gain in GCN4 comes from the die shrink itself, and the other 30% from the architecture. Think about it: if the 200W R9 280X were ported to GCN4 while staying on 28nm, it would be a 140W GPU. That is even without the die shrink.
Staff from the Radeon Technology Group did admit that the bulk of the efficiency improvements that we will see with AMD's newest GPUs will come from the so-called "FinFET Advantage", with PCPer stating that it is "on the order of a 70/30 split".
- http://www.overclock3d.net/articles/gpu_displays/amd_has_two_polaris_gpus_coming_this_year/1
So divide that 140W in half and you end up at 75W levels.

Also, it's worth noting that density is increased. GloFo/Samsung is 2.2 times denser than 28nm TSMC. A 360mm² GPU on the TSMC process would, by the looks of things, be 140mm² on 14nm FinFET Samsung/GloFo. - https://www.semiwiki.com/forum/content/3884-who-will-lead-10nm.html At the bottom there is a table with density.
A 600mm² 4096-GCN-core GPU on TSMC 28nm would be around 250mm² on GloFo 14nm. But we don't know how the new architecture will affect die size. One of the things implying that the chips will be smaller is the efficiency split mentioned above. They simply have to have less hardware to draw less power, and that implies the die has to be slightly smaller as well, to keep the power down.
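The arithmetic in this post can be written out explicitly. The 70/30 split, the "halved by FinFET" assumption, and the 2.2x density figure are the post's claims, not confirmed specs:

```python
# Glo.'s reasoning, spelled out. All inputs are the figures claimed in the
# post (70/30 efficiency split, power halved by FinFET, 2.2x density).
r9_280x_tdp_w = 200

# 30% of the efficiency gain from the architecture alone (still on 28nm):
arch_only_tdp = r9_280x_tdp_w * (1 - 0.30)   # 140 W
# Halve again for the FinFET shrink:
shrunk_tdp = arch_only_tdp / 2               # 70 W -> "75W levels"

# A straight 1/2.2 area scaling of a 360 mm2 die:
shrunk_area = 360 / 2.2                      # ~163.6 mm2

print(arch_only_tdp, shrunk_tdp, round(shrunk_area, 1))
```

Note that a straight 1/2.2 scaling gives ~164 mm², so the 140 mm² figure quietly assumes additional density gains beyond the process itself.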
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
The 380X is pretty inefficient with die space, and there is a chance that 28nm was that bad compared to 14nm with finfets. If there's any process change that's going to be able to do it, this is it.

Not really, unless you compare it to Hawaii and Fiji. The only thing really inefficient about Tonga is that they never enabled the full memory bus, so it's essentially the same size as Tahiti but without the 384-bit bus.
Die - Shaders/mm²
Cape Verde - 5.20
Pitcairn - 6.04
Tahiti - 5.82
Oland - 4.27
Bonaire - 5.60
Hawaii - 6.43
Tonga - 5.70
Fiji - 6.87
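Those ratios follow directly from shader counts and die areas. The areas below are assumptions taken from commonly reported public die-size figures, which vary slightly by source:

```python
# Reproducing the shaders-per-mm2 list from shader counts and commonly
# reported die areas (approximate public figures; sources differ slightly).
gpus = {
    "Cape Verde": (640, 123),
    "Pitcairn":   (1280, 212),
    "Tahiti":     (2048, 352),
    "Oland":      (384, 90),
    "Bonaire":    (896, 160),
    "Hawaii":     (2816, 438),
    "Tonga":      (2048, 359),
    "Fiji":       (4096, 596),
}

for name, (shaders, area_mm2) in gpus.items():
    print(f"{name:10s} {shaders / area_mm2:.2f} shaders/mm2")
```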
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
AMD staff said that 70% of the efficiency gain in GCN4 comes from the die shrink itself, and the other 30% from the architecture. Think about it: if the 200W R9 280X were ported to GCN4 while staying on 28nm, it would be a 140W GPU. That is even without the die shrink. - http://www.overclock3d.net/articles/gpu_displays/amd_has_two_polaris_gpus_coming_this_year/1
So divide that 140W in half and you end up at 75W levels.

Also, it's worth noting that density is increased. GloFo/Samsung is 2.2 times denser than 28nm TSMC. A 360mm² GPU on the TSMC process would, by the looks of things, be 140mm² on 14nm FinFET Samsung/GloFo. - https://www.semiwiki.com/forum/content/3884-who-will-lead-10nm.html At the bottom there is a table with density.
A 600mm² 4096-GCN-core GPU on TSMC 28nm would be around 250mm² on GloFo 14nm. But we don't know how the new architecture will affect die size. One of the things implying that the chips will be smaller is the efficiency split mentioned above. They simply have to have less hardware to draw less power, and that implies the die has to be slightly smaller as well, to keep the power down.
That's probably one of those best-case scenarios, you know, "up to" with a massive asterisk (*) at the end, & if you check the link in my post you'll see that 380X-level performance is very much possible. This is aside from the fact that the 380X is the full Tonga, hence a better comparison, with its 190W TDP.
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
AMD staff said that 70% of the efficiency gain in GCN4 comes from the die shrink itself, and the other 30% from the architecture. Think about it: if the 200W R9 280X were ported to GCN4 while staying on 28nm, it would be a 140W GPU. That is even without the die shrink.
- http://www.overclock3d.net/articles/gpu_displays/amd_has_two_polaris_gpus_coming_this_year/1
So divide that 140W in half and you end up at 75W levels.

Also, it's worth noting that density is increased. GloFo/Samsung is 2.2 times denser than 28nm TSMC. A 360mm² GPU on the TSMC process would, by the looks of things, be 140mm² on 14nm FinFET Samsung/GloFo. - https://www.semiwiki.com/forum/content/3884-who-will-lead-10nm.html At the bottom there is a table with density.
A 600mm² 4096-GCN-core GPU on TSMC 28nm would be around 250mm² on GloFo 14nm. But we don't know how the new architecture will affect die size. One of the things implying that the chips will be smaller is the efficiency split mentioned above. They simply have to have less hardware to draw less power, and that implies the die has to be slightly smaller as well, to keep the power down.

Your numbers don't add up. 360/2.2 = 163, not 140. Getting 2048 shaders into that kind of die size seems reasonable, but 163 is also 36% larger than 120. A 280X is also a 250W card and pulls that under gaming loads.

The 600mm² Fiji die is a bad example to use though; it's unbalanced and too shader-heavy. Hopefully with GCN4 AMD will be able to address the limits that keep them from using more than four shader engines and 64 ROPs.
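The correction is simple arithmetic on the figures already in the thread:

```python
# Checking the correction: a straight 2.2x density scaling of a 360 mm2 die.
scaled = 360 / 2.2

print(round(scaled, 1))             # ~163.6 mm2, not 140
print(round(scaled / 120 - 1, 2))   # ~0.36 -> about 36% larger than 120 mm2
```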
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Ah ok. Have thought about mGPU in the past but never pursued it much (if that's not obvious already :)).


I recently got a 4K monitor and starting to think it might be worth the investment... the talk about a new dual GPU from AMD might be interesting and worth the first leap.
Hope you get your system. I know a lot of people are downplaying mGPU, but I think it could be fun just messing with it.

Interesting read:
http://www.gamecrate.com/interview-amds-roy-taylor-dawn-virtual-reality-age/12842
The second thing is, I mentioned just now that we're going to need the minimum specs to be available at a much more aggressive target price to drive the number of platforms available. We're ahead to market with 14 nanometer FinFET process, way ahead of our competitors, so our ability to ramp high-performance parts which are at a very good price with low power consumption is also going to be an advantage for us.

Fighting words indeed. Looks like next gen might not be as expensive as some of our experts claimed.

How good would it be for us if AMD pulled another Evergreen release? If they can get their CPU business rolling again with Zen, they wouldn't need the large profit margins on GPUs that nVidia shoots for.

It absolutely blows my mind that there are people on these forums advocating high pricing. I don't think I've ever read anything more ridiculously stupid in my life. Literally, "Take my money" talk. Unfreakinbelievable! :D
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Your numbers don't add up. 360/2.2 = 163, not 140. Getting 2048 shaders into that kind of die size seems reasonable, but 163 is also 36% larger than 120. A 280X is also a 250W card and pulls that under gaming loads.

The 600mm² Fiji die is a bad example to use though; it's unbalanced and too shader-heavy. Hopefully with GCN4 AMD will be able to address the limits that keep them from using more than four shader engines and 64 ROPs.

Are you sure? 360/2 (50%) = 180. 180 − 36 (10% of 360) = 144.

What matters here is that GCN4 is a cleaned-up architecture. Simpler, without the clunky bits and pieces that were added along the road. It's the first time in 4 years that they have done a complete revision of their arch.
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
That's probably one of those best case scenarios, you know up to with a massive asterisk(*) at the end, & if you check the link in my post you'll see that 380x level of performance is very much possible. This aside from the fact that 380x is the full Tonga, hence a better comparison, having 190W as TDP.

I wouldn't take AtenRa's post as fact; it's opinion, the same as anyone else's here. For one, his scaling assumption removes two 128-bit memory controllers from Tonga to save 66mm². Even setting aside whether you would actually save that much die area, Tonga is already slightly gimped vs the 280X with its 256-bit bus, and a 128-bit part with 2048 shaders would be ridiculous. Even then, looking at the Tonga die, removing 256 bits' worth of MC likely wouldn't even get you to 66mm² saved.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Things don't scale down linearly either.

GTX 980: 2048 SP, 256-bit, 165W, 398mm².
GTX 960: 1024 SP, 128-bit, 120W, 227mm².

Half the chip won't use half the power, nor will it be half the size.

The best case for getting half the power usage is to shrink a 980 Ti/Fury X. But the lower you go, the harder it gets.
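The fixed-overhead point falls straight out of the figures quoted above:

```python
# Non-linear scaling: half the shaders is not half the power or half the
# area. Figures are the GTX 980 / GTX 960 numbers quoted in the post.
sp_980, watts_980, mm2_980 = 2048, 165, 398
sp_960, watts_960, mm2_960 = 1024, 120, 227

power_frac = watts_960 / watts_980   # ~0.73 of the power for 0.5x shaders
area_frac = mm2_960 / mm2_980        # ~0.57 of the area

print(f"power {power_frac:.0%}, area {area_frac:.0%}")
```

Memory controllers, display logic, video blocks, and I/O don't shrink with the shader array, which is why the small chip carries proportionally more overhead.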
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Things don't scale down linearly either.

GTX 980: 2048 SP, 256-bit, 165W, 398mm².
GTX 960: 1024 SP, 128-bit, 120W, 227mm².

Half the chip won't use half the power, nor will it be half the size.

The best case for getting half the power usage is to shrink a 980 Ti/Fury X. But the lower you go, the harder it gets.

We are not talking about cutting down the GPUs, but about shrinking the process. AMD staff said that 14nm alone will bring a 50-60% reduction in power draw at the same level of performance. If that is correct, and that density applies, and further efficiency and density come from the architecture, then all of what I have written today is pretty much accurate.
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
Are you sure? 360/2 (50%) = 180. 180 − 36 (10% of 360) = 144.

What matters here is that GCN4 is a cleaned-up architecture. Simpler, without the clunky bits and pieces that were added along the road. It's the first time in 4 years that they have done a complete revision of their arch.

That is a very strange way to do the math. You're equating 0.5X − 0.1X with X/2.2. Your number actually gives X(0.5 − 0.1) = 0.4X, i.e. a die on the new node would be 40% the size of the old one. 360/144 is 2.5 times the density.
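The two scalings being conflated, with X = 360 mm² (the die size under discussion):

```python
# Subtracting percentages vs dividing by the density factor.
X = 360  # mm2

glo = 0.5 * X - 0.1 * X   # 0.4X -> 144 mm2
straight = X / 2.2        # a true 2.2x density scaling -> ~163.6 mm2

print(glo, round(straight, 1))   # 144.0 163.6
print(round(X / glo, 2))         # 2.5 -> 144 mm2 implies 2.5x density, not 2.2x
```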

We are not talking about cutting down the GPUs, but about shrinking the process. AMD staff said that 14nm alone will bring a 50-60% reduction in power draw at the same level of performance. If that is correct, and that density applies, and further efficiency and density come from the architecture, then all of what I have written today is pretty much accurate.
Why do you think the new uarch will be much more dense?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
We are not talking about cutting down the GPUs, but about shrinking the process. AMD staff said that 14nm alone will bring a 50-60% reduction in power draw at the same level of performance. If that is correct, and that density applies, and further efficiency and density come from the architecture, then all of what I have written today is pretty much accurate.

You forget all the "fixed" power consumption that doesn't care much about a shrink.