AMD's next GPU uarch is called "Polaris"


Vesku

Diamond Member
Aug 25, 2005

Maybe a 14FF small-die launch in summer and a big 16FF+ die at the end of the year? If they still think the VR market has potential they'll need a big die of some sort, and TSMC's process is looking like a better fit for large-die products than Samsung/GF. It would also make sense of the new name IF Polaris is GCN4 on 14FF while Arctic Islands is (or perhaps "was," before getting lumped in with Polaris) GCN4 on 16FF+.
 

csbin

Senior member
Feb 4, 2013
http://www.pcper.com/reviews/Graphics-Cards/AMD-Radeon-Technologies-Group-Previews-Polaris-Architecture

According to pcper.com, the first Polaris-based GPUs could be seen in March.
Quote: It is likely that this is the first Polaris GPU being brought up (after only 2 months I’m told) and could represent the best improvement in efficiency that we will see. I’ll be curious to see how flagship GPUs from AMD compare under the same conditions.


I'll speculate that these will be mobile chips for OEMs, unless they really plan to start shipping their low-end graphics cards that early. They also said that mobile Polaris will use GDDR5.

Quote: It looks like the mobile variations of Polaris, at least, will be using standard GDDR5 memory interfaces. AMD didn’t comment more specifically than that, only elaborating that they “were invested in HBM” and that it would “continue to be on the company’s roadmap.” To me, that sounds like we’ll see a mix of products with HBM and GDDR5 technology, even in the desktop market. Likely, only more expensive, flagship graphics cards will be getting HBM.
 
Feb 19, 2009
Their lineup desperately needs a low-end solution with a massive leap in efficiency.

The 950 and 960 are owning them, despite how strong the 380/X are in performance. For many systems the power use of these GPUs, even the 370/X class, is just too high, and their only option is a 750 Ti or 950/960. I know this first-hand since I recently upgraded a friend's rig, some crap Dell thing with a really awful 240W PSU.

IF either AMD or NV releases a huge chip on 14/16FF early, it would be a surprise. I think everyone understands how much more sensible it is to test the waters on an unproven node with smaller chips, certainly for consumer GPUs that do not enjoy HPC $ margins.

The demoed Polaris, a ~30W GPU with that kind of performance profile, is absolutely a winner.
 

gamervivek

Senior member
Jan 17, 2011
Probably would end up in Apple's laptops first, available on desktop with the bigger chips later.

Again, here people were talking of tapeouts, and AMD (or is it RTG now?) shows up with working silicon for the press. :eek:
 

desprado

Golden Member
Jul 16, 2013
Their lineup desperately needs a low-end solution with a massive leap in efficiency.

The 950 and 960 are owning them, despite how strong the 380/X are in performance. For many systems the power use of these GPUs, even the 370/X class, is just too high, and their only option is a 750 Ti or 950/960. I know this first-hand since I recently upgraded a friend's rig, some crap Dell thing with a really awful 240W PSU.

IF either AMD or NV releases a huge chip on 14/16FF early, it would be a surprise. I think everyone understands how much more sensible it is to test the waters on an unproven node with smaller chips, certainly for consumer GPUs that do not enjoy HPC $ margins.

The demoed Polaris, a ~30W GPU with that kind of performance profile, is absolutely a winner.
In your point of view, Fury X was also the world's fastest card, based on cherry-picked benchmarks; however, when that GPU launched it was totally the opposite. That is why Nvidia never does these kinds of cherry-picked silly things to win over the market and investors.
 
Feb 19, 2009
Probably would end up in Apple's laptops first, available on desktop with the bigger chips later.

Again, here people were talking of tapeouts, and AMD (or is it RTG now?) shows up with working silicon for the press. :eek:

If people knew the real next-gen was out so soon, would they fork out so much $$ for a Fury/X or 980 Ti? These products are really stop-gaps on a 28nm node that has gone on for far too long.

Now some credible sites are saying we're ~2 months out from low-end Polaris. AMD publicly said a while ago that they had two Polaris chips taped out; from this info, we can assume low-end and mid-range.

The mid-range chip should outright kill all the current high-end stuff on performance, at much lower power usage.

Oh, we're gonna have to get used to paying premium prices for mid-range parts.
 
Feb 19, 2009
That is why Nvidia never does these kinds of cherry-picked silly things to win over the market and investors.

Do you really believe what you are busy typing up?

ps. In my PoV, before the launch, I had said I HOPED it would beat Titan X by ~15-20%, and that if it did, it would make ME want to upgrade immediately. It's not the fastest GPU, but it ain't far behind, and it is the fastest in multi-GPU configs, requiring overclocked 980 Ti SLI models to catch up.
 

Azix

Golden Member
Apr 18, 2014
People were speculating that AMD would not be ready until months after Nvidia. Looks like they will be getting a head start.

If we see this in March that would be great, and if it makes it to laptops it could revive AMD's near non-existence there, if they can get OEMs away from Nvidia. This could be a strategic decision to put out a quick low-end part. If Nvidia really has gone full "mid" range like before, then AMD has a chance to gain some market share.
 
Mar 10, 2006
People were speculating that AMD would not be ready until months after Nvidia. Looks like they will be getting a head start.

If we see this in March that would be great, and if it makes it to laptops it could revive AMD's near non-existence there, if they can get OEMs away from Nvidia

Not really a head start; NVIDIA just hours ago announced Drive PX 2 with two Pascal GPUs on board, as well as two new Tegra chips...with Pascal GPUs integrated.

Polaris looks great, but it's silly to think NVIDIA is "behind" just because AMD wanted to buy headlines by demo'ing Polaris.
 

3DVagabond

Lifer
Aug 10, 2009
People were speculating that AMD would not be ready until months after Nvidia. Looks like they will be getting a head start.

If we see this in March that would be great, and if it makes it to laptops it could revive AMD's near non-existence there, if they can get OEMs away from Nvidia

It was worded strangely, but I don't read that statement as saying it's going to be released in 2 months. I read it as saying it took 2 months from, perhaps, tapeout to having the engineering sample? Or something along those lines.

Although the "speculation" that AMD would be late on the new node wasn't based on any kind of facts, simply AMD haters and marketing spin. If AMD were later than nVidia to the new node, it would be the first time ever, I believe (might be wrong, but I don't think so). Just like they had HBM first, which is typical for them with new memory tech as well.
 

3DVagabond

Lifer
Aug 10, 2009
Not really a head start; NVIDIA just hours ago announced Drive PX 2 with two Pascal GPUs on board, as well as two new Tegra chips...with Pascal GPUs integrated.

Polaris looks great, but it's silly to think NVIDIA is "behind" just because AMD wanted to buy headlines by demo'ing Polaris.

Did they have working units? An announcement isn't the same thing as actual operating hardware.
 

DarkKnightDude

Senior member
Mar 10, 2011
Didn't both Nvidia and AMD say last year that they already had tapeouts? We might see big card releases by spring.
 

beginner99

Diamond Member
Jun 2, 2009
Nope.. I am pretty sure the Pascal GTX 970/980 successor will beat the Titan X by around 40%.
If AMD wants to be competitive they need a GPU around 300-350mm^2 on 14/16nm

I don't disagree with that; I disagree with the timeline and price. That 300mm^2 16nm card will probably sell for $700 in 2017, while in early 2015 you could get a 290(X) for around $250. So price/performance will not be better two years later. Remember, 28nm wafers were cheaper than 14/16nm.
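To put rough numbers on that, here's a quick perf-per-dollar sanity check. Every figure is an illustrative assumption (290X as a 1.00 performance baseline at its rough early-2015 street price; a hypothetical 300mm^2 16nm card at +80% performance), not a sourced benchmark:

[CODE]
# Back-of-envelope perf/$ comparison; all numbers are assumptions
# for illustration, not sourced prices or benchmark results.
baseline_perf, baseline_price = 1.00, 250   # 290X, ~early-2015 street price
new_perf, new_price = 1.80, 700             # hypothetical 300mm^2 16nm card

baseline_ppd = baseline_perf / baseline_price   # 0.0040 perf/$
new_ppd = new_perf / new_price                  # ~0.0026 perf/$

print(f"290X: {baseline_ppd:.4f} perf/$, new card: {new_ppd:.4f} perf/$")
# Price the new card would need just to match the 290X on perf/$:
print(f"break-even price: ${new_perf / baseline_ppd:.0f}")
[/CODE]

Even granting the hypothetical card +80% performance, at $700 its perf/$ lands roughly a third below the 290X's; it would have to sell for about $450 just to break even on that metric.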
 

Elixer

Lifer
May 7, 2002
This looks pretty good on paper.
But therein lies the problem: it still has major unknowns, just like with the Fury line.
They (the engineers) did say the Fury was an overclocker's dream, and that just wasn't the case.
So, again, we've got them talking about targets in very broad strokes, which is all fine and well, but there just isn't any way to quantify anything that is being said.

Do we want this project to be a huge success? Yes; we really need some solid competition in the GPU arena. It has been far too long with barely incremental advances from both sides.
 

flopper

Senior member
Dec 16, 2005
AMD's efficiency is due to better control over the transistors' workload, so while the node is the major factor, their tech is also there to make sure the power target is reached efficiently when you cap the fps at 60 or whatever number.
 

Vesku

Diamond Member
Aug 25, 2005
AMD's efficiency is due to better control over the transistors' workload, so while the node is the major factor, their tech is also there to make sure the power target is reached efficiently when you cap the fps at 60 or whatever number.

Which should work well in combination with Adaptive Sync and a maximum FPS target. It's going to be hard to compete with AMD on that front value-wise unless Nvidia adopts Adaptive Sync support with Pascal.
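The mechanism behind an FPS cap saving power is simple enough to sketch. A minimal frame-limiter loop, with a stand-in render_frame() (purely illustrative, not AMD's actual implementation): render only as often as the target demands and let the GPU idle for the rest of each frame.

[CODE]
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS   # ~16.7 ms per frame at 60 fps

def render_frame():
    pass  # stand-in for the actual GPU work

for _ in range(600):  # ~10 seconds at the target rate
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    # If the frame finished early, sleep off the remainder: the GPU
    # idles (or downclocks) instead of racing ahead to render frames
    # the display will never show.
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)
[/CODE]

With Adaptive Sync the display simply follows whatever rate the cap produces, so there's no tearing penalty for leaving performance on the table.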
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
I'm more interested in their perf/area metrics ...

If they can beat their competitor by a decent margin when it comes to being cost efficient, there's a very real chance that AMD could take the performance crown ...
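Perf/area feeds straight into cost per die, which is easy to sketch with the common dies-per-wafer approximation. The wafer price and die sizes below are assumptions for illustration, not known 14/16nm figures:

[CODE]
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # Common approximation: gross dies on the wafer minus edge losses.
    radius = wafer_diameter_mm / 2.0
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

WAFER_COST = 8000  # assumed FinFET wafer price in $, illustrative only

for area in (120, 230, 350):  # hypothetical small / mid / big dies
    n = dies_per_wafer(area)
    print(f"{area} mm^2: ~{n} dies/wafer, ~${WAFER_COST / n:.0f}/die pre-yield")
[/CODE]

Under those assumptions the 230mm^2 die costs roughly 35-40% less per chip than the 350mm^2 one before yield even enters the picture, and defect-limited yield favors the smaller die too. That's the lever a perf/area advantage gives you: match the competitor's performance from a cheaper die, or spend the savings on a bigger one.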
 

imaheadcase

Diamond Member
May 9, 2005
I see AMD still does not understand gamers. Hopefully that test was just to show power requirements and not an actual "we are better at gaming" test. A gamer doesn't care about how much power it uses...it does not matter if the best AMD card used 1 watt of power; if the Nvidia card used 250W but was faster, they will still get Nvidia for games.

AMD does this all the time; it's all about saving power for them, or new technology that no one actually uses in the real world.


Somewhere down the line they turned into the guy on the block who buys an Alienware computer and says it's better than everyone else's.
 

ShintaiDK

Lifer
Apr 22, 2012
If people knew the real next-gen was out so soon, would they fork out so much $$ for a Fury/X or 980 Ti? These products are really stop-gaps on a 28nm node that has gone on for far too long.

You can always wait for a better product. Doesn't help for what you need/want today.
 

3DVagabond

Lifer
Aug 10, 2009
I see AMD still does not understand gamers. Hopefully that test was just to show power requirements and not an actual "we are better at gaming" test. A gamer doesn't care about how much power it uses...it does not matter if the best AMD card used 1 watt of power; if the Nvidia card used 250W but was faster, they will still get Nvidia for games.

AMD does this all the time; it's all about saving power for them, or new technology that no one actually uses in the real world.


Somewhere down the line they turned into the guy on the block who buys an Alienware computer and says it's better than everyone else's.

Well, they've been beaten over the head since Maxwell that perf/W is their biggest failing. I can understand why they are touting it.
 

ShintaiDK

Lifer
Apr 22, 2012
Performance/watt is king, especially in mobile, where AMD has even less share than in desktops. So I agree, it's very important.
 
Feb 19, 2009
From AT's article, this should be quite clear for anyone who has paid attention to AMD & GloFo for the past several years.

Unfortunately what’s not clear at this time is why RTG is splitting designs like this. Even without dual sourcing any specific GPU, RTG will still incur some extra costs to develop common logic blocks for both fabs. Meanwhile it's also not clear right now whether any single process/fab is better or worse for GPUs, and what die sizes are viable, so until RTG discloses more information about the split order, it's open to speculation what the technical reasons may be. However it should be noted that on the financial side of matters, as AMD continues to execute a wafer share agreement with GlobalFoundries, it’s likely that this split helps AMD to fulfill their wafer obligations by giving GlobalFoundries more of AMD's chip orders.

In the last financial report, AMD had to pay GloFo for unused wafers due to their contracts. With APUs not keeping up in volume, AMD needs dGPU chips to be made at GloFo.

The question, then, would be why they would source from TSMC at all.

With Samsung touting their alternate, power-enhanced 14FF node, clock speed or a high-TDP GPU should be a non-issue.

Thus, the logical answer would be that GloFo is incapable of producing a big die on 14FF with decent yields.

But why would TSMC be any better at it? There's rumbling from unhappy SoC customers that their small dies aren't yielding well on TSMC's 16nm FF...

Adding the two together, big-chip next-gen sounds like it's a long way away.