
AMD's next GPU uarch is called "Polaris"

Pascal will never be 10X Maxwell in terms of performance and efficiency, and Nvidia never said that. Pascal is only 10X in terms of deep learning, data, etc.

yea, nice marketing as always.

 
Does that mean that we will have GTX 970/R9 390 performance packed into 86W?

If so, that is around a 70% decrease in power consumption compared to the R9 390.
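That "around 70%" figure is easy to sanity-check. The 275W and 86W values below are assumed from commonly cited figures (typical R9 390 board power and the system number quoted for the demo), not official measurements:

```python
# Rough check of the "~70% decrease" claim.
# Wattages are assumed typical figures, not official measurements.
r9_390_watts = 275   # commonly cited R9 390 typical board power
demo_watts = 86      # figure quoted for the Polaris demo

decrease = 1 - demo_watts / r9_390_watts
print(f"power decrease: {decrease:.0%}")  # roughly 69%
```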

They used a GTX 950 for comparison, didn't they?

In AMD's case you may end up seeing 390 performance in the 125-150W range.
 
Easily. Especially since 16FF+ is better than 14LPP.

But AMD needs to compete with Pascal. So the uarch delta increases, and on top of that comes the node delta. It's a one-way road, like the CPU division, unfortunately.
 
Remember how Nvidia started the 28nm generation with a power-efficient small die. They only ramped up to large dies as 28nm costs went down. Expect the same this time around.
 
For all the huff and puff about performance, CF Fury X is still very competitive even against custom 980 Ti SLI. It's also got the performance where it matters, at the high end: 4K gaming is the niche of multi-GPU. And that's despite being a compute-focused design with an "old" Tonga front-end.

Not bad for running crippled in the DX11 era, huh?

Next-gen's battle will be in full swing in the DX12 era, so let that sink in.
 
They used a GTX 950 for comparison, didn't they?

In AMD's case you may end up seeing 390 performance in the 125-150W range.

IF AMD's GPU uses GDDR5X, that power consumption is very good.

If HBM1 and HBM2 go to the mid-range and high end only, we could see much higher efficiency gains.

I still have to say, it is not bad...

And if we look at the stats of the GTX 950, it is comparable to the R9 285 and R9 280X, sitting directly between them. Makes it even more not bad. 😉
 
IF AMD's GPU uses GDDR5X, that power consumption is very good.

If HBM1 and HBM2 go to the mid-range and high end only, we could see much higher efficiency gains.

I still have to say, it is not bad...

And if we look at the stats of the GTX 950, it is comparable to the R9 285 and R9 280X, sitting directly between them. Makes it even more not bad. 😉

It's not bad at all. But is it up to the competition's level? I doubt it.

However, as said before, it's now more about performance/watt than raw performance.
 
Does that mean that we will have GTX 970/R9 390 performance packed into 86W?

If so, that is around a 70% decrease in power consumption compared to the R9 390.

So, much higher than the slides imply... Why are they being so conservative then?


EDIT: Okay, I see that the slide is completely meaningless now.
 
Yes, it is laughable, and people were not getting my point, but now they are. AMD has made GCN 4.0 competitive with Maxwell, not Pascal.

If this is a GDDR5X GPU, then it is not exactly wise to judge the WHOLE lineup of GPUs by it, especially those with HBM1 and HBM2 😉.

I know, I'm getting hyped, but let's see what it has under the hood, and how the rest of the lineup behaves.
 
Yes, it is laughable, and people were not getting my point, but now they are. AMD has made GCN 4.0 competitive with Maxwell, not Pascal.

They can't compare it to Pascal since we don't know how it performs. It's obvious even from a marketing slide that it's going to be an order of magnitude better than Nvidia's Maxwell, and not just competitive. The big question is how good Pascal will be. They will use 16/14nm and HBM2 just like AMD, but they will have to add a few missing features that AMD already has, and that means extra transistors and extra power draw.

IMO perf/watt will be fairly similar. Personally, I mostly care about perf/$ and don't really care about power consumption, which everyone started talking about after Maxwell was released yet didn't care about a day before that.
 
They used a GTX 950 for comparison, didn't they?

In AMD's case you may end up seeing 390 performance in the 125-150W range.

You do realize that what you said makes no sense (or is a pretty random guess) if they managed to make a GPU with GTX 950 performance at 50W less, right? It would be a huge drop-off to go from that in the low-end tier to just being on par with Maxwell in the mid-range tier. In fact, that would actually make it only slightly better than the Nano in terms of performance/watt. I think what's going on here is that you're applying the 2x perf/watt increase to Hawaii when you should be applying it to Fiji.
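The Hawaii-vs-Fiji distinction can be put in rough numbers. All figures below are assumed, commonly quoted board powers (275W for the R9 390, 175W for the R9 Nano, which lands near 390-level performance), used only to illustrate how the choice of baseline changes the implied power target:

```python
# Back-of-envelope: what "2x perf/watt" implies, depending on the baseline.
# Performance and wattage figures are rough assumptions for illustration.
baselines = {
    "Hawaii (R9 390)": (1.00, 275),  # (relative perf, typical board watts)
    "Fiji (R9 Nano)":  (1.00, 175),  # Nano sits near 390-level performance
}
for name, (perf, watts) in baselines.items():
    doubled_ppw = 2 * perf / watts
    needed = 1.00 / doubled_ppw  # watts for 390-class perf at doubled ppw
    print(f"2x {name}: 390-class perf at ~{needed:.0f}W")
```

Doubling Hawaii's efficiency puts 390-class performance near 138W, matching the 125-150W guess above; doubling Fiji's puts it near 88W, much closer to the 86W demo figure.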
 
If this is a GDDR5X GPU, then it is not exactly wise to judge the WHOLE lineup of GPUs by it, especially those with HBM1 and HBM2 😉.

I know, I'm getting hyped, but let's see what it has under the hood, and how the rest of the lineup behaves.
I am not talking about that.

I am talking about architecture improvement and node improvement. If Nvidia improves Maxwell a bit and competes on a new node, then it is real competition for GCN 4.0, so leave Pascal alone.
 
The primitive discard accelerator is the real deal here. It can give a huge minimum-fps boost, especially in complex scenes.
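For anyone unfamiliar with the idea: it throws away triangles that can never produce pixels (zero-area or back-facing ones) before they reach the rasterizer. A toy software sketch of the concept with 2D screen-space triangles; the function names and threshold are illustrative only, not how the hardware actually works:

```python
# Toy illustration of primitive discard: cull triangles that
# cannot contribute pixels before any rasterization work is done.

def signed_area(tri):
    """Twice the signed area of a 2D triangle (positive = CCW = front-facing)."""
    (ax, ay), (bx, by), (cx, cy) = tri
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)

def discard_pass(triangles, eps=1e-9):
    """Keep only triangles that could produce pixels."""
    return [t for t in triangles
            if signed_area(t) > eps]  # drop degenerate and back-facing ones

tris = [
    [(0, 0), (4, 0), (0, 3)],   # front-facing, kept
    [(0, 0), (2, 2), (4, 4)],   # zero-area (collinear), discarded
    [(0, 0), (0, 3), (4, 0)],   # back-facing (CW winding), discarded
]
print(len(discard_pass(tris)))  # 1
```

The win comes from skipping per-pixel work entirely for primitives that would have been rejected anyway, which is why it helps most on geometry-heavy scenes.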
 
You do realize that what you said makes no sense (or is a pretty random guess) if they managed to make a GPU with GTX 950 performance at 50W less, right? It would be a huge drop-off to go from that in the low-end tier to just being on par with Maxwell in the mid-range tier. In fact, that would actually make it only slightly better than the Nano in terms of performance/watt. I think what's going on here is that you're applying the 2x perf/watt increase to Hawaii when you should be applying it to Tonga or Fiji.

AMD's slides from Financial Analyst Day project 2x performance/watt over current 28nm products. It's also confirmed now that the 14nm products are more or less just shrunk 28nm products.
 
A lot of what was neutered or castrated to make Maxwell efficient is going back in, because Nvidia is looking to make Teslas and Quadros out of Pascal.
Keep that in mind as you speculate on its comparative efficiency.
 