The future of AMD in graphics


tamz_msc

Diamond Member
Jan 5, 2017
The RX 480 at release had almost the same performance as the R9 290X/390X at almost half the power.
As far as power efficiency was concerned, there was a huge gulf between marketing and reality. Then there's the whole PCIe power debacle, which didn't help the image of Polaris at all shortly after launch.
 

Guru

Senior member
May 5, 2017
At least some of the 580s improving Performance vs the 1060 is likely to be due to AMD's influence on the Software side of things. Partially from DX12, but also from controlling the Consoles which practically all Game Developers have become accustomed to.
Yeah, but you can't attribute a 15-20% performance improvement in Wolfenstein II and Doom to software influence; clearly Polaris could process Vulkan much better than Pascal. Heck, the RX 580 basically competes with the 1070 in Vulkan titles. Vega 64 actually matches the 1080 Ti.

Nvidia did improve their architecture with Turing; it's basically much closer to AMD's arch now, which shows how far ahead AMD was in low-level API processing. It took Nvidia three years to match AMD in DX12 and Vulkan.

If all games were magically converted to great DX12 or Vulkan engine designs, the RX 580 would literally be competing with the 1070 in most games, and Vega 64 would be on par with the 1080 Ti. Radeon VII would actually trade blows with the RTX 2080 and probably even beat it in several games, rather than being about 7% slower on average.

The issue is that MS didn't bring DX12 to Windows 7, so devs continued developing DX11 or dual-mode games. It was done out of greed: they knew no one would migrate to Win10 if DX12 wasn't exclusive to it. But this has meant very slow adoption of DX12, with half of gaming PCs still on Win7.

AMD miscalculated DX12 and Vulkan market penetration; it's been much slower than anticipated, and that has been their biggest mistake. They also developed Vega as a dual-purpose GPU rather than building two iterations of it, one for the professional market and one for gamers, though they probably didn't have enough devs and money to develop two versions.

Right now, since they have Vega, I think they are going to keep iterating it into a compute powerhouse for the professional market, with the new Navi architecture as the gaming line.
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
Yeah, but you can't attribute a 15-20% performance improvement in Wolfenstein II and Doom to software influence; clearly Polaris could process Vulkan much better than Pascal. Heck, the RX 580 basically competes with the 1070 in Vulkan titles. Vega 64 actually matches the 1080 Ti.

Wolf 2 uses FP16, which Polaris and later, as well as Turing, support at 2x the FP32 rate. Pascal doesn't.

The problem with AMD in DX11 was more that their GPU driver was essentially single-threaded, so any CPU-side efficiency gains from DX12 translated into more time for the GPU to process. DX11 games typically have a decent amount of threading these days, making it less of an issue.
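A toy arithmetic model of that bottleneck (my own sketch; the draw-call counts and per-call costs are made-up illustrative numbers, not measured driver figures): if submission is stuck on one thread, the CPU side can become the slowest stage and cap frame rate, while spreading submission across threads hands the limit back to the GPU.

```python
# Toy model: frame time is limited by whichever is slower, the CPU-side
# draw-call submission (the driver thread) or the GPU's own render time.
def frame_time_ms(n_draws, cost_per_draw_ms, gpu_time_ms, submit_threads=1):
    cpu_time = n_draws * cost_per_draw_ms / submit_threads  # CPU submission cost
    return max(cpu_time, gpu_time_ms)                       # slowest stage wins

# 5000 draw calls at 0.004 ms of driver work each; GPU needs 10 ms/frame.
single = frame_time_ms(5000, 0.004, 10.0, submit_threads=1)  # ~20 ms: CPU-bound
multi = frame_time_ms(5000, 0.004, 10.0, submit_threads=4)   # 10 ms: GPU-bound
print(single, multi)
```

With one submission thread the frame takes roughly twice as long even though the GPU itself never got slower, which is the effect the post describes DX12 (or better-threaded DX11 engines) removing.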
 

tamz_msc

Diamond Member
Jan 5, 2017
Turing is Fermi5 and SKL is P6 #whatever.
Now what?
Really?

Kepler got rid of the shader clock, doubled the number of functional units per SM, doubled the size of each functional unit in each SM, moved from issuing two 16-wide warps in succession to one 32-wide warp per clock, and reverted to static scheduling.

Maxwell reduced the number of functional units per SM from 192 to 128 and partitioned each SM so that resources aren't shared between the warp schedulers, thereby increasing utilization within each block.

Pascal was a die-shrunk Maxwell, so not much change here.

Enough has been already said about Turing; no need for me to add anything more.

Meanwhile the only significant change in GCN was Tahiti -> Hawaii. Everything since then has been minor increments without any changes to the fundamental layout of SPs/TMUs/ROPs.
 


tamz_msc

Diamond Member
Jan 5, 2017
Yes, Fermi was the last time nV did a fundamental change by introducing their current FF setup.

What is Tonga and Vega?

What is Vega?
Tonga had changes to tessellation and color compression. Nothing fundamental. Fiji was Tonga x2. Polaris was Tonga with further changes to tessellation and color compression. Vega is Fiji + Polaris + higher clocks with support for new data types.

Nothing fundamental like Fermi->Kepler or Kepler->Maxwell or Maxwell->Turing.
 

Yotsugi

Golden Member
Oct 16, 2017
Tonga had changes to tessellation and color compression
That part had major ISA updates and scheduling changes, but whatever.
Vega is Fiji + Polaris + higher clocks with support for new data types.
Something something from tiling to ROP setup (to support said tiling).
Nothing fundamental like Fermi->Kepler or Kepler->Maxwell or Maxwell->Turing.
Nothing fundamental happened since Fermi either, unless you consider the introduction of MORE accelerator blocks fundamental, in which case Volta/Turing are fundamental.
 

tamz_msc

Diamond Member
Jan 5, 2017
while raising performance ca 25-30%.
Power efficiency =/= board power.
OK, so using TPU's reviews with the metric of performance at 4K per peak power, we get a 26% better perf/watt going from Vega 64 to Radeon VII.

Doing the same with GTX 980 -> GTX 1080 we get 70% better perf/watt.

Clearly NVIDIA does a far better job increasing efficiency with a node shrink than AMD.

Heck looking at the 1660 Ti reviews, NVIDIA manages a larger increase in power efficiency over the 1060 on a node tweak than AMD manages on a full node shrink.
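The perf/watt comparisons in the posts above boil down to a simple ratio; here is a quick sketch of how such a figure is computed. The fps and wattage numbers below are hypothetical placeholders chosen to land on the 26% figure, not TPU's actual measurements; only the method mirrors the comparison being argued about.

```python
# Perf/watt gain = (new_fps / new_watts) / (old_fps / old_watts) - 1.
# All inputs below are illustrative, not real review data.
def perf_per_watt_gain(old_fps, old_w, new_fps, new_w):
    old_ppw = old_fps / old_w
    new_ppw = new_fps / new_w
    return new_ppw / old_ppw - 1.0

# e.g. a card 26% faster at the same board power nets exactly 26% perf/watt:
gain = perf_per_watt_gain(100, 250, 126, 250)
print(f"{gain:.0%}")  # prints 26%
```

The same function applied to the GTX 980 -> GTX 1080 numbers would yield the 70% figure quoted above; the point of the metric is that it normalizes the speed gain by the power draw, so a faster but hungrier card can still score poorly.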
 

Yotsugi

Golden Member
Oct 16, 2017
Doing the same with GTX 980 -> GTX 1080 we get 70% better perf/watt.
Wow almost as if they went from planar to FinFETs.
Polaris also posted some decent perf/w gains.
Heck looking at the 1660 Ti reviews, NVIDIA manages a larger increase in power efficiency over the 1060 on a node tweak than AMD manages on a full node shrink.
At the slight (veeeery slight) expense of that die area.
G-good thing we have 10 shrinks ahead, r-right?
 

tamz_msc

Diamond Member
Jan 5, 2017
That part where major ISA updates and scheduling changes but whatever.
Oh, so now ISA changes disconnected from architectural changes count as a fundamental change?
Something something from tiling to ROP setup (to support said tiling).
Yeah tiling - that's why Fiji and Vega perform exactly the same clock for clock.
Nothing fundamental happened since Fermi either, unless you consider the introduction of MORE accelerator blocks fundamental, then Volta/Turing are fundamental.
So according to you, it only counts as fundamental if it's the addition of disparate accelerator blocks. Yeah, whatever.
 

Yotsugi

Golden Member
Oct 16, 2017
Oh so now ISA changes disconnected from architectural changes counts as a fundamental change?
Did I say fundamental? Uh-oh.
Yeah tiling - that's why Fiji and Vega perform exactly the same clock for clock.
It saves b/w and gives perf when b/w bound.
It works!
So according to you if it's addition of disparate accelerator blocks only then it counts as fundamental.
I mean, yeah.
GPUs have been inherently incremental ever since we went to unified shaders.
14 years.
 

tamz_msc

Diamond Member
Jan 5, 2017
Wow almost as if they went from planar to FinFETs.
Polaris also posted some decent perf/w gains.
Do the same exercise for Fiji -> Vega: NVIDIA managed a 4x larger increase in perf/watt from the FinFET transition than AMD got.
At the slight (veeeery slight) expense of that die area.
G-good thing we have 10 shrinks ahead, r-right?
Perf/mm^2 is a useless metric from the point of view of the end-user.
 

tamz_msc

Diamond Member
Jan 5, 2017
Good for them.
How is that relevant to AMD "having no changes in their uArch"?
Clearly, if your competitor manages to do something much better than you can with the same technology, you're doing something wrong.
Ah yes I'm totally ready to pay for 815mm^2 dGPUs.
Joke's on you. Price per mm^2 is still much lower for the 2080Ti than Radeon VII.
 

maddie

Diamond Member
Jul 18, 2010
This is going absolutely nowhere, can someone close the thread down?

Then it's not actively b/w bound, yay.

No way.
:^)
I absolutely don't believe in censorship for adults. It took me quite a while to really ignore posts while still reading them. Try it, you won't believe how funny the world becomes.
 

Yotsugi

Golden Member
Oct 16, 2017
I absolutely don't believe in censorship for adults. It took me quite a while to really ignore posts while still reading them. Try it, you won't believe how funny the world becomes.
This whole thread is pointless.
"Uuugh what will AMD do next?"
I mean, maybe release Navi?
And whatever comes after.
And another thing.
Until we're out of shrinks and GPUs turn into commodities.