The future of AMD in graphics

exquisitechar

Senior member
Apr 18, 2017
727
1,033
136
Why? Intel likes high margins. AMD can always retreat to lower margins that Intel shareholders would find unacceptable. Plus, for the time being, Intel has nothing to show us but Gen11 iGPUs that would be underpowered compared to Navi. Xe isn't going to be ready until 2020 at the earliest, and we don't know exactly when in 2020. Navi should be here in dGPU form by Q4 2019, and console variants will probably be ready for testing earlier than that.

AMD has this generation of consoles in the bag; I was talking more about a future where Intel will be able to offer a more competitive solution to Sony and MS. Mind you, I'm not sure how likely it is, it just seems more likely than Nvidia doing it. Intel could offer a strong CPU solution as well, after all, and they haven't burned any bridges with console manufacturers, unlike NV. Also, their IoT and modem business isn't really high-margin either.

Is Navi really going to compete against Turing? Maybe the 1660Ti? I don't see it as a competitor to the 2080Ti.

I think that, initially, it won't target the high end. It is rumored that high-end Navi will be released in 2020.

I don't think that's true. We'll see Gen11 graphics in laptops this June. AMD may not have Navi in any of their PC APUs until 2020. It sure won't be in Picasso.

Yes, this is true. The staggered release schedule of Ryzen APUs gives Intel an opportunity on the iGPU front.
 

NTMBK

Lifer
Nov 14, 2011
10,520
6,037
136
AMD has this generation of consoles in the bag; I was talking more about a future where Intel will be able to offer a more competitive solution to Sony and MS. Mind you, I'm not sure how likely it is, it just seems more likely than Nvidia doing it. Intel could offer a strong CPU solution as well, after all, and they haven't burned any bridges with console manufacturers, unlike NV. Also, their IoT and modem business isn't really high-margin either.

I think that, initially, it won't target the high end. It is rumored that high-end Navi will be released in 2020.

Yes, this is true. The staggered release schedule of Ryzen APUs gives Intel an opportunity on the iGPU front.

Given Intel's recent track record, I certainly would not stake my company on their ability to stick to their roadmap. I think they need to do a lot of work rebuilding credibility before they can win any console contracts.
 
  • Like
Reactions: DarthKyrie

insertcarehere

Senior member
Jan 17, 2013
712
701
136
AMD are going to have a process node advantage for at least 6 months though, so I'm sure they are going to use it. They are also not rushing Navi; they are taking their time with it, working with Vega to release new products in the professional market and now, with Radeon VII, in the gaming market.

I think we might see another Vega 20 offshoot, either with its full shader count or cut down even further in order to compete at lower price points.

A process node advantage does AMD no good if they can't execute architecture-wise. Vega on 7nm is still clearly behind Turing on 12nm (and Pascal on 16nm) in terms of efficiency. If Navi doesn't improve on that big-time, the process advantage will merely paper over the architectural disadvantage. Whatever 7nm Nvidia product comes out at year-end 2019/early 2020 will be on a more mature process, with better yields and voltages, allowing more leeway in die sizes and clocks.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
AMD's GPGPU sales have been increasing since they launched the MI25. People seem to forget this fact. Vega 20 has only helped AMD in that department. But again, we're veering away from the main topic.

I don't believe I said that only Vega 20 increased compute revenue.

Is Navi really going to compete against Turing? Maybe the 1660Ti? I don't see it as a competitor to the 2080Ti.

I really don't have an idea, but from what we know, gamers are migrating to the middle and upper classes of dGPUs, so I wouldn't be surprised if AMD aims for the $300+ market next.

I don't think that's true. We'll see Gen11 graphics in laptops this June. AMD may not have Navi in any of their PC APUs until 2020. It sure won't be in Picasso.

I don't believe Gen11 will be competitive against Ryzen 3xxx APUs in 2019 (in the same segments).
 

DrMrLordX

Lifer
Apr 27, 2000
23,183
13,270
136
I don't believe I said that only Vega 20 increased compute revenue.

No, I was adding to your point more than detracting from it. When people assumed the Vega 64's scarcity was due to miners buying them all up, the reality might have been quite different: RTG committed a lot of dies to MI25 accelerators. It made a difference, and it certainly guided their decision-making process when it came to Vega 20.

I really don't have an idea, but from what we know, gamers are migrating to the middle and upper classes of dGPUs, so I wouldn't be surprised if AMD aims for the $300+ market next.

Honestly, I would be surprised if that happened. Polaris barely touches that price point these days, and in most incarnations it falls far short. If Navi is a replacement for Polaris, it won't show up with many SKUs north of $300.

I don't believe Gen11 will be competitive against Ryzen 3xxx APUs in 2019 (in the same segments).

We will see. It has a lot to do with memory configurations. I don't think Gen11 or Vega iGPUs will fare terribly well with slow single-channel memory. It certainly does not seem that AMD has done much to update their iGPUs from Raven Ridge, though. That makes it easier for Intel to gain ground.
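For a sense of the gap, here's a rough sketch of the peak-bandwidth math (the DDR4 speeds below are illustrative picks, not any specific laptop configuration):

# Peak theoretical bandwidth: channels * 8-byte (64-bit) bus * transfer rate.
def ddr4_bandwidth_gbps(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Peak DDR4 bandwidth in GB/s for a given channel count and speed."""
    return channels * bus_bytes * mt_per_s / 1000

print(ddr4_bandwidth_gbps(1, 2400))  # single-channel DDR4-2400: 19.2 GB/s
print(ddr4_bandwidth_gbps(2, 2400))  # dual-channel DDR4-2400:   38.4 GB/s

Either iGPU has to share that with the CPU, so the single-channel case roughly halves what the graphics side can pull.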
 

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
AMD needs a new architecture, and they needed it when Maxwell launched 5 years ago. Not even a better 7nm process can save them, because Radeon VII didn't even beat the 2080 Ti (and it should have, since it's on a new node; think 7970 vs. GTX 580. Instead it's as if the 7970 only competed with the GTX 560, a second-tier GPU. That's how far behind AMD is today). If they keep using GCN, they will never catch up with NV again.
 
  • Like
Reactions: Muhammed

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
AMD needs a new architecture, and they needed it when Maxwell launched 5 years ago. Not even a better 7nm process can save them, because Radeon VII didn't even beat the 2080 Ti (and it should have, since it's on a new node; think 7970 vs. GTX 580. Instead it's as if the 7970 only competed with the GTX 560, a second-tier GPU. That's how far behind AMD is today). If they keep using GCN, they will never catch up with NV again.

Why does AMD need a whole new architecture? nVidia has been making iterative updates to the same architecture for years. AMD never set out to beat a 2080Ti, and the process a chip is built on has no direct impact on performance. There are phone SoCs on 7nm, and that doesn't suddenly make them the fastest chips out there. Going from 16nm down to 7nm just allowed AMD to raise the clocks.

So, what part of GCN is the issue? AMD themselves have said (well, Raja stated it when he was still with them) that available funds were the key reason AMD didn't have a Titan fighter. And since GPUs take years and years to design, Navi will be the first GPU released for which AMD had some money to put into it.
 
  • Like
Reactions: Gikaseixas

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
Why does AMD need a whole new architecture? nVidia has been making iterative updates to the same architecture for years. AMD never set out to beat a 2080Ti, and the process a chip is built on has no direct impact on performance. There are phone SoCs on 7nm, and that doesn't suddenly make them the fastest chips out there. Going from 16nm down to 7nm just allowed AMD to raise the clocks.

So, what part of GCN is the issue? AMD themselves have said (well, Raja stated it when he was still with them) that available funds were the key reason AMD didn't have a Titan fighter. And since GPUs take years and years to design, Navi will be the first GPU released for which AMD had some money to put into it.
The whole of GCN is bad (for gaming). It was designed to fight Fermi. It is compute-heavy and eats a lot of power. Also, it doesn't scale well: AMD has been stuck at 4096 shaders/256 TMUs/64 ROPs with 4 shader engines since the Fury X in 2015.
The only saving grace for GCN is that it is in consoles.

Imagine NV using Fermi today in 2019....
 

SPBHM

Diamond Member
Sep 12, 2012
5,076
440
126
The whole of GCN is bad (for gaming). It was designed to fight Fermi. It is compute-heavy and eats a lot of power. Also, it doesn't scale well: AMD has been stuck at 4096 shaders/256 TMUs/64 ROPs with 4 shader engines since the Fury X in 2015.
The only saving grace for GCN is that it is in consoles.

Imagine NV using Fermi today in 2019....

Fermi!?

GCN 1.0 - 1.2 was a perfectly adequate competitor to Kepler for the most part
https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780_Ti/28.html

The problem was that they didn't really have an answer to Maxwell the following year.
 

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
Fermi!?

GCN 1.0 - 1.2 was a perfectly adequate competitor to Kepler for the most part
https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780_Ti/28.html

The problem was that they didn't really have an answer to Maxwell the following year.
It takes a few years to develop a new architecture, and GCN was there to fight compute-heavy/Fermi-style architectures, not Kepler/Maxwell/Pascal. TeraScale was better for gaming than anything NV had during the GTX 280-580 era. AMD crushed NV in perf/watt and perf/die size during the HD 4000-6000 series; NV barely kept up, and only by using 500+mm2 dies against AMD's 200-300mm2 dies. But TeraScale was bad for compute, and AMD wanted to earn money from datacenters, so with GCN they pretty much switched from a gaming-oriented to a compute-oriented architecture. NV, on the other hand, switched from compute-focused to gaming-focused with Kepler/Maxwell, and then created separate SKUs for compute and gaming with Pascal.
The 7970 was pretty bad after TeraScale: its die was bigger than the GTX 680's, it was not faster, and it had worse perf/watt and perf/die size, but it was good at compute. So AMD and NV swapped places, 180°. Now NV has the better architecture for gaming, and with Maxwell they continued down that path.
AMD, on the other hand, was stuck with the compute-focused GCN and lost to NV in perf/watt and perf/die size more and more every generation. They needed to switch from GCN to a more gaming-oriented architecture after the 290X, but they didn't have the money, because Rory Read almost destroyed AMD's GPU division, so they are still using the old compute-focused GCN in 2019!
 
Last edited:
  • Like
Reactions: MangoX

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
136
Imagine NV using Fermi today in 2019....
Given that Fermi was their last fundamental change, they effectively do.
But TeraScale was bad for compute, and AMD wanted to earn money from datacenters, so with GCN
VLIW5 was bad for everything, so it got replaced by VLIW4, which was also bad for everything, so it was replaced by something properly SIMT: GCN.
on the other hand, was stuck with the compute-focused GCN
What's compute-focused about it?
 
  • Like
Reactions: DarthKyrie

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
Yeah, VLIW/TeraScale was so bad in games that the 5870, with a 334mm2 die, was only 15% slower than the GTX 480 with its 530mm2 die. A real disaster for AMD :rolleyes:
Same with the 4870 vs. the GTX 280 (256mm2 vs. 576mm2).
TeraScale was very good at gaming; it's the opposite of what GCN is. They had a big lead, but they lost it all with the introduction of GCN, and now they are far behind.

Vega 64's 495mm2 die vs. the GTX 1080's 314mm2 die.
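To put those die-size claims in perf-per-area terms, a rough sketch (the 15% gap is the figure cited above; the 4870's ~10% gap is an assumption purely for illustration):

# (relative_performance, die_area_mm2) pairs: AMD card first, NV card second.
pairs = {
    "HD 5870 vs GTX 480": ((0.85, 334.0), (1.00, 530.0)),  # "only 15% slower"
    "HD 4870 vs GTX 280": ((0.90, 256.0), (1.00, 576.0)),  # ~10% gap assumed
}
for name, ((perf_amd, area_amd), (perf_nv, area_nv)) in pairs.items():
    ratio = (perf_amd / area_amd) / (perf_nv / area_nv)
    print(f"{name}: AMD at {ratio:.2f}x the performance per mm^2")
# -> roughly 1.35x and 2.03x in AMD's favor under these assumptions.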
 
Last edited:

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
136
VLIW/TeraScale was so bad in games that the 5870, with a 334mm2 die, was only 15% slower than the GTX 480 with its 530mm2 die
GF100 was an FP64 die.
VLIW5 utilization was still beyond atrocious.
Same with the 4870 vs. the GTX 280 (256mm2 vs. 576mm2).
Merely one node ahead!
TeraScale was very good at gaming; it's the opposite of what GCN is
Hence why GCN easily reaches way better SIMD utilization than anything VLIW.
It's magic!
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Yeah, VLIW/TeraScale was so bad in games that the 5870, with a 334mm2 die, was only 15% slower than the GTX 480 with its 530mm2 die. A real disaster for AMD :rolleyes:
Same with the 4870 vs. the GTX 280 (256mm2 vs. 576mm2).

Die size means nothing when the two chips are on different processes.
 
  • Like
Reactions: DarthKyrie

NostaSeronx

Diamond Member
Sep 18, 2011
3,815
1,294
136
We are supposed to see a chiplet Graphics Core Next soon-ish. This is me guessing based on the research and patents I have seen.

Common chiplet: Command Processor, L3, Render Back End/HBCCv2, HBM2; Uncore: VCN, XDMA/DMA, Display, Bus, etc.
GPU chiplet: Shader Engines, Command Queue, CUs, and L2 cache
HBM chiplet...


Navi was a 40 CU design.
This design is a 48 CU design.

After that, there is supposed to be a big one going up to 128 CUs: 4 x 32 CUs.
Two GPU chiplets on top, two GPU chiplets on bottom, with two HBM3 to the left and two HBM3 to the right.

48 CU, 2x16(+2x24GB) GB HBM2e -> 2020 <== Paired with Vermeer?
128 CU, 4x64 GB HBM3 -> 2021 <== Paired with Genoa?

They might be based on Super-SIMD, which started in 2017(?) at AMD. 48 Super-CUs = 96 standard CUs (ALU count) and 128 standard CUs (IPC amount); 128 Super-CUs = 256 standard CUs (ALU count) and 384 standard CUs (IPC amount).
 
Last edited:
  • Like
Reactions: DarthKyrie

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
AMD needs a new architecture, and they needed it when Maxwell launched 5 years ago. Not even a better 7nm process can save them, because Radeon VII didn't even beat the 2080 Ti (and it should have, since it's on a new node; think 7970 vs. GTX 580. Instead it's as if the 7970 only competed with the GTX 560, a second-tier GPU. That's how far behind AMD is today). If they keep using GCN, they will never catch up with NV again.

Why should it? It's half the size.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
AMD needs a new architecture, and they needed it when Maxwell launched 5 years ago. Not even a better 7nm process can save them, because Radeon VII didn't even beat the 2080 Ti (and it should have, since it's on a new node; think 7970 vs. GTX 580. Instead it's as if the 7970 only competed with the GTX 560, a second-tier GPU. That's how far behind AMD is today). If they keep using GCN, they will never catch up with NV again.


Radeon VII = 13.23B transistors at 7nm
RTX 2080 Ti = 18.6B transistors at 12nm

What you're saying is the same as saying that the GTX 1060, with 4.4B transistors at 16nm, cannot even beat the GTX 980 Ti with its 8B transistors at 28nm.
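Putting the transistor budgets side by side (a quick sketch using only the counts quoted above):

# Transistor-budget ratios behind the two comparisons.
chips = {
    "RTX 2080 Ti vs Radeon VII": (18.6e9, 13.23e9),
    "GTX 980 Ti vs GTX 1060":    (8.0e9, 4.4e9),
}
for name, (bigger, smaller) in chips.items():
    print(f"{name}: {bigger / smaller:.2f}x the transistors")
# -> 1.41x and 1.82x: the bigger chip carries a large transistor
#    advantage in both cases, independent of the node.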

Vega 20 was developed with as few resources as possible, by simply updating the old Vega 10 and porting it to 7nm to gain performance and efficiency. This chip is very efficient for machine learning and datacenter workloads, which were its main targets from the start.
I believe Navi will continue to be very compute-heavy but also better in power efficiency. If I were AMD, I would first introduce a Navi graphics card in the $300 to $500 segment and replace the older Vega 10 parts (Vega 56/64) in the gaming market, because this is where gamers have been migrating over the last 2-3 years, and because this way they can keep Radeon VII as a $700 product until they release a bigger Navi (or next-gen) chip later on.

So for 2019 I'm expecting something like the following:

RX 570 = $150
RX 580 = $180
RX 590 = $220
Navi = $300/330
Navi = $400/500
Radeon VII = $700
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
What makes you think that Navi will increase power efficiency when Vega 20 didn't?

Navi is a new design, unlike Vega 20 vs. Vega 10. They had more time and resources to spend on Navi than on Vega 20, and the design should be more efficient compared to Vega 20, which is just Vega 10 + minor upgrades + 7nm.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
Navi is a new design, unlike Vega 20 vs. Vega 10. They had more time and resources to spend on Navi than on Vega 20, and the design should be more efficient compared to Vega 20, which is just Vega 10 + minor upgrades + 7nm.
Navi is just GCN 6. I don't remember any iteration of GCN ever targeting power efficiency and achieving it in the real world, unlike Kepler, Maxwell, and Pascal. All we got from AMD regarding power efficiency is a 2.8x perf/watt meme from Raja.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Navi is just GCN 6. I don't remember any iteration of GCN ever targeting power efficiency and achieving it in the real world, unlike Kepler, Maxwell, and Pascal. All we got from AMD regarding power efficiency is a 2.8x perf/watt meme from Raja.

Well, actually, Polaris 10 (RX 480) had a huge perf/watt increase over Hawaii (R9 390X). We could see the same with Navi vs. Vega 10.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
Well, actually, Polaris 10 (RX 480) had a huge perf/watt increase over Hawaii (R9 390X). We could see the same with Navi vs. Vega 10.
Polaris power efficiency was a meme. The 2.8x performance/watt figure became so embarrassing for AMD that they removed that reference from most of their promotional materials.

EDIT: Wow, it looks like after all this time I've finally found exactly what that figure stands for.
Footnote 4 said:
Testing conducted by AMD Performance Labs as of May 10, 2016 on 3DMark 11 and 3DMark Firestrike using a test system comprising of an i7-4600M, 8GB, AMD Radeon driver 16.20. AMD Radeon R9 M280X (14CUs) scored 5700 and 3500 with a board power of 82W. AMD Radeon RX 480M (16CUs) scored 7200 and 4070 with a board power of 35W. Using Performance/Board power, the resulting average across the 2 different titles was a perf per watt of 2.8X vs the Radeon R9 M280X.
The funny thing is that no RX 480M exists in the TPU GPU database, and Wikipedia lists it as "unknown". So AMD's claims about the power efficiency of Polaris were made using a part that is essentially vaporware. GG AMD.
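For what it's worth, the footnote's own numbers do reproduce the headline figure (a quick check, using exactly the scores and board powers it quotes):

# Scores as quoted: 3DMark 11 and Firestrike, plus board power per card.
m280x  = {"3DMark 11": 5700, "Firestrike": 3500}  # 82 W board power
rx480m = {"3DMark 11": 7200, "Firestrike": 4070}  # 35 W board power

# Perf/watt ratio per benchmark, then averaged across the two titles.
ratios = [(rx480m[t] / 35) / (m280x[t] / 82) for t in m280x]
print([round(r, 2) for r in ratios])        # [2.96, 2.72]
print(round(sum(ratios) / len(ratios), 2))  # 2.84 -> marketed as "2.8x"

So the arithmetic checks out; the problem is the cherry-picked parts it compares, not the math.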
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Polaris power efficiency was a meme.

Really??

The RX 480 at release had almost the same performance as the R9 290X/390X at almost half the power.
Not only that, but Polaris 10 has fewer transistors and also much higher compute capabilities, such as FP16, than Hawaii.

I will agree that the 2.8x efficiency figure was PR, but Polaris 10's perf/watt was extremely good against Hawaii.
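A rough sketch of what that implies, assuming roughly equal performance (as stated above) and the official board-power figures (RX 480: 150 W, R9 390X: 275 W):

# Implied perf/watt gain: perf ratio times the inverse power ratio.
perf_ratio = 1.0                 # "almost the same performance"
power_480, power_390x = 150, 275  # official typical board power, in watts
print(round(perf_ratio * power_390x / power_480, 2))  # ~1.83x perf/watt

Nowhere near 2.8x, but still a big generational jump for GCN.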
 

Guru

Senior member
May 5, 2017
830
361
106
AMD need new architecture and they needed it when maxwell launched 5 years ago.Not even better 7nm proces can save them because radeon 7 didnt even beat 2080TI(and it should because it is on new node.Something like 7970 vs GTX580.Its like 7970 will only compete with GTX560-second tier GPU.Thats how AMD is behind today)If they will keep using gcn they will never catch up nv again.
They are actually ahead, and Turing is more like AMD's architecture than the other way around.

If you compare the RX 580 and the GTX 1060 6GB, you'd see that the RX 580 wins in almost all new games. The RX 580 is about 15% faster than the 1060 in Wolfenstein 2 and the same in Doom, and it generally beats the 1060 6GB in pretty much all DX12 games, as well as in the newer DX11 games.

The Turing architecture has been iterated on and designed to be more like AMD's arch, as that is more efficient and better at processing low-level APIs. AMD's miscalculation was the speed of adoption of DX12 and Vulkan: they thought that a lot more games would be DX12 or Vulkan, and that just hasn't been the case. As you can see, Turing processes low-level APIs much better now, and a card like the RTX 2060 beats even the 1080 in games like Wolfenstein and some better-designed DX12 games.

There is a huge case to be made for Vulkan: both Doom and Wolfenstein 2 run extremely well, much better than most other games, yet devs are still using DX11 and older engines.

Also, Nvidia's Turing architecture is basically a slow reiteration of their 10+ year old GTX 200 architecture, which was their first unified shader architecture. Just because AMD has kept the same GCN codename while Nvidia changes codenames doesn't mean shit.

Vega failed as a gaming card because AMD wanted it to serve a dual purpose, to be a compute monster AND a gaming card, so it was big and packed with compute power, but that didn't make any difference in games. Truth be told, DX12/Vulkan games are heading towards being more compute-centric and more unified, and even things like ray tracing are designed as such, but it's a slow process, mostly due to game engines and GPU devs' unwillingness to go all out, as that would mean worse performance in DX11 titles.

The thing is, it took a year for Wolfenstein 2 to pack in some of the technologies used in Vega and utilize them. Barely any other game takes advantage of the full assortment of features Vega offers, and if they did, Vega would perform at least 10% better across all DX12/Vulkan games.
 
Last edited:

sandorski

No Lifer
Oct 10, 1999
70,850
6,387
126
They are actually ahead, and Turing is more like AMD's architecture than the other way around.

If you compare the RX 580 and the GTX 1060 6GB, you'd see that the RX 580 wins in almost all new games. The RX 580 is about 15% faster than the 1060 in Wolfenstein 2 and the same in Doom, and it generally beats the 1060 6GB in pretty much all DX12 games, as well as in the newer DX11 games.

The Turing architecture has been iterated on and designed to be more like AMD's arch, as that is more efficient and better at processing low-level APIs. AMD's miscalculation was the speed of adoption of DX12 and Vulkan: they thought that a lot more games would be DX12 or Vulkan, and that just hasn't been the case. As you can see, Turing processes low-level APIs much better now, and a card like the RTX 2060 beats even the 1080 in games like Wolfenstein and some better-designed DX12 games.

There is a huge case to be made for Vulkan: both Doom and Wolfenstein 2 run extremely well, much better than most other games, yet devs are still using DX11 and older engines.

Also, Nvidia's Turing architecture is basically a slow reiteration of their 10+ year old GTX 200 architecture, which was their first unified shader architecture. Just because AMD has kept the same GCN codename while Nvidia changes codenames doesn't mean shit.

At least some of the 580's improving performance vs. the 1060 is likely due to AMD's influence on the software side of things: partially from DX12, but also from controlling the consoles, which practically all game developers have become accustomed to.
 
  • Like
Reactions: DarthKyrie