The future of AMD in graphics


NTMBK

Lifer
Nov 14, 2011
10,448
5,831
136
I never foresaw Larrabee coming to fruition in full capacity anyway, but I do foresee Intel trying to lock in their customers as much as possible ... :wink:

Anyone without an x86 license can go pout elsewhere and design their own x86 cores instead of leeching off of either AMD or Intel ... :innocent:

Intel are not pursuing dedicated graphics just to lose it. They will try to pull out all the stops they can, such as taking the performance crown by introducing a big die and excluding the RT and tensor cores to win in benchmarks. If Apple are going to Apple, then Intel are probably going to Intel as well, since neither of them is compelled to make their platforms compatible with 3rd party devices ...

Here's hoping that Intel manages to ship anything smaller than 7nm, if they're planning on winning through sheer transistor count...
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
Raja has a different opinion on that.
I am sure that Raja and Intel want to win the gaming market as well. Nvidia, Intel, and AMD even with Zen are proving that designs made for server first can still be competitive and win in the consumer market in different ways, and it should be easier to compete since all will be making the same sacrifices. But until proven otherwise, I refuse to believe that Intel is going to chuck out everything compute just to be the third player in the discrete video card business. Are they going to do Tensor cores for RT or something like that? I don't know. But I believe Intel is looking for a long-term solution in the markets where Nvidia, and now AMD with the MI series, are really hitting their stride. Those are the same markets they were targeting with Larrabee and its ilk.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Intel is not getting into the discrete graphics business to win gaming benchmarks. Intel wants to own the whole server infrastructure, up and down. The compute graphics business has quickly outpaced everything they have been trying to do with many-small-core compute since Larrabee. They don't want to be pushed out. That's why they are doing it.

It's too bad that they don't have any competent hardware or a compute stack to deploy yet as server solutions. Intel needs to win gaming benchmarks to be able to court the professionals among the general public to invest in their platform, and I mean a platform for development. Who's going to use Intel graphics for deployment when the professionals don't even want to develop on Intel graphics? There are more developers working on GPU compute acceleration for AMD than there are for Intel, which still doesn't offer a single-source C++ compute API ...
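
To make the "single-source" point concrete, here's a rough sketch of what that looks like on the CUDA side: the device kernel and the host code that launches it sit in the same C++ file and get compiled together by nvcc. Just an illustrative saxpy, not taken from any real project:

```cpp
// saxpy.cu -- host and device code in one C++ translation unit (single source)
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Device kernel, written inline next to the host code that launches it
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch is plain C++ syntax plus the <<<grid, block>>> extension
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("hy[0] = %f\n", hy[0]);  // expect 4.0

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

With OpenCL you don't get that: the kernel lives in a separate OpenCL C source string that gets compiled at run time through the driver, and the host side is a pile of boilerplate. That's the gap Intel still hasn't closed ...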

CUDA wasn't even originally designed for servers, since its original release didn't feature multi-node support either. It was initially intended for DCC applications from Adobe or distributed computing like Folding@home. ROCm had a pretty similar humble beginning as well, since its predecessor, HSA, only targeted GPU compute acceleration for everyday productivity applications and wasn't meant for machine learning either ...

It's more important to have a development platform than a deployment platform at the beginning. People deploy x86 devices because they build for x86 as well, and a similar principle applies to CUDA devices. Intel's integrated GPUs won't do them any good, because nobody is interested in porting their CUDA-accelerated applications and developing them on integrated graphics. Intel must fix this by offering some compelling gaming benchmarks so the professionals can decide whether it's worth starting to add support for Intel GPU compute acceleration ...

Raja has a different opinion on that.

Exactly this ...

Here's hoping that Intel manages to ship anything smaller than 7nm, if they're planning on winning through sheer transistor count...

Here's to also hoping that the entire semiconductor foundry market won't slow down either, because just about every customer aside from AMD is ordering less production, and that especially applies to Apple. I imagine winning through sheer transistor count is their only path to salvation. Intel has had enough of 3rd party discrete graphics vendors supplying them for years, so it's time for them to take destiny into their own hands. Just as Apple violently challenged CUDA, so too will Intel do the very same. It's every chip designer for themselves from now on, in a world where it's a free-for-all ...
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Here's hoping that Intel manages to ship anything smaller than 7nm, if they're planning on winning through sheer transistor count...
They are entering the market in 2020 - so 14nm? Intel 7nm isn’t due till 2021.
 

DrMrLordX

Lifer
Apr 27, 2000
22,932
13,014
136
They are entering the market in 2020 - so 14nm? Intel 7nm isn’t due till 2021.

Could be 10nm. Intel should have a working 10nm node with high yields by then. If they don't, I don't even know what to say.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Could be 10nm. Intel should have a working 10nm node with high yields by then. If they don't, I don't even know what to say.
I don’t think so; the GPU dies will be too big for useful yields on 10nm unless a miracle occurs. Intel could use P1273, their SoC process, to get higher density and lower power per mm^2, and build a large die for the top of their stack. I expect that Intel is using its 4,500 GFX engineers to design a follow-on 7nm GPU stack pretty quickly (two hardware teams).
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Tru, but we're >implying here applying the MCM concept to graphics.
Which is beyond tricky.
Well, I think GFX will need to go dual-die to get high-end GPUs @ 3nm. $1.5B for a TU102-size GPU will be unprofitable for every market segment aside from compute/ML, so it’s really a problem for NV. If AMD sticks with smaller dies (sub-$750M) then they don’t have a problem, aside from increasing prices. The spoiler will be Intel, who will likely 'spend' their way into the market.
 
  • Like
Reactions: guachi

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
$1.5B for a TU102-size GPU will be unprofitable for every market segment aside from compute/ML
We just don't know the actual design costs for something this big @ 3nm.
That also really assumes we'll ever reach 3nm!
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
It's too bad that they don't have any competent hardware or a compute stack to deploy yet as server solutions. Intel needs to win gaming benchmarks to be able to court the professionals among the general public to invest in their platform, and I mean a platform for development. Who's going to use Intel graphics for deployment when the professionals don't even want to develop on Intel graphics? There are more developers working on GPU compute acceleration for AMD than there are for Intel, which still doesn't offer a single-source C++ compute API ...

CUDA wasn't even originally designed for servers, since its original release didn't feature multi-node support either. It was initially intended for DCC applications from Adobe or distributed computing like Folding@home. ROCm had a pretty similar humble beginning as well, since its predecessor, HSA, only targeted GPU compute acceleration for everyday productivity applications and wasn't meant for machine learning either ...

It's more important to have a development platform than a deployment platform at the beginning. People deploy x86 devices because they build for x86 as well, and a similar principle applies to CUDA devices. Intel's integrated GPUs won't do them any good, because nobody is interested in porting their CUDA-accelerated applications and developing them on integrated graphics. Intel must fix this by offering some compelling gaming benchmarks so the professionals can decide whether it's worth starting to add support for Intel GPU compute acceleration ...

That's kind of a perverse way to look at it. Intel needs to get into the compute graphics server space, so their answer is to start from the ground up developing graphics with little to no compute capability?

CUDA not designed for servers? It was an answer to AMD's work on OpenCL and was/is/will forever be an API and hardware toolset to offer compute capability on their hardware. Now, waaaay back in the day when CUDA was first created, graphics cards didn't have enough oooomph to be a viable server hardware choice, and the markets they excel in didn't really exist back then. It was more of a CPU accelerator than a full-blown computing solution. But that's how these things always start, and it was always a professional tech meant for compute.

The last part is nonsense. This has nothing to do with integrated graphics. Yes, they brought Raja in for discrete graphics. Yes, they are going to try to be competitive in discrete graphics. But this isn't happening at the cost of compute. Compute is the key to the whole venture.
 

Guru

Senior member
May 5, 2017
830
361
106
Hogwash; NVIDIA's data center revenue is several times bigger than AMD's RTG revenue altogether.

NVIDIA reported $679 million for their Q4 data center business alone; that's 2/3 of AMD's entire CPU + Graphics division revenue, which garnered $986M.

https://www.anandtech.com/show/13965/nvidia-earnings-report-q4-2019-crypto-pain-but-full-year-gain
https://www.anandtech.com/show/13917/amd-earnings-report-q4-fy-2018
AMD had strong growth; Nvidia actually had a decent loss.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
CUDA not designed for servers? It was an answer to AMD's work on OpenCL and was/is/will forever be an API and hardware toolset to offer compute capability on their hardware.
CUDA predated OpenCL by two years. Apple was the original author of OpenCL, not AMD.
 

Guru

Senior member
May 5, 2017
830
361
106
No, no it hasn't, and claiming it has doesn't make it so. A percent or two gain in market share due to the mining boom doesn't represent "significant" improvement for AMD. I really do hope that AMD gets their act together on the GPU side of things like they have with their CPUs, but they're still likely many, many years out from competing with NVIDIA in a way that increases market share. I fully expect to see 3000 series NVIDIA cards bring NVIDIA back with a vengeance due to the poor reception/sales of the 2000 series. If AMD hopes to compete they need a completely new architecture and direction on their gaming GPUs.
AMD is doing great in the professional GPU segment; they've had strong growth throughout the year. Nvidia has been posting decrease after decrease, yet AMD has been posting increase after increase.

They have the 7nm process advantage over Nvidia and they've been developing Navi for 3 years. The RX 400/500 series has been very competitive with Nvidia performance-wise and price-wise; in fact, I'd say the RX 570 and 580 have generally been the better buy for a long time. Vega 56, when it came out, was 100% the better buy over the GTX 1070.

Nvidia's next gen is likely very late Q4 2019 or, most likely, Q1 2020. So AMD has a good 9 months of process node advantage.
 
  • Like
Reactions: guachi

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
AMD had strong growth; Nvidia actually had a decent loss.
Look at the YoY results; NV was up 50% in datacenter.
AMD doesn't break down GFX sales. It reports Computing and Graphics, which has been driven up by Ryzen sales.
 
Last edited:

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
AMD is doing great in the professional GPU segment; they've had strong growth throughout the year. Nvidia has been posting decrease after decrease, yet AMD has been posting increase after increase.

They have the 7nm process advantage over Nvidia and they've been developing Navi for 3 years. The RX 400/500 series has been very competitive with Nvidia performance-wise and price-wise; in fact, I'd say the RX 570 and 580 have generally been the better buy for a long time. Vega 56, when it came out, was 100% the better buy over the GTX 1070.

Nvidia's next gen is likely very late Q4 2019 or, most likely, Q1 2020. So AMD has a good 9 months of process node advantage.

A "process node advantage" which means absolutely nothing due to inferior software and architecture. NVIDIA's GPUs are more efficient on a worse process... again, AMD needs a massive overhaul of their GPU stack and nobody knows if NAVI will provide anywhere near a large enough change. Time will tell, but claiming that AMD's GPU sales have been anything but dismal is hilarious.

The fallout from the mining boom means that used 470/480/570/580/Vega 56/1060/1070/1070 Ti cards have flooded the GPU market, which is eroding both AMD's and NVIDIA's sales and will continue to do so for quite some time.

I really do hope that AMD gets their GPU stack together. Ryzen 2 will be the first AMD CPU in over a decade I'd consider using in my main box and I'd love to see the same happen on the GPU end of things. I'm an AMD fan, but I'm also realistic and don't fall for fanboi hype and intellectual dishonesty.

We'll see on the pro segment. I don't have that data but do know that anecdotally, none of the CAD builds I've done over the past 10 years have wanted anything but NVIDIA.
 
  • Like
Reactions: Muhammed

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
Cuda predated openCL by two years. Apple was the original author of openCL, not AMD.
I didn't say AMD wrote it. I said AMD was doing work on it. They had partnered with Apple early on, and AMD had been working with several parties on a decent API to use; OpenCL was widely known before its first release (including a pre-launch announcement in 2008). But you have to remember that heterogeneous processing and using graphics for compute as well as 3D were the reasons AMD purchased ATI to begin with. Nvidia was working on OpenCL as well, a bit in a Rambus-like setting, where they always planned on pushing CUDA and locking everyone else out, but they still wanted to make sure their stuff worked in OpenCL and got to be part of the decision-making process. The problem is, like with every open standard: CUDA came as a finished product really quick because it was all Nvidia, while OpenCL took longer because a committee was working through its features and requirements.
 

Hitman928

Diamond Member
Apr 15, 2012
6,696
12,373
136
AMD doesn't break down GFX sales. It reports Computing and Graphics, which has been driven up by Ryzen sales.

In the call, Lisa Su said that enterprise revenue was roughly a 50/50 split between GPU and CPU. That gives the GPU side $100M or more for the quarter, or about 12% of the GPU market, with Nvidia taking the other 88%. The year prior, Nvidia had very near 100% of the market, though the market has expanded since then and is expected to stall a little bit in 2019.
 
  • Like
Reactions: guachi and Ajay

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Maybe.
Maybe.
They have nowhere else to go; if they don't innovate, they will grind to a halt. Also, Intel will keep pressing on as well. The pace will likely slow down for both as more exotic materials and geometries extend the timeline for tool, metrology and process development. Smarter, more powerful software and hardware will need to be developed to control costs and keep design timelines from exploding. It's all just engineering, really hard engineering, but still engineering (well, and physics and chemistry). The biggest challenge is cost, which can be dealt with by amortizing over a longer period of time than older process nodes (as we've already begun to see). It's not a matter of if, but only a matter of when.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
That's kind of a perverse way to look at it. Intel needs to get into the compute graphics server space, so their answer is to start from the ground up developing graphics with little to no compute capability?

CUDA not designed for servers? It was an answer to AMD's work on OpenCL and was/is/will forever be an API and hardware toolset to offer compute capability on their hardware. Now, waaaay back in the day when CUDA was first created, graphics cards didn't have enough oooomph to be a viable server hardware choice, and the markets they excel in didn't really exist back then. It was more of a CPU accelerator than a full-blown computing solution. But that's how these things always start, and it was always a professional tech meant for compute.

The last part is nonsense. This has nothing to do with integrated graphics. Yes, they brought Raja in for discrete graphics. Yes, they are going to try to be competitive in discrete graphics. But this isn't happening at the cost of compute. Compute is the key to the whole venture.

It's not a one-way ticket like you think it is. Architecting for graphics is strongly, not weakly, correlated with architecting for GPU compute. Graphics IS the stepping stone for GPU compute, since programmable shaders were not originally designed for professional compute applications but for graphics operations. If Intel's discrete graphics can't show any promise of being performant in high-end graphics, then what do you think about the prospects of Intel succeeding in professional GPU compute, where the kernels are far more complex?

CUDA really wasn't originally designed for server workloads. Heck, you can't even run TensorFlow, PyTorch or any other machine learning framework on anything older than Kepler-based GPUs, and Kepler is the cornerstone of GPU compute. OpenCL is rubbish for the most part, since it's not even an equivalent to CUDA. OpenCL isn't even single-source like CUDA! It's a totally different compute API that's very much designed around the limitations of a graphics API. AMD's main compute API is HIP/OpenMP, since they've clearly given up hope on OpenCL being truly competitive in the near future ...
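
And to be clear about what HIP buys them: it deliberately mirrors CUDA's single-source model, which is the whole point of AMD's porting story. A rough illustrative sketch under that assumption (the kernel and names are made up for the example):

```cpp
// scale.hip.cpp -- HIP keeps CUDA's single-source model: the same __global__
// kernel syntax, with hip* runtime calls in place of cuda* ones
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void scale(int n, float factor, float* data) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    std::vector<float> host(n, 3.0f);

    float* dev = nullptr;
    hipMalloc(&dev, n * sizeof(float));
    hipMemcpy(dev, host.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Portable launch spelling; the CUDA-style <<<>>> syntax also works on HIP
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                       n, 2.0f, dev);

    hipMemcpy(host.data(), dev, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);  // expect 6.0

    hipFree(dev);
    return 0;
}
```

The hipify tools can mechanically translate most CUDA code into that form, which is why AMD's porting pitch is mostly a hardware swap rather than a rewrite. Intel has nothing comparable to point developers at yet ...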

It has everything to do with their integrated graphics, because they have a really awful development environment. Almost no graphics developers put in any effort for their integrated GPUs, and you can be sure no one is interested in their OpenCL implementation or other compute solutions. They may as well just start out focused on high-end graphics like the others did. What would even be the point of introducing a high-end compute solution so early when they still don't even have a single-source compute API? Using your strategy might even harm Intel's potential growth early on while they're building a developer community, but to developers it's just cheaper to replace hardware later on than to rewrite and maintain code. So do you truly think that Intel, even with all of its resources, is prepared to start tackling high-end GPU compute alone from the beginning? For reference, it's taken AMD nearly 3 years to get close to upstreaming their GPU compute acceleration support for the most popular machine learning frameworks, so can you imagine how long it would take Intel to do the same without community support and while they still don't offer an implementation of a single-source compute API?
 
  • Like
Reactions: guachi