The future of AMD in graphics


Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
It's not a one-way ticket like you think it is. Architecting for graphics does not imply only a weak correlation with architecting for GPU compute. Graphics IS the stepping stone to GPU compute, since programmable shaders were not originally designed for professional compute applications; they were designed for graphics operations. If Intel's discrete graphics can't show any promise of being performant in high-end graphics, then what do you think of Intel's prospects of succeeding in professional GPU compute, where the kernels are far more complex?

CUDA really wasn't originally designed for server workloads. Heck, you can't even run TensorFlow, PyTorch or any other machine learning framework on pre-Kepler GPUs, and those frameworks are the cornerstone of GPU compute. OpenCL is rubbish for the most part since it's not even an equivalent to CUDA. OpenCL isn't even single source like CUDA! It's a totally different compute API that's very much designed around the limitations of a graphics API. AMD's main compute APIs are HIP/OpenMP since they've clearly given up hope on OpenCL being truly competitive in the near future ...
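
To make the single-source point concrete, here is a minimal sketch (mine, not the poster's) of a complete CUDA program: the kernel and the host code share one .cu file that nvcc compiles in a single pass, with a closing comment on how OpenCL would handle the same kernel at runtime.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device and host code live in the same translation unit ("single source"):
// nvcc splits this file and compiles the kernel for the GPU automatically.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Managed memory keeps the host-side plumbing minimal for this sketch.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Kernel launch uses ordinary C++-style syntax, type-checked at compile time.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();
    printf("y[0] = %f\n", y[0]);  // expect 4.0

    // In OpenCL this kernel would instead ship as a C string, be built at
    // runtime via clCreateProgramWithSource()/clBuildProgram(), and be launched
    // through clSetKernelArg()/clEnqueueNDRangeKernel() -- the "separate
    // source" model the post is contrasting with CUDA.
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The OpenCL host side needs a dozen or so setup calls (platform, device, context, queue, program, kernel, buffers) before the first launch, which is what "designed around the limitations of a graphics API" is getting at.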

It has everything to do with their integrated graphics, because the development environment around it is really awful. Almost no graphics developers put in any effort for their integrated GPUs, and you can be sure no one is interested in their OpenCL implementation or other compute solutions. They may as well start out focused on high-end graphics like the others did. What would even be the point of introducing a high-end compute solution so early when they still don't even have a single source compute API? Using your strategy might even harm Intel's potential growth early on while they're building a developer community; to developers it's just cheaper to replace hardware later on than to rewrite and maintain code. So do you truly think that Intel, even with all of its resources, is prepared to start tackling high-end GPU compute alone from the beginning? For reference, it's taken AMD nearly 3 years to get close to upstreaming their GPU compute acceleration support into the most popular machine learning frameworks, so can you imagine how long it would take Intel to do the same without community support, and while they still don't support an implementation of a single source compute API?


It is when the discussion starts with Intel tossing out everything compute-related to compete as a discrete GPU.

Intel isn't going to start a new GPU design from the ground up only to toss it almost immediately and build a whole new arch that includes compute. Intel may understand that to succeed they have to give Nvidia in particular a hard time in the enthusiast market. But that won't come at the cost of having nothing for the market they really care about, the compute market. Personally I think it will be discrete be damned, but I'll give them the benefit of the doubt here. Intel also has to realize that slow process development means they have to hit the market (servers) running. They aren't going to have much opportunity to stay ahead (if they can at all) if they're stuck on a process for 4+ years. We are almost done with the ebbs and flows of one company doing a process change and taking the lead for a bit, then the other doing it 2 years later and taking the lead back. AMD blew it last time. Whoever has the best archs will keep a lead for a lot longer, and a lot more effort across the board will go into prepping for the next node shift. I know people want Intel to come in and shake up gaming, and they may. But they aren't going to tie their hands behind their backs when all they really want is those high-margin $2k-$10k server compute card sales.

As far as the APIs are concerned: do we know when Intel is launching this hardware? What the specs are? What their software guys are doing? Have we never seen Intel put the cart before the horse?
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
I believe I understand what you mean, but this is a metaphor I haven't seen before. Money machine?
It was Jensen making fun of Intel and Otellini. His criticism, as I interpreted it, was that Intel was only in it for the money. And it was a money cannon, with a drawing of Otellini sitting on a cannon full of money. I think he was on to something, but it looks to me as if NV has fallen into the same trap.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
It is when the discussion starts with Intel tossing out everything compute-related to compete as a discrete GPU.

Intel isn't going to start a new GPU design from the ground up only to toss it almost immediately and build a whole new arch that includes compute. Intel may understand that to succeed they have to give Nvidia in particular a hard time in the enthusiast market. But that won't come at the cost of having nothing for the market they really care about, the compute market. Personally I think it will be discrete be damned, but I'll give them the benefit of the doubt here. Intel also has to realize that slow process development means they have to hit the market (servers) running. They aren't going to have much opportunity to stay ahead (if they can at all) if they're stuck on a process for 4+ years. We are almost done with the ebbs and flows of one company doing a process change and taking the lead for a bit, then the other doing it 2 years later and taking the lead back. AMD blew it last time. Whoever has the best archs will keep a lead for a lot longer, and a lot more effort across the board will go into prepping for the next node shift. I know people want Intel to come in and shake up gaming, and they may. But they aren't going to tie their hands behind their backs when all they really want is those high-margin $2k-$10k server compute card sales.

As far as the APIs are concerned: do we know when Intel is launching this hardware? What the specs are? What their software guys are doing? Have we never seen Intel put the cart before the horse?

It's perfectly fine to design a new GPU from the ground up, since the API/drivers will take care of things, but I don't believe the solution is to just focus on an architecture specific to a single segment. The solution, I believe, is for Intel to have different architectures optimized for different use cases, like their competitors do. It does Intel good to have it both ways, since their gaming solutions can still be viable as a professional's development environment, and they can then deploy their compute solutions since the code written will likely end up working on both. It's fine to target high margins, but the reason CUDA succeeds at that is mainly its community. Not many programmers personally own an expensive Volta GPU that's entirely suited for compute, but they probably own either a Pascal or a Turing GPU they can develop on ... (it's also very expensive to get their hands on the Tesla line of GPUs as well)
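
As a sketch of that develop-on-consumer, deploy-on-datacenter point (my illustration, not the poster's): the same CUDA source runs unchanged on a Pascal or Turing gaming card and on a Volta-class Tesla, because the runtime reports whatever device is present and fat binaries carry PTX the driver can JIT-compile for newer chips.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel used only to prove the code runs on whatever GPU is attached.
__global__ void probe() {
    if (threadIdx.x == 0) printf("kernel executed on the attached GPU\n");
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // Same source, whether this prints "GeForce GTX 1070" or "Tesla V100":
    // building with several -gencode arch=compute_XX,code=sm_XX targets plus
    // a PTX fallback lets one binary cover Pascal, Turing and Volta alike.
    printf("device 0: %s (compute capability %d.%d)\n",
           prop.name, prop.major, prop.minor);

    probe<<<1, 32>>>();
    cudaDeviceSynchronize();
    return 0;
}
```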

As for Intel, they are working on getting SYCL support up and running as their approach to single source programming, but I don't think they're done yet ... (frankly, Intel will definitely need a community to get started quickly, because many machine learning frameworks like PyTorch don't have a SYCL backend; PyTorch does, on the other hand, have a HIP backend for AMD GPUs)
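
For contrast, a hedged sketch of why that HIP backend was comparatively cheap for AMD: HIP mirrors the CUDA API almost one-to-one, so the hipify tools can mechanically translate code like the following (the HIP equivalents are noted in the comments), whereas a SYCL backend has to be written against a different programming model.

```cuda
#include <cuda_runtime.h>   // HIP port: #include <hip/hip_runtime.h>

// Kernel source is identical under HIP; only the host API names change.
__global__ void scale(float *v, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= s;
}

int main() {
    const int n = 1024;
    float *d;
    cudaMalloc(&d, n * sizeof(float));            // HIP: hipMalloc
    cudaMemset(d, 0, n * sizeof(float));          // HIP: hipMemset
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // same launch syntax in HIP
    cudaDeviceSynchronize();                      // HIP: hipDeviceSynchronize
    cudaFree(d);                                  // HIP: hipFree
    return 0;
}
```

(hipify-perl / hipify-clang perform exactly these renames, which is reportedly how PyTorch's ROCm support reuses most of its CUDA code paths.)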
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
AMD had strong growth; Nvidia actually had a decent loss.
Did you even look at the links I posted before muttering your nonsense? NVIDIA's data center revenue eclipses AMD's entire CPU + GPU business.

Look it up, genius.


Quit being rude.

AT Moderator ElFenix
 
Last edited by a moderator:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
They both had an increase in datacenter GPU revenue YoY in Q4 2018 (FY2019).
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Yeah, as evidenced by the 81% NVIDIA and 18% AMD market split, do you live in dreamland?

HUH?! You are not making any sense. We aren't saying AMD has more market share than nVidia, literally NOBODY has said that. Everybody here knows you hate AMD for whatever weird reason. But that doesn't mean you should just outright ignore data.
 

Guru

Senior member
May 5, 2017
830
361
106
Did you even look at the links I posted before muttering your nonsense? NVIDIA's data center revenue eclipses AMD's entire CPU + GPU business.

Look it up, genius.
Genius, I'm not talking about accumulated market share over the past decade, you fool. I'm talking about the recent year, and more specifically the recent quarter.

You quit being rude too.

AT Moderator ElFenix
 
Last edited by a moderator:

Guru

Senior member
May 5, 2017
830
361
106
A "process node advantage" which means absolutely nothing due to inferior software and architecture. NVIDIA's GPUs are more efficient on a worse process... again, AMD needs a massive overhaul of their GPU stack and nobody knows if NAVI will provide anywhere near a large enough change. Time will tell, but claiming that AMD's GPU sales have been anything but dismal is hilarious.

The fallout from the mining boom means that used 470/480/570/580/Vega 56/1060/1070/1070TI have flooded the GPU market which is eroding both AMD and NVIDIA's sales and will continue to do so for quite some time.

I really do hope that AMD gets their GPU stack together. Ryzen 2 will be the first AMD CPU in over a decade I'd consider using in my main box and I'd love to see the same happen on the GPU end of things. I'm an AMD fan, but I'm also realistic and don't fall for fanboi hype and intellectual dishonesty.

We'll see on the pro segment. I don't have that data but do know that anecdotally, none of the CAD builds I've done over the past 10 years have wanted anything but NVIDIA.
Software-wise AMD is MUCH better than Nvidia, MUCH better: from desktop software and front-end features to back-end features, stability, performance and compatibility. AMD has much lower response times, much better multi-monitor compatibility, much wider hardware compatibility, etc. On the front end, all of their features are miles better than Nvidia's; in fact Nvidia is very much behind.

Also, overall driver stability has been much better for AMD, with far fewer big issues like BSODs and corruption.

The process node advantage is huge because it allows them to develop smaller, denser chips. I get it, it's still early 7nm production, so it's more expensive, which erases some of the benefit, but they are using 7nm in the professional market first: their Epyc processors, which they can sell for $3,000 to $10k at big profits, and their MI60 and MI50 GPUs, which they can turn a profit on.

From there, they use those products to mature the 7nm process, and as it gets cheaper, push out the lower-margin parts and various desktop parts.

And that said, 7nm is expensive, but 12nm is not that much cheaper, so AMD is in a way better position long term.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Software-wise AMD is MUCH better than Nvidia, MUCH better: from desktop software and front-end features to back-end features, stability, performance and compatibility. AMD has much lower response times, much better multi-monitor compatibility, much wider hardware compatibility, etc. On the front end, all of their features are miles better than Nvidia's; in fact Nvidia is very much behind.

Also, overall driver stability has been much better for AMD, with far fewer big issues like BSODs and corruption.

The process node advantage is huge because it allows them to develop smaller, denser chips. I get it, it's still early 7nm production, so it's more expensive, which erases some of the benefit, but they are using 7nm in the professional market first: their Epyc processors, which they can sell for $3,000 to $10k at big profits, and their MI60 and MI50 GPUs, which they can turn a profit on.

From there, they use those products to mature the 7nm process, and as it gets cheaper, push out the lower-margin parts and various desktop parts.

And that said, 7nm is expensive, but 12nm is not that much cheaper, so AMD is in a way better position long term.
Oh great, another vapid fanboi - please stop, just stop. Kthxby


Read the rules stickies. We don't repeatedly call people fanboy or troll around here.

AT Moderator ElFenix
 
Last edited by a moderator:

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
The 1660 has launched and things have never looked so bad for AMD, and it's only going to get worse in 1.5 months when the 1650 enters the fray.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
It still loses to ages-old Polaris in perf/$.
Only the RX 570 is ahead in perf/$, and the 1660 is in a different performance tier, so the comparison is moot.
Now that's a properly dramatic statement
JPR says it's 81-18 last quarter, the lowest it has been historically. It'll be even lower this quarter.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
The game bundle is probably the only redeeming feature of the RX 590. Otherwise it's a turd - slower than the 1660 and uses almost 2.5x more power.

From the reviews I have seen, the cards are equal in performance; the RX 590's total system power was 100W more than the GTX 1660's. That is 250W vs 350W, not even worth talking about and only a concern for people who want to use an SFF case with a 300W PSU. For everyone else a 450W Gold PSU will be more than enough.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
From the reviews I have seen, the cards are equal in performance; the RX 590's total system power was 100W more than the GTX 1660's. That is 250W vs 350W, not even worth talking about and only a concern for people who want to use an SFF case with a 300W PSU. For everyone else a 450W Gold PSU will be more than enough.
Nope, according to TPU the 1660 is ~9% faster and consumes less than half the power. Less power consumed = less heat, meaning a quieter and cooler card and an all-round better experience.

Game bundle aside, the RX 590 is unquestionably the worse card.