The Potential Disruptiveness of AMD’s Open Source Deep Learning Strategy


Bacon1

Diamond Member
Feb 14, 2016
Vulkan isn't generating any more Android profits for AMD than Mantle did, which is precisely $0.00. AMD is still losing money. That makes Mantle, Vulkan and DX12 a collective financial failure for AMD. They spent money on it and got nothing in return.

They've open sourced tons of their work. They also co-developed HBM and are not charging royalties on it. They do a lot of hard work and don't charge people to use it. We got 2 great APIs out of Mantle's development (DX12 / Vulkan) and AMD got to prove there was a better way and also got a great API testing language out of it that they still use internally. They still use Mantle for LiquidVR and other efforts.
Nonsense, they're just led by people with non-existent business acumen. Mantle/Freesync/Crossfire/HBM all should've been licensed, just as nVidia collects fees from every single G-Sync monitor sold and every single SLI motherboard with "certification".

They also should be locking out nVidia GPUs from their chipsets unless nVidia pays "certification" fees, exactly how nVidia locks out paying customers from PhysX when they detect a non-nVidia GPU in the system. Clearly nVidia's customers appreciate such tactics, given they continue to make a profit. So now it's time for AMD to start cashing in.

Yeah I'd hate to live in a world with you in charge.

Want to play Prey? Buy an AMD GPU!

Want to play Gears of War? Buy an Nvidia GPU!

Want to play XYZ? Buy Ryzen!

Want to play ABC? Buy Intel!

That harms the PC world. It's the same reason VR adoption is terrible right now, with vendor-specific games.
 

Bacon1

Diamond Member
Feb 14, 2016
Are you actually saying that AMD doesn't release parts as quickly because they're nice guys and don't want us to spend too much money on computer parts?!

No? I was specifically talking about software, not hardware. Look at how well the original GCN is still faring against Kepler in newer games. Mantle / DX12 / Vulkan have paid off very well for people who bought AMD over Nvidia.
 
Aug 11, 2008
They've open sourced tons of their work. They also co-developed HBM and are not charging royalties on it. They do a lot of hard work and don't charge people to use it. We got 2 great APIs out of Mantle's development (DX12 / Vulkan) and AMD got to prove there was a better way and also got a great API testing language out of it that they still use internally. They still use Mantle for LiquidVR and other efforts.


Yeah I'd hate to live in a world with you in charge.

Want to play Prey? Buy an AMD GPU!

Want to play Gears of War? Buy an Nvidia GPU!

Want to play XYZ? Buy Ryzen!

Want to play ABC? Buy Intel!

That harms the PC world. It's the same reason VR adoption is terrible right now, with vendor-specific games.
Actually, your efforts to portray AMD as such a magnanimous entity simply reinforce BFG's points. They have done all these great things to benefit the consumer (at least in your opinion), but still haven't turned a profit. Even if you accept the "evil empire" concept of Intel/nVidia (which I do not), surely there must be some middle ground that is reasonably fair to the consumer but still brings more profits to the company as well. And look what Mantle/DX12/Vulkan have done for their GPU business: while rolling out these APIs they dropped to a historically low market share, have still only slightly recovered, and have no top-end products.
 

Bacon1

Diamond Member
Feb 14, 2016
Actually, your efforts to portray AMD as such a magnanimous entity simply reinforce BFG's points. They have done all these great things to benefit the consumer (at least in your opinion), but still haven't turned a profit. Even if you accept the "evil empire" concept of Intel/nVidia (which I do not), surely there must be some middle ground that is reasonably fair to the consumer but still brings more profits to the company as well. And look what Mantle/DX12/Vulkan have done for their GPU business: while rolling out these APIs they dropped to a historically low market share, have still only slightly recovered, and have no top-end products.

Hmm if only I said the exact same thing in my original post:

The sad part about AMD is they are focused on doing better by their customers instead of constantly charging them for new hardware. That's great for end users, but terrible for profits.
 

richaron

Golden Member
Mar 27, 2012
Actually, your efforts to portray AMD as such a magnanimous entity simply reinforce BFG's points. They have done all these great things to benefit the consumer (at least in your opinion), but still haven't turned a profit. Even if you accept the "evil empire" concept of Intel/nVidia (which I do not), surely there must be some middle ground that is reasonably fair to the consumer but still brings more profits to the company as well. And look what Mantle/DX12/Vulkan have done for their GPU business: while rolling out these APIs they dropped to a historically low market share, have still only slightly recovered, and have no top-end products.

Of course there's a middle ground; I would argue AMD is walking the middle ground. I don't think anyone here is pretending AMD is being a not-for-profit organization by going open source. But this isn't a zero-sum game; they can both help the community and try to make money.
 

beginner99

Diamond Member
Jun 2, 2009
Most people DON'T have brand loyalty, so really all they have to do is offer comparable performance for $50 less and they should sell about half the cards.

That doesn't even work in the consumer space, let alone in the HPC space. In the latter, all that matters is which Gartner report (or other crap) the deciding person last read, and that's what you will have to buy.
 

OatisCampbell

Senior member
Jun 26, 2013
That doesn't even work in the consumer space, let alone in the HPC space. In the latter, all that matters is which Gartner report (or other crap) the deciding person last read, and that's what you will have to buy.
We'll have to agree to disagree.
It wasn't that long ago that AMD offered competitive high-end products; if they do so again for less cash, people will buy.
 

Illyan

Member
Jan 23, 2008
I'm going to have to disagree with that. Granted nV got into the market first and therefore has a much greater tool base and user base - both very important concerning viability in certain situations - but I think mind share may be clouding your judgement if you think nV is the "only viable option".
In the domains of academic research, deep learning, autonomous driving, data science, etc., absolutely no one uses AMD GPUs, because none of the software supports AMD GPUs, and no one is going to spend hundreds of thousands of dollars developing their own implementation of those tools for AMD GPUs when the tools already exist for the price of a $500 Nvidia GPU. That leaves Nvidia the only viable option for scientific computing.
 

Bacon1

Diamond Member
Feb 14, 2016
[image attachments]
 

richaron

Golden Member
Mar 27, 2012
In the domains of academic research, deep learning, autonomous driving, data science, etc., absolutely no one uses AMD GPUs, because none of the software supports AMD GPUs, and no one is going to spend hundreds of thousands of dollars developing their own implementation of those tools for AMD GPUs when the tools already exist for the price of a $500 Nvidia GPU. That leaves Nvidia the only viable option for scientific computing.

Well, this time I'm just going to have to say you are flat-out wrong.

In many cases you might have a point, which I conceded earlier, but to claim "none of the software supports AMD GPUs" is just narrow-minded and short-sighted. I have personal experience with world-class molecular dynamics software running OpenCL, and no doubt there are other software packages like this. Plus many people in these sectors have the time and motivation to experiment with their own code.
 

beginner99

Diamond Member
Jun 2, 2009
Which tools? Can you be more specific?

Theano

The backend was designed to support OpenCL, however current support is incomplete. A lot of very useful ops still do not support it because they were ported from the old backend with minimal change.

Plus before this new backend, it only worked with CUDA.

TensorFlow

Which is from Google, which for sure would have the money and capacity to add OpenCL if they wanted to. But given the above link, it's very unlikely to happen. There are some efforts to make this work, but if you want to use it now you need CUDA, i.e. an NV GPU.

Deeplearning4j

Roadmap:
Medium priority:

  • OpenCL for ND4J

Molecular Modeling

Go through the list. Most of them only support CUDA, i.e. NV. Some mention OpenCL, but if you read the docs it only really works with CUDA. Yeah, some work with OpenCL, but they're by far the minority.
 

Dribble

Platinum Member
Aug 9, 2005
So all AMD needs to do to be competitive in HPC is to find a way to run CUDA code on their GPUs, right?
http://gpuopen.com/compute-product/hip-convert-cuda-to-portable-c-code/
Why would you port CUDA and have all the problems associated with porting (i.e. it not working properly) when you could just buy Nvidia and use CUDA? No one is going to buy AMD to use CUDA.

Either AMD gets their own software stack that works and convinces devs to use it, or they can't really compete. But then, software has always been AMD's problem. Many years ago Nvidia decided to make writing software as much a priority as producing the hardware. AMD didn't; they would just produce the hardware and hope someone else wrote the software. That works fine for x86 CPUs, but not so well for GPU compute.

There have been many AMD threads about how AMD was going to take over the world with their GPU/CPU compute stuff off the back of PowerPoint slides with fancy names produced by AMD, and they have nearly all come to nothing. The only exceptions were things AMD never really expected, like bitcoin mining, where the software was really simple and just happened to suit AMD's architecture.
 

BFG10K

Lifer
Aug 14, 2000
They've open sourced tons of their work. They also co-developed HBM and are not charging royalties on it. They do a lot of hard work and don't charge people to use it. We got 2 great APIs out of Mantle's development (DX12 / Vulkan) and AMD got to prove there was a better way and also got a great API testing language out of it that they still use internally. They still use Mantle for LiquidVR and other efforts.
So? Who cares? If AMD can't make a profit, who cares? It's not a charity making donations.

Yeah I'd hate to live in a world with you in charge.

Want to play Prey? Buy an AMD GPU!

Want to play Gears of War? Buy an Nvidia GPU!

Want to play XYZ? Buy Ryzen!

Want to play ABC? Buy Intel!

That harms the PC world. It's the same reason VR adoption is terrible right now, with vendor-specific games.
What's the alternative? Lalalala-money-doesn't-matter-lalala! How many more years do you think AMD can continue with sustained losses?

I'd rather have the above than AMD going out of business. Then you'd see what a real monopoly looks like.

nVidia has proven time and time again that licensing and closed source/proprietary technology make money in the graphics market.
 

SpaceBeer

Senior member
Apr 2, 2016
Why would you port CUDA and have all the problems associated with porting (i.e. it not working properly) when you could just buy Nvidia and use CUDA? No one is going to buy AMD to use CUDA.

Either AMD gets their own software stack that works and convinces devs to use it, or they can't really compete. But then, software has always been AMD's problem. Many years ago Nvidia decided to make writing software as much a priority as producing the hardware. AMD didn't; they would just produce the hardware and hope someone else wrote the software. That works fine for x86 CPUs, but not so well for GPU compute.

There have been many AMD threads about how AMD was going to take over the world with their GPU/CPU compute stuff off the back of PowerPoint slides with fancy names produced by AMD, and they have nearly all come to nothing. The only exceptions were things AMD never really expected, like bitcoin mining, where the software was really simple and just happened to suit AMD's architecture.

If there's a tool that will automatically enable CUDA code to run on GCN without (m)any issues, and the GCN card has a better price/perf ratio, why wouldn't you use it?

http://hothardware.com/reviews/nvidia-quadro-p4000-and-p2000-pro-workstation-gpu-review?page=3
http://hothardware.com/reviews/nvidia-quadro-p4000-and-p2000-pro-workstation-gpu-review?page=5

The reason people choose CUDA is that it's good. And it's been here for ~10 years now. As soon as there is a tool (software and hardware) that ensures work gets done faster/easier/cheaper, people will start using it. Saying nVidia and CUDA will be the one and only choice for HPC/ANN makes no sense. It would be the same as if, 25 years ago, someone had said no language but C/C++ would ever be widely used, or that "wintel" would be the only choice for personal computers (today's PCs, tablets & smartphones).

So, as soon as there is a better offer, people will ditch nVidia and go for that one. Just like PowerPC, for example. When you compare Tesla and Xeon Phi spec numbers, it seems no one has a reason to buy Intel's (co)processors. However, there are many happy Xeon Phi customers. One of them is Google, which is using products from nVidia, Intel, AMD and its own chip (TPU). Every company would like to be independent of its suppliers. In (manufacturing) industry, it is advised to keep each supplier under 50% if possible (i.e. if you have 3+ suppliers), or below 70% if there are only two.

The best thing AMD can do at this moment is to improve/make software so it's powerful, easy to use, and supported on other platforms (other GPUs and CPUs). And it is very easy to do that: you only need a large team of highly skilled developers, a huge amount of money, and a lot of time. Therefore, it is obvious why AMD is still behind nVidia, Intel, Google... even though they've been working on this for at least a year and a half (http://www.anandtech.com/show/9792/amd-sc15-boltzmann-initiative-announced-c-and-cuda-compilers-for-amd-gpus), and probably much longer.
 

beginner99

Diamond Member
Jun 2, 2009
So, as soon as there is a better offer, people will ditch nVidia and go for that one.

The issue is "What is a better offer?", especially if you already have a running NV/CUDA ecosystem. Using HIP, i.e. AMD's conversion tool, doesn't guarantee success, and anyway you will need to assign resources to learn it, test it, and implement it. That costs a ton of employee time and hence a lot of money. It doesn't matter if AMD's GPU then costs $500 less for the same performance; that's too little a difference given the other costs. That's the problem: hardware cost in the professional area is only a small part of the total cost.
 

Glo.

Diamond Member
Apr 25, 2015
Why would you port CUDA and have all the problems associated with porting (i.e. it not working properly) when you could just buy Nvidia and use CUDA? No one is going to buy AMD to use CUDA.

Either AMD gets their own software stack that works and convinces devs to use it, or they can't really compete. But then, software has always been AMD's problem. Many years ago Nvidia decided to make writing software as much a priority as producing the hardware. AMD didn't; they would just produce the hardware and hope someone else wrote the software. That works fine for x86 CPUs, but not so well for GPU compute.

There have been many AMD threads about how AMD was going to take over the world with their GPU/CPU compute stuff off the back of PowerPoint slides with fancy names produced by AMD, and they have nearly all come to nothing. The only exceptions were things AMD never really expected, like bitcoin mining, where the software was really simple and just happened to suit AMD's architecture.
It is not using CUDA on an AMD GPU.

HIP is a compiler toolchain that automatically transcodes CUDA code into portable C++ (HIP). It is not a "real-time" compiler: you have to port the whole application from CUDA. But the tool translates 99.96% of the CUDA code automatically and requires minimal manual intervention. Overall, the whole port of an application, including optimization and validation, can be done in 120 minutes.

It is actually one of the best software initiatives AMD has provided through GPUOpen.
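
For anyone curious what the port actually looks like, here is a rough sketch (hand-written for illustration, not actual hipify output) of a trivial CUDA SAXPY kernel after translation to HIP. The kernel body survives unchanged; only the host-side API calls get renamed:

// saxpy_hip.cpp - hand-written sketch of a hipify-style port
// (illustration only, not actual tool output). Build with: hipcc saxpy_hip.cpp
#include <hip/hip_runtime.h>

// Kernel body is identical to the CUDA original.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    hipMalloc(&x, n * sizeof(float));      // was: cudaMalloc(&x, ...)
    hipMalloc(&y, n * sizeof(float));
    // ... fill the buffers with hipMemcpy (was: cudaMemcpy) ...
    hipLaunchKernelGGL(saxpy,              // was: saxpy<<<grid, block>>>(...)
                       dim3((n + 255) / 256), dim3(256), 0, 0,
                       n, 2.0f, x, y);
    hipDeviceSynchronize();                // was: cudaDeviceSynchronize()
    hipFree(x);                            // was: cudaFree(x)
    hipFree(y);
    return 0;
}

The same HIP source then builds with hipcc for AMD GPUs, or on top of the CUDA toolchain for Nvidia GPUs, which is the whole point of the "portable" part.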

Over the previous 24 months AMD has made an unprecedented effort to change the perception of their brand, and the first fruits of this effort are slowly showing up. Nvidia has had a near-monopoly for the past 10 years, and their position is well entrenched, but AMD can still dig away at it.
 

Glo.

Diamond Member
Apr 25, 2015
The reason people choose CUDA is that it's good. And it's been here for ~10 years now. As soon as there is a tool (software and hardware) that ensures work gets done faster/easier/cheaper, people will start using it. Saying nVidia and CUDA will be the one and only choice for HPC/ANN makes no sense. It would be the same as if, 25 years ago, someone had said no language but C/C++ would ever be widely used, or that "wintel" would be the only choice for personal computers (today's PCs, tablets & smartphones).
No. People use CUDA because it saves money and time (money, again) in software optimization. You just buy Nvidia hardware and nothing else, even if it costs you more money over a period of time. OpenCL is hard to optimize: it can run much better on AMD and Intel hardware than on Nvidia, but it is hard to optimize the application for each architecture. CUDA is not. You just drop in the API and leave the rest to Nvidia's software engineers.
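
To give a feel for the difference: once memory is set up, a CUDA launch is a single line, saxpy<<<grid, block>>>(n, a, x, y);. The equivalent OpenCL host side looks roughly like this (my bare-bones sketch of the standard OpenCL C API, error checking omitted), and knobs like the work sizes at the end are yours to tune per architecture:

// opencl_saxpy.cpp - bare-bones OpenCL host setup (sketch; error checking omitted)
#include <CL/cl.h>

int main() {
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    // Kernel source is compiled at runtime, per device; this is where
    // per-architecture tuning lands on you instead of on the vendor.
    const char* src =
        "__kernel void saxpy(int n, float a, __global const float* x,"
        "                    __global float* y) {"
        "    int i = get_global_id(0);"
        "    if (i < n) y[i] = a * x[i] + y[i];"
        "}";
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", NULL);

    int n = 1 << 20; float a = 2.0f;
    cl_mem x = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n * sizeof(float), NULL, NULL);
    cl_mem y = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), NULL, NULL);
    clSetKernelArg(k, 0, sizeof(int),    &n);
    clSetKernelArg(k, 1, sizeof(float),  &a);
    clSetKernelArg(k, 2, sizeof(cl_mem), &x);
    clSetKernelArg(k, 3, sizeof(cl_mem), &y);

    size_t global = n;   // global work size; local sizes are your problem too
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(q);
    return 0;
}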

But what can be done with OpenCL can be seen lately in Blender.
https://wiki.blender.org/index.php/Dev:Source/Render/Cycles/OpenCL
[chart: Blender Cycles render times, CUDA on GTX 1060 vs OpenCL on RX 480]

This is a comparison between CUDA (GTX 1060) and OpenCL (RX 480), and as you can see the AMD GPU is faster. MUCH faster.
 

Krteq

Golden Member
May 22, 2015
Theano

The backend was designed to support OpenCL, however current support is incomplete. A lot of very useful ops still do not support it because they were ported from the old backend with minimal change.

Plus before this new backend, it only worked with CUDA.

TensorFlow

Which is from Google, which for sure would have the money and capacity to add OpenCL if they wanted to. But given the above link, it's very unlikely to happen. There are some efforts to make this work, but if you want to use it now you need CUDA, i.e. an NV GPU.

Deeplearning4j

Roadmap:
Medium priority:

  • OpenCL for ND4J

Molecular Modeling

Go through the list. Most of them only support CUDA, i.e. NV. Some mention OpenCL, but if you read the docs it only really works with CUDA. Yeah, some work with OpenCL, but they're by far the minority.
Well, at least TensorFlow is currently supported by the ROCm SW stack:

[screenshot: TensorFlow shown as supported on the ROCm software stack]
 

SpaceBeer

Senior member
Apr 2, 2016
No. People use CUDA because it saves money and time (money, again) in software optimization. You just buy Nvidia hardware and nothing else, even if it costs you more money over a period of time. OpenCL is hard to optimize: it can run much better on AMD and Intel hardware than on Nvidia, but it is hard to optimize the application for each architecture. CUDA is not. You just drop in the API and leave the rest to Nvidia's software engineers.

But what can be done with OpenCL can be seen lately in Blender.
https://wiki.blender.org/index.php/Dev:Source/Render/Cycles/OpenCL

This is a comparison between CUDA (GTX 1060) and OpenCL (RX 480), and as you can see the AMD GPU is faster. MUCH faster.
Please bear in mind that the RX 480 has ~50% more compute power than the GTX 1060 (in TFLOPs), so we can't be sure what the actual software influence is.

The issue is "What is a better offer?", especially if you already have a running NV/CUDA ecosystem. Using HIP, i.e. AMD's conversion tool, doesn't guarantee success, and anyway you will need to assign resources to learn it, test it, and implement it. That costs a ton of employee time and hence a lot of money. It doesn't matter if AMD's GPU then costs $500 less for the same performance; that's too little a difference given the other costs. That's the problem: hardware cost in the professional area is only a small part of the total cost.

So we agree CUDA is better because someone else does the thinking about architecture and the optimization of your code. And that is fine. But it also means your organization and your staff depend on a 3rd party: their roadmap, licensing policy, etc. So you can either go that (easy) way, or invest in your staff so they know how to do the same job using different tools, and use those tools on some other, small(er) projects. There is a difference between knowledge (how to make a good ANN) and skills (which tool to use to make it).

Being tied to one software/hardware supplier can be a huge issue, and it should be avoided if possible. Also, users (employees) should (try to) learn more than one programming language, or whatever tool they use, since they'll have a better chance of finding a good job, or of adapting to changes in their company.
 

Glo.

Diamond Member
Apr 25, 2015
Please bear in mind that the RX 480 has ~50% more compute power than the GTX 1060 (in TFLOPs), so we can't be sure what the actual software influence is.
Previous versions of Blender showed the GTX 1060 using CUDA as faster than the RX 480 using OpenCL. And it's not 50% more; it's 4.4 vs 5.7 TFLOPs, i.e. about 30% more.
 

Dribble

Platinum Member
Aug 9, 2005
It is not using CUDA on an AMD GPU.

HIP is a compiler toolchain that automatically transcodes CUDA code into portable C++ (HIP). It is not a "real-time" compiler: you have to port the whole application from CUDA. But the tool translates 99.96% of the CUDA code automatically and requires minimal manual intervention. Overall, the whole port of an application, including optimization and validation, can be done in 120 minutes.

It is actually one of the best software initiatives AMD has provided through GPUOpen.

Over the previous 24 months AMD has made an unprecedented effort to change the perception of their brand, and the first fruits of this effort are slowly showing up. Nvidia has had a near-monopoly for the past 10 years, and their position is well entrenched, but AMD can still dig away at it.
I know it's not using CUDA directly - that's what "port" means. The whole porting bit isn't trivial: it won't support all the CUDA libraries, and it won't "just work" for everything. Developing your code means writing it, fixing it, etc., and that is much easier on CUDA, where you have decent debugging tools; porting first and then trying to fix things would be a nightmare. The CUDA code will be optimised for Nvidia; if you port it, you get code you'd have to spend time rewriting anyway to get it optimised for AMD.

All this to save a little money on hardware - in comparison to the extra effort it would take (which costs money!), it just doesn't add up. Development is hard enough as it is; you are not going to make it significantly harder. The only thing that would change this is if OpenCL, or whatever is to be used, catches up with CUDA in terms of libraries, debugging and support, which it won't do unless someone puts a lot of money and effort into it.