Question State of AMD GPGPU Development

mmusto

Junior Member
Jan 23, 2019
2
1
6
#1
I recently visited AMD's developer pages and found them to be devoid of all GPGPU tools and SDKs.


https://developer.amd.com/tools-and-sdks/


In the past, there were links to their OpenCL development tools for Visual Studio, and more recently links and write-ups for ROCm. Now I find nothing, and worry it may all have been abandoned. If someone were to begin a GPGPU project today on Windows and wanted it to run on AMD hardware, where would they start, and in what language should they write?


This lack of any information on developer.amd.com does not bode well.
 

NTMBK

Diamond Member
Nov 14, 2011
8,250
227
126
#2
Yeah, that doesn't look great.
 

EXCellR8

Platinum Member
Sep 1, 2010
2,908
72
126
#3
Maybe they're reorganizing it or it got moved?
 
Oct 6, 2016
140
58
71
#4
Before mergers, companies start shutting down the smaller departments first.

It seems like M$ doesn't know what's going on. They're asking stupid people for advice.
 

Krteq

Senior member
May 22, 2015
712
33
106
#6
mmusto said: I recently visited AMD's developer pages and found them to be devoid of all GPGPU tools and SDKs. [...]
Wrong site mate :)

They moved all the GPGPU stuff to the GPUOpen site years ago - GPUOpen - Professional Compute
 
Feb 2, 2009
12,892
178
126
#7
mmusto said: I recently visited AMD's developer pages and found them to be devoid of all GPGPU tools and SDKs. [...]

https://gpuopen.com/
 

Krteq

Senior member
May 22, 2015
712
33
106
#9
Well, the last revision of the OpenCL standard specification (OpenCL 2.2) was released in May 2017, and the latest release of the AMD OCL SDK is from June 2017. What do you think needs to be updated there?

Anyway, if you're serious about GPGPU/HPC, move to Linux and ROCm like everyone else.
 
Dec 19, 2014
49
2
71
#10
According to a GitHub issue post from March 27th, 2018:
"Guys, Step 1 is getting foundation on Windows that support what we doing. This is happing. We will have HIP and the LC compiler on Windows the work is in full earnest. This step one. Once we have this in place, everything else can fall into place. "
 

NTMBK

Diamond Member
Nov 14, 2011
8,250
227
126
#11
According to a GitHub issue post from March 27th, 2018:
"Guys, Step 1 is getting foundation on Windows that support what we doing. This is happing. We will have HIP and the LC compiler on Windows the work is in full earnest. This step one. Once we have this in place, everything else can fall into place. "
And almost a year later there is nothing to show for it. Meanwhile CUDA on Windows is fully supported with great tools.
 

Krteq

Senior member
May 22, 2015
712
33
106
#13
Well, he probably is... or Ryan just visited this thread and made that post afterwards :)
 

Stuka87

Diamond Member
Dec 10, 2010
4,082
83
106
#14
Ryan has an actual account on here, not sure why he would go incognito.
 

ThatBuzzkiller

Senior member
Nov 14, 2014
820
7
91
#15
OpenCL and its differing implementations make it a dumpster fire of a compute API. From its conception, OpenCL should've been designed from the ground up as an open alternative to CUDA with a single-source C++ kernel language in mind, but instead what we got were separate kernel files and a C compute API ...

OpenCL 2.2 from Khronos could've righted some of the wrongs of the original design that held OpenCL back, but it's too little, too late right now, since no implementation supports OpenCL 2.2. Designing SYCL on top of OpenCL kills a lot of the performance of an accelerator. OpenCL itself is very backwards by design compared to CUDA, so programmers are left with a verbose, low-level API that's under-featured ...
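To make the single-source vs. separate-source contrast concrete, here's a rough side-by-side sketch. This is illustrative only: the vector-add kernel is a hypothetical example, the OpenCL host calls are abbreviated with error handling omitted, and it obviously needs a real CUDA toolkit / OpenCL SDK to build.

```cpp
// CUDA: single-source -- the kernel and the host code live in one .cu file,
// and the kernel is launched with ordinary C++ syntax.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}
// Host side just calls it directly:
//   vec_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

// OpenCL: the kernel is a separate string (or file), compiled at runtime
// through a verbose C host API, with one call per kernel argument.
const char* src =
    "__kernel void vec_add(__global const float* a,"
    "                      __global const float* b,"
    "                      __global float* c, int n) {"
    "    int i = get_global_id(0);"
    "    if (i < n) c[i] = a[i] + b[i];"
    "}";
//   cl_program p = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
//   clBuildProgram(p, 1, &dev, NULL, NULL, NULL);
//   cl_kernel k = clCreateKernel(p, "vec_add", &err);
//   clSetKernelArg(k, 0, sizeof(cl_mem), &d_a);  // ...repeat per argument
//   clEnqueueNDRangeKernel(queue, k, 1, NULL, &global, NULL, 0, NULL, NULL);
```

The runtime string compilation is exactly the "separate kernel files" complaint above: the C++ compiler never sees the kernel, so there's no type checking across the host/device boundary.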

OpenCL's programming model is downright WRONG, and that's not an exaggeration either, since it's the consensus among the GPU compute community. OpenCL is an example of what goes wrong when multiple vendors design a standard to fit their own needs rather than the needs of the end user, and AMD learned this the hard way when they tried again to garner industry support for HSA. But with both of them thankfully obsolete, we won't be seeing any more disappointments (OpenCL) or stillborn projects (HSA) from AMD ...

I wonder if Intel has been taking notes all these years on designing a compute API. Let's hope their oneAPI initiative doesn't just stop at supporting SYCL, because that's just putting lipstick (SYCL) on a pig (OpenCL) ...

You see, much of the computer graphics and high-performance compute industry doesn't actually care whether industry-standard tools are open or not. They care more about productivity, maintainability, features, design, and performance than about being open source, so what exactly is one more API worth to big game publishers, or to deep-learning giants like Google (who develops TensorFlow) or Facebook (who develops PyTorch)?

FYI, compute has no future on Windows, so you may as well ditch that platform for Linux instead. If our future is coping with multiple vendor-specific APIs, then I'm perfectly content with that, because at this point a single standard API to rule the whole virtual landscape is nothing more than pure fantasy. It's been proven that having different APIs, even from the same hardware vendor, is saner than having an open body like the Khronos Group create open standards; the rest of the industry will learn to deal with multiple APIs. If platform vendors can dictate whatever graphics API they want to great success, then it's no big deal for hardware vendors to dictate whatever compute API they want for their own success. If we can get three really good compute APIs from AMD (HIP/HCC), Intel (oneAPI), and Nvidia (CUDA), then deep-learning framework developers will be fine with adding backends for all of them, instead of objecting to adding a backend for a really bad API like OpenCL ...
 

SpaceBeer

Senior member
Apr 2, 2016
289
7
71
#16
But what about other accelerators such as Xilinx Alveo or something? Then you need another compute API for each one of them
 

NTMBK

Diamond Member
Nov 14, 2011
8,250
227
126
#17
FYI, compute has no future on Windows so you may as well ditch that platform for Linux instead.
Rubbish. CUDA on Windows works fantastically. It's the other vendors who aren't providing good Windows support.

And remember, Deep Learning and Compute are two separate things. There are plenty of workstation applications that want access to GPU compute without using machine learning.
 

NTMBK

Diamond Member
Nov 14, 2011
8,250
227
126
#18
But what about other accelerators such as Xilinx Alveo or something? Then you need another compute API for each one of them
Was OpenCL ever actually a good option for programming FPGAs? It always struck me as a pretty terrible idea. They're fundamentally very different from a GPU; it makes sense to target them with an API that actually makes use of their unique benefits.
 

ThatBuzzkiller

Senior member
Nov 14, 2014
820
7
91
#20
But what about other accelerators such as Xilinx Alveo or something? Then you need another compute API for each one of them
Xilinx is big enough to roll out support for their own compute API if they're truly serious about general-purpose compute. As a matter of fact, if mobile hardware vendors care about compute as well, then they'll all bring out their own compute APIs to save us from the tyranny of OpenCL and, ironically, from their bad drivers too ...

Us complaining about application developers not supporting a specific API for technical reasons isn't going to solve any real problems, ok? We all need to demand the absolute best possible solution from a vendor, and if they're going to stay obstinate, we need to leave them behind or abandon them for good if they aren't going to be attentive to customer needs ...

If the industry wanted OpenCL to be viable, it would've been in a much better condition by now; instead, what we have is a pile of filth from Khronos. So here we are, where there are already two hardware vendors with their own proprietary compute APIs and possibly another hardware vendor joining the competition as well. OpenCL should've been under competitive pressure from the beginning to better itself, but AMD, and possibly Intel as well, realized better than anyone else that they could not hope to match CUDA's programming model, which was at least a decade ahead, so it was time for both of them to cut OpenCL loose ...

If we can't have one API to rule them all, then APIs NEED to compete in the marketplace. It's time to stop contemplating OpenCL and start considering the other options out there, because the war between CUDA and OpenCL is already over now that Khronos is ending development of OpenCL, but it's not too late for HIP/HCC (AMD) or oneAPI (Intel) to make a break for it ...

Rubbish. CUDA on Windows works fantastically. It's the other vendors who aren't providing good Windows support.

And remember, Deep Learning and Compute are two separate things. There are plenty of workstation applications that want access to GPU compute without using machine learning.
It's true that the other vendors have subpar support for GPU compute on Windows, but that's probably because there's no vision on Microsoft's part to improve compute on their own platform. Their C++ AMP programming model didn't get any traction, they aren't interested in expanding GPU compute with another API of their own, and they aren't going to fix the kernel-space limitations within Windows to allow for a HSA runtime. CUDA uses an ICD model that's built on top of WDDM, but with HIP/HCC this really isn't possible, since they rely on a HSA runtime that comes with its own kernel drivers, and using WDDM instead isn't an appropriate option either. Even on Windows with WDDM, CUDA shows limitations compared to Linux. We have yet to learn what Intel seeks to do with their oneAPI initiative, but if it ends up being no more than a SYCL implementation, then they stop being a real competitor in GPU compute at that point ...
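For context on what HIP actually looks like: it deliberately mirrors CUDA's single-source model almost one-to-one, which is why porting CUDA codebases is largely mechanical. A minimal sketch below (assuming a working ROCm/HIP toolchain; the kernel and sizes are illustrative, not taken from this thread):

```cpp
#include <hip/hip_runtime.h>

// HIP mirrors CUDA: same __global__ kernels and thread indexing, and each
// cudaFoo() call becomes hipFoo(), so conversion can often be automated
// (AMD ships a "hipify" tool for exactly this).
__global__ void scale(float* x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1024;
    float* d_x = nullptr;
    hipMalloc(&d_x, n * sizeof(float));   // cf. cudaMalloc
    hipLaunchKernelGGL(scale, dim3(n / 256), dim3(256), 0, 0,
                       d_x, 2.0f, n);     // cf. scale<<<...>>>(...)
    hipDeviceSynchronize();               // cf. cudaDeviceSynchronize
    hipFree(d_x);                         // cf. cudaFree
}
```

The catch discussed above is the runtime underneath: on Linux this sits on the ROCm/HSA kernel driver stack, which has no WDDM equivalent on Windows.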

As far as I can see, deep learning is a subset of compute, so calling them two separate things is frivolous, but let's face reality for a moment. The vast majority of data-centre, deep-learning, and other high-performance-computing customers are on Linux, not Windows, so what exactly are the Windows-exclusive workstation applications that need a CUDA-like GPU compute model? If you mean professional rendering, then there are other options out there that don't need single-source C++ support; plus HIP/HCC isn't going to help in that case either, since it currently doesn't do graphics. Why risk developing an API on a platform that you don't even own, when you face the possibility of being locked out as well? With Linux, a vendor has full control of its own compute stack, which has undeniable benefits for its purposes ...
 
Dec 19, 2014
49
2
71
#23
It's all a shame, because the lack of a fully competitive stack across platforms has allowed Nvidia to corner the market in certain areas by default. DCC ray/path-tracing renderers are currently Nvidia-only on GPU for any significant commercial implementation, and unfortunately the swarm of Nvidia attention around Arnold GPU all but signals a continuation of this trend, essentially walling off commercial ray tracing on GPU from anyone not shelling out for Nvidia and their CUDA monopoly.
 
Dec 19, 2014
49
2
71
#24
With any luck, if a new compute standard is drawn up, software makers like Autodesk will have a word in its making, as with Vulkan.
 

ThatBuzzkiller

Senior member
Nov 14, 2014
820
7
91
#25
It's all a shame, because the lack of a fully competitive stack across platforms has allowed Nvidia to corner the market in certain areas by default. DCC ray/path-tracing renderers are currently Nvidia-only on GPU for any significant commercial implementation, and unfortunately the swarm of Nvidia attention around Arnold GPU all but signals a continuation of this trend, essentially walling off commercial ray tracing on GPU from anyone not shelling out for Nvidia and their CUDA monopoly.
It's ok to let Nvidia have professional rendering; there are already several alternatives that use OpenCL, and eventually Vulkan could become a viable alternative with bindless or ray-tracing extensions, so their monopoly in that segment is only temporary ...

DCC apps also include video editing and 3D modeling, and the vast majority of them don't exclusively use CUDA either ...

I'm a little more optimistic about the future in some ways compared to the past, because back then there were nearly no alternatives to CUDA. The Metal 2 API has shown us that it's possible for a graphics API to include the capabilities to do professional rendering, so in the near future Vulkan could include those capabilities as well ...

It's graphics APIs that need to become more competitive against CUDA in DCC, and that especially applies to Microsoft and Apple platforms. We don't need a compute API to do DCC; we need a compute API for deep learning and high-performance scientific computing, and OpenCL is not at all competitive there, so hopefully HIP/HCC or oneAPI will be more competitive in this respect. With Windows and Mac, it's the graphics API that matters, not the compute API. With Linux, it's the compute API that matters, not the graphics API ...

Soon we will see the fruits of a HIP/HCC port of TensorFlow upstreamed from AMD, and then their next goal should be getting a HIP/HCC port of PyTorch upstreamed ...
 

