Question: State of AMD GPGPU Development


mmusto

Junior Member
Jan 23, 2019
2
0
36
I recently visited AMD's developer pages and found them devoid of any GPGPU tools or SDKs.


https://developer.amd.com/tools-and-sdks/


In the past there were links to their OpenCL development tools for Visual Studio and, more recently, links and write-ups for ROCm. Now I find nothing, and I worry it may all have been abandoned. If someone were to begin a GPGPU project today on Windows and wanted it to run on AMD hardware, where would they start, and in what language should they write?
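For concreteness, the most vendor-neutral starting point would still seem to be plain OpenCL, which the AMD, Nvidia and Intel Windows drivers all expose through the shared ICD loader (OpenCL.dll). Below is a minimal sketch of a host program and kernel; the kernel name vadd and the fixed problem size are purely illustrative, and error checking and resource release are omitted.

Code:
// Minimal vendor-neutral OpenCL sketch (C++ host code, OpenCL 1.2).
// Assumes the Khronos OpenCL headers are on the include path and the
// program links against OpenCL.lib (the ICD loader).
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc = R"(
__kernel void vadd(__global const float* a,
                   __global const float* b,
                   __global float* c) {
    size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
})";

int main() {
    // Grab the first platform and GPU device the ICD loader reports.
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), a.data(), nullptr);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), b.data(), nullptr);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                               n * sizeof(float), nullptr, nullptr);

    // Build the kernel from source at runtime - the step CUDA hides from you.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "vadd", nullptr);

    clSetKernelArg(k, 0, sizeof(da), &da);
    clSetKernelArg(k, 1, sizeof(db), &db);
    clSetKernelArg(k, 2, sizeof(dc), &dc);

    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, n * sizeof(float), c.data(),
                        0, nullptr, nullptr);

    printf("c[0] = %f\n", c[0]); // expect 3.0
    return 0; // resource release omitted for brevity
}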


This lack of any information on developer.amd.com does not bode well.
 

soresu

Platinum Member
Dec 19, 2014
2,664
1,863
136
I'd like nothing more than a break from Nvidia's monopoly in rendering, but as for alternatives, the main commercial suppliers are either already CUDA-locked or about to be. Pixar's RenderMan XPU seems set to be OptiX/CUDA-locked, as does Autodesk's Arnold, while the other existing high-end commercial solutions such as V-Ray, Redshift and Octane are already CUDA-locked.
While Octane did briefly make a noise about breaking that lock-in by supporting AMD cards, this lasted for only one version (v3.x) and did not carry over to the latest (v4.x), and by their own admission in the forums they likely won't bring it back, citing fairly trivial reasons for dropping AMD support (Raja and some other guy leaving, which sounds more like an excuse than anything concrete). Considering AMD's constrained budget, I'd chalk it up to a lack of under-the-table money from AMD to Otoy to build and maintain a HIP/OpenCL port.

I'd certainly hope for Vulkan compute to become a viable alternative to CUDA and OpenCL in rendering; apparently Radeon ProRender already has a Vulkan port. Unfortunately ProRender is not state of the art compared with Arnold and RenderMan, by my reckoning, and path tracing is far more complicated than the functions Adobe translated from OpenCL for the new CC products on Android and iOS.

As for other renderers, there seem to be only a handful that are OpenCL-based and therefore cross-platform, including Blender's Cycles and LuxRender, both of which are already open source, so that's not exactly surprising.

All things considered, the HIP/Boltzmann initiative has progressed far more slowly than good competition requires, and has only recently begun to support consumer cards as well as workstation cards.
Perhaps Windows support can turn this around, but I'm used to disappointment at this point where AMD is concerned.
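To make the HIP pitch concrete: it is essentially the CUDA runtime API and kernel syntax with the cuda prefixes swapped for hip, which is what makes Boltzmann's semi-automated porting (the hipify tooling) feasible. A minimal sketch, assuming a ROCm install with hipcc; the kernel and sizes are illustrative only:

Code:
// Minimal HIP sketch - the same single-source style as CUDA, built with
// hipcc. The identical file also compiles for Nvidia through HIP's CUDA
// backend, which is the whole portability argument.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch 256-wide blocks covering n elements on the default stream.
    hipLaunchKernelGGL(vadd, dim3((n + 255) / 256), dim3(256), 0, 0,
                       da, db, dc, n);

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]); // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}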

It's obviously hard to compete against the staggering R&D advantage that Nvidia has accumulated with their OptiX platform, and their CUDA lock-in only compounds it as time goes by. Here's hoping that with Zen 2 shuffling out the door presently, AMD will redirect some budget more significantly into both the software and hardware sides of GPUs.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Right now the only people who really gain financially from there being an open, non-Nvidia GPU compute solution are AMD, because it sells AMD GPUs. The software vendors will sell their software whether or not end users get a choice between AMD and Nvidia. Hence it really has to be AMD that puts the investment in. That's something they have not been willing to do in the past, and now they have a mountain to climb against Nvidia's 10+ year head start.

Much more likely than AMD putting any serious money into this is Intel having another crack at it. They have tried really hard in the past, and they now have their new GPU team. This must be a focus for them, as Intel is all about big margins and there are big margins to be had here. If Intel gets involved, they'd quite likely attempt to partner with Microsoft, who are the ones capable of writing the software. Then Nvidia would have some real competition (even if it's x86-based and Windows-only).
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
soresu said: ↑

For the vast majority of high-end production rendering, such as Disney or Universal films (where most of the money is to be made), workflows from workstation to render farm assume several dozen gigabytes of memory, so I doubt GPU rendering is an option for many of them when even Nvidia's latest Quadro RTX series offers at most 48GB. That might be enough for medium-sized or smaller projects and for prototyping, where scene complexity is lower, but it is simply not enough memory to attempt high-quality film rendering ...

As far as V-Ray is concerned, only the RT version of the renderer supports a GPU backend (and it includes OpenCL support as well), while the Advanced version only supports the CPU, so CUDA really isn't getting the same feature set as the CPU path. Redshift is considering adding support for other GPU vendors, and they'll be even more compelled to do so once Intel releases their discrete graphics part. Octane stopped working on an OpenCL port because there was no support from either AMD or Apple, so they're well on their way to having both Metal and Vulkan backends. The future is OpenCL getting phased out, with professional renderers moving to Metal or Vulkan if they want to stay cross-platform for the foreseeable future ...

Otoy could perfectly well build and maintain an OpenCL port themselves, but they need a driver issue to be resolved before they can ship the Octane renderer, at least on Windows and Linux, and if AMD saw no future in OpenCL then there would be no point in investing in the standard anymore. HIP really isn't appropriate for rendering, since it doesn't offer graphics API interop and lacks even the most basic graphics functionality that CUDA offers natively ...
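For reference, the kind of graphics interop being described is CUDA's host-side API for registering an OpenGL buffer so a kernel can write into it directly. A hedged sketch of those calls is below; it assumes a live GL context and an already-created buffer object, the function name is hypothetical, and the kernel launch is elided. Per the claim above, HIP does not mirror these cudaGraphics* entry points, which is exactly the gap that matters to renderers.

Code:
// Sketch of CUDA<->OpenGL buffer interop (host side only).
// Assumes a current OpenGL context and an existing buffer object `vbo`.
#include <cuda_gl_interop.h>

void fill_gl_buffer_with_cuda(unsigned int vbo, cudaStream_t stream) {
    cudaGraphicsResource* res = nullptr;
    // Register the GL buffer with CUDA (normally done once at startup).
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsRegisterFlagsNone);

    // Map it and obtain a device pointer a kernel can write through.
    cudaGraphicsMapResources(1, &res, stream);
    float* dptr = nullptr;
    size_t bytes = 0;
    cudaGraphicsResourceGetMappedPointer((void**)&dptr, &bytes, res);

    // ... launch a kernel writing into dptr here ...

    // Unmap so OpenGL can consume the buffer again, then clean up.
    cudaGraphicsUnmapResources(1, &res, stream);
    cudaGraphicsUnregisterResource(res);
}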

OpenCL-based renderers may not offer as many features as some of the renderers with a CUDA backend, but for the most part that is only a small advantage for Nvidia in the grand scheme of things ... (there are plenty of established OpenCL solutions with enough features to be usable for similarly sized projects, and Blender's Cycles engine is an example of a decently complex renderer at feature parity with its CUDA counterpart)

For the most part, AMD is well positioned in the professional rendering segment with their Epyc processor lines. Besides, GPUs will probably fall behind again once DDR5 releases, since it will offer up to 400GB/s per socket (e.g. eight channels of DDR5-6400 work out to 8 × 51.2GB/s ≈ 410GB/s), which opens more room for higher computational capacity. In the future, workstation-grade APUs with octa-channel DDR5 memory might not be such a bad idea, provided AMD adds some graphics capabilities to HIP, if they see fit to compete better against Nvidia in that segment ...
 

DeathReborn

Platinum Member
Oct 11, 2005
2,746
741
136
ThatBuzzkiller said: ↑

Thanks to NVSwitch they can have 512GB (16 × 32GB Volta) of pooled VRAM across 16 GPUs, as demonstrated by the DGX-2; a Turing version (if they make one) should be able to do 768GB (48GB/GPU) or even 1TB (64GB/GPU). AMD can of course leverage their SSG cards for increased local capacity of working data.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
DeathReborn said: ↑

NVLink/NVSwitch configurations aren't scalable past 8 GPUs while remaining competitive against current CPU-based solutions. Using NVLink to pool memory so that one GPU can access another's kills a lot of the potential performance: NVLink only gives you 150GB/s of read bandwidth to remote memory, versus roughly 900GB/s from a V100's local HBM2, and going through NVSwitch is slower still ...

You're better off just getting an Epyc or a Xeon system, as they're both cheaper and more scalable than even the most expensive Quadro SKU ...
 

soresu

Platinum Member
Dec 19, 2014
2,664
1,863
136
The Radeon SSG series is certainly interesting for GPGPU purposes because of its huge onboard memory, though until a more durable memory type than NAND flash can match that capacity, I can see why companies would be dubious about investing in it. I've still not seen any figures on its lifespan, though presumably it's using SLC for maximum speed and write endurance.