
AMD and CUDA

prtskg

Hard to say. Seems a good start, but until more migrate to it I wouldn't expect any impact yet. Could be the start of something significant though.
 
More than 1 petaflops with the FirePro S9150s?

If we assume just over that performance with no bulk-buying discount, that nets AMD less than a million dollars in revenue, so it's pretty irrelevant in the grand scheme of their balance sheet. But at least this bodes well for AMD now being *actually* able to dent Nvidia's hold on HPC with CUDA ...

Still too late for them as I think Intel's Xeon Phi line will crush everyone in the process ...
 
Quite relevant. CGGVeritas is a rather large client. Last time around they went for Kepler based Tesla K10 clusters.
 
Wait a minute, how did they quickly convert from CUDA to OpenCL when the former is a superset of the latter?

Don't they mean from CUDA to C++ ?
 
As far as I remember, Boltzmann Initiative talked about changing CUDA to C++.

That's what I mean, but in their press release they said they converted from CUDA to OpenCL ...

My question still stands as to how they went from CUDA to OpenCL with their HIP tool when it converts from CUDA to C++ ...
 
That's what I mean, but in their press release they said they converted from CUDA to OpenCL ...

My question still stands as to how they went from CUDA to OpenCL with their HIP tool when it converts from CUDA to C++ ...
Haha... I've no idea 🙂. I was skeptical of the Boltzmann Initiative. The errors and performance penalty that an additional layer of software can cause! Good to see it being used, though it's just the beginning. If it works properly, it'll help AMD a lot with cash flow.
 
Wait a minute, how did they quickly convert from CUDA to OpenCL when the former is a superset of the latter?

Don't they mean from CUDA to C++ ?

CUDA is not a language, C++ is.

The correct thing is moving from CUDA to OpenCL, which are both APIs.
 
It converts source code from CUDA to OpenCL. They're both C++. CUDA and OpenCL are similar enough that you could already mostly convert between them by changing terminology.
 
It converts source code from CUDA to OpenCL. They're both C++. CUDA and OpenCL are similar enough that you could already mostly convert between them by changing terminology.

OpenCL only supports C at the moment, not C++. C++ kernels are meant to be coming in OpenCL 2.1, but that's not available yet.
 
CUDA is not a language, C++ is.

The correct thing is moving from CUDA to OpenCL, which are both APIs.

I'm pretty sure you can't do that since OpenCL does not even have the same set of features as CUDA does ...

BTW you can convert from an API to a programming language in CUDA's case, since it's just a parallel abstraction layer, so even x86 CPUs can compile programs that require CUDA runtimes, much like how the Intel SPMD Program Compiler can target Nvidia PTX!

All you need is a C++ compiler and maybe a specialized API conversion tool like HIP for porting performance ...

One day a mobile GPU vendor may step up and do the same too ...
 
I'm sure this has nothing to do with the Boltzmann Initiative/HIP/hcc. It takes years to develop and deploy production code on petaflop-scale computers, and CGG will not risk their business on a toolchain that was just released as a preview.

I attended a webinar by AMD/Acceleware that described how they were working on libraries for the Oil and Gas industry, and I'm sure this is using them. See http://www.acceleware.com/oil-and-gas (https://www.youtube.com/watch?v=ixuCVIkyuos)

Boltzmann/HIP is interesting, but it is at a very early stage of development. Also, it is for porting CUDA to C++ with the hc extension, not to OpenCL. It also needs the HSA/AMDGPU kernel driver, which is not quite ready for deployment yet.
 
I'm sure this has nothing to do with the Boltzmann Initiative/HIP/hcc. It takes years to develop and deploy production code on petaflop-scale computers, and CGG will not risk their business on a toolchain that was just released as a preview.

I attended a webinar by AMD/Acceleware that described how they were working on libraries for the Oil and Gas industry, and I'm sure this is using them. See http://www.acceleware.com/oil-and-gas (https://www.youtube.com/watch?v=ixuCVIkyuos)

Boltzmann/HIP is interesting, but it is at a very early stage of development. Also, it is for porting CUDA to C++ with the hc extension, not to OpenCL. It also needs the HSA/AMDGPU kernel driver, which is not quite ready for deployment yet.
Oh thanks. I too was thinking this is happening way too fast.
 
I'm surprised Intel hasn't pushed this, since they are trying to claw into the domain of CUDA with their HPC GPU accelerators.

That's because the majority of the top500 supercomputers are already exclusively Intel systems, so there's no need for Intel to snatch up the crumbs from Nvidia with their accelerator when it would be easier for programmers to just make use of the new AVX-512 extensions ...

Oh thanks. I too was thinking this is happening way too fast.

What you quoted states that CGG used GPUOpen tools, so this isn't happening suddenly when AMD's partners have had beta access to them for months ...
 
@ThatBuzzkiller
Intel CPU powered systems.

They seem intent on pushing GPU accelerators in the HPC market, and that is still largely legacy CUDA based.

So it's in their interest to develop tools to move away from CUDA. Right?
 
@ThatBuzzkiller
Intel CPU powered systems.

They seem intent on pushing GPU accelerators in the HPC market, and that is still largely legacy CUDA based.

So it's in their interest to develop tools to move away from CUDA. Right?

Don't even call them "GPUs" since they don't even have texture units or any of the special fixed function units you'd find in an actual GPU ...

Xeon Phi is more along the lines of a generic massively parallel accelerator, but sooner or later the identity of the GPU will cease to be once software rendering becomes the norm again ...

While there's a lot of CUDA legacy floating around, most HPC software developers usually add x86 support to their programs for offloading work, and if not, there are many more alternatives they can look at since the x86 software ecosystem dwarfs whatever CUDA has ...

They only need to enhance their x86 program branch with the AVX-512 extensions and then profit comes next ...
 
I'm surprised Intel hasn't pushed this, since they are trying to claw into the domain of CUDA with their HPC GPU accelerators.

They don't push it because it doesn't make sense. If you write in CUDA and have the choice between:
a) using native CUDA code with all the debug, testing, optimisations, etc.
b) transcoding it to something else adding a whole new layer of bugs and issues while making it harder to debug, test, optimise, etc.

which would you do? Obviously, as long as the native CUDA hardware (i.e. Nvidia) was even vaguely competitive, you'd use that. The last thing Intel wants is to encourage anyone to use CUDA for anything.
 