hpcwire: NVIDIA Opens Up CUDA (Compiler)

Arkadrel

Diamond Member
Oct 19, 2010
http://www.hpcwire.com/hpcwire/2011-12-13/nvidia_opens_up_cuda_compiler.html



GPU maker NVIDIA is going to make the source code of its CUDA compiler, runtime, and internal representation format public, opening up the technology to different programming languages and processor architectures. The announcement was made on Wednesday at the kick-off of the GPU Technology Conference Asia in Beijing, China.


The company says it will use the LLVM compiler infrastructure as the vehicle for the public CUDA source code. LLVM is an open source project that maintains a collection of compiler, runtime, and other development tools. The new LLVM-based CUDA source will be available in the latest release of the CUDA Toolkit, version 4.1, which was also launched this week.


The CUDA open source set-up does not, however, mean NVIDIA will arbitrarily accept changes and enhancements to its compiler technology from other developers. The company still intends to retain complete control of its source code. Tool developers will be able to modify the standard compiler and runtime for their own customized needs, but little of this is likely to be folded back into NVIDIA's code base.


The main idea is to allow software tool makers to port the CUDA compiler to other environments that NVIDIA or its commercial partners are not interested in pursuing on their own. In the case of programming languages, there are already compilers for C, C++, and Fortran, which are the big three for high performance computing. But as the market for GPU computing expands, NVIDIA foresees the need for other languages such as Python or Java, as well as domain specific languages.
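To make the language angle concrete, here is a toy sketch of what a domain-specific front-end targeting CUDA might do: generate CUDA C kernel source as a string, which a higher-level binding would then hand to the CUDA compiler. All names here are invented for illustration; this is not NVIDIA's API or any shipping tool.

```python
# Toy sketch: a tiny "DSL" front-end that emits CUDA C kernel source.
# Function and kernel names are invented for illustration only.

def emit_elementwise_kernel(name: str, expr: str) -> str:
    """Generate CUDA C source for an elementwise kernel computing
    out[i] = <expr>, where expr may reference a[i] and b[i]."""
    return (
        f"__global__ void {name}(const float *a, const float *b,\n"
        f"                       float *out, int n) {{\n"
        f"    int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
        f"    if (i < n) out[i] = {expr};\n"
        f"}}\n"
    )

if __name__ == "__main__":
    # A SAXPY-like operation expressed in the "DSL" and lowered to CUDA C.
    print(emit_elementwise_kernel("saxpy_like", "2.0f * a[i] + b[i]"))
```

A real front-end for Python, Java, or a domain-specific language would of course lower to the compiler's intermediate representation rather than paste strings, which is exactly what access to the LLVM-based source enables.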


[Image: NVIDIA's LLVM-based CUDA compiler diagram]



As far as CUDA compiler targets go, there is a lot of room for interesting ports to other platforms. The prime candidate here is the AMD/ATI GPU platform. Even though CUDA is the most widespread programming environment for GPU computing, it currently works only on NVIDIA GPUs (and x86 multicore via a PGI compiler implementation). There are likely to be plenty of users with CUDA-based applications who are now interested in running their applications on AMD GPUs/APUs, or at least in the prospect that their codes could do so at some future date.


AMD is still pushing its OpenCL strategy for GPU computing. OpenCL, a non-vendor-specific open standard for parallel computing, is supported by NVIDIA as well, but has not yet managed to attract a lot of applications. By offering to open up CUDA, NVIDIA has probably blunted some of the appeal of OpenCL, that is, assuming a compiler vendor or an academic research group builds a CUDA-ized AMD GPU compiler.


Since CUDA is a general-purpose parallel computing technology, essentially any multicore/manycore architecture would be a potential target. Other possible architectures for CUDA include Intel's upcoming Many Integrated Core (MIC) coprocessor, Power CPUs, multicore ARM chips (especially for future 64-bit implementations), and even more exotic fare, like Texas Instruments' new floating-point capable DSPs.


The academic community is most likely to take early advantage of an open CUDA compiler. For example, at Georgia Tech, the Ocelot project is focused on applying CUDA C to different processors, including AMD GPUs and x86 CPUs. The project lead there, Sudhakar Yalamanchili, says the opening up of the CUDA technology is "a significant step."
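The kind of retargeting a project like Ocelot performs can be caricatured in a few lines: a CUDA-style kernel is just a function parameterized by block and thread indices, so a CPU back-end can run it by looping over the launch grid. This is a deliberately simplified sketch, not Ocelot's actual machinery (which translates PTX):

```python
# Deliberately simplified sketch of retargeting a CUDA-style kernel to a CPU:
# the kernel becomes a plain function of (block index, thread index), and the
# "launch" is a loop over the grid. All names here are illustrative.

def launch_on_cpu(kernel, grid_dim, block_dim, *args):
    """Emulate a 1-D CUDA kernel launch by iterating over every thread."""
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(block_idx, thread_idx, block_dim, *args)

def vec_add(block_idx, thread_idx, block_dim, a, b, out):
    i = block_idx * block_dim + thread_idx   # global thread id
    if i < len(out):                         # bounds check, as in CUDA kernels
        out[i] = a[i] + b[i]

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [10.0, 20.0, 30.0, 40.0, 50.0]
out = [0.0] * 5
launch_on_cpu(vec_add, 2, 4, a, b, out)  # 2 blocks x 4 threads = 8 "threads"
```

The extra three threads simply fail the bounds check, just as surplus threads do in a real CUDA launch; a serious port's job is to do this mapping efficiently for the target architecture.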

Even compiler vendors who already have special arrangements with NVIDIA will be able to take advantage of the new open source strategy. In the press release, The Portland Group (PGI) director Doug Miles says “This initiative enables PGI to create native CUDA Fortran and OpenACC compilers that leverage the same device-level optimization technology used by NVIDIA CUDA C/C++. It will enable seamless debugging and profiling using existing tools, and allow PGI to focus on higher-level optimizations and language features.”


NVIDIA will not always directly benefit from its new open source stance. Certainly, if some enterprising team ports CUDA to AMD chips, that could cut into Tesla GPU sales. But NVIDIA realized that a closed platform discourages plenty of users who don't want to be locked into a single hardware platform or rely on a sole vendor. As with NVIDIA's recent endorsement of the OpenACC directives, the opening of CUDA seems to be part of a strategy designed to broaden the appeal of GPU computing rather than just NVIDIA products. It appears the GPU maker has calculated that expanding the pie will get it further in the long run than just trying to maximize its slice of it.

CUDA for everyone now :)

Guess that means more people will be able to use Photoshop CS5 with GPGPU computing now
(since they refused to do anything but CUDA).

Nice to hear, if you own an AMD GPU.


Smart move on NVIDIA's part; this will likely prolong CUDA's life, and maybe even fight off OpenCL, keeping NVIDIA in a position of power.
 

zebrax2

Senior member
Nov 18, 2007
http://www.anandtech.com/show/5238/nvidia-releases-cuda-41-cuda-goes-llvm-and-open-source-kind-of

Finally, with the move to LLVM NVIDIA is also opening up CUDA, if ever so slightly. On a technical level NVIDIA’s CUDA LLVM compiler is a closed fork of LLVM (allowed via LLVM’s BSD-type license), and due to the changes NVIDIA has made it’s not possible to blindly plug in languages and architectures to the compiler. To actually add languages and architectures to CUDA LLVM you need the source code to it, and that’s where CUDA is becoming “open.” NVIDIA will not be releasing CUDA LLVM in a truly open source manner, but they will be releasing the source in a manner akin to Microsoft’s “shared source” initiative – eligible researchers and developers will be able to apply to NVIDIA for access to the source code. This allows NVIDIA to share CUDA LLVM with the necessary parties to expand its functionality without sharing it with everyone and having the inner workings of the Fermi code generator exposed, or having someone (i.e. AMD) add support for a new architecture and hurt NVIDIA’s hardware business in the process.

^AMD won't benefit from it
 

Arkadrel

Diamond Member
Oct 19, 2010
This allows NVIDIA to share CUDA LLVM with the necessary parties to expand its functionality without sharing it with everyone and having the inner workings of the Fermi code generator exposed, or having someone (i.e. AMD) add support for a new architecture and hurt NVIDIA’s hardware business in the process.

*shady eyes*

So they're opening up... but not really... wouldn't want AMD actually using CUDA.

*grumbles*

NVIDIA must have seen the 7970's compute capabilities and thought...
let's not open it up anyway.


I guess pointless article is pointless, then.
 

taltamir

Lifer
Mar 21, 2004
Actually, having AMD use CUDA is NVIDIA's biggest dream come true: since they control the code, they can simply dally on implementing any form of optimization for it.

This is NOT going FOSS, and if you might notice, FOSS stands for "Free and Open Source Software", where "free" is "free as in speech, not as in beer".
NVIDIA is trying to leverage OSS without the F to ensure it retains an otherwise impossible-to-hold market dominance. Developers are now going to say "why should I port my app from CUDA to OpenCL when I can just let someone else port CUDA to AMD?", and performance is going to suffer for that on anything but NVIDIA cards.

This move is a master stroke. I have long criticized NVIDIA for shooting itself in the foot with its policy of failing to milk its monopoly, confusing "intent" with result: instead of milking a monopoly (the intent), it merely took steps to prevent itself from ever becoming one. Now they are starting to figure things out, and this stands to be a very effective tool for actually giving them a monopoly and milking real benefits from it.

http://www.anandtech.com/show/5238/nvidia-releases-cuda-41-cuda-goes-llvm-and-open-source-kind-of

Finally, with the move to LLVM NVIDIA is also opening up CUDA, if ever so slightly. On a technical level NVIDIA’s CUDA LLVM compiler is a closed fork of LLVM (allowed via LLVM’s BSD-type license), and due to the changes NVIDIA has made it’s not possible to blindly plug in languages and architectures to the compiler. To actually add languages and architectures to CUDA LLVM you need the source code to it, and that’s where CUDA is becoming “open.” NVIDIA will not be releasing CUDA LLVM in a truly open source manner, but they will be releasing the source in a manner akin to Microsoft’s “shared source” initiative – eligible researchers and developers will be able to apply to NVIDIA for access to the source code. This allows NVIDIA to share CUDA LLVM with the necessary parties to expand its functionality without sharing it with everyone and having the inner workings of the Fermi code generator exposed, or having someone (i.e. AMD) add support for a new architecture and hurt NVIDIA’s hardware business in the process.

^AMD won't benefit from it

Aha, ok never mind... NVIDIA is still the same old NVIDIA and will not be leveraging a non-F OSS approach in such a cunning and frightening move to crush AMD beneath their heels...
This is closed source with "per request" access to some limited stuff according to Anand, who wrote a much better and more accurate article than the HPCwire one.
 