
Full Kepler?

First of all, CUDA is far from dying, and secondly, people serious about compute buy a Quadro or FirePro; they don't play with these amateur cards for compute. It's irrelevant how fast a 7970 or 680 is in compute; they will still be eaten for breakfast by their professional counterparts.
 
If you only care about 32-bit floating point (FP32) performance, it looks something more like this:

[Image: AMD-Radeon-HD-7970-Much-Better-than-Kepler-in-OpenCL-3.jpg]


42.3% difference.

Yeah, programming languages are like that. If you learnt CUDA, I can understand why you'd want to stick with something you're familiar with, or if you're using a program that only works with CUDA currently.

That said CUDA is dying.

Intel/AMD/NVIDIA/Samsung/IBM/ARM (TI, Qualcomm, etc.) and more are all behind OpenCL, versus NVIDIA only for CUDA.


You said this:


That's why people have been saying to get a 7970 instead of a 680.
I guess they just presumed you cared about performance, but if you're an NVIDIA-only kind of guy, then the 680 is the fastest FP32 card they have.

However, for any programs with double precision workloads, the 580 will be faster than the 680.

I digress. OpenCL may have a lot of companies "backing" it, but that means nothing (hell, even NVIDIA "backs" it). OpenCL doesn't even have a matrix library out (AFAIK). CUDA has so much support and so many existing libraries, with a very strong ecosystem. If I were using OpenCL, I would be spending my time coding and writing libraries from scratch rather than running simulations and getting results for my research.
 