Help - OpenCL on 6950 vs 560 Ti ?

harshal

Member
Jun 23, 2011
32
0
0
I am building my rig for scientific computing and have already been discussing it in this thread.
I am stuck between two GPUs: AMD 6950 and NVidia 560 Ti.
Which GPU would perform better for OpenCL/OpenGL computing, as well as for games like Caesar IV and Civ V?

Since it is OpenCL and not CUDA that will be used for development, would NVidia perform any worse? I would be going for a multi-GPU setup in the future.

Thanks for your time.
 
Last edited:

Mistwalker

Senior member
Feb 9, 2007
343
0
71
No idea about the OpenCL part (all the benchmark comparisons I can find seem to be quite old), hopefully someone with actual experience can offer insight there.

As for gaming, it depends on the resolution, whether you're going for 1GB or 2GB cards, etc; AMD has great OpenGL performance but Nvidia absolutely dominates Civ 5 due to their support of multi-threaded rendering, which AMD seems to be dragging their heels on.
 

harshal

Member
Jun 23, 2011
32
0
0
By any chance, is it a driver-related issue? I also heard that some shader cores of the 6950 can be unlocked using a 6970 BIOS upgrade; is that true?
 

Mistwalker

Senior member
Feb 9, 2007
343
0
71
By any chance, is it a driver-related issue? I also heard that some shader cores of the 6950 can be unlocked using a 6970 BIOS upgrade; is that true?
At this point it's pretty clear there's more than just a driver issue going on; Civ 5, while an anomaly, runs much, much better on Nvidia cards. Not that the 6950 isn't a decent performer, it just lags in that one title.

Certain 6950s can still have additional shaders unlocked via a BIOS flash (essentially putting them within a couple percent of a 6970), but there is no guarantee, and it's harder every day to find cards that still allow this. I think there was a recent post where someone bought a non-reference card that actually came pre-modded this way out of the box, but you need to check reviews carefully before buying to see whether a particular card has proven unlockable.

Even without being unlocked, the 6950 is one of the best bang-for-the-buck cards, and it scales very well in Crossfire (relevant since you mentioned wanting to go multi-GPU later on).
 

harshal

Member
Jun 23, 2011
32
0
0
@Mistwalker, thanks for your reply. Does modding void the warranty/support for the card?

For me OpenCL/OpenGL is more crucial than Civ 5.
So would you suggest 6950 over 560 Ti?
 

Mistwalker

Senior member
Feb 9, 2007
343
0
71
@Mistwalker, thanks for your reply. Does modding void the warranty/support for the card?

For me OpenCL/OpenGL is more crucial than Civ 5.
So would you suggest 6950 over 560 Ti?
Flashing the BIOS to unlock shaders does indeed void the warranty. Fortunately cards that can be flashed come with a backup BIOS, so you can't do too much damage even if something goes wrong.

According to one of the posters who's used both (can't recall if it was here or on NeoGAF): AMD cards have significantly better OpenGL performance, while Nvidia has better support tools on the developer side.

If you're running a single monitor at 1920 (or lower) resolution, either one would be a very capable gaming card, and two of them would be nigh-overkill (unless you like lots of AA and eye candy)...I'd tend towards the 6950 for the OpenGL performance and slightly superior multi-card scaling. If you're running 2560x1600 or multi-monitor resolutions, the 6950 is definitely the way to go as it has double the VRAM.
 

harshal

Member
Jun 23, 2011
32
0
0
According to one of the posters who's used both (can't recall if it was here or on NeoGAF): AMD cards have significantly better OpenGL performance, while Nvidia has better support tools on the developer side.
You are right. Actually, I am planning to write mostly OpenCL applications and use OpenGL tools for visualization, but without significant graphics programming. I hope AMD's and Intel's current OpenCL libraries and tools are enough for me to get started.

If you're running a single monitor at 1920 (or lower) resolution, either one would be a very capable gaming card, and two of them would be nigh-overkill (unless you like lots of AA and eye candy)...I'd tend towards the 6950 for the OpenGL performance and slightly superior multi-card scaling. If you're running 2560x1600 or multi-monitor resolutions, the 6950 is definitely the way to go as it has double the VRAM.
This makes me far more inclined towards 6950.

Thanks a lot @Mistwalker for your help. Appreciated.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
AMD has been incrementally improving ATi's designs for generations now (maybe they should try that for CPUs?), and they've got high-throughput VLIW processors. Give them simple, tight, high-ILP loops and they can sometimes be >10x the speed of a Geforce at a similar price.

Got slightly complicated RAM access? Need to stress the hell out of shared memory? Need to swap memory back and forth between system RAM and video card RAM (or at least pretend to)? Need more than the most basic branching? Then Geforces can do an equally good job of wiping the floor with Radeons. With each generation, NVidia's GPUs have come to resemble parallel CPUs more and more, gaining things like virtual memory, limited cache coherency, and far better branching support than AMD's (at least until GCN).

It's very much the classic speed demon v. brainiac.
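To make the contrast concrete, here are two hypothetical OpenCL C kernel sketches (my own illustration, not anything from the vendors): the first is the kind of tight, branch-free, high-ILP loop a VLIW Radeon packs efficiently; the second leans on local (shared) memory and divergent branching, where a Fermi-style Geforce tends to pull ahead. Kernel and function names are made up for illustration.

```c
// VLIW-friendly: a tight, branch-free loop with independent float4 math.
// Each work-item carries plenty of instruction-level parallelism that the
// Radeon's VLIW units can pack into wide instruction words.
__kernel void saxpy4(__global float4 *y,
                     __global const float4 *x,
                     const float a)
{
    size_t i = get_global_id(0);
    y[i] = a * x[i] + y[i];   /* four independent multiply-adds per item */
}

// Geforce-friendly: data-dependent branching plus local-memory traffic,
// where scalar cores, caches, and better branch handling matter more.
__kernel void branchy_reduce(__global const float *in,
                             __global float *out,
                             __local float *scratch)
{
    size_t lid = get_local_id(0);
    float v = in[get_global_id(0)];
    scratch[lid] = (v > 0.0f) ? v : 0.0f;   /* divergent branch */
    barrier(CLK_LOCAL_MEM_FENCE);
    for (size_t s = get_local_size(0) / 2; s > 0; s >>= 1) {
        if (lid < s)                        /* half the work-items idle */
            scratch[lid] += scratch[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (lid == 0)
        out[get_group_id(0)] = scratch[0];
}
```

These need an OpenCL runtime to compile and launch, so treat them purely as a sketch of the two workload shapes being discussed.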

Also, NVidia's SDK is better than AMD's. Based on that, I'd get the GTX 560 Ti. Based on gaming, I'd get the 6950. Given the cost of the GTX 570, it's not the easiest decision in the world.
 
Last edited:

harshal

Member
Jun 23, 2011
32
0
0
@Cerb Thanks for your reply.

AMD has been incrementally improving ATi's designs for generations now (maybe they should try that for CPUs?), and they've got high-throughput VLIW processors. Give them simple, tight, high-ILP loops and they can sometimes be >10x the speed of a Geforce at a similar price.

That's massive! I wish I could get that. With the launch of the Southern Islands GPU family next month, I expect AMD to take GPU computing into the next orbit (I am tempted to wait, but can't be sure it isn't BD-style hype :mad:). I am guessing the GCN you mentioned below is part of this new family.

Got slightly complicated RAM access? Need to stress the hell out of shared memory? Need to swap memory back and forth between system RAM and video card RAM (or at least pretend to)? Need more than the most basic branching? Then Geforces can do an equally good job of wiping the floor with Radeons. With each generation, NVidia's GPUs have come to resemble parallel CPUs more and more, gaining things like virtual memory, limited cache coherency, and far better branching support than AMD's (at least until GCN).

I have respect for both AMD and NVidia GPUs as they are good in their own arenas, but NVidia should price their products more reasonably. I have always suspected that for engineering-class applications where data crunching dominates over rendering, Nvidia would rule, because their Quadro line is generally chosen over ATI's FirePro in that space. But the 69xx is probably an exception due to its floating-point support (ref Wikipedia).

Your comment that Nvidia GPUs are like parallel CPUs is interesting. Do you have any references/pointers where I can read more about this? I just looked at the GCN article on AT, and I wonder what would happen if Fusion CPUs and GCN GPUs were to converge in the future. :eek: That could be incredible given the number of cores on a GPU. Who cares about the best IPC from quad/octo-cores when satisfactory IPC can be delivered by a thousand cores?

It's very much the classic speed demon v. brainiac.

The gap is slowly being closed, isn't it?

Also, NVidia's SDK is better than AMD's. Based on that, I'd get the GTX 560 Ti. Based on gaming, I'd get the 6950. Given the cost of the GTX 570, it's not the easiest decision in the world.

That's true. But for OpenCL, Intel too provides its own SDK. Moreover, NVidia's CUDA requires special tools, but OpenCL does not; the usual Linux/Windows development environment should be sufficient. But I may be wrong.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
That's massive!
They are very different beasts. Sometimes they'll be close, sometimes one faster, sometimes the other. Usually it's not as pronounced as 10x, more along the lines of 2-4x, but hey, gotta try to make a point, right? :) One reason for not programming straight to the hardware is so they can implement your code however they think best, on the hardware they think is best. I don't know how I'd even begin to figure out the performance differences of all the mobile vendors starting to support OpenCL. :eek:

I have respect for both AMD and NVidia GPUs as they are good in their own arenas, but NVidia should price their products more reasonably. I have always suspected that for engineering-class applications where data crunching dominates over rendering, Nvidia would rule, because their Quadro line is generally chosen over ATI's FirePro in that space. But the 69xx is probably an exception due to its floating-point support (ref Wikipedia).
Nvidia artificially crippled double-precision floating point on Fermi Geforces. Commoditization is on its way in, and that's one way they are fighting it off for a few more years. As for Quadros: back in the day they had far superior drivers; more recently it's been momentum from those days, plus they got ECC memory before AMD did.

Your comment that Nvidia GPUs are like parallel CPUs is interesting. Do you have any references/pointers where I can read more about this?
Summaries of the most important difference(s), after scalar v. VLIW:
http://www.realworldtech.com/page.cfm?ArticleID=RWT121410213827&p=8
http://www.realworldtech.com/page.cfm?ArticleID=RWT093009110932&p=8

The main thing Fermi has going on is integrating ideas from old supercomputer processors, and going for running arbitrary highly-parallel code first, graphics second.

The gap is slowly being closed, isn't it?
To some degree GCN definitely will, but by how much, who knows? Current Radeons still have too simplistic a view of memory, and AMD is planning to bring in GCN to fix that (among other things). But, that won't necessarily mean they'll perform the same.

Moreover, NVidia's CUDA requires special tools, but OpenCL does not; the usual Linux/Windows development environment should be sufficient. But I may be wrong.
NVidia supports OpenCL, too. In all cases, if you want the best performance, you'll want the vendor's tools, because OpenCL is still too low-level to completely free you from worrying about registers and other memory details. If it's an obvious case for GPGPU, though, even generic code should still blow away CPUs.
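That vendor neutrality is visible right at the host API level. As a sketch (my own minimal example, not from any vendor SDK), the same few lines enumerate NVidia, AMD, and Intel OpenCL platforms alike; it builds against the standard headers and ICD loader (e.g. `-lOpenCL` on Linux), with no CUDA-style special toolchain:

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint n = 0;

    /* Ask the ICD loader for whatever platforms are installed. */
    if (clGetPlatformIDs(8, platforms, &n) != CL_SUCCESS || n == 0) {
        fprintf(stderr, "no OpenCL platforms found\n");
        return 1;
    }
    for (cl_uint i = 0; i < n; ++i) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        printf("platform %u: %s\n", i, name);
    }
    return 0;
}
```

What it prints obviously depends on which drivers are installed, which is exactly the point: the code itself is vendor-agnostic, while the tooling and tuning around it are not.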

If you plan to try to use the GPU for processing on Linux, I would absolutely go with the Geforce. Both companies' binary drivers are behind the times, but nVidia's works, usually with no more effort than installing a package. AMD's binary drivers are needed to get all the features of their current-gen chips, and while they sometimes effortlessly work, they are just as often a nightmare.
 

harshal

Member
Jun 23, 2011
32
0
0

@Cerb, the articles you posted are interesting. It sounds like AMD's Cayman and earlier designs suffer from a bottleneck at the junction of the wavefront queues. I will get back with a more concrete opinion.

If you plan to try to use the GPU for processing on Linux, I would absolutely go with the Geforce. Both companies' binary drivers are behind the times, but nVidia's works, usually with no more effort than installing a package. AMD's binary drivers are needed to get all the features of their current-gen chips, and while they sometimes effortlessly work, they are just as often a nightmare.

I wish the situation would improve. I would prefer a Linux environment over Windows, but will ultimately choose whatever works for me.
 
Last edited:

itstsui

Junior Member
Oct 31, 2011
18
0
0
Get a 6950 2GB; you can unlock its shaders to match a 6970. If you want, you could even flash the BIOS and turn it into a full 6970.