AMD vs NVIDIA for GPGPU?

GWestphal

Golden Member
Jul 22, 2009
The top-of-the-line AMD cards have around 1600 stream processors, while NVIDIA's top out around 512. Is the conversion just a simple 3:1 ratio in terms of performance? Is NVIDIA counting 3 FPUs per core where AMD counts each FPU, or is one or the other actually better for certain tasks?

Are there any single-slot cards these days that are passively cooled, or at least quiet, and have 512/1600 cores?
 

BallaTheFeared

Diamond Member
Nov 15, 2010
You should figure out whether the programs you plan to use can run on AMD cards; if they can, the 7970 is currently a better purchase than the 580.

AMD has had great GPGPU performance for a while now; their problem is that they have very little software support.
 

IGemini

Platinum Member
Nov 5, 2010
As stated, nVidia CUDA is more likely to have software support for things like Adobe apps or transcoding. It really depends on what kind of computing you're doing, but very few GPGPU projects run on both makers' cards. Based on a quick search:

Folding@home ~ nVidia (only one I'm 100% certain about)
Collatz ~ AMD
GPUGrid ~ nVidia, AMD support is in beta
PrimeGrid ~ nVidia, especially Fermi cards

The AstroPulse project for SETI supports GPU processing, but I can't tell which maker is preferable.

Bitcoin isn't really GPGPU in the research sense, but AMD is much faster at it. After enough crunching, you could conceivably pay for the card(s) through bitcoin exchanges (assuming the exchange rate holds well enough).

I also have yet to see a single-slot, air-cooled high-end card; you only see those with built-in water blocks. It's not hard to find a quiet one, though: you usually want to stay away from reference shrouds that have smaller fans (emphasis on usually).
 

Arkadrel

Diamond Member
Oct 19, 2010
Folding@Home:
NVIDIA GTX 580 (normal version) = ~16,500 points per day (~305 W used)
AMD HD 5870 (on the beta V7 GPU3 client, via OpenCL) = ~8,500 points per day (~240 W used)

So for a program like Folding@home, you're better off with NVIDIA; the quick perf-per-watt check below makes the gap even clearer.
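
As a sanity check, here's a minimal sketch that just divides the figures quoted above (nothing here beyond the post's own numbers):

```c
#include <stdio.h>

int main(void)
{
    /* Points-per-day and wattage figures quoted in the post above. */
    double nv_ppd  = 16500.0, nv_watts  = 305.0;   /* GTX 580          */
    double amd_ppd =  8500.0, amd_watts = 240.0;   /* HD 5870, V7 beta */

    /* Efficiency: points produced per watt drawn. */
    printf("GTX 580: %.1f PPD/W\n", nv_ppd / nv_watts);    /* ~54.1 */
    printf("HD 5870: %.1f PPD/W\n", amd_ppd / amd_watts);  /* ~35.4 */
    return 0;
}
```

So the 580 isn't just faster in raw points; it also produces roughly half again as many points per watt.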


I don't really think it's about which design is better; it's about what the program is designed for. A lot of it comes down to coding.


If you like using your electric bill as a way to donate to society by running research programs that'll tax your GPU, you'll need to figure out which program first, then buy the card that's best suited for it.


If all you want to do is the odd photo touch-up or some movie editing, there are programs that work on both (i.e. finding software to suit those functions isn't hard). If you have a favorite program for those things, you again need to look up which card does best with it.
 

GWestphal

Golden Member
Jul 22, 2009
I thought it was pretty much impossible to make money with bitcoins now, since all the "easy" coins have been mined. And didn't it pretty much tank and die last summer after the hacking thing?
 

Arkadrel

Diamond Member
Oct 19, 2010
I thought it was pretty much impossible to make money with bitcoins now, since all the "easy" coins have been mined. And didn't it pretty much tank and die last summer after the hacking thing?

Supply and demand will drive prices back up.

It's just a matter of... how cheap is electricity where you live? If you live somewhere it's dirt cheap compared to most other places, chances are you'll make money off bitcoin mining (at least more than in places where it's not dirt cheap). The rough math is sketched below.
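
A back-of-the-envelope sketch of that calculation (every number here is hypothetical, chosen only to show the shape of the math; the real difficulty and exchange rate move constantly):

```c
#include <stdio.h>

int main(void)
{
    /* All values are hypothetical placeholders, for illustration only. */
    double hashrate   = 400e6;   /* 400 Mhash/s, ballpark for a fast AMD card */
    double difficulty = 1.0e6;   /* network difficulty, changes constantly    */
    double reward     = 50.0;    /* BTC per block at the time                 */
    double btc_price  = 5.0;     /* USD per BTC, hypothetical                 */
    double card_watts = 250.0;   /* card power draw under load                */
    double usd_kwh    = 0.10;    /* electricity price: the term that varies   */

    /* Expected hashes per block at a given difficulty = difficulty * 2^32. */
    double btc_day = hashrate * 86400.0 / (difficulty * 4294967296.0) * reward;
    double revenue = btc_day * btc_price;
    double power   = card_watts / 1000.0 * 24.0 * usd_kwh;

    printf("BTC/day: %.4f  revenue: $%.2f  power: $%.2f  net: $%.2f\n",
           btc_day, revenue, power, revenue - power);
    return 0;
}
```

The only input that depends on where you live is the electricity price, which is exactly why cheap power decides whether mining pays.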
 

GWestphal

Golden Member
Jul 22, 2009
Any chance of OpenCL, CUDA, and AMD's system merging into one OpenCL 2.0 spec, so we'd have one platform to work with and each manufacturer would take care of the code optimization for its own cards?
 

Arkadrel

Diamond Member
Oct 19, 2010
CUDA is still NVIDIA-only; the "open source?" part is just that they'll allow programmers full access to it now.

The only GPGPU language that's open and works on everything (Intel/AMD/NVIDIA/VIA/ARM) is OpenCL. That's also why it's the future of GPGPU and not CUDA, even if CUDA is currently the more complete GPGPU solution. The benefit of writing one version of the code that works on everything outweighs everything else (for most companies that make a living selling software, time spent on code = money they'll have to recoup in sales). The kernel sketch below shows what that one version looks like.
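
For illustration, a minimal OpenCL C kernel (a hypothetical vec_add example, not from any of the projects above). The same source is compiled at run time by whichever vendor's driver is installed, so one code path covers NVIDIA, AMD, and Intel alike:

```c
/* OpenCL C: the driver compiles this at run time for whatever
   device is present (NVIDIA, AMD, Intel, ARM, ...). */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c)
{
    size_t i = get_global_id(0);   /* this work-item's global index */
    c[i] = a[i] + b[i];            /* one element per work-item     */
}
```

Each vendor's driver handles the optimization for its own hardware, which is pretty much the "one platform, vendor-optimized" setup asked about above.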
 

BallaTheFeared

Diamond Member
Nov 15, 2010
CUDA is still the #1 choice for GPGPU as of today, future or not; many great programs use it but not OpenCL.

The answer isn't an opinion war between AMD and NVIDIA. The answer is simply to figure out what programs the OP is looking to use, then which card is better in that particular case.
 

Lonyo

Lifer
Aug 10, 2002
For now, NV is generally better: AMD is much faster sometimes, but not very often, and much slower at other times. NV has a better chance of coming out ahead.

BUT, there still isn't much GPGPU-related software anyway, so see if there's anything that actually benefits before basing any purchase on GPGPU. If there's no value in it from your POV, then ignore it.

Going forward, AMD might improve their overall performance with the new architecture the HD 7970 uses, but that remains to be seen.

Until GPGPU software really picks up, though, unless you do something you know benefits from GPGPU acceleration, there's no point in paying much attention to it, IMO.