Milkyway@Home: ATI vs Nvidia

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
Hi guys, longtime no chat! :$

Anyhow, I know when I started MW that ATI cards were much quicker at it than Nvidia (can't remember why now, lol). That's why I switched my HD 4830 from F@H to MW (which I run sporadically, ambient temps allowing).

Recently I was given an Nvidia GTX 260 Core 216 after fixing a mate's car. I know it's about as old as my 4830, but looking at various gaming benchmarks it's about 30-50% faster :). For now I've fitted it into my 2nd rig to replace the aging X1950 XT, but I was wondering about swapping it back at some point.

Is ATI still much faster at MW than Nvidia? Or in this case will the 260's greater horsepower overcome that? Btw my 2nd rig is not on 24/7, only my main rig is.

While we're at it, any big news in MW lately? :)
 

Bradtech519

Senior member
Jul 6, 2010
520
47
91
ATI dominates Milkyway@home. If you look at the top performers, the first 47 are AMD GPUs.
Hi guys, longtime no chat! :eek:

Anyhow, I know when I started MW that ATI cards were much quicker at it than Nvidia (can't remember why now, lol). That's why I switched my HD 4830 from F@H to MW (which I run sporadically, ambient temps allowing).

Recently I was given an Nvidia GTX 260 Core 216 after fixing a mate's car. I know it's about as old as my 4830, but looking at various gaming benchmarks it's about 30-50% faster :). For now I've fitted it into my 2nd rig to replace the aging X1950 XT, but I was wondering about swapping it back at some point.

Is ATI still much faster at MW than Nvidia? Or in this case will the 260's greater horsepower overcome that? Btw my 2nd rig is not on 24/7, only my main rig is.

While we're at it, any big news in MW lately? :)
 

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
I'll google them when I next get time, but for now could you tell me their basic goals?

Bradtech
So my HD 4830 would still be faster than the 260 c216 in MW?
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
what's up man? long time no see!

the thing you have to remember is that, while the nVidia GPU may be stronger in the general compute department, the GPU architecture must also be taken into account. in that respect, MW@H data is just better suited to run on AMD's VLIW5, VLIW4, and GCN architectures than it is on the Fermi (and possibly Kepler) architecture(s). also, it could be that the FP64 (double precision floating point) capability of the 4830 is better than that of the 260 c216...though that's strictly speculation...
 

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
Fair enough :).

Just wondering whether to buy my brother's now-spare 4870 1GB for £30-40 & sell my 4830 for ~£20 ;).
IIRC the 4870 is 40-50% faster than my 4830 in games, what about in MW?

Hmm that benchmark thread would be useful...... found it :) http://forums.anandtech.com/showthread.php?t=2064356&highlight=milkway

Seems the 4870 would be ~63% faster than my 4830 (o/c 595 MHz), hmm tempting for such a low cost!
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
found the Milkyway@Home - GPU performance statistics thread. according to that data, it would appear that the 4870 is only ~30% more efficient than the 4830 in MW@H (191s run times vs 275s run times). but take those numbers w/ a grain of salt...someone else might have a 4870 paired w/ a different CPU, OS, etc. that could result in run times longer or shorter than 191s.

it might be better to strictly compare the FP64 (double-precision) performance of these cards than to compare their gaming performance, since MW@H performance will depend almost entirely on FP64 performance. check out Wiki's Comparison of AMD GPUs article, and scroll down to the HD 4xxx series table. you'll see that the 4870's FP64 throughput is ~63% higher than the 4830's (240 GFLOPs vs 147.2 GFLOPs), which works out to roughly a ~39% cut in run times. and even this "more accurate" performance difference is going to vary a bit, again depending on differences between platforms. factor in OCing, and then it's a different story...
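For anyone who wants to reproduce those GFLOPS figures, here is a minimal sketch of the usual theoretical-FP64 arithmetic. The shader counts and clocks (640 @ 575 MHz for the 4830, 800 @ 750 MHz for the 4870) and the 1/5 FP64:FP32 ratio for the HD 48xx series are the commonly published specs, so treat them as assumptions to verify against the Wiki tables:

```python
# Sketch: theoretical FP64 throughput from published specs (assumed, not measured).
# FP32 GFLOPS = shaders * clock (MHz) * 2 ops/cycle / 1000; FP64 scales that by
# the card's FP64:FP32 ratio (1/5 for the HD 48xx series).
def fp64_gflops(shaders, clock_mhz, dp_ratio):
    return shaders * clock_mhz * 2 / 1000 * dp_ratio

hd4830 = fp64_gflops(640, 575, 1 / 5)   # ~147.2 GFLOPS
hd4870 = fp64_gflops(800, 750, 1 / 5)   # ~240.0 GFLOPS
print(f"HD 4830: {hd4830:.1f} GFLOPS, HD 4870: {hd4870:.1f} GFLOPS")
print(f"4870 advantage: {hd4870 / hd4830 - 1:.0%}")  # ~63% more throughput
```

Actual MW@H run times will of course also depend on the rest of the platform, as noted above.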
 

Rattledagger

Elite Member
Feb 5, 2001
2,989
18
81
Recently I was given an Nvidia GTX 260 core 216 after fixing a mates car, I know it's about as old as my 4830 but looking at various gaming benchmarks it's about 30-50% faster :).
Gaming-benchmarks isn't a good indicator then it comes to double-precision crunching-speed...

While double-precision is always slower than single-precision, the main reason Nvidia is terrible on projects like Milkyway is that Nvidia is intentionally crippling the double-precision speed on their consumer cards, to instead sell the much more expensive Tesla cards to users that need double-precision.

In double-precision, a Tesla has 1/2 the speed of single-precision.
A 5xx-Nvidia on the other hand has only 1/8 the speed of single-precision, meaning Nvidia is intentionally crippling double-precision to 1/4 of the speed the hardware really could handle.

For the GTX-680 and GTX-690-cards, the double-precision-speed is 1/24 :eek: :eek: :eek: compared to single-precision-speed, if there's not something seriously wrong with the table http://www.anandtech.com/show/5805/...-review-ultra-expensive-ultra-rare-ultra-fast

Meaning, a GTX-560 will be faster than a GTX-680 then it comes to double-precision-speed, and both will be beaten the crap out of by an old Ati-4830-card.

The old Nvidia-260 on the other hand is another story, since this was before Nvidia started crippling their consumer-cards. Not sure if it's 1/4 of single-precision-speed, if so it should be faster than the Ati-4830...
 

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
Hey RD, longtime no see, good to hear from ya :), btw I see you still get 'then' & 'when' mixed up, lol.

Thanks for the info guys :)

Isn't this article http://www.sccg.sk/~vgsem/data/zima2008/FP64_GPU.pdf showing the HD 48xx cards as having 1/4 FP64 speed & the GTX 280 1/12th? (Wiki's FP64 numbers make no sense as they don't say what units they are, GFlops I guess, but they don't show FP32 speed... oh yes they do but it's out by x10 :confused: ).

So assuming the 260 is slower than the 280 in all aspects the HD 4830 should still be faster? (can't find out FP64 info for the 260).

Sunny
I found that thread too just a few minutes before you did, lol, odd that we seem to have come up with different figs :confused:

I'll post the numbers here in case I made a mistake.

HD 4830 o/c 600MHz ... 5m 11s = 311s
HD 4870 .................... 3m 11s = 191s

That's a ~39% cut in times :) (in line with your wiki figures). I see where I went wrong too lol, the 4830 is 63% slower than the 4870, the 4870 isn't 63% faster, damn stats!
Ah & I see what you've done now, you've picked the times for the 4830 with the large o/c to 700 MHz ;). Mine's o/c to 595 MHz (never found out its max).

I think a 39% speed-up is worth £20 :cool:, assuming his card is ok; it was having VPU errors.

I'm gonna run some MW@H WUs now & compare the 4830 & 4870.
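The faster/slower mix-up above is an easy one to make. Here is a minimal sketch, using the 311 s and 191 s run times quoted in this post, of why a ~39% cut in run time is the same thing as a ~63% speed-up:

```python
# Quick sketch of why "X% faster" and "X% slower" aren't symmetric,
# using the per-WU run times quoted above.
t_4830, t_4870 = 311.0, 191.0  # seconds per WU

time_cut = (t_4830 - t_4870) / t_4830  # how much shorter the 4870's runs are
speedup = t_4830 / t_4870 - 1          # how much more work it does per hour

print(f"time cut: {time_cut:.0%}")  # ~39%
print(f"speedup:  {speedup:.0%}")   # ~63%
```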
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
Sunny
I found that thread too just a few minutes before you did, lol, odd that we seem to have come up with different figs :confused:

I'll post the numbers here in case I made a mistake.

HD 4830 o/c 600MHz ... 5m 11s = 311s
HD 4870 .................... 3m 11s = 191s

That's a ~39% cut in times :) (in line with your wiki figures). I see where I went wrong too lol, the 4830 is 63% slower than the 4870, the 4870 isn't 63% faster, damn stats!
Ah & I see what you've done now, you've picked the times for the 4830 with the large o/c to 700 MHz ;). Mine's o/c to 595 MHz (never found out its max).
oops...i just realized that too. i guess i just didn't realize there were three different entries for the 4830 in that table, and just went with the first one i saw (the one @ 700mhz lol). i'm glad we were able to clear up any confusion regarding the numbers.
 

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
Lol :D

Sunny
No probs, I had a quick go at o/cing my 4830's GPU (as in a 5 min run on OCCT GPU), errored at 670 MHz, passed at 660 MHz, not a huge o/cer ;).
Gonna see what my 4870 o/cs to soon too.

Oh btw I've got some accurate comparisons between my 4830 & 4870 on MW now.
Times for 159.86 credit WUs.
4830 @600 MHz 231s (default 575 MHz)
4870 @750 MHz 150s (default)

That's a 35% cut in WU times :).
Looking at ~1/2 dozen WUs for each card the time didn't vary by more than 1s.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Times for 159.86 credit WUs.
4830 @600 MHz 231s (default 575 MHz)
4870 @750 MHz 150s (default)

When I first started the MW@Home GPU comparison thread, I had a lot of trouble figuring out how to actually compare the GPUs. This is because the WU credit varies. Also, not everyone runs their GPU for 24 hours which would have allowed us to have a daily average work-rate in BOINC points.

Hi guys, longtime no chat! :eek:

Anyhow, I know when I started MW that ATI cards were much quicker at it than Nvidia (can't remember why now, lol). That's why I switched my HD 4830 from F@H to MW (which I run sporadically, ambient temps allowing).

Double Precision compute performance is the key.

Milkyway@home (OpenCL support and Double precision GPU required, so on the AMD side: Radeon 47xx, 48xx, 58xx, 59xx, 69xx, 77xx, 78xx, 79xx; FirePro V87xx, V78xx, V88xx, V98xx; FireStream 92xx, 93xx) (Windows only)

Here is a chart that sorts AMD GPUs by Double Precision level of performance at stock clock speeds:
http://boinc.berkeley.edu/wiki/ATI_Radeon

It doesn't have everything but you can calculate it fairly easily.

All you need to know for each GPU is what the double precision performance multiplier is of its single precision performance.

The double precision multiplier is 1/4th for HD6900/7900 series, 1/5th for HD4800/5900 series.
Other series such as HD7800 have a multiplier of 1/16th; here is a chart for the 7800/5800/6900 series

To calculate Single Precision performance before using the Double Precision multiplier: # of Shaders x Shader clock speed (which is the same for AMD GPUs as the GPU clock) x 2 Ops/clock cycle / 1000

For example, quick math shows why NV consumer GPUs are so slow in MW@Home (or any DC project that uses double precision compute).

GTX560: 336 Shaders x 1620mhz Shader clock x 2 Ops per clock / 1000 x 1/12th FP32 Double Precision Multiplier = 91 Gflops
GTX580: 512 Shaders x (772mhz GPU clock x 2 to arrive at Fermi shader clock = 1544mhz) x 2 Ops per clock / 1000 x 1/8th of Single Precision adjustment (Double Precision GTX580 Multiplier) = 198 Gflops
GTX680: 1536 Shaders/CUDA cores x 1058mhz GPU/Shader clock x 2 Ops per clock / 1000 x 1/24th of Single Precision adjustment for double precision (Double Precision GTX680 multiplier) = 135 Gflops

HD7970 925mhz = 947 Gflops
HD7970 1150mhz = 1178 Gflops

You are going to need at least 8 GTX680s to match a single overclocked 7970 in MilkyWay@Home. :biggrin:
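The arithmetic above can be sketched in a few lines; the shader counts, clocks, and DP multipliers are taken straight from the worked examples in this post (verify them against the linked charts):

```python
# Sketch of the formula above:
# FP64 GFLOPS = shaders * shader clock (MHz) * 2 ops/clock / 1000 * DP multiplier.
def dp_gflops(shaders, shader_clock_mhz, dp_multiplier):
    return shaders * shader_clock_mhz * 2 / 1000 * dp_multiplier

cards = {
    "GTX 560": dp_gflops(336, 1620, 1 / 12),   # ~91 GFLOPS
    "GTX 580": dp_gflops(512, 1544, 1 / 8),    # ~198 GFLOPS (772 MHz core x 2)
    "GTX 680": dp_gflops(1536, 1058, 1 / 24),  # ~135 GFLOPS
    "HD 7970": dp_gflops(2048, 925, 1 / 4),    # ~947 GFLOPS
}
for name, gflops in cards.items():
    print(f"{name}: {gflops:.0f} GFLOPS FP64")
```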

If you guys have a suggestion on how I can update that previous thread with MilkyWay times OR average BOINC points per day, please let me know. :thumbsup:

I will gladly redo the entire chart but I would need all of your help since I do not have my 4850/4890/6950 GPUs anymore. So I cannot retest them. This also means I cannot simulate HD4870 or HD6970 levels of performance!
 

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
Not o/ced my 4870 yet.

Btw I can't seem to get any WUs for MW, although I'm wondering if the updated drivers somehow screwed it up, even though I have detached/re-attached MW.
There is a server down but I don't understand what that server does http://milkyway.cs.rpi.edu/milkyway/server_status.php

Are WUs available?

Russian Sensation
Why do the charts need re-doing?
Thx for all the info btw :)
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
Not o/ced my 4870 yet.

Btw I can't seem to get any WUs for MW, although I'm wondering if the updated drivers somehow screwed it up, even though I have detached/re-attached MW.
There is a server down but I don't understand what that server does http://milkyway.cs.rpi.edu/milkyway/server_status.php

Are WUs available?
yes, there are WUs available for ATI GPUs. the downed server is for n-body production only (the kind of WU that only runs on CPUs, not GPUs), so you needn't be concerned with it unless you run MW@H on your CPU as well.

as for your not getting MW@H WUs for your ATI GPU, that could very well be caused by an incompatible driver version. but i suspect it is due to the fact that BOINC isn't recognizing your GPU (i just went to your MW@H account page to view your computers, and your one and only MW@H host (the Intel C2 Q6600) isn't showing any GPUs).
 

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
I updated the driver to the latest version (12.6) in an attempt to be able to manually control the fan speed (which I could with the 4830 but not the 4870), so BOINC has got confused by the driver update? Damn, ok I'll roll back to the previous version, or whichever works!

Thx :)
Oh & yea GPU MW only, & yes to Q6600 rig.
I'd forgotten that the MW account page could show that info!
 

Rudy Toody

Diamond Member
Sep 30, 2006
4,267
421
126
When you run BOINC as a service on Windows or a daemon on Linux, the BOINC apps cannot access the AMD/ATI drivers properly. Stop the service/daemon and launch as administrator/root.

I saw a thread that showed a work-around for the Windows services, but I can't find it again.

On Linux, I run from a root terminal and everything is going great.

When you first launch BOINC, the log will state whether any suitable gpu is found. For my pc, with BOINC as a daemon, the log says no gpu. When run as root, the log says gpu found.
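As a small illustration of the log check Rudy describes, here is a hypothetical helper (not part of BOINC) that filters startup-log text for GPU-related messages. The log location varies by platform (on many Linux installs the daemon log is stdoutdae.txt under the BOINC data directory), and the sample lines below are illustrative, not copied from a real log:

```python
# Hypothetical helper: pull the GPU-detection lines out of a BOINC startup log.
def gpu_lines(log_text):
    # Keep only lines that mention a GPU, case-insensitively.
    return [line for line in log_text.splitlines() if "gpu" in line.lower()]

# Illustrative sample of what a startup log might look like (assumed format).
sample = """\
07-Jul-2012 10:00:01 [---] Starting BOINC client version 7.0.28
07-Jul-2012 10:00:01 [---] No usable GPUs found
07-Jul-2012 10:00:02 [Milkyway@Home] Requesting new tasks"""
print(gpu_lines(sample))
```

In practice you would read the real log file and pass its contents to `gpu_lines` instead of the sample string.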
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
I updated the driver to the latest version (12.6) in an attempt to be able to manually control the fan speed (which I could with the 4830 but not the 4870), so BOINC has got confused by the driver update? Damn, ok I'll roll back to the previous version, or whichever works!

Thx :)
Oh & yea GPU MW only, & yes to Q6600 rig.
I'd forgotten that the MW account page could show that info!
well i went back to your MW@H webpage and noticed that your host is now recognizing the HD 4870 GPU. i don't know if that was due to the update to CCC 12.6 or not...or perhaps you did as Fred mentioned and uninstalled BOINC as a service and reinstalled it as a normal program? either way, you're running BOINC v7.0.28, which is fine (BOINC v7.0.20 and up support OpenCL). keep us posted...
 

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
I was running BOINC 6.12.34 without any probs, then switched to BOINC 7.xx & still no probs. It was updating to CCC 12.6 which killed it.

Well I don't know why the host is recognising it now as the BOINC event log still shows 'missing GPU' :confused:

Btw, I seem to vaguely remember something about maybe needing the SDK? But I've noticed CCC 12.3-12.6 doesn't have it. Oh & neither 12.4 nor 12.3 works with MW, grrr.

Rudy
Re service, not sure but BOINC isn't solely running as a service, it's in the sys tray. If that's what you meant......
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
I was running BOINC 6.12.34 without any probs, then switched to BOINC 7.xx & still no probs. It was updating to CCC 12.6 which killed it.

Well I don't know why the host is recognizing it now as BOINC events log still shows 'missing GPU' :confused:

Btw, I seem to vaguely remember something about maybe needing the SDK kit? But I've noticed CCC 12.3-12.6 doesn't have it. Oh & neither 12.4 or 3 works with MW, grrr.

Rudy
Re service, not sure but BOINC isn't solely running as a service, it's in the sys tray. If that's what you meant......
well if BOINC was recognizing your GPU previously, then i doubt you had installed BOINC as a service in the first place. as for finding the driver that'll work w/ your hardware combo, trial and error is the only way...that, and searching the MW@H forums for threads that talk about CCC drivers. and actually, i'm running CCC v12.4, and it works fine w/ my HD 6950. i have a feeling v12.4 is incompatible w/ your HD 4870 (at least when it comes to using OpenCL to crunch MW@H). you might try 11.9 or 12.1 and let us know how that goes...
 

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
Oh ok, well I was using 12.1 originally so I know that works, just 12.2 to try now before going back to 12.1.

Oh & also none of the drivers' manual fan control worked :\, looks like I'm going to have to use a 3rd party program to do that. And SpeedFan, which I already have installed, can't control it either.
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
Oh ok, well I was using 12.1 originally so I know that works, just 12.2 to try now before going back to 12.1.
Oh & also none of the drivers manual fan control worked :/, looks like I'm going to have to use a 3rd party program to do that. And speedfan which I already have installed can't control it either.
i know, it can get to be a PITA to use multiple hardware monitoring programs...but i actually don't even use CCC b/c it seems limited in its capabilities and clock ranges compared to some of the 3rd party software out there. i also use SpeedFan, but i only use it to monitor CPU/case/drive temps, and control my CPU and case fans. i use MSI Afterburner to monitor GPU temps and control GPU fan speeds. you should give it a try and let us know if that finally gives you control of the GPU fan speed.