
Milkyway@Home - GPU performance statistics

4. 4870 1GB = 3m:40s (<-- this seems way too slow)

My mistake: That should be a 4850!

default clocks: 3min, 44 sec
Shader OC 625MHz -> 700 MHz: 3 min, 24 sec

default:

Device 0: ATI Radeon HD4700/4800 (RV740/RV770) 1024 MB local RAM (remote 64 MB cached + 128 MB uncached)
GPU core clock: 625 MHz, memory clock: 993 MHz
800 shader units organized in 10 SIMDs with 16 VLIW units (5-issue), wavefront size 64 threads
supporting double precision

CPU time: 11.3281 seconds, GPU time: 222.891 seconds, wall clock time: 226.285 seconds, CPU frequency: 3.0056 GHz

If I increase core clock to 700MHz:

Device 0: ATI Radeon HD4700/4800 (RV740/RV770) 1024 MB local RAM (remote 64 MB cached + 128 MB uncached)
GPU core clock: 700 MHz, memory clock: 993 MHz
800 shader units organized in 10 SIMDs with 16 VLIW units (5-issue), wavefront size 64 threads
supporting double precision

WU completed.
CPU time: 7.95313 seconds, GPU time: 199.953 seconds, wall clock time: 203.362 seconds, CPU frequency: 3.0056 GHz

Increasing the memory clock to 1200 MHz made no difference.
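For what it's worth, those two runs are close to perfect core-clock scaling, which fits the memory clock making no difference. A quick sanity check using the GPU times from the output above:

```python
# GPU times reported above for the 625 MHz and 700 MHz runs
t_625 = 222.891  # seconds at 625 MHz core clock
t_700 = 199.953  # seconds at 700 MHz core clock

measured = t_625 / t_700  # speedup actually observed
ideal = 700 / 625         # speedup if perfectly core-clock-bound

print(f"measured speedup: {measured:.3f}, ideal: {ideal:.3f}")
```

Measured comes out around 1.115 vs an ideal 1.12, i.e. the WU is almost entirely core-clock-bound on this card.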
 
Thanks biodoc! Updated the chart to reflect your information.

dajeepster, hopefully that GTX480 will work so we can see what the beast is capable of.
 
Milkyway@home 0.24 (cuda23), de_new_test2_...


EVGA GTX 480 SC (1451MHz clk/1900MHz mem) - xxxx
GTX 285 - xxxx
(times deleted, see post below)

I don't see the GTX 480 getting a full workout - the fan is only running at 60% and the GPU is still around 85C (vs FurMark, which pushes it over 100C at almost full fan).
 
Welcome to the forum 🙂.
Lol @ your name (a BSG fan, I assume? The newer show is my favourite).

Btw, I don't understand your M@H times; they seem much slower than anything else in the chart. I take it I've missed something?


I must remember to give M@H a shot on my 4830 at the weekend......
 
@dajeepster: I'm guessing you're only running a GTX 480, right? Read on...

@Assimilator1: Yip, I enjoy Battlestar Galactica, old and new: old for its cheesy style and the new for its sleekness (and red dresses don't hurt).

Since my initial post, I've been playing around with disabling cards and running Milkyway@Home on different mixes of cards. It appears something strange is in the works here...

If I only have GTX 285s (one or two) active in Windows, CUDA WUs complete in about 11m 30s each.

If I have my GTX 480 and a GTX 285 active in Windows, CUDA WUs complete in around 12m and 24m BUT I can't say for sure which is doing which.

If I only have my GTX 480 active in Windows, I get "computation error" type messages and no CUDA WUs are done.

If I have my GTX 480 and two GTX 285s active in Windows but cc_config.xml set to ignore the GTX 480, WUs complete in 11m 30s.

If I have my GTX 480 and two GTX 285s active in Windows but cc_config.xml set to ignore everything but the GTX 480, WUs still run on one of the GTX 285s.

GPU: GTX 480
Time: n/a, all computation errors 🙁

GPU: GTX 285 (one or two)
Time: 11m 30s 🙂

GPU: GTX 480, GTX 285
Time: 11m 30s, 23m 30s 😱

GPUs: GTX 480, GTX 285, GTX 285
Time: 11m 30s, 23m 30s, 23m 30s 😱

GPUs: GTX 480 (cc_config ignored), GTX 285, GTX 285
Time: 11m 30s, 11m 30s 🙂 :hmm:

GPUs: GTX 480, GTX 285 (cc_config ignored), GTX 285 (cc_config ignored)
Time: 11m 30s and running on one of the ignored GTX 285s 😱

So it appears the GTX 480 isn't working at this time and something ODD is happening with WUs rolling over from the GTX 480 to the GTX 285s, even if the GTX 285s are set to be ignored. Since the WUs appear to roll to the 3rd GPU, I'm guessing my initial results were actually due to 3 CUDA tasks running on 2 CUDA cards, none of them the GTX 480.
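For anyone wanting to reproduce the "ignored" runs above: the exclusions go in cc_config.xml next to the BOINC client data. A minimal sketch, assuming the ignore_cuda_dev option of that era's BOINC client, and assuming the GTX 480 enumerates as CUDA device 0 (check the device list in the BOINC startup messages):

```xml
<!-- cc_config.xml: tell BOINC to skip CUDA device 0 (assumed here to be the GTX 480) -->
<cc_config>
  <options>
    <ignore_cuda_dev>0</ignore_cuda_dev>
  </options>
</cc_config>
```

Restart the BOINC client (or use "Read config file" in the Manager's Advanced menu) for the change to take effect.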

One possible exception to the GTX 480 not working: I hit one of the Milkyway@Home v3 CUDA work units and it completed in about 6 minutes. I barely caught the GPU usage status, but it looked and SOUNDED like my GTX 480 was running hard (>90% usage per EVGA Precision GPU monitor/tuner) and hot!

And let's not get me started on Collatz GPU WUs - they basically lock the PC up if the GTX 480 is enabled. :thumbsdown:
 
I will update further after the PrimeGrid race, but I believe my 5850 was down to 1:49 per WU after initial 2:00 WU completion times. I will check again; it's idle at the moment...too much heat already from all the race rigs!

All should consider joining TAC and the TeAm from at least the 5th-10th (the Milkyway@Home portion) for the BOINC Pentathlon!!
 
@dajeepster: I'm guessing you're only running a GTX 480, right?
I'm only running my GTX480 on Collatz and DNETC.. those are the only programs I can get to run reliably.

And let's not get me started on Collatz GPU WUs - they basically lock the PC up if the GTX 480 is enabled. :thumbsdown:
I'm not having any problems with my system running Collatz. Of course, the system isn't currently overclocked, and I do have the fan turned up to 80%.
 
Okay, I received more MilkyWay@Home Version 3 0.03 (cuda23) work units and verified they run on the GTX 480. I ran some on my GTX 480 and one on my GTX 285 with the following results:

de_mw3_s12_test... on GTX 480 = 5m 45s
de_mw3_test2... on GTX 480 = 5m 59s
de_mw3_s11_test... on GTX 285 = 11m 31s

Verified that the 480 and 285 were working using EVGA Precision.
 
From over at the Milkyway@Home forums:

BOINC estimated an HD 6970 w/o OCing at 3307 GFLOPS. It takes approx. 70 secs to crunch one MilkyWay@Home WU.
If anyone can confirm this, it looks like we have a new crunching king for Milkyway@Home.
 
Well I finally got around to installing it, my HD 4830 completed its 1st 2 WUs in 324 & 328s, which is 5 mins 24 & 28s. [edit] 3rd WU 322s.

Hmm that's much slower than the 4850, which is about 20% faster than my 4830 in games. Have the WUs changed or is something else wrong here?
Btw running Cat 10.12

Btw2, I'm getting quite bad GUI lag now; any way of adjusting that out?

[update]
10 WUs completed now; times range from 321.5s to 329.6s, although 8 of them are between 321.5s & 324s.
 
Practically nothing. According to Task Manager ~1s CPU time/WU.
Well now that I'm running it I can give a more definitive answer, & the MW GPU client uses considerably more than nearly nothing, indirectly.

Running F@H SMP alone, it gets 98-99% of CPU time; running MW GPU as well, F@H SMP gets about 80-95%, with most of that time at ~88-92%.

The funny thing is that MW GPU is only taking ~1% of CPU time (sporadically); it seems to be affecting other processes too. A lot of the remaining CPU time is going to explorer (2-8%), my firewall, & Task Manager (0-3%, though obviously that won't be running when I'm not looking at CPU time). This doesn't happen when MW GPU is off. It seems that MW GPU is causing other programs to use more CPU time than usual 😕.
That's on Win XP SP3, Cat 10.12 drivers.

Still, it's a good project & F@H is still getting most of the CPU so I shall carry on running MW for a while 🙂.

Can no one answer me about GUI lag with MW? 🙁
 
Can anyone confirm MW@H performance of a 4670 1GB card? Or perhaps the performance of two 4670's in Crossfire? I could have two of these cards for $80 shipped, and was wondering if the price-to-performance ratio would be worth it. I don't do MW@H yet, but would like to get into it if I can find some decent GPUs for crunching.

I'm already thinking that it isn't worthwhile judging by the existing numbers in post #1. Given that the 4770 is crunching a WU in ~4:16, I would imagine a 4670 would crunch in the ~5:00 range. Also, considering that the 4890 is crunching WU's in ~2:39 and can be had for ~$100, perhaps a single 4890 will be better (and of course consume less power) than two 4670's...what do you all think?
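One way to frame that comparison is WUs per hour per dollar. A back-of-the-envelope sketch; the times and prices are the rough estimates quoted above, not measurements:

```python
# Rough price-to-performance comparison from the estimates above.
# sec_per_wu is effective seconds per WU for the whole setup;
# two 4670s at ~5:00 each halve the effective time.
setups = {
    "2x HD 4670": {"sec_per_wu": 300 / 2, "price": 80},   # ~5:00 each, two cards
    "1x HD 4890": {"sec_per_wu": 159, "price": 100},      # ~2:39 per WU
}

for name, s in setups.items():
    wu_per_hour = 3600 / s["sec_per_wu"]
    print(f"{name}: {wu_per_hour:.1f} WU/h, "
          f"{wu_per_hour / s['price']:.3f} WU/h per $")
```

On these guesses the two 4670s edge out the 4890 slightly on throughput per dollar (~24 vs ~22.6 WU/h), but the single 4890 wins on power draw and simplicity, which matches the discussion here.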

TIA,
Eric
 
I would think you'd be better off getting 1 higher-performance card (performance- & energy-usage-wise), as you usually are anywhere you can get 1 better card vs CF/SLI.
I know in games the 4670 was much slower than my 4830 (which the 4770 is slightly faster than); a quick look at prices here in the UK shows 4670s going for about £50. For £100 you can get a 1 GB 5770, or £85 for a 5750 1GB.

Looking at THG's chart (no 4670s listed on AT's Bench 🙁) here you'll see that the 5770 is about 2-2.5 times faster than one 4670! (Note that the 1st 4670 column is 2x 4670s.) So you'd be better off with one 5770 (assuming price differences are the same in the USA), not to mention no driver problems with games without CF 🙂. The 5750 isn't quite twice as fast as a 4670 but it's very close; that would be a cheaper option. I haven't looked at Nvidia as you only mentioned ATI.
Looking again at the chart I see the 5750 is slightly faster than the 4670 CF in almost all benchmarks.

I usually find the best deals are in 2nd hand cards from ebay with previous generation, top of the range cards, although of course they tend to be more power hungry you do get much more bang for ya buck.

[update]
I see that in the time I've taken to write my post you've come to the same conclusions 😉. Add the 4890 to the chart to see where that sits, although I didn't see the 4890 listed at the site I looked at (Ebuyer) - are you sure you can still get them new?
Although I did notice that the 4870 is about even with the 5770 (in some benchmarks the 4870 is slightly faster, in others the 5770), so I'd imagine the 4890 will be faster all round over the 5770.
 
Thanks for the detailed response and the link to the video card comparison. I see that the 5750 is very close in performance to 2 Xfired 4670's. My major concern (and I should have specified this earlier) is Milkyway@Home performance...that is, I will not be using these cards for any sort of gaming - they will strictly be used for crunching MW@H. That is also why I didn't mention anything about nVidia cards - MW@H seems to be far more optimized for ATI/AMD cards. Those comparisons are pretty much all based on FPS, whether it's a game or pure benchmarking software. Will this translate into the same (or at least similar) performance with respect to MW@H?

I'll have to go back and add a 4890 to the comparison, but there are so many cards to choose from and compare...so I think I'll wait until after work to look at more comparisons.

thanks again,
Eric
 
Within the ATI family at least, yes. Naturally there will be some variation, as some architectures will be better at certain types of DC/apps/games than others, but overall it will give you a good general idea in terms of degrees of difference & 'rankings'.
And unless there is a comprehensive MW benchmarking list of different ATI cards, this is probably the best idea you'll get. Unless anyone here can post 4670 MW scores? 🙂.

Re picking ATI, I thought so :thumbsup:.

4890 vs 5770 vs 5750 vs 4670
The 4890 mostly beats the 5770 by a modest margin (roughly up to 20%), matches it in a few games & loses slightly in one.
 
Referring back to the various scores posted, are you guys running an optimised app?
Or was that only a benefit in the earlier days? [edit] Just found out they were blocked off 2 years ago due to some returning bad results.

Trying to find out why my HD 4830 scores are much lower than they should be compared to the 4850 scores posted earlier.
 
Thanks for pulling that 2nd comparison together for me! As regards the question at the end of your previous post: no, I cannot find the 4890 new anywhere, but I've found some decent prices in the AT For Sale forum. I understand that nVidia cards will be better for other DC projects (specifically Einstein@Home, which I already participate in using CPU crunching power only), but for now I want to get started with MW@H b/c 1) I am a big fan of the science behind the project, and 2) it seems to yield very good PPD results considering all the other DC projects out there with GPU support. Once I get this machine up and running, then I'll consider adding an nVidia Fermi w/ CUDA card to my other box, which already crunches for E@H.

thanks again,
Eric
 
Yep, fair enough 🙂. Apart from the fact that I'm very interested in astronomy, the fact that ATI do well in MW was another reason for running it on my HD 4830 🙂.
 
Cool man...I currently participate in Einstein@Home, SETI@Home, and LHC@Home, which are all astronomy/astrophysics-related. And while SETI is probably the least important to me, I like the fact that the project offers the alternative Astropulse v5.05 application that stands a chance of finding more than just possible signals from little green men 🙂 (there's a chance it'll detect pulsars, black holes, and other things). I'm looking forward to joining the MW@H cause...just gotta figure out which GPU to go with, and then I gotta muster up the cash to buy one lol...
 
Lol, yep!
I run SETI, F@H, DIMES, MW & DPAD (Distributed Particle Accelerator Design); I think that one might be up your street too 🙂.
 
Going back on topic, does anyone know why my HD 4830 times look so much slower than the 4850's? (More than the ~20% speed difference.)
 
Was anyone using optimised apps? (Although I believed they were blocked 2 yrs ago - is this still the case?)

[update]
OK I think I've finally found out what's going on.

Like SETI, each WU's crunching length can be different; the key to comparing WU times is picking WUs with the same amount of credit.

So I'm afraid, Russian Sensation, that your GPU stats list is meaningless unless you guys all happened to crunch the same credit-value WUs (which is possible in a short space of time). But seeing as none of you recorded the WU credit for your scores, these times can't be compared with any certainty.

I know when I tried setting up a benchmark for SETI I had the same hassles; it's a nightmare trying to compare different systems as the WUs keep changing. As I mentioned, you can choose a particular WU credit to compare, but when those WUs disappear you're back to not being able to compare times. The only way to do it* AFAIK is to copy an entire BOINC install with a fresh 'typical' WU which has not been started; this entire folder can then be zipped up & uploaded for others to crunch for an accurate & lasting benchmark.
I didn't do this for SETI in the end due to lack of interest 🙁.

If there is interest in doing this for MW, maybe we could do that?
Although I am wondering: if you calculated the time per credit, could different WUs then be compared?

*Unless anyone knows how to capture & insert WUs into BOINC? This IIRC is a tricky process as it's not a single file.
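That time-per-credit idea could be sketched like this (the numbers below are made up purely for illustration; real values would come from each host's results page on the project site):

```python
# Normalize run time by granted credit so differently sized WUs can
# be compared across GPUs. All entries below are hypothetical.
results = [
    {"gpu": "HD 4850", "seconds": 160.0, "credit": 100.0},
    {"gpu": "HD 4830", "seconds": 324.0, "credit": 150.0},
]

for r in results:
    sec_per_credit = r["seconds"] / r["credit"]
    print(f"{r['gpu']}: {sec_per_credit:.2f} s/credit")
```

With these made-up numbers the 4830 is only ~35% slower per credit, not the ~2x the raw times suggest - which is the point: raw WU times mislead when credit values differ.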

Sunny129
I've just spotted this in a review I'm only reading now! :$

From Anandtechs review of the 5750 & 5770

What's Juniper? In a nutshell, it's all of Cypress' features with half the functional units (and no Double Precision for you scientist types).

Which really sucks because that means it can't do MW! 🙁
 