Any pointers on crunching Einstein@home w/ R9 290?

Plimogz

Senior member
Oct 3, 2009
678
0
71
I was thinking about Collatz for my 290s, but the server has been down for the better part of two days, and as I'm running out of WUs I'm going to check out Einstein instead.

Are there specific sub-projects I should shoot for, considering the 290s? And more importantly, are there specific configuration tweaks I should be making to my app_info.xml file, or other things of that sort?
 

GLeeM

Elite Member
Apr 2, 2004
7,199
128
106
I have been running Binary Radio Pulsar Search, both Arecibo and Perseus Arm. One does better on one card and the other better on the other card :rolleyes:
I should check all sub-projects again though, as things change over time.

You should try running one WU at a time for a few WUs and then try two (maybe even three) to see what works best.
 

Plimogz

Senior member
Oct 3, 2009
678
0
71
Yeah, I probably should. But I'm still hoping somebody has done so before.

I suppose I'll start going through the sub-projects tonight, seeing as I'm running out of Collatz work.
 

Plimogz

Senior member
Oct 3, 2009
678
0
71
I'm finding that simply running Einstein in its default state results in pretty low GPU usage. And while this is the first BOINC project I've seen that offers settings on the project web page for running more than one WU at a time, those settings aren't working for me.
 

Plimogz

Senior member
Oct 3, 2009
678
0
71
I have to admit that I wasn't patient: I came across an app_config.xml somewhere about the same time that I figured out what the "GPU utilization factor" settings did -- and, well, the app_config file kicked in before the web client settings did.

That being said, the project seems well behaved -- at least compared to Collatz, which is being a real pain.

***

I've settled on running two Binary Radio Pulsar Search (Perseus Arm Survey) WUs per GPU to start, to establish a baseline PPD to improve from.

The app_config.xml I found was set to 0.6 CPU per WU; I'll leave it there for now, I guess.
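For anyone curious, a minimal app_config.xml along those lines would look something like this (a sketch only -- the app name matches the BRP5 plan class, and a gpu_usage of 0.5 is what yields two WUs per GPU):

```xml
<app_config>
  <app>
    <name>einsteinbinary_BRP5</name>
    <gpu_versions>
      <!-- 1 / 0.5 = 2 simultaneous WUs per GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- CPU reserved per task, as found -->
      <cpu_usage>0.6</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```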
 

GLeeM

Elite Member
Apr 2, 2004
7,199
128
106
The app_config.xml I found was set to 0.6 CPU per WU; I'll leave it there for now, I guess.
Watch the "GPU Usage" with your OC app or with GPU-Z. And then lower the CPU cores used by one to see if the GPU usage goes up.
 

zzuupp

Lifer
Jul 6, 2008
14,863
2,319
126
I've been running Einstein on the R290 for the last few days. Both Arecibo and Perseus. Neither has an app-config going, just one at a time.
 

Plimogz

Senior member
Oct 3, 2009
678
0
71
Like I said, I decided to go with two simultaneous Perseus Arm Survey v1.39 (BRP5-opencl-ati) tasks, and after attempting to compare efficiency with your completed tasks I realize that I don't yet have one of these validated. Still, FWIW:

zzuupp:
Application: Binary Radio Pulsar Search (Perseus Arm Survey) v1.39 (BRP5-opencl-ati) | Status: Completed and validated | Run time (sec): 18,463.00 | CPU time (sec): 3,895.25 | Claimed credit: 16.59 | Granted credit: 3,333.00
Plimogz:
Application: Binary Radio Pulsar Search (Perseus Arm Survey) v1.39 (BRP5-opencl-ati) | Status: Completed, waiting for validation | Run time (sec): 9,057.20 | CPU time (sec): 3,660.44 | Claimed credit: 37.15 | Granted credit: pending

I do, on the other hand, have some valid Binary Radio Pulsar Search (Arecibo, GPU) v1.39 (BRP4G-opencl-ati) tasks. A similar comparison for those looks like:

zzuupp:
Application: Binary Radio Pulsar Search (Arecibo, GPU) v1.39 (BRP4G-opencl-ati) | Status: Completed and validated | Run time (sec): 5,016.86 | CPU time (sec): 1,034.12 | Claimed credit: 4.40 | Granted credit: 1,000.00
Plimogz:
Application: Binary Radio Pulsar Search (Arecibo, GPU) v1.39 (BRP4G-opencl-ati) | Status: Completed and validated | Run time (sec): 2,886.14 | CPU time (sec): 851.84 | Claimed credit: 8.65 | Granted credit: 1,000.00
Sunny129:
Application: Binary Radio Pulsar Search (Arecibo, GPU) v1.39 (BRP4G-opencl-ati) | Status: Completed and validated | Run time (sec): 3,298.79 | CPU time (sec): 292.19 | Claimed credit: 3.03 | Granted credit: 1,000.00

***

Am I wrong, or does this look like running two WUs roughly doubles performance? And how is it that Sunny129's CPU time is so much lower?
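To put rough numbers on it -- a back-of-the-envelope sketch from the run times above, assuming zzuupp runs one WU at a time while I run two concurrently, and ignoring any hardware, driver, or CPU-feeding differences between our rigs:

```python
# Rough PPD estimate from the BRP5 run times quoted above.
# Assumptions: zzuupp runs 1 WU at a time, I run 2 concurrently;
# 3,333 credits per validated BRP5 task (the granted-credit column).

def tasks_per_day(concurrent, run_time_sec):
    """Tasks finished per day when `concurrent` WUs each take `run_time_sec`."""
    return concurrent * 86400 / run_time_sec

BRP5_CREDIT = 3333

zzuupp_ppd = tasks_per_day(1, 18463.00) * BRP5_CREDIT
plimogz_ppd = tasks_per_day(2, 9057.20) * BRP5_CREDIT

print(round(zzuupp_ppd))   # ~15,600 PPD
print(round(plimogz_ppd))  # ~63,600 PPD
```

The gap here is closer to 4x than 2x, which probably says more about the differences between our setups than about concurrency itself.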

I need to go look up Bradtech519's times, I think I remember him saying that he was running 3 concurrent tasks.
 

GLeeM

Elite Member
Apr 2, 2004
7,199
128
106
I've been running Einstein on the R290 for the last few days. Both Arecibo and Perseus. Neither has an app-config going, just one at a time.
You must give the GPU one CPU. Your ppd is 1/4th of my GTX 560 Ti!
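In app_config.xml terms, that would be something like this (a sketch, using the BRP5 app name; a cpu_usage of 1.0 reserves a full core per GPU task):

```xml
<app_config>
  <app>
    <name>einsteinbinary_BRP5</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>   <!-- one WU per GPU -->
      <cpu_usage>1.0</cpu_usage>   <!-- dedicate a full CPU core to feeding it -->
    </gpu_versions>
  </app>
</app_config>
```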
 

Bradtech519

Senior member
Jul 6, 2010
520
47
91
I ended up running two GPU tasks on my card and leaving a core free from CPU tasks. The latest and greatest Omega drivers gave many invalids. Some of the older drivers work better, but were hit and miss on other bugs. I find SETI to be very friendly towards the R9 290; one task makes 100% utilization of it.
 

Plimogz

Senior member
Oct 3, 2009
678
0
71
I'm keeping an eye out for invalids & errors, as I am using the Omega drivers, and I saw in your thread that there might be a problem.

The app_config.xml I somewhat randomly grabbed was set to run 2 simultaneous Perseus per GPU (w/ 0.6 CPU per task) and 3 Arecibo (GPU) per card (w/ 0.5 CPU per)... Actually, here it is:
Code:
<app_config>
  <app>
    <name>einsteinbinary_BRP4G</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP5</name>
    <gpu_versions>
      <gpu_usage>0.50</gpu_usage>
      <cpu_usage>0.6</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

I wonder if 3 Arecibo at a time is courting trouble -- though they don't seem to be doing worse than Perseus, to be honest. Which isn't saying much: so far, of 19 completed Perseus Arm (BRP5) tasks, only 1 has been deemed valid, 3 have thrown validate errors, and all of 15 are pending in limbo.
 

Plimogz

Senior member
Oct 3, 2009
678
0
71
It started off as something like 50% invalid tasks (for the most part Perseus), so I switched sub-projects to Arecibo and still got tons of invalids. So I knocked down the GPU clocks, even though they'd been good enough for Moo, Collatz, F@H and maybe some others, I forget. Didn't help. Turned down the CPU clocks, even though they've been stable for months. Rolled back my driver to 14.9 from 14.12. Decreased the number of simultaneous WUs (both sub-projects) by increments 'til it was just one at a time.

No love; nearly 100% invalids now.

In fact, today was the worst day of the lot: while I was at work, not one valid WU. All invalid!

I'm going back to Collatz now that it's back up (finally) to regroup and reflect.

Maybe I'll roll back to an earlier driver. Urgh, I don't feel like testing my RAM, but... Or maybe my app_config is jinxing me somehow. Or... I don't know.