SETI@Home Wow Event 2017

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
I'm planning to re-connect to the s@h server at 11 p.m. (ET) to avoid the initial data surge.
 
Last edited:

Kiska

Golden Member
Apr 4, 2012
1,013
290
136
The outage is usually about 12 hours long, so it shouldn't be too long before the servers are back up.
 

Pokey

Platinum Member
Oct 20, 1999
2,766
457
126
Murphy is at work...............
I am on vacation and apparently a thunderstorm has knocked out some of my rigs. :mad: (or network equip) And me nowhere near home. It will be a couple of days before I can get them back up and running.
 

uallas5

Golden Member
Jun 3, 2005
1,426
1,548
136
I just dumped my 3 day bunker only to have my main GPU rig go down 5 minutes later when I was about to leave for work. Will have to see what's up with it tonight.
 

Thebobo

Lifer
Jun 19, 2006
18,592
7,673
136
I'm a newbie to the post-DSL, pre-multi-core/GPU ways of crunching. I didn't realize you could run multiple instances on one computer. I am running it on an i5 tower and a crappy laptop. Is there info specifically on running multiple clients? I would mostly be doing SETI.

And I am crunching for the WOW event; it won't be much compared to what you all do, but it all helps.
 
  • Like
Reactions: TennesseeTony

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
Well, luckily for SETI, the weather's been pretty rubbish here for a while now, so no probs running the GPU client 24/7 :)
 

StefanR5R

Elite Member
Dec 10, 2016
5,512
7,818
136
@Thebobo, check out the link from Tony's post:
http://www.overclock.net/t/1628924/guide-setting-up-multiple-boinc-instances

Running more than one client on a single host (one after another, or several at the same time) is especially useful in cases like these:
  • You want to download a large number of tasks, so that you can run the machine for an extended time without a network connection, or without having to rely on a steady network connection and a steadily working project server. But the project server may allow only a few tasks-in-progress per client. (And the client limits itself to 1000 tasks in progress.)
  • A variation on the theme: You want your host to issue project updates more frequently than the server allows per client.
  • You want to run two or more projects at the same time, but you need different local preferences for each project. E.g. you want one project to have a network connection, but want to disable networking for the other project temporarily. Other example: You want to control precisely what percentage of the CPU each of the projects gets, i.e. you don't want the client to sort this out by itself via the coarse method of "resource share" percentages per project.
Another potential use case would be to improve utilization of a large GPU by running more than one job on the GPU at a time. Actually, this can also improve utilization of medium-size GPUs: Most GPU applications have a setup and teardown phase in which they use CPU but not GPU. Rather than leaving the GPU idle during that time, it is better to run another GPU job with a suitable shift in time. However, you don't need multiple clients in order to run more than one job per GPU at a time. You just need to provide a suitable app_config.xml file in the project directory.

Here is a C:\ProgramData\BOINC\projects\setiathome.berkeley.edu\app_config.xml which contains entries for current NVidia GPU applications:
Code:
<app_config>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>cuda50</plan_class>
        <avg_ncpus>0.01</avg_ncpus>
        <ngpus>0.5</ngpus>
    </app_version>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>cuda42</plan_class>
        <avg_ncpus>0.01</avg_ncpus>
        <ngpus>0.5</ngpus>
    </app_version>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>opencl_nvidia_SoG</plan_class>
        <avg_ncpus>0.01</avg_ncpus>
        <ngpus>0.5</ngpus>
    </app_version>
</app_config>
In this example, "<avg_ncpus>0.01</avg_ncpus>" makes the client believe that this application uses almost no CPU at all. This ensures that the client continues to launch GPU jobs even if it is loaded with CPU jobs at the same time. In reality, the SETI Nvidia GPU tasks will need one CPU thread for some short periods of time. So take care that you don't overwhelm your CPU with CPU tasks + GPU tasks.

And "<ngpus>0.5</ngpus>" lets the client believe that the application will only use half of the computational resources of the GPU. Therefore the client will then always launch two GPU jobs in parallel. In reality, the application will use varying amounts of the GPU over time, if you run a single job. You can watch GPU utilization (cores and memory) e.g. with the GPU-Z application or other sensor applications. If you run two jobs, and especially if you use boincmgr to defer start of the 2nd job a little bit after start of the 1st job, then the jobs will increase GPU utilization while they fight each other a bit for resources.

Store this as a plain text file (with the .xml file extension, of course) in the directory mentioned above, then use the "Options / Read config files" menu item in the advanced view of boincmgr, and the file becomes active.

Note, I am a SETI@Home newbie, and I suspect there are more, and possibly better, ways to tune SETI@Home for a given GPU model.

Back to the topic of multiple clients per host: Another potential use case is if you want to launch more CPU tasks than you normally can with 100 % allowed CPU usage. You may want to do so in the rare case of applications which have poor CPU utilization. However, this can also be solved with a single client per host by means of the <ncpus> option in C:\ProgramData\BOINC\cc_config.xml. (Documentation: https://boinc.berkeley.edu/wiki/Client_configuration)
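For illustration, a minimal cc_config.xml sketch of that <ncpus> override could look like the following; the value 12 is only an assumed example for a host with 8 logical CPUs on which you want the client to schedule a few extra CPU tasks, so adjust it to your own machine:
Code:
<cc_config>
    <options>
        <!-- Pretend the host has 12 logical CPUs, so the client will run up to
             12 CPU tasks at once. Example value only; pick what suits your machine. -->
        <ncpus>12</ncpus>
    </options>
</cc_config>
As with app_config.xml, the file takes effect after "Options / Read config files" in boincmgr, or after a client restart.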
 
  • Like
Reactions: Smoke

TennesseeTony

Elite Member
Aug 2, 2003
4,209
3,634
136
www.google.com
Stefan, you never cease to amaze me with the amount of time/research/thought you put into helping others on the TeAm. I thank you. And I nominate you for DC user of the year (if such a thing existed). :D


EDIT: Just wanted to mention: I'm only running GPU for SETI WOW, and TN-Grid for the CPU side of things. Good to have my fleet running again, after some cost-saving measures over the previous weeks. :)
 
  • Like
Reactions: Smoke

Thebobo

Lifer
Jun 19, 2006
18,592
7,673
136
@Thebobo, check out the link from Tony's post:
http://www.overclock.net/t/1628924/guide-setting-up-multiple-boinc-instances
[...]

Great! Thanks Stefan. It'll take me a while to digest all that. : )
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
Woohoo! Taurus leads the pack! Go Taurus! (and TeAm)! :D
It seems it will be a two-horse race between Virgo and Taurus. But there are 12 days to go; anything can happen.
TeAm is trailing Planet3DNow by about 300k credits. There's a chance to reach the Top 10.
 

Smoke

Distributed Computing Elite Member
Jan 3, 2001
12,649
198
106
Murphy is at work...............
I am on vacation and apparently a thunderstorm has knocked out some of my rigs. :mad: (or network equip) And me nowhere near home. It will be a couple of days before I can get them back up and running.

Let's hope it is just your network.
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
After updating to get new tasks last night, I noticed that the GPU tasks are taking longer to finish. I wonder if this comes from s@h or if my GPU's performance is getting weaker.
 

TennesseeTony

Elite Member
Aug 2, 2003
4,209
3,634
136
www.google.com
I've had some problems with GPU tasks from 2nd/3rd BOINC clients on the same computer, taking far more than an hour, when it should be only 12-20 minutes (4 at a time per GPU). Do you have the issue with only a single client, or multiple?

**********

G-man, sorry to hear of the outage. As the other G-man says, hopefully you're still crunching for the duration, and all those completed tasks will be waiting for you to hit the router/modem with a hammer and send once you return home!
 

QuietDad

Senior member
Dec 18, 2005
523
79
91
Had two of my rigs go down today. One's back up; time for a nap before the next. May have lost a GPU, but oh well.
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
I only use a single client. Usually a GPU task is done in 12-14 minutes, but today it needs 20+ minutes.
 

QuietDad

Senior member
Dec 18, 2005
523
79
91
No worries. Have a junkyard of computer pieces taking up a full garage bay. I'll get it up.
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
No worries. Have a junkyard of computer pieces taking up a full garage bay. I'll get it up.
I don't think a place filled with multiple TFLOPS worth of GPUs can be called a junkyard.
I only use a single client. Usually a GPU task is done in 12-14 minutes, but today it needs 20+ minutes.
I partially fixed my problem by enabling 2 tasks per GPU (usually I run 1 task per GPU because its utilisation already hits 100%). Now it needs 18 minutes per GPU task. Not an optimal solution, but it's better than nothing.
 

StefanR5R

Elite Member
Dec 10, 2016
5,512
7,818
136
Re #36, thanks. :blush:
Woohoo! Taurus leads the pack!
The top user brought 113 hosts. :astonished:
I've had some problems with GPU tasks from 2nd/3rd BOINC clients on the same computer, taking far more than an hour, when it should be only 12-20 minutes (4 at a time per GPU).
I have seen such variations too. Could this just be the difference between opencl_nvidia_SoG (fast) and cuda42/cuda50 (slow)? Note the different GFLOPS:

[attached screenshot showing the different GFLOPS of the application versions]
 

TennesseeTony

Elite Member
Aug 2, 2003
4,209
3,634
136
www.google.com
Wow. What is SOG, by the way? I'm quite familiar with the different versions of cuda and the OpenCL tasks, but SOG is new to me. Maybe I should Google instead of asking....post or not.....hmmmmmm...post.

Edit: No luck finding an answer.