8th Annual BOINC Pentathlon


StefanR5R

Elite Member
Dec 10, 2016
5,510
7,816
136
On the subject of Cosmology@Home's server struggling to validate tasks, SETI.Germany user nexiagsi16v writes (in German; errors and stylistic faults in the translation are mine):
If you look around at Cosmo, they normally grant ~3 million points per day. Yesterday and today it was roughly quadruple that, at ~12 M per day. That puts these days squarely into the top 5 days of the project. The maximum was 74 million once, though that was back in 2009. No idea whether the credit allocation has changed since then. The second-best day was 30 M in 2011, the 3rd 17 M in 2012, the 4th 15 M in 2013.

Planck WUs are scarce goods right now. There are merely ~500 camb_legacy WUs left to be handed out.

Estimating from the current count of WUs in progress, those should be good for about 70 million. I believe someone over there is under a bit of stress right now.
 
  • Like
Reactions: TennesseeTony

TennesseeTony

Elite Member
Aug 2, 2003
4,209
3,634
136
www.google.com

StefanR5R

Elite Member
Dec 10, 2016
5,510
7,816
136
List of possible upcoming projects
Regarding Moo!Wrapper,
also have a look through the thread "RC-5/72 -- Two more days - Team Anandtech moves up". The thread is originally about the distributed.net RC5-72 project, but discussion about Moo!Wrapper got mixed in (notably, how to make sure that work done via Moo!Wrapper is also counted for TeAm AnandTech at distributed.net).

Moo!Wrapper is a bit special on multi-GPU hosts because one task will occupy all GPUs of the same architecture at once. On my PC with 3 GPUs of 2 different models (one a little slower than the other two), this means the faster GPUs sit idle for a short while near the end of each task. From what I understand, several dnetc blocks are packed into one Moo!Wrapper task; the task grabs all GPUs of the same architecture and then sends several blocks to each GPU. I have read that this can even be an issue on PCs with identical GPUs, as some tasks distribute their blocks unevenly over the GPUs. It is also problematic if other projects are meant to run on the same machine at the same time, because resource allocation between the projects gets messy.
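One partial workaround for the mixed-projects case, as a sketch only: the client-wide cc_config.xml can exclude specific GPUs from a project, e.g. to keep Moo!Wrapper off device 2 and leave that card to other projects. Untested on my side; use the project URL exactly as the client knows it, and the device numbering as listed in the event log at startup:

<cc_config>
   <options>
      <exclude_gpu>
         <url>http://moowrap.net/</url>
         <device_num>2</device_num>
      </exclude_gpu>
   </options>
</cc_config>

GPU exclusions take effect after a client restart.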

A posting from 2012 describes how to force one task per GPU on ATI GPUs:
http://moowrap.net/forum_thread.php?id=215&postid=2449
I haven't yet tried to adapt it to my NVIDIA GPUs.
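If it can be done with stock BOINC settings at all, the generic knob would be an app_config.xml in the moowrap.net project folder along these lines (untested here; "dnetc" is only my guess at the application name, and I do not know whether the wrapper would then actually confine itself to one GPU instead of grabbing all of them anyway):

<app_config>
   <app>
      <name>dnetc</name>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>0.05</cpu_usage>
      </gpu_versions>
   </app>
</app_config>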

Task duration on a single AMD W7000 is about half an hour; on 3 GTX 1080/1080 Ti it is somewhat over 7 minutes.
 
  • Like
Reactions: Ken g6

ocukguest

Member
Apr 15, 2017
30
10
36
tony hanluc isn't me no. im me...
on the other hand that sneaky bunker you have stashed away may need a little bit of counter power from our end.
 

crashtech

Lifer
Jan 4, 2013
10,524
2,111
146
I was having trouble with yoyo@home not giving other projects a turn. Had one PC dedicated to it alone for a while, but that's not fair to everything else.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,250
3,845
75
SRBase was an LLR project like PrimeGrid. It's been a while since I was able to connect to their website, though.
 

TennesseeTony

Elite Member
Aug 2, 2003
4,209
3,634
136
www.google.com
We won Gold there, remember? :sunglasses:
Too much all natural, liquid muscle relaxer I'm afraid.

tony hanluc isn't me no. im me...
on the other hand that sneaky bunker you have stashed away may need a little bit of counter power from our end.
I am nothing if not frank. (it's all for science!) Your lead has nothing to fear. Now Shhhh! don't let the other teams know for which project my little bunker is intended. But, if it takes OcUK away from crunching the other one, well, I guess that's a good thing for TeAm AnandTech. ;)
 

GLeeM

Elite Member
Apr 2, 2004
7,199
128
106
When are they due to release the names of the projects for the next two parts of the race?

Which is next - Sprint or Swimming?
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,250
3,845
75
Swimming will be announced tomorrow, because otherwise it would overrun the end of the race.

Edit: or should it have been announced today?
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,250
3,845
75
Apparently they're announcing at random times of day now.
 

Orange Kid

Elite Member
Oct 9, 1999
4,328
2,112
146
FYI on Collatz: I have only been able to get 100 tasks at a time, an hour or two's worth of work... so bunkering will be challenging on that one.
 

StefanR5R

Elite Member
Dec 10, 2016
5,510
7,816
136
[Marathon, Cosmology@Home]
Some data about credits/hour (1/24 PPD).[...]
Conclusion: camb_boinc2docker > planck_param_sims, credit wise. YMMV.

New data from the last 4x 20 validated WUs paint a drastically different picture about camb_boinc2docker credits/WU and credits/hour:

Ivy Bridge-E, planck_param_sims v2.04 (vbox64_mt) windows_x86_64
tasks downloaded on May 1, reported and validated on May 7
constantly 50 credits/WU
runtime min/ max/ avg = 315/ 945/ 488 s
min/ max/ avg 191/ 572/ 405 credits/hour (one 6-threaded task at a time)

mobile Haswell, planck_param_sims v2.04 (vbox64_mt) windows_x86_64
tasks downloaded on May 1, reported and validated on May 7
constantly 50 credits/WU
runtime min/ max/ avg = 444/ 1,256/ 779 s
min/ max/ avg 143/ 405/ 267 credits/hour (one 4-threaded task at a time)

Broadwell-E, planck_param_sims v2.04 (vbox64_mt) windows_x86_64
tasks downloaded, reported, and validated on May 6
constantly 50 credits/WU
runtime min/ max/ avg = 324/ 1,315/ 479 s
min/ max/ avg 2 * 137/ 555/ 429 credits/hour (two 5-threaded tasks at a time, together with six Einstein Nvidia feeders)

Broadwell-E, camb_boinc2docker v2.04 (vbox64_mt) windows_x86_64
tasks downloaded, reported, and validated on May 6
min/ max/ avg = 5/ 22/ 18 credits/WU
runtime min/ max/ avg = 69/ 316/ 249 s
min/ max/ avg 2 * 249/ 262/ 255 credits/hour (two 5-threaded tasks at a time, together with six Einstein Nvidia feeders)
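(For those wondering how these figures are derived: credits/hour is per task, i.e. credits per WU × 3,600 / runtime in seconds. For example, 50 credits in ~315 s works out to the ~570 credits/hour maximum in the first block above.)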

I still have not loaded camb_legacy onto any of these machines.

Conclusion: planck_param_sims > camb_boinc2docker, as far as credits go.
That's the opposite of the conclusion which I drew from the first few WUs in my earlier post.

Judging from that, credit allocation now matches the near-term scientific interest (whether intentionally or not), as Planck results are needed by the project scientists for a paper which is nearing publication. Of course there may be a large variance, but I am too lazy to check more of my results.

There is another downside of camb_boinc2docker: for me at least, planck WUs are validated much more quickly than camb WUs.
Ivy Bridge-E: 0.6 % validation backlog at planck_param_sims
Still working its way through its May 1 download.
State: All (1011) · In progress (222) · Validation pending (5) · Valid (781) · Error while computing (2) · manually aborted due to ever-increasing estimated time to completion (1)
Application: All (1011) · planck_param_sims (1011)

mobile Haswell: 0.0 % validation backlog at planck_param_sims
Also still munching on the May 1 download.
State: All (805) · In progress (153) · Validation pending (0) · Valid (619) · Error while computing (2) · manually aborted due to nearing deadline (31)
Application: All (805) · planck_param_sims (805)

Broadwell-E: 39 % validation backlog overall
0.0 % validation backlog at planck_param_sims
71 % validation backlog at camb_boinc2docker

May 1 download was pretty much completed and uploaded on May 5.
Working on a shallow queue now, with a few bubbles filled by WCG.
State: All (1726) · In progress (3) · Validation pending (679) · Valid (1028) · Error while computing (16)
Application: All (1726) · camb_boinc2docker (969) · planck_param_sims (757)
 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
5,510
7,816
136
The 4th discipline, Swimming, was announced.
[https://www.seti-germany.de/boinc_pentathlon/start.php]

Marathon: Cosmology@Home (CPU, virtualbox recommended)
Start ......... 2017-05-05 00:00:00 UTC
End .......... 2017-05-19 00:00:00 UTC

City Run: WCG OpenZika (CPU)
Start ......... 2017-05-05 00:00:00 UTC
End .......... 2017-05-10 00:00:00 UTC

Cross Country: Einstein@Home (GPU)
Start ......... 2017-05-09 00:00:00 UTC
End .......... 2017-05-14 00:00:00 UTC

Swimming: LHC@home (CPU, virtualbox recommended)
Start ......... 2017-05-12 00:00:00 UTC
End .......... 2017-05-19 00:00:00 UTC
CPU-only applications
Have a look at the FAQ.

Edit: The server status page says there are no SixTrack WUs available right now (SixTrack is the only application which does not require VirtualBox), only WUs for applications which depend on VirtualBox, most of them for the ATLAS application.

Edit 2: ATLAS is optionally multithreaded. More info in the LHC forum; follow the link to the app_config.xml posting. Also have a look at Yeti's checklist for ATLAS.

Edit 3: Multithreading is only recommended for ATLAS, not for the other vboxwrapper-based applications.
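As a rough sketch (untested; the app name and plan class below are what I would expect, so check the <app_version> entries in your client_state.xml for the actual values), an app_config.xml to run ATLAS with e.g. 4 threads per VM could look like this:

<app_config>
   <app_version>
      <app_name>ATLAS</app_name>
      <plan_class>vbox64_mt_mcore_atlas</plan_class>
      <!-- threads per ATLAS VM -->
      <avg_ncpus>4</avg_ncpus>
   </app_version>
</app_config>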
 
Last edited:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,250
3,845
75
What is this, the year of VBoxApps?

I don't suppose LHC validates any faster than Cosmology, does it? It may make sense to get off the Cosmology train at some point with so many pending WUs.
 

TennesseeTony

Elite Member
Aug 2, 2003
4,209
3,634
136
www.google.com
What is the web address to enter into the hosts file, to block LHC uploads, in the unlikely event we actually get any tasks?

The ratio of Cosmo to WCG on my machines is rather low, only 4 are allowed to run the VM's, and then those 4 are only using up to half the CPU. But you may be correct about the points not getting counted in time for the race.

I'm feeling bloated, I think I will go dump my WCG bunker now, as all the machines have finally finished the week of work. I didn't realize SG was reporting the scores based on WCG score, not BOINC points, so this is going to count for about 1.8 million!! :)
 

TennesseeTony

Elite Member
Aug 2, 2003
4,209
3,634
136
www.google.com
app_config.xml

<app_config>
   <app>
      <name>camb_boinc2docker</name>
      <!-- how many tasks you want to run at once goes here -->
      <max_concurrent>2</max_concurrent>
   </app>
   <app_version>
      <app_name>camb_boinc2docker</app_name>
      <plan_class>vbox64_mt</plan_class>
      <!-- this line specifies the number of threads per VM -->
      <avg_ncpus>1</avg_ncpus>
   </app_version>
</app_config>
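(This file goes into the Cosmology@Home project folder under the BOINC data directory, presumably projects\www.cosmologyathome.org, and takes effect after Options → Read config files in the BOINC Manager or a client restart.)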
 

StefanR5R

Elite Member
Dec 10, 2016
5,510
7,816
136
We will see WCG bunkers from all teams being unloaded on May 9. From some because they are naturally sneaky, but from more because they computed WCG and Einstein in the same client.

What is the web address to enter into the hosts file, to block LHC uploads, in the unlikely event we actually get any tasks?
I fetched four tasks for now. Searching for "upload_url" in client_state.xml shows
lhcathomeclassic.cern.ch
as the server name. I have not yet tried whether blocking it still lets the VMs compute.
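So the hosts file entry would presumably be

127.0.0.1 lhcathomeclassic.cern.ch

added to C:\Windows\System32\drivers\etc\hosts (or /etc/hosts on Linux), and removed again once the bunker is to be dumped.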

BTW, I incrementally changed my web preferences (applications to download, max number of tasks in flight) so that I downloaded the VM images of all of the current vboxwrapper applications: ATLAS, CMS, LHCb, Theory. Downloads are almost complete now, and the project subdirectory contains 4.2 GB.

It may make sense to get off the Cosmology train at some point with so many pending WUs.
Or at least disable reception of camb_boinc2docker at some point, as validation of planck_param_sims happens rather promptly even on the one of my machines with a shallow queue.
 
  • Like
Reactions: TennesseeTony

4thKor

Junior Member
Apr 7, 2017
21
16
36
What is the web address to enter into the hosts file, to block LHC uploads, in the unlikely event we actually get any tasks?

The ratio of Cosmo to WCG on my machines is rather low, only 4 are allowed to run the VM's, and then those 4 are only using up to half the CPU. But you may be correct about the points not getting counted in time for the race.

I'm feeling bloated, I think I will go dump my WCG bunker now, as all the machines have finally finished the week of work. I didn't realize SG was reporting the scores based on WCG score, not BOINC points, so this is going to count for about 1.8 million!! :)

Follow this thread: http://www.overclock.net/t/1627903/8th-boinc-pentathlon-swimming-lhc-home-project-support/50

Don't laugh too hard at my ignorance.

Hopefully I won't get shot for cahootenizing with you guys. :p
 
  • Like
Reactions: TennesseeTony

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,250
3,845
75
Or use <project_max_concurrent> as an application-independent limit.
I think that's the opposite of what I want. I want 4 WUs running at once. Not a maximum of one. Is there a project_min_concurrent?