On the subject of Cosmology@Home's server struggling to validate tasks, seti.ger user nexiagsi16v writes (in German; errors and stylistic faults in the translation are mine):
If you look around at Cosmo, they normally grant ~3 million points per day. Yesterday and today it has been about four times that, ~12 M per day, which puts both days squarely into the project's top 5. The all-time maximum was 74 million, though that was back in 2009; no idea whether the credit allocation has changed since then. The second-best day was 30 M in 2011, the third 17 M in 2012, the fourth 15 M in 2013.
Planck WUs are in short supply right now; there are merely ~500 camb_legacy WUs left to be handed out.
Estimating from the current count of WUs in progress, those should be good for 70 million. I suspect someone is under a bit of stress at the moment.
I'm wondering if it might benefit us to suspend all current work (only for a few minutes), add any of the above projects that aren't in our lists yet, and download 3-5 days of work from them... just in case... then suspend that work and go back to the current race projects. Except GPUGrid: they reward you based on turnaround time and limit you to 2 tasks per card at a time.
Also have a look through the thread "RC-5/72 -- Two more days - Team Anandtech moves up". The thread is originally about the distributed.net RC5-72 project, but discussion about Moo!Wrapper got mixed in (notably, how to make sure that work done via Moo!Wrapper is also counted for TeAm AnandTech at distributed.net).
Moo!Wrapper is a bit special on multi-GPU hosts because one task will occupy all GPUs of the same architecture at once. On my PC with 3 GPUs of 2 different models (one a little slower than the other two), this means the faster GPUs sit idle for a short while near the end of a task. From what I understand, several dnetc blocks are packed into one Moo!Wrapper task; the task grabs all GPUs of the same architecture and then sends several blocks off to each GPU. I have read that this can even be an issue on PCs with identical GPUs, as some tasks may distribute blocks unevenly across the GPUs. It is also problematic if other projects are meant to run on the same machine simultaneously, because resource allocation between projects gets messy.
I was having trouble with yoyo@home not giving other projects a turn. Had one PC dedicated to it alone for a while, but that's not fair to everything else.
I am nothing if not frank. (it's all for science!) Your lead has nothing to fear. Now Shhhh! don't let the other teams know for which project my little bunker is intended. But, if it takes OcUK away from crunching the other one, well, I guess that's a good thing for TeAm AnandTech.
tasks downloaded on May 1, reported and validated on May 7
consistently 50 credits/WU
runtime min/ max/ avg = 315/ 945/ 488 s
min/ max/ avg 191/ 572/ 405 credits/hour (one 6-threaded task at a time)
mobile Haswell, planck_param_sims v2.04 (vbox64_mt) windows_x86_64
tasks downloaded on May 1, reported and validated on May 7
consistently 50 credits/WU
runtime min/ max/ avg = 444/ 1,256/ 779 s
min/ max/ avg 143/ 405/ 267 credits/hour (one 4-threaded task at a time)
tasks downloaded, reported, and validated on May 6
consistently 50 credits/WU
runtime min/ max/ avg = 324/ 1,315/ 479 s
min/ max/ avg 2 * 137/ 555/ 429 credits/hour (two 5-threaded tasks at a time, together with six Einstein Nvidia feeders)
tasks downloaded, reported, and validated on May 6
min/ max/ avg = 5/ 22/ 18 credits/WU
runtime min/ max/ avg = 69/ 316/ 249 s
min/ max/ avg 2 * 249/ 262/ 255 credits/hour (two 5-threaded tasks at a time, together with six Einstein Nvidia feeders)
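For reference, the credits/hour figures above are just the flat credit award scaled by runtime; a quick sketch of the arithmetic, using the runtimes from the first block above:

```python
def credits_per_hour(credits: float, runtime_s: float) -> float:
    """Credit award divided by runtime, expressed per hour."""
    return credits * 3600 / runtime_s

# With a flat 50 credits/WU, the longest runtime sets the minimum rate
# and the shortest runtime sets the maximum rate.
lo = credits_per_hour(50, 945)   # ~190 cr/h at the 945 s maximum runtime
hi = credits_per_hour(50, 315)   # ~571 cr/h at the 315 s minimum runtime
```

(The average rate is presumably the mean of the per-task rates, not 50 divided by the average runtime; that is why the quoted 405 cr/h doesn't equal 50 * 3600 / 488 ≈ 369.)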
I still haven't loaded camb_legacy onto any of these machines.
Conclusion: planck_param_sims > camb_boinc2docker, as far as credits go.
That's the opposite of the conclusion which I drew from the first few WUs in my earlier post.
Judging from that, credit allocation now matches the near-term scientific interest (whether intentionally or not), as Planck results are needed by the project scientists for a paper nearing publication. Of course there may be large variance, but I am too lazy to check more of my results.
There is another downside to camb_boinc2docker: for me at least, planck WUs are validated much more quickly than camb WUs.
Ivy Bridge-E: 0.6 % validation backlog at planck_param_sims
Still working its way through its May 1 download.
State: All (1011) · In progress (222) · Validation pending (5) · Valid (781) · Error while computing (2) · manually aborted due to ever-increasing estimated time to completion (1)
Application: All (1011) · planck_param_sims (1011)
mobile Haswell: 0.0 % validation backlog at planck_param_sims
Also still munching on the May 1 download.
State: All (805) · In progress (153) · Validation pending (0) · Valid (619) · Error while computing (2) · manually aborted due to the approaching deadline (31)
Application: All (805) · planck_param_sims (805)
Broadwell-E: 39 % validation backlog overall
0.0 % validation backlog at planck_param_sims
71 % validation backlog at camb_boinc2docker
May 1 download was pretty much completed and uploaded on May 5.
Working on a shallow queue now, with a few bubbles filled by WCG.
State: All (1726) · In progress (3) · Validation pending (679) · Valid (1028) · Error while computing (16)
Application: All (1726) · camb_boinc2docker (969) · planck_param_sims (757)
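The backlog percentages are simply "Validation pending" over the total task count; e.g. for the Broadwell-E overall figure:

```python
def backlog_pct(pending: int, total: int) -> float:
    """Share of tasks still waiting for validation, in percent."""
    return 100 * pending / total

print(round(backlog_pct(679, 1726)))  # -> 39, the Broadwell-E overall backlog
```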
Marathon: Cosmology@Home (CPU, virtualbox recommended)
Start ......... 2017-05-05 00:00:00 UTC
End .......... 2017-05-19 00:00:00 UTC
City Run: WCG OpenZika (CPU)
Start ......... 2017-05-05 00:00:00 UTC
End .......... 2017-05-10 00:00:00 UTC
Cross Country: Einstein@Home (GPU)
Start ......... 2017-05-09 00:00:00 UTC
End .......... 2017-05-14 00:00:00 UTC
Swimming: LHC@home (CPU, virtualbox recommended)
Start ......... 2017-05-12 00:00:00 UTC
End .......... 2017-05-19 00:00:00 UTC
CPU-only applications
Have a look at the FAQ.
Edit: server status says there are no SixTrack WUs available right now (SixTrack is the only application that does not require VirtualBox), only WUs for applications that depend on VirtualBox, most of them for the ATLAS application.
Edit 2: ATLAS is optionally multithreaded. More info in the LHC forum; follow the link to the app_config.xml posting. Also have a look at Yeti's checklist for ATLAS.
Edit 3: Multithreading is only recommended for ATLAS, not for the other vboxwrapper-based applications.
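If you want to pin the ATLAS thread count, an app_config.xml along the lines of the camb example further down should work. Caveat: the app name and plan class below are my guesses; copy the exact values from your own client_state.xml or the forum posting linked above.

```xml
<app_config>
  <app_version>
    <app_name>ATLAS</app_name> <!-- assumed name; verify in client_state.xml -->
    <plan_class>vbox64_mt_mcore_atlas</plan_class> <!-- assumed plan class; verify likewise -->
    <avg_ncpus>4</avg_ncpus> <!-- threads per ATLAS VM, illustrative -->
  </app_version>
</app_config>
```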
I don't suppose LHC validates any faster than Cosmology, does it? It may make sense to get off the Cosmology train at some point with so many pending WUs.
What is the web address to enter into the hosts file, to block LHC uploads, in the unlikely event we actually get any tasks?
The ratio of Cosmo to WCG on my machines is rather low: only 4 are allowed to run the VMs, and those 4 are only using up to half the CPU. But you may be correct about the points not getting counted in time for the race.
I'm feeling bloated, I think I will go dump my WCG bunker now, as all the machines have finally finished the week of work. I didn't realize SG was reporting the scores based on WCG score, not BOINC points, so this is going to count for about 1.8 million!!
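For anyone else caught out by this: WCG points are, as far as I know, seven times the BOINC credit figure, so the conversion is trivial:

```python
def wcg_points(boinc_credits: float) -> float:
    """World Community Grid points are 7x BOINC credits (to my knowledge)."""
    return boinc_credits * 7
```

So ~1.8 million WCG points corresponds to roughly 257 k BOINC credits.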
<app_config>
  <app>
    <name>camb_boinc2docker</name>
    <max_concurrent>N</max_concurrent> <!-- N = how many tasks you want to run at once -->
  </app>
  <app_version>
    <app_name>camb_boinc2docker</app_name>
    <plan_class>vbox64_mt</plan_class>
    <avg_ncpus>1</avg_ncpus> <!-- number of threads per VM -->
  </app_version>
</app_config>
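For example (numbers purely illustrative), to run two camb VMs with 4 threads each on an 8-thread host:

```xml
<app_config>
  <app>
    <name>camb_boinc2docker</name>
    <max_concurrent>2</max_concurrent> <!-- two VMs at once -->
  </app>
  <app_version>
    <app_name>camb_boinc2docker</app_name>
    <plan_class>vbox64_mt</plan_class>
    <avg_ncpus>4</avg_ncpus> <!-- 4 threads per VM -->
  </app_version>
</app_config>
```

Save it as app_config.xml in the Cosmology@Home project folder and have the client re-read config files, if I remember the BOINC mechanics right.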
We will see WCG bunkers from all teams being unloaded on May 9. From some because they are naturally sneaky, but from more because they computed WCG and Einstein in the same client.
I fetched four tasks for now. Looking for "upload_url" in client_state.xml shows
lhcathomeclassic.cern.ch
as server name. I have not tried yet whether blocking this still lets the VMs compute.
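So, assuming hostname-based blocking works here at all, the hosts entry would presumably be:

```
127.0.0.1 lhcathomeclassic.cern.ch
```

Whether the VMs keep computing with uploads black-holed like this is exactly what I haven't tested yet.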
BTW, I incrementally changed my web preferences (applications to download, max number of tasks to have in flight) so that I downloaded the VM images of all of the current vboxwrapper applications: ATLAS, CMS, LHCb, Theory. Downloads are almost complete now, and the project subdirectory contains 4.2 GB.
Or at least disable reception of camb_boinc2docker at some point, as validation of planck_param_sims happens rather promptly even on the one of my machines with a shallow queue.