Formula Boinc Sprints 2018


zzuupp

Lifer
Jul 6, 2008
14,863
2,319
126
I just aborted 4 ATLAS tasks that have been "postponed" for twelve hours. My preferences on the VM-able box are now SixTrack only.
 

Howdy

Senior member
Nov 12, 2017
572
480
136
I just aborted 4 ATLAS tasks that have been "postponed" for twelve hours. My preferences on the VM-able box are now SixTrack only.

I read somewhere (I believe on the LHC forums) that it is caused by the updated VirtualBox; people were rolling back to an earlier version. According to that thread (which I cannot find again), with the newest VirtualBox supplied with the new BOINC version, you need to restart BOINC to get ATLAS to continue when postponed. The easy fix is SixTrack, which is what I did too.
 

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
I just aborted 4 ATLAS tasks that have been "postponed" for twelve hours.
I haven't had this problem myself yet. Just that the tasks appear to the client to be running, but are actually using only a negligible amount of CPU for hours (perhaps days, if allowed to).

I have been running the other vbox-loving project, Cosmology@home, often and for extended duration. I was occasionally getting "postponed" tasks there too. But there it was manageable for me, because (a) run times of Cosmo vbox tasks are measured in minutes, not hours, and (b) I have a background script which automatically rids me of such "postponed" tasks.
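
For what it's worth, such a watchdog can be quite small. Below is a hypothetical sketch (not StefanR5R's actual script) built on boinccmd; the assumption that the word "postponed" shows up in --get_tasks output may not hold on every client version, so adjust the marker and the parsing to whatever your client actually prints.

Code:
#!/usr/bin/env python3
"""Hypothetical watchdog that aborts BOINC tasks stuck as "postponed".
Not the actual script from the post above -- just one way to do it with
boinccmd. Output field names vary by client version; adjust as needed."""
import re
import subprocess

BOINCCMD = "boinccmd"   # path to the boinccmd binary
MARKER = "postponed"    # assumption: this word appears in the task record

def get_tasks_raw():
    # 'boinccmd --get_tasks' prints one multi-line record per task,
    # each starting with a line like '1) -----------'.
    result = subprocess.run([BOINCCMD, "--get_tasks"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def parse_tasks(raw):
    # Pull the task name and project URL out of each record; both are
    # needed to address a task with 'boinccmd --task'.
    tasks = []
    for record in re.split(r"^\d+\) -+$", raw, flags=re.M):
        name = re.search(r"^\s*name: (\S+)", record, re.M)
        url = re.search(r"^\s*project URL: (\S+)", record, re.M)
        if name and url:
            tasks.append((url.group(1), name.group(1), record))
    return tasks

def main():
    for url, name, record in parse_tasks(get_tasks_raw()):
        if MARKER in record.lower():
            print("aborting", name)
            # real syntax: boinccmd --task <project_url> <task_name> abort
            subprocess.run([BOINCCMD, "--task", url, name, "abort"],
                           check=True)

if __name__ == "__main__":
    main()

Run it from cron every few minutes; it would be prudent to test with "suspend" in place of "abort" first.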

--------
It has been more than a year since I last ran LHC. I remember various problems with it from that time, but this time is different again for me, since I now have VirtualBox available on considerably more computers than last year.

From this return to LHC@home, I have learned:
  • I cannot run Theory/vbox because these tasks fall idle far too much. Host utilization is a joke.
  • I cannot run Atlas/vbox exclusively. The constraint here is not even that I lack the RAM to feed all CPUs with Atlas/vbox alone; worse than that, my internet connection cannot satisfy the extreme download bandwidth requirement that Atlas/vbox has (see the back-of-the-envelope sketch after this list). Effect: host utilization is very poor in this scenario too.
  • I cannot run Atlas/vbox in combination with other applications either. While an application mix effectively works around the RAM and networking bottlenecks, problem #1 manifests itself: Atlas/vbox tasks randomly fall idle too. It's a good thing that my hosts are practically overcommitted by means of Intel Hyperthreading, and thus the other applications still use CPUs while some Atlas tasks don't.
  • At least the LHC@home admins appear to have implemented an effective workaround against SixTrack failing with "exceeded elapsed time limit", which used to happen after the FLOPS estimation was corrupted by a series of tasks that complete within a few seconds. (There are still many short tasks, and the FLOPS estimation still gets corrupted by them, making task queue management impossible. It's just that this particular client-side cancellation of good tasks no longer happens, as far as I can tell from less than two days of observation.)
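To illustrate the networking point from the second bullet, here is a back-of-the-envelope calculation. All figures are made-up assumptions for illustration (the post gives no concrete numbers):

Code:
# Sustained downlink needed to keep a fleet busy with Atlas/vbox only.
# All inputs are illustrative assumptions, not measured values.
threads = 64            # assumed CPU threads across all hosts
mb_per_task = 1500      # assumed download volume per Atlas task
hours_per_task = 2.0    # assumed runtime per task

tasks_per_hour = threads / hours_per_task          # steady-state rate
mbit_per_s = tasks_per_hour * mb_per_task * 8 / 3600
print(f"~{tasks_per_hour:.0f} tasks/h -> ~{mbit_per_s:.0f} Mbit/s sustained")
# With these numbers: ~32 tasks/h -> ~107 Mbit/s, far beyond a typical
# residential downlink, hence the poor host utilization described above.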
To summarize: While I have the technical means to run Theory/vbox and Atlas/vbox, the fact alone that these tasks go idle at random means that I must not run any of LHC's vbox-based applications. SixTrack is a poor alternative: queue management is impossible, which is very bad if, for example, you run this quorum-2 application in a competition.

Now, it would be somewhat interesting to me whether Atlas/Linux-native is plagued by the same network bandwidth problem as Atlas/vbox. If I had holidays like you on the other side of the pond, I might have taken the time to play with it. I detest having to install a custom networked filesystem for this,* but on the other hand, for all I know, the vbox tasks may already be using that same networked filesystem.

*) Newsflash to the LHC devs: The BOINC client can transfer files too! And it can manage the bandwidth use, the number of concurrent transfers, and a daily/weekly schedule for networking.
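
For reference, the client-side knobs alluded to in the footnote do exist. A sketch with arbitrary example values (two separate files, global_prefs_override.xml and cc_config.xml, both in the BOINC data directory):

Code:
<!-- global_prefs_override.xml: bandwidth caps, usage limit, network window -->
<global_preferences>
    <max_bytes_sec_down>2000000</max_bytes_sec_down>   <!-- ~2 MB/s down -->
    <max_bytes_sec_up>500000</max_bytes_sec_up>        <!-- ~0.5 MB/s up -->
    <net_start_hour>22</net_start_hour>                <!-- network use only -->
    <net_end_hour>6</net_end_hour>                     <!-- 22:00 to 06:00 -->
    <daily_xfer_limit_mb>10000</daily_xfer_limit_mb>   <!-- at most 10 GB -->
    <daily_xfer_period_days>1</daily_xfer_period_days> <!-- per day -->
</global_preferences>

<!-- cc_config.xml: concurrent file transfers -->
<cc_config>
    <options>
        <max_file_xfers>4</max_file_xfers>
        <max_file_xfers_per_project>2</max_file_xfers_per_project>
    </options>
</cc_config>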
 

zzuupp

Lifer
Jul 6, 2008
14,863
2,319
126
I read somewhere (I believe on the LHC forums) that it is caused by the updated VirtualBox; people were rolling back to an earlier version. According to that thread (which I cannot find again), with the newest VirtualBox supplied with the new BOINC version, you need to restart BOINC to get ATLAS to continue when postponed. The easy fix is SixTrack, which is what I did too.


checklist thread from LHC: Number crunching

I think this would be it.
 

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
Obviously @emoga brought the right tools for the job!

LHC sprints of June 2017 and November 2018 in comparison:

all teams of all leagues combined: 9.47 M -> 24.09 M (+150 %)
TeAm AnandTech: 587 k -> 2,235 k (+280 %)
RKN: 284 k -> 1,707 k (+500 %)
P3D: 86 k -> 1,263 k (+1370 %)
LAF: 323 k -> 916 k (+180 %)
CNT: 843 k -> 851 k (+1 %)
OCN: 401 k -> 453 k (+13 %)
UBT: 292 k -> 308 k (+5 %)
SUSA: 586 k -> 63 k (-90 %)
--------

I believe I remarked last year, when we were battling the overclockers and EVGA in league 2, that we might win the occasional medal in 2018 even when promoted into league 1. I was wrong on two counts:
  • We ended up in the top three more often than not, in fact every single time since the 4th sprint. :dizzy: :grin:
  • No medals have been engraved since September 2017. :unamused:
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
What a year!
We operated like the Mercedes F1 team this year:
a bit of a teething problem at the beginning, but consistently improving, with podium finishes (and wins!) right up to the end of the season.
Congratulations everyone! Glad to be part of this wonderful year!
 

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
Oh, the engraver was busy over the holidays!

[Medal images, league 1: sprint 4 silver, sprint 5 gold, sprint 6 silver, sprint 7 bronze, sprint 8 silver, sprint 9 bronze, sprint 10 bronze, sprint 11 silver, sprint 12 silver, sprint 13 bronze, sprint 14 silver, sprint 15 silver, sprint 16 gold, sprint 17 (Einstein gold medal still missing), sprint 18 silver, sprint 19 silver, sprint 20 gold, sprint 21 silver]


Shiny... =)