PrimeGrid Races 2017

Discussion in 'Distributed Computing' started by Ken g6, Dec 29, 2016.

  1. GLeeM

    GLeeM Elite Member

    Joined:
    Apr 2, 2004
    Messages:
    7,001
    Likes Received:
    36
    Thanks for the stats, Ken :)
     
  2. Orange Kid

    Orange Kid Elite Member

    Joined:
    Oct 9, 1999
    Messages:
    3,120
    Likes Received:
    154
    Thanks for the stats updates :cool:
We'll get 'em next time! :)
     
  3. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    498
    Likes Received:
    179
    Must I be the first one to point out that our team rank is prime? :cocktail:

    As promised, xii5ku is being consigned to community service at WCG now. I sentenced him to 10 days crunching OpenZika on all of the nodes which he used during the PG race.
     
    TennesseeTony, bds71 and Ken g6 like this.
  4. geecee

    geecee Platinum Member

    Joined:
    Jan 14, 2003
    Messages:
    2,372
    Likes Received:
    8
    As always, thanks Ken for organizing and updating us.

    Surprisingly, the old X6 did ok crunching PG. Just uses a lot of power. :p
     
  5. crashtech

    crashtech Diamond Member

    Joined:
    Jan 4, 2013
    Messages:
    6,696
    Likes Received:
    254
    I missed this one, but come March 10th I'll have 12 Westmere cores ready to add to our team totals.
     
  6. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    498
    Likes Received:
    179
    Done. Total from Sat, Jan 15 to Fri, Jan 27: 14,723,857 WCG points = 2,103,408 BOINC credits
     
    Ken g6 and TennesseeTony like this.
  7. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    498
    Likes Received:
    179
    I get a 404 instead of the image. Here is the list in ASCII.
    Code:
        Date             Time UTC   Project(s)   Challenge                               Duration
    ---------------------------------------------------------------------------------------------
    1   3-13 January     18:00:00   GCW-Sieve    Isaac Newton's Birthday Challenge       10 days
    2   10-25 March *)   12:00:00   SoB-LLR      Year of the Fire Rooster Challenge      15 days
    3   7-22 April       12:00:00   PSP-LLR      Mathematics Awareness Month Challenge   15 days
    4   12-13 June       00:00:00   SGS-LLR      PrimeGrid's Birthday Challenge           1 day
    5   20-23 August     18:00:00   GCW-LLR      Solar Eclipse Challenge                  3 days
    6   3-8 September    18:00:00   321-LLR      Number Theory Week Challenge             5 days
    7   18-23 October    00:00:00   TRP-LLR      Diwali/Deepavali Challenge               5 days
    8   17-20 November   12:00:00   GFN-15       Pierre de Fermat's Birthday Challenge    3 days
                                    GFN-16
                                    GFN-17-Low
    9   18-21 December   16:28:00   PPS-Sieve    Winter Solstice Challenge                3 days
    
    Edit, March 6:
    *) Races # 2 and 3 postponed, as noted by Ken g6 below
    Edit, April 2:
    Race # 3 is a go, as originally scheduled.
     
    #57 StefanR5R, Feb 26, 2017
    Last edited: Apr 2, 2017
  8. Ken g6

    Ken g6 Programming Moderator, Elite Member
    Moderator

    Joined:
    Dec 11, 1999
    Messages:
    12,630
    Likes Received:
    291
    In case you were wondering, PrimeGrid races are indefinitely delayed. The next race will not start on March 10. April is likely also out. It's hard to say beyond that.

    Maybe we should try FormulaBOINC racing instead?
     
  9. Orange Kid

    Orange Kid Elite Member

    Joined:
    Oct 9, 1999
    Messages:
    3,120
    Likes Received:
    154
    Well, that is really the only time I run PrimeGrid :(
I guess I'll focus more on the FB 3-day sprints. It breaks up the monotony of just running a project here and there. Gives a little excitement. Guess I need a life o_O :eek:
     
    Ken g6 likes this.
  10. Ken g6

    Ken g6 Programming Moderator, Elite Member
    Moderator

    Joined:
    Dec 11, 1999
    Messages:
    12,630
    Likes Received:
    291
    Time to bump this thread for the next race, in less than a week.
     
  11. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    498
    Likes Received:
    179
    Interesting: LLR can apparently be run in multithreaded mode.
    It's a trade-off between multi-threading overhead and multi-processing overhead.

    Edit: I posted 2P Broadwell-EP results in the PrimeGrid CPU thread.
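To illustrate the trade-off with a toy model (all numbers below are made up purely for illustration; `base_hours` and `thread_efficiency` are assumptions, not measured PSP-LLR values):

```python
# Toy model of the multi-threading vs. multi-processing trade-off:
# more threads per task finish each task sooner but lose some efficiency
# to threading overhead; more concurrent single-thread tasks maximize
# raw throughput but each task takes much longer to turn around.
def tasks_per_day(cores, threads_per_task, base_hours=30.0,
                  thread_efficiency=0.9):
    """Estimate daily task throughput on `cores` cores when each task
    uses `threads_per_task` threads. `base_hours` is an assumed
    single-thread runtime; each additional thread is assumed to
    contribute only `thread_efficiency` of a full core."""
    effective_speed = 1 + (threads_per_task - 1) * thread_efficiency
    hours_per_task = base_hours / effective_speed
    concurrent = cores // threads_per_task
    return concurrent * 24.0 / hours_per_task

for t in (1, 2, 4):
    print(t, "threads/task:", round(tasks_per_day(4, t), 2), "tasks/day")
```

In this made-up model, total daily throughput drops slightly as threads per task go up, while each individual task finishes sooner; where the real optimum lies depends on cache and memory-bandwidth behavior, which this sketch ignores.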
     
    #61 StefanR5R, Apr 2, 2017
    Last edited: Apr 7, 2017
  12. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    498
    Likes Received:
    179
    Engaged two humble 4-core Haswells with PSP-LLR now.

    Against logic, I decided to leave the Broadwell-EPs at Einstein for a little more.
     
  13. Orange Kid

    Orange Kid Elite Member

    Joined:
    Oct 9, 1999
    Messages:
    3,120
    Likes Received:
    154
    Got a few cores going. :)
     
  14. Ken g6

    Ken g6 Programming Moderator, Elite Member
    Moderator

    Joined:
    Dec 11, 1999
    Messages:
    12,630
    Likes Received:
    291
    This one was an early start for me, but I got a few cores going too.
     
  15. Kiska

    Kiska Senior member

    Joined:
    Apr 4, 2012
    Messages:
    391
    Likes Received:
    48
Unfortunately, I don't think I'll be able to contribute any cores to this race, since it's all on Einstein right now
     
  16. Ken g6

    Ken g6 Programming Moderator, Elite Member
    Moderator

    Joined:
    Dec 11, 1999
    Messages:
    12,630
    Likes Received:
    291
    This is a 15-day race. The current FormulaBOINC race will end before this one does. (So will the next FormulaBOINC race.)
     
  17. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    498
    Likes Received:
    179
    OK, 4x14 cores went from Einstein to Sierpiński.

    Edit:
Also, a 10-core CPU that is currently carrying 6 GPU feeder tasks for Einstein is now running PSP-LLR on the remaining 4 cores, like so:
    Code:
<app_config>
   <app>
      <name>llrPSP</name>
      <!-- run at most two PSP-LLR tasks at once -->
      <max_concurrent>2</max_concurrent>
      <!-- use the app's exact progress for the remaining-time estimate -->
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <!-- pass -t 2 to LLR: two threads per task -->
      <cmdline>-t 2</cmdline>
      <!-- reserve two CPUs per task in the BOINC scheduler -->
      <avg_ncpus>2</avg_ncpus>
      <max_ncpus>2</max_ncpus>
   </app_version>
</app_config>
    
     
    #67 StefanR5R, Apr 7, 2017
    Last edited: Apr 8, 2017
  18. Orange Kid

    Orange Kid Elite Member

    Joined:
    Oct 9, 1999
    Messages:
    3,120
    Likes Received:
    154
One day down and only two to go till I finish some WUs. These things are as slow as molasses.:eek:o_O:)
     
  19. Ken g6

    Ken g6 Programming Moderator, Elite Member
    Moderator

    Joined:
    Dec 11, 1999
    Messages:
    12,630
    Likes Received:
    291
    Day 1.2 stats:

    Rank___Credits____Username
    11_____169145_____xii5ku

    Rank__Credits____Team
    5_____570957_____BOINC@Poland
    6_____398897_____Czech National Team
    7_____204596_____The Knights Who Say Ni!
    8_____169145_____TeAm AnandTech
    9_____157951_____US Navy
    10____67306______Special: Off-Topic
    11____65936______PrimeSearchTeam
    And that's why it's a 15-day race. :eek: That's also one reason why @StefanR5R's multithreading post is useful. Multithreading may be easier than juggling.
     
  20. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    498
    Likes Received:
    179
    Multithreading is also the major (if not only) reason why there are more than 0 credits in the day 1.2 stats. :sunglasses:

By the way, these WUs are not only looong, their runtimes are also unusually variable by PrimeGrid's standards. They seem to be bimodally distributed. This already confused me during the past week when I tested PSP-LLR multicore scaling.
     
  21. TennesseeTony

    TennesseeTony Elite Member

    Joined:
    Aug 2, 2003
    Messages:
    1,912
    Likes Received:
    361
    I'll be joining later, so be ye not dismayed at my absence.
     
    Ken g6 likes this.
  22. Ken g6

    Ken g6 Programming Moderator, Elite Member
    Moderator

    Joined:
    Dec 11, 1999
    Messages:
    12,630
    Likes Received:
    291
    Day 2.125 stats:

    Rank___Credits____Username
    7______697292_____xii5ku
    142____22926______Ken_g6

    Rank__Credits____Team
    4_____1969519____Czech National Team
    5_____1589216____BOINC@Poland
    6_____1187362____BOINC@MIXI
    7_____720219_____TeAm AnandTech
    8_____641068_____US Navy
    9_____636052_____The Knights Who Say Ni!
    10____623846_____Team 2ch

Is everybody but Stefan and me using slow computers and not multithreading?

    By the way, I turned on multithreading on my laptop yesterday, and it only ran one existing WU at a time. I had to change the config file to turn it back off temporarily.
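For anyone else who wants to revert, something like the following should do it (a sketch reusing the llrPSP app name and the -t switch from StefanR5R's earlier app_config.xml post; not necessarily the exact file Ken edited):

```xml
<app_config>
   <app_version>
      <app_name>llrPSP</app_name>
      <!-- back to one thread per task -->
      <cmdline>-t 1</cmdline>
      <avg_ncpus>1</avg_ncpus>
      <max_ncpus>1</max_ncpus>
   </app_version>
</app_config>
```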
     
  23. Orange Kid

    Orange Kid Elite Member

    Joined:
    Oct 9, 1999
    Messages:
    3,120
    Likes Received:
    154
    Yes.
    Can I switch to multicore in the middle of a WU?
Is this for real cores only, or can virtual cores be included? I.e., does my i5 count as two cores or four?
     
  24. Ken g6

    Ken g6 Programming Moderator, Elite Member
    Moderator

    Joined:
    Dec 11, 1999
    Messages:
    12,630
    Likes Received:
    291
    Switching in the middle of a WU seems to finish it on only one core. :( HT doesn't help either.
     
  25. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    498
    Likes Received:
    179
I am not sure anymore, but perhaps the same thing (an under-utilized host) happened to me during my experiments with this feature on the Xeons before the challenge.

    Right now all is fine:
    4 single-thread tasks on a 4C CPU,
    2 dual-thread tasks on another 4C,
    2 dual-thread tasks plus 6 e@h feeders on 10C (PrimeGrid forced to 2 tasks max as posted above, by means of the max_concurrent tag),
    1 four-thread task plus 3 e@h feeders on 6C/HT (PrimeGrid forced to 1 task max),
    14 dual-thread tasks on 2x14C,
4 seven-thread tasks on 2x14C.
    I will revisit how the two 2x14s do in terms of PPD after one or two more days and reconfigure as indicated. (One 4C and the 6C will have to leave PrimeGrid during the week due to their noise.)

    Note, omit max_concurrent in the app_config.xml if you do not want to specify a CPU limit in this way. (I guess Ken g6 is aware of that, just mentioning it for others who, like myself, do not deal with app_config.xml's day in, day out...)

    I switched once in the following way:
    • Had four single-threaded tasks running on a 4C,
    • wrote C:\ProgramData\BOINC\projects\www.primegrid.com\app_config.xml,
    • exited boincmgr and had it shut down the tasks at this point,
    • restarted boincmgr.
This resulted in two of the four tasks continuing from their previous progress percentage, but now running dual-threaded instead of single-threaded. The other two tasks waited their turn and were then also continued dual-threaded, without losing progress. To be sure, I later looked these tasks up on the PrimeGrid web site, and they were marked valid (3) or pending (1), not invalid or error.

I tested hyperthreading only with single-threaded tasks, and it was detrimental even on Linux (which I assume deals better with hyperthreading than Windows). My guess is that it is detrimental with multi-threaded PSP-LLR tasks too. (edit) Unfortunately, proving or disproving that by measurement takes a long time. But from what I read elsewhere, HT is generally discouraged with LLR.

(edit) That would be two cores on an i5-6200U: 2 single-threaded workers or 1 dual-threaded worker with HT off, or with HT on and total utilization forced to 50 %. Or twice as many with HT on at 100 %, but possibly performing worse overall.

    I switched hyperthreading off in the BIOS of all my machines, except in the two Einstein@Home feeders. (Which will soon leave E@H for other endeavors.)
     
    #75 StefanR5R, Apr 9, 2017
    Last edited: Apr 9, 2017