PrimeGrid Challenges 2019

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Current challenge: Prime Sierpinski Problem (PSP) LLR, December 12-21 (04:19 UTC)

Happy new year! Here's the (tentative) list of this year's PrimeGrid challenges:

Code:
#  Date             Time UTC  Project  Duration  Challenge
-----------------------------------------------------------------------------------------------------------------
1   7-22 January    05:43:00  SoB-LLR  15 days   Conjunction of Venus & Jupiter Challenge
2   5-10 March      18:00:00  GCW-LLR  5 days    Year of the Pig(ging out on our CPU cycles :P) Challenge
3  24-31 May        00:00:00  TRP-LLR  7 days    Hans Ivar Riesel's 90th Birthday Challenge
4  15-20 July       20:17:00  PPS-LLR  5 days    50th Anniversary of the Moon Landing Challenge
5   3-10 August     00:00:00  ESP-LLR  7 days    Lennart Vogel Honorary Challenge
6  21-26 September  11:00:00  AP27     5 days    Oktoberfest Challenge
7  10-15 October    18:00:00  PPS-DIV  5 days    World Maths Day Challenge
8  24-29 October    00:00:00  321-LLR  5 days    50 years First ARPANET Connection Challenge
9   1-11 November   18:04:00  PSP-LLR  10 days   Transit of Mercury Across the Sun Challenge
10 12-22 December   04:19:00  GFN-21+  10 days   Aussie, Aussie, Aussie! Oi! Oi! Oi! Summer Solstice Challenge

What you need:
  • One or more fast x86 processors, preferably with lots of cores. (Even slow ones might do!)
  • Windows (Vista or later 64-bit, or XP or later 32-bit), Linux, or MacOS 10.4+.
  • BOINC, attached to PrimeGrid (http://www.primegrid.com/).
  • Your PrimeGrid Preferences with only the above project(s) selected in the Projects section.
  • Patience! All of these projects run long, slow WUs, at least on your CPU. As a result, no challenge is less than five days long. :eek:

What may help LLR (all but two of the challenges):
  • An Intel Sandy Bridge or later ("Core series" other than first-generation) processor with AVX may be 20-70% faster than with the default application. Sadly, that does not include Pentium or Celeron processors, or AMD processors.
  • In most challenges - probably all of these since their WUs are so large - it helps to enable multi-core processing with app_config.xml. Leave hyper-threading on if you do this!
  • Faster RAM might help on many challenges, as long as it's stable.
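For the multi-core setup mentioned above, here's a minimal app_config.xml sketch. The app name llrPSP and the thread count are placeholders, not definitive values; use the LLR app name of the subproject you're actually running, and match -t to your physical core count:

```xml
<app_config>
  <app_version>
    <!-- placeholder: use the LLR app name of your chosen subproject -->
    <app_name>llrPSP</app_name>
    <!-- run each task on 4 threads; match this to your physical cores -->
    <cmdline>-t 4</cmdline>
    <avg_ncpus>4</avg_ncpus>
  </app_version>
</app_config>
```

Put it in the PrimeGrid project directory and re-read config files (or restart the client) for it to take effect.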
What may help in other challenges:
  • A GPU helps in two challenges.
  • Juggling in some extra WUs may help in challenges where you run more than one WU on the CPU at a time. (Or, switching to use all cores on one WU at the end may work equally well.)
  • Turning on hyper-threading may help.

What won't help (but won't hurt either):
  • A large amount of RAM.
  • Any Android devices.

What won't help (and will hurt, sort of):
  • Unstable processors. (Invalid work will be deducted! :eek: If Prime95 worked recently on your processor, it should be stable.)
  • Work not downloaded and uploaded within the challenge. (It's not counted.) If you won't be able to be in front of one or more computers at that time, there are several options:
    • You can often set BOINC's network connection preferences to wait until a minute or two after the challenge starts.
    • For short work units, you can just set the queue level very low (0.01 days). This also makes it more likely that you will be a prime finder rather than a double-checker. But you might want to raise the queue size again after the challenge is underway.

Welcome and good luck to all! :)

P.S. If no one has posted stats lately, try tracking your stats with my user script. With that installed, visit the current challenge's Team stats link for TeAm stats.
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
The PrimeGrid preferences page shows these recent average CPU times:
SoB-LLR: 363 h
PSP-LLR: 160 h
WOO-LLR: 118 h
CUL-LLR: 116 h
All other LLR based subprojects currently require fewer CPU hours.

(Furthermore, but not directly comparable with LLR:
GFN-20/CPU: 653 h
GFN-21/CPU: 231 h
IIRC this is because GFN-21 was updated to make use of FMA/AVX2, but GFN-20 was not.)
 

zzuupp

Lifer
Jul 6, 2008
14,863
2,319
126
T minus 24 hours (ish)

I'm feeling too lazy/tired/Saturday'd/Jaegermeistered to read.
So, without hyperthreading, would this give an Intel i7 all four cores, or should I switch to eight threads and leave hyperthreading on???


<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
 
  • Like
Reactions: Orange Kid

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
If it is a 4-core/8-thread i7 with dual-channel RAM, then you should
  • run only one llrSOB task at a time,
  • configure llrSOB to use 4 threads per task.
This is what gave me the best run time as well as the best throughput on an i7-7700K with dual-channel DDR4-3000. I tested various numbers of simultaneous tasks and various numbers of threads per task. To make my tests comparable to each other, I always ran the same WU (but only until it reached 1 % completion). I ran Linux and left HyperThreading enabled.

Here is what I would do:
<app_config>
<app_version>
<app_name>llrSOB</app_name>
<cmdline>-t 4</cmdline>
<avg_ncpus>1</avg_ncpus> <!-- set boinc-client to "Use at most 1 % of the CPUs" -->
</app_version>
</app_config>

When it comes to multithreaded applications, this is IMO the best way to tell the client how many tasks to run in parallel, because it also tells it how many new tasks to request when fetching work. (An <avg_ncpus> value >1 affects the client's decision about how many tasks to run at a time, but it does not affect its requests for new work. Those requests happen as if all work were single-threaded.)

If you want to run GPU tasks in parallel, set the allowed CPU percentage a bit higher, e.g. 13 %. However, if you want best llrSOB throughput, then don't run GPU work in parallel. llrSOB alone uses the CPU's caches and RAM controller fully; if there is also a GPU task fighting for these, performance will go down.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,328
4,913
136
Per the preferences page on PrimeGrid regarding SoB-LLR:
NOTE: We are double checking old SoB work and tasks may be much shorter than typical SoB tasks.

Doesn't really give an idea of how much shorter, however...
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
The note is outdated. Mentally replace "much shorter" by "somewhat shorter*, yet humongously large".

SoB-LLR tasks are among the longest that PG offers: only GFN-22 and GFN-21 look at larger primes, and SoB-LLR is after the largest primes of all LLR based subprojects. See the table on the home page, which lists the digit count and the prospective ranking within the T5K list.

According to the "Edit PrimeGrid preferences" web page, the recent average CPU time is 340 hours. Michael Goetz posted a database dump of recent run-times (which may contain random errors; beware). You will not get shorter tasks than these anymore; they will only grow slowly but steadily larger, as we are used to from other subprojects.

*) As to what they are (somewhat) shorter than:

PG's SoB-LLR project once had its leading edge in the middle of the n = 31M range, then stopped there and went back to n = 7M to double-check results of the defunct Seventeen or Bust project. In the first phase of the double check, PG re-ran ranges for which residues from Seventeen or Bust were known, so normally only one task per WU needed to be re-run. Basically, the PG users acted as wingmen for former SoB users. This phase went up to n < 28M and was finished recently.
They are now at n = 28,637,974 (January 2), in the longer second and final phase of the double check. This phase covers ranges for which no records of residues were preserved from Seventeen or Bust, meaning we are back to the usual 2+ tasks per WU. (No wingmen from the former SoB project here.)
This stage will still take a while, and will end just under n = 31M.
https://www.primegrid.com/forum_thread.php?id=7356

So to summarize, in spring 2017, n = 31M WUs were still in circulation, but they jumped down to n = 7M; and now we are near n = 29M again.
 
  • Like
Reactions: Ken g6 and IEC

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
The first result returned in the first challenge of the year came from Amazon EC2. Perhaps this foretells something about this challenge season, perhaps not.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Day 1 stats:

Rank___Credits____Username
3______273519_____xii5ku
20_____58277______Ken_g6
36_____52262______iwajabitw

Rank__Credits____Team
2_____593676_____Sicituradastra.
3_____488403_____SETI.Germany
4_____430653_____Aggie The Pew
5_____384059_____TeAm AnandTech
6_____320608_____Crunching@EVGA
7_____279046_____BOINC@MIXI
8_____211457_____Rechenkraft.net

Looks like not many of us have enough cores and/or fast enough RAM to get a WU done in a day.
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
They are now [...] in the longer second and final phase of the double check. This phase covers ranges for which no records of residues were preserved from Seventeen or Bust, meaning we are at the usual 2+ tasks per WU. (No wingmen from the former SoB project here.)
Correction:
I misinterpreted the progress reports on the double check. In the current phase, most, but not all, records of residues from the original SoB project were lost. There will still be occasional WUs which already contain imported results from the original SoB project.

The top post of the challenge thread states:
Roger said:
A brief word about the SOB tasks that will be running in the challenge:
  • Around 30% of the tasks will be double checks against one or more residues from the original Seventeen or Bust project. Expect these tasks to validate almost immediately, unless your result doesn't match the residue.
(The other 70% of the tasks during this challenge will be double checks too, but only with actual PrimeGrid contributors as wingmen. The fact that only double check tasks will be available means that the chance of finding a prime is extraordinarily low, since only primes which the old SoB project missed by mistake are left to find in the current range of WUs.)
 

zzuupp

Lifer
Jul 6, 2008
14,863
2,319
126
Correction:
I misinterpreted the progress reports on the double check. In the current phase, most, but not all, records of residues from the original SoB project were lost. There will still be occasional WUs which already contain imported results from the original SoB project.

The top post of the challenge thread states:

(The other 70% of the tasks during this challenge will be double checks too, but only with actual PrimeGrid contributors as wingmen. The fact that only double check tasks will be available means that the chance of finding a prime is extraordinarily low, since only primes which the old SoB project missed by mistake are left to find in the current range of WUs.)

This makes sense.

I've completed two WUs. One was validated immediately: its wingman was an imported double-check result.

The other is waiting on a real person.
 
  • Like
Reactions: Ken g6

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Day 2 stats:

Rank___Credits____Username
18_____547911_____xii5ku
34_____221425_____Ken_g6
48_____156731_____emoga
60_____110456_____iwajabitw
75_____104611_____zzuupp

Rank__Credits____Team
6_____1296439____Rechenkraft.net
7_____1279193____The Knights Who Say Ni!
8_____1250488____BOINC@MIXI
9_____1141136____TeAm AnandTech
10____613043_____AMD Users
11____437419_____Ukraine
12____390118_____Team 2ch

Good, more TeAmmates! I hope some of you are faster than you look so far; otherwise we may not rank well in this race. On the other hand, to anyone not racing yet, there's plenty of time to join! ;)
 

crashtech

Lifer
Jan 4, 2013
10,523
2,111
146
I started a little late, and then a day later, realized the relevant section of the app_config that had been pasted into all of my clients contained an error... :oops:
 
  • Like
Reactions: zzuupp

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Day 3 stats:

Rank___Credits____Username
17_____993909_____xii5ku
29_____633141_____emoga
31_____555941_____crashtech
38_____443914_____Ken_g6
60_____226634_____iwajabitw
77_____209202_____zzuupp

Rank__Credits____Team
3_____7478738____Sicituradastra.
4_____7454146____SETI.Germany
5_____3172606____Crunching@EVGA
6_____3062743____TeAm AnandTech
7_____3027983____Rechenkraft.net
8_____2231572____BOINC@MIXI
9_____2199137____The Knights Who Say Ni!

I started a little late, and then a day later, realized the relevant section of the app_config that had been pasted into all of my clients contained an error... :oops:
Yes, I noticed somebody "crashed" through the pack and into third place. ;) You helped turn that 9 from yesterday upside down.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Day 4 stats:

Rank___Credits____Username
19_____1433440____xii5ku
28_____1016296____emoga
34_____830630_____crashtech
45_____555503_____Ken_g6
68_____331196_____iwajabitw
75_____319662_____zzuupp
172____52915______Howdy2u2

Rank__Credits____Team
4_____10942999___Sicituradastra.
5_____4655828____Rechenkraft.net
6_____4654974____Crunching@EVGA
7_____4539644____TeAm AnandTech
8_____3287295____BOINC@MIXI
9_____3277965____The Knights Who Say Ni!
10____2391148____AMD Users

Just a little more power might get us a couple more rankings.

Oh, howdy, @Howdy2u2. ;)
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,328
4,913
136
Okay, got it up and running on all rigs. Pretty sure the WUs aren't directly comparable, but here's the expected total runtime for my various rigs. I just set them all to 1 task with thread # = # of physical cores.

R7 2700X 8t - 25h
R7 1700 8t - 30h
i7 8700K 6t - 31h
TR 1900X 8t - 35h
TR 1920X 12t - 20h
TR 1950X 16t - 13h
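The WUs aren't strictly comparable, as noted, but as a back-of-the-envelope sketch (assuming one task at a time per rig and that those runtime estimates hold), the number of tasks each rig can return within the 15-day SoB-LLR window is just 15 × 24 hours divided by the per-task runtime:

```python
# Rough tasks-per-challenge estimate from the per-rig runtimes quoted above,
# assuming one task at a time and steady runtimes (both are simplifications).
CHALLENGE_HOURS = 15 * 24  # SoB-LLR challenge runs 15 days

runtimes = {  # hours per task, per rig
    "R7 2700X": 25,
    "R7 1700": 30,
    "i7 8700K": 31,
    "TR 1900X": 35,
    "TR 1920X": 20,
    "TR 1950X": 13,
}

for rig, hours in runtimes.items():
    print(f"{rig}: ~{CHALLENGE_HOURS // hours} tasks")
```

In practice a partially finished task at the deadline returns nothing, so starting your last WU with enough time left matters.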
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Day 5 stats:

Rank___Credits____Username
18_____1862297____xii5ku
26_____1379696____crashtech
30_____1225658____emoga
44_____773086_____Ken_g6
79_____435588_____iwajabitw
89_____371967_____zzuupp
132____209992_____Howdy2u2
172____105394_____Orange Kid

Rank__Credits____Team
3_____16359472___SETI.Germany
4_____14058237___Sicituradastra.
5_____6824162____Rechenkraft.net
6_____6363682____TeAm AnandTech
7_____6262548____Crunching@EVGA
8_____4587649____The Knights Who Say Ni!
9_____4108706____BOINC@MIXI


O.K., somebody else joined us, and we're back at 6th. Thanks, O.K. ;)
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
A personal observation:
  • During the first 5 days ( = first half) of the 12/2018 challenge at GFN-21, I drew 2 kW with 3 computers with 16 nm GPUs and ranked at the bottom of the top 20, with a trend toward dropping out of the top 20. (During the rest of that challenge, I drew 4 kW with 8 computers with 16 nm GPUs and 14 nm CPUs and finally ended up at rank 8.)
  • Now, during the first 5 days ( = first third) of the ongoing SOB-LLR challenge, I am drawing less than 0.47 kW with 1 computer with 14 nm CPUs, but am ranking at the bottom of the top 20 too! (Likewise with a tendency toward dropping out of the top 20.)

And a global observation:
  • top 6 teams after the first 5 days of the 01/2018 SOB-LLR challenge:
1 ..... 25,426,621 ..... Aggie The Pew
2 ..... 20,578,744 ..... Sicituradastra.
3 ..... 18,762,755 ..... Czech National Team
4 ..... 17,980,174 ..... SETI.Germany
5 ....... 9,564,152 ..... Crunching@EVGA
6 ....... 5,412,270 ..... TeAm AnandTech

total between these 6 teams: 98 M
  • top 6 teams after the first 5 days of the 01/2019 SOB-LLR challenge:
1 ..... 21,170,926 ..... Czech National Team
2 ..... 20,340,525 ..... Aggie The Pew
3 ..... 16,359,472 ..... SETI.Germany
4 ..... 14,058,237 ..... Sicituradastra.
5 ....... 6,824,162 ..... Rechenkraft.net
6 ....... 6,363,682 ..... TeAm AnandTech

total between these 6 teams: 85 M
 

Orange Kid

Elite Member
Oct 9, 1999
4,327
2,112
146
I have only my slowest two going on this so don't expect much.
The faster ones seem to keep losing connectivity with the router. One is wired and one is wireless, so I have no clue at the moment what the problem is. I'm not going to run anything on them for a while and will see what happens. Will be switching to Linux.