PrimeGrid Challenges 2023

Ken g6

Programming Moderator, Elite Member
All of PrimeGrid's 2023 challenges will take place in the Chinese Year of the Rabbit. So although I'm likely to be beaten by people as Pokey as a tortoise, let's hop to it!

#  Date                     Time (UTC)  Project(s)              Challenge                                 Duration
1  22-25 January            12:00:00    PPS-MEGA, GFN-17-MEGA   A Prime Chinese New Year Challenge        3 days
2  8-9 March                15:00:00    SGS                     International Women's Day Challenge       1 day
3  16-23 April              16:00:00    PSP                     Gotthold Eisenstein's Birthday Challenge  7 days
4  19-24 June               20:00:00    321                     Blaise Pascal's Birthday Challenge        5 days
5  8-15 July                21:00:00    CW-Sieve                Math 2.0 Day Challenge                    7 days
6  15-18 August             02:00:00    AP-27                   Chant at the Moon Day Challenge           3 days
7  13-23 September          11:00:00    SoB                     World Peace Day Challenge                 10 days
8  31 October - 5 November  13:00:00    ESP                     Christmas Challenge                       5 days
9  10-20 December           19:00:00    GFN-18, GFN-19, GFN-20  Chris Caldwell Honorary Challenge         10 days

Edit: I just found the attached guide on Discord on 10/3. Cache sizes may not be accurate.
 

Attachment: pg_ht.png (18.7 KB)

StefanR5R

Elite Member
Here are the data allocation sizes — i.e. processor cache demand per task — at the current leading edge of each of the challenge projects (of the CPU-only application versions, not of GPU applications). I checked on a Haswell CPU with FMA3 support.

Keep in mind that data allocations grow larger as the leading edge progresses at LLR-based projects. (But presumably not anytime soon at GFN-based projects.) A rough cache-sizing sketch follows the list below.

22-25 January
PPS-MEGA: 2.25 MB
GFN-17-MEGA: 2.63 MB
GFN-17-MEGA is also available on GPUs.

8-9 March
SGS-LLR: 1.0 MB

16-23 April
PSP-LLR: 22.5 MB

19-24 June
321-LLR: 8.75 MB

8-15 July
CW-Sieve: This project has yet to start; the application and workunits are not available yet.
Edit: This is a GPU-only project.

15-18 August
AP-27: I haven't checked this one yet.
AP-27 is also available on GPUs.

13-23 September
SoB-LLR: 30.0 MB

31 October - 5 November
ESP-LLR: 18.0 MB

10-20 December
GFN-18: 5.07 MB
GFN-19: 10.1 MB
GFN-20: 20.3 MB
GFN-18…20 are also available on GPUs.
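Here is the sizing sketch mentioned above, just as an illustration for planning how many of these tasks fit in a CPU's cache at once. The 20 MB L3 below is an assumed example, not a measurement; the per-task sizes are the January figures from the list above.

[CODE]
# Rough sizing sketch: how many concurrent tasks fit in the last-level cache
# before their working sets start spilling to main memory.
# The 20 MB L3 is only an assumed example; adjust it for your own CPU.

per_task_mb = {
    "PPS-MEGA": 2.25,      # per-task data allocation from the list above
    "GFN-17-MEGA": 2.63,
}

llc_mb = 20.0  # assumed example last-level cache size in MB

for project, size_mb in per_task_mb.items():
    tasks_in_cache = int(llc_mb // size_mb)
    print(f"{project}: {size_mb} MB/task -> ~{tasks_in_cache} tasks fit in a {llc_mb:.0f} MB L3")
[/CODE]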
 

waffleironhead

Diamond Member
In my initial testing, GFN-17 on CPU is giving more PPD than PPS-MEGA.

GFN: ~4190 s / 486.14 pts
PPS: ~4952 s / 306.79 pts

YMMV, but I'm going all in on GFN only.
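If you want to redo the comparison with your own timings, here's a quick points-per-day sketch using the numbers above (per-thread figures; your run times will differ):

[CODE]
# Quick per-thread points-per-day (PPD) comparison from the run times and
# credits quoted above; swap in your own numbers.
SECONDS_PER_DAY = 86_400

def ppd(seconds_per_task: float, points_per_task: float) -> float:
    return points_per_task * SECONDS_PER_DAY / seconds_per_task

print(f"GFN-17 (CPU): {ppd(4190, 486.14):,.0f} PPD per thread")
print(f"PPS-MEGA:     {ppd(4952, 306.79):,.0f} PPD per thread")
[/CODE]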
 

StefanR5R

Elite Member
I just tried two random tasks on a Haswell CPU and saw more points/second with cpuGFN17MEGA compared with llrMEGA, too.

I have yet to try on a Xeon with 2.5 MB L3 cache per core, which might turn the tables. But then again, it's an old Xeon which I might end up leaving off due to its power consumption.

Edit: nope, it's the same on the Xeon; better PPD and also better PPD/W with cpuGFN17MEGA relative to llrMEGA.
 

StefanR5R

Elite Member
Relative points-per-Joule of my computers, task energy measured "at the wall":

Epyc Rome .................. 100 %
Xeon Broadwell-EP ...... 40 %
GTX 1080Ti .................. 20 % :-O​

[Edit: higher = better]

The 1080Ti computers are already built with low system power overhead in mind: two GPUs sit in a single computer with an Intel Z270 mainboard, some unused board components switched off, and a Kaby Lake CPU with turbo disabled. I didn't play with different settings of the 1080Ti itself to optimize efficiency; I merely tested them at a 220 W board power limit for now.
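For reference, the metric is simply credit divided by energy drawn at the wall. A minimal sketch follows; the wattages and run times in it are made-up placeholders, not my measurements.

[CODE]
# Points per joule = credit per task / (wall power [W] * run time [s]).
# All numbers below are placeholders for illustration only.

def points_per_joule(points_per_task: float, watts_at_wall: float,
                     seconds_per_task: float) -> float:
    return points_per_task / (watts_at_wall * seconds_per_task)

baseline  = points_per_joule(486.0, 300.0, 600.0)   # placeholder host A
candidate = points_per_joule(486.0, 250.0, 1800.0)  # placeholder host B

print(f"host B runs at {100 * candidate / baseline:.0f} % of host A's points per joule")
[/CODE]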
 

Skillz

Senior member
Attach your computers and let's get rolling. The challenge started 25 minutes ago.
 

StefanR5R

Elite Member
I re-tested Broadwell-EP with Turbo Boost disabled. This improved power efficiency from 40 % to 44 %, relative to Epyc Rome.
________________________________________

For a moment there I saw a lot of files queued up for upload, so I was starting to worry. But I did some math and found that I'm fine with regard to upload speed:

Each result of a GFN-17 "main task" comes with 2 files to upload: one "proof of work" of 4,194,380 bytes, and the actual result, a plain-text file of 144-145 bytes.

Neglecting the overhead of establishing each new HTTP connection (which should be negligible if several transfers happen concurrently), each 1 Mbit/s of upload bandwidth allows roughly 2,575 results per day. At the current 486.3 points per GFN-17 main-task result (credit per result will slowly increase over time), 1 Mbit/s of upload bandwidth therefore sustains about 1.25 MPPD.

My own upload link rate isn't great, but it is many times 1 Mbit/s, which leaves plenty of headroom above what my production will need.
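The same arithmetic as a small sketch, for anyone who wants to plug in their own uplink speed (file sizes and credit per result are the figures above):

[CODE]
# Upload-bandwidth sanity check for GFN-17 main-task results.
# Per result: one ~4,194,380-byte proof file plus one ~145-byte result file.
BYTES_PER_RESULT = 4_194_380 + 145
SECONDS_PER_DAY = 86_400

def results_per_day(uplink_mbit_per_s: float) -> float:
    bytes_per_day = uplink_mbit_per_s * 1e6 / 8 * SECONDS_PER_DAY
    return bytes_per_day / BYTES_PER_RESULT

def upload_limited_ppd(uplink_mbit_per_s: float, points_per_result: float = 486.3) -> float:
    return results_per_day(uplink_mbit_per_s) * points_per_result

print(f"1 Mbit/s uplink: ~{results_per_day(1):,.0f} results/day, "
      f"~{upload_limited_ppd(1):,.0f} PPD upload-limited")
[/CODE]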
 

Skillz

Senior member
I'm just glad @Icecold found some computers to toss in the challenge.

It's good to know they aren't lost in some box somewhere. Ya know?
 

TennesseeTony

Elite Member
The weather looks dreary for a few days here. I had to switch to air conditioning a time or two for the Folding challenge. It is more satisfying to run a space heater when it is actually needed. :)
 

Ken g6

Programming Moderator, Elite Member
Day 0.5 stats (since I'm asleep for the start and end of this challenge):

Rank___Credits____Username
2______3441505____crashtech
7______1894316____[TA]Skillz
11_____1467242____xii5ku
14_____1299756____w a h
22_____987658_____Pokey
36_____559650_____biodoc
39_____531192_____Orange Kid
43_____468849_____mmonnin
50_____437876_____cellarnoise2
71_____300783_____parsnip soup in a clay bowl
83_____252899_____markfw
96_____226222_____10esseeTony
121____181095_____Icecold
138____141365_____Skivelitis2
167____95366______kiska
178____82658______waffleironhead
195____70893______Letin Noxe
196____70166______Ken_g6
213____57532______mnelsonx
250____33630______johnnevermind
381____4146_______geecee

Rank__Credits____Team
1_____12602864___TeAm AnandTech
2_____8890693____Antarctic Crunchers
3_____8437870____SETI.Germany
4_____7852804____Czech National Team

Lots of new (or surprising) names in this challenge! I wonder if some don't even know the challenge is going?
 

TennesseeTony

Elite Member
Aug 2, 2003
4,209
3,634
136
www.google.com
Is it correct to assume that pending tasks get counted (initially) for the race? (A quick calculator session says yes; why bother these folks with this question!?)

(Thanks, as always, to anyone who tracks our progress during an event!)
 

Skillz

Senior member
Yes, all pendings count towards the challenge.

If a pending task fails after the end of the challenge, then its points will be removed from that user, and from their team if they are on one.
 

Ken g6

Programming Moderator, Elite Member
Correct. But pendings should practically never fail with the current versions of LLR and Genefer. (There might be a few edge cases, but they should be extremely rare.)
 

Ken g6

Programming Moderator, Elite Member
Day ~1.7 stats:

Rank___Credits____Username
1______12357113___crashtech
7______6650837____[TA]Skillz
12_____5527591____xii5ku
14_____4452389____w a h
21_____3908981____Pokey
24_____3560413____markfw
33_____2869673____mmonnin
34_____2844119____cellarnoise2
44_____1831660____Orange Kid
46_____1706180____10esseeTony
57_____1433024____parsnip soup in a clay bowl
75_____1046268____biodoc
124____595426_____Icecold
127____584267_____waffleironhead
140____487033_____Skivelitis2
168____351926_____Letin Noxe
203____253119_____Ken_g6
212____236510_____mnelsonx
217____223902_____kiska
239____167210_____johnnevermind
349____50962______Mardis
381____30021______geecee

Rank__Credits____Team
1_____51145146___TeAm AnandTech
2_____32791129___SETI.Germany
3_____32179502___Czech National Team
4_____30659637___Antarctic Crunchers

I slightly missed the halfway point.
 

Letin Noxe

Junior Member
StefanR5R said: "I re-tested Broadwell-EP with Turbo Boost disabled. This improved power efficiency from 40 % to 44 %, relative to Epyc Rome."
May I ask which model of Xeon you chose: many cores at a low frequency, or fewer cores at a high frequency?
 

StefanR5R

Elite Member
Letin Noxe said: "May I ask which model of Xeon you chose: many cores at a low frequency, or fewer cores at a high frequency?"
Before I got into the DC hobby, I built two dual-socket Xeon computers for my day job with occasional heavy physics simulations. Moderate core count with good frequency, combined with high memory bandwidth per core, was best suited for the purpose, so back then I chose the E5-2690 v4 (14-core Broadwell-EP from the higher-TDP tier).

After I got into Distributed Computing (which was partially motivated by these two dual-socket computers sitting idle periodically), I added two basically identical builds, but with second-hand¹ E5-2696 v4 (the OEM variant of the top-tier 22-core Broadwell-EP). These two additional computers were dedicated to DC alone from the start. Since DC projects naturally scale very well, the E5-2696 v4's always had noticeably higher throughput and better power efficiency than the E5-2690 v4's. Later, when I no longer needed the first two dual-socket computers for my job, I upgraded them to second-hand E5-2696 v4's too.

________
¹) I am generally very hesitant to buy second-hand. But back then, a certain @TennesseeTony made me aware of some offers from solid sellers at decent prices, relative to what was available at the time.
 

Letin Noxe

Junior Member
StefanR5R said: "Before I got into the DC hobby, I built two dual-socket Xeon computers for my day job with occasional heavy physics simulations. [...] Later, when I no longer needed the first two dual-socket computers for my job, I upgraded them to second-hand E5-2696 v4's too."

I use dual E5-2680 v2 (AVX), dual E5-2640 v4 (AVX2), dual Gold 6154 (AVX-512), and dual 5218R (AVX-512 x2/core) servers, plus an EPYC Rome 7H12 (AVX2), for (geo)physics computations. Some Xeon Phi (RIP), Titan, Titan V, V100, ... too. I used to dive into iDRAC and the pizza boxes, but I was never allowed to try DC on them, which is too bad. Thank you for sharing your experience. Indeed, it seems that decommissioned datacenter servers are becoming available in numbers and quite affordable (now Broadwell and Haswell with AVX2; no more official support, so plenty of second-hand units and spare parts). These servers are nice pets, a bit noisy though! But they don't bark and byte.
 

Markfw

Moderator Emeritus, Elite Member
Letin Noxe said: "I use dual E5-2680 v2 (AVX), dual E5-2640 v4 (AVX2), dual Gold 6154 (AVX-512), and dual 5218R (AVX-512 x2/core) servers, plus an EPYC Rome 7H12 (AVX2), for (geo)physics computations. [...]"
Given your collection, I am surprised you have not acquired an EPYC 7773X or two. I am going to get one when the budget allows, but the cheapest I can find is about $4,000. It will be worth it for the things that need all that cache. And I have three 7950Xs for their AVX-512 support and high-speed cores. I can't wait to get a 96-core Genoa with AVX-512...

Good to see you here!
 