
PrimeGrid Challenges 2020


VirtualLarry

No Lifer
Aug 25, 2001
50,021
5,848
126
I wasn't (intentionally) "bunkering", BTW. I just made a possibly poor choice at the start of the race: arming my two 6C/12T Ryzen 3600 rigs with 12 tasks each, one thread per task. So then I had to wait about 5 days for those tasks to finish. I've since switched to 3 tasks at 4 threads each.
 

StefanR5R

Diamond Member
Dec 10, 2016
3,836
4,173
136
A quick and superficial look at the PG web site shows me that llrCUL is now using about 2M-long FFTs on FMA3-supporting hardware like Ryzen 3000. This means a footprint of circa 16 MByte of the hottest program data per task.

A Ryzen 3600 has two core complexes. Each core complex has 3 cores / 6 threads and 16 MByte of level 3 cache.

Based on this, a hypothesis can be formed about how many concurrent llrCUL tasks a Ryzen 3600 supports without having to wait on RAM reads and writes all the time.

(Edit: such a hypothesis optimistically assumes that the operating system or thread library schedules all of a task's program threads on logical CPUs belonging to the same core complex, so that they share the same last-level cache. If this does not happen, performance will suffer.)
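The footprint arithmetic can be sketched as follows (assuming 8 bytes per FFT element, i.e. double-precision values, which is how LLR's FFT data is commonly sized; treat this as a back-of-the-envelope estimate, not PrimeGrid's exact memory layout):

```python
# Back-of-the-envelope cache-footprint estimate for llrCUL on Zen2.
# Assumption: each FFT element is one double (8 bytes).
fft_len = 2 * 1024 * 1024              # ~2M-long FFT reported for llrCUL
bytes_per_elem = 8                     # double precision (assumed)
footprint_mib = fft_len * bytes_per_elem / 2**20
print(footprint_mib)                   # 16.0 -> ~16 MiB of hot data per task

l3_per_ccx_mib = 16                    # Ryzen 3600: 16 MiB L3 per core complex
tasks_per_ccx = int(l3_per_ccx_mib // footprint_mib)
print(tasks_per_ccx)                   # 1 -> at most one task per CCX stays in cache
```

So under these assumptions a Ryzen 3600 (two core complexes) would run at most two concurrent llrCUL tasks without spilling hot data to RAM.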
 
Last edited:
  • Like
Reactions: biodoc

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
15,283
2,268
55
I wasn't (intentionally) "bunkering", BTW. I just made a possibly poor choice at the start of the race: arming my two 6C/12T Ryzen 3600 rigs with 12 tasks each, one thread per task. So then I had to wait about 5 days for those tasks to finish. I've since switched to 3 tasks at 4 threads each.
I didn't mean you with the "no bunkering" hint. I mainly meant @StefanR5R. Or possibly @TennesseeTony.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
15,283
2,268
55
Day 7 stats:

Rank___Credits____Username
114____642430_____Ken_g6
125____530166_____VirtualLarry
145____412056_____Orange Kid
149____407039_____emoga
154____380243_____SlangNRox
172____338864_____waffleironhead
193____280688_____Lane42
198____262448_____biodoc
210____238889_____Tejas
242____185595_____Icecold
267____142607_____zzuupp

Rank__Credits____Team
18____4505586____Rechenkraft.net
19____3989610____BOINC@Poland
20____3910104____Duke University
21____3821030____TeAm AnandTech
22____3616458____UK BOINC Team
23____3048339____BOINCstats
24____2727546____Ukraine

I'm still leading.



I've also started calculating the best combinations of Cullens and Woodalls to run in the 3 days remaining, for each of my various machines.
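That kind of end-of-challenge planning can be sketched as a tiny brute-force search. All runtimes and credit values below are made up for illustration; they are not PrimeGrid's actual numbers:

```python
from itertools import product

# HYPOTHETICAL numbers, for illustration only.
hours_left = 72                        # 3 days remaining
cul_hours, cul_credit = 17, 33000      # assumed llrCUL runtime and credit
woo_hours, woo_credit = 20, 41000      # assumed llrWOO runtime and credit

# Enumerate every (Cullen count, Woodall count) mix that fits the time budget.
combos = [
    (c, w)
    for c, w in product(range(hours_left // cul_hours + 1),
                        range(hours_left // woo_hours + 1))
    if c * cul_hours + w * woo_hours <= hours_left
]
best = max(combos, key=lambda p: p[0] * cul_credit + p[1] * woo_credit)
print(best)  # (3, 1): three Cullens plus one Woodall fill 71 of the 72 hours
```

With these toy numbers, mixing subprojects beats running only the higher-credit one; the real calculation would use per-machine task runtimes.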
 

StefanR5R

Diamond Member
Dec 10, 2016
3,836
4,173
136
A quick and superficial look at the PG web site shows me that llrCUL is now using about 2M-long FFTs on FMA3-supporting hardware like Ryzen 3000. This means a footprint of circa 16 MByte of the hottest program data per task.
llrCUL still seems to be at 2M. llrWOO may be at 2340K now (that is an 18 MByte footprint) and thus a bit much for Zen2. It's zero-padded data, though; I don't know whether the processor's cache management handles this more efficiently than fully populated data.
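The same footprint arithmetic applied to both quoted FFT lengths (again assuming 8 bytes per FFT element; a rough estimate only):

```python
# Assumed 8 bytes per FFT element (double precision).
def fft_footprint_mib(fft_len):
    """Hot-data footprint of one LLR task, in MiB."""
    return fft_len * 8 / 2**20

print(round(fft_footprint_mib(2 * 1024 * 1024), 2))  # 16.0  -> llrCUL at 2M
print(round(fft_footprint_mib(2340 * 1024), 2))      # 18.28 -> llrWOO at 2340K
```

Under this assumption the 2340K llrWOO tasks overflow a 16 MiB Zen2 CCX cache, while 2M llrCUL tasks just barely fit.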
 
Last edited:
  • Like
Reactions: Ken g6

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
15,283
2,268
55
Happy Thanksgiving! Day 8 stats:

Rank___Credits____Username
110____782900_____Ken_g6
120____676899_____VirtualLarry
129____610155_____emoga
134____589964_____Orange Kid
154____452038_____SlangNRox
166____411932_____Lane42
179____382630_____waffleironhead
192____342775_____crashtech
221____262448_____biodoc
234____238889_____Tejas
262____185595_____Icecold
274____167385_____zzuupp
470____9774_______xii5ku

Rank__Credits____Team
15____6197058____SETI.USA
16____5466174____Rechenkraft.net
17____5149284____Team Norway
18____5113391____TeAm AnandTech
19____5082146____Metal Archives
20____4772243____BOINC@Poland
21____4723465____Duke University

I suppose I should say I'm grateful to still be leading the TeAm, and that we're advancing so well in the rankings. Now, let's gobble up more WUs!
 
  • Like
Reactions: lane42

VirtualLarry

No Lifer
Aug 25, 2001
50,021
5,848
126
How did 'Team Norway' slip in ahead of us with like 100 fewer WUs crunched? Are they only crunching the "easy" WUs or something?

Anyways, you're likely going to end up out front, Ken_g6, as I'm tapering off on PrimeGrid; I've stopped one 3600 from crunching already. The other one (my main PC) has WUs for another day.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
15,283
2,268
55
How did 'Team Norway' slip in ahead of us with like 100 fewer WUs crunched? Are they only crunching the "easy" WUs or something?
I know I've noticed that, since I've been crunching with a small numeric WU limit, I'm getting a lot fewer of the short double-check WUs. It's probably something like that.
 

TennesseeTony

Elite Member
Aug 2, 2003
4,045
3,104
136
www.google.com
Oh good, double-check WUs... I was worried when I fired up the 3950Xs, because my WUs were only running for tens of minutes, instead of the 17 hours or so I was anticipating, before I had to head back to the kitchen and then to my sister's home.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
15,283
2,268
55
Day 9 stats:

Rank___Credits____Username
104____965787_____crashtech
111____895862_____emoga
115____878705_____Ken_g6
124____777415_____Orange Kid
130____701959_____VirtualLarry
163____499498_____Lane42
164____498458_____SlangNRox
185____423759_____waffleironhead
244____262448_____biodoc
257____238889_____Tejas
276____192242_____zzuupp
281____185595_____Icecold
364____75802______10esseeTony
487____9774_______xii5ku

Rank__Credits____Team
14____7891089____Crunching@EVGA
15____7009731____SETI.USA
16____6662646____Rechenkraft.net
17____6606199____TeAm AnandTech
18____5574297____Duke University
19____5521952____BOINC@Poland
20____5441191____Team Norway

Anyways, you're likely going to end up out front, Ken_g6
Famous last (place? :p) words. I fired up a cloud system thinking it would help me with @emoga. I didn't even see @crashtech crashing through the ranks!
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
15,283
2,268
55
Final-ish stats:

Rank___Credits____Username
74_____1600961____crashtech
101____1139530____emoga
106____1072414____Ken_g6
118____978838_____Orange Kid
136____752161_____VirtualLarry
156____611763_____Lane42
157____610049_____xii5ku
164____577420_____10esseeTony
174____544676_____SlangNRox
178____533216_____waffleironhead
260____262448_____biodoc
266____242379_____zzuupp
269____238889_____Tejas
295____185595_____Icecold

Rank__Credits____Team
11____15564382___BOINC@MIXI
12____10282676___Team 2ch
13____9763091____The Knights Who Say Ni!
14____9350346____TeAm AnandTech
15____9086121____Crunching@EVGA
16____8216614____Rechenkraft.net
17____7612182____SETI.USA

Well, we wound up between two old friends, EVGA and KWSN. Not bad for most people skipping the first third of the challenge! :)
I didn't mean you with the "no bunkering" hint. I mainly meant @StefanR5R. Or possibly @TennesseeTony.
Nevermind, @lane42 , xii5ku just blew by us, causing us to veer off course a bit from all the dust, smoke and Mach 3+ turbulent air.... Stefan needs to name one of his rigs 'SR-71', and another 'Blackbird'.
Couldn't help yourself, could you @StefanR5R? :rolleyes: Well, at least you didn't crash their server...this time.
 

StefanR5R

Diamond Member
Dec 10, 2016
3,836
4,173
136
I merely had 24 results to upload and report today. Which, as was to be expected, went through in the blink of an eye.

As a frame of reference, the top three teams combined reported 67 results per hour on average throughout the challenge.

I am sure that the PrimeGrid server can handle challenges at projects like llrCUL and llrWOO with ease. PPS(E), SGS and the likes would be a lot more demanding for the server.
 

TennesseeTony

Elite Member
Aug 2, 2003
4,045
3,104
136
www.google.com
Looking at the scores, I am no longer disappointed that I had one task come in a little late; it would have gained me nothing (within the TeAm's ranks). :)

Just a note: I was surprised to see my 3950Xs (Linux) perform so poorly against my 3900X (Windows), both with 8 threads per CCX and at full throttle (32 threads vs. 24 threads). The 24-thread machine kicked arse against the 32-thread ones in both situations. (The 3900X finished 6 hours sooner at 8 threads, 2 hours sooner at max threads.)
 

StefanR5R

Diamond Member
Dec 10, 2016
3,836
4,173
136
The 3900X and 3950X have the same number of core complexes, the same amount of last-level cache, potentially the same RAM I/O bandwidth (depending on your actual RAM config), and by default the same socket power limit.

The 3950X has 1.33× as many FMA units as the 3900X, of course, but this does not matter if the processor caches are exhausted.

At some point during the challenge, both llrCUL and llrWOO tasks became too big for computers with 16 MB processor cache segments, such as Zen2. (llrWOO did so earlier than llrCUL.)
 
  • Like
Reactions: TennesseeTony

StefanR5R

Diamond Member
Dec 10, 2016
3,836
4,173
136
@Ken g6, the challenge "speaker" did not repeat the warning against bunkering in the llrCUL/llrWOO challenge announcement.

In the previous llrDIV challenge announcement Michael Gutierrez said:
This subproject uses LLR2, which means dumping bunkers will be disastrous for the server. It will fill up the server's disk drive.
I was told a long while ago that disk space for result files is allocated on the server when a task is generated, or when a task is assigned to a host (I don't recall which of the two). Either Michael Gutierrez phrased his warning very poorly (the disk space is not occupied merely between uploading and file deletion; it is occupied from when a task is in progress until file deletion), or they modified their server to overcommit its disk space.

Edit,
Michael Gutierrez said:
As a result, the server will stop accepting uploads. That will be bad for all users, of course, but it will be worse for the people who have been holding bunkers. You may miss out on uploading your entire bunker if we shut down uploads.
Either this statement is incorrect, or they implemented a faulty protection which would prevent forward progress of the server. I don't believe that the PG admins are this dumb; rather, I believe that this warning was incorrect.
 
Last edited:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
15,283
2,268
55
@StefanR5R, with LLR2, result space is occupied between uploading and file deletion. However, to do the double-checks, the server has to convert the result files into new double-check WUs of about the same size. That means disk space is occupied from the end of the original task to the end of the double-check, which can take a few days if the recipient is busy or shuts down PrimeGrid.

I merely had 24 results to upload and report today. Which, as was to be expected, went through in the blink of an eye.

As a frame of reference, the top three teams combined reported 67 results per hour on average throughout the challenge.

I am sure that the PrimeGrid server can handle challenges at projects like llrCUL and llrWOO with ease. PPS(E), SGS and the likes would be a lot more demanding for the server.
But you're right. I was just needling you for apparently not listening to me. ;)
 

lane42

Diamond Member
Sep 3, 2000
5,507
401
126
Nevermind, @lane42 , xii5ku just blew by us
Rank___Credits____Username
74_____1600961____crashtech
101____1139530____emoga
106____1072414____Ken_g6
118____978838_____Orange Kid
136____752161_____VirtualLarry
156____611763_____Lane42
157____610049_____xii5ku
Tony, who is this guy xii of which you speak? Do you mean the
guy behind me? :p :) It's not often I'll see that.
 
Last edited:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
15,283
2,268
55
Bump for the year's final challenge, in a couple of days. GFN 18, 19, and 20 do support GPUs, but don't support multi-threading. So just turn off HT and hope for good results, I guess.

In other news, I got some free credit from DigitalOcean. So catch me if you can! ;)
 
  • Like
Reactions: lane42
