PrimeGrid Races 2018


Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Better-late-than-never bump for the next race. :)
 

crashtech

Lifer
Jan 4, 2013
10,523
2,111
146
Plenty of time to gain insight and/or refresh our memories re the behavior of llrWOO. I don't believe I have that app in my app_config currently.
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
"Plenty of time"
Originally I wanted to take some of my hosts offline in order to measure their performance systematically, but it turned out that I had (and still have) more "urgent" uses for them pretty much the entire time since the last PG challenge. Hence I am giving up on that plan, and my app_config will be a shot in the dark.

"to gain insight and/or refresh our memories re the behavior of llrWOO. I don't believe I have that app in my app_config currently."
www.primegrid.com -> Applications says that the <app_name> is llrWOO.

www.primegrid.com -> Your account -> PrimeGrid preferences -> Edit PrimeGrid preferences says that llrWOO's recent average CPU time is 124 hours, which is just a bit more than the recent average 110 hours of llrCUL that we had previously (was 99 hours at the beginning of the past llrCUL challenge).

Longer run times correlate with more processor cache use, which in turn means the sweet spot of the application configuration shifts towards more threads per task, while having fewer simultaneous tasks per processor.

For desktop processors (socket 1151 and similar ones) it's clear: run only one task at a time, and give it as many processor threads as you want to spare for prime finding (i.e. not the threads that you want to use for GPU tasks, for the operating system and desktop interface, and whatnot...).
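For reference, an app_config.xml along the following lines should do it. This is only a sketch: the 8 threads and the mt plan class are placeholder assumptions, so adjust them to your own CPU and to what the project actually sends you.

```xml
<!-- Hypothetical app_config.xml for llrWOO, placed in the PrimeGrid project
     directory: one task at a time, 8 LLR threads. Values are illustrative. -->
<app_config>
  <app>
    <name>llrWOO</name>
    <!-- never run more than one llrWOO task at once -->
    <max_concurrent>1</max_concurrent>
  </app>
  <app_version>
    <app_name>llrWOO</app_name>
    <!-- multithreaded plan class (assumed) -->
    <plan_class>mt</plan_class>
    <!-- reserve 8 logical CPUs in BOINC and pass 8 threads to LLR -->
    <avg_ncpus>8</avg_ncpus>
    <cmdline>-t 8</cmdline>
  </app_version>
</app_config>
```

After editing, use Options -> Read config files in the BOINC manager (or restart the client) so the change takes effect.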

For high-core-count server processors, I prefer to think in terms of "simultaneous tasks per socket". I ran llrCUL at 3 per socket on my hosts with 14C/28T CPUs (35 MB shared L3 cache) and at 4 per socket on hosts with 22C/44T CPUs (55 MB shared L3 cache), without having done any measurements for llrCUL specifically. That is, I ran llrCUL with 11.67 MB L3 per task, or 13.75 MB L3 per task, respectively. I saw occasional run time variations between tasks, notably on the 14C/28T CPUs, which made me wonder whether this setting was already occasionally above the optimum number of simultaneous tasks per socket for llrCUL.
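On a dual-socket 14C/28T host, the 3-per-socket setting translates into something like the sketch below. Note that max_concurrent counts per host, not per socket, so 3 per socket becomes 6 per host; the 9 threads per task is just 28 threads / 3 tasks rounded down, an assumption rather than a measured optimum, and BOINC cannot pin tasks to a socket anyway, the OS scheduler decides placement.

```xml
<!-- Sketch for a 2-socket 14C/28T host: 3 llrWOO tasks per socket = 6 per host.
     The thread count (-t 9, roughly 28 threads / 3 tasks) is an assumption. -->
<app_config>
  <app>
    <name>llrWOO</name>
    <max_concurrent>6</max_concurrent>
  </app>
  <app_version>
    <app_name>llrWOO</app_name>
    <plan_class>mt</plan_class>
    <avg_ncpus>9</avg_ncpus>
    <cmdline>-t 9</cmdline>
  </app_version>
</app_config>
```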

For llrWOO, the optimum number of simultaneous tasks per socket for a given CPU will be the same or less than for llrCUL.

Edit: L3 cache sizes added
 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
PS,
I chose 3 tasks per 14C CPU for llrCUL because I knew with very good confidence that this was best for llrESP in April 2018 (which at that time had about half of llrCUL's recent average CPU time per task), yet I hesitated to go down to 2 tasks per 14C CPU despite llrCUL being heavier than llrESP. 2 per CPU seemed over the top to me at the time.

I have only very little performance data on llrPSP from April 2017, which at that time may have had a similar average CPU time per task to what llrWOO has now, if folks had run 2018's program binaries on a 2018-style hardware mix back in spring 2017. (Currently, llrPSP is at 154 hours. That is, current llrPSP tasks are heavier on the hardware than current llrWOO tasks with their 124-hour average.)
 

crashtech

Lifer
Jan 4, 2013
10,523
2,111
146
My Ivy Bridge and Sandy Bridge Xeons have 10 cores with 25 MB L3 cache and 8 cores with 20 MB L3 cache, respectively, so it could be that 2 tasks per socket would be more appropriate for them, unless the lack of AVX2 makes the working data set smaller.
 

crashtech

Lifer
Jan 4, 2013
10,523
2,111
146
These WUs are going to take a long time on non-AVX2 CPUs, and a long time on Ryzen too. I may only put the newer Intel CPUs that I have on this, but we'll see.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Well, I've started, but I might stop again in three hours.
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
As far as I can tell in hindsight, the very first result that came in was from 288larsson, less than 4 hours after the start. He runs a single task at a time on each of his hosts. Clearly he hates being the double checker more than he likes throughput¹. :-) This one is the host with the first result.

Edit,
¹) or maybe Skylake-X's changed cache configuration performs better with just one task at a time...?
 
Last edited:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Day 1 stats:

Rank___Credits____Username
1______869682_____xii5ku
34_____89866______Ken_g6
38_____72729______crashtech
45_____72522______Orange Kid
53_____54470______Howdy2u2
56_____54415______biodoc
83_____18190______zzuupp

Rank__Credits____Team
1_____1939373____Sicituradastra.
2_____1849370____SETI.Germany
3_____1724428____Czech National Team
4_____1231878____TeAm AnandTech
5_____943845_____Aggie The Pew
6_____907139_____Crunching@EVGA
7_____562798_____BOINC@MIXI

I'm #2! :) And we're #4! Not bad for having two races at once.
 

bill1024

Member
Jun 14, 2017
88
73
91
As far as I can tell in hindsight, the very first result that came in was from 288larsson, less than 4 hours after the start. He runs a single task at a time on each of his hosts. Clearly he hates being the double checker more than he likes throughput¹. :-) This one is the host with the first result.

Edit,
¹) or maybe Skylake-X's changed cache configuration performs better with just one task at a time...?

I am doing one at a time using MT; I cannot do 2 tasks in less than double the time.
In fact, when I went to 4 tasks on a 2P Intel 16C/32T it slowed way, way down.
Also, on a Xeon E5-1650 v1, again I could not do 2 tasks in less than double the time of one.
Maybe it has to do with keeping the task in the CPU's cache.
In one challenge not too long ago that was not the case, so I think it is the size of these WOO tasks doing it.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
It took until just now for me to figure out that my HTPC was only running PrimeGrid on one core. :oops: Fortunately, it only has two cores, but still, I probably lost at least one result.
 
  • Like
Reactions: zzuupp

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
@bill1024 has a magnetic personality.
[attached image]

--------
Also, on a Xeon E5-1650 v1, again I could not do 2 tasks in less than double the time of one.
The E5-1650 v1 has 12 MB of L3 cache. As far as I can tell, that is already marginal for one llrCUL task, and llrWOO would certainly prefer more than 12 MB of cache per task if it can get it.

Luckily, I do have more than that. But even with that much more cache per task, I am still seeing some dependence on RAM bandwidth (comparing hosts of mine which have the same RAM but different core counts).
 
Last edited:
  • Like
Reactions: zzuupp and bill1024

bill1024

Member
Jun 14, 2017
88
73
91
Just a little bit ago we were within 160 points, very close to DeleteNull.
I may have to fire up the AMD rigs and see if they can drop a couple tasks.
I am throwing everything at it I have room for.

I tried doing multiple tasks on the E5-2670, the i7-4930K, and the Xeon E5-1650; I am better off with one at a time.
Maybe the 2P 24-core AMD can do a few tasks in five days. I have an i5-8600K that will be delivered here tomorrow, but the motherboard will not be here until Tuesday.
I cut down the OC on all my rigs; I was getting inconclusives and outright errors on these WOO tasks. So far so good!
 
  • Like
Reactions: TennesseeTony

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,242
3,829
75
Day 2 stats:

Rank___Credits____Username
1______1870774____xii5ku
26_____381586_____emoga
36_____289855_____Howdy2u2
43_____216585_____Ken_g6
45_____200025_____crashtech
53_____163207_____Orange Kid
73_____90969______zzuupp
114____54415______biodoc

Rank__Credits____Team
2_____5081311____SETI.Germany
3_____4738345____Sicituradastra.
4_____3756202____Aggie The Pew
5_____3267420____TeAm AnandTech
6_____2181618____Crunching@EVGA
7_____1253724____BOINC@MIXI
8_____1051444____The Knights Who Say Ni!

Rats! We're still behind them. (Rats are the mascot for Aggie the Pew if you didn't know.)

And just for today I'll do stats for our guest:

Rank___Credits____Username
6______1109623____bill1024
8______944843_____bcavnaugh
95_____72621______Opolis
107____54530______planetclown

Rank__Credits____Team
3_____4829220____Sicituradastra.
4_____3810819____Aggie The Pew
5_____3285563____TeAm AnandTech
6_____2181618____Crunching@EVGA
7_____1290230____BOINC@MIXI
8_____1087714____The Knights Who Say Ni!
9_____982123_____AMD Users
 

bill1024

Member
Jun 14, 2017
88
73
91
PEW = Pink-Eyed White. That is what a lot of pet/feeder mice and rats are: PEWs.
Aggie was one of their pets, I do believe, or is it a rat from a book or something?
 

bill1024

Member
Jun 14, 2017
88
73
91
Day 2 stats:
[Ken g6's Day 2 stats, quoted in full above]

Thanks, is there an easy way to get those stats like that? You can PM me to keep the thread cleaner.
 

bill1024

Member
Jun 14, 2017
88
73
91
Thanks guys. Saving the link as a txt file for now. I'll play around with it ASAP. Not feeling so well today.
Someone has to find a prime (I wish); I hope it is at least someone we all know.
It would be a big one for sure.
Good luck guys, and thanks for the hospitality.
 
  • Like
Reactions: TennesseeTony