
The 12th Annual Folding@Home Holiday Season Race


The proposal for a name: The Cancer Terminators (Terminators) vs. The Cancer Eradicators (Eradicators)

  • Yes: 14 votes (93.3%)
  • No: 1 vote (6.7%)
  • Try something else ...: 0 votes (0.0%)

  Total voters: 15 (poll closed)
On Linux, I have to decrease the number of cores used by at least the number of GPUs in use, while also keeping the core count even. But I don't have machines with that many cores, so YMMV.
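That rule (drop at least one CPU thread per active GPU, then keep the count even) can be sketched as a small helper. The function name is my own for illustration, not part of FAHClient:

```python
def cpu_slot_threads(total_threads: int, gpus: int) -> int:
    """Threads to assign to the CPU slot: reserve at least one
    thread per GPU in use, then round down to an even count."""
    remaining = total_threads - gpus
    return remaining - (remaining % 2)

print(cpu_slot_threads(16, 3))   # 16 - 3 = 13 -> rounds down to 12
```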
 
Awesome PPD. Sweet.

I have 12-core, 16-core, 24-core, 32-core, and 40-core servers. Any configuration changes, or should I just let them loose??

Much Appreciated

I want to quote Stefan here, he is always generous with his knowledge, though I'm proving to be a slow learner:
The CPU application FahCore_a7 isn't what it used to be. My 56T slots still work as expected: they put up a 5600 % processor load. At the beginning of this race, when I started FAHClient for the first time on the 44C/88T boxes, I already had to limit the CPU slot to 64 threads (and got a 6400 % CPU load), because an 88T slot would only run at 4000 % CPU load. Now that I am back from a few days of WCG, even the 64T slot is degraded to 4000 %. 56T and 48T slots still work as intended though (with 5600 and 4800 % load, respectively). Of course this costs PPD.

Edit: Looks like running two 44T slots on each 44C/88T box is the way to go now.
The quick return bonus really makes up a considerable part of the overall credit that you get from F@H.
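For reference, the quick return bonus is commonly described by a square-root formula; this is a sketch of that published shape with illustrative numbers, not the project's exact server-side code:

```python
import math

def qrb_points(base_points: float, k_factor: float,
               timeout_days: float, elapsed_days: float) -> float:
    """Quick Return Bonus: credit scales with the square root of
    how fast a WU is returned relative to its timeout."""
    bonus = math.sqrt(k_factor * timeout_days / elapsed_days)
    return base_points * max(1.0, bonus)

# Returning a WU in 1 day instead of its 4-day timeout (k = 1)
# doubles the credit: sqrt(4 / 1) = 2.
print(qrb_points(1000, 1, 4, 1))   # 2000.0
```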

Compared to @Markfw's 200 k PPD per Threadripper (I guess with 30 threads allocated to the CPU slot, and running a tad below 4 GHz?), I get merely 120...130 k PPD from a 32T slot on an E5-2696v4 @ 2.6 GHz (all-core AVX turbo) on Windows. As you can see, PPD is not simply proportional to thread count × clock.

Dual E5-2690v4:
320...430 k PPD / (56 * 2.9 GHz) = 2.0...2.6 k PPD / (thread * GHz)
Threadripper:
200 k PPD / (30 * 3.8...3.9 GHz) = 1.75...1.7 k PPD / (thread * GHz)​
E5-2696v4 with Windows-specific 32T slot limit:
120...130 k PPD / (32 * 2.6 GHz) = 1.45...1.55 k PPD / (thread * GHz)​
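The normalization in the three lines above is just PPD divided by (threads × GHz); a quick check of two of the figures:

```python
def ppd_per_thread_ghz(ppd_k: float, threads: int, ghz: float) -> float:
    """PPD (in thousands of points) normalized per thread per GHz."""
    return ppd_k / (threads * ghz)

print(round(ppd_per_thread_ghz(200, 30, 3.8), 2))  # Threadripper: 1.75
print(round(ppd_per_thread_ghz(120, 32, 2.6), 2))  # E5-2696v4, 32T slot: 1.44
```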

This ratio drops off dramatically for processors even further below Threadripper's thread count and clock.

The only other DC project that I know of which rewards quicker hosts is GPUGrid. But their reward system is simpler.

The takeaway is to put a lot of logical cores into one slot, somewhere between 32 (the max for Windows) and 44 or so for Linux.
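In FAHClient's config.xml, the slot thread count is set with the cpus option; a minimal sketch for one large Linux CPU slot (slot id and value are illustrative):

```xml
<config>
  <!-- One large CPU slot; 44 threads as suggested above for Linux -->
  <slot id='0' type='CPU'>
    <cpus v='44'/>
  </slot>
</config>
```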
 
I want to quote Stefan here, he is always generous with his knowledge, though I'm proving to be a slow learner:

The takeaway is to put a lot of logical cores into one slot, somewhere between 32 (the max for Windows) and 44 or so for Linux.

Ok, with that info, I think I will stick with WCG for my CPUs.

Much appreciated
 
Dual E5-2690v4 @ 2.9 GHz, Linux, combined to a single 56T slot:
anywhere between 320...430 k PPD​

Dual E5-2696v4 @ 2.6 GHz, Linux, combined to a single 64T slot while I still got usable WUs for that:
around 480 k PPD, or more​

Dual E5-2696v4 @ 2.6 GHz, Linux, now divided into two 44T slots:
about 280...290 k PPD for each slot
(so, better in the end than the single 64T slot, despite fewer threads per slot, but able to use all 88 threads of this little PC)​
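Summing the figures above shows why the split wins; a trivial check, using the numbers from this post:

```python
single_64t_slot = 480     # k PPD, one 64T slot (24 threads left idle)
two_44t_slots = 2 * 285   # k PPD, midpoint of 280...290 k per slot
print(two_44t_slots)      # 570 k PPD, and all 88 threads are in use
```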

Single E5-2696v4 @ 2.6 GHz, Windows, serving 3 GPUs on the side, one 32T CPU slot:
120...130 k PPD
(a lot less than @Markfw's Threadrippers, which got a similar thread count but a higher clock)​

PPD/Watt of these large Broadwell EP CPU hosts is roughly 1/3 of those you can get from Pascal GPUs.

With so many ponies on the green, I am running the dual CPU servers at F@H, but not the CPU of the single socket PC.

Edit:
If you configure 24 or fewer threads per CPU slot, you risk receiving 0xa4 WUs, which earn considerably less credit than 0xa7 WUs, which can also scale to more threads per slot.
 
Ok, with that info, I think I will stick with WCG for my CPUs.

Much appreciated
Steve, what's your folding name? And did you change your clients to team 198? I can't find you in the 10 AM stats update.

Edit: I found you, same name, and a 655k update. Number 265 (rising quickly).
 
Steve, what's your folding name? And did you change your clients to team 198? I can't find you in the 10 AM stats update.

Edit: I found you, same name, and a 655k update. Number 265 (rising quickly).
Glad you found me; I don't want to waste my GPU PPD.

Cheers
 
Here are the stats for the 21st day of the 12th Folding@Home Holiday Season Race
Stats are as of December 22, 2017, 20:45 UTC (approx).
The inter-team race stats are from the Folding@Home stats page.
The intra-team race stats are from EOC.com.

The inter-team race:
[images: inter-team standings charts]

The intra-team race:
[image: intra-team standings chart]

The intra-team race daily production:
[image: daily production chart]

The intra-team race individual daily production:
[image: individual daily production chart]

We have made progress in the past two days: the difference between Brony@Home and TeAm AnandTech has dropped from almost 90M points to less than 57M points. Thanks to all new TeAm-mates! I've also added zzuupp to the intra-team race. Please let me know if you, HK-Steve, Punchy, Yodap, or Howdy2u2, take part in the intra-team race too ...
OK, that is all for today. Only 2 days until Christmas Eve ... and that will be a nice party!
 
Oh, the last 24 hours have marked an important milestone for the TeAm, maybe two, actually.

First, new high score. 🙂 EOC is showing in excess of 69M for the last 24 hours, and every contributor has made that happen! There is no such thing as an insignificant contribution! So thanks all for that! (looks to be climbing too!)

But what I really came here to bring to your attention, is that today we passed ONE BILLION POINTS for the month!!!! WOOT! And a big congrats to Brony as well, they passed 1,000,000,000 yesterday I believe, maybe a bit more than 24 hours ago.

I am extremely pleased, and grateful, to get to participate in such a well matched competition, in which there is only one real winner (science, duh 🙂 ).
 
Thanks to all new TeAm-mates! I've also added zzuupp to the intra-team race. Please let me know if you, HK-Steve, Punchy, Yodap, or Howdy2u2, take part in the intra-team race too ...
OK, that is all for today. Only 2 days until Christmas Eve ... and that will be a nice party!

Thanks for recruiting and adding me & the stats!
 