(Rosetta v4.20)
I changed it from the default 3 hours to 12 hours. My PPD went down from 506k to 147k. I am back at 4 hours.
Hmm. But how much of this was from the preferences change, and how much perhaps from "downtime" when Rosetta didn't send work?
PPH (points per hour) for individual tasks on three of your hosts:
Results from
host 6179426 (Ryzen 9 5950X, 111 valid results when I collected these data):
10 results with 3.0 h duration, 54...62 points/hour
70 results with 5.5...6.6 h duration, 22...30 points/hour
30 results with 9.0...9.9 h duration, 25...28 points/hour
1 result with 12.0 h duration, 72 points/hour
---> That is, points/hour are all over the place.
Results from
host 6179428 (EPYC 7452, 203 valid results when I looked):
6 results with 6.1...6.2 h duration, 36...41 points/hour
62 results with 6.5...7.2 h duration, 31...38 points/hour
56 results with 7.6...8.0 h duration, 34...43 points/hour
79 results with 11.6...12.1 h duration, 32...46 points/hour
---> That is, points/hour are consistent between 6h, 7h, 8h, and 12h. But we don't have 3h results on this host.
Results from
host 6179463 (Ryzen 9 5950X, 88 valid results when I looked):
12 results with 3.0 h duration, 49...76 points/hour
8 results with 3.8...4.0 h duration, 47...72 points/hour
50 results with 8.0 h duration, 52...70 points/hour
4 results with 11.0...11.5 h duration, 8.3 points/hour *
14 results with 11.9...12.1 h duration, 50...67 points/hour
---> That is, points/hour are consistent between 3h, 4h, 8h, and 12h tasks (ignoring the four special tasks which obviously had issues).
________
*) stderr.txt of these tasks shows that only 10 or fewer decoys were generated within them. Tasks with normal credit had an order of magnitude more decoys.
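The per-host comparisons above boil down to binning results by run duration and looking at the spread of credit per CPU-hour within each bin. A minimal sketch of that bookkeeping (the sample numbers are hypothetical, merely shaped like the host 6179463 figures; `pph_buckets` is not a real BOINC tool):

```python
from collections import defaultdict

def pph_buckets(results, bucket_h=1.0):
    """Bin results by run duration and report min/max points-per-hour
    (PPH) plus the result count in each bin.
    Each result is a (cpu_hours, granted_credit) pair."""
    buckets = defaultdict(list)
    for cpu_hours, credit in results:
        # Round the duration to the nearest bucket_h-hour bin.
        key = round(cpu_hours / bucket_h) * bucket_h
        buckets[key].append(credit / cpu_hours)
    return {dur: (min(p), max(p), len(p)) for dur, p in sorted(buckets.items())}

# Hypothetical sample: (cpu_hours, granted_credit) per valid result.
sample = [(3.0, 180), (3.0, 210), (8.0, 450), (8.0, 520), (12.0, 640)]
for dur, (lo, hi, n) in pph_buckets(sample).items():
    print(f"{n} results with ~{dur:.1f} h duration, {lo:.0f}...{hi:.0f} points/hour")
```

If PPH is roughly flat across the bins (as on the EPYC host), the target run time setting is credit-neutral; a bin that stands out (like the four starred tasks) is worth a look at stderr.txt.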
If Rosetta@home had a longer streak of Rosetta v4 work availability, a "scientific" experiment would be to set up two, three, or four BOINC client instances on a single computer, configure a different target CPU run time for each instance, and let them work for a while on whatever random work batches come along. Once all instances have amassed a good number of valid results, compare the PPH of the differently configured tasks.
But due to the frequent gaps in work availability, such an experiment is hard to do. We would want to take data only from tasks which ran while the host was fully utilized: an only partially busy host shows higher per-core performance due to a higher power budget, more thermal headroom, lower cache pressure, and higher memory bandwidth per busy core.
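One way to apply that "fully utilized only" filter: from per-task start/end times (which could be taken from the client's job log, an assumption here), keep only tasks whose entire run fell inside a period with at least as many concurrent tasks as cores. A sketch with an event-sweep over hypothetical timestamps:

```python
def fully_busy(results, n_cores):
    """Return the subset of results whose entire run interval overlapped
    a period with at least n_cores tasks running concurrently.
    results: list of (start, end) timestamps in arbitrary units."""
    events = []
    for s, e in results:
        events.append((s, 1))   # task start raises concurrency
        events.append((e, -1))  # task end lowers concurrency
    events.sort()  # ties sort ends (-1) before starts (+1): conservative

    # Collect the intervals during which concurrency >= n_cores.
    busy, count, start = [], 0, None
    for t, d in events:
        count += d
        if count >= n_cores and start is None:
            start = t
        elif count < n_cores and start is not None:
            busy.append((start, t))
            start = None

    def covered(s, e):
        return any(bs <= s and e <= be for bs, be in busy)

    return [(s, e) for s, e in results if covered(s, e)]

# Toy 2-core host: two long tasks plus one shorter overlapping task.
tasks = [(0, 10), (0, 10), (2, 8)]
print(fully_busy(tasks, 2))  # all three ran while >= 2 tasks were active
print(fully_busy(tasks, 3))  # only the middle stretch had 3 concurrent tasks
```

This still ignores power-budget effects from other (non-BOINC) load, but it removes the most obvious bias from partially idle periods.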
(Edit: An alternative to several client instances on a single host would be several hosts with identical hardware. But truly identical hardware is hard to come by: the CPUs would need to come from a very similar V/F bin, or would have to be tweaked to run at a fixed clock.)