Originally posted by: petrusbroder
Optimized clients (BOINC-managers) use all the features of the processor to generate a more correct (? :Q ?) benchmark. The benchmark is used to calculate the credits each WU earns, via a formula in which the benchmark scores and the crunching time are the important factors. If the benchmark is high and the application takes some time to crunch, you get more claimed credit sent to the project. If the project uses a quorum of e.g. 3, the claimed credits from three computers are used to calculate the average credit, which is then awarded. Alternatively, the highest claim is dropped and the average of the two lower ones is awarded (I am not sure about this, though: is it the highest or the lowest that is dropped?).
1. Different BOINC versions report different Whetstone and Dhrystone scores and thus will claim more or less credit,
Well, starting at the bottom...
For BOINC clients v3.18-v3.21, the Windows compiler optimized away part of the benchmark, so in reality not all of the benchmark was actually run. This gave significantly higher benchmark scores on Windows computers, and therefore also higher claimed credit.
v4.19 and the earlier v4 clients partially fixed this problem, but the Windows compiler still optimized away part of the benchmark, which still gave higher scores.
With v4.20, the AFAIK last code change to the BOINC benchmark was made, and all later BOINC clients should give roughly the same benchmark score on the same computer/OS. But the Windows compiler being used is still better at optimizing than the Linux/Mac compilers being used, so Windows still gets a higher benchmark score.
As for the things that influence the benchmark, the single most significant one is on multi-CPU/HT computers, where the integer benchmark is literally "all over the place"; this alone can give rise to at least a 2x difference in credit claims between benchmark runs...
The flops benchmark, on the other hand, seems much more stable, and is little influenced by other factors.
Also, AFAIK neither memory speed nor cache size significantly influences the benchmark scores...
As for crediting in a quorum system, the easy one is SETI@home:
When a wu is validated, all results that pass validation at that time decide the crediting according to these rules:
1; If only 2 results passed validation, the lowest claim is granted to all.
2; If 3 or more passed validation, remove the highest and lowest claims and average the rest.
3; Any later-returned result that also passes validation gets the same credit as the others for the same wu; no re-calculation of granted credit is done.
Other projects AFAIK use the same rules, with the addition that projects without any redundancy directly grant what was claimed...
As for "optimized" BOINC core clients (the Manager doesn't run any benchmarks, and AFAIK isn't even re-compiled)... Well, if I'm not quite mistaken, by using a different compiler and different compiler switches, the same thing happens as with the old v3.xx for Windows: part of the benchmark is removed. AFAIK that is also true of the "optimized" BOINC clients...
How can a compiler "optimize away" part of a benchmark?
Let's look at an easy example of a possible benchmark, where i and a are local variables not used in other parts of the code:
for i = 1 to 100k {
    a = sqrt(4)
    b = a * a
}
Optimized, a compiler can change it to this code:
b = 4
Now, both code parts give the same end result, b = 4, but while the original code does 200k calculations, the optimized code in reality does no calculations at all, and is therefore faster to execute.
Yes, the BOINC benchmark isn't this simple, but it's enough for a compiler to find one small spot where it can cut down on how many calculations are done, and you've got a benchmark that runs faster, and therefore gives a higher benchmark score...