
SETI@home WUs high number of Inconclusive with apple-darwin

pandemonium

Golden Member
Anyone else seeing this? I'm noticing a pattern of clients running the apple-darwin version being either the cause of these inconclusives or unable to crank these WUs properly. The really odd thing is, even though some of these machines in question aren't exactly producing results, they're still being credited after 2 other machines verify the results, which does seem a bit...disingenuous. I'm concerned that the error rate on that client isn't up to snuff.

It could simply be coincidence that a lot of the WUs I've seen marked inconclusive involve apple-darwin clients, but something's not right, and SETI@home's message boards don't seem to highlight it in the little searching I've done over there.

Example.

I suppose one could parse the user.gz file updated daily at https://setiathome.berkeley.edu/stats/, but I've not taken it upon myself to do so, and I'd probably be getting into something that would consume all my time. Anyone have experience with reading the data they provide?
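For anyone curious, parsing that dump is not too bad. A minimal sketch in Python, assuming the file is the standard BOINC XML stats export (`<users>` containing `<user>` records with `<name>` and `<total_credit>` fields) — check the actual file first, since the field names here are an assumption:

```python
# Sketch: read a gzipped BOINC-style user stats dump and pull out
# the top users by total credit. Field names (<user>, <name>,
# <total_credit>) assume the standard BOINC XML export format.
import gzip
import xml.etree.ElementTree as ET

def top_users(path, n=10):
    """Return the n (credit, name) pairs with the highest total credit."""
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
        root = ET.parse(f).getroot()
    users = []
    for u in root.iter("user"):
        name = u.findtext("name", default="?")
        credit = float(u.findtext("total_credit", default="0"))
        users.append((credit, name))
    users.sort(reverse=True)
    return users[:n]
```

With a multi-hundred-megabyte dump you'd want `ET.iterparse` instead of loading the whole tree, but the idea is the same.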

I guess I'll do some copy and paste from SSC about task validation:

Here is the general workflow of a task: (Assuming Quorum of 2)
Task Generation -> Task sent to host -> Host sends task back -> Server waits for other task -> Other host sends task back -> Server validates tasks (compares the completed work; results must be within 1% of each other) -> Validation complete -> Award credits -> Assimilate into science database -> Task complete
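The agreement check in that flow can be sketched like this (the function name and the way the tolerance is applied are my own illustration, not the actual BOINC validator code):

```python
# Illustrative sketch of the quorum-of-2 agreement test described
# above: two results "validate" if they differ by no more than 1%.
def results_agree(a, b, threshold=0.01):
    """True if the two reported values are within 1% of each other."""
    if a == b:
        return True
    return abs(a - b) / max(abs(a), abs(b)) <= threshold
```

So 100.0 vs. 100.5 would validate, while 100.0 vs. 90.0 would trigger the tie-breaker path below.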

Validation is very simple: it compares the output line by line, and if the difference is greater than 1%, a tie-breaker task is generated and sent out.
So here is a Quorum of 2 but the result difference is >1%:
Task Generation -> Task sent to host -> Host sends task back -> Server waits for other task -> Other host sends task back -> Server validates tasks (compares the completed work; difference greater than 1% detected) -> Generate tie-breaker task -> Send task out -> Task is returned by host -> (1) Compare the 3 work units (this time the comparison is between the tie-breaker and the first host, then between the tie-breaker and the second host; the validator then adds the results together and averages them) -> Validation complete -> Award credits -> Assimilate into science database -> Task complete

(1) If the first comparison is 99.9% similar and the second is 99.8% similar, the validator computes (0.1% + 0.2%) / 2 = 0.15% difference. This results in all 3 hosts being awarded the credits.
(1a) If the first comparison is 99.6% similar and the second is only 25% similar, the validator automatically assumes the second host is faulty and only awards the first host and the tie-breaker host.
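Putting (1) and (1a) together, the tie-breaker logic might look roughly like this. Everything here is illustrative pseudologic reconstructed from the description above — the function, the "reject" cutoff, and the return shape are not SETI@home's actual validator:

```python
# Sketch of the tie-breaker step: compare the tie-breaker result
# against each original host's similarity score, drop any host that
# is wildly off (case 1a), and average the differences of whoever
# remains (case 1). The 0.95 cutoff is an assumed value.
def tiebreak(sim_host1, sim_host2, reject_below=0.95):
    """sim_hostN: similarity (0..1) of host N vs. the tie-breaker.
    Returns (credited_hosts, average_difference)."""
    sims = {"host1": sim_host1, "host2": sim_host2}
    credited = [h for h, s in sims.items() if s >= reject_below]
    diffs = [1.0 - sims[h] for h in credited]
    credited.append("tiebreaker")  # tie-breaker is always credited
    avg_diff = sum(diffs) / len(diffs) if diffs else 0.0
    return credited, avg_diff
```

Feeding in the numbers from (1), `tiebreak(0.999, 0.998)` credits all three hosts with an average difference of 0.15%; the numbers from (1a), `tiebreak(0.996, 0.25)`, drop the second host.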
 
Perhaps the threshold is too small? Though, how does any of that explain the prevalence of inconclusive results from apple-darwin clients?
 
Maybe some math library routines on OS X accumulate numerical errors differently from the Windows and Linux ports.
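That kind of drift is easy to demonstrate in miniature: IEEE 754 floating-point addition isn't associative, so code built against a different math library or with different instruction ordering can accumulate slightly different totals from identical inputs. A 1% validation tolerance should absorb this, but systematic drift in one port could push borderline results into inconclusive territory:

```python
# Floating-point addition is not associative: the same three
# numbers summed in a different order round differently.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)           # False
print(abs(a - b))       # tiny, on the order of 1e-16
```

Individually these errors are around 1e-16, far below 1%, but a long signal-processing pipeline can amplify them, which is exactly why BOINC validates by tolerance rather than exact match.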
 