My Q is also suffering from the same thing. It is a result of the "defective WUs" Berkeley sent out over a month ago when their server was crashing.
I have taken all sorts of measures to remedy this problem without a whole lot of success.
When it was first discovered, I took the extreme measure of completely deleting all 7,000+ cached WUs on my Q to ensure that no additional "defective WUs" were handed out. But it appears the damage was already done and a whole lot of them had been distributed.
The "defective WUs" lack a proper "user_info.sah" file. That in turn produces an error in the "result.sah" file (no ID or KEY). If the machine holding the defective WU(s) can be identified, the problem can be fixed. But in my case there are so many Queues and Clients that I have no way of knowing which machine is at fault.
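Since the symptom is a result.sah with no ID or KEY, one way to hunt for the culprit on any machine you *can* get at is to scan its cache for result files missing those entries. Here's a minimal sketch. Everything in it is an assumption on my part: I'm treating result.sah as a plain-text file and guessing the entries show up as `id=` / `key=` lines, and the cache layout (`result.sah` files scattered under one root directory) is hypothetical too, so adjust for whatever your client actually writes.

```python
import os

# ASSUMPTION: the missing "ID" and "KEY" the post describes appear as
# "id=" / "key=" lines in a plain-text result.sah. Not confirmed.
REQUIRED_FIELDS = ("id=", "key=")

def result_is_defective(text):
    """Return True if the result.sah text is missing any required field."""
    lines = [line.strip().lower() for line in text.splitlines()]
    return not all(
        any(line.startswith(field) for line in lines)
        for field in REQUIRED_FIELDS
    )

def scan_cache(root):
    """Walk a (hypothetical) cache directory tree and collect the paths
    of result.sah files that look defective."""
    suspects = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower() == "result.sah":
                path = os.path.join(dirpath, name)
                with open(path, "r", errors="replace") as fh:
                    if result_is_defective(fh.read()):
                        suspects.append(path)
    return suspects
```

Run `scan_cache()` against each client's cache directory in turn; any machine that reports suspects is a candidate for the fix described above.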
In case you are wondering how this affects a SetiQueue, this is what happens: if the machine is running a caching program like SetiDriver, it starts working on another WU as soon as it completes one (or almost immediately, for you people who like to nit-pick
😛). Let's say the "next" WU is a good one. When it finishes crunching the good WU, SetiDriver dutifully transmits the freshly completed good WU and tries once more to transmit the old "defective WU". Therein lies the dilemma of trying to figure out which machine is the culprit. If the "defective WU" completely halted the process, we could simply look for machines that had stopped transmitting completed WUs. But that is not the case: the defective-WU-laden machines keep crunching fresh good WUs and endlessly try to transmit both the result of each freshly completed good WU and the old "defective WU".
The result of all of this is that our SetiQueues waste a good bit of time trying to submit these defective WUs to Berkeley. Berkeley refuses them, but the Q tries again and again each time the infected machine transmits. You can see the evidence of all of this when you inspect the logs.
🙁
The only FIX I can think of would be to just start everything over from scratch. I'm not talking about just the SetiQueue installation but every machine that is using the Q. If you have only your own machines this is doable, but if you run a large Q like mine, you are between a rock and a hard place.
I keep telling myself to just struggle on because S@H-1 will be ending soon and then all of this "stuff" will be behind us. But in the meantime, I'm personally wasting a lot of bandwidth, and I'm probably not alone in this situation.