Well, maybe I jumped to conclusions. I didn't post results from the 2nd and 3rd "post windows update" benchmarks. The numbers changed, not dramatically, but they did change. The only difference between any two cycles was that one was run before or after the other. No programs were opened between tests, and there was ~1 min between tests.
So, why the different numbers?
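This run-to-run jitter is easy to see for yourself. Here's a minimal Python sketch (the workload and cycle count are made up for illustration) that times the same task several times and reports the spread between runs:

```python
import statistics
import time

def workload():
    # stand-in for one benchmark pass: sum a few million ints
    return sum(range(2_000_000))

def run_benchmark(cycles=5, cooldown=0.0):
    """Time the same workload several times and collect the samples."""
    samples = []
    for _ in range(cycles):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
        time.sleep(cooldown)  # pause between runs, like the ~1 min above
    return samples

samples = run_benchmark()
spread = (max(samples) - min(samples)) / statistics.mean(samples)
print(f"mean {statistics.mean(samples):.4f}s, run-to-run spread {spread:.1%}")
```

Even with nothing else running, the spread is rarely zero, because the OS scheduler, caches, and background services shift between runs.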
Why does it matter? Because when an article states that brand x beat brand y by, for example, 0.01%, people remember that brand x is better and, in their minds, is the superior product. Now, what if the tests, even though performed with identical peripherals and operating systems, were not equal due to the time after startup or some other factor? WHAT IF BRAND Y IS ACTUALLY BETTER?
Yes, I do realize I am being ridiculous.
