I doubt anybody could come up with an example where, on the same benchmark, Anand's results are significantly different from everybody else's (unless some other factor is in play, such as I/O in a Sysmark run).
Selecting and updating benchmarks is difficult: newer benchmarks are likely to better reflect how new CPUs will actually be used, but their results can't be compared against older ones. I do think this is an issue. Personally, I think Anand should run a reader survey (perhaps every year?) asking what programs people use and which benchmarks they'd like to see run. Each year a few benchmarks could be added or replaced and a few old ones retired. That way there's never a complete refresh (so results remain somewhat comparable to the past), but the suite stays "fresh".
Also, I personally believe that the obsession some people have with "benchmarks favoring one arch over another" is nonsense. People need to come to terms with the fact that Intel has the majority of the market share, so many retail programs and benchmarks may be specifically optimized for Intel processors. While I don't believe Intel should be allowed to (and they are no longer allowed to) cripple AMD processors via their compiler, benchmarks should, imho, be chosen based on what people actually use, regardless of what they've been optimized for.