What matters is total consumption from the wall. You cannot isolate the power consumption of the CPU alone, and even if you could, the resulting value would be useless, because you never run the CPU alone.
Total consumption matters to the end user, yes. But it's a crappy way of measuring CPU power consumption. If you really don't understand why, I'm not sure why I am bothering with this at all.
Yes, I accept their performance numbers, but they are not "utterly embarrassing" as you believe.
Well, if you consider being beaten by an average of roughly 50% across a wide swath of benchmarks not utterly embarrassing -- and this in an era of very small performance improvements between generations -- then I have to wonder what you would consider utterly embarrassing.
The review uses Windows 7 SP1, an OS whose scheduler handles threads poorly on the Bulldozer/Piledriver architecture. Microsoft released two fixes for the FX chips, but they do not work. This puts extra performance on the Intel side.
1. And the average person cares about this... why exactly? Do you think excuses help them get work done faster, or improve their frame rates? I find it amusing that in one breath you use an "all that matters is the end result" argument like "what matters is consumption at the wall", and in the next breath you're telling me I should ignore the operating system. What's the guy supposed to do, use the thing as a doorstop? Pay extra for an OS "upgrade" that a lot of people despise?
2. While we're at it, who says the patches don't work?
3. Beyond that, even if they don't, I seriously doubt this can account for more than a few percentage points of difference.
The Intel chips run with stock or overclocked RAM. The i7-3930x extreme chips run with RAM overclocked up to 1.98 GHz!! The FX-8350 runs with underclocked RAM. This puts extra performance on the Intel side.
1. RAM speed has virtually no impact on net CPU performance. It's maybe a point or two. Every benchmark shows this consistently.
2. In nearly every benchmark the FX-8350 gets utterly destroyed by the i5-3570, which is running RAM of identical speed.
Third, the software used (e.g. Cinebench 11.5) is compiled with the Intel CPU dispatcher, which forces the code to run slower when it detects an AMD chip through CPUID. This puts extra performance on the Intel side.
1. So you're cherry-picking one benchmark to make excuses for, even though the AMD chips get demolished in every benchmark they used?
2. Again, even if true -- who cares? If software works better on Intel chips, then it works better on Intel chips. As an end consumer, I really don't care why, I just care that it's faster. If AMD can't properly clone the chips they are copying, that's their problem.
When you correct all that, the FX performs very, very well... and at a lower price.
No, actually, they don't.
Many of your "corrections" are not corrections, they are excuses. They don't matter. The others are either incorrect themselves, or account for a tiny portion of the performance differential between AMD and Intel right now.
And that's why AMD has to practically give their chips away.