Personally I think AMD "E" chips are a step in the right direction. Most people aren't being "snarky" at you (or even at the FX chips); it's the absurd, childish fanboy behavior by certain other people, endlessly "massaging numbers" to exaggerate minor improvements out of all proportion, that inevitably attracts sarcastic comments.
In a very small subset of tests (namely file compression), the FX is about 22-25% faster. It's still drawing up to 30-40% more power, though, so even with charts like this the perf-per-watt isn't necessarily better. As for the 3.2GHz FX-8320E, ultimately there's still 32nm vs 22nm, which is significant: the 32nm Sandy Bridge's 95W dropped to 77W with the 22nm Ivy Bridge shrink. It only went back up to 84W for Haswell due to AVX (which hardly any apps use outside of Prime95-style "power viruses" that give the chips abnormally low perf-per-watt in benchmarks vs real-world usage) and the larger iGPU (which gets disabled with a dGPU). That's roughly a 20% difference in perf-per-watt even on essentially the same architecture. Simply lowering the clock and voltage on a 32nm chip won't magically give it 22nm efficiency and overpower a process disadvantage.
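If you want a quick sanity check on that, here's a rough back-of-envelope calculation using the ballpark figures quoted above (hypothetical round numbers, not measured results):

# Rough perf-per-watt comparison using the figures quoted above
# (hypothetical round numbers from this post, not measurements).
def perf_per_watt(relative_perf, relative_power):
    # Perf-per-watt relative to a 1.0 perf / 1.0 power baseline chip
    return relative_perf / relative_power

baseline = perf_per_watt(1.00, 1.00)   # the Intel chip as reference
fx_best  = perf_per_watt(1.25, 1.30)   # FX: +25% perf, +30% power
fx_worst = perf_per_watt(1.22, 1.40)   # FX: +22% perf, +40% power

print(f"best case : {fx_best / baseline:.2f}x")   # ~0.96x -> still slightly worse
print(f"worst case: {fx_worst / baseline:.2f}x")  # ~0.87x -> clearly worse

Even in the FX's best-case compression scenario, the extra power cancels out the extra performance.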
It also depends on what you compare it with - the slower you go with FX clock speeds, the more appropriate a comparison the i5 "S" chips become (if they aren't already). An i5-4690S turbos to 3.5GHz on 4 active cores / 3.6 on 3 / 3.8 on 2 / 3.9 on 1 at 65W, which is only 0-200MHz away from a 4690K (ie, a 0-6% speed reduction with a typical 17-20% reduction in power consumption). And for lower-end general net / office box requirements, the Intel i3-4360T is a chilly 35W at the same 3.2GHz clock, if perf-per-watt is the ultimate key metric.
Edit : Performance deficit of the i5-4690S vs the i5-4690 : SYSmark = only 2.5% / 3ds Max = 6.2% / Photoshop = 3.5% / IE11 = 1.9% / WinRAR = 3.1% / x264 = 5.8%
x264 runs 5.8% slower but draws 20.5% less power. The gap simply widens again if you compare each brand's energy-efficiency-optimized chips against each other (rather than the coolest AMD vs the hottest Intel).
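Same back-of-envelope math for that x264 case (assuming performance and power scale exactly as the percentages above, which is a simplification):

# i5-4690S vs i5-4690 in x264, using the percentages above
perf  = 1.0 - 0.058   # 5.8% slower
power = 1.0 - 0.205   # 20.5% less power
print(f"{perf / power:.2f}x")  # ~1.18x, ie, roughly 18% better perf-per-watt for the S chip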