I completely mis-wrote that comment. I should have my BS in Physics revoked :$
Eh it's alright. We've all been there in some fashion or other.
I think Broadwell will make the larger impact. The advantage Haswell brings over Ivy Bridge won't be as big as what Broadwell brings over Haswell - mainly a ~30% decrease in TDP at a given clock rate, which translates into more significant savings on power bills. And Broadwell still has AVX2 and all the architectural improvements Haswell introduced.
Broadwell should be a big deal for Intel, but it's gonna be a while unless Intel is in a big hurry to push past Haswell-EP/EX. And who knows, maybe they are?
Anyway, Broadwell will probably square off against a die-shrunk POWER8, provided Samsung/GloFo do as well with their 14nm process as the press snippets indicate.
Some comparisons of Ivy Bridge and POWER8:
SPECint_rate_2006:
2.7 GHz Intel Xeon E5-2697 v2 - 24 cores - 934
3.52 GHz POWER8 - 24 cores - 1750

SPECfp_rate_2006:
2.7 GHz Intel Xeon E5-2697 v2 - 24 cores - 649
3.52 GHz POWER8 - 24 cores - 1370
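To make the per-core gap concrete, here's a quick normalization of the numbers quoted above (both are 24-core configs; treating the published SPECrate scores as directly comparable across vendors is the only assumption here):

```python
# Per-core normalization of the SPECrate_2006 scores quoted above.
configs = {
    "Xeon E5-2697 v2 (2.7 GHz)": {"int": 934,  "fp": 649,  "cores": 24},
    "POWER8 (3.52 GHz)":         {"int": 1750, "fp": 1370, "cores": 24},
}

for name, c in configs.items():
    # Divide the aggregate rate score by core count for a rough per-core figure.
    print(f"{name}: SPECint_rate/core = {c['int'] / c['cores']:.1f}, "
          f"SPECfp_rate/core = {c['fp'] / c['cores']:.1f}")
```

That works out to roughly 38.9 vs 72.9 per core on integer and 27.0 vs 57.1 on floating point - POWER8 is close to 2x per core on these particular scores.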
Okay, so where are the comparisons of POWER8 vs E7-8895 v2? The E5-2697 v2 launched in 2013 on an older stepping.
i.e., an 18-core Broadwell will still not match today's 12-core POWER8.
No surprises there. I don't think anyone really expects Xeon to beat POWER8 on a per-core basis. I think they expect it to beat POWER8 on a per watt basis.
POWER8 isn't standing still in the meantime either, with higher-clocked POWER8 chips to be released as well as a future POWER8+ shrink.
I would expect to see POWER8 on Samsung's 14nm process eventually. Depending on how much you buy the hype coming from licensees, that could be pretty soon.
Well stated @thunng8! What @DrMrLordX chooses to overlook is that you don't compare a single x86 server to a single Power server. Most businesses with 1,000 employees or more will have an IT staff, maybe an operations staff, and one or two data centers of varying sizes and sophistication.

I'll accept we can run a Power server (P7 or P8) at 90% utilization - let's just say 100% of TDP. The server could be a 1-, 2-, 4-, 8-, or 32-socket server depending on the environment. Because of the efficiency of the Power Hypervisor, it is doing the work that a 12-, 16-, 24-, 32-, ... core x86 or SPARC server can do, and doing it with 1/4th or 1/8th or 1/20th the compute resources (I'm just picking ratios I have used in the past against these platforms).

Now, run Oracle on that x86 server at 25% utilization on a 24-core server (which is 6 effective cores): it will still require 12 licenses @ $47,500/license. That is roughly $600k plus 22% maintenance per year, starting with year 1. On P8, let's say only 3 cores are required (it's my story so I can tell it how I want :) ); that would be 3 Oracle licenses at roughly $150k plus 22% maintenance per year.

Most x86 shops will deploy more x86 servers for each Oracle workload. For Power, they will just stack them on the same server. If we add a second workload, the x86 side is another $600k, totaling $1.2M plus 22% maintenance, whereas Power is $300k plus 22% maintenance. See how this scales?

I don't want you to think I'm calling your baby ugly for no reason. The reality is, x86 vendors are positioning their servers to run the enterprise workloads where Power, SPARC, PA-RISC, Itanium, Alpha, MIPS, and others have been for decades. Performance per core is key there. When you have to buy server after server after server like x86, then perf/watt makes perfect sense. Just don't try to apply what's important to you to Power, as it isn't relevant.
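The licensing math above can be sketched as follows. Assumptions, not gospel: the $47,500 list price and 22% maintenance rate are taken from the post, and the core factors (0.5 for x86, 1.0 for POWER) are the usual Oracle multipliers - check Oracle's current core factor table before trusting any of this:

```python
# Rough sketch of the Oracle licensing comparison from the post.
# Assumed inputs: $47,500/processor license, 22%/yr maintenance,
# Oracle core factors of 0.5 (x86) and 1.0 (POWER).
LIST_PRICE = 47_500
MAINT_RATE = 0.22

def license_cost(cores, core_factor, workloads=1):
    """Oracle licenses count physical cores x core factor - not utilization."""
    licenses = cores * core_factor * workloads
    upfront = licenses * LIST_PRICE
    annual_maint = round(upfront * MAINT_RATE)
    return licenses, upfront, annual_maint

# One workload: a whole 24-core x86 server vs. 3 POWER8 cores.
print(license_cost(24, 0.5))     # (12.0, 570000.0, 125400)  ~ the "$600k"
print(license_cost(3, 1.0))      # (3.0, 142500.0, 31350)    ~ the "$150k"

# Two workloads: x86 adds a second server; POWER stacks on the same box.
print(license_cost(24, 0.5, 2))  # (24.0, 1140000.0, 250800) ~ the "$1.2M"
print(license_cost(3, 1.0, 2))   # (6.0, 285000.0, 62700)    ~ the "$300k"
```

The key design point the post is making: because licenses are counted on physical cores rather than delivered work, a platform that does the same job on fewer cores wins the licensing math regardless of hardware price.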
Uh, socket license issues have been around for years. That hasn't stopped a lot of server farms from picking up x86 machines in the past (which have required more sockets for the same level of computational power vs. other platforms), which is why Intel now controls about 95% of the server market. By your logic, nobody in their right mind would adopt microservers or blades for anything, and yet that segment of the server market is on fire. I mean, who wants to pay socket licenses on a bunch of Atom or ARM-based servers when you have to pick up maybe 4-8 times as many sockets? Apparently someone does.
IBM should really put an x86-64 decode engine in their chip designs to sweeten the deal and create a competitor to Intel for compatibility oriented markets.
The lack of x86-64 support basically limits them to the markets where performance/watt matter almost exclusively.
It didn't work for Itanium. Who wants to buy a hot 190W chip and then run it in some kind of compatibility mode that will reduce performance?
Maybe they could offload 2-core POWER8 processors onto consumers to give GloFo a way to unload their wafers when yields fail in true GloFo fashion.
I hope IBM's official position on Globalfoundries is more positive than that, since they need someone else to make their chips for them now. Or do you expect TSMC to do it? Depending on whether or not you believe TechEye, GloFo is the one buying out IBM's fabs.
There's no telling what consumers will buy, but I have a feeling that dual-core POWER8 chips would not be a smash hit, especially if they spent 99% of their time decoding someone else's instruction set at a performance hit. With all the changes going on in the desktop and mobile space, it is more likely that people will be willing to jump on a different OS platform. Google already managed to get an enormous number of mobile users to switch to Linux (well, sort of). If that's possible, then nearly anything is possible.
So what would a dual-core POWER8 chip @ 3.52 GHz look like, anyway? If it scaled down perfectly, 32W TDP and 16 threads? That could make an okay console chip, though I think you'd be better off cutting the chip down to a single core and filling out the rest of the power budget with a CAPI-based device (GPU or what have you).
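For what it's worth, here's the back-of-envelope behind that "32W" guess. Big caveat baked into the code: it assumes the ~190W figure mentioned earlier is for a 12-core part and that TDP scales linearly with core count, which it doesn't in practice (uncore, cache, and I/O don't shrink with the core count), so treat this as a lower bound:

```python
# Naive linear scaling of a ~190W, 12-core POWER8 down to 2 cores.
# Assumption: TDP is proportional to core count (optimistic - the uncore,
# cache, and I/O power stays roughly fixed as cores are removed).
FULL_CORES = 12
FULL_TDP_W = 190
SMT_WAYS = 8  # POWER8 is 8-way SMT per core

def scaled_tdp(cores):
    return FULL_TDP_W * cores / FULL_CORES

print(f"{scaled_tdp(2):.1f} W")       # 31.7 W -> roughly the "32W" figure
print(f"{2 * SMT_WAYS} threads")      # 16 threads from 2 cores at SMT8
```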
Stronger per-core performance would be extremely desirable in many Windows Server x64 applications: game server hosting, actual gaming, or any horribly threaded program that needs high per-core performance in general.
The real question here is: how attached are game server hosting firms to the Windows server platform? Typically, end-users are the ones most attached to their operating systems and legacy software.