
Haswell-E Reviews

Windows 7, with no Mantle tests? Oh come on, it's like they're doing their best to make multithreaded performance look bad... At least try to give us some idea of how it will perform with DirectX 12 games!
 
Nothing touches the 8C/16T speed demon when it comes to MT performance. First time I've been excited about an Intel enthusiast chip since Gulftown. 😛

[Image: 5960X123-55.jpg]


[Image: 5960X123-57.jpg]


Impressive power consumption numbers too - 10W more than Ivy-E for much better MT performance, and 80W less than the FX-9590:

[Image: 5960X123-70.jpg]
 
Hmm, that overclocking result is expected but not fantastic. 4.1 GHz @ 1.2V and it's at 85C due to the power being pulled. Admittedly, when they choose to define "stable" as something less than properly stress-testing the CPU, it reaches 4.4 GHz, but that isn't a reasonable definition and they ought to know better.

Too expensive for not enough performance gain. Not really a surprise of course.
 
The relatively affordable price of the 6C/12T version (5820K @ $389) makes it an interesting option. Especially down the road, when cheaper motherboards (though still probably a fair bit more than socket 1150) and more affordable DDR4 come into play.
 
CPU review game benchmarks... 2560 x 1440 on highest settings... gtfo 😡

Also very disappointing that they didn't test at overclocked settings or with any known heavily threaded games (BF4 isn't a heavy CPU load unless you're pushing a fully loaded multiplayer server).
 
Hmm, that overclocking result is expected but not fantastic. 4.1 GHz @ 1.2V and it's at 85C due to the power being pulled. Admittedly, when they choose to define "stable" as something less than properly stress-testing the CPU, it reaches 4.4 GHz, but that isn't a reasonable definition and they ought to know better.

Too expensive for not enough performance gain. Not really a surprise of course.

You have to remember this is the 5960X with 8 cores and 16 threads, and that Haswell behaves very differently in terms of voltage compared to older Sandy Bridge CPUs.

Obviously with 8 cores, yes, it is definitely going to run warmer when you over-volt. Secondly, the voltage limits of Haswell/IVB are lower than those of SB. With SB you could input 1.5V with no problems (usually), but the voltage thresholds both at idle and load are substantially lower with Haswell and Ivy. In my experience, you ideally would not want to go past 1.25-1.275V with Haswell.

Haswell-E also automatically increases the voltage under stress test programs like Prime95. If you're running normal day-to-day programs you'd never reach those same high temps. A lot of people still like using Prime95 for stress testing on Haswell, but I don't think it's the best stress test: because the voltage automatically rises by 0.1V in Prime95/IBT, especially on 8C/16T, it has a definite negative effect on temps. If your normal voltage is 1.3V, Prime95 makes it 1.4V. But you will never see that same effect in normal usage, as the HWC article notes.

It's hard to say what the best stress test for any Haswell-uarch CPU is, but you can bet that Prime95 will give you temps far exceeding anything you'd ever encounter in real-world use, even at 100% load on every core. Personally, with my Haswell system I simply ran a ton of real-world applications over a period of a week and at that point deemed it stable; I haven't had any problems since. But Prime95 gives me ridiculous temps, temps that I never see in real-world use (since Prime95 causes Haswell to automatically increase voltage).
 
Some important bits from Anand's review:

[Image: 67026.png]


The 28 lanes of the i7-5820K have almost no effect on SLI gaming at 1080p.

Despite the low clock speed of the 5960X, it comes top in multithreaded benchmarks.

The i7-5820K is as close as Intel wants us to get to a true mainstream 6C/12T chip for now, and honestly I'm fine with it. Overall it is faster than the previous EE model, the 4960X, according to Hardware.fr. Just a side note: these are efficient chips. The i7-5820K barely draws more power than the i7-4790K, and the i7-5960X is 3x as efficient as the FX-9590.

http://www.hardware.fr/articles/924-8/consommation-efficacite-energetique.html
 
Ditto. Now I'm waiting to see this delidded. If it really is not soldered, the choice between the i7-5820K and the i7-4790K becomes difficult.

In the AT review he indicates it looks like it's soldered.

Thankfully Intel has not decided to play around with the extreme edition platform too much since Nehalem. Although recent reports suggest that Intel is using an epoxy to bind the die to the heatspreader, one tell-tale sign that a goopy TIM is not being used is the hole in the heatspreader in one of the corners.

Looking through the previous generations, Sandy-E, Ivy-E and Haswell-E all show this hole, which is typically thought to allow for expansion of the heatspreader and/or of gas trapped inside due to the heat. Also, due to the way the epoxy is handled, the heatspreader cannot be removed without force and without destroying parts of the silicon die.
 
Windows 7, with no Mantle tests? Oh come on, it's like they're doing their best to make multithreaded performance look bad... At least try to give us some idea of how it will perform with DirectX 12 games!


That's a good question about graphics and GPUs... everybody is now moving toward GPU and hardware compute instead of ever more CPU cores.

When $100-$150 chips with ever stronger GPUs become faster than $500-$1000+ chips, maybe Intel will need a new focus on GPUs?

http://www.guru3d.com/articles_pages/amd_a10_7800_kaveri_apu_review,13.html
 
When $100-$150 chips with ever stronger GPUs become faster than $500-$1000+ chips, maybe Intel will need a new focus on GPUs?

Your post, containing a single cherry-picked image and insinuating that $150 AMD chips are superior to a fast general-purpose machine, is quite the flamebait.

When GPUs gain a library of interesting enterprise workloads to accelerate, Intel will probably plop a GPU down on its server chips.
 
~25% better performance than the previous 4960X in MT apps, at similar power levels and on the same 22nm process. Sweet. 🙂
 
Just clicked on the Hardware Canucks link and... urgh, unpleasant memories of September 2003 and the Emergency Edition. Still, you know what they say. From such humble beginnings spring mighty oaks, and so forth.

In any case, it'd be interesting to see how Haswell-E performs with only two memory sticks. DDR4's likely to remain stupidly expensive for a good few months yet, so starting off with a dual-channel setup and going to quad later might be a good way of keeping the cost down, if it doesn't hurt performance too badly.
 