
[coolaler] Devil's Canyon: 4.0 base / 4.4 turbo @ stock

How much do you want to bet that every single Haswell Refresh chip has the exact same “new and improved next-gen TIM” under the IHS?

I'd be absolutely surprised if that wasn't the case.
This suggests otherwise: http://www.xbitlabs.com/articles/cpu/display/intel-haswell-refresh_6.html#sect0

As for temperature, the Haswell Refresh processors have the same low-efficiency thermal interface as before. Their temperature rises sharply as soon as you launch a heavy application and remains high even if you've got an efficient CPU cooler. In our case, with the Noctua NH-U14S cooler, the senior Haswell Refresh model, Core i7-4790, quickly got as hot as 84°C while running LinX - without any overclocking on our part!
 
I bought 6GB of DDR3 at launch with my 1366 system; pricing was fine, maybe $200.

Stopped reading that wall the moment pricing on ECC server RAM was used to project desktop DDR4 pricing. Like I said, it's paranoia to expect a made-up scenario of DDR4 at $40+ a GB. Maybe if you're talking top-speed 2600+ rated stuff, but otherwise nope.
 
Given the discussion about RAM, I don't feel so bad about asking this side-topic question here:

It looks like DC is going to be good enough to work as a heater for testing heatsinks. My previous heater was an i7 860 that really cooked at 4GHz. An accidental peripheral short killed the MB. (RIP -- sic transit gloria mundi.)

But that ran W7-64. In pushing the OC on this, I can still get a W7-64, or move on and get a W8.1-64. Which will OC better? Or does it matter?

Using anything other than a high wattage, properly soldered part makes for a terri-bad HSF test platform.

DC is not soldered.

DC can't be overvolted high enough, before hitting TJmax, to dissipate enough wattage for a statistically meaningful test of maximum HSF capacity, since the typical gargantuan tower coolers have massive maximum TDP ratings.

I would use a 5820K when it comes out if I were you. (Or, if you can get a 3930K or 4930K for super cheap from someone impulse-upgrading, do that.)

Attempting to delid your 4770K and remount with Coollaboratory Liquid Ultra would also be an option, but that likely still wouldn't get your heat output high enough before hitting TJmax.
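Just to put rough numbers on that point, here's a back-of-envelope sketch using the standard dynamic-power scaling approximation (P roughly proportional to V² × f). The stock voltage and the overclocked operating point are assumptions for illustration, not measured values:

```python
# Rough dynamic-power scaling: P ~ C * V^2 * f. Ignores leakage and real
# silicon behaviour; purely illustrative.
STOCK_POWER_W = 88.0   # 4790K rated TDP
STOCK_VCORE   = 1.10   # assumed stock vcore (illustrative)
STOCK_CLOCK   = 4.0    # GHz

def estimated_power_w(vcore, clock_ghz):
    """Scale the stock power figure by (V/V0)^2 * (f/f0)."""
    return STOCK_POWER_W * (vcore / STOCK_VCORE) ** 2 * (clock_ghz / STOCK_CLOCK)

# A fairly aggressive overclock, before the paste under the IHS pushes
# core temps into TJmax territory:
print(f"{estimated_power_w(1.30, 4.6):.0f} W")  # ~140 W

# Big tower coolers are typically rated well above 200 W, so the chip
# throttles on temperature long before the cooler itself is the limit.
```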
 
I could believe 84°C with a stock cooler, but if those guys are claiming 84°C at stock with a Noctua cooler, then I would politely suggest to them that they might want to recheck their HSF mount, cuz that just has "I borked my mount" written all over it 😉
Possibly, but Xbitlabs is an extremely competent site in general, including their cooling/fan reviews.
 
This is me exactly, but I'm torn between this and Haswell-E.
I did a thorough investigation of each of the -E platforms, and each time I came to the conclusion that they were worthless. The traditional 4/8 platforms are not only cheaper but also faster for 99% of the tasks people use them for, including gaming.

Also, -E may have more bandwidth, but the quad-channel memory controller has higher latency, which can hurt games. Lower-clocked Haswells often beat them in games because of this. Witness the 4820K losing to the 4770K despite the former being two speed bins higher (this is a recurring trend):

[Chart: 4820K vs. 4770K gaming benchmark (IVB-4820-62.jpg)]


The 4790K would absolutely dominate that chart if it were tested.

The -E platform is for people who need >32GB RAM, need 6/12 for very specific applications (not games), and/or who need more PCIe lanes for tri/quad GPU. If you fall under any of those categories you know exactly why you're getting that platform. For everyone else, the platform is useless.
 
It seems like the new i5 gets no interest here at all. I'd imagine it should OC well, but it's hard to beat that i7 at a stock 4GHz before turbo.
 
Since Haswell-E doesn't have an IGP, the biggest "mainstream" benefit of higher bandwidth isn't applicable. There certainly are benefits to higher bandwidth, particularly for the Haswell-E platform's potential applications, but those aren't my applications. It's not a sin to use Haswell-E if you're not in need of bandwidth -- the "E" platforms are just more capable of solving bandwidth problems.

It seems like most people want DDR4 just for the sake of having a new technology, regardless of its usefulness to them, and are more than happy to fork over "$100" extra for an incremented digit.

There is no confirmation that 16GB of DDR4 will cost $100 more than 16GB of DDR3. The point I was making regarding the additional memory bandwidth is that you do not need high-speed DDR4. Even with DDR4-2133, the system will have more than enough memory bandwidth compared to any DDR3-3000 setup on a Z97 chipset. The performance gain from going above DDR3-2133 on modern Intel CPUs is very small. Therefore, the comment that initial DDR4 memory will be 'slow' is irrelevant for X99.
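For anyone who wants to sanity-check that, theoretical peak bandwidth is just channels × transfer rate × 8 bytes per transfer. A quick sketch (theoretical peaks only; real-world throughput is lower):

```python
# Theoretical peak bandwidth = channels * transfer rate (MT/s) * 8 bytes/transfer.
def peak_bw_gbs(channels, mt_per_s):
    return channels * mt_per_s * 8 / 1000  # GB/s, decimal units

print(f"X99, quad-channel DDR4-2133: {peak_bw_gbs(4, 2133):.1f} GB/s")  # ~68.3
print(f"Z97, dual-channel DDR3-3000: {peak_bw_gbs(2, 3000):.1f} GB/s")  # 48.0
```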

Even the lowest end X99 board will be stacked compared to almost any Z97 board.

ASRock X99 Extreme4 comes with Ultra M.2 (so far only available on the Z97 Extreme6), along with 12 power phases, four PCIe 3.0 x16 slots, 8 DDR4 DIMM slots, and Intel GbE LAN. A Z97 board with these features is easily $170. I bet this board will not retail for more than $230, or a $50 premium over Z97, which is not a lot considering it will take a 6-core BW-E in 12 months. Add a $50-70 premium for DDR4 and say another $70 premium for the 5820, and the choice is not so clear-cut for many, because CPU+mobo+RAM will be say $170-200 more, but in return you get 6 cores + HT.
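Adding up the premiums quoted above (all of them guesses about unreleased parts, so strictly an illustration), the $170-200 figure checks out:

```python
# Per-component premiums, all speculative figures from the paragraph above.
board_premium = (50, 60)   # guessed X99 Extreme4 (~$230) vs comparable Z97 board (~$170)
ddr4_premium  = (50, 70)   # guessed premium over an equivalent DDR3 kit
cpu_premium   = 70         # guessed 5820 premium over a 4790K

low  = board_premium[0] + ddr4_premium[0] + cpu_premium   # 170
high = board_premium[1] + ddr4_premium[1] + cpu_premium   # 200
print(f"Extra cost of the 6-core X99 route: ${low}-${high}")
```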

The -E platform is for people who need >32GB RAM, need 6/12 for very specific applications (not games), and/or who need more PCIe lanes for tri/quad GPU. If you fall under any of those categories you know exactly why you're getting that platform. For everyone else, the platform is useless.

If someone is not going to overclock, then of course the 4790K is better for gaming than any of the rumoured X99 i7 5xxx CPUs. If you want to use Ultra M.2 PCIe 3.0 x4 and 2 GPUs in SLI, Z97 is also a no-go for now, until say the Extreme9 from ASRock drops -- of course M.2 drives are a very niche market right now and their cost is too high.

But there are already games that run faster on a 6-core CPU. In Watch Dogs, the 6-core beats the 4770K, which may or may not be representative of the next wave of games over the next 12-24 months. It's hard to say right now since Watch Dogs is the first such game.

[Chart: Watch Dogs CPU benchmark, ultra settings (gamegpu.ru)]


Arma 3 is also very CPU heavy.

[Charts: Arma 3 CPU benchmarks (gamegpu.ru)]


There are people running GTX 780 Ti SLI.

Also, I would wait for 4790K vs. 5820 overclocking results before declaring either a clear winner. If the 5820 turns out to be a dud overclocker and the 4790K hits 4.9-5.0GHz regularly, then it's a no-brainer for gamers. However, if the 4790K exhibits poor temperatures like in the Hexus review, it might only hit 4.6-4.7GHz while the 5820 can do, say, 4.3-4.4GHz. The extra 200-300MHz will not really matter in games, but as soon as any game uses more than 4 cores or likes the extra cache (a lot of Blizzard games do), the 5820 will walk all over that 4790K. All it takes is just one popular multi-player game like BF5.

Again, those not overclocking should pick 4790K.
 
I did a thorough investigation of each of the -E platforms, and each time I came to the conclusion that they were worthless. The traditional 4/8 platforms are not only cheaper but also faster for 99% of the tasks people use them for, including gaming.

Also, -E may have more bandwidth, but the quad-channel memory controller has higher latency, which can hurt games. Lower-clocked Haswells often beat them in games because of this. Witness the 4820K losing to the 4770K despite the former being two speed bins higher (this is a recurring trend):

[Chart: 4820K vs. 4770K gaming benchmark (IVB-4820-62.jpg)]


The 4790K would absolutely dominate that chart if it were tested.

The -E platform is for people who need >32GB RAM, need 6/12 for very specific applications (not games), and/or who need more PCIe lanes for tri/quad GPU. If you fall under any of those categories you know exactly why you're getting that platform. For everyone else, the platform is useless.

More expensive than Haswell without much more utility? Yes. USELESS??? WRONG! :biggrin: There are some of us who just like having a socket 2011 setup.
 
Possibly, but Xbitlabs is an extremely competent site in general, including their cooling/fan reviews.

An aftermarket cooler should cut at least 10°C compared to the stock HSF. That would mean the chip is normally in the 90s, which I have to doubt.
 
lol, 84°C at stock? Even with an aftermarket HSF? That's crazy; it's like Intel wants you to operate a central heating unit. That was the case with the old Pentium Pro, where it hit 110°C (I had a dual PPro and it survived 12 years of operation at those high temps).
 
There is no confirmation that 16 GB DDR4 will cost $100 more over 16GB of DDR3.
I know that; $100 was a figure thrown out during the discussion that someone is apparently more than happy to pay, and I stuck with it.
lol, 84°C at stock? Even with an aftermarket HSF? That's crazy; it's like Intel wants you to operate a central heating unit. That was the case with the old Pentium Pro, where it hit 110°C (I had a dual PPro and it survived 12 years of operation at those high temps).
I know you're joking, but with a TDP of 88W, it's not really going to be heating much of anything. A ~15W Pentium Pro might get really hot, but there's not much energy stored up at 110°C... it's just concentrated.
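To put ballpark numbers on that (all figures illustrative, not measurements):

```python
cpu_heat_w     = 88    # 4790K TDP; essentially all of it ends up as heat in the room
space_heater_w = 1500  # a typical small space heater, for comparison

print(f"CPU output is about {cpu_heat_w / space_heater_w:.0%} of a space heater")  # ~6%

# Energy actually stored in the hot silicon is tiny: roughly 20 g of die + IHS
# at ~0.5 J/(g*K), 90 K over ambient, is on the order of
print(f"{20 * 0.5 * 90:.0f} J")  # ~900 J, i.e. about 10 seconds of output at 88 W
```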
 
For every fringe example you can find of the 4960X coming out ahead, you'll find ten more examples where it loses to the 4770K or ties at best:

Basically this. I like the HEDT platform, but it really seems as if owners of that platform are just coming to its defense while ignoring any reasonable arguments for the mainstream quads. For every 1 benchmark showing HEDT ahead, you can find 3-4 similar benchmarks showing the mainstream 4c/8t being faster in other games. Oh, and let's say you find 1 game out of 10 that is 4 fps faster on the HEDT platform. Let's say the 4930K costs $600 with a $350 mobo, while the mainstream 4c/8t platform is easily half that. Is that worth 4 fps? You tell me. And that's ignoring the fact that you will find other games that are in fact faster on the mainstream quad due to better IPC.
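To put that trade-off in rough numbers, using the hypothetical prices above:

```python
# All prices are the illustrative figures from the paragraph above, not market data.
hedt_platform     = 600 + 350          # 4930K + board, per the figures above
mainstream        = hedt_platform / 2  # "easily half that"
extra_cost        = hedt_platform - mainstream
fps_gain_one_game = 4                  # the one title out of ten that favours HEDT

print(f"${extra_cost:.0f} extra buys ~{fps_gain_one_game} fps in one game out of ten "
      f"(~${extra_cost / fps_gain_one_game:.0f} per fps in that title, nothing in the rest)")
```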

HEDT has its uses but for PC gaming, it doesn't provide a benefit that is worth the price premium in most cases. I can find a plethora of benchmarks showing the 4770k faster than the 4930k in games. It all balances out. You find one benchmark showing the HEDT ahead, you can also find others in which the 4770k is ahead.

HEDT is more beneficial for those who do things outside of PC gaming. I mean, it's a cut-down server part, FFS... it has benefits, but is it worth the hefty price premium? For PC gaming, for most people, the answer is no. But some people don't care. That's up to the user.

But it really does depend. If you're doing tri or quad SLI, HEDT provides a clear and real benefit due to the added PCI express lanes. As well, high surround resolutions in conjunction with quad SLI also benefit greatly from HEDT. But for the typical 1080p-1600p gamer? Mainstream quads are fine with comparable or better performance when averaged out. Yes, you can find 1-2 games where HEDT does better. But you can also find many games where mainstream quads do better because of the better IPC. In the end the buyer has to make their own choice based on what they do and how much their software benefits. There is no one size fits all answer here, especially for gaming. I mean if you're doing productivity related tasks that benefit from high thread counts, then HEDT always makes sense. But for PC gaming? It doesn't always make sense given the price premium and lack of added performance compared to mainstream.
 
For every fringe example you can find of the 4960X coming out ahead, you'll find ten more examples where it loses to the 4770K or ties at best:

[Chart: Arma 3 CPU benchmark (arma3.png)]


[Chart: Hitman CPU benchmark (hitman.png)]


At 2x-3x the price for the total platform, unless you know exactly why you're getting it, you don't need it.

If you're really that concerned about memory bandwidth, buy some DDR3 3200 to pair with the Haswell: http://www.bit-tech.net/news/hardware/2013/06/06/corsair-launches-fastest-ever-venegeance-pr/1

You've summed it up very well.

Get Haswell-E if you want though. But don't delude yourself into thinking that it's the right choice for a gaming rig.
 
I bought 6GB of DDR3 at launch with my 1366 system; pricing was fine, maybe $200.

Stopped reading that wall the moment pricing on ECC server RAM was used to project desktop DDR4 pricing. Like I said, it's paranoia to expect a made-up scenario of DDR4 at $40+ a GB. Maybe if you're talking top-speed 2600+ rated stuff, but otherwise nope.

So I'll just ignore your revisionist history here and get to the facts of the matter. Let's rewind to when DDR3 was really introduced: with the P35 chipset in 2007. Now you tell me DDR3 didn't have a significant price premium.

You want an answer? DDR3 did in fact carry a significant price premium at launch. Tell me I'm lying. TELL ME. Anyone around back then as a DIY PC enthusiast knows I'm right. DDR3 was well matured by the time your X58 chipset rolled around, but it was insanely priced at launch. And it performed worse than DDR2, since DDR2 was well matured with better timings.

Guess what? This applied to DDR, RDRAM, and SDRAM: every new RAM type was insanely priced at launch without a performance benefit. Every time, it took a year or more for prices to normalize. Every new RAM launch carried a significant price premium. *clown face* Now tell me I'm lying. You can easily research this. Did DDR have a hefty price premium at launch? Did RDRAM? SDRAM? I'm not going to say the answer is yes because you can easily figure this out yourself, but the answer is yes.

Tell you what. I know DDR4 will be insanely priced at launch. But you go right ahead and buy it, and a year later it will be priced more reasonably, after you yourself pay the launch R&D tax. I'm ALL FOR early adopters eliminating the R&D tax for the rest of the market. So, please do that.
 
You need to calm down. No reason to be so upset. HW-E will launch in about 3 months, and we'll see whether you can get a 16GB kit for under $600, as you're hoping will be the case for whatever reason.




About some of the gaming benchmarks above: there are situations where the old 3930K/3960X are still faster than the latest and greatest 4770K in several games (Battlefield 4 MP, Arma 3, Crysis 3, Watch Dogs, and more). In most cases it's new, cutting-edge games designed around the new consoles that deliver better performance on 6+ core chips.

There are definitely more games where single-threaded IPC delivers better performance. Many times, though, these are older games where you're already getting excellent performance regardless of platform, even on a 2500K. Whereas I've found the games benefiting from 6+ cores are ones where you reap actual benefits in game. Notably, Battlefield 4 multiplayer plays better with a hex-core than it does with a quad.

Still, there are games where a Haswell will give you noticeably better performance; heavily CPU-reliant and poorly threaded RTS games are where I've seen it. Starcraft 2 is an example where my 4670K system performs visibly faster than my 3930K if I'm not using GPU-limited settings.

Really it's a trade-off. I prefer the enthusiast platform. As far as longevity, it's a longer-lasting platform than the desktop line if you buy into it on launch day. There's not much doubt we'll see the same situation three years from now, with HW-E still faster in multi-threaded games than whatever the current desktop quad is at that point. The question is how much more prevalent games that make use of those extra cores will be going forward. Right now more games are starting to take advantage.

The platform does cost two to three times as much out of the gate. So if you're talking value, it's a no-brainer the desktop line is what you should look at. For enthusiasts there are more metrics than just value though. Otherwise you wouldn't see so many people on this board with $500+ video cards that are just 20% or so faster than a $200-$300 option.
 
So I'll just ignore your revisionist history here and get to the facts of the matter. Let's rewind to when DDR3 was really introduced: with the P35 chipset in 2007. Now you tell me DDR3 didn't have a significant price premium.

You want an answer? DDR3 did in fact carry a significant price premium at launch. Tell me I'm lying. TELL ME. Anyone around back then as a DIY PC enthusiast knows I'm right. DDR3 was well matured by the time your X58 chipset rolled around, but it was insanely priced at launch. And it performed worse than DDR2, since DDR2 was well matured with better timings.

Guess what? This applied to DDR, RDRAM, and SDRAM: every new RAM type was insanely priced at launch without a performance benefit. Every time, it took a year or more for prices to normalize. Every new RAM launch carried a significant price premium. *clown face* Now tell me I'm lying. You can easily research this. Did DDR have a hefty price premium at launch? Did RDRAM? SDRAM? I'm not going to say the answer is yes because you can easily figure this out yourself, but the answer is yes.

Tell you what. I know DDR4 will be insanely priced at launch. But you go right ahead and buy it, and a year later it will be priced more reasonably, after you yourself pay the launch R&D tax. I'm ALL FOR early adopters eliminating the R&D tax for the rest of the market. So, please do that.


It's no secret that DDR4 pricing will be higher than DDR3 when it arrives; the question is how much? I've seen links saying 30% more and newer links saying $450 for 16GB. I guess we'll have to wait and see.

But, you know how it goes... Some people are OK paying a premium on a $3000 video card that performs worse than another much cheaper, quieter, older video card. Some people are OK with paying a premium on new memory technology on a new platform that may or may not initially perform worse than the current technology, but likely gives them an easy upgrade path in the future.
 
It's no secret that DDR4 pricing will be higher than DDR3 when it arrives; the question is how much? I've seen links saying 30% more and newer links saying $450 for 16GB. I guess we'll have to wait and see.

But, you know how it goes... Some people are OK paying a premium on a $3000 video card that performs worse than another much cheaper, quieter, older video card. Some people are OK with paying a premium on new memory technology on a new platform that may or may not initially perform worse than the current technology, but likely gives them an easy upgrade path in the future.

I won't pay that for RAM. Not happening. I'll skip the whole platform just for that. I've got no problem with a $500 CPU, $300 mobo, $900 or so for two GPUs, but $450 for RAM is a showstopper. Seriously.
 