6700K OC vs 5820K OC vs 5960x OC


know of fence

Senior member
May 28, 2009
555
2
71
Is anyone really surprised? Even the 4790K ends up being faster in most games vs. the 5820K.

http://techbuyersguru.com/quad-vs-hex-showdown-core-i7-4790k-vs-5820k-games

There really aren't many games that will utilize all 12 threads, and until there are, the 6700K will always be the better gaming CPU. I do wish they would've at least shown us the min FPS, though.

Why is the 5820K slower than the 4790K at the same clocks? Truly, I'm baffled; it has a giant 15 MB cache, for one thing. The answer may be RAM latency, since I can't think of other possible reasons.
In the TBG test, assuming timings weren't tightened: DDR3 ~9.4 ns (CL10-2133) vs. DDR4 ~11.25 ns (CL15-2666).
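Those first-word latency figures follow directly from the CAS latency and the data rate; a quick sketch of the arithmetic (the helper name is my own, not from the post):

```python
# First-word CAS latency in ns: DDR transfers twice per clock,
# so one clock period is 2000 / data_rate ns for a rate in MT/s.
def cas_latency_ns(cl, data_rate_mts):
    return cl * 2000 / data_rate_mts

print(round(cas_latency_ns(10, 2133), 2))  # DDR3-2133 CL10 -> 9.38 ns
print(round(cas_latency_ns(15, 2666), 2))  # DDR4-2666 CL15 -> 11.25 ns
```

So despite the higher transfer rate, that DDR4 kit is roughly 2 ns slower to first word than the DDR3 kit.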
 

moonbogg

Lifer
Jan 8, 2011
10,734
3,454
136
The solution is to go with Skylake-E. But then some new quads will be out and blah blah. Tough choices. The life of a gamer is a hard one.
 

Dave3000

Golden Member
Jan 10, 2011
1,543
114
106
I think if I had stayed with the i7-2600k that I sold 4 years ago, I would not have been in any better position for gaming than I am now with my i7-4930k; at least there'd be nothing noticeable in games. Basically, my 4930k is performing like an i7-3770k in games even today, and now there are CPUs that are faster than it in gaming, and those are the high-end mainstream processors, e.g. the i5-6600k or i7-6700k, not the enthusiast line.
 

biostud

Lifer
Feb 27, 2003
20,021
7,117
136
Wasn't there some test in Ashes that showed very high CPU usage in multi-GPU setups, spread across all the available cores? That could indicate that DX12 requires a lot of CPU power to sync data in SFR mode.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Pretty much all the games tested can take advantage of >4C/8T to some degree, yet Skylake is still on top.

Hyperthreading is not a replacement for a physical core, though. Most of its benefits revolve around thread stalls and latency hiding.

This time there's no 'slower memory excuse' (DC DDR4-3000 vs QC DDR4-3200). Sure, blame the website if the results don't match your expectations. ;)

I'm not blaming the website, just their methodology. Something doesn't seem right with these tests. The fact that they used mostly scripted sequences may have blunted the CPU performance factor, especially for the titles which CAN use 6 cores like Crysis 3 and AC Unity.

Anyway, RAM speed has very little performance impact on the X99 platform.

BTW I'm sure Intel would rather promote more expensive HEDT parts for gamers.

HEDT parts aren't just for gamers. Professionals use them for their workstations as well.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Is the uncore on the HEDT platform also overclocked? In my case the uncore can keep up with the core clock, and it makes a huge difference in some cases, for example Fallout 4. If they left the uncore at the default 3GHz, they might be leaving a lot of performance on the table.

I'm not blaming the website, just their methodology. Something doesn't seem right with these tests. The fact that they used mostly scripted sequences may have blunted the CPU performance factor, especially for the titles which CAN use 6 cores like Crysis 3 and AC Unity.

Anyway, RAM speed has very little performance impact on the X99 platform.

That is because of the low default uncore clock of 3GHz; once you overclock the uncore to 4GHz and beyond, memory speed starts to make a difference.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Is the uncore on the HEDT platform also overclocked? In my case the uncore can keep up with the core clock, and it makes a huge difference in some cases, for example Fallout 4. If they left the uncore at the default 3GHz, they might be leaving a lot of performance on the table.

Before I switched over to X99, I did some thorough research on the impact of overclocking the uncore, and the consensus seemed to say that it's fairly useless, with the exception of synthetic benchmarks.

How did it impact your Fallout 4 experience?

That is because of the low default uncore clock of 3GHz; once you overclock the uncore to 4GHz and beyond, memory speed starts to make a difference.

I have my uncore at 4GHz, which is 400MHz less than the core clock. I figure that as long as it's within 500MHz of the core, it shouldn't be a bottleneck.
 

know of fence

Senior member
May 28, 2009
555
2
71
It would also be interesting to disable HT on the 6700k to see what sort of gains it actually provides. It's a great comparison nonetheless, although I feel they could have done with an i5 in there. I don't think many people look to hex-cores or octo-cores for a gaming rig, while 6600k vs 6700k is discussed much more often.

Generally, the upgrade from Haswell to Skylake gives about an 11% gain (PClab.pl, average of 14 tests), and the upgrade from i5 to i7 gives about 6%, for a combined gain from an i5-4690K@4.5 to an i7-6700K@4.5 of roughly 19%.
These are special test circumstances with a CPU bottleneck.
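Percentage gains like these compound multiplicatively rather than adding; a small sketch of that arithmetic (function name is mine, not from the tests):

```python
# Compound a series of percentage gains (they multiply, not add).
def combined_gain(*gains_pct):
    total = 1.0
    for g in gains_pct:
        total *= 1 + g / 100
    return (total - 1) * 100

# ~11% (Haswell -> Skylake) on top of ~6% (i5 -> i7):
print(round(combined_gain(11, 6), 1))  # -> 17.7, close to the ~19% quoted
```

The small difference against the quoted ~19% presumably comes down to which individual tests were averaged.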

My understanding is that Skylake mostly gains frames due to higher memory bandwidth rather than architecture. However, Eurogamer went on to demonstrate that even quad-channel bandwidth isn't enough to catch Skylake. That said, had they tested all CPUs at 4.4 GHz, the 5960X probably would have caught up in half of the titles. They also missed the opportunity to use the same memory.

I don't think this is about core-count, for that better tests exist that simply show core scaling, by disabling them 2 at a time.

Rather, this is a cross-platform, apples-to-oranges comparison with six unaccounted-for variables (PCIe lanes, cores, latency, bandwidth, cache, even CPU clock, FFS!). I wonder how these comparisons fared in the past; the 4C/8T part with the new architecture was always first, I reckon.

To be fair though, Far Cry 4 really seems able to take advantage of Skylake (also Watch Dogs and Battlefield 4 MP).
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
My understanding is that Skylake mostly gains frames due to higher memory bandwidth rather than architecture. However, Eurogamer went on to demonstrate that even quad-channel bandwidth isn't enough to catch Skylake. That said, had they tested all CPUs at 4.4 GHz, the 5960X probably would have caught up in half of the titles. They also missed the opportunity to use the same memory.



I would say that it's not the memory bandwidth alone; it's more that Skylake's architecture needs the higher bandwidth to get the best out of it. I read somewhere that it has more registers per core for operations, which is how it manages to do more at lower clock speeds and power consumption, so I would speculate that the additional bandwidth lets Skylake breathe.
 
Last edited:

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
The uncore point is a good one.

Ultimately, the decision between the 6700k and the 5820k depends on how long you're going to hold the CPU and what you do other than gaming. This smells an awful lot like the Q6600 vs E6600 debate back in the day: the Q definitely ended up aging significantly better despite the E6600 being better in the short term. Here on the cusp of DX12 and a smoother path to more scalable game engines, it seems that if you plan on holding your CPU for 3+ years you ought to get a 5820k over a 6700k. If you plan on upgrading at the next release, you're probably still best off with the 6700k if you only game.

If you do any other multicore task you have to evaluate the very real speed ups from 6 cores. I do a lot of DAW work in Reaper which is great at multithreading, so I would've gone 5820k even if I planned on upgrading right away.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Before I switched over to X99, I did some thorough research on the impact of overclocking the uncore, and the consensus seemed to say that it's fairly useless, with the exception of synthetic benchmarks.

How did it impact your Fallout 4 experience?



I have my uncore at 4ghz, which is 400mhz less than the core clock. I figure that as long as it's in 500mhz of the core, it shouldn't be a bottleneck.

The uncore clock matters in cases where memory performance makes a difference, especially with faster DDR4 modules. Aside from synthetics, off the top of my head the two notable cases are Fallout 4 and WinRAR. I can't run any tests now because my monitor blew up just yesterday, but the CPU-bottlenecked areas of the game gave me noticeable increases in performance.
 

Face2Face

Diamond Member
Jun 6, 2001
4,100
215
106
The uncore clock matters in cases where memory performance makes a difference, especially with faster DDR4 modules. Aside from synthetics, off the top of my head the two notable cases are Fallout 4 and WinRAR. I can't run any tests now because my monitor blew up just yesterday, but the CPU-bottlenecked areas of the game gave me noticeable increases in performance.

When I clocked my uncore up from 3GHz to 4.4GHz, I saw a gain of around 3-4 FPS in the Boston Common area. Not huge, but it's something. The 6700K is the CPU to own for FO4.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Not really. They only test 4 and 8 cores, conveniently ignoring 6. The price difference between 6 and 8 cores is also much bigger than between 4 and 6, as the 6700k was, at least for some time, even more expensive than the 5820k.

It's also only one game engine being tested, so it's silly to conclude that the ideal 'max' core count for DX12 is 8. All this video shows is that 8 cores under DX12 is ideal for AoS, nothing more. I agree it's very convenient that they didn't test 6 cores / 12 threads.

The performance of one game engine is not indicative of how everything will perform. Silly video.

They also made a follow-up video comparing 8-core AMD FX CPUs, where the Intel chips destroy them, but if you read the user comments you will see conflicting reports, with some evidence invalidating their results. Remind me never to subscribe to this channel. So many hacks on YouTube.
 
Last edited:

Concillian

Diamond Member
May 26, 2004
3,751
8
81
If price is the same, what would you buy?

An X99 HEDT build with a 5820K, with room to expand up to 128GB of RAM and a 10-core Broadwell-E, or a questionable Z170 platform (what socket 1151 upgrades are guaranteed?) with strict limits on core count and RAM?

One gives you a minor IPC increase today for non-DX12 games, while the other platform will likely scale better with DX12 thanks to the added core count. I would rather build a system that will last a few more years when spending this much money. Oh, and HEDT is better for any other task that uses multiple cores/threads (video/3D rendering, etc.).

And up until recently, the price of the 5820k + mobo and the 6700k + mobo were actually pretty close, with Skylake volumes pinched a bit and 6700k prices inflated.

I agree that if you're choosing a CPU that high-end, it makes sense to consider the 5820k. The difference in games is very minor, low enough to be considered negligible by most. But there is a definite possibility that the future of games will invert the results, and there's no question that the 5820k is going to be better in any kind of non-gaming CPU-intensive application (video compression, rendering, photo editing, etc.).

There was a time in my life when I considered any kind of future-proof thinking on hardware completely idiotic, but with useful CPU lifetimes looking to be at least triple what they were in the past, there comes a time to adjust one's ways.
 

moonbogg

Lifer
Jan 8, 2011
10,734
3,454
136
Anyone with a 5960X can keep it for 10 years and be just fine. Seriously. It's a forever chip, like a diamond.
 

Dave3000

Golden Member
Jan 10, 2011
1,543
114
106
And up until recently, the price of the 5820k + mobo and the 6700k + mobo were actually pretty close, with Skylake volumes pinched a bit and 6700k prices inflated.

I agree that if you're choosing a CPU that high-end, it makes sense to consider the 5820k. The difference in games is very minor, low enough to be considered negligible by most. But there is a definite possibility that the future of games will invert the results, and there's no question that the 5820k is going to be better in any kind of non-gaming CPU-intensive application (video compression, rendering, photo editing, etc.).

There was a time in my life when I considered any kind of future-proof thinking on hardware completely idiotic, but with useful CPU lifetimes looking to be at least triple what they were in the past, there comes a time to adjust one's ways.

Well, the 6700k is not much slower in multicore performance than the 5820k, but there's a bigger difference in lightly threaded performance in the 6700k's favor, and this is without overclocking. In Cinebench 11.5, the 6700k scored 10.17 and the 5820k 10.78 in the multicore test in a review at guru3d.com. I thought I was future-proofing when I upgraded from a 2600k to a 3930k, but now the 6700k nearly matches the 3930k in multicore performance and is much better in gaming due to higher IPC and clocks, despite having 2 fewer cores.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Well, the 6700k is not much slower in multicore performance than the 5820k, but there's a bigger difference in lightly threaded performance in the 6700k's favor, and this is without overclocking. In Cinebench 11.5, the 6700k scored 10.17 and the 5820k 10.78 in the multicore test in a review at guru3d.com. I thought I was future-proofing when I upgraded from a 2600k to a 3930k, but now the 6700k nearly matches the 3930k in multicore performance and is much better in gaming due to higher IPC and clocks, despite having 2 fewer cores.

If you OC both, you'll find the multithreaded performance of the i7-5820K is much higher than the 6700K's, while it's only slightly slower in single-threaded apps. Intel set a very low stock clock on the i7-5820K, leaving a ton of room to OC.
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
Well, the 6700k is not much slower in multicore performance than the 5820k, but there's a bigger difference in lightly threaded performance in the 6700k's favor, and this is without overclocking. In Cinebench 11.5, the 6700k scored 10.17 and the 5820k 10.78 in the multicore test in a review at guru3d.com. I thought I was future-proofing when I upgraded from a 2600k to a 3930k, but now the 6700k nearly matches the 3930k in multicore performance and is much better in gaming due to higher IPC and clocks, despite having 2 fewer cores.

Sorry for not clarifying; I just assumed that both would have a "reasonable" overclock, which cuts the clock difference to around half of what it is at stock and improves the gaming performance of the 5820k considerably, as even a pretty mild OC raises its clock speed by over 20%.

Apologies, I had my own decision-making in mind, which was basically comparing mild OC to mild OC.
 

Dave3000

Golden Member
Jan 10, 2011
1,543
114
106
If you OC both, you'll find the multithreaded performance of the i7-5820K is much higher than the 6700K's, while it's only slightly slower in single-threaded apps. Intel set a very low stock clock on the i7-5820K, leaving a ton of room to OC.

Well, let me put it this way: which is going to be more reliable and last longer, an i7-5820k overclocked to 4.2 GHz or an i7-6700k running at stock?
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Well, let me put it this way: which is going to be more reliable and last longer, an i7-5820k overclocked to 4.2 GHz or an i7-6700k running at stock?

They'll both likely be stressed about the same. Max OCs for the two are only about 200MHz apart; Intel simply chose to set the stock 5820K clocks much lower, most likely due to TDP.