Skylake 6700K Cinebench R11.5 memory scaling

Per a request from CHADBOGA, I am going to show results of Cinebench R11.5 running on a Core i7-6700K (stock) at different memory speeds. I will update this post as I run tests at more speeds.

DDR4-2133: 10.13
DDR4-2400: 10.02
DDR4-2666: 10.17
DDR4-2800: 10.16
DDR4-3000: 10.19
 
From these results, it would seem that memory speed has a negligible impact on the Cinebench R11.5 score.
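To put a number on that, here's a minimal Python sketch (mine, not part of the original post) computing the spread between the best and worst of the five runs above:

```python
# Cinebench R11.5 scores from the runs above, keyed by memory data rate (MT/s)
scores = {2133: 10.13, 2400: 10.02, 2666: 10.17, 2800: 10.16, 3000: 10.19}

lo, hi = min(scores.values()), max(scores.values())
# Best and worst runs differ by under 2%
print(f"spread: {(hi - lo) / lo * 100:.2f}%")  # ~1.70%
```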
 

Burpo

Diamond Member
Yes, I found the same thing on older platforms. Memory speed doesn't matter; what does matter is QPI or uncore speed.
 

CHADBOGA

Platinum Member

Thanks for that.

I'm wondering whether it is games in particular that benefit from the increases in memory speed with Skylake, more so than productivity applications?
 

Brunnis

Senior member
Found the same thing on my machine.

CHADBOGA said: I'm wondering whether it is games in particular that benefit from the increases in memory speed with Skylake, more so than productivity applications?
In general, that's probably true. Games naturally put a pretty hefty continuous load on the memory subsystem. However, depending on how you define productivity apps, there are definitely cases where large datasets are being worked on, which would show some dependency on memory speed as well.
 

TheELF

Diamond Member
CHADBOGA said: I'm wondering whether it is games in particular that benefit from the increases in memory speed with Skylake, more so than productivity applications?
Yes, the reason is the multiple GBs of textures that have to be juggled between storage, system RAM, and VRAM; easing the bottleneck in any of those will make games run closer to what was intended.
On consoles, textures are streamed while the GPU keeps rendering; on PCs you have to be able to buffer a whole lot to keep from losing speed. Here's hoping that DX12 fixes that sooner rather than later.
https://www.youtube.com/watch?feature=player_detailpage&v=H1L4iLIU9xU#t=941
 

MrTeal

Diamond Member
You should give the test another try, this time varying the speed of your L3 cache.
 

DrMrLordX

Lifer
"what are the latencies like?"

Yes, I'd like to see the timings if possible. It would be instructive if you were to re-run the same tests using the DDR4-3000 timings at all speeds; then we could compare. It would also be nice to see basic bandwidth/latency measurements for all of the settings used in both tests.
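For context on why the timings matter, here's a back-of-the-envelope sketch (an illustration, not measured data) converting a data rate and CAS latency into first-word latency:

```python
def cas_latency_ns(data_rate: float, cl: int) -> float:
    """First-word CAS latency in nanoseconds.

    DDR transfers twice per clock, so the memory clock in MHz is
    data_rate / 2, and CL cycles at that clock take
    cl / (data_rate / 2) microseconds."""
    return cl / (data_rate / 2) * 1000

# The same CL at a higher data rate means lower absolute latency:
print(cas_latency_ns(2133, 15))  # ~14.1 ns
print(cas_latency_ns(3000, 15))  # ~10.0 ns
```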
 

coercitiv

Diamond Member
DrMrLordX said: Yes, I'd like to see the timings if possible. It would be instructive if you were to re-run the same tests using the DDR4-3000 timings at all speeds; then we could compare. It would also be nice to see basic bandwidth/latency measurements for all of the settings used in both tests.
The best way would be to compare 2400 vs. 3000 at two or three latency values each, but then again it would make little sense to test this in Cinebench.

When AnandTech tested Skylake with Cinebench R15, they found less than a 1% difference between DDR4-2133 C15 and DDR3-1866 C9.
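Plugging that AnandTech pairing into the same first-word-latency arithmetic (again just a sketch) shows the DDR3 kit actually had the lower absolute latency, which makes the sub-1% score gap all the more telling:

```python
# First-word latency in ns: CL cycles / (data_rate / 2) MHz, times 1000
for name, rate, cl in [("DDR4-2133 C15", 2133, 15), ("DDR3-1866 C9", 1866, 9)]:
    print(f"{name}: {cl / (rate / 2) * 1000:.1f} ns")
# DDR4-2133 C15: 14.1 ns
# DDR3-1866 C9:   9.6 ns  -- lower latency, yet scores within 1%
```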
 

know of fence

Senior member
The PC spends its time either waiting for computation to complete or fetching data; let's say one tenth of the time it's the latter. The time spent fetching data is itself split between latency and transfer rate (time = data / transfer rate). Moreover, RAM is only the fourth level of storage in the cache hierarchy. All of these splits and fractions depend on the application and its bandwidth hunger.
We know that integrated graphics has progressed to the point where it completely saturates memory bandwidth; every other application benefits at most from a split-second shorter loading time, or not at all, because higher timings negate whatever split seconds the faster rates save.
It is hard to come up with applications that are hungry for random data and bandwidth, other than perhaps public servers and real-time IGP 3D graphics. CPU benchmarks bent on comparing compute performance are likely all compute-bottlenecked.
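To put a number on that hypothetical one-tenth split, here's a quick Amdahl-style sketch (the fraction is assumed, as above):

```python
def overall_speedup(mem_fraction: float, mem_speedup: float) -> float:
    # Amdahl's law: only the memory-bound fraction of runtime benefits
    return 1 / ((1 - mem_fraction) + mem_fraction / mem_speedup)

# If 10% of runtime is memory-bound and bandwidth scales with data rate,
# going from DDR4-2133 to DDR4-3000 (~1.41x) buys at most ~3% overall:
print(f"{overall_speedup(0.10, 3000 / 2133):.3f}x")  # ~1.030x
```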
 

Rakehellion

Lifer
Cinema 4D can use a lot of memory, though I'd imagine Cinebench wouldn't, since it's primarily a CPU test.