
Skylake 6700K Cinebench R11.5 memory scaling

Per a request from CHADBOGA, I am going to show results of Cinebench R11.5 running on a Core i7-6700K (stock) at different memory speeds. I'll update this post as I do tests at different speeds.

DDR4-2133: 10.13
DDR4-2400: 10.02
DDR4-2666: 10.17
DDR4-2800: 10.16
DDR4-3000: 10.19
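For what it's worth, the spread across those five scores is under 2%, which is close to run-to-run noise for Cinebench. A quick check of the numbers above:

```python
# Cinebench R11.5 scores from the post, keyed by DDR4 transfer rate (MT/s)
scores = {2133: 10.13, 2400: 10.02, 2666: 10.17, 2800: 10.16, 3000: 10.19}

# Relative spread between the best and worst run
spread = (max(scores.values()) - min(scores.values())) / min(scores.values())
print(f"{spread:.1%}")  # 1.7%
```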
 
Yes, I found the same thing on older platforms. Memory speed doesn't matter; what does matter is QPI or uncore speed.
 

Thanks for that.

I'm wondering if for some reason it is games that particularly benefit from the increases in memory speed with Skylake, more so than productivity applications?
 
Found the same thing on my machine.

I'm wondering if for some reason it is games that particularly benefit from the increases in memory speed with Skylake, more so than productivity applications?
In general, that's probably true. Games naturally put a pretty hefty continuous load on the memory subsystem. However, depending on how you define productivity apps, there are definitely cases where large datasets are being worked on which would show some dependency on memory speed as well.
 
I'm wondering if for some reason it is games that particularly benefit from the increases in memory speed with Skylake, more so than productivity applications?
Yes, the reason is the multiple GBs of textures that have to be juggled between storage, system RAM, and VRAM; reducing the bottleneck in any of those will make games run closer to what was intended.
On consoles, textures get streamed while the GPU keeps on rendering; on PCs you have to be able to buffer a whole lot to keep from losing speed. Here's hoping that DX12 will fix that sooner rather than later.
https://www.youtube.com/watch?feature=player_detailpage&v=H1L4iLIU9xU#t=941
 
What are the latencies like?

Yes, I'd like to see the timings if possible. It would be instructive if you were to do the same tests, using the timings for DDR4-3000 on all speeds. Then we could compare. It might also be nice to see basic bandwidth/latency measurements for all of the settings used in both tests.
 
Yes, I'd like to see the timings if possible. It would be instructive if you were to do the same tests, using the timings for DDR4-3000 on all speeds. Then we could compare. It might also be nice to see basic bandwidth/latency measurements for all of the settings used in both tests.
Best way would be to have 2400 vs 3000 at 2-3 latency values, but then again it would make little sense to test this in Cinebench.

When Anandtech tested Skylake with Cinebench R15 they found less than 1% difference between DDR4-2133 C15 and DDR3-1866 C9.
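On the timings question: absolute CAS latency in nanoseconds follows from the CL and the transfer rate (for DDR, one memory clock is 2000 / MT/s ns, so latency = CL × 2000 / MT/s). A quick sketch comparing the two configurations from that Anandtech test:

```python
def cas_latency_ns(transfer_rate_mts, cas_cycles):
    # DDR transfers twice per clock, so the memory clock period in ns
    # is 2000 / (transfer rate in MT/s); CAS latency is that many cycles.
    return cas_cycles * 2000.0 / transfer_rate_mts

# The Anandtech comparison above: DDR4-2133 C15 vs DDR3-1866 C9
print(round(cas_latency_ns(2133, 15), 2))  # 14.07 ns
print(round(cas_latency_ns(1866, 9), 2))   # 9.65 ns
```

So the older DDR3 kit actually has the lower absolute latency, which helps explain why the newer, "faster" DDR4 showed no benefit in that test.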
 
The PC either spends time waiting for computation to complete or fetching data; let's say 1/10th of the time it's the latter. The time spent fetching data is again split between latency and transfer time (data / transfer rate = time). Moreover, RAM is only the fourth level of storage in the cache hierarchy. All of these splits and fractions depend on the application and its bandwidth hunger.
We know that integrated graphics has progressed to the point where it completely saturates memory bandwidth; every other application maybe benefits from a split second shorter loading time. Or not at all, because higher timings negate the split seconds saved with faster transfer rates.
It is hard to come up with genuinely bandwidth-hungry applications, other than perhaps public servers and real-time IGP 3D graphics. CPU benchmarks, being designed to compare compute performance, are likely all compute-bottlenecked.
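Taking that 1/10th memory-bound figure at face value, Amdahl's law puts an upper bound on what faster RAM can buy. A rough sketch (the 1/10 fraction is the illustrative assumption from above, and the DDR4-3000/2133 transfer-rate ratio is treated as a best-case memory speedup):

```python
def overall_speedup(mem_fraction, mem_speedup):
    # Amdahl's law: only the memory-bound fraction of runtime benefits
    # from faster memory; the rest is unchanged.
    return 1.0 / ((1.0 - mem_fraction) + mem_fraction / mem_speedup)

# 1/10th of the time fetching data, DDR4-3000 vs DDR4-2133 (~1.41x rate)
print(round(overall_speedup(0.10, 3000 / 2133), 3))  # 1.03
```

So even with memory ~41% faster, the overall gain tops out around 3%, which is consistent with the flat Cinebench scores in this thread.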
 