Intel Skylake / Kaby Lake


gorion

Member
Feb 1, 2005
146
0
71
Intel Skylake is like that super pretty girl who everyone hyped up to be the girl of their dreams, but when you actually spend some time with her, getting ready to dump your current girlfriend, she turns out to be barely any better. So instead of dumping my current girlfriend for her, I chose to stick with my current girlfriend. I will dump my current girlfriend for a new boyfriend next year (AMD Zen).

So changing from a girlfriend to a boyfriend?
I would reconsider things a bit.
 

mikk

Diamond Member
May 15, 2012
4,140
2,154
136
Remember, it's a GT4e model with eDRAM at 65W, so the clocks will differ just as much as between the 5775C and the 4790K.

I wouldn't wait if I needed a CPU upgrade.


Also, there's the higher price, and I think the eDRAM effect will be smaller for those Skylake-K users with DDR4-3000 CL15 memory or faster.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Intel Skylake is like that super pretty girl who everyone hyped up to be the girl of their dreams, but when you actually spend some time with her, getting ready to dump your current girlfriend, she turns out to be barely any better. So instead of dumping my current girlfriend for her, I chose to stick with my current girlfriend. I will dump my current girlfriend for a new boyfriend next year (AMD Zen).

I'll be surprised if Zen can compete with Skylake next year.

For me, the package of Skylake with the new chipsets is a pretty good incentive.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Why does Skylake eDRAM affect minimum frames with a dGPU?

Because the CPU can use it, too.

The 128 MB of eDRAM is on the same package as the CPU, but in a separate die manufactured in a different process. Intel refers to this as a Level 4 cache that is available to both CPU and GPU, naming it Crystalwell.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Haswell-E came after Devil's Canyon.
My mistake. There is a whole 7 weeks difference, and I remembered incorrectly which was released first.
No, it wouldn't, because of much higher DDR4 latencies. You didn't provide any kind of reasonable argument as to why other review results aren't valid while AnandTech's are, other than calling them paid by Intel, which shows that your only purpose here is to troll and defend your current system.

DDR3-1600 CL8/CL9 (used in most Haswell tests) is superior in true latency to DDR4-2133 CL15 (used in AnandTech's Skylake review).

Haswell: DDR3-1600 CL9 = 1.25 ns x 9 CL = 11.25 ns true latency
Skylake:
DDR4-2133 CL15 = 0.94 ns x 15 CL = 14.06 ns true latency
DDR4-2400 CL15 = 12.5 ns true latency
DDR4-2800 CL15 = 10.7 ns true latency
DDR4-3000 CL15 = 10 ns true latency
You seriously seem to know absolutely nothing about RAM. Your calculations above determine nothing other than the latency implied by a kit's speed and CL rating. Latency is only half of RAM's 'speed', not 100% of it, as you continue to imply. Total bandwidth is much more important.

BTW, why does your Skylake get to use 2,666 or 3,000 MHz RAM, while Haswell is saddled with 1,333 or 1,600 MHz, when you can buy DDR3 at 3,000 MHz and faster? Doesn't fit your agenda, I guess.
Can you explain to us why, even in applications, AnandTech showed different results than some other reviews that also used DDR4-2133 (or at most DDR4-2666)?
No, I didn't conduct the AnandTech tests. I suppose that you can explain them?
Core i7 6700K is the better chip, deal with it.
I haven't seen anyone say that it wasn't. You weren't pimping the 6700K with your cherry-picked, completely lopsided reviews, though. You were swearing (edit: by implying) that the Skylake μArch is so awesome that it beats all other μArchs, even compared to chips with higher clock speeds and twice or more the threads, which is honestly one of the dumber things I've read in the past few months, on any site. Edit #2: This is also the very first time that you have even mentioned the 6700K, at least since I first called you out on the B.S. you were spewing.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
My mistake. There is a whole 7 weeks difference, and I remembered incorrectly which was released first.

You seriously seem to know absolutely nothing about RAM.

No, you're the one who knows nothing about RAM. Real latency, according to Crucial:

Crucial's website said:
The true definition of latency and the latency equation
At a basic level, latency refers to the time delay between when a command is entered and executed. It's the gap between the two. Because latency is all about this gap, it's important to understand what happens after a command is issued. When the memory controller tells the RAM to access a particular location, the data must go through a number of clock cycles in the Column Address Strobe in order to get to its desired location and “complete” the command. With this in mind, there are two variables that determine a module's latency:

The total number of clock cycles the data must go through (measured in CAS Latency, or CL, on data sheets)
The duration of each clock cycle (measured in nanoseconds)

true latency (ns) = clock cycle time (ns) x number of clock cycles (CL)

Once again:

DDR3-1600 CL8/CL9 (used in most Haswell tests) is superior in true latency to DDR4-2133 CL15 (used in AnandTech's Skylake review).

Haswell: DDR3-1600 CL9 = 1.25 ns x 9 CL = 11.25 ns true latency
Skylake:
DDR4-2133 CL15 = 0.94 ns x 15 CL = 14.06 ns true latency
DDR4-2400 CL15 = 12.5 ns true latency
DDR4-2800 CL15 = 10.7 ns true latency
DDR4-3000 CL15 = 10 ns true latency
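
If you want to check the math yourself, here's a minimal Python sketch of Crucial's formula quoted above; the helper name true_latency_ns is my own, and the kits are just the examples from this post:

Code:
# Crucial's formula: true latency (ns) = clock cycle time (ns) x CAS latency (CL).
# DDR transfers twice per I/O clock, so with the data rate in MT/s one clock
# cycle lasts 2000 / data_rate nanoseconds (DDR3-1600 -> 1.25 ns per cycle).

def true_latency_ns(data_rate_mts, cas_latency):
    cycle_time_ns = 2000.0 / data_rate_mts
    return cycle_time_ns * cas_latency

kits = [
    ("DDR3-1600 CL9 (Haswell)", 1600, 9),
    ("DDR4-2133 CL15 (Skylake)", 2133, 15),
    ("DDR4-2400 CL15", 2400, 15),
    ("DDR4-2800 CL15", 2800, 15),
    ("DDR4-3000 CL15", 3000, 15),
]
for label, rate, cl in kits:
    print(f"{label}: {true_latency_ns(rate, cl):.2f} ns")

Running it reproduces the figures above to within rounding.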

Latency is very important too; even AnandTech recognizes this in their testing methodology. Skylake might be particularly sensitive to it, which according to some is the reason results vary so much between reviews.

AnandTech said:
Normally in our DRAM reviews I refer to the performance index, which has a similar effect in gauging general performance:

DDR3-1600 C11: 1600/11 = 145.5
DDR4-2133 C15: 2133/15 = 142.2

As you have faster memory, you get a bigger number, and if you reduce the CL, we get a bigger number also. Thus for comparing memory kits, if the difference > 10, then the kit with the biggest performance index tends to win out, though for similar kits the one with the highest frequency is preferred.

I already showed you a test where Skylake is at a disadvantage (according to AnandTech's method) and still outperforms Haswell, like any Intel tock outperforms its predecessor (10-15% per clock).

Applying AnandTech's method to PCLab's review:
DDR4-2666 CL16: 2666/16 = 166 (Skylake-S)
DDR3-1866 CL9: 1866/9 = 207 (Haswell)
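
AnandTech's performance index is the same kind of one-line arithmetic; here's a quick sketch in the same vein (the kit labels are just the configurations discussed in this thread):

Code:
# AnandTech's performance index: data rate (MT/s) / CAS latency (CL).
# Bigger is better; per the quote above, a gap of more than 10 between
# two kits tends to be meaningful.

def performance_index(data_rate_mts, cas_latency):
    return data_rate_mts / cas_latency

for label, rate, cl in [
    ("DDR3-1600 CL11 (AnandTech Haswell)", 1600, 11),
    ("DDR4-2133 CL15 (AnandTech Skylake)", 2133, 15),
    ("DDR3-1866 CL9 (PCLab Haswell)", 1866, 9),
    ("DDR4-2666 CL16 (PCLab Skylake-S)", 2666, 16),
]:
    print(f"{label}: {performance_index(rate, cl):.1f}")

Note that PCLab's memory choice gives Haswell (207) a clear edge over Skylake-S (166) on this metric, which is exactly the point being made here.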

[Image: PCLab's overall gaming performance chart - 14 games @ 1080p]


Unfortunately for you, whining won't change the facts; the numbers are here, and there are plenty of reviews supporting what I say.


BTW, why does your Skylake get to use 2,666 or 3,000 MHz RAM, while Haswell is saddled with 1,333 or 1,600 MHz, when you can buy DDR3 at 3,000 MHz and faster? Doesn't fit your agenda, I guess.

Very few reviews used DDR3-1600 in their Haswell systems (none used DDR3-1333). You can find plenty of them using overclocked RAM on Haswell and bottom-of-the-barrel DDR4-2133 CL15 on Skylake. We can count on one hand the number of reviews that used faster than DDR4-2666 RAM (often with poor latency) in their Skylake testing. If all this fits your agenda that Devil's Canyon matches Skylake (like your beloved 'not paid by Intel' AnandTech review implies in its gaming tests), that's your problem, not mine. Unfortunately, it doesn't match reality.

No, I didn't conduct the AnandTech tests. I suppose that you can explain them?

Then why did you accuse other reviews (which put Skylake in a better light than AnandTech) of being paid by Intel? Are they paid by Intel even when their memory selection favours Haswell, as on many websites, and Skylake still delivered better results than in AnandTech's review?

myocardia said:
So, if the 'author' hasn't been paid off by Intel to make the Skylake seem as if it has high IPC, like the author you quoted

I haven't seen anyone say that it wasn't. You weren't pimping the 6700K with your cherry-picked, completely lopsided reviews, though. You were swearing (edit: by implying) that the Skylake μArch is so awesome that it beats all other μArchs, even compared to chips with higher clock speeds and twice or more the threads, which is honestly one of the dumber things I've read in the past few months, on any site.

I never implied it would offer more than 10-15% over Haswell; all the rest is thread-crapping on your part. Reported. ;)
 

mikk

Diamond Member
May 15, 2012
4,140
2,154
136
Total bandwidth is much more important.


More important, yes, but I wouldn't say much more important. DDR4-2133 CL15 isn't necessarily better than DDR3-1600 CL9.


BTW, why does your Skylake get to use 2,666 or 3,000 MHz RAM, while Haswell is saddled with 1,333 or 1,600 MHz, when you can buy DDR3 at 3,000 MHz and faster?


DDR3-3000 8 GB = €320+
DDR4-3000 8 GB = ~€75


Haswell with DDR3-1333? In which reviews? Most used DDR3-1600 and some DDR3-1866 or faster. For a 1:1 comparison, Skylake can be used with the same DDR3 memory.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,778
247
106
As we've all noticed, Skylake-K is a 91-95W TDP CPU. Ever since Ivy Bridge (77W TDP), Intel has been increasing the TDP again.

Previously it was believed that Intel intentionally prioritized lower TDP over performance even on desktop. But now we see this is no longer the case.

So I wonder what you think could be the reason for that?

My own guess would be that if Intel had tried lowering the TDP further, or had kept it constant, they would actually have gotten lower performance on Skylake compared to the previous CPU generations (the reason being that 14 nm does not provide as much perf/watt improvement as desired). Since that would not be acceptable, they were forced to increase the TDP even though they didn't want to.

Any thoughts on this?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Mainstream Skylake is 65W and 35W for quads.

Mainstream Ivy Bridge is 77W and 45W for quads.

And that's with +80% AVX performance in favour of Skylake, besides the IPC, IGP, cache, etc.

Looking at mobile, Skylake just rocks.

Someone complained about Intel not setting a higher TDP on the K SKUs. Now that they have, that's wrong too.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Yes, Broadwell beats or ties Skylake in dGPU gaming, despite the much lower clock speed. Broadwell's minimum fps, in particular, are often impressive compared to Skylake's.

http://www.anandtech.com/bench/product/1500?vs=1544

http://www.anandtech.com/bench/product/1501?vs=1542

This is a big hint that Skylake with eDRAM will be impressive.


Win some, lose some, others basically a tie with maybe 1 frame or less difference between the two. Not sure it's that big a deal unless you are relying on the iGPU. I'm not too concerned that the 6700K would be eclipsed in any way.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
As we've all noticed, Skylake-K is a 91-95W TDP CPU. Ever since Ivy Bridge (77W TDP), Intel has been increasing the TDP again.

Previously it was believed that Intel intentionally prioritized lower TDP over performance even on desktop. But now we see this is no longer the case.

So I wonder what you think could be the reason for that?

My own guess would be that if Intel had tried lowering the TDP further, or had kept it constant, they would actually have gotten lower performance on Skylake compared to the previous CPU generations (the reason being that 14 nm does not provide as much perf/watt improvement as desired). Since that would not be acceptable, they were forced to increase the TDP even though they didn't want to.

Any thoughts on this?

Skylake-K is running with very loose stock voltage. Decent overclocks (4.5-4.6 GHz) seem to be possible without touching the voltage.

Mainstream chips run lower clocks and will probably have much lower voltage.
 

jpiniero

Lifer
Oct 1, 2010
14,600
5,221
136
You also have to remember that the rumors were speculating that Intel was only going to release the locked Skylake quads and roll with Broadwell-C. But it's obvious why they ended up doing it: 14 nm yields are still terrible, and they need to sell the chips that wouldn't have been able to meet the specs at 65W.