Intel Skylake / Kaby Lake


zetruz

Junior Member
Oct 1, 2016
1
0
6
I'm terribly sorry for asking something I'm sure has been asked before, but I don't know where to find the information. Actually, there are two questions, both pertaining to the BIOS updates of current Z170 motherboards to support Kaby Lake processors.
1. Will an updated Z170 motherboard with a Kaby Lake processor support Intel's Optane technology? Or will that be reserved for Z270 motherboards? In other words: is Intel's Optane only CPU-dependent, or CPU- and motherboard-dependent? (Surely it's not purely motherboard-dependent, right?)
2. If you plop a Kaby Lake processor into a Z170 motherboard that has not had its BIOS updated, what will happen? Will the computer just fail to boot into the OS? Will you just have random crashes but it will still generally "quasi-work"? Will you not even be able to get into the BIOS to perform the update?
I ask because I'm helping a friend pick parts for a computer, and this kind of stuff might prove relevant. If he buys a Z170 motherboard and a Kaby Lake processor, will he need to acquire a Skylake processor to perform the BIOS update before he plops in the Kaby Lake CPU?

Thank you guys very much in advance. And again, I'm sorry for the stupid questions.
 

SAAA

Senior member
May 14, 2014
541
126
116
This is either clocked at more than 4.5 GHz or there must be IPC differences. The Gaussian Blur score is 25% better than my 6700K clocked at 4.0 GHz, which is running faster memory, DDR4-3200 CL14 actually.

http://browser.primatelabs.com/v4/cpu/606577

It's possible; just use AES as a ruler and the clock speed is probably closer to 4.9 GHz for the 7700K in this sample. It still bodes well for Kaby Lake overclocks, even if IPC doesn't change at all.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
WOW. My 6700K @ 4.4GHz gets ~5400 ST. This is running at 5GHz ;)

There are no IPC improvements, it's the same Skylake core.

Then there is of course another perfectly viable explanation that you have ignored since day 1 -- Geekbench is an unreliable synthetic program that has little to do with real world IPC/performance under real world applications; unless of course you think a 4.8Ghz i7 4790K is 23% faster than your 4.4Ghz 6700K.
http://browser.primatelabs.com/geekbench3/5028462

Interesting how under real world applications that a lot of us use, such as video games, a 4.6GHz i7 6700K easily beats a 4.9GHz i7 4790K, often by 10-15%, yet AppleMarketingBench shows Haswell demolishing Skylake by more than 20% at lower clocks than 4.9GHz.

The fun just keeps getting better because an iMac Retina powered by a 6700K, that we know cannot be overclocked (much), hits almost 6200 ST under AppleBench. Clearly 7700K or Windows 10 is a failure, right? /s

You also said that 6100 ST is a 5Ghz Skylake, am I right? This guy must be using liquid nitrogen then on his i5-6500 as he achieved almost 6500 points.

The fun doesn't stop. The i7-3720QM gets 3542, 3542 and 3543 in ST. In contrast, the i7 6700K @ 4.4GHz, according to you, scores 5400 points, while others have it all the way up to 6500 points. If we use the low end of 5400 points and adjust it to 4.2GHz (×4.2/4.4), we arrive at a score of 5154, or at least 45.5% faster than an i7-3720QM.
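The adjustment above is just a linear rescale of the score by the clock ratio. Here is a minimal Python sketch of that arithmetic (my own illustration, not anything from Geekbench itself), assuming ST scores scale linearly with core clock, which is of course a simplification:

```python
# Clock-normalization sketch for the Geekbench ST figures quoted above.
# Assumption: single-threaded score scales linearly with core clock
# (ignores memory/uncore, so treat the result as a rough estimate only).

def scale_to_clock(score, measured_ghz, target_ghz):
    """Linearly rescale a score from the clock it was measured at to a target clock."""
    return score * target_ghz / measured_ghz

score_6700k_at_4p4 = 5400  # reported i7-6700K ST score at 4.4 GHz
score_3720qm = 3542        # reported i7-3720QM ST score

# Scale the 6700K result back to its stock 4.2 GHz single-core turbo.
score_6700k_at_4p2 = scale_to_clock(score_6700k_at_4p4, 4.4, 4.2)

print(int(score_6700k_at_4p2))                                # 5154
print(f"{score_6700k_at_4p2 / score_3720qm - 1:.1%} faster")  # 45.5% faster
```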

Commonly used x86 programs for comparing CPUs show that 6700K only leads 3720QM by:

3DMark06 = 36% faster (9037 vs. 6642)
Cinebench R10 Single = 38% faster (26040 vs. 18879)
Cinebench R11.5 Single = 34% faster (1.97 vs. 1.47)

So you use Cinebench for comparing IPC of AMD vs. Intel CPUs in x86 threads, and you use AppleMarketingBench to compare Apple's CPUs to the various Android opposition, but you never actually bothered to cross-check and find this discrepancy?

You never even bothered to check how an i7 6700K under Mac OS X is getting > 6000 points in stock form on an iMac?
https://browser.primatelabs.com/v4/cpu/343269

The AppleMarketingBench is so much fun, the fun never stops!

Intel Core m7-6Y75 @ 1.30 GHz = 3547 ST score! Oh wait, that's almost the same score that wonderful i7-3720QM got above. But how does it compare under x86 workloads?

Cinebench R11.5 Single 64-bit
3720QM = 1.47
m7-6Y75 = 1.08
http://www.notebookcheck.net/Mobile-Processors-Benchmark-List.2436.0.html

But the fun, the fun just keeps on giving!

AppleMarketingBench shows 3720QM multi-threaded performance only 63% above m7-6Y75 (11219 vs. 6857), but NotebookCheck estimates that under Windows x86 benchmarks the i7 3720QM is an 80% faster processor overall (93.3% performance rating vs. 51.9%).
http://www.notebookcheck.net/Mobile-Processors-Benchmark-List.2436.0.html

The solution is to use real world applications, and scrap all synthetic benchmarks. Unigine, 3DMark, PCMark, GeekBench, SuperPi, PassMark = ALL garbage that tells me little about real world performance in real world applications. All these programs tell me is how well a CPU/GPU runs that particular synthetic benchmark. But of course, considering how well Intel, NV and Apple perform in marketing-driven synthetic junk, you would probably throw a fit if the PC community started to flat out discredit synthetic junk and use the real world programs that PC users actually run after spending $300+ on their CPU.

The propensity to post synthetic benchmarks to compare CPUs & GPUs on a technical PC forum is frankly insulting and should not be acceptable in 2016. It takes 5 minutes of Google to find reviews which compare a 6700 against a 4.6GHz 6700K in games. We don't need to extrapolate or guess or fantasize how Kaby Lake's clock speeds will translate (or not) into real world gains by using the erroneous AppleMarketingBench, because we will have real world games and other applications tested by objective professional reviewers. Trying to derive real-world CPU performance from synthetic benchmarks is a waste of time, and is downright insulting to a community that wants to promote objectivity. On the contrary, synthetic benchmarks promote the idea that your CPU/GPU isn't fast enough for real world applications/games, because they inherently present performance under those synthetic benchmarks AS IF it's directly transferable to real world results.

Since by very definition, a synthetic benchmark does NOT measure real world performance, and it will never be able to encompass all the varying code and specific architectural and driver optimizations in real world apps, it has little to no value. Furthermore, unless the synthetic benchmark uses exactly the game engine/code that underlies the 100s-1000s of applications mobile and PC users use, the very nature of it being "synthetic" dictates that it's largely irrelevant.

It's the same reason we can find 100s of games where Skylake beats Sandy/Ivy and Haswell, 100s of games where it's barely faster (or not faster at all), 100s of games where HT hurts 6700K, and 10s of games where HT helps 6700K, etc. The very reason Digital Foundry and various sites such as GameGPU test as many games as they can is BECAUSE performance under real world applications (games) varies. Synthetic benchmarks cannot capture this variance.
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
14,584
5,206
136
Keep in mind there are plenty of OSX hackintosh results on Geekbench. Apple does offer an upgrade to the 6700K on the iMac 5K but you wouldn't be able to overclock it.

Intel Core m7-6Y75 @ 1.30 GHz = 3547 ST score! Oh wait, that's almost the same score that wonderful i7-3720QM got above. But how does it compare under x86 workloads?

Cinebench R11.5 Single 64-bit
3720QM = 1.47
m7-6Y75 = 1.08
http://www.notebookcheck.net/Mobile-Processors-Benchmark-List.2436.0.html

Core M can throttle pretty hard; I don't think that's a really valid comparison.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,326
10,034
126
Since by very definition, a synthetic benchmark does NOT measure real world performance, and it will never be able to encompass all the varying code and specific architectural and driver optimizations in real world apps, it has little to no value.

Just like putting cars on a dyno doesn't pre-determine which one wins a race (drag, circuit, or otherwise), synthetic benchmarks don't tell the whole picture. That does NOT mean that they are useless.

A synthetic benchmark, a well-written one that gives repeatable results, is a pre-set bunch of code that runs on various processors, etc.

Results of a synthetic benchmark run, I agree, DO NOT APPLY TO ANYTHING BUT THAT PARTICULAR SYNTHETIC BENCHMARK, unless that EXACT SAME CODE is used in a "real-world" application. (Which is probably why Cinebench is so popular around here.)

But synthetic benchmarks are USEFUL for comparing different CPUs / APUs, WITHIN THE DOMAIN OF THE BENCHMARK. That is, their relative ranking.
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
Then there is of course another perfectly viable explanation that you have ignored since day 1 -- Geekbench is an unreliable synthetic program that has little to do with real world IPC/performance under real world applications; unless of course you think a 4.8Ghz i7 4790K is 23% faster than your 4.4Ghz 6700K.

Nice cherry picking of an obviously sketchy sample (how to detect a sketchy sample: an overclocked CPU on macOS).

Interesting how under real world applications that a lot of us use, such as video games, a 4.6GHz i7 6700K easily beats a 4.9GHz i7 4790K, often by 10-15%, yet AppleMarketingBench shows Haswell demolishing Skylake by more than 20% at lower clocks than 4.9GHz.

May I see a timestamp for the 15% claim? A quick look-through led to seeing mostly equal perf (down to a ~10% margin on some frames).

The fun just keeps getting better because an iMac Retina powered by a 6700K, that we know cannot be overclocked (much), hits almost 6200 ST under AppleBench. Clearly 7700K or Windows 10 is a failure, right? /s
https://browser.primatelabs.com/v4/cpu/426690

macOS is clearly a bad OS.

The solution is to use real world applications, and scrap all synthetic benchmarks.

The irony here is that GeekBench is a bunch of real world libs glued together into a synthetic test, coupled with bad test cases and system detection.

Since by very definition, a synthetic benchmark does NOT measure real world performance, and it will never be able to encompass all the varying code and specific architectural and driver optimizations in real world apps, it has little to no value. Furthermore, unless the synthetic benchmark uses exactly the game engine/code that underlies the 100s-1000s of applications mobile and PC users use, the very nature of it being "synthetic" dictates that it's largely irrelevant.

I am willing to bet that just about every consumer *nix-based device on the planet uses some of the code that went into GeekBench (on second thought, the irony is that Windows-based devices may just be the exception). What now?
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
AppleMarketingBench shows Haswell demolishing Skylake by more than 20% at lower clocks than 4.9GHz.
You make a valid point. Everyone knows that to get an accurate picture of performance, one needs to run multiple benchmarks. Synthetic ones too, but preferably real world workloads where you measure time to completion instead of some arbitrary points system.

One detail makes me wonder how well researched this post is, though.

Intel Core m7-6Y75 @ 1.30 GHz = 3547 ST score! Oh wait, that's almost the same score that wonderful i7-3720QM got above. But how does it compare under x86 workloads?

Cinebench R11.5 Single 64-bit
3720QM = 1.47
m7-6Y75 = 1.08
http://www.notebookcheck.net/Mobile-Processors-Benchmark-List.2436.0.html

But the fun, the fun just keeps on giving!
Everyone, I thought, also knew that Intel has something called Turbo Boost. I'm not going to bother to check your other examples, but if they do not take into account the CPUs running at max boost clock, you might want to re-research your post and see if there's still this discrepancy.
 

KTE

Senior member
May 26, 2016
478
130
76
You make a valid point. Everyone knows that to get an accurate picture of performance, one needs to run multiple benchmarks. Synthetic ones too, but preferably real world workloads where you measure time to completion instead of some arbitrary points system.

One detail makes me wonder how well researched this post is, though.


Everyone, I thought, also knew that Intel has something called Turbo Boost. I'm not going to bother to check your other examples, but if they do not take into account the CPUs running at max boost clock, you might want to re-research your post and see if there's still this discrepancy.
Without nitpicking, he does have a valid point tho. It's difficult to ascertain how well a CPU's performance reflects in the real world by looking at synthetics, and yet it's the most common correlation made by most, even myself.

I agree, they are only to be used for comparing processors within the bench itself. Good for design analysis, but how relevant is it as a reflection of end-user performance gain?

They could be showing 50% boost which translates to 5% in real used code.

They could also be showing corner-case, best-case examples.

Secondly... if it's 30s vs 35s or 90s vs 100s in a CB test, it makes absolutely no difference in the real world -- it's just not perceptible for a non-time-critical computer application. Is that worth $300 more to the average DT user?

+5-10% in real world perception is just noise even in time critical apps. You are not going to gain anything between the two.

I'd even go so far as to say that an average/max of 65fps vs 70fps is practically nil in terms of perceptible difference (I say that even though I play FPS at 90).

All of these are marketing stats, good for intricate design analysis but just not helpful for end users who will be assessing whether to pay >$250 for that gain.

I am quite baffled if anyone 'upgrades' to a platform offering less than 20% performance gain across the board (if nothing else is much better)... for time-critical apps (except gaming). Corporations don't, and I can't think of any common consumer running code more time-critical than them.

Sent from HTC 10
(Opinions are own)
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131

Nice find.

(Image: E3 v6 roadmap - e3v6roadmap_zpsztaxw880.jpg)
 

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
Last edited:

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
Skylake-EP - Socket LGA 3647

With our four node Knights Landing system in the lab, we wanted to take a quick look at Intel's LGA 3647 socket. This is Intel's next-generation socket that will replace the venerable LGA 2011. As with LGA 2011, where there are variants such as LGA 2011, LGA 2011-2 and LGA 2011-3, the Intel LGA 3647 socket will have different sockets with the same number of pins serving different markets. Today we have a Knights Landing socket and wanted to show off how big the new CPU packages are going to be in comparison to today's chips. If you wanted to see into the future of where chips are going, today is your day.


(Image: Broadwell-EP and Broadwell-DE package size comparison - Broadwell-EP-LGA-2647-Broadwell-DE-package-size-comparison.jpg)


www.servethehome.com/big-sockets-look-intel-lga-3647/
 

Timmah!

Golden Member
Jul 24, 2010
1,417
630
136
I am looking forward to the day when they release a socket with 1,000,000 pins, which will take up the surface of an entire ATX format mobo :p
 

IndyColtsFan

Lifer
Sep 22, 2007
33,656
687
126
Taking it at face value:

(6139 KL ST / 4.5 GHz) / (5350 SKL ST / 4.2 GHz) = +7.6% IPC. MT IPC increase is higher, at 12.7%.

More interesting than I expected.

Wow, color me shocked - I thought for sure we'd see 3-4%. A 7.6% gain is pretty impressive and makes my decision easier.
 

Nothingness

Platinum Member
Jul 3, 2013
2,400
733
136
Wow, color me shocked - I thought for sure we'd see 3-4%. A 7.6% gain is pretty impressive and makes my decision easier.
I don't believe for a second you'll see such IPC gains. The 6139 score is bogus, most likely an OC score. There are now other scores that range between 5200 and 5763:
http://browser.primatelabs.com/v4/cpu/search?utf8=✓&q=7700k

Picking the best, 5763, we get 1261 ST/GHz vs 1274 ST/GHz for the 6700K. Basically what I expect: no IPC improvement.
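For what it's worth, the whole disagreement comes down to which clock you assume each Geekbench sample actually ran at, since the submissions don't reliably report boost or overclocked frequencies. Here is a minimal Python sketch of the ST/GHz normalization being argued over (my own illustration; the clock values below are assumptions, so the figures won't line up exactly with the numbers quoted in the posts):

```python
# Score-per-GHz normalization sketch. The conclusion flips depending on the
# clock you assume a sample ran at, which is exactly the point of contention:
# a stock 4.5 GHz 7700K at 6139 ST implies a sizable per-clock gain, while the
# same score from an overclocked ~5 GHz sample implies no gain at all.

samples = [
    # (label,                          ST score, assumed clock in GHz)
    ("6700K stock (4.2 GHz ST turbo)",    5350, 4.2),
    ("7700K, 6139 ST @ stock 4.5 GHz",    6139, 4.5),
    ("7700K, 6139 ST if OC'd to 5.0 GHz", 6139, 5.0),
    ("7700K, 5763 ST @ stock 4.5 GHz",    5763, 4.5),
]

for label, score, ghz in samples:
    print(f"{label:36} {score / ghz:6.0f} ST/GHz")
```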
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Kaby Lake uses Skylake cores, as in the exact same Skylake cores. Anyone thinking there will be an IPC increase is just imagining things.

The 7700K is something like ~7.5% and ~5% faster than the 6700K in ST and MT due to clock speed. Not sure what the uncore is clocked at; it is 4.1GHz on the 6700K, but that may have gotten another 100-200MHz as well.
 

IndyColtsFan

Lifer
Sep 22, 2007
33,656
687
126
Kaby Lake uses Skylake cores, as in the exact same Skylake cores. Anyone thinking there will be an IPC increase is just imagining things.

Imaginary like the time when you repeatedly assured everyone the 6950 was going to be $999? :)

The 7700K is something like ~7.5% and ~5% faster than the 6700K in ST and MT due to clock speed. Not sure what the uncore is clocked at; it is 4.1GHz on the 6700K, but that may have gotten another 100-200MHz as well.

I'm skeptical of the "clock corrected" 7.6% number, but I do think we'll see an IPC increase over the 6700k in the 3% range. If I'm wrong, I'm wrong, but at this stage, I'm not dying for a new CPU so I can wait and see.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I'm skeptical of the "clock corrected" 7.6% number, but I do think we'll see an IPC increase over the 6700k in the 3% range. If I'm wrong, I'm wrong, but at this stage, I'm not dying for a new CPU so I can wait and see.

Mobile Kaby Lake was already benched at NotebookReview. No IPC increase. Not that there was any doubt, since Intel said the same. There are no core changes. It's a Skylake refresh on 14nm+ with the video decoder improved.
 
Mar 10, 2006
11,715
2,012
126
I'm skeptical of the "clock corrected" 7.6% number, but I do think we'll see an IPC increase over the 6700k in the 3% range. If I'm wrong, I'm wrong, but at this stage, I'm not dying for a new CPU so I can wait and see.

Intel's CEO himself said that Kaby Lake uses the same Skylake core...

I guess what I would talk about is Kaby Lake. So one of the things we've learned on 14 nanometers is how to make meaningful performance improvements both in the silicon and then with the silicon combined with the architecture. So we said we already started shipping Kaby Lake to our customers and OEMs. We're seeing meaningful performance across all of the various SKUs of Kaby Lake relative to Skylake. Kaby Lake is built off a Skylake core. And as a result, the die size doesn't significantly grow. So you don't see – there's no driver in the silicon itself to shift the margin structure of this product. We're able to get the performance and feature enhancements with relatively small silicon increases but good improvement on the raw silicon technology itself. So there's not an intrinsic driver that should say die size got twice as big so margins are cut. There's nothing like that.

http://seekingalpha.com/article/399...-results-earnings-call-transcript?part=single

Anyway, I don't know why it matters whether Intel is getting the performance thru more frequency or higher perf/MHz. All that matters is delivered performance.