Intel Skylake / Kaby Lake


Maxima1

Diamond Member
Jan 15, 2013
Intel adds two cores and you are moaning.
lmao. When MediaTek or anyone else adds more cores, it's a sham, but when it's Intel, it's good. This addition could well be the only noteworthy thing about CFL. That's the problem: it's a tacked-on addition to 14nm, and we already know a big slowdown is coming. It's far worse than the predictions we've seen here in prior years. On top of that, the extra cores are only at the top (i7). The rest will probably keep the same configurations to entice people toward the higher-priced CPU (which will basically be last gen + moar cores, and apparently still no free boost from eDRAM).
 
Mar 10, 2006
lmao. When MediaTek or anyone else adds more cores, it's a sham, but when it's Intel, it's good. This addition could well be the only noteworthy thing about CFL. That's the problem: it's a tacked-on addition to 14nm, and we already know a big slowdown is coming. It's far worse than the predictions we've seen here in prior years. On top of that, the extra cores are only at the top (i7). The rest will probably keep the same configurations to entice people toward the higher-priced CPU (which will basically be last gen + moar cores, and apparently still no free boost from eDRAM).
Are you serious?

More high-performance cores in a desktop or high-performance notebook can actually provide some benefit (probably more on the desktop than the notebook...). A bazillion A53s plus a few good A72s in a 2-4W power envelope, all used simultaneously to try to win multithreaded benchmarks in a slim phone, is a dumb idea.
 

Maxima1

Diamond Member
Jan 15, 2013
Are you serious?

More high-performance cores in a desktop or high-performance notebook can actually provide some benefit (probably more on the desktop than the notebook...). A bazillion A53s plus a few good A72s in a 2-4W power envelope, all used simultaneously to try to win multithreaded benchmarks in a slim phone, is a dumb idea.
Yes, I'm serious. We already got Kaby Lake. We don't need to lose yet another cycle just to tack on more cores at the top. Heck, I could have bought Sandy Bridge and ridden it out to Ice Lake or even further at this glacial pace. lmao

As for notebooks, lmao. It's the same dilemma you just pointed out with phones. The problem with notebooks isn't the CPU; it's the GPU going obsolete fast. The OEMs will love the core increase, though.
 
Mar 10, 2006
Yes, I'm serious. We already got Kaby Lake. We don't need to lose yet another cycle just to tack on more cores at the top. Heck, I could have bought Sandy Bridge and ridden it out to Ice Lake or even further at this glacial pace. lmao

As for notebooks, lmao. It's the same dilemma you just pointed out with phones. The problem with notebooks isn't the CPU; it's the GPU going obsolete fast. The OEMs will love the core increase, though.
Then I guess buy a product from an alternative supplier?
 

coercitiv

Diamond Member
Jan 24, 2014
A bazillion A53s plus a few good A72s in a 2-4W power envelope, all used simultaneously to try to win multithreaded benchmarks in a slim phone, is a dumb idea.
You may want to take some time and read this AnandTech analysis; Andrei did a fine job showing what big.LITTLE SoCs can do in a phone. For those who don't have the time, here's a table with run-queue depth averages under various workloads. Browsing, the camera app, and application updates show very good scaling across many cores.



We've already had one myth shattered with Apple opting for little cores in the new A10; maybe it's time to treat this subject with the attention it deserves.
 
Mar 10, 2006
You may want to take some time and read this AnandTech analysis; Andrei did a fine job showing what big.LITTLE SoCs can do in a phone. For those who don't have the time, here's a table with run-queue depth averages under various workloads. Browsing, the camera app, and application updates show very good scaling across many cores.



We've already had one myth shattered with Apple opting for little cores in the new A10; maybe it's time to treat this subject with the attention it deserves.
A10 isn't using all cores simultaneously.
 

DrMrLordX

Lifer
Apr 27, 2000
The most likely buyer of AMD is Mubadala Technology, which currently owns 17.8% of AMD. This would allow the UAE Space Agency to make proprietary rad-hard x86-64 processors not sourced from Intel.
Yeah, I know (well, I didn't figure on the UAE Space Agency part, but whatever). It was more fun to post pics of . . . Sammy!

Though now that you mention it . . . maybe if this guy lived in the UAE . . . yeah, I can totally see it now!
 

coercitiv

Diamond Member
Jan 24, 2014
A10 isn't using all cores simultaneously.
You claimed "moar" cores in phones are just a marketing trick to win benchmarks; I provided you with a very well-documented article showing how Android phones can make good use of many cores. So how exactly is a bazillion A53s + a few good A72s a dumb idea when Chrome can use more than 4 cores while loading a webpage?


This is not a subject we can debate on gut instinct + the Apple example; we really need to look at the data before we apply the "dumb" label, especially when the whole idea behind the bazillion-cores concept is to use the most efficient combination of cores available within a given power envelope.

Which brings us back to Intel and more high-performance cores: here I fully agree with you, and what's more, in my opinion notebooks can also see significant benefits even from 15W quad cores. Once 2 cores can run at around 3.5 GHz in a CPU-intensive task such as a Cinebench run, the same power envelope can drive 4 cores at 2.2 GHz+ with a sizeable throughput gain. Moreover, tasks like browsing in a multithreaded browser can end up using less power on the quad-core CPU.
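A back-of-the-envelope sketch of that trade-off. It assumes per-core dynamic power scales roughly with f³ across the DVFS range (voltage scaling approximately linearly with frequency) and perfect multithreaded scaling; both are illustrative simplifications, not measurements of any real chip:

```python
# Rough model: per-core dynamic power ~ f^3 (P ~ f * V^2, with V ~ f
# across the DVFS range). All numbers are illustrative, not measured.

def quad_freq_same_power(dual_freq_ghz: float = 3.5) -> float:
    """Frequency at which 4 cores draw the same modelled power as 2 cores at dual_freq_ghz."""
    dual_power = 2 * dual_freq_ghz ** 3      # arbitrary power units
    return (dual_power / 4) ** (1 / 3)       # solve 4 * f^3 = dual_power

f4 = quad_freq_same_power()                  # ~2.78 GHz
gain = (4 * f4) / (2 * 3.5)                  # throughput ratio vs. 2 x 3.5 GHz
print(f"4 cores at {f4:.2f} GHz deliver ~{gain:.2f}x the dual-core throughput")
```

Under this toy model the quad stays within the same power budget at roughly 2.8 GHz while delivering around 1.6x the throughput, which is consistent with the "4 cores at 2.2 GHz+" claim above being conservative.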

When I first bought my Haswell notebook (i7-4510U) I was very curious to see what emulating a low-power CPU would feel like using configurable TDP, so I enforced a hard limit of 7.5W on the CPU and used Firefox and Chrome to load a few pages. It was a very enlightening experience: the (mostly) single-threaded Firefox forced the CPU to the maximum frequency attainable within 7.5W and felt choppy on occasion while loading and scrolling, while Chrome's multithreaded load kept the CPU at low frequencies and felt significantly smoother.
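For anyone curious to try a similar experiment on Linux, the kernel's powercap interface exposes Intel RAPL package limits via sysfs. A minimal sketch, with the caveats that the sysfs path can vary by machine, writing the limit requires root, and the 7.5 W figure is just the value from the anecdote above:

```python
# Sketch: cap CPU package power via Intel RAPL (Linux powercap sysfs).
# Assumes /sys/class/powercap/intel-rapl:0 exists; run as root to apply.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")

def watts_to_microwatts(watts: float) -> int:
    """The sysfs files take the limit in microwatts."""
    return int(watts * 1_000_000)

def set_package_limit(watts: float, apply: bool = False) -> int:
    """Return the microwatt value; optionally write it to constraint 0 (long-term limit)."""
    uw = watts_to_microwatts(watts)
    limit_file = RAPL / "constraint_0_power_limit_uw"
    if apply and limit_file.exists():
        limit_file.write_text(str(uw))
    return uw

print(set_package_limit(7.5))  # 7500000 uW; pass apply=True (as root) to enforce
```

With `apply=False` (the default) this only computes the value, so it is safe to run anywhere; the actual write is the part that reproduces the hard TDP cap.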

Moar cores does not automatically equal better, but it certainly has the potential to be if done right.
 

Sweepr

Diamond Member
May 12, 2006
Skylake-S Binning Results



http://forum.hwbot.org/showpost.php?p=461205&postcount=458


NotebookCheck tests the Core i5-7200U (Kaby Lake-U)


Even the smaller Core i5-7200U (2.5 GHz base clock, 3.1 GHz Turbo for one or both cores) leaves an excellent impression in our benchmarks. The performance is consistently just above the former high-end model Core i7-6500U and only slightly behind the i7-6600U. The maximum turbo boost is kept stable in all single- and multi-core tests.
www.notebookcheck.com/Kaby-Lake-Core-i7-7500U-im-Test-Skylake-auf-Steroiden.172422.0.html
 

Maxima1

Diamond Member
Jan 15, 2013
Which brings us back to Intel and more high-performance cores: here I fully agree with you, and what's more, in my opinion notebooks can also see significant benefits even from 15W quad cores.
How? We're not talking about Atom here. Why would you agree with everything, btw? Where will a lot of the 6C parts end up? In gaming notebooks... Guess what? That's largely useless, because you can't upgrade the GPU. It's nice for the OEMs who will sell you mismatched CPUs and GPUs, though. Besides, I can't imagine many users needing a 6C with a GT2 for anything outside of games. A lot of people see "i7" and think "better" even when it's not relevant to their needs.

Once 2 cores can run at around 3.5 GHz in a CPU-intensive task such as a Cinebench run, the same power envelope can drive 4 cores at 2.2 GHz+ with a sizeable throughput gain. Moreover, tasks like browsing in a multithreaded browser can end up using less power on the quad-core CPU.
As long as you don't get some generic craptop with an integrated battery of 40 Wh or less, it's not going to matter much.
 

liahos1

Senior member
Aug 28, 2013
Kaby Lake looks like a surprisingly good iteration over Skylake. Is the Cannonlake desktop going to be a 10nm version of Kaby Lake? Is there going to be a Kaby Lake-X with more than 4 cores? I have a 5820K now but need to bring that computer to my office for my business. I have an old Ivy Bridge i5 doing the deed there now; it isn't fast enough for my needs, but I can afford to wait a little.

For the enthusiast desktop 6-8 core lines, what is the upgrade path?

Haswell E --> Broadwell E --> Skylake X (many core) or Kabylake X (4 core high clocks) -- > ???

Also, why is Intel releasing both a Kaby Lake-X and a Skylake-X at the same time? Why don't they just skip Skylake-X and do a >4-core Kaby Lake on 14nm+?
 

jpiniero

Diamond Member
Oct 1, 2010
There isn't going to be a mainstream Cannonlake desktop. The mainstream desktop might get 14nm+ Coffee Lake, but that wouldn't be until 2018.

Haswell E --> Broadwell E --> Skylake X (many core) or Kabylake X (4 core high clocks) -- > ???
Cannonlake-X.
 
Mar 10, 2006
There isn't going to be a mainstream Cannonlake desktop. The mainstream desktop might get 14nm+ Coffee Lake, but that wouldn't be until 2018.



Cannonlake-X.
Coffee Lake is Cannon Lake-H/S implemented on 14nm+. So it is in effect a mainstream Cannon Lake desktop, just not on 10nm.
 

jpiniero

Diamond Member
Oct 1, 2010
Coffee Lake is Cannon Lake-H/S implemented on 14nm+. So it is in effect a mainstream Cannon Lake desktop, just not on 10nm.
I don't believe there have been any leaks about what Coffee Lake entails. Calling it Cannon Lake on 14nm+ is premature at this point.
 

liahos1

Senior member
Aug 28, 2013
I feel like every year that goes by, Intel's roadmap gets more confusing. What architecture will Intel's first 10nm desktop product be based on? And is that a 2017 or 2018 product?

Also - if Intel is making a mainstream hex-core, does the bottom end of their enthusiast line move up to 8 cores?

If the answer is no, can anyone posit what the difference will be between a mainstream and an enthusiast hex-core?

Finally - what still doesn't make sense to me is why Intel is releasing a juiced-up Kaby Lake-X quad-core while also releasing Skylake-X.
 

jpiniero

Diamond Member
Oct 1, 2010
I feel like every year that goes by, Intel's roadmap gets more confusing. What architecture will Intel's first 10nm desktop product be based on? And is that a 2017 or 2018 product?
At this point it could even be Cannonlake-X. Either way it won't be until 2018.

Also - if Intel is making a mainstream hex-core, does the bottom end of their enthusiast line move up to 8 cores?
It's possible, but it's not happening in 2017.

Finally - what still doesn't make sense to me is why Intel is releasing a juiced-up Kaby Lake-X quad-core while also releasing Skylake-X.
Intel wants enthusiasts to buy the HEDT platform. The desktop socket may be going away at some point soon, so the sooner Intel can get them off the mainstream platform and onto HEDT, the better.
 

VirtualLarry

Lifer
Aug 25, 2001
Intel wants enthusiasts to buy the HEDT platform. The desktop socket may be going away at some point soon, so the sooner Intel can get them off the mainstream platform and onto HEDT, the better.
If that's true, will Celeron, Pentium, and perhaps even i3 be going away? Or at least the dual-core varieties as we know them today? Will there be HEDT Celerons or Pentiums? Or just i7s? (If they kill off the mainstream socket, I have to imagine there will be non-Hyper-Threaded i5 CPUs on HEDT too.)

Or will Celeron and Pentium big-core CPUs continue to exist, but in BGA-only form, in smaller motherboard form factors, primarily for OEM use?

Personally, I wouldn't mind if the Celeron and Pentium big cores became BGA-only, if that meant the eventual death of the Atom small cores in the same form factors. I'm not talking about the "U" models either; I mean full-strength Celeron and Pentium. Not sure what sort of heatsink they might use, though, on a BGA chip that isn't a "U" model.
 
Aug 11, 2008
Let's just say that I am really confident in my assertion. That's all I can say for now...
But shouldn't Cannonlake be the "tick" from Skylake, i.e. a die shrink? How can you have a "tick" on the same process node? Are you saying it will be a new architecture?

Honestly, Intel's lineup is so confusing right now that I could see someone buying Zen (if it's decent) simply because it doesn't come with so many choices, all gimped in some way to segment the market.
 

Enigmoid

Platinum Member
Sep 27, 2012
You claimed "moar" cores in phones are just a marketing trick to win benchmarks; I provided you with a very well-documented article showing how Android phones can make good use of many cores. So how exactly is a bazillion A53s + a few good A72s a dumb idea when Chrome can use more than 4 cores while loading a webpage?
IMO AnandTech measured completely the wrong thing in that article (or else failed to make the additional measurements needed to confirm their hypothesis). Simply put, it doesn't matter what power states the CPU is in or which cores are online - at a high level, all that matters is performance (and consistency) and power consumption.

IT DOESN'T MATTER HOW MANY CORES ARE USED IF THE ADDITIONAL CORES FAIL TO PROVIDE ANY REAL (WORLD) GAIN.

AT never tested, in any way, shape, or form, whether there were any advantages or disadvantages to running the tested tasks across more cores.
 

coercitiv

Diamond Member
Jan 24, 2014
How? We're not talking about Atom here. Why would you agree with everything, btw? Where will a lot of the 6C parts end up? In gaming notebooks... Guess what? That's largely useless, because you can't upgrade the GPU. It's nice for the OEMs who will sell you mismatched CPUs and GPUs, though. Besides, I can't imagine many users needing a 6C with a GT2 for anything outside of games. A lot of people see "i7" and think "better" even when it's not relevant to their needs.
Upcoming games will have far better MT support thanks to DX12/Vulkan, making very good use of all available cores and thus lowering CPU power consumption (more cores at lower frequency), leaving more TDP budget for the GPU. More GPU TDP -> more performance in the same form factor.

As for the GPU upgrade problem in notebooks, it's a weak argument: notebooks come with their own set of compromises; they always have. Whether you think a gaming notebook is worth the investment or not is not the topic.

The only situation in which I would tend to agree with you would be Intel creating a new price ceiling for the 6C CPUs, thus keeping the rest of the product stack at prices similar to today's.
IMO AnandTech measured completely the wrong thing in that article (or else failed to make the additional measurements needed to confirm their hypothesis). Simply put, it doesn't matter what power states the CPU is in or which cores are online - at a high level, all that matters is performance (and consistency) and power consumption.

IT DOESN'T MATTER HOW MANY CORES ARE USED IF THE ADDITIONAL CORES FAIL TO PROVIDE ANY REAL (WORLD) GAIN.

AT never tested, in any way, shape, or form, whether there were any advantages or disadvantages to running the tested tasks across more cores.
For the first part, AnandTech recorded and correlated the following data: power state distribution, frequency distribution, and run-queue depth - which is an exact representation of the number of threads running through the system. Not only did AnandTech measure the right thing, they did so in a comprehensive manner, giving us a detailed representation of how a big.LITTLE SoC behaves under Android. Please take your time and read through the article; it really is worth the read.

For the second part, you seem to imply that optimised multithreaded software running on many cores needs proof of efficiency (perf, power, or both). No offense, but claiming that, in a scenario where the browser can use up to 6-8 threads, using the additional cluster of power-optimised cores does not yield efficiency gains is a bit much. You're entitled to your opinion, though; maybe we'll get the chance to compare results in a test with the little cores disabled, since that's the only way to maintain data consistency (keep the software and CPU arch/process the same).
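As an aside, the run-queue depth metric discussed above can be eyeballed on any Linux box: the fourth field of /proc/loadavg reports currently runnable vs. total scheduling entities (see proc(5)). This is a much coarser snapshot than the kernel scheduler traces used in the article, but it illustrates the quantity being measured:

```python
# Sketch: read the instantaneous runnable-thread count from /proc/loadavg.
# The fourth field has the form "runnable/total" (see proc(5)).

def parse_runnable(loadavg_line: str) -> int:
    """Return the number of currently runnable scheduling entities."""
    runnable, _total = loadavg_line.split()[3].split("/")
    return int(runnable)

# On a live system:
#   with open("/proc/loadavg") as f:
#       print(parse_runnable(f.read()))
sample = "0.42 0.37 0.30 2/1135 24681"
print(parse_runnable(sample))  # -> 2
```

Sampling this value while loading a page in a multithreaded browser gives a crude sense of how many cores could be doing useful work at once.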
 
