Intel Cannonlake 10 nm delayed, introducing KabyLake

Fjodor2001

Diamond Member
Feb 6, 2010
3,689
166
106
http://news.softpedia.com/news/ntel...-possibly-2017-welcome-kaby-lake-485110.shtml

Original source:
http://benchlife.info/cannon-lake-postpone-and-kaby-lake-will-replace-skylake-in-2016-06232015/
Google translated to English:
https://translate.google.com/transl...-replace-skylake-in-2016-06232015/&edit-text=

Intel's Cannonlake Is Delayed Until Possibly 2017, Welcome "Kaby Lake"

A "tock" that looks more like a "tick" for 10nm chipset CPUs.

Apparently, Cannonlake, the 10nm die shrink of Skylake, will be replaced - read: delayed - by the "Kaby Lake" chip, which stays on the larger node.

Changing its roadmap yet again, Intel has postponed Cannonlake's 10nm production in favor of another 14nm chip. Featuring two or four cores and including a new integrated graphics engine, the "Kaby Lake" will also have a dual-channel memory controller and 256MB of on-package cache to speed up the graphics workloads.

[...]
Stagnating process

Information about the new chip is scarce; beyond the fact that it will be built on the 14nm process, it is unknown whether it will bring a new microarchitecture or support AVX-512 instructions.
[...]
Looking at the available specs, though, it's quite possible that all "Kaby Lake" models except the S series will be another Broadwell, as the die size and supported memory types suggest. Either Intel is betting everything on its 14nm technology, delaying the 10nm advancement for practical and financial reasons in favor of a universal process, or it has not fully developed the 10nm "mini-Skylake" architecture and is delaying it without having to take a revenue hit in case no technical advances materialize.
[Attached slides: kaby-lake.png, kaby-lake-s-series.png, kaby-lake-u-and-y-series.png]


So Kaby Lake is being introduced because Intel's 10 nm Cannonlake is delayed.

But is Kaby Lake a Skylake refresh? Or a Skylake refresh+, meaning that it also contains architectural changes...?

Are we seeing two uArch generations per node shrink now? tick-tock-tock-tick...? :hmm:
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
I have a quick but unfortunately ignorant question. Reading the text "Featuring two or four cores and including a new integrated graphics engine, the "Kaby Lake" will also have a dual-channel memory controller and 256MB of on-package cache to speed up the graphics workloads", a wider issue comes to mind.

Are we going to see high-end Intel processors that do NOT dedicate any die space to graphics? Here's what I'm thinking (and where the ignorance comes in):

What stops Intel from producing a chip where the entire die area is devoted to traditional CPU tasks? All the real estate dedicated to the graphics areas could be used to add more cores. So - just guesswork here - instead of a 4C X-Lake which includes graphics, we would have an 8C, overclockable part with no graphics space on die. I don't see discrete graphics dying any time soon for gamers, and the optimizer in me thinks that for people with discrete graphics, any CPU die space catering for graphics is simply a waste.

Perhaps I'm not that familiar with the lineups any more, but I'm seeing phrasing like the above more and more commonly, as if all new flagship CPUs include die space dedicated to graphics.
 

NTMBK

Lifer
Nov 14, 2011
10,207
4,939
136
What stops Intel from producing a chip where the entire die area is devoted to traditional CPU tasks? All the real estate dedicated to the graphics areas could be used to add more cores. So - just guesswork here - instead of a 4C X-Lake which includes graphics, we would have an 8C, overclockable part with no graphics space on die. I don't see discrete graphics dying any time soon for gamers, and the optimizer in me thinks that for people with discrete graphics, any CPU die space catering for graphics is simply a waste.

That's called a "Xeon". Currently available in Socket LGA 2011.
 

NTMBK

Lifer
Nov 14, 2011
10,207
4,939
136
Interesting that the most unique SKU, the 2x128MB cache mobile part with GT4, is still TBD. Anyone else think this is a "WE'RE SORRY 10nm IS LATE APPLE" SKU?
 
Mar 9, 2013
139
0
76
I have a quick but unfortunately ignorant question. Reading the text "Featuring two or four cores and including a new integrated graphics engine, the "Kaby Lake" will also have a dual-channel memory controller and 256MB of on-package cache to speed up the graphics workloads", a wider issue comes to mind.

Are we going to see high-end Intel processors that do NOT dedicate any die space to graphics? Here's what I'm thinking (and where the ignorance comes in):

What stops Intel from producing a chip where the entire die area is devoted to traditional CPU tasks? All the real estate dedicated to the graphics areas could be used to add more cores. So - just guesswork here - instead of a 4C X-Lake which includes graphics, we would have an 8C, overclockable part with no graphics space on die. I don't see discrete graphics dying any time soon for gamers, and the optimizer in me thinks that for people with discrete graphics, any CPU die space catering for graphics is simply a waste.

Perhaps I'm not that familiar with the lineups any more, but I'm seeing phrasing like the above more and more commonly, as if all new flagship CPUs include die space dedicated to graphics.

Actually, regarding discrete graphics cards: I believe Intel's HD 4600 is now good enough that the Nvidia 750 Ti (a 10k card) is only approximately 80% more powerful. It's also only about 20% more powerful than Intel's Iris chips.

The fact that an HD 4600 (which is already one generation old) is almost equivalent to mid-range graphics cards from both Nvidia and AMD says a lot about these integrated graphics.
And if the GPU companies don't get their acts together, I won't be surprised if they end up consolidated around catering to the high-end market. An AMD R7 250 chip is almost equal to an HD 4600.

Which means that anything below an R9 270X would become irrelevant. And if somebody has an Iris part, then everything below an R9 280X or R9 290X would become irrelevant.

So, looking at the trend, I don't think these GPU companies can even think about surviving in the mid or lower range unless they can double the performance there. In a way, by developing faster integrated GPUs, Intel might soon become equally or more relevant than a discrete GPU.

And I don't think that, except for the server market, Intel would even consider making a CPU with no place for a GPU. Combined computing is the basis of all future technologies. In fact, the CPU part might be crippled or slowed down badly in many scenarios if an integrated GPU is not there on the processor itself.
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
That's called a "Xeon". Currently available in Socket LGA 2011.

Right. But the Xeon family has many more cores, with much lower clock speeds. So in applications that are not highly multithreaded, they lose out to the higher-clocked, fewer-core processors that Intel makes.

What I'm really asking is why there's no middle ground between the Xeon sector and the consumer sector. Should all consumers who want more cores buy a Xeon? Is there no market for non-Xeon processors with more cores that maintain higher clock speeds, without the graphics space dedicated on the die? When you look at the modern product lineups, say for Skylake and so on, it seems ALL of the consumer chips have graphics on the die.

I don't understand why Intel would do this. Why not make a Skylake chip that kept the 4C but dropped the graphics space on the die? I must not understand the technical complexity behind the way these chips are designed. It seems cheaper to me to offer 4C or 6C chips with no die space dedicated to graphics, which is completely wasted on those with dedicated graphics cards in home PCs.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I don't understand why Intel would do this. Why not make a Skylake chip that kept the 4C but dropped the graphics space on the die? I must not understand the technical complexity behind the way these chips are designed. It seems cheaper to me to offer 4C or 6C chips with no die space dedicated to graphics, which is completely wasted on those with dedicated graphics cards in home PCs.

It's not cheaper. Hardly anyone would buy the product you describe. It's simply cheaper to sell you a 4C with an IGP than a 4C without.

A 4C without an IGP would need a new die, new masks, validation, etc. It would take a miracle just to pay that back.

And again, what do you expect to get? Cheaper CPUs? That's not going to happen with such minuscule volume. Also, if you want to overclock, the IGP, disabled or not, is there for your benefit.
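
To put rough numbers on that (purely illustrative figures of my own, not actual Intel costs): a separate GPU-less die carries a fixed cost F for masks, design, and validation that has to be spread over its unit volume N. For example,

\[
\frac{F}{N} = \frac{\$100\,\text{M}}{1\,\text{M units}} = \$100 \text{ per chip}
\]

of surcharge before a single wafer is sold, which is why reusing the one IGP-equipped die (fused off where unwanted) is the cheaper path.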
 

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
I guess either the 10 nm process is delayed because of technical issues, or 14 nm has not been amortized yet and the delay is for financial reasons.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Thread should be titled [Rumor] to comply with Anandtech posting guidelines.
 
Aug 11, 2008
10,451
642
126
Actually, regarding discrete graphics cards: I believe Intel's HD 4600 is now good enough that the Nvidia 750 Ti (a 10k card) is only approximately 80% more powerful. It's also only about 20% more powerful than Intel's Iris chips.

The fact that an HD 4600 (which is already one generation old) is almost equivalent to mid-range graphics cards from both Nvidia and AMD says a lot about these integrated graphics.
And if the GPU companies don't get their acts together, I won't be surprised if they end up consolidated around catering to the high-end market. An AMD R7 250 chip is almost equal to an HD 4600.

Which means that anything below an R9 270X would become irrelevant. And if somebody has an Iris part, then everything below an R9 280X or R9 290X would become irrelevant.

So, looking at the trend, I don't think these GPU companies can even think about surviving in the mid or lower range unless they can double the performance there. In a way, by developing faster integrated GPUs, Intel might soon become equally or more relevant than a discrete GPU.

And I don't think that, except for the server market, Intel would even consider making a CPU with no place for a GPU. Combined computing is the basis of all future technologies. In fact, the CPU part might be crippled or slowed down badly in many scenarios if an integrated GPU is not there on the processor itself.

Not to be picky, but the HD 4600 is the GT2 part with no eDRAM and, I believe, 20 EUs or whatever Intel calls them. The Iris Pro part with 48 units and eDRAM is the Iris Pro 6200. In any case, I think it is quite a bit slower than the 750 Ti, at least in practical gaming applications. It might get close in some artificial benchmarks, but Intel integrated graphics tend to fall off rapidly as resolution or image quality increases.

And Intel still seems to be devoting huge amounts of die space and development resources to stronger iGPUs, and then making them niche products confined to expensive high-end models or now desktops (why????) where the vast majority will use a discrete card. I think it will be a long, long time before integrated graphics replace dGPUs on the desktop. In mobile it *might* happen sooner, but I don't really see it, except for niche models like Apple, until (unless) AMD gets its act together with Zen CPU performance and wide availability of HBM graphics. And even then, by that time we will be on 14nm dGPUs, which will raise the bar, hopefully.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
If Intel uses HBM 1 or 2 on their iGPUs, it will put nVIDIA in real trouble.

Since AMD has finally died in everything, Intel could use the HBM tech to its advantage, making itself stronger overall.

People would even avoid nVIDIA due to cost and power consumption.

In short, if Intel becomes totally aggressive against nVIDIA now, they would have an absolute monopoly on x86.
 

erunion

Senior member
Jan 20, 2013
765
0
0
I guess either the 10 nm process is delayed because of technical issues, or 14 nm has not been amortized yet and the delay is for financial reasons.

The last we heard from Intel was that 10nm was coming in early 2017. That puts it 15-18 months after Skylake.

This seems like a short-life product if it launches 12 months after Skylake (i.e. August '16) and lasts until Cannonlake in the first half of '17.
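
Quick timeline math, taking Skylake as August 2015 per the above (my arithmetic, not anything from Intel):

\[
\text{Aug 2015} + 12\,\text{months} = \text{Aug 2016 (Kaby Lake)}, \qquad \text{Cannonlake in H1 2017} \;\Rightarrow\; \text{a window of roughly } 5\text{-}10 \text{ months}.
\]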
 

Haserath

Senior member
Sep 12, 2010
793
1
81
I have a quick but unfortunately ignorant question. Reading the text "Featuring two or four cores and including a new integrated graphics engine, the "Kaby Lake" will also have a dual-channel memory controller and 256MB of on-package cache to speed up the graphics workloads", a wider issue comes to mind.

Are we going to see high-end Intel processors that do NOT dedicate any die space to graphics? Here's what I'm thinking (and where the ignorance comes in):

What stops Intel from producing a chip where the entire die area is devoted to traditional CPU tasks? All the real estate dedicated to the graphics areas could be used to add more cores. So - just guesswork here - instead of a 4C X-Lake which includes graphics, we would have an 8C, overclockable part with no graphics space on die. I don't see discrete graphics dying any time soon for gamers, and the optimizer in me thinks that for people with discrete graphics, any CPU die space catering for graphics is simply a waste.

They do not want to threaten higher markets with cheap multicore CPUs. Offering more and more cores at mainstream prices would shrink their revenue as those that need the CPU performance just buy cheaper.

Die space is relatively cheap for them, but they have to segment the market. There are HEDT CPUs that don't come with integrated graphics.

Graphics is a way for them to add value without sacrificing their higher market segments.

Actually, regarding discrete graphics cards: I believe Intel's HD 4600 is now good enough that the Nvidia 750 Ti (a 10k card) is only approximately 80% more powerful. It's also only about 20% more powerful than Intel's Iris chips.

The fact that an HD 4600 (which is already one generation old) is almost equivalent to mid-range graphics cards from both Nvidia and AMD says a lot about these integrated graphics.
And if the GPU companies don't get their acts together, I won't be surprised if they end up consolidated around catering to the high-end market. An AMD R7 250 chip is almost equal to an HD 4600.

Broadwell's Iris Pro is still way behind a 750 Ti. Closer to 50% slower than 20% slower.
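
For anyone keeping track of the percentages (my own arithmetic, not from any review): if a 750 Ti is 80% faster than a given iGPU, that iGPU delivers

\[
\frac{1}{1.8} \approx 0.56
\]

of the 750 Ti's performance, i.e. it is about 44% slower; and "50% slower" means the 750 Ti is fully twice as fast.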
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,689
166
106
The last we heard from Intel was that 10nm was coming in early 2017. That puts it 15-18 months after Skylake.

This seems like a short-life product if it launches 12 months after Skylake (i.e. August '16) and lasts until Cannonlake in the first half of '17.

But do we know how far into 2017 Cannonlake and 10 nm are delayed?

E.g. Sweclockers reported this:

https://translate.google.com/transl...r-kan-forsenas-till-slutet-av-2017&edit-text=

"Intel's transition to 10 nanometers can be delayed until the end of 2017 [...] Now, reports the Semiconductor Engineering to 10 nanometers can go the same fate as 14 nanometers. According to sources, Intel have planned purchase equipment for volume production of the technology in March, something that for unknown reasons postponed to December this year. Instead keeps Intel on installing a small production line for testing the manufacturing of 10 nanometer factory D1X in Hillsboro, Oregon.

The movement corresponding to a full three quarters can potentially come to move forward the launch of Cannon Lake, the sequel to Skylake, the second half of 2017."


If it's the latter half of 2017, then KabyLake could have a decent life span.
 
Aug 11, 2008
10,451
642
126

Like I said earlier, I think he meant the Iris Pro 6200, not the HD 4600. I think Iris Pro could in fact be close to or even faster than the R7 250, since your graphs show the 7850K about equal to the 250, and desktop Iris Pro is faster than or equal to Kaveri depending on the game. All three are pretty borderline for 1080p gaming though.
 

stockwiz

Senior member
Sep 8, 2013
403
15
81
We're sorry we're late with 10nm... here are some dual and quad core chips for you in 2017, 10 years after the introduction of the Q6600, their first mainstream quad core chip. What better way to upgrade a 10 year old quad core system than with... another quad core. Thanks, Intel. :)

(I understand why they are doing this... it has already been mentioned here)
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Like I said earlier, I think he meant the Iris Pro 6200, not the HD 4600. I think Iris Pro could in fact be close to or even faster than the R7 250, since your graphs show the 7850K about equal to the 250, and desktop Iris Pro is faster than or equal to Kaveri depending on the game. All three are pretty borderline for 1080p gaming though.

Iris Pro 6200 was barely beating the R7 240 DDR3. I highly doubt it's "close", never mind faster than, the R7 250.

It's amazing that with 14nm and eDRAM and all, what we got is a 20% gain over competition that has none of that and 1/10th the budget.*

*What it really tells me is that we are at a plateau of gains, and we need all that "fantastic" technology to get the meagre gains we are getting. If we use the tree analogy, they are now using 15m ladders to get the few fruits that are left.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,689
166
106
Iris Pro 6200 was barely beating the R7 240 DDR3. I highly doubt it's "close", never mind faster than, the R7 250.

It's amazing that with 14nm and eDRAM and all, what we got is a 20% gain over competition that has none of that and 1/10th the budget.*

*What it really tells me is that we are at a plateau of gains, and we need all that "fantastic" technology to get the meagre gains we are getting. If we use the tree analogy, they are now using 15m ladders to get the few fruits that are left.

So what's the bottleneck here? Intel's memory bandwidth or the GPU core design? Clearly something is not right if they have to resort to such expensive solutions for relatively small performance gains.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
So what's the bottleneck here? Intel's memory bandwidth or the GPU core design? Clearly something is not right if they have to resort to such expensive solutions for relatively small performance gains.

It's certainly not bandwidth, because Crystalwell allowed 25-70% gains, plus they said the current eDRAM is worth 100-130GB/s of GDDR5. I initially thought we'd see better gains over Haswell on the GT3e parts, since GT3 Broadwell seems bottlenecked by bandwidth. But just like the non-eDRAM parts, Broadwell GT3e is 20% faster than Haswell GT3e. So the conclusion is that the current eDRAM provides enough bandwidth.
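
As a rough sanity check on those bandwidth figures (back-of-envelope numbers of my own, not from Intel's slides): dual-channel DDR3-1600 gives

\[
2 \times 64\,\text{bit} \times 1600\,\text{MT/s} / 8 \approx 25.6\,\text{GB/s},
\]

while Crystalwell's eDRAM is usually quoted at around 50 GB/s in each direction, roughly 100 GB/s aggregate, which is indeed in the same ballpark as the memory bandwidth of a mid-range GDDR5 card.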

GPU core design? Probably. Looking at Fury versus its Nvidia competition, they use radically different approaches to end up at the same thing: same performance, same price. AMD makes up for what it lacks architecturally with brand-new memory, while Nvidia makes up for the lack of HBM with a good architecture and loads of caches on chip.

It seems funny that we can't get "brains" and "brawn" combined. You get one or the other.
 

erunion

Senior member
Jan 20, 2013
765
0
0
But do we know how far into 2017 Cannonlake and 10 nm are delayed?

E.g. Sweclockers reported this:

https://translate.google.com/translate?sl=sv&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fwww.sweclockers.com%2Fnyhet%2F20343-intels-overgang-till-10-nanometer-kan-forsenas-till-slutet-av-2017&edit-text=

"Intel's transition to 10 nanometers can be delayed until the end of 2017 [...] Now, reports the Semiconductor Engineering to 10 nanometers can go the same fate as 14 nanometers. According to sources, Intel have planned purchase equipment for volume production of the technology in March, something that for unknown reasons postponed to December this year. Instead keeps Intel on installing a small production line for testing the manufacturing of 10 nanometer factory D1X in Hillsboro, Oregon.

The movement corresponding to a full three quarters can potentially come to move forward the launch of Cannon Lake, the sequel to Skylake, the second half of 2017."

If it's the latter half of 2017, then KabyLake could have a decent life span.

As I just said, the last thing we heard from Intel was early 2017.

Your Sweclockers quote is just flimsy speculation.

I understand that you think Kaby Lake (aka Skylake 2.0) is evidence that Cannonlake is delayed, but that's not necessarily the case. As I already said, even an early 2017 launch puts it 15-18+ months after Skylake. It's easy to see why Intel would want something to plug that gap.
 