
Intel Skylake / Kaby Lake


witeken

Diamond Member
Dec 25, 2013
3,876
152
106
May I ask a question about wafer cost?

So I know that wafer cost is not literally the price of the raw piece of silicon. It also includes things like chemicals that are used. So my question is how far does this go?

For instance if you look at capex: https://g.foolcdn.com/editorial/images/164894/capex_large.png.

These capacity costs are also taken into account, right? So what I think is that Intel has some numbers about depreciation of their tools, and in that way the tool costs (and mask costs) (like a €60M ASML lithography machine) are also taken into account, depending on how much they are used. So a later node becomes more expensive because it has, for instance, more lithography steps, so each wafer takes a higher depreciation from the tools. (So if you actually were to try to follow a wafer through its three-month journey in the fab, you would not find literally $5000 worth of stuff that was spent on the wafer, like the raw silicon wafer, the materials, the chemicals, the electricity bill, or would you? So the wafer cost is more like a derived number from a calculation that includes everything, rather than a tally of the cost of the products you use?)

Am I wrong? Too bad Idontcare and TuxDave have disappeared.
 

jpiniero

Diamond Member
Oct 1, 2010
7,998
1,227
126
Iris is fine, but Iris Pro saw very poor adoption. Pretty much the Apple Edition, but Apple is now using dGPUs for the rMBP 15".
Even regular Iris doesn't really sell much outside of Apple. Apple, by the way, has been very quiet on the Mac front; they are expected to release a new Skylake rMBP in October, but the quietness has led to the ARM chatter getting louder. If Apple is dumping Intel sooner rather than later, then ending Iris Pro would for sure make sense. They can always revive it later if forced to by AMD.
 

Maxima1

Diamond Member
Jan 15, 2013
3,134
575
126
Intel ULV/ULT CPUs (15-18W TDP) Comparison in Cinebench 11.5
Next in line: 2017 2C/4T Cannon Lake, 4C/8T Kaby Lake and 2018 4C/8T Coffee Lake.
No, it's 2C/4T Kaby Lake and 2C/4T Cannon Lake. If the OEMs even use a Coffee Lake QC U processor, it's going to be segmented and tiered at a higher price point, i.e. premium.
 

Sweepr

Diamond Member
May 12, 2006
5,151
1,125
131
No, it's 2C/4T Kaby Lake and 2C/4T Cannon Lake. If the OEMs even use a Coffee Lake QC U processor, it's going to be segmented and tiered at a higher price point, i.e. premium.
This is a technical comparison between the best Intel has/had to offer in that TDP range, and we already know quad-cores will hit the U lineup in 2017/2018. Once Kaby Lake & Coffee Lake 4C/8T options at 15W arrive, the likes of Microsoft, Apple, Dell, etc. will use them, or at least offer these chips in their ultraportables.

 

Nothingness

Platinum Member
Jul 3, 2013
2,153
397
126
Intel ULV/ULT CPUs (15-18W TDP) Comparison in Cinebench 11.5

Core i7-620UM (32nm Arrandale, 2C/4T @ 1.06 GHz - 2010)
- Single-Core: ?
- Multi-Core: 1.10

Core i7-2617M (32nm Sandy Bridge, 2C/4T @ 1.5 GHz - 2011)
- Single-Core: ?
- Multi-Core: 2.11

Core i7-3517U (22nm Ivy Bridge, 2C/4T @ 1.9 GHz - 2012)
- Single-Core: 1.2
- Multi-Core: 2.8

Core i7-4500U (22nm Haswell, 2C/4T @ 1.8 GHz - 2013)
- Single-Core: 1.3
- Multi-Core: 2.85

Core i7-5500U (14nm Broadwell, 2C/4T @ 2.4 GHz - 2015)
- Single-Core: 1.4
- Multi-Core: 3.2

Core i7-6500U (14nm Skylake, 2C/4T @ 2.5 GHz - 2015)
- Single-Core: 1.5
- Multi-Core: 3.5

Core i7-7500U (14nm Kaby Lake, 2C/4T @ 2.7 GHz - 2016)
- Single-Core: 1.6-1.68
- Multi-Core: 4.03

Median scores from NotebookCheck.
Next in line: 2017 2C/4T Cannon Lake, 4C/8T Kaby Lake and 2018 4C/8T Coffee Lake.
30% better efficiency for ST and 50% for MT in 4 years. Good but not impressive.
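For anyone who wants to check the arithmetic, here's a quick sketch using the quoted medians (treating TDP as roughly constant across generations, so score gain approximates efficiency gain; the 2012 Ivy Bridge part is the 4-year baseline):

```python
# CB11.5 medians quoted above: chip -> (year, single-core, multi-core).
# Entries with unknown single-core scores are omitted.
scores = {
    "i7-3517U": (2012, 1.2, 2.8),
    "i7-4500U": (2013, 1.3, 2.85),
    "i7-5500U": (2015, 1.4, 3.2),
    "i7-6500U": (2015, 1.5, 3.5),
    "i7-7500U": (2016, 1.6, 4.03),
}

first, last = scores["i7-3517U"], scores["i7-7500U"]
st_gain = last[1] / first[1] - 1  # ~0.33, i.e. ~30% single-core
mt_gain = last[2] / first[2] - 1  # ~0.44, i.e. approaching 50% multi-core
```

(Using the 7500U's 1.68 upper bound instead of 1.6 would put the ST gain at 40%.)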
 
Mar 10, 2006
11,719
2,002
126
May I ask a question about wafer cost?

So I know that wafer cost is not literally the price of the raw piece of silicon. It also includes things like chemicals that are used. So my question is how far does this go?

For instance if you look at capex: https://g.foolcdn.com/editorial/images/164894/capex_large.png.

These capacity costs are also taken into account, right? So what I think is that Intel has some numbers about depreciation of their tools, and in that way the tool costs (and mask costs) (like a €60M ASML lithography machine) are also taken into account, depending on how much they are used. So a later node becomes more expensive because it has, for instance, more lithography steps, so each wafer takes a higher depreciation from the tools. (So if you actually were to try to follow a wafer through its three-month journey in the fab, you would not find literally $5000 worth of stuff that was spent on the wafer, like the raw silicon wafer, the materials, the chemicals, the electricity bill, or would you? So the wafer cost is more like a derived number from a calculation that includes everything, rather than a tally of the cost of the products you use?)

Am I wrong? Too bad Idontcare and TuxDave have disappeared.
Let me explain it...

So with wafer cost you have two big pieces to cost. The first is the literal cost to produce that wafer: the silicon wafer, chemicals, the electric bill to keep the lights on, the salaries of the people you're paying to actually run these factories, and so on. But the other big piece is pretty much depreciation cost.

Depreciation is an accounting concept designed to spread the cost of capital equipment over some amount of time rather than all at once (aka a depreciation schedule).

The money is already spent, so there is no impact on cash flow, but that money needs to flow through to the products that get sold. So Intel takes a quarterly depreciation "charge", and its impact on wafer cost is roughly Total Depreciation / wafers shipped.

One reason 14nm and 10nm are more costly than, say, 22nm or 32nm is the addition of double/quad patterning and other process steps. The more steps required to process a wafer, the longer it takes to go from start to finish, and this extra complexity definitely increases the time to produce a wafer (known as cycle time). And since depreciation happens on a fixed schedule, the capital equipment costs are spread across fewer wafers in a given amount of time, raising the effective wafer cost.
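To make the two pieces concrete, here's a toy sketch; all the dollar figures and wafer counts are invented for illustration, since the real numbers aren't public:

```python
# Toy model: effective wafer cost = variable cost + allocated depreciation.
# Every number here is invented for illustration.

def effective_wafer_cost(variable_cost, quarterly_depreciation, wafers_shipped):
    """variable_cost: silicon, chemicals, power, salaries per wafer;
    the quarterly depreciation charge is spread over wafers shipped."""
    return variable_cost + quarterly_depreciation / wafers_shipped

# Same depreciation charge, but the newer node's longer cycle time and
# extra steps mean fewer wafer outs per quarter -> higher cost per wafer.
mature_node = effective_wafer_cost(1500, 1.5e9, 1_000_000)  # $3000
newer_node  = effective_wafer_cost(1800, 1.5e9, 700_000)    # ~$3943
```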
 

witeken

Diamond Member
Dec 25, 2013
3,876
152
106
Let me explain it...

So with wafer cost you have two big pieces to cost. The first is the literal cost to produce that wafer: the silicon wafer, chemicals, the electric bill to keep the lights on, the salaries of the people you're paying to actually run these factories, and so on. But the other big piece is pretty much depreciation cost.

Depreciation is an accounting concept designed to spread the cost of capital equipment over some amount of time rather than all at once (aka a depreciation schedule).

The money is already spent, so there is no impact on cash flow, but that money needs to flow through to the products that get sold. So Intel takes a quarterly depreciation "charge", and its impact on wafer cost is roughly Total Depreciation / wafers shipped.

One reason 14nm and 10nm are more costly than, say, 22nm or 32nm is the addition of double/quad patterning and other process steps. The more steps required to process a wafer, the longer it takes to go from start to finish, and this extra complexity definitely increases the time to produce a wafer (known as cycle time). And since depreciation happens on a fixed schedule, the capital equipment costs are spread across fewer wafers in a given amount of time, raising the effective wafer cost.
Cool, nice to see that I got the gist of it, but this is much clearer. Especially the depreciation. You've earned a like.
 

jpiniero

Diamond Member
Oct 1, 2010
7,998
1,227
126
No, it's 2C/4T Kaby Lake and 2C/4T Cannon Lake. If the OEMs even use a Coffee Lake QC U processor, it's going to be segmented and tiered at a higher price point, i.e. premium.
I guess we will have to see what Intel charges for it, but I imagine it'd be about the same as the 2+3e models (ie: RCP of $20 more than the 2+2 U). Intel actually charges less (well the RCP) for the 4+2 H than the 2+3e U but the former doesn't include the PCH.
 

Maxima1

Diamond Member
Jan 15, 2013
3,134
575
126
I guess we will have to see what Intel charges for it, but I imagine it'd be about the same as the 2+3e models (ie: RCP of $20 more than the 2+2 U). Intel actually charges less (well the RCP) for the 4+2 H than the 2+3e U but the former doesn't include the PCH.
It doesn't matter what Intel puts on their site. Why would I think Dell, etc. would play a different game than they do with dGPUs? Dell, Lenovo, etc. will sell really crappy dGPUs for inflated prices. Sometimes there are special models (e.g. when Dell had 8850m), but that's not the norm.
 

witeken

Diamond Member
Dec 25, 2013
3,876
152
106
@Arachnotronic

[TLDR: I found a little gem in a slide from 2015 Investor meeting.]

So I've been looking back to the presentation about SKL-Y vs. A9 from last year, and I noticed something interesting.

http://intelstudios.edgesuite.net/im/2015/archive/wh/archive.html
http://intelstudios.edgesuite.net/im/2015/pdf/2015_InvestorMeeting_Bill_Holt_WEB2.pdf

If you compare slides 17 and 21 (slide 21 is a 2015 update of the slide 17 that they copied from 2013), you will see that

1) At 14nm, Samsung does a bit worse than Intel predicted, so Intel's advantage at 14nm is ever so slightly bigger than they predicted in 2013. (So this talk about 15% higher density was BS, unless A10 uses 16FFC and maybe that does have an increased density.) [Edit: That is indeed the case, 16FFC has improved the density of the Apple A10, but TSMC compares 10nm scaling to their 16nm+, so that doesn't matter for the predictions.]

2) For 10nm, Intel predicted that the foundries would be hardly any denser than Intel's 14nm, but their updated version of the graph shows a bigger, quite noticeable gap.

3) However, you don't have to be a rocket scientist to see that in the 2013 projection, Intel's forecast of their own 10nm shows the graph becoming less steep going from 14->10 than from 22->14. In the 2015 slide, however, you can clearly see that the line remains just as steep, if not ever so slightly steeper. (We can only guess whether this shallower 10nm slope in 2013 was there (1) to not give away too much about how much they would shrink at 10nm, or (2) because since November '13 they have made their 10nm node scale more aggressively, which is also plausible: once they were planning to go to a 3-year cycle, they would have seen that they had more time to develop 10nm and could use it to increase density to remain competitive with TSMC's and Samsung's aggressive node plans.)

On my monitor, I can confirm that the line from 22->14 on the 2013 slide drops 22mm, while on the 2015 slide it drops 21mm. The 14->10 line drops vertically 18mm on the 2013 slide and 22mm on the 2015 slide. So this quantitatively confirms my suspicion. BTW, the 32->22 line drops 13mm on both slides.

For completeness' sake, the Samsung line.
2013: 12mm, 5mm and 12mm, for a total of 29mm
2015: 12mm, 3mm and 15mm, for a total of 30mm (So 14/16nm performed worse than anticipated, but this does not impact the forecast (which indeed still remains a forecast, since it is based on numbers while the rest of the 2015 graph is based on actual products) of 10nm; on the contrary, the projection has slightly improved.)

And here Intel again in the same format.
2013: 13mm, 22mm and 18mm, for a total of 53mm
2015: 13mm, 21mm and 22mm, for a total of 56mm

Now that I've gone this far into the slide analysis rabbit hole, let's not forget the timing.

Samsung/TSMC, in terms of mm down on the slide.
2012: 0mm (January, AMD HD7xxx series IIRC)
2017: 30mm, for a total of 6mm per year

Same method for Intel.
2010: 0mm (January, Westmere)
2018: 56mm (Cannon Lake; I round up a bit to calculate in whole years), for a total of 7mm per year

So there you have it, guys. Contrary to all the talk, Intel's lead has been growing at 1mm per year in arbitrary units. Or more precisely, Intel will have been shrinking 17% more per year, although the comparison is a bit off because Intel's nodes span 8 years while the slide only provides information about Samsung/TSMC for 5 years.

4) This has already been covered by the previous point, but I will say it here again. Even though TSMC and Samsung's 10nm will be reasonably denser than Intel's 14nm, more than previously projected, by 7mm instead of 4mm (So 7mm corresponds to a 1.4x advantage, although I guess you can't simply say that it stacks because then 21mm would be 1.4^3 or 2.7x), Intel's absolute lead at 10nm will grow thanks to the more aggressive shrinking at 10nm than they previously forecasted: at 14nm Intel is ~10mm ahead of TSMC/Samsung (same as 2013 forecast), while at 10nm Intel will be 15mm ahead (instead of 14mm in 2013 forecast). So Intel will grow its lead from 10mm to 15mm, even though this time around TSMC and Samsung might have slightly earlier time to market than Intel 14nm vs 20nm, although it remains to be seen if for instance the Galaxy S8 or other Android phones early 2017 will be 10nm, or if it's just the Apple A10X.

In short, if the 2015 graph is accurate, Intel will lead by about 15mm at 10nm, which is more than for instance the 32->22nm node shrink.
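Redoing the measurements above in a quick script (the mm values are my on-screen measurements; the slide axes are unitless, so these are arbitrary units):

```python
# mm values measured off slides 17 (2013) and 21 (2015); arbitrary units.
intel_2015   = [13, 21, 22]  # Intel 32->22, 22->14, 14->10 drops
foundry_2015 = [12, 3, 15]   # Samsung/TSMC drops down to 10nm

intel_total   = sum(intel_2015)    # 56mm, spanning 2010 (Westmere) to 2018
foundry_total = sum(foundry_2015)  # 30mm, spanning 2012 to 2017

intel_per_year   = intel_total / 8    # 7mm per year over 8 years
foundry_per_year = foundry_total / 5  # 6mm per year over 5 years
lead_growth = intel_per_year / foundry_per_year - 1  # ~0.17, the "17% more per year"
```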

Conclusion

So let's face it guys, Intel's not going to lose their manufacturing advantage any time soon. The most a foundry will have shrunk since 2012 is TSMC's 10nm, which is 18mm. So TSMC's 7nm will only be on par with Intel's 10nm at best, and if it launches within 2 years of their 10nm node, you can be sure that it won't shrink 15/18mm. [Edit: EETimes has since reported http://www.eetimes.com/document.asp?doc_id=1330503&piddl_msgid=363618 that for 7nm, TSMC claims a "1.63x gate density".]

The only complaint you can have about this analysis is that you have to trust that Intel's done an honest job with this slide. They have done the real analysis and this post is an analysis of an analysis.

I think the most important takeaway for now is that Intel will most likely shrink the interconnect by 0.65x, just like at 14nm (just as they also shrunk the gate pitch by 0.77x, like at 14nm), so the interconnect pitch should be 34nm. This is the highest number for the interconnect pitch that I estimated, so this only confirms my previous guess that 10nm would shrink just like 14nm. At this point, however, my most optimistic guess of 31nm seems unlikely (https://forums.anandtech.com/threads/arm-and-intel-team-up-for-10nm.2483328/#post-38424727).
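The 34nm figure falls out of a one-line calculation, taking Intel's published 52nm minimum metal pitch at 14nm as the starting point:

```python
# Intel's 14nm minimum metal pitch is 52nm (per their 14nm disclosures);
# assume the same 0.65x interconnect shrink repeats at 10nm.
pitch_14nm = 52
interconnect_shrink = 0.65
pitch_10nm = pitch_14nm * interconnect_shrink  # 33.8 -> rounds to 34nm
```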

Edit: I remain with one unanswered question. Since slide 21 is based on actual products, this means that Intel's 14nm advantage (the 21mm) in part comes from other things they have done besides the feature shrink. Because in terms of feature size alone, you would expect a 0.51x shrink at 14nm, but I believe that the line on that slide is based on a higher shrink. When Intel unveiled Broadwell-Y, it became clear that it had in reality a 2.2x increase in density. What I now believe (but this is speculation) is that in reality it might have been even more than that, if Broadwell's die composition changed compared to Haswell to contain more low-density cells.

Put another way: we know that the pure silicon has shrunk 0.51x at 14nm, and we know that in reality Broadwell-Y had a 2.2x higher density thanks to other improvements. What we don't know, however, is how the composition changed from Haswell to Broadwell. If Broadwell contains more high-density cells, like the A9, then that could be part of the cause of the 2.2x density. OTOH, if BDW contains more low-density cells, then that might have deflated the actual density improvement, making it look smaller than it actually was. I bet it was the latter, although the question is by how much that would change the picture.
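To put a number on how much of the 2.2x must have come from something other than the feature shrink, assuming the 0.51x area scaling above:

```python
# If features alone shrink 0.51x in area, density rises 1/0.51 ~ 1.96x.
# The claimed 2.2x for Broadwell-Y then implies an extra factor on top.
feature_area_shrink = 0.51
density_from_features = 1 / feature_area_shrink  # ~1.96x
claimed_density = 2.2
extra_factor = claimed_density / density_from_features  # ~1.12
# So ~12% of the density gain came from something other than pitch
# scaling, which is why the Haswell/Broadwell cell-mix question matters.
```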

In short, I would have liked to see Haswell thrown onto slide 19 and 20.
 

witeken

Diamond Member
Dec 25, 2013
3,876
152
106
Don't put too much faith in marketing slides being technically accurate. They are made by graphic designers, not engineers.
As long as it is based on correct data, it doesn't matter who made the slide.
Exactly. Do you think those marketing folks know anything about those things? Yes, I know that Intel probably has marketing people with a technology background or so. But the point is that those presentations are first made by the people who give them. You see all the sources underneath the slides? You see in how much detail the slides go, like the composition of the x'tor.

The A9 vs. SKL-Y comparison is based on actual Intel research, as you can hear Bill Holt explain in the video. Intel bought the phones, counted the x'tors, etc. I mean, I guess every company, TSMC, Samsung, does this analysis of their competitors' claims and products.

So if Samsung's 10nm ends up 1mm lower on slide 21 vs. slide 17, that is no coincidence. If Intel 10nm has become 22mm instead of 18mm, that's no coincidence. Those dots didn't appear there out of nowhere. In the same way, if on slide 20 SKL-Y has a tiny bit higher density than BDW-Y, that is certainly no coincidence, since Intel of course knows its own stuff.

If you are willing to construct companies' roadmaps from job listings, then this should be no more difficult to accept.

I mean, just look at the 14nm data. Even though the marketing/graphics folks removed the exact numbers, you can clearly see that TSMC and Samsung 14nm are much closer together than on slide 17.

Doesn't it give confidence that Intel is willing to repeat their, at the time, heavily debated 2013 investor meeting slide?
 
Mar 10, 2006
11,719
2,002
126
Exactly. Do you think those marketing folks know anything about those things? Yes, I know that Intel probably has marketing people with a technology background or so. But the point is that those presentations are first made by the people who give them. You see all the sources underneath the slides? You see in how much detail the slides go, like the composition of the x'tor.

The A9 vs. SKL-Y comparison is based on actual Intel research, as you can hear Bill Holt explain in the video. Intel bought the phones, counted the x'tors, etc. I mean, I guess every company, TSMC, Samsung, does this analysis of their competitors' claims and products.

So if Samsung's 10nm ends up 1mm lower on slide 21 vs. slide 17, that is no coincidence. If Intel 10nm has become 22mm instead of 18mm, that's no coincidence. Those dots didn't appear there out of nowhere. In the same way, if on slide 20 SKL-Y has a tiny bit higher density than BDW-Y, that is certainly no coincidence, since Intel of course knows its own stuff.

If you are willing to construct companies' roadmaps from job listings, then this should be no more difficult to accept.

I mean, just look at the 14nm data. Even though the marketing/graphics folks removed the exact numbers, you can clearly see that TSMC and Samsung 14nm are much closer together than on slide 17.

Doesn't it give confidence that Intel is willing to repeat their, at the time, heavily debated 2013 investor meeting slide?
Watching Intel's mfg guys trying to blame their design teams for failing on the density side of things in actual products is kind of funny.
 

oak8292

Member
Sep 14, 2016
49
19
51
I think you are too worried about pitches. They are a simplistic way to determine who has the lead in Moore's Law but unfortunately it isn't really that simple as you have noted in your comment.

The main reason to change nodes is to improve on power performance and area (cost), PPA. The 2014 ASML slide deck that was linked earlier showed that a 1D layout used about 15% more area than a 2D layout at 10nm with the same design rules. Intel has been at 1D layout since 32nm and the foundries are still at 2D layout. Intel needs a tighter pitch just to get the same density based on the data from ASML. There could be bias in the data from ASML as they are trying to maintain 2D layout at the foundries which isn't feasible without EUV. TSMC is moving to 1D layout at 10nm because EUV isn't ready for volume manufacturing.

Then Bill Holt's presentation shows the density difference in the 'normalized' transistor area for 'tall' and 'short' transistors. If your designs have a lot of 'tall' transistors with a 1D layout, then a tighter metal pitch is more likely to be the optimum solution. On the other hand, if your designs are primarily 'short' transistors, with a higher transistor density and more interconnects per square millimeter with a 2D layout, then the optimum pitch may be larger, to reduce the number of metal layers and cost. I think 'tall' transistors are what gets Intel the 3-4 GHz for their bread-and-butter CPUs, and dropping to 'short' transistors would save area and power but lose a lot of sales. The pitch really needs to be optimized for the 'tall' transistors.

One source for data on transistor costs and manufacturing costs is Handel Jones. He has been making presentations trying to sell FD-SOI and he includes what I assume is industry average data on costs, yields, silicon utilization, etc. etc. Here is a link to one from 2013 with manufacturing costs at 20 and 28nm.

http://www.soiconsortium.org/fully-depleted-soi/presentations/october-2013/IBS - Shanghai SOI Summit - Oct 2013 .pdf

It also shows the assumptions for the increasing costs of transistors below 28nm. However, there is another presentation, from Yeric, that disputes a lot of the data on transistor costs, with some of the problems being gate utilization and parametric yield (slide 16).

https://community.arm.com/servlet/JiveServlet/previewBody/10986-102-1-22341/Yeric_IEDM_2015 - For print.pdf

I wish I could tell you I understood slides 41 and 42 in the Yeric presentation but I need help.
 

witeken

Diamond Member
Dec 25, 2013
3,876
152
106
Watching Intel's mfg guys trying to blame their design teams for failing on the density side of things in actual products is kind of funny.
I don't know why you've become so Intel-cynical.

Two of the greatest Intel defenders on this forum, III-V and Arachnotronic, have turned away from Intel. Are there any Intel fans left, or has the fanbase just become less polarized :)?
 

witeken

Diamond Member
Dec 25, 2013
3,876
152
106
I think you are too worried about pitches. They are a simplistic way to determine who has the lead in Moore's Law but unfortunately it isn't really that simple as you have noted in your comment.

The main reason to change nodes is to improve on power performance and area (cost), PPA. The 2014 ASML slide deck that was linked earlier showed that a 1D layout used about 15% more area than a 2D layout at 10nm with the same design rules. Intel has been at 1D layout since 32nm and the foundries are still at 2D layout. Intel needs a tighter pitch just to get the same density based on the data from ASML. There could be bias in the data from ASML as they are trying to maintain 2D layout at the foundries which isn't feasible without EUV. TSMC is moving to 1D layout at 10nm because EUV isn't ready for volume manufacturing.

Then Bill Holt's presentation shows the density difference in the 'normalized' transistor area for 'tall' and 'short' transistors. If your designs have a lot of 'tall' transistors with a 1D layout, then a tighter metal pitch is more likely to be the optimum solution. On the other hand, if your designs are primarily 'short' transistors, with a higher transistor density and more interconnects per square millimeter with a 2D layout, then the optimum pitch may be larger, to reduce the number of metal layers and cost. I think 'tall' transistors are what gets Intel the 3-4 GHz for their bread-and-butter CPUs, and dropping to 'short' transistors would save area and power but lose a lot of sales. The pitch really needs to be optimized for the 'tall' transistors.

One source for data on transistor costs and manufacturing costs is Handel Jones. He has been making presentations trying to sell FD-SOI and he includes what I assume is industry average data on costs, yields, silicon utilization, etc. etc. Here is a link to one from 2013 with manufacturing costs at 20 and 28nm.

http://www.soiconsortium.org/fully-depleted-soi/presentations/october-2013/IBS - Shanghai SOI Summit - Oct 2013 .pdf

It also shows the assumptions for the increasing costs of transistors below 28nm. However, there is another presentation, from Yeric, that disputes a lot of the data on transistor costs, with some of the problems being gate utilization and parametric yield (slide 16).

https://community.arm.com/servlet/JiveServlet/previewBody/10986-102-1-22341/Yeric_IEDM_2015 - For print.pdf

I wish I could tell you I understood slides 41 and 42 in the Yeric presentation but I need help.
We've had this 1D-2D discussion numerous times before on this forum, and frankly, I don't remember the conclusion, although I'd think it was that the 15% number is too much, but that might be wishful thinking on my part.

(E.g. https://forums.anandtech.com/threads/samsung-claims-mass-production-on-14-nanometers.2409843/page-2).

Edit: But good first post, welcome on AT.

Edit 2: While searching about this interconnect issue, I've come across a thread I made myself which claims 10nm is 2.1x density, thanks to 1D layout. Bit confused now since I thought it was 0.52x for TSMC.

https://forums.anandtech.com/threads/tsmc-10nm-details-tsmc-symposium.2428274/
 

Sweepr

Diamond Member
May 12, 2006
5,151
1,125
131
LaptopMedia is testing a Core i7-7Y75 Acer notebook. This part got my attention:

Nevertheless, Intel claims that the shift from Core m5 and m7 to i5 and i7 is not only a marketing-driven move but also represents a performance increase that almost matches the KBL-U chips. That's a pretty bold statement, but with the new guidelines imposed on OEMs for implementing the KBL-Y processors, we might actually see a decent performance jump over last year's Skylake-Y chips.

Kaby Lake-Y (2C+GT2)

http://laptopmedia.com/news/intels-kaby-lake-core-i7-7y75-is-in-our-office-lets-see-how-far-the-former-core-m-processor-has-gotten
 
