> Iris is fine, but Iris Pro saw very poor adoption. Pretty much the Apple Edition, but Apple is now using dGPUs for the rMBP 15".

Even regular Iris doesn't really sell much outside of Apple, which, by the way, has been very quiet on the Mac front: a new Skylake rMBP is expected in October, but the silence has made the ARM chatter louder. If Apple is dropping Intel sooner rather than later, then ending Iris Pro would certainly make sense. They can always revive it later if forced to by AMD.
> Intel ULV/ULT CPUs (15-18W TDP) Comparison in Cinebench 11.5
> Next in line: 2017 2C/4T Cannon Lake and 4C/8T Kaby Lake, 2018 4C/8T Coffee Lake.

No, it's 2C/4T Kaby Lake and 2C/4T Cannon Lake. If the OEMs even use a Coffee Lake quad-core U processor, it's going to be segmented and tiered at a higher price point, i.e. premium.
> No, it's 2C/4T Kaby Lake and 2C/4T Cannon Lake. If the OEMs even use a Coffee Lake quad-core U processor, it's going to be segmented and tiered at a higher price point, i.e. premium.

This is a technical comparison of the best Intel has, or had, to offer in that TDP range, and we already know quad-cores will hit the U lineup in 2017/2018. Once 15W Kaby Lake and Coffee Lake 4C/8T options arrive, the likes of Microsoft, Apple, and Dell will use them, or at least offer those chips in their ultraportables.
30% better efficiency for ST and 50% for MT in four years. Good, but not impressive.

Intel ULV/ULT CPUs (15-18W TDP) Comparison in Cinebench 11.5:
Core i7-620UM (32nm Arrandale, 2C/4T @ 1.06 GHz - 2010)
- Single-Core: ?
- Multi-Core: 1.10
Core i7-2617M (32nm Sandy Bridge, 2C/4T @ 1.5 GHz - 2011)
- Single-Core: ?
- Multi-Core: 2.11
Core i7-3517U (22nm Ivy Bridge, 2C/4T @ 1.9 GHz - 2012)
- Single-Core: 1.2
- Multi-Core: 2.8
Core i7-4500U (22nm Haswell, 2C/4T @ 1.8 GHz - 2013)
- Single-Core: 1.3
- Multi-Core: 2.85
Core i7-5500U (14nm Broadwell, 2C/4T @ 2.4 GHz - 2015)
- Single-Core: 1.4
- Multi-Core: 3.2
Core i7-6500U (14nm Skylake, 2C/4T @ 2.5 GHz - 2015)
- Single-Core: 1.5
- Multi-Core: 3.5
Core i7-7500U (14nm Kaby Lake, 2C/4T @ 2.7 GHz - 2016)
- Single-Core: 1.6-1.68
- Multi-Core: 4.03
Median scores from NotebookCheck.
Next in line: 2017 2C/4T Cannon Lake and 4C/8T Kaby Lake, 2018 4C/8T Coffee Lake.
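The "30% ST / 50% MT" takeaway can be sanity-checked against the medians above. A quick sketch (the Kaby Lake single-core figure uses the midpoint of the quoted 1.6-1.68 range):

```python
# Cinebench 11.5 median scores from the list above: (model, single, multi).
scores = {
    2012: ("i7-3517U", 1.2, 2.8),    # Ivy Bridge, first year with an ST median
    2016: ("i7-7500U", 1.64, 4.03),  # Kaby Lake; ST midpoint of the 1.6-1.68 range
}

first, last = scores[2012], scores[2016]
st_gain = last[1] / first[1] - 1
mt_gain = last[2] / first[2] - 1
print(f"ST gain 2012 -> 2016: {st_gain:.0%}")  # ~37%
print(f"MT gain 2012 -> 2016: {mt_gain:.0%}")  # ~44%
```

So the round 30%/50% figures bracket the measured result: roughly +37% single-core and +44% multi-core over four years at the same nominal 15W TDP.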
May I ask a question about wafer cost?
So I know that wafer cost is not literally the price of the raw piece of silicon. It also includes things like chemicals that are used. So my question is how far does this go?
For instance if you look at capex: https://g.foolcdn.com/editorial/images/164894/capex_large.png.
These capacity costs are also taken into account, right? What I think is that Intel has numbers for the depreciation of its tools, so the tool costs and mask costs (like a €60M ASML lithography machine) are also taken into account, depending on how much they are used. A later node then becomes more expensive because it has, for instance, more lithography steps, so each wafer absorbs a higher share of tool depreciation. (So if you actually tried to follow a wafer through its three-month journey in the fab, you would not find literally $5,000 worth of stuff spent on it, like the raw silicon, the materials, the chemicals, and the electricity bill, or would you? The wafer cost is more a derived number from a calculation that includes everything than a running tally of the cost of the materials you use?)

Am I wrong? Too bad Idontcare disappeared, or TuxDave.
Let me explain it...

With wafer cost you have two big pieces. The first is the literal cost to produce the wafer: the silicon wafer itself, chemicals, the electric bill to keep the lights on, the salaries of the people you're paying to actually run the factories, and so on. The other big piece is depreciation cost.

Depreciation is an accounting concept designed to spread the cost of capital equipment over some amount of time rather than recognizing it all at once (a depreciation schedule).

The money is already spent, so there is no impact on cash flow, but that spending needs to flow through to the products that get sold. So Intel takes a quarterly depreciation "charge", and its impact on wafer cost is roughly total depreciation divided by wafers shipped.

One reason 14nm and 10nm are more costly than, say, 22nm or 32nm is the addition of double/quad patterning and other process steps. The more steps required to process a wafer, the longer it takes to go from start to finish, and this extra complexity increases the time to produce a wafer (known as cycle time). Since depreciation happens on a fixed schedule, the capital equipment costs are spread across fewer wafers in a given amount of time, raising the effective wafer cost.

Cool, nice to see that I got the gist of it, but this is much clearer, especially the depreciation part. You've earned a like.
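The quarterly-charge mechanics described above reduce to a toy formula: effective cost per wafer is the direct running cost plus the depreciation charge divided by wafers shipped in the period. All numbers below are illustrative placeholders, not actual Intel figures:

```python
def effective_wafer_cost(variable_cost, quarterly_depreciation, wafers_per_quarter):
    """Direct per-wafer running cost plus the quarterly depreciation
    charge spread over the wafers actually shipped that quarter."""
    return variable_cost + quarterly_depreciation / wafers_per_quarter

# Hypothetical fab carrying a fixed $1.5B quarterly depreciation charge.
dep = 1_500_000_000

# Longer cycle time (double/quad patterning) means fewer wafers out per
# quarter, so each wafer carries a larger slice of the same depreciation.
mature_node = effective_wafer_cost(1500, dep, 1_000_000)  # shorter cycle time
dense_node = effective_wafer_cost(2000, dep, 600_000)     # more patterning steps

print(f"mature node: ${mature_node:,.0f} per wafer")  # $3,000
print(f"dense node:  ${dense_node:,.0f} per wafer")   # $4,500
```

Note how the depreciation term dominates the difference: the direct cost only rose $500, but the slower wafer throughput added another $1,000 of depreciation per wafer.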
> No, it's 2C/4T Kaby Lake and 2C/4T Cannon Lake. If the OEMs even use a Coffee Lake QC U processor, it's going to be segmented and tiered at a higher price point, i.e. premium.

I guess we will have to see what Intel charges for it, but I imagine it'd be about the same as the 2+3e models (i.e. an RCP about $20 higher than the 2+2 U). Intel actually charges less (the RCP, anyway) for the 4+2 H than for the 2+3e U, but the former doesn't include the PCH.
> I guess we will have to see what Intel charges for it, but I imagine it'd be about the same as the 2+3e models (i.e. an RCP about $20 higher than the 2+2 U).

It doesn't matter what Intel puts on its site. Why would I think Dell etc. would play a different game than they do with dGPUs? Dell, Lenovo, and the rest will sell really crappy dGPUs at inflated prices. Sometimes there are special models (e.g. when Dell had the 8850M), but that's not the norm.
Don't put too much faith in marketing slides being technically accurate. They are made by graphic designers, not engineers.
> As long as it is based on correct data, it doesn't matter who made the slide.

Exactly. Do you think those marketing folks know anything about these things? Yes, I know Intel probably has marketing people with a technology background. But the point is that those presentations are first made by the people who give them. You see all the sources underneath the slides? You see how much detail the slides go into, like the composition of the x'tor?
> Exactly. Do you think those marketing folks know anything about these things?

Watching Intel's mfg guys trying to blame their design teams for failing on the density side of things in actual products is kind of funny.
The A9 vs. SKL-Y comparison is based on actual Intel research, as you can hear Bill Holt explain in the video. Intel bought the phones, counted the x'tors, etc. I would guess every company, TSMC and Samsung included, does this kind of analysis of its competitors' claims and products.

So if Samsung's 10nm ends up 1mm lower on slide 21 vs. slide 17, that is no coincidence. If Intel's 10nm has become 22mm instead of 18mm, that's no coincidence. Those dots didn't appear there out of nowhere. In the same way, if on slide 20 SKL-Y has slightly higher density than BDW-Y, that is certainly no coincidence, since Intel of course knows its own stuff.

If you are willing to construct companies' roadmaps from job listings, then this should be no harder to accept.

I mean, just look at the 14nm data. Even though the marketing/graphics folks removed the exact numbers, you can clearly see that TSMC and Samsung 14nm are much closer together than on slide 17.

Doesn't it inspire confidence that Intel is willing to repeat its 2013 investor meeting slide, which was heavily debated at the time?
> Watching Intel's mfg guys trying to blame their design teams for failing on the density side of things in actual products is kind of funny.

I don't know why you've become so Intel-cynical.
> I think you are too worried about pitches. They are a simplistic way to determine who has the lead in Moore's Law, but unfortunately it isn't really that simple, as you have noted in your comment.

We've had this 1D-2D discussion numerous times before on this forum, and frankly I don't remember the conclusion, although I'd think it was that the 15% number is too high; but that might be wishful thinking on my part.
The main reason to change nodes is to improve power, performance, and area/cost (PPA). The 2014 ASML slide deck linked earlier showed that a 1D layout used about 15% more area than a 2D layout at 10nm with the same design rules. Intel has used 1D layout since 32nm, while the foundries are still on 2D layout, so based on the ASML data Intel needs a tighter pitch just to reach the same density. There could be bias in that data, since ASML wants the foundries to maintain 2D layout, which isn't feasible without EUV. TSMC is moving to 1D layout at 10nm because EUV isn't ready for volume manufacturing.

Bill Holt's presentation then shows the density difference in the 'normalized' transistor area for 'tall' and 'short' transistors. If your designs have a lot of 'tall' transistors in a 1D layout, a tighter metal pitch is more likely the optimum solution. On the other hand, if your designs are primarily 'short' transistors, with higher transistor density and more interconnects per square millimeter in a 2D layout, the optimum pitch may be larger, to reduce the number of metal layers and cost. I think 'tall' transistors are what get Intel the 3-4 GHz for their bread-and-butter CPUs; dropping to 'short' transistors would save area and power but lose a lot of sales. The pitch really needs to be optimized for the 'tall' transistors.
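The ASML 15% figure implies how much tighter a 1D-layout pitch must be to match 2D density: area scales with pitch squared, so a 15% area penalty translates to roughly a 7% tighter pitch. A quick sketch (the 64 nm metal pitch below is just an illustrative number, not any vendor's spec):

```python
import math

area_penalty_1d = 1.15  # ASML: 1D layout ~15% more area than 2D, same design rules

# Linear dimensions scale as the square root of area, so recovering the
# density lost to 1D layout requires shrinking the pitch by sqrt(1.15).
pitch_scale = 1 / math.sqrt(area_penalty_1d)
print(f"required pitch scale: {pitch_scale:.3f}")  # ~0.933, i.e. ~7% tighter

# Illustrative: the 1D pitch that a 64 nm 2D foundry pitch is equivalent to.
foundry_pitch_nm = 64.0
equivalent_1d_pitch = foundry_pitch_nm * pitch_scale
print(f"1D pitch for equal density: {equivalent_1d_pitch:.1f} nm")  # ~59.7 nm
```

This is the sense in which Intel "needs a tighter pitch just to get the same density": if the ASML 15% number holds, a 1D process must run several nanometers tighter than a 2D competitor merely to break even.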
One source for data on transistor costs and manufacturing costs is Handel Jones. He has been making presentations trying to sell FD-SOI and he includes what I assume is industry average data on costs, yields, silicon utilization, etc. etc. Here is a link to one from 2013 with manufacturing costs at 20 and 28nm.
http://www.soiconsortium.org/fully-depleted-soi/presentations/october-2013/IBS - Shanghai SOI Summit - Oct 2013 .pdf
It also shows the assumptions behind the increasing transistor costs below 28nm. However, another presentation, from Yeric, disputes a lot of the transistor-cost data, with some of the problems being gate utilization and parametric yield (slide 16):
https://community.arm.com/servlet/JiveServlet/previewBody/10986-102-1-22341/Yeric_IEDM_2015 - For print.pdf
I wish I could tell you I understood slides 41 and 42 in the Yeric presentation but I need help.
Nevertheless, Intel claims that the shift from Core m5 and m7 to i5 and i7 branding is not just a marketing-driven move but also reflects a performance increase that almost matches the KBL-U chips. That's a pretty bold claim, but with the new guidelines Intel has imposed on OEMs for implementing the KBL-Y processors, we might actually see a decent performance jump over last year's Skylake-Y chips.