Skylake Core Configs and TDPs

Homeles

Platinum Member
Dec 9, 2011
[Image: Richtek-motherboard-solutions-05.JPG]


http://www.esm-components.com/news/004/4.html

Stumbled upon this while searching for more information on the whole Skylake-not-using-FIVR thing.

Of interest, it looks like there's a 95W 4C+GT2 part, and a 95W 4C+GT4e part. The Skylake lineup scales from 95W down to 4.5W (Y-series).

Definitely not a full lineup... probably engineering samples.

95W for Skylake-H seems... really odd. Nominal TDP for Haswell-H tops out at 57W, with cTDP-up maxing out at 65W. This would suggest there are 95W BGA parts. Looks like Intel is really going after the dGPU market.
 

mikk

Diamond Member
May 15, 2012
Don't read too much into the desktop TDP data; this is Intel's standard approach. They have to make sure the platform can cope with 95W models (especially for a 4+4e part it could be necessary). We saw 95W in early Haswell roadmaps for the same reason. The actual processor TDPs could differ for SKL at launch.
 

Homeles

Platinum Member
Dec 9, 2011
mikk said:
Don't read too much into the desktop TDP data; this is Intel's standard approach. They have to make sure the platform can cope with 95W models (especially for a 4+4e part it could be necessary). We saw 95W in early Haswell roadmaps for the same reason. The actual processor TDPs could differ for SKL at launch.
That's a really good point. Actually, even Ivy Bridge had parts listed at 95W prior to launch, which turned out to top out at 77W.

Looks like wccftech has this information already... but at least I'm happy knowing I found it myself :)
 

IntelUser2000

Elite Member
Oct 14, 2003
BTW, I really think Skylake won't be what people expect. There's a reason why Intel is releasing Broadwell K and Skylake mainstream at the same time.

Businesses that make money don't make products with conflicting interests - for example, making the cheaper mainstream Skylake better than Broadwell K.

There would have been NO WAY Intel could have pulled this off back in the Pentium III days; CPUs advanced way too fast. Now the gains are far slower.

Unless Broadwell releases at a 4.5GHz base and overclocks to 5.5GHz while the Skylake "5770" is a 3.7GHz base chip (100MHz faster than the 4790), MAYBE we'll see a 15-20% perf/clock improvement.
 

Wall Street

Senior member
Mar 28, 2012
"No this power rail" tells me this document was probably faked by someone with English as a second language.
 

Haserath

Senior member
Sep 12, 2010
IntelUser2000 said:
BTW, I really think Skylake won't be what people expect. There's a reason why Intel is releasing Broadwell K and Skylake mainstream at the same time.

Businesses that make money don't make products with conflicting interests - for example, making the cheaper mainstream Skylake better than Broadwell K.

There would have been NO WAY Intel could have pulled this off back in the Pentium III days; CPUs advanced way too fast. Now the gains are far slower.

Unless Broadwell releases at a 4.5GHz base and overclocks to 5.5GHz while the Skylake "5770" is a 3.7GHz base chip (100MHz faster than the 4790), MAYBE we'll see a 15-20% perf/clock improvement.

Intel's 14nm claims look greater than usual, and they're also having yield problems. I'm thinking they've already changed the transistor makeup, and I'm expecting better electrical performance from it.

I won't be surprised to see Broadwell clock over 5GHz regularly, with Skylake bringing the same 10-15% improvement as previous architecture updates.

One of the reasons they may be delaying SKL is the difference in GPU size: rumored to be 48 EUs for BDW-K vs. 96 for SKL-K.
 

mikk

Diamond Member
May 15, 2012
SKL GT4 could have more than 100 EUs if they increased the EU count to 28 per slice.
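Napkin math on that speculation (the slice count and per-slice figures here are assumptions from the thread, not confirmed specs): Gen8 Broadwell packs 3 subslices x 8 EUs = 24 EUs per slice, so a hypothetical 4-slice GT4 would land at 96 EUs as-is, and 28 EUs per slice would push it past 100:

```python
# Speculative EU totals for a hypothetical 4-slice GT4 part.
# 24 EUs/slice matches Gen8 (3 subslices x 8 EUs each);
# 28 EUs/slice is the rumored bump discussed above.
def total_eus(slices: int, eus_per_slice: int) -> int:
    return slices * eus_per_slice

print(total_eus(4, 24))  # Gen8-style slices: 96 EUs
print(total_eus(4, 28))  # rumored 28 EUs/slice: 112 EUs, i.e. >100
```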
 

mrmt

Diamond Member
Aug 18, 2012
IntelUser2000 said:
BTW, I really think Skylake won't be what people expect. There's a reason why Intel is releasing Broadwell K and Skylake mainstream at the same time.

I think Skylake will be something people expect; it just won't be what some desktop enthusiasts are expecting.
 

Sweepr

Diamond Member
May 12, 2006
There's a 45W 4+4e SKU. A mobile part with 4 Skylake cores, a 100+ EU iGPU, and that TDP would be an impressive feat. Apple and other OEMs will surely like it.
 

SoulWager

Member
Jan 23, 2013
Is the discrete GPU desktop market so small that it's not worth making a 60 SKU? It just seems like the iGPU is a waste of die area for a lot of people.
 

24601

Golden Member
Jun 10, 2007
Since Intel has a cross-licensing agreement with Nvidia, is there any inkling that Intel will try again at a dedicated desktop graphics/compute hybrid card like they were going for with Larrabee?

They definitely have the process advantage, and once they fully ramp 14nm they will have fab space to spare for such an adventure in the upper-mainstream to high-end consumer desktop dedicated graphics segment.

Does anyone know if that cross-licensing agreement with Nvidia explicitly excluded that possibility anywhere, like it does Nvidia doing an x86 Project Denver?
 

ShintaiDK

Lifer
Apr 22, 2012
24601 said:
Since Intel has a cross-licensing agreement with Nvidia, is there any inkling that Intel will try again at a dedicated desktop graphics/compute hybrid card like they were going for with Larrabee?

They definitely have the process advantage, and once they fully ramp 14nm they will have fab space to spare for such an adventure in the upper-mainstream to high-end consumer desktop dedicated graphics segment.

ASPs are too low in that segment, and it's a dying segment; Xeon Phi is the only thing you see in that category, and it's IGPs everywhere else. And as if dGPUs didn't already suffer from the value/low end being obliterated, something like the GT4e config will really add pressure.

Intel's graphics share is rising fast already; dGPUs would only be a black hole for money. And look at Nvidia: they are moving as fast as they can to Tegra etc. for a post-dGPU world.

[Image: JPR_Graphics_chip_market_Q1_2014.jpg]
 

24601

Golden Member
Jun 10, 2007
ShintaiDK said:
ASPs are too low in that segment, and it's a dying segment; Xeon Phi is the only thing you see in that category, and it's IGPs everywhere else. And as if dGPUs didn't already suffer from the value/low end being obliterated, something like the GT4e config will really add pressure.

Intel's graphics share is rising fast already; dGPUs would only be a black hole for money. And look at Nvidia: they are moving as fast as they can to Tegra etc. for a post-dGPU world.

[Image: JPR_Graphics_chip_market_Q1_2014.jpg]

Intel shut down one of its prospective 14nm fabs specifically for lack of need.

Intel obviously has a 14nm fab utilization problem for most of the node once it's fully ramped.

Intel has already invested the R&D to run a tick/tock-ish cadence for its graphics IGP.

I don't see it as such a huge leap to use that spare fab capacity, once the process fully ramps, on something you've essentially already done most of the R&D for.

Just make a GT8 or a GT16 or something.

Intel will definitely benefit from Nvidia dying, as it already has from AMD, its token competition in both IGPs and GPUs.

This would be a pretty good way to kill them: deny Nvidia the volumes it needs to subsidize the R&D for its HPC cards.
 

ShintaiDK

Lifer
Apr 22, 2012
24601 said:
Intel shut down one of its prospective 14nm fabs specifically for lack of need.

Intel obviously has a 14nm fab utilization problem for most of the node once it's fully ramped.

Intel has already invested the R&D to run a tick/tock-ish cadence for its graphics IGP.

I don't see it as such a huge leap to use that spare fab capacity, once the process fully ramps, on something you've essentially already done most of the R&D for.

Just make a GT8 or a GT16 or something.

You assume it's possible to tool the fab for it as well (note that TSMC still hasn't shipped a single 20nm "retail" part), and that dGPUs have a high enough margin to pay for themselves. And in terms of utilization, look at TSMC: they were down to 70% on 28nm. It's not like Intel has some abnormal utilization rate.

[Image: exh99144.jpg]

GPU development time is also multiple years. And no, they can't just make a GT16, for example, just like you can't "just" add more cores to a CPU; all the infrastructure needs to be in place.

But again, why enter a rapidly dying segment, especially when your own IGPs are flying ahead in market share?
 

24601

Golden Member
Jun 10, 2007
ShintaiDK said:
You assume it's possible to tool the fab for it as well (note that TSMC still hasn't shipped a single 20nm "retail" part), and that dGPUs have a high enough margin to pay for themselves. And in terms of utilization, look at TSMC: they were down to 70% on 28nm. It's not like Intel has some abnormal utilization rate.

[Image: exh99144.jpg]

GPU development time is also multiple years. And no, they can't just make a GT16, for example, just like you can't "just" add more cores to a CPU; all the infrastructure needs to be in place.

But again, why enter a rapidly dying segment, especially when your own IGPs are flying ahead in market share?

Using spare fab capacity is much cheaper for an IDM than buying capacity is for a third-party design company like Nvidia.

I look at that chart and see 20% of free capacity that you can put whatever you like on, as long as it doesn't have to ship in volume by any specific date.

ShintaiDK said:
But again, why enter a rapidly dying segment, especially when your own IGPs are flying ahead in market share?

Nvidia is a direct and competent competitor to Intel in the HPC space.

If Intel can cut Nvidia's legs out from under them far more quickly than the natural market progression would, then you have cut them off at the pass without giving them as much time to maneuver.

Nvidia subsidizes its HPC cards with the comparatively large volumes of almost the same design in its consumer GPU line (an identical design to its big-core consumer GPU).

If Intel can rapidly accelerate the decline of Nvidia's dedicated card sales volume by spending not a whole lot of R&D gluing together parts it already has and has already paid the R&D for, it can cut Nvidia down before its planned dramatic turnaround in the ARM APU space.
 

ShintaiDK

Lifer
Apr 22, 2012
24601 said:
Using spare fab capacity is much cheaper for an IDM than buying capacity is for a third-party design company like Nvidia.

I look at that chart and see 20% of free capacity that you can put whatever you like on, as long as it doesn't have to ship in volume by any specific date.

Just because the utilization rate is not 100% doesn't mean there is spare capacity. Remember, there is an ongoing upgrade of fabs to newer nodes.

And remember, Fab 42, which you originally mentioned, is an empty shell; there is no equipment in it. It seems it will be used for 450mm/10nm.

Fab 32, on the other hand, is getting an unexpected upgrade from 22nm to 14nm.
 

NTMBK

Lifer
Nov 14, 2011
24601 said:
Nvidia is a direct and competent competitor to Intel in the HPC space.

If Intel can cut Nvidia's legs out from under them far more quickly than the natural market progression would, then you have cut them off at the pass without giving them as much time to maneuver.

Nvidia subsidizes its HPC cards with the comparatively large volumes of almost the same design in its consumer GPU line (an identical design to its big-core consumer GPU).

Why do you think Intel is pushing the Xeon Phi? ;) They want to kill off the Tesla with an x86 product and deny Nvidia the HPC market.
 

24601

Golden Member
Jun 10, 2007
NTMBK said:
Why do you think Intel is pushing the Xeon Phi? ;) They want to kill off the Tesla with an x86 product and deny Nvidia the HPC market.

One does not strike where the enemy is strong.

One strikes where the enemy is weak.

That's why I said "cut their legs from under them."

Nvidia has inflexible per-unit costs from TSMC.

Intel can sell the cards at cost without getting in trouble for dumping below cost, and that would be far below what Nvidia's cost structure can handle.

Since Intel dgaf about the dedicated GPU market, almost-dumping is the best way to get rid of a competitor.
 

Homeles

Platinum Member
Dec 9, 2011
24601 said:
Since Intel has a cross-licensing agreement with Nvidia, is there any inkling that Intel will try again at a dedicated desktop graphics/compute hybrid card like they were going for with Larrabee?

They definitely have the process advantage, and once they fully ramp 14nm they will have fab space to spare for such an adventure in the upper-mainstream to high-end consumer desktop dedicated graphics segment.

Does anyone know if that cross-licensing agreement with Nvidia explicitly excluded that possibility anywhere, like it does Nvidia doing an x86 Project Denver?
All that cross-licensing agreement is, is basically an agreement not to sue each other if some of their IP is similar to the other's. It's not a "hey, we'll give you the critical details of Maxwell if you give us the critical details of Gen8."
 

MarkLuvsCS

Senior member
Jun 13, 2004
The market may be shrinking, but I certainly don't expect it to disappear anytime soon. Mobile has grown by leaps and bounds, but that doesn't mean all desktops are disappearing. Adding more and more to the CPU die keeps increasing its thermal density as well. If the GPU sits on a separate die, it spares the CPU's thermal budget, and the GPU can scale up within its own thermal limits.
 

24601

Golden Member
Jun 10, 2007
Homeles said:
All that cross-licensing agreement is, is basically an agreement not to sue each other if some of their IP is similar to the other's. It's not a "hey, we'll give you the critical details of Maxwell if you give us the critical details of Gen8."

I'm talking about specific limitations baked into the contract, like the specific language that didn't allow Nvidia to do an x86-translating Denver.

I was wondering if there was anything in there that forbade Intel from making dedicated GPUs.