Skylake Core Configs and TDPs


witeken

Diamond Member
Dec 25, 2013
3,899
193
106
My guess is 10nm for "post-silicon" in some form for Intel. I don't recall if they've nailed down their 10nm "recipe" yet, though, so there may or may not be some fluidity.

They already have the solution since EOY 2012.

I don't think 10nm will be the node of III-V. Earlier I thought it would be 7nm, but it's probably 5nm because there's a new transistor innovation every 2 nodes.
 

jdubs03

Golden Member
Oct 1, 2013
1,283
902
136
They already have the solution since EOY 2012.

I don't think 10nm will be the node of III-V. Earlier I thought it would be 7nm, but it's probably 5nm because there's a new transistor innovation every 2 nodes.

After a bit of thought I'd put my money on 7nm (probably FinFET). The timing (2019) seems right with potentially EUV/450mm. It gives Intel an extra 2 yrs post-Cannonlake to get it right.

That one slide from Applied Materials gives us a solid indication of process materials at 10/7nm.

I think 5nm could be the same material + a new gate structure.

We've seen this:
[Image: transistor topologies comparison]

Found this:
[Image: transistor structure roadmap]
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
They already have the solution since EOY 2012.

I don't think 10nm will be the node of III-V. Earlier I thought it would be 7nm, but it's probably 5nm because there's a new transistor innovation every 2 nodes.
I wouldn't make the "innovation every 2 nodes" thing a hard rule. The tech is ready to drop in when it's ready.
After a bit of thought I'd put my money on 7nm (probably FinFET). The timing (2019) seems right with potentially EUV/450mm. It gives Intel an extra 2 yrs post-Cannonlake to get it right.
You need to keep in mind that they're going to want to limit the number of new technologies they're implementing at once, if they have the choice.

10 and 7nm are going to be very difficult for everyone.
10nm definition is closed, and since 2013 Intel has been designing chips on 10nm.

http://www.fool.com/investing/gener...gins-work-on-10-nanometer-mobile-designs.aspx
Interesting. I do think they're going to replace silicon in the channel at 10nm. With what, I don't know.
 

jpiniero

Lifer
Oct 1, 2010
16,823
7,267
136
Interesting. I do think they're going to replace silicon in the channel at 10nm. With what, I don't know.

It appears that Intel is going to do 10 nm on standard silicon; perhaps with 450 mm wafers.
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
How is Intel going to eliminate the dGPU?

They can't even keep up with AMD's APUs in graphics, except for one top-end, expensive part.
If you are pinning your hopes on a next-gen part coming out and being faster, remember that Intel isn't the only company making new, faster parts.


Intel might be able to eliminate the $150 GPU market, but that's it.
Or have you guys forgotten that, unlike the lackluster yearly IPC improvements on CPUs, top-end GPUs of subsequent generations tend to have large performance deltas between them?
 

jpiniero

Lifer
Oct 1, 2010
16,823
7,267
136
How is Intel going to eliminate the dGPU.?

Well, they have already done quite a bit, considering the only reason to get a dGPU now is for games. And (aw shucks) Broadwell U/Y doesn't have the lanes to drive an external GPU, and wouldn't you know it, the laptop quad cores only come as H models. It appears to be the same for Skylake too. Since dual cores are the vast majority of Intel's sales, it's easy to see the effect on nVidia: a large portion of their sales are low-to-mid-range GPUs like the 820/830/840/850 that would be paired with Intel's dual-core laptops.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Apple has already matched Haswell IPC and is already beating Intel in mobile GPU, and handily in CPU also.

http://www.anandtech.com/bench/product/1192
http://www.anandtech.com/bench/product/1209

Let's see.

Core i3-4330 (2 cores, 4 threads, 3.5GHz, no Turbo) vs. iPad Air A7 (2 cores, 1.4GHz)

Sunspider JavaScript: 137 vs 389.9 (less is better, 184.6%)
Mozilla Kraken: 1224 vs 5773.2 (less is better, 371.7%)
Google Octane: 27089 vs 5308 (higher is better, 410.3%)
WebXPRT: 2369 vs 537 (higher is better, 341.2%)

Now, that's with a 150% difference in clock speed.

Per clock,

On Sunspider Haswell is... 13.8% faster
On Kraken Haswell is... 88.7% faster
On Google Octane V2: Haswell is... 104.1% faster
On WebXPRT: Haswell is... 76.5% faster
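The per-clock normalization above can be sketched in a few lines; the scores are the ones from the AnandTech Bench pages linked in the post, and the arithmetic simply divides the raw speedup by the 2.5x clock ratio:

```python
# Redoing the per-clock comparison from the post. No new data here;
# lower-is-better benchmarks are inverted so a higher ratio always
# means Haswell is faster.

I3_CLOCK, A7_CLOCK = 3.5, 1.4        # GHz
CLOCK_RATIO = I3_CLOCK / A7_CLOCK    # 2.5x

benchmarks = {
    # name: (i3-4330 score, A7 score, lower-is-better?)
    "Sunspider": (137,   389.9,  True),
    "Kraken":    (1224,  5773.2, True),
    "Octane":    (27089, 5308,   False),
    "WebXPRT":   (2369,  537,    False),
}

per_clock = {}
for name, (i3, a7, lower_is_better) in benchmarks.items():
    speedup = (a7 / i3) if lower_is_better else (i3 / a7)  # raw Haswell speedup
    per_clock[name] = speedup / CLOCK_RATIO                # normalize out clocks

for name, ratio in per_clock.items():
    print(f"{name}: Haswell is {ratio - 1:+.1%} faster per clock")
```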

Yeah, right, it's equal.

Does a low power setup affect Haswell's results?

http://www.anandtech.com/show/7440/microsoft-surface-pro-2-review/3

Core i5 4200U, 2.6GHz single core Turbo

Let's assume the worst case scenario for the Core i5 and say it runs at 2.6GHz for ALL applications.

Per clock,
Sunspider Javascript: Haswell is 8.2% faster
Mozilla Kraken: Haswell is 78% faster
WebXPRT: Haswell is 26.3% faster

Can we guarantee that the Core i5 4200U runs at 2.6GHz all the time? Hell no! We can't even guarantee that it runs at 2.6GHz on single threads! On Sunspider, if the i5 4200U actually runs at 2.5GHz, Haswell's per-clock advantage becomes 12.5%, a negligible difference from the 13.8% the desktop i3 showed.

To get per-clock gains equivalent to the desktop Haswell's, we need the Core i5 4200U running at 2.47GHz for Sunspider, 2.45GHz in Mozilla Kraken, and 1.86GHz in WebXPRT. It's quite possible that the desktop is somewhat faster than mobile at the same clock due to platform differences, which means we'd only need 2.3GHz (the max 2-core Turbo for the i5 4200U) on Kraken. Kraken may run at 2.3GHz with the desktop being 15-20% faster.
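Those required clocks can be backed out directly: if the per-clock advantage scales inversely with the assumed clock, the clock needed to match the desktop figure is the assumed 2.6GHz scaled by the ratio of the two advantages. A quick sketch, using the advantage numbers worked out above:

```python
# Back out the clock the i5-4200U would need for its per-clock lead
# to match the desktop i3's. required = assumed_clock * (mobile / desktop).

ASSUMED_CLOCK = 2.6  # GHz, the worst case assumed in the post

cases = {
    # name: (per-clock advantage at 2.6GHz, desktop i3 per-clock advantage)
    "Sunspider": (1.082, 1.138),
    "Kraken":    (1.780, 1.887),
    "WebXPRT":   (1.263, 1.765),
}

required = {name: round(ASSUMED_CLOCK * mobile / desktop, 2)
            for name, (mobile, desktop) in cases.items()}
print(required)  # {'Sunspider': 2.47, 'Kraken': 2.45, 'WebXPRT': 1.86}
```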

How well are applications threaded? Does dual core + Hyperthreading on the Core i3 have any advantage over dual core on the A7?

Sunspider Javascript: Zero
Kraken: Nope
Google Octane V2: Nope
WebXPRT: Nope

Apple A7 and Haswell having equal IPC? I don't think so. The A7 is Apple's own custom ARMv8 core, not a licensed A57. The results look artificially better for two reasons:
-Sunspider's code probably fits extremely well in the A7's dedicated cache (not talking about L1 and L2)
-Mobile applications have terrible threading
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
Well, they have already done quite a bit, considering the only reason to get a dGPU now is for games. And (aw shucks) Broadwell U/Y doesn't have the lanes to drive an external GPU, and wouldn't you know it, the laptop quad cores only come as H models. It appears to be the same for Skylake too. Since dual cores are the vast majority of Intel's sales, it's easy to see the effect on nVidia: a large portion of their sales are low-to-mid-range GPUs like the 820/830/840/850 that would be paired with Intel's dual-core laptops.

That still doesn't mean the entire dGPU market.
It would mean low-end & mobile.
Unless AMD & Nvidia get their power requirements down & performance up in mobile, Intel will destroy them there in the next 5 years; top-end GPUs will still be around.
Intel isn't getting close to 780 Ti-class (i.e. flagship) GPU performance for another 10 years. No data here, it's just what I believe.
Although if most GPUs AMD & Nvidia sell are low-end, as people say, then they might have trouble surviving. They might have to lower the price of their top GPUs for people to buy them, but then would they make enough profit to survive?

Intel's elimination of PCI-E lanes isn't going to fly; people will either stick to older stuff or move to AMD.
Intel is not stupid enough not to see that.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
They're gonna need to start fabbing a lot more eDRAM modules because those EUs scale like garbage past 20 without more memory bandwidth.

Not Intel's iGPUs. Especially not at U's performance level.

Here's a benchmark of Intel's HD 4600, a much higher-performing part due to not having thermal limitations. In fact, it's little known that the standard-voltage HD 4600 is as fast as the U-series Iris 5100.

http://www.anandtech.com/show/7364/memory-scaling-on-haswell/6

DDR3-1333 to DDR3-3000 (125% increase in bandwidth):

Bioshock Infinite: +16%
Tomb Raider: +9%
Sleeping Dogs: +17%


I wouldn't call that memory bandwidth limited.

On the contrary, look at the A10-7850K:
http://www.eteknix.com/amd-kaveri-a10-7850k-overclocking-unleashing-gcns-potential/6/

Batman Arkham Origins: +20%
Bioshock Infinite: +16.5%
Metro Last Light: +13.8%
Sleeping Dogs: +13.2%
Tomb Raider: +12.1%

AMD got a similar improvement from a 28.6% increase in bandwidth (DDR3-1866 to DDR3-2400) as the HD 4600 did from a 125% increase, but sure, let's say they are the same. :hmm:
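One way to make that point concrete is to compute a rough "bandwidth sensitivity" for each chip: what fraction of the bandwidth increase actually shows up as performance. The figures below are the ones from the AnandTech and eTeknix links above:

```python
# Performance gain per unit of bandwidth gain, from the numbers in the post.
# Values near 0 mean the GPU is barely bandwidth-limited; values near 1
# mean performance tracks bandwidth almost 1:1.

def sensitivity(perf_gain_pct, bw_gain_pct):
    """Fraction of the bandwidth increase realized as performance."""
    return perf_gain_pct / bw_gain_pct

# HD 4600: DDR3-1333 -> DDR3-3000 is a 125% bandwidth increase
hd4600 = {g: sensitivity(p, 125) for g, p in
          {"Bioshock Infinite": 16, "Tomb Raider": 9, "Sleeping Dogs": 17}.items()}

# A10-7850K: DDR3-1866 -> DDR3-2400 is a 28.6% bandwidth increase
a10 = {g: sensitivity(p, 28.6) for g, p in
       {"Bioshock Infinite": 16.5, "Sleeping Dogs": 13.2, "Tomb Raider": 12.1}.items()}

print(hd4600)  # all well under 0.2: weakly bandwidth-limited
print(a10)     # all above 0.4: strongly bandwidth-limited
```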


I expect the GT4e parts to be very impressive while the GT4 parts without eDRAM will probably offer the normal 10-20% boost from doubling the EUs.
Nvidia's GTX 750 got significant improvements without more memory bandwidth, thanks to an improved architecture. There's no reason Broadwell can't do the same. Actually, according to Nvidia, they got that efficiency increase twice: once with Kepler and a second time with Maxwell.

some nonsense about quad channel on an iGPU setup
http://www.anandtech.com/show/7364/memory-scaling-on-haswell/3

TreVader, look at that above. A 125% increase in memory bandwidth results in a 0-6% gain in CPU performance. Haswell doesn't care about memory above the stock dual-channel setup. Also, the increased traces are said to be the direct reason why -E chip boards are much pricier than regular LGA1150 boards: more traces mean not only a more difficult design but also moving from a 4-layer PCB to a 6-layer PCB.

witeken said:
Only 1 year: GT4 will have something like 3TFLOPS (96EUs, 75% faster than Gen7.5)

Assuming the 96EU part has similar FLOPS/EU:

96 EUs × 16 FLOPS/EU = ~2 TFLOPS @ 1.3GHz
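The peak-FLOPS arithmetic is simple enough to make explicit. The 16 FLOPS/EU figure is the Gen7.5 assumption in the post (a later reply in this thread puts Gen8 at 20 FLOPS/EU):

```python
# Theoretical peak = EUs x FLOPS-per-EU-per-clock x clock (GHz) = GFLOPS.

def peak_gflops(eus, flops_per_eu, clock_ghz):
    return eus * flops_per_eu * clock_ghz

print(peak_gflops(96, 16, 1.3))  # ~1996.8 GFLOPS, i.e. ~2 TFLOPS
print(peak_gflops(96, 20, 1.3))  # ~2496 GFLOPS with 20 FLOPS/EU
```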

Also, Broadwell does not have GT4. Skylake does. Skylake should increase EUs to 16/32/64/128.

Oh, and the Skylake GT2's are coming in 1 year. Broadwell's launch schedule leak showed that Broadwell GT3e is coming in 1 year. Skylake is two years.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
How is Intel going to eliminate the dGPU?

...

Intel might be able to eliminate the $150 GPU market, but that's it.
Or have you guys forgotten that, unlike the lackluster yearly IPC improvements on CPUs, top-end GPUs of subsequent generations tend to have large performance deltas between them?
Well, that's the idea. If Intel can convince the OEMs that they've got the best "low end" solution, that's a lot of money for them. They put the graphics on die that the OEMs want, and price it accordingly.
 

Alatar

Member
Aug 3, 2013
167
1
81
Not what I meant. I meant that they are obviously not going to continue the Tegra line. They will still support hardware they just made.

Have you actually read any of the articles on that subject or did you just look at the headlines?

What Nvidia does not want to do is make successors to Tegra 4i, which was their lower-end chip designed to compete in the low-margin, high-volume phone market.

What Nvidia will continue to do is make successors to Tegra 4, K1, etc. These chips are positioned as high-end tablet chips that can also serve as embedded products for the car industry and so on.

The TL;DR of those articles is that Nvidia isn't interested in making more low-end chips for phones.

Besides, I don't understand the doom and gloom for dGPUs. dGPU share is going down because iGPUs are replacing the sub-$100 market. It doesn't matter what you cram inside a CPU; you'll never have the transistor or power budget to kill off high-end dGPUs for gaming, professional use, or supercomputers.

And Skylake won't change this; neither will AMD's future APUs. They'll carve out some more of the lower-end dGPU market that's already dying, but they won't be able to touch the high-end, high-margin market. The future of iGPUs in the high-end market is probably more along the lines of serving as just another unit inside your CPU, doing some compute tasks and accelerating all sorts of things thanks to HSA-like memory access.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Intel's elimination of PCI-E lanes isn't going to fly anywhere, people will either stick to older stuff or move to AMD.
Intel is not stupid enough to not see that.

I really don't think Intel will eliminate PCIe, for plenty of reasons. Sure, they might cut down the number of lanes to save power, but I wouldn't call that a problem.

Besides, as several tests and benchmarks show, discrete GPUs don't need more than a PCIe 2.0 x8 slot for optimal performance. In other words, you can achieve the same bandwidth with a simple PCIe 4.0 x2 slot. It should save some power too; PCIe 4.0 should have significant advantages in both idle and active power usage.
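The bandwidth equivalence is easy to check from the approximate per-lane throughputs (after 8b/10b encoding overhead for PCIe 2.0 and 128b/130b for 3.0/4.0):

```python
# Approximate usable throughput per PCIe lane, in GB/s, after encoding
# overhead. Used to check that a 4.0 x2 link roughly matches a 2.0 x8 link.

GB_PER_LANE = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.969}

def link_bw(gen, lanes):
    return GB_PER_LANE[gen] * lanes

print(link_bw("2.0", 8))  # 4.0 GB/s
print(link_bw("4.0", 2))  # ~3.94 GB/s, effectively the same
```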

Besides, I don't understand the doom and gloom for dGPUs. dGPU share is going down because iGPUs are replacing the sub-$100 market. It doesn't matter what you cram inside a CPU; you'll never have the transistor or power budget to kill off high-end dGPUs for gaming, professional use, or supercomputers.

And Skylake won't change this; neither will AMD's future APUs. They'll carve out some more of the lower-end dGPU market that's already dying, but they won't be able to touch the high-end, high-margin market. The future of iGPUs in the high-end market is probably more along the lines of serving as just another unit inside your CPU, doing some compute tasks and accelerating all sorts of things thanks to HSA-like memory access.

Bingo... :)
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Intel isn't getting close to 780 Ti-class (i.e. flagship) GPU performance for another 10 years. No data here, it's just what I believe.
Although if most GPUs AMD & Nvidia sell are low-end, as people say, then they might have trouble surviving. They might have to lower the price of their top GPUs for people to buy them, but then would they make enough profit to survive?

Besides, I don't understand the doom and gloom for dGPU. dGPU share is going down due to igpus replacing the sub $100 dollar market. It doesn't matter what you cram inside a CPU, you'll never have the transistor or the power budget to kill off the high end dGPUs for gaming, professional use or supercomputers.


I think you both forget the business model. Neither nVidia nor AMD can make a profit only selling and developing high-end cards. And AMD will exit a lot sooner than nVidia for the same reason.

I wouldn't be surprised if Intel commands a 75%+ graphics share in Q1 2016, for example, with the much broader range of GTe products and GT4 models. And with rising IC design costs, increasing wafer costs, and Intel's continual node lead, there's simply going to be a point where dGPUs can't make money. Then it doesn't matter if dGPUs are still faster, because there will be no new development. It's all about ROI. The only question left is: where is that point?
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
I think you both forget the business model. Neither nVidia nor AMD can make a profit only selling and developing high-end cards. And AMD will exit a lot sooner than nVidia for the same reason.

I wouldn't be surprised if Intel commands a 75%+ graphics share in Q1 2016, for example, with the much broader range of GTe products and GT4 models. And with rising IC design costs, increasing wafer costs, and Intel's continual node lead, there's simply going to be a point where dGPUs can't make money. Then it doesn't matter if dGPUs are still faster, because there will be no new development. It's all about ROI. The only question left is: where is that point?

Intel increases its GPU market share only because it is increasing its CPU market share over AMD, not because fewer dGPUs are sold. Since the market as a whole is growing, there's still money to be made for AMD and nVidia, even if Intel grabs a larger share of the combined dGPU/iGPU market.

[Image: JPR graphics market share chart]


See this post for more details.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I wouldn't make the "innovation every 2 nodes" thing a hard rule. The tech is ready to drop in when it's ready.

The rule has been consistent since 90nm, and according to Applied Materials' slide it will hold until 5nm.
 

mikk

Diamond Member
May 15, 2012
4,304
2,391
136
10 and 7nm are going to be very difficult for everyone.

Interesting. I do think they're going to replace silicon in the channel at 10nm. With what, I don't know.

EUV isn't ready; Intel has hinted they are going the triple-patterning route. 10nm isn't necessarily more difficult than 14nm for Intel, but they have to avoid the yield issues they had with 14nm.

Assuming the 96EU part has similar FLOPS/EU:

96 EUs × 16 FLOPS/EU = ~2 TFLOPS @ 1.3GHz

I would assume Gen8 and beyond doesn't have similar FLOPS/EU. It has 20 FLOPS per EU instead of 16, according to the Hot Chips GenX presentation, so 1.3GHz would give them 2.5 TFLOPS. GT4 isn't coming for Broadwell anyway.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
It appears that Intel is going to do 10 nm on standard silicon; perhaps with 450 mm wafers.

Here's the current roadmap:

14nm: 2nd generation Tri-Gate

10nm: 1st generation Tri-Gate with SiGe or Ge fin
new transistor innovation

7nm: 2nd generation SiGe/Ge - Extreme Ultraviolet Lithography (EUV) - 450mm wafers
2 new fab innovations

5nm: 1st generation III-V post-silicon transistors
new transistor innovation
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
How is Intel going to eliminate the dGPU?

They can't even keep up with AMD's APUs in graphics, except for one top-end, expensive part.
If you are pinning your hopes on a next-gen part coming out and being faster, remember that Intel isn't the only company making new, faster parts.

Intel's supposedly going to eliminate the dGPU with its manufacturing lead. In 2016, Intel will have a 3x higher transistor density. A 438mm² R9 290 (or its equivalent 20nm successor) would be only 146mm² on Intel's process. And the performance/watt gap will be even worse if AMD and Nvidia release their FinFET products a lot later, like what's happening at 20nm now.

At 10nm, Intel will also have a Gen10 microarchitecture.

Intel might be able to eliminate the $150 GPU market, but that's it.
But the low end is always the biggest part of a market, so Intel would really do a lot of damage. Could Nvidia and AMD survive on high-end parts only?

Or have you guys forgotten that, unlike the lackluster yearly IPC improvements on CPUs, top-end GPUs of subsequent generations tend to have large performance deltas between them?
What does this have to do with Intel's ability to make a big part of the dGPU market obsolete?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I wouldnt be surprised if Intel commands 75%+ graphics share in Q1 2016 for example with the much broader range of GTe products and GT4 models. And with rising IC design costs, wafer cost increase and Intels continual node lead. Its simply gonna be a point where dGPUs cant make money. Then it doesnt matter if dGPUs are still faster. Because there will be no new development. Its all about ROI. The only question left is, where is that point.

I can see Intel increasing its GPU market share in the next few years, BUT only because they are flooding the tablet market with Atoms, not because fewer dGPUs ship in x86 Windows products.
How often do people upgrade their dGPUs? More often than they change CPUs. Even if you get an iGPU PC, a lot of people will install a new dGPU later on as an upgrade.
So a $100 dGPU in 2016 will be much faster than today's Intel HD 5200 iGPU, and that dGPU will be a nice upgrade for that user.

Even in Laptops, dGPUs will always provide more performance for gaming. People buying laptops with dGPUs today will continue to buy them in the future.

So no, dGPUs are not dying ;)
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Intel increase its GPU market share only because it is increasing its CPU market share over AMD, not because less dGPUs are sold. Since the market as a whole is growing, there's still money to be made for AMD and nVidia, even if Intel grabs a larger section of the complete combined dGPU/iGPU market share.

You do know that this CAGR is counting far higher growth in the professional market, don't you? The CAGR for the consumer market is negative.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Assuming the 96EU part has similar FLOPS/EU:

96 EUs × 16 FLOPS/EU = ~2 TFLOPS @ 1.3GHz
This is what I did, I don't know if this is correct:

Haswell GT3 (40EUs) has 704 GFLOPS at 1100MHz. Broadwell will have 20% more EUs and is rumored to be 40% faster per clock. Let's assume that Skylake's Gen9 will be another 25% faster per clock.

This gives us 1478.4 GFLOPS for a 48EU Skylake. Doubling that for 96EUs gives us 2956.8 GFLOPS. If Skylake GT4 has 128EUs, it would have 3942.4 GFLOPS. At about 300 mm².

Compare that to Haswell's 352 GFLOPS at 1100MHz. That's an order of magnitude of IGP performance increase in a mere 2 years, even more if you start from Ivy Bridge's 250 GFLOPS.
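The estimate chain above can be written out explicitly; note that every scaling factor here is a rumor or assumption stated in the post, not a confirmed spec:

```python
# witeken's GFLOPS estimate, step by step. All factors are assumptions
# from the post: Broadwell has 20% more EUs and a rumored 40% per-clock
# gain; Skylake's Gen9 is assumed to add another 25% per clock.

HASWELL_GT3 = 704.0      # GFLOPS, 40 EUs @ 1100 MHz
eu_scale = 48 / 40       # Broadwell: 48 EUs, 20% more than Haswell GT3
bdw_per_clock = 1.40     # rumored Gen8 per-clock gain
skl_per_clock = 1.25     # assumed Gen9 per-clock gain

skylake_48eu = HASWELL_GT3 * eu_scale * bdw_per_clock * skl_per_clock
print(skylake_48eu)               # ~1478.4 GFLOPS for 48 EUs
print(skylake_48eu * 2)           # ~2956.8 GFLOPS for 96 EUs
print(skylake_48eu * 128 / 48)    # ~3942.4 GFLOPS for 128 EUs
```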

Reality check: the R9 290 at 28nm has 1.43x more FLOPS and a 1.46x bigger die. This means they have almost exactly the same performance/mm². That is of course ridiculous: 14nm has 2.50x higher density, and Skylake is 2 major architectural revisions past Haswell, which an Intel CPU architect claims is competitive with Kepler/HD 7000 in performance/mm², so my 4000 GFLOPS estimate must be way too pessimistic. It could also be 8 or, heck, even 24 TFLOPS.

Also, Broadwell does not have GT4. Skylake does. Skylake should increase EUs to 16/32/64/128.
We don't know how many EUs Skylake will have, so I took Skylake with 96 EUs for my earlier calculation. In the calculation above, I took 128 EUs and got 4 TFLOPS.

Oh, and the Skylake GT2's are coming in 1 year. Broadwell's launch schedule leak showed that Broadwell GT3e is coming in 1 year. Skylake is two years.
We don't know when GT4e will launch. Still, they could launch it in Q2-Q4'2015, which is around 2 years after Haswell, which is a serious improvement, even if they wait to release it.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
It [Skylake] could also be [...] 24 TFLOPS.

So 24 TFLOPS Skylake iGPUs in ~1 year is your prediction. I.e. about 13x the performance of a PS4 (1.84 TFLOPS), or 20x an XBONE (1.2 TFLOPS). And you don't see DDR3/DDR4 as being any bottleneck either, even if paired with eDRAM?
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
So 24 TFLOPS Skylake iGPUs in ~1 year is your prediction. I.e. about 13x the performance of a PS4 (1.84 TFLOPS), or 20x an XBONE (1.2 TFLOPS).
I was obviously being facetious there. I simply don't know. A 4TFLOPS IGP already seems very nice to me, which I think is realistic. But this isn't exclusive to Intel: A dGPU from Nvidia on TSMC's 10nm node could also easily achieve 20TFLOPS. This is simply what Intel can do with a multiple year manufacturing lead.

If Sony and Microsoft switch to Intel after 4 years, their consoles would make a 7-8 year leap in transistor performance.

Edit: Also note that they would probably take the cheaper 12 TFLOPS GT3 version ;). Since Skylake launches 2 years after the PS4/X1, and 14nm is 2 nodes ahead of 28nm, this makes for a 6-year or 3-node leap in just 2 years: 1.84 × 2³ = 14.72 TFLOPS.


And you don't see DDR3/DDR4 as being any bottleneck either, even if paired with eDRAM?
I don't know enough about this, but I suppose it would. I also suppose that Intel will fix any major bottlenecks, because launching a 300mm²+ silicon chip with a huge bottleneck doesn't make any sense.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
so my 4000 GFLOPS estimate must be way too pessimistic. It could also be 8 or, heck, even 24 TFLOPS.
LOL
Someone needs a reality check. Last time I checked, Intel's IGPs were good for browsing the internet and undemanding office stuff.