Skylake/Broadwell Roadmap Update @Vr-zone


chonicle

Junior Member
Feb 10, 2015
6
0
0
A rough translation:
Intel doesn't want Core M, Skylake, Cherry Trail, and Bay Trail on the market at the same time. And Lenovo, Asus, HP, Dell, etc. requested that all Skylake series (Y, U, S) launch at the same time (unlike Broadwell, where Y came first and the others followed), to keep smaller PC vendors from affecting their sales. This pushes things back one more quarter (apparently to benefit Broadwell).
 
Aug 11, 2008
10,451
642
126
I don't like the delay, but it makes sense in a way to adjust inventory and have everything come out at the same time.

The even more troubling part is the supposed artificial limiting of Cherry Trail performance. That does not make sense. Does it mean the graphics performance of low-power Broadwell and Skylake is not as good as expected, and they don't want an Atom part showing it up?

In any case, I have thought all along, and even posted earlier, that we would not see wide adoption of Skylake until 2016.
 

oobydoobydoo

Senior member
Nov 14, 2014
261
0
0
For those interested: http://tieba.baidu.com/p/3575434876?pn=1

He mentioned skylake will be pretty nice.

This just sounds like more Intel PR damage control. Cherry Trail will be "castrated" to keep it from competing with Core M? That's a laugh; Intel can use every single clock cycle it can get out of Cherry Trail, as they are worlds behind even Chinese OEMs in mobile! Cherry Trail would have to be at least three times as fast as Bay Trail to compete with Core M... does anybody think Intel is going to deliver an across-the-board 300% increase in performance? I didn't think so...


Then it goes on to say that Intel is delaying Skylake so it doesn't have Bay Trail, Cherry Trail, Skylake, and Broadwell available at the same time? Perhaps, but seeing as we've seen no credible rumors about Skylake (and we even had benchmark leaks with Broadwell), I think it would be prudent to take these "rumors" with a bucket full of salt.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
So still yield issues at the root of the delay.

That's the only thing that would make sense. Intel is normally smart about cannibalizing the prior generation to offer more features/performance/power efficiency. I really can't see why retailers would care about BW vs SKL - where SKL is a better fit, they'll just go with that. All consumers see is i3/i5/i7 and the clock rate.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Wow, by August 2015, the original SB i5 2500K/2600K will be just 4 months short of 5 years old!! Amazing longevity. On the positive side, the mainstream market does need DDR4 prices to come down. Looks like there won't be any Skylake Apple laptops until late 2015 at the earliest. This is bound to impact Skylake-E. With BW-E launching Q1 2016, I think we might not even see SK-E until Q4 2016/Q1 2017 at this pace. Very little excitement in the CPU arena in the last 5 years.
 

Omar F1

Senior member
Sep 29, 2009
491
8
76
Wow, by August 2015, the original SB i5 2500K/2600K will be just 4 months short of 5 years old!! Amazing longevity. On the positive side, the mainstream market does need DDR4 prices to come down. Looks like there won't be any Skylake Apple laptops until late 2015 at the earliest. This is bound to impact Skylake-E. With BW-E launching Q1 2016, I think we might not even see SK-E until Q4 2016/Q1 2017 at this pace. Very little excitement in the CPU arena in the last 5 years.
Don't forget that many are still holding on to their i7 920 (now X5650). The more they delay Broadwell-K/Skylake, the more we milk their old tech.


I believe it's time for Intel to rethink its Tick-Tock strategy.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,229
9,990
126
I believe it's time for Intel to rethink its Tick-Tock strategy.

Why? Because Broadwell was delayed so much? Tick-tock has been very successful for Intel.

Or do you want them to introduce new nodes, and new architectures, at the same time? Risking complete failure, instead of predictable slower performance increases?

Do you think Tick-Tock is holding them back, as far as performance increases over time?
 

SAAA

Senior member
May 14, 2014
541
126
116
Why? Because Broadwell was delayed so much? Tick-tock has been very successful for Intel.

Or do you want them to introduce new nodes, and new architectures, at the same time? Risking complete failure, instead of predictable slower performance increases?

Do you think Tick-Tock is holding them back, as far as performance increases over time?

Tick-Tock ended long ago, for both physical and commercial reasons.
First, there hasn't been a genuinely new or vastly reworked microarchitecture since Core; second, ticks no longer deliver the boost they used to, but that's a different matter: a silicon limitation.

What's holding them back now is the immensely higher cost (time, money, research) of developing both of these in a way that isn't just a 10% increase here and 5% there.

To name one example, think of HSA and heterogeneous computation: these are the kinds of innovations that can still affect the end user noticeably, as SSDs did and as stacked/faster memories may.

I don't know; maybe III-V materials will emerge at 7nm and we will jump straight to 6-8GHz clocks on all machines, or something like that. Or some great programmer will finally find a way to parallelize most algorithms, and many cores will finally be useful.

There's still room for improvement, but as I said, it's walled off by adversity. To use the usual tree analogy: the low-hanging fruit is gone now, and so is much of the high fruit. Yet there is a sequoia close by with many succulent fruits, just far higher than ever.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Don't forget that many are still holding on to their i7 920 (now X5650). The more they delay Broadwell-K/Skylake, the more we milk their old tech.

I don't mind. This leaves more $ for monitor, SSD, and GPU upgrades. It also means that after 4-5 years of i5/i7 ownership, a gamer can resell the parts for something; try doing that back in the day with a P2-P4/Core 2 Duo/Quad or Athlon XP/A64/FX and you'd get almost nothing for those parts after 4-5 years. It basically means a $350 Skylake i7 will probably last another 5 years and could be resold for 40-50% of its value. It's never been cheaper to own a PC gaming rig when you look at the cost of ownership from that point of view. Pretty much an overclocked Skylake i7 is going to last all the way until PS5/XB2 in late 2019-2020. :thumbsup:
 

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
Windows 10 apparently has an Intel driver update with Skylake support.


Code:
SKL HW
iSKLULTGT1    = "Intel Skylake HD Graphics ULT GT1"    
iSKLULTGT15    = "Intel Skylake HD Graphics ULT GT1.5"    
iSKLULTGT2    = "Intel Skylake HD Graphics ULT GT2"
iSKLULXGT1    = "Intel Skylake HD Graphics ULX GT1"
iSKLULXGT15    = "Intel Skylake HD Graphics ULX GT1.5"
iSKLULXGT2    = "Intel Skylake HD Graphics ULX GT2"
iSKLDTGT2    = "Intel Skylake HD Graphics DT GT2"
iSKLULTGT3    = "Intel Skylake HD Graphics ULT GT3"
iSKLULTGT2f     = "Intel Skylake HD Graphics ULT GT2f"
iSKLDTGT15    = "Intel Skylake HD Graphics DT GT1.5"
iSKLDTGT1    = "Intel Skylake HD Graphics DT GT1"
iSKLHaloGT4    = "Intel Skylake HD Graphics Halo GT4"
iSKLDTGT4    = "Intel Skylake HD Graphics DT GT4"
iSKLHaloGT2    = "Intel Skylake HD Graphics Halo GT2"
iSKLHaloGT3    = "Intel Skylake HD Graphics Halo GT3"
iSKLHaloGT1    = "Intel Skylake HD Graphics Halo GT1"
I've disassembled this driver and found some interesting things regarding CannonLake (CNL).


[image: table of CannonLake (CNL) GPU configurations from the disassembled driver]



As we know, each subslice has contained 8 EUs since Gen8. So it's possible that 5x8, 13x8, etc. refer to EU configurations for CannonLake.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
That's interesting. At this moment, GT2 has 3x8 = 1 slice

But here:

3x8 = 24 EUs = GT1 (1 slice?)
5x8 = 40 EUs = GT2 (1.7 slices?)
7x8 = 56 EUs = GT2.5 (2.3 slices?)
9x8 = 72 EUs = GT3 (3 slices?)
13x8 = 104 EUs = GT4 (4.3 slices?)

That surely doesn't make sense, but it's a good improvement, about what you'd expect from a die shrink, although this should have been SKL, in my opinion, considering what's happening with graphics in smartphones and tablets.
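The arithmetic above can be reproduced directly, assuming 8 EUs per subslice (Gen8+) and 24 EUs (3 subslices) per full Gen8-style slice. The table below is just the leaked subslice counts; the "slices" figure is the same speculative division as in the post:

```python
# Leaked CNL configs as subslice counts (from the disassembled driver strings)
CONFIGS = {"GT1": 3, "GT2": 5, "GT2.5": 7, "GT3": 9, "GT4": 13}

for name, subslices in CONFIGS.items():
    eus = subslices * 8        # 8 EUs per subslice since Gen8
    slices = eus / 24          # fractional "Gen8 slices" (3 subslices each)
    print(f"{name}: {subslices}x8 = {eus} EUs (~{slices:.1f} slices)")
```

Only GT1 and GT3 land on whole slice counts, which is what makes the fractional numbers look odd.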
 

imported_ats

Senior member
Mar 21, 2008
422
63
86
That's interesting. At this moment, GT2 has 3x8 = 1 slice

But here:

3x8 = 24 EUs = GT1 (1 slice?)
5x8 = 40 EUs = GT2 (1.7 slices?)
7x8 = 56 EUs = GT2.5 (2.3 slices?)
9x8 = 72 EUs = GT3 (3 slices?)
13x8 = 104 EUs = GT4 (4.3 slices?)

That surely doesn't make sense, but it's a good improvement, about what you'd expect from a die shrink, although this should have been SKL, in my opinion, considering what's happening with graphics in smartphones and tablets.

There could also be a bit of a difference in design. For instance, instead of basically symmetrical slices, it could have a base block that contains 3x8 while the additional slices are 2x8. The reason could be as simple as optimizing layout geometry: 2x8 might fit the vertical area, and they might have needed half the vertical area for the auxiliary non-computational portion of the GPU and filled the rest with another 8 EUs:

Code:
2EU   2EU-2EU
2EU   2EU-2EU
2EU   2EU-2EU
2EU   2EU-2EU
2EU   2EU-B_F
2EU   2EU-A_U
2EU   2EU-S_N
2EU   2EU-E_C
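If that base-plus-extras conjecture holds, every leaked subslice count should equal 3 plus a multiple of 2. A quick sanity check (the function name is mine, purely illustrative, not from the driver):

```python
def subslices(extra_slices):
    # Hypothetical layout per the conjecture above: one 3-subslice base
    # block, plus 2 subslices for each additional slice.
    return 3 + 2 * extra_slices

# The leaked configs are 3, 5, 7, 9, and 13 subslices.
leaked = {3, 5, 7, 9, 13}
fits = {subslices(k) for k in range(6)}  # 0..5 extra slices -> {3,5,7,9,11,13}
print(leaked <= fits)  # -> True: every leaked count matches the pattern
```

The pattern also predicts an unobserved 11x8 step between GT3 and GT4, so it fits but is far from proven.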
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
There could also be a bit of a difference in design. For instance, instead of basically symmetrical slices, it could have a base block that contains 3x8 while the additional slices are 2x8.
Is it just me, or does it look like you've given away some architectural tidbits ahead of Intel's disclosure? Well, I guess it doesn't matter, since Intel itself has disclosed which CNL configuration we'll get.

The reason could be as simple as optimizing layout geometry: 2x8 might fit the vertical area, and they might have needed half the vertical area for the auxiliary non-computational portion of the GPU and filled the rest with another 8 EUs.
I don't really know what to make of all those "might"s and "could"s...
 

imported_ats

Senior member
Mar 21, 2008
422
63
86
Is it just me, or does it look like you've given away some architectural tidbits ahead of Intel's disclosure? Well, I guess it doesn't matter, since Intel itself has disclosed which CNL configuration we'll get.

I have absolutely no inside knowledge of anything GPU-related at Intel. Any knowledge I did have is so old and so different from the current direction as to be immaterial. I'm just doing pattern fitting and pattern recognition based on what little info was posted here.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I have absolutely no inside knowledge of anything GPU-related at Intel. Any knowledge I did have is so old and so different from the current direction as to be immaterial. I'm just doing pattern fitting and pattern recognition based on what little info was posted here.
The delta between GT2-3-4 is 4x8, that I know.
 

meperry64

Junior Member
Feb 19, 2015
1
0
0
Windows 10 apparently has an Intel driver update with Skylake support.


Code:
SKL HW
iSKLULTGT1    = "Intel Skylake HD Graphics ULT GT1"    
iSKLULTGT15    = "Intel Skylake HD Graphics ULT GT1.5"    
iSKLULTGT2    = "Intel Skylake HD Graphics ULT GT2"
iSKLULXGT1    = "Intel Skylake HD Graphics ULX GT1"
iSKLULXGT15    = "Intel Skylake HD Graphics ULX GT1.5"
iSKLULXGT2    = "Intel Skylake HD Graphics ULX GT2"
iSKLDTGT2    = "Intel Skylake HD Graphics DT GT2"
iSKLULTGT3    = "Intel Skylake HD Graphics ULT GT3"
iSKLULTGT2f     = "Intel Skylake HD Graphics ULT GT2f"
iSKLDTGT15    = "Intel Skylake HD Graphics DT GT1.5"
iSKLDTGT1    = "Intel Skylake HD Graphics DT GT1"
iSKLHaloGT4    = "Intel Skylake HD Graphics Halo GT4"
iSKLDTGT4    = "Intel Skylake HD Graphics DT GT4"
iSKLHaloGT2    = "Intel Skylake HD Graphics Halo GT2"
iSKLHaloGT3    = "Intel Skylake HD Graphics Halo GT3"
iSKLHaloGT1    = "Intel Skylake HD Graphics Halo GT1"
I've disassembled this driver and found some interesting things regarding CannonLake (CNL).


[image: table of CannonLake (CNL) GPU configurations from the disassembled driver]



As we know, each subslice has contained 8 EUs since Gen8. So it's possible that 5x8, 13x8, etc. refer to EU configurations for CannonLake.

Thanks for the info. Where can we get this driver?
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91

podspi

Golden Member
Jan 11, 2011
1,965
71
91
Has wccftech ever actually managed to get a scoop first? I've only seen speculative stuff that's way off the mark, and me-too articles.
 
Aug 11, 2008
10,451
642
126
Unfortunately, I consider the lack of performance previews to mean that it is either further delayed or not living up to initial expectations, not that some big advance is in the works. I hope I am wrong, but with all the troubles of the Broadwell launch, I have lowered my expectations for Skylake to just another incremental advance, maybe finally becoming what Broadwell was supposed to be. As for that article, the concept does sound interesting, but it is total speculation. I do feel Intel needs some major advance like this, though. I think they are becoming too reliant on die shrinks and refinements to Core to stay in the lead. Die shrinks are becoming too difficult and expensive, and are bringing smaller advantages along with temperature issues.
 
Mar 10, 2006
11,715
2,012
126

ROFL!

Skylake should be good, but I expect it to be a solid evolution over Haswell. If Intel has a great microarchitecture (which it does with Haswell), it's not going to throw it all away and do something "clean-sheet" with MorphCore or whatever. They're going to look at the huge list of things they wanted to implement in Haswell but didn't have time to, and implement them, AND/OR they're going to figure out where Haswell was weak and improve those areas.

That's how things in the real world work.
 
Last edited:

Fjodor2001

Diamond Member
Feb 6, 2010
3,711
182
106
The SandyBridge/Haswell uArch is good, but the problem is that it's not getting much better IPC for each new generation anymore.

At some point Intel will have to create a completely new uArch to advance performance in a meaningful way. I.e. as Intel did when going from Netburst->Core, or as what AMD is expected to do when going from Bulldozer->Zen.
 
Last edited:

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
The SandyBridge/Haswell uArch is good, but the problem is that it's not getting much better IPC with each new generation anymore.

At some point Intel will have to create a completely new uArch to advance performance in a meaningful way. I.e. as Intel did when going from Netburst->Core, or as what AMD is expected to do when going from Bulldozer->Zen.

Netburst->Core was moving back to an evolution of the Pentium Pro, not a clean sheet.
 
Mar 10, 2006
11,715
2,012
126
The SandyBridge/Haswell uArch is good, but the problem is that it's not getting much better IPC with each new generation anymore.

At some point Intel will have to create a completely new uArch to advance performance in a meaningful way. I.e. as Intel did when going from Netburst->Core, or as what AMD is expected to do when going from Bulldozer->Zen.

The reason you are seeing relatively small advancement in single-threaded perf/clock is that Intel has spent the last few generations re-targeting the power profile of its chips. Also, in a design like Haswell, you can get a significant boost in performance if you utilize new instructions such as AVX2.

I think with Core at a sufficiently low power level, future improvements can target raw performance increases.