Skylake/Broadwell Roadmap Update @Vr-zone


ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I'm confused about the video decoding/encoding options (and pretty much always have been). Does any video player utilize them, or are they only taken advantage of by WMP? In that case I couldn't care less about the hardware support.

H.265? It's any.
 

coercitiv

Diamond Member
Jan 24, 2014
7,374
17,476
136
Maybe because browsing battery life is mainly determined by screen power consumption (size, resolution and screen technology).
I know we shouldn't compare two products and extrapolate to the CPU architectures and process nodes being used, but as more and more reviews come in for similarly equipped units (size, components, manufacturers, etc.)... they will become statistically significant. They will paint the bigger picture.

Don't get me wrong, I'd buy BW-U over HW-U any time of day, but this issue still deserves a better explanation.
 

dahorns

Senior member
Sep 13, 2013
550
83
91
I know we shouldn't compare two products and extrapolate to the CPU architectures and process nodes being used, but as more and more reviews come in for similarly equipped units (size, components, manufacturers, etc.)... they will become statistically significant. They will paint the bigger picture.

Don't get me wrong, I'd buy BW-U over HW-U any time of day, but this issue still deserves a better explanation.

I think it is a pretty good explanation. We already know that the SoC is a small portion of the system power consumption. You could halve the power from the SoC and your overall reduction would be a fraction of that.
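
To put rough numbers on that (a back-of-the-envelope sketch; every figure below is made up purely for illustration):

Code:
#include <cstdio>

int main() {
    // Hypothetical ultrabook power budget under light load, in watts.
    double soc     = 2.0;  // CPU/SoC package
    double display = 4.0;  // panel + backlight, often the biggest consumer
    double rest    = 2.0;  // memory, storage, WLAN, VRM losses

    double before = soc + display + rest;        // 8.0 W total
    double after  = 0.5 * soc + display + rest;  // halve only the SoC
    double saved  = 1.0 - after / before;

    printf("Platform: %.1f W -> %.1f W (%.0f%% saved)\n",
           before, after, saved * 100.0);
    // A 50% SoC cut shows up as only ~13% at the platform level here.
    return 0;
}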

From a thermal standpoint, we've consistently seen the Broadwell chip maintaining 1.5x to 2x higher clocks under load. Benchmarks are mainly determined by max turbo, and in that regard Haswell and Broadwell are the same. I think the question should be: why is that? Is it a process limitation? Or can it be addressed in Skylake?
 
Aug 11, 2008
10,451
642
126
Yea, those results in the Notebookcheck article did not make sense to me. Broadwell had much higher base clocks and supposedly maintains turbo better, but the benchmarks showed only a modest improvement. Maybe in a sustained load, Broadwell would have shown a bigger improvement. Even if it did, though, the question is whether that kind of usage is common for an ultrabook.
 

CakeMonster

Golden Member
Nov 22, 2012
1,630
810
136
H.265? It's any.

So if I get a brand spanking new Skylake when it's out and load up a third-party video player like MPC in Windows to play a video encoded with H.265, will the Skylake CPU use its new hardware capabilities without the need for any configuration?

Or are there any prerequisites? Like the player needing to be patched? Or a certain Windows version? Or a certain graphics driver?
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
From a thermal standpoint, we've consistently seen the Broadwell chip maintaining 1.5x to 2x higher clocks under load. Benchmarks are mainly determined by max turbo, and in that regard Haswell and Broadwell are the same. I think the question should be: why is that? Is it a process limitation? Or can it be addressed in Skylake?
A few conspiracy theories.

Intel might have made some changes to BDW to increase yields. Even if lowering turbo gets you just 1% more sellable SKUs, that's a good deal.
The other possibility is that Intel wants to make BDW look bad performance-wise so it can deliver on those "ecstatic" performance improvements claimed for Skylake :p.
 

mikk

Diamond Member
May 15, 2012
4,304
2,391
136
So if I get a brand spanking new Skylake when it's out and load up a third-party video player like MPC in Windows to play a video encoded with H.265, will the Skylake CPU use its new hardware capabilities without the need for any configuration?

Or are there any prerequisites? Like the player needing to be patched? Or a certain Windows version? Or a certain graphics driver?


No, it requires app support for GPU-accelerated video decoding or encoding. It also needs graphics driver support, of course. Hybrid HEVC decoding on Haswell is supported by MPC or Zoom Player via DXVA2; whether that works on BDW too, no idea. All the testers failed to check this out. Skylake is a different story, since it will probably support HEVC via Quick Sync, and that won't work without player updates.
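
If anyone wants to poke at this themselves, here's a minimal sketch (untested; assumes Windows 8.1+ and the Media Foundation headers) that simply asks whether any hardware HEVC decoder MFT is registered, which is the same plumbing Media Foundation players go through:

Code:
// cl /EHsc hevc_check.cpp mfplat.lib mfuuid.lib ole32.lib
#include <windows.h>
#include <mfapi.h>
#include <mftransform.h>
#include <cstdio>

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    // Ask for decoders that accept HEVC input, hardware implementations only.
    MFT_REGISTER_TYPE_INFO input = { MFMediaType_Video, MFVideoFormat_HEVC };
    IMFActivate** activators = nullptr;
    UINT32 count = 0;

    HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_DECODER, MFT_ENUM_FLAG_HARDWARE,
                           &input, nullptr, &activators, &count);
    if (SUCCEEDED(hr)) {
        printf("Hardware HEVC decoder MFTs registered: %u\n", count);
        for (UINT32 i = 0; i < count; ++i) activators[i]->Release();
        CoTaskMemFree(activators);
    }

    MFShutdown();
    CoUninitialize();
    return 0;
}

(Hybrid, partially GPU-assisted decode may be exposed at the driver/DXVA level rather than as an MFT, so a count of zero here isn't conclusive.)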
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
So if I get a brand spanking new Skylake when it's out and load up a third-party video player like MPC in Windows to play a video encoded with H.265, will the Skylake CPU use its new hardware capabilities without the need for any configuration?

Or are there any prerequisites? Like the player needing to be patched? Or a certain Windows version? Or a certain graphics driver?

Assuming your codecs support it, yes.

Haswell and Broadwell also got the support:
http://techreport.com/news/27677/new-intel-igp-drivers-add-h-265-vp9-hardware-decode-support
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
If I'm reading this correctly, there won't be any non-Atom mobile quad-cores from this point forward, correct? That's a shame, and it has me really worried for the future. :/
 
Aug 11, 2008
10,451
642
126
A few conspiracy theories.

Intel might have made some changes to BDW to increase yields. Even if lowering turbo gets you just 1% more sellable SKUs, that's a good deal.
The other possibility is that Intel wants to make BDW look bad performance-wise so it can deliver on those "ecstatic" performance improvements claimed for Skylake :p.

I support Intel overall, but you are really reaching with that second theory. With all the claims they made about Broadwell and 14nm being such great improvements, and being in a knock-down, drag-out fight with ARM, do you *really* think they would neuter their own product?

It might be conceivable in desktop or servers where they are dominant, but to intentionally cripple a product that is already late in the mobile field is insanity.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
It might be conceivable in desktop or servers where they are dominant, but to intentionally cripple a product that is already late in the mobile field is insanity.
I think BDW-U still has quite a bit of frequency headroom that could have been used for higher boost clocks. Maybe Intel just thought it was unnecessary, since it makes performance/watt worse. I wasn't talking about BDW-Y.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
Dual A9 to quad A57 is an awesome improvement; however, it needs to be taken in context. Power consumption has jumped through the roof (it's interesting to note that the new A57 Exynos shows basically no efficiency improvements in AT's tests despite a node shrink). We have also gone from 45nm to 14nm.

I agree with your argument and context in general, but we have to remember the node shrink happens between archs, and secondly the SPECint/fp suite is not tailored to ARM but to x86; it comes across as unjustified and looks heavily x86-biased to me. But whatever, good relations keep the business going and give us free articles.

AT's power tests are also often useless Intel-pleaser stuff in my world. Go e.g. 2 years back to when Anand busted the x86 power myth. It was IMO pathetic and in hindsight a shame and not worthy. I have a CT+ tablet in the house, and if anything it just reinforces the "myth".
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
I also have a CT tablet. Without a doubt, I haven't been as impressed with anything out of Intel since Conroe. Battery life that good on a full Windows machine?

God is it slow, though. But try running Windows on any ARM processor of the same vintage - an exercise in masochism.
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
I also have a CT tablet. Without a doubt, I haven't been as impressed with anything out of Intel since Conroe. Battery life that good on a full Windows machine?

God is it slow, though. But try running Windows on any ARM processor of the same vintage - an exercise in masochism.

Try putting in an SSD. You'd be amazed at what you can resurrect with one of those.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
You are correct. Interesting find, although I do not understand what the other information in your link means, and I can't find comparisons with BDW-Y (power consumption is stated as 4.5W).

I never understand any of these numbers without having anything to compare them to.
http://www.anandtech.com/show/8355/intel-broadwell-architecture-preview/3

You can see that one sub-slice of Gen8 has 64 ALUs. But BDW-U/Y are GT2, which means they have one full slice, which has 3 sub-slices. So you multiply by 3 to get the number of GT2 ALUs in BDW (192). But the link above says 384 SPs, which is twice that number.

What this means is that Skylake will have two times the theoretical peak FLOPS performance (for the same slice configuration), which is quite massive.

For comparison: one Maxwell SMM has 128 ALUs, so if the performance per FLOP were the same (which we don't know, but it's what we have to work with), then Skylake-Y has the equivalent of 3 SMMs, or 768 GFLOPS. Tegra X1, which will probably launch not too much earlier, only has 2 SMMs.
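
Making that arithmetic explicit (a sketch; the 1GHz clock and the 2 FLOPs/ALU/clock from FMA are my own illustrative assumptions):

Code:
#include <cstdio>

// Peak single-precision GFLOPS = ALUs x 2 FLOPs/clock (FMA) x clock in GHz.
static double peak_gflops(int alus, double ghz) { return alus * 2.0 * ghz; }

int main() {
    printf("BDW GT2, 3 sub-slices x 64: %4.0f GFLOPS\n", peak_gflops(192, 1.0));
    printf("SKL GT2 per the leak (384): %4.0f GFLOPS\n", peak_gflops(384, 1.0));
    printf("Tegra X1, 2 SMMs x 128:     %4.0f GFLOPS\n", peak_gflops(256, 1.0));
    return 0;
}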

Interesting if this were true... (And the 14nm process should help significantly vs 20nm (and even 20FF) SoCs like X1.)

http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview/2

Would that be what was meant when people like Kirk hyped up SKL?
Kirk Skaugen said:
So the question is: are we done? And today I'm excited to announce for the first time our next-generation codename for our next-generation Core microarchitecture, codename Skylake. This is our next-generation Core microarchitecture on 14nm, and you should expect a significant [emphasis not mine] increase in performance, in battery life, in power efficiency, all on this new product. I'm excited-- in fact, I'm ecstatic on the health of Skylake. It is on track for high performance desktops, notebooks, fanless 2-in-1s with production in the second half of 2015 (and launch).

Also: "very high performance" and "stunning level of performance".
 
Last edited:

imported_ats

Senior member
Mar 21, 2008
422
64
86
I agree with your argument and context in general, but we have to remember the node shrink happens between archs, and secondly the SPECint/fp suite is not tailored to ARM but to x86; it comes across as unjustified and looks heavily x86-biased to me. But whatever, good relations keep the business going and give us free articles.

SPEC CPUxxxx isn't tailored to any specific CPU. It is a suite of benchmarks formed by a consortium of many different companies, many of which (especially for CPU2K) had their own architectures at the time. SPEC CPU is pretty much the most unbiased suite of CPU benchmarks available.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Apparently Skylake will indeed launch in the middle of the year. Shouldn't be a surprise by now, but now it's official I guess.

Intel confirms Skylake for the middle of the year

Even more good Skylake news from Fudzilla: Skylake uses 60 percent lower power

The data regards Skylake-Y, the 5W TDP variant of the core; Chipzilla compared the early silicon with Broadwell-Y, the 5W TDP version of the Core M 5Y10 CPU.

According to Intel, the Skylake mobile platform has 60 percent lower SoC power and can sustain up to 35 percent longer 1080p HD playback than the Core M 5Y10. If this is true, it is a great achievement. Intel used the same battery capacity and system configuration on the Skylake-Y and 5th Generation Core Y processor test beds. You can expect the same 60 percent lower SoC power and up to 35 percent longer HD playback from the Skylake-U versions versus the 5th generation Broadwell-U processor.

I'm keeping my expectations in check, but I'm really curious as to what Intel has done to Skylake.
 
Aug 11, 2008
10,451
642
126
Sounds eerily similar to the hype for Broadwell, and we all know how that turned out.
I am at best cautiously optimistic for Skylake. Actually, there are only two things that I personally want from Skylake. One would be a mainstream IGP in mobile that allows gaming at 1080p on medium to high settings. And I don't mean some super expensive Iris Pro-type SKU either, but something that can give GTX 850M-level performance at a lower price. The other would be a big improvement in desktop CPU performance. I have never expected the former, and I am steadily becoming less optimistic about the latter as more and more of the hype has shifted toward battery life, thin form factors, and fanless design.

I know it goes against what Intel is trying to do, but I don't really care if some low-performance $1300 ultrabook is 0.5mm thinner or able to go fanless by downclocking everything to lower the TDP at the expense of performance.

I also think it will be the end of the year before we see good availability.
 

mikk

Diamond Member
May 15, 2012
4,304
2,391
136


I'm not sure about it.


Intel(R) Skylake Desktop Graphics Controller (184SP 23C 950MHz, 1.5GB) (OpenCL)
http://www.sisoftware.eu/rank2011d/...efdceddce5dce5c3b18cbc9aff9aa797b1c2ffc7&l=en


However, my link is for OpenCL whilst your link is OpenGL. Also, my link says 23C, which could mean this is a partially disabled ES. We should wait for more info.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
the Skylake mobile platform has 60 percent lower SoC power

Note that the article said "SoC power", so the power savings could mostly come from the new chipset accompanying Skylake and not the CPU.

Nevertheless, it's impressive if true. However, these claims are often only valid under certain conditions or in specific scenarios. We'll have to see if it's actually 60% lower power in general.
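
As a sanity check, the two claims can be combined (a sketch; it assumes runtime scales inversely with average platform power and that only the SoC improved, both of which are simplifications):

Code:
#include <cstdio>

int main() {
    // Claims: SoC power down 60%, 1080p playback 35% longer, same battery.
    double playback_gain  = 1.35;
    double platform_ratio = 1.0 / playback_gain;   // ~74% of the old power
    double platform_drop  = 1.0 - platform_ratio;  // ~26% lower overall

    // If the whole drop came from a 60% SoC reduction, the SoC's share
    // of platform power during playback must have been drop / 0.60.
    double implied_soc_share = platform_drop / 0.60;
    printf("Implied SoC share of playback power: %.0f%%\n",
           implied_soc_share * 100.0);  // ~43%
    return 0;
}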