Intel Skylake / Kaby Lake


Fjodor2001

Diamond Member
Feb 6, 2010
3,792
259
126
I am sure you do. You just don't want to, because it defeats your entire argument.

No, I really don't. And I think you're intentionally avoiding answering the question, since you don't want to admit that Intel had to raise the TDP to get a performance improvement, despite moving to 14 nm.

Compare TDP of top end K SKUs and it's obvious.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
No, I really don't. And I think you're intentionally avoiding answering the question, since you don't want to admit that Intel had to raise the TDP to get a performance improvement, despite moving to 14 nm.

Compare TDP of top end K SKUs and it's obvious.

That argument could be made about Haswell as well, despite it being on the same 22nm as Ivy Bridge.

Why do you need to compare with a K model? Even a non-K 65W part is quite a bit faster than a 3770K.

You base your entire argument on 2 K SKUs, while ignoring the fact that it's AVX that needs the power. Not to mention completely ignoring Haswell to make some wrong 14nm claims.

[benchmark charts: 76279.png, 76297.png, 76300.png]


Now you have to pick between two evils (for you). Either you have to admit that Skylake is a massive improvement over, for example, Ivy Bridge, or you have to admit it's not using more power in the same regular tasks.

And that's not even mentioning your complete selective blindness to all the mobile and non-K desktop parts that also destroy any of your silly 14nm claims.

I own IB, HW and SKL, and I am able to see the power consumption of each in the same tasks.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,792
259
126
My statement is that Intel had to raise TDP to get noticeable performance improvements on top end desktop SKUs. Hence the reason for comparing top end SKUs.

Previously we used to see both performance improvements and lower TDP when moving to a new node, e.g. SB->IB, where we went from 95 to 77 W. Not any longer.

Point is that 14 nm is not so fantastic for top end desktop SKUs in this regard.
 

videogames101

Diamond Member
Aug 24, 2005
6,777
19
81
My statement is that Intel had to raise TDP to get noticeable performance improvements on top end desktop SKUs. Hence the reason for comparing top end SKUs.

Previously we used to see both performance improvements and lower TDP when moving to a new node, e.g. SB->IB, where we went from 95 to 77 W. Not any longer.

Point is that 14 nm is not so fantastic for top end desktop SKUs in this regard.

Intel's 14nm is likely optimized for operation around 0.7 to 0.8 volts. High-end desktop SKUs run at what, 1.3 V? That's a crazy drive voltage. It's a huge jump from 0.8 V, and you pay exponentially in power while getting diminishing gains in cell delay. 14nm is not and was never meant to be "fantastic" for high-end desktop.

That being said, TDP is a very bad measurement for this kind of thing. You need to define a benchmark and measure power consumption, rather than relying on Intel's spec. I really don't think you can use TDP as a proxy for core efficiency as you're trying to do.
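For a rough sense of why that voltage jump is so expensive, here's a back-of-the-envelope sketch using the classic dynamic-power relation P ≈ C·V²·f. The capacitance and clock numbers are made up for illustration; only the scaling matters.

```python
# Illustrative only: classic dynamic-power model P = C * V^2 * f.
# The capacitance and the absolute numbers are invented; only the scaling matters.

def dynamic_power(voltage, freq_ghz, capacitance=1.0):
    """Relative dynamic power for a given supply voltage and clock."""
    return capacitance * voltage ** 2 * freq_ghz

low = dynamic_power(voltage=0.8, freq_ghz=2.0)   # near the assumed process sweet spot
high = dynamic_power(voltage=1.3, freq_ghz=4.2)  # typical high-end desktop K SKU clocks

print(f"relative power at 0.8 V / 2.0 GHz: {low:.2f}")
print(f"relative power at 1.3 V / 4.2 GHz: {high:.2f}")
print(f"ratio: {high / low:.1f}x")  # ~5.5x the dynamic power for ~2.1x the clock
```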
 

tential

Diamond Member
May 13, 2008
7,355
642
121
The reason I chose the i7 over the i5 is not just benches; it's that the benches don't tell the whole story. They tell the story of the BEST CASE. But say something is running in the background: the i7 benefits greatly from those extra threads. And considering how long we're all holding onto our processors, that extra cost spread over 5 years is pathetically small.

To whoever posted about this: it's not that we're picking on you. Honestly, I doubt anyone cares or even remembers who you are (I already don't). We just want to talk about processors; it's a forum, that's what you do. Pretty sure we could all talk about Skylake 'til we're blue in the face and only be semi-bored with it.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
My statement is that Intel had to raise TDP to get noticeable performance improvements on top end desktop SKUs. Hence the reason for comparing top end SKUs.

Previously we used to see both performance improvements and lower TDP when moving to a new node, e.g. SB->IB, where we went from 95 to 77 W. Not any longer.

Point is that 14 nm is not so fantastic for top end desktop SKUs in this regard.

OK, but at the end of the day, the 6700K is still a great processor through and through. It's an improvement over the previous option. When AMD comes out with Zen, we'll see what both companies do. I pray Zen is good, because if it isn't.... there will be no options left for us, and that will not be a fun time.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Huh? I think you have me confused with someone else. I haven't said anything pro or con about i5 vs i7 SKL. I've hinted that I want an overclockable i3 in an ITX board, or a 6400T.

Meaning, you won't purchase it and will deem it too expensive! Larry, take the jump; the deals you posted are there, and you found them for a reason. Join us.....
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,792
259
126
Intel's 14nm is likely optimized for operation around 0.7 to 0.8 volts. High-end desktop SKUs run at what, 1.3 V? That's a crazy drive voltage. It's a huge jump from 0.8 V, and you pay exponentially in power while getting diminishing gains in cell delay. 14nm is not and was never meant to be "fantastic" for high-end desktop.

That being said, TDP is a very bad measurement for this kind of thing. You need to define a benchmark and measure power consumption, rather than relying on Intel's spec. I really don't think you can use TDP as a proxy for core efficiency as you're trying to do.

I thought Intel used to have different variants of their process tech for each node, optimized for different aspects. E.g. one variant optimized for low power, and another for high frequency. Don't they have that for 14 nm too?

But maybe the main focus for 14 nm has been low power, so that affects the other variants too anyway? I'm not sure how this works, but maybe they design one main process tech on a node, and then tweak it somewhat to create variants that work on higher frequency or lower power compared to the main variant?

I bet someone on this forum knows better how this works and hopefully can explain it.
 

C.Cardinale

Junior Member
Jul 27, 2015
6
0
0
My statement is that Intel had to raise TDP to get noticeable performance improvements on top end desktop SKUs. ...

I don't think so; the enlarged iGPU alone required significantly more thermal design power budget. Thanks to the 14nm process, the TDP rise is a tiny 3 watts, from 88 W to 91 W.
Before there were 20 EUs, now there are 24. That's 20% more graphics units, covered by about 3% more TDP. Intel's 14nm process does an extremely good job.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
I don't think so; the enlarged iGPU alone required significantly more thermal design power budget. Thanks to the 14nm process, the TDP rise is a tiny 3 watts, from 88 W to 91 W.
Before there were 20 EUs, now there are 24. That's 20% more graphics units, covered by about 3% more TDP. Intel's 14nm process does an extremely good job.

You get 20% more graphics units but you also have ~50% power reduction going from 22nm FF to 14nm FF. That means you could have 20% more graphics units PLUS a reduction in power consumption.

But TDP is not power/energy consumption, so the rise in TDP may be due to the power density characteristics of the smaller die.
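Back-of-the-envelope math for that first point, treating the ~50% figure above as a rough assumption and pretending iGPU power scales linearly with EU count:

```python
# Rough arithmetic only: assumes iGPU power scales linearly with EU count
# and that 14nm FF cuts per-EU power by the ~50% quoted above.

eu_scaling = 24 / 20          # 20% more execution units (20 -> 24 EUs)
process_power_factor = 0.5    # assumed ~50% power reduction, 22nm FF -> 14nm FF

relative_igpu_power = eu_scaling * process_power_factor
print(f"relative iGPU power vs. the 22nm part: {relative_igpu_power:.2f}")
# ~0.60: in this simple model the bigger iGPU could still draw ~40% less power.
```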
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,792
259
126
I don't think so; the enlarged iGPU alone required significantly more thermal design power budget. Thanks to the 14nm process, the TDP rise is a tiny 3 watts, from 88 W to 91 W.
Before there were 20 EUs, now there are 24. That's 20% more graphics units, covered by about 3% more TDP. Intel's 14nm process does an extremely good job.

That's just the consequence of having an iGPU; both the CPU and the iGPU are expected to increase performance from one generation to the next.

This is no different from SB->IB (32->22 nm). The iGPU increased in size and performance then too. However, the TDP went down from 95 to 77 W rather than increasing as with Skylake.
 

mikk

Diamond Member
May 15, 2012
4,141
2,154
136



A golem.de tester said this is Windows 10 exclusive. The question is how many applications are able to make use of this. Cinebench single-core, for example, isn't any faster.


http://www.tomshardware.com/reviews/skylake-intel-core-i7-6700k-core-i5-6600k,4252-5.html


I wonder if this is another example of the inverse HT, because the difference compared to Windows 8.1 is massive.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
That's just the consequence of having an iGPU; both the CPU and the iGPU are expected to increase performance from one generation to the next.

This is no different from SB->IB (32->22 nm). The iGPU increased in size and performance then too. However, the TDP went down from 95 to 77 W rather than increasing as with Skylake.
If you look at 22->14, you see that the TDP went down from 84 W to 65 W even though Broadwell has GT3e. Now Skylake adds some features to increase performance, so the TDP goes up as well.

I don't understand why you make such a big deal about that tiny difference of 3W. TDP is meaningless. What you want to know is performance per watt or instructions per joule.

So Skylake can have a 3% higher TDP (or, to be more accurate, power consumption) as long as it's appreciably faster than that.
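A trivial sketch of that metric, with the 88 W and 91 W TDPs from above and a purely hypothetical speedup plugged in:

```python
# Placeholder numbers: 88 W and 91 W come from the posts above; the 10% speedup is hypothetical.

def perf_per_watt(relative_performance, power_watts):
    return relative_performance / power_watts

haswell = perf_per_watt(relative_performance=1.00, power_watts=88)
skylake = perf_per_watt(relative_performance=1.10, power_watts=91)  # assume ~10% faster

print(f"perf/W gain: {(skylake / haswell - 1) * 100:.1f}%")
# With a ~3.4% higher TDP, anything faster than ~3.4% is still a perf/W win.
```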
 

majord

Senior member
Jul 26, 2015
433
523
136
These things are in stock nearly everywhere in Australia, yet in the US you guys are complaining they're not available? Odd!
 

jpiniero

Lifer
Oct 1, 2010
14,629
5,246
136
I wonder if this is another example of the inverse HT, because the difference compared to Windows 8.1 is massive.

I wouldn't be surprised if the Adobe benchmarks are simply due to Intel optimizing the Skylake GPU OpenCL driver on 10. Look at where the 7850K is in the Illustrator benchmark.

These things are in stock nearly everywhere in Australia, yet in the US you guys are complaining they're not available? Odd!

Aren't the chips assembled in Malaysia? I imagine it would be easier to ship the chips to Australia compared to the US, and if there's a huge shortage it makes sense to only ship to the US once the volume is there.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
I see a few impressive improvements in there...

http://i.imgur.com/FK3Ypex.png

Also a few important points from Eurogamer's review, other than the numbers:

A typical benchmark concentrates on one task and hammers away at it repeatedly, making for easy to track, comparable results. Gameplay stresses the CPU in different ways all the time, different games utilise the processor to varying degrees, and some do not even utilise all of the threads available on an i7. On top of that, the benchmarks included with games generally concentrate on graphics performance. It's for these reasons - and more - that most of the Skylake reviews we've seen so far present gaming results that show little or no difference between any Intel quad. And yet, play the Welcome to the Jungle level in Crysis 3 using Sandy Bridge and then with Skylake and it's immediately obvious that the newer tech provides a tangible, worthwhile boost.

We strongly recommend watching the videos to get an idea of how CPU performance actually works in practice: where the processor workload comes to the forefront, you'll see the differential. Where it's less of an issue, the GPU takes precedence and you'll see performance converge. In part this explains why the bar charts found in many PC reviews don't really cut it when it comes to comparing what the CPU is actually capable of: the differences are averaged out in areas of the benchmark run where it's actually the graphics card that is the limiting factor.

Even so when looking at average frame-rates in titles where we are truly CPU-bound for the majority of the duration, there are some notable results: in GTA 5, the 6700K is 20 per cent faster than the 4790K, 34 per cent faster than the 3770K, with a 38 per cent uptick compared to the 2600K. Also noteworthy is Far Cry 4: 17/40/43 per cent faster respectively than its predecessors - Devil's Canyon, Ivy Bridge and Sandy Bridge.

Other results also show notable gains, but don't quite seem to reflect the difference we actually experienced when carrying out these tests. And that's all down to the averaging effect. In most games you won't be CPU bound all of the time, but during gameplay, it's the hitches and stutters when the CPU runs out of oomph that hit the experience the most. With that in mind, here's an alternative version of the table above, concentrating on lowest frame-rates. Note that being CPU-bound can cause a lot of stutter, which can introduce some degree of error to the results, but the trend is clear. When the CPU is the limiting factor in gameplay, Ivy Bridge and Sandy Bridge dip down hardest, the Haswell Devil's Canyon is more robust in some titles, but Skylake is considerably ahead.

But there are still some noticeable boosts - GTA 5 on the 6700K is 17 per cent faster clock for clock than the 4790K, and 29 per cent faster than both Ivy and Sandy Bridge. Far Cry 4 - an eight-core aware title that demands high per-core performance - sees Skylake move 17 points clear of the 4790K, and a mammoth 32 per cent ahead of the second and third-gen i7s.

...The lowest recorded frame-rates also throw up some interesting results: Crysis 3 stability scales according to how modern the CPU architecture is, with Sandy Bridge way off the pace set by its successors and Skylake at point. Far Cry 4 shows a large leap compared to all prior Intel generations tested, where previously we saw only iterative improvements.

To answer the first question - in most gaming scenarios, our tests have demonstrated that existing i5s and i7s still perform admirably. After all, most of the time, you are limited by the GPU, not the CPU. But in terms of quality of gameplay, when you are CPU-bound, the experience definitely suffers - in our experience, in-game stutter at its worst is usually caused by CPU bottlenecks, rather than graphics or driver issues. In games heavy on CPU, Skylake outperforms its predecessors and can leave both Sandy Bridge and Ivy Bridge in particular in the dust.

The per-clock performance gain Skylake provided in GTA V and Far Cry 4 was much more substantial than Haswell vs Sandy Bridge, and those are arguably some of the most CPU-intensive games they tested. It pains me to see noobs looking at only one review (cough cough, AnandTech) and concluding Skylake is worse or at best on par with Haswell for gaming with a dGPU.
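To illustrate the averaging effect Eurogamer describes, here's a minimal sketch with made-up frame times: two runs can show near-identical average FPS while one has far worse CPU-bound dips.

```python
# Made-up frame times (ms) for two hypothetical CPUs over the same run.
# Both average out similarly, but one stutters hard when it becomes CPU-bound.

def avg_fps(frame_times_ms):
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def low_fps(frame_times_ms, percentile=0.01):
    """FPS of the slowest `percentile` fraction of frames (a simple 1%-low style metric)."""
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, int(len(worst) * percentile))
    return 1000 / (sum(worst[:n]) / n)

smooth = [16.7] * 99 + [20.0]        # mostly GPU-bound, only small dips
stutter = [15.0] * 95 + [60.0] * 5   # similar average, big CPU-bound spikes

for name, run in [("smooth", smooth), ("stutter", stutter)]:
    print(f"{name}: avg {avg_fps(run):.0f} fps, 1% low {low_fps(run):.0f} fps")
# smooth:  avg ~60 fps, 1% low ~50 fps
# stutter: avg ~58 fps, 1% low ~17 fps
```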
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I think it's interesting that Skylake is still (presumably) this 4-way superscalar architecture, while Apple's Cyclone is 6-way and has 1 more execution port (unless these things changed compared to Haswell). But maybe it isn't truly apples to apples since one is RISC, the other CISC.
 

videogames101

Diamond Member
Aug 24, 2005
6,777
19
81
I think it's interesting that Skylake is still (presumably) this 4-way superscalar architecture, while Apple's Cyclone is 6-way and has 1 more execution port (unless these things changed compared to Haswell). But maybe it isn't truly apples to apples since one is RISC, the other CISC.

Intel x86 is RISC under the hood though; everything is broken into micro-ops, right?

I hope we'll find out more at IDF.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Intel x86 is RISC under the hood though; everything is broken into micro-ops, right?

I hope we'll find out more at IDF.

Yes, 4 x86 instructions can be decoded per clock with Conroe - Broadwell and possibly Skylake, versus 6 ARM instructions for Cyclone.