
Intel Skylake / Kaby Lake


coercitiv

Diamond Member
Jan 24, 2014
Arnold for Maya: 12 core was right in the middle of 10 core and 14 core
Premiere Pro: 10 core beat the 12 core in most tests
After Effects: 10 core beat the 12 core in all tests
Cinema 4D (Cinebench): 12 core was right in the middle of 10 core and 14 core
Photoshop: 10 core beat the 12 core in all tests
Lightroom: 10 core beat the 12 core in all tests
V-Ray: 12 core was right in the middle of 10 core and 14 core
KeyShot: 12 core was right in the middle of 10 core and 14 core

Basically, get the 10 core or the 14 core. The 12 core chip never won. This is because the 12 core version is where Intel dumps the chips that run hotter and thus have to run at a lower clock speed. Since the system price is nearly the same, I would just get the 14 core chip if I were in that market.
So we use these tests to prove the 7920X runs hotter than the 7900X, even though the 8 core 7820X wins or ties with them, thus proving these are not heavily multithreaded scenarios?! According to this logic, the 10 core is where Intel dumps the hotter 8 core candidates.

Arnold for Maya: 12 core was right in the middle of 10 core and 14 core, and the 8 core was last
Premiere Pro: 10 core beat the 12 core in most tests, and the 8 core tied the 12 core
After Effects: 10 core beat the 12 core in all tests, and the 8 core was in 2nd place behind the winner, the 7700K
Cinema 4D (Cinebench): 12 core was right in the middle of 10 core and 14 core, and the 8 core was last
Photoshop: 10 core beat the 12 core in all tests, and the 8 core was in 1st place
Lightroom: 10 core beat the 12 core in all tests; actually they had equal performance, and the 8 core was not far behind
V-Ray: 12 core was right in the middle of 10 core and 14 core, and the 8 core was last
KeyShot: 12 core was right in the middle of 10 core and 14 core, and the 8 core was last
 

coercitiv

Diamond Member
Jan 24, 2014
Perhaps the 12-core part suffers merely from an unfortunate selection of Turbo bins...?
Perhaps, in addition to that, the 12-core part is a 140W TDP part while the 14-core part is 165W? Sure, the 14-core part also comes with higher base clocks, but maybe we should see these CPUs actually measured at the same TDP & frequencies before we pass judgement?
 

dullard

Elite Member
May 21, 2001
That's an interesting assertion; is this your own theory, or is there a link that you might share?
There is some natural variation in all chips. The lower performing chips either get tossed in the trash (expensive loss of yield), or sold as a slower speed chip with about the same power use as much faster chips (higher profit by selling all functioning chips). This is a very common issue when you bin chips. Either you lower the performance of all chips to match the lowest performing chips that you sell, or you have a bin where you sell the chips that weren't quite as good as the others (but are still functional). Every line of chips has that one with far slower speeds or far higher TDP than you would otherwise expect. For the Skylake-X, that chip is the 7920X.
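The variation-and-binning argument can be sketched with a toy simulation. Everything below is an illustrative assumption (the clock distribution and the bin cutoffs are made up, not Intel's actual data); the point is just that a natural spread of die quality plus fixed cutoffs always leaves a "slower but functional" bin to sell.

```python
import random

random.seed(42)

# Illustrative assumption: achievable all-core clock (GHz) varies die-to-die
# around a nominal target due to process variation.
dies = [random.gauss(4.0, 0.15) for _ in range(10_000)]

# Hypothetical bin cutoffs: the fastest dies become the top SKU, while
# slower-but-functional dies fill a lower bin instead of being scrapped.
top_bin = [d for d in dies if d >= 4.1]
mid_bin = [d for d in dies if 3.8 <= d < 4.1]
low_bin = [d for d in dies if d < 3.8]  # sold cheaper, or run hotter per clock

print(f"top: {len(top_bin)}, mid: {len(mid_bin)}, low: {len(low_bin)}")
```

Selling the low bin at a lower speed grade recovers revenue that discarding those dies would lose, which is the yield argument made above.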

This isn't a direct link with the exact words, but you can figure it out yourself:
https://images.anandtech.com/doci/11698/turbos2.png

Notice that for the same TDP, the 7920X has a 400 MHz lower base speed than the cheaper 7900X and a 200 MHz lower base speed than the chip with more cores (the 7940X). The same goes for the turbo speeds at 9 to 12 active cores: the 7920X has the lowest turbo of the whole group of chips that can use that many cores.

There is not one combination of cores on that chart where the 7920X will ever win against the chips nearby. That would be okay if it were priced lower. But it isn't. The 7920X is priced right in the middle of the 7900X and the 7940X even though it will never be the best of the three. If you don't need more than 10 cores, get the 7900X; if you do need more than 10 cores, get the 7940X.
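For reference, the figures being compared in this thread can be tabulated. The base clocks and TDPs below are the ones cited in these posts (2.9/3.3/3.1 GHz base, 140 W vs 165 W); the per-core power budget is a crude proxy, not a real measurement.

```python
# Base clocks (GHz) and TDPs (W) as cited in this thread for the three
# Skylake-X parts under discussion.
specs = {
    "7900X": {"cores": 10, "base": 3.3, "tdp": 140},
    "7920X": {"cores": 12, "base": 2.9, "tdp": 140},
    "7940X": {"cores": 14, "base": 3.1, "tdp": 165},
}

for name, s in specs.items():
    # Rough per-core power budget at base clock: a crude proxy for how
    # tightly each part is power-constrained.
    print(f"{name}: {s['cores']}c, {s['base']} GHz base, "
          f"{s['tdp'] / s['cores']:.1f} W/core")

# The 7920X base-clock deficits called out above:
print(f"vs 7900X: {specs['7900X']['base'] - specs['7920X']['base']:.1f} GHz")
print(f"vs 7940X: {specs['7940X']['base'] - specs['7920X']['base']:.1f} GHz")
```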

The 7800X is in a similar position as the garbage-collection bin of the LCC die: its speed is lower than the chips nearby in the lineup, yet its TDP is just as high as that of a chip with twice as many cores and faster clock speeds. At least the 7800X was priced low, though not low enough now that the 8700K is going to be released.

The 7920X and 7800X are to be avoided, unless you have a damn good reason to get one.
 

dullard

Elite Member
May 21, 2001
So we use these tests to prove the 7920X runs hotter than the 7900X, even though the 8 core 7820X wins or ties with them, thus proving these are not heavily multithreaded scenarios?! According to this logic, the 10 core is where Intel dumps the hotter 8 core candidates.
No, the 7800X gets that LCC garbage collection bin award. The 7800X has the same TDP as the 7820X, yet the 7820X has more cores, faster base speed, and faster turbos.

My whole post was about the HCC chips (because that is what this discussion branch was all about), so I don't see why you are crossing my words out and bringing LCC chips into the discussion.
 

coercitiv

Diamond Member
Jan 24, 2014
My whole post was about the HCC chips (because that is what this discussion branch was all about), so I don't see why you are crossing my words out and bringing LCC chips into the discussion.
You were highlighting benchmarks which were obviously lightly multithreaded, since the 8 core was comparable with the 10 and 12 core. In these tests power was likely not the issue, but rather how the turbo bins were configured. They did not tell us anything significant with regard to how power hungry the 12 core was when compared to the better-binned 14 core chip.

For example, the After Effects test is so lightly threaded that the 7700K is actually the winner of the benchmark. It shouldn't even have been mentioned.

If we want to accurately observe what the binning difference is between the 12 and 14 core chip, we should observe high throughput tests and compare performance when the chips go close to TDP. It would be even better if they were configured at equal TDP, since the 12 core starts with a 25W handicap in stock config.
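A minimal sketch of the equal-TDP comparison being proposed here. The function name and all the scores are hypothetical placeholders, not real measurements; the point is only what quantity you would compare.

```python
def perf_per_core_at_tdp(score: float, cores: int) -> float:
    """Throughput per core when the chip is power-limited: if the binning
    is equal, this should track the sustained all-core frequency."""
    return score / cores

# Made-up scores standing in for results measured with both chips pinned
# at the same TDP limit.
score_12c = 2600.0  # assumed 12-core score at the fixed TDP
score_14c = 3030.0  # assumed 14-core score at the same TDP

r12 = perf_per_core_at_tdp(score_12c, 12)
r14 = perf_per_core_at_tdp(score_14c, 14)
print(f"12c: {r12:.1f}/core, 14c: {r14:.1f}/core")
# A meaningfully lower per-core result at equal TDP would indicate the
# 12-core silicon needs more power for the same frequency, i.e. a worse bin.
```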

Meanwhile the CB15 scores from the same benchmark series scale linearly from 12 core to 14 core, suggesting the two chips operated at the same average frequency during this power-intensive test. Why isn't the hot, inferior chip working at lower frequencies?
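The linear-scaling observation is easy to check arithmetically. The CB15 scores below are placeholders (the actual numbers aren't quoted in the post); the method is just comparing the score ratio to the core-count ratio.

```python
# Placeholder scores, chosen to scale almost exactly with core count.
cb15_12c = 2680.0
cb15_14c = 3127.0  # roughly 2680 * 14/12

implied_ratio = cb15_14c / cb15_12c
core_ratio = 14 / 12
print(f"score ratio {implied_ratio:.3f} vs core ratio {core_ratio:.3f}")

# If the ratios match, both chips averaged the same per-core throughput
# (and hence roughly the same frequency) in this power-intensive test.
same_frequency = abs(implied_ratio - core_ratio) < 0.01
print(same_frequency)
```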
 

dullard

Elite Member
May 21, 2001
In these tests power was likely not the issue, but rather how the turbo bins were configured.
That is exactly my point; thanks for agreeing with me: the 7920X is binned lower. Now answer the question of why the turbo bins are lower on that chip, and we'll be on the same page. Oh, and to do the test you describe, you'll have to do it with dozens or even hundreds of 7920X chips, as not all of them are power hogs. It is just a place where power hog chips can go. So you need to do statistical tests, not comparison tests.

If your workload is lightly threaded, don't get the 7920X (get the 7900X or Coffee Lake depending on your needs). If your workload is highly threaded, don't get the 7920X (get the 7940X or 7980X or go to a professional chip depending on your needs).
 

coercitiv

Diamond Member
Jan 24, 2014
That is exactly my point; thanks for agreeing with me: the 7920X is binned lower.
When the 7740X was launched, it had trouble beating the 7700K in benchmarks because of the way its turbo bins were configured. It was firmware based and got addressed later. According to you, we could have simply concluded the 7740X was binned lower.

If you want to prove the 12 core is a lower-binned chip than the 14 core (relative to core count), then do it based on benchmarks that end up enforcing TDP limits. Otherwise all you're showing is some turbo anomaly, which can be easily corrected by forcing turbo ratios on these unlocked chips.

Why is it so hard to understand that even if you have the right conclusion, you can't use poor methodology to prove it?
 

dullard

Elite Member
May 21, 2001
Why is it so hard to understand that even if you have the right conclusion, you can't use poor methodology to prove it?
No methodology (or bold font) is needed. Just like changing a font does not make you more correct, ignoring Intel's own tables does not make you more correct.

When you have binned chips, not all bins are equal. That is the whole point of binning. Intel set the turbos and base speeds lower on the 7920X chip. That bin does not contain the best of the best. End of story.
 

Timmah!

Senior member
Jul 24, 2010
That is exactly my point; thanks for agreeing with me: the 7920X is binned lower. Now answer the question of why the turbo bins are lower on that chip, and we'll be on the same page. Oh, and to do the test you describe, you'll have to do it with dozens or even hundreds of 7920X chips, as not all of them are power hogs. It is just a place where power hog chips can go. So you need to do statistical tests, not comparison tests.

If your workload is lightly threaded, don't get the 7920X (get the 7900X or Coffee Lake depending on your needs). If your workload is highly threaded, don't get the 7920X (get the 7940X or 7980X or go to a professional chip depending on your needs).
This is easy to answer: it's because it has a 25W lower TDP. Do you think the 7940X could keep its current bins if it were just a 140W part? It would very likely have even lower clocks than the 7920X has now. Would that make it inferior?
 

jpiniero

Diamond Member
Oct 1, 2010
https://twitter.com/TMFChipFool/status/917147417035853824

"Intel's next Xeon D processor platform is known as Bakerville and it is Skylake-D"
Strange; if they were going to release a Skylake Xeon-D, they would have released it by now, given how much time has passed since Skylake's release. I wonder if it really is just the original 14 nm.

I guess this is more confirmation that Icelake is on its way to being scaled back due to the crappy yield.
 

DrMrLordX

Lifer
Apr 27, 2000
Wow, Xeon-D is Skylake? I would have thought Cannonlake 10nm would be their next Xeon-D, since Broadwell was like that.

I'm a little disappointed.
 

Ajay

Diamond Member
Jan 8, 2001
Strange; if they were going to release a Skylake Xeon-D, they would have released it by now, given how much time has passed since Skylake's release. I wonder if it really is just the original 14 nm.

I guess this is more confirmation that Icelake is on its way to being scaled back due to the crappy yield.
I’m pretty confident that it’ll be at least 14nm+, just as Skylake-SP was. Intel does seem to be losing a lot of ground, first on CNL and now on ICL. 7nm had better be flawless by comparison, or Intel could face upsets even in servers.
 

Dayman1225

Golden Member
Aug 14, 2017
I believe Intel's 10nm++ and 7nm are both oriented for server first, so they may be OK on that front; client will probably get 7nm 1-2 years later.
 

Ajay

Diamond Member
Jan 8, 2001
I believe Intel's 10nm++ and 7nm are both oriented for server first, so they may be OK on that front; client will probably get 7nm 1-2 years later.
Client chips should come out pretty fast, maybe even before the server SoCs, because they are less complex and require much less verification. Design starts on server chips first, but that doesn't mean client design doesn't start until the server chip is done; in fact, it won't wait (unless Intel has gone completely nuts).
 

IntelUser2000

Elite Member
Oct 14, 2003
Wow Xeon-D is Skylake? I would have thought Cannonlake 10nm would be their next Xeon-D, since Broadwell was like that.
Broadwell Xeon D was fantastic even though the rest of the Broadwells were a disappointment.

Based on the state of their 10nm cores, I think it's a very prudent decision to use Skylake-based ones.

Also, we shouldn't forget that system-on-a-chip integrations take extra time to do.
 

DrMrLordX

Lifer
Apr 27, 2000
I do agree that Xeon D was fantastic. It's sort of a shame they aren't doing the same with Cannonlake. The next-gen Xeon D would have been a wonderful way to showcase at-least-suitable yields for 10nm.
 

Dayman1225

Golden Member
Aug 14, 2017
We assume it's yields, but it could be clocks/performance/efficiency. Intel has really said nothing; they should give us an update like they did with 14nm.

Like so.
 

IntelUser2000

Elite Member
Oct 14, 2003
The performance of a processor is determined significantly by its transistors, and thus by the fabrication technologies used to make the chips.

It may be that yields depend on how many defects there are per square millimeter of area. In that case, using redundant circuitry or just making the die smaller would mitigate the issue.
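The defect-density point can be illustrated with the textbook Poisson yield model. The defect density and die areas below are assumptions chosen for illustration, not real 10nm figures.

```python
import math

def poisson_yield(defect_density: float, die_area_mm2: float) -> float:
    """Classic Poisson model: fraction of defect-free dies,
    Y = exp(-D * A), with D in defects/mm^2 and A in mm^2."""
    return math.exp(-defect_density * die_area_mm2)

D = 0.01               # assumed defects per mm^2 (illustrative only)
big, small = 150.0, 75.0  # hypothetical die areas in mm^2

print(f"{poisson_yield(D, big):.1%} yield at {big} mm^2")
print(f"{poisson_yield(D, small):.1%} yield at {small} mm^2")
# Yield improves sharply as die area shrinks, which is one reason
# redundancy and smaller dies mitigate defect-limited yield.
```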

What if it's what they call "parametric yield": performance- and functionality-related? If that is what we are seeing with Intel's 10nm, then it likely renders the process unsuitable for most of their chips. Intel sells chips at the highest levels of performance. Because they said the initial 10nm (and even 10nm+) is lower performance than 14nm++, there's a good chance they are having problems reaching the desired performance.

Some people point to a way too aggressive focus on density as being at fault for Intel's issues with the 14nm and 10nm processes. If that continues, we may never see the day when Intel regains its process leadership.
 

Ajay

Diamond Member
Jan 8, 2001
Given that Intel is having trouble producing U/Y processors with GPUs at 10nm (if true), my guess is that defect density is the main problem right now. That has to be solved before going for improvements in parametric yield.
 
Mar 10, 2006
Given that Intel is having trouble producing U/Y processors with GPUs at 10nm (if true), my guess is that defect density is the main problem right now. That has to be solved before going for improvements in parametric yield.
They could also be facing parametric yield issues. If power consumption is too high at the frequencies that allow the CNL-Y/U processors to match or outperform KBL-U/Y, then it makes little sense to release them.
 

Ajay

Diamond Member
Jan 8, 2001
They could also be facing parametric yield issues. If power consumption is too high at the frequencies that allow the CNL-Y/U processors to match or outperform KBL-U/Y, then it makes little sense to release them.
Given the lack of transparency from Intel these days, we are left to speculate :(
 

krumme

Diamond Member
Oct 9, 2009
Given the lack of transparency from Intel these days, we are left to speculate :(
My guess is that if we could secretly ask some of the senior engineers, they would say things are going as they anticipated.

I don't buy all this lateness stuff and hammering on Intel. Intel is a steamroller when it comes to executing process stuff, with world-class engineers. But naturally they can't compete against nature and economic realities.
Just don't accept all the marketing BS that continues from management and marketing as if the world is unchanged; otherwise you come away disappointed.
 

DrMrLordX

Lifer
Apr 27, 2000
My guess is that if we could secretly ask some of the senior engineers, they would say things are going as they anticipated.

I don't buy all this lateness stuff and hammering on Intel. Intel is a steamroller when it comes to executing process stuff, with world-class engineers. But naturally they can't compete against nature and economic realities.
Just don't accept all the marketing BS that continues from management and marketing as if the world is unchanged; otherwise you come away disappointed.
That's probably the most optimistic outlook on Intel's roadmap I've seen to date. You are aware of all the delays and so forth, no? Coffee Lake shouldn't even exist. Desktop users are supposed to be on Cannonlake right now. Hell, even Kaby Lake wasn't supposed to exist! Everything is in doubt now. It is up to Intel to show us that they can execute on some node other than 14nm.
 
