Intel Skylake / Kaby Lake


JoeRambo

Golden Member
Jun 13, 2013
This staggered launch of all the 14C+ SKUs tells me that Intel was not prepared at all for competing with TR 16C/32T. As Jayz2C put it well, even mobo manufacturers were scratching their heads when they heard Intel reveal the new SKUs. There are even rumors of a new, reworked socket to accept these parts, not to mention the nightmare of supporting everything from the measly "HEDT" KBL-X 4C to the monster 18C parts, with all the possible memory configs etc. The platform feels rushed, especially the 14C+ SKUs.

Indeed, and one has to hope stability and maturity will be better than what the Ryzen launch had.
 

JoeRambo

Golden Member
Jun 13, 2013
Why can't some of those extreme guys go into the BIOS, load optimized defaults, get DDR4 at vanilla 2666, and run GB4 on it? That way we would get a proper reference point instead of these BS scores.
 


TheF34RChannel

Senior member
May 18, 2017
What is expected to be available and when?

26 June, all boards, all SKUs up to and including the 7900X -- quoting Sweepr on the date.

Did you expect anything else? We have been saying for a while now that if Intel offered a high-clocked/unlocked SKU that was exactly the same die as the Xeon E5, using the exact same motherboards, there would be zero point in a Xeon. They figured that one out, so they segmented everything in a way that lets them sell high-clocked, high-core-count, more expensive SKL-X without eating Xeon sales for servers or workstations.

Indeed; they would be rather unintelligent to eat into their own Xeon sales like that. They left themselves no choice.

I'm reading and hearing rumors that socket 2066 was designed for 10 cores max, and that in order for clock speeds (or other compatibility issues) to be higher on the 14-, 16-, and 18-core CPUs, Intel will need to release a socket 2066 V2 to accompany the higher core count chips. This would mean all the X299 boards won't support more than 10 cores. I'll provide the link below. It makes sense to me. Try running a 16- or 18-core Skylake with an all-core OC of 4.3-4.5 GHz on board power delivery designed for 160 W. Could you? A board designed for 200 W, like the link below suggests, makes more sense.
This would line up with Intel rushing out a hasty announcement of more-than-10-core variants even though they designed their boards for 10 cores max. Hmm. IF this is true, and I do mean a titanic IF, then Intel will suffer terribly because of it. Imagine all those people buying a fancy, ultra-expensive X299 board only to learn, either beforehand or after, that they can't upgrade to the higher core count chips. Oh god, another socket. Disaster would follow.

http://www.bitsandchips.it/52-english-news/8452-rumor-intel-could-commercialize-a-v2-edition-of-the-socket-2066

Nah, that whole thing is just someone musing, and unfortunately one person's thoughts count as news on today's internet. I'm with wildhorse2k on this one:

wildhorse2k said: Given that Intel decided to use thermal paste on Skylake-X instead of solder, that would seem unlikely. The 12-core is supposed to have a 160 W TDP, and the next one, the 14-core, probably will too. Boards are designed for much more than 160 W. They have big reserves unless you buy a really cheap or poorly designed one.
 

Atari2600

Golden Member
Nov 22, 2016
Given that Intel decided to use thermal paste on Skylake-X instead of solder, that would seem unlikely.

Thermal dissipation =/= Power draw.

The 12-core is supposed to have a 160 W TDP, and the 14-core probably will too.

You can design it for the same TDP, but you will have to accept a drop in max all-core frequencies. No free lunch.
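A rough back-of-the-envelope sketch of why (every constant below is an illustrative assumption, not an Intel spec): dynamic power scales roughly with core count times frequency times voltage squared, and voltage has to rise with frequency, so a fixed 160 W budget pushes all-core clocks down as the core count goes up.

```python
# Rough CMOS dynamic-power scaling: P ~ n_cores * f * V^2, with voltage
# rising roughly linearly with frequency near the top of the V/f curve.
# All constants are illustrative assumptions, not measured Skylake-X data.

def power(n_cores, f_ghz, base_f=3.3, base_v=1.0, dv_per_ghz=0.1, k=4.2):
    """Estimated package power (W) for n_cores running at f_ghz."""
    v = base_v + dv_per_ghz * (f_ghz - base_f)
    return k * n_cores * f_ghz * v * v

def max_freq(n_cores, tdp=160.0):
    """Highest all-core frequency (GHz) that stays under tdp watts."""
    f = 4.5
    while f > 1.0 and power(n_cores, f) > tdp:
        f -= 0.05
    return f

for cores in (10, 12, 14, 16, 18):
    print(f"{cores} cores -> ~{max_freq(cores):.2f} GHz all-core at 160 W")
```

With these made-up constants the 10-core lands around 3.5 GHz all-core while the 18-core drops to roughly 2.5 GHz at the same 160 W: the lunch is paid for in clocks.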


Boards are designed for much more than 160 W.

Boards are designed to a minimum specification. That means the worst board in the bin will be able to deal with at least that much power.


They have big reserves unless you buy a really cheap or poorly designed one.

Irrelevant, unless Intel is willing to pay its partners to run a testing campaign to downselect boards by what they can actually handle, and to dump any board that doesn't meet a revised spec of, say, 180 W power draw.

Otherwise you'll have two versions of the socket in the wild with no clear means of discerning which is which, and that is really bad for consumers.
 

Sweepr

Diamond Member
May 12, 2006
Actually here's the full quote:

Intel Xeon Platinum 8180... R15 score easily breaks 7500 points, 2699v4 about 6000 points, 7820X about 2100 points

Core i7-7820X ($599): >80% faster than the Core i7-6850K ($617-628). A huge performance bump at a similar price point. The score is also up by ~35% compared to the Core i7-6900K (equal core count) and ahead of the Core i7-6950X, if the leak is correct. Sounds overly optimistic, even though a (close to) 20% higher all-core turbo compared to the Core i7-6900K, plus improved IPC and fast DDR4, should definitely put it near the magic 2,000 pts mark.
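For what it's worth, the arithmetic behind that "near 2,000" figure is easy to sketch. The baseline score and the uplift factors below are rough assumptions pieced together from this thread, not measured data:

```python
# Sanity-check the leaked Core i7-7820X multi-thread claim (Cinebench
# R15 style scoring). Baseline and uplift factors are rough assumptions
# from this thread, not measured results.

i7_6900k_score = 1500          # assumed 8c/16t Broadwell-E multi-thread baseline
turbo_uplift   = 1.20          # ~20% higher all-core turbo, per the post above
ipc_uplift     = 1.08          # assumed Skylake-over-Broadwell IPC gain
mem_uplift     = 1.03          # assumed benefit of faster DDR4

estimate = i7_6900k_score * turbo_uplift * ipc_uplift * mem_uplift
print(f"Estimated 7820X score: ~{estimate:.0f}")   # lands right around 2000
```

So the ~2,000 pts figure is at least internally consistent with the turbo and IPC deltas, even if the >80%-over-6850K claim still looks optimistic.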

 


wildhorse2k

Member
May 12, 2017
Maturity is already affected, with cases like Gigabyte being unable to support Thunderbolt 3 on any of their X299 boards because they did not have time to go through the certification process.

Hopefully stability is excellent though.

Nobody forced Gigabyte to release X299 boards now. If they didn't have enough time, they could have delayed their launch; it's a poor excuse from Gigabyte. If X299 boards or Skylake-X CPUs have serious bugs due to the accelerated launch, then Intel will be in deep trouble.
 

TahoeDust

Senior member
Nov 29, 2011
I use a 3440x1440 100 Hz monitor and a 1080 Ti. My usage is 80% gaming, 10% benching, and 10% encoding. I want a chip that will game as well as a 7700K and excel in my other uses. I will never go SLI again and will never need NVMe RAID 1 or 5. If I am comfortable with the price, is there any reason the 7820X will not be the best chip for me?
 

coercitiv

Diamond Member
Jan 24, 2014
I use a 3440x1440 100 Hz monitor and a 1080 Ti. My usage is 80% gaming, 10% benching, and 10% encoding. I want a chip that will game as well as a 7700K and excel in my other uses. I will never go SLI again and will never need NVMe RAID 1 or 5. If I am comfortable with the price, is there any reason the 7820X will not be the best chip for me?
No reason; as long as reviews report decent overclocking potential, it's probably the best chip for you. A Coffee Lake 6c/12t might come close, maybe even beat it in some games where pure clocks matter more, but overall performance, throughput, and longevity will favor the 7820X.
 

Eug

Lifer
Mar 11, 2000
Apple is switching to the High Efficiency Image File Format (HEIF) for macOS 10.13 High Sierra and iOS 11. It is a file format that is much more efficient than JPEG, has more features, and utilizes H.265/HEVC compression technology.

Now that Kaby Lake is in Macs, I'm guessing this means that even 2D photographs (which are 10-bit) will likely be completely hardware accelerated, both for display and for thumbnail creation. Hopefully this means that image management programs like Photos will become much faster with large libraries.

I would love to see some demos and benchmarks of this, but I'm thinking that if implemented fully and correctly, this could be a significant performance increase in certain circumstances, even for 2D photographs. Does my thinking here make sense?
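If someone wants to measure this once the software ships, here's a minimal sketch using ffmpeg's decode benchmark. It assumes ffmpeg is installed and that you have a 10-bit HEVC test clip (the file name is hypothetical); "videotoolbox" is macOS's hardware decode path, and whether it engages Kaby Lake's 10-bit HEVC block for HEIF stills specifically is an open question here.

```python
# Hedged sketch: compare software vs. hardware HEVC decode speed with ffmpeg.
# Assumes ffmpeg is installed and "clip_hevc.mp4" is a 10-bit HEVC sample.
# "videotoolbox" is the macOS hardware path; use "qsv" on Windows/Linux Intel.
import subprocess, time

def decode(clip, hwaccel=None):
    cmd = ["ffmpeg", "-hide_banner", "-loglevel", "error"]
    if hwaccel:
        cmd += ["-hwaccel", hwaccel]
    cmd += ["-i", clip, "-f", "null", "-"]   # decode only, discard the output
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - t0

clip = "clip_hevc.mp4"                        # hypothetical sample file
print(f"software decode: {decode(clip):.2f} s")
print(f"hardware decode: {decode(clip, 'videotoolbox'):.2f} s")
```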
 

coercitiv

Diamond Member
Jan 24, 2014
I would love to see some demos and benchmarks on this, but I'm thinking if implemented fully and correctly, this could be a significant performance increase in certain circumstances even for 2D photographs. Does my thinking here make sense?
Skylake already has hardware acceleration for JPEG encoding. Haswell brought hardware acceleration for JPEG decoding. It probably boils down to how efficient each hardware implementation is (HEVC vs. JPEG).

That having been said, older chips will have a hard time handling a more resource-intensive codec.
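For a rough baseline on the JPEG side, a minimal sketch with Pillow (this path is pure software decode; whether hardware JPEG decode is even exposed depends on the OS and driver, and the test image name is hypothetical):

```python
# Minimal sketch of a software JPEG decode baseline. Pillow decodes in
# software here; any hardware JPEG path would have to beat this number.
import time
from PIL import Image

def avg_decode_s(path, runs=50):
    t0 = time.perf_counter()
    for _ in range(runs):
        with Image.open(path) as im:
            im.load()                 # force the actual decode, not just the header
    return (time.perf_counter() - t0) / runs

print(f"avg software JPEG decode: {avg_decode_s('photo.jpg') * 1000:.1f} ms")
```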
 

Eug

Lifer
Mar 11, 2000
Skylake already has hardware acceleration for JPEG encoding. Haswell brought hardware acceleration for JPEG decoding. It probably boils down to how efficient each hardware implementation is (HEVC vs. JPEG).

That having been said, older chips will have a hard time handling a more resource-intensive codec.
I wonder how much mainstream software implements hardware JPEG encoding, especially on the Mac side. On the Mac, I'm thinking it wouldn't have been a top priority if they knew they were switching to HEIF.
 

Topweasel

Diamond Member
Oct 19, 2000
I use a 3440x1440 100hz monitor and 1080ti. My usage is 80% gaming, 10% benching, and 10% encoding. I want a chip that will game as well as a 7700k and accel in my other uses. I will never go SLI again and wll never have a need for NVMe raid 1 or 5. If I am comfortable with the price, is there any reason that the 7820x will not be the best chip for me?
Finding the soft spot in this lineup is hard. I want to say it's the 7900X, but it's hard to look at a $1k CPU as a decent deal. I guess the big question becomes Coffee Lake and how much its i7 will encroach on the 7820X. That said, requirement-wise, yeah, it looks like the 7820X has the best clocks, it's twice the cores of the 7700K, and as long as you are only using one NVMe drive (maybe two) and one GPU, it looks like a good buy (for an i9). Just be really picky about the board; don't go with a long-option-list version that may not work well with the 7820X.
 

TahoeDust

Senior member
Nov 29, 2011
Finding the soft spot in this lineup is hard. I want to say it's the 7900X, but it's hard to look at a $1k CPU as a decent deal. I guess the big question becomes Coffee Lake and how much its i7 will encroach on the 7820X. That said, requirement-wise, yeah, it looks like the 7820X has the best clocks, it's twice the cores of the 7700K, and as long as you are only using one NVMe drive (maybe two) and one GPU, it looks like a good buy (for an i9). Just be really picky about the board; don't go with a long-option-list version that may not work well with the 7820X.
Right now I have my eye on the Asus Strix X299-E. It seems like it has everything I would need without being overkill. I came across a good deal on 32 GB (4x8 GB) of Corsair Vengeance LED 3200 MHz (16-18-18-36) the other day ($260), so I went ahead and scooped it up.
 

ManyThreads

Member
Mar 6, 2017
Do all the X299 mobos have the ability to use the CPU lanes for an M.2 SSD, or is that considered a special feature? I feel like that would be better than sending it through the chipset if there were an option.
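For context on why the CPU-attached route can matter, a quick sketch of the nominal bandwidth (spec numbers, not board measurements): the chipset path funnels everything through DMI 3.0, which is roughly a PCIe 3.0 x4 link shared by all chipset devices, while CPU lanes give the drive a dedicated x4.

```python
# Nominal PCIe 3.0 bandwidth: 8 GT/s per lane with 128b/130b encoding.
# DMI 3.0 is electrically ~PCIe 3.0 x4, but shared by everything behind
# the chipset (SATA, USB, NICs, other M.2 slots).
PCIE3_LANE_GBPS = 8.0 * (128 / 130)          # effective gigabits per second per lane

def x4_bandwidth_gbs():
    return 4 * PCIE3_LANE_GBPS / 8           # four lanes, converted to GB/s

print(f"CPU-attached M.2 (PCIe 3.0 x4): ~{x4_bandwidth_gbs():.2f} GB/s dedicated")
print(f"Chipset path (DMI 3.0):         ~{x4_bandwidth_gbs():.2f} GB/s shared")
```

Both work out to about 3.9 GB/s, so a single fast NVMe drive nearly saturates DMI on its own; that's the argument for CPU lanes when the board offers them.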
 

FlanK3r

Senior member
Sep 15, 2009
A realistic score for the 7820X at stock, with Turbo Boost pushing clocks to 4 GHz across all threads, is 1800-1850 in multithread.

Worse, because clock for clock SKL-X is behind Ryzen in Cinebench, and Ryzen at 4 GHz 8c/16t scores around 1770-1800 points.
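To put the clock-for-clock claim in numbers, a quick sketch (both score ranges are this thread's estimates, not measurements):

```python
# Points per core per GHz for 8c/16t chips at a ~4.0 GHz all-core clock.
# Both score ranges are this thread's estimates, not measured results.
estimates = {
    "Ryzen 7 @ 4 GHz":           (1770, 1800),
    "i7-7820X @ 4 GHz (quoted)": (1800, 1850),
}
for name, (lo, hi) in estimates.items():
    print(f"{name}: {lo / (8 * 4.0):.1f}-{hi / (8 * 4.0):.1f} pts per core-GHz")
# If SKL-X really is behind Ryzen per clock, the 7820X would land at or
# below Ryzen's ~55-56 pts per core-GHz, i.e. under ~1800 points total.
```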
 