Intel Skylake / Kaby Lake


lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
OK, I would like to know the reason if you know one?
Just like you, I can only speculate on known facts. But if you are willing... here is my guess:

Considering how badly assembly is done with TIM, and even with solder in some situations on Haswell/Broadwell, I tend to think that Intel realized that every messed-up (or degraded-in-use) solder joint is a lost CPU, a very expensive sale to lose, so they've just abandoned solder altogether, bottom to top. Ultimately, a mediocre/bad TIM application will still produce a usable CPU, and with Intel's choice of compound, one that will last a fair amount of time past its warranty. Mediocre/bad soldering will just produce either a dead die or a swift crack in 24/7 usage (especially with power management and HT producing temperature gradients all over the die).
 
Mar 10, 2006
11,715
2,012
126
Just like you, I can only speculate on known facts. But if you are willing... here is my guess:

Considering how badly assembly is done with TIM, and even with solder in some situations on Haswell/Broadwell, I tend to think that Intel realized that every messed-up (or degraded-in-use) solder joint is a lost CPU, a very expensive sale to lose, so they've just abandoned solder altogether, bottom to top. Ultimately, a mediocre/bad TIM application will still produce a usable CPU, and with Intel's choice of compound, one that will last a fair amount of time past its warranty. Mediocre/bad soldering will just produce either a dead die or a swift crack in 24/7 usage.

That's a good bit of reasoning here, too, but I wonder what kind of yield loss Intel would see in packaging when using solder vs TIM. Surely they've got this mfg process down cold at this point?
 
Aug 11, 2008
10,451
642
126
I don't think so; these chips sell for so much that saving a few bucks per unit in exchange for tons of bad press (which will probably turn some enthusiasts off from the parts) would be... well, it'd be penny wise, pound foolish, IMHO.

There's got to be something else behind it. I wish we knew what.
Yea, I agree. I am sure it is ultimately "cost" related, but I don't think it is as simple as just the cost of TIM vs solder. It has to be related to simplifying production, ensuring raw materials supply, longevity (lack of replacement costs) issues, or something like that. Really though, it would be nice if Intel would publicly address the issue. I don't think the PR could be any worse than it is now.
 

formulav8

Diamond Member
Sep 18, 2000
7,004
522
126
Just like you, I can only speculate on known facts. But if you are willing... here is my guess:

Considering how badly assembly is done with TIM, and even with solder in some situations on Haswell/Broadwell, I tend to think that Intel realized that every messed-up (or degraded-in-use) solder joint is a lost CPU, a very expensive sale to lose, so they've just abandoned solder altogether, bottom to top. Ultimately, a mediocre/bad TIM application will still produce a usable CPU, and with Intel's choice of compound, one that will last a fair amount of time past its warranty. Mediocre/bad soldering will just produce either a dead die or a swift crack in 24/7 usage (especially with power management and HT producing temperature gradients all over the die).

So it's simply money, like I already mentioned. Fair enough. I just thought there was some other benefit, especially for consumers. I guess AMD is willing to sacrifice a CPU even though solder is that bad for yields.
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
Oof. Wonder if there's a leakage issue here . . . and if so, why?
Nah, leakage is pretty small on the process as you can tell from voltages.

It's simply that the peak condition for the 7900X is much more brutal than for any other overclockable CPU on the market. I'm fairly certain that a 7820X in the same conditions would only pull about 100-120 A (since cut-down chips generally have worse leakage).
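For scale, the arithmetic behind a current figure like that is just package power divided by core voltage. A minimal sketch, with my own illustrative assumptions for voltage and power (not measurements from anyone in the thread):

```c
/* Rough scale check: package current is roughly power / core voltage.
 * Both the voltage and the power figures below are assumptions,
 * chosen only to illustrate the ballpark. */
#include <stdio.h>

int main(void) {
    double vcore_v   = 1.20;              /* assumed overclocked core voltage */
    double power_w[] = { 140.0, 280.0 };  /* assumed 8-core vs 10-core OC load */

    for (int i = 0; i < 2; i++)
        printf("%.0f W at %.2f V -> ~%.0f A\n",
               power_w[i], vcore_v, power_w[i] / vcore_v);
    return 0;
}
```

With those assumed numbers, ~140 W at 1.2 V lands right around 115 A, while a heavily loaded 10-core at ~280 W is well over 200 A.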

So it's simply money, like I already mentioned. Fair enough. I just thought there was some other benefit, especially for consumers.
Technically, if my speculation is correct, then there is a very clear money benefit to both Intel and its data center customers. And Intel never gave a [censored] about enthusiasts (or even plain consumers, for that matter).
 

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
I'm sure this is wrong, but it's my guess about an actual use for the TIM. Maybe it has something to do with maintaining an ideal server room temp? If there is a sudden increase in server load and the heat gets instantly transferred into the room, then maintaining a narrow temp range for the room might be harder with the sudden spikes. If the heat of the die simply increases and that heat is transferred more slowly to the IHS, would the room temp ramp up gradually rather than spiking suddenly? Stupid guess, I'm sure, but I'm really reaching here, trying to understand a logical, practical and useful reason for TIM.

A silly article I found.
http://www.itwatchdogs.com/environm...m-temperature-on-the-rise,-study-finds-472652

"Data center operators interested in making facilities hotter should use state-of-the-art temperature monitoring equipment to oversee the server room at all times and make sure internal temperatures do not spike."
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
I'm sure this is wrong, but it's my guess about an actual use for the TIM. Maybe it has something to do with maintaining an ideal server room temp? If there is a sudden increase in server load and the heat gets instantly transferred into the room, then maintaining a narrow temp range for the room might be harder with the sudden spikes. If the heat of the die simply increases and that heat is transferred more slowly to the IHS, would the room temp ramp up gradually rather than spiking suddenly? Stupid guess, I'm sure, but I'm really reaching here, trying to understand a logical, practical and useful reason for TIM.

A silly article I found.
http://www.itwatchdogs.com/environm...m-temperature-on-the-rise,-study-finds-472652

"Data center operators interested in making facilities hotter should use state-of-the-art temperature monitoring equipment to oversee the server room at all times and make sure internal temperatures do not spike."
Have you ever walked an enterprise-sized server floor? The AC is so cold you freeze (unless you get near the exhaust side of the rack you are standing by, where it's HOT), and there are so many servers that any spike on any 1 of 10 servers would not make a bit of difference. It's almost surreal walking a real server floor. My company had ONE data center with over one square mile of server space, and they had 10 data centers. The city of Corona could not even give them all the power they needed, so there were 10 semis with generators in the parking lot for quite some time.

OK, the reason for saying all this is: sorry, there is no good reason that I can think of for using TIM.
 

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
This release is for sure mediocre. The waiting game continues. My best bet is that I will buy a 6-core Coffee Lake unless Intel has another idiotic surprise in the bag. An 8-core+ would be preferable just for fun, but I don't want to give up too much ST performance (Ryzen...) and also don't want to have to install an AC unit in my office (Skylake-X). Threadripper will require a new cooler, which is a huge downside as it easily adds $100 to the price tag.
 

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
Have you ever walked an enterprise-sized server floor? The AC is so cold you freeze (unless you get near the exhaust side of the rack you are standing by, where it's HOT), and there are so many servers that any spike on any 1 of 10 servers would not make a bit of difference. It's almost surreal walking a real server floor. My company had ONE data center with over one square mile of server space, and they had 10 data centers. The city of Corona could not even give them all the power they needed, so there were 10 semis with generators in the parking lot for quite some time.

OK, the reason for saying all this is: sorry, there is no good reason that I can think of for using TIM.

Well, that sounds just about right to me then. No good reason indeed. I guess it's just to save a few cents on a $1000 CPU, limit performance so people will buy the next chips, and save time and money during manufacturing. They probably just cheaped out, straight up.
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,686
136
If the heat of the die simply increases and that heat is transferred more slowly to the IHS, would the room temp ramp up gradually rather than spiking suddenly? Stupid guess, I'm sure, but I'm really reaching here, trying to understand a logical, practical and useful reason for TIM.
The heat stored in CPUs would be an insignificant amount. Without getting into the physics, one way to intuitively understand this would be to imagine a CPU experiencing a sudden high load while not equipped with a heatsink: at best it would shut down in just a few seconds. So, going back to the normal scenario and considering that the heatsink still draws most of the heat, using TIM would allow the servers to store at most a fraction of a second of load-induced heat, essentially nothing compared to the heat capacity of the external environment (the server room).
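A quick back-of-the-envelope sketch makes the scale concrete. Every figure below (die size, thickness, temperature headroom, load step) is my own rough assumption, purely for illustration:

```c
/* Back-of-the-envelope: how much of a sudden load step can the die itself
 * absorb before the heat must cross the TIM? All figures are assumptions. */
#include <stdio.h>

int main(void) {
    double die_area_mm2    = 320.0;    /* assumed HCC-class die area        */
    double die_thick_mm    = 0.8;      /* assumed die thickness             */
    double si_density      = 2.33e-3;  /* silicon density, g per mm^3       */
    double si_spec_heat    = 0.7;      /* silicon specific heat, J/(g*K)    */
    double temp_headroom_K = 20.0;     /* rise allowed before throttling    */
    double load_step_W     = 200.0;    /* assumed sudden package load step  */

    double die_mass_g  = die_area_mm2 * die_thick_mm * si_density;  /* ~0.6 g   */
    double heat_cap_JK = die_mass_g * si_spec_heat;                 /* ~0.4 J/K */
    double stored_J    = heat_cap_JK * temp_headroom_K;             /* ~8 J     */
    double buffer_ms   = stored_J / load_step_W * 1000.0;           /* ~40 ms   */

    printf("Die stores ~%.0f J, i.e. ~%.0f ms of a %.0f W load step.\n",
           stored_J, buffer_ms, load_step_W);
    return 0;
}
```

A few tens of milliseconds of "buffering" is nothing next to the thermal mass of a server room.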
 

twothreefive

Junior Member
Jun 13, 2017
12
5
36
The thermals have me worried, especially after Tom's Hardware's review. If Intel releases a new version with solder, as the rumors have suggested, I will take the leap. But until then, I think I am going to hold off on a new system.
 

KompuKare

Golden Member
Jul 28, 2009
1,012
923
136
I think it is because Intel must have increased clock speeds relatively late in the cycle - I expect Skylake-X was originally targeted at lower stock clock speeds where TIM would not have been an issue.

While competition is of course the most likely answer, if they had released Skylake-X closer to the silicon's sweet spot, the difference between it and Broadwell-E (also on 14nm) would have been minor. Since IPC didn't really improve, even without Ryzen Intel would probably have been forced to increase clocks.

^ Not so much the TIM itself (which Idontcare showed to have excellent C/W awhile back), but the manufacturing / assembly of the IHS.

Aren't there delidders who have said that the gap between IHS and die is excessive? So that even the best TIM money can buy (but Intel would never want to use a conductive one) can't really improve it much?

Wonder if anyone has delidded a soldered part, and whether the thickness of the IHS is the same? That is, could the gap be closed with a thicker lid, yet Intel keeps using the same lid thickness?
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
It's interesting that Skylake-X is showing poorer-than-expected performance in games (at least according to what the last page of AnandTech's review hinted) just as it moves to a more AMD-style, non-inclusive cache hierarchy. Is it possible that many games are optimized to work better with an inclusive hierarchy? This would explain why both Ryzen and Skylake-X do very well in productivity applications but not so well in gaming.
 

Dygaza

Member
Oct 16, 2015
176
34
101
It's interesting that Skylake-X is showing poorer-than-expected performance in games (at least according to what the last page of AnandTech's review hinted) just as it moves to a more AMD-style, non-inclusive cache hierarchy. Is it possible that many games are optimized to work better with an inclusive hierarchy? This would explain why both Ryzen and Skylake-X do very well in productivity applications but not so well in gaming.

It could be because core-to-core ping times are rather slow. Moving from the ring to the mesh seems to be causing problems, especially in games. In Ryzen's case we learned that when certain threads were running on different CCXes, there was a very clear performance penalty. At least in Zen's case, latency within one CCX was reasonably low for threads that communicate a lot with each other.

Unfortunately, in the case of the 7900X, there ain't a single core pair with low latency.

[Image: latency-pingtimes.png (core-to-core ping time matrix)]


https://www.pcper.com/reviews/Proce...X-Processor-Review/Thread-Thread-Latency-and-
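For anyone curious how such ping times are typically measured, below is a minimal generic sketch of the usual approach: two threads pinned to different cores bounce a flag through a shared cache line, and half the round-trip time approximates the one-way latency. The core numbers and iteration count are my own assumptions; this is not PCPer's actual benchmark.

```c
/* Minimal core-to-core "ping-pong" latency sketch (Linux).
 * Build with: gcc -O2 -pthread pingpong.c -o pingpong
 * CORE_A, CORE_B and ITERS are assumptions - adjust for your machine. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS  1000000
#define CORE_A 0
#define CORE_B 1   /* try pairs on different parts of the ring/mesh */

static _Atomic int flag = 0;   /* the shared cache line being bounced */

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg) {
    (void)arg;
    pin_to_core(CORE_B);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ;                                                    /* wait for ping */
        atomic_store_explicit(&flag, 0, memory_order_release);   /* send pong    */
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    struct timespec start, end;

    pin_to_core(CORE_A);
    pthread_create(&t, NULL, ponger, NULL);

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);   /* send ping    */
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
            ;                                                    /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    pthread_join(t, NULL);

    double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
    printf("core %d <-> core %d: ~%.0f ns per one-way hop\n",
           CORE_A, CORE_B, ns / ITERS / 2.0);
    return 0;
}
```

Running it for every core pair and tabulating the results gives a matrix like the one in the linked review.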
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Unfortunately, in the case of the 7900X, there ain't a single core pair with low latency.

YUP. And it is pure madness to have 100 ns inter-thread communication latency. Broadwell-EP already had it bad with so many stops on the ring, but to "fix" scaling by raising latency even more is just wow. AMD has an amazing opening here; I'd say in virtualization workloads a 4-6 core CCX would be just fine. 40 ns versus what is probably 150-200 ns on Intel's MCC/HCC dies is going to win them customers.

Actually, I'd love to see Intel Memory Latency Checker results for the 10C Broadwell and Skylake-X side by side. I think that when memory load rises, average latency will degrade really fast and hard.
 

TheGiant

Senior member
Jun 12, 2017
748
353
106
I think when the number of cores per CCX increases to 6 and AMD improves a little of everything, Ryzen will be a champ even in gaming.
Do you think the high latency can somehow be improved by BIOS tweaks?
 

nathanddrews

Graphics Cards, CPU Moderator
Aug 9, 2016
965
534
136
In my limited understanding of Intel's mesh and AMD's fabric, the goals of each company are to improve the way in which multiple CPUs and multiple servers work together for massive data centers, not to make gaming better. As engineers at Intel/AMD work out their own latency issues (stepping, BIOS, tick/tock/talk, etc.), as developers program around the limitations, and as memory speeds increase, I think we'll be seeing some very good gains on v2.0 of mesh/fabric CPUs. No denying that program patches here and there will ultimately result in the best improvements. I'm sure Intel will have no problems getting partners to release some patches.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
I asked this earlier but no one responded; it was at the bottom of a page anyway. Has anyone seen any game reviews at 1080p or lower that test games that support it in both DX12 and DX11?