Is the black adhesive solid, like plastic, or does it have a rubbery feel to it?
EDIT: I was just looking at Coollaboratory's Liquid Ultra. It is described as being more viscous. I was also thinking that the IC Diamond would be a good TIM replacement -- non-conductive, wouldn't flow, etc. Once it dries, it has a rubbery feel to it. I don't see how nano-diamond particles are going to do damage to the CPU die.
But you could use them in combination. Since I don't know whether any conductors are exposed under the IHS -- someone earlier mentioned the approach of making a "silicone grommet" for use with these metal pastes -- the diamond paste would be just as effective for that purpose.
I also found an earlier thread -- IDC and others had contributed in the spring, around May -- with thoughts about expansion and contraction as a consideration in Intel's choice to use the thermal grease instead of the indium-silver solder. Those may be real concerns, but it's equally likely that they deliberately wished to make IB thermally limited. With the Liquid Ultra, it's going to flow a tad if it gets hot enough, so those other engineering concerns wouldn't apply.
Someone else speculated about the cost of doing it this way or that eroding the unit margin. For the costs cited, I can't see how it would have an impact on product demand if they simply passed the cost on to the consumer -- either way it's an insignificant part of the cost, and therefore of the price.
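As a back-of-the-envelope check (the dollar figures here are my own hypothetical placeholders, not the actual costs cited in that thread):

```python
# Hypothetical numbers: suppose fluxless solder plus the extra assembly
# step adds $2 of unit cost to a CPU with a $313 tray price (the 3770k's
# launch price). These are placeholder figures for illustration only.
extra_cost = 2.00
retail_price = 313.00

# Even fully passed on to the buyer, it's well under 1% of the price.
print(f"{extra_cost / retail_price:.1%}")  # prints "0.6%"
```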
I don't buy the conspiracy theory angle. It would be self-defeating in all their other product segments and the tradeoff would not make any sense at all.
What percentage of IB chips are going to be sold to non-OC'ers? 98% of them? (including server market and notebooks)
Operating temperature impacts power consumption, and power consumption is a big care-about in both the mobile and the server marketspaces.
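To put a rough number on that relationship: a common rule of thumb is that subthreshold leakage power roughly doubles for every ~10 C rise in junction temperature. A quick sketch with illustrative numbers (not measured Intel data):

```python
# Rule-of-thumb model: leakage power roughly doubles per ~10 C rise
# in junction temperature. All figures here are illustrative.
def leakage_power(p_ref_watts, t_ref_c, t_c, doubling_step_c=10.0):
    """Estimate leakage power at temperature t_c, given a reference
    leakage of p_ref_watts measured at t_ref_c."""
    return p_ref_watts * 2.0 ** ((t_c - t_ref_c) / doubling_step_c)

# A hypothetical chip leaking 10 W at 60 C leaks ~20 W at 70 C --
# a meaningful chunk of a server or notebook power budget.
print(leakage_power(10.0, 60.0, 70.0))  # prints 20.0
```

That exponential sensitivity is exactly why a worse TIM, and the hotter junction temps that come with it, forces lower clocks and conservative Vcc specs in those segments.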
There is no way that Intel's decision makers would elect to have their 22nm products underperform their potential in both the notebook and server segments just so they could field thermally limited 3570k and 3770k chips in the desktop segment.
So the tradeoff in going with the non-solder TIM under the IHS definitely costs Intel lots of money in terms of lost revenue potential: all their server and mobile chips are going to be clocked lower and spec'ed to operate at a higher Vcc because of the resulting higher operating temps. The decision to lose out on that revenue had to come on the heels of a serious trade-off, one that would otherwise have cost them even more money in the long run.
And the one thing that would have cost them more money in the long run is in-field fails that must be covered by warranty. No one wants that; the risk of it is one of the huge incentives process engineers have for scaling existing solutions instead of implementing brand-new solutions like HKMG or 3D xtors.
We had a huge in-field fail issue at TI when we were the first to introduce a lower-k dielectric material into the BEOL (we used HSQ while everyone else was still using SiO2 at the time), and we didn't do enough testing to capture the reality of stress-induced cracking that would happen in the dielectric from thermal cycling. Thereafter we were keen never to repeat that mistake; it was extremely costly (hundreds of millions of dollars in warranty charges).
With Intel changing so much of the xtor structure at 22nm, you can bet they did extensive/exhaustive internal testing to ensure the risks were fully identified and mitigated. It makes engineering and financial sense to conclude that the shift from soldering the IHS to the CPU at 32nm to just putting a nice malleable TIM at that interface at 22nm (at great expense in terms of lost revenue in all product segments) must have been done for stress and thermal-cycling reasons.
AMD chips were exactly like that for the longest time. Of course my experience is from years ago, but has it changed so much? I haven't had the opportunity to open up a newer AMD part; did either maker recently sell chips that were so very different from this?
While I realize that this isn't my $300 piece of silicon to risk, I wonder at all this talk of sanding the IHS and spreading on double layers of TIM. Sure, I seem to recall reading something about smaller processes having thinner, more fragile substrates, and thus dies... But damnit, is it so far out of the question to just stick a nice, massive copper heatsink (or waterblock) on top of a naked IB die? Didn't we all spend years clumsily clamping fat aluminum heatsinks onto naked dies not that long ago?
I did just that about a year ago with my GTX460 for fun. The trick of course is building the right standoffs so the die doesn't take the brunt of the mounting pressure, particularly so the pressure is even across the entire die.
This can be done with IB as well, of course; the retention bracket on the mobo will have to be removed (shouldn't be an issue -- it is bolted on, so it can be unbolted).
From my experience with delidding my GTX460, I'm definitely going to at least attempt to do the same with my 3770k at some point in the tests.