Delidded my i7-3770K, load temperatures drop by 20°C at 4.7GHz


Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I'd do it just to say I did it, but I would imagine it takes some specialized equipment and solder to do it...

You would only want to do it with a custom water block or the like. Doing it with something like an H100 would be stupid. Imagine if the pump went out. They aren't easily serviceable, and you'd be stuck with a processor glued to a broken unit.
IX and Liquid Metal Pro/Ultra are close enough. And yeah, I'd only do it if I had a (relatively) expensive setup, like cascade or a custom loop.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
I wonder if the CPU would still work if we took a piece of solder (of a similar type to what Intel uses) of the correct volume, set it on the die, heated the IHS, and then squeezed it down.... ;)
 

rge2

Member
Apr 3, 2009
63
0
0
The biggest problems in doing your own solder would be avoiding voids, getting the correct bondline thickness to avoid stress cracking, fatigue, etc. Intel spent millions developing indium solder and non-solder die attaches that had long-term stability, i.e., that did not suffer from the rapid performance degradation that users placing regular TIM on the TIM1 interface will see.

Here is the white paper the link above came from; it describes some of the technical difficulties with voids, cracking, and sTIM fatigue/failure when mating the die with various metals, etc.
http://download.intel.com/technolog...ials_Technology_for_Environmentally_Green.pdf

Also, the companies (http://www.indium.com/TIM/solutions/) that make indium solder die attaches won't sell them to you as an individual, even if you try to go through an LLC: they want an account, and all purchases must be in bulk (otherwise I would have already tried to solder mine myself, voids or not). Note that Intel attaches their solder in ~30 seconds: the temperature is ramped in 3-4 seconds to well past 157°C (the melting point of indium solder), held there for 20-30 seconds, and then cooled. My oven isn't doing that... but it would be interesting to give it a go the ghetto way.
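Out of curiosity, here is that reflow profile sketched in code (the ramp and hold timings are from the description above; the peak temperature and the cool-down rate are my own guesses). It makes obvious how far outside kitchen-oven territory Intel's process is:

```python
# Rough sketch of the indium reflow profile described above.
# Timings (3-4 s ramp, 20-30 s hold) are from the post; the 180 C peak
# and the cool-down rate are assumed for illustration.

MELT = 157.0   # C, melting point of the indium solder (from the post)
PEAK = 180.0   # C, assumed peak temperature

def temp_at(t):
    """Temperature (C) at t seconds into the reflow cycle."""
    if t < 4:                                  # fast ramp: ~40 C/s
        return 25 + (PEAK - 25) * t / 4
    if t < 29:                                 # hold above the melting point
        return PEAK
    return max(25.0, PEAK - 10 * (t - 29))     # cool-down (~10 C/s, assumed)

for t in [0, 2, 4, 15, 29, 35, 45]:
    state = "above" if temp_at(t) >= MELT else "below"
    print(f"t={t:2d}s  T={temp_at(t):5.1f}C  ({state} indium melting point)")
```

A home oven ramps at maybe 0.5°C/s on a good day, so the die and package would soak at high temperature for minutes instead of seconds.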

And Intel also describes in that paper why they went with an IHS and an 87 W/mK solder attach versus bare die: to keep up with the higher-TDP models coming out when the paper was written. (That reasoning does not apply to the current cost-cutting paste die attach, which takes advantage of Ivy's lower TDP at stock OEM settings... though they should be slapped for using a thermally poor/cheap die attach on K CPUs.)

The relentless progress of Moore’s law, leading to a doubling of transistor density in silicon chips every generation, drove the need to develop thermal solutions to dissipate additional heat generated in the silicon die. Consequently, Intel’s packages have evolved from a bare die solution catering to mobile market segments to Integrated Heat Spreader (IHS) lidded products in desktop and server market segments as shown in Figure 15 [4].

There are several technical and cost drivers to enable lidded thermal architecture, such as minimizing the impact of local hot spots by improving heat spreading, increasing the power-dissipation capability of the thermal solutions, expanding the thermal envelopes of systems, developing thermal solutions that meet business-related cost constraints, as well as developing solutions that fit within form-factor considerations of the chassis.

The primary role of the IHS is to spread the heat out evenly from the die and to provide better bondline control of the interface material. This can be achieved by increasing the area of the IHS and by using a high-thermal-conductivity thermal interface material with low interfacial resistances.
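To put rough numbers on why that 87 W/mK figure matters, here's a back-of-envelope comparison (my own sketch, not from the paper; the die area, bondline thickness, paste conductivity, and power are all assumed values):

```python
# Back-of-envelope TIM1 comparison: indium solder vs. a generic paste.
# 1-D conduction only: R = t / (k * A), dT = P * R. All inputs assumed.

die_area = 160e-6    # m^2 (~160 mm^2 Ivy Bridge die, assumed)
bondline = 50e-6     # m   (50 um bondline, assumed)
power    = 80.0      # W flowing through TIM1 (assumed)

for name, k in [("indium solder", 87.0), ("generic paste", 5.0)]:
    r = bondline / (k * die_area)    # thermal resistance, K/W
    print(f"{name:14s} k={k:5.1f} W/mK  R={r:.4f} K/W  dT={power * r:4.1f} C")
```

Even this crude 1-D model shows a 17x resistance difference; the real-world delta is larger still, since the stock paste bondline is reportedly thicker than 50 um and the heat flux is concentrated over hot spots.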
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
I thought about doing that, since I know someone who owns a company that does a significant amount of indium bonding. He normally works on things significantly larger than CPUs, and it's been a couple of years since I've personally had dealings with him, but our company is a big enough account, and I was his contact for so long (almost 10 years), that I think he might do a CPU as a personal favor for me.

However, I think even if he did that, it would cost a couple hundred bucks (maybe more) to make it worth his while, plus using up a networking favor. I decided it wasn't worth it, especially with my home computer usage on the decline.
 

mrob27

Member
Aug 14, 2012
29
0
0
www.mrob.com
It is really mind-blowing, to be frank, that Ivy Bridge is so limited in upper clockspeed. I am referring to the electrical parametrics in this case.

NMOS and PMOS drive currents were substantially improved at 22nm over 32nm, and yet the maximum clockspeed (fmax) just sucks; it didn't move at all.

The lower power consumption is great; that is the benefit of smaller xtors, which have less capacitance, and of 3D xtors giving better leakage-per-micron results. But clockspeed should have improved commensurately with the drive currents, and yet we don't see that in practice :confused:

From a device-physics position, I'm at a loss at the moment to satisfactorily explain the clock-limited aspects of my 3770K. I should have realized at least a 10% increase in clocks over my 2600K (5.5GHz operation at the same Vcc), and yet that isn't the reality, for anyone.

Since that hasn't happened, it raises the question of why Intel bothered to boost their Idrives at all. Ivy Bridge responds as if the 22nm Idrive is no better than what 32nm delivered.

I have never had trouble understanding this, but maybe while IDC is away, someone can set me straight.

Here's the only graph we have to go on. Bohr 2011 May 3rd, slide 20:

bohr-20110503-slide20-ann.png


Notice the "22 nm planar" curve (which was so labeled in the previous slide). This curve is sorta parallel to the 32nm planar curve, and there is the implication that the new tri-gate transistors give us a significantly different curve.

This chart shows us "gate delay vs. operating voltage," and it's not a chart I see often. But Intel has given us a lot of charts concerning drive current, which you mentioned. Here's Paolo (2005 March 3, slide 36) comparing 65 nm to 90 nm:

Paolo-20050303-slide36.png


Then we have Mistry showing the performance of 45nm (2007 Dec 9, slide 20):

Mistry-20071209-slide20.png


Despite the difference between "I_ON" vs. "I_DSAT", that same data seems to have been used for this stylized version, by Bohr (2008 Oct 20, slide 23):

Bohr-20081020-slide23.png


Then we got 32nm. This is Packan et al. (2009 Dec 11, figure 7):

Packan-20091211-fig7.png


But there's no drive-current data for 22 nm... Intel doesn't want to share it. Instead we have this voltage-vs-speed graph, which is a lot more visceral for overclockers anyway. I have to imagine that the leading "K" CPUs lie on the graph someplace like this:

bohr-20110503-slide21-ann.png


Overclockers will be pushing the voltage and gate delay down into the lower-right corner, which I've labeled "???" in tribute to the Gnomes of South Park. We don't know what those curves do over there.

I also want to point out the last bit of hard scientific data we ever got on Intel's tri-gate research, Chau et al. (2006 June 13th) where they thoroughly defend the importance of vertical fins:

Chau-20060613-slide7.png


We all know what happened there. :rolleyes: Chau says that tapered fins give "degraded SCEs". SCE is short-channel effect, and if you check that link you'll find links to a bunch of other things that I don't understand, but they don't sound too good.

So, my belief is that the lines intersect, and that's why we can't overclock the i7-3770K any higher than Sandy:

munafo-20120820-we-are-here.png


Please fix my drawing. I want to be educated!
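In case it helps anyone redraw it, here's the toy model I'd start from: the alpha-power law, where gate delay scales roughly as V/(V - Vt)^alpha. All the Vt/alpha numbers below are made up for illustration (Intel doesn't publish them); the point is just that two processes with different Vt and alpha give curves whose gap narrows as voltage rises:

```python
# Toy alpha-power-law delay model: delay ~ V / (V - Vt)**alpha.
# The Vt/alpha values are illustrative guesses, NOT Intel's numbers.

def delay(v, vt, alpha):
    return v / (v - vt) ** alpha

procs = {"32nm-ish": (0.35, 1.3),   # hypothetical planar process
         "22nm-ish": (0.25, 1.4)}   # hypothetical tri-gate process

for v in [0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3]:
    d32 = delay(v, *procs["32nm-ish"])
    d22 = delay(v, *procs["22nm-ish"])
    print(f"V={v:.1f}  32nm-ish: {d32:5.2f}  22nm-ish: {d22:5.2f}  "
          f"ratio: {d32 / d22:.2f}")
```

With these made-up parameters the 22nm advantage is ~28% at 0.7V but only ~14% at 1.3V, which is the "curves converging where overclockers live" story in miniature.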
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
I think that was the problem with that graph. It suggested that we'd be running below 1.0V, but it turns out we aren't using much less voltage than 32nm at all, so the graph is nearly meaningless. We can extrapolate, as you have attempted, but it is difficult to do so with the limited data presented.
 

mrob27

Member
Aug 14, 2012
29
0
0
www.mrob.com
I think that was the problem with that graph. It suggested that we'd be running below 1.0V, but it turns out we aren't using much less voltage than 32nm at all, so the graph is nearly meaningless. We can extrapolate, as you have attempted, but it is difficult to do so with the limited data presented.

Well, sure -- it's "operating voltage," which isn't "VCC". That's why I explicitly put "(O.V. to VCC relation varies by process)" on all three 2011 images.

But maybe O.V. is VCC, and we're really over in the 1.2-1.4 volt area off the right side of the chart, and Intel didn't want to show us that because the curves come really close together.

Intel's future (for volume, profit, etc.) is clearly in the low voltage, lower clock speed area. This is true in both the server space (high end) and the portable space (mainstream market, desktops are becoming a niche).
 

BonzaiDuck

Lifer
Jun 30, 2004
15,699
1,448
126
Mrob27 [etc. etc.]

I'm not sure there isn't a speed-advantage to an over-clocked IB versus an SB.

I just think you may only find it in a more modest range, like 4.6 to 4.8 GHz. Looking at 4.9 GHz suggests little or no advantage.

So that may well give support or relevance to the last graph in your post.

Just as an aside here, I've often been mystified by the logic and common sense of the "extreme game-system" enthusiasts who have a strong showing in these forums. SLI and CrossFire rigs have power-consumption implications. Fixed-vCore overclocking has more power-consumption implications than "Auto-VCORE, Offset and Turbo" voltage choices do. Four-drive RAID setups on either motherboard or PCI-E controller cards suck power in what seem like astronomical proportions compared to RAIDed SSDs or hybrid configurations.

Does Ivy Bridge offer a better margin of performance over Sandy Bridge in stock-turbo configurations? Power-wise, and adding in the better Intel HD 4000 iGPU, you'd think it would. I think we're already showing the thermal improvements that are possible.
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
To the OP: your temps are too high. 80s and 90s mean a quick death for the CPU.

Turn up the fan. The Core 2 series was concave, but these new Intel CPUs are less concave; they're more even and flat, for maximum contact with the block. Your 20°C doesn't matter; those load temps are scary.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
To the OP: your temps are too high. 80s and 90s mean a quick death for the CPU.

Turn up the fan. The Core 2 series was concave, but these new Intel CPUs are less concave; they're more even and flat, for maximum contact with the block. Your 20°C doesn't matter; those load temps are scary.

Take a deep breath and read this entire thread before posting. :biggrin:
 

BonzaiDuck

Lifer
Jun 30, 2004
15,699
1,448
126
Sorry, guys,

Happens to me just for being a careless old fart. But since we'd crossed paths in other threads, I would think you'd have been all over this "de-lidding" project like white on rice, like flies on s**t, like maggots on a dead bunny rabbit!!

I'm just sayin' . . . . :biggrin:
 

C.C.

Member
Aug 21, 2012
28
0
0
Just a heads-up then: don't waste your time checking this thread for the next week, I'm headed off to Ocean City, MD for a week at the beach with the family :)
..

I happen to live right outside Ocean City, MD! I am a long time member @ [H]ardforum, and have been following this thread since you first started it..

I will have some data on my 3770K, which has been delidded since I got it back in June..

I have a pure copper WB (Enzotech Sapphire, still one of the best) that is mounted directly to the die.. I am currently using MX-2, and once I see where my temps are, I will report back with o/c, temps, etc..
 

BonzaiDuck

Lifer
Jun 30, 2004
15,699
1,448
126
..

I happen to live right outside Ocean City, MD! I am a long time member @ [H]ardforum, and have been following this thread since you first started it..

I will have some data on my 3770K, which has been delidded since I got it back in June..

I have a pure copper WB (Enzotech Sapphire, still one of the best) that is mounted directly to the die.. I am currently using MX-2, and once I see where my temps are, I will report back with o/c, temps, etc..

I was really trying to defer posting this, but it's just a suggestion. I had heard of MX-2 maybe five years ago when I had already graduated from AS-5 to "home-made diamond" and finally IC Diamond. I failed to investigate further the composition of MX-2, figuring it was just another OEM's thermal paste, bundled with a heatsink, or generally available as a "satisfactory" TIM.

Since I don't have any data on MX-2 and you are using it, it would be useful [for somebody] to see a comparison [controlled hardware, speed, ambient temperature] with IC Diamond. I only say this because MX-2 has been promoted as carrying carbon, carbon-compound or carbonized particles:


ARCTIC MX-2 is the thermal compound offering high thermal conductivity and low thermal resistance to dissipate heat efficiently from CPU / GPU to the installed heatsink.

The ARCTIC MX-2 compound is composed of carbon micro-particles which lead to an extremely high thermal conductivity.
Around the time I discovered micronized diamond, a news article reported on an Asian-American researcher who had discovered ways to make the cheaper forms of carbon into TIM slurries, pastes, oils, etc., which reduced thermal resistance and increased thermal conductivity.

Of course, micronized diamond is carbon in crystallized form, and the paste made by Innovative Cooling is so loaded with diamond that it spreads like drying concrete on a warm day.
 

Puppies04

Diamond Member
Apr 25, 2011
5,909
17
76
To the OP: your temps are too high. 80s and 90s mean a quick death for the CPU.

Turn up the fan. The Core 2 series was concave, but these new Intel CPUs are less concave; they're more even and flat, for maximum contact with the block. Your 20°C doesn't matter; those load temps are scary.

Read post #1 comment post #285.

I like your style; you skip all that boring "filler" in these types of threads.
 

BonzaiDuck

Lifer
Jun 30, 2004
15,699
1,448
126
Is this thread still going? Buy a better HSF.

A better heatsink doesn't matter nearly as much if the heat transfer is bottlenecked between the processor die and the integrated heat spreader. Addressing the heatsink first might be the least effective way to attack the problem; it makes sense only as an additional last step.

So far, we've shown that replacing the original Intel thermal paste with a "so-so" commercial TIM reduces maximum load temperatures by at least 20°C. Some data here and elsewhere suggests a further reduction of maybe 8 or 9°C from liquid-metal or metal-pad applications. Since the H100 water cooler being tested here has about the same performance as a top-end heatpipe cooler like the NH-D14, these results generalize to the larger number of enthusiasts willing to trouble themselves with de-lidding.
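To make that bottleneck arithmetic concrete, here is a crude series-resistance sketch (every resistance below is an assumed, illustrative value, not a measurement):

```python
# Series thermal stack: die -> TIM1 -> IHS -> TIM2 -> heatsink -> air.
# All resistances (K/W) and the power figure are assumed values.

power = 100.0   # W (assumed)
stack = {"TIM1 (stock paste)": 0.25, "IHS spreading": 0.05,
         "TIM2": 0.05, "HSF to air": 0.20}

def die_rise(s):
    """Die temperature rise over ambient for a series stack."""
    return power * sum(s.values())

better_hsf  = {**stack, "HSF to air": 0.10}          # pricier cooler
better_tim1 = {**stack, "TIM1 (stock paste)": 0.05}  # delid + good TIM

print(f"baseline:          dT = {die_rise(stack):5.1f} C")
print(f"halve HSF R:       dT = {die_rise(better_hsf):5.1f} C")
print(f"fix TIM1 instead:  dT = {die_rise(better_tim1):5.1f} C")
```

With numbers in that ballpark, halving the heatsink's resistance buys about 10°C while fixing TIM1 buys about 20°C, which lines up with the de-lidding results reported in this thread.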

Otherwise, it seems we might have to wait for the Haswell releases, and perhaps face disappointment even then. Who knows? We at least know as much as this and other threads and posts at other forums. We know . . . that much . . . .
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
IDC, are you keeping the NT-H1 under there long term? As I've mentioned before, I start to have temp issues after a few months. I pulled it off earlier today, and the paste looked rather odd. Instead of being uniform, it appeared to have separated into a liquidy phase and a more solid, normal-looking phase. Sort of as if there were a solvent in it that never really evaporated, but had separated from the solid.

I'd be curious if you see something similar over time.

In the mean time, I'm trying AS5 under the IHS to see if I have better results after a few months, or if I also start to have high temps on the topmost core with that as well.


I must be doing something wrong. The AS5, which tested fine when I first applied it, only lasted 10 days before my temps were about 15°C higher. I've swapped back to NT-H1, but this is somewhat baffling. Is there something about the thermal expansion and contraction on these chips that just makes normal paste stop working properly?

We'll see how long this NT-H1 lasts this time ;)

I'm wondering if my H100 has some issue, and whether, when I remove it and put it back on, I am shaking the pump loose or whatever.
 

rge2

Member
Apr 3, 2009
63
0
0
With normal CPU cooling, where the heatsink rests on the IHS, there is direct pressure on the TIM space, acting to contract that space. As some TIM gets pumped out (curing), the heatsink just mates closer to the IHS under pressure, so no voids (air pockets) form. The same goes for using a heatsink on a bare die, though pump-out may still be somewhat of an issue on some high-power-density CPUs, just less so.

The TIM1 interface between the IHS and the die is different: there is a defined space that must be filled, and no pressure to contract that space when TIM is pumped out. So typically die-attach thermal interface materials are used, ones that are designed for that space and will not be pumped out under those conditions, which prevents voids. End-user TIMs, by contrast, will get pumped out of a defined space they were not designed for by thermal cycling. You might be able to find a TIM through trial and error that works OK between the IHS and the die, but user TIMs for TIM2 are designed not to get pumped out of a contracting space; maintaining a defined space is a different ball game. IDC, by lessening that space, may reduce the problem, but it's hard to say for sure; ideally you want that space to be able to contract, i.e., pressure on it.
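As a toy illustration of why those voids hurt so much (all input numbers assumed): if pump-out leaves air pockets over a fraction f of the die, the remaining contact area has to carry all the flux, so resistance scales roughly as 1/(1 - f):

```python
# Toy void model: air pockets cover a fraction f of the die, so heat
# effectively flows through the remaining (1 - f) of the contact area.
# R = t / (k * A * (1 - f)). All input values are assumed.

k, t, area, power = 5.0, 50e-6, 160e-6, 80.0  # W/mK, m, m^2, W (assumed)

for f in [0.0, 0.2, 0.4, 0.6]:
    r = t / (k * area * (1.0 - f))            # K/W
    print(f"void fraction {f:.0%}: R = {r:.3f} K/W, dT = {power * r:4.1f} C")
```

This ignores the air's own (tiny) conductivity and any spreading, but the direction is clear: a TIM that keeps voids from forming matters more than its headline conductivity number.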
 

BonzaiDuck

Lifer
Jun 30, 2004
15,699
1,448
126
With normal CPU cooling, where the heatsink rests on the IHS, there is direct pressure on the TIM space, acting to contract that space. As some TIM gets pumped out (curing), the heatsink just mates closer to the IHS under pressure, so no voids (air pockets) form. The same goes for using a heatsink on a bare die, though pump-out may still be somewhat of an issue on some high-power-density CPUs, just less so.

The TIM1 interface between the IHS and the die is different: there is a defined space that must be filled, and no pressure to contract that space when TIM is pumped out. So typically die-attach thermal interface materials are used, ones that are designed for that space and will not be pumped out under those conditions, which prevents voids. End-user TIMs, by contrast, will get pumped out of a defined space they were not designed for by thermal cycling. You might be able to find a TIM through trial and error that works OK between the IHS and the die, but user TIMs for TIM2 are designed not to get pumped out of a contracting space; maintaining a defined space is a different ball game. IDC, by lessening that space, may reduce the problem, but it's hard to say for sure; ideally you want that space to be able to contract, i.e., pressure on it.

Well, this could be the problem that makes it or breaks it. We have pictures of the original Intel TIM, courtesy of IDC. No one has tried IC Diamond yet, but some have tried Liquid Ultra. We'd like to hear again from those who've already deployed Liquid Ultra, to see whether they've run into any problems.

The IC Diamond stuff cures to a sort of gray rubbery material which still maintains a little flexibility, or so it seemed when I removed the bead of excess from a heatsink base and IHS.

If these latter two choices don't work, then I don't know what will.
 

Anth Seebel

Junior Member
Aug 22, 2012
14
0
0
My concern is the Liquid Pro dripping down out of the CPU and causing a short.

Might have to get some black rubber epoxy and seal up the bottom and sides, at least, so if any escapes it stays inside the IHS.

Another concern is the Liquid Pro itself and how it is applied. From what I've read, Liquid Pro is mainly used for surfaces that need very little "filling in" of the gaps (mainly flat surfaces -- so the die qualifies; not sure about the IHS and any concave/convex issues), whereas Liquid Ultra is less liquid, more paste-like, and designed to "fill in" the gaps. Having said that, my concern is: if there are any air gaps between the naked die and the IHS, will this easily burn out the die? I guess you could do a test application and check the spread, but that still doesn't ensure you don't have any air gaps. I guess that is why a TIM in paste form would be "safer" in this regard, as it is designed to fill in those air gaps better than, say, Liquid Pro? Any thoughts on this?

IDontCare mentioned that when putting the IHS back on, you need to watch out for shorting the IHS on the PCB standoffs or something? I'm not sure here. Do you mean the exposed gold contacts sitting on the PCB? If that is the case, does the IHS need to be placed on the PCB so as to avoid the gold contacts altogether, OR does one need to cover the gold contacts, making it even all the way around, and then place the IHS on?

Very awesome thread BTW. :D
 

zebrax2

Senior member
Nov 18, 2007
972
62
91
Is this thread still going? Buy a better HSF.

I think this thread isn't really about the chip having heat problems; it's more of an academic thing: finding the what-ifs and trying to draw some conclusions based on those results.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
I'm tempted to just go bare-die cooling at this point. I'm tired of re-working my IHS TIM.