
Why do we care so much about heat?

ShawnD1

Lifer
We buy expensive coolers for our processors just to make sure they run lower than 60C. Why do we do this? My 8800GTX video card is 82C right now and it doesn't get artifacts (calculations are repeatable). Shouldn't an AMD or Intel processor be able to withstand the same abuse as a video card?

On a side note, I've turned off the temperature control and I've overclocked my CPU to the point where it runs at almost 70C at full load. This thing still passes Prime 95 with flying colors, but we'll see how long it lasts. The CPU is two years old so I'm not overly concerned if it dies.
 
gpu and cpu are different.

GPU's have a much higher fault tollerance compared to CPUs.

You upgrade the sink on your cpu, because you overclock it, and when you overclock it, it generates more heat. Sometimes you push it so hard that the stock sink is no longer within acceptable cooling ranges.

On the gpu people upgrade that because they want a quieter solution.

Also i remember an intel engineer telling me each 10C you lower your cpu temps, you just effectively doubled its life.
 
Depends on the silicon, the architecture, and so on. At 1.5V (1.48V under load) on this processor, with my old 1000RPM Coolermaster fan, if I edged past 74C bits would immediately start flipping at random. Not before; at 74C I was fine. Then I upgraded to a 3000RPM fan with 4x the CFM, went to 1.55v in the BIOS, and stayed below about 60C with no bits flipping.

That 74C was the thermal barrier for my chip, just as Intel says (don't go past 73.2C for mine). That's when I knew that temps matter.
 
GPU's have a much higher fault tollerance compared to CPUs.

what do you mean? as in a circuit malfunction causing the entire component to stop working? i really don't want to use a video card that has incorrect outputs even if it still forms an image.

Originally posted by: aigomorla
Also i remember an intel engineer telling me each 10C you lower your cpu temps, you just effectively doubled its life.

really? that's just flat out wrong.

me personally, i dont give a crap about temps, only voltages.
 
We care because CPU max temps are usually closer to 90C, while GPUs are rated up to more like 120C before they shut off.

WHY there is roughly a 30C difference between the two maximums comes down, I think, to the way they are made/designed.
They can't withstand the same temperatures because they are not the same, similar to how a CPU can run close to 4GHz while a GPU would struggle to hit 1GHz; they are just different.
 
Originally posted by: dmens

Originally posted by: aigomorla
Also i remember an intel engineer telling me each 10C you lower your cpu temps, you just effectively doubled its life.

really? that's just flat out wrong.

me personally, i dont give a crap about temps, only voltages.

I don't know about the doubling-the-life-of-the-chip part, but lower temperatures (and lower voltages) will prolong the life span of the chip. At stock voltages, it's almost irrelevant, because the CPU would be upgraded long before it ever reached the end of its operational life span, even by the most conservative owner. Other parts would wear out well before the CPU.
 
Originally posted by: Bateluer
Originally posted by: dmens

Originally posted by: aigomorla
Also i remember an intel engineer telling me each 10C you lower your cpu temps, you just effectively doubled its life.

really? that's just flat out wrong.

me personally, i dont give a crap about temps, only voltages.

I don't know about the doubling-the-life-of-the-chip part, but lower temperatures (and lower voltages) will prolong the life span of the chip. At stock voltages, it's almost irrelevant, because the CPU would be upgraded long before it ever reached the end of its operational life span, even by the most conservative owner. Other parts would wear out well before the CPU.

Agreed...frankly, I don't know of any studies showing that to be right OR wrong (the doubled lifespan that is). However, higher temps certainly decrease any chip's lifespan...
 
Originally posted by: dmens
GPU's have a much higher fault tollerance compared to CPUs.

what do you mean? as in a circuit malfunction causing the entire component to stop working? i really don't want to use a video card that has incorrect outputs even if it still forms an image.

Originally posted by: aigomorla
Also i remember an intel engineer telling me each 10C you lower your cpu temps, you just effectively doubled its life.

really? that's just flat out wrong.

me personally, i dont give a crap about temps, only voltages.

All thermally activated failure mechanisms follow the Arrhenius equation, which by its very form results in rate-limiting energetics (i.e. an activation barrier) that manifest as a near doubling in the rate of kinetics for every 10°C increase in the ambient temperature of the reactants (the hydrogen and fluorine in the IC, for example).

This isn't flat out wrong, it's flat out basic physical chemistry, and unless the materials Intel uses for their CPUs have TDDB (time-dependent dielectric breakdown) properties unlike every other material used by every other IDM, then yes, you can be assured that a 10C increase in your CPU's operating temp imparts a near 1.8-2.0x decrease in the expected operational lifetime.

If you know anyone in a lifetime/reliability R&D (not production) dept, ask them how they do their accelerated lifetime studies and find out why they run them at both elevated temperatures and voltages.

(FWIW, in my capacity as a process development engineer at TI we were required to thoroughly characterize the device-level impact on operating lifetime for every process tweak or change we developed and proposed to go into the "baseline" for nodes under development...and I know our procedures were not industry outliers...)
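To put a rough number on the Arrhenius argument above, here is a minimal sketch of the thermal acceleration factor, AF = exp[(Ea/k)(1/T_low − 1/T_high)]. This is an editorial illustration, not something from the original post; the 0.7 eV activation energy is an assumed example value, since each real failure mechanism (TDDB, electromigration, etc.) has its own.

```python
# Illustrative sketch (assumed values, not process data): Arrhenius-style
# thermal acceleration factor AF = exp[(Ea/k) * (1/T_low - 1/T_high)].
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_low_c: float, t_high_c: float, ea_ev: float = 0.7) -> float:
    """Relative speed-up of a thermally activated failure mechanism when
    junction temperature rises from t_low_c to t_high_c (degrees Celsius)."""
    t_low_k = t_low_c + 273.15
    t_high_k = t_high_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_low_k - 1.0 / t_high_k))

if __name__ == "__main__":
    # With the assumed Ea ~ 0.7 eV, a 60C -> 70C rise roughly doubles the
    # failure rate, i.e. roughly halves the expected lifetime.
    print(f"60C -> 70C: {acceleration_factor(60, 70):.2f}x")
    print(f"60C -> 80C: {acceleration_factor(60, 80):.2f}x")
```

With that assumed activation energy, a 10C rise works out to roughly a 2x acceleration, which is the ballpark being debated in this thread.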
 
I think that most people's concerns about lowering their CPU temps are misguided. I let mine reach 9C from TJmax and run 24/7 at that point, and I can't see any evidence of degradation, at least not yet.
 
No, your premise is already wrong.

The reason you buy aftermarket heat sinks isn't to feel good about having lower temps, but rather to be able to use more voltage and obtain higher clocks without running into heat limitations; it gives you more headroom to play. An aftermarket heat sink basically shifts the limitation to voltage rather than heat.

The reason you don't buy expensive heat sinks for your video card is because you don't have any control over voltage, and increasing the clock speed by itself doesn't generate much more heat. So the only reason left would be noise if that bothers you, as someone already mentioned.
 
As far as CPU's go, I'm not sure why they need to be cooler, but it seems they do.

With GPU's like the 4870 it was kind of funny watching everyone get all worried about their 90C GPU. Some people don't understand that GPU temp is not the same as heat dissipation.

I think a lot of people think along the lines that a given card, say a 4870 @ 750/3600 with a cooler lowering the temp to 50C, is warming your room less than that same card at the same clock speeds with a different cooler showing a temp of 85C.
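That point is easier to see with a toy thermal-resistance model (a sketch with made-up numbers, not measurements from any real card): die temperature is roughly ambient plus power times the cooler's thermal resistance, while the heat dumped into the room depends only on the power drawn.

```python
# Toy model (illustrative numbers, not measurements): die temperature is
# roughly ambient + power * thermal resistance of the cooler. Two coolers
# with different thermal resistance give very different GPU temps while
# the card dissipates exactly the same heat into the room.
AMBIENT_C = 25.0
GPU_POWER_W = 150.0  # assumed board power at fixed clocks/load

coolers = {
    "stock cooler (higher thermal resistance)": 0.40,  # K per watt, assumed
    "beefy aftermarket cooler":                 0.17,  # K per watt, assumed
}

for name, r_theta in coolers.items():
    die_temp = AMBIENT_C + GPU_POWER_W * r_theta
    # Heat delivered to the room depends only on the power drawn,
    # not on how hot the die itself sits.
    print(f"{name}: die ~{die_temp:.0f}C, heat into room = {GPU_POWER_W:.0f} W")
```

With these assumed numbers the same 150 W card reads about 85C on one cooler and about 50C on the other, yet it warms the room identically.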
 
Originally posted by: ShawnD1
We buy expensive coolers for our processors just to make sure they run lower than 60C. Why do we do this? My 8800GTX video card is 82C right now and it doesn't get artifacts (calculations are repeatable). Shouldn't an AMD or Intel processor be able to withstand the same abuse as a video card?

On a side note, I've turned off the temperature control and I've overclocked my CPU to the point where it runs at almost 70C at full load. This thing still passes Prime 95 with flying colors, but we'll see how long it lasts. The CPU is two years old so I'm not overly concerned if it dies.

We care about temperatures because our chips have a max stable operating frequency that is temperature dependent for any given voltage.

Temperature increases the "noise" in the signal/noise ratio that electronics are attempting to operate with. For example, let's say your CPU runs at 3.6GHz with a Vcc of 1.5V and temps of 70C. For the same CPU, were your temps 80C (less efficient HSF), it is unlikely that you could run stable with the same 1.5V Vcore...you'd likely need to bump up the voltage even more (and thus cause an even higher temp increase).

At some point you could end up requiring more Vcc, and running at higher temps, than the chip can operate at and remain stable.

On the flip side, you could get a more efficient cooling method and, let's say, the temps drop from 70C to 60C...at that lower temperature you may find you no longer require 1.5V Vcore to be stable because the background thermal noise was reduced in going to 60C. So now maybe you can drop the Vcc a little (resulting in even slightly lower temps, because now you produce slightly less heat).

But perhaps more importantly, at the lowered operating temps you could keep the same Vcc (1.5V in this example) and bump up your overclock from the 3.6GHz you did at 70C to maybe 3.7 or 3.8GHz (raising temps again, but not by so much that you need to bump up the voltage).

This is the basic premise of water-cooling, vapor-phase cooling, and liquid nitrogen cooling. In all cases the primary advantage of the lowered operating temps is that they allow the chip to operate stably at any given GHz with less voltage.

My vapor-phase rig (B3-stepping Kentsfield QX6700) could operate at 4GHz on 1.56V, fully loaded and stable with small FFTs. Entirely made possible by the lowered operating temps.
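As a back-of-the-envelope illustration of that voltage/temperature feedback (a sketch with assumed numbers, not a model of any particular chip): dynamic power scales roughly with frequency times Vcore squared, and die temperature rises with power, so chasing stability with extra voltage also buys you extra heat, while a better cooler lowers the temperature at every operating point and hands that margin back as clock or voltage headroom.

```python
# Back-of-the-envelope sketch (assumed numbers, not a real chip model):
# dynamic power scales roughly with frequency * Vcore^2, and die temp
# rises with power, so chasing stability with more voltage also brings
# more heat -- the feedback described in the post above.
BASE_FREQ_GHZ = 3.6
BASE_VCORE = 1.50
BASE_POWER_W = 130.0      # assumed package power at the baseline point
AMBIENT_C = 25.0
R_THETA_C_PER_W = 0.35    # assumed effective thermal resistance of the HSF

def power_w(freq_ghz: float, vcore: float) -> float:
    """Dynamic power scaled from the baseline: P ~ f * V^2."""
    return BASE_POWER_W * (freq_ghz / BASE_FREQ_GHZ) * (vcore / BASE_VCORE) ** 2

def die_temp_c(p_watts: float) -> float:
    """Crude steady-state die temperature estimate."""
    return AMBIENT_C + p_watts * R_THETA_C_PER_W

for freq, vcore in [(3.6, 1.50), (3.6, 1.55), (3.8, 1.50)]:
    p = power_w(freq, vcore)
    print(f"{freq:.1f} GHz @ {vcore:.2f} V -> ~{p:.0f} W, ~{die_temp_c(p):.0f} C die temp")
```

Swapping in a lower R_THETA_C_PER_W (a better HSF, water, or vapor-phase) drops the estimated die temperature at every row, which is exactly the headroom described above.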
 
Originally posted by: soccerballtux
Lol, your 4GHz B3 still makes me laugh, because 4GHz on one was so unheard of.

Yeah, I wouldn't have believed it if I hadn't seen it myself; it's definitely a "shens"-worthy claim, but alas I ran it that way for months. Probably just had a real lucky B3, although the VID was 1.35V, which was near the top of the spec at the time (before they raised the spec to 1.5V max VID on the G0s).
 
This thread is a great example of how discussion in this cpu forum is so head-and-shoulders above the discussion in some other forums, e.g. the video card forum! I'm not trying to call out any members of the video card forum -- they're fine. But if you want a technical question answered, even if it's about graphics cards, this is the place to come. If the OP had asked this question in the video card forum, which he legitimately could have, no way he'd have gotten such an answer.

Maybe some of you cpu technical expert guys could check out the video card forum from time to time? It's not only about fun and games 😉
 
Originally posted by: Idontcare
This isn't flat out wrong, it's flat out basic physical chemistry, and unless the materials Intel uses for their CPUs have TDDB (time-dependent dielectric breakdown) properties unlike every other material used by every other IDM, then yes, you can be assured that a 10C increase in your CPU's operating temp imparts a near 1.8-2.0x decrease in the expected operational lifetime.

and you guys who said im flat wrong now owe me an appology.

thanks IDC.

Originally posted by: dmens
GPU's have a much higher fault tollerance compared to CPUs.

what do you mean? as in a circuit malfunction causing the entire component to stop working? i really don't want to use a video card that has incorrect outputs even if it still forms an image.

Originally posted by: aigomorla
Also i remember an intel engineer telling me each 10C you lower your cpu temps, you just effectively doubled its life.

really? that's just flat out wrong.

me personally, i dont give a crap about temps, only voltages.

GPU's have a threshold much higher then CPU's.
GPU also isnt anywhere temperature senstive when you overclock.

Any more challanges? Ive been overclocking way long then you have.

Originally posted by: JAG87
No, your premise is already wrong.

The reason you buy aftermarket heat sinks isn't to feel good about having lower temps, but rather to be able to use more voltage and obtain higher clocks without running into heat limitations; it gives you more headroom to play. An aftermarket heat sink basically shifts the limitation to voltage rather than heat.

The reason you don't buy expensive heat sinks for your video card is because you don't have any control over voltage, and increasing the clock speed by itself doesn't generate much more heat. So the only reason left would be noise if that bothers you, as someone already mentioned.

Yup... i totally agree.
 
Originally posted by: magreen
Originally posted by: aigomorla
Any more challanges? Ive been spelling way long then you have.

fixed. *giggle*

yeah i love to butcher spelling. 😛

i graduated from kolledge with ms word.
 
Originally posted by: magreen
Originally posted by: aigomorla
Originally posted by: magreen
Originally posted by: aigomorla
Any more challanges? Ive been spelling way long then you have.

fixed. *giggle*

yeah i love to butcher spelling. 😛

i graduated from kolledge with ms word.

:laugh: Good, glad you're not offended by my joke. :heart:

im pretty much kick back. I know my spelling is bad. And i know i need to fix it.

but im lazy.

cuz im lazy i have no room to get mad. 😛
 
Of course I could, but that's not the point. I'm the kind of guy who is also seriously thinking about replacing all the 120mm fans in my case with 200CFM replacements, for a rig which has pretty modest heat generation.
 
Originally posted by: aigomorla
Originally posted by: Idontcare
This isn't flat out wrong, it's flat out basic physical chemistry, and unless the materials Intel uses for their CPUs have TDDB (time-dependent dielectric breakdown) properties unlike every other material used by every other IDM, then yes, you can be assured that a 10C increase in your CPU's operating temp imparts a near 1.8-2.0x decrease in the expected operational lifetime.

and you guys who said im flat wrong now owe me an appology.

thanks IDC.

yeah except im talking about vias not gates and since current density is an inverse exponential relationship, that approximation is way too simplistic. current and temperature are related properties and it is exceedingly difficult to isolate either one.

GPU's have a threshold much higher then CPU's.
GPU also isnt anywhere temperature senstive when you overclock.

Any more challanges? Ive been overclocking way long then you have.

if you're gonna take that kind of attitude, use the correct terminology. fault tolerance is something quite different to what you are describing. oh, and the temp sensors on IC's are usually completely inaccurate when measuring localized hot spots which tend to be the locations of speed limiters. so even for what you are describing, your generalization has little meaning.

if you view that as a challenge that's fine, but for the record, you don't know how long i've been overclocking, and overclocking experience doesn't matter a damn when discussing IC reliability and failure modes.
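For readers following the via point above: interconnect electromigration lifetime is commonly modeled with Black's equation, MTTF ∝ J^(−n)·exp(Ea/kT), in which current density and temperature both appear, which is part of why they are so hard to disentangle. The sketch below uses assumed illustrative values for n and Ea, not real process data.

```python
# Rough sketch of Black's equation for electromigration in vias/interconnect:
#   MTTF = A * J**(-n) * exp(Ea / (k * T))
# Both current density J and temperature T appear, which is why (as noted
# above) it is hard to pin a failure purely on one or the other.
# n and Ea below are assumed illustrative values, not real process data.
import math

K_EV = 8.617e-5  # Boltzmann constant, eV/K

def mttf_relative(j_rel: float, temp_c: float,
                  n: float = 2.0, ea_ev: float = 0.9,
                  j_ref: float = 1.0, temp_ref_c: float = 70.0) -> float:
    """Lifetime relative to a reference operating point (j_ref, temp_ref_c)."""
    t = temp_c + 273.15
    t_ref = temp_ref_c + 273.15
    return (j_rel / j_ref) ** (-n) * math.exp(ea_ev / K_EV * (1.0 / t - 1.0 / t_ref))

if __name__ == "__main__":
    print(f"+10C at same current:      {mttf_relative(1.0, 80):.2f}x lifetime")
    print(f"+20% current at same temp: {mttf_relative(1.2, 70):.2f}x lifetime")
    print(f"both together:             {mttf_relative(1.2, 80):.2f}x lifetime")
```

Raising voltage in practice raises both current density and temperature at once, so the two terms compound, which is consistent with the point that isolating either effect is difficult.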
 
Originally posted by: Idontcare
We care about temperatures because our chips have a max stable operating frequency that is temperature dependent for any given voltage.

Temperature increases the "noise" in the signal/noise ratio that electronics are attempting to operate with. For example, let's say your CPU runs at 3.6GHz with a Vcc of 1.5V and temps of 70C. For the same CPU, were your temps 80C (less efficient HSF), it is unlikely that you could run stable with the same 1.5V Vcore...you'd likely need to bump up the voltage even more (and thus cause an even higher temp increase).

Great reply. This answered everything.
 