Why are higher GPU temps OK?

orangat

Golden Member
Jun 7, 2004
I know that modern cutting-edge GPUs pack more circuitry and generate more heat than CPUs.

What I don't yet understand is why the common consensus is that GPUs are OK even if they hit 80C at load. I have not come across a proper explanation for this. What makes a GPU more resilient to heat than a CPU? They are made of the same materials, so they should fail or degrade from heat in the same way, right?

Are GPUs designed to last only a few years since these cards are predominantly marketed to gamers, and is that why the maximum operating temps are so high?

 

Rock Hydra

Diamond Member
Dec 13, 2004
Hmm...I guess asking the manufacturers would be a good place to start.

Edit: I e-mailed ATI using your question. Hopefully I get a response. If so, I'll post it here or PM it to you.
 

uOpt

Golden Member
Oct 19, 2004
Tough question.

From my personal perspective, I don't care much what the GPU temp is, because the GPU won't shred my hard drive contents if it computes odd values.
 

uOpt

Golden Member
Oct 19, 2004
Originally posted by: Atomicus
aren't GPUs a form of superconductor? or am I wrong? :confused:

Only if you have *very* good cooling :D
 

Jeff7181

Lifer
Aug 21, 2002
Originally posted by: Atomicus
aren't GPUs a form of superconductor? or am I wrong? :confused:

Physically I don't think they're any different from CPUs. All made of silicon and copper and junk like that. My guess is that it has to do with operating speeds. Remember, modern GPUs don't go much over 600 MHz. The transistors don't have to cycle on and off as fast, so they can stand up to more heat before it affects them.
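
Rough numbers to show what I mean. The clocks are just typical of current parts, and the extra gate delay when hot is a figure I made up purely for illustration:

    # Back-of-the-envelope sketch: a fixed delay increase from heat eats a
    # much bigger share of a short (high-frequency) clock cycle.
    # The 0.05 ns extra delay is an invented illustrative number.

    cpu_hz = 3.5e9           # typical high-end CPU clock
    gpu_hz = 600e6           # typical high-end GPU clock
    extra_delay_s = 0.05e-9  # hypothetical added propagation delay when hot

    for name, hz in (("CPU", cpu_hz), ("GPU", gpu_hz)):
        period_s = 1.0 / hz
        print(f"{name}: cycle = {period_s * 1e9:.2f} ns, "
              f"heat delay eats {extra_delay_s / period_s:.1%} of it")

    # CPU: cycle = 0.29 ns, heat delay eats 17.5% of it
    # GPU: cycle = 1.67 ns, heat delay eats 3.0% of it

So the same absolute slowdown leaves the higher-clocked chip with far less of its cycle to spare.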
 

Jeff7181

Lifer
Aug 21, 2002
Originally posted by: Gamingphreek
Don't they use a different kind of silicon as well?

-Kevin

Well, it's all silicon... I doubt the silicon nVidia and ATi chips are made of has a higher melting point than the silicon Intel and AMD use.
Manufacturing techniques vary, I assume, but I don't think they're THAT different. I could be wrong, though; it's happened once or twice before. ;)
 

Navid

Diamond Member
Jul 26, 2004
I think this is an excellent question.

CPUs can get hot too, as far as reliability goes; there should be no permanent damage if a CPU gets up to 80 degrees. The problem is that when they get hot, they start causing errors, and an error in a CPU is catastrophic. You don't want the result of a calculation that affects something important to come out wrong!

If a GPU causes an error, there will be a wrong color on a pixel for 1/60 of a second. That is not a catastrophe. You may not even notice it.

This is just what I speculate. I do not have any way to confirm this.
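
As a toy illustration of that difference (made-up numbers, not a simulation of real hardware):

    # Toy sketch: a transient bit flip corrupts an accumulated CPU result
    # forever, while a flipped framebuffer pixel is simply overwritten on
    # the next frame.

    def flip_bit(value, bit):
        return value ^ (1 << bit)

    # "CPU" case: one corrupted addition poisons the final total.
    total = 0
    for i in range(1, 101):
        term = i
        if i == 50:             # transient error on one operation
            term = flip_bit(term, 3)
        total += term
    print(total)                # 5058 instead of 5050 -- permanently wrong

    # "GPU" case: one pixel is wrong for one frame, then redrawn correctly.
    framebuffer = [0] * 16
    framebuffer[7] = flip_bit(framebuffer[7], 3)  # glitch this frame
    framebuffer = [0] * 16                        # next frame: clean redraw
    print(framebuffer[7])                         # 0 -- the error is gone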
 

orangat

Golden Member
Jun 7, 2004
No permanent damage? A CPU core that heats up to 80C would have a drastically reduced lifetime.

Now, that might not be a problem for hobbyists, who tend to upgrade every major component every 6-18 months. But the majority of consumers, myself included, would like to know whether the high operating temperatures are safe enough to keep component lifetime in the 10-year range.
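
For reference, reliability engineers often model temperature-driven wear-out with an Arrhenius acceleration factor. A rough sketch of the idea; the 0.7 eV activation energy is just a commonly cited ballpark, not a figure for any specific chip:

    import math

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def arrhenius_acceleration(t_low_c, t_high_c, ea_ev=0.7):
        """How much faster wear-out proceeds at t_high_c than at t_low_c,
        per the Arrhenius model. ea_ev is a generic ballpark activation
        energy, not a value for any particular chip."""
        t_low_k = t_low_c + 273.15
        t_high_k = t_high_c + 273.15
        return math.exp((ea_ev / K_B) * (1.0 / t_low_k - 1.0 / t_high_k))

    # Running at 80C instead of 50C:
    print(arrhenius_acceleration(50, 80))  # ~8.5x faster aging, under these assumptions

Under those assumptions the hotter chip ages several times faster, but whether that still leaves a 10-year lifetime depends on the design's baseline rating.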
 

VirtualLarry

No Lifer
Aug 25, 2001
Originally posted by: orangat
No permanent damage? A CPU core that heats up to 80C would have a drastically reduced lifetime.
Then why does the thermal-monitor setting in my P4 rig's BIOS go up to 85C, and why do Intel's specs show 100C for max die temps? (That's in terms of hot spots, which are not located exactly where the on-die temp sensor is.)

Originally posted by: orangat
Now, that might not be a problem for hobbyists, who tend to upgrade every major component every 6-18 months. But the majority of consumers, myself included, would like to know whether the high operating temperatures are safe enough to keep component lifetime in the 10-year range.
I agree, it certainly doesn't sound that good for the chips, but... I don't think it's really all that different between CPUs and GPUs in terms of silicon temps and actual damage. I do think the variance in transistor switching speed due to temperature is a slightly more critical issue for CPUs than for GPUs, though, because of the higher overall frequencies, which leave far less margin for timing error.