
Why high GPU temps ok?

orangat

Golden Member
I posted the same thread in the video forum but got no joy.

Anyway, why are the operating specs for the Nvidia 68xx series so hot? Up to 80C is considered normal by my card manufacturer, and 120C is the cutoff for clock throttling.

Is it because gaming cards are purposely designed for a short lifespan?
 
They allow higher temps because they can. At the temperatures we're looking at, you are unlikely to harm the silicon. In these chips, what you see is temperature-related data errors. Intel, for example, guarantees that if you keep your Pentium 4 3.4E chip between 5C and 73.5C, it will run at 3.4GHz with no errors. If the chip gets any hotter, the possibility of data errors exists.

I'm sure Nvidia is doing the same thing. They test their parts, find the temperature at which they start seeing errors, then set their thermal specifications accordingly. My guess on why they can spec a higher temp is that the clock speeds they are dealing with are much lower. The higher resistance of the copper interconnects at high temperatures causes more problems in higher-frequency systems.
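To put a rough number on that interconnect point, here's a back-of-the-envelope sketch in Python. The constants are standard handbook values for bulk copper, not anything from an Nvidia datasheet, and real on-die interconnect behaves more complicatedly than this linear model:

```python
# Copper resistivity rises roughly linearly with temperature, which eats
# into timing margin; the hit matters more at higher clock speeds.
RHO_20C = 1.68e-8   # ohm*m, copper resistivity at 20 C (handbook value)
ALPHA = 0.0039      # per degree C, temperature coefficient for copper

def copper_resistivity(temp_c: float) -> float:
    """Linear approximation: rho(T) = rho_20 * (1 + alpha * (T - 20))."""
    return RHO_20C * (1 + ALPHA * (temp_c - 20))

for t in (25, 73.5, 110):
    ratio = copper_resistivity(t) / RHO_20C
    print(f"{t:6.1f} C -> resistivity x{ratio:.2f} vs 20 C")
```

By this crude model a chip at 110C sees interconnect resistance around a third higher than at room temperature, which is why a part clocked lower can tolerate a hotter spec.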
 
I find it hard to believe a 110C operating temp (below the 120C threshold for clock throttling) cannot damage the GPU or at least drastically reduce its lifespan.

Operating temps for GPUs have risen a lot compared to CPUs, which have long been in the 60-70C range for Tcase.

My old TNT2 with passive cooling in the reference design ran hot, was unstable during gaming (never overclocked), and finally died right after its 1-year warranty expired. I just don't feel comfortable with Nvidia's specs in their reference designs. The thermal adhesive might have cracked, but I don't think that was the problem.
 
I seem to remember one oddball Intel cpu sometime back which ran too hot (and was unstable) but was put out to compete with an AMD chip.
 
Originally posted by: orangat
I find it hard to believe 110C operating temp (below 120C threshold for clock throttling) cannot damage the GPU or at least drastically reduce its lifespan.

What you "believe" has little to do with the physical reality of the chip. Higher temperatures will indeed reduce the life of the silicon; however, do you really care whether it has a 50 year expected life or a 20 year expected lifetime? That card will be so obsolete 5 years from now that it simply won't matter.

It comes down to trust - do you trust NVidia, or ATI, or Intel, or AMD to specify operating parameters that are compatible with a reasonable life for the part?
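For what it's worth, the usual way reliability engineers reason about "higher temps shorten life" is the Arrhenius model. A quick sketch in Python; the 0.7 eV activation energy is just a commonly quoted ballpark for silicon wear-out mechanisms, not a figure from any vendor, and the real number depends on the specific failure mode:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K
EA = 0.7        # assumed activation energy, eV (ballpark, not a vendor spec)

def acceleration_factor(t_use_c: float, t_stress_c: float, ea: float = EA) -> float:
    """Arrhenius acceleration factor: how much faster wear-out proceeds
    at junction temperature t_stress_c than at t_use_c."""
    t_use = t_use_c + 273.15      # convert to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea / K_B) * (1 / t_use - 1 / t_stress))

# e.g. running at 110 C instead of 70 C:
af = acceleration_factor(70, 110)
print(f"Wear-out roughly {af:.0f}x faster at 110 C than at 70 C")
```

So even an order-of-magnitude acceleration only matters if the baseline life was marginal to begin with, which is Frank's point about 50 years vs 20.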

/frank
 
Originally posted by: orangat
I seem to remember one oddball Intel cpu sometime back which ran too hot (and was unstable) but was put out to compete with an AMD chip.

The 1.13GHz Coppermine was unstable at its rated speed.

I'm not 100% sure if it was due to temperature though, and it was rather quickly recalled.
 
Originally posted by: FrankSchwab
What you "believe" has little to do with the physical reality of the chip. Higher temperatures will indeed reduce the life of the silicon; however, do you really care whether it has a 50 year expected life or a 20 year expected lifetime? That card will be so obsolete 5 years from now that it simply won't matter.

It comes down to trust - do you trust NVidia, or ATI, or Intel, or AMD to specify operating parameters that are compatible with a reasonable life for the part?

/frank

And why do you 'believe' an operating environment of 115C, or even 90C, would be quite alright for the GPU to last 20 years instead of 50, or even 5, which is cutting it a little close? It may be obsolete by then, but keeping it working costs less than a replacement card.

A designed life cycle of 10-15 years for a non-overclocked card is more reasonable.
 
Originally posted by: 3chordcharlie
.......
The 1.13GHz Coppermine was unstable at its rated speed.

I'm not 100% sure if it was due to temperature though, and it was rather quickly recalled.

I was pretty sure heat and voltages had something to do with it.
google - "We found some marginality in the part within certain temperatures within the operating range and certain code sequences (in applications)," said spokesman George Alfs.
 
I only have two questions or replies.
What makes you think that 120C is high? I work on industrial chips that run that hot 24/7/365 and are much more important than anyone's GPU.
Second, what would Nvidia or ATI gain by purposely lying to the public?
 
Originally posted by: Sparky19692
I only have two questions or replies.
What makes you think that 120C is high? I work on industrial chips that run that hot 24/7/365 and are much more important than anyone's GPU.
Second, what would Nvidia or ATI gain by purposely lying to the public?

I don't know if 120C is 'high'. I just bounced the question around and have gotten no assurance that 120C is normal. I just want a better answer than 'ya n00b, gpu's runs hotter than cpu's, don't worry about it'.

So if an electronics engineer or equivalent with knowledge of the 68xx GPU can state that 110C is just fine for the next 7, 10, or 15 years, that would be great.

Nvidia/ATI have as much to gain as they did in the recent PureVideo/HD WMV scandal, or as Intel did when it had to recall the 1.13GHz P3 and 900MHz Xeons. Companies will take the easy way out when faced with a situation where they stand to gain or lose a significant amount of revenue.
 