Is my R9 280X too hot?

Crockett12

Member
Dec 14, 2013
37
0
66
I’ve got a 2-year-old (purchased new) stock Gigabyte Windforce R9 280X that is crashing my computer when I play a video game. I’ve also got an NZXT Sentry LX fan controller that monitors temps, so I can raise the shutdown temperature threshold with the NZXT. I’ve raised my video card’s threshold to 90C, and the card has reached that under load. I’m playing a flight sim exclusively (Rise of Flight), and last night I manually shut down the game when it reached 89C. The idle temp is about 41C.

The video card isn’t overclocked, and I’ve blown out all dust from the card and the computer – and there is nothing next to the card to inhibit airflow. My case is a high-airflow Cooler Master HAF 932 Full Tower case with all fans working and set to the highest rpm – so I don’t think dust or airflow is the problem.

Two questions:
How do I fix it?
Or - is my video card toast or about to become toast?

Thanks in advance for any help!
Crockett12

Listed below are my specs:
OS: Windows 7 Professional 64-bit
Cooler Master HAF 932 Full Tower case
CPU: Intel Core i5-750
CPU cooler: Arctic Cooling Freezer 7 Pro Rev 2 92mm Fluid Dynamic
Mobo: MSI P55-GD65 LGA 1156
8 GB Memory: Corsair XMS3 (4 x 2 GB)
Boot Drive: Samsung SSD 250GB 850 Evo
Storage Hard Drive: Western Digital Caviar Black 1TB 7200 RPM SATA
DVD Drive: Lite-On 24X DVD Writer Black SATA Model iHAS-324-98
PSU: Thermaltake Black widow W0319RU-850 Watt
NZXT Sentry LX Fan Controller



 

Joepublic2

Golden Member
Jan 22, 2005
1,097
6
76
So is it actually crashing, or is the fan controller software shutting down the computer? 90C is an acceptable temperature for a GPU. Even around 95-105C it should start throttling before it actually crashes an app or Windows. That said, it's odd that it's getting that hot in the first place with a realistic workload; that's a great cooler. Are the fans spinning up when you load it? I had a 7950 with a similar cooler and it never broke 80C even with Furmark, although it did get a bit noisy.

If I were you, I'd disconnect your SSD and install a fresh copy of Windows and the latest drivers on your storage drive, along with that game, Furmark for testing the GPU, Prime95 for testing the CPU, and some software to crank up the GPU fans. Then see if you can replicate the crashing, to rule out any software/driver issues that might arise from an old Windows installation. If it's still crashing, it's probably an issue with the card or your PSU. It sounds to me like the cooler isn't making good contact with the GPU.
 

Despoiler

Golden Member
Nov 10, 2007
1,967
772
136
So is it actually crashing, or is the fan controller software shutting down the computer? 90C is an acceptable temperature for a GPU. Even around 95-105C it should start throttling before it actually crashes an app or Windows. That said, it's odd that it's getting that hot in the first place with a realistic workload; that's a great cooler. Are the fans spinning up when you load it? I had a 7950 with a similar cooler and it never broke 80C even with Furmark, although it did get a bit noisy.

If I were you, I'd disconnect your SSD and install a fresh copy of Windows and the latest drivers on your storage drive, along with that game, Furmark for testing the GPU, Prime95 for testing the CPU, and some software to crank up the GPU fans. Then see if you can replicate the crashing, to rule out any software/driver issues that might arise from an old Windows installation. If it's still crashing, it's probably an issue with the card or your PSU. It sounds to me like the cooler isn't making good contact with the GPU.

The 280X thermal throttle is 85C; the 290(X)'s is 95C. The Furmark power virus isn't a realistic test for GPUs and should not be used. Also, Prime95 is outdated for CPU testing; it will break a Haswell-E by overloading the VRMs. Use OCCT for GPU and CPU testing, and do not enable AVX on the CPU test. Asus Realbench is another safe test suite.

I would check your fan speed like CropDuster suggested. If it's built like the 290(X) versions, the dual-BIOS switch has one BIOS that runs the fans super silent while the card sits at the thermal throttle. The other has a normal fan curve with more fan noise but cooler temps. Last, if you installed the new Crimson drivers, make sure you got the hotfix version from Nov 30.
 

Joepublic2

Golden Member
Jan 22, 2005
1,097
6
76
It will break a Haswell-E by overloading the VRMs

Sounds more like Haswell-E is broken than Prime, then. If the CPU can't cope, it should downclock or inject some NOPs or something. Manufacturers need to stop blaming software for breaking their marginal designs; ditto on the Furmark. It's a perfectly valid test, and it even warns you it might break marginal hardware. Heaven or Valley would be a decent test if you don't want to use Furmark.
 
Last edited:

ClockHound

Golden Member
Nov 27, 2007
1,111
219
106
Just rebuilt the exact same card. The wimpy 75mm factory fans were failing and it was pushing above 80c.

Remounted the cooler using Kryonaut TIM. Replaced the VRM pads with FujiPoly and replaced the factory fans & shroud with 3x 92mm Scythe 2500rpm fans with a custom mount.

Result: 60c max under full load. VRM back plate is now moderately hot to the touch - previously it was a 3rd degree burn source.

[Image: customer-gpu-cooler1.jpg]
 

Crockett12

Member
Dec 14, 2013
37
0
66
I don't think the fan speed on the GPU changes from idle speed to load speed. I've got the big case fan on the side of the case that blows on the GPU at 1000rpm. It does increase to 1100rpm under load, but other than that none of the other fans sound any louder under load.
 

ClockHound

Golden Member
Nov 27, 2007
1,111
219
106
If the GPU fan speed isn't changing from idle to load, sounds like the Crimson driver bug.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I don't think the fan speed on the GPU changes from idle speed to load speed. I've got the big case fan on the side of the case that blows on the GPU at 1000rpm. It does increase to 1100rpm under load, but other than that none of the other fans sound any louder under load.

Step 1: Uninstall your existing drivers.
Step 2: Make sure to download the latest Crimson drivers.
Step 3: Download MSI Afterburner, set a custom fan profile, and force the Afterburner fan profile at launch.

Launch any application you want, like Unigine Heaven, and open MSI Afterburner + GPU-Z to monitor your GPU fan speeds/temperatures.
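For intuition, the custom fan profile in step 3 is just a mapping from GPU temperature to fan duty cycle. The sketch below is a hypothetical piecewise-linear curve of the kind you would draw in Afterburner's fan curve editor; the breakpoints are illustrative assumptions, not Afterburner's defaults or values for this card:

```python
# Hypothetical fan curve: (temperature in C, fan speed in %).
# These breakpoints are made up for illustration only.
CURVE = [(30, 30), (50, 45), (70, 70), (80, 100)]

def fan_speed(temp_c):
    """Interpolate the fan % for a given GPU temperature."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]          # floor: minimum fan speed
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]         # ceiling: fans pinned at 100%
    for (t0, s0), (t1, s1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between the two surrounding breakpoints
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)

print(fan_speed(41))  # → 38.25  (the OP's idle temp)
print(fan_speed(89))  # → 100    (the OP's load temp: fans flat out)
```

On this example curve, the OP's 41C idle would sit near 38% fan speed, while anything at or above 80C pins the fans at 100% — which is the behavior a working fan profile should show and a buggy driver does not.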

The video card isn’t overclocked, and I’ve blown out all dust from the card and the computer – and there is nothing next to the card to inhibit airflow. My case is a high-airflow Cooler Master HAF 932 Full Tower case with all fans working and set to the highest rpm – so I don’t think dust or airflow is the problem.

Another possibility is that the thermal paste (Thermal Interface Material, TIM) on the card has dried up and needs replacing. Your card should be one of the coolest R9 280X/7970 GHz Edition cards ever made:

[Images: Untitled-3.png, ztemps-xbt.png, 04_giga797_fr_big.jpg, 16_giga797_cool_big.jpg]


If the first fix (newer drivers and an MSI Afterburner fan profile with the fans running at 60-100%) doesn't solve the problem, then you should try disassembling the GPU heatsink. Use 90% isopropyl alcohol to wipe off the residue from the old thermal paste, then reapply fresh paste. Chances are the existing paste has gotten old and dry, and you might need to use a plastic tool to scrape it off or press hard with a Bounty paper towel + isopropyl alcohol.

Here is a video guide.
https://www.youtube.com/watch?v=qDLQ7FjPMf8
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
280X lacks the power efficiency of the 300 series, so of course it's hot

No, this is not a valid explanation. It is easily possible to have a 100W card that has superior perf/watt running way hotter and louder than a 250W card that has an inferior perf/watt.

1. The graphics card's temperature is a function of not only its power usage but the efficiency of its cooling system (type of cooling used: for air cooling it would be the heatsink's ability to dissipate heat, fan/airflow efficiency, TIM efficiency).

2. Ambient temperatures have a direct impact on a component's operating temperature. A graphics card operating in a household with 16-18C ambient temperature will run ~30C cooler than one operating in Africa with 46-48C ambient.

3. 280X uses less power than R9 390/390X cards which means using an identical cooling system, R9 390/390X will run hotter.

4. OP's card is known to be one of the coolest R9 280Xs because of its very efficient cooling system.

Gigabyte R9 280X OC 3 GB
[Image: temp.gif]


Conclusion: it is not normal for a Gigabyte Windforce R9 280X to operate at 90C under normal ambient temperatures, with properly working fans and with fresh, properly applied TIM that removes the air bubbles between the GPU die and the metal heatsink.
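The points above can be summarized with the usual first-order thermal model: die temperature is roughly ambient temperature plus power draw times the cooler's effective thermal resistance. The numbers below are illustrative assumptions (not measurements of any real card), chosen to show how a 100 W card with a poor cooler can run hotter than a 250 W card with a good one:

```python
def die_temp(ambient_c, power_w, r_theta):
    """First-order estimate: die temp = ambient + power * thermal resistance.
    r_theta (C per watt) lumps together heatsink, airflow, and TIM quality."""
    return ambient_c + power_w * r_theta

# Illustrative, made-up thermal resistances:
print(round(die_temp(22, 250, 0.18), 1))  # → 67.0  (250 W card, good cooler)
print(round(die_temp(22, 100, 0.60), 1))  # → 82.0  (100 W card, poor cooler)
```

The same model shows why ambient matters directly: every degree added to `ambient_c` adds a degree to the die, and why dried-out TIM (which raises `r_theta`) pushes load temperatures up even when power draw is unchanged.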
 
Last edited:

Crockett12

Member
Dec 14, 2013
37
0
66
Something I forgot to say – I’ve got Crimson drivers already downloaded and installed. It’s been a while since this has been happening and it may have started after I downloaded the Crimson Drivers – as I don’t remember the timing of it.

So I’m going to uninstall the Crimson Drivers and reinstall the previous drivers. Then fire up Rise of Flight and see what happens. If that doesn’t work, I’ll download MSI Afterburner like RussianSensation suggested and see if that’ll do it.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Something I forgot to say – I’ve got Crimson drivers already downloaded and installed. It’s been a while since this has been happening and it may have started after I downloaded the Crimson Drivers – as I don’t remember the timing of it.

So I’m going to uninstall the Crimson Drivers and reinstall the previous drivers. Then fire up Rise of Flight and see what happens. If that doesn’t work, I’ll download MSI Afterburner like RussianSensation suggested and see if that’ll do it.

There are 2 versions of the Crimson drivers: the original and the latest update. The latest update specifically fixes the fan % issue.

http://www.gamespot.com/forums/pc-a...users-watch-your-fan-speed-updated--32798943/

The new drivers came out just 3-4 days ago, so chances are you have the version with the bug.
 

Crockett12

Member
Dec 14, 2013
37
0
66
Well -- older drivers did the trick! First I rolled back to Catalyst 15.7.1 and the temp never got above 60c -- close but not above. I flew about half a mission and got into a furball with a couple of other planes, so there was lots going on. I finally quit the game only because I was getting some on screen flickering.

Then I rolled back to Catalyst 15.7 and flew another mission. The flickering is gone and the temp never got above 58.5c. I got into a big furball with several other planes and close to the ground, which should have increased the temp but it stayed 58.5c and lower! I wear earphones when I'm flying so didn't notice an increase in fan noise. Next time I fly I'll take the earphones off so I can listen to see if the GPU fans have increased rpm.

Next I'll see about MSI Afterburner and GPU-Z!

I never would have thought about drivers -- so -- thank you all! I'm happy!
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I never would have thought about drivers -- so -- thank you all! I'm happy!

Good stuff. Glad things worked out. BTW, AMD released Crimson 15.12 so at some point when you do upgrade to newer drivers, your previous fan issues should be gone too.
 

Despoiler

Golden Member
Nov 10, 2007
1,967
772
136
Sounds more like Haswell-E is broken than Prime, then. If the CPU can't cope, it should downclock or inject some NOPs or something. Manufacturers need to stop blaming software for breaking their marginal designs; ditto on the Furmark. It's a perfectly valid test, and it even warns you it might break marginal hardware. Heaven or Valley would be a decent test if you don't want to use Furmark.

I'm sure Intel would love to get your design expertise. Let us all know when the first chip is released with your design input. We will be happy to put it through its paces.

Furmark is not a valid test under any circumstance, because the workloads it generates will not be found in any other software. It's the same reason the way Prime95 uses AVX can overload the VRMs on Haswell-E: the workload will never be found in the real world.
 

MrTeal

Diamond Member
Dec 7, 2003
3,753
2,159
136
The 280X thermal throttle is 85C; the 290(X)'s is 95C. The Furmark power virus isn't a realistic test for GPUs and should not be used. Also, Prime95 is outdated for CPU testing; it will break a Haswell-E by overloading the VRMs. Use OCCT for GPU and CPU testing, and do not enable AVX on the CPU test. Asus Realbench is another safe test suite.

What are you basing that on? My 5930k will run Prime 28.5 with AVX at 4.8GHz all day long.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Sounds more like Haswell-E is broken than Prime, then. If the CPU can't cope, it should downclock or inject some NOPs or something. Manufacturers need to stop blaming software for breaking their marginal designs; ditto on the Furmark. It's a perfectly valid test, and it even warns you it might break marginal hardware. Heaven or Valley would be a decent test if you don't want to use Furmark.

Didn't you get the memo? Most of this forum has long understood that FurMark is useless, yet in late 2015 you bring up this topic again? Technically speaking, every test designed to be a power virus is synthetic, since it has nothing to do with real-world applications.

If an i7 6700K can hit 4.8-4.9GHz in every real-world application on my system on a Corsair H110i GT at 75C, but maxes out at 90-95C at 4.4GHz in the OCCT synthetic power virus, am I supposed to lower my CPU overclock for a program I'll never use? Stop spreading outdated ideologies.

GTX980 G1 uses 204W in gaming
GTX980 G1 uses 344W in Furmark power virus
https://www.techpowerup.com/reviews/Gigabyte/GeForce_GTX_980_G1_Gaming/25.html


:thumbsup:

"We coded Realbench to generate stress with real-world apps. It's a useful tool for people that encode, render or crunch numbers with their systems."

Emphasis on real world apps.

Next thing you know someone is going to tell us how we need a 1200W Platinum PSU for i7 5960X + GTX980TI SLI because that's what their system showed when they simultaneously ran OCCT/IBT + Furmark (example used for illustrative purposes).
 
Last edited:

MrTeal

Diamond Member
Dec 7, 2003
3,753
2,159
136

I've seen Raja's post before, but it's fairly sparse on details. Running a sustained AVX load might be an issue depending on your application, but it's hardly universal. For one, even running P95 my load temperatures are in the mid-60s. Intel doesn't publish any information that I've seen on a derating curve for their FIVR, but for a standard VRM, derating generally reduces maximum output current by 20-40%.* Similarly, load on the FIVR is instantly reduced 25% by having a 5930K vs a 5960X.

* See for example figure 9 here.
 

Joepublic2

Golden Member
Jan 22, 2005
1,097
6
76
Didn't you get the memo? Most of this forum has long understood that FurMark is useless, yet in late 2015 you bring up this topic again? Technically speaking, every test designed to be a power virus is synthetic, since it has nothing to do with real-world applications.

Memo it out your rear. If a device can't run code, synthetic or not, without melting itself, it's defective by design, period. My old 7950 and my new 970 will run Furmark all day without issue. Ditto for my CPU running Prime95 AVX. That's the first thing I do when testing components, in fact: load them up with power-virus synthetic tests. If they fail, that just tells me they were marginal in the first place. Just because you accept marginal equipment in your systems doesn't mean I do or that other people should.

I'm sure Intel would love to get your design expertise.

I'm not a CPU engineer (neither are you, most likely, given your attitude), but apparently they need to hire some better ones. It doesn't matter whether real-world apps can generate said load; the device should compensate for this contingency.
 
Last edited:

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Didn't you get the memo? Most of this forum has long understood that FurMark is useless, yet in late 2015 you bring up this topic again? Technically speaking, every test designed to be a power virus is synthetic, since it has nothing to do with real-world applications.

If an i7 6700K can hit 4.8-4.9GHz in every real-world application on my system on a Corsair H110i GT at 75C, but maxes out at 90-95C at 4.4GHz in the OCCT synthetic power virus, am I supposed to lower my CPU overclock for a program I'll never use? Stop spreading outdated ideologies.

GTX980 G1 uses 204W in gaming
GTX980 G1 uses 344W in Furmark power virus
https://www.techpowerup.com/reviews/Gigabyte/GeForce_GTX_980_G1_Gaming/25.html



:thumbsup:

"We coded Realbench to generate stress with real-world apps. It's a useful tool for people that encode, render or crunch numbers with their systems."

Emphasis on real world apps.

Next thing you know someone is going to tell us how we need a 1200W Platinum PSU for i7 5960X + GTX980TI SLI because that's what their system showed when they simultaneously ran OCCT/IBT + Furmark (example used for illustrative purposes).

Memo it out your rear. If a device can't run code, synthetic or not, without melting itself, it's defective by design, period. My old 7950 and my new 970 will run Furmark all day without issue. Ditto for my CPU running Prime95 AVX. That's the first thing I do when testing components, in fact: load them up with power-virus synthetic tests. If they fail, that just tells me they were marginal in the first place. Just because you accept marginal equipment in your systems doesn't mean I do or that other people should.



I'm not a CPU engineer (neither are you, most likely, given your attitude), but apparently they need to hire some better ones. It doesn't matter whether real-world apps can generate said load; the device should compensate for this contingency.


Can we just agree that different people have different opinions and that nobody is right or wrong?

Either way, the stuff about VRM damage is mostly paranoia. GPUs throttle and CPUs shut off the system to avoid damage from the rapid massive voltage spike (the issue is not related to clockspeed, which is why the CPU can't simply throttle itself to compensate; shutting down is the only safe option, though even that isn't risk-free).
 
Last edited:

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Can we just agree that different people have different opinions and that nobody is right or wrong?

What is this nonsense? Of course there are things which can be described as right approaches and wrong approaches. JoePublic thinks a synthetic, irrelevant power-virus benchmark is all the rage.

He even says real-world load doesn't matter to him if hardware fails said irrelevant benchmark.

The rest of the industry disagrees with him. Joe is certainly entitled to hold a stupid opinion. But it doesn't prevent the rest of us from pointing out that it is very stupid.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,200
126
Memo it out your rear. If a device can't run code, synthetic or not, without melting itself, it's defective by design, period. My old 7950 and my new 970 will run Furmark all day without issue. Ditto for my CPU running Prime95 AVX. That's the first thing I do when testing components, in fact: load them up with power-virus synthetic tests. If they fail, that just tells me they were marginal in the first place. Just because you accept marginal equipment in your systems doesn't mean I do or that other people should.
This! General-purpose compute devices should be capable of executing ANY arbitrary code that conforms to the ISA, no matter how much power it draws (or they should throttle execution so that platform power or thermal limits are not exceeded, for safety reasons).

Imagine if bridges were designed for "average" loads, and not worst-case loads, and one day, nothing but semi-trailers cross the bridge, and it collapses under the load. Would that level of engineering be acceptable? "You shouldn't have driven only semis over the bridge, it was designed for a mixture of semis and cars."

Thank God RS isn't a bridge designer (mechanical engineer). (Or are you?)
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,200
126
What is this nonsense? Of course there are things which can be described as right approaches and wrong approaches. JoePublic thinks a synthetic, irrelevant power-virus benchmark is all the rage.

He even says real-world load doesn't matter to him if hardware fails said irrelevant benchmark.

The rest of the industry disagrees with him. Joe is certainly entitled to hold a stupid opinion. But it doesn't prevent the rest of us from pointing out that it is very stupid.

Uhm, in case you didn't know, Prime95 is a real-world program. It's NOT a "synthetic power virus".