So, I'm trying to figure out whether one card is way out of spec, or what's going on here. (Bad thermal pads on the VRAM?)
Two identical (?) brand-new cards, installed into a Z170 / G4560 / 2x8GB DDR4-2400 Pro4S desktop ATX board with two PCI-E x16 slots at triple-slot spacing.
Turns out that the XFX "Raw II" RX 5700 XT cards (which supposedly have "fixed" or "better" cooling than the DD ("Double Dissipation") model) are, in effect, triple-slot cards on their own. When I installed them, there was VERY LITTLE clearance between the two for air to get into the dual fans.
While mining, the top card has GPU temps around 72C and VRAM temps of 96C (with a small Rosewill external USB fan shoved in front of both cards, mostly the top card), or 102C without the additional fan. The bottom GPU is running nice and cool at 54C, with VRAM temps around 82C.
Also, the top card is apparently drawing 105W while the bottom card is only drawing 80W, at the same undervolt settings (1350 MHz, 860 mV) and Power Limit (-20%). That seems like a bigger disparity than the temperature difference alone would explain. (Hotter chips draw more power, I know this.)
Is this RMA-worthy? Or "take apart GPU cooler and re-paste and re-apply thermal pads"? Or just "it's normal, par for the course"? Or "XFX has crap Q.C."?
It should be noted that the VRAM temp difference may also be partly due to the top GPU handling the Windows desktop; maybe things would even out if I plugged the HDMI into the bottom card for output instead. I may try that, or run the display output off the Intel onboard graphics so the GPUs only have to handle the mining load. Also, mining ETH is mostly a load on the memory, not the cores. I do NOT have the VRAM overclocked on either card (1750 MHz stock). Not with those VRAM temps.
Edit: Important! This PC is NOT in a cubby, and the side-panel is removed.