The future of AMD in graphics


maddie

Diamond Member
Jul 18, 2010
5,156
5,545
136
This whole thread is pointless.
"Uuugh what will AMD do next?"
I mean, maybe release Navi?
And whatever comes after.
And another thing.
Until we're out of shrinks and GPUs turn into commodities.
See how funny it becomes?

If it still troubles you, exercise your free will and don't read anything. Banish it from your world. Why try to control others if no harm is done?
 
  • Like
Reactions: DarthKyrie

jpiniero

Lifer
Oct 1, 2010
16,823
7,264
136
Polaris doesn't support packed math, unless it's the PS4 Pro one or P22 (the KBL-G semi-custom chip).

Ah, you're right; but Polaris probably supports FP16 better, I think, so maybe you get some benefit. And Doom/Wolfenstein does support async compute, I guess, which helps Polaris out since there is excess compute power available that can't normally be used because of other bottlenecks in the GCN rendering pipeline.

In any case Intel is potentially a big problem, even if all they manage is to force AMD to cut prices.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
but Polaris probably supports FP16 better, I think, so maybe you get some benefit.
Same as Tonga iirc, native 1:1 FP16 support.
In any case Intel is potentially a big problem, even if all they manage is to force AMD to cut prices.
They're a big problem for either dGPU vendor, due to extreme OEM leverage and infinite engineering and marketing resources.

Any dGPU losses would hurt nV in a very unpleasant way, since they don't have a CPU, networking, or any other business to fall back on.
 

DrMrLordX

Lifer
Apr 27, 2000
22,931
13,014
136

Um. 15W?

Using Superposition 1080p medium as the benchmark, the best perf/watt I ever got out of Vega FE was from a 0.15 V undervolt, +50% power limit, and max fans. That got me 510 W power draw from the wall (idle power = 140 W) for a score of ~15000. By contrast, at stock it would score around 13200 or so while pulling 470 W; it was awful and throttled all over the place. At least it would hold clocks using the above configuration.

With Radeon VII, the best perf/watt I've gotten from it to date is to just undervolt it by 0.14 V and let 'er rip. Same benchmark, same settings: a score of ~17000 with a 370 W power draw (same idle power: 140 W).

Radeon VII's comparative efficiency blew me away.
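
For anyone who wants to sanity-check that, here's a quick back-of-the-envelope script (Python) using the numbers above. It treats wall power minus the 140 W idle figure as a rough stand-in for the card's load power, which ignores PSU efficiency and the rest of the system ramping up under load, so take the ratios as illustrative only.

# Rough perf/watt from the Superposition runs quoted above.
# Wall power minus idle is only a crude proxy for the card's own draw.
IDLE_W = 140

runs = {
    "Vega FE stock":            {"score": 13200, "wall_w": 470},
    "Vega FE -0.15 V, +50% PL": {"score": 15000, "wall_w": 510},
    "Radeon VII -0.14 V":       {"score": 17000, "wall_w": 370},
}

for name, r in runs.items():
    load_w = r["wall_w"] - IDLE_W
    print(f"{name:<26} {r['score'] / load_w:5.1f} points/W "
          f"({r['score']} pts / {load_w} W)")

By that crude measure the undervolted Radeon VII lands around 74 points per watt versus roughly 40 for either Vega FE configuration, which is why the comparison is so striking.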
 
  • Like
Reactions: guachi and Feld

Feld

Senior member
Aug 6, 2015
287
95
101
Um. 15W?

Using Superposition 1080p medium as the benchmark, the best perf/watt I ever got out of Vega FE was from a 0.15 V undervolt, +50% power limit, and max fans. That got me 510 W power draw from the wall (idle power = 140 W) for a score of ~15000. By contrast, at stock it would score around 13200 or so while pulling 470 W; it was awful and throttled all over the place. At least it would hold clocks using the above configuration.

With Radeon VII, the best perf/watt I've gotten from it to date is to just undervolt it by 0.14 V and let 'er rip. Same benchmark, same settings: a score of ~17000 with a 370 W power draw (same idle power: 140 W).

Radeon VII's comparative efficiency blew me away.
Yes, RVII is noticeably faster than V64 and also consumes less power. AMD just overvolts the hell out of their chips at stock settings. Stock voltage shouldn't be used to determine the chip's efficiency.
 

tajoh111

Senior member
Mar 28, 2005
346
388
136
Yes, RVII is noticeably faster than V64 and also consumes less power. AMD just overvolts the hell out of their chips at stock settings. Stock voltage shouldn't be used to determine the chip's efficiency.

As the undervolt database shows, these cards were not really overvolted this time; the stock voltage is more representative of the range needed to cover a wide spread of silicon variance. We see some cards accepting only a 19 mV undervolt, less than a 2% reduction, which is what AMD needs to take into account when setting voltages and clocks.

https://docs.google.com/spreadsheets/d/1Iim9e_ejX3nkgxLIZ3vLu1seQ1m0lDTKUhClJpAO-Gk/edit#gid=0

Although there are some cards that undervolt really well (170 mV), that doesn't really factor into the equation, because we are talking about reducing RMA rates. If AMD reduced voltage by 170 mV, literally 100% of the samples would fail, including the golden card here, because you have to guarantee stability for the life of the card across all applications, not just the limited testing in the chart above.

So how do we work out how much voltage these cards actually need? Look at the results running around 1800 MHz; those are the cards clocked to stay at stock clocks. Now look at the undervolt that was applied. Outside of the extremes noted above, we generally see a range from about 59 mV up to roughly 120 mV, with an average of around 90 mV.

From this we can estimate that simply undervolting by 60 mV, less than 5%, would cause about 6% of the samples to fail within days of testing with a limited number of stability tests. Extend that to three years of use, silicon degradation over time, and hundreds more applications, and that number could easily rise to 50%. The 6% figure alone is an unacceptable failure rate after only a few days of use with limited testing. Undervolt significantly, to that 90 mV average, which is less than a 10% undervolt, and you have at least half the samples failing within a few days of use with limited testing. Extend that to the lifetime of the card and across a wide variety of applications and you have close to a 100% failure rate outside of golden cards. Taking this all into account, these cards are absolutely not overvolted. Increasing the efficiency of the card by 6-10% initially (a small undervolt of 60 mV) versus potentially half the cards failing over three years is an easy decision if the cost of the RMAs falls on you and your board partners.
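
To make that kind of estimate concrete, here is a minimal sketch (Python) of the reasoning. It does not use the spreadsheet data directly; it simply assumes, purely for illustration, that stable undervolt headroom across cards is roughly normally distributed around 90 mV with a ~30 mV standard deviation (so the 59-120 mV range above covers about one sigma either side), and asks what fraction of cards would be left with no margin at a given factory undervolt.

# Illustrative model only: per-card undervolt headroom treated as a normal
# distribution with assumed parameters, not the actual spreadsheet data.
from statistics import NormalDist

headroom_mv = NormalDist(mu=90, sigma=30)  # assumed mean/spread of stable headroom

for factory_offset_mv in (20, 60, 90, 170):
    # A card is "at risk" if its stable headroom is smaller than the voltage
    # reduction applied at the factory (no margin for aging or worst-case loads).
    at_risk = headroom_mv.cdf(factory_offset_mv)
    print(f"factory undervolt of {factory_offset_mv:3d} mV -> "
          f"~{at_risk:.0%} of cards left with no margin")

The exact percentages depend entirely on the assumed distribution, but the shape of the argument is the same: the at-risk fraction climbs very quickly once the factory undervolt approaches the typical headroom, before you even account for degradation over the card's lifetime.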
 
  • Like
Reactions: ozzy702

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
We see some cards accepting only a 19 mV undervolt, less than a 2% reduction, which is what AMD needs to take into account when setting voltages and clocks.
Or they could just, like, bin their GPUs tighter.
Doesn't apply to this very limited volume product, but still.
 
  • Like
Reactions: prtskg

tajoh111

Senior member
Mar 28, 2005
346
388
136
Or they could just, like, bin their GPUs tighter.
Doesn't apply to this very limited volume product, but still.

Tighter binning simply increases the number of rejected chips and further reduces volume, which is already pretty limited; with the higher BOM for this card, that is unacceptable. AMD is likely binning to some extent already, with the best chips going into Instinct cards.

For something with a high cost of production, it makes more sense to apply higher voltage to maximize yields so they don't have to throw away a chip that is perfectly acceptable when slightly higher volts are applied.

In this case, applying higher volts to increase the pass rate and volume is the correct decision to maximize yields and minimize losses.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
In this case, applying higher volts to increase the pass rate and volume is the correct decision to maximize yields and minimize losses.
Which is exactly what I said; reading is hard, whatever.
But hopefully they bin Navi dies tighter, these are rather small.
 

tajoh111

Senior member
Mar 28, 2005
346
388
136
Which is exactly what I said; reading is hard, whatever.
But hopefully they bin Navi dies tighter, these are rather small.

It's not really about binning as much as reducing the variance of their chips, which makes binning less necessary.

The reason AMD generally needs to apply high voltage is that the sample variance is really high. That means quality from chip to chip can be really different.

You can tell by how much difference there is between the worst and best overclocking chips. Polaris 10 was particularly notable for this: some cards only overclocked 40 MHz to 1300 MHz, while others overclocked 240 MHz to near 1500 MHz. That means there is a high degree of variance. The GTX 1060, on the other hand, overclocked to 2000 MHz +/- 50 MHz, basically a 5% difference between the worst and best chips. AMD, by contrast, has a huge silicon lottery; referring back to Polaris 10, that spread represents around a 20% difference between the best and worst silicon. The bigger the variability, the higher the voltage needed to stabilize a wide range of samples.

The 7970 was even worse for this: some chips would only get to 1000 MHz while others would do close to 1300 MHz, a tremendous 300 MHz, or 30%, difference between the worst and best samples. The 290X was also really bad for this. This has been a consistent weakness for AMD, and it means they can't be as aggressive when setting their voltage.
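
The spread arithmetic behind those percentages is simple enough to write down. Here's a quick sketch (Python) using the approximate worst/best maximum clocks quoted above; treat the endpoints as rough community figures, not measured data.

# Relative spread between the worst and best overclocking samples,
# using the approximate maximum clocks quoted above (MHz).
clock_ranges = {
    "Polaris 10 (RX 480/580)": (1300, 1500),
    "GTX 1060":                (1950, 2050),
    "Tahiti (HD 7970)":        (1000, 1300),
}

for chip, (worst, best) in clock_ranges.items():
    spread = (best - worst) / worst
    print(f"{chip:<24} {best - worst:4d} MHz gap (~{spread:.0%} of the worst sample)")

Depending on whether you measure against the worst sample's final clock or the reference clock, Polaris 10 comes out somewhere around 15-20%; either way it dwarfs the GTX 1060's ~5%, which is the variance point being made here.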

As a result, binning doesn't really do much aside from increasing the number of samples that make it into specialized applications like Instinct, laptops, or Apple iMac Pros. What you want is to reduce the variability of the chips, so that at a given voltage fewer chips fail. If you had zero variance, which is near impossible, you could pick one voltage, apply it to all the chips, and they would all overclock and undervolt to the same degree. In essence you would not need to bin at all.

This comes down to refinement of the silicon, and this is where Nvidia shows its advantage over AMD. Nvidia spends a lot on the refinement process, which means the variability is greatly reduced. What does this mean? Nvidia chips are generally boring overclockers, with low variance between the worst and best samples, and Nvidia is able to get closer to their optimum clocks and voltages.

One of the reasons why they can do this better than AMD when it comes to variance is money. Look at this video and you can understand why.

https://www.youtube.com/watch?v=pRz_CG3DZb4

These are Nvidia's failure labs and simulators, which are state of the art and likely outside the budget of AMD, particularly their GPU division. They are able to quickly determine the behavior of transistors while the chip is actually running, to detect failures and anomalies, and also to run simulations with the chip before it is made. Although AMD can do some of this, looking at the prices of the equipment and the budget AMD has for R&D, I highly doubt their lab is as good as this, and it shows in the product variance. In addition, this lack of equipment is why we see so many rebrands and reiterations of the same product from AMD. Because of it, AMD corrects their products more slowly, and the potential for improvement shows up in a respin or revision of the same chip. Where Nvidia can launch products with optimum polish out of the gate, it can take AMD three attempts, as the RX 590 shows, to get the silicon into a polished state.
 
  • Like
Reactions: MangoX and ozzy702

Mopetar

Diamond Member
Jan 31, 2011
8,491
7,750
136
The reason AMD generally needs to apply high voltage is that the sample variance is really high. That means quality from chip to chip can be really different.

You can tell by how much difference there is between the worst and best overclocking chips. Polaris 10 was particularly notable for this: some cards only overclocked 40 MHz to 1300 MHz, while others overclocked 240 MHz to near 1500 MHz. That means there is a high degree of variance. The GTX 1060, on the other hand, overclocked to 2000 MHz +/- 50 MHz, basically a 5% difference between the worst and best chips. AMD, by contrast, has a huge silicon lottery; referring back to Polaris 10, that spread represents around a 20% difference between the best and worst silicon. The bigger the variability, the higher the voltage needed to stabilize a wide range of samples.

That's probably just a result of better binning by NVidia. Unless there are certain design characteristics of AMD cards that lead to greater variability, it's fair to assume that chips from both companies follow a normal distribution (bell curve), but because NVidia does a better job of binning, you only get cards in a narrow band. The worst performers are probably relegated to a cut-down card, and the best might be set aside for sale to third-party manufacturers that want to sell a souped-up special edition card.
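
A toy simulation of that idea (Python), with entirely made-up numbers, just to show how sorting dies from the same bell curve into bins narrows what buyers of any one SKU actually see:

# Toy model: dies come from one bell curve of maximum stable clocks (MHz),
# but get sorted into bins before sale. All numbers are hypothetical.
import random

random.seed(0)
dies = [random.gauss(1400, 60) for _ in range(10_000)]

cut_down   = [d for d in dies if d < 1350]          # salvage SKU
mainstream = [d for d in dies if 1350 <= d < 1480]  # the volume SKU
premium    = [d for d in dies if d >= 1480]         # factory-OC partner cards

def spread(samples):
    return max(samples) - min(samples)

print(f"all dies:       {spread(dies):5.0f} MHz spread across {len(dies)} samples")
print(f"mainstream bin: {spread(mainstream):5.0f} MHz spread across {len(mainstream)} samples")
print(f"bins: {len(cut_down)} cut-down / {len(mainstream)} mainstream / {len(premium)} premium")

Buyers of the mainstream bin see far less silicon lottery even though the underlying distribution is identical, which is the effect being attributed to NVidia's binning here.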

The other side of it is that NVidia currently has the more efficient architecture, so AMD needs to push their cards harder to keep performance parity. NVidia did the same thing with Fermi when it was AMD that had the upper hand. Both companies act in a way that suggests they believe that when it comes down to it, customers care more about performance (or performance/price) than any other metric.
 

Guru

Senior member
May 5, 2017
830
361
106
See how funny it becomes?

If it still troubles you, exercise your free will and don't read anything. Banish it from your world. Why try to control others if no harm is done?
That is the point though. Polaris is much more advanced than Pascal in terms of Vulkan and DX12; it took Nvidia essentially three years to come up with Turing, which levels the playing field.

Let's wait and see if Navi improves on the low-level APIs and regains the lead there. There is no doubt that the RX 580 beat the 1060 6GB in 90% of low-level API games. Even in ROTTR, which the benchmark had the 1060 6GB winning slightly, in-game the RX 580 was essentially 7-8 fps faster on average.
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
Edit: Sorry, that was childish. What I was intending to point out was the irony of someone claiming that RTX has an IQ advantage due to RT, when the only way to run it with any sort of gaming performance is along with DLSS, which some YT reviews (which may be out of date; NVidia claims to have improved DLSS for some AAA games recently) accused of "smearing" and of looking worse than regular AA with the rendering resolution scale turned down.
Metro's DLSS was fixed; it's now on par with native res with TAA.
And RTX runs fine in Metro Exodus on all GPUs.
 

Feld

Senior member
Aug 6, 2015
287
95
101
As the undervolt database shows, these cards were not really overvolted this time; the stock voltage is more representative of the range needed to cover a wide spread of silicon variance. We see some cards accepting only a 19 mV undervolt, less than a 2% reduction, which is what AMD needs to take into account when setting voltages and clocks.

https://docs.google.com/spreadsheets/d/1Iim9e_ejX3nkgxLIZ3vLu1seQ1m0lDTKUhClJpAO-Gk/edit#gid=0

Although there are some cards that undervolt really well (170 mV), that doesn't really factor into the equation, because we are talking about reducing RMA rates. If AMD reduced voltage by 170 mV, literally 100% of the samples would fail, including the golden card here, because you have to guarantee stability for the life of the card across all applications, not just the limited testing in the chart above.

So how do we work out how much voltage these cards actually need? Look at the results running around 1800 MHz; those are the cards clocked to stay at stock clocks. Now look at the undervolt that was applied. Outside of the extremes noted above, we generally see a range from about 59 mV up to roughly 120 mV, with an average of around 90 mV.

From this we can estimate that simply undervolting by 60 mV, less than 5%, would cause about 6% of the samples to fail within days of testing with a limited number of stability tests. Extend that to three years of use, silicon degradation over time, and hundreds more applications, and that number could easily rise to 50%. The 6% figure alone is an unacceptable failure rate after only a few days of use with limited testing. Undervolt significantly, to that 90 mV average, which is less than a 10% undervolt, and you have at least half the samples failing within a few days of use with limited testing. Extend that to the lifetime of the card and across a wide variety of applications and you have close to a 100% failure rate outside of golden cards. Taking this all into account, these cards are absolutely not overvolted. Increasing the efficiency of the card by 6-10% initially (a small undervolt of 60 mV) versus potentially half the cards failing over three years is an easy decision if the cost of the RMAs falls on you and your board partners.
Then why are the stock voltages all over the place for different cards? Most cards seem able to shave off 100 mV at stock clocks, even if there are outliers that can't. If AMD were merely setting a voltage that everything could handle, shouldn't all cards have the same stock voltage? There's more than a 100 mV spread in what's being reported as stock settings for different cards, and yet most cards can still be significantly undervolted. AMD is just doing a poor job, as usual, of setting proper voltages for their Radeon VII cards, but that isn't reflective of what the silicon actually requires.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
AMD is just doing a poor job, as usual, of setting proper voltages for their Radeon VII cards, but that isn't reflective of what the silicon actually requires.
Tight binning is a bad idea for a rather limited volume SKU.
 
  • Like
Reactions: DarthKyrie

Feld

Senior member
Aug 6, 2015
287
95
101
Tight binning is a bad idea for a rather limited volume SKU.
Maybe, but that's beside the point that Radeon VII is quite a bit more efficient than Vega 64, and that stock voltages don't necessarily reflect that as well as they could.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
and that stock voltages don't necessarily reflect that as well as they could.
Most people bench perf/watt at stock, so they do reflect it.
AMD will definitely need to bin Navi tighter, since the new power management is also there.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
I don't know. Definitely for gamers right now AMD doesn't have much. Even firesale 580s are a tough ask. I bought one here a few months ago for my nephew, and it was really disappointing. Hot, loud, inefficient (XFX XXX I think, have to check my posts in FS/FT to find it again).

I've recommended a $250 used Vega 56 over the RTX 2060 for an experienced user who is competent enough to carefully undervolt and 64-mod it. Other than that, it's a rare case-by-case basis. 10 times out of 10 at the same price, or even at a moderate premium, I'd take an 11Gbps 1060 over a 580, and a 1660 Ti over any Vega. They are just a lot less hassle to get working, without needing a bigger PSU and more PCIe power/noise/size. The 1660 Ti gives around Vega 64 performance with tiny cards and near-silent operation; after playing with one since day 1 from Micro Center, it's my go-to in the $250-300 range for sure. At mostly high settings it's able to run major titles at 1440p easily, which I couldn't say for the 580. I had to really help tweak and dial settings back significantly for 1440p on there. A couple of years ago the 480 was one of my favorite value cards, until mining ruined things. But games have moved on; BFV, Metro Exodus, AC Odyssey, etc. are just too much for the 480/580 (even the 590, for that matter) beyond 1080p. I do see them for $120 used now, on our FS/FT and Craigslist, and for that price I do think they're still very nice budget 1080p cards, certainly better than a 1050 Ti if you have the PSU for it. Even the 4GB variants, because let's face it, they're too slow to make use of 8GB at 1440p, and 4K is laughable with any of the cards in that segment, even up to the 1660 Ti/Vega 64/2060.

I do think there should be a 1660ti performer for $199 though. I'm hopeful that Navi10 can be just that, because Nvidia will obviously just keep charging higher prices without some kind of competition.
 
  • Like
Reactions: ozzy702

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I don't know. Definitely for gamers right now AMD doesn't have much. Even firesale 580s are a tough ask. I bought one here a few months ago for my nephew, and it was really disappointing. Hot, loud, inefficient (XFX XXX I think, have to check my posts in FS/FT to find it again).

I've recommended a $250 used Vega 56 over the RTX 2060 for an experienced user who is competent enough to carefully undervolt and 64-mod it. Other than that, it's a rare case-by-case basis. 10 times out of 10 at the same price, or even at a moderate premium, I'd take an 11Gbps 1060 over a 580, and a 1660 Ti over any Vega. They are just a lot less hassle to get working, without needing a bigger PSU and more PCIe power/noise/size. The 1660 Ti gives around Vega 64 performance with tiny cards and near-silent operation; after playing with one since day 1 from Micro Center, it's my go-to in the $250-300 range for sure. At mostly high settings it's able to run major titles at 1440p easily, which I couldn't say for the 580. I had to really help tweak and dial settings back significantly for 1440p on there. A couple of years ago the 480 was one of my favorite value cards, until mining ruined things. But games have moved on; BFV, Metro Exodus, AC Odyssey, etc. are just too much for the 480/580 (even the 590, for that matter) beyond 1080p. I do see them for $120 used now, on our FS/FT and Craigslist, and for that price I do think they're still very nice budget 1080p cards, certainly better than a 1050 Ti if you have the PSU for it. Even the 4GB variants, because let's face it, they're too slow to make use of 8GB at 1440p, and 4K is laughable with any of the cards in that segment, even up to the 1660 Ti/Vega 64/2060.

I do think there should be a 1660ti performer for $199 though. I'm hopeful that Navi10 can be just that, because Nvidia will obviously just keep charging higher prices without some kind of competition.

Sorry, but for 1080p AMD is the king right now; there is no NVIDIA card that comes close in perf/$ at the moment.

Comparing the RX 580 to the GTX 1660 Ti is apples to oranges; the first starts at $170 with a two-game bundle and the other starts at $270.

Also, the RX 570 4GB starts at $130 with a two-game bundle and can play all current games at high/medium settings at 1080p, while at the same price NVIDIA only offers the 2GB GTX 1050, which is 50% or more slower.
 
  • Like
Reactions: Gikaseixas

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
Sorry, but for 1080p AMD is the king right now; there is no NVIDIA card that comes close in perf/$ at the moment.

Comparing the RX 580 to the GTX 1660 Ti is apples to oranges; the first starts at $170 with a two-game bundle and the other starts at $270.

Also, the RX 570 4GB starts at $130 with a two-game bundle and can play all current games at high/medium settings at 1080p, while at the same price NVIDIA only offers the 2GB GTX 1050, which is 50% or more slower.

Well, that's kind of what I said, in other ways. I just found the 580 a bad value for what I paid ($200+ for a 1440p display, my fault for not realizing how demanding games have gotten since the 480 I had previously).

At $120 used, a 580 4GB or 8GB is excellent, provided you have the PSU and cooling for it. Maybe the XFX I had was just a really bad example, but it was definitely disappointing for a 'budget' gaming card; it was way louder and hotter than my old 1060, for example. I was trying to help someone out by buying it for them, but it was a mess.

I do agree that Nvidia doesn't have much for value, hence my point about not recommending the 1050 Ti. The 580 is already borderline for 1080p; the 1050 Ti is approaching useless beyond esports/HTPC.

The 1660 Ti is almost the minimum I'd recommend for people going forward, or good deals on a Vega 56 or better. With 9th-gen consoles around the corner, I think 580/1060-level cards will soon be like the 660 and 7770/7850-level cards were shortly after the PS4/X1 came out: just too compromised to run 8th-gen AAA ports without suffering.
 

tajoh111

Senior member
Mar 28, 2005
346
388
136
Then why are the stock voltages all over the place for different cards? Most cards seem able to shave off 100 mV at stock clocks, even if there are outliers that can't. If AMD were merely setting a voltage that everything could handle, shouldn't all cards have the same stock voltage? There's more than a 100 mV spread in what's being reported as stock settings for different cards, and yet most cards can still be significantly undervolted. AMD is just doing a poor job, as usual, of setting proper voltages for their Radeon VII cards, but that isn't reflective of what the silicon actually requires.

It's a shortcoming of their engineering resources and time. They don't have the budget and manpower to correct for silicon variability and flaws to the extent Nvidia does (watch those videos). They take a brute-force approach to correct for this variance. The voltages being different across cards is likely a response to varying silicon quality, with the voltage controller applying the voltage each card needs, which is why they differ from card to card. Nvidia cards do this as well, but their silicon is less variable than AMD's.

100 mV is above average for an undervolt, and they have to ensure lifetime stability. AMD is not really overvolting this time, because we are not getting the big 200+ mV undervolts and we are similarly not getting 20-30% overclocks at stock voltage. Getting an overclock of only ~100 MHz while running stock volts, which is what most cards manage, shows that AMD has significantly reduced the variance this time around, and the voltage most of these cards get is just about where it should be. It's not as tight as Nvidia, but it's an impressive showing from AMD. If you think AMD is overvolting, then Nvidia is as well, since there are people who can undervolt Nvidia cards down to 0.800 V on cards like the RTX 2080 Ti, losing about 8% performance and cutting their power by around 40 percent.
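
As a rough illustration of why a voltage drop of that size cuts power so heavily: dynamic power scales roughly with frequency times voltage squared. The numbers below (Python) are loosely inspired by the 2080 Ti example; the ~1.0 V stock voltage is an assumption made only for the sake of the arithmetic, and leakage and fixed board power are ignored.

# Ballpark dynamic-power scaling: P is roughly proportional to f * V^2.
# Leakage and fixed board power are ignored, so this is only indicative.
def relative_power(clock_ratio: float, voltage_ratio: float) -> float:
    return clock_ratio * voltage_ratio ** 2

stock_v = 1.00      # assumed stock voltage, for illustration only
undervolt_v = 0.80  # the ~0.800 V figure mentioned above
clock_ratio = 0.92  # ~8% performance/clock loss

p = relative_power(clock_ratio, undervolt_v / stock_v)
print(f"~{p:.0%} of stock power, i.e. roughly a {1 - p:.0%} saving")

That lines up with the roughly 40 percent power cut quoted above.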



AMD is not overvolting as much as you think this time around. Considering the variance, they are doing a decent job with the resources they have.

Undervolting by 100 mV on all cards would cause 80 percent of them to become unstable almost immediately, as seen in that chart. I would say AMD might safely be able to undervolt the cards by 20 mV, but even that is pushing it, since about 1 card in 50 failed at that point. 2% might not sound like a lot, but given that this is after only days of testing with a limited set of applications, it's not really acceptable.

I don't know. Definitely for gamers right now AMD doesn't have much. Even firesale 580s are a tough ask. I bought one here a few months ago for my nephew, and it was really disappointing. Hot, loud, inefficient (XFX XXX I think, have to check my posts in FS/FT to find it again).

I've recommended a $250 used Vega 56 over the RTX 2060 for an experienced user who is competent enough to carefully undervolt and 64-mod it. Other than that, it's a rare case-by-case basis. 10 times out of 10 at the same price, or even at a moderate premium, I'd take an 11Gbps 1060 over a 580, and a 1660 Ti over any Vega. They are just a lot less hassle to get working, without needing a bigger PSU and more PCIe power/noise/size. The 1660 Ti gives around Vega 64 performance with tiny cards and near-silent operation; after playing with one since day 1 from Micro Center, it's my go-to in the $250-300 range for sure. At mostly high settings it's able to run major titles at 1440p easily, which I couldn't say for the 580. I had to really help tweak and dial settings back significantly for 1440p on there. A couple of years ago the 480 was one of my favorite value cards, until mining ruined things. But games have moved on; BFV, Metro Exodus, AC Odyssey, etc. are just too much for the 480/580 (even the 590, for that matter) beyond 1080p. I do see them for $120 used now, on our FS/FT and Craigslist, and for that price I do think they're still very nice budget 1080p cards, certainly better than a 1050 Ti if you have the PSU for it. Even the 4GB variants, because let's face it, they're too slow to make use of 8GB at 1440p, and 4K is laughable with any of the cards in that segment, even up to the 1660 Ti/Vega 64/2060.

I do think there should be a 1660ti performer for $199 though. I'm hopeful that Navi10 can be just that, because Nvidia will obviously just keep charging higher prices without some kind of competition.

A $200 GTX 1660 Ti would be close to predatory pricing, because it would destroy AMD's ability to make a profit while throwing away a huge portion of Nvidia's profit. Vega cards would have to be sold at massive losses, cards like the RX 580 would have to be around $120, and the RX 590 would have to be around $150 (which could still be profitable). With the game bundles, the $50-60 it costs for 8GB of GDDR5, the partner cut, and the retailer margin, AMD would be selling all their cards at a loss or at cost.

Nvidia's pricing is the only thing keeping AMD in the game right now. If the RTX series had come out at Maxwell pricing, AMD's graphics division would be toast within a year, with the division's profits, or lack thereof, unable to justify the 7nm investment.
 
  • Like
Reactions: Innokentij

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I don't know. Definitely for gamers right now AMD doesn't have much. Even firesale 580s are a tough ask. I bought one here a few months ago for my nephew, and it was really disappointing. Hot, loud, inefficient (XFX XXX I think, have to check my posts in FS/FT to find it again).

With my RTX 2080 Ti in the shop, I had to pick up a temp card. I snagged the PowerColor Red Devil RX 580 for $200+. My biggest complaint is the noise. Mother of god is this thing loud. I'll also admit I've been on water for 2-3 years, so I'm obviously coming from a different perspective. So I fired up WoW on my R9 290X, and surprisingly, the Red Devil is louder than the stock cooler on the 290X. That sort of surprised me. These things sound like jet engines. The stock fan profile also keeps the fans ramped up for almost two minutes after exiting a game; the temps had already dropped into the 40s (C) and the fans were still at 80%+ speed. A custom fan profile helped, but not by much; it gets hot quickly. Crazy little card.
 
  • Like
Reactions: Arkaign

coercitiv

Diamond Member
Jan 24, 2014
7,372
17,468
136
With my RTX 2080 Ti in the shop, I had to pick up a temp card. I snagged the PowerColor Red Devil RX 580 for $200+. My biggest complaint is the noise. Mother of god is this thing loud.
Use the physical BIOS switch, change it from OC to Silent.