Official AMD Polaris Review Thread: Radeon RX 480, RX 470, and RX 460

Page 24 - AnandTech community forums

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
A year or more ago, sure, but not in mid-2016. Today you can go to Newegg and buy a new 8GB 390, with Total War: Warhammer included, for $259. That's the problem: being a new generation, it should be a slam dunk that it's better, not basically a wash. If you count Warhammer as a $60 game, you can get a faster 390 with 4GB more VRAM for the same price as a 4GB 480. 970 prices have been dropping fast as well. No matter the spin, that's not exactly exciting for a new product release. It's not the same situation as the 1070/1080, where NV put out a clearly faster new generation of cards that were also cheaper than the previous gen.

They're being cleared. If they didn't make them a better deal than the new model why would anyone consider it?

This is simple retail.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Disowning? If it were NVIDIA, people (not saying you) would yell "lies!!" They delivered big time on the price front because they couldn't deliver on anything else. It was clear from moment one that it was positioned this way because there was no other option. Of course there is nothing wrong with that, and customers are all the better for it, given that NVIDIA will likely have to cut their prices as well.

IIRC they claimed Polaris was up to 2.7X as efficient. The 470 is still Polaris.
 

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
They're being cleared. If they didn't make them a better deal than the new model why would anyone consider it?

This is simple retail.

The difference is that they're being cleared to make way for something that is not significantly better. It was different when the 980 and 970 replaced the 780 and 770, for example. The only way to get someone to buy a 770 instead of a 970 was to dump the prices. So while the RX 480 looks like a decent card, it doesn't exactly revolutionize the mid-range market. It just gives you slightly more performance, for the same or slightly more money.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The difference is that they're being cleared to make way for something that is not significantly better. It was different when the 980 and 970 replaced the 780 and 770, for example. The only way to get someone to buy a 770 instead of a 970 was to dump the prices. So while the RX 480 looks like a decent card, it doesn't exactly revolutionize the mid-range market. It just gives you slightly more performance, for the same or slightly more money.

Please elaborate on the bold.

Please remember that the 480/470 replaces the 380/380X. It's not the replacement for the 390/390X or 970/980.
 
May 11, 2008
19,869
1,233
126
Absolutely not true, if you have any memory of the history.

Pitcairn was very efficient.

The 7950 and 7970 were very efficient, actually almost the same as Kepler.

It did not get bad until the GHz Edition came, and Hawaii.

Though Hawaii itself, compared to its competitor, big Kepler, was actually close. Now Hawaii destroys big Kepler for similar power usage.

With Maxwell, NV had a big leap forward, leaving AMD behind. But Fury & Nano actually brought them back very close. Depending on the review, the Asus Fury & Nano beat Maxwell on perf/W.

Even the Fury X vs. the 980 Ti: similar power usage (~235W vs. 250W), similar performance. These were all on 28nm at TSMC, the same as NV's GPUs.

AMD went to Polaris, an enhanced GCN designed for perf/W, and they failed. I don't think it's the architecture when they've shown in the past that they can do it. The difference this time is the node.

I have been wondering about process optimization for some time too. I have seen in the past (both NVIDIA and AMD did this) that chip manufacturers start with smaller chips first on a new node, optimize, and then come out with newer revisions and bigger chips.
I do not know how many Polaris chips have been produced, but I hope the AIBs get new versions. It could also be that AMD will come out with "new" Polaris chips (called RX 580 or RX 480X) around the time Vega comes out.

This is just a guess, but with CPU designs, higher clocks are achievable through longer pipelines. I have been wondering for a while whether NVIDIA's architecture has longer pipelines than AMD's, since NVIDIA can clock at so much higher frequencies. Those calculation units are based on in-order execution with a predictable workload, so a longer pipeline is not an issue. It would be an issue if you had to branch a lot, but when doing the same computations over and over, a longer pipeline does not matter. And because of the longer pipeline, the frequency can go up and the throughput increases, which makes NVIDIA shine at high frequencies.

I think that AMD in the end will do the same, longer pipelines for new GCN iterations. But this is just an assumption.
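The pipelining argument can be sketched with a toy timing model (all numbers below are hypothetical, not real GPU figures): splitting a fixed amount of combinational logic across more stages shortens the critical path per stage, so the clock can rise, at the cost of more cycles of latency and diminishing returns once per-stage latch overhead dominates.

```python
# Toy model: cycle time = (total logic delay / stages) + latch overhead.
# Both constants below are made-up illustrative values.
TOTAL_LOGIC_NS = 4.0      # hypothetical total logic delay of one operation
LATCH_OVERHEAD_NS = 0.1   # hypothetical flip-flop overhead paid per stage

def max_clock_ghz(stages: int) -> float:
    """Highest clock the pipeline could run at, in GHz."""
    cycle_ns = TOTAL_LOGIC_NS / stages + LATCH_OVERHEAD_NS
    return 1.0 / cycle_ns

for stages in (4, 8, 16):
    print(f"{stages:>2} stages -> {max_clock_ghz(stages):.2f} GHz")
```

Doubling the stage count from 4 to 8 lifts the achievable clock well under 2x in this model, which is the usual argument for why pipeline depth is a trade-off rather than a free lunch.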

As a side note (not to derail the thread),
I would love to see a GTX 1080 downclocked to the same frequency as a stock GTX 980 and compared side by side. That would definitely show the true advances of the Pascal architecture over Maxwell 2.

And I would love to see this with Polaris and the older GCN cards as well:
all clocked at the same (lower) clock speed, and then again with different memory speeds.

Testing with synthetic benchmarks to see if all the improvements really show.
 

stuff_me_good

Senior member
Nov 2, 2013
206
35
91
I overclocked my 390X to 1200/1650 (only stable long enough to run the Steam VR test) when those first 480 results of 6.3 showed up, and I maxed out at 9.3 (5820K @ 4.5GHz, 16GB DDR4-2400). :sneaky:
Yeah, that SteamVR test is complete ****hit, because it's so easy to cheat on it by OCing just a tad.

 
Mar 10, 2006
11,715
2,012
126
That's over spec; the BIOS should limit it to 1.15V max.

As for the OC perf, the guy that did 1.425GHz got Fury performance out of it.

But you know the funny thing: it went up to like 183W.

Compared to the Nano, there's ZERO perf/W increase. o_O

Jesus, could GloFo fail any harder?

Don't be so quick to blame GloFo (which is using a Samsung process). I have been told that 14LPP has inferior electrical characteristics to TSMC 16FF+, but the delta isn't as large as the perf/watt delta between Pascal and Polaris.

Micro-architecture matters, but people often forget that the implementation of that micro-architecture matters a lot, too. There is no guarantee that if I give two companies identical RTL that they will implement parts with the same performance/power characteristics.

An excellent physical design will consume less power and clock higher, while a so-so one will consume more power and clock lower.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Not sure if it was mentioned. If you read the Computerbase.de article carefully they say:

AMD had originally provided the Crimson 16.6.2 driver for testing. As ComputerBase learned this Monday, however, that driver suffers from a bug that limits PCIe bandwidth. AMD then provided ComputerBase with Crimson 16.20.1035.1001-RC1, which fixes the problem. All results were subsequently gathered again, since depending on the game the new driver raises performance by up to five percent. On average, the Radeon RX 480 is about 1.5 percent faster.

They were informed by AMD on Monday that the 16.6.2 driver has a bug which limits PCIe bandwidth. They were then given Crimson 16.20.1035.1001-RC1, which solved the problem, and they measured a performance increase of up to 5% compared to 16.6.2.
From what I read, most other reviews just used the buggy 16.6.2 driver.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Not sure if it was mentioned. If you read the Computerbase.de article carefully they say:

AMD had originally provided the Crimson 16.6.2 driver for testing. As ComputerBase learned this Monday, however, that driver suffers from a bug that limits PCIe bandwidth. AMD then provided ComputerBase with Crimson 16.20.1035.1001-RC1, which fixes the problem. All results were subsequently gathered again, since depending on the game the new driver raises performance by up to five percent. On average, the Radeon RX 480 is about 1.5 percent faster.

They were informed by AMD on Monday that the 16.6.2 driver has a bug which limits PCIe bandwidth. They were then given Crimson 16.20.1035.1001-RC1, which solved the problem, and they measured a performance increase of up to 5% compared to 16.6.2.
From what I read, most other reviews just used the buggy 16.6.2 driver.

It really doesn't instill confidence that they still went ahead with their launch knowing their driver had a PCIe bandwidth-limiting problem.
You can't tell me that throughout all their testing, preparing this product for launch, they didn't know whether there was a bug or not. I'm guessing there was no bug.
 

NomanA

Member
May 15, 2014
128
31
101
Getting towards 1.5GHz from the reference RX 480!

https://www.reddit.com/r/Amd/comments/4qk6ug/4chan_uses_a_mounting_bracket_to_mount_a/

Water mod AIO.

I may end up doing that to mine. lol

Guru3D got to 1375 MHz on the stock reference cooler, but they did have the fan at maximum, which I regard as an unworkable setting. Their temp limit was set at 83C. Maybe a less noisy fan curve would also work with a slightly smaller overclock.
http://www.guru3d.com/articles_pages/amd_radeon_r9_rx_480_8gb_review,35.html

At 1375MHz, Firestrike improved by 15%, reaching stock Nano level. DX12 and latest DX11 games saw 8-13% improvement.
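As a rough cross-check of those numbers (assuming the RX 480's reference boost clock of 1266 MHz, which is not stated in the post above):

```python
# How much of the reported Firestrike gain can core clock alone explain?
stock_mhz, oc_mhz = 1266, 1375   # assumed reference boost vs. Guru3D's OC
clock_uplift = oc_mhz / stock_mhz - 1   # ~8.6% more core clock
reported_gain = 0.15                    # ~15% Firestrike gain reported

print(f"core clock uplift: {clock_uplift:.1%}")
print(f"reported gain:     {reported_gain:.0%}")
# A gain noticeably larger than the core clock uplift suggests something
# besides core clock (e.g. a memory overclock) contributed as well.
```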
 

Insomniator

Diamond Member
Oct 23, 2002
6,294
171
106
They were informed by AMD on Monday that the 16.6.2 driver has a bug which limits PCIe bandwidth. They were then given Crimson 16.20.1035.1001-RC1, which solved the problem, and they measured a performance increase of up to 5% compared to 16.6.2.
From what I read, most other reviews just used the buggy 16.6.2 driver.


A buggy driver from ATI? No way!!
 
May 13, 2009
12,333
612
126
A buggy driver from ATI? No way!!
At least they support their hardware with updated drivers even after the new models are out. I bet the 980 ti falls off a cliff compared to the 1070 in the next year or two. Is that good business for those that paid 6 or 7 hundred dollars for a 980 ti?
 

sze5003

Lifer
Aug 18, 2012
14,184
626
126
At least they support their hardware with updated drivers even after the new models are out. I bet the 980 ti falls off a cliff compared to the 1070 in the next year or two. Is that good business for those that paid 6 or 7 hundred dollars for a 980 ti?
That's the thing I worry about if I go the 1070 route.
 
Aug 11, 2008
10,451
642
126
IIRC they claimed Polaris was up to 2.7X as efficient. The 470 is still Polaris.

Well, we haven't seen yet whether the 470 actually delivers the 2.7x increase. Since it is basically the same architecture on the same process, it would be very strange if it were that much more efficient than the 480. If it is, AMD is technically "within the letter of the law," but I consider the original statement borderline deceptive at best when it applies to only one card, and that a cut-down, probably down-clocked, lower-end one.

The 480 is a very good value, but that is really all you can say for it, at least for the reference model. It is obvious why they priced it the way they did. Technically (engineering-wise) it is not outstanding by any means. And for those who are saying what a great advance in performance it is: yes, compared to the 380, of course. But that is not what it will ultimately compete against. That will be the 1060/1050. I expect those to be similar in performance, more efficient, better overclockers, and more expensive.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
2x strategy is bad because of the aversion to multi-GPU setup. Gamers just don't like it, SLI/CF are niche. **

Anything cost-effective is a good strategy for AMD right now. A single-PCB dual-GPU card is not nearly as niche as two separate cards if/when sold at a competitive price, because it's sold as one card and can be marketed as such.

Until Navi, on the same interposer, presenting as a single big GPU to APIs, it's not a viable strategy.

AMD's current strategy of 4 new GPUs + SoCs + high end CPU seems even less viable. They are getting clobbered from a technical perspective and burning cash each and every quarter.

They just needed GloFo to perform better. If Vega is on GloFo, it's not going to look good at the high end against GP102 either. Damn shame really.

No one can verify that GloFo is the problem. GloFo's process is supposedly a replica of Samsung's, and Apple had no problem clocking their TSMC and equivalent Samsung chips the same for both performance and TDP. Until you or someone else can verify that GloFo is the problem, it's all just hearsay and made-up FUD to deflect from where the real problems may be.

Do you remember when Fury X was released, it was 10% behind 980ti at 1440p? I said give it 6 months and it will match it. It did. GCN just keeps getting better with time due to the console effect & AMD's driver optimizations.

It did match the 980 Ti, aka the cut-down GM200 SKU. But the 980 Ti still had way more OC headroom and was ultimately still 15-20% faster once you take that into account. And speaking of optimizations, both parties just released their new products, and both will keep optimizing their drivers. AMD does have the console advantage, but Pascal is more future-proof than both Kepler and Maxwell. Also, as you have pointed out many times, Pascal now has many of the features GCN has, so the console effect will not be nearly as noticeable this time. I do expect P10 to gain a little, but no more than 5-10% over its life cycle.

Pascal will no doubt fade once Volta is here. Remember, they have to put back a hardware scheduler if they want to excel in DX12 and its multi-queue, parallel nature. By that time frame, DX12 will certainly have matured.

Please put your crystal ball away and stop talking about what things may be like in 18-24 months. AMD is just as likely to be out of business by then with the rate of their cash burn.

How the high end plays out will depend on whether GloFo gets their act together for 14nm, or even whether Vega is on 14nm at all; it could be 16nm FF. We don't know. But as for how the mainstream plays out, I'm pretty sure GP106 is going to wreck Polaris 10 in sales due to the perf/W advantage.

Again, unless you have concrete proof that GloFo, the mirror fab of Samsung, is the problem, you're just speculating and you need to be clear about that. It's also entirely possible (in fact, more likely, given the proven correlation between R&D and results) that NVIDIA's 50% higher R&D budget spread across 30% fewer projects is leading to much better results in perf/transistor.

** I find it hard to justify my own purchase of RX 480s. But because I only play a handful of AAA titles per year, as long as those titles have CF support, it's good. I don't care about most other games. To me, BF1 needs excellent CF scaling, and so does Deus Ex. Still waiting for this TW: Warhammer DX12 patch; I've already got 200 hours into it and counting, and the mods are great.

Yeah, I find it hard too. Not sure what two 480s cost versus one GTX 1070 where you live, but as you've said yourself in this very post, two-card setups are finicky, the RX 480 has absolutely no headroom, and you're looking at double the power consumption of one GTX 1070. Sounds like even with the GTX 1070 still in pricing hell, you'd have been better off getting a used custom AIB GTX 980 Ti.
 

Mopetar

Diamond Member
Jan 31, 2011
7,945
6,245
136
Don't be so quick to blame GloFo (which is using a Samsung process). I have been told that 14LPP has inferior electrical characteristics to TSMC 16FF+, but the delta isn't as large as the perf/watt delta between Pascal and Polaris.

Just because they're using Samsung's process doesn't mean they've got it working perfectly. Considering we're seeing some reviews where the card draws well beyond its TDP and others where an okay OC is possible on the stock cooler, it suggests there were some quality issues and that AMD got some leaky chips. Disappointing that they decided to release those as 480s instead of binning them for the 470 or some other part.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Guru3D got to 1375 MHz on the stock reference cooler, but they did have the fan at maximum, which I regard as an unworkable setting. Their temp limit was set at 83C. Maybe a less noisy fan curve would also work with a slightly smaller overclock.
http://www.guru3d.com/articles_pages/amd_radeon_r9_rx_480_8gb_review,35.html

At 1375MHz, Firestrike improved by 15%, reaching stock Nano level. DX12 and latest DX11 games saw 8-13% improvement.

How much power at these clocks though?
 
Feb 19, 2009
10,457
10
76
Sounds like even if GTX 1070 is still in the pricing hell, you'd have been better off getting a used custom AIB GTX 980 TI.

I was thinking that if it scales in BF1 and Deus Ex, 80-90%, that would put it a lot faster than a 1070 and it would justify itself. I only do AAA gaming with a few titles per year, the rest of my games are indie so I don't even need GPU grunt. However, when I do AAA game, I'd like to run it at native 4K with High settings, and one 1070 isn't enough for that.

The used 980 Ti crossed my mind; I went searching on eBay, but here they go for close to double the price of a used RX 480 (ridiculous), and my aversion to Maxwell's aging potential makes it a bad investment for the next few years.

Anyhow, we found what the bottleneck in the RX 480 is, and why its 5.8 TFLOPS are not able to consistently match the 390X.

http://www.hardwareluxx.de/index.ph...39615-amd-radeon-rx-480-im-test.html?start=25

OC on memory alone sees almost perfect performance scaling. o_O

Looks like AMD did not improve their memory compression tech enough for the 256-bit bus to really keep up. In some games it can match the 390X, but in some it lags behind because of this, IMO. Performance also drops off quickly as resolution rises (though that could be the ROPs as well). But the bandwidth bottleneck here is quite undeniable.
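The bandwidth gap is easy to put into numbers. A back-of-envelope sketch using the commonly quoted spec-sheet figures (treat the bus widths, data rates, and peak TFLOPS as assumptions):

```python
# Peak memory bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

rx480 = bandwidth_gbs(256, 8.0)   # 256-bit GDDR5 @ 8 Gbps -> 256 GB/s
r390x = bandwidth_gbs(512, 6.0)   # 512-bit GDDR5 @ 6 Gbps -> 384 GB/s

# Bytes of bandwidth per peak GFLOP (lower = more bandwidth-starved):
print(f"RX 480: {rx480:.0f} GB/s, {rx480 / 5800:.3f} B/FLOP")  # ~0.044
print(f"390X:   {r390x:.0f} GB/s, {r390x / 5900:.3f} B/FLOP")  # ~0.065
```

On these assumed figures, the 390X has roughly 50% more raw bandwidth feeding a similar amount of compute, which is consistent with memory overclocks on the 480 scaling almost linearly.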
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
I was thinking that if it scales in BF1 and Deus Ex, 80-90%, that would put it a lot faster than a 1070 and it would justify itself. I only do AAA gaming with a few titles per year, the rest of my games are indie so I don't even need GPU grunt. However, when I do AAA game, I'd like to run it at native 4K with High settings, and one 1070 isn't enough for that.

The used 980 Ti crossed my mind; I went searching on eBay, but here they go for close to double the price of a used RX 480 (ridiculous), and my aversion to Maxwell's aging potential makes it a bad investment for the next few years.

Anyhow, we found what the bottleneck in the RX 480 is, and why its 5.8 TFLOPS are not able to consistently match the 390X.

http://www.hardwareluxx.de/index.ph...39615-amd-radeon-rx-480-im-test.html?start=25

OC on memory alone sees almost perfect performance scaling. o_O

Looks like AMD did not improve their memory compression tech enough for the 256-bit bus to really keep up. In some games it can match the 390X, but in some it lags behind because of this, IMO. Performance also drops off quickly as resolution rises (though that could be the ROPs as well). But the bandwidth bottleneck here is quite undeniable.


That's why HBM will help quite a bit. I'm disappointed that they didn't bin the chips better. Those leaky chips should have been 470s.

I'm guessing a 470 might be equivalent to my 280? If so, I might purchase two of those for CrossFire.
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
It really doesn't instill confidence that they still went ahead with their launch knowing their driver had a PCIe bandwidth-limiting problem.
You can't tell me that throughout all their testing, preparing this product for launch, they didn't know whether there was a bug or not. I'm guessing there was no bug.

This is how big companies work. There is more to a launch than just a low number of driver bugs.
 
Feb 19, 2009
10,457
10
76
That's why HBM will help quite a bit. I'm disappointed that they didn't bin the chips better. Those chips, that are leaky, should have been a 470.

I'm guessing a 470 might be equivalent to my 280? If so, might purchase two of those for crossfire.

Supposedly the 470 is actually the most efficient, at 110W, and it's what AMD used for its claims of up to 2.8x perf/W. Look at their slides again: specifically a 470, not a 480.

This tells me they went for that last drop of performance, pushing beyond the optimal cooling, PCB power, vcore, and clock range, and power usage went boom. Computerbase has actually already shown this: they did a small undervolt, and power usage dropped 30W while performance went UP, because the card stops hitting its thermal/power limits and throttling.
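A quick way to see why a small undervolt pays off so much: dynamic power scales roughly with C·V²·f, so cutting voltage a few percent at the same clock cuts dynamic power by roughly twice that percentage (and leakage, ignored in this sketch, falls with voltage too). The voltages below are hypothetical:

```python
# Dynamic power ratio at the same clock: P_new / P_old = (V_new / V_old)^2.
def dynamic_power_ratio(v_new: float, v_old: float) -> float:
    return (v_new / v_old) ** 2

# Hypothetical undervolt: 1.15 V stock down to 1.08 V at unchanged clocks.
ratio = dynamic_power_ratio(1.08, 1.15)
print(f"{1 - ratio:.1%} less dynamic power")  # ~11.8% reduction
```

On a card pushed past its efficiency sweet spot, that saving can be enough to stop power-limit throttling, which is how power can drop while performance rises.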
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Supposedly the 470 is actually the most efficient, at 110W, and it's what AMD used for its claims of up to 2.8x perf/W. Look at their slides again: specifically a 470, not a 480.

This tells me they went for that last drop of performance, pushing beyond the optimal cooling, PCB power, vcore, and clock range, and power usage went boom. Computerbase has actually already shown this: they did a small undervolt, and power usage dropped 30W while performance went UP, because the card stops hitting its thermal/power limits and throttling.

Why wouldn't AMD make that same undervolt, with its performance increase, the stock settings?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Got it. Thanks for clarifying. In the meantime, if someone wants more than decent 1080p performance and/or barely adequate 1440p performance and minimum VR performance, AMD has nothing for them.

LOL! Coming from a guy who bought a GTX980 for $550-600 less than 2 years ago. Today, AMD's $199-229 card is within 10-15% of that overpriced garbage 980! Generation after generation these relative concepts seem completely foreign to you.

At 1080p, RX 480 is nearly 70% faster than a GTX960. How many times have you recommended R9 280X/380X/290 over GTX960? How many times have you recommended R9 270/270X over GTX750/750Ti?
https://www.techpowerup.com/reviews/AMD/RX_480/24.html

For the last 2 years, you defended/recommended GTX750/750Ti/950/960, while completely ignoring lack of longevity of NV cards and their horrible price/performance. If you actually cared about the best interests of PC gamers, you'd be objectively recommending some NV cards like GTX970 and 980Ti, while nailing the trash that they released in the last 10 years.

As I said before and it's only more evident: All you care about are metrics where NV wins, while constantly ignoring the price/performance metric as if it's irrelevant. Secondly, the ONLY reason you care for AMD to improve perf/watt and perf/mm2 is so that they give more competition to NV so that you pay less for NV cards.

On this forum, there have been many, many people who rightfully criticized AMD/ATI's cards over the years. You should be one of the LAST people to do so because you haven't bought an AMD/ATI card since 9800 series.

That means, you managed to buy 3 of the worst NV architectures of all time:

FX5800/5900 series = pure trash
GeForce 7 = more trash
Fermi = the very architecture that failed in nearly every metric you keep discussing in modern times (perf/mm2, perf/watt, price/performance)

No offense, but your criticism of all AMD products has no merit in this case, since you haven't purchased any of the excellent AMD/ATI cards since, what, 2003?

Your other thread, titled Polaris vs. Pascal, is just as flawed, or is intentionally misleading.

First, you compare ASICs (chips) but ignore that RX 480 and GTX1080 are videocards, not just chips. That means you are penalizing RX 480 for not having GDDR5X or alternatively, you are giving GTX1080 a huge advantage. Since RX 480 is a $199-239 product, it would have made no sense to have GDDR5X for it. Therefore, what you are really comparing is a RX 480 videocard vs. GTX1080 videocard, not GCN 4.0 architecture vs. Pascal.

Secondly, you assign no value whatsoever to exponential gains that come from shifting from 32 to 64 ROPs (hint: R9 290X vs. HD7970Ghz). This means the 314mm2 to 232mm2 comparison inherently assumes that a hypothetical 300mm2 Polaris 10 would scale non-linearly against the 232mm2. Another way to look at it, is if NV halved the number of ROPs in GTX1080 to 32 ROPs and used only GDDR5 and made a 250mm2 chip, it would get destroyed by the current GP104 1080 SKU. You don't understand the context of how videocards are made so you blindly compare perf/mm2. The biggest facepalm here is you bought GTX275 over HD4890, then GTX570 over HD6950/6970 and you bought GTX780 over R9 290, but like a broken record you keep discussing perf/mm2. :rolleyes:

Additionally, in all of your comparisons of GTX970/980 to RX 480, you completely ignore DX12 performance where Maxwell bombs.

Let's face it, even if GTX1060 lost to RX 480 in the price/performance metric, and got gimped by 3GB of VRAM, you'd still recommend GTX1060 over RX 480. In the case of GTX1060 6GB vs. RX 480 4GB, you'd recommend people spend the extra $ on the 1060 6GB. Amiright?

I hate to say it, but I love all the hype and inevitable disappointment. It's unfortunate, but extremely entertaining. I wish AMD were able to be more competitive, but the reality is they are spread too thin on R&D and resources.

It's more sad that someone gets satisfaction that's derived from less competition. Maybe this actually explains why you like posting so much anti-AMD posts -- you just like to stir things up as you get satisfaction from AMD failing.

It's interesting that you want to discuss how a $199-239 card is a "failure" in all key ways but ignore how the mighty $650 780 and then the $699 780 Ti turned into giant turds!



I am pretty sure it's a bigger failure to have a $700 780 Ti getting smoked by a reference $399 R9 290 in modern games. The $500-650 GTX 780 looks laughable against a $299 R9 280X or a $399 R9 290. It shows how much of a failure the Kepler architecture was -- the very architecture you bought not once, but twice (!!), with the GTX 670 2GB and GTX 780 3GB. Now you are advising AMD to leave the high end?

It probably would have been best if they just abandoned the high end again and devoted more resources to making Polaris better, then using a Polaris X2 card to compete on the high end.

What do you care? You don't buy AMD graphics cards. Let AMD and objective/brand agnostic PC gamers decide what's best for AMD. Even when AMD had a 6 months lead with HD5870, you didn't buy that. Then when AMD had clearly superior videocards with HD7950/7970, you still bought the 670 2GB. You even managed to buy a 780 -- easily one of the worst cards ever made - instead of the R9 290. :sneaky:

Do you know how your posts actually appear for the outside world?

"ATI/AMD have failed .... blah blah blah. Buys NV"
"ATI/AMD have an amazing product....Buys NV"

What's the real reason you are so upset that AMD is failing? Because you'll be paying $600+ for a mid-range GTX1080, and deep down you aren't thrilled that you "have no choice" since "AMD forced you to pay those prices", amiright?

:thumbsup:

I wish AMD were able to be more competitive, but the reality is they are spread too thin on R&D and resources.

Ya, and how much have you personally contributed to AMD's R&D by buying their excellent cards over the last 10-13 years? Poor AMD. Even when the HD 6950 2GB unlocked into an HD 6970, made hundreds of dollars mining Bitcoin, and cost only $230-250, you still bought a $350 GTX 570 1.28GB. They never stood a chance!

For crying out loud, you actually defended NV locking down voltage on Kepler as a way to reduce RMA on their cards. Who the hell cares about NV's RMA from overvoltage control unless you are an NV stockholder...

At least they support their hardware with updated drivers even after the new models are out. I bet the 980 ti falls off a cliff compared to the 1070 in the next year or two. Is that good business for those that paid 6 or 7 hundred dollars for a 980 ti?

$699 780Ti November 2013
$549 GTX980 September 2014
--

The $239 RX 480 4GB is faster than the GTX 780 Ti and is within 10% of the GTX 980 (and in future DX12 games, the 980 would get beaten).



Polaris 10's perf/watt is very disappointing to me, but as a product aimed at the mainstream market, $199-239 is a good deal. In fact, given this videocard's level of performance, I'd not hesitate at all to recommend the cheaper $199 4GB version and just use it as a stop-gap for 2 years, then upgrade to a newer $200-250 card in 2018. Looking at the RX 480's level of performance, I am not sure the premium for the 8GB card even makes sense for those planning to keep it 2 years or less.
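Putting the post's own numbers together (treating "within 10-15% of a GTX 980" as the rough assumption it is, and using the prices stated above):

```python
# Hedged price/performance sketch; relative-performance figure is assumed.
def perf_per_dollar(rel_perf: float, price: float) -> float:
    return rel_perf / price

gtx980_2014 = perf_per_dollar(1.00, 549)    # GTX 980 at its 2014 launch price
rx480_4gb   = perf_per_dollar(0.875, 199)   # midpoint of "within 10-15%"

print(f"RX 480 4GB delivers ~{rx480_4gb / gtx980_2014:.1f}x "
      f"the perf/$ of the GTX 980 at launch")
```

Under those assumptions, the value argument for the $199 card as a two-year stop-gap is straightforward, whatever one thinks of its perf/watt.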
 