Polaris and Pascal tested in 16 2016 Titles [HardwareUnboxed]

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The AMD RX 480 has been winning more games recently. CoD Infinite Warfare and Titanfall 2 are two recent AAA titles that are clearly faster on the RX 480. AMD has also done a good job with Crimson ReLive and continues to increase performance through driver updates. Here are a few more recent reviews

Joker 2016 games
https://www.youtube.com/watch?v=s12S74umruY
 
Last edited:

IllogicalGlory

Senior member
Mar 8, 2013
934
346
136
Just for fun, I took the FPS data from the video and tried to get a more accurate picture of average relative performance than the video gave us; they seem to have simply averaged the FPS numbers, which doesn't actually work as a measure of relative performance, though in this case the result is very close.

7bQLUqJ.png


It's just the average FPS of the 480 8GB and 1060 6GB from the 1080p results. Interestingly, averaging the FPS in Excel gives almost exactly the same value for both cards (within 0.000001!), yet the video gave 86 for the 480 and 84 for the 1060, so I'm not sure how they got their numbers.

This also shows the importance of using the geometric mean when comparing relative performance. The arithmetic mean suggests both that the 1060 is 1.5% faster than the 480 and that the 480 is only 0.3% slower than the 1060; those two statements are not equivalent.

With the geometric mean, we get a consistent result: the 1060 is 100.9% as fast as the 480, and the 480 is 99.1% as fast as the 1060.

So, despite the summary from the video, the 480 is still a tiny bit behind (less than 1%) the 1060 overall.
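The asymmetry above is easy to reproduce. Here's a small Python sketch (using made-up FPS numbers, not the video's data) showing why the arithmetic mean of per-game ratios gives contradictory answers depending on which card you divide by, while the geometric mean doesn't:

```python
import math

def mean_ratios(fps_a, fps_b):
    """Arithmetic and geometric mean of per-game FPS ratios (A relative to B)."""
    ratios = [a / b for a, b in zip(fps_a, fps_b)]
    arith = sum(ratios) / len(ratios)
    geo = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    return arith, geo

# Hypothetical results for two games: each card wins one by exactly 2x.
card_a = [100.0, 50.0]
card_b = [50.0, 100.0]

arith_ab, geo_ab = mean_ratios(card_a, card_b)  # A relative to B
arith_ba, geo_ba = mean_ratios(card_b, card_a)  # B relative to A

# Arithmetic mean: (2.0 + 0.5) / 2 = 1.25 both ways, i.e. each card
# appears 25% "faster" than the other -- a contradiction.
# Geometric mean: sqrt(2.0 * 0.5) = 1.0 both ways -- consistent.
```

The geometric mean is symmetric by construction: the result for A-vs-B is always exactly the reciprocal of B-vs-A, which is why it's the right tool for averaging performance ratios.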

Explanation:
UxjjSmP.png

Source

If I made a mistake (perhaps I accidentally recorded the 1060 3GB or 480 4GB results for something), or if you want to play around with it yourself, here's the Excel sheet (now updated with data from Joker Productions). It also contains some data I was testing from this TPU thread and the review it's discussing.

Also: there are 19 games tested, not 16.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136

I would like to say here that in the video he said more than once that no NVIDIA user should use DX-12 because they get higher fps with DX-11. That is fine if you have a Core i7 6700K @ 4.5GHz like the one used in the video. But there are millions of gamers with locked dual/quad cores like Haswell/Skylake Core i3/i5 CPUs. In many of these games they will be better off with DX-12 than with DX-11, and reviewers should do their job: run a DX-11 vs DX-12 comparison using locked Intel Core i3/i5 and AMD APU/FX CPUs and evaluate the performance differences with AMD and NVIDIA GPUs.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
This review underscores how difficult it can be to gauge performance nowadays due to DX12, which most developers still suck at when it comes to optimizing for multiple hardware vendors. If a game runs faster on NVidia using DX11 than it does using DX12, then the reviewer should test the game in DX11 for NVidia. A lot of reviewers don't do this however, which gives an inaccurate estimate. I also found it strange that Hardware Unboxed didn't do DX11 benchmarking for Total War Warhammer. As far as I know, that game still runs noticeably faster in DX11 for NVidia than it does in DX12:

GTX-1060-UPDATE-70.jpg


GTX-1060-UPDATE-79.jpg


We still have a long way to go before we reap the full benefits of DX12, likely two years or so if I had to guess..
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
A game should always be tested on each card with the API that delivers the highest performance without compromise to image quality.

The 1060 and 480 trade blows when all is said and done, but Nvidia still holds a clear technical advantage: a 15% smaller chip, 20% fewer transistors, and 30% lower power consumption. The 1060 is doing the same work with fewer resources. Of course, most of that does not directly matter to consumers, but the ramifications are everywhere, like prices (i.e. the 1080 and Titan XP) and missing products (the 1080 Ti).
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
If a game runs faster on NVidia using DX11 than it does using DX12, then the reviewer should test the game in DX11 for NVidia. A lot of reviewers don't do this however, which gives an inaccurate estimate. I also found it strange that Hardware Unboxed didn't do DX11 benchmarking for Total War Warhammer.

They said they used whatever API was faster. Hence why QB was DX11 Nvidia, DX12 AMD (though they said both DX11 and DX12 were about the same for AMD)

As for TWWH:

Like Ashes of the Singularity, Total War Warhammer is another game I feel should be tested using the DX12 API for both AMD and Nvidia hardware. That being the case, I've tested all graphics cards using the DX12 API with the unlimited video memory option enabled.

I mean he addressed that right in the video.

You'll notice that his results are higher for the 1060 than the DX11 HWC testing.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The 1060 and 480 trade blows when all is said and done, but Nvidia still holds a clear technical advantage: a 15% smaller chip, 20% fewer transistors, and 30% lower power consumption. The 1060 is doing the same work with fewer resources. Of course, most of that does not directly matter to consumers, but the ramifications are everywhere, like prices (i.e. the 1080 and Titan XP) and missing products (the 1080 Ti).

I think it does have ramifications that matter to consumers. NVidia's large lead in performance per watt and per mm2 practically guarantees that NVidia will always have the faster hardware. NVidia hardware does more with less, and since there are limits on what consumers find acceptable in power consumption, AMD can only make their GPUs so large and power hungry before people say, "WTF!" o_O

Fury X vs 980 Ti is a perfect example of this. Pascal only compounds the problem for AMD, so if Vega doesn't come with a very significant efficiency improvement, then AMD is toast.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I mean he addressed that right in the video.

You'll notice that his results are higher for the 1060 than the DX11 HWC testing.

I saw that when I viewed the video. I was just taken aback because every single benchmark I've ever seen of that game shows the DX11 renderer being faster than the DX12 renderer for NVidia. Perhaps things have changed though due to patches and driver updates..
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
I saw that when I viewed the video. I was just taken aback because every single benchmark I've ever seen of that game shows the DX11 renderer being faster than the DX12 renderer for NVidia. Perhaps things have changed though due to patches and driver updates..

HWC:

Processor: Intel i7 5960X @ 4.3GHz
Memory: G.Skill Trident X 32GB @ 3000MHz 15-16-16-35-1T

HWUB:

Processor: I7-6700k @ 4.5Ghz
Memory: 16GB G Skill trident

Maybe the extra 4 cores on the HWC system don't have as much impact in a CPU-heavy game like TW?

Edit:

Also HWC specifically says not to compare their DX11 and DX12 results:

We are now using FCAT for ALL benchmark results in DX11.


DX12 Benchmarking

For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.

Since PresentMon throws out massive amounts of frametime data, we have decided to distill the information down into slightly more easy-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate-over-time graph. Meanwhile the typical bar graph averages out every data point as it's presented.

One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.
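As an aside, the frametime-to-FPS conversion described in that methodology has a subtlety of its own. A minimal sketch (with made-up frametime samples, not HWC's data): averaging the per-frame FPS values overweights the fast frames, whereas dividing frames delivered by total elapsed time gives the true average framerate:

```python
# Hypothetical PresentMon-style frametime samples, in milliseconds.
frametimes_ms = [16.7, 16.7, 33.3, 16.7]

# Each frametime maps to an instantaneous framerate: 1000 ms / frametime.
fps_samples = [1000.0 / ft for ft in frametimes_ms]

# Averaging the per-frame FPS samples gives extra weight to fast frames...
naive_avg = sum(fps_samples) / len(fps_samples)

# ...while total frames divided by total elapsed time does not.
true_avg = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)

# naive_avg comes out higher than true_avg whenever frametimes vary,
# which is one more reason differently-derived averages shouldn't be
# compared directly.
```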
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I think it does have ramifications that matter to consumers. The fact that NVidia has such a large lead in performance per watt/mm2, also practically guarantees that NVidia will always have the faster hardware. NVidia hardware does more with less, and since there are limitations on what consumers think is acceptable in regards to power consumption, AMD can only make their GPUs so large and power hungry before people say, "WTF!" o_O

That's exactly what I meant without spelling it out. In the narrow view of GTX 1060 vs. RX 480 desktop graphics cards, die size, transistor count, and power consumption might not matter that much (or at all) to a potential consumer, but the chip's cost, complexity, and design have hidden ramifications that reverberate well beyond its snapshot scope (i.e. the high-priced 1080 and Titan XP, the nonexistent 1080 Ti, and the fact that the Titan XP is cut down and the 1080 Ti will likely be cut down further). Since GK100 (4 years ago), Nvidia has successfully been able to milk prices on their 300-400mm2 GPUs and slow-roll their high-end GPU with extremely high prices and cut-down dies. Not only is Nvidia winning with smaller chips and fewer transistors, they're also winning with the same smaller chips that aren't fully functioning. It's no wonder at all that they have 75% of the market.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
A game should always be tested on each card with the API that delivers the highest performance without compromise to image quality.

The 1060 and 480 trade blows when all is said and done, but Nvidia still holds a clear technical advantage: a 15% smaller chip, 20% fewer transistors, and 30% lower power consumption. The 1060 is doing the same work with fewer resources. Of course, most of that does not directly matter to consumers, but the ramifications are everywhere, like prices (i.e. the 1080 and Titan XP) and missing products (the 1080 Ti).

While I generally agree with this, there are a few things I'd like to add that may change the value proposition. There was no reference 1060, so power measurements vary a lot depending on the model used. For example, some of the aftermarket XFX 480's use around the same amount of energy as some 1060's, while an aftermarket MSI 480 will consume well over 150W. The 1060 lacks SLI capability. The RX480 supports FreeSync, which gives the consumer a lot more choice and savings when they need to purchase a monitor. And while the 1060 may be a more efficient chip in most cases, it's not necessarily "doing the same amount of work with less resources": focus on heavy compute-related tasks and the RX480 likely has the edge in performance per watt. And although this is likely negligible, the 480 also has to power an additional 2GB of GDDR5 memory.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
While I generally agree with this, there are a few things I'd like to add that may change the value proposition. There was no reference 1060, so power measurements vary a lot depending on the model used. For example, some of the aftermarket XFX 480's use around the same amount of energy as some 1060's, while an aftermarket MSI 480 will consume well over 150W. The 1060 lacks SLI capability. The RX480 supports FreeSync, which gives the consumer a lot more choice and savings when they need to purchase a monitor. And while the 1060 may be a more efficient chip in most cases, it's not necessarily "doing the same amount of work with less resources": focus on heavy compute-related tasks and the RX480 likely has the edge in performance per watt. And although this is likely negligible, the 480 also has to power an additional 2GB of GDDR5 memory.

But there is a reference GTX 1060. Where did you get your information? The lack of SLI is an artificial factor, one that 98.7% of all consumers don't care about and one that is mitigated by DX12's multi-GPU support. Also, I've never seen one review where a 1060 consumes more power and/or is less efficient than any 480 variant. I've also never seen a 480 consuming the 110-120 watts a 1060 consumes. Link these reviews if they exist. And do you know how much more power the VRAM package on the RX 480 consumes than on the 1060? It's negligible, just like you said; not at all a determining factor in which chip is more efficient.

LOL at people liking your post which has blatantly wrong information.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
But there is a reference GTX 1060. Where did you get your information? The lack of SLI is an artificial factor, one that 98.7% of all consumers don't care about and one that is mitigated by DX12's multi-GPU support. Also, I've never seen one review where a 1060 consumes more power and/or is less efficient than any 480 variant. I've also never seen a 480 consuming the 110-120 watts a 1060 consumes. Link these reviews if they exist. And do you know how much more power the VRAM package on the RX 480 consumes than on the 1060? It's negligible, just like you said; not at all a determining factor in which chip is more efficient.

LOL at people liking your post which has blatantly wrong information.


My bad, I wasn't aware of any reference cards for sale; I thought those were just sent out to reviewers. Checking Newegg I see plenty of 1060's but no reference models?

http://www.newegg.ca/Product/ProductList.aspx?Submit=ENE&N=-1&IsNodeId=1&Description=Geforce 1060&bop=And&PageSize=96&order=BESTMATCH

As for power consumption, there have been many reports from XFX owners of lower power consumption, but I would have to dig those up. JayzTwoCents did a review where you can see the power numbers (minus the board power), which put his card around 130W, or in the ballpark of the 1060.

Review here:

https://youtu.be/zWASNajSdpg

You can ignore the missing SLI, but it's artificially gimped where the RX480 isn't, and the extra 2GB of RAM comes in handy when combining cards into one memory pool. DX12 multi-GPU is neat, but realistically no major devs are adopting it and I doubt it'll gain much traction going forward.

Just pointing out a few facts that people often forget to mention when comparing the two cards.

To clarify..

"Also, I've never seen one review where a 1060 consumes more power and/or is less efficient than any 480 variant"

I never implied an RX480 is more energy efficient (other than maybe in compute), just stated that there are 480's that consume around the same as some 1060's, such as the newer XFX cards.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
My bad, I wasn't aware of any reference cards for sale. Thought those were just sent out to reviewers. Checking NewEgg I see plenty of 1060's but no reference models?

Didn't hear about the Founder's Edition then I take it?
 

casiofx

Senior member
Mar 24, 2015
369
36
61
There are very few reasons to run RX480 CrossFire.

Unless you get a very cheap deal, multiple GPUs should only be considered for high-end cards. Since a single GTX1080 or 1070 can run almost as fast as CFX RX480s, why would you risk per-game CFX support when you can have a single-GPU solution at that performance?

Last but not least, America is not the center of the world; many countries have GPUs selling at MSRP without special sales. For example, for the price of two RX480s where I live you can buy a custom GTX1080, and the 1080 is clearly better. And two RX480s cost 40% more than the performance-bargain GTX1070 here.

So saying that the RX480 is better than the GTX1060 because it can run CFX is a silly statement. You should just go for a single high-end GPU rather than running dual midrange cards.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
It's quite clear the RX480 has the upper hand in DX12 (but not an outright advantage) and the GTX1060 has the upper hand in DX11. However, right now and for upcoming titles, it seems there are no IQ benefits from using DX12. Right now DX12 only seems to either improve performance (mostly on AMD cards and sometimes on nVIDIA cards) or regress performance and/or introduce graphical glitches.

That being said, DX11 is here to stay; it's far more stable and on most titles provides more performance than DX12. So I think it's a wash between the cards, because if I'm a GTX1060 owner I'd run games mostly in DX11, which might be faster than an RX480 in DX12 on the same game, and vice versa.

Some might buy the RX480 because of the DX12 mileage, but I'm thinking that by the time DX12 actually provides benefits other than performance (I'd actually be more concerned with game stability than performance at this point in time) there will be newer, faster cards out.
 
  • Like
Reactions: tviceman

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
It's quite clear the RX480 has the upper hand in DX12 (but not an outright advantage) and the GTX1060 has the upper hand in DX11. However, right now and for upcoming titles, it seems there are no IQ benefits from using DX12. Right now DX12 only seems to either improve performance (mostly on AMD cards and sometimes on nVIDIA cards) or regress performance and/or introduce graphical glitches.

That being said, DX11 is here to stay; it's far more stable and on most titles provides more performance than DX12. So I think it's a wash between the cards, because if I'm a GTX1060 owner I'd run games mostly in DX11, which might be faster than an RX480 in DX12 on the same game, and vice versa.

Some might buy the RX480 because of the DX12 mileage, but I'm thinking that by the time DX12 actually provides benefits other than performance (I'd actually be more concerned with game stability than performance at this point in time) there will be newer, faster cards out.

Completely agree, but I'm winding down my computer expenses as I grow older and I personally no longer need top of the line gear. I'm looking for ways to make my gear last longer, and AMD video cards have typically provided that for me.

Just a thought, but I can't see myself being the only one thinking this way.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
DX12 multi-gpu is a neat but realistically no major devs are adopting it and I doubt it'll gain much traction going forward.

Just wanted to point out that both Deus Ex MD and ROTTR have excellent MGPU support in DX12.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Just a thought, but I can't see myself being the only one thinking this way.

Fair enough. I'm in the same boat these days. I just have no reasonable excuse to game like I used to a long time ago, and that means no reason to have such hardware. It's becoming more of a house decoration for me.

Just wanted to point out that both Deus Ex MD and ROTTR have excellent MGPU support in DX12.

I've been thinking about multi-GPU and DX12; imo the days of most games supporting dual GPUs are over. Only AAA titles (and a few others) will have support, but with how DX12 is structured I'm thinking dual-GPU setups being viable for most games (as a general option, as in the past) are done. Not many people understand that there are trade-offs with having different levels of software layers. DX12 removes some of these layers to extract more performance, for example, but in return makes things that much harder on the compatibility front when using dual GPUs (even among single GPUs from different IHVs).

Having AMD and nVIDIA do the heavy lifting was, imo, a good thing. Now it's basically in the hands of the developer, and multi-GPU support, let alone performance optimisation for multi-GPU, is most likely the last thing on their list or thrown on the back burner.

Sorry for going off on a tangent, but DX12 has been really disappointing so far, just thinking about it. It's mind-blowing the kind of ramifications it has on the industry as a whole (due to questionable decisions made around this API: think MS/AMD/DICE/Mantle/consoles/can't fix their driver overhead/let's fix it by introducing a new API/no budget to change architecture/etc.) and I wonder how DX12 will turn out in the future, because the results aren't looking good. With every other DX version, at least there were new visual improvements on top of performance ones.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
There are very few reasons to run RX480 CrossFire.

Unless you get a very cheap deal, multiple GPUs should only be considered for high-end cards. Since a single GTX1080 or 1070 can run almost as fast as CFX RX480s, why would you risk per-game CFX support when you can have a single-GPU solution at that performance?

Last but not least, America is not the center of the world; many countries have GPUs selling at MSRP without special sales. For example, for the price of two RX480s where I live you can buy a custom GTX1080, and the 1080 is clearly better. And two RX480s cost 40% more than the performance-bargain GTX1070 here.

So saying that the RX480 is better than the GTX1060 because it can run CFX is a silly statement. You should just go for a single high-end GPU rather than running dual midrange cards.

Who's saying the RX480 is better than the 1060? If you're responding to me, I didn't say that. These cards are a wash performance-wise. I also don't live in America, FWIW, and understand prices in other countries vary considerably. And a 1070 is not almost twice as fast as CrossFire 480's. It's a better option (the 1070) if you can afford it out of the gate, though.

As for CFX users: this has been debated many times. It's a stupid option to buy two cards at the same time with the intention of CrossFire (unless you're shooting for as much performance as possible), but many people who end up CrossFiring buy one card now and then add another down the road, budget permitting. There are many games that work just fine in CrossFire, and if you happen to play those games it's not a terrible option. 1060 owners simply don't have this luxury, or the luxury of buying an adaptive-sync monitor for a reasonable price. Adaptive sync is also important to some people.

When you have two cards that are almost equally priced, have equal performance, have similar power consumption (typically 30-40W apart under load, which can be cut down considerably with a simple voltage adjustment, or by picking an XFX card), and have similar driver and software capabilities, you have to look at what else separates them to make a decision.

Some people buy Geforce cards just because of brand and reputation, and that's fine. Some people look at other factors like longevity, adaptive sync differences, company ethics, availability etc.

There's no wrong decision here, the only wrong thing to do is be completely biased and ignore the full capabilities of each card before making a decision on what suits your situation the best.
 
Last edited:
  • Like
Reactions: AtenRa

daxzy

Senior member
Dec 22, 2013
393
77
101
It's quite clear the RX480 has the upper hand in DX12 (but not an outright advantage) and the GTX1060 has the upper hand in DX11.

HardwareCanucks has done an updated look at the RX 480 8GB vs GTX 1060 6GB. The DX11 lead at 1080p is pretty much gone (the RX 480 8GB is just 2% behind the GTX 1060 6GB), and at 1440p DX11 the RX 480 8GB and GTX 1060 6GB are dead even. This isn't even using the ReLive driver that came out a few weeks ago (the testing was done in early December), so you can probably add some more performance for the RX 480 8GB.

http://www.hardwarecanucks.com/foru...945-gtx-1060-vs-rx-480-updated-review-23.html

So the notion that the RX 480 8GB is worse than the GTX 1060 6GB at DX11 is now a myth (although I could definitely see it in some random GameWorks titles).