Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion

Page 216

lolfail9001

Golden Member
Sep 9, 2016
My only question is, if the Nvidia DX12 driver is crippling Ryzen, why doesn't it also cripple the 7700K? No idea.
DX12 performance in general is a chaotic mess; trying to make sense of it only leads to lost sanity.

There could be something to it, because AFAIK Nvidia still doesn't really support async compute at the hardware level. Instead, what could be happening is that they have implemented a driver hack to force concurrent draw calls etc. to be serialized.
/me bangs his head against the desk.

No, that hack is present only on Maxwell; Pascal does async compute perfectly fine.
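For reference on what "async compute" means at the API level, here is a minimal D3D12 sketch (illustrative only, not any vendor's actual code): the game creates a compute queue alongside the usual direct/graphics queue, and whether the two actually execute concurrently is entirely up to the GPU and driver; a driver without hardware async can legally serialize them, which is the "hack" being described.

```cpp
// Minimal D3D12 sketch: one direct (graphics) queue plus one compute
// queue. The API allows them to run concurrently; a driver may
// serialize them if the hardware lacks real async compute.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Direct queue: accepts graphics, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Compute queue: compute/copy only. With hardware async compute
    // this can overlap with work on the direct queue.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}
```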
 

tamz_msc

Diamond Member
Jan 5, 2017
They don't specify... though it has to be said that I am more inclined to believe SLI was not functioning for him in DX11 :p
If they don't specify, then it's wrong to use that graph to conclude that he forgot to turn on Crossfire in DX11.
 

lolfail9001

Golden Member
Sep 9, 2016
If they don't specify, then it's wrong to use that graph to conclude that he forgot to turn on Crossfire in DX11.
I use that graph to conclude that the RX 480 x2 is faster than a 1070 when Crossfire is working, something that is trivial to conclude considering that the 295X2 is faster than a 980 Ti when Crossfire works, and I only used that graph to establish that Crossfire is indeed working in RotTR.
As such, when I see graphs like
mw47zq.jpg
I conclude that Crossfire was not functioning in DX11. He is welcome to prove me wrong, of course.
 

tamz_msc

Diamond Member
Jan 5, 2017
I use that graph to conclude that the RX 480 x2 is faster than a 1070 when Crossfire is working, something that is trivial to conclude considering that the 295X2 is faster than a 980 Ti when Crossfire works, and I only used that graph to establish that Crossfire is indeed working in RotTR.
As such, when I see graphs like
mw47zq.jpg
I conclude that Crossfire was not functioning in DX11. He is welcome to prove me wrong, of course.
Care to pause and think that Crossfire scaling in DX11 can vary depending on the area of the game which is being benchmarked?

Especially when Mountain Pass and Syria show that it's working - which you quite conveniently forgot to include, I suppose?
 

Topweasel

Diamond Member
Oct 19, 2000
I use that graph to conclude that the RX 480 x2 is faster than a 1070 when Crossfire is working, something that is trivial to conclude considering that the 295X2 is faster than a 980 Ti when Crossfire works, and I only used that graph to establish that Crossfire is indeed working in RotTR.
As such, when I see graphs like
mw47zq.jpg
I conclude that Crossfire was not functioning in DX11. He is welcome to prove me wrong, of course.

So what you are saying is that a single 480 is within 10-20% of the 1070 in DX11 RotTR?

http://images.anandtech.com/graphs/graph10446/82390.png
 

lolfail9001

Golden Member
Sep 9, 2016
Care to pause and think that Crossfire scaling in DX11 can vary depending on the area of the game which is being benchmarked?
Sure, that is always a possibility; I have observed enough to know that Crossfire scaling is generally near 100% unless it hits a CPU limitation. I mean, sure, it is entirely possible that the game is so hardcore CPU-limited in DX11 that it sits at 25% lower fps than a single 1070 with a 7700K. Do I buy that, though? Well, it is possible, and I would be stupid not to consider it.
So what you are saying is that a single 480 is within 10-20% of the 1070 in DX11 RotTR?
I count at least a 37% difference in the very picture I posted.
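To spell out the assumption in that reasoning, a toy model (my own sketch, with made-up example numbers): Crossfire fps is roughly single-GPU fps times two times a scaling efficiency, capped by whatever frame rate the CPU can feed.

```cpp
// Toy model of Crossfire scaling: GPU throughput roughly doubles
// (times an efficiency factor), but the result is capped by the CPU.
#include <algorithm>
#include <cstdio>

double MultiGpuFps(double singleGpuFps, int gpuCount,
                   double efficiency, double cpuFpsCap)
{
    return std::min(singleGpuFps * gpuCount * efficiency, cpuFpsCap);
}

int main()
{
    // Made-up numbers: one card at 45 fps, ~95% CF scaling.
    // With a high CPU cap, CF lands near 2x; with a low cap it looks
    // barely faster than a single card, i.e. "CPU limitation".
    std::printf("CPU cap 150 fps: %.1f fps\n", MultiGpuFps(45, 2, 0.95, 150));
    std::printf("CPU cap 60 fps:  %.1f fps\n", MultiGpuFps(45, 2, 0.95, 60));
    return 0;
}
```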
 

tamz_msc

Diamond Member
Jan 5, 2017
Here is something I found, almost a year old but nonetheless pretty interesting:

Total War: Warhammer DX11/12 CPU scaling -

DX11:
z8ExJ5ptM7gMmY8QxZN4oB.png

97th percentile frame rates with i3-4360 / i5-4690 / i7-5930K:

GTX 1080: 23.8 / 29.7 / 47.7 fps (100% / 124.8% / 200.4%)
Fury X: 21.3 / 23.0 / 42.5 fps (100% / 108.0% / 199.5%)

DX12:
XxhXDJKP2x6Z9jPMP7XgjD.png

Same 97th percentile fps with i3-4360 / i5-4690 / i7-5930K:

GTX 1080: 39.1 / 54.1 / 61.6 fps (100% / 138.4% / 157.5%)
Fury X: 35.4 / 44.0 / 70.1 fps (100% / 124.3% / 198.0%)


Thus, in this review NVIDIA's DX12 scaling with the number of cores was really bad compared to its DX11 scaling. In fact, AMD's DX12 scaling matches NVIDIA's DX11 scaling in TW:W with these three CPUs.
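For clarity, the percentages in that post are each CPU's 97th-percentile fps normalized to the slowest chip (the i3-4360 = 100%); a quick sketch of the arithmetic:

```cpp
// Reproduces the scaling figures quoted above: each fps value
// normalized to the slowest CPU in the set (i3-4360 = 100%).
#include <cstdio>

int main()
{
    const double dx12Gtx1080[] = {39.1, 54.1, 61.6}; // i3 / i5 / i7
    for (double fps : dx12Gtx1080)
        std::printf("%.1f fps -> %.1f%%\n",
                    fps, 100.0 * fps / dx12Gtx1080[0]);
    // Prints 100.0%, 138.4%, 157.5% - the DX12 GTX 1080 row.
    return 0;
}
```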
 

ryzenmaster

Member
Mar 19, 2017
No, that hack is present only on Maxwell; Pascal does async compute perfectly fine.

My bad. Found this and it does indeed seem like they adapted Pascal for async:

timespy-3.png


In that case, if the RotTR DX12 implementation is using multiple threads for graphics, then we should see less of a CPU bottleneck compared to DX11 on both Nvidia and AMD GPUs.
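As a sketch of what "multiple threads for graphics" looks like in D3D12 (illustrative only; RecordDraws() is a hypothetical stand-in for an engine's actual draw-call recording), each worker thread records its own command list with its own allocator, and everything is submitted together at the end:

```cpp
// Sketch of multithreaded command-list recording in D3D12.
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical stand-in for recording one slice of the scene's draws.
void RecordDraws(ID3D12GraphicsCommandList* cmdList, int slice);

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     int threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (int i = 0; i < threadCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each thread records into its own list; no shared state.
        workers.emplace_back([&, i] {
            RecordDraws(lists[i].Get(), i);
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // One submission: the expensive recording work was spread across
    // cores instead of funneling through a single driver thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```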
 

Topweasel

Diamond Member
Oct 19, 2000
Sure, that is always a possibility; I have observed enough to know that Crossfire scaling is generally near 100% unless it hits a CPU limitation.

I count at least a 37% difference in the very picture I posted.
That is still under the expected gap (a single card should be nearly 50% slower than a working CF pair). But I was looking more at the Ryzen 1070 vs. 480 numbers.

The 7700K numbers look ~10% faster for the 480, and ~15% slower for the 1070, than I would expect.

Now both of these could be down to the Soviet Installation area, which could be skewing the numbers (whether or not CF is running) relative to what we can tell from all the other locations. Some areas vary greatly even when looking just at the 7700K+1070 numbers, depending on where you are; in Syria, I think, the platform the camera starts on in the built-in benchmark tool shows wildly different performance from any other spot in the whole room. But that throws another wrinkle into the problem: if that is the case, then it is also entirely possible that CF is running in DX11 and that Soviet Installation is particularly harsh on RX 480 CF in DX11, not that CF is disabled. We need more research into the information provided, preferably from a third party.

But let's circle back for a second, assume that either accidentally or on purpose CF is disabled in DX11, and just look at the 1800X. We know from the 7700K numbers that the 1070 is capable of higher performance in DX11 and DX12 than we see in the Ryzen benches. We see from what we assume is actual 480 CF in DX12 that Ryzen is able to perform a lot better. We can agree on this, right? Then we are seeing one of two things. Either you are right and CF isn't running in DX11, which means that all gaming performance on Ryzen could be held back by the use of an Nvidia card, and with more testing we may find that all benches using an Nvidia card are holding Ryzen back (and honestly, the DX11 performance of the 1070 on both CPUs does suggest this). Or something isn't right with DX12 performance compared to DX11 with Nvidia cards. It could be a weird combination of the two. We need more testing on an RX 480, RX 480 CF, or Fury X and Fury X CF. It was bad enough that no one even seemed to consider a GPU driver problem when reviewing Ryzen, but it would be foolish to dismiss this information now as bad testing methodology without actually confirming it.
 

imported_jjj

Senior member
Feb 14, 2009
There is no question that Nvidia has problems with more cores (AMD or Intel) under DX12; there is ample data on that. Tomb Raider isn't even a worst-case scenario.
Posted this on the previous page (data from the ComputerBase Ryzen 1800X review):
BF1 DX11 720p:
6900K 143.8 FPS
1800X 122.4 FPS
7700K 116.4 FPS
BF1 DX12 720p:
6900K 122.4 FPS (down 14.9%)
1800X 90.7 FPS (down 25.9%)
7700K 127.6 FPS (up 9.6%)

What is less clear is how well AMD scales in comparison, but CF is far from ideal, and there is no real need for it since the resolution can be dropped to 720p if the GPU is mid-range.
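The down/up figures are just the relative change from DX11 to DX12, e.g. for the 6900K: (122.4 - 143.8) / 143.8 = -14.9%. A quick sketch:

```cpp
// Reproduces the DX11 -> DX12 deltas quoted above.
#include <cstdio>

int main()
{
    struct { const char* cpu; double dx11, dx12; } rows[] = {
        {"6900K", 143.8, 122.4},
        {"1800X", 122.4, 90.7},
        {"7700K", 116.4, 127.6},
    };
    for (const auto& r : rows)
        std::printf("%s: %+.1f%%\n", r.cpu,
                    100.0 * (r.dx12 - r.dx11) / r.dx11);
    // Prints -14.9%, -25.9%, +9.6%.
    return 0;
}
```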
 

DisEnchantment

Golden Member
Mar 3, 2017
Ryzen 5 1400 performance in games compared to the i5 7400 and Pentium G4560


  • 00:16 — OCing at
  • 00:41 — Specs
  • 01:18 — Battlefield 1 DX12
  • 04:03 — Fallout 4
  • 04:51 — GTA 5
  • 06:49 — Hitman DX12
  • 07:39 — Just Cause 3
  • 08:06 — Assassin’s Creed Unity
  • 08:57 — The Witcher 3
  • 10:02 — Rise Of The Tomb Raider DX12
https://videocardz.com/67773/amd-ryzen-5-1400-gaming-performance-leaked

The 1400 OC manages to keep up with the i5 7400 while at only 60-70% load.

Also interesting is that the owner manages to OC the 1400 to 3.8 GHz on the B350 board, with RAM at 2667 MT/s
[ASRock Fatal1ty AB350 Gaming K4 AM4 motherboard, BIOS updated to 2.20]
 

lobz

Platinum Member
Feb 10, 2017
DX12 performance in general is a chaotic mess; trying to make sense of it only leads to lost sanity.


/me bangs his head against the desk.

No, that hack is present only on Maxwell; Pascal does async compute perfectly fine.

Except it does not, not really.
/me approves of your head banging against desks though at all times
 

ryzenmaster

Member
Mar 19, 2017
There is no question that Nvidia has problems with more cores (AMD or Intel) under DX12; there is ample data on that. Tomb Raider isn't even a worst-case scenario.
Posted this on the previous page (data from the ComputerBase Ryzen 1800X review):
BF1 DX11 720p:
6900K 143.8 FPS
1800X 122.4 FPS
7700K 116.4 FPS
BF1 DX12 720p:
6900K 122.4 FPS (down 14.9%)
1800X 90.7 FPS (down 25.9%)
7700K 127.6 FPS (up 9.6%)

What is less clear is how well AMD scales in comparison, but CF is far from ideal, and there is no real need for it since the resolution can be dropped to 720p if the GPU is mid-range.

In BF1, though, it's not just Nvidia losing frames: fps goes down on both AMD and Nvidia, plus you get stuttering as well. So far, most of the DX12 titles that have come out are less than impressive. I will leave this here, though:

1479888886lJNAO2Au4H_2_2_l.png


1479888886lJNAO2Au4H_3_2_l.png
 

Topweasel

Diamond Member
Oct 19, 2000
In BF1, though, it's not just Nvidia losing frames: fps goes down on both AMD and Nvidia, plus you get stuttering as well. So far, most of the DX12 titles that have come out are less than impressive. I will leave this here, though:
I admit I haven't looked very hard, but do you know where I can find RX 480 benches for BF1 that include both DX11 and DX12? I have been coming up short.
 

DisEnchantment

Golden Member
Mar 3, 2017
Ryzen 5 1400 performance in games compared to the i5 7400 and Pentium G4560


  • 00:16 — OCing at
  • 00:41 — Specs
  • 01:18 — Battlefield 1 DX12
  • 04:03 — Fallout 4
  • 04:51 — GTA 5
  • 06:49 — Hitman DX12
  • 07:39 — Just Cause 3
  • 08:06 — Assassin’s Creed Unity
  • 08:57 — The Witcher 3
  • 10:02 — Rise Of The Tomb Raider DX12
https://videocardz.com/67773/amd-ryzen-5-1400-gaming-performance-leaked

The 1400 OC manages to keep up with the i5 7400 while at only 60-70% load.

Also interesting is that the owner manages to OC the 1400 to 3.8 GHz on the B350 board, with RAM at 2667 MT/s
[ASRock Fatal1ty AB350 Gaming K4 AM4 motherboard, BIOS updated to 2.20]


Using the reliable Mk.1 Eyeball:

01:18 — Battlefield 1 DX12 --> small lead i5 7400
04:03 — Fallout 4 --> definitive lead by i5 7400
04:51 — GTA 5 --> lead by i5 7400
06:49 — Hitman DX12 --> small lead by Ryzen 1400 OC
07:39 — Just Cause 3 --> very small lead by i5 7400
08:06 — Assassin’s Creed Unity --> definitive lead by Ryzen 1400 OC
08:57 — The Witcher 3 --> lead by Ryzen 1400 OC
10:02 — Rise Of The Tomb Raider DX12 --> resounding lead by Ryzen 1400 OC

Also, both the G4560 and the 7400 are already very close to 100% load, so I would suppose not all cores were fully utilized on the R5 1400.
That makes the R5 1500X very interesting at under $200.
 

imported_jjj

Senior member
Feb 14, 2009
In BF1, though, it's not just Nvidia losing frames: fps goes down on both AMD and Nvidia, plus you get stuttering as well. So far, most of the DX12 titles that have come out are less than impressive. I will leave this here, though:

1479888886lJNAO2Au4H_2_2_l.png


1479888886lJNAO2Au4H_3_2_l.png


This doesn't have much to do with the conversation.
You don't mention the CPU used, but it doesn't show 4 cores vs. 8, and the resolution used is appropriate for the GPU, so it isn't high-FPS testing. On top of that, both results are under DX12, and it's not wise to assume that it's all about async.
Even so, the Nvidia card gains 0.4% while the AMD card gains 7.5% with async enabled, and at a lower resolution (or with a faster GPU) the gap would widen.
 

imported_jjj

Senior member
Feb 14, 2009
The issue is DX11-to-DX12 scaling WITH more cores, not 4. On 4 cores Nvidia fakes it pretty well.

Edit: That GamersNexus review is done with a 6-core, so it does help a bit:
bf1-benchmark-1080p-dx11.png
bf1-benchmark-1080p-dx12.png


The differences in scaling are more visible with higher-end cards, and Nvidia does take a big hit under DX12 while the Radeons are flattish.
 

lolfail9001

Golden Member
Sep 9, 2016
Or something isn't right with DX12 performance compared to DX11 with Nvidia cards.
It is certainly the case; nV seems to have higher overhead in DX12, at least with Ryzen.
except it does not, not really.
Sure, explain Deus Ex DX12 performance, if you can. I will laugh in the meantime. To make it short for you: on both AMD and nV GPUs, it loses FPS in a CPU-limited scenario.
 

ryzenmaster

Member
Mar 19, 2017
The issue is DX11-to-DX12 scaling WITH more cores, not 4. On 4 cores Nvidia fakes it pretty well.

Fakes it on 4 cores? Care to explain?

I am convinced that prior to Pascal, Nvidia would not gain anything from multithreaded graphics. But seeing that, at least in some cases, their latest lineup appears to scale a little bit implies that they did something. Now I wouldn't know whether that something is at the hardware or the driver level.
 

imported_jjj

Senior member
Feb 14, 2009
Fakes it on 4 cores? Care to explain?

I am convinced that prior to Pascal, Nvidia would not gain anything from multithreaded graphics. But seeing that, at least in some cases, their latest lineup appears to scale a little bit implies that they did something. Now I wouldn't know whether that something is at the hardware or the driver level.

Posted this on the previous page https://forums.anandtech.com/thread...and-discussion.2499879/page-215#post-38823668
Only 4 games were tested in both DX11 and DX12, but in those 4 games the quad core gains overall while the 6900K and 1800X octa-cores get hit hard.
How Nvidia "fakes it" is hard to say; maybe they just take better advantage of ST perf under DX12, and what hurts them is the lack of proper async compute. Maybe the poor scaling to more cores is down to latency gaining in relevance, combined with the inability to use the extra cores? However, all of that is speculation, so better not to take it as fact; we need more data to understand the full picture.
 

lixlax

Member
Nov 6, 2014
Is there any game where DX12 actually runs the same as or better than DX11 regardless of the hardware used? Vulkan (and Mantle) look to be doing much better at what they were intended to do. I remember my 860K literally gaining 40-50% more fps in BF4 multiplayer when using Mantle over DX11. Plus, I don't remember seeing benchmarks where higher-end hardware loses a meaningful amount of performance when using these two APIs.
I think the best examples are the aforementioned BF4 and BF1. Both are from the same company and even on the same engine (although updated for BF1). Mantle gained a tiny bit of performance even on high-end machines while running flawlessly, while DX12 has all kinds of performance issues in BF1. I don't want to believe DICE got that lazy with the DX12 implementation.

DX12 seems like a mess right now; maybe game engines have to be rebuilt from the ground up for this API. As for Ryzen generally losing more performance than Intel: that has to do with the optimizations inside the game engine as well (because of the low-level nature of these APIs), since Ryzen wasn't out when these games were released or patched.
 

Spjut

Senior member
Apr 9, 2011
Sniper Elite 4 has been shown to run better in DX12 than DX11, both for AMD and Nvidia.
Rise of the Tomb Raider also runs better in DX12 if you have a slow or old CPU, but almost all benchmark sites just use highly overclocked i7 CPUs, so DX12's CPU advantage is rarely shown.
 

french toast

Senior member
Feb 22, 2017
We don't have a true DX12 engine yet, and it does not help that Nvidia is holding things up with their antiquated technology and ancient single-threaded drivers.
Yes, they have the fastest and most efficient DX11 uarch, but they are not really helping things move forward with regard to next-gen APIs at all, just throwing money at GameWorks tie-ins that force games like Mass Effect to use DX11 only, in 2017 no less.
Hopefully Volta gets with the times, and Nvidia's DX12 driver teams as well.