Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion


Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
And what else? That's the issue with generalization. So far I have yet to see a test confirming that it's observed outside of RotR.
I have noted them over and over again. You disagree with what the numbers show, and for the most part Ryzen seems to be holding its own in DX11 games, but fine. I am not trying to turn this into an AMD-against-the-world thing. I have shown my homework. While we see the problem in Rise of the Tomb Raider, it's not the best example because its performance is sporadic. In all the games with both DX11 and DX12 versions the problem can be easily identified, and while AMD can work with the devs to patch the games to make better use of AMD's more abundant resources, that only works around the issue with the Nvidia drivers.

I, and many others, have shown examples of not just Ryzen but Intel's own HEDT lineup having serious issues when running DX12 games. This isn't about whether a 1700, or even an 1800X, should be beating a 7700K. It shouldn't; the 7700K's IPC and clock advantage means Ryzen should be behind in all but the most heavily threaded games out there. But honestly, this topic is mostly dead. When Vega hits, if it ever does, all of a sudden it will be apparent what is happening, and the onus will be on Nvidia to clean it up.

I should note that if you go back six months, Nvidia didn't have this issue. Something changed in their drivers in January or later. From that point on, 4-core CPUs stopped seeing an overhead penalty in DX12, while CPUs with more cores started having their core usage capped and also had to deal with the overhead.

These are the numbers imported_jjj pulled from ComputerBase.de's overly detailed Ryzen review.
BF1 720p (DX11 → DX12 avg FPS)
6900K: 143.8 → 122.4 (down 14.9%)
1800X: 122.4 → 90.7 (down 25.9%)
7700K: 116.4 → 127.6 (up 9.6%)

Deus Ex 720p (DX11 → DX12)
6900K: 106.7 → 83.0 (down 22.2%)
1800X: 80.5 → 63.6 (down 21.0%)
7700K: 87.1 → 83.6 (down 4.0%)

Rise of the Tomb Raider 720p (DX11 → DX12)
6900K: 165.7 → 172.5 (up 4.1%)
1800X: 135.7 → 117.5 (down 13.4%)
7700K: 152.0 → 168.2 (up 10.7%)

Total War: Warhammer 720p (DX11 → DX12)
6900K: 45.5 → 34.6 (down 24.0%)
1800X: 40.3 → 30.7 (down 23.8%)
7700K: 43.3 → 42.4 (down 2.1%)
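
For clarity, the up/down percentages are just the relative change from the DX11 average to the DX12 average. A minimal check of the math, using the BF1 rows above (my own illustration, not from the review):

#include <cstdio>

// Percent change from the DX11 average FPS to the DX12 average FPS.
static double delta(double dx11, double dx12) {
    return (dx12 - dx11) / dx11 * 100.0;
}

int main() {
    printf("6900K: %+.1f%%\n", delta(143.8, 122.4)); // -14.9%
    printf("1800X: %+.1f%%\n", delta(122.4,  90.7)); // -25.9%
    printf("7700K: %+.1f%%\n", delta(116.4, 127.6)); //  +9.6%
}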
 
Reactions: Space Tyrant

coffeemonster

Senior member
Apr 18, 2015
241
87
101
I should note that if you go back six months, Nvidia didn't have this issue. Something changed in their drivers in January or later. From that point on, 4-core CPUs stopped seeing an overhead penalty in DX12, while CPUs with more cores started having their core usage capped and also had to deal with the overhead.

These are the numbers imported_jjj pulled from ComputerBase.de's overly detailed Ryzen review.
I wonder if you can spoof the driver into seeing a Ryzen 7 as an Intel quad core. Curious what sort of performance changes you might see.
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
I wonder if you can spoof the driver into seeing a Ryzen 7 as an Intel quad core. Curious what sort of performance changes you might see.

I don't think so. On top of capping the driver at 4 threads, Nvidia seems to have done something specifically for the Skylake-and-newer lineup to get past the driver overhead penalty. SB, IVB, and HSW all see overhead penalties in line with what you'd expect when switching to DX12 on an Nvidia card. So I doubt it's just reading core count and GenuineIntel; it would have to be reading the full CPUID and using that as the basis for how the driver behaves and assigns threads.
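
For what it's worth, the only identity information a driver has to work with here is CPUID: the vendor string, family/model, and the thread count. A minimal sketch of reading those fields (MSVC intrinsics; my own illustration of what a driver could key off, not anything from Nvidia's actual code):

#include <intrin.h>   // __cpuid (MSVC-specific intrinsic)
#include <cstdio>
#include <cstring>
#include <thread>

int main() {
    int regs[4] = {};

    // Leaf 0: the vendor string lives in EBX, EDX, ECX
    // ("GenuineIntel" or "AuthenticAMD").
    __cpuid(regs, 0);
    char vendor[13] = {};
    memcpy(vendor + 0, &regs[1], 4); // EBX
    memcpy(vendor + 4, &regs[3], 4); // EDX
    memcpy(vendor + 8, &regs[2], 4); // ECX

    // Leaf 1: family/model in EAX, which is how a driver could single
    // out Skylake specifically rather than keying on the vendor alone.
    __cpuid(regs, 1);
    int family = ((regs[0] >> 8) & 0xF) + ((regs[0] >> 20) & 0xFF);
    int model  = ((regs[0] >> 4) & 0xF) | ((regs[0] >> 12) & 0xF0);

    unsigned threads = std::thread::hardware_concurrency();
    printf("%s, family 0x%X, model 0x%X, %u hardware threads\n",
           vendor, family, model, threads);
}

And since CPUID is an unprivileged instruction the driver can execute directly, there's nothing to intercept from user space; spoofing it would take something like a hypervisor, which is why I don't think the experiment is practical.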

It just helps further the whole "it's a Ryzen thing" or "it's a CCX thing" narrative, even though the numbers on a Haswell system and a Ryzen system look the same, and Haswell is right where we were expecting Ryzen to land core for core.
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
These numbers imported_jjj pulled from the overly detailed Computerbase.de's Ryzen review.
If you mean these numbers, then you have managed not only to completely miss my point yet again, but also to end up misinformed about the case.

1. No, these particular titles have DX12 perf issues on nV in general, and in the case of BF1 even on AMD GPUs.

2. The RotR situation I reference is nV + DX12 ending up as a heavier load on CPU threads compared to, say, AMD + DX11.
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
If you mean these numbers, then you have managed not only to completely miss my point yet again, but also to end up misinformed about the case.

1. No, these particular titles have DX12 perf issues on nV in general, and in the case of BF1 even on AMD GPUs.

2. The RotR situation I reference is nV + DX12 ending up as a heavier load on CPU threads compared to, say, AMD + DX11.

Answer one simple question without reaching for some random example from some other benchmark. If the problem is with DX12 in general, why does the 7700K see better performance when running DX12 in almost all benchmarks, and why is its one drop in performance so much smaller than the other CPUs'?

1. Now, to respond to your first point: DX12 perf issues are one thing. Polaris, and even GCN as far back as 1.0, was built to support most of DX12's design aspects (DX12 was influenced by AMD's Mantle), and even so it sees performance drops in DX12. That is not, and has never been, my point. My point is how that performance shows up in the reviews. The 6950X/6900K/6850K numbers are the most telling: they tend to get stuck within a percent of each other. That wasn't always the case; Nvidia cards used to scale with cores in DX12.

2. I personally ignore Tomb Raider results because it's an unreliable benchmark in the best of circumstances. DX11 is always going to be a weak point for AMD, as its scheduling is at the mercy of developers using an API that isn't well geared for multi-core use. Once it switches to DX12, performance shoots up (well, in some areas). That doesn't negate the relatively recent thread capping in DX12 with Nvidia cards, and I don't see how it proves any point. This is also one of the reasons DX12 sometimes sees a loss on AMD cards in BF1: Frostbite is a heavily threaded engine, so it gains little extra from switching to DX12 (core-usage wise).
 
Reactions: Space Tyrant

Rayniac

Member
Oct 23, 2016
78
13
41
How are the memory speed issues progressing? Is it reasonable to hope that BIOSes will eventually mature to the point that all memory sticks can be run at their rated speeds?
 

T1beriu

Member
Mar 3, 2017
165
150
81
How are the memory speed issues progressing? Is it reasonable to hope that BIOSes will eventually mature to the point that all memory sticks can be run at their rated speeds?

Short answer: yes, mostly yes.

Long answer:
James Prior, product manager for Ryzen, said:
Ryzen owners and potential Ryzen owners are interested in knowing: will the new AGESA update offer better memory support?

James: Absolutely. The previous May update had this in mind and we continue to focus on this with the update we have planned for later this month and indeed throughout the rest of this year. It’s not just May 2017 and then we’re done though. We’re going to be increasing performance in many different ways and one of those is increasing memory compatibility – we’re also working with memory vendors as well as application and game developers.

More info & source: An Interview With AMD: The Latest On Ryzen Memory Support, Game Performance And Ryzen 3's Launch (3 Pages)
 
Reactions: Rayniac

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
Answer one simple question without reaching for some random example from some other benchmark. If the problem is with DX12 in general, why does the 7700K see better performance when running DX12 in almost all benchmarks, and why is its one drop in performance so much smaller than the other CPUs'?
What problem? The Ryzen problem is not with DX12. The nV one is not either, as funny as that is.
2. I personally ignore Tomb Raider results because it's an unreliable benchmark in the best of circumstances. DX11 is always going to be a weak point for AMD, as its scheduling is at the mercy of developers using an API that isn't well geared for multi-core use. Once it switches to DX12, performance shoots up (well, in some areas). That doesn't negate the relatively recent thread capping in DX12 with Nvidia cards, and I don't see how it proves any point. This is also one of the reasons DX12 sometimes sees a loss on AMD cards in BF1: Frostbite is a heavily threaded engine, so it gains little extra from switching to DX12 (core-usage wise).
If you ignore Tomb Raider, then it's even better since Adored had nothing else to offer.
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
What problem? The Ryzen problem is not with DX12. The nV one is not either, as funny as that is.

If you ignore Tomb Raider, then it's even better since Adored had nothing else to offer.
If you think I am following Adored on this, you are sadly mistaken. He didn't know what he was seeing, just that the Tomb Raider scores were out of whack. He collected some good information, and he was right: when something is that random and that far outside expectations, it should be analyzed, and until then it shouldn't be used to generate a performance average.

But that isn't where I got this opinion. I researched more and more reviews and noticed the same trend. I read the ComputerBase review and saw the way the 6850K, 6900K, and 6950X capped off in performance. I thought it was a DX12 issue until I read a core-scaling test by PCGamesHardware.de from October of last year in which the 6900K scaled fine.

Ryzen doesn't have a problem with DX12. The 6900K doesn't have an issue with DX12.

You know, I am done talking to you. You haven't actually brought any information to the table and can only answer with "of course that's not the case" or toss up random benches that don't apply or are ancient.
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
Does anyone here have a 1700 or higher and a 1080 or higher? It just occurred to me that we can actually test for this.

Really, what we need is:
An R7 1700 or higher
A 1080 or higher
Access to any of the four major games with both DX11 and DX12 paths: BF1, Rise of the Tomb Raider, Deus Ex, or Total War: Warhammer
BF1 is a bit of a wildcard; its multithreaded performance really only shows up in MP, not in the demo, but still.
And both the latest Nvidia driver and the driver used in the PCGamesHardware.de BF1 review (373.06). I am willing to bet the 373.06 performance will be much greater.
It can be downloaded from here: http://www.nvidia.com/download/driverResults.aspx/108324/en-us
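
If someone does run this, a frame-time capture makes the driver comparison cleaner than eyeballing the in-game counter. A rough sketch that averages FPS from a PresentMon CSV (assumes the standard MsBetweenPresents column; the file name is just an example):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Average FPS from a PresentMon capture: locate the MsBetweenPresents
// column in the header row, then average the frame times across all rows.
int main(int argc, char** argv) {
    std::ifstream in(argc > 1 ? argv[1] : "bf1_capture.csv"); // example name
    std::string line, cell;
    if (!std::getline(in, line)) return 1;

    std::stringstream header(line);
    int target = -1;
    for (int i = 0; std::getline(header, cell, ','); ++i)
        if (cell == "MsBetweenPresents") target = i;
    if (target < 0) return 1; // column not found

    double totalMs = 0.0;
    long frames = 0;
    while (std::getline(in, line)) {
        std::stringstream row(line);
        for (int i = 0; std::getline(row, cell, ','); ++i)
            if (i == target) { totalMs += std::stod(cell); ++frames; }
    }
    if (frames > 0)
        std::cout << "avg FPS: " << frames * 1000.0 / totalMs << "\n";
    return 0;
}

Run one capture per driver on the same map and settings and compare the averages.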
 
Reactions: Space Tyrant

richierich1212

Platinum Member
Jul 5, 2002
2,741
360
126
Does anyone here have a 1700 or higher and a 1080 or higher? It just occurred to me that we can actually test for this.

Really, what we need is:
An R7 1700 or higher
A 1080 or higher
Access to any of the four major games with both DX11 and DX12 paths: BF1, Rise of the Tomb Raider, Deus Ex, or Total War: Warhammer
BF1 is a bit of a wildcard; its multithreaded performance really only shows up in MP, not in the demo, but still.
And both the latest Nvidia driver and the driver used in the PCGamesHardware.de BF1 review (373.06). I am willing to bet the 373.06 performance will be much greater.
It can be downloaded from here: http://www.nvidia.com/download/driverResults.aspx/108324/en-us

I'll try to run the test on my 1700 + EVGA 1070 SC with BF1 multiplayer and see what kind of results I get, even though you're requesting a 1080 or better.
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
I'll try to run the test on my 1700 + EVGA 1070 SC with BF1 multiplayer and see what kind of results I get, even though you're requesting a 1080 or better.
Cool. The lower the res, the better. Thank you, good sir.

Sent from my Pixel XL using Tapatalk
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
If there is over an 80% yield on Ryzen dies, then AMD gets around 230 functional dies from each wafer.

Each die should cost AMD around $33. The margins are quite nice, after all.
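
The arithmetic checks out if you assume a 300 mm wafer, the ~213 mm² Zeppelin die, and a wafer cost somewhere around $7,500 (that last figure is my assumption; AMD doesn't publish it). A quick sketch:

#include <cmath>
#include <cstdio>

int main() {
    const double kPi          = 3.14159265358979;
    const double waferDiamMm  = 300.0;   // standard 300 mm wafer
    const double dieAreaMm2   = 213.0;   // approx. Zeppelin (Ryzen 7) die
    const double waferCostUsd = 7500.0;  // assumed, not a published figure
    const double yield        = 0.80;    // "over 80%" per the post above

    // Common gross-die estimate: wafer area over die area, minus an
    // edge-loss term for the partial dies around the rim.
    const double r = waferDiamMm / 2.0;
    const double gross = kPi * r * r / dieAreaMm2
                       - kPi * waferDiamMm / std::sqrt(2.0 * dieAreaMm2);
    const double good  = gross * yield;

    printf("gross dies: %.0f, good dies: %.0f, cost per good die: $%.2f\n",
           gross, good, waferCostUsd / good);   // ~286, ~229, ~$32.8
}

Which lands right on the ~230 good dies and ~$33 per die quoted above.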
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
If there is over an 80% yield on Ryzen dies, then AMD gets around 230 functional dies from each wafer.

Each die should cost AMD around $33. The margins are quite nice, after all.
That's for full 8-core dies. You can probably add at least 10% if you include the 6- and 4-core parts. At least.
 
Reactions: Glo.

richierich1212

Platinum Member
Jul 5, 2002
2,741
360
126
Cool. The lower the res, the better. Thank you, good sir.

Sent from my Pixel XL using Tapatalk

This is the error I'm getting with Origin + BF1 + the Nvidia 373.06 GeForce drivers:

[attached screenshot of the error message]