
Ryzen's poor performance with Nvidia GPUs. Foul play? Did Nvidia know?


Did Nvidia know about the Ryzen performance issue and choose not to fix it?

  • Yes, Nvidia has guilt regarding this issue. They wanted to damage AMD. (Votes: 45, 38.8%)
  • No, Nvidia is innocent regarding this issue. They simply didn't know, or it's out of their hands. (Votes: 71, 61.2%)
  • Total voters: 116
LOL where do you guys get this stuff? 😀🙄😵

Do you just make things up as you go along? NVidia drivers scaling poorly on CPUs with more than four cores/threads, eh? Then by gosh, how do you explain this? As far back as Kepler, NVidia's drivers scaled with HT enabled on a 3770K, whereas AMD's driver choked:

[image: imGNVM.png]


Perhaps something a bit more modern then? How about a GTX 1080 scaling all the way to 10 cores/20 threads in Ghost Recon Wildlands:

[image: XkFRmR.png]


And just a few months ago, Computerbase.de did a test on CPU scaling with a Titan X Pascal, and look at what they found.

[image: fzVwMX.png]


The truth is, you guys have no idea what you're talking about. NVidia's driver scales wonderfully on high core/threaded CPUs, and NVidia's drivers have been native 64 bit for years 🙄

Also, CPU scaling has more to do with the game itself than the drivers. If a game is programmed to only use four threads, then no amount of driver trickery will change that.
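A toy sketch of that last point (purely illustrative, not taken from any real engine or driver): if the game itself caps its worker pool at four threads, extra cores never reduce the per-frame work, no matter what the driver does.

```python
def simulate_frame_rounds(worker_cap: int, available_cores: int,
                          jobs: int = 16) -> int:
    """Rounds of work needed per frame when the engine itself caps
    parallelism at `worker_cap` threads (a hypothetical model)."""
    usable = min(worker_cap, available_cores)
    return -(-jobs // usable)  # ceiling division

# A 4-thread engine sees the same per-frame cost on 4, 8, or 16 cores;
# only raising the engine's own thread cap would help.
for cores in (4, 8, 16):
    print(cores, "cores ->", simulate_frame_rounds(worker_cap=4,
                                                   available_cores=cores))
```

The cap lives in the application here, which is exactly why no driver-side change can lift it.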

Could you post the same images with respect to 1080p, 1440p and 4K?
 
No problem. Just needed the correction. It's already a mess trying to explain the problem to deniers anyway. I wanted to make sure they weren't filling the thread with useless benches that don't actually apply to the issue.

Also, the frequency with which people visit their tech sites varies. It's hard to keep up with the minute-to-minute stuff.
 
LOL where do you guys get this stuff? [...] NVidia's driver scales wonderfully on high core/threaded CPUs, and NVidia's drivers have been native 64 bit for years [...] If a game is programmed to only use four threads, then no amount of driver trickery will change that.
As I already corrected him on, the problem is with DX12, not DX11. It's also somewhat new: pcgameshardware.de showed scaling in October, but by Computerbase.de's Ryzen review it had stopped.
 
Developers can certainly optimize for nVidia hardware. You can only squeeze so much blood out of a turnip, however.

DX12 performance gains hit their cap earlier with nVidia hardware (at least, prior to Volta) because of the software scheduler and the need for multi-threaded DCLs. There is a cost associated with software simulation of hardware-level features. That's what you are seeing in these results.
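The scheduler point above is essentially Amdahl's law. A minimal sketch, with invented serial fractions (not measured NVidia numbers): if command scheduling stays serial on the CPU, the achievable speedup from extra cores caps out early.

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup on `cores` workers when
    `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical comparison: a software scheduler that serializes 20% of
# per-frame CPU work vs. hardware-level scheduling that serializes 5%.
for cores in (2, 4, 8, 16):
    sw = amdahl_speedup(0.20, cores)
    hw = amdahl_speedup(0.05, cores)
    print(f"{cores:2d} cores: software {sw:.2f}x vs hardware {hw:.2f}x")
```

With a 20% serial share, speedup can never exceed 5x no matter how many cores you add, which is the kind of early plateau the DCL argument describes.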
 
If it's really only DX12, can you really blame nVidia for it? Developers are much more in control there.
Have you read any of the other posts? This only applies to Nvidia, so it's not the developers, and these games did scale properly in the past on Nvidia video cards.
 
Developers can certainly optimize for nVidia hardware. You can only squeeze so much blood out of a turnip, however.

DX12 performance gains hit their cap earlier with nVidia hardware (at least, prior to Volta) because of the software scheduler and the need for multi-threaded DCLs. There is a cost associated with software simulation of hardware-level features. That's what you are seeing in these results.
This isn't about the DX12 penalty with Nvidia. This is a relatively recent change in their drivers. I surmise it was done to improve performance on 4c/8t Intel processors, since that is the most likely configuration in use. That is now the only configuration not seeing a penalty.
 
Have you read any of the other posts? This only applies to Nvidia, so it's not the developers, and these games did scale properly in the past on Nvidia video cards.

Wait, you are suggesting that nVidia has regressed performance in DX12 drivers? None of the posts are suggesting that. 480 vs 1060 is a different topic altogether.
 
Wait, you are suggesting that nVidia has regressed performance in DX12 drivers? None of the posts are suggesting that. 480 vs 1060 is a different topic altogether.
That is exactly what I am suggesting and have stated probably a dozen times. I outlined it in a pretty detailed post here.

https://forums.anandtech.com/thread...-did-nvidia-know.2503650/page-4#post-38843902

My current working theory is that most people were using normal i7s and i5s, and NVidia optimized the drivers for those CPUs so they could get rid of the performance penalty when using DX12. The downside is that scaling stops at 4 cores. On more than 4 cores, not only does scaling stop, but those CPUs still suffer from the Nvidia DX12 performance penalty. It's that last part that adds to Ryzen's performance discrepancy: not only are its extra cores not being used, not only is it down on clocks and IPC, but it still has to deal with the CPU overhead of the Nvidia driver.
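That theory can be written down as a toy frame-time model (the numbers and the thread cap here are invented for illustration, not measured driver behavior): the driver feeds work to at most four threads, and a fixed CPU overhead is paid on top, so a chip with more but individually slower cores loses twice.

```python
def frame_time_ms(cores: int, per_core_throughput: float,
                  game_work: float = 40.0, driver_overhead: float = 6.0,
                  driver_thread_cap: int = 4) -> float:
    """Hypothetical model: parallel game work spreads over at most
    `driver_thread_cap` threads; a fixed driver overhead is added on top."""
    usable = min(cores, driver_thread_cap)
    return game_work / (usable * per_core_throughput) + driver_overhead

# A 4-core chip with stronger per-core throughput (1.0) beats an 8-core
# chip with weaker cores (0.9), because cores 5-8 are never used:
quad = frame_time_ms(cores=4, per_core_throughput=1.0)   # 16.0 ms
octo = frame_time_ms(cores=8, per_core_throughput=0.9)   # ~17.1 ms
```

Under these made-up numbers, the extra cores are pure dead weight while the driver overhead is still paid in full, which matches the shape of the complaint.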
 
nVidia has had ongoing issues with DX12. I knew it was only a matter of time until someone blamed the devs though.

nVidia being faster in DX11 isn't an "ongoing issue".

That is exactly what I am suggesting and have stated probably a dozen times. I outlined it in a pretty detailed post here.
https://forums.anandtech.com/thread...-did-nvidia-know.2503650/page-4#post-38843902

That seems like a pretty big stretch. You'd need to compare driver versions against the same version of the game to come to that conclusion.
 
Yes, let us blame nVidia for a bad DX12 port when the developers are now responsible for >90% of the work.

Funny that the same people never blamed AMD for their bad DX11/OpenGL performance. Guess DX11 is more low-level than DX12. 🙄
 
Each DX12 port is unique. Look at Tomb Raider and Deus Ex. Nixxes did both ports, and yet the results are different:
Deus Ex: DX12 is 33% slower on nVidia
Tomb Raider: DX12 is ~30% faster on nVidia

Both in CPU-limited scenarios.
 
Why do we need drivers anyway? Everything should work right out of the box, like Linux... (really, 90% of Linux drivers are in the kernel)
 
Yes, let us blame nVidia for a bad DX12 port when the developers are now responsible for >90% of the work.

Funny that the same people never blamed AMD for their bad DX11/OpenGL performance. Guess DX11 is more low-level than DX12. 🙄
More deflection. Predictably so.
 
More deflection. Predictably so.

What do you expect him to say? The theory that NVidia's drivers have issues with scaling above four cores is just pure uneducated speculation, not borne of any facts. I've given an example of NVidia's driver scaling to as high as 10 cores, then I was told that this "problem" affected DX12 only, which is preposterous if you know anything about DX11 and DX12. Then I used Gears of War 4, which scales to six threads, and was told that it's not a proper DX12 title even though it has no DX11 rendering path. 🙄

So I used Ashes of the Singularity, an AMD sponsored title which clearly showed a 5960x significantly outperforming a 4770K in DX12, and was told that it's just a synthetic benchmark 😵

So now that this thread has obviously been hijacked by trolls, the real question is why hasn't it been locked?😕
 
You claim nVidia has "ongoing issues" with DX12, but "it's faster with DX11" is ultimately what that issue is, isn't it?

I don't recall anyone blaming/crediting the devs for AMD's overall DX11 performance. Specific games like Gameworks titles, sure. But nobody claimed the devs were coding DX11 poorly in a way that was affecting AMD and not nVidia.

What do you expect him to say? The theory that NVidia's drivers have issues with scaling above four cores is just pure uneducated speculation, not borne of any facts. [...] So now that this thread has obviously been hijacked by trolls, the real question is why hasn't it been locked?😕

So, is it the different games or the different driver optimizations for the games? If the games scale with one brand but not the other, that should let the game off the hook. I haven't looked at it closely enough to say anything. But dismissing out of hand that it's nVidia's drivers and then placing the blame squarely on the devs reeks of typical nVidia excuses. They always blame someone else. When I read forum posters immediately jumping in with it, I assume they are just delivering nVidia's blame-everyone-else canned response.
 
Then I used Gears of War 4, which scales to six threads, and was told that it's not a proper DX12 title even though it has no DX11 rendering path.
Unreal Engine has a long development process optimizing for a given API. You're not going to have a completely revamped DX12-only engine that sheds all the previous baggage just because Microsoft decided to port Gears of War to the PC.
So I used Ashes of the Singularity, an AMD sponsored title which clearly showed a 5960x significantly outperforming a 4770K in DX12, and was told that it's just a synthetic benchmark 😵
What's so enlightening about that? GPU-bound tests showed better scaling on the Fury X vs the 980 Ti. Also, this is straight from Kyle Bennett:

First and foremost, I hate the AotS benchmark for use in any kind of graphical / GPU data collection. It has always proven to be an outlier in the world of canned GPU benchmarks. There have been a lot of conclusions that it has pointed to that have simply not panned out in the world of Triple A gaming titles. We have even used it here recently, but against my liking. I simply do not merit its abilities to give any sort of direction in terms of GPU performance. You may not like my opinion on that, but it is likely not the first time that has happened.
He later clarified:
....it is a game that is basically played by nobody but still used by AMD as a credible benchmark as to its gaming performance.
Disparaging as it may sound but his problem is with the benchmark, not the engine:
My remarks are to the AotS benchmark and not the game engine. Oxide is doing some awesome things with that engine and I am very happy to see it moving forward in the VR world. Awesome stuff.
Just like Doom isn't going to reflect a proliferation of AAA games with Vulkan support optimized for specific hardware, AotS isn't a reflection of well-threaded DX12 games flooding the market.
So now that this thread has obviously been hijacked by trolls, the real question is why hasn't it been locked?😕
A hardly surprising non-argument. How about you explain this:
https://forums.anandtech.com/thread...and-discussion.2499879/page-216#post-38824172
 
They have absolutely no reason to sabotage Ryzen's parade on purpose. AMD's CPU department isn't their direct competitor, and Ryzen sells very well without their optimized drivers. If they can't get their drivers working well with Ryzen, it will only mean more Vega sales in the future. No, nvidia's best interest is to fix their drivers ASAP, as they want as many Ryzen platforms as possible paired with their GPUs.

I haven't followed this thread from start to finish, but I'm going to have to agree here. Nvidia (at worst) got caught with their pants down. Now people are snapping up AM4 systems with 6-8 cores and SMT, and they're starting to notice the problem.
 
So, is it the different games or the different driver optimizations for the games? If the games scale with one brand but not the other, that should let the game off the hook. I haven't looked at it closely enough to say anything. But dismissing out of hand that it's nVidia's drivers and then placing the blame squarely on the devs reeks of typical nVidia excuses. They always blame someone else. When I read forum posters immediately jumping in with it, I assume they are just delivering nVidia's blame-everyone-else canned response.

I'm not dismissing that the driver is at fault. I think it is, somehow, but not intentionally. Likely it's just that NVidia has to optimize their drivers for AMD's SMT. This is the first SMT-enabled CPU that AMD has ever produced, to my knowledge, and it's obviously different from Intel's. That, combined with the fact that AMD never sent any samples to NVidia to test, means that NVidia never got a chance to optimize their drivers for Ryzen.

Intel on the other hand has had SMT capable CPUs for years out in the field, and NVidia has definitely optimized for it as seen by the benchmarks I posted a few pages back.
 