
Discussion [HWUB] Nvidia has a driver overhead problem...


CakeMonster

Golden Member
Nov 22, 2012
1,010
88
91
Not many would, but then again the common advice to gamers, with all nuance removed, is that the GPU makes the biggest difference to performance.
 

Leeea

Senior member
Apr 3, 2020
553
603
96
This is the correct answer. NVIDIA implements a lot more stuff in the drivers. They even added DX12 support to older GPUs.

Who would use a 3090 on an Intel quad core from 6-8 years ago?
Raises hand. Me. The high-end SKUs are far more likely to be in stock at MSRP.

Upgrade the GPU now for the biggest gain, upgrade the CPU later for a smaller gain.


Not many would, but then again the common advice to gamers, with all nuance removed, is that the GPU makes the biggest difference to performance.
That advice still holds; it just shifts to AMD GPUs for the best upgrade performance per dollar.
 

Mopetar

Diamond Member
Jan 31, 2011
5,651
2,412
136
Forza Horizon 4 also demonstrates the same behavior with low-end CPUs.
This game is actually a bit more surprising, because even more recent processors that you might not expect to be affected will bottleneck performance on a 3090.

CPU       | 6900 XT FPS | 3090 FPS
i3 4330   | 83          | 70
R3 1200   | 108         | 89
i3 7100   | 106         | 90
R3 1300X  | 118         | 97
R5 1400   | 117         | 95
i5 4770K  | 140         | 115
R5 1600X  | 131         | 110
R7 1800X  | 131         | 113
i7 4770K  | 140         | 115
R5 2600X  | 143         | 120
R7 2700X  | 144         | 124
i7 7700K  | 179         | 151
i5 9600K  | 179         | 160
i7 9700K  | 179         | 168
R7 3700X  | 179         | 151
i7 10900K | 179         | 174
R5 5600X  | 179         | 178
R9 5900X  | 179         | 178

With the results from AC: Valhalla, you could still run a 7700K without bottlenecking a 3090. Here you effectively need one of the newest Zen 3 CPUs to get maximum performance, and it isn't clear whether even those are free of a bottleneck, because additional cores don't seem to scale performance much compared to clock-speed increases or IPC gains.
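The bottleneck pattern in the table can be approximated with a toy model (my own illustration, not HWUB's methodology): when CPU and GPU work overlap, the frame rate is capped by whichever of the CPU-side frame time (game logic plus driver) or the GPU render time is longer. Driver overhead simply inflates the CPU-side term.

```python
# Toy model (my own illustration, not HWUB's methodology): with CPU and
# GPU work fully overlapped, the frame rate is capped by whichever side
# takes longer per frame.
def effective_fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when the slower of CPU and GPU sets the pace."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical numbers: a GPU ceiling of ~179 FPS (as in the table) paired
# with a slow CPU (~106 FPS worth of per-frame work) vs. a fast one (~200).
GPU_MS = 1000.0 / 179
print(round(effective_fps(1000.0 / 106, GPU_MS)))  # CPU-bound: 106
print(round(effective_fps(1000.0 / 200, GPU_MS)))  # GPU-bound: 179
```

This is why the 6900 XT and 3090 converge on the same 179 FPS cap with fast CPUs, while on slow CPUs the card with the heavier driver lands lower.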
 

GodisanAtheist

Platinum Member
Nov 16, 2006
2,889
1,398
136
This is the correct answer. NVIDIA implements a lot more stuff in the drivers. They even added DX12 support to older GPUs.

Who would use a 3090 on an Intel quad core from 6-8 years ago?
- While I wouldn't run a 3090, I very well could run a 3070 or 3060ti, and if the problem is trickling down to that tier of card as well, then it definitely is an issue for NV.

System rebuilds are a pain in the ass and I definitely don't enjoy them quite as much as I did when I was younger. Need to make sure all kinds of crap is backed up in triplicate and there is the inevitable nightmare scenario where you put everything together, hit the power button, and... nothing. Software licenses with install limits and password managers and just all kinds of crap. I just want to game, and drop in upgrades really help with that.

Additionally, the big promise of these "Close to metal" APIs was the reduced CPU overhead. I still remember being amazed at the additional FPS my HD7950 got on my Q9550 when using the Mantle API over DX11. Similarly, when using Vulkan or DX 12 on my current PC, there is a significant performance uplift that can stretch the life of the processor or make a higher tier upgrade "viable" without leaving a ton of performance on the table.

While I am definitely due for my next 6-8 year system rebuild off my aging Intel quad, and will settle for no less than an 8/16 CPU, I would not hesitate to push off the core system upgrade if I could still get into a higher performance tier with less waste now on the current system with only a GPU upgrade.
 

DAPUNISHER

Super Moderator and Elite Member
Moderator
Aug 22, 2001
22,551
3,648
136

AtenRa

Lifer
Feb 2, 2009
13,571
2,606
126
Igor's Lab did a follow-up with Horizon: Zero Dawn. He doubts the CPU is the real cause; rather, it is the pipeline processing. He hints at engines optimized for AMD hardware possibly being the big factor (the console effect?), but wisely states it needs more investigation, which he does not have time for right now.

https://www.igorslab.de/en/driver-overhead-at-nvidia-and-directx12-msi-geforce-rtx-3090-suprim-versus-msi-radeon-rx-6900xt-gaming-x-and-your-own-drivers/
The same behavior can be seen even in NVIDIA-optimized titles like Watch Dogs: Legion and Cyberpunk 2077, both in raster and ray tracing at 1080p.
In CP77, the RTX 3080 and RTX 3090 are both CPU-limited at 1080p with RT + DLSS on all CPUs below the Ryzen 5600X/Core i7 10700.

 

KompuKare

Senior member
Jul 28, 2009
653
183
116
Upgrade the GPU now for the biggest gain, upgrade the CPU later for a smaller gain.
Yes, those who build a totally new system, or always upgrade everything at once, are not affected, but plenty of people upgrade piecemeal.
That's why we keep seeing these threads all over the internet from people who bought a 3080 paired with what was supposed to be the gaming CPU (like a Skylake or Kaby Lake i5 or i7) and now wonder why their fancy new GPU performs worse than their old 2080 or whatever.
 

PhoBoChai

Member
Oct 10, 2017
119
389
106
Igor's Lab did a follow-up with Horizon: Zero Dawn. He doubts the CPU is the real cause; rather, it is the pipeline processing. He hints at engines optimized for AMD hardware possibly being the big factor (the console effect?), but wisely states it needs more investigation, which he does not have time for right now.

https://www.igorslab.de/en/driver-overhead-at-nvidia-and-directx12-msi-geforce-rtx-3090-suprim-versus-msi-radeon-rx-6900xt-gaming-x-and-your-own-drivers/
I think he's confused, because he talks about async compute, but this effect is seen in games that don't even use async compute. His results show DX12 eating more CPU cycles on NV GPUs, and the only thing that runs between the hardware and the CPU is the driver's interaction with the API.

GameGPU.RU just did a more detailed look today, and they confirm it, showing a Vega 64 beating a 3090, sometimes by more than 20%.


The single-threaded 3DMark draw-call test from those days is very telling, because NV gets a huge bump in ST draw calls as CPU core count increases, which should NOT be possible, since the draw calls are submitted on a single thread without command lists. Their driver automatically parses draw calls and spawns worker threads in DX11, gaining a multithreading advantage even in single-threaded games.

The only question is why, in DX12, they still treat CPU scheduling as if it were DX11, where multithreaded draw-call submission is batched onto the main thread and sent single-threaded. That's not what these APIs are designed to do.
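The submission difference being described can be sketched roughly like this (a hypothetical Python analogy of my own, not real D3D or driver code): DX11 funnels every draw call through one immediate context on one thread, while DX12 lets each worker thread record its own command list, with the queue then executing the lists in submission order.

```python
# Concept sketch (hypothetical analogy, not real D3D or driver code):
# DX11-style submission records everything on one thread, while
# DX12-style command lists let workers record batches in parallel.
from concurrent.futures import ThreadPoolExecutor

def record_batch(draws):
    # Stand-in for recording a command list; returns the batch unchanged.
    return list(draws)

def submit_dx11_style(all_draws):
    # One immediate context: a single thread records and submits everything.
    return record_batch(all_draws)

def submit_dx12_style(per_thread_draws):
    # Each worker records its own command list concurrently; the lists are
    # then executed in submission order, so the output order is preserved.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(record_batch, per_thread_draws)
    return [draw for batch in batches for draw in batch]

work = [["a1", "a2"], ["b1"], ["c1", "c2"]]
assert submit_dx12_style(work) == submit_dx11_style([d for b in work for d in b])
```

The point of the DX12-style path is that the recording cost is spread across cores; a driver that re-serializes submission onto the main thread gives that benefit away.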

Edit: Pay close attention to these results AT found back then.



Note that with 4 cores, NV GPUs sit at 1.4 in the ST test.

But with 6 cores, NV GPUs suddenly post a higher ST result.



All this recent testing shows that NerdTech was right back in early 2017, IMO. NV does some amazing work with their DX11 auto-multithreading, but that flexible driver approach is now a problem in DX12 & Vulkan, with more CPU overhead.
 

DAPUNISHER

Super Moderator and Elite Member
Moderator
Aug 22, 2001
22,551
3,648
136
I think he's confused, because he talks about async compute, but this effect is seen in games that don't even use async compute. His results show DX12 eating more CPU cycles on NV GPUs, and the only thing that runs between the hardware and the CPU is the driver's interaction with the API.

GameGPU.RU just did a more detailed look today, and they confirm it, showing a Vega 64 beating a 3090, sometimes by more than 20%.


The single-threaded 3DMark draw-call test from those days is very telling, because NV gets a huge bump in ST draw calls as CPU core count increases, which should NOT be possible, since the draw calls are submitted on a single thread without command lists. Their driver automatically parses draw calls and spawns worker threads in DX11, gaining a multithreading advantage even in single-threaded games.

The only question is why, in DX12, they still treat CPU scheduling as if it were DX11, where multithreaded draw-call submission is batched onto the main thread and sent single-threaded. That's not what these APIs are designed to do.

Edit: Pay close attention to these results AT found back then.



Note that with 4 cores, NV GPUs sit at 1.4 in the ST test.

But with 6 cores, NV GPUs suddenly post a higher ST result.



All this recent testing shows that NerdTech was right back in early 2017, IMO. NV does some amazing work with their DX11 auto-multithreading, but that flexible driver approach is now a problem in DX12 & Vulkan, with more CPU overhead.
I like their style; shots fired!

"At one time, everyone and their dog was discussing the Radeon's high CPU dependence in DX11, where it really existed and was confirmed by tests many times. So why was it actively discussed then, while now, as if on purpose, everyone stays silent?"

"If you hear yet another blogger declare that Radeons are more CPU-dependent than GeForces, then at best he hasn't kept up with events (how could he, when this has been going on for 6 years!), and at worst he is either incompetent or an interested party."

Right on. And the sensible conclusion, based on how well most of the suspect reviewers cover AMD issues, is that they are shilling by omission. People get accused of tinfoil-hat conspiracies when they broach these topics, but this is more evidence, backed up by well over a decade of precedent, that Nvidia flexes its muscles behind the scenes, holding quite a bit of sway over how and what many reviewers and "tech journalists" cover and report. I will not name any for now. But there are some big reviewers that have been suspiciously lacking in coverage of the topic to this point. I hope it is because they are currently doing the rigorous testing necessary to confirm it. If they fail to weigh in soon though, they need to be called out.
 

Racan

Senior member
Sep 22, 2012
228
140
116
Right on. And the sensible conclusion, based on how well most of the suspect reviewers cover AMD issues, is that they are shilling by omission. People get accused of tinfoil-hat conspiracies when they broach these topics, but this is more evidence, backed up by well over a decade of precedent, that Nvidia flexes its muscles behind the scenes, holding quite a bit of sway over how and what many reviewers and "tech journalists" cover and report. I will not name any for now. But there are some big reviewers that have been suspiciously lacking in coverage of the topic to this point. I hope it is because they are currently doing the rigorous testing necessary to confirm it. If they fail to weigh in soon though, they need to be called out.
Yep, for example Digital Foundry were one of the first to bring up Radeon driver overhead issues back in the day. I'm expecting an investigation into Nvidia driver overhead issues any day now... any day.

 

Justinus

Platinum Member
Oct 10, 2005
2,454
591
136
I'd really like to see if we get the same results with slower GPUs like the 2060S/3060 Ti vs the 5700 XT/6700 XT
It's just one example, but here's FH4 from GameGPU's CPU test on a 3060 Ti:

Screenshot_20210319-065927__01.jpg

You can see the bottlenecking doesn't really start until Zen 1/Zen+ and Haswell. GameGPU does such thorough testing that I'm sure some metrics for the mid-range cards could be compiled from several DX12 game benchmarks.
 

PhoBoChai

Member
Oct 10, 2017
119
389
106
I like their style; shots fired!

"At one time, everyone and their dog was discussing the Radeon's high CPU dependence in DX11, where it really existed and was confirmed by tests many times. So why was it actively discussed then, while now, as if on purpose, everyone stays silent?"

"If you hear yet another blogger declare that Radeons are more CPU-dependent than GeForces, then at best he hasn't kept up with events (how could he, when this has been going on for 6 years!), and at worst he is either incompetent or an interested party."

Right on. And the sensible conclusion, based on how well most of the suspect reviewers cover AMD issues, is that they are shilling by omission. People get accused of tinfoil-hat conspiracies when they broach these topics, but this is more evidence, backed up by well over a decade of precedent, that Nvidia flexes its muscles behind the scenes, holding quite a bit of sway over how and what many reviewers and "tech journalists" cover and report. I will not name any for now. But there are some big reviewers that have been suspiciously lacking in coverage of the topic to this point. I hope it is because they are currently doing the rigorous testing necessary to confirm it. If they fail to weigh in soon though, they need to be called out.
Mate, you can see this clearly with some reviewers. Prime example: in some of their 3060 reviews, compared to the 1060 it's a GREAT upgrade path. The 6700 XT review from the same people? Not compared to the RX 480, or even the 5700 XT; it's only compared to the 3060 Ti and 3070. The narrative is quite different.

Then you see other reviewers trash-talk AMD's 6700 XT and 6800 XT reference cooler design. Like, really? It runs cool, it runs quiet. No problems. Unlike the FE 3090, supposedly a premium GPU, whose memory is baking at over 105°C on an open test bench at 21°C ambient. REALLY?

The NVIDIA narrative is stronk among the tech review community, at least among some of the bigger outlets.

The hilarious thing is that Digital Foundry started this whole "AMD has DX11 driver overhead problems" narrative years ago, when they reviewed the NV-sponsored CODAW/MW. That game pushes PhysX hard on the CPU and destroys the main thread (constantly 100% load), so obviously AMD's method of CPU scheduling gets nuked. Same for Project Cars, if people remember a few years back.
 

Head1985

Golden Member
Jul 8, 2014
1,853
666
136
Yep, for ex Digital Foundry were one of the first to bring up Radeon driver overhead issues back in the day. I'm expecting an investigation into Nvidia driver overhead issues any day now, any day
I am not sure about that, because they have been big NV shills lately. Look at the 6800 XT review video: half the video is about ray tracing and Control. :D
 

Panino Manino

Senior member
Jan 28, 2017
389
482
136
I am not sure about that, because they have been big NV shills lately. Look at the 6800 XT review video: half the video is about ray tracing and Control. :D
It's tangentially related, but they put up a video today with Bloodborne upscaled with AI to 4K/60fps and... it looks horrendous! Remember DLSS 1.0? All the detail in the textures is gone, but they still praise the results!
It's shilling a bit too hard.
 

blckgrffn

Diamond Member
May 1, 2003
7,653
876
126
www.teamjuchems.com
It's tangentially related, but they put up a video today with Bloodborne upscaled with AI to 4K/60fps and... it looks horrendous! Remember DLSS 1.0? All the detail in the textures is gone, but they still praise the results!
It's shilling a bit too hard.
:/

I guess I am naive but I always thought that Digital Foundry actually got into the nuts and bolts of things, especially on a game by game basis. They would really look at a bunch of details and explain some of the technical aspects of what was going on.

It sucks realizing they might be more actively biased than I ever knew. Sigh.
 

Panino Manino

Senior member
Jan 28, 2017
389
482
136
:/

I guess I am naive but I always thought that Digital Foundry actually got into the nuts and bolts of things, especially on a game by game basis. They would really look at a bunch of details and explain some of the technical aspects of what was going on.

It sucks realizing they might be more actively biased than I ever knew. Sigh.
They still do this when possible, thanks to their industry contacts.
But they're a team, and each member has his own preferences and beliefs. Sometimes they're not "shilling" or "lobbying", just talking about something that excites them. Whether we like it or not, pure native resolution is becoming a thing of the past, and image reconstruction/DLSS is where the future lies. Some of them are dreaming of that future, and Bloodborne under DLSS 2.0 would look much better than it does in this video.
 

PingSpike

Lifer
Feb 25, 2004
21,451
360
126
- While I wouldn't run a 3090, I very well could run a 3070 or 3060ti, and if the problem is trickling down to that tier of card as well, then it definitely is an issue for NV.

System rebuilds are a pain in the ass and I definitely don't enjoy them quite as much as I did when I was younger. Need to make sure all kinds of crap is backed up in triplicate and there is the inevitable nightmare scenario where you put everything together, hit the power button, and... nothing. Software licenses with install limits and password managers and just all kinds of crap. I just want to game, and drop in upgrades really help with that.

Additionally, the big promise of these "Close to metal" APIs was the reduced CPU overhead. I still remember being amazed at the additional FPS my HD7950 got on my Q9550 when using the Mantle API over DX11. Similarly, when using Vulkan or DX 12 on my current PC, there is a significant performance uplift that can stretch the life of the processor or make a higher tier upgrade "viable" without leaving a ton of performance on the table.

While I am definitely due for my next 6-8 year system rebuild off my aging Intel quad, and will settle for no less than an 8/16 CPU, I would not hesitate to push off the core system upgrade if I could still get into a higher performance tier with less waste now on the current system with only a GPU upgrade.
Spending big on a GPU ended up making more sense in another way too: GPUs have not gotten any cheaper (the horrifying opposite, in fact!) while CPUs have. If you only had X money a while ago, you'd be happy right about now that you blew it on a high-end GPU and stuck it in an old quad-core; not so much if you had bought a 7700K and some mid-tier card.
 

Mopetar

Diamond Member
Jan 31, 2011
5,651
2,412
136
Looks pretty good at native resolution as well. Even at 1080p it's a gorgeous game. There really isn't anything wrong with gaming at 1080p, and the type of people who can afford a high-quality 4K display are the same people who can afford a high-end GPU to render a native-resolution image on it.

I get the argument that RT hardware isn't good enough right now so DLSS needs to act as a crutch until we get there, but there's nothing wrong with taking a 3090 and playing at 1080p when turning on those effects.
 
