Pascal now supports DX12 resource binding tier 3 with the latest drivers (384.76)


Spjut

Senior member
Apr 9, 2011
928
149
106
Vega is still GCN.

And an inferior GTX 460 with driver support is still inferior to the HD 5850 - unless you are interested in running DX12 stuff on it for academic reasons.

Might as well ask why NVIDIA doesn't allow the GeForce drivers to install in a VM for GPU passthrough, which is far more useful than writing DX12 drivers for 7-8 year old hardware.

Vega keeping similarities to the previous GCN generations doesn't guarantee a damn thing. Similarities between Terascale 1 and 2 didn't keep AMD from moving Terascale 1 to legacy status as early as 2012.
The inferior GTX 460 with driver support is much more likely to play a modern game properly than the HD 5850.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,773
3,596
136
Vega keeping similarities to the previous GCN generations doesn't guarantee a damn thing. Similarities between Terascale 1 and 2 didn't keep AMD from moving Terascale 1 to legacy status as early as 2012.
The inferior GTX 460 with driver support is much more likely to play a modern game properly than the HD 5850.
That's some strange logic - one or two high-end, $500+ GPUs launching shortly with a newer version of the underlying architecture means that AMD might move every older GCN card to legacy, even though the vast majority of those cards are mid-range and constitute the biggest volume of AMD's GPU sales?

Cypress does fine against Fermi in modern games, FYI:

 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
That's some strange logic - one or two high-end, $500+ GPUs launching shortly with a newer version of the underlying architecture means that AMD might move every older GCN card to legacy, even though the vast majority of those cards are mid-range and constitute the biggest volume of AMD's GPU sales?

Cypress does fine against Fermi in modern games, FYI:


Those are just 3 games from before the legacy days (2015 and earlier),
and even then, Fallout 4 is broken if you enable god rays; the video is running with everything disabled/low.
The number of bugs is a lot worse now.
 

Face2Face

Diamond Member
Jun 6, 2001
4,100
215
106
I just picked up a GTX 480 and GTX 470s for super cheap. Let me know if you guys want to see something specific. I plan on trying to run some modern games with them, including some DX12 titles.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,773
3,596
136
Those are just 3 games from before the legacy days (2015 and earlier),
and even then, Fallout 4 is broken if you enable god rays; the video is running with everything disabled/low.
The number of bugs is a lot worse now.
Tell me why this is suddenly important again?

One fine day NVIDIA adds DX12 support for Fermi after two years and NVIDIA is the champion of supporting their old hardware, but when AMD has made their stance very clear from the beginning, it means they're somehow at fault?

If we are talking about DX12 support for 8 year old GPUs, we should also talk about why NVIDIA cards don't work with recent Mac ports that use the Metal API, like Hitman, Mafia III, F1 2016, etc.
 
  • Like
Reactions: DarthKyrie

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Well, their 1070, 1080 and 1080 Ti lose performance under DX12 compared to the DX11 renderer in pretty much all games. Even some of the internal game benchmarks they do okay in seem misleading, as in actual gameplay the FPS difference is hugely in favor of DX11 over DX12, while AMD cards offer much better performance under DX12 and Vulkan, even in DX11-optimized games like The Division, where they go head to head with Nvidia in the DX11 API but take the lead easily under DX12.

Sniper Elite 4, which has the most advanced DX12 implementation to date, sees Nvidia products gain between 1-5 fps, while AMD cards gain from 5 to 15 fps. Heck, the RX 580, especially overclocked ones, comes very close to the 1070 in that game. So a $240 card competes with a $350 one in certain games under DX12.

You really need to stop spreading FUD and garbage. :rolleyes: There are many reasons why a game might run faster in DX11 than in DX12, and by far the most likely is poor optimization. There isn't a single game out there yet that is pure DX12 in every respect, so most of these games are a hybrid between DX11 and DX12 with varying levels of optimization. Some games run faster under DX11 than they do under DX12 on AMD as well, e.g. Deus Ex: MD and BF1.

That said, there are several games which run faster with DX12 (and Vulkan) than with DX11 (and OpenGL) on NVidia. Ashes of the Singularity, Doom, Tom Clancy's The Division, Sniper Elite 4 and Hitman all run faster in DX12 than in DX11 on NVidia hardware. In DX12-only titles like Gears of War 4, Forza H3 etcetera, NVidia also performs very well.

As for why AMD gains more from DX12 than NVidia does, it's simple: NVidia's DX11 driver already mimics many of the functions of DX12 as far as CPU usage and performance are concerned, so obviously they wouldn't gain as much from DX12 as AMD does, because AMD's DX11 driver has always been CPU limited.
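For what it's worth, whether a DX11 driver supports multithreaded command list building isn't guesswork; it's a queryable capability. A minimal C++ sketch of the check (my own illustration, not from any of the benchmarks above; build against d3d11.lib):

Code:
#include <cstdio>
#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default hardware adapter.
    ComPtr<ID3D11Device> device;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, nullptr)))
        return 1;

    // Ask the driver whether it supports concurrent resource creation and
    // driver-side command lists (true multithreaded DX11 command building).
    D3D11_FEATURE_DATA_THREADING threading = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                &threading, sizeof(threading));
    std::printf("DriverConcurrentCreates: %d\n", threading.DriverConcurrentCreates);
    std::printf("DriverCommandLists:      %d\n", threading.DriverCommandLists);
    return 0;
}

If DriverCommandLists comes back FALSE, the D3D11 runtime emulates deferred contexts in software, which is exactly the area where the two vendors' DX11 drivers have historically differed.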
 
  • Like
Reactions: xpea
Mar 11, 2004
23,073
5,554
146
I think it is commendable that Nvidia is still supporting their products, although I'll disagree with people about how much it matters on 7 year old cards (which, by the way, will still run games of that era fine with drivers from that era), or how much finally enabling DX12 years after they claimed the product would support it actually qualifies. So far we haven't seen what level of support it actually is; performance seems downright atrocious, to the point that you wouldn't bother, and any game that is DX12-only will be practically unplayable. But of course, for some insane reason, that doesn't stop people chastising AMD for not offering DX12 support on a product they never claimed would have it, which then turns into criticizing them for EOL'ing driver support for it.

You know damn well what Phynaz means.
The way you provide "facts" here is meaningless without context. Any GPU that's so old will not perform well in a benchmark designed to stress modern hardware.
I find it quite surprising that Nvidia even invested the time and money to get this done. It's quite commendable.

And you know damn well that Phynaz was doing that every bit as much, if not more so. Bacon was posting nothing inflammatory in this thread, yet Phynaz had to start in with the usual BS he does. He's one of several people who, every single time there's an impending AMD GPU release, start posting more, very often clearly just trying to add noise and incite arguments.

The problem here is that Microsoft is competing with itself. Plus it gave a free upgrade option to Windows 10.

AMD is competing with Nvidia.

For me it is completely ridiculous that an inferior GTX 460 enjoys better support than the 5850, and as a consumer I too will think twice before going AMD again.

So what then, a few months after Vega launches they will place GCN cards in Legacy support?

Actually, because of their dominant position Microsoft has basically had to offer longer legacy support than they want to or normally would, since people throw a fit whenever they announce support changes. And that has bitten them too, as it just further keeps the holdouts demanding support. There's a good reason why they were giving Win10 away, and one of those reasons is so that they can better make the case for not supporting stuff that shouldn't be supported any longer. It's also somewhat amusing that you're making that argument, since there are games from that time that have compatibility issues (because of stupid junk like GFWL).

And? Nvidia people acted like Kepler aging poorly compared to the similar AMD products was perfectly fine (it came out in 2012 and 2013; people had already bought the 9xx series, so 6xx and 7xx series performance doesn't matter; you can still technically play games; who cares that AMD saw performance gains and sees very significant ones with DX12? They literally made those arguments, and even criticized AMD for how GCN cards actually improved over time). Oh, and of course DX12 doesn't matter since DX12 is so minor? But now it does, because, look, cards that will perform like garbage running it can check off support for it while AMD's are EOL'ed!

Why do you find it ridiculous that an almost 8 year old product got EOL'ed? Oh, I have no doubt you were very seriously considering which company's products to buy based on whether you could keep playing the latest games on an 8 year old card. If that were the case, why did you buy a 570 when I'm pretty sure the 5700 had pretty awful support for the newest games at the time? Not only that, but you're saying this when your Nvidia card literally died on you? How is a dead card that can't run at all better support than not releasing drivers that support the newest games 8 years later? I'm sure all the people affected by "bumpgate" appreciate the driver support for their dead cards.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
That is not Nvidia winning, that is them not having competition. All of their 1070/80/Ti cards lose performance under DX12 and Vulkan. I think Sniper Elite is the only game where in actual gameplay the Nvidia cards perform mildly better under DX12 compared to DX11.

Even in a game like ROTTR, which was Nvidia's big push into DX12, their cards lose performance in actual gameplay under DX12.

At best they gain like 1-3 fps on average in actual gameplay (SE4), have about the same average in Hitman, and lose performance in all other titles.

No competition, winning, it's just semantics. Either way, Nvidia's strategy is winning. They have the faster cards with either DX version.

It depends on the reviews you look at. More recently, DX12 under Nvidia is more of a sidegrade, but in either case most games perform better in DX11 for both AMD and Nvidia once you take gameplay into account. DX12 is young and full of buggy behavior. Vulkan is working out well, but DX12 is taking a while to get right, and is usually not worth using, even for AMD users.
 
Mar 11, 2004
23,073
5,554
146
You really need to stop spreading FUD and garbage. :rolleyes: There are many reasons why a game might run faster in DX11 than in DX12, and by far the most likely is poor optimization. There isn't a single game out there yet that is pure DX12 in every respect, so most of these games are a hybrid between DX11 and DX12 with varying levels of optimization. Some games run faster under DX11 than they do under DX12 on AMD as well, e.g. Deus Ex: MD and BF1.

That said, there are several games which run faster with DX12 (and Vulkan) than with DX11 (and OpenGL) on NVidia. Ashes of the Singularity, Doom, Tom Clancy's The Division, Sniper Elite 4 and Hitman all run faster in DX12 than in DX11 on NVidia hardware. In DX12-only titles like Gears of War 4, Forza H3 etcetera, NVidia also performs very well.

As for why AMD gains more from DX12 than NVidia does, it's simple: NVidia's DX11 driver already mimics many of the functions of DX12 as far as CPU usage and performance are concerned, so obviously they wouldn't gain as much from DX12 as AMD does, because AMD's DX11 driver has always been CPU limited.

Hasn't that come into dispute more recently? Something about Nvidia's performance seeming to plateau past like 4 cores/threads, which people started noticing with Ryzen (and found held true even on the 8 and 10 core Intel chips)? I guess that might actually argue that it was more CPU limited, but maybe it disputes the idea that AMD was as single core/thread limited as was argued for a while.

And of course AMD supports DX12 better, as its development and AMD's hardware were quite related (features that people wanted for DX12, AMD had hardware support for where Nvidia didn't). Nvidia doesn't typically see gains, but that's because the game and/or the hardware doesn't support DX12 well (see async compute), not because they're "mimicking DX12" with DX11; they were simply running DX11 more efficiently compared to AMD. That is not at all the same as mimicking DX12 at some high level, or else they wouldn't see any gains from DX12/Vulkan at all (and we wouldn't have seen the industry calling for stuff like DX12 and Vulkan; they'd have just gone whole hog on NVidia for offering DX12 performance without needing the large changes of DX12; plus we see games that are well optimized in DX11 but also in DX12, with the latter showing the potential, Doom being a good example). Claiming that Nvidia was/is mimicking DX12 in DX11 is straight-up misinformation. Certainly they were doing DX11 quite efficiently, but limits to performance in DX11 still exist, hence why DX12 still offers very worthwhile improvements. Time will tell how well games take advantage of it; so far the results have been mixed, but there are some promising signs.

Nvidia deserves credit for how well they had DX11 games running. And it's clear they too can see good gains from DX12, but it's also very clear their support for it is not at AMD's level (likewise, AMD's support/performance in DX11 games was not at Nvidia's level, which is why they had to focus on perf/price, since they typically had to use a chip with higher theoretical output to compete, giving up efficiency; we see that change in proper DX12 situations). Between comparable products the differences either way typically aren't drastic (they can be), although for a while AMD took time to offer comparable performance, and I think Nvidia took a while to offer good gains in DX12 games that were well done. It seems that in some DX12 games AMD offers decently better framerates (although there are DX11 games where Nvidia does, while AMD's performance lags comparably). The bigger issue now is that AMD just has not been releasing comparable products across the same range that Nvidia has.
 
Mar 11, 2004
23,073
5,554
146
No competition, winning, it's just semantics. Either way, Nvidia's strategy is winning. They have the faster cards with either DX version.

It depends on the reviews you look at. More recently, DX12 under Nvidia is more of a sidegrade, but in either case most games perform better in DX11 for both AMD and Nvidia once you take gameplay into account. DX12 is young and full of buggy behavior. Vulkan is working out well, but DX12 is taking a while to get right, and is usually not worth using, even for AMD users.

I don't agree. DX12 certainly isn't showing its potential yet, but even when it doesn't necessarily show better framerates, DX12 is often equal, or people say it is smoother than DX11, on AMD in quite a few games. The opposite seems to be the case with Nvidia (it can offer a similar framerate, but people say it feels choppier than DX11 on the same card; I think that improved after one of the driver updates that was supposed to improve DX12). I see a lot of people saying that benchmarks are getting worse at actually representing gameplay in general, especially in multiplayer-heavy games.

I don't think the bugs are a DX12 issue, as we've seen plenty of buggy DX11 releases, and some of the worst offenders happened to also have Nvidia Gameworks features tacked on (Arkham Knight being a good example, I think Watch Dogs was another one, and the French Assassin's Creed game).
 

Guru

Senior member
May 5, 2017
830
361
106
No competition, winning, it's just semantics. Either way, Nvidia's strategy is winning. They have the faster cards with either DX version.

It depends on the reviews you look at. More recently, DX12 under Nvidia is more of a sidegrade, but in either case most games perform better in DX11 for both AMD and Nvidia once you take gameplay into account. DX12 is young and full of buggy behavior. Vulkan is working out well, but DX12 is taking a while to get right, and is usually not worth using, even for AMD users.
Not true, in gameplay pretty much all games perform better under DX12 for AMD, while only SE4 performs better for Nvidia.
 

Guru

Senior member
May 5, 2017
830
361
106
You really need to stop spreading FUD and garbage. :rolleyes: There are many reasons why a game might run faster in DX11 than in DX12, and by far the most likely is poor optimization. There isn't a single game out there yet that is pure DX12 in every respect, so most of these games are a hybrid between DX11 and DX12 with varying levels of optimization. Some games run faster under DX11 than they do under DX12 on AMD as well, e.g. Deus Ex: MD and BF1.

That said, there are several games which run faster with DX12 (and Vulkan) than with DX11 (and OpenGL) on NVidia. Ashes of the Singularity, Doom, Tom Clancy's The Division, Sniper Elite 4 and Hitman all run faster in DX12 than in DX11 on NVidia hardware. In DX12-only titles like Gears of War 4, Forza H3 etcetera, NVidia also performs very well.

As for why AMD gains more from DX12 than NVidia does, it's simple: NVidia's DX11 driver already mimics many of the functions of DX12 as far as CPU usage and performance are concerned, so obviously they wouldn't gain as much from DX12 as AMD does, because AMD's DX11 driver has always been CPU limited.

You are spreading FUD and garbage. If you look at actual gameplay benchmarks, you'd see that Nvidia cards do lose under DX12 compared to DX11. Even Hitman is a wash, where in actual gameplay it's at best equal performance under DX11 and DX12.

Yeah, Doom obviously runs better on Vulkan compared to OpenGL, but you gain like 10% with Nvidia and more like 30% with AMD.

Tom Clancy's The Division again runs slower under DX12 in actual gameplay. Once we get away from the fake internal benches, in actual gameplay the Nvidia cards operate worse under DX12 compared to DX11.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Hasn't that come into dispute more recently? Something about Nvidia's performance seeming to plateau past like 4 cores/threads, which people started noticing with Ryzen (and found held true even on the 8 and 10 core Intel chips)? I guess that might actually argue that it was more CPU limited, but maybe it disputes the idea that AMD was as single core/thread limited as was argued for a while.

NVidia's driver isn't currently limited to four threads. Perhaps in the past it was, but now it can scale to a lot more threads. In fact, DX11 games like Ghost Recon Wildlands and Watch Dogs 2 scale all the way to 10 cores and 20 threads on a 6950x CPU with NVidia hardware. If anything, AMD's DX11 driver is the one that is limited.

[image: CPU scaling benchmark chart]


And of course AMD supports DX12 better, as its development and AMD's hardware were quite related (features that people wanted for DX12, AMD had hardware support for where Nvidia didn't).

And NVidia was just sitting around twiddling their thumbs during DX12 development I suppose. The truth is, there was a lot of collaboration between Microsoft and the IHVs during DX12's development, and NVidia played a major part in this just like AMD, Intel etcetera.

Nvidia doesn't typically see gains, but that's because the game and/or the hardware doesn't support DX12 well (see async compute), not because they're "mimicking DX12" with DX11; they were simply running DX11 more efficiently compared to AMD. That is not at all the same as mimicking DX12 at some high level, or else they wouldn't see any gains from DX12/Vulkan at all (and we wouldn't have seen the industry calling for stuff like DX12 and Vulkan; they'd have just gone whole hog on NVidia for offering DX12 performance without needing the large changes of DX12; plus we see games that are well optimized in DX11 but also in DX12, with the latter showing the potential, Doom being a good example). Claiming that Nvidia was/is mimicking DX12 in DX11 is straight-up misinformation. Certainly they were doing DX11 quite efficiently, but limits to performance in DX11 still exist, hence why DX12 still offers very worthwhile improvements. Time will tell how well games take advantage of it; so far the results have been mixed, but there are some promising signs.

So many things wrong with this paragraph. First off, both Maxwell and Pascal support higher DX12 feature levels than current AMD GPUs, though Intel GPUs reportedly have the highest support level. Also, asynchronous compute has no hardware specification and isn't part of the DX12_0 feature level, the baseline requirement for meeting the DX12 standard. That said, NVidia's Pascal supports asynchronous compute just fine. Maxwell does too, but it's limited to static scheduling, which is why it doesn't work in games.
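To be clear about terms: in D3D12, "async compute" just means feeding the GPU through a separate compute queue alongside the graphics queue; whether the work actually overlaps is up to the hardware/driver scheduler, not the API. A rough C++ sketch of the setup (illustrative only; real code would record workloads and synchronize with fences):

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // The usual graphics ("direct") queue.
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> directQueue;
    device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&directQueue));

    // A second, compute-only queue. Submitting compute work here while
    // graphics work runs on the direct queue is what "async compute" means;
    // how well the two overlap (static partitioning vs. dynamic scheduling)
    // is a hardware/driver property that the API does not mandate.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Cross-queue dependencies are expressed with ID3D12Fence signal/wait.
    return 0;
}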

Regarding my comment about NVidia's DX11 driver mimicking some functions of DX12: how else do you explain Ghost Recon Wildlands scaling to 20 threads with a GTX 1080 and 6950X under DX11? And there are many comments from developers about how hard it is just to break even against NVidia's DX11 driver with their DX12 implementations. This all feeds into the narrative that NVidia has somehow found a way to exploit the power of modern multithreaded CPUs to accelerate GPU functions that would ordinarily be done in hardware on the GPU.
 
  • Like
Reactions: xpea

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Not true, in gameplay pretty much all games perform better under DX12 for AMD, while only SE4 performs better for Nvidia.
This is just one example, but I've seen the actual experience of DX12 vs. DX11 reviewed quite a few times. While DX12 often gives higher FPS for AMD, it often gives worse smoothness. Lots of people notice the same on the forums. It's likely a game-by-game thing to some extent.

http://techreport.com/review/30639/...x-12-performance-in-deus-ex-mankind-divided/3
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
VLIW AMD GPUs (Terascale) were never meant to have WDDM 2.x drivers, and they were never advertised as supported: they lacked various binding features, like indexable resources, and moreover the low-level binding model would be pretty much impossible to implement on those architectures. From some binding points of view Fermi was even more advanced than Kepler; unfortunately that architecture had other limitations in its chosen binding model (some of them in common with Haswell). If you think 3 binding tiers is a mess, the initial proposal was for 5 binding tiers and 5 coupling tiers.

It should be noted that Fermi does not have support for bindless resources, and this shows in their OpenGL drivers too, as they lack support for the extension. If Apple had decided to write Metal 2 drivers for Fermi, it would most likely only support Argument Buffer Tier 1, since the maximum number of resources you can bind is 128 textures and 16 samplers, which coincidentally matches the limits of D3D12 Resource Binding Tier 1 ...

The difference between DX11 and DX12 resource binding is the capability to do GPU-side resource binding (bindless). In DX11, Microsoft had to account for the fact that older hardware didn't have the capability to do pointer indexing into resources directly, and instead had special-purpose registers, or 'slots' as one would call them, for the CPU to bind pointers to GPU resources. This is one of the reasons why texture state changes in the pipeline are expensive, and that is one of the problems bindless strives to solve ...

With D3D12 Resource Binding Tier 2+ and Metal 2 Argument Buffer Tier 2, everything changes about the way we do resource binding, because they expose the GPU's ability to do pointer indexing into resources ...

With Resource Binding Tier 1 and Argument Buffer Tier 1, you're stuck with CPU-side binding ... (Yuck!)
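And the tier you actually get is easy to check for yourself. A minimal C++ sketch querying it (illustrative; build against d3d12.lib):

Code:
#include <cstdio>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // D3D12_FEATURE_DATA_D3D12_OPTIONS reports, among other caps, which
    // resource binding tier the driver exposes for this adapter.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));

    switch (opts.ResourceBindingTier) {
    case D3D12_RESOURCE_BINDING_TIER_1:
        std::printf("Tier 1: CPU-side binding, tight descriptor limits\n");
        break;
    case D3D12_RESOURCE_BINDING_TIER_2:
        std::printf("Tier 2: unbounded SRV descriptor tables\n");
        break;
    case D3D12_RESOURCE_BINDING_TIER_3:
        std::printf("Tier 3: unbounded tables for all resource types\n");
        break;
    }
    return 0;
}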

On a side note, regarding the Terascale microarchitectures' issues with recent DX11 games: there are a multitude of factors to take into account, such as possible changes to the conformance tests to account for the fact that the D3D11 spec has changed, with the drivers never updated with this in mind after AMD dropped support for their older microarchitectures. (Don't know how that would be possible, since Terascale 2 & 3 cleared support for WDDM 1.3.) App logic can affect the way the driver behaves depending on how far the app pushes the limits of the spec, and that could be a probable cause. Another one that ties into driver behaviour is the app straight up violating the spec; some drivers will play along with it, while on other systems it could crash the driver or fail to boot. In that case the app writers are in the wrong for writing incorrect code, and AMD doesn't owe them anything, since they shipped a conformant DX11 driver implementation ...
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
With D3D12 Resource Binding Tier 2+ and Metal 2 Argument Buffer Tier 2, everything changes about the way we do resource binding, because they expose the GPU's ability to do pointer indexing into resources ...

So how does this help gaming? Does it actually increase performance, and if so, by how much? Or does it just take the CPU out of the equation, thus reducing power usage?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You are spreading FUD and garbage. If you look at actual gameplay benchmarks, you'd see that Nvidia cards do lose under DX12 compared to DX11. Even Hitman is a wash, where in actual gameplay it's at best equal performance under DX11 and DX12.

Yeah, Doom obviously runs better on Vulkan compared to OpenGL, but you gain like 10% with Nvidia and more like 30% with AMD.

Tom Clancy's The Division again runs slower under DX12 in actual gameplay. Once we get away from the fake internal benches, in actual gameplay the Nvidia cards operate worse under DX12 compared to DX11.

Seriously, get a clue. I actually own the hardware and many of these games, so I speak from firsthand experience. I've actually gained as much as 150% or more with Doom Vulkan over OpenGL in the game's most CPU-limited area, and the benchmarks will back up what I am saying.

Anyway, this is my last post addressing you because it's obvious you're just a troll.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,773
3,596
136
NVidia's driver isn't currently limited to four threads. Perhaps in the past it was, but now it can scale to a lot more threads. In fact, DX11 games like Ghost Recon Wildlands and Watch Dogs 2 scale all the way to 10 cores and 20 threads on a 6950x CPU with NVidia hardware. If anything, AMD's DX11 driver is the one that is limited.
You can't separate the multi-threaded capability of drivers from FPS numbers. For that matter, nobody but the game developer working closely with the IHVs can do this. There's no way to tell from reviews how much of the scaling is due to the game itself and how much is due to drivers. This is similar to the bogus "driver overhead" talk that these reviewers love to speculate on without having a clue. Besides, his point was about DX12 scaling, not DX11.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You can't separate the multi-threaded capability of drivers from FPS numbers. For that matter, nobody but the game developer working closely with the IHVs can do this. There's no way to tell from reviews how much of the scaling is due to the game itself and how much is due to drivers. This is similar to the bogus "driver overhead" talk that these reviewers love to speculate on without having a clue. Besides, his point was about DX12 scaling, not DX11.

Driver overhead is real. I get your point about not knowing what is due to drivers, and what is due to the game's inherent multithreading capabilities, but when you compare AMD and NVidia in highly threaded games like the original Watch Dogs and Watch Dogs 2, you definitely see a pattern. Observe:

The original Watch Dogs:

[image: Watch Dogs CPU scaling benchmark]


NVidia's drivers are able to extract performance from HT, whereas AMD's are not.


Now Watch Dogs 2:

[image: Watch Dogs 2 CPU scaling benchmark]

AMD is definitely CPU bound in this game, as there is a HUGE deficit between the GTX 1080 and the AMD cards at this CPU-limited resolution. Such a difference cannot be explained by the game's inherent multithreading; obviously the driver is doing some work as well. How NVidia does it is not understood, but they've possibly found a way to use their driver to add worker threads beyond the game's threading model and increase rendering output.

That's why I say that NVidia can mimic the functions of DX12 with their DX11 driver. With DX12, adding concurrent rendering threads is basically trivial compared to doing so in DX11, where there is much higher CPU overhead and increased probability of threads clashing. That's why DX11 games typically are limited to one or two rendering threads. I'm sure that whatever NVidia is doing in their DX11 driver is a closely guarded secret, because it has allowed them to gain a notable edge over AMD throughout the DX11 era.
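To illustrate why it's trivial in DX12: each thread records into its own command list backed by its own allocator, with no shared driver state to lock, and the lists are submitted together on one queue. A bare-bones C++ sketch of that pattern (empty lists, no draws; a real app would also fence before teardown):

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC qdesc = {};
    qdesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));

    // One allocator and one command list per worker thread.
    const int kThreads = 4;
    ComPtr<ID3D12CommandAllocator> allocators[kThreads];
    ComPtr<ID3D12GraphicsCommandList> lists[kThreads];
    for (int i = 0; i < kThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Each thread records into its own list independently; no locks needed.
    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i)
        workers.emplace_back([&, i] {
            // ... record draw calls on lists[i] here ...
            lists[i]->Close();
        });
    for (auto& w : workers) w.join();

    // Submit all recorded lists in one call on the main thread.
    ID3D12CommandList* raw[kThreads];
    for (int i = 0; i < kThreads; ++i) raw[i] = lists[i].Get();
    queue->ExecuteCommandLists(kThreads, raw);
    return 0;
}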
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
So how does this help gaming? Does it actually increase performance, and if so, by how much? Or does it just take the CPU out of the equation, thus reducing power usage?

It's a combination of these things ... :)

The functionality is a lot more interesting than the performance or CPU reduction overhead benefits ...

It's good for optimizing GPU based culling, deferred texturing, and ray tracing ...
 
  • Like
Reactions: Carfax83

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
After thinking about this some more, it's also possible that AMD's DX11 driver sets a limit on the number of concurrent rendering threads that a game can create, whereas NVidia has no limit.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
It's a combination of these things ... :)

The functionality is a lot more interesting than the performance or CPU reduction overhead benefits ...

It's good for optimizing GPU based culling, deferred texturing, and ray tracing ...

All this sounds good. I think I posted this before, but Assassin's Creed Origins might be the very first game to use the DX12 binding model; at least for shaders. I don't know if this covers anything else like textures:

[image]
 

tamz_msc

Diamond Member
Jan 5, 2017
3,773
3,596
136
Driver overhead is real. I get your point about not knowing what is due to drivers, and what is due to the game's inherent multithreading capabilities, but when you compare AMD and NVidia in highly threaded games like the original Watch Dogs and Watch Dogs 2, you definitely see a pattern. Observe:

The original Watch Dogs:

[image: Watch Dogs CPU scaling benchmark]


NVidia's drivers are able to extract performance from HT, whereas AMD's are not.


Now Watch Dogs 2:

[image: Watch Dogs 2 CPU scaling benchmark]

AMD is definitely CPU bound in this game, as there is a HUGE deficit between the GTX 1080 and the AMD cards at this CPU-limited resolution. Such a difference cannot be explained by the game's inherent multithreading; obviously the driver is doing some work as well. How NVidia does it is not understood, but they've possibly found a way to use their driver to add worker threads beyond the game's threading model and increase rendering output.

That's why I say that NVidia can mimic the functions of DX12 with their DX11 driver. With DX12, adding concurrent rendering threads is basically trivial compared to doing so in DX11, where there is much higher CPU overhead and increased probability of threads clashing. That's why DX11 games typically are limited to one or two rendering threads. I'm sure that whatever NVidia is doing in their DX11 driver is a closely guarded secret, because it has allowed them to gain a notable edge over AMD throughout the DX11 era.
The WD2 numbers clearly reflect the underutilization of the shader engines in Fiji; this isn't a driver issue but an inherent limitation of GCN. You need to do more work to extract performance out of the extra 5 CUs per SE on Fiji compared to Hawaii. I bet the R9 290X/390X would be in the same ballpark, perhaps even slightly ahead of Fiji. The same thing might be happening in Watch Dogs, so this may not be a threading issue. The usual underperformance of AMD GCN hardware in Ubisoft titles using AnvilNext is generally attributed to this, plus, as usual, the high amount of tessellation. This being a CPU-bound test, it is to be expected that GPU utilization would be low anyway.

Also Polaris has better normalized geometry performance, tessellation and color compression, among other things - that's why it can outperform Fiji.

It may very well be true that AMD's drivers have limited multithreaded capability given how GCN works, but I believe that these results primarily reflect the underutilization of resources on GCN.

GCN truly shines when you have a game that is 50% geometry and 50% compute, which is why I'm looking forward to Sebbi's upcoming game Claybook, to see what it can do with GCN.
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
That's why DX11 games typically are limited to one or two rendering threads. I'm sure that whatever NVidia is doing in their DX11 driver is a closely guarded secret, because it has allowed them to gain a notable edge over AMD throughout the DX11 era.
Huge big secret, don't tell anybody:
[images]
 

Guru

Senior member
May 5, 2017
830
361
106
Seriously, get a clue. I actually own the hardware and many of these games, so I speak from firsthand experience. I've actually gained as much as 150% or more with Doom Vulkan over OpenGL in the game's most CPU-limited area, and the benchmarks will back up what I am saying.

Anyway, this is my last post addressing you because it's obvious you're just a troll.
Get your tone in order, especially when you are wrong on so many levels.

I own a GTX 1060 6GB and a GTX 1080, but I'm not going from my personal numbers, which actually confirm what I'm saying; I go by tests. There is literally a new thread right now about a Hitman DX11 vs DX12 test in actual gameplay; in it you can see AMD's card is on average about 5 fps faster, while the Nvidia 1060 is about the same on average, with significantly lower frames in higher-complexity scenes in DX12.

Again, check duderandom88, testinggames, artis, etc. on YouTube for gameplay tests and you can see the 1060 6GB losing to the RX 580 8GB in pretty much all DX12 games, but, more importantly, Nvidia loses performance in most games when going to DX12 from DX11.

OpenGL was a crappy API; it has always performed terribly, usually 30%-50% worse than Microsoft's alternatives like DX11 or DX10. So of course you are going to gain performance going to Vulkan from OpenGL on Nvidia; you would gain performance in Vulkan over OpenGL even on integrated Intel graphics.

The facts are that Nvidia gains 1-4 fps in SE4 on AVERAGE, sometimes up to 8-9 fps more in DX12 but sometimes that much lower, while AMD gains 15-20 fps in DX12 over DX11.

Hitman is a wash in gameplay, with about the same performance on Nvidia as DX11. In every other game I've seen and tested, like The Division, Warhammer, Deus Ex, ROTTR, Battlefield 1, etc., Nvidia cards lose performance going to DX12 from DX11.