Pascal now supports DX12 resource binding tier 3 with latest drivers (384.76)

Carfax83

Diamond Member
Nov 1, 2010
6,509
1,296
126
It still hasn't been officially confirmed, but with the latest drivers, Pascal GPUs now report DX12 resource binding tier 3, whereas before they reported tier 2.

Here's a useful application written by DmitryKo from beyond3d forums which checks the DX12 features that your GPU supports, and conveniently outputs the results to text format. Unless you have a beyond3d account, you won't be able to download it from that link.

Here's a temporary download link, which will expire within a couple of days, for anyone who wants to try it.

My results for my Titan Xp. Would be interested to see what the results are for Maxwell v2, so anyone with a Maxwell card, please download the app and run it and post your results.

 

Guru

Senior member
May 5, 2017
830
361
106
Does this change anything in practice, or is it just another useless "feature" on paper?
 

Carfax83

Diamond Member
Does this change anything in practice, or is it just another useless "feature" on paper?
Games still have to be programmed to take advantage of the DX12 resource binding model, but apparently it's a big deal for GPU performance in DX12. The first game to use the DX12 binding model might be AC Origins, judging by this slide:



Here's an excellent video about DX12 resource binding model. It's very technical, but it drives the point home:

 

[DHT]Osiris

Lifer
Dec 15, 2015
11,407
8,779
146

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
Yay for better DX12 support, means more devs will go with DX12 over DX11 :)
 

Krteq

Senior member
May 22, 2015
989
670
136
Wow, how can nV change a hardware limitation by a driver? :D

Anyway, that is good news indeed. We will see more native DX12 games and maybe we will finally see a proper DX12 Async-Compute support for GeForce cards.
 

Carfax83

Diamond Member
I wonder what kind of push we're looking at as far as 'rubber meets the road' FPS/resource capacity gains are? 25% reduction in a_thing? +10% FPS? etc...
I think I remember Microsoft stating that DX12 can supposedly offer a 20% increase in GPU performance when you account for all the GPU efficiency improvements, due to asynchronous compute, shader model 6.0, and bindless resources.
 

Carfax83

Diamond Member
Wow, how can nV change a hardware limitation by a driver? :D
Obviously it wasn't a hardware limitation. It's quite possible that NVidia has been sitting on this as they've refined their DX12 driver over the years. It's not as though it was needed, since we haven't had any DX12 titles that use the new binding model anyway.

As I posted above, AC Origins might be the first game to use it.

We will see more native DX12 games and maybe we will finally see a proper DX12 Async-Compute support for GeForce cards.
Not this crap again :rolleyes: Asynchronous compute is properly supported by Pascal GPUs, and has been for quite some time. Maxwell supports asynchronous compute as well, but not dynamically, which means it can't use it effectively in games.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,107
231
116
Could actually be a driver reporting error since fully bindless is not exposed in Vulkan on the Nvidia side ...
 

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
It could also simply be reported via a software solution in the driver, with no actual performance benefit. It would really need testing by a developer.
 

Carfax83

Diamond Member
Could actually be a driver reporting error since fully bindless is not exposed in Vulkan on the Nvidia side ...
No I think these drivers are legit. Someone just posted this on guru3d, but it appears that the drivers have finally enabled DX12 on Fermi GPUs. So it's more likely that these drivers are the result of many years of cumulative DX12 optimization by NVidia and they finally decided to release them to the public.

 

ThatBuzzkiller

Golden Member
No I think these drivers are legit. Someone just posted this on guru3d, but it appears that the drivers have finally enabled DX12 on Fermi GPUs. So it's more likely that these drivers are the result of many years of cumulative DX12 optimization by NVidia and they finally decided to release them to the public.

How would the drivers not be legit, when GPU designers are the only ones able to access the firmware and the BIOS on Windows?

Whether the driver is giving out correct info or not is another matter entirely, much like how earlier Nvidia drivers reported resource heap tier 2 as supported before that was later fixed to show tier 1 ...

We won't know for sure that Maxwell and Pascal support fully bindless resources unless we do more testing, by creating a test app that enumerates a D3D12 device requiring resource binding tier 3 and binds some simple resources to a pipeline without crashing ...

If Kepler reports fully bindless as supported as well, then it's almost certainly a driver mistake ...
 

Bacon1

Diamond Member
My results for my Titan Xp. Would be interested to see what the results are for Maxwell v2, so anyone with a Maxwell card, please download the app and run it and post your results.
Well here is my Fury result for anyone that cares.

Do you happen to know where the tools that some reviewers use for bandwidth tests and such are on those forums? I've looked in the past and couldn't find them.

AMD Radeon ReLive 17.6.1

Direct3D 12 feature checker (March 2017) by DmitryKo
https://forum.beyond3d.com/posts/1840641/

Windows 10 version 1703 (build 15063)

ADAPTER 0
"AMD Radeon (TM) R9 Fury Series"
VEN_1002, DEV_7300, SUBSYS_E331174B, REV_CB
Dedicated video memory : 3221225472 bytes
Total video memory : 4294901760 bytes
Video driver version : 22.19.171.1
Maximum feature level : D3D_FEATURE_LEVEL_12_0 (0xc000)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_2 (2)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 1
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 0
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0: TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_5_1 (0x0051)
WaveOps : 1
WaveLaneCountMin : 64
WaveLaneCountMax : 64
TotalLaneCount : 3584
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
 

tamz_msc

Diamond Member
Jan 5, 2017
3,324
3,258
136
Why does @Bacon1 's result correctly show the number of SPs on his Fury, but @Carfax83 's result doesn't show the correct number of CUDA cores on his Titan Xp?
 

nvgpu

Senior member
Sep 12, 2014
629
202
81
The program is a 32-bit application, so it can only see up to 4 GB.

It needs to be compiled as a 64-bit application.
 

Bacon1

Diamond Member
The program is a 32-bit application, so it can only see up to 4 GB.

It needs to be compiled as a 64-bit application.
It's not trying to fill the memory; it's just querying the card's features.

Code:
pAdapter->GetDesc2(&AdapterDesc);

// These DXGI memory fields are SIZE_T, which is only 32 bits in a 32-bit build
printf("%s : %u %s\n", "Total video memory", AdapterDesc.DedicatedVideoMemory + AdapterDesc.DedicatedSystemMemory + AdapterDesc.SharedSystemMemory, " bytes");
 

Krteq

Senior member
Still, this whole "resource binding tier 3 on Pascal/Maxwell" thing seems to be a bug in the driver or in DmitryKo's application. Resource binding tiers are tied to HW capabilities; there is no way to magically adjust them.

Let's wait for DmitryKo or some nV representative to shed more light on this.
 

Bacon1

Diamond Member
Still, this whole "resource binding tier 3 on Pascal/Maxwell" thing seems to be a bug in the driver or in DmitryKo's application. Resource binding tiers are tied to HW capabilities; there is no way to magically adjust them.

Let's wait for DmitryKo or some nV representative to shed more light on this.
His app just calls the API and queries the driver for the features it reports:

Code:
pDevice->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &FeatureDataOptions, sizeof(D3D12_FEATURE_DATA_D3D12_OPTIONS));

printf("%s : %s%s (%i)\n", "TiledResourcesTier", "D3D12_TILED_RESOURCES_TIER_", pTier, FeatureDataOptions.TiledResourcesTier);
printf("%s : %s%s (%i)\n", "ResourceBindingTier", "D3D12_RESOURCE_BINDING_TIER_", pTier, FeatureDataOptions.ResourceBindingTier);

So it's not testing anything; it's just asking the driver what support it claims.
 

Spjut

Senior member
Apr 9, 2011
904
93
91
It'd be nice if Nvidia finally delivered on their statement regarding Fermi supporting DX12, but I wouldn't count on it. DxDiag has claimed before that DX12 was supported on my older PC with a GT 610, but it failed DmitryKo's test and couldn't run any DX12 stuff either.

I'm curious whether Nvidia will support Shader Model 6.0 on Kepler, and whether Maxwell/Pascal will support 6.1. It seems like all of GCN will get SM 6.1.
 

zlatan

Senior member
Mar 15, 2011
580
291
136
A Guru3d member with a GTX 980 has now confirmed that Maxwell v2 now supports DX12 resource binding tier 3.

So both Pascal and Maxwell now support DX12 resource binding tier 3. I wonder why it took so long for them to implement it in the drivers?
Because Microsoft finalized the resource binding specs too late. There was a lot of talk about what specs would be useful for the future, but the IHVs had to have an implementation ready for the release of the D3D12 API. NVIDIA just wasn't able to react to the final changes, and this opened the door for some emulation. Their hardware is still not fully bindless, but they can emulate TIER_3 support with some overhead on the CPU side. To do this, though, they needed to write an emulation layer between the driver and the hardware, and that required a lot of changes to their original implementation.
 

zlatan

Senior member
It'd be nice if Nvidia finally delivered on their statement regarding Fermi supporting DX12, but I wouldn't count on it.
I don't think they will care about it. It's too little, too late. But now we can't say that they didn't keep their promise.

I'm curious whether Nvidia will support Shader Model 6.0 on Kepler, and whether Maxwell/Pascal will support 6.1. It seems like all of GCN will get SM 6.1.
SM6.1 is not a big change compared to SM6.0. It's just some intrinsic instructions to read the barycentric coordinates from the hardware rasterizer. It will require a lot of work on the software side for NV and Intel, but the hardware can support it. AMD already does manual interpolation, so they can add this feature very easily.
 
