Pascal now supports DX12 resource binding tier 3 with latest drivers (384.76)


Carfax83

Diamond Member
Nov 1, 2010
6,805
1,494
136
http://nvidia.custhelp.com/app/answers/detail/a_id/4522

Can anyone install the 384.80 hotfix driver and see if it still reports Tier 3 for Pascal and Maxwell?

https://forum.beyond3d.com/threads/direct3d-feature-levels-discussion.56575/page-9#post-1840641

The Feature Checker was updated to a July 2017 version, but I can't download it because the B3D forums don't allow downloads unless you have an account with a certain number of posts.
I'm using the May 2017 version, because I don't have a membership there either, but I can confirm that 384.80 still says resource binding tier 3 for my Titan Xp.

Seems less and less like a mistake, and more and more like it was deliberate.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91

Alessio1989

Member
Jun 6, 2015
26
2
71
Carfax83 said:
I'm using the May 2017 version, because I don't have a membership there either, but I can confirm that 384.80 still says resource binding tier 3 for my Titan Xp.

Seems less and less like a mistake, and more and more like it was deliberate.
What about Kepler GPUs? I am pretty sure the limits on the number of UAV and CBV bindings can be bypassed (I'm not sure how big the performance impact is, especially for CBVs), but at least Kepler should not allow empty slots in the root signature (which would preclude Tier 3 resource binding), and empty slots cannot easily be replaced with null descriptors without runtime support (which D3D12 should not provide). If Kepler GPUs also report Tier 3 resource binding, I would bet on a driver bug (at least for Kepler).
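For context on the empty-slot point: the rules for when a descriptor-table slot must actually hold a valid descriptor differ per tier. This is my own illustrative summary in Python of the rules as Microsoft's D3D12 hardware-tiers documentation describes them, not anything from a driver or SDK:

```python
# Illustrative summary of the D3D12 resource binding tier rules (per
# Microsoft's "Hardware Tiers" documentation): when must a descriptor-heap
# slot covered by a bound descriptor table contain a valid descriptor?
def must_be_populated(tier: int, kind: str, shader_accesses_it: bool) -> bool:
    if tier == 1:
        # Tier 1: every slot covered by a bound descriptor table must be
        # populated, even if the shader never reads it.
        return True
    if tier == 2:
        # Tier 2: CBV and UAV slots must always be populated; SRV slots only
        # need valid descriptors if the shader actually accesses them.
        return kind in ("CBV", "UAV") or shader_accesses_it
    # Tier 3: only the descriptors the shader actually accesses must be valid,
    # so "empty" slots are allowed (the behavior in question for Kepler).
    return shader_accesses_it

# On Tier 2 hardware an unreferenced SRV slot may stay empty...
assert must_be_populated(2, "SRV", shader_accesses_it=False) is False
# ...but an unreferenced UAV slot may not.
assert must_be_populated(2, "UAV", shader_accesses_it=False) is True
```

So hardware that genuinely cannot tolerate unpopulated slots should not be reporting Tier 3.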
 

Spjut

Senior member
Apr 9, 2011
909
98
91
Alessio1989 said:
What about Kepler GPUs? I am pretty sure the limits on the number of UAV and CBV bindings can be bypassed (I'm not sure how big the performance impact is, especially for CBVs), but at least Kepler should not allow empty slots in the root signature (which would preclude Tier 3 resource binding), and empty slots cannot easily be replaced with null descriptors without runtime support (which D3D12 should not provide). If Kepler GPUs also report Tier 3 resource binding, I would bet on a driver bug (at least for Kepler).
The user MDolenc tested Kepler on page 41. Says Resource Binding Tier 2. Also says Kepler has experimental support for Shader Model 6.0.
 

pepone1234

Member
Jun 20, 2014
36
8
81
I have a laptop with an NVIDIA 850M (Maxwell 1) and it is reporting Tier 3. I find it strange because I thought NVIDIA designed Maxwell v1 to have feature parity with Kepler, and Kepler is reporting Tier 2.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
We know which team you root for, but really, did you just call out the DX12 performance of a GPU that is over 7 years old? How do 7-year-old AMD cards perform in DX12? That's the HD 5xxx generation, for reference.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
Phynaz said:
We know which team you root for, but really, did you just call out the DX12 performance of a GPU that is over 7 years old? How do 7-year-old AMD cards perform in DX12? That's the HD 5xxx generation, for reference.
Sorry for providing facts and evidence, I realize you may dislike them but I'll continue to provide them anyway. Why are you going off topic?
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
You didn't answer the question, how do the HD 5xxx series do in DX12?
 

Alessio1989

Member
Jun 6, 2015
26
2
71
The big issue with Fermi DX12 support is that it comes too late. At the DX12 launch with Windows 10 it was the most widespread architecture, but now it is a deprecated architecture from a development point of view, since NVIDIA dropped Fermi support from its profiling and debugging tool Nsight.
pepone1234 said:
I have a laptop with an NVIDIA 850M (Maxwell 1) and it is reporting Tier 3. I find it strange because I thought NVIDIA designed Maxwell v1 to have feature parity with Kepler, and Kepler is reporting Tier 2.
Nope, only the rendering features were the same as Kepler; the internal design was practically identical to Maxwell 2.
 

SPBHM

Diamond Member
Sep 12, 2012
5,032
387
126
Alessio1989 said:
The big issue with Fermi DX12 support is that it comes too late. At the DX12 launch with Windows 10 it was the most widespread architecture, but now it is a deprecated architecture from a development point of view, since NVIDIA dropped Fermi support from its profiling and debugging tool Nsight.
Still beats no real support since 2015 and being stuck on WDDM 1.3, even for architectures newer than Fermi like the HD 6900/Trinity/Richland.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,366
3,299
136
Pre-GCN cards are EOL - AMD made it pretty clear, so of course you wouldn't get DX12 on HD 5000 and HD 6000.
 

SPBHM

Diamond Member
Sep 12, 2012
5,032
387
126
tamz_msc said:
Pre-GCN cards are EOL - AMD made it pretty clear, so of course you wouldn't get DX12 on HD 5000 and HD 6000.
To be honest I don't find it all that relevant; it's just the contrast that is... interesting. AMD stopped support completely, while Nvidia is adding new features for products from the same era two years later.
What I miss on the DX11 Radeons is at least getting some major bug fixes. If their "legacy support" actually meant support, like a driver with bug fixes every six months or so, that would be great and more than enough for me not to consider it a major disadvantage. But the current situation makes me less confident about buying newer AMD products. And it's not new: Nvidia also supported its DX10 cards for a couple more years, and the same goes for the DX9c cards.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,366
3,299
136
SPBHM said:
To be honest I don't find it all that relevant; it's just the contrast that is... interesting. AMD stopped support completely, while Nvidia is adding new features for products from the same era two years later.
What I miss on the DX11 Radeons is at least getting some major bug fixes. If their "legacy support" actually meant support, like a driver with bug fixes every six months or so, that would be great and more than enough for me not to consider it a major disadvantage. But the current situation makes me less confident about buying newer AMD products. And it's not new: Nvidia also supported its DX10 cards for a couple more years, and the same goes for the DX9c cards.
AMD made it clear that they are going to focus on GCN when they moved these cards to legacy support. The only uncertainty arises from how long GCN will be around in the future.

Pre-GCN is to AMD what Windows XP, Vista and 8(and soon 8.1) is to Microsoft.
 

Peicy

Member
Feb 19, 2017
28
14
81
Bacon1 said:
Sorry for providing facts and evidence, I realize you may dislike them but I'll continue to provide them anyway. Why are you going off topic?
You know damn well what Phynaz means.
The way you provide "facts" here is meaningless without context. Any GPU that's so old will not perform well in a benchmark designed to stress modern hardware.
I find it quite surprising that Nvidia even invested the time and money to get this done. It's quite commendable.
 

Spjut

Senior member
Apr 9, 2011
909
98
91
SPBHM said:
To be honest I don't find it all that relevant; it's just the contrast that is... interesting. AMD stopped support completely, while Nvidia is adding new features for products from the same era two years later.
What I miss on the DX11 Radeons is at least getting some major bug fixes. If their "legacy support" actually meant support, like a driver with bug fixes every six months or so, that would be great and more than enough for me not to consider it a major disadvantage. But the current situation makes me less confident about buying newer AMD products. And it's not new: Nvidia also supported its DX10 cards for a couple more years, and the same goes for the DX9c cards.
I totally agree about Nvidia's superior long-term driver support, but I think Nvidia got away very easily with DX12 for Fermi being delayed for so long. All the major tech sites praised Nvidia for being the only vendor going to support DX12 on GPUs dating back to 2010, but barely anyone called them out when Fermi support was delayed for so long that everyone believed it had been canceled.

I of course think it's a relief that Nvidia finally released the DX12 drivers, but the support being delayed for almost two years without a single word from Nvidia is not really my cup of tea.
 

Guru

Senior member
May 5, 2017
830
361
106
I don't really know how to take this concept of Nvidia losing in DX12, when they have 4 cards which are faster than anything AMD has in DX12 (Titan Xp, 1080ti, 1080 and 1070). While the 1060 and lower doesn't do as well in comparison to AMD's offerings, they still have several cards that perform better. It is clear that AMD's offering does better in DX12 in relation to DX11, but they aren't winning in DX12 as far as I can see.
That is not Nvidia winning, that is them not having competition. All of their 1070/1080/Ti cards lose performance under DX12 and Vulkan. I think Sniper Elite is the only game where, in actual gameplay, the Nvidia cards perform mildly better under DX12 than under DX11.

Even in a game like ROTTR, which was Nvidia's big push into DX12, their cards lose performance in actual gameplay under DX12.

At best they gain maybe 1-3 fps on average in actual gameplay, as in SE4, stay about the same in Hitman, and lose performance in all other titles.
 

Alessio1989

Member
Jun 6, 2015
26
2
71
SPBHM said:
Still beats no real support since 2015 and being stuck on WDDM 1.3, even for architectures newer than Fermi like the HD 6900/Trinity/Richland.
AMD's VLIW GPUs (Terascale) were never meant to have WDDM 2.x drivers, and they were never advertised as supported: they lacked various binding features, like indexable resources, and moreover the low-level binding model would be practically impossible to apply to those architectures. From some binding points of view Fermi was even more advanced than Kepler; unfortunately, that architecture had other limitations in the chosen binding model (some of them in common with Haswell). If you think that three binding tiers is a mess, the initial proposal was for five binding tiers and five coupling tiers.
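As a reference for the three tiers that shipped, the headline binding limits can be sketched as data. The values below are my recollection of Microsoft's D3D12 hardware-tier table (feature level 11.1+ hardware; UAV counts are across all stages), with "full heap" meaning bounded only by the descriptor heap size:

```python
FULL_HEAP = "full heap"  # limited only by the descriptor heap size

# Headline per-tier binding limits, summarized from Microsoft's D3D12
# hardware-tier table (illustrative; FL 11.0 Tier 1 hardware is limited
# to 8 UAVs rather than 64).
BINDING_TIERS = {
    1: {"cbv_per_stage": 14, "srv_per_stage": 128,
        "uav_total": 64, "samplers_per_stage": 16},
    2: {"cbv_per_stage": 14, "srv_per_stage": FULL_HEAP,
        "uav_total": 64, "samplers_per_stage": FULL_HEAP},
    3: {"cbv_per_stage": FULL_HEAP, "srv_per_stage": FULL_HEAP,
        "uav_total": FULL_HEAP, "samplers_per_stage": FULL_HEAP},
}

# Tier 3 is exactly "everything is limited only by the heap":
assert all(v == FULL_HEAP for v in BINDING_TIERS[3].values())
# Tiers 1 and 2 both keep the 14-CBVs-per-stage limit mentioned above:
assert BINDING_TIERS[1]["cbv_per_stage"] == BINDING_TIERS[2]["cbv_per_stage"] == 14
```

This is why the CBV and UAV limits are the interesting part of a Tier 2 part claiming Tier 3: those are the limits that have to disappear.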
 

Spjut

Senior member
Apr 9, 2011
909
98
91
AMD's VLIW GPUs not being able to support WDDM 2.x is one thing, but those GPUs can't even play 2015+ DX11 games anymore due to driver issues. They're able to start some games but have visual bugs (but OK performance), or they're unable to start others due to failing a driver check.

AMD's drivers are extra bad for laptops with hybrid graphics using different architectures. It's a PITA using a laptop with both Terascale 1 and Terascale 2/3, as well as Terascale 2/3 with GCN.
 

Tup3x

Senior member
Dec 31, 2016
751
656
136
Spjut said:
I totally agree about Nvidia's superior long-term driver support, but I think Nvidia got away very easily with DX12 for Fermi being delayed for so long. All the major tech sites praised Nvidia for being the only vendor going to support DX12 on GPUs dating back to 2010, but barely anyone called them out when Fermi support was delayed for so long that everyone believed it had been canceled.

I of course think it's a relief that Nvidia finally released the DX12 drivers, but the support being delayed for almost two years without a single word from Nvidia is not really my cup of tea.
Pure DX12 games are only starting to show up now, though. Fermi DX12 support only makes sense for those games. I'd actually like to know how well Fermi runs Forza Apex, for example.
 

Alessio1989

Member
Jun 6, 2015
26
2
71
Spjut said:
AMD's VLIW GPUs not being able to support WDDM 2.x is one thing, but those GPUs can't even play 2015+ DX11 games anymore due to driver issues. They're able to start some games but have visual bugs (but OK performance), or they're unable to start others due to failing a driver check.

AMD's drivers are extra bad for laptops with hybrid graphics using different architectures. It's a PITA using a laptop with both Terascale 1 and Terascale 2/3, as well as Terascale 2/3 with GCN.
Those GPUs are no longer supported, which means games released after Terascale support ended are not tested on those cards and not guaranteed to work correctly with them. A 2017 game requiring DirectX 11 doesn't automatically mean "any DirectX 11 graphics card".

On hybrid (VLIW+GCN) laptops you should force the game to run on the GCN GPU. If hybrid CrossFire is enabled, it must be disabled.
 

Dribble

Platinum Member
Aug 9, 2005
2,020
554
136
Still got a GTX 570 in use in my sons machine. Still works fine for anything he wants to play including all the latest releases - never has driver problems or compatibility issues, it just works. These old cards certainly aren't past it yet, so if they keep updating it that suits me just fine.
 

psolord

Golden Member
Sep 16, 2009
1,577
896
136
AMD made it clear that they are going to focus on GCN when they moved these cards to legacy support. The only uncertainty arises from how long GCN will be around in the future.

Pre-GCN is to AMD what Windows XP, Vista and 8(and soon 8.1) is to Microsoft.
The problem here is that Microsoft is competing with itself. Plus it gave a free upgrade option to Windows 10.

AMD is competing with Nvidia.

For me it is completely ridiculous that an inferior GTX 460 enjoys better support than the 5850, and as a consumer I too will think twice before going AMD again.

So what then, a few months after Vega launches they will place GCN cards in legacy support?
 

tamz_msc

Diamond Member
Jan 5, 2017
3,366
3,299
136
psolord said:
The problem here is that Microsoft is competing with itself. Plus it gave a free upgrade option to Windows 10.

AMD is competing with Nvidia.

For me it is completely ridiculous that an inferior GTX 460 enjoys better support than the 5850, and as a consumer I too will think twice before going AMD again.

So what then, a few months after Vega launches they will place GCN cards in legacy support?
Vega is still GCN.

And an inferior GTX 460 with driver support is still inferior to the HD 5850, unless you are interested in running DX12 stuff on it for academic reasons.

Might as well ask why NVIDIA doesn't allow the GeForce drivers to install in a VM for GPU passthrough, which would be far more useful than writing DX12 drivers for 7-8 year old hardware.
 

Spjut

Senior member
Apr 9, 2011
909
98
91
Alessio1989 said:
Those GPUs are no longer supported, which means games released after Terascale support ended are not tested on those cards and not guaranteed to work correctly with them. A 2017 game requiring DirectX 11 doesn't automatically mean "any DirectX 11 graphics card".

On hybrid (VLIW+GCN) laptops you should force the game to run on the GCN GPU. If hybrid CrossFire is enabled, it must be disabled.
And that ended support is mostly what people are reacting to. AMD stopped support altogether for those GPUs back in 2015 (except for a beta Crimson driver in January 2016), whereas Fermi has gotten drivers supporting WDDM 2.2 and DX12. Nvidia kept adding features to their old GPUs and AMD left theirs to rot. The difference is staggering.
Most games don't officially support Fermi anymore either, but try a modern game and Fermi is very likely to have fewer issues than any Terascale GPU.
 
