D3D12 articles - so many misunderstandings and miscommunications


zlatan

Senior member
Mar 15, 2011
Zlatan, regarding the GCN revisions, are these small enough to make optimizations for the whole graphics architecture easier, or do the revision changes from 1.0 to 1.1 and now to 1.2 require new optimized code?

I ask this because AMD is in the unique position of having three iterations of GCN in the same lineup, and having such diversity could hamper software optimization (both for the driver team and the game engine developers). Tonga's Mantle performance comes to mind; it seemed pretty inconsistent compared to the 1.1 or 1.0 offerings.

The main issue here is memory management. On D3D11 the driver does it for every GPU, but with D3D12 it must be managed by the application itself, so every GPU revision needs optimized code. This means the application must know how to allocate buffers optimally for different GPUs, with different amounts of device memory and different target resolutions. If the buffer allocations are not optimal for some GPUs, then performance will suffer.
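
For illustration only (a hedged sketch, not zlatan's code): this is the kind of allocation decision D3D12 hands to the application. It queries the adapter's local memory budget through DXGI and commits a buffer sized against it; the 25% streaming budget and the function name are invented for the example.

```cpp
// Illustrative sketch: with D3D12 the application, not the driver, decides how
// much device memory to commit and where buffers live. The 25% "streaming
// budget" below is an invented heuristic; a real engine would tune this per GPU.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> AllocateStreamingBuffer(ID3D12Device* device,
                                               IDXGIAdapter3* adapter)
{
    // Ask DXGI how much local (device) memory the app can reasonably use right now.
    DXGI_QUERY_VIDEO_MEMORY_INFO memInfo = {};
    if (FAILED(adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL,
                                             &memInfo)))
        return nullptr;

    // Hypothetical policy: spend a quarter of the current budget on streamed data.
    const UINT64 bufferSize = memInfo.Budget / 4;

    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;        // device-local memory

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = bufferSize;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&buffer));
    return buffer;                                   // null on failure
}
```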
 

PPB

Golden Member
Jul 5, 2013
Considering what you have explained, what would be the swiftest approach for delivering optimized code for each uarch that will be DX12 capable in the new wave of game engines/games?

Would this make it a must for GPU vendors to be quick with the NDA lifting and the release of the tech specs of their new GPUs (so game devs and MS's API division can quickly adjust to the new SKUs)? From what you have said, GCN 1.2 and Maxwell 2 are still under wraps in that respect (and ironically, they are the most advanced and probably the most capable of supporting most of the DX12 feature set).
 

monstercameron

Diamond Member
Feb 12, 2013
Before you leave again, what are your thoughts on the feasibility of asymmetric CrossFire, a la 290X & 290 or 290X & APU?

AMD had a demo of GrassFX that showed grass physics running on an APU while the rendering was done on a dGPU.

Does D3D12 make doing that easier? And why do you think devs would or wouldn't want to implement such a system?
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
Don't think so.

If you ask me I think Maxwell and GCN are Tier 2.

I want information.... and here there is nothing.

Good job on showing your lack of knowledge ... ;)

If you'd taken a look at the programmer's guide on page 6, you'd clearly see that the resource descriptors are held in the SGPRs and are manually fetched by the scalar ALUs ...

This means that GCN does not have constant registers for the CPU to assist in resource binding; therefore it is heralded as being truly "bindless", since only a pointer needs to be passed to reference a large number of resources, and this is supported for every resource type too ...

GCN is clearly TIER3 no matter how you spin it ...
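
For what it's worth, the tier a driver actually exposes can be queried at runtime rather than argued from ISA documents. A minimal sketch using the public D3D12 feature-check API (illustrative, not something from the programmer's guide being cited):

```cpp
// Illustrative sketch: query the resource binding tier the D3D12 driver reports,
// instead of inferring it from ISA documentation.
#include <d3d12.h>
#include <cstdio>

void PrintResourceBindingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return;

    switch (options.ResourceBindingTier)
    {
    case D3D12_RESOURCE_BINDING_TIER_1: std::printf("Resource binding tier 1\n"); break;
    case D3D12_RESOURCE_BINDING_TIER_2: std::printf("Resource binding tier 2\n"); break;
    case D3D12_RESOURCE_BINDING_TIER_3: std::printf("Resource binding tier 3\n"); break;
    }
}
```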

What fib are you going to tell us next ?:twisted:
 

Headfoot

Diamond Member
Feb 28, 2008
Software/Emulated will always suck horribly compared to hardware.

Bare assertion, no source, no analysis. Cool story, bro. PROTIP: partial hardware support is a thing. It's not binary. HINT: Look at how quicksync works. PROTIP2: partial hardware + software is sometimes preferable to pure hardware because the software side can be flexible...
 

AnandThenMan

Diamond Member
Nov 11, 2004
Where is the Maxwell SRV information?

This thread is very confusing, with obvious AMD warriors.

Clear information or go home.

Why do all the DX12 tests use a GTX 980 instead of an AMD R9?

Fermi and Kepler support DX 11.1 (Tier 1 - DirectX 12)
Maxwell supports DX 11.3 (Tier 3 - DirectX 12)
AMD GCN supports DX 11.2 (Tier 2 - DirectX 12)
Don't think so.

If you ask me I think Maxwell and GCN are Tier 2.
So which is it?
 

zlatan

Senior member
Mar 15, 2011
Considering what you have explained, what would be the swiftest approach for delivering optimized code for each uarch that will be DX12 capable in the new wave of game engines/games?

Simply write an optimized path for each IP. Sure, the codebase will be larger, but the explicit control will allow us to debug the full source code, and this is a huge advantage.

Would this make it a must for GPU vendors to be quick with the NDA lifting and the release of the tech specs of their new GPUs (so game devs and MS's API division can quickly adjust to the new SKUs)? From what you have said, GCN 1.2 and Maxwell 2 are still under wraps in that respect (and ironically, they are the most advanced and probably the most capable of supporting most of the DX12 feature set).
Sure, it will be easier if I know how to write optimized code for each IP.
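
To make "an optimized path for each IP" concrete, here is a hypothetical sketch of how an engine might pick an architecture-specific path from the DXGI adapter description. The path names are invented; the vendor IDs are the standard PCI ones.

```cpp
// Hypothetical sketch: choose a per-architecture code path from the DXGI adapter
// description. The enum values are invented; the vendor IDs are standard PCI IDs.
#include <dxgi.h>

enum class GpuPath { GenericD3D12, GcnOptimized, MaxwellOptimized };

GpuPath SelectGpuPath(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);

    switch (desc.VendorId)
    {
    case 0x1002: return GpuPath::GcnOptimized;      // AMD
    case 0x10DE: return GpuPath::MaxwellOptimized;  // NVIDIA
    default:     return GpuPath::GenericD3D12;      // Intel, WARP, anything else
    }
}
```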

Before you leave again, what are your thoughts on the feasibility of asymmetric CrossFire, a la 290X & 290 or 290X & APU?

AMD had a demo of GrassFX that showed grass physics running on an APU while the rendering was done on a dGPU.

Does D3D12 make doing that easier? And why do you think devs would or wouldn't want to implement such a system?
GrassFX is not an async CrossFire tech. It is simply a GPGPU concept where the iGPU can do the simulation. It is possible on D3D12, but OpenCL is more suitable for this kind of workload.

CrossFire and SLI are not defined in D3D12. The application can detect all GPUs in the PC, and yes, it is possible to use them if an algorithm is built around them, but it won't work automatically. You can even throw away the bridges; there is no need for them with the new APIs. Two or more GeForces will also work on motherboards without SLI certification.
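
As a rough sketch of what "the application can detect all GPUs" looks like in code (illustrative only, not an implementation from the thread): enumerate every adapter and create a D3D12 device on each one; how work is then split across them is entirely up to the engine.

```cpp
// Illustrative sketch: enumerate every adapter in the system and create a D3D12
// device on each one. How work is split between them is entirely up to the
// application; nothing like CrossFire/SLI happens automatically.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDeviceForEveryGpu()
{
    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;                       // skip the software (WARP) adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);      // iGPU, dGPU, second dGPU, ...
    }
    return devices;
}
```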
 

.vodka

Golden Member
Dec 5, 2014
DX12 sure sounds like a *huge* step forward in every sense, for everyone. Excited to see what it can enable in the future!

Thank you very much for all the information so far, much appreciated.
 

stahlhart

Super Moderator Graphics Cards
Dec 21, 2010
Get the tempers in here under control, now. Either debate respectfully, or agree to disagree and move on.
-- stahlhart
 

RussianSensation

Elite Member
Sep 5, 2003
Many major engines will have DX12 support out around the time of release. Thus they will be able to release DX12 versions of their games at that point. So you can expect to see them by the end of this year.

I would love for this to be true but I am not aware of a single DX12 game announced so far and we are just 10 months away from the end of 2015. I am not buying your theory that in 10 months from now we will see a lot of AAA DX12 games. Buying today under the assumption that 960/970/980 will run DX12 games better vs. 290/290X sounds like false hope. All of those cards will be too slow.

From a practical point of view DX12 support for every current card out now is irrelevant because the best GPUs we have are just mid-range. Next gen DX12 games will mop the floor with them. Today a 970 is slower than a 290X at 1440p and above, and it is held back by a 3.5GB VRAM bottleneck. That means it's unlikely to outlast a 290X. If, as you say, next-gen game engines will be DX12, those engines will have even more advanced particles, textures, polygons, tessellation, shadows, etc. If these effects increase in complexity by 50-100%, those games will literally mop the floor with today's cards. Not only that, but the 980 itself is not much faster than a 290X/970 yet costs a whopping $220-300 more. That means a 290/290X/970 owner could just sell his card 12 months from now and put the $220-300 savings + resale value towards a next-gen $500 DX12 card.

We also know from following GPU history that a GPU with a higher DX designation was never good enough to play next gen DX games of that designation. While DX12 support sounds nice on paper today, it's not practical for those of us who follow the GPU market closely. By the time a good DX12 game actually launches, a $350 GPU will beat a GTX980. My post is my opinion but what I stated has been true for every single DX generation since MS started hyping it up.

In less than 2 quarters we should have GM200/390X that will officially cement 290X/980 as mid-range cards. So really I don't see what the big deal is about "full" or "partial" DX12 support. Besides, no GPU today can claim full DX12 support unless MS themselves confirms it and what it actually means for games. Until then, everything is just marketing from NV/AMD.
 

Noctifer616

Senior member
Nov 5, 2013
I would love for this to be true but I am not aware of a single DX12 game announced so far and we are just 10 months away from the end of 2015.

Fable Legends was announced at the last Win 10 event from MS. Outside of that, there is engine support in Unity and Unreal Engine, I believe.
 

Gundark

Member
May 1, 2011
Unity doesn't support DX12 yet. It's still in beta, at least for v5, and may get delayed to a point release. They support Apple's Metal as of Unity 4.6.3, and the game Republique has been upgraded to it.
 

Enigmoid

Platinum Member
Sep 27, 2012
I would love for this to be true but I am not aware of a single DX12 game announced so far and we are just 10 months away from the end of 2015. I am not buying your theory that in 10 months from now we will see a lot of AAA DX12 games. Buying today under the assumption that 960/970/980 will run DX12 games better vs. 290/290X sounds like false hope. All of those cards will be too slow.

From a practical point of view DX12 support for every current card out now is irrelevant because the best GPUs we have are just mid-range. Next gen DX12 games will mop the floor with them. Today a 970 is slower than a 290X at 1440p and above, and it is held back by a 3.5GB VRAM bottleneck. That means it's unlikely to outlast a 290X. If, as you say, next-gen game engines will be DX12, those engines will have even more advanced particles, textures, polygons, tessellation, shadows, etc. If these effects increase in complexity by 50-100%, those games will literally mop the floor with today's cards. Not only that, but the 980 itself is not much faster than a 290X/970 yet costs a whopping $220-300 more. That means a 290/290X/970 owner could just sell his card 12 months from now and put the $220-300 savings + resale value towards a next-gen $500 DX12 card.

We also know from following GPU history that a GPU with a higher DX designation was never good enough to play next gen DX games of that designation. While DX12 support sounds nice on paper today, it's not practical for those of us who follow the GPU market closely. By the time a good DX12 game actually launches, a $350 GPU will beat a GTX980. My post is my opinion but what I stated has been true for every single DX generation since MS started hyping it up.

In less than 2 quarters we should have GM200/390X that will officially cement 290X/980 as mid-range cards. So really I don't see what the big deal is about "full" or "partial" DX12 support. Besides, no GPU today can claim full DX12 support unless MS themselves confirms it and what it actually means for games. Until then, everything is just marketing from NV/AMD.

Right now we are seeing a wave of unoptimized console ports. Devs just got a ton of power with the upgrade to the new consoles and are releasing tons of unoptimized ports. Hardware demands are higher than ever, even when normalized to IQ. New games require an inordinate amount of GPU power relative to the IQ they produce.

Newer games are likely to move the baseline forward (like I talked about earlier, with the base effects of a game coming relatively cheap, since they are programmed into the engine, and special effects being much more expensive for little IQ gain - Crysis 3 on a mixture of medium/low looks as good as or better than many games on high and performs much better; compare with AC:U or FC4), resulting in games that actually increase the IQ without stupid hardware demands. Newer cards will likely have a longer lifetime than many of the recent cards.

We saw a jump in required VRAM because of the consoles (mainly unoptimized textures). However, we seem to have reached or almost reached the VRAM the consoles have available; this trend of VRAM inflation is unlikely to continue.
 

Spjut

Senior member
Apr 9, 2011
I get what RussianSensation is saying, but I think it's a bit too harsh.

We've already seen what great results BF4 (a one-year-old game) gets with Mantle, both for weaker CPUs + mid-range/high-end graphics cards, and for powerful CPUs + CrossFire with high-end cards.

And even when today's cards are considered old/slow, and perhaps aren't "full" DX12, a more powerful and feature rich card is obviously better off than a weaker and less feature rich card. IIRC, benchmarks in late DX9 games showed AMD's X1900 series outperforming Nvidia's 7900 series. The HD 5770 was generally comparable to the HD 4870 but pulled ahead by a fair margin in DX11 titles.
I guess many people on this forum upgrade quite often though.
 

AnandThenMan

Diamond Member
Nov 11, 2004
Right now we are seeing a wave of unoptimized console ports. Devs just got a ton of power with the upgrade to the new consoles and are releasing tons of unoptimized ports. Hardware demands are higher than ever, even when normalized to IQ. New games require an inordinate amount of GPU power relative to the IQ they produce.
The root cause is actually the consoles themselves; right now the console games are DX9-ish stuff with little to no optimization for the hardware. Once that optimization happens, we'll see much better games ported to the PC side.
 

blastingcap

Diamond Member
Sep 16, 2010
I'm going to respond to Win 10 + DX12 by downgrading. I suspect current hardware won't be as DX12-friendly as companies claim... that they might have to do things in emulators or something, plus they are the last of the 28nm cards.

So I sold my R9 290 and am probably going to get an energy-efficient GTX 750 Ti. Most of the games I play are older, so a 750 Ti (overclocked 20% to almost GTX 660 stock speeds) on a 1080p screen will still be decent, especially if I don't try to max out settings. If 16nm GPUs are good, then I'll upgrade, and the 750 Ti will be given away, sold off, or demoted to HTPC or PhysX duty.

I did the same thing several years ago when I sold my 5850 and downgraded to a 6850 because 32nm TSMC got canceled (just like how 20nm GPUs got delayed/canceled). When 28nm did come around, I got a 7970, and I had no regrets about skipping the HD6970.
 

DeathReborn

Platinum Member
Oct 11, 2005
Good job on showing your lack of knowledge ... ;)

If you'd taken a look at the programmer's guide on page 6, you'd clearly see that the resource descriptors are held in the SGPRs and are manually fetched by the scalar ALUs ...

This means that GCN does not have constant registers for the CPU to assist in resource binding; therefore it is heralded as being truly "bindless", since only a pointer needs to be passed to reference a large number of resources, and this is supported for every resource type too ...

GCN is clearly TIER3 no matter how you spin it ...

What fib are you going to tell us next ?:twisted:

Has Microsoft finalised the tiers, and exactly how much of a tier must you support to fall under that tier?

In my view this thread is just adding to the confusion and misinformation; while well intentioned, it's not helping much.
 

shady28

Platinum Member
Apr 11, 2004
...
From a practical point of view DX12 support for every current card out now is irrelevant because the best GPUs we have are just mid-range. Next gen DX12 games will mop the floor with them.

There is some anecdotal evidence out there that this will be true of DX12 cards, and moreover that DX12 will have a much, much greater impact on performance than previous DX releases.

If true, I think DX12 will be a major driver to Win 10 adoption and could bring in a major flurry of GPU upgrades.

Now might not be a great time to buy a new GPU...


"Over the past few days, Stardock’s CEO, Brad Wardell, has been tweeting some really interesting details about DX12. As Brad Wardell noted, in a recent test he saw over 100fps difference between DX11 and DX12. This test was conducted on an unreleased GPU and as Wardell claimed, this performance boost was on a system that was ‘way beyond console stuff’."


http://www.dsogaming.com/news/dx12-...sed-gpu-in-new-test-way-beyond-console-stuff/
 

Spanners

Senior member
Mar 16, 2014
Has Microsoft finalised the tiers, and exactly how much of a tier must you support to fall under that tier?

In my view this thread is just adding to the confusion and misinformation; while well intentioned, it's not helping much.

Check the PDF in the OP (page 39); I'd assume they are finalised, or Intel wouldn't be quoting them. The PDF explains the support required for each tier.

I disagree, this thread has been informative.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
Has Microsoft finalised the tiers, and exactly how much of a tier must you support to fall under that tier?

In my view this thread is just adding to the confusion and misinformation; while well intentioned, it's not helping much.

Your question doesn't make much sense, but I'll try to answer it anyway ...

First off, NOTHING is finalized as far as the feature level specifications go, so what we got was just a sneak peek of things ...

Second of all, support for a resource binding tier is a binary decision, so there's hardly any wiggle room on that front ...

The reason you might not find this thread helpful is that some people in here have kept the deception going. The only people in here who might actually know something are zlatan, NeoLuxembourg, and, markedly, myself ...
 

Mindtaker

Junior Member
Feb 26, 2015
Has Microsoft finalised the tiers, and exactly how much of a tier must you support to fall under that tier?

In my view this thread is just adding to the confusion and misinformation; while well intentioned, it's not helping much.

- Not clear.

- Agreed, some red fanboys but little good information.

The reason you might not find this thread helpful is that some people in here have kept the deception going. The only people in here who might actually know something are zlatan, NeoLuxembourg, and, markedly, myself ...

Hahaha, oh god, this makes my day :D
 