[Microsoft] DirectX 12: A MiniEngine Update

BlitzWulf

Member
Mar 3, 2016
Haven't seen this posted anywhere.
https://www.youtube.com/watch?v=bSTIsgiw7W0
This is a recent presentation from Microsoft's DirectX 12 education channel.
It runs through some of the new features of MiniEngine, a small example engine intended to help developers acclimate to the new features offered by the API.
Some may not be happy to see that Microsoft seems to have a lot of AMD love going on right now. They seem really excited about compute shaders being usable asynchronously, saying their entire post-processing chain is done asynchronously with compute shaders.
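Roughly, the kind of thing they describe looks like this in D3D12. This is just my own sketch to show the idea, not MiniEngine's actual code; the device, compute PSO, root signature, resource bindings and dispatch size are all assumed to exist elsewhere.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch only: submit a post-processing pass on a dedicated compute queue
// so it can overlap work on the main graphics (DIRECT) queue.
void SubmitAsyncPostProcess(ID3D12Device* device,
                            ID3D12PipelineState* computePso,
                            ID3D12RootSignature* computeRootSig)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // second queue, compute-only
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                   IID_PPV_ARGS(&allocator));
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                              allocator.Get(), computePso, IID_PPV_ARGS(&list));

    // Record the post-processing dispatch (resource bindings omitted).
    list->SetComputeRootSignature(computeRootSig);
    list->Dispatch(1920 / 8, 1080 / 8, 1);   // placeholder thread-group counts
    list->Close();

    ID3D12CommandList* lists[] = { list.Get() };
    computeQueue->ExecuteCommandLists(1, lists);
    // A fence would normally tell the graphics queue when this work is done.
}

The point is simply that a D3D12_COMMAND_LIST_TYPE_COMPUTE queue can execute alongside the DIRECT queue on hardware that supports concurrent execution.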

[Slide screenshots from the presentation]




It seems Microsoft is really doubling down and giving devs strong examples to work from when creating or extending DX12 engines, so that they can properly use all the features this new API has to offer.

Without sounding too biased, they appear to be pushing a ton of features that will suit the hardware of a certain IHV more than its current competitors.

Definitely a good watch.
 

BlitzWulf

Member
Mar 3, 2016

Thanks for your reply.
I agree that the GDC presentation was more diplomatic and certainly more technically in-depth (well beyond my understanding of this subject).
Did you mean my opinions are biased, or that I'm mischaracterizing the content of the video?
Or was it that this video could give some people (me) the wrong idea if viewed out of context?
I got the impression that even though their best programming practices (your post) say one thing, the engine code examples they provide to developers lean heavily towards rendering and resource management techniques that, from my limited understanding, favor AMD's architecture designs (tile-based deferred rendering, long-running compute shaders, asynchronous execution of graphics and compute shaders).
Feel free to correct me if I'm wrong; as I said, I'm far from an expert on the matter.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
How is that bias when it's from Microsoft? o_O

PS: It's not an AMD presentation. It's actually MS's DX12 demo engine for developers to learn how to do DX12 properly.

There's always some sort of bias ...

In the current situation bringing up the topic of async compute is an implicit positive for AMD ...

Even an ISV like Microsoft is not immune to playing favourites for an IHV ...

AMD was the ONLY IHV to support the idea of multiple and simultaneous execution of separate compute queues in hardware, yet Microsoft exposed those features in DX12 despite AMD's market share disadvantage on PC ...
 

tential

Diamond Member
May 13, 2008
There's always some sort of bias ...

In the current situation bringing up the topic of async compute is an implicit positive for AMD ...

Even an ISV like Microsoft is not immune to playing favourites for an IHV ...

AMD was the ONLY IHV to support the idea of multiple and simultaneous execution of separate compute queues in hardware, yet Microsoft exposed those features in DX12 despite AMD's market share disadvantage on PC ...

Nvidia supports async and continues to say their GPUs do.
How is it biased for MS to support async when BOTH GPU vendors support it?
 

3DVagabond

Lifer
Aug 10, 2009

What makes you think MSFT is AMD biased? I remember them using nVidia for the 1st DX12 demos which many instantly proclaimed as proof that nVidia was better at DX12.

Before it became apparent that nVidia hardware can't do async compute in DX12, lauding async compute was fine. It's not MSFT's fault nVidia was either lying or didn't understand what was required to use the function with DX12 (Maybe thought it was more like what they do with CUDA?).
 

Azix

Golden Member
Apr 18, 2014
There's always some sort of bias ...

In the current situation bringing up the topic of async compute is an implicit positive for AMD ...

Even an ISV like Microsoft is not immune to playing favourites for an IHV ...

AMD was the ONLY IHV to support the idea of multiple and simultaneous execution of separate compute queues in hardware, yet Microsoft exposed those features in DX12 despite AMD's market share disadvantage on PC ...

You are suggesting they should hold back gaming just because Nvidia has more market share but inferior technology? Anyway, they have the Xbox with AMD hardware and plan to put Xbox games on PC, so it makes sense to expose it.

And really, I don't think async compute is actually that special. It matters now because Nvidia has issues with it, but it seems like a natural evolution of graphics to me. Why do either graphics or compute? Why not just do both and let the GPU sort out what runs when? It just makes sense.

They can't be called biased for exploiting a common-sense approach.

I hadn't realized they used conservative rasterization in The Division for the HFTS shadows. Mixed results, but the bike shadow looks good, though it looks like a stretched reflection. If the next-gen cards don't take a performance hit with it, that would be good; if there is still a major performance loss, I don't know if it would be worth it.
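For reference, once support is confirmed, turning conservative rasterization on in D3D12 is just a pipeline-state setting. A rough sketch of mine, not code from The Division or anywhere else; the PSO description is assumed to be filled out elsewhere:

#include <d3d12.h>

// Sketch only: enable conservative rasterization on a graphics PSO if the
// hardware supports it, so partially covered pixels are never skipped.
void EnableConservativeRasterIfSupported(ID3D12Device* device,
                                         D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));

    if (options.ConservativeRasterizationTier !=
            D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED)
    {
        psoDesc.RasterizerState.ConservativeRaster =
            D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;
    }
}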

http://32ipi028l5q82yhj72224m8j.wpe...ogramming_Model_and_Hardware_Capabilities.pdf
 

PhonakV30

Senior member
Oct 26, 2009
Bias?
Because of async compute? Isn't it part of DX12? Tessellation is part of DX11, and Nvidia cards are superior to any AMD cards at tessellation, so is that bias? Nvidia ignored one of the important DX12 features, async compute, yet they said:

Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC.

Microsoft is focused on async compute for the Xbox One, so it makes sense.
 
Feb 19, 2009
Microsoft is focused on async compute for the Xbox One, so it makes sense.

It's one half of the major advantage of DX12 or Mantle-like console APIs: multi-thread the CPU and "hyper-thread" the GPU with parallel graphics + compute. How else are they going to get peak performance out of obsolete 7850-class tech, right?
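The "multi-thread the CPU" half of that, roughly, is just recording command lists on worker threads and submitting them together. My own sketch, with the PSO and the actual draw recording assumed:

#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

// Sketch only: each worker thread records into its own allocator/list pair,
// then the main thread submits everything in one call.
void RecordFrameInParallel(ID3D12Device* device, ID3D12CommandQueue* graphicsQueue,
                           ID3D12PipelineState* pso, unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < threadCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), pso, IID_PPV_ARGS(&lists[i]));

        // Command lists are free-threaded as long as each allocator/list
        // pair is only used by one thread at a time.
        workers.emplace_back([&, i] {
            // ... record this thread's slice of the frame's draws here ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> submit;
    for (auto& l : lists) submit.push_back(l.Get());
    graphicsQueue->ExecuteCommandLists((UINT)submit.size(), submit.data());
}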
 

Ed1

Senior member
Jan 8, 2001
Thanks for your reply.
I agree that the GDC presentation was more diplomatic and certainly more technically in-depth (well beyond my understanding of this subject).
Did you mean my opinions are biased, or that I'm mischaracterizing the content of the video?
Or was it that this video could give some people (me) the wrong idea if viewed out of context?
I got the impression that even though their best programming practices (your post) say one thing, the engine code examples they provide to developers lean heavily towards rendering and resource management techniques that, from my limited understanding, favor AMD's architecture designs (tile-based deferred rendering, long-running compute shaders, asynchronous execution of graphics and compute shaders).
Feel free to correct me if I'm wrong; as I said, I'm far from an expert on the matter.

I was simply showing there's more to DX12 than those slides, and my guess is it will only expand.
 

Ed1

Senior member
Jan 8, 2001
What makes you think MSFT is AMD biased? I remember them using nVidia for the 1st DX12 demos which many instantly proclaimed as proof that nVidia was better at DX12.

Before it became apparent that nVidia hardware can't do async compute in DX12, lauding async compute was fine. It's not MSFT's fault nVidia was either lying or didn't understand what was required to use the function with DX12 (Maybe thought it was more like what they do with CUDA?).

I never said MSFT or AMD are biased, but let's be frank: AC is only one small part of DX12, and "needing" to use AC on your HW to get good GPU utilization is not a good thing. Sure, it gives AMD hardware a decent boost now (5-10% max), but it would be better if they didn't need it to feed the GPU.

Now, on to MS as a whole: they're trying to find every way to hype DX12 so everyone migrates to Win10 and UWP, and with AC, of course, AMD is on that wagon too (I would be too) :)
 
Feb 19, 2009
DX12, and "needing" to use AC on your HW to get good GPU utilization is not a good thing.

You don't actually understand Async Compute or multi-engine rendering in DX12 if that is what you think.

Some parts of the GPU are used for copies or transfers, for example, and don't touch the shaders at all. Likewise, shadow map rendering mostly hits the rasterizer and ROPs rather than the shaders.

In simple terms: https://youtu.be/H1L4iLIU9xU?t=14m48s
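To put the copy/transfer point above in code terms: a dedicated COPY queue drives the DMA engines, which are separate from the shader cores, so uploads can overlap rendering. A rough sketch of mine, with the source/destination resources and state handling assumed:

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch only: push an upload through a COPY queue so it runs alongside
// graphics work (barriers and fencing omitted for brevity).
void UploadOnCopyQueue(ID3D12Device* device,
                       ID3D12Resource* uploadBuffer,    // CPU-visible source
                       ID3D12Resource* defaultBuffer)   // GPU-local destination
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;
    ComPtr<ID3D12CommandQueue> copyQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&copyQueue));

    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COPY,
                                   IID_PPV_ARGS(&allocator));
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COPY,
                              allocator.Get(), nullptr, IID_PPV_ARGS(&list));

    list->CopyResource(defaultBuffer, uploadBuffer);
    list->Close();

    ID3D12CommandList* lists[] = { list.Get() };
    copyQueue->ExecuteCommandLists(1, lists);
    // A fence on copyQueue tells the graphics queue when the data is ready.
}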
 

Ed1

Senior member
Jan 8, 2001
You don't actually understand Async Compute or multi-engine rendering in DX12 if that is what you think.

Some parts of the GPU are used for copies or transfers, for example, and don't touch the shaders at all. Likewise, shadow map rendering mostly hits the rasterizer and ROPs rather than the shaders.

In simple terms: https://youtu.be/H1L4iLIU9xU?t=14m48s

That is all true for AMD, which is who is talking there.

Anyway, I still maintain that AC is a very small part of DX12; sure, you're seeing it hyped now because it's better known from the consoles. Wait till more mature DX12 support comes.
 

Vaporizer

Member
Apr 4, 2015
That is all true for AMD, which is who is talking there.

Anyway, I still maintain that AC is a very small part of DX12; sure, you're seeing it hyped now because it's better known from the consoles. Wait till more mature DX12 support comes.

Yeah, hopefully some day mature DX12 support will also come from the other IHV.
 

Mahigan

Senior member
Aug 22, 2015
More mature DX12 support won't help Maxwell. We'll move to Pascal and folks will forget about Maxwell the way they have Kepler. Then we'll move to Volta, and it'll be the same deal.

DX12 allows us to circumvent NV-only optimizations and compels devs to optimize for both NV and AMD. Coupled with the console effect we have much closer competition from both NV and AMD.

I, for one, am glad to see NV's dominance disappear and instead see titles optimized for both architectures. Asynchronous compute + graphics is here to stay. How NV deals with that fact of life will be interesting.

I'm also looking forward to GPUOpen cross vendor optimizations. It should negate Gameworks and allow for properly optimized effects for both vendors.

It seems that hardware-side innovation is where we will see NV and AMD attempt to differentiate themselves. The way it's meant to be.
 

Ed1

Senior member
Jan 8, 2001
More mature DX12 support won't help Maxwell. We'll move to Pascal and folks will forget about Maxwell the way they have Kepler. Then we'll move to Volta, and it'll be the same deal.

DX12 allows us to circumvent NV-only optimizations and compels devs to optimize for both NV and AMD. Coupled with the console effect we have much closer competition from both NV and AMD.

I, for one, am glad to see NV's dominance disappear and instead see titles optimized for both architectures. Asynchronous compute + graphics is here to stay. How NV deals with that fact of life will be interesting.

I'm also looking forward to GPUOpen cross vendor optimizations. It should negate Gameworks and allow for properly optimized effects for both vendors.

It seems that hardware-side innovation is where we will see NV and AMD attempt to differentiate themselves. The way it's meant to be.

Well, IMO Pascal won't be much different from Maxwell (same with AMD's next, Polaris). I doubt they will make major architectural changes at the same time as the big process move (28nm > 16nm).

While I agree it's all on the dev side with DX12, I think there will be more generic optimization, focused on what the architectures have in common, rather than optimization per chip.
I think it's just too time-consuming, but we will see. Hopefully the big AAA engines will do it in time.
We also need to wait and see how ground-up DX12 engines perform.
 

Pinstripe

Member
Jun 17, 2014
Mahigan keeps preaching the end for Nvidia is nigh, then Nvidia just posts another record fiscal quarter while Team Red gets deeper into the red. Same story as always.
 

Kenmitch

Diamond Member
Oct 10, 1999
Mahigan keeps preaching the end for Nvidia is nigh, then Nvidia just posts another record fiscal quarter while Team Red gets deeper into the red. Same story as always.

Why stir the pot?

In the end it's best if both AMD and Nvidia compete on the hardware side rather than through the silly games one IHV plays with software.

Why hold back on features with the intent to sandbag, stifle, or whatever you want to call it, the future of gaming?
 

caswow

Senior member
Sep 18, 2013
Mahigan keeps preaching the end for Nvidia is nigh, then Nvidia just posts another record fiscal quarter while Team Red gets deeper into the red. Same story as always.

Oh really, does he do this? I can't remember ever reading doom and gloom from him about NV.
 

Pinstripe

Member
Jun 17, 2014
Why hold back on features with the intent to sandbag, stifle, or whatever you want to call it, the future of gaming?

That is what competition does. It's a game of hijacking and a race to be first on meaningless features (cough DX12_1 cough). It dilutes quality long term, but the prices keep going up anyway.

The only ones really holding back the game industry are lazy developers and greedy publishers, not NV or AMD.
 

Mahigan

Senior member
Aug 22, 2015
Oh really, does he do this? I can't remember ever reading doom and gloom from him about NV.

That's because I never preach the end of NVIDIA. I discuss current and upcoming trends based on the technology at hand as well as upcoming technology.

I do preach on open source as well as open standards and hope to see more competition stemming from the adoption of these ideals.

Oh well. Can't please everyone.
 

3DVagabond

Lifer
Aug 10, 2009
I never said MSFT or AMD are biased, but let's be frank: AC is only one small part of DX12, and "needing" to use AC on your HW to get good GPU utilization is not a good thing. Sure, it gives AMD hardware a decent boost now (5-10% max), but it would be better if they didn't need it to feed the GPU.

Now, on to MS as a whole: they're trying to find every way to hype DX12 so everyone migrates to Win10 and UWP, and with AC, of course, AMD is on that wagon too (I would be too) :)

There are tasks that, without async compute, have to wait for other tasks to finish. If you want to remove that bottleneck, then you need to use async compute. The fact that GCN can isn't the weakness you are trying to portray; it's an advantage. This "AMD underutilizes the card's resources" line is just spin. Don't buy into it. What do you even base that conclusion on?
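That's exactly what the cross-queue fences in DX12's multi-engine model are for: the compute queue signals when its pass finishes, and the graphics queue only stalls at the point it actually consumes the result, so everything before that overlaps. A rough sketch of mine, with the queues and command lists assumed to be created elsewhere:

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch only: overlap a compute pass with graphics work and synchronize
// with a GPU-side fence wait.
void OverlapComputeWithGraphics(ID3D12Device* device,
                                ID3D12CommandQueue* computeQueue,
                                ID3D12CommandQueue* graphicsQueue,
                                ID3D12CommandList* computeWork,
                                ID3D12CommandList* dependentGraphicsWork)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off the compute pass; the graphics queue keeps running earlier,
    // independent work in the meantime.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence.Get(), 1);

    // GPU-side wait: only commands submitted after this point are held back
    // until the compute pass has completed.
    graphicsQueue->Wait(fence.Get(), 1);
    graphicsQueue->ExecuteCommandLists(1, &dependentGraphicsWork);
}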

And considering the OP was a presentation by MSFT, you most certainly did claim MSFT is biased by saying your link was a "more unbiased view".
 

Ed1

Senior member
Jan 8, 2001
There are tasks that, without async compute, have to wait for other tasks to finish. If you want to remove that bottleneck, then you need to use async compute. The fact that GCN can isn't the weakness you are trying to portray; it's an advantage. This "AMD underutilizes the card's resources" line is just spin. Don't buy into it. What do you even base that conclusion on?

And considering the OP was a presentation by MSFT, you most certainly did claim MSFT is biased by saying your link was a "more unbiased view".

Maybe I should have said a broader viewpoint of DX12. I took the OP as implying those screens represent a large part of DX12; that is why I posted the PDF, which shows a much larger feature list.
As for your other statement, AMD's DX11 driver did have big trouble with multi-core support, something Nvidia didn't seem to suffer from on the same scale.
 