sontin: Marketing strikes again.
Ask yourself: would you buy new hardware when they say these "new features" are possible on the existing GPUs with some workaround? :awe:
Don't get angry about this, but they just want to sell the product. Most of us totally understand why they are doing it. On the other hand, that doesn't mean we can't write the truth, for example here.
I'm using conservative rasterization for shadowing. It is a hybrid shadow-mapping technique with ray tracing. Performance is not a problem for me on PS4, and I don't see why it would be a problem on a PC, but I haven't tried it there.
Nobody will use CR emulated by shaders.
Due to the lack of hardware support in current GPUs, we implement conservative rasterization in a geometry shader (GS)
The main drawback of a geometry shader-based approach, besides having to enable the GS stage, is that we cannot rely on built-in perspective-correct vertex attribute interpolation.
Instead, the vertex attributes of the original triangle have to be passed to the pixel shader for manual interpolation, which consumes a large number of input/output registers. For these reasons, we believe hardware support for conservative rasterization is highly desired.
Indeed, support for conservative rasterization in Direct3D 12 has already been announced.
So yes, DX11_3 and DX12 will require a "hardware" implementation of CR within the rasterizers.
We note that conservative rasterization (CR) implemented in the geometry shader consumes a disproportionately large portion of the total frame time for the complex ARENA scene. The reason for this is twofold: the scene contains a large number of primitives, which are nearly all visible, and uses a large number of vertex attributes. These all have to be passed from the geometry shader to the pixel shader, and manually interpolated. The savings would be very large if conservative rasterization instead was implemented in hardware [...]
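For anyone wondering what that GS expansion actually involves: the usual trick (the generic "bounding triangle" approach, not necessarily Intel's exact shader) is to push each edge of the projected triangle outward by the half-pixel diagonal and re-intersect the shifted edges, so every pixel the triangle touches at all produces a fragment. A rough C++ sketch of the math:

```cpp
// Generic "bounding triangle" dilation (assumes y-up, counter-clockwise
// winding); pixel is the pixel size in the same units as the positions.
#include <array>
#include <cmath>

struct Vec2 { float x, y; };

static Vec2  sub(Vec2 a, Vec2 b)   { return { a.x - b.x, a.y - b.y }; }
static Vec2  add(Vec2 a, Vec2 b)   { return { a.x + b.x, a.y + b.y }; }
static Vec2  mul(Vec2 a, float s)  { return { a.x * s, a.y * s }; }
static float cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// Intersection of two lines given in point + direction form.
static Vec2 intersect(Vec2 p0, Vec2 d0, Vec2 p1, Vec2 d1)
{
    float t = cross(sub(p1, p0), d1) / cross(d0, d1);
    return add(p0, mul(d0, t));
}

// Returns the enlarged triangle that covers every pixel the original touches.
std::array<Vec2, 3> dilateTriangle(const std::array<Vec2, 3>& tri, Vec2 pixel)
{
    // Worst-case distance from a pixel center to its corner: half the diagonal.
    const float r = 0.5f * std::sqrt(pixel.x * pixel.x + pixel.y * pixel.y);

    std::array<Vec2, 3> shifted, dir;
    for (int i = 0; i < 3; ++i) {
        Vec2 e  = sub(tri[(i + 1) % 3], tri[i]);   // edge i
        float l = std::sqrt(e.x * e.x + e.y * e.y);
        Vec2 n  = { e.y / l, -e.x / l };           // outward edge normal
        shifted[i] = add(tri[i], mul(n, r));       // edge pushed out by r
        dir[i]     = e;
    }

    std::array<Vec2, 3> out;
    for (int i = 0; i < 3; ++i) {
        // New vertex i = intersection of the two shifted edges meeting at it.
        out[i] = intersect(shifted[(i + 2) % 3], dir[(i + 2) % 3],
                           shifted[i],           dir[i]);
    }
    return out;
}
```

The original, un-dilated vertex data still has to ride along to the pixel shader for the manual interpolation the paper describes.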
And here ends the discussion about DX12 and DX11_3:
There will be only one way - through the rasterizers.
Yes, there are different ways to implement CR. But only one with the DX flag.
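For what it's worth, at the API level the "DX flag" really is just a cap query plus one rasterizer-state switch; nothing in D3D12 dictates how the hardware has to achieve it. A minimal sketch, assuming the public D3D12 headers (error handling omitted):

```cpp
#include <d3d12.h>

bool EnableConservativeRaster(ID3D12Device* device,
                              D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));

    if (options.ConservativeRasterizationTier ==
        D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED)
        return false; // no support reported -> fall back (e.g. GS emulation)

    // One switch in the PSO's rasterizer state; the rest is up to the IHV.
    psoDesc.RasterizerState.ConservativeRaster =
        D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;
    return true;
}
```

If the tier comes back as not supported, the application falls back to something like the GS emulation discussed above.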
zlatan, can you already tell us about the support for the so-called "async compute" on Maxwell?
Is it "well" supported?
You've already mentioned that there was support for this in most GPUs, in an inefficient manner. So did Kepler already support this?
Thank you very much for the topic!
So you can't provide any evidence/documentation, I guess.
I don't understand this. The IHVs can choose how to implement certain features; MS just wants to standardize the access to them. The hardware implementation can be different. If it's compatible with the standard, then MS doesn't care about the hardware.
The Maxwell v2 is fine. The first Maxwell is not that good, but better than a GK110 Kepler. The earlier Keplers are bad at this.
In theory the Broadwell iGPU is also very good, but async compute will force the hardware to run at its normal clock. It will still be better than turbo mode, but sometimes not by much.
The real king for this workload is GCN, with many ACEs.
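To put "async compute" in API terms: in D3D12 it just means submitting work on a second, compute-type queue alongside the graphics queue; whether the GPU overlaps the two efficiently is exactly the hardware question above (ACEs on GCN, Maxwell v2, and so on). A minimal sketch, assuming the public D3D12 headers (error handling omitted):

```cpp
#include <d3d12.h>

void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** graphicsQueue,
                  ID3D12CommandQueue** computeQueue)
{
    // Graphics ("direct") queue: accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(graphicsQueue));

    // Separate compute queue: compute and copy work only. Work submitted here
    // may overlap rendering on the direct queue; synchronization between the
    // two is done explicitly with ID3D12Fence. How efficiently the overlap
    // actually runs depends on the hardware scheduler.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(computeQueue));
}
```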
Intel published a paper about "Deep Shadow Buffers" in which they used a software CR approach with Kepler, GCN and Intel GPUs: https://software.intel.com/en-us/articles/deep-shading-buffers-on-commodity-gpus
A few quotes from it:
In this particular case, the GS expands the primitive and the PS performs attribute interpolation, before calling the original pixel shader.
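To make concrete what that manual interpolation in the PS amounts to, here is a generic sketch (plain C++ math, not Intel's shader code): because the rasterized primitive is the dilated triangle, the pixel shader has to rebuild barycentric weights against the original triangle and apply the perspective-correct 1/w weighting itself, once per attribute.

```cpp
// Names and the single 'attr' field are illustrative only.
struct Vertex {
    float x, y;   // screen-space position after projection and viewport scale
    float w;      // clip-space w of the vertex (needed for perspective correction)
    float attr;   // one example attribute; UVs, normals, etc. work the same way
};

// Signed-area style edge function used to build barycentric weights.
static float edgeFn(float ax, float ay, float bx, float by, float px, float py)
{
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

// Interpolate 'attr' of the ORIGINAL (un-dilated) triangle at pixel (px, py).
float InterpolateAttr(const Vertex v[3], float px, float py)
{
    // Screen-space barycentric weights against the original triangle.
    float a = edgeFn(v[1].x, v[1].y, v[2].x, v[2].y, px, py);
    float b = edgeFn(v[2].x, v[2].y, v[0].x, v[0].y, px, py);
    float c = edgeFn(v[0].x, v[0].y, v[1].x, v[1].y, px, py);
    float sum = a + b + c;
    a /= sum; b /= sum; c /= sum;

    // Perspective correction: interpolate attr/w and 1/w linearly, then divide.
    float invW  = a / v[0].w + b / v[1].w + c / v[2].w;
    float attrW = a * (v[0].attr / v[0].w)
                + b * (v[1].attr / v[1].w)
                + c * (v[2].attr / v[2].w);
    return attrW / invW;
}
```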
Microsoft indeed specifies hardware features - look at DX11 Tessellation.
Up to now Microsoft hasn't announced a new shader stage or a new rendering order for CR, which makes it more and more clear that the "rasterizing" stage will do the CR.
And there wouldn't be any standardization if you needed to implement three or more different CR paths...
BTW: Here are the new optional DX11.3 features:
https://msdn.microsoft.com/en-us/library/dn879499.aspx
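For completeness, a quick sketch of how those optional caps are queried at runtime (assuming the Windows 10 d3d11_3 headers); each one is simply something the driver either reports or doesn't:

```cpp
#include <d3d11_3.h>
#include <cstdio>

void PrintDx11_3Caps(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS2 opts = {};
    device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS2,
                                &opts, sizeof(opts));

    std::printf("Conservative rasterization tier: %d\n",
                (int)opts.ConservativeRasterizationTier);
    std::printf("Rasterizer ordered views (ROVs): %d\n",
                (int)opts.ROVsSupported);
    std::printf("Typed UAV loads, extra formats:  %d\n",
                (int)opts.TypedUAVLoadAdditionalFormats);
}
```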
Thank you very much zlatan - it was really interesting to read. :thumbsup:
Btw, I would like to see more stuff like that in the future.
That's using a geometry shader too. Isn't the point of manual interpolation on GCN that you could conceivably create "geometry shader"-like pixel shaders, if I understand it correctly? But instead Intel does the GS expansion quoted above.
But doesn't the pixel shader already do the interpolation on GCN? So why did Intel add a GS on top? Sorry if these are dumb questions.
