I was thinking so, but I don't really know for sure. I heard that HyperZ was more aggressive than NVIDIA's early Z occlusion culling, and that AMD wanted to use HyperZ to cull things that could barely be visible as well.
I've never understood what NVIDIA did to make their depth calculations look better.
What are some vendor-specific optimizations (i.e., what does NVIDIA sacrifice for performance that AMD doesn't, and vice versa) at the driver level? What about at the hardware level? Anyone have a list?
As others know, I have a beef with early Z occlusion culling, because it tempts devs into using depth formats other than 32-bit fixed-point log Z.
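For context on why early Z and log depth fight each other: a logarithmic depth value generally has to be written per fragment (e.g. via gl_FragDepth), and once the fragment shader writes its own depth the hardware can no longer reject fragments before shading, so early Z / Hi-Z is off the table. Below is a minimal C sketch of the mapping I assume "32 bit fx log z" refers to (a 32-bit fixed-point logarithmic depth buffer); the function names, the constant C, and far_plane are mine for illustration, not anything the vendors define.

```c
#include <math.h>
#include <stdint.h>

/* Classic logarithmic depth mapping: depth = log(C*w + 1) / log(C*far + 1),
 * where w is the positive clip-space w (roughly view-space distance),
 * C tunes near-range precision, and far_plane is the far-plane distance.
 * (Assumed formula and names -- illustrative, not from the thread.) */
static float log_depth(float w, float far_plane, float C)
{
    return logf(C * w + 1.0f) / logf(C * far_plane + 1.0f);
}

/* Quantize the [0,1] log depth to 32-bit fixed point (the "32 bit fx" part). */
static uint32_t log_depth_fx32(float w, float far_plane, float C)
{
    float d = log_depth(w, far_plane, C);
    if (d < 0.0f) d = 0.0f;
    if (d > 1.0f) d = 1.0f;
    return (uint32_t)(d * 4294967295.0); /* scale to the full 32-bit range */
}
```

In a real renderer this value would be computed in the fragment shader and written out as the fragment's depth, which is exactly the step that defeats the early rejection hardware on both vendors.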