One major reason is that OpenGL can do a 32-bit fixed-point depth buffer with logarithmic depth written from the shader, while the best DirectX has to offer is a 32-bit float reverse z-buffer.
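To put rough numbers on that claim, here's a minimal sketch (the near/far planes and sample distances are arbitrary values I picked, not from any real game) comparing the world-space size of one depth step under a 32-bit float reverse z encoding versus a 32-bit fixed-point logarithmic encoding:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Arbitrary planes for illustration only.
    const double n = 0.1, f = 100000.0;

    for (double z = 1.0; z < f; z *= 10.0) {
        // Reverse float z: d = n*(f - z) / (z*(f - n)), stored as a 32-bit float.
        float  d     = (float)(n * (f - z) / (z * (f - n)));
        float  dNext = std::nextafter(d, 0.0f);               // one float ULP toward the far plane
        double zNext = n * f / ((double)dNext * (f - n) + n); // invert the mapping back to view depth
        double stepRev = zNext - z;

        // Log z: d = log(z/n) / log(f/n), stored as 32-bit fixed point (uniform steps in log space).
        double dl      = std::log(z / n) / std::log(f / n);
        double zlNext  = n * std::pow(f / n, dl + 1.0 / 4294967296.0); // one LSB of a 32-bit integer
        double stepLog = zlNext - z;

        std::printf("z = %8.0f   reverse-float step: %.3e   log-fixed step: %.3e\n",
                    z, stepRev, stepLog);
    }
    return 0;
}
```

With those made-up planes, the fixed-point log buffer's steps come out consistently finer than the float reverse-z steps at every sampled distance.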
That means OpenGL would actually be better for an original Xbox emulator than DX11 would. Not only is DX11 unable to match the w-buffer (which many original Xbox games used) 100% of the time, but a 32-bit reverse float z-buffer isn't enough to replicate the PS2 100% of the time either. Also, don't forget that pre-DX9 games can't look as good on DX11 as they can through an OpenGL wrapper. UT99's (unofficial) DX11 renderer doesn't look as good as an OpenGL renderer could, because UT99's original rasterizers used w-testing (for PowerVR's proprietary API) or w-buffering (for Glide).
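For anyone who hasn't run into it: a w-buffer stores depth linearly in view-space w instead of anything derived from 1/z. A sketch of the kind of quantization an emulator would have to reproduce bit-for-bit (the 16-bit width and round-to-nearest rule here are my assumptions; actual chips varied):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Hypothetical 16-bit w-buffer: depth is linear in view-space w,
// unlike z-buffers (hyperbolic in w) or log buffers.
uint16_t wbuffer16(double w, double wNear, double wFar) {
    double d = (w - wNear) / (wFar - wNear); // linear remap to [0,1]
    return (uint16_t)std::lround(d * 65535.0);
}

int main() {
    // Two nearby surfaces: whether they tie in the depth test depends
    // entirely on this exact quantization, which a float reverse-z
    // buffer can't mimic (these both round to the same 16-bit value).
    std::printf("%d vs %d\n", wbuffer16(523.100, 1.0, 4096.0),
                              wbuffer16(523.135, 1.0, 4096.0));
    return 0;
}
```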
Depth precision has been the most neglected aspect of 3D graphics since its inception, IMO. Before the G80, the only consumer hardware rasterizers that did 32-bit fixed-point z-buffers were Matrox's G400 and its derivatives, the PS2's GS, and the Rage 128 and R200 (though I'm not 100% sure about the last two).
We have seen render target precision climb to ARGB16 fixed point, ARGB16 float, and ARGB32 float, but DX still doesn't offer anything that's a perfect match for the w-buffer.
Another advantage of 32-bit log z-buffers is that they leave no room for a stencil buffer, which means brand-new shadowing algorithms would have to be used, and that's a good thing. 8-bit stencil buffers have been used for shadows for way too long; fully programmable shadows via shaders would be much nicer.
Ideally, programmable blending and depth would be used, since software is more versatile than fixed function. Software rendering probably wouldn't be as fast or look as good at mid-range settings, but assuming the FPU did double precision, software rendering could definitely hit even higher quality than hardware (while being slower), and it could just as definitely run at lower quality than hardware can (while being faster). And of course, software rendering is straight to the metal. It would be best done on an ISA better suited to graphics rendering than x86, with two or even three general-purpose dies plus display logic, and probably texture units integrated into one of those dies, but it would definitely be more versatile. God in heaven help us if we ever see hardware ray tracing make it to store shelves.
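As a sketch of what "programmable blending and depth" buys you (everything here is illustrative, not any real API): the blend rule and the depth test become plain functions, so an exact w-buffer replica or any blend formula is just another callback:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Toy software framebuffer: blending and depth testing are arbitrary
// functions instead of fixed-function state. All names are made up.
struct SoftFramebuffer {
    int w, h;
    std::vector<uint32_t> color;
    std::vector<float>    depth;
    std::function<uint32_t(uint32_t src, uint32_t dst)> blend;
    std::function<bool(float src, float dst)> depthPasses;

    void plot(int x, int y, uint32_t src, float z) {
        size_t i = (size_t)y * w + x;
        if (depthPasses(z, depth[i])) {      // any depth rule: reverse z, log z, w-buffer...
            depth[i] = z;
            color[i] = blend(src, color[i]); // any blend rule, not a fixed factor table
        }
    }
};

int main() {
    SoftFramebuffer fb{4, 4,
                       std::vector<uint32_t>(16, 0),
                       std::vector<float>(16, 0.0f), {}, {}};
    fb.blend       = [](uint32_t s, uint32_t d) { return s | d; }; // an arbitrary bitwise rule
    fb.depthPasses = [](float s, float d) { return s > d; };       // reverse z: greater is closer
    fb.plot(1, 1, 0x00FF00FFu, 0.5f);
    return 0;
}
```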
Your thoughts?