I'm genuinely curious, Ben - what DX10.1 features does the GT200 not support, since it seems like it has most things covered?
The only thing that I am aware of that isn't exposed in drivers atm is cubemap arrays, and I couldn't honestly say if they are supported in hardware or not (it seems nV has more functionality than they are willing to admit to, why I have no idea - perhaps performance is sub-optimal?). I'm not saying there isn't more missing, but if there is I haven't seen it.
I'm wondering, is this a sarcasm thread or a thread to show just how clueless most people are about DX10.1?
Actually, both
But the more important question is if NV "unofficially" supports DX10.1, as I've heard in some rumors, then how would one utilize those features on NV hardware? For example, does the driver expose such functionality through OpenGL extensions? And if so, how come no game takes advantage of those features on NV hardware?
They are exposed under DX, do a caps test.
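If anyone wants to try that at home, here's a minimal sketch of what such a check could look like (assuming the DirectX SDK's d3d10_1.h headers; the SupportsDX10_1 helper name is just for illustration) - basically you try to create a device at the 10.1 feature level and see whether it succeeds:

```cpp
#include <d3d10_1.h>
#pragma comment(lib, "d3d10_1.lib")

// Returns true if the default adapter exposes the D3D10.1 feature level.
bool SupportsDX10_1()
{
    ID3D10Device1* device = NULL;
    HRESULT hr = D3D10CreateDevice1(
        NULL,                        // default adapter
        D3D10_DRIVER_TYPE_HARDWARE,
        NULL,                        // no software rasterizer DLL
        0,                           // no creation flags
        D3D10_FEATURE_LEVEL_10_1,    // ask for the 10.1 feature level
        D3D10_1_SDK_VERSION,
        &device);

    if (SUCCEEDED(hr))
    {
        // Per-format capabilities can be queried from here with
        // device->CheckFormatSupport() before releasing the device.
        device->Release();
        return true;
    }
    return false;
}
```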
Of course when nVidia pull stunts like this:
That single page, I think, sums up AT better than any other when it comes to vid card analysis. It is honestly shocking that they would publish it, perhaps more so than their shockingly ignorant flames in their AC review.
From what I've read (yeah mostly marketing material), I could see at least one clear advantage of DX10.1 (for us) - tightening the standard (requirements). Standards that everyone (be it video card makers or developers) has to follow tend to benefit consumers.
Benefit consumers, horrifically suffocate innovation. You say tomato
When there is no entity to force the standards, it's the consumers who are disadvantaged.
This can go both ways. There is a rather large amount of functionality in BOTH the 48x0 and 2x0 cores that is not exposed under DX10 - this functionality won't be properly utilized because developers almost exclusively utilize DX10 at this point.
I also remember John Carmack bemoaning ATI/NV doing the same things the same way, just with different names.
That was the fault of OGL, but it was also a benefit in a lot of ways. Because of the open extension support, nV or ATi could release a new part with a new feature and see devs start to take advantage of it right away.
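For illustration, a rough sketch of how a dev might probe for a vendor extension at runtime (the extension name here is just an example, and glGetString(GL_EXTENSIONS) is the old pre-GL3 way of doing it, requiring a current context):

```cpp
#include <GL/gl.h>
#include <cstring>

// Rough sketch: check whether an extension appears in the driver's
// extension string. Note this simple strstr test can false-positive on
// extensions whose names are substrings of longer ones.
bool HasGLExtension(const char* name)
{
    const char* exts =
        reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != NULL && std::strstr(exts, name) != NULL;
}

// Example: GL_EXT_texture_array exposed array textures on DX10-class
// hardware long before any core GL version required them.
// if (HasGLExtension("GL_EXT_texture_array")) { /* use the feature */ }
```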
Exactly, so it would be logical to assume a GPU will be able to process the DX API much faster; in the case of DX10, special architectural changes (like those needed to process the geometry shader and vertex buffer pool) are needed to process truckloads of data.
Sounds like you have spent a lot of time with DirectX, and pretty much no time at all with any other 3D API. The problems you are talking about were native to DirectX; they were never an issue using legacy hardware in OpenGL or any other modern 3D API. The hardware didn't need to change - the horrifically coded segment of D3D that tanked small-batch geometric data needed to be reworked.
