Below is a list of the things Carmack was not impressed with concerning the overpriced GeForce 3. Sure, it is a nice card, but it is not perfect, and nVidia is playing us all for dummies if we go out and pay full MSRP for it.
The things I am indifferent to:
I'm still not a big believer in hardware accelerated curve tessellation.
I'm not going to go over all the reasons again, but I would have rather seen the features left off and ended up with a cheaper part.
The shadow map support is good to get in, but I am still unconvinced
that a fully general engine can be produced with acceptable quality using shadow maps for point lights. I spent a while working with shadow buffers last year, and I couldn't get satisfactory results. I will revisit that work now that I have GeForce 3 cards, and directly compare it with my current approach.
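To see why point lights are the sticking point: a single depth map only covers one frustum, so an omnidirectional light needs six of them, one per cube face, each rendered and re-projected separately. Below is a minimal allocation sketch, not Carmack's code; the ARB_shadow / ARB_depth_texture tokens are used for clarity (the GF3-era extensions were the SGIX variants), and SHADOW_SIZE is an arbitrary choice.

```c
/* Allocate six depth textures, one per cube face, with hardware depth
 * compare enabled.  Sketch only; assumes a current GL context and that the
 * driver exposes depth textures and shadow compares. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

#define SHADOW_SIZE 512  /* arbitrary resolution for illustration */

void alloc_point_light_shadow_maps(GLuint tex[6])
{
    glGenTextures(6, tex);
    for (int face = 0; face < 6; ++face) {
        glBindTexture(GL_TEXTURE_2D, tex[face]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
                     SHADOW_SIZE, SHADOW_SIZE, 0,
                     GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        /* the r texture coordinate is compared against the stored depth */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                        GL_COMPARE_R_TO_TEXTURE_ARB);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
    }
}
```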
At high triangle rates, the index bandwidth can get to be a significant thing. Other cards that allow static index buffers as well as static vertex buffers will have situations where they provide higher application speed. Still, we do get great throughput on the GF3 using vertex array range and glDrawElements.
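For context, this is roughly what the fast path looks like in plain OpenGL; a sketch, not Carmack's code. With NV_vertex_array_range the vertex data can be placed in AGP or video memory, but the index array handed to glDrawElements still comes out of system memory on every call, which is the bandwidth concern.

```c
/* Minimal sketch: drawing a static mesh with vertex arrays and
 * glDrawElements.  Assumes a current GL context; with NV_vertex_array_range
 * the positions could live in fast memory, but the indices are re-read from
 * system memory each call. */
#include <GL/gl.h>

void draw_static_mesh(const float *xyz, const unsigned short *indices,
                      int num_indices)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, xyz);   /* tightly packed positions */

    /* One call submits the whole mesh; the index data streams across the bus. */
    glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_SHORT, indices);

    glDisableClientState(GL_VERTEX_ARRAY);
}
```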
The things that are bad about it:
Vertex programs aren't invariant with the fixed function geometry paths. That means that you can't mix vertex program passes with normal passes in a multipass algorithm. This is annoying, and shouldn't have happened.
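A concrete way to see the problem, as a hedged sketch rather than anything from the .plan: the standard multipass idiom lays down depth with one path and then redraws the same geometry with glDepthFunc(GL_EQUAL). That only works if both passes transform vertices to bit-identical positions, which is exactly the invariance the GF3 vertex programs don't guarantee against the fixed-function path. The draw_depth_pass / draw_lit_pass helpers below are hypothetical.

```c
/* Multipass pattern that the invariance problem breaks.  Pass 2 relies on
 * glDepthFunc(GL_EQUAL) hitting exactly the depth values pass 1 wrote; if
 * the vertex-program path transforms vertices even slightly differently from
 * the fixed-function path, fragments drop out. */
#include <GL/gl.h>

void draw_depth_pass(void);   /* hypothetical: fixed-function, depth only */
void draw_lit_pass(void);     /* hypothetical: same geometry, lit */

void render_multipass(void)
{
    /* Pass 1: fixed-function transform, lay down depth. */
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    draw_depth_pass();

    /* Pass 2: additive lighting, must match pass 1 exactly per pixel. */
    glDepthFunc(GL_EQUAL);
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    /* glEnable(GL_VERTEX_PROGRAM_NV);  switching the transform path here is
       what breaks invariance on the GF3 */
    draw_lit_pass();
    glDisable(GL_BLEND);
}
```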
Now we come to the pixel shaders, where I have the most serious issues. I can just ignore this most of the time, but the way the pixel shader functionality turned out is painfully limited, and not what it should have been.
DX8 tries to pretend that pixel shaders live on hardware that is a lot more general than the reality.
Nvidia's OpenGL extensions expose things much more the way they actually are: the existing register combiners functionality extended to
eight stages with a couple tweaks, and the texture lookup engine is configurable to interact between textures in a list of specific ways.
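For those who haven't programmed the combiners, the flavor of the interface is shown below; a rough sketch, not from the .plan, of a single general stage computing texture0 * texture1, with the GF3 simply allowing up to eight such stages. It assumes a current context and that the NV_register_combiners entry points have already been resolved through the usual extension-loading mechanism.

```c
/* One general combiner stage: spare0 = tex0 * tex1, then the final combiner
 * passes spare0 through to the framebuffer.  Error handling omitted. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

void setup_modulate_combiner(void)
{
    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    /* Stage 0, RGB portion: A = tex0, B = tex1, write A*B into spare0. */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                       GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

    /* Final combiner computes A*B + (1-A)*C + D; set A = 1 and B = spare0 so
     * the written color is just the modulate result. */
    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_ZERO,
                           GL_UNSIGNED_INVERT_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_SPARE0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
}
```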
I'm sure it started out as a better design, but it apparently got cut and cut until it really looks like the old BumpEnvMap feature writ large: it does a few specific special effects that were deemed important, at the expense of a properly general solution.
Yes, it does full bumpy cubic environment mapping, but you still can't just do some math ops and look the result up in a texture. I was disappointed on this count with the Radeon as well, which was just slightly too hardwired to the DX BumpEnvMap capabilities to allow more general dependent texture use.
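The NV_texture_shader interface makes the restriction concrete: each texture unit is assigned one operation from a fixed list, and a dependent fetch can only take its coordinates straight from an earlier unit's fetched texel, never from arbitrary combiner math. A hedged sketch of that configuration, not Carmack's code, follows.

```c
/* GF3 dependent texture read via NV_texture_shader: unit 0 does a normal 2D
 * fetch, unit 1 uses the alpha/red of unit 0's result as its (s, t)
 * coordinate.  The operation has to be picked from the extension's fixed
 * list; there is no way to run math on the coordinate first.  Assumes a
 * current context with both units already bound to 2D textures. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

void setup_dependent_read(void)
{
    glEnable(GL_TEXTURE_SHADER_NV);

    /* Unit 0: ordinary texture fetch. */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

    /* Unit 1: dependent fetch, coordinates taken from unit 0's A and R. */
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
              GL_DEPENDENT_AR_TEXTURE_2D_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV,
              GL_TEXTURE0_ARB);
}
```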
Enshrining the capabilities of this mess in DX8 sucks. Other companies had potentially better approaches, but they are now forced to dumb them down to the level of the GF3 for the sake of compatibility. Hopefully we can still see some of the extra flexibility in OpenGL extensions.