Originally posted by: gururu
Originally posted by: ViRGE
Originally posted by: Pete
I thought the IQ was superior due to using math rather than filtered texture lookups. As for not "compromising" D3's IQ, that sounds like more of an artist issue than a programmer one.
I don't remember JC specifically denigrating Humus' patch b/c of lower or compromised IQ. His only comment on it, AFAIK, is in this B3D interview: http://beyond3d.com/interviews/carmack04/index.php?p=2 and it doesn't mention costing or compromising IQ.
Note where he talks about all the artwork being done on non-fragment-program hardware; because Humus's tweak doesn't match the table's square/bias, it produces an image that deviates from the "reference" image, hence my earlier comments about IQ. In a generic case the IQ may well be superior, but for Doom 3 the table perfectly matches what id wants, and as Carmack notes, it's also useful when you want a finite cutoff angle, which I'm assuming they're taking advantage of here.
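To put numbers on that square/bias point (using the constants the Humus thread eventually converged on, so treat this as a community reconstruction rather than id's official table):

    f_table(x) ≈ (max(4x − 3, 0))^2    vs.    f_pow(x) = x^16,    where x = N·H

f_table hits exactly zero once N·H drops below 0.75, which is the finite cutoff angle Carmack is talking about; x^16 only approaches zero, so a straight power-function swap can never match the shipped image bit-for-bit, even if the difference is hard to spot.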
I don't buy this preservation-of-the-"reference"-image stuff. JC admitted they used a texture lookup table because it was faster than doing the math. He neglected to say that the lookup is faster only on Nvidia hardware. He admits that doing the math may actually be faster on certain hardware, but doesn't name ATI or Humus or anything; a snub. It comes down to which does the trick faster, the lookup or the math. Most sites agree that ATI's architecture does math a lot faster than Nvidia's, which backs up the performance gain the Humus tweak offers (I got a 20% increase with NO apparent artifacts). Lastly, no site has shown a quality difference between the two methods.
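To make the lookup-vs.-math trade concrete, here's a minimal CPU-side sketch (plain C, not id's actual shader code; the curve uses the community-derived constants from the Humus thread, and names like specular_table are mine): one path precomputes a table and reads it per pixel, the way the shipped dependent-texture version works, while the other evaluates the curve directly, the way Humus's tweak does.

    #include <stdio.h>

    /* The specular falloff both approaches are trying to produce: a biased,
     * scaled, squared curve with a hard cutoff.  Constants are the ones the
     * Humus thread converged on, not an official id reference. */
    static float specular_func(float n_dot_h) {
        float f = (n_dot_h - 0.75f) * 4.0f;
        if (f < 0.0f) f = 0.0f;
        return f * f;
    }

    #define TABLE_SIZE 256
    static float specular_table[TABLE_SIZE];

    /* Path 1: precompute the curve once, then do a cheap indexed read per
     * pixel.  On the GPU this is the dependent texture lookup id shipped. */
    static void build_table(void) {
        for (int i = 0; i < TABLE_SIZE; i++)
            specular_table[i] = specular_func((float)i / (TABLE_SIZE - 1));
    }

    static float lookup_path(float n_dot_h) {
        int i = (int)(n_dot_h * (TABLE_SIZE - 1) + 0.5f); /* nearest entry */
        if (i < 0) i = 0;
        if (i > TABLE_SIZE - 1) i = TABLE_SIZE - 1;
        return specular_table[i];
    }

    /* Path 2: evaluate the math directly per pixel, as Humus's tweak does.
     * Same answer, different cost model: pure ALU work, no texture fetch. */
    static float math_path(float n_dot_h) {
        return specular_func(n_dot_h);
    }

    int main(void) {
        build_table();
        for (int i = 0; i <= 6; i++) {
            float x = 0.70f + 0.05f * i; /* sample N.H around the cutoff */
            printf("N.H=%.2f  table=%.4f  math=%.4f\n",
                   x, lookup_path(x), math_path(x));
        }
        return 0;
    }

Both paths print the same values; which one is faster on a GPU depends entirely on whether the card has spare ALU throughput relative to texture bandwidth, which is exactly the R300-vs-NV3x argument, and lines up with the ~20% gains people are reporting on ATI hardware.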
In any case, how could JC not know that the math would be faster on ATI? Of course he knew. For one reason or another, it seems he just didn't give a damn.
That's what I thought. What Humus did doesn't take long; id could easily have implemented his work in the retail game but didn't. It just required changing a few numbers and replacing a few words, and that was it. Since ATI and Nvidia were both working with id, they must have known ATI was renowned for a much better math architecture than Nvidia's, but they didn't seem to want to implement this in the game's engine. Again, all it needed was an extra set of instructions telling the engine to change a few things, and all the gamer had to do was set AF in the control panel.

Valve knew the FX cards wouldn't run the DX9 stuff well, and we all know the FX line isn't as good as the 9700 and up, so they put in a new code path to make it run well without sacrificing too much quality. They actually went out of their way to write new and SIMPLE code so that each and every card could run at playable frame rates with the best quality it could muster.
Now tell me: id doesn't bother acknowledging that a math calculation is better for ATI than a lookup table, and just goes with the lookup table, which runs worse on ATI but not on Nvidia.
Whereas Valve: at first it was "ATI is much better," but later, nearer release, it became "we have included extra code so FX owners can play at reasonable frame rates without much loss in quality," acknowledging the FX's poor DX9 performance, since HL2 uses DX9 extensively.
So I ask you: who's doing more to accommodate the GPU companies, id or Valve?
Everyone says Valve is more in bed with ATI than id is with Nvidia, and that Carmack is a god who wouldn't exclude half his audience... but what about helping ATI with a few bits of extra, SIMPLE code to make the game run faster, by up to 20% according to some review sites?
And as for the IQ loss: by the end of the thread where Humus posted his tweak, they had worked out a flawless version that gave no apparent quality loss but still increased performance a lot.