Originally posted by: xtknight
It's because ATI's is implemented better. In ATI's Adaptive AA mode (MSAA+TSAA), it does the normal multisampling, then takes some of the textures and supersamples them, as opposed to supersampling the whole screen. I also believe ATI's algorithm is selective about which textures it supersamples whereas NV's is not, which is where the advantage comes in. The Radeon has always seemed to incur less of a hit with AA enabled, so I think it's more the algorithm than the hardware itself.
AT's article explains more than you'd ever want to know about this:
http://www.anandtech.com/video/showdoc.aspx?i=2552&p=6
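To see why supersampling only some surfaces is so much cheaper than supersampling the whole screen, here's a toy cost model. Everything in it (the function name, the 4x sample count, the 10% alpha-tested fraction) is illustrative and not taken from any real driver or from the article above; it just counts shader evaluations under a simplified assumption that MSAA shades once per pixel while supersampling shades once per sample.

```python
SAMPLES = 4  # 4x AA

def shading_cost(pixels, alpha_tested_fraction, adaptive):
    """Return the number of shader evaluations for one frame (toy model).

    MSAA shades each pixel once; supersampling shades once per sample.
    Adaptive AA supersamples only the alpha-tested fraction of pixels
    (fences, foliage, chain-link textures), multisampling the rest.
    """
    if not adaptive:
        # Full-screen supersampling: every pixel shaded SAMPLES times.
        return pixels * SAMPLES
    alpha_pixels = int(pixels * alpha_tested_fraction)
    opaque_pixels = pixels - alpha_pixels
    # Opaque pixels get MSAA (one shader evaluation each);
    # only the alpha-tested pixels pay the supersampling cost.
    return opaque_pixels * 1 + alpha_pixels * SAMPLES

frame = 1280 * 1024
print(shading_cost(frame, 0.10, adaptive=False))  # brute-force SSAA
print(shading_cost(frame, 0.10, adaptive=True))   # adaptive: far cheaper
```

With only 10% of the screen covered by alpha-tested textures, the adaptive scheme does roughly a third of the shading work of full-screen supersampling, which matches the much smaller performance hit the posts above describe.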
Originally posted by: Gamingphreek
Oh...yes, that would make sense. It is still a 256-bit memory architecture, but the bus for the GPU itself is 512-bit. So that isn't it.
As for programmable AA, why in the hell is NVIDIA not using this? Are they stupid or something? Not only does it seem to have better IQ, but it is a fraction of the performance hit. Are there any downsides to the algorithm?
-Kevin
Well, the G70 is already released; they can't just change it. NVIDIA's AA is not programmable. I hope we see it in their next-gen GPU, though.