Originally posted by: SickBeast
I agree with you, Scali, and my guess is that the AMD parts can perform quality AF, but the performance hit would be embarrassing for AMD.
No hardware since 2002 takes a huge performance hit from enabling AF, whether the filtering is angle-independent or not. When the X1800XT debuted, enabling HQ AF cost less than 5% in performance compared to Standard AF, and the image quality improvement was outstanding. Today, the AF image quality difference between G80 and R6x0 is smaller than that, and both cards are more powerful. So you are just speculating.
ATi always had the best AF when the 9700/9800 cards were around. Their AA/AF went out the window when the 1800XT was released. That series of cards did more to harm ATi's brand than anything I have seen them come out with.
Wrong. The GeForce FX had better AF quality than the Radeon 9700 series at the time, and it was actually usable in DX8 games, since we all know how bad the GeForce FX's DX9 performance was. ATi has had the best AA quality since the R300 era, and it still does today.
http://enthusiast.hardocp.com/...w0LCxoZW50aHVzaWFzdA==
In the first screenshot of the power lines it seems that the Radeon HD 4870's 4X AA looks better than the GeForce GTX 260's 4X AA. Looking at the other two screenshots though we can't see any difference between 2X and 4X AA in normal view.
In Crysis, we really can't see any differences in AA image quality at 2X or 4X. However, in the first screenshot of the shack look to the right of the screen, to the roof of the shack in the background. The roof looks slightly more detailed on the ATI hardware than on the NVIDIA hardware. We don't really know what this means or which image is supposed to be right, it's just a difference we noticed.
Here in Crysis the first screenshot shows hardly any notable difference between 8X CSAA and 8xQ MSAA in normal view. Looking at the second screenshot though we can see some clear differences, proving that 8xQ MSAA is higher quality than 8X CSAA. Looking at the cables that are right underneath the crane, we can see that with 8X CSAA they are more broken up and blocky than they are using 8xQ MSAA. In this image 8xQ MSAA matches up perfectly with ATI's 8X MSAA.
http://enthusiast.hardocp.com/...w1LCxoZW50aHVzaWFzdA==
To us, looking very closely at these image quality screenshots, it seems that ATI's 12X CFAA actually looks slightly better than NVIDIA's 16X CSAA. If you look closely at the power lines there seems to be better color blending and a better gradient with 12X CFAA. It is harder to tell with the second and third screenshots however. Looking closely at 24X CFAA we also find it to be slightly better than 16xQ CSAA on the power lines, but equally as nice as 16xQ on the other images.
We again see the same pattern follow here. Look closely at the cables under the crane: both 12X CFAA and 24X CFAA look better there in comparison to 8X CSAA and 16X CSAA.
To us, again, looking closely, it seems that 8X Adaptive AA looks better than 8X CS TR SSAA, but in normal view it is hard to tell. That tree in the first screenshot just doesn't look as good even at 16X CS TR SSAA as it does at ATI's 8X AD AA. The thing is though, we really have to look closely to see these kinds of differences; in normal view it is hard to see a difference.
http://alienbabeltech.com/main/?p=3188&all=1 << Read it, it may enlighten you.
The only cards I can recall doing harm to the Radeon brand were the HD 2900XT and the 8500 series.
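By the way, it's worth spelling out why those cable and power-line shots behave the way they do. Here's a rough toy model in Python (my own simplification, not how any actual resolve hardware works; it ignores sample positions, CSAA's coverage-only samples, CFAA's wider tent filters and gamma): with N stored colour samples, a thin wire crossing a pixel can only resolve to N + 1 distinct shades, which is why 8 real colour samples (8xQ, or ATI's 8X MSAA) give a smoother gradient than 8X CSAA (which IIRC only stores 4 real colour samples), and why 24X CFAA keeps looking a touch better on those wires.

# Toy model: distinct shades a thin dark wire can produce in a single white
# pixel when the resolve is a plain box filter over N stored samples.
# Hypothetical simplification, not any vendor's actual resolve path.

def resolved_shade(true_coverage, samples):
    """Quantize the wire's coverage to the number of stored samples, then
    blend: 1.0 = untouched white pixel, 0.0 = fully covered by the wire."""
    covered = round(true_coverage * samples)   # each sample is either hit or missed
    return 1.0 - covered / samples

for samples in (2, 4, 8, 16, 24):
    # sweep the wire across the pixel and count the distinct output shades
    shades = {resolved_shade(c / 100, samples) for c in range(101)}
    print(f"{samples:2d} colour samples -> {len(shades):2d} possible edge shades")

That prints N + 1 shades per mode, which lines up with what HardOCP is seeing: the more genuine colour samples a mode stores (or the wider the area a CFAA filter pulls in), the finer the staircase on a near-horizontal wire.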
Originally posted by: Scali
Yes, I agree that it's hard to notice... But on the other hand, nVidia has had this quality since the G80, which is nearly 3 years old now. ATi has released no less than three generations of hardware since the G80, and STILL hasn't caught up with that level of quality. There's just something about that that doesn't feel right, from an enthusiast point-of-view.
Also, it probably gives ATi an unfair advantage in benchmarks, because they are effectively doing less work, saving on bandwidth and all that, by not filtering as accurately as nVidia does.
ATi should at least consider keeping its current AF as Standard AF and giving us an HQ checkbox that brings back near-perfect AF as an option, like in the X1K era. As for benchmarks, while what you wrote is true, nVidia also has more texturing and filtering power, so the advantage shouldn't be that unfair.
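Still, just to make the "doing less work" part concrete, here's a back-of-the-envelope sketch (my own toy model, not AMD's or nVidia's actual LOD math): AF cost is roughly one trilinear probe per unit of anisotropy in the pixel's texel-space footprint, so a cheaper, angle-dependent estimate that underrates rotated surfaces issues fewer probes and saves texture bandwidth, which is exactly where a hidden benchmark advantage would come from.

# Back-of-the-envelope model of where the bandwidth saving comes from.
# Hypothetical: real hardware uses its own (undisclosed) footprint math; this
# only shows that underestimating anisotropy means fewer texture taps.
import math

MAX_AF = 16

def taps_accurate(dudx, dvdx, dudy, dvdy):
    # Euclidean lengths of the footprint axes in texel space
    lx = math.hypot(dudx, dvdx)
    ly = math.hypot(dudy, dvdy)
    ratio = max(lx, ly) / max(min(lx, ly), 1e-6)
    return min(max(round(ratio), 1), MAX_AF)   # ~1 trilinear probe per unit of anisotropy

def taps_cheap(dudx, dvdx, dudy, dvdy):
    # Cheaper, angle-dependent estimate: take the largest single derivative
    # component, which underrates footprints whose long axis lies diagonally
    lx = max(abs(dudx), abs(dvdx))
    ly = max(abs(dudy), abs(dvdy))
    ratio = max(lx, ly) / max(min(lx, ly), 1e-6)
    return min(max(round(ratio), 1), MAX_AF)

# A receding floor whose texture is rotated 45 degrees on screen: the long
# axis is about 8 texels split evenly across u and v, the short axis 0.7 texels.
d = 8.0 / math.sqrt(2)
print(taps_accurate(d, d, 0.7, 0.0))   # 11 probes with the accurate footprint
print(taps_cheap(d, d, 0.7, 0.0))      # 8 probes with the cheap estimate

In this made-up case that's roughly a quarter fewer texture fetches for the same pixel, invisible unless you go pixel-peeping, which is exactly why a Standard/HQ toggle like the X1K one would at least let reviewers compare apples to apples.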