Yeah, it's on page 6 with the title: D3D AF Tester.
Let me emphasize that for you in case you missed it: D3D AF Tester.
The entire commentary is talking about anisotropic filtering, for heaven's sake, just like I suspected it was to begin with. So next time I suggest you employ some reading comprehension skills before trying to argue a point you know nothing about.
So, getting back to my original point: both ATi and nVidia use adaptive AF, and all video cards also use mip-mapping (which by definition lowers texture resolution with distance). No current vendor uses a version of AF strong enough to use only one mip-map, so by definition any such filtering will require lowered texture resolution as the distance increases and you run out of anisotropy range.
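To make that concrete, here's a toy sketch (in Python) of how a GPU might pick a mip level and AF sample count. The footprint model, the names, and the 16x clamp are my own simplified assumptions for illustration, not any vendor's actual hardware logic:

    import math

    def mip_and_samples(du, dv, max_aniso=16):
        # du/dv: texel-space footprint of one pixel along its two
        # screen axes (a simplified stand-in for the real derivatives).
        major, minor = max(du, dv), min(du, dv)
        # AF sample count follows the footprint's aspect ratio,
        # clamped to the hardware's maximum anisotropy.
        n = min(math.ceil(major / minor), max_aniso)
        # The mip level is chosen so n samples cover the major axis;
        # once n is clamped, the LOD has to rise instead, i.e. a
        # lower-resolution mip gets used.
        lod = max(0.0, math.log2(major / n))
        return lod, n

Run it with a 64:1 footprint (a very oblique surface) and the 16x cap: you get lod = 2, two mip levels blurrier than the base texture. That's exactly the lowered texture resolution I'm describing once the anisotropy range runs out.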
Also, he uses the word "resolution" when he really should be using "sampling size" or similar.
Then why does he point out ATI?
He doesn't point out ATi; he compares them to nVidia. Since it's very clear that you haven't even read the article correctly, I suggest you go back and do so carefully. By taking fewer samples in different situations, both nVidia and ATi have reduced quality at certain angles and on certain surfaces, and hence both vendors have varying-quality AF.
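To illustrate what "reduced quality at certain angles" means in practice, here's a toy Python model of angle-dependent AF. The shape function and the numbers are invented purely to show the idea (full anisotropy near the preferred axes, reduced in between, the classic flower pattern an AF tester shows), not either vendor's real selection logic:

    def effective_max_aniso(surface_angle_deg, hw_max=16):
        # Distance from the nearest "preferred" axis (every 45 deg):
        # 1.0 exactly on-axis, 0.0 exactly between two axes.
        on_axis = abs((surface_angle_deg % 45) - 22.5) / 22.5
        # Blend between full anisotropy on-axis and a reduced level
        # off-axis; both blend constants are made up for this sketch.
        return max(2, round(hw_max * (0.25 + 0.75 * on_axis)))

Feed it 0, 45, or 90 degrees and you get the full 16x; feed it 22.5 degrees and it drops to 4x. A tool like the D3D AF Tester is what makes that pattern visible as colored lobes.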
See, Nvidia wasn't cheating, it was just a bug.
On this issue they weren't. However, there's the whole UT2003.exe affair and the Direct3D "brilinear" filtering, and the latter appears to be here to stay for good. Also, there have been some reports of nVidia employing trickery in their AF after the first mip-map is passed.
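For anyone unfamiliar with "brilinear": full trilinear blends two adjacent mips using the fractional LOD, while brilinear only blends inside a narrow band around the mip transition and falls back to plain bilinear elsewhere, saving fillrate at the cost of visible mip bands. A minimal Python sketch, assuming a made-up band width since the actual driver threshold isn't public:

    def brilinear_weight(lod_frac, snap=0.25):
        # lod_frac in [0, 1): fractional part of the computed LOD.
        # Full trilinear would simply return lod_frac unchanged.
        if lod_frac <= snap:
            return 0.0   # bilinear from the lower mip only
        if lod_frac >= 1.0 - snap:
            return 1.0   # bilinear from the upper mip only
        # Steeper blend squeezed into the remaining middle band.
        return (lod_frac - snap) / (1.0 - 2.0 * snap)

The bigger the snap value, the more of each mip range is pure bilinear, which is why it can look like trilinear in screenshots while filtering like bilinear over most of the screen.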