Originally posted by: ZobarStyl
Thanks for your post, Matthias; it was well reasoned and helpful. Frankly, I'm not arguing that "trylinear" isn't capable of actual trilinear, but it all comes back to how this algorithm decides when and where the filtering is needed, and for what reason, and that's why the situation as a whole bothers me.

Well, ok, but *any* adaptive algorithm is like that. Does it bother you that MSAA may not be properly deciding where to apply antialiasing? Or that your angle-dependent AF may not be optimal?
			This of course stems back to the ATI dev chat, which left me sour: it felt like a runaround when they couldn't just say straight out how it worked, citing its proprietary nature (it's been in the drivers for a year, get a copyright already).

I can see you've never dealt with patent law (not copyright). It can take several years to get a patent, assuming things go well.
			If you are right and it's just a mix (as shown here), then it's the same as brilinear, this "trylinear" moniker needs to be dropped, and they need to release full tri for accurate benching, as nV was finally compelled to do after their similar implementation.

ATI does an *adaptive* mix of trilinear and bilinear filtering, which, in theory, should give quality very close to trilinear (assuming you're doing it right, which they appear to be). NVIDIA's 'brilinear' just uses the same lower-quality filtering for everything across the board -- even when you *do* need trilinear filtering for full detail (as you can see from comparisons using colored mipmaps: the 6800 with optimizations is clearly filtering less, whereas in those cases the X800 kicks back into full trilinear).
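To make the distinction concrete, here's a toy sketch in Python of the three behaviors. To be clear, this is not ATI's actual heuristic (which they won't disclose) -- it's just an illustration of the idea, and the mip_difference input is a made-up stand-in for whatever per-texture analysis the driver really does:

# Toy illustration of full trilinear vs. 'brilinear' vs. an adaptive scheme.
# 'lod' is the sample's level-of-detail, e.g. 2.4 means "between mip 2 and
# mip 3, 40% of the way toward mip 3".

def trilinear_weight(lod):
    # Full trilinear: always blend the two nearest mips by the LOD fraction.
    return lod - int(lod)

def brilinear_weight(lod, band=0.25):
    # 'Brilinear': only blend inside a narrow band around the mip transition,
    # regardless of what the texture looks like. Cheaper, but it shows up
    # clearly on colored mipmaps.
    frac = lod - int(lod)
    lo, hi = 0.5 - band / 2, 0.5 + band / 2
    if frac <= lo:
        return 0.0                      # pure bilinear from the larger mip
    if frac >= hi:
        return 1.0                      # pure bilinear from the smaller mip
    return (frac - lo) / (hi - lo)      # transition squeezed into the band

def adaptive_weight(lod, mip_difference, threshold=0.05):
    # Adaptive ('trylinear') idea: if the two adjacent mip levels are nearly
    # identical (typical of plain box-filtered mip chains), blending changes
    # almost nothing, so take the cheap path; if they differ, fall back to
    # the full trilinear blend. 'mip_difference' and 'threshold' are
    # hypothetical names for whatever the driver actually measures.
    if mip_difference > threshold:
        return trilinear_weight(lod)
    return brilinear_weight(lod)

The point is only that the adaptive path skips the blend where skipping is (supposed to be) invisible, while brilinear skips it everywhere -- which is exactly why colored mipmaps expose one and not the other.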
Also, Tom's provides two pictures -- one comparing X800 'trylinear' against bilinear, and one comparing 9800XT trilinear against bilinear. However, they don't show the original screenshots. Try subtracting their two shots from each other in Photoshop (giving you X800 'trylinear' compared to 9800XT trilinear). The two really aren't as different as Tom's makes them sound.
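If you don't have Photoshop handy, the same subtraction is a few lines of Python with PIL. The filenames below are placeholders for wherever you saved Tom's two published shots, and keep in mind it's only a rough check: since both published images are already difference images against (presumably the same) bilinear reference, subtracting them only approximates an X800-trylinear vs. 9800XT-trilinear comparison:

from PIL import Image, ImageChops

# Placeholder filenames -- substitute your saved copies of Tom's two shots
# (both need to be the same resolution).
a = Image.open("x800_trylinear_vs_bilinear.png").convert("RGB")
b = Image.open("9800xt_trilinear_vs_bilinear.png").convert("RGB")

diff = ImageChops.difference(a, b)                  # per-pixel |a - b|
print("per-channel (min, max):", diff.getextrema()) # values near 0 => nearly identical filtering
diff.save("trylinear_vs_trilinear_estimate.png")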
			If it was such a great improvement, why did they lie and tell us it was full tri and that we needed to make sure nV was doing similar filtering?
Their opinion is that it *is* full trilinear, because it defaults back to full trilinear whenever that is needed (or at least it's supposed to).  'Brilinear' (at least on things like colored mipmaps) provides a lower level of filtering.  However, as you've noted, it's impossible to tell mathematically which is 'better' in real-world situations.
			Nv came right out with their brilinear, but it was blasted left and right, so now ATi does it but lies and it's ok?
Um, actually, NVIDIA 'introduced' brilinear by silently forcing its use in UT2K3 last year in order to inflate their benchmark numbers.  Then they forced it on all the time, and until recently you had to use various driver hacks to get rid of it on the GeForceFX cards (now, I believe, you can just turn it off in the drivers).  I'm not happy with how either company has handled this in terms of marketing, but I think ATI's technical implementation is a lot better.
			And as for Cainam and his DAoC problems: as you said, it's not the most popular game, whereas the extremely popular UT2K4 has no problems. Are we possibly looking at an application-detection aspect of this algorithm?

He's only seeing issues with certain textures in DAoC, not in the entire game. I don't think it's app detection -- more likely ATI didn't test very well (if at all) with that game, and it does something odd with some of its textures that trips up the algorithm. That seems like the most likely explanation to me.
			And the most important little tidbit...why is it so important that it's full tri? Simple: like I said before, I really want to buy one of these next-gen cards, and I want the one that's actually faster for the money. Therefore, I want some even-level benches before I spend a big chunk of change on some silicon. And secondly, it's important because they said it was there...and I don't like being lied to.
If it provides full trilinear quality (or very, VERY close to it) with a decent performance boost, does it make sense to force the X800 to bench with the optimizations off?  This starts to get back to whether you can ever get truly 'apples to apples' benchmarks.  ATI maintains that their optimized trilinear is just as good as the 'real thing', and that you should bench it against NVIDIA's full trilinear.  NVIDIA obviously disagrees.  Who do you believe?  How do you do your benchmarking?  Should you test ATI without its optimizations, even though 99% of the time there's no IQ difference, and 99% of the people that buy the card will run it with them on?