Originally posted by: Scali
Since I use the subjective qualifier 'much' in my comment, the accuracy of the comment cannot be determined in the first place.
If the accuracy of the comment cannot be determined, why did you state it as fact? Additionally, I can only assume we're speaking English here?
http://dictionary.reference.com/browse/much
adjective
1. great in quantity, measure, or degree:
The examples I gave improved image quality to a great degree, especially AAA in OpenGL. So again, going by English dictionary definitions, you're wrong.
Now, if you choose not to use English definitions as the basis of your comments, then you need to give us prior warning.
Other than that, they added extra AA modes, which required more processing, and as such were not very practical.
How much processing they require is irrelevant to your original false claim that AA didn't improve much.
And they're actually very practical, given that I used them a lot when I gamed on the 4850. Additionally, I've thoroughly tested and benchmarked them in dozens of titles.
How much gaming have you done with those modes, Scali?
How much benchmarking/testing have you done with those modes, Scali?
You then pull the AA out of context and go on some off-topic egotrip AGAIN.
What context? You stated they hadn't improved AF/AA much, and I pointed out AA had improved it a lot. So please, explain to us this mysterious missing context of your comment.
When you say 'context' I think you actually mean 'I've been caught out making inaccurate statements, so now I'm back-pedaling and playing semantic games'.
Why don?t you look at graphics card marketshare? Hint: GMA comes ahead by a long shot. So again, if our discussions are based on what most people do, most people use GMA.
I'm talking about the fact that early hardware took a bruteforce approach.
But the 16xAF parts didn't, so your statement is the opposite of fact. Early 16xAF parts did even less work than the R3xx, which in turn did less than today's ATi parts. So your statement 'if you used 16xAF, it actually took 16 samples for every pixel' is woefully inaccurate.
You mean that in practical implementations it is usually 64 or 128 TAPS, which is not exactly a given, it's hardware/driver dependent. It certainly doesn't MEAN that it will do 64/128 taps.
Like I said
'minus any optimizations for a given surface'
16x MEANS that your texels may have a max anisotropic level of 16x after being projected on screen, namely the width:height ratio is 16:1 (or 1:16, depending on the orientation).
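To make the ratio-vs-sample-count distinction concrete, here is a rough sketch of my own (not anyone's actual driver code) of how an anisotropy ratio is typically derived from a pixel's screen-space texture-coordinate derivatives, loosely following the scheme described in the EXT_texture_filter_anisotropic extension. The function name and the example derivatives are purely illustrative:

```python
import math

def anisotropy_ratio(dudx, dvdx, dudy, dvdy, max_aniso=16.0):
    """Estimate the anisotropy of a pixel's projected texture footprint.

    (dudx, dvdx) and (dudy, dvdy) are the texture-coordinate
    derivatives along screen x and y. The ratio of the longer to the
    shorter footprint axis is the anisotropy level, and 16xAF means
    that ratio is CLAMPED to 16 -- it is not a fixed sample count.
    """
    px = math.hypot(dudx, dvdx)  # footprint length along screen x
    py = math.hypot(dudy, dvdy)  # footprint length along screen y
    longer, shorter = max(px, py), min(px, py)
    if shorter == 0.0:
        return max_aniso
    return min(longer / shorter, max_aniso)

# A surface viewed nearly edge-on: footprint 32 texels wide, 1 tall.
# The true ratio is 32:1, but a 16xAF setting clamps it to 16.
print(anisotropy_ratio(32.0, 0.0, 0.0, 1.0))  # 16.0
```

How many actual texture taps the hardware then spends along that axis is a separate, implementation-dependent choice, which is exactly why conflating the 16x ratio with "16 samples per pixel" is wrong.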
Err, you didn't state any of that. You stated:
The algorithm on early AF hardware was just bruteforce, so if you used 16xAF, it actually took 16 samples for every pixel. Sure, it gives perfect results, but it's not a practical solution becasue it takes far too much bandwidth.
You stated it used 16 samples on every pixel, which is wrong on both counts. Your comments are right there in bold. Are you now denying you said it?
16x isn't even a sample count, it's a ratio. Yet you stated it was the sample count. Again, it's right there in bold. Are you denying the contents of the quote?
Another total back-pedal on your part, yet again.
If you want to try and look smart, at least get your facts straight. Now you just look like an idiot trying to correct someone with the WRONG info, LMAO.
Heh.
'GF3 didn't have MSAA'.
-Scali, 2009.
I got a good laugh from that one. Thanks for that.
