Warning: if you bore easily, please leave this thread right now. OK, you have been warned.
I was just wondering how many subsamples it would take to completely eliminate all polygonal aliasing, pixel popping and shimmering artifacts in all imaginable situations. My conclusion: infinite.
Imagine a chessboard pattern made of black and white squares. Shrink this pattern so that one square becomes smaller than a pixel on the screen: horrible pixel popping occurs. When 4-sample FSAA is enabled (I'll use OGSS for simplicity's sake), the problem is solved. However, when you shrink the chessboard further, so that one square becomes smaller than the spacing between subsample positions within an output pixel, shimmering/popping occurs again, because there are no longer enough samples to produce an accurate output color (namely the correct shade of grey). Keep shrinking the chessboard, and eventually only an infinite number of subsamples suffices.
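To make the failure concrete, here's a minimal sketch in plain Python (names and setup are my own illustration, not any real renderer): a unit pixel over a checkerboard, averaged with an n x n ordered grid of subsamples. Once the squares are smaller than the subsample spacing, every subsample can land on the same color, so the pixel snaps to pure black or white instead of the correct grey, and a tiny shift of the pattern flips it.

```python
def checker(x, y, square):
    """Checkerboard color at (x, y): 1.0 = white, 0.0 = black."""
    return float((int(x // square) + int(y // square)) % 2 == 0)

def ogss(n, square, ox=0.0):
    """Average an n x n ordered grid of subsamples over one unit pixel.

    ox shifts the checkerboard horizontally to mimic motion on screen.
    """
    total = 0.0
    for i in range(n):
        for j in range(n):
            sx = (i + 0.5) / n  # subsample spacing is 1/n
            sy = (j + 0.5) / n
            total += checker(sx + ox, sy, square)
    return total / (n * n)

# Squares of size 0.25 tile the pixel exactly 4 x 4, so the true
# area-averaged color is 0.5 grey. With 2x2 OGSS (spacing 0.5) every
# subsample lands on the same color:
print(ogss(2, 0.25))        # 1.0 -- pure white, not the correct 0.5
# Shift the pattern by one square width and the answer flips to black:
print(ogss(2, 0.25, 0.25))  # 0.0 -- this flip is the pixel popping
# A denser grid, with spacing below the square size, recovers the grey:
print(ogss(8, 0.25))        # 0.5
```

The same game can be replayed at any sample count: shrink the squares below the new spacing and the popping returns, which is why no finite grid ever fully wins.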
Of course, taking an infinite number of subsamples is not possible. That's why another, mathematical way of determining the correct output color must be used. Here's how I figure it would go.
I'm pretty certain current L&EAA implementations already determine the output pixel color this way, and although relatively computationally intensive, the process is quite simple for lines and straight polygon edges. When textured and shaded polygons come into play instead of single-color ones, however, things get more complicated. Some sort of average has to be calculated over all texels which lie within the piece of the polygon making up the output pixel's area. With filtering techniques applied to the texture, this is no simple process.
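For the simple single-color edge case, the analytic idea can be sketched like this (my own toy illustration, not any shipping L&EAA implementation): treat the polygon edge as a half-plane, clip the unit pixel square against it with the classic Sutherland-Hodgman step, and take the clipped polygon's area as the exact coverage fraction — no subsamples at all.

```python
def clip_halfplane(poly, a, b, c):
    """Sutherland-Hodgman clip of a polygon against a*x + b*y <= c."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        dp = a * p[0] + b * p[1] - c
        dq = a * q[0] + b * q[1] - c
        if dp <= 0:            # p is on the kept side of the edge
            out.append(p)
        if (dp <= 0) != (dq <= 0):   # the segment crosses the edge
            t = dp / (dp - dq)
            out.append((p[0] + t * (q[0] - p[0]),
                        p[1] + t * (q[1] - p[1])))
    return out

def area(poly):
    """Shoelace formula: area of a simple polygon."""
    s = 0.0
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        s += p[0] * q[1] - q[0] * p[1]
    return abs(s) / 2.0

pixel = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
# An edge along x + y = 1 cuts the pixel diagonally: exactly half covered.
print(area(clip_halfplane(pixel, 1.0, 1.0, 1.0)))  # 0.5
# A vertical edge at x = 0.3: exactly 30% covered.
print(area(clip_halfplane(pixel, 1.0, 0.0, 0.3)))
```

The coverage fraction is exact for any edge position, which is precisely what a finite subsample grid can't guarantee; the hard part the paragraph above points at is that once the covered region carries a filtered texture rather than a flat color, you can no longer just multiply coverage by one color.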
Despite how complex the implementation of this technique might be, I see great future potential here. Imagine: this is the actual upper limit to output image accuracy!