Perfect Full Scene Anti-Aliasing; Procedural textures

jpprod

Platinum Member
Nov 18, 1999
2,373
0
0
Warning: if you bore easily, please leave this thread right now. OK, you have been warned :)

I was just wondering how many subsamples it would take to completely eliminate all polygonal aliasing, pixel popping and shimmering artifacts in all imaginable situations. My conclusion: infinite.

Imagine a chessboard pattern made out of black and white squares. Shrink this pattern so that one square becomes smaller than a pixel on the screen. Horrible pixel popping occurs. When 4-sample FSAA (I'll use OGSS for simplicity's sake) is enabled, the problem is solved. However, when you shrink the chessboard further, so that one square becomes smaller than the spacing between subsample positions within the output pixel, shimmering/popping occurs again due to an insufficient number of samples to produce an accurate output color (namely the correct shade of grey). Keep shrinking the chessboard, and eventually only an infinite number of subsamples suffices.
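
A quick sketch of the effect (Python; the checker size, pixel position and sample counts are arbitrary choices for illustration):

```python
def checker(x, y, square):
    # Checkerboard pattern: 1.0 (white) or 0.0 (black).
    return float((int(x // square) + int(y // square)) % 2 == 0)

def shade_pixel(px, py, square, n):
    # Estimate the pixel's grey level with an n x n ordered grid
    # of subsamples (OGSS), sampling at subsample-cell centres.
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += checker(px + (i + 0.5) / n, py + (j + 0.5) / n, square)
    return total / (n * n)

# Checker squares ~0.07 pixels wide, far smaller than the 0.5-pixel
# spacing of a 2x2 grid: the coarse estimates are unreliable, and
# only the denser grids converge toward the true grey coverage.
for n in (2, 4, 16, 64):
    print(n, round(shade_pixel(3.0, 7.0, 0.07, n), 3))
```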

Of course, taking an infinite number of subsamples is not possible. That's why another, mathematical way of determining the correct output color must be taken. Here's how I figure it would go.

I'm pretty certain current L&EAA implementations already determine the output pixel color this way, and albeit relatively computationally intensive, the process is quite simple for lines and straight polygon edges. When textured and shaded polygons come into play instead of single-color ones, however, things get more complicated. Some sort of average has to be calculated out of all the texels which lie within the piece of a polygon making up the output pixel area. With filtering techniques applied to the texture, this is no simple process.
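
For a single straight edge, the analytic approach is quite tractable. As a minimal sketch (my own construction, not a description of any shipping L&EAA implementation): the exact fraction of a unit pixel lying below the edge y = m*x + b, found by splitting the integral at the points where the clamped edge height changes form:

```python
def edge_coverage(m, b):
    # Exact fraction of the unit pixel [0,1]x[0,1] below the edge
    # y = m*x + b: an analytic alternative to taking subsamples.
    if m == 0.0:
        return min(max(b, 0.0), 1.0)
    # x positions where the edge crosses y = 0 and y = 1, clipped
    # to the pixel; between these, the clamped height is linear.
    xs = sorted({0.0, 1.0,
                 min(max(-b / m, 0.0), 1.0),
                 min(max((1.0 - b) / m, 0.0), 1.0)})
    area = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        xm = 0.5 * (x0 + x1)
        h = min(max(m * xm + b, 0.0), 1.0)  # height at segment midpoint
        area += h * (x1 - x0)               # exact, since h is linear here
    return area

print(edge_coverage(2.0, 0.0))   # edge y = 2x: 3/4 of the pixel is below
```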

Despite how complex the implementation of this technique might be, I see great future potential here. Imagine: this is the actual upper limit to output image accuracy!
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
you were bored at work today, weren't you?


<G>

seriously tho, very interesting points.

to be pretty honest tho, it's going to depend on the size of your screen.

mathematically? no, that won't matter. But what we can view certainly will.

4x does a pretty good job as it is

16x pretty much eliminates it on my 19"

even a 22 or 25" would probably be fine with 16x

 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
I was just wondering how many subsamples it would take to completely eliminate all polygonal aliasing, pixel popping and shimmering artifacts in all imaginable situations. My conclusion: infinite.

I would say you don't need FSAA at all. All you need is a high enough resolution so that your pixel size is so small you can't see individual pixels. That is, if one black pixel was drawn in the middle of a white screen you wouldn't be able to see it.

In real life there are no such artifacts because everything is made up of atoms and we can't see atoms with the naked eye. So essentially in real life we have "infinite" resolution.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Whoa, talk about a fillrate killer:)

The method you describe looks like it would definitely be much better at reducing aliasing artifacts than what we have now, but if you're looking for weighted averages, shouldn't you include the Z value as well?

Either way, I would say you would need at least 100x OGSS to get the level of detail you are looking for if I'm reading the chart properly, or maybe 32-50x JSS (no grid), if you used traditional methods.

The problem I see with this would be haloing and blurring. If you are weighting the example that you had with blue at 3.5%, then you would end up with some rather serious haloing around the object. At higher resolutions this becomes much less of an issue, and for blue and black it isn't that bad (as you would end up with a navy blue and would have a hard time seeing the difference), but if you changed that over to red and green you would end up with a brown, creating a very distinct halo around the object.

Using the pixel boundaries also can weight things improperly, or perhaps I'm not understanding how you would get the samples properly? Are you talking about just using the corners, or is that the sampling issue you were talking about?

Even using eight samples at the pixel edge, it would still only require the sampling of 4X(because of shared values between pixels).

Definitely an interesting proposition...
 

jpprod

Platinum Member
Nov 18, 1999
2,373
0
0
The problem I see with this would be haloing and blurring. If you are weighting the example that you had with blue at 3.5%, then you would end up with some rather serious haloing around the object. At higher resolutions this becomes much less of an issue, and for blue and black it isn't that bad (as you would end up with a navy blue and would have a hard time seeing the difference), but if you changed that over to red and green you would end up with a brown, creating a very distinct halo around the object.

Indeed, but consider that however ugly it might look, if there were red and green polygons within the area of one pixel, brown would indeed be the correct averaged output color. In real life, let's say you fill an A4 paper with red and green dots one millimeter in diameter, and you look at the paper from something like five meters away. The paper would look brown :)
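
In code form, the area-weighted average that produces that brown is trivial (a toy sketch; the colour values and weights here are made up):

```python
def blend(weighted_colours):
    # Area-weighted average of (rgb, coverage) pairs covering one
    # pixel; the coverages are assumed to sum to 1.
    return tuple(round(sum(c[i] * w for c, w in weighted_colours))
                 for i in range(3))

# Half red, half green: the correct average really is a murky brown.
print(blend([((255, 0, 0), 0.5), ((0, 128, 0), 0.5)]))  # (128, 64, 0)
```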

Using the pixel boundaries also can weight things improperly, or perhaps I'm not understanding how you would get the samples properly? Are you talking about just using the corners, or is that the sampling issue you were talking about?

I'm thinking the renderer should compute the points where a partially visible polygon intersects the pixel, factor in any vertices the polygon might have inside it, and determine the polygon's area relative to the square area of the output pixel from these. All of this must be done with floating point numbers if ultimate precision is desired. No actual sampling takes place at all here; the polygon's color is taken from geometry data. If the polygon is textured, things get quite complicated, because you'll essentially have to project the polygon's visible face onto the texture, determine which texels are visible and average all of them together.
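
The single-colour part of that process can be sketched directly (helper names are mine; this clips the polygon to the pixel with the Sutherland-Hodgman algorithm and takes the clipped region's area as the weight):

```python
def clip_to_pixel(poly, px, py):
    # Sutherland-Hodgman clip of a polygon (list of (x, y) vertices)
    # against the unit pixel [px, px+1] x [py, py+1].
    def clip(pts, inside, intersect):
        out = []
        for i, cur in enumerate(pts):
            prev = pts[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def cut(axis, bound, keep_ge):
        def inside(p):
            return (p[axis] >= bound) if keep_ge else (p[axis] <= bound)
        def intersect(a, b):
            t = (bound - a[axis]) / (b[axis] - a[axis])
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        return inside, intersect

    for args in ((0, px, True), (0, px + 1, False),
                 (1, py, True), (1, py + 1, False)):
        poly = clip(poly, *cut(*args))
        if not poly:
            return []
    return poly

def area(poly):
    # Shoelace formula: the clipped polygon's area is its coverage.
    return 0.5 * abs(sum(poly[i - 1][0] * p[1] - p[0] * poly[i - 1][1]
                         for i, p in enumerate(poly)))

# A triangle whose visible part covers the lower-left half of pixel (0, 0):
tri = [(-1.0, 0.0), (1.0, 0.0), (-1.0, 2.0)]
print(area(clip_to_pixel(tri, 0, 0)))  # 0.5
```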

I'm still thinking this could be implemented. Perhaps not in the near future in realtime 3D graphics renderers, but in professional 3D modelling software, where FSAA precision is needed the most.


I would say you don't need FSAA at all. All you need is a high enough resolution so that your pixel size is so small you can't see individual pixels. That is, if one black pixel was drawn in the middle of a white screen you wouldn't be able to see it.

You made a good point. When pixel size approaches the dot size of a quality printer, even that worst-case chessboard example would look just fine. But what kind of resolution would we need? Let's see... 600 DPI gives pretty good print quality. A 19" 4:3 monitor is roughly 15.2" x 11.4", so the resolution which would give 600 DPI is about 9120x6840.
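
The arithmetic, as a quick sketch (assuming a 4:3 aspect ratio and using the full diagonal):

```python
import math

def resolution_for_dpi(diagonal_in, dpi, aspect=(4, 3)):
    # Pixel resolution needed for a given DPI on a monitor of the
    # given diagonal size (inches) and aspect ratio.
    ax, ay = aspect
    width_in = diagonal_in * ax / math.hypot(ax, ay)
    height_in = diagonal_in * ay / math.hypot(ax, ay)
    return round(width_in * dpi), round(height_in * dpi)

print(resolution_for_dpi(19.0, 600))  # (9120, 6840)
```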

howzabout 1048576 x 786432 w/8096x FSAA?

Overkill! :D
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"I'm thinking the renderer should compute the points where a partially visible polygon intersects the pixel, factor in any vertices the polygon might have inside it, and determine the polygon's area relative to the square area of the output pixel from these. All of this must be done with floating point numbers if ultimate precision is desired. No actual sampling takes place at all here; the polygon's color is taken from geometry data."

That's how Renderman works.

"If the polygon is textured, things get quite complicated, because you'll essentially have to project the polygon's visible face onto the texture, determine which texels are visible and average all of them together."

Pixar gets around this by utilizing calculated "textures"; no traditional texture maps are used.

"Indeed, but consider that however ugly it might look, if there were red and green polygons within the area of one pixel, brown would indeed be the correct averaged output color. In real life, let's say you fill an A4 paper with red and green dots one millimeter in diameter, and you look at the paper from something like five meters away. The paper would look brown"

But the problem comes when you have something like blood on grass, you end up with a brown halo around the blood. At a decent enough res it is hardly noticeable, but at lower settings it can be easily seen.

Also, you still end up blurring any far-distance objects if you don't weight the Z value properly.

I think your idea sounds very good, I'm playing devil's advocate to try and probe a bit deeper.

What do you think about the upcoming generation's idea of using what amounts to L&EAA with high-tap anisotropic filtering and mip mapping as a substitute for traditional FSAA, and how do you think it would compare both to current implementations and to the Renderman technique?
 

Yza

Senior member
Jul 8, 2000
212
0
0
hehe MY A.D.D. kicked in after that first paragraph.... I started day dreaming how my geforce2 would look with FSAA in half life...

Anyways. uhhhh .... Good point.?>!
 

jpprod

Platinum Member
Nov 18, 1999
2,373
0
0
Pixar gets around this by utilizing calculated "textures"; no traditional texture maps are used.

By definition, this is what the procedural texture support featured on most current video cards should amount to. I'll reiterate what you said in another thread a while ago: this is something that would be a very, very useful feature on a 3D accelerator card thanks to infinite precision/resolution combined with minimal texture memory requirements. It would be neat if DirectX8 abstracted support for procedural textures so that both current and future hardware could support it - future hardware in a native, accurate manner and current hardware via tricks like those used in the Unreal and Serious Sam engines (water texture maps are generated procedurally).
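
To illustrate what "infinite resolution with minimal texture memory" means in practice, here is a toy procedural texture (entirely made up; the wave constants are arbitrary), evaluated straight from surface coordinates with no stored bitmap:

```python
import math

def water(u, v, t):
    # Toy procedural "water": a sum of sine waves evaluated directly
    # at surface coordinates (u, v) and time t. No bitmap is stored,
    # so it can be sampled at any coordinate and any magnification.
    h = (math.sin(8.0 * u + 1.3 * t)
         + math.sin(11.0 * v - 0.7 * t)
         + math.sin(6.0 * (u + v) + 2.1 * t)) / 3.0
    b = 0.5 + 0.5 * h                      # wave height -> brightness
    return (0.1, 0.3 + 0.4 * b, 0.6 + 0.4 * b)

# Sample wherever the rasterizer asks, however finely magnified:
print(water(0.123456, 0.654321, 0.0))
```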

I wonder if NV20 ships with native procedural texturing support? It would certainly be welcome. Many of the technically most advanced upcoming PC games (in 2001-2003) will be Xbox -> PC conversions. With the Xbox being based on the NV20 core, lacking support there would mean that the feature's implementation in near-future games/engines on the PC won't happen outside a few tech demos, even if hardware support existed.


But the problem comes when you have something like blood on grass, you end up with a brown halo around the blood. At a decent enough res it is hardly noticeable, but at lower settings it can be easily seen.

Yeah, I see it now. Proper Z-weighting could take care of this, at least for objects against a background further away. But this would naturally further complicate the process.


What do you think about the upcoming generation's idea of using what amounts to L&EAA with high-tap anisotropic filtering and mip mapping as a substitute for traditional FSAA, and how do you think it would compare both to current implementations and to the Renderman technique?

This could produce effectively near-perfect results for most situations. Indeed, for realtime 3D graphics a combined L&EAA/anisotropic texture filter would be a vastly superior choice to "infinite supersampling", because the performance hit on proper* hardware would be insignificant, whereas the latter takes a load of quite complex, branched computation on a per-pixel and per-polygon basis. On the other hand, the L&EAA/filter method isn't order independent, and fully application-transparent support for it - à la current 3dfx/Nvidia/ATi FSAAs - would thus be problematic.

* hardware supporting free L&EAA for textured polygons as well as single-pass high-tap anisotropic without utilizing additional pixel pipelines/texturing units
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Haven't seen anything yet that makes me think that DX8 will support procedural textures in any meaningful way. It is rather difficult to handle them and not end up with "synthetic" looking end results. If you look closely at SW Episode 1 you can notice what is done with Renderman and what isn't; Renderman using procedural texturing ends up giving a "plastic" look to everything.

This could be a flaw with Renderman itself and not the method utilized, but I think that if Pixar hasn't gotten it down quite yet for one of the best render engines, we won't have good results in hardware for a while yet.

The liquid textures in UT are quite cool from a technological standpoint, but they also appear very disjointed from the rest of the environment. I can see it working with an "Evolva" type game, but as of yet I haven't seen anything that appears photorealistic using procedural texturing (please, feel free to point me to anything that does:)).

But this thread is about FSAA, not procedural textures;)

I think that if you are looking for the type of FSAA that takes infinite sampling points, it may be better to rethink the entire rendering process. As it stands now, we are looking to "brute force" aliasing away, with the next-gen cards offering a more elegant approach, but one still costly in terms of die space and fillrate overhead (just not memory bandwidth).

Also, we can't truly use infinite sampling points; we are limited currently by the confines of 32-bit Z (though I suppose we could get around this). With current hardware, unless we are talking ten to twenty years out, we are looking at *only* millions of samples per texel max from what I can figure (feel free to point out errors and plain stupidity as always:)).

For Z-weighting, I would think that the Z calcs would be extremely simple in comparison. If you already have the parameters of the four corners, you can simply check the Z values for those points and then divide the pixel down until you reach ~64 samples; that should be plenty of accuracy (of course, we are talking about the possibility of infinite samples, so maybe adding the Z to every function wouldn't be too terribly bad).
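
That corner-check-and-subdivide idea might look something like this sketch (the `z_at` callback, the 0.01 threshold and the depth limit are all assumptions of mine):

```python
def adaptive_samples(x0, y0, x1, y1, z_at, depth=0, max_depth=3):
    # Recursively subdivide a pixel region: if the Z values at the
    # four corners disagree by more than a threshold, split into
    # quadrants; otherwise take a single sample at the centre.
    # 'z_at' is a hypothetical callback returning depth at a point.
    corners = [z_at(x0, y0), z_at(x1, y0), z_at(x0, y1), z_at(x1, y1)]
    if depth < max_depth and max(corners) - min(corners) > 0.01:
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        return (adaptive_samples(x0, y0, xm, ym, z_at, depth + 1)
                + adaptive_samples(xm, y0, x1, ym, z_at, depth + 1)
                + adaptive_samples(x0, ym, xm, y1, z_at, depth + 1)
                + adaptive_samples(xm, ym, x1, y1, z_at, depth + 1))
    return [(0.5 * (x0 + x1), 0.5 * (y0 + y1))]

# A depth discontinuity down the middle of the pixel forces extra
# subdivision near the edge; flat regions get a single sample.
edge = lambda x, y: 0.0 if x < 0.5 else 1.0
print(len(adaptive_samples(0.0, 0.0, 1.0, 1.0, edge)))
```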

For the type of L&EAA I was talking about for the upcoming boards, my understanding is that they are using an AB-style multi-sampling to reuse the same texel data and simply re-check the Z value, not the traditional L&EAA. This won't do a thing for shimmering/pixel popping etc, but that is where using high-tap (32-64) anisotropic combined with mip maps comes into play. With nearly no bandwidth penalty, they should be able to rely solely on raw fill to handle FSAA (which the GF2 at least already has loads of to spare).

Does anyone besides Jukka and me have any thoughts on the subject??????

(Not saying I'm not looking forward to your response JP, just trying to see if anyone else has any thoughts:))
 

jpprod

Platinum Member
Nov 18, 1999
2,373
0
0
For the type of L&EAA I was talking about for the upcoming boards, my understanding is that they are using an AB-style multi-sampling...

Sorry, I misunderstood; I thought AA was being implemented through an application/API-supported mixture of traditional L&EAA and anisotropic filtering. This would use anisotropic to determine the color of textured surfaces, and mathematical L&EAA (a weighted blend, by the edge angle) for polygon edges to remove the infamous jaggies (which Voodoo5 owners must really hate :p). But thinking it through, L&EAA using color values from textures is very complicated.


...to reuse the same texel data and simply re-check the Z value, not the traditional L&EAA. This won't do a thing for shimmering/pixel popping etc, but that is where using high-tap (32-64) anisotropic combined with mip maps comes into play.

But at this point I'll have to ask a potentially stupid question: how does multisampling help with anything but Z accuracy, specifically the jaggies present on intersecting polygons w/ not enough Z information? It doesn't take care of jaggies: there's no alternate color value to blend with the re-used texel data (unless a texel is defined as a bilinear texel, i.e. four separate texture elements?), and thus no way to smooth a textured polygon edge into its background.

I recall Dave of Beyond3D wrote he'd clarify the basics of multisampling for the lot of us in a thread long ago, but I never saw his follow-up...


The liquid textures in UT are quite cool from a technological standpoint, but they also appear very disjointed from the rest of the environment.

Agreed, but that must be due to badly designed lighting and water color; they look much, much better in Deus Ex. Especially within the spot of a streetlight when looking down the pier in the very first level. That game's visual merits are underestimated IMO :)

[edit]
But this thread is about FSAA, not procedural textures ;)

Now it's about both :D
[/edit]
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
The liquid textures in UT are quite cool from a technological standpoint,

Yeah, even the original Unreal had these. In fact the main menu (when you press esc) has one. When I first saw it I was amazed that something like that was possible.

Unreal/UT may not benchmark well but the engine has plenty of eye candy. :)
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
I just thought of something else.

If you remember, any vector image or TrueType font can be scaled at will without any jaggies. So how about using vector-based textures instead of bitmaps?

Maybe video cards could support vector textures. Instead of downloading bitmapped graphics onto the boards, we could instead load lists of vertices, co-ordinates and colours. That way the video card could handle all of the scaling in hardware and the jaggies would be completely eliminated.
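
A sketch of what sampling such a "geometry texture" might look like (my own toy construction: triangles with flat colours, and a point-in-triangle test via edge cross products):

```python
def side(ax, ay, bx, by, px, py):
    # Which side of edge (a -> b) point p lies on (z of cross product).
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def vector_texture(u, v, shapes, background):
    # A "texture" stored as geometry: a list of (triangle, colour)
    # pairs instead of a bitmap. It can be sampled at any (u, v)
    # and any magnification with no pixelation.
    for (a, b, c), colour in shapes:
        d1 = side(*a, *b, u, v)
        d2 = side(*b, *c, u, v)
        d3 = side(*c, *a, u, v)
        if (d1 >= 0 and d2 >= 0 and d3 >= 0) or \
           (d1 <= 0 and d2 <= 0 and d3 <= 0):
            return colour
    return background

shapes = [(((0.2, 0.2), (0.8, 0.2), (0.5, 0.8)), (255, 0, 0))]
print(vector_texture(0.5, 0.4, shapes, (0, 0, 0)))  # inside the triangle
print(vector_texture(0.9, 0.9, shapes, (0, 0, 0)))  # background
```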

We would also have to modify the rasterising algorithm so that it handles the Z-axis as well as the X and Y axes.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"Sorry, I misunderstood; I thought AA was being implemented through an application/API-supported mixture of traditional L&EAA and anisotropic filtering."

I think this is also planned for DX8, but it seems based on either JSS or RGSS from what I have seen, not the implementation that I was speaking of (though either could be way off, of course).

"unless a texel is defined as a bilinear texel, i.e. four separate texture elements?"

That was my assumption, although that is a guess on my part based on what has been rumored. It makes sense, and should be "free" from a memory bandwidth perspective. Too many definitions of "texel" floating around to be exactly sure what they are talking about, and even then this is all based on rumors:)

"I recall Dave of Beyond3D wrote he'd clarify the basics of multisampling for the lot of us in a thread long ago, but I never saw his follow-up"

Don't know if you heard, but it is now Dave of 3dfx. He was hired by 3dfx's engineering department; doubt we will hear much in terms of speculation and rumors from him anymore:)

"Agreed, but that must be due to badly designed lighting and water color; they look much, much better in Deus Ex. Especially within the spot of a streetlight when looking down the pier in the very first level. That game's visual merits are underestimated IMO"

Is it running decent on your SDR yet? (You still have the SDR, don't you?)

I've been holding out until they fix the D3D code to pick that one up; tried the demo and it was absolutely horrible on three different vid cards (two nV and an ATi).

"Now it's about both"

Hehe, so what do you think the chances of hardware procedural textures are in the next couple of generations? Using the barometer that I look to, id, I haven't seen any mention of it yet. I keep hearing about increasing the number of passes, as high as ten in some cases, but nothing about procedural texture support. Haven't heard if nV is pushing/supporting this for NV2X, but I'll take a look around and see what I will see (too many people are under NDA:|).
 

jpprod

Platinum Member
Nov 18, 1999
2,373
0
0
"...Deus Ex."

Is it running decent on your SDR yet? (You still have the SDR, don't you?)


Certainly. It doesn't run anywhere near 60FPS, but the framerate rarely drops below 30-40FPS in 800x600/32-bit color w/ full detail on the latest D3D patch and 5.30 Detonator drivers. I have a feeling the game's not fillrate limited at all, that there's just some strange overhead in the D3D renderer (which I usually refer to as Epic's Glide wrapper :)), and that framerates wouldn't improve at all if I had a GF2U.


I've been holding out until they fix the D3D code to pick that one up; tried the demo and it was absolutely horrible on three different vid cards (two nV and an ATi).

Yeah, the first version of the demo was really slow. Aside from the D3D beta patch, INI tweaks help out a lot too.


Hehe, so what do you think the chances of hardware procedural textures are in the next couple of generations? Using the barometer that I look to, id, I haven't seen any mention of it yet.

For the next generation (NV20, G800, Radeon II, Rampage) I'll keep my fingers crossed, but I don't see much of a chance, since the leaked NV20 and Xbox specs don't mention procedural textures. However, id's recent interviews might not tell everything about upcoming hardware features in their engine, or about developers' willingness to support a feature in general, since id (or should I just say John Carmack :)) seems to build their engines around the feature sets present on the best accelerators out at the time: now the Radeon and GeForce2, and the TNT2* at the time when the Quake III engine was being developed.

* T&L and S3TC in Quake III are not implemented thoroughly enough to really be considered id's GeForce support.
 

Dragon Puppy

Member
Oct 12, 1999
40
0
0
Didn't read all the comments, got to get some sleep. I just wanted to mention a good 3D board that has had a lot of debate about different FSAA methods; you could post your suggestion there. Seems to be down right now though: Beyond3D forums