ATI Ultra-Realistic Realtime Ruby Tessellation demo


jimhsu

Senior member
Mar 22, 2009
705
0
76
Oh a question:

Can someone elaborate on the differences between rendering a model with 1 million polygons, and rendering a model that has been tessellated to 1M polygons? Is it a bandwidth limitation that prevents the former but allows the latter? A GPU clock speed limitation? Or that tessellation is more optimized? Or my question is more precisely: What allows tessellation to be so much faster than drawing the polygons individually, if the fundamental architecture is still the same (points > polygons > meshes > textures > shaders)?
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
Originally posted by: jimhsu
Oh a question:

Can someone elaborate on the differences between rendering a model with 1 million polygons, and rendering a model that has been tessellated to 1M polygons? Is it a bandwidth limitation that prevents the former but allows the latter? A GPU clock speed limitation? Or that tessellation is more optimized? Or my question is more precisely: What allows tessellation to be so much faster than drawing the polygons individually, if the fundamental architecture is still the same (points > polygons > meshes > textures > shaders)?

With tessellation, the polygons form a uniform, recursively subdivided structure, whereas a normal 1-million-polygon model would have hundreds of thousands of differently sized polygons throughout.
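A toy sketch of that uniform structure (my own illustration, not how the GPU hardware actually works internally): each subdivision level splits every triangle into four via its edge midpoints, so one base triangle fans out into a perfectly regular grid of small triangles.

```python
# Sketch: uniform midpoint subdivision of a triangle, the kind of
# regular structure hardware tessellation produces. Each level splits
# every triangle into 4 by inserting edge midpoints.

def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def subdivide(tri, levels):
    """Return a list of triangles after `levels` rounds of 1->4 splits."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for t in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
        out.extend(subdivide(t, levels - 1))
    return out

base = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
tris = subdivide(base, 5)
print(len(tris))  # 4**5 = 1024 uniform triangles from one base triangle
```

Every generated triangle has the same shape and size, which is exactly the regularity a hand-modeled mesh lacks.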
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: jimhsu
Oh a question:

Can someone elaborate on the differences between rendering a model with 1 million polygons, and rendering a model that has been tessellated to 1M polygons? Is it a bandwidth limitation that prevents the former but allows the latter? A GPU clock speed limitation? Or that tessellation is more optimized? Or my question is more precisely: What allows tessellation to be so much faster than drawing the polygons individually, if the fundamental architecture is still the same (points > polygons > meshes > textures > shaders)?


It isn't faster. It allows the model to ship at one set resolution and lets the GPU adapt that mesh based on its capabilities. The other thing is that a mesh that has had its polygon count reduced will not come back as exactly the same mesh when tessellated. It will look better, but it will not be identical, because of the way current polygon-reduction tools work. They are good, but not perfect.

I made this pic so people can see what the differences look like.
http://img16.yfrog.com/img16/8002/deertessel.jpg

This is a deer model for a game I did a while back. The upper-left mesh is full resolution, slightly over 500,000 polygons. The upper-right mesh, which has been optimized for the game engine, is about 4,500 polygons. The meshes on the bottom left and right are what would be seen in-game with hardware tessellation. Notice that the original mesh and the tessellated mesh are close, but not the same.

I would still rather play a game with the tessellated version than with the normal optimized version. People with a GPU that can do the tessellation will see the bottom version; everyone else will see the optimized version.

So a developer only needs to ship the optimized model with the game, and if you have a GPU that can do the tessellation, you get to play with the higher-detail mesh.
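To illustrate the "GPU adapts the mesh to its capabilities" point, here is a hypothetical sketch of how an engine might pick a per-object tessellation factor. The function name and the distance falloff formula are made up for illustration; only the idea of clamping to a hardware limit reflects how real tessellators are driven.

```python
# Hypothetical sketch: an engine picking a tessellation factor per
# object, scaling detail to the hardware instead of shipping two meshes.

def tess_factor(distance, max_factor):
    """Closer objects get more subdivision, clamped to what the GPU allows.
    `max_factor` stands in for the hardware's tessellation limit
    (e.g. 64 in Direct3D 11)."""
    # 1 level at 100+ units, up to max_factor very close up.
    factor = int(100.0 / max(distance, 1.0))
    return max(1, min(factor, max_factor))

base_polys = 4500  # the optimized deer mesh from the post above
for d in (2, 10, 50, 200):
    f = tess_factor(d, 64)
    # each triangle is split roughly f*f ways by the tessellator
    print(d, f, base_polys * f * f)
```

The shipped asset stays at 4,500 polygons; the effective on-screen polygon count is whatever the player's GPU can afford that frame.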
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
So if I understand this right, tessellation is a way to achieve something like texture LOD (e.g. in DDS files), but with meshes? I can definitely see this as a way to cut down on development time drastically (not having to design new models for two DirectX versions). Thus, it's more likely to be used.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: jimhsu
So if I understand this right, tessellation is a way to achieve something like texture LOD (e.g. in DDS files), but with meshes? I can definitely see this as a way to cut down on development time drastically (not having to design new models for two DirectX versions). Thus, it's more likely to be used.

LOD decreases resolution and lowers the detail of meshes, while tessellation does the opposite, increasing detail above that of the standard mesh.
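A toy comparison of that difference (illustrative names and numbers, not any real engine's API): classic LOD selects among pre-built meshes and can only go down from the authored detail, while tessellation multiplies detail upward from the shipped mesh.

```python
# Sketch: classic LOD picks one of several pre-built meshes and can only
# go DOWN from the authored detail; tessellation starts from the shipped
# mesh and multiplies detail UP.

def classic_lod(distance, lod_meshes):
    """lod_meshes: list of polygon counts, highest detail first."""
    idx = min(int(distance // 50), len(lod_meshes) - 1)
    return lod_meshes[idx]

def tessellated(distance, shipped_polys):
    factor = max(1, 8 - int(distance // 25))  # made-up falloff
    return shipped_polys * factor

print(classic_lod(10, [4500, 1200, 300]))   # 4500: capped at authored max
print(tessellated(10, 4500))                # 36000: above the shipped mesh
```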
 

Barfo

Lifer
Jan 4, 2005
27,554
212
106
Originally posted by: Modelworks
Originally posted by: jimhsu
Oh a question:

Can someone elaborate on the differences between rendering a model with 1 million polygons, and rendering a model that has been tessellated to 1M polygons? Is it a bandwidth limitation that prevents the former but allows the latter? A GPU clock speed limitation? Or that tessellation is more optimized? Or my question is more precisely: What allows tessellation to be so much faster than drawing the polygons individually, if the fundamental architecture is still the same (points > polygons > meshes > textures > shaders)?


It isn't faster. It allows the model to ship at one set resolution and lets the GPU adapt that mesh based on its capabilities. The other thing is that a mesh that has had its polygon count reduced will not come back as exactly the same mesh when tessellated. It will look better, but it will not be identical, because of the way current polygon-reduction tools work. They are good, but not perfect.

I made this pic so people can see what the differences look like.
http://img16.yfrog.com/img16/8002/deertessel.jpg

This is a deer model for a game I did a while back. The upper-left mesh is full resolution, slightly over 500,000 polygons. The upper-right mesh, which has been optimized for the game engine, is about 4,500 polygons. The meshes on the bottom left and right are what would be seen in-game with hardware tessellation. Notice that the original mesh and the tessellated mesh are close, but not the same.

I would still rather play a game with the tessellated version than with the normal optimized version. People with a GPU that can do the tessellation will see the bottom version; everyone else will see the optimized version.

So a developer only needs to ship the optimized model with the game, and if you have a GPU that can do the tessellation, you get to play with the higher-detail mesh.

With no performance hit?
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
I meant, an LOD effect in that you can obtain meshes that change in quality the same way that textures change in quality. But ok.

If developers only need to ship the optimized model though, does that mean tessellation can work with games that aren't designed for it (e.g. DX9/10 games)? Why or why not?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I meant, an LOD effect in that you can obtain meshes that change in quality the same way that textures change in quality. But ok.

I know what you are saying, and the simple answer you are looking for is yes, but it gets a bit more complicated than that.

If developers only need to ship the optimized model though, does that mean tessellation can work with games that aren't designed for it (e.g. DX9/10 games)? Why or why not?

Kind of. You can force tessellation on in games that were not designed for it, but it will produce some comical results. If you look around you can probably find some old CounterStrike shots with fairly humorous results from tessellation being forced on in the game (ATI hardware has supported tessellation for some time, but it needs developer support to work properly).

Hmm, OK, I can think of an example to use. Tessellation helps remove sharp edges from objects. If you are looking at a hard angle on someone's arm, then obviously it isn't realistic at all. But what if you are looking at, say, a Cadillac STS in a racing game? Those sharp creases could be tessellated out, making the car look like a Neon. That is just a general example, but hopefully you can see why it is an issue if you don't have developer support when using a tessellator.
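The creases-getting-smoothed-away problem can be shown with any naive subdivision scheme. The sketch below uses Chaikin corner-cutting on a 2D polyline, a stand-in for real GPU tessellation chosen only because it fits in a few lines: the crisp corner point disappears after smoothing, just like a car's body line would without developer hints marking edges that must stay sharp.

```python
# Sketch: why blindly smoothing/subdividing rounds off sharp features.
# Chaikin corner-cutting (a simple subdivision scheme, not what the GPU
# tessellator actually runs) applied to a hard 90-degree corner.

def chaikin(points, rounds):
    for _ in range(rounds):
        new = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = new
    return points

corner = [(0.0, 1.0), (0.0, 0.0), (1.0, 0.0)]  # a crisp right angle
smoothed = chaikin(corner, 3)
# The original corner point (0, 0) is gone: the crease has been cut away.
print((0.0, 0.0) in smoothed)  # False
```

Subdivision schemes used in practice solve this by letting artists tag "crease" edges that the smoothing must leave alone, which is exactly the developer support being described above.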
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
Originally posted by: BenSkywalker
I meant, an LOD effect in that you can obtain meshes that change in quality the same way that textures change in quality. But ok.

I know what you are saying, and the simple answer you are looking for is yes, but it gets a bit more complicated than that.

If developers only need to ship the optimized model though, does that mean tessellation can work with games that aren't designed for it (e.g. DX9/10 games)? Why or why not?

Kind of. You can force tessellation on in games that were not designed for it, but it will produce some comical results. If you look around you can probably find some old CounterStrike shots with fairly humorous results from tessellation being forced on in the game (ATI hardware has supported tessellation for some time, but it needs developer support to work properly).

Hmm, OK, I can think of an example to use. Tessellation helps remove sharp edges from objects. If you are looking at a hard angle on someone's arm, then obviously it isn't realistic at all. But what if you are looking at, say, a Cadillac STS in a racing game? Those sharp creases could be tessellated out, making the car look like a Neon. That is just a general example, but hopefully you can see why it is an issue if you don't have developer support when using a tessellator.

But from what I understand about tessellation, if the sharp edges are chamfered, and since tessellation works at the polygon level, shouldn't the sharp edge be maintained? The same is true for most smoothing algorithms; is tessellation similar? Of course, that depends on the designer putting chamfered or beveled edges in in the first place.

Sorry if this is tedious, but there is little concrete info aside from technical design specs and people saying "tessellation is cool".
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
Microsoft seems to support that observation: http://www.bit-tech.net/custom...tx-11-tesselation.html

"The tessellator unit simplifies complex wireframes, making them more digestible for the GPU and so reducing memory usage."

So what we have is a card that can push a lot of raw polygons, but in order to a) save valuable GPU memory and b) avoid maintaining two inconvenient versions of every mesh, tessellation was implemented.

So really, it's only a stopgap measure until enough people have high-performance cards that developers can ship truly higher-resolution meshes from the start.
 

jw0ollard

Senior member
Jul 29, 2006
220
0
0
I'm a bit concerned why "01_RubyRendered" (link) seems to be an actual image of the actress/model and not a rendering.

View the image at full size and note the remnants of a background that is not pitch black around her entire outline, especially on her right arm near her breast and her hair near the back of her head/neck. The area around the arm is especially egregious; it's actually around most of her outline, like where her neck switches from "blurry" to "peach fuzz". You can actually see some red fibers from her tank-top strap against the black background. I really don't think a laser scanner (or whatever else "Lightstage" uses) or the resultant 3D mesh would provide that level of detail, nor the bump/normals/UV/etc. I understand her peach fuzz and clothing fibers are technically possible, but they don't look computer-generated, nor do I believe the scanners used today could resolve those minuscule details.

Now if you switch between "textured" and "untextured" in a smaller size (Before vs After) ... They don't even appear to be the same. The hair is "flatter" in the "untextured" and different in the back. Comparing "untextured" vs "normals", they seem to be unchanging, however. So at the very least they have a high enough detailed model to show a bump/normal view.

Going back to watch the video with the same level of scrutiny, it dawned on me they may have done the same switcheroo. Untextured/normal/specular/whatever 3D model demo, then bam, a "photorealistic" demo that is actually faked or prerendered. For all I know the mouse they're dragging around can be animating the Lightstage data from the live model in a Quicktime movie, not a 3D one.

After browsing pages like this one or "Digital Emily" from the same site, I would say a great deal of the "photorealistic" demos from this ATI event are just the raw Lightstage data that is either directly regurgitated, or prerendered in some kind of fakery.

For instance, "02_RubyRenderedWithTrackingDots" sure appears to me like it's just the real-life image of the actress with the tracking dots inked on her face, ala the same exact images of "Digital Emily" two links back.

Anyway, color me unimpressed with this demo. They don't tell us it's simply regurgitated Lightstage data. There's no proof they even bothered building an actual 3D model, since it seems that the "Lightstage process" gives you all the diffuse/normal/specular/etc. data you would need from any angle. And as mentioned several posts above (gorobei), they're not even "real" normals of any actual use. I was also SEVERELY unimpressed with "Digital Emily" (link above) when it made the rounds several months ago, since it's basically video capture and totally dependent on the actual live actor, and the only thing actually "3D" was her face (the rest is all live action)... Just use the live actor then!

Tech demos haven't had a great track record anyway, when it comes to showing off new DirectX versions... Just look at the faked Flight Simulator shots meant to showcase DirectX 10.

With all that said, I would still buy a 5870 or 5870X2 for when I build next, assuming Nvidia hasn't released theirs by that time. I'm really not trying to attack ATI/AMD either. I'm not a fanboy for either camp. I just saw the blatant Photoshopping and felt I should speak up and say something before anyone invests too much into these tech demos that are consistently less than honest. :) At least it makes sense now that when viewing these images a few weeks ago, I absolutely refused to believe she was rendered. hahaha. I think I actually observed the Extract tool (or however they masked off the background) artifacts then too, but didn't think anything of it.