
Why AMD doesn't like over-tessellation

AMD was all for tessellation, supporting a form of it in their hardware before DX11 came out. Then Nvidia released Fermi, which crushed AMD at tessellation performance. Now, AMD has certainly made plenty of improvements to their geometry engines and tessellation since then, but so has Nvidia. That's really why they try to downplay high levels of tessellation as "over-tessellation": because Nvidia is better at it. I like AMD, but it's really nothing but PR-speak here.
 

They clearly state it in the slideshow; it's a technical limitation, not just PR or whatever.
 
Yeah, they never talked about "over-tessellation" before Nvidia released Fermi. They even praised the Heaven benchmark for its tessellation. 😀

BTW: their own TressFX uses triangles smaller than one pixel:
"The hair is rendered as several thousand individual strands stored as thin chains of polygons. Since the width of a hair is considerably smaller than one pixel, [...]"
The problem is not the size of the triangles, but generating the geometry on the GPU.
 
A single pixel cannot convey sub-pixel visual information. At some point, any increase in tessellation stops providing visual improvement. Until that point is reached, it can't be described as "over-tessellation", IMO.
 

Sub-pixel information can help prevent aliasing, though.
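To make that concrete, here's a rough sketch (my own toy 1-D example, not from the thread) of how averaging sub-pixel samples turns a sub-pixel edge position into a visible grey level instead of a hard aliased step:

```python
def pixel_value(px, edge_x, n):
    """Average n*n sub-samples of a vertical edge at x = edge_x.
    Samples left of the edge are white (1.0), right are black (0.0)."""
    hits = 0
    for i in range(n):
        for j in range(n):
            sx = px + (i + 0.5) / n   # sub-sample x position inside pixel px
            if sx < edge_x:
                hits += 1
    return hits / (n * n)

# One sample per pixel: pixel 2 is all-or-nothing across the edge at x = 2.3.
row_1x = [pixel_value(px, 2.3, 1) for px in range(4)]  # [1.0, 1.0, 0.0, 0.0]

# 4x4 supersampling: pixel 2 gets ~0.25 coverage -- the sub-pixel edge
# position survives as a grey level instead of a hard step.
row_4x = [pixel_value(px, 2.3, 4) for px in range(4)]  # [1.0, 1.0, 0.25, 0.0]
```

Move the edge by a fraction of a pixel and the single-sample row doesn't change at all, while the supersampled row does — that's the sub-pixel information the averaging preserves.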
 
Promoting a technology is one thing; that's not the same as saying everything should be tessellated to the moon. Tessellation performance in a situation where it's being abused for no benefit to the gamer shouldn't be a measure worth anything.

This is not PR. It's just information.
 
Found this slideshow that explains why GCN doesn't like over-tessellation:
http://www.slideshare.net/DevCentralAMD/gs4106-the-amd-gcn-architecture-a-crash-course-by-layla-mah

The explanation can be found on slide 59.

My uneducated understanding...too many triangles 🙂
That's not an AMD-specific problem; all GPUs work this way. This is why MS doesn't recommend using too-small triangles — they will kill performance. To achieve good efficiency, a triangle should cover more than 16 pixels.
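As a rough back-of-the-envelope illustration (my own sketch, assuming uniform subdivision and ignoring the hardware's partitioning modes), triangle counts grow with the square of the tessellation factor, so average triangle area shrinks fast:

```python
def triangles_after_tessellation(base_tris, factor):
    """Uniform subdivision: splitting each edge `factor` times turns one
    base triangle into factor**2 sub-triangles (toy model; real hardware
    partitioning modes differ in the details)."""
    return base_tris * factor ** 2

# A 10,000-triangle mesh at tessellation factor 16:
n = triangles_after_tessellation(10_000, 16)  # 2,560,000 triangles
```

At that point a mesh that averaged, say, a few hundred pixels per triangle on screen is down to fractions of a pixel per triangle, well under the ~16-pixel efficiency threshold mentioned above.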

There are several tessellation split methods: nosplit, binsplit, isosplit, diagsplit, etc. All hardware and APIs use the nosplit method; it is easy to build hardware logic for, and it gives acceptable results.
Binsplit gives better results, but the rasterizer must be overdesigned to get good performance.
Isosplit gives better results still, but it needs very specific hardware logic to do its job.
I think diagsplit is the best method, because it gives near-perfect results, and building hardware for it is not much harder than for nosplit. Perhaps a future API (Mantle, maybe) will use diagsplit with quad-fragment merging.
 
Putting it simply, Nvidia's tessellation performance is a bit overpowered, and AMD's was underpowered before GCN.
 
Nvidia has more or less always led on tessellation performance since it actually became implemented and useful. And it's nice to have, especially as a more subtle LOD tool so we don't get awful popping between low-poly and high-poly meshes.

Sub-pixel visual information can come through if you use anti-aliasing to average samples within a pixel. That's not using tessellation to do AA; it's just one method that can help convey sub-pixel detail.

The general point is right, though: there is a degree of tessellation beyond which there's no discernible image-quality difference, but neither AMD nor Nvidia is bumping up against that boundary right now. If nothing else, geometry is the thing that has seen the least improvement over the years. I was one of the people excited to hear Carmack throwing around the idea of a possible mega-geometry to go alongside MegaTexture. Sadly he now works at Oculus, advancing a field that is, in my opinion, below him; there are many engineers who could deliver the code needed for VR, but not many who can write new paradigm-shifting tech from the ground up.
 
Saying "over-tessellation" is begging the question from the outset. It would be the same as someone saying tessellation is bad on "under-performing" architectures.
 

It's not just Nvidia: Intel's Iris Pro beats AMD parts of similar overall performance by a good margin on TessMark at 64x.

Haswell GT2 performed like the 7850K on TessMark 64x while being clearly slower in games, so I'd say AMD is a little underpowered here. So if Nvidia wants to "bottleneck" AMD's performance, they know what to push devs to use...
 

Devs aren't typically stupid; they usually balance image quality with performance. The over-tessellation has to come from Nvidia code pushed on the devs, which is one reason people hate GameWorks. Any code in a developer's game should be the developer's own, or open.

As a consumer, over-tessellation is simply nasty. Even on Nvidia hardware it's just plain disgusting to take the performance hit. You pay all that money for the hardware, and then some punks want to force you to run foolishness that offers no visual benefit but reduces your enjoyment.
 
The only time "over-tessellation" is a serious issue is when you get lower quad-fragment shading efficiency from primitives smaller than a 2x2 quad ...

What GPUs shade are 2x2 quads, not individual pixels, because 2x2 quads make it cheap to compute screen-space gradients of texture coordinates, which texture samplers depend on for mip-map selection and filtering ...
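A hedged sketch of that idea (my own illustration, not any particular GPU's implementation): within a 2x2 quad, UV derivatives fall out of simple finite differences between neighbouring lanes, and the classic mip-selection rule takes log2 of the resulting texel footprint:

```python
import math

def mip_level(du_dx, dv_dx, du_dy, dv_dy, tex_size):
    """Textbook mip selection: LOD = log2 of the longest screen-space
    texel footprint, built from the UV derivatives that the 2x2 quad
    makes cheap to estimate via finite differences."""
    fx = math.hypot(du_dx * tex_size, dv_dx * tex_size)  # footprint along x
    fy = math.hypot(du_dy * tex_size, dv_dy * tex_size)  # footprint along y
    rho = max(fx, fy)
    return max(0.0, math.log2(rho)) if rho > 0 else 0.0

# A 2x2 quad of UVs; the derivatives come from neighbouring lanes.
uv = {(0, 0): (0.10, 0.20), (1, 0): (0.11, 0.20),
      (0, 1): (0.10, 0.21), (1, 1): (0.11, 0.21)}
du_dx = uv[(1, 0)][0] - uv[(0, 0)][0]   # ~0.01 per pixel
dv_dx = uv[(1, 0)][1] - uv[(0, 0)][1]
du_dy = uv[(0, 1)][0] - uv[(0, 0)][0]
dv_dy = uv[(0, 1)][1] - uv[(0, 0)][1]   # ~0.01 per pixel

lod = mip_level(du_dx, dv_dx, du_dy, dv_dy, tex_size=1024)
# ~0.01 UV per pixel over a 1024-texel texture -> ~10 texels per pixel,
# so the LOD lands near log2(10.24) ~ 3.36.
```

This is why a quad that a tiny triangle only partially covers still runs helper lanes: the neighbouring lanes must execute so the finite differences exist.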

Rasterization is practically free when there is a dedicated unit doing it for you, so that is not an issue, but AMD's Evergreen fixed-function tessellator and rasterizer were just plain bad ...

I don't think AMD will be crying so much about "over-tessellation" anymore now that Fiji has fixed that issue ...
 
It might be the overhead of determining which pixels need to be shaded. The larger the area the polygon covers (up to 16 pixels), the more efficiently the shader makes use of GCN. When you have a sub-pixel triangle, the shading work still gets clamped to at least one whole pixel, which wastes cycles. That's my understanding of it anyway; don't take my word for it.
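A toy model of that waste (my own sketch, assuming the usual 2x2-quad shading granularity discussed above): count the quads a triangle touches, charge four shader invocations per quad, and compare with the pixels actually covered:

```python
def quad_shading_efficiency(covered_pixels):
    """covered_pixels: set of (x, y) pixels a triangle actually covers.
    GPUs shade whole 2x2 quads, so every touched quad costs four lane
    invocations even if only one pixel in it is covered."""
    quads = {(x // 2, y // 2) for (x, y) in covered_pixels}  # quads touched
    invocations = 4 * len(quads)
    return len(covered_pixels) / invocations

# A sub-pixel triangle hits one pixel but still pays for a full quad:
tiny = quad_shading_efficiency({(5, 5)})   # 1/4 = 0.25 efficiency

# A 4x4 block of pixels fills all four of its quads completely:
big = quad_shading_efficiency({(x, y) for x in range(4) for y in range(4)})  # 1.0
```

So with pixel-sized triangles, up to three quarters of the shading work can be helper-lane waste — which is the "over-tessellation" cost the slideshow is describing.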
 
What about tessellating a water table beneath the game world? What's that considered?

Normal. Having one unified, rectangular surface of water is more efficient to render than several surfaces, or than one body curving around the environment. If the water is tessellated for detail, that tessellation comes with it.
 