Tessellation done properly... according to AMD


SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Is that why they made the 6900 series with 3/4x the tessellation power of the 5870?
How do you explain that?

It's great to see hardware with more tessellation performance from AMD or nVidia; there may be fewer tessellation bottlenecks in the future, and it adds value, imho.
 

tincart

Senior member
Apr 15, 2010
630
1
0
It's great to see hardware with more tessellation performance from AMD or nVidia; there may be fewer tessellation bottlenecks in the future, and it adds value, imho.

This is what we would expect. The 58xx series provides enough power to run current games that use tessellation. That said, the same level of tessellation may not be enough to run games at high settings in the future, so you scale the capabilities of the new cards to the needs of the consumer during each card's lifetime.

It is not a marketing gimmick. It is not confirmation that people defending nV's superior tessellation hardware were right (whatever that means). It is a hardware development strategy that tries to identify the "sweet spot" in terms of certain performance metrics and designs hardware to meet those prospective goals. nVidia has a different strategy with regards to tessellation that may provide some benefits, or it may not.

The goals that are set may be off, so if someone wants to argue about tessellation capabilities, what they would need to show is that the design of a certain product was well off the mark for what a reasonable consumer would expect in a given period of time.
 

HeXen

Diamond Member
Dec 13, 2009
7,828
37
91
It's great to see hardware with more tessellation performance from AMD or nVidia; there may be fewer tessellation bottlenecks in the future, and it adds value, imho.

No, what would be great... no, fantastic, would be some freaking games that support this hardware with "more" tessellation.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
The funny thing is that even AMD says that NVIDIA's tessellator is too powerful. I swear some people will defend an idea long beyond its death.

Either way, my point is clear: NVIDIA's superior tessellation allows its midrange card to keep up with the best AMD has to offer. To me that is proof that NVIDIA's approach is tessellation done right.

Now if the 6970, or whatever they rebadge the next card as, matches NVIDIA in tessellation, then they are agreeing with me. If not, then they will be behind in the DX11 game.

You're reading it wrong. AMD claims that, in games, tessellation factors needn't be set so high, as it punishes the framerate without offering significant improvements to IQ. I don't think Nvidia was mentioned in the blog once. A guy at AMD stated how the company thinks tessellation should be used to benefit gamers the most. It really is as simple as that.

I have yet, to date, to find a game that can actually take advantage of the OH-MY-GOD-IT'S-INSANELY-AWESOME tessellation performance available with Fermi. See? AMD offered a solution with enough tessellation performance to match the tessellation used in modern games. AMD offered that performance over 6 months before Nvidia did. The reason for this is, among other things, that Nvidia launched a new microarchitecture while AMD reworked and tweaked an existing one. Of course tessellation performance will improve with AMD's next architecture, but that doesn't mean AMD was/is wrong with Cypress and Barts. Tessellation is a new feature, barely used at all - why should any company waste die area on something that won't be used much anyway?
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
You're reading it wrong. AMD claims that, in games, tessellation factors needn't be set so high, as it punishes the framerate without offering significant improvements to IQ. I don't think Nvidia was mentioned in the blog once. A guy at AMD stated how the company thinks tessellation should be used to benefit gamers the most. It really is as simple as that.

I have yet, to date, to find a game that can actually take advantage of the OH-MY-GOD-IT'S-INSANELY-AWESOME tessellation performance available with Fermi. See? AMD offered a solution with enough tessellation performance to match the tessellation used in modern games. AMD offered that performance over 6 months before Nvidia did. The reason for this is, among other things, that Nvidia launched a new microarchitecture while AMD reworked and tweaked an existing one. Of course tessellation performance will improve with AMD's next architecture, but that doesn't mean AMD was/is wrong with Cypress and Barts. Tessellation is a new feature, barely used at all - why should any company waste die area on something that won't be used much anyway?

If it is good enough, why is AMD writing about this at all? And which gamers is AMD looking out for, exactly? I think the key to this is looking at where they "claim" the tradeoff is. Amazing how it is right where their hardware crashes from a performance standpoint. Wouldn't we all laugh at Nvidia if they were to write about how 4 pixels per clock was the perfect tradeoff for gamers from a performance standpoint with the NV30?
 

Outrage

Senior member
Oct 9, 1999
217
1
0
If it is good enough, why is AMD writing about this at all? And which gamers is AMD looking out for, exactly? I think the key to this is looking at where they "claim" the tradeoff is. Amazing how it is right where their hardware crashes from a performance standpoint. Wouldn't we all laugh at Nvidia if they were to write about how 4 pixels per clock was the perfect tradeoff for gamers from a performance standpoint with the NV30?

There are always tradeoffs when you design a GPU. AMD's 5 series is over 1 year and 3 months old; I would say they spent the transistor budget wisely - they don't have a large chunk of silicon sitting idle most of the time.

NV30 was a piece of junk.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
If it is good enough, why is AMD writing about this at all? And which gamers is AMD looking out for, exactly? I think the key to this is looking at where they "claim" the tradeoff is. Amazing how it is right where their hardware crashes from a performance standpoint. Wouldn't we all laugh at Nvidia if they were to write about how 4 pixels per clock was the perfect tradeoff for gamers from a performance standpoint with the NV30?

Well, AMD has a hardware solution that probably fits in with its model. If AMD was targeting 16 pixels all along, then it's not a shock that they'll say so. We all know that AMD didn't dedicate much of the die to tessellation with their Cypress chips and still doesn't dedicate much die with their Barts products. I don't think AMD was interested in wasting die area on a feature that won't be used much in the near future.

Surely, Nvidia's got another view on the issue, but I won't dismiss anyone as wrong or right, because such a thing doesn't exist here. It's two takes on the same problem; so far, there is nothing saying either solution is better than the other.
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
If it is good enough, why is AMD writing about this at all? And which gamers is AMD looking out for, exactly? I think the key to this is looking at where they "claim" the tradeoff is. Amazing how it is right where their hardware crashes from a performance standpoint. Wouldn't we all laugh at Nvidia if they were to write about how 4 pixels per clock was the perfect tradeoff for gamers from a performance standpoint with the NV30?

The difference between then and now is that the FX series cards got hammered in then-current games. This is not happening to the HD 5800s; everything is fairly tight and well within the price/performance segment. I don't see why this thread is seeing such sensationalistic responses, since most of it is moot. Where are these games with huge amounts of tessellation that bring the AMD cards to their knees?
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
You're reading it wrong. AMD claims that, in games, tessellation factors needn't be set so high, as it punishes the framerate without offering significant improvements to IQ. I don't think Nvidia was mentioned in the blog once. A guy at AMD stated how the company thinks tessellation should be used to benefit gamers the most. It really is as simple as that.

I have yet, to date, to find a game that can actually take advantage of the OH-MY-GOD-IT'S-INSANELY-AWESOME tessellation performance available with Fermi. See? AMD offered a solution with enough tessellation performance to match the tessellation used in modern games. AMD offered that performance over 6 months before Nvidia did. The reason for this is, among other things, that Nvidia launched a new microarchitecture while AMD reworked and tweaked an existing one. Of course tessellation performance will improve with AMD's next architecture, but that doesn't mean AMD was/is wrong with Cypress and Barts. Tessellation is a new feature, barely used at all - why should any company waste die area on something that won't be used much anyway?


There is not a wrong or right way to me, but information and choice. For AMD, it was their thinking, and so far the market has spoken loudly for their products. nVidia had trouble with the first Fermis and slow going, with some rather obvious limitations, but it is still great to see a powerful first-generation tessellation ability, and it adds value to me.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Well, AMD has a hardware solution that probably fits in with its model. If AMD was targeting 16 pixels all along, then it's not a shock that they'll say so. We all know that AMD didn't dedicate much of the die to tessellation with their Cypress chips and still doesn't dedicate much die with their Barts products. I don't think AMD was interested in wasting die area on a feature that won't be used much in the near future.

Surely, Nvidia's got another view on the issue, but I won't dismiss anyone as wrong or right, because such a thing doesn't exist here. It's two takes on the same problem; so far, there is nothing saying either solution is better than the other.

Isn't that the point? AMD is telling us we only need a factor of 16. Why? Because their hardware crashes from a performance standpoint if you increase tessellation beyond that factor. AMD knows having more performance is never a bad thing. They know Nvidia wipes their ass in tessellation performance. Having a hardware vendor tell me what is good for me because their hardware can only do what they deem "done right" is silly. Which is why I brought up how we would laugh at Nvidia if they wrote that targeting 4 pixels/clock with the NV30 was doing it "right".
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
Isn't that the point? AMD is telling us we only need a factor of 16. Why? Because their hardware crashes from a performance standpoint if you increase tessellation beyond that factor. AMD knows having more performance is never a bad thing. They know Nvidia wipes their ass in tessellation performance. Having a hardware vendor tell me what is good for me because their hardware can only do what they deem "done right" is silly. Which is why I brought up how we would laugh at Nvidia if they wrote that targeting 4 pixels/clock with the NV30 was doing it "right".

Logically, you're aware that your analysis can occur in precisely the opposite way, right? Here:

Instead of
(1) "AMD tells us that we only need a factor of 16 because their hardware only supports tessellation up to that level"

How about
(2) "They designed their hardware to a factor of 16 in tessellation precisely because they sincerely believe that this is a point of marginal returns on tessellation investment."

In the same way that one can accuse AMD of drawing an artificial boundary, one can accuse Nvidia of creating the very same boundary but, in moving past it, generating for themselves a unique marketing gimmick. Unless people on these forums are just trying to defend debating positions, games need to be made that take advantage of this extra tessellation performance on Nvidia's parts, or else it's just a moot point, as it has been for months now.

I had this same back and forth with Scali months ago - still no change in the games.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
It seems what AMD offers is taken as fact -- end of story -- and there is no other side, when they're offering PR to create division and cloud a competitor's strength, imho. Not saying nVidia doesn't do PR -- of course they do.
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
It seems what AMD offers is taken as fact -- end of story -- and there is no other side, when they're offering PR to create division and cloud a competitor's strength, imho. Not saying nVidia doesn't do PR -- of course they do.

How can a blog post cloud a strength which can be objectively measured by benchmarks? That's some PR you're projecting for AMD.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Logically, you're aware that your analysis can occur in precisely the opposite way, right? Here:

Instead of
(1) "AMD tells us that we only need a factor of 16 because their hardware only supports tessellation up to that level"

How about
(2) "They designed their hardware to a factor of 16 in tessellation precisely because they sincerely believe that this is a point of marginal returns on tessellation investment."

In the same way that one can accuse AMD of drawing an artificial boundary, one can accuse Nvidia of creating the very same boundary but, in moving past it, generating for themselves a unique marketing gimmick. Unless people on these forums are just trying to defend debating positions, games need to be made that take advantage of this extra tessellation performance on Nvidia's parts, or else it's just a moot point, as it has been for months now.

I had this same back and forth with Scali months ago - still no change in the games.

The only way to verify #2 is if AMD doesn't move beyond that point. Time will tell.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
How can a blog post cloud a strength which can be objectively measured by benchmarks? That's some PR you're projecting for AMD.

Attack the benchmarks and the tests, give reasons why their tessellation makes a lot of sense for today, right now, and turn the subject to balance and their strengths.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I'm not sure why people consider tessellation the be-all, end-all. Even the HAWX 2 demo didn't impress with its overly strong tess. Tess is definitely not near the importance that DX8-DX9 was, so comparing it to the FX video card problems back in the day is not even close.

At least for me, I couldn't care less about having a mostly useless tess feature over price/performance.

it's not a "mostly useless" feature, civ5 makes use of it to great effect. it's just not properly implemented in most other games yet. and tess is just one part of dx11, so saying that "dx8 or dx9 > tess" just doesn't make sense.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
If it is good enough, why is AMD writing about this at all? And which gamers is AMD looking out for, exactly? I think the key to this is looking at where they "claim" the tradeoff is. Amazing how it is right where their hardware crashes from a performance standpoint. Wouldn't we all laugh at Nvidia if they were to write about how 4 pixels per clock was the perfect tradeoff for gamers from a performance standpoint with the NV30?

It's also worth noting that they did not bring this up when they launched their DX11 cards. They only brought it up after NVIDIA launched and embarrassed them in benchmarks. They should have just remained silent on this issue and fixed the performance in their chips.

It's like when StarCraft II launched and AMD had no AA support. They had a PR piece trying to downplay that as well. Of course they still patched it (and took a major performance hit).

I think this PR fluff needs to be filed under "put up or shut up".
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I suspect the moment ATI can do 1-pixel tessellated triangles fast, Eric Demers will be singing a completely different tune. They clearly don't feel their tessellator is fast enough right now, because that's the thing they tweaked the most between the 5 and 6 series cards.

My personal opinion is that it's a bit like the difference between 4x MSAA and 8x SSAA. It's not that great really, but it does look better. If you've got the GPU power, why not use it?

maybe AMD is trying to make 16-pixel tess the new 4xAA, while NV wants it to be 1 pixel. Long term we're probably better off with 1 pixel (or 4 or 6, whatever), but I agree that the devs are more apt to stick to higher numbers, especially while tess is still in its infancy.
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
Attack the benchmarks, the tests, reasons why their tessellation makes a lot of sense for today, right now, turn the subject to balance and their strengths.

The first two are Nvidia's specialty, as is the last one (read: we're slower but have physx). How embarrassing for you.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Logically, you're aware that your analysis can occur in precisely the opposite way, right? Here:

Instead of
(1) "AMD tells us that we only need a factor of 16 because their hardware only supports tessellation up to that level"

How about
(2) "They designed their hardware to a factor of 16 in tessellation precisely because they sincerely believe that this is a point of marginal returns on tessellation investment."

In the same way that one can accuse AMD of drawing an artificial boundary, one can accuse Nvidia of creating the very same boundary but, in moving past it, generating for themselves a unique marketing gimmick. Unless people on these forums are just trying to defend debating positions, games need to be made that take advantage of this extra tessellation performance on Nvidia's parts, or else it's just a moot point, as it has been for months now.

I had this same back and forth with Scali months ago - still no change in the games.



The only way to verify #2 is if AMD doesn't move beyond that point. Time will tell.

No, when games move beyond it and show a visible performance/IQ gain due to it. That's when AMD will be proven wrong.

Just curious: some people seem to be making/defending arguments despite, apparently, not having read or understood AMD's position and reasoning on the subject. Nobody is attacking the validity of their argument for why not to proceed beyond 16 pix/tri. I'm not an expert on "rasterizer efficiency". Anyone here know enough about it to further clarify and explain their claims to the unwashed masses? ;) Judging by the arguments - "We need more because more is better" versus "No we don't, enough is enough" - it seems like we could use some more expert input.

Look at the image below. I've whited out an area equal to 1pix and 16pix under the appropriate pics. 16pix/tri makes for extremely high res for a game.

Tesselation-Rasterizer-Efficiency.png
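As a quick back-of-the-envelope check on those densities (my own arithmetic, not taken from AMD's post; the function name and resolution list are just for illustration), here is what pixels-per-triangle works out to in raw on-screen triangle counts:

```python
# Back-of-the-envelope: how many on-screen triangles do different
# tessellation densities imply? Illustrative only - real scenes have
# overdraw, off-screen geometry, and wildly varying triangle sizes.

RESOLUTIONS = {
    "1680x1050": 1680 * 1050,
    "1920x1080": 1920 * 1080,
    "2560x1600": 2560 * 1600,
}

def triangles_on_screen(pixels: int, px_per_tri: float) -> int:
    """Triangles needed to cover the frame at a given pixels-per-triangle density."""
    return round(pixels / px_per_tri)

for name, px in RESOLUTIONS.items():
    print(f"{name}: {triangles_on_screen(px, 16):>9,} tris at 16 px/tri, "
          f"{triangles_on_screen(px, 1):>10,} tris at 1 px/tri")
```

Even at 16 px/tri, a 1080p frame implies roughly 130,000 visible triangles, which is presumably why AMD treats it as already very fine detail; at 1 px/tri that balloons past two million.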
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
The first two are Nvidia's specialty, as is the last one (read: we're slower but have physx). How embarrassing for you.

What does GPU physics have to do with tessellation? What does a proprietary solution have to do with a more open solution?

We're slower but have PhysX? Are you trying to argue that the lesser tessellation from AMD is okay because there is a performance hit with PhysX?

I'm not saying AMD's solution is wrong or that their hardware is not balanced for the life cycles of their products -- I simply offered that more powerful tessellation may offer value for some and may mean less of a tessellation bottleneck in the future. I am saying that AMD and nVidia both do PR, and it would be foolish to think AMD doesn't.
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
What does GPU physics have to do with tessellation? What does a proprietary solution have to do with a more open solution?

We're slower but have PhysX? Are you trying to argue that the lesser tessellation from AMD is okay because there is a performance hit with PhysX?

I'm not saying AMD's solution is wrong or that their hardware is not balanced for the life cycles of their products -- I simply offered that more powerful tessellation may offer value for some and may mean less of a tessellation bottleneck in the future. I am saying that AMD and nVidia both do PR, and it would be foolish to think AMD doesn't.

Nobody is contradicting a 'maybe'. Maybe global warming will slow down. Maybe I'll grow bald in the future. Maybe I'll get a job I've applied for.

Maybe in the future there will be games to take advantage of Nvidia's tessellation performance but there are none now and that is all I'm trying to say. Maybe, could be, would be, whatever. When the games come that make use of it, it'll be a motivating reason for purchasing the product. Now, it's just a marketing gimmick. Note, I'm not blaming them for pushing it, I'm just calling it for what it is.

And regarding my physx comment, it should be obvious from the context of the post I was replying to (yours).
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,274
41
91
No, when games move beyond it and show a visible performance/IQ gain due to it. That's when AMD will be proven wrong.

Just curious: some people seem to be making/defending arguments despite, apparently, not having read or understood AMD's position and reasoning on the subject. Nobody is attacking the validity of their argument for why not to proceed beyond 16 pix/tri. I'm not an expert on "rasterizer efficiency". Anyone here know enough about it to further clarify and explain their claims to the unwashed masses? Judging by the arguments - "We need more because more is better" versus "No we don't, enough is enough" - it seems like we could use some more expert input.

Look at the image below. I've whited out an area equal to 1pix and 16pix under the appropriate pics. 16pix/tri makes for extremely high res for a game.
Without expert voices the best thing is to look at tests.

To determine the validity of the blog post, tests should be done: one run at 16 pixels per triangle and others at lower values. AMD claims there is little to no visible IQ difference between 16 and 1, so we compare the results of the tests and see how accurate that statement is. Aren't there benchmarks out there with the ability to fine-tune tessellation options? I don't have the hardware to do this, or else I'd try.
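For what it's worth, the comparison could be scripted once screenshots are captured at each tessellation level. A minimal sketch (the function and the RMS metric below are my own crude illustration; a serious IQ comparison would want a perceptual metric like SSIM):

```python
import math

def rms_pixel_diff(img_a, img_b):
    """Root-mean-square difference between two equal-sized screenshots,
    given as flat lists of 0-255 channel values (e.g. raw RGB dumps)."""
    if len(img_a) != len(img_b):
        raise ValueError("screenshots must have identical dimensions")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a))

# Identical captures score 0. If AMD's claim holds, captures at tess
# factor 16 vs. a much higher factor should score near 0 while the
# framerates diverge.
print(rms_pixel_diff([10, 20, 30], [10, 20, 30]))  # -> 0.0
print(rms_pixel_diff([10, 20, 30], [12, 18, 30]))
```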

Also, AMD's approach would increase performance on all video cards, not just theirs. Here is their goal:
This is done by using a variety of adaptive techniques that use high tessellation levels only for parts of a scene that are close to the viewer, on silhouette edges, or in areas requiring fine detail.

That doesn't seem unreasonable to me. It makes sense for developers to do this, as it would increase performance on all video cards. Whether or not IQ is affected, and to what degree, has yet to be determined, as I have not seen any tests or benchmarks, nor have I seen any poster in this thread present any kind of data that actually confirms or debunks AMD's blog post.
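The adaptive idea in that quoted line can be sketched in a few lines. This is a toy illustration only - the function name, constants, and linear falloff are my own, not AMD's actual heuristic, which per the quote also weights silhouette edges and fine-detail areas:

```python
# Toy distance-adaptive tessellation: pick a per-patch tessellation
# factor that falls off with distance from the camera, clamped to a
# [min, max] range. The falloff curve and constants are invented for
# illustration; real engines tune these per asset.

def adaptive_tess_factor(distance: float,
                         near: float = 2.0,      # full detail inside this range
                         far: float = 100.0,     # minimum detail beyond this range
                         max_factor: float = 16.0,
                         min_factor: float = 1.0) -> float:
    """Linearly interpolate the tessellation factor between near and far."""
    if distance <= near:
        return max_factor
    if distance >= far:
        return min_factor
    t = (distance - near) / (far - near)          # 0 at near, 1 at far
    return max_factor + t * (min_factor - max_factor)

# Close patches get heavy subdivision; distant ones stay coarse.
for d in (1.0, 10.0, 50.0, 200.0):
    print(f"distance {d:6.1f} -> tess factor {adaptive_tess_factor(d):5.2f}")
```

In a real pipeline a hull shader would compute something like this per patch edge; the point of AMD's argument is that most patches in a frame end up far enough away that a modest peak factor suffices.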
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Nobody is contradicting a 'maybe'. Maybe global warming will slow down. Maybe I'll grow bald in the future. Maybe I'll get a job I've applied for.

Maybe in the future there will be games to take advantage of Nvidia's tessellation performance but there are none now and that is all I'm trying to say. Maybe, could be, would be, whatever. When the games come that make use of it, it'll be a motivating reason for purchasing the product. Now, it's just a marketing gimmick. Note, I'm not blaming them for pushing it, I'm just calling it for what it is.

And regarding my physx comment, it should be obvious from the context of the post I was replying to (yours).

What it is to you. Thankfully, others may differ.