AMD Radeon HD 6970 already benchmarked? Enough to beat GTX480 in Tessellation?


Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Yea, but that's what bothers me.
When nVidia revealed their Fermi architecture, they didn't just say "We have 'scalable' tessellation".
No, they went straight into the technical details, and then slapped the marketing label "PolyMorph Engine" on it.

AMD hasn't gone into detail yet, nor have they even bothered to figure out a nice marketing label for whatever it is they may have cooked up.

hrmmm...at an admittedly superficial level this gives me a deja vu feeling about AMD's GPGPU efforts as well. A similar pattern of marketing doing a lot of talking: one or two token apps show up and turn out to be totally busted/worthless (and hence free to acquire), and years go by without much of an improvement to show for all the marketing babble.

Just my superficial impression of the situation, prolly wrong on many levels but I steadfastly hold to the notion that this too (if true) is more fail on AMD marketing's part for not having done a good enough job to have left me with a more accurate superficial impression! :p ;)
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Yea, but that's what bothers me.
When nVidia revealed their Fermi architecture, they didn't just say "We have 'scalable' tessellation".
No, they went straight into the technical details, and then slapped the marketing label "PolyMorph Engine" on it.

AMD hasn't gone into detail yet, nor have they even bothered to figure out a nice marketing label for whatever it is they may have cooked up.



Exactly, I hope we get some real info on what they've been doing regarding tessellation soon, and even better: some benchmarks as a proof-of-concept of their efforts.

AMD, or the graphics division at least, doesn't usually go into technical detail about its unreleased GPUs.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
AMD, or the graphics division at least, doesn't usually go into technical detail about its unreleased GPUs.

They do, actually. Ask the reviewers at Anandtech where they get their technical information from when they are doing a (p)review of a new architecture.
An article like this one, for example: http://www.anandtech.com/show/2556/3
Sometimes AMD will even put up blogs on their own site, explaining upcoming technology.
Of course, a prerequisite is that you have something to talk about in the first place.
 

Madcatatlas

Golden Member
Feb 22, 2010
1,155
0
0
Not really. A wooden board is more than enough sometimes.


Thread crapping is not acceptable.

Moderator Idontcare
 
Last edited by a moderator:

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
You need a link to prove your allegations.



I never claimed otherwise, re-read what I said.



No I don't, and that question has been asked WAY too many times. I find it deeply insulting, and I want action to be taken against these people.



I did that in the past; they don't want to play anymore.

Methinks you protest too much. IDC is here; if I misuse my words he will put it out there for all to see. As for your discussion with me, IDC should look real hard at that, as you're backtracking. Ya, I know they won't play with ya. Would ya tell us all why?
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
1
0
hrmmm...at an admittedly superficial level this gives me a deja vu feeling about AMD's GPGPU efforts as well. A similar pattern of marketing doing a lot of talking: one or two token apps show up and turn out to be totally busted/worthless (and hence free to acquire), and years go by without much of an improvement to show for all the marketing babble.

Just my superficial impression of the situation, prolly wrong on many levels but I steadfastly hold to the notion that this too (if true) is more fail on AMD marketing's part for not having done a good enough job to have left me with a more accurate superficial impression! :p ;)

Well, I guess there are some options to be explored, for example:
1) A company has great technology, but fails to market it properly, and as such, fails in the marketplace anyway (Amiga anyone?)

2) A company doesn't have great technology, but knows how to market it, making it a success anyway (Intel Pentium 4 anyone?)

I wonder where we would have to put AMD.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
They do, actually. Ask the reviewers at Anandtech where they get their technical information from when they are doing a (p)review of a new architecture.
An article like this one, for example: http://www.anandtech.com/show/2556/3
Sometimes AMD will even put up blogs on their own site, explaining upcoming technology.
Of course, a prerequisite is that you have something to talk about in the first place.

They aren't allowed to publish it until the NDA lifts.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
False, but why am I not surprised:

http://www.pcgameshardware.com/aid,...h-Interview-What-DirectX-11-is-good-for/News/

1. One of the main features of the DirectX 11 API is hardware tessellation. If you're utilizing this feature, which visual improvements can we expect (e.g. water, NPCs, environment)? Some implementations distort the textures because of the additional mesh; have you noticed this, and did you consider the problem?
This feature will be enabled automatically if your hardware has DX11 capabilities. We use tessellation for the Civilization V terrain, which adjusts the mesh's subdivision of the terrain as the user zooms in and out. Not only does it add detail, but terrain tessellation makes the game measurably faster on both Nvidia and AMD hardware (as much as 30% in some cases).
Now can you declare that "argument" debunked and move on?

Incorrect.

Benching Civ 5 on my system with tessellation on vs off doing the lategameview benchmark, fps is lower with tessellation on.

There is not one DX11 game on the market where your fps is higher using tessellation. I own them all and waste time benching stuff out of curiosity.

Posting anecdotes from a random interview is a far cry from a fact.

Why can't you stay on topic in the thread? 'AMD Radeon HD 6970 already benchmarked? Enough to beat GTX480 in Tessellation?'

My bet is the 6970 will be faster than the GTX480 in every single game out there, whether DX9, 10 or 11, tessellation on or off.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Incorrect.

Benching Civ 5 on my system with tessellation on vs off doing the lategameview benchmark, fps is lower with tessellation on.

There is not one DX11 game on the market where your fps is higher using tessellation. I own them all and waste time benching stuff out of curiosity.

How exactly did you test it?
As far as I know, you can only set tessellation to 'low', not 'off'.
Lower tessellation will obviously be faster than higher tessellation, but at reduced image quality.

The real comparison is the same quality level with and without tessellation.
I've seen someone on this forum comparing the DX9 version (no tessellation) to the DX11 version (with tessellation), and DX11 was considerably faster.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
How exactly did you test it?
As far as I know, you can only set tessellation to 'low', not 'off'.
Lower tessellation will obviously be faster than higher tessellation, but at reduced image quality.

The real comparison is the same quality level with and without tessellation.
I've seen someone on this forum comparing the DX9 version (no tessellation) to the DX11 version (with tessellation), and DX11 was considerably faster.

Have you made one on-topic post in this entire thread?

I'd rather not feed your tessellation rant, but you cannot use DX9 vs DX11 to compare; that is obviously flawed. But feel free to link to this comparison you've seen.

For the record, World of Warcraft runs faster through DX11 than DX9 and makes no use of tessellation, so even if someone got those results it proves nothing. Further, AA does not work in Civ V under DX9, which is another skew.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
None are downplaying. A blind man can see that. The problem is that some are overstating its value right now. The 6970 changes everything. But for one thing, there still won't be any games that require that much tessellation power. We all like tessellation, but we don't like the marketing tactics here. That's what's going on in this topic. At the end of the day, some people are taking the stance that, at this time, the 480 has more tessellation power than present software (games) requires. In three weeks the 6970 makes this whole topic an unimportant discussion.

Yeah, but at the expense of, for example, costly off-die memory that is good for nothing but countering NV marketing BS. Perhaps AMD will market it as a future-proof gfx card? LOL

So the end result will be higher prices for the 6970 and 580 with no benefit for gamers.

But soon after the 6970 launch we will see some new benchmarks: frogs on sledges crashing into dragons, the story of Batman 2 returning from CUDA...
 

mosox

Senior member
Oct 22, 2010
434
0
0
Almost every single component retailer in the world will tell you that Intel is much better than AMD and Nvidia much better than ATI (AMD).

A prebuilt AMD system with a Phenom II X4/HD5870 is named something like "Gamer 5", while an Intel i3/GTX 460 is named something like "Dragon Tiger Extreme XX".

The result is this:
http://www.youtube.com/watch?v=FL7yD-0pqZg
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Mosox, I agree; a lot of people just assume things because of brand.

If I wanted to build a cheap value PC for myself, I'd go with an Athlon II X4 640 and probably a 6850, and overclock that bastard card to its limits (same with the CPU). An Athlon II X4 is super cheap, and fast enough not to be a bottleneck in anything gaming-wise once you OC it to 3.8-3.9 GHz.

An alternative would be an i3 530, but it would cost a bit more, and the Athlon does slightly better in multi-threaded stuff. I'd also go cheap on RAM and anything else, and OC those too... lmao.

If you want high end though... I'd probably go Intel <.<' As much as I love AMD, yeah.. But I'm a cheap bastard, so Intel is out of my price range.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
1
0
Have you made one on-topic post in this entire thread?

Not sure if you read the thread title?
"AMD Radeon HD 6970 already benchmarked? Enough to beat GTX480 in Tesselation?"

Apparently the performance of tessellation (with regard to both GTX480 and HD6970) is very much the topic of this thread.

I'd rather not feed your tessellation rant, but you cannot use DX9 vs DX11 to compare, that is obviously flawed. But feel free to link to this comparison you've seen.

I asked you how you tested it, so we can verify whether or not your comparison is flawed.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
How exactly did you test it?
As far as I know, you can only set tessellation to 'low', not 'off'.
Lower tessellation will obviously be faster than higher tessellation, but at reduced image quality.

The real comparison is the same quality level with and without tessellation.
I've seen someone on this forum comparing the DX9 version (no tessellation) to the DX11 version (with tessellation), and DX11 was considerably faster.

OK, this whole image-quality thing is really getting on my nerves. I do quite a bit of animation and have to disagree with you a bit about tessellation.

First off, tessellation as done by Nvidia or ATI is aimed at games. As you state, you're a developer who likes Nvidia because it can increase mesh quality without much user intervention. While that is correct, the methods that Nvidia and ATI use are not compatible with the majority of algorithms that are widely used on the market.

For example:

http://www-2.cs.cmu.edu/~quake/triangle.html

http://mrl.nyu.edu/projects/modeling_simulation/subdivision/

http://www.plunk.org/~grantham/public/actc/

Also to correct your statements on these key topics:

Tessellation does not improve "Image Quality". Image quality can only be improved by a better texture. It is useless to have tessellation enabled with a very low-resolution texture, as it really shows the image in a bad light. That is like having a perfectly created 3D human with a really bad texture. In reality, a better texture is better than 3D geometry, as it is easier on the video card. It just needs more/faster memory; that is why 2GB or even 4GB will be here in the future vs. fully utilized DX11 tessellation techniques.

Tessellation is not faster as a whole. If you have so much geometry that it cripples a GPU, then what is the point of the implementation? It's a similar argument to a super-sized texture that does not need to be that big. This is the debate going on with consoles: they use low-resolution textures to make the game feel fast. That is why on consoles you see better textures on the 3D model you are following vs. its environment. Try increasing the textures in Tony Hawk by just 20% and the damn game would crawl.

While I agree DX11 tessellation is getting us to a unified architecture, it cannot be used to an extent that cripples GPU performance. In fact, the main problem with tessellation is that you are taking something that can be done on the CPU and moving it to the GPU. Go talk to Blizzard or any other MMO creator and you will see that people have better CPUs. Only when the hardware from the top of the line to the bottom has enough dedicated hardware to use tessellation can I recommend it.

Oh, just a side note: it is easier to use tessellation than to edit your own mesh :)
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Have you made one on-topic post in this entire thread?

The thread is about tessellation, specifically about comparing it on GTX480 and HD6970.

The topic has wandered somewhat but only in the breadth of the tessellation comparison, a change in scope that as moderator I feel is warranted and valid since HD6970 itself is an unannounced/unreleased product so some form of interpolation/extrapolation is bound to be invoked in due course of logical debate.

I'd rather not feed your tessellation rant

That is a personal insult and is not acceptable.

You are free to think it but you are not free to state it publicly. Keep the negativity to yourself.

Moderator Idontcare
 

Scali

Banned
Dec 3, 2004
2,495
1
0
While that is correct, the methods that Nvidia and ATI use are not compatible with the majority of algorithms that are widely used on the market.

The tessellator is fully programmable with a hull shader before tessellation (to determine the amount of tessellation and do control point setup etc), and a domain shader after tessellation (to process/adjust/correct the triangles generated by the tessellator).

You can implement a large variety of algorithms this way, including very popular algorithms such as bezier/b-spline and Catmull-Clark.

Perhaps not EVERY possible algorithm can be implemented at this point, but that certainly doesn't mean the tessellator isn't very useful.
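
To make that concrete: here is a rough C++ sketch of how an application wires those stages up in Direct3D 11. The device/context pointers and the compiled shader blobs are assumed to come from the application's own setup; it's an illustration, not production code.

#include <d3d11.h>

// Bind a hull and a domain shader so the fixed-function tessellator
// runs between them. 'device', 'context' and the compiled shader
// bytecode are assumed to exist already.
HRESULT EnableTessellation(ID3D11Device* device,
                           ID3D11DeviceContext* context,
                           const void* hsBytecode, SIZE_T hsSize,
                           const void* dsBytecode, SIZE_T dsSize)
{
    ID3D11HullShader* hs = nullptr;
    ID3D11DomainShader* ds = nullptr;

    HRESULT hr = device->CreateHullShader(hsBytecode, hsSize, nullptr, &hs);
    if (FAILED(hr)) return hr;
    hr = device->CreateDomainShader(dsBytecode, dsSize, nullptr, &ds);
    if (FAILED(hr)) { hs->Release(); return hr; }

    // Hull shader: computes tessellation factors and control points.
    // Domain shader: positions the vertices the tessellator generates.
    context->HSSetShader(hs, nullptr, 0);
    context->DSSetShader(ds, nullptr, 0);

    // The input assembler must feed patches rather than plain triangles.
    context->IASetPrimitiveTopology(
        D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCH_LIST);

    hs->Release(); // the context keeps its own references
    ds->Release();
    return S_OK;
}

The fixed-function tessellator sits between two fully programmable stages, which is exactly what makes algorithms like Catmull-Clark expressible on it.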

Tessellation does not improve "Image Quality". Image quality can only be improved by a better texture. It is useless to have tessellation enabled with a very low-resolution texture, as it really shows the image in a bad light.

Nonsense.
Firstly, you're assuming that textures are used in the first place. That is not necessarily true (although I agree it is rare in games).
Tessellation adds geometry detail, which WILL improve IQ, as surfaces can be made smoother/more detailed.
Secondly, texture quality is not really the breaking point in current games. They already contain VERY high-detail texture maps, bumpmaps etc. These are generally wrapped around relatively lowpoly objects, and shading (bumpmap/parallax) is used to create the impression of geometric detail.
Tessellation can replace this by REAL geometry, which also will be MSAA'ed properly, unlike bumpmapping approaches. Again, IQ improvements.

That is like having a perfectly created 3D human with a really bad texture. In reality, a better texture is better than 3D geometry, as it is easier on the video card. It just needs more/faster memory; that is why 2GB or even 4GB will be here in the future vs. fully utilized DX11 tessellation techniques.

In case you don't realize, a very common approach for tessellation is to use displacement maps (textures) to 'encode' or 'compress' the geometry.
It's not mutually exclusive. Thing is however, that displacement maps can be more compact than bumpmaps/heightmaps/horizonmaps that we need today, while at the same time delivering better image quality (see above: better MSAA etc).
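
Conceptually, what the domain shader does with a displacement map boils down to this little C++ sketch (the helper and its nearest-neighbour sampling are mine, kept deliberately short):

#include <cstddef>

struct Vec3 { float x, y, z; };

// Take a point the tessellator generated on the coarse surface and push
// it out along the normal by a height sampled from the displacement map.
// The mesh stays tiny; the detail lives in the map.
Vec3 Displace(Vec3 surfacePos, Vec3 normal,
              const float* heightMap, std::size_t mapWidth,
              float u, float v, float scale)
{
    // Nearest-neighbour sample for brevity; real code would filter.
    std::size_t x = static_cast<std::size_t>(u * (mapWidth - 1));
    std::size_t y = static_cast<std::size_t>(v * (mapWidth - 1));
    float h = heightMap[y * mapWidth + x] * scale;

    return { surfacePos.x + normal.x * h,
             surfacePos.y + normal.y * h,
             surfacePos.z + normal.z * h };
}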

Tessellation is not faster as a whole. If you have so much geometry that it cripples a GPU, then what is the point of the implementation?

That's if you look at it from the wrong side.
The point is not in "generating as much geometry as possible", but in rendering a certain amount of geometry as efficiently as possible. Given a fixed amount of geometry, tessellation will always be faster as polycounts increase, because you reduce the memory footprint and bandwidth requirements.
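
A back-of-envelope illustration of the footprint argument (all numbers made up, purely to show the shape of it):

#include <cstdio>

int main()
{
    const long denseVerts  = 1000000;        // fully tessellated mesh
    const long coarseVerts = 10000;          // control mesh sent to the GPU
    const long vertexBytes = 32;             // position + normal + UV
    const long mapBytes    = 512L * 512L;    // 8-bit displacement map

    long dense  = denseVerts  * vertexBytes;
    long sparse = coarseVerts * vertexBytes + mapBytes;

    std::printf("dense mesh:           %ld bytes\n", dense);
    std::printf("coarse mesh + map:    %ld bytes\n", sparse);
    std::printf("footprint reduction:  %.1fx\n",
                static_cast<double>(dense) / sparse);
    return 0;
}

Even with generous assumptions on the coarse side, the dense mesh costs dozens of times more storage, and every byte of that would otherwise have to travel over the bus.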

In fact, the main problem with tessellation is that you are taking something that can be done on the CPU and moving it to the GPU. Go talk to Blizzard or any other MMO creator and you will see that people have better CPUs. Only when the hardware from the top of the line to the bottom has enough dedicated hardware to use tessellation can I recommend it.

Tessellation can be done on the CPU... but there are two problems:
1) The GPU has far faster dedicated tessellation hardware. Even the lowest-end DX11 part beats the highest-end CPU here (just as even the fastest CPU doing software rendering can't outperform even the simplest Intel IGP in general).
2) Tessellation is done in realtime (as it is adaptive: tessellation factors depend on distance and/or screen space size of the polygons). This means that for every frame, you need to re-tessellate every object. The net effect of that is basically that your CPU needs to do all the geometry processing, generate tons of data, and then try to push it over the PCI-e bus to the GPU. We've had hardware T&L since the first GeForce, and this is exactly why: you cannot push the geometry around fast enough.
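
For illustration, this is roughly the per-patch computation a hull shader redoes every frame (constants and names are mine, not from any particular engine):

#include <algorithm>

// Distance-adaptive tessellation factor: full detail up close,
// falling off to no subdivision in the distance.
float TessFactor(float distanceToCamera)
{
    const float refDistance = 10.0f;  // full detail at/inside this range
    const float maxFactor   = 64.0f;  // the DX11 tessellator's upper limit
    const float minFactor   = 1.0f;   // no subdivision when far away

    float f = maxFactor * (refDistance / std::max(distanceToCamera, 0.001f));
    return std::min(maxFactor, std::max(minFactor, f));
}

Because this runs on the GPU, the retessellated geometry never has to cross the PCI-e bus; do it on the CPU and it does, every single frame.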

Really, trying to argue for CPU-based geometry processing was naive back in the early GeForce days... but today it's just ridiculous.

I think you shouldn't have made that post. I get the distinct impression that you just want to argue against tessellation because of brand loyalty. You throw some technical terms around, but have no idea about how it works in practice.
 
Last edited:

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
The tessellator is fully programmable with a hull shader before tessellation (to determine the amount of tessellation and do control point setup etc), and a domain shader after tessellation (to process/adjust/correct the triangles generated by the tessellator).

I know how tessellation works, my friend; nonetheless, thank you for explaining something that I am not trying to debate....

You can implement a large variety of algorithms this way, including very popular algorithms such as bezier/b-spline and Catmull-Clark.

You make it sound as if it is an easy task to take what you have been doing for years and move it to the specification needed for Nvidia and/or ATI. Have you worked in the industry, where a simple code change can turn out to ruin the entire 3D engine? Do you think it is as simple as a find-and-replace in WordPad?

People have steps and a specific process when it comes to developing games. You have to hand down the code generated up top and let the developers code with it. If I make one change, documentation, training, etc. all have to be worked out. It is a rather big undertaking.


Perhaps not EVERY possible algorithm can be implemented at this point, but that certainly doesn't mean the tessellator isn't very useful.

Did I ever say it wasn't very useful? PLEASE POINT ME TO WHERE I SAID IT.

Nonsense.
Firstly, you're assuming that textures are used in the first place. That is not necessarily true (although I agree it is rare in games).
Tessellation adds geometry detail, which WILL improve IQ, as surfaces can be made smoother/more detailed.
Secondly, texture quality is not really the breaking point in current games. They already contain VERY high-detail texture maps, bumpmaps etc. These are generally wrapped around relatively lowpoly objects, and shading (bumpmap/parallax) is used to create the impression of geometric detail.
Tessellation can replace this by REAL geometry, which also will be MSAA'ed properly, unlike bumpmapping approaches. Again, IQ improvements.

So you are on a kick about not using textures in the first place? I could give you the best tessellation algorithm and a video card to process it, and regardless, a plain texture looks BAD on the 3D geometry.

In case you don't realize, a very common approach for tessellation is to use displacement maps (textures) to 'encode' or 'compress' the geometry.
It's not mutually exclusive. Thing is however, that displacement maps can be more compact than bumpmaps/heightmaps/horizonmaps that we need today, while at the same time delivering better image quality (see above: better MSAA etc).

I understand the undertaking in that approach and I am not debating that. I will, however, debate the better image quality.

That's if you look at it from the wrong side.
The point is not in "generating as much geometry as possible", but in rendering a certain amount of geometry as efficiently as possible. Given a fixed amount of geometry, tessellation will always be faster as polycounts increase, because you reduce the memory footprint and bandwidth requirements.

This is where we have our problem. You are saying it is easier for a company to place specific hardware on a card to provide tessellation. OK, fine... That is your approach to the problem.

My approach to the problem is that you cannot have variations in tessellation units. I think Nvidia's 480 is fine; when you move down to the 460 or below, it starts to bother me. What I am expected to do is create a texture for the 3D mesh. I refuse to make three or four different textures to fit a 3D model. You scale down, meaning you make one high-quality texture and scale it down.

If I had three or four different levels of tessellation, I would have to make a texture to fit each of those levels. And I am surely not going to test the process on both Nvidia's and ATI's hardware.

A 3D mesh DOES look different at each level of tessellation. This forces me to create different textures for each level.

Tessellation can be done on the CPU... but there are two problems:
1) The GPU has far faster dedicated tessellation hardware. Even the lowest-end DX11 part beats the highest-end CPU here (just as even the fastest CPU doing software rendering can't outperform even the simplest Intel IGP in general).

*sigh* OK, let's keep on topic here.

Not all GPUs have dedicated tessellation hardware sufficient to run a fully fledged tessellation-heavy game. I am referring to the mass audience. I would rather ship a low-level default tessellation mesh for the general public that can run fine on a dual-core CPU. I will not ship a low-level tessellation mesh for the general public and have it run on fixed GPU hardware.

CPUs are not used enough in today's industry; why make it worse?

2) Tessellation is done in realtime (as it is adaptive: tessellation factors depend on distance and/or screen space size of the polygons). This means that for every frame, you need to re-tessellate every object. The net effect of that is basically that your CPU needs to do all the geometry processing, generate tons of data, and then try to push it over the PCI-e bus to the GPU. We've had hardware T&L since the first GeForce, and this is exactly why: you cannot push the geometry around fast enough.

Once again you are referring to a large number of tessellated objects.


Really, trying to argue for CPU-based geometry processing was naive back in the early GeForce days... but today it's just ridiculous.

That is your opinion and yours to keep.

I think you shouldn't have made that post. I get the distinct impression that you just want to argue against tessellation because of brand loyalty. You throw some technical terms around, but have no idea about how it works in practice.

Ok mate, whatever floats your boat.
 

maddie

Diamond Member
Jul 18, 2010
5,204
5,615
136
That's if you look at it from the wrong side.
The point is not in "generating as much geometry as possible", but in rendering a certain amount of geometry as efficiently as possible. Given a fixed amount of geometry, tessellation will always be faster as polycounts increase, because you reduce the memory footprint and bandwidth requirements.


Isn't that what AMD argued for?

And against Nvidia's push for extreme, unrealistic tessellation levels when testing cards.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
I also forgot to add that there are people working on caching for tessellation, which reduces the CPU processing time.