AMD Radeon HD 6970 already benchmarked? Enough to beat the GTX 480 in Tessellation?


3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The subdivision factor is probably not the same as the tessellation factors in DX11. Sounds like it adds polys far faster than what DX11 does (could be a recursive algo, each factor subdividing each quad into 4? Then you'd get 2.9M at 6 subdivisions).
The tessellation factor for a quad is just the number of segments to generate along an edge.
So if you use a tessellation factor of 6, you'd get 6x6 = 36 quads.
So 728 polys would turn into 728*36 = 26k polys, not quite in the millions.

Well, even in your example I don't see a need for 32x or 64x, etc. for gaming. I think this is why AMD Tess has been fine for actual games.
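
To make the difference between the two schemes concrete, here's a rough Python sketch (the 728-quad base mesh is the figure from the example above; both formulas are illustrative growth models, not an exact description of any hardware tessellator):

Code:
BASE_QUADS = 728

def recursive_subdivision(quads, levels):
    # Catmull-Clark-style: each level splits every quad into 4.
    return quads * 4 ** levels

def dx11_tess_factor(quads, factor):
    # DX11-style quad patch: integer factor N yields roughly N*N quads.
    return quads * factor ** 2

print(recursive_subdivision(BASE_QUADS, 6))  # 2,981,888 -> the ~2.9M figure
print(dx11_tess_factor(BASE_QUADS, 6))       # 26,208    -> the ~26k figure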
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Partly incorrect: GeForce3 supported RT-patches in DirectX 8.
The rest, that's a pretty strong accusation, have any proof to back it up?

It's pretty common knowledge that DX10 was originally going to include tessellation, but Nvidia didn't have a card capable of it, so they coerced Microsoft into keeping tessellation out of DX10 so Nvidia could call their cards DX10 compliant.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Well, even in your example I don't see a need for 32x or 64x, etc. for gaming. I think this is why AMD Tess has been fine for actual games.

True, those are only for extreme cases...
But AMD's hardware is really only reasonable at factors 1-11.
I think you'll have to admit that the range 11-32 can be quite useful in normal scenarios.

Aside from that, try not to think in terms of 'for gaming'.
Technology redefines what games look and perform like all the time, and tessellation is going to be as big a leap in visual detail as per-pixel lighting was in the early days of programmable shaders. Back then, did anyone say "vertex lighting is good enough for gaming"? No, I don't think so.
One major advantage is perfectly smooth LOD, no more 'popping' objects into more/less detail as they move closer/further away.
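
For that smooth-LOD point, a minimal sketch of the usual distance-based scheme (plain Python standing in for a hull shader; the constants are made-up assumptions, not from any real engine):

Code:
def tess_factor(distance, base_detail=64.0, min_f=1.0, max_f=64.0):
    # The factor varies continuously with view distance, so detail
    # fades in and out instead of popping between discrete LOD meshes.
    return max(min_f, min(max_f, base_detail / max(distance, 1.0)))

for d in (1, 2, 4, 8, 16, 32, 64):
    print(d, tess_factor(d))  # 64, 32, 16, 8, 4, 2, 1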
 

Scali

Banned
Dec 3, 2004
2,495
1
0
It's pretty common knowledge that DX10 was originally going to include tessellation, but Nvidia didn't have a card capable of it, so they coerced Microsoft into keeping tessellation out of DX10 so Nvidia could call their cards DX10 compliant.

If it's common knowledge, then you should be able to produce a good reliable source... but I see none.
Come up with a source, or drop the subject. We have four people repeating the same thing now, without the slightest shred of evidence. That doesn't make it any more true. It's just derailing the thread. If anyone else gets any ideas to 'chip in their 2 cents', don't.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
You mean Intel, not nVidia.

I meant NV, and you of course knew that. I'm not worried a bit about Intel's tessellation on 22nm Ivy Bridge. Intel has said for a long time that they like the idea of tessellation in games; Intel also likes global illumination.

Back to NV's grip on Vista. ATI did the Xbox 360 for MS, and I think because of that relationship ATI had inside info. So basically MS did the right thing holding off until later.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
I meant NV, and you of course knew that.

No I don't. I know that Intel wanted Microsoft to lower the Vista requirements because otherwise they'd have a ton of IGPs that would not be "Vista capable".
I also know that Intel's DX10-capable IGPs support the absolute minimum of the DX10 standard.
nVidia's DX10 hardware, however, supports a variety of DX10.1 features.

Tessellation was a standard feature of DX8. In DX9 it went into obscurity (and even AMD started to simulate TruForm in software on their DX9 parts), and in DX10 it wasn't there either.
The DX11 tessellator is nothing like anything AMD ever dreamed of (fully programmable, rather than the fixed units AMD used up to that point), and their implementation is so much slower than nVidia's. Those facts don't exactly convince me that it was AMD that had been pushing for this functionality all along.
 

Spyhawk

Junior Member
Oct 25, 2010
11
0
66
I suggest we move all this talk on tessellation over to its own thread and go back to speculating on Cayman. Let's keep it clean and devoid of all references to anything else. Please?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
True, those are only for extreme cases...
But AMD's hardware is really only reasonable at factors 1-11.
I think you'll have to admit that the range 11-32 can be quite useful in normal scenarios.

Aside from that, try not to think in terms of 'for gaming'.
Technology redefines what games look and perform like all the time, and tessellation is going to be as big a leap in visual detail as per-pixel lighting was in the early days of programmable shaders. Back then, did anyone say "vertex lighting is good enough for gaming"? No, I don't think so.
One major advantage is perfectly smooth LOD, no more 'popping' objects into more/less detail as they move closer/further away.

Well, actually I'm not too sure about needing above 11. I'd have to know how much it really subdivides the model. My example of a ~700 poly base model is an extremely low-poly model. I'm actually making it with the intention of seeing how basic I can make the model and then using HyperNURBS to add detail/smoothness. No way I'd use so few polygons for the base model I'm doing if it was intended for a game. Way too much work! ;)

Keep in mind that we are talking about gaming cards when we refer to the 6800. If I'm using a card for gaming I don't want to have to pay for one that is designed for something far greater than my needs.
 

Paratus

Lifer
Jun 4, 2004
17,760
16,111
146
Somebody find the gif of the guy beating a dead horse.

OT - are there any rumors of the 6900's shader count?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I've been hearing 1920 SPs a lot.


If that ends up being true, do we think we'll be looking at HD 6850 CrossFire O/C'd to 850MHz? It would be a pretty monstrous single GPU even if it's not any faster than Barts shader for shader. :EVIL:
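
Back-of-the-envelope math, assuming the rumored 1920 SPs at 850MHz, the 6850's known 960 SPs at 775MHz, and AMD's usual 2 FLOPs per SP per clock (the Cayman numbers are pure speculation at this point):

Code:
def gflops(sps, mhz):
    # AMD VLIW stream processors: 1 MAD = 2 FLOPs per clock.
    return sps * 2 * mhz / 1000.0

print(gflops(960, 775))   # HD 6850: 1488 GFLOPS
print(gflops(1920, 850))  # rumored Cayman: 3264 GFLOPS, ~2.2x a stock 6850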
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
No I don't. I know that Intel wanted Microsoft to lower the Vista requirements because otherwise they'd have a ton of IGPs that would not be "Vista capable".
I also know that Intel's DX10-capable IGPs support the absolute minimum of the DX10 standard.
nVidia's DX10 hardware, however, supports a variety of DX10.1 features.

Tessellation was a standard feature of DX8. In DX9 it went into obscurity (and even AMD started to simulate TruForm in software on their DX9 parts), and in DX10 it wasn't there either.
The DX11 tessellator is nothing like anything AMD ever dreamed of (fully programmable, rather than the fixed units AMD used up to that point), and their implementation is so much slower than nVidia's. Those facts don't exactly convince me that it was AMD that had been pushing for this functionality all along.

Tess had nothing to do with that. Don't act as if we here at AT have been hiding under a rock. Tess on Intel IGPs wasn't a problem for Intel, as the IGP wasn't a gaming platform. It was that other issue that had Intel worried, and rightly so. But that wasn't really resolved.
 

Dark Shroud

Golden Member
Mar 26, 2010
1,576
1
0
No I don't. I know that Intel wanted Microsoft to lower the Vista requirements because otherwise they'd have a ton of IGPs that would not be "Vista capable".
I also know that Intel's DX10-capable IGPs support the absolute minimum of the DX10 standard.
nVidia's DX10 hardware, however, supports a variety of DX10.1 features.

If I or anyone else who agreed with me had been talking about Intel, we would have said Intel and not Nvidia.

Nor did Intel's issue with Vista ready/capable chipsets & IGPs have anything to do with MS's DirectX features.

Tessellation was a standard feature of DX8. In DX9 it went into obscurity (and even AMD started to simulate TruForm in software on their DX9 parts), and in DX10 it wasn't there either.
The DX11 tessellator is nothing like anything AMD ever dreamed of (fully programmable, rather than the fixed units AMD used up to that point), and their implementation is so much slower than nVidia's.

Which "implementation" are you referring to, 5000 or 6000 series? Because the 6800 series is already faster than the 5000. Then there is the 6900 series that we have not even seen specs of, yet people already seem content to pass judgment on them.

Those facts don't exactly convince me that it was AMD that had been pushing for this functionality all along.

I don't recall anyone saying AMD was "pushing for" tessellation. Many of us are simply saying that AMD had it first and no one really used it or cared. Then all of a sudden Nvidia releases a pumped-up version and suddenly it's the second coming. After all, Fermi is riding a one-trick pony named Tessellation. The problem for all of us is that the pony will need a year or two before it can get up to a gallop.

Meanwhile AMD has Morphological Anti-Aliasing, which is working right now and even gives older games a noticeable improvement. Nvidia, on the other hand, is working on games with Nvidia vendor ID locks in the software to hinder AMD cards.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Nvidia, on the other hand, is working on games with Nvidia vendor ID locks in the software to hinder AMD cards.

<.<' Nvidia don't play nice, they fight dirty.

They also get in bed with game developers and software-bomb AMD cards if they can get away with it, e.g. the mess-up with HAWX 2.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Nvidia, on the other hand, is working on games with Nvidia vendor ID locks in the software to hinder AMD cards.

<.<' Nvidia don't play nice, they fight dirty.

They also get in bed with game developers and software-bomb AMD cards if they can get away with it, e.g. the mess-up with HAWX 2.
I will buy hardware from the vendor who gets in bed with game developers rather than the one that doesn't.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I wonder if these will be the fabled 4D shaders.

That's what they're saying. I sure would like to know what happened to my 1920 reply; it was here and now it's missing. Strange, that. Could have been another thread though. I think not, but I'm struggling with recall. LOL, it's up there, I just didn't see it.
 

buckshot24

Diamond Member
Nov 3, 2009
9,916
85
91
We get it. Wait for the 6970 to actually come out, and then, if this leak is false, commence bitching about it; otherwise there are other threads to whine over the 6870 not performing as well as a GTX 460 in synthetic benchmarks.
LOL No kidding, it's ridiculous. Anyway...

Didn't the 3870 beat the 8800 GT in 3DMark? Which card did people want back then?
 

Scali

Banned
Dec 3, 2004
2,495
1
0
No way I'd use so few polygons for the base model I'm doing if it was intended for a game. Way too much work! ;)

Well, actually that is the point.
Keep the polycount as low as possible.
This means there is much less of a memory footprint. Especially interesting in games with large levels (e.g. Crysis). Those games currently have to stream a lot of data in/out from the hard disk, over the PCI-e port, which greatly affects performance and gaming experience.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Tess had nothing to do with that. Don't act as if we here at AT have been hiding under a rock. Tess on Intel IGPs wasn't a problem for Intel, as the IGP wasn't a gaming platform. It was that other issue that had Intel worried, and rightly so. But that wasn't really resolved.

Provide links from authoritative sources, or keep your silence.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
If I or anyone else who agreed with me had been talking about Intel, we would have said Intel and not Nvidia.

You are drowning in a lack of proof.

Which "implementation" are you referring to, 5000 or 6000 series? Because the 6800 series is already faster than the 5000. Then there is the 6900 series that we have not even seen specs of, yet people already seem content to pass judgment on them.

Of course the 6000 series is faster than the 5000 series; it's pretty hard to be slower when the original is that poor.
But it's not faster in the places where the biggest bottleneck is, and it's still much slower than nVidia's.

Then all of a sudden Nvidia releases a pumped-up version and suddenly it's the second coming.

If you knew anything about graphics, you'd know how important tessellation is. There is no discussion about that. And nVidia's implementation is the first one that is actually useful.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Well, actually that is the point.
Keep the polycount as low as possible.
This means there is much less of a memory footprint. Especially interesting in games with large levels (e.g. Crysis). Those games currently have to stream a lot of data in/out from the hard disk, over the PCI-e port, which greatly affects performance and gaming experience.

It's not that cut and dried, but as a general rule that's correct. I actually don't see tessellation's main advantage being memory savings. The actual meshes don't use all that much memory; textures use far more, for instance. Its biggest asset is not having to render unnecessary polygons.
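
A rough illustration of that meshes-vs-textures point (the 32-byte vertex layout and 4-byte indices are assumptions for the sketch; the 728 and ~26k quad counts come from the earlier example):

Code:
VERTEX_BYTES = 32  # position + normal + UV, a common layout
INDEX_BYTES = 4

def mesh_bytes(quads):
    verts = quads + 2  # Euler estimate for a closed quad mesh
    return verts * VERTEX_BYTES + quads * 6 * INDEX_BYTES  # 2 tris per quad

print(mesh_bytes(728))        # ~41 KB for the 728-quad base mesh
print(mesh_bytes(728 * 36))   # ~1.4 MB even tessellated to ~26k quads
print(2048 * 2048 * 4)        # 16 MB for one uncompressed 2048^2 RGBA texture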