AMD Radeon HD 6970 already benchmarked? Enough to beat GTX480 in Tessellation?

Page 8 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Gloomy

Golden Member
Oct 12, 2010
1,469
21
81
Judging from what has been said so far, it seems exceedingly foolish for AMD to put much stock in APUs without an adequate tessellation solution in the works. I just can't imagine them being so shortsighted.

So I'm going to assume that if they don't bring one out with the 69xx series, they'll do so along with the second generation APUs towards the end of 2011. This could essentially become the crowning glory of AMD solutions-- how would Intel reply to a chip that pushes remarkably good IQ despite bandwidth deficiencies?

The cards really are in AMD's hands right now. 2011 is going to be rather exciting, it seems...
 

Scali

Banned
Dec 3, 2004
2,495
1
0
It's not that cut and dry. But as a general rule, that's always correct. I actually don't see tessellation's main advantage being memory savings.

Well, it is. Firstly, because you don't need multiple instances of the same object at different detail levels.
Secondly, because you don't need high-resolution meshes to get the same image quality or better than what you get now. The low-res one is enough.

The actual meshes don't use all that much memory. Textures use far more, for instance.

Not sure if you paid attention to the City Of The Future demo, but they explained how they generate the detail using displacement maps. So they basically replaced geometry with textures, and that's how they got their savings.

Its biggest asset is not trying to render unnecessary polygons.

Not really, we already avoid that with LOD. We can just do that more accurately and more efficiently now.
Another huge advantage is that you can do animation before tessellation, greatly reducing vertex shader load.
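To put rough numbers on the memory argument, here's a quick back-of-envelope sketch. The vertex counts and vertex size are made up for illustration, not taken from any actual game:

```python
# Compare storing several discrete LOD meshes against one coarse base mesh
# that the GPU tessellates on demand. All numbers are illustrative.

VERTEX_SIZE = 32  # bytes: position + normal + UV, a common vertex layout


def mesh_bytes(vertex_count):
    """Memory for a mesh's vertex data (index buffers ignored for simplicity)."""
    return vertex_count * VERTEX_SIZE


# Classic LOD chain: author and store the same object at several detail levels.
lod_chain = [100_000, 25_000, 6_000, 1_500]  # vertices per LOD level
classic = sum(mesh_bytes(v) for v in lod_chain)

# Tessellation: store only the coarsest mesh (plus a displacement map texture);
# the higher detail levels are generated on the GPU at draw time.
tessellated = mesh_bytes(1_500)

print(f"discrete LODs:  {classic / 1e6:.2f} MB of vertex data")
print(f"base mesh only: {tessellated / 1e6:.2f} MB (+ displacement map)")
```

The mesh data itself is small either way, which is the point Scali makes above: the real saving in the City demo came from replacing dense geometry with displacement map textures.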
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Well, it is. Firstly, because you don't need multiple instances of the same object at different detail levels.
Secondly, because you don't need high-resolution meshes to get the same image quality or better than what you get now. The low-res one is enough.



Not sure if you paid attention to the City Of The Future demo, but they explained how they generate the detail using displacement maps. So they basically replaced geometry with textures, and that's how they got their savings.



Not really, we already avoid that with LOD. We can just do that more accurately and more efficiently now.
Another huge advantage is that you can do animation before tessellation, greatly reducing vertex shader load.

It's really painful trying to discuss something on a forum. Too much back and forth and misunderstanding. Displacement maps, while they are materials stored as image files, aren't the textures I was referring to. Absolutely tessellation is a far better way of generating LOD than with separate models, but in the end that is exactly what it is doing, generating models with greater or fewer polygons.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
It's really painful trying to discuss something on a forum. Too much back and forth and misunderstanding.

If there's a misunderstanding, then why didn't you bother to try and explain what you mean?

Absolutely tessellation is a far better way of generating LOD than with separate models, but in the end that is exactly what it is doing, generating models with greater or fewer polygons.

And what do you mean by that statement?
It's a better way of achieving the same goal? Yea, sure... but I don't understand your use of the word 'but' in that sentence. It still feels like you're somehow trying to downplay it, or aren't fully willing to admit that you've changed your view a bit from your earlier posts (where you didn't seem to see much use in tessellation at all).
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Well, I haven't changed my view. I know the importance of controlling poly count, and that doing it dynamically is the better way. The only thing I wasn't sure of was how much the poly count increases at different levels of tessellation. It's nice to know, and it makes sense, that it doesn't grow exponentially to the same degree as SubD modeling in a modeling app does. You were talking about factors of 32 and 64, and those would push poly counts to unbelievable levels in my modeling app. Still, I'm doubtful of the need for those values in games. But that's my opinion and you have yours.
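A quick sketch of the difference being discussed here (my own illustration, not from any vendor docs): a hardware tessellation factor divides each patch edge, so a factor of 64 yields roughly 64x64 sub-triangles per input triangle. SubD in a modeling app instead iterates, quadrupling the face count per level, so "64" as an iteration count would be astronomical:

```python
# Why a hardware tessellation factor of 64 is far tamer than
# 64 subdivision *iterations* in a modeling app.


def hw_tessellation(base_tris, factor):
    # A tessellation factor f splits each patch edge into f segments,
    # producing roughly f*f sub-triangles per input triangle.
    return base_tris * factor * factor


def subd_levels(base_faces, levels):
    # Catmull-Clark-style subdivision quadruples the face count per level.
    return base_faces * 4 ** levels


base = 1_000
print(hw_tessellation(base, 64))  # 1000 * 64^2 = ~4.1 million triangles
print(subd_levels(base, 6))       # 1000 * 4^6  = ~4.1 million faces
print(subd_levels(base, 10))      # 1000 * 4^10 = ~1 billion faces
```

So factor 64 lands around the same ballpark as six SubD levels, not sixty-four.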

The misunderstanding was with the term "textures". I was talking about the hi-res bitmap files (using "bitmap" as a generic term, as they don't have to be .bmp files) that you use to "paint" the surfaces of the model.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Still, I'm doubtful of the need for those values in games. But that's my opinion and you have yours.

Well, it's mostly useful for large objects which you may view close up.
Think about terrain for example.
In theory you'd only need one quad to represent the terrain for an entire level in a game.
At any given moment, only a small part of this quad is on screen.
So dividing up the quad with a factor of 64x64 would yield 4096 polygons (of which only a portion would be in view at a time), which is still very low.
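The arithmetic, spelled out (trivial, but it shows how modest the numbers stay even at the maximum factor):

```python
# One quad for the whole terrain, tessellated with a factor of 64
# in each direction.
factor = 64
sub_quads = factor * factor
print(sub_quads)      # 4096 sub-quads covering the entire level

# Even counted as triangles (two per quad), the total stays tiny by
# modern standards, and only a portion is ever on screen at once.
print(sub_quads * 2)  # 8192 triangles
```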

The misunderstanding was with the term "textures". I was talking about the hi-res bitmap files (using "bitmap" as a generic term, as they don't have to be .bmp files) that you use to "paint" the surfaces of the model.

Well, I doubt that the displacement maps they used in the City demo were any lower detail than the colourmaps. There was a LOT of geometry detail in the statues and such.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Where is your link that was responsible for my reply?

Link for what? You mean GeForce3 supporting RT-patches?
http://en.wikipedia.org/wiki/Microsoft_Direct3D
Tessellation was earlier considered for Direct3D 10, but was later abandoned. GPUs such as Radeon R600 feature a tessellation engine that can be used with Direct3D 9/10[19] and OpenGL, but it's not compatible with Direct3D 11 (according to Microsoft). Older graphics hardware such as Radeon 8xxx and GeForce 3/4 had support for another form of tessellation (RT patches, N patches), but those technologies never saw substantial use. As such, their support was dropped from newer hardware.

If you have a Radeon 8500 or GeForce3/4 (and drivers supporting it, they dropped support at some point), you can see the support with DXCapsViewer.
Look for D3DDEVCAPS_RTPATCHES and D3DDEVCAPS_NPATCHES.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Oh that?
Try this: http://arstechnica.com/hardware/new...able-debacle-intel-pushes-microsoft-bends.ars

You know, you could have googled for that.

I tried to google for your claims, but found absolutely nothing. Apparently you didn't find anything either. Next time, don't bother posting things you cannot back up.


GameInformer: The way I look at it is, if I bought a DX10 card already for $500, I’ve got a card that isn’t going to support the DX10.1 features.

Yerli: The features that it supports are not critical. The difference is so minimal. You would have to have two more generations of graphics hardware to really consider making a DX10.1 only game where the [DX10.1] features then would become significant if actually used right. I didn’t look at 10.1 because for me, I just looked at our engineers and said, “No, don’t need it in the next 12 months.” (laughs) That’s all I need to know right now.

From what I understand, even if you had the next generation of 10.1 hardware, it would be too slow to use the features. You would have to wait two more generations in order to get a real benefit from it. Remember, Matrix introduced environmental bump mapping almost 6-7 years ago? Normal Mapping and bump mapping just made it in the four years since then. In fact, Far Cry was the first normal mapped game to ship. When you look at it this way, it’s the same as 10.1. 10.1 will become actual two or three years from now. But not now.


Another:
http://www.techradar.com/news/computing-components/graphics-cards/nvidia-directx-10-1-won-t-make-difference--144509
 

Scali

Banned
Dec 3, 2004
2,495
1
0
From what I understand, even if you had the next generation of 10.1 hardware, it would be too slow to use the features. You would have to wait two more generations in order to get a real benefit from it. Remember, Matrix introduced environmental bump mapping almost 6-7 years ago? Normal Mapping and bump mapping just made it in the four years since then. In fact, Far Cry was the first normal mapped game to ship. When you look at it this way, it’s the same as 10.1. 10.1 will become actual two or three years from now. But not now.


Another:
http://www.techradar.com/news/computing-components/graphics-cards/nvidia-directx-10-1-won-t-make-difference--144509

You can't just compare things.
Firstly, Matrox (not Matrix) introduced EMBM, which is very different from normalmapping, which was introduced with nVidia's GeForce. Problem was, it was useless because it required a lot of CPU setup. GeForce 3 added programmable vertex shaders, and since then, pretty much every game used normalmaps.

Secondly, the problem with most techniques is that they cost performance. Tessellation can actually IMPROVE performance. So this is not a case of "waiting until the hardware is fast enough".

Lastly, this is DX11, not DX10.1 (a major revision, not a minor one). 10.1 will never become actual because DX11 has already superseded it. This anti-DX10.1 talk was from nVidia for an obvious reason: nVidia didn't have DX10.1.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
You can't just compare things.
Firstly, Matrox (not Matrix) introduced EMBM, which is very different from normalmapping, which was introduced with nVidia's GeForce. Problem was, it was useless because it required a lot of CPU setup. GeForce 3 added programmable vertex shaders, and since then, pretty much every game used normalmaps.

Secondly, the problem with most techniques is that they cost performance. Tessellation can actually IMPROVE performance. So this is not a case of "waiting until the hardware is fast enough".

Lastly, this is DX11, not DX10.1 (a major revision, not a minor one). 10.1 will never become actual because DX11 has already superseded it. This anti-DX10.1 talk was from nVidia for an obvious reason: nVidia didn't have DX10.1.


Dude, that was from the GameInformer article LOL! You are arguing with an Nvidia senior rep hahah!
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Dude, that was from the GameInformer article LOL! You are arguing with an Nvidia senior rep hahah!

What's so funny about that?
As I said, nVidia was just being anti-DX10.1 because they didn't have DX10.1 hardware out, nor planned.
Not like you should take that seriously, it's just marketing talk, just like AMD's nonsense about tessellation.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Dude, That was from the GameInformer article LOL! You are arguing with a Nvidia Senior rep hahah!

Am I the only one who noticed the irony in this?

(And the sad pattern of personally going after Scali, not his arguments)
 

Madcatatlas

Golden Member
Feb 22, 2010
1,155
0
0
It looks like no one has answered the OP's question. Here goes:

The AMD Radeon HD 6970 has not been benchmarked, and we don't know how it compares to the GTX480 in tessellation, nor whether (or by how much) it beats the GTX480 in overall performance.

When it's benchmarked, again, as always, AnandTech and HardOCP (for me personally) will have the numbers and gameplay laid out neatly for you to see.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Judging from what has been said so far, it seems exceedingly foolish for AMD to put much stock in APUs without an adequate tessellation solution in the works. I just can't imagine them being so shortsighted.

I'd agree... except they are fighting budget cut after budget cut, and the competition from their perspective is not Nvidia's discrete GPUs but rather Intel's IGP.
 


Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Oh okay, yes I am well familiar with what AMD did as far as reducing their head count. I had in my mind that this had something to do with significant R&D cuts that were occurring now, something along those lines.

No nothing recent, but yes the fusion team was hit with them a while back. My point was just that in the fiscal environment that Fusion was born from we can't expect it to have been designed with anything but the absolute "must haves" and core competition in mind.

That guy, just not leader material. Dirk Meyer is leagues ahead in both appearance and execution so far it seems.

IF only he got good at...or wait is it a good thing to get good at marketing?...

Holy crap, I actually subconsciously typed out Ruinz instead of Ruiz...lol that was not intentional.

If you know much about Dirk, his DEC Alpha days, and his role in the original K7 Athlon development, then you'd no doubt have a deep respect for his potential to do an Andy Grove with AMD. Likewise, if you knew Hector's legacy at Moto, then you would not have been overly surprised by his performance at AMD.

Sometimes, on rare occasion, past performance IS indicative of future results ;)
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
No nothing recent, but yes the fusion team was hit with them a while back. My point was just that in the fiscal environment that Fusion was born from we can't expect it to have been designed with anything but the absolute "must haves" and core competition in mind.



Holy crap, I actually subconsciously typed out Ruinz instead of Ruiz...lol that was not intentional.

If you know much about Dirk, his DEC Alpha days, and his role in the original K7 Athlon development, then you'd no doubt have a deep respect for his potential to do an Andy Grove with AMD. Likewise, if you knew Hector's legacy at Moto, then you would not have been overly surprised by his performance at AMD.

Sometimes, on rare occasion, past performance IS indicative of future results ;)

Didn't I hear somewhere that AMD hired a bunch of people recently?

And who is Andy Grove?
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
You can't just compare things.
Firstly, Matrox (not Matrix) introduced EMBM, which is very different from normalmapping, which was introduced with nVidia's GeForce. Problem was, it was useless because it required a lot of CPU setup. GeForce 3 added programmable vertex shaders, and since then, pretty much every game used normalmaps.

Secondly, the problem with most techniques is that they cost performance. Tessellation can actually IMPROVE performance. So this is not a case of "waiting until the hardware is fast enough".

Lastly, this is DX11, not DX10.1 (a major revision, not a minor one). 10.1 will never become actual because DX11 has already superseded it. This anti-DX10.1 talk was from nVidia for an obvious reason: nVidia didn't have DX10.1.
That's total BS. It improves IQ, not performance. You have to pay for better IQ.