
AMD Radeon HD 6970 already benchmarked? Enough to beat GTX480 in Tessellation?

Page 11
At this point I doubt it too. The GTX 480 is awesome at tessellation, but AMD might just do it. They're aware they're behind right now, and since they decided to go all out with Cayman, who knows.

AMD is not behind in any way, stop spreading nonsense.

NV's tessellation only shines in benchmarks - because it's not a dedicated unit, its performance keeps falling as the shader load goes up.

AMD went with a fairly decent-sized dedicated part, so it never slows down regardless of shader load.
 
Incorrect.

Benching Civ 5 on my system with tessellation on vs off, running the LateGameView benchmark, fps is lower with tessellation on.

There is not one DX11 game on the market where your fps is higher with tessellation on. I own them all and waste time benching stuff out of curiosity.

Posting anecdotes from a random interview is a far cry from fact.

Why can't you stay on topic in the thread? 'AMD Radeon HD 6970 already benchmarked? Enough to beat GTX480 in Tessellation?'

My bet is the 6970 will be faster than the GTX480 in every single game out there, no matter DX9, 10 or 11, tessellation on or off.

With Nvidia, the higher the shader load goes, the lower your tessellation performance sinks, so it's obvious your framerate will greatly improve if you turn it off (freeing up resources for regular jobs).

They will always try to blur this, but this is a FACT.
 
With Nvidia, the higher the shader load goes, the lower your tessellation performance sinks, so it's obvious your framerate will greatly improve if you turn it off (freeing up resources for regular jobs).

They will always try to blur this, but this is a FACT.

No, as the shader load goes up, shader performance becomes a bigger and bigger bottleneck, diminishing any advantage in tessellation performance. Remember that the 5870 has an advantage in theoretical shader and texturing performance over the GTX 480.
 
No, as the shader load goes up, shader performance becomes a bigger and bigger bottleneck, diminishing any advantage in tessellation performance. Remember that the 5870 has an advantage in theoretical shader and texturing performance over the GTX 480.

What he's saying is that with AMD cards the tessellation units are not affected by the shader workload, whereas with Nvidia the same units are doing the work for both, so under a high shader load NV tessellation performance should go down while AMD's would not.

Disclaimer: I've seen no evidence either way so who knows.
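For what it's worth, the shared-vs-dedicated argument can be sketched as a toy frame-time model (every number here is hypothetical, purely to illustrate the two claims, not a measurement of either card):

```python
# Toy model of the two claims above -- not measurements.
# Dedicated tessellation hardware: tessellation overlaps with shading,
# so the slower of the two stages sets the frame time.
# Shared units: the two costs simply add up.

def frame_time_dedicated(shader_ms, tess_ms):
    return max(shader_ms, tess_ms)

def frame_time_shared(shader_ms, tess_ms):
    return shader_ms + tess_ms

tess_ms = 4.0  # hypothetical fixed tessellation cost per frame
for shader_ms in (5.0, 10.0, 20.0):
    print(f"shader {shader_ms:4.1f} ms -> "
          f"dedicated {frame_time_dedicated(shader_ms, tess_ms):4.1f} ms, "
          f"shared {frame_time_shared(shader_ms, tess_ms):4.1f} ms")
```

Either way, note that as the shader cost grows it dominates both models, which is the counter-argument above: a heavy shader load shrinks any tessellation advantage in relative terms.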
 
AMD is not behind in any way, stop spreading nonsense.

NV's tessellation only shines in benchmarks - because it's not a dedicated unit, its performance keeps falling as the shader load goes up.

AMD went with a fairly decent-sized dedicated part, so it never slows down regardless of shader load.

This is just false, although it would be true to say that in an actual gameplay situation (not just a benchmark), the bottleneck may be in the shaders rather than the tessellators. Heck the bottleneck could even be in the CPU!

NV has dedicated tessellation hardware. Period. See, e.g., http://www.anandtech.com/show/2918/2
 
AMD is not behind in any way, stop spreading nonsense.

NV's tessellation only shines in benchmarks - because it's not a dedicated unit, its performance keeps falling as the shader load goes up.

AMD went with a fairly decent-sized dedicated part, so it never slows down regardless of shader load.

Each SM contains 32 cores (CUDA cores, as they call them) and a 'PolyMorph engine'. The geometry (including tessellation) is handled by the PolyMorph engine, not the CUDA cores! Fermi contains 16 SMs (2 are disabled in the GTX 480). So no, using tessellation does NOT mean less performance for the shaders. Stop listening to Charlie D, he's the one spreading this false information.

Furthermore, GF100 can do 4 triangles/clock compared with 1/clock for previous-generation GPUs (both Nvidia and ATI) going back a long time. Not sure how much impact this has, however; I would guess none for current-gen game titles.
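As a back-of-the-envelope check on what 4 vs 1 triangles/clock means in absolute terms (the 700 MHz and 850 MHz figures are the cards' published reference core clocks; these are peak numbers that real games never reach):

```python
# Peak triangle setup rate = triangles/clock * core clock.
# Reference core clocks: GTX 480 ~700 MHz, HD 5870 ~850 MHz.

def setup_rate(tris_per_clock, core_clock_hz):
    return tris_per_clock * core_clock_hz

gtx480 = setup_rate(4, 700e6)  # GF100: 4 triangles/clock
hd5870 = setup_rate(1, 850e6)  # Cypress: 1 triangle/clock
print(f"GTX 480: {gtx480 / 1e9:.2f} Gtri/s, HD 5870: {hd5870 / 1e9:.2f} Gtri/s")
```

The higher Cypress clock trims the 4x per-clock figure to roughly 3.3x at peak.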
 
Each SM contains 32 cores (CUDA cores, as they call them) and a 'PolyMorph engine'. The geometry (including tessellation) is handled by the PolyMorph engine, not the CUDA cores! Fermi contains 16 SMs (2 are disabled in the GTX 480). So no, using tessellation does NOT mean less performance for the shaders. Stop listening to Charlie D, he's the one spreading this false information.

Furthermore, GF100 can do 4 triangles/clock compared with 1/clock for previous-generation GPUs (both Nvidia and ATI) going back a long time. Not sure how much impact this has, however; I would guess none for current-gen game titles.

Exactly and even AMD is going the way of NV, if the Cayman slide is true (that Cayman's tessellator will have off-chip buffering and scalability--which implies that AMD is also going to break down tessellation duties among 2 or more units rather than one, in order to extract more ILP).
 
AMD is not behind in any way, stop spreading nonsense.

NV's tessellation only shines in benchmarks - because it's not a dedicated unit, its performance keeps falling as the shader load goes up.

AMD went with a fairly decent-sized dedicated part, so it never slows down regardless of shader load.

Nice spin. AMD's unit goes to crap at factor 11, where Nvidia's part doesn't.

Edit: And forgot to also chime in about you being wrong that they're using CUDA cores for tessellation. Each cluster has its own dedicated unit, which is why the Nvidia side scales with the number of shaders it contains. Each new cluster allows for more tessellation power, unlike the AMD part, which has a single serial unit that gets crushed at low double-digit factors.

http://images.anandtech.com/reviews/video/NVIDIA/GF100/GF100small.png

Note how each SM has its own PolyMorph engine (tessellator).
 
Exactly and even AMD is going the way of NV, if the Cayman slide is true (that Cayman's tessellator will have off-chip buffering and scalability--which implies that AMD is also going to break down tessellation duties among 2 or more units rather than one, in order to extract more ILP).

They have to if they want to keep up. People rant and rave about the GF100, but what they don't seem to realize is that Nvidia did the dirty work of building an arch that incorporates a tessellator worth talking about and GPGPU that screams. AMD will now have to follow suit if they want to keep up. That means a bigger, hotter chip. Nothing is free.

What this kind of reminds me of is the NV30. The NV30 was an array of shaders vs the R300, which looked like a traditional 8-pipeline chip. Yeah, out of the gate the NV30 blew, but its follow-up was the NV40, which started a series of dominating performances by Nvidia while ATi fumbled around with design issues trying to play catch-up.
 
What this kind of reminds me of is the NV30. The NV30 was an array of shaders vs the R300, which looked like a traditional 8-pipeline chip. Yeah, out of the gate the NV30 blew, but its follow-up was the NV40, which started a series of dominating performances by Nvidia while ATi fumbled around with design issues trying to play catch-up.

Wait, I don't remember it quite like that. NV30 was poor in every regard. Fermi and Cypress are not even in the same league as NV30.

I'd also like to know about these "series of dominating performances by Nvidia. While ATi fumbled around with design issues trying to play catch up."

I distinctly remember ATI holding the performance lead up until G80.
 
Wait, I don't remember it quite like that. NV30 was poor in every regard. Fermi and Cypress are not even in the same league as NV30.

I'd also like to know about these "series of dominating performances by Nvidia. While ATi fumbled around with design issues trying to play catch up."

I distinctly remember ATI holding the performance lead up until G80.

Sure, it sucked from a performance perspective, but it departed from the traditional graphics pipeline, and Nvidia used that as the basis of their designs from there on out. ATi showed up with a similar design in the R400, which had problems shipping the highest-end part - the XT 800 XTX or something like that. Some people were on waiting lists for 6-8 months while the 6800 Ultra was available in much better quantity. The 7000 series cards were a hit, with high volume at launch. And then the G80 in 2006 blew away the competition; ATi was 6 months late with the X1800XT, and the X1900XT was too little, too late.
 
Sure, it sucked from a performance perspective, but it departed from the traditional graphics pipeline, and Nvidia used that as the basis of their designs from there on out. ATi showed up with a similar design in the R400, which had problems shipping the highest-end part - the XT 800 XTX or something like that. Some people were on waiting lists for 6-8 months while the 6800 Ultra was available in much better quantity. The 7000 series cards were a hit, with high volume at launch. And then the G80 in 2006 blew away the competition; ATi was 6 months late with the X1800XT, and the X1900XT was too little, too late.

The X850XT-PE (press edition) was the card that was nowhere to be seen, just like the 6800 Ultra Extreme. The 6800 and the X800 were widely available. The X700 was every bit as fast as the 6600. The X1800 was faster and more featured than the 7800, and the X1900 was much faster than the 7900. I don't see any fumbles over design issues there.
 
http://forum.beyond3d.com/showpost.php?p=1488243&postcount=4276

[attached image]


Groove ought to be pleased with the 2GB GDDR5 standard. 😉

Scali ought to be pleased that geometry is only 2x the previous generation and not up to 8x like Fermi was. 😉

VLIW4 is almost for sure now, not just because of this but because of driver details.

No definite statement on tessellator though.
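To illustrate why a move from VLIW5 to VLIW4 could help utilization (a toy packing model, not AMD's actual shader compiler, and the op counts are invented): each clock the compiler packs mutually independent ops into a fixed-width bundle, and any slot it can't fill is wasted issue capacity.

```python
import math

def bundles_needed(groups, width):
    # Each group of mutually independent ops needs ceil(n / width)
    # bundles; ops from different (dependent) groups can't share one.
    return sum(math.ceil(n / width) for n in groups)

def utilization(groups, width):
    # Fraction of issued slots that carry a real op.
    return sum(groups) / (bundles_needed(groups, width) * width)

# Hypothetical shader: sizes of independent-op groups between dependencies.
groups = [3, 2, 4, 1, 5, 2]
for width in (5, 4):  # VLIW5 (Cypress) vs VLIW4 (rumored Cayman)
    print(f"width {width}: {bundles_needed(groups, width)} bundles, "
          f"{utilization(groups, width):.0%} slot utilization")
```

With typical ILP in the 2-4 ops range, the narrower bundle wastes fewer slots per clock, at the cost of issuing more bundles overall.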
 
AMD is not behind in any way, stop spreading nonsense.

The GTX 480 is great at tessellation - what nonsense are you talking about? Relax, ok?

I know it won't make a difference in games, since the games we have these days barely use it, but nevertheless Nvidia has the stronger tessellator right now. I hope the 6970 changes that, as I'm planning on getting one.
 
geometry - how fast you can process triangles

Wait, Fermi was 8X compared to what? GT200?

And why was it so much, it doesn't seem to have made a huge difference.

[attached image]


Can someone extrapolate how large the GPU is by using the height of the card and the length of the PCI-E connector?
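The extrapolation asked for here is just similar-triangles scaling off a known reference dimension - a sketch, assuming a PCIe x16 edge connector is roughly 89 mm long and using made-up pixel measurements in place of real ones from the photo:

```python
# Estimate a physical size from a photo via a known reference length.
# Assumption: a PCIe x16 edge connector is ~89 mm long.
PCIE_X16_MM = 89.0

def estimate_mm(feature_px, ref_px, ref_mm=PCIE_X16_MM):
    # mm-per-pixel scale from the reference, applied to the feature.
    return feature_px * (ref_mm / ref_px)

# Hypothetical pixel measurements taken from the card photo:
connector_px = 356  # PCIe connector span in the image
package_px = 180    # GPU package span in the image
print(f"package ~ {estimate_mm(package_px, connector_px):.0f} mm across")
```

This only bounds the package/heatspreader, not the die itself, and it assumes the photo has negligible perspective distortion.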
 
The X850XT-PE (press edition) was the card that was nowhere to be seen, just like the 6800 Ultra Extreme. The 6800 and the X800 were widely available. The X700 was every bit as fast as the 6600. The X1800 was faster and more featured than the 7800, and the X1900 was much faster than the 7900. I don't see any fumbles over design issues there.

Not sure what red-colored glasses you are remembering with, but NV owned the market after the 6800GT came out. The 6600GT was the preferred sub-$200 part, the 6800GT was the best card at ~$250, and the 6800 Ultra was the high end. This continued into the 7800 series as well (mostly a re-badge job), but the 8800 came out pretty quickly and was then the crusher for a long time.

Time and time again, a new architecture marked as "slow" or "inefficient" has risen to be the next "big thing".

Think:
NV FX ------> 6xxx/7xxx/8xxx series
ATI 3xxx ---> ATI 4xxx/5xxx/6xxx series
NV Fermi -----> ????? (who knows)

There is no guarantee with Fermi, but it was designed as a GPGPU and tessellation powerhouse. I love the 4xxx/5xxx/6xxx series too (I own a 5870 and love it), but I also see the angle NV is taking. They retooled and are looking at Fermi as something to build on for a couple of generations. The strides they made from GF100 to GF104 in power efficiency have been marked, and if they continue, they could have a real winner on their hands.
 
Wait, Fermi was 8X compared to what? GT200?

And why was it so much, it doesn't seem to have made a huge difference.

Can someone extrapolate how large the GPU is by using the height of the card and the length of the PCI-E connector?

Compared to GT200, yes.

Triangles are not all that matter. You need to process things like lighting, texturing, etc., so things like the shaders can be the bottleneck no matter how fast you can process polygons. Historically there was a surge in the demands placed on shaders, hence why people talk about how many SPs Cayman XT might have, etc. But NV seems hellbent on pushing more triangles onto the screen, because it has a geometry and tessellation advantage. I approve of this and hope AMD follows suit.
 
Not sure what red-colored glasses you are remembering with, but NV owned the market after the 6800GT came out. The 6600GT was the preferred sub-$200 part, the 6800GT was the best card at ~$250, and the 6800 Ultra was the high end. This continued into the 7800 series as well (mostly a re-badge job), but the 8800 came out pretty quickly and was then the crusher for a long time.

Time and time again, a new architecture marked as "slow" or "inefficient" has risen to be the next "big thing".

Think:
NV FX ------> 6xxx/7xxx/8xxx series
ATI 3xxx ---> ATI 4xxx/5xxx/6xxx series
NV Fermi -----> ????? (who knows)

There is no guarantee with Fermi, but it was designed as a GPGPU and tessellation powerhouse. I love the 4xxx/5xxx/6xxx series too (I own a 5870 and love it), but I also see the angle NV is taking. They retooled and are looking at Fermi as something to build on for a couple of generations. The strides they made from GF100 to GF104 in power efficiency have been marked, and if they continue, they could have a real winner on their hands.

Price/performance is another discussion entirely. He said ATI were fumbling over design issues. I was comparing the capability of each card, not how much value they offered.

So do you agree with him saying R400 and R500 had "design issues"?
 
Compared to GT200, yes.

Triangles are not all that matter. You need to process things like lighting, texturing, etc., so things like the shaders can be the bottleneck no matter how fast you can process polygons. Historically there was a surge in the demands placed on shaders, hence why people talk about how many SPs Cayman XT might have, etc. But NV seems hellbent on pushing more triangles onto the screen, because it has a geometry and tessellation advantage. I approve of this and hope AMD follows suit.

At the expense of shaders?
 
http://forum.beyond3d.com/showpost.php?p=1488243&postcount=4276

[attached image]


Groove ought to be pleased with the 2GB GDDR5 standard. 😉

Scali ought to be pleased that geometry is only 2x the previous generation and not up to 8x like Fermi was. 😉

VLIW4 is almost for sure now, not just because of this but because of driver details.

No definite statement on tessellator though.

That 2GB GDDR5, would it still have to go via a 256-bit bus or have they doubled it?
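On the bus question, bandwidth and capacity are independent: bandwidth is bus width times effective data rate, while capacity comes from chip density and chip count. A quick sketch (the 4.8 Gbps per-pin rate is the HD 5870's, used here only as an assumed baseline):

```python
# GDDR5 bandwidth = (bus width in bytes) * effective data rate per pin.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return (bus_bits / 8) * gbps_per_pin

print(f"256-bit: {bandwidth_gbs(256, 4.8):.1f} GB/s")
print(f"512-bit: {bandwidth_gbs(512, 4.8):.1f} GB/s")
```

So 2GB of capacity fits either way: denser chips (or more chips per channel) raise capacity without widening the bus, which is why a 256-bit 2GB card is entirely plausible.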
 