AMD V Nvidia by Richard Huddy


Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
You basically just fell for the vendor telling you that the card is okay?
Just like AMD is now trying to tell us that their tessellation is okay?



Not that I know of...
But we know that the best-case scenario for the 6870 is 2x the tessellation performance of the 5870.
So doubling the 5870 scores would be the best case, and that's still slower than the 460.
I think it's pretty obvious that the 6870 isn't going to beat the 460 in practice.



I don't see the problem. It demonstrates a huge bottleneck.
The bottleneck is less pronounced in actual games (and the HAWX 2 benchmark), but it's not going to disappear.
AMD's own slides already indicate the problem: above tessellation factor 11, the 6870 tanks back down to 5870 levels, and we already know that the 5870 has been struggling when tessellation is involved.
Now you just need to convince yourself that the tessellation factor can go all the way up to 64x for a good reason. They didn't make it like that just so they could stop at a mere 11.
I already mentioned the example of using a 12-triangle cube as a basis for creating a perfect sphere by tessellating it into infinity... Try to work with that idea.
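
To make that concrete, here's a minimal Python sketch of the idea (standard library only; the mesh layout is an illustrative assumption): each subdivision level splits every triangle into four and projects the new vertices onto the unit sphere, so the 12-triangle cube converges on a perfect sphere while the triangle count explodes.

```python
# Sketch: refine a 12-triangle cube toward a perfect sphere.
# Every subdivision level splits each triangle into 4 and pushes the
# new vertices out onto the unit sphere, so the count grows 4x per level.
import math

def normalize(v):
    # project a vertex onto the unit sphere
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(tris):
    # split each triangle into 4 smaller ones lying on the sphere
    out = []
    for a, b, c in tris:
        ab = normalize(midpoint(a, b))
        bc = normalize(midpoint(b, c))
        ca = normalize(midpoint(c, a))
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# the 8 cube corners projected onto the sphere...
corners = [normalize((x, y, z)) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
# ...and the 12 triangles (two per cube face, given as corner indices)
faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
         (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
tris = [t for q in faces
        for t in ((corners[q[0]], corners[q[1]], corners[q[2]]),
                  (corners[q[0]], corners[q[2]], corners[q[3]]))]

for level in range(5):
    print(f"level {level}: {len(tris)} triangles")
    tris = subdivide(tris)
```

Five levels already turn 12 triangles into 3,072; that amplification is exactly what the hardware tessellator is asked to do on the fly.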

It might not be enough to close the gap to a 460 in TessMark, but in a game I'm pretty sure it would.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
It might not be enough to close the gap to a 460 in TessMark, but in a game I'm pretty sure it would.

I'm pretty sure it wouldn't.
Let me try to explain, and again, please try to work with me.
You understand how a pipeline works, roughly?
Well, basically you start off with a bunch of vertices...
You pass them through the vertex shaders; every three of them form a triangle.
These triangles are fed to the triangle setup and rasterizer logic, which divides each one up into pixels and fires off the pixelshaders.

So basically:
vertex shaders -> triangle rasterizer setup -> pixelshaders

Now, tessellation comes in... For each triangle, you can apply subdivision into smaller triangles... So we insert the tessellator:
vertex shaders -> tessellator -> triangle rasterizer setup -> pixelshaders

Now, here's your problem... The tessellator on AMD hardware is still connected to that single triangle setup unit. But now a single triangle may generate tens or even hundreds of triangles... There is your bottleneck. You can't feed them to your pixelshaders quickly enough.
It doesn't matter how fast your pixelshaders are, they're not being used anyway! Which is why in extreme cases we see a 5770 getting the same framerates as a 5870.
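
To put rough numbers on that, here's a back-of-envelope sketch; the 850 MHz setup clock and ~27 Gpixels/s of shader-side capacity are illustrative assumptions, not measured specs. With one triangle per clock through a single setup unit, the only variable left is how many pixels each triangle covers:

```python
# Back-of-envelope model: one setup unit passes ~1 triangle per clock,
# so the pixel rate it can deliver is (clock * pixels per triangle),
# regardless of how much shader power sits behind it.
# All figures are illustrative assumptions.
setup_clock_hz = 850e6   # triangles/second through a single setup unit
pixel_capacity = 27e9    # pixels/second the shaders/ROPs could absorb

for pixels_per_tri in (100, 10, 2):   # tessellation -> ever smaller triangles
    delivered = setup_clock_hz * pixels_per_tri
    busy = min(1.0, delivered / pixel_capacity)
    print(f"{pixels_per_tri:3d} px/tri -> pixelshaders ~{busy:.0%} busy")
```

Under these assumed numbers, 100-pixel triangles saturate the shaders, but 2-pixel micro-triangles leave them ~94% idle.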

nVidia has instead done something like this:
vertex shaders -> tessellator ->
-> triangle rasterizer setup 1 -> pixelshaders
-> triangle rasterizer setup 2 -> pixelshaders
-> triangle rasterizer setup 3 -> pixelshaders
-> triangle rasterizer setup 4 -> pixelshaders
-> triangle rasterizer setup 5 -> pixelshaders
-> triangle rasterizer setup 6 -> pixelshaders
-> triangle rasterizer setup 7 -> pixelshaders
-> triangle rasterizer setup 8 -> pixelshaders
-> triangle rasterizer setup 9 -> pixelshaders
-> triangle rasterizer setup 10 -> pixelshaders
-> triangle rasterizer setup 11 -> pixelshaders
-> triangle rasterizer setup 12 -> pixelshaders
-> triangle rasterizer setup 13 -> pixelshaders
-> triangle rasterizer setup 14 -> pixelshaders
-> triangle rasterizer setup 15 -> pixelshaders

Now your tessellator can output a lot of triangles in parallel, and you can still feed your pixelshaders!
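
Extending the same sketch with a parallelism knob (again, the unit counts and rates are illustrative assumptions, not actual GF100 figures) shows why the fan-out matters: at 2 pixels per micro-triangle, multiplying the setup rate brings the pixelshaders back toward full utilization.

```python
# Extending the model above: N parallel setup units multiply the
# triangle rate, so even tiny micro-triangles can keep the shaders fed.
# Unit counts and rates are illustrative assumptions.
setup_clock_hz = 850e6
pixel_capacity = 27e9
pixels_per_tri = 2            # heavily tessellated micro-triangles

for n_setup in (1, 4, 15):
    delivered = n_setup * setup_clock_hz * pixels_per_tri
    busy = min(1.0, delivered / pixel_capacity)
    print(f"{n_setup:2d} setup unit(s) -> pixelshaders ~{busy:.0%} busy")
```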

It really doesn't take anything out-of-the-ordinary to get the AMD tessellator into stressful situations. Things can go VERY bad, VERY quickly.
In fact, starting with geometry as low-poly as possible and blowing it up to as much detail as possible is EXACTLY what's so good about tessellation... With AMD you cannot do that: you must always feed it reasonably detailed geometry to begin with, so that the tessellator doesn't have to add too many triangles and you're not bottlenecking your precious triangle setup.
 
Last edited:

Voo

Golden Member
Feb 27, 2009
1,684
0
76
I'm pretty sure it wouldn't.
You see, most people here play games, not benchmarks, so you've got to show some real games where it creates a noticeable bottleneck; otherwise why should anyone care?
The same goes for changing textures or whatever... as long as I don't see any differences, why wouldn't I want the extra performance? It may be horrible from an engineering point of view, but as a user? I couldn't care less.
And IF there are IQ problems (because of driver optimizations, or because the tessellation unit can't handle the same amount of tessellation as another card), I'd hope that the reviews would point them out, show the differences, and let me make an informed decision.
 
Last edited:

Spike

Diamond Member
Aug 27, 2001
6,770
1
81
I have 2 things to say about that:
1) Did nVidia suffer from the same poor performance until a driver update? If not, why do you think that is?
2) The point of shader replacement, texture downgrading etc. is that you don't need to change anything in the game; the driver silently does it FOR you.


If tessellation is such a bottleneck as on AMD's hardware, then anything using tessellation will quickly become a tessellation benchmark.
On nVidia's hardware it's just a game.

Wait... are you asking that question about a game that is a TWIMTBP title? Doesn't that pretty much mean nVidia has offered some sort of financial assistance or other help, and may already have an optimized driver for the game? Do you have any even earlier pre-production benchmarks from before nVidia made any contributions? Besides, how do you know that they did not suffer a hit in performance, release a driver fix to the developer, and then show the benchmark? I assume that would be part of the normal process for a sponsored game.

I'm not saying that's the case, and I'm not saying the AMD cards don't have a tessellator issue, but using HAWX 2 alone seems really fishy. I'm still worried about the 6xxx cards, though I have a few months before I buy.

EDIT* I did see some of the other synthetic benchmarks, and I'm worried, but not much. Different people feel differently about synthetic benchmarks, but in my case I always base it on games. That's why this HAWX thing is interesting to me, but I can't give it much credit until it's finished and both "teams" have had a chance to optimize for it, instead of just one.
 
Last edited:

Seero

Golden Member
Nov 4, 2009
1,456
0
0
You see, most people here play games, not benchmarks, so you've got to show some real games where it creates a noticeable bottleneck; otherwise why should anyone care?
The same goes for changing textures or whatever... as long as I don't see any differences, why wouldn't I want the extra performance? It may be horrible from an engineering point of view, but as a user? I couldn't care less.
And IF there are IQ problems I'd hope that some reviews would point them out and let me make an informed decision.
I disagree. Most of us are at the office but not actually working. Instead we join whatever wars are going on in here to stay awake. Since benchmarks are the number one weapon in forum wars, I believe we care about benchmarks more than about performance in games.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
The bottom line is: do your research, go for the best value you can find, and get what you feel comfortable with. The great thing is that right now, AMD and Nvidia are engaged in a furious war for the ~$200 slot. We all win.

I don't understand why anyone would go on a one-man crusade and pound a particular point incessantly into the ground.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
The bottom line is: do your research, go for the best value you can find, and get what you feel comfortable with. The great thing is that right now, AMD and Nvidia are engaged in a furious war for the ~$200 slot. We all win.
QUOTED FOR TRUTH

I don't understand why anyone would go on a one-man crusade and pound a particular point incessantly into the ground.
Because we don't want to work.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
The bottom line is: do your research, go for the best value you can find, and get what you feel comfortable with. The great thing is that right now, AMD and Nvidia are engaged in a furious war for the ~$200 slot. We all win.

I don't understand why anyone would go on a one-man crusade and pound a particular point incessantly into the ground.

I don't get Fuddy's crusade either...:hmm:
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I don't understand why anyone would go on a one-man crusade and pound a particular point incessantly into the ground.

Earlier, I caught myself wondering if people on forums like this spend more time arguing about video cards or more time using their video cards for what they were (mostly) designed for.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Because he is doing his job well, very well.

I disagree.
Intel's and nVidia's PR are obviously very effective as well, given their popularity, but they operate in a far more professional manner. You rarely, if ever, catch them in such transparent lies, twists, and 180-degree turns. With Huddy it's a constant stream of obvious slip-ups, sometimes even at the cost of developers. That's a big no-no when you're head of Developer Relations. You don't stab your clients in the back.
 

NIGELG

Senior member
Nov 4, 2009
852
31
91
I disagree.
Intel's and nVidia's PR are obviously very effective as well, given their popularity, but they operate in a far more professional manner. You rarely, if ever, catch them in such transparent lies, twists, and 180-degree turns. With Huddy it's a constant stream of obvious slip-ups, sometimes even at the cost of developers. That's a big no-no when you're head of Developer Relations. You don't stab your clients in the back.
Wood screws and fake boards are just as painful and dishonest. I think you've made your point, but now you're sounding like you want to pound this issue over and over...
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
That's what I don't get.
If someone is so obviously incompetent... why isn't he fired and replaced with a more capable figure?

Because apparently a lot of people on this forum have very short memories and never notice the 180s...
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Wood screws and fake boards are just as painful and dishonest.

Nope.
Everyone uses mock-ups. But as I pointed out, not everyone makes the PR mistakes that Huddy makes (or David Hoff, or Randy Allen... AMD really has a bunch of geniuses in their midst).
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Looking at the success AMD is having, you can disagree all you want. But Huddy and his team are hitting home run after home run. The recent drastic Nvidia price drops validate AMD's success.

I would say that AMD's success is more because of their hardware generally being quite solid and getting good reviews than because of whatever Huddy is doing.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
I would say that AMD's success is more because of their hardware generally being quite solid and getting good reviews than because of whatever Huddy is doing.
So you know for sure that Huddy is responsible for none of the positive decisions made at AMD. Of course you do.

I have no idea why you hate Huddy and AMD so much; it's very strange. For every one positive thing you might post about AMD, you post 1000 negatives and bash them from every possible angle. :thumbsdown:
 

Scali

Banned
Dec 3, 2004
2,495
0
0
So you know for sure that Huddy is responsible for none of the positive decisions made at AMD. Of course you do.

Well, Huddy is in charge of developer relations... and I think we can all agree that devrel and software/drivers in general aren't exactly AMD's strongest point.
Hardware is their strongest point, but that's not Huddy's area.

PS: Did you read the small print in Anandtech's review on the GPGPU benchmarks?
They said it appeared that AMD's drivers weren't optimized for multithreading while nVidia's were, given that nVidia had less of a CPU bottleneck and showed better CPU usage.
Quite embarrassing for a CPU company, is it not?