AMD 7970 v Nvidia GTX 770

Status
Not open for further replies.

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
A GTX780 is faster in Grid 2 with less compute performance.

Yeah, so much for AMD's advantage. :rolleyes:

Sure, a GTX780 that costs $250 more is faster - am I supposed to be surprised? :|

That's impressive. The engine must have a very deep level of GCN optimization.

What about Tomb Raider?

[Image: GTX-770-NVV-58.jpg - Tomb Raider benchmark chart]


Not much of a difference it seems.

The engine has a deep level of compute optimization. GCN is better at compute. Hence it has the advantage. It's not "GCN Optimized", it's compute optimized, and Nvidia cheaped out on compute, so they don't get to enjoy the benefits.

Now the performance is very close. But at launch, it was unplayable on many Nvidia cards and it took months of drivers and patches to start to become comparable!


NVidia's fall lineup looks pretty good to me. Watch Dogs is probably going to be comparable to BF4 in sales.

Dude... the amount of facepalm is uncontainable. BF4 is probably the biggest game to launch this year. BF3 sold as much as HALF LIFE 2 for the love of god. You can't honestly say a new IP (no matter how amazing the game is) will beat the "sure-fire" blockbuster sales of a sequel.


It's been explained multiple times that the "tessellated ocean" was culled and not even being rendered.

That doesn't change the fact that other TWIMTBP titles have shown clear and obvious anti-AMD decisions.

Your celebration is very premature my friend.

If that's the criteria, then how can you say BF4 is a major title, when it's not even released yet?

My god, then why is ANY future title a major title o_O stop changing the argument around, you're proving yourself more wrong than me. List me as many CURRENT TWIMTBP titles as you can. I'll give you a much better list from GE.

When you add up the list of TWIMTBP titles over the years and compare them to AMD's GE, AMD's GE looks positively amateurish.

When you add up the list of GE titles over the PAST YEAR and compare them to TWIMTBP, Nvidia's TWIMTBP looks positively amateurish.

See how you are just picking and choosing criteria at will to make Nvidia look good? GE could make no difference in the games for the sake of the argument, but you're trying to argue something that is just flat out wrong. Unless you LIVE off of Batman and Metro, TWIMTBP cannot compare to the recent GE lineup.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Yes, as has already been said: NV needs the next-gen GTX780 to beat the old 7970 GHz in Grid 2.
Or let's say a much more expensive card, for the people who took what I said too literally when I said next gen.

Next Gen? Cray got GK110 in August. It's only a few weeks newer than the 7970GHz.

Nothing is next gen about GK110. So let's go back to the statement that a card with less compute performance is faster than the 7970GHz in a game like Grid 2.

Hm, it seems that Kepler is better than GCN. :awe:

Sure, a GTX780 that costs $250 more is faster - am I supposed to be surprised? :|

It's not nVidia's fault that AMD cannot compete on price. And we're talking here about performance, not price.
 

Final8ty

Golden Member
Jun 13, 2007
1,172
13
81
Next Gen? Cray got GK110 in August. It's only a few weeks newer than the 7970GHz.

Nothing is next gen about GK110. So let's go back to the statement that a card with less compute performance is faster than the 7970GHz in a game like Grid 2.

Hm, it seems that Kepler is better than GCN. :awe:



It's not nVidia's fault that AMD cannot compete on price. And we're talking here about performance, not price.

http://forums.anandtech.com/showpost.php?p=35235122&postcount=198

And as far as "we're talking here about performance, not price" goes, you must be talking to yourself. For consumers, price is always part of the comparison, even when it isn't mentioned.
 
Last edited:

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
The engine has a deep level of compute optimization. GCN is better at compute. Hence it has the advantage. It's not "GCN Optimized", it's compute optimized, and Nvidia cheaped out on compute, so they don't get to enjoy the benefits.

You lack proof. Compute != compute. There are many titles where compute functions are used and where similarly specced Kepler and GCN cards perform on par. You certainly cannot draw a general conclusion from one or two games.

And btw, AMD directly contradicts you:
But it’s not just the hardware, as we’ve been working very closely with Codemasters to optimize the DiRT Showdown engine for GCN. Those optimizations are twofold. First, we have cooperated to implement the advanced Forward+ rendering system that AMD developed in-house for our “Leo” technical demo.
(...)
Secondly, but less obviously to the user, AMD and Codemasters have collaborated to optimize DiRT Showdown’s codepath for Graphics Core Next. In the simplest terms, this means we have polished the game for the architecturally specific ways that our GPUs handle DirectX® 11 rendering. This ensures that all of the rendering techniques being handled by the GPU are processed with minimal overhead and high utilization of compute resources—a staple of GCN’s design.
http://community.amd.com/community/...0/the-rad-performance-of-dirt-showdown-on-gcn
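For anyone wondering what the Forward+ renderer mentioned above actually does: the heavy lifting is a tile-based light-culling pass that runs as a compute shader before shading. A rough CPU-side sketch of the idea (illustrative only -- the names, tile size, and light shapes here are made up, and this is not AMD's or Codemasters' code):

```python
# Illustrative sketch of Forward+ tile-based light culling.
# On the GPU this runs as a compute shader: one thread group per
# screen tile tests every light against the tile's bounds, and the
# shading pass then only evaluates the lights that survived the cull.

TILE = 16  # pixels per tile edge, a common choice

def cull_lights(screen_w, screen_h, lights):
    """Return, for each 16x16 tile, the indices of lights touching it.
    Each light is (x, y, radius) in screen space."""
    tiles = {}
    for ty in range(0, screen_h, TILE):
        for tx in range(0, screen_w, TILE):
            hits = []
            for i, (lx, ly, radius) in enumerate(lights):
                # Clamp the light centre to the tile rectangle, then
                # test the distance against the light's radius.
                cx = min(max(lx, tx), tx + TILE)
                cy = min(max(ly, ty), ty + TILE)
                if (lx - cx) ** 2 + (ly - cy) ** 2 <= radius ** 2:
                    hits.append(i)
            tiles[(tx, ty)] = hits
    return tiles

# Two small lights on a 64x32 screen: each only touches nearby tiles,
# so most light/pixel combinations are skipped entirely.
per_tile = cull_lights(64, 32, [(8, 8, 10), (60, 24, 6)])
```

On the GPU, each tile would be one thread group, and the per-tile light lists feed the forward shading pass so every pixel only loops over the lights near it -- which is why the pass leans so hard on DirectCompute throughput.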

Are you surprised that tech developed at AMD performs better on AMD hardware? I'm not. But when other IHVs are directly involved in things like these, there is a massive outcry. Double standards...
 
Last edited:

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
Next Gen? Cray got GK110 in August. It's only a few weeks newer than the 7970GHz.

Nothing is next gen about GK110. So let's go back to the statement that a card with less compute performance is faster than the 7970GHz in a game like Grid 2.

Hm, it seems that Kepler is better than GCN. :awe:

Doesn't matter what Cray got; consumers got GK110 in February 2013. Consumers got Tahiti in January 2012. A full YEAR earlier! And it offers better compute performance from a 352mm^2 die vs. the 551mm^2 GK110, which took another full YEAR just to LOSE on compute performance (not overall, just compute).

Kepler and GK110 aren't better than AMD in a game like Dirt: Showdown. I can't find a full Grid 2 lineup, care to share your results? Nothing I find has Tahiti and GTX770 and Titan/780.



It's not nVidia's fault that AMD cannot compete on price. And we're talking here about performance, not price.

Yes, the more expensive card is faster. This is not news. "But wait, Tahiti is faster than a GT620! AMD WINS! THE MORE EXPENSIVE CARD IS FASTER!" - see how useful your logic is?

Or, I could rather say that the 7990 offers more compute performance than anything Nvidia has ever made. Jeeze Nvidia, pick up the slack, eh :p

You lack proof. Compute != compute. There are many titles where compute functions are used and where similarly specced Kepler and GCN cards perform on par. You certainly cannot draw a general conclusion from one or two games.

Very few current games use compute in the engine to any significant extent. Dirt is the only real use so far. This is mostly a point which will be proven by next gen games. (oh, and I think you meant compute != better FPS?)

Are you surprised that tech developed at AMD performs better on AMD hardware? I'm not. But when other IHVs are directly involved in things like these, there is a massive outcry. Double standards...

No, I'm not surprised, they paid to get better performance. You know what they didn't do? Pay for WORSE Nvidia performance. The engine uses DirectCompute. That is an open standard. Just so happens Kepler sucks at DirectCompute, but that's the sacrifice they made with their architecture... It's not double standards, it is different methods to achieve similar results. One method is ethical, the other isn't.
 
Last edited:

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
Very few current games use compute in the engine to any significant extent. Dirt is the only real use so far. This is mostly a point which will be proven by next gen games. (oh, and I think you meant compute != better FPS?)

Proof? Are you a game developer involved in and with deep knowledge of all the games that use compute? How do you quantify "significant extent"? Honestly, I guess you're a layperson just like me. You cannot possibly know what you claim to know and still you post this stuff here.
And no, I meant what I wrote. There obviously are very many scenarios and implementations when it comes to compute. I don't even claim to know what those are - I'm just saying, don't generalize.

No, I'm not surprised, they paid to get better performance. You know what they didn't do? Pay for WORSE Nvidia performance. The engine uses DirectCompute. That is an open standard. Just so happens Kepler sucks at DirectCompute, but that's the sacrifice they made with their architecture... It's not double standards, it is different methods to achieve similar results. One method is ethical, the other isn't.

Read the link. Dirt Showdown and GRID 2 are specifically GCN-optimized. I believe AMD over a layperson on a forum.
And about your "pay for worse competitor performance" comment... proof! There is no difference here; it's direct involvement for the sake of one-sided optimizations. Whether one finds that unethical is another question.
 
Last edited:

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
Proof? Are you a game developer involved in and with deep knowledge of all the games that use compute? How do you quantify "significant extent"? Honestly, I guess you're a layperson just like me. You cannot possibly know what you claim to know and still you post this stuff here.

What do you think I am claiming? All I am claiming is that GCN/Tahiti is better at compute than GK110/Kepler (from a price/performance, absolute-performance, and efficiency-per-die-area perspective), and I am arguing/suggesting that future games will also use compute (especially compute optimized for GCN, because of the next-gen consoles). My claim has been proven right so far: the compute performance is better, and the synthetics don't lie. Whether or not this translates into in-game performance in current games isn't relevant to next-gen games (given the design-methodology change that GCN-based consoles and a compute-focused PS4 APU suggest).

Your ad hominem is irrelevant: I am more or less a layperson (I don't work in game development), but that doesn't mean I can't be right (see: https://en.wikipedia.org/wiki/Ad_hominem).

Read the link. Dirt Showdown and GRID 2 are specifically GCN-optimized. I believe AMD over a layperson on a forum.
And about your "pay for worse competitor performance" comment... proof! There is no difference here; it's direct involvement for the sake of one-sided optimizations. Whether one finds that unethical is another question.

Yes, they are GCN optimized. AMD spent time and money to optimize the game for GCN hardware. Nvidia could easily do the same, or maybe they already have. There should be nothing stopping them from optimizing code paths and removing overhead (assuming it is possible with Kepler). AMD didn't put code directly into the game with some kind of "if AMD then run faster" function. The game engine uses certain algorithmic functions that GCN performs well, the render engine uses DirectCompute, which GCN also performs well with, and the driver lets AMD remove overhead (as any driver does).

All of this is correlation, not causation. You are misinterpreting why the game runs fast. Nvidia cannot run the game as fast because their compute performance is WORSE. If the game didn't use compute, maybe they would be equal. But then the game would look worse, and run slower, etc...

"Specifically GCN optimized" is misleading, unless they used some kind of instruction that Nvidia is somehow incapable of. Look at BitCoin mining. AMD is much faster because they implemented an integer shift operation much better than Nvidia. That isn't GCN "optimized" per se, it's just something that GCN is faster at.
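To make the Bitcoin example concrete: SHA-256 leans heavily on 32-bit rotations, and hardware without a single-instruction rotate has to emulate each one with two shifts and an OR. A rough sketch of the operation in question (plain Python standing in for GPU code; this is neither vendor's actual implementation):

```python
MASK32 = 0xFFFFFFFF  # keep results in 32 bits, as the hardware would

def rotr_emulated(x, n):
    """Rotate right via two shifts plus an OR -- roughly what hardware
    without a native rotate instruction has to issue per rotation."""
    return ((x >> n) | (x << (32 - n))) & MASK32

# SHA-256's Sigma-0 function needs three rotations per call, and the
# compression loop runs 64 rounds per block, so a one-instruction
# rotate vs. a multi-instruction emulation adds up fast in mining.
def big_sigma0(x):
    return rotr_emulated(x, 2) ^ rotr_emulated(x, 13) ^ rotr_emulated(x, 22)
```

Nothing here is something the other architecture "can't do" -- it just costs more instructions per hash, which is exactly the distinction between "optimized for" and "faster at".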
 
Last edited:

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
What do you think I am claiming? All I am claiming is that GCN/Tahiti is better at compute than GK110/Kepler (from a price/performance, absolute-performance, and efficiency-per-die-area perspective), and I am arguing/suggesting that future games will also use compute (especially compute optimized for GCN, because of the next-gen consoles). My claim has been proven right so far: the compute performance is better, and the synthetics don't lie. Whether or not this translates into in-game performance in current games isn't relevant to next-gen games (given the design-methodology change that GCN-based consoles and a compute-focused PS4 APU suggest).

Your ad hominem is irrelevant: I am more or less a layperson (I don't work in game development), but that doesn't mean I can't be right (see: https://en.wikipedia.org/wiki/Ad_hominem).

You claim to know the usage extent of compute functions in (all?) games. And you generalize about "compute" without differentiating; on top of that, you neglect the differences in resources (GFLOPs, bandwidth...) between competing SKUs, which obviously affect performance as well. I said it to RS, and I'll say it to you - look at Tahiti LE, you'll be surprised.

My argument isn't irrelevant. You simply cannot claim something, neglect important factors and then say "I'm right". Doesn't work that way.

Yes, they are GCN optimized. AMD spent time and money to optimize the game for GCN hardware. Nvidia could easily do the same, or maybe they already have. There should be nothing stopping them from optimizing code paths and removing overhead (assuming it is possible with Kepler). AMD didn't put code directly into the game with some kind of "if AMD then run faster" function. The game engine uses certain algorithmic functions that GCN performs well, the render engine uses DirectCompute, which GCN also performs well with, and the driver lets AMD remove overhead (as any driver does).

All of this is correlation, not causation. You are misinterpreting why the game runs fast. Nvidia cannot run the game as fast because their compute performance is WORSE. If the game didn't use compute, maybe they would be equal. But then the game would look worse, and run slower, etc...

"Specifically GCN optimized" is misleading, unless they used some kind of instruction that Nvidia is somehow incapable of. Look at BitCoin mining. AMD is much faster because they implemented an integer shift operation much better than Nvidia. That isn't GCN "optimized" per se, it's just something that GCN is faster at.

So you agree with me, what else is there to write about? If a renderer is written in a specific way to take advantage of one particular architecture, a driver team can only do so much. It's not like Nvidia can change the game code...
And again you generalize and don't even consider the possibility that there are different implementations of functions that suit different architectures to different degrees. For example, if I write a function that absolutely needs a certain amount of cache, it obviously will perform much worse on hardware with less cache. But that doesn't mean a similar if not equal result couldn't be achieved with much higher performance with a little tweak here and there. Compute != compute. Implementation can be important, too.
 
Last edited:

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
You claim to know the usage extent of compute functions in (all?) games. And you generalize about "compute" without differentiating; on top of that, you neglect the differences in resources (GFLOPs, bandwidth...) between competing SKUs, which obviously affect performance as well. I said it to RS, and I'll say it to you - look at Tahiti LE, you'll be surprised.

My argument isn't irrelevant. You simply cannot claim something, neglect important factors and then say "I'm right". Doesn't work that way.

No, I don't know the extent of compute usage in ANY game. I didn't code any of them. I know the compute performance of the cards, and I know the in-game performance. That's it.

What would you like me to specifically say about compute? "Compute" is using a GPU for general computing (manipulating numbers, more or less) rather than specifically rendering polygons and textures and rasterizing pixels into an image. Hence compute by definition is a very broad term.
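That broad definition is easy to picture: a compute workload is just a small kernel applied independently across a big array. A toy sketch of the model (plain Python standing in for a GPU dispatch; all names here are illustrative):

```python
# Toy illustration of the GPGPU model: a "kernel" is a small function
# applied independently to every element of a large array. On a GPU
# this is one dispatch across thousands of threads; here an ordinary
# list comprehension stands in for the parallel hardware.

def saxpy_kernel(a, x, y):
    """Classic compute workload: y = a*x + y, one thread per element."""
    return a * x + y

def dispatch(kernel, a, xs, ys):
    # A GPU would run these iterations concurrently across its shaders;
    # no iteration depends on another, which is what makes it "compute".
    return [kernel(a, x, y) for x, y in zip(xs, ys)]

result = dispatch(saxpy_kernel, 2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
# result == [12.0, 24.0, 36.0]
```

The same machinery covers physics, post-processing, light culling, hashing -- anything expressible as independent per-element work -- which is why "compute" on its own says so little about any one workload.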

Not sure what Tahiti LE has to do with anything. AMD made a similar-sized die with a better memory bus; Nvidia could have put an equally big memory bus on theirs too.

I claim that GCN is better at compute. Nobody has proven me wrong about that. What important factors am I ignoring?
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
No, I don't know the extent of compute usage in ANY game. I didn't code any of them. I know the compute performance of the cards, and I know the in-game performance. That's it.

And yet you claimed otherwise here:
http://forums.anandtech.com/showpost.php?p=35235315&postcount=205

What would you like me to specifically say about compute? "Compute" is using a GPU for general computing (manipulating numbers, more or less) rather than specifically rendering polygons and textures and rasterizing pixels into an image. Hence compute by definition is a very broad term.

Finally we agree.

Not sure what Tahiti LE has to do with anything. AMD made a similar-sized die with a better memory bus; Nvidia could have put an equally big memory bus on theirs too.

Lots. Tahiti LE is GCN too, and compared with GK104 it performs very similarly in all modern titles except Dirt Showdown and GRID 2. It goes to show that it's not only about architecture, but also about the resources a specific SKU has.

I claim that GCN is better at compute. Nobody has proven me wrong about that. What important factors am I ignoring?

You generalize, that's the biggest problem. As for factors, see above. Usage of compute, implementation of compute and of course raw power of SKUs. When it comes to GCN and gaming, with the exception of said two titles, there is no advantage of GCN whatsoever versus Kepler when we keep those factors I just mentioned in mind.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I read a couple of pages of this thread and I feel like it's a time warp to June 2012. ;)

They're both good cards IMO. Just buy whatever's the better value at the time -- if there's a 7970 anywhere for $300, I'd snag it. If 7970s are still $400, I'd opt for the 770.
 

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
And yet you claimed otherwise here:
http://forums.anandtech.com/showpost.php?p=35235315&postcount=205

Finally we agree.

Lots. Tahiti LE is GCN too, and compared with GK104 it performs very similarly in all modern titles except Dirt Showdown and GRID 2. It goes to show that it's not only about architecture, but also about the resources a specific SKU has.


You generalize, that's the biggest problem. As for factors, see above. Usage of compute, implementation of compute and of course raw power of SKUs. When it comes to GCN and gaming, with the exception of said two titles, there is no advantage of GCN whatsoever versus Kepler when we keep those factors I just mentioned in mind.

Well, show me all the titles that do use compute if you know different. I'm just saying not many do, and circumstantial evidence (not proof) suggests that future games will use compute more often.

Anything GCN and anything Kepler could perform the same in every other game; that doesn't change my point: GCN is better at compute, but compute isn't really used yet - though it will probably be used more in the future. It is absolutely about architecture. Kepler does polygons and all that jazz as well as GCN; GCN is just also able to do compute - for the same price! It's free potential performance.

When it comes to GCN and gaming, outside of Dirt I claim nothing. Because current games haven't been shown to use significant compute (proof: Civ V shows that Tahiti is much superior in compute, yet it doesn't perform better in the game, hence compute may be barely used in game). I am just suggesting that GCN has a futureproof/hypothetical advantage that isn't being used now, but evidence suggests may be used soon.


I just realized something. Those links you gave are from an older review with un-optimized drivers.

Newer reviews with more optimized drivers show a different result:

[Image: GTX-780-EVGA-32.jpg - benchmark chart from a newer review]

That's interesting - 7970GE lost almost 10 FPS even at a lower resolution (with newer drivers?). The 7970GE is also still significantly ahead of the comparable GTX680, and essentially tied (~2% slower) with the GTX780. I will find a different source to compare with then, as HWC evidently has something up with their numbers.
 

stahlhart

Super Moderator Graphics Cards
Dec 21, 2010
4,273
77
91
We just never get tired of beating this dead horse, do we? Thread closed.
-- stahlhart
 