
[HardOCP] Tomb Raider Performance Review w/ new drivers

We start with the highest playable settings, which is what tasks the cards, and we find out what in-game quality settings/resolution are playable on each, since that is what gamers do with video cards when they play games: first figure out what settings are playable. Then we do apples-to-apples tests at the same settings, for those who find value in that. Both methods are presented in every evaluation.

Is there any chance that you could also do IQ testing of both brands side by side and not just of game quality settings on 1 brand?
 
I would say actual gameplay is representative of actual gameplay.

Is it representative of the overall gameplay experience, or just of a particular stage, area, or scene? E.g. Anand used to test a waterfall scene that performs much worse on AMD cards, but only in that spot.
 
We start with the highest playable settings, which is what tasks the cards, and we find out what in-game quality settings/resolution are playable on each, since that is what gamers do with video cards when they play games: first figure out what settings are playable. Then we do apples-to-apples tests at the same settings, for those who find value in that. Both methods are presented in every evaluation.


Any plans for frametimes...besides your FPS chart?
 
I would say actual gameplay is representative of actual gameplay.

My point was more which benchmark was more representative of general gameplay, something Silverforce11 explained better than me a few posts above.

Either way, a 25% difference is too large to simply ignore, so we definitely need more tests.
 
I know I couldn't sleep last night; when push came to shove, the 680 came out on top.


The win here is the frame-time update shortly after release for AMD; 48 vs 55 means a lot less than what they currently have going on in Hitman.
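For what frame-time data adds over an average-FPS chart, here is a minimal sketch with made-up traces (illustrative numbers, not measurements from either card): two runs with identical average FPS can have very different 99th-percentile frame times.

```python
# Illustrative only: two hypothetical frame-time traces (milliseconds)
# with the same average FPS but very different smoothness.
smooth = [20.0] * 100                  # steady 50 fps
stutter = [15.0] * 90 + [65.0] * 10    # same average, periodic spikes

def avg_fps(frame_times_ms):
    """Average FPS over a trace of per-frame times in ms."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile(frame_times_ms, p):
    """Nearest-rank p-th percentile frame time."""
    s = sorted(frame_times_ms)
    return s[int(p / 100.0 * (len(s) - 1))]

for name, trace in [("smooth", smooth), ("stutter", stutter)]:
    print(name, round(avg_fps(trace), 1), "fps avg,",
          percentile(trace, 99), "ms 99th-percentile frame time")
```

Both traces average 50 fps, but the second spikes to 65 ms frames, which an FPS chart alone would hide.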
 
Tomb Raider is a key AMD Gaming Evolved title with the TressFX technology showcase. It's just plain ridiculous that AMD can't beat Nvidia on a title they worked closely with the developer on.

NV's arch is more flexible. Sometimes AMD's arch can do better with larger numbers of less capable shaders in certain corner cases, but NV's arch is more Intel-ish and thus more versatile: fewer cores, more performance per core.

In GAMING titles, NV will almost never trail AMD GPUs by very much due to their architecture's flexibility.
 
NV's arch is more flexible. Sometimes AMD's arch can do better with larger numbers of less capable shaders in certain corner cases, but NV's arch is more Intel-ish and thus more versatile: fewer cores, more performance per core.

In GAMING titles, NV will almost never trail AMD GPUs by very much due to their architecture's flexibility.

That was true in the VLIW days but is it still true with GCN?
 
NV's arch is more flexible. Sometimes AMD's arch can do better with larger numbers of less capable shaders in certain corner cases, but NV's arch is more Intel-ish and thus more versatile: fewer cores, more performance per core.

In GAMING titles, NV will almost never trail AMD GPUs by very much due to their architecture's flexibility.

Here we go, now we are going to start to compare GCN to the bulldozer arch. 🙄


EDIT: My bad, you were outvoted months ago.

http://forums.anandtech.com/poll.php?do=showresults&pollid=3739
 
That was true in the VLIW days but is it still true with GCN?

Since Nvidia switched to non-hot-clocked, smaller cores, it's probably not as true as it was when Nvidia had fat shaders running at twice the core speed.

If there are any questions left over about which is better, Tahiti or Kepler, I think GK110 pretty much summed it up.

[Image: performance-per-watt chart at 2560x1600]


There really should be no question as to who has the best high performance uarch on the market currently...
 
Here we go, now we are going to start to compare GCN to the bulldozer arch. 🙄


EDIT: My bad, you were outvoted months ago.

http://forums.anandtech.com/poll.php?do=showresults&pollid=3739

Lmao. Try taking a vote of GPU architecture engineers instead of people who conflate GPUs with architectures. The midrange GK104 part was able to compete successfully with the much larger and more expensive flagship AMD Tahiti. That's a fact. Even if you were to compare Pitcairn to Kepler to make it fairer in terms of stripping away unnecessary DP hardware for gaming, Kepler is still ahead.

More slower cores vs fewer faster cores is blatantly obvious even to a casual observer. Anandtech itself has published deep dive articles about this. And yes, GCN isn't that far removed from VLIW4; on the other hand it's also true that NV doesn't have hot clocks anymore so the gap has narrowed.

Being Bulldozer-ish isn't bad for graphics since it's so parallel, but as you get farther away from graphics it might lose its potency depending on what kind of compute load you are feeding GCN.

Btw I'm not saying NV has some overwhelming advantage or anything. I think that's what some biased people might read. But read again. I was simply saying that even in cases where NV falls behind in gaming fps (which is a flawed metric but that's a different story...), it's not likely to fall behind by that much because its architecture is more flexible than AMD's so its worst-case scenarios still won't be THAT far behind comparable GCN chips. That's all I was saying.
 
NV's arch is more flexible. Sometimes AMD's arch can do better with larger numbers of less capable shaders in certain corner cases, but NV's arch is more Intel-ish and thus more versatile: fewer cores, more performance per core.

In GAMING titles, NV will almost never trail AMD GPUs by very much due to their architecture's flexibility.

That was true prior to GCN.
 
Lmao. Try taking a vote of GPU architecture engineers instead of people who conflate GPUs with architectures. The midrange GK104 part was able to compete successfully with the much larger and more expensive flagship AMD Tahiti. That's a fact. Even if you were to compare Pitcairn to Kepler to make it fairer in terms of stripping away unnecessary DP hardware for gaming, Kepler is still ahead.

More slower cores vs fewer faster cores is blatantly obvious even to a casual observer. Anandtech itself has published deep dive articles about this. And yes, GCN isn't that far removed from VLIW4; on the other hand it's also true that NV doesn't have hot clocks anymore so the gap has narrowed.

Being Bulldozer-ish isn't bad for graphics since it's so parallel, but as you get farther away from graphics it might lose its potency depending on what kind of compute load you are feeding GCN.

Btw I'm not saying NV has some overwhelming advantage or anything. I think that's what some biased people might read. But read again. I was simply saying that even in cases where NV falls behind in gaming fps (which is a flawed metric but that's a different story...), it's not likely to fall behind by that much because its architecture is more flexible than AMD's so its worst-case scenarios still won't be THAT far behind comparable GCN chips. That's all I was saying.

Or it could just be that Nvidia optimizes the crap out of games, and it has nothing to do with what you claim.
 
Or it could just be that Nvidia optimizes the crap out of games, and it has nothing to do with what you claim.

Please list 5 games--INCLUDING Gaming Evolved titles if you want--where a GTX 680 is 25% or more slower than a HD7970. The HD7970 has 33.3% more cores, so this ought to be easy, right?

And no, don't do something silly like compare games that had known launch issues or broken drivers etc. Be fair and compare latest drivers to latest drivers on games that have been out for a while, not something like TR when it first came out earlier this month.
 
Please list 5 games--INCLUDING Gaming Evolved titles if you want--where a GTX 680 is 25% or more slower than a HD7970. The HD7970 has 33.3% more cores, so this ought to be easy, right?

And no, don't do something silly like compare games that had known launch issues or broken drivers etc. Be fair and compare latest drivers to latest drivers on games that have been out for a while, not something like TR when it first came out earlier this month.

What? Did I suggest any of what you're asking for in my post?
I merely stated that Nvidia optimizes the crap out of games.
 
Please list 5 games--INCLUDING Gaming Evolved titles if you want--where a GTX 680 is 25% or more slower than a HD7970. The HD7970 has 33.3% more cores, so this ought to be easy, right?

And no, don't do something silly like compare games that had known launch issues or broken drivers etc. Be fair and compare latest drivers to latest drivers on games that have been out for a while, not something like TR when it first came out earlier this month.

GTX Titan: 2688 Shaders + 550mm^2 100%
HD 7970: 2048 Shaders + 365mm^2 78% (-22)
GTX 680: 1536 Shaders + 294mm^2 75%
HD 7870: 1280 Shaders + 212mm^2 55% (-20)

They're pretty much the same.
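Taking the poster's figures at face value (the percentages read as relative performance from a chart), the "pretty much the same" claim can be checked as a rough performance-per-mm² tally. This ignores process, clocks, and price, so treat it as a sketch, not a verified comparison:

```python
# Relative performance (%) and die area (mm^2) as quoted in the post
# above -- the poster's figures, not verified measurements.
cards = {
    "GTX Titan": (100, 550),
    "HD 7970":   (78, 365),
    "GTX 680":   (75, 294),
    "HD 7870":   (55, 212),
}

for name, (perf, area) in cards.items():
    # Crude proxy: relative performance points per mm^2 of die area.
    print(f"{name}: {perf / area:.3f} relative perf per mm^2")
```

By this crude metric the smaller dies from both vendors land close together per tier, which is roughly the "same" being claimed.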
 
Since Nvidia switched to non-hot-clocked, smaller cores, it's probably not as true as it was when Nvidia had fat shaders running at twice the core speed.

If there are any questions left over about which is better, Tahiti or Kepler, I think GK110 pretty much summed it up.

You're answering a question that wasn't asked. I was just questioning the statement that Kepler is more versatile and because of that versatility doesn't trail AMD by much in any game. Since the implication is that AMD does trail Nvidia in some games by a large margin, I wanted some proof of that. I wasn't asking which is the better architecture.

[Image: performance-per-watt chart at 2560x1600]


There really should be no question as to who has the best high performance uarch on the market currently...
Um, Tahiti is like 3% behind Titan, and the other Kepler cards trail their respective Tahiti counterparts. The only card that doesn't compete that well in performance/watt is the 7970 GHz.
 
Since the implication is that AMD does trail Nvidia in some games by a large margin

What??? I explicitly stated that for graphics-only workloads AMD's GCN was fine. If you move away from graphics into more esoteric programs that you can run on a GPU, you might see more variation. VLIW5/4/GCN has high computational power in theory, but it's hard to tap because few things are as parallel as graphics, so you start losing efficiency. That's okay; nothing in life is ever 100% efficient, and NV loses efficiency too. This is why both have ups and downs in frame-time and fps charts over time, and why we care about minimum fps and not just average or max fps.

Getting back to thread topic: I am utterly unsurprised that with updated software, Tomb Raider's TressFX runs fine on NV hardware. I never believed the conspiracy theories about AMD trying to lock out NV in the first place.
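The "hard to tap" point above is essentially Amdahl's law: any non-parallel fraction of a workload caps how much of a wide chip's theoretical throughput you can actually use. A quick illustration with hypothetical parallel fractions (the 2048-unit width echoes Tahiti's shader count, but the fractions are made up):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Speedup over one unit when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# Graphics is close to perfectly parallel; more esoteric GPU compute
# often is not, and even a small serial fraction wastes most of a
# 2048-shader part's theoretical throughput.
for frac in (0.999, 0.95):
    s = amdahl_speedup(frac, 2048)
    print(f"{frac:.1%} parallel: {s:.0f}x speedup, "
          f"{s / 2048:.1%} of theoretical")
```

At 95% parallel the chip delivers under 1% of its theoretical throughput, which is why both vendors care about features that keep their units fed.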
 
Um, Tahiti is like 3% behind Titan

3%?

Oh, you mean the 7950? OK, so where is the less efficient uarch in performance, then?

[Image: relative-performance chart at 2560x1600]


Point out the 7950 for me.


Point being, AMD is way behind in uarch currently. Hence the $500 mid-range Nvidia GPUs and the $1K Titan.

More to your point, it's 2688 Nvidia shaders at ~1050MHz vs 2048 AMD shaders on the 7970. Granted the Titan has more ROPs and way more TMUs, but the point remains: at the same clocks Titan offers way more performance and similar power consumption.

Hence GCN is nowhere near as good as Kepler. They don't get as much performance per shader, and they don't get as good perf/W at the same clocks.
 
More to your point, it's 2688 Nvidia shaders at ~1050MHz vs 2048 AMD shaders on the 7970. Granted the Titan has more ROPs and way more TMUs, but the point remains: at the same clocks Titan offers way more performance and similar power consumption.

Hence GCN is nowhere near as good as Kepler. They don't get as much performance per shader, and they don't get as good perf/W at the same clocks.

Yeah... let's judge an entire range of products only by their high end.

But the funny part is that I agree with you...
...but based on the 7970 vs 680.
 
Yeah... let's judge an entire range of products only by their high end.

But the funny part is that I agree with you...
...but based on the 7970 vs 680.

I'm judging the uarch based on the best SKUs.

After that, you muddy the situation with cut-down chips and clock speeds to place products in their respective brackets.

But I could be wrong in what I'm using to assess the situation.

Furthermore, on the subject of workloads outside gaming, especially with GK110, Nvidia spent a lot of transistors to bring their actual performance closer to their theoretical. Things like Dynamic Parallelism and Hyper-Q set it apart as well.
 
I'm judging the uarch based on the best SKUs

[Image: relative-performance chart at 2560x1600]

I'd think $999 cards would outperform $400 cards by a greater margin. Seems the Nvidia tax is steeper than Apple's on the high end.

For 2.5x the dollars, a person doesn't get much other than another 1" of e-peen, it looks like.
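The arithmetic behind that quip, using the $999 and ~$400 prices mentioned in the thread and an assumed ~30% performance gap (illustrative figures, not benchmark results):

```python
# Rough price/performance check. Prices come from the thread ($999
# Titan vs ~$400 GTX 680); the ~30% performance gap at 2560x1600 is an
# assumption for illustration, not a measured result.
titan_price, gtx680_price = 999.0, 400.0
perf_gap = 1.30   # assumed: Titan ~30% faster

price_ratio = titan_price / gtx680_price
print(f"{price_ratio:.1f}x the price for {perf_gap:.1f}x the performance")
# Normalize to performance points per $1000 (GTX 680 performance = 1.0).
print(f"perf per $1000: {perf_gap / titan_price * 1000:.2f} vs "
      f"{1.0 / gtx680_price * 1000:.2f}")
```

Under those assumptions the cheaper card delivers nearly twice the performance per dollar, which is the point being made.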
 