But I could be wrong in what I'm using to assess the situation.
Don't you find it odd that Titan has better perf/watt than the 680?
Or that the 7970 uses an insane 97 more watts than a 7870?
The truth is, AMD and NVIDIA only really learn about their chips when they're almost ready to go into production... and since Titan came almost a year later than any other card, it's kind of unfair to all the other cards.
Hence, I agree with you about the 680 vs 7970 😉
Any plans for frametimes...besides your FPS chart?
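In case it helps anyone asking about frametimes: the usual way to go beyond an FPS chart is to log how long each individual frame takes and look at percentiles rather than averages. Here's a minimal Python sketch of that kind of analysis, assuming you already have a per-frame millisecond log from a capture tool such as FRAPS; the function name and the sample numbers are just for illustration, not anyone's actual review pipeline:

```python
# Minimal sketch: turning a per-frame time log (milliseconds per frame)
# into the numbers people usually mean by "frametime analysis".
# The sample data below is made up purely for illustration.

def frametime_stats(frametimes_ms):
    times = sorted(frametimes_ms)
    n = len(times)
    avg_fps = 1000.0 * n / sum(times)
    p99_ms = times[min(n - 1, int(0.99 * n))]   # 99th-percentile frame time
    worst_1pct = times[int(0.99 * n):]          # the slowest 1% of frames
    low_1pct_fps = 1000.0 * len(worst_1pct) / sum(worst_1pct)
    return avg_fps, p99_ms, low_1pct_fps

# Example: mostly ~60 fps frames (16.7 ms) with a handful of 40 ms spikes.
sample = [16.7] * 990 + [40.0] * 10
avg, p99, low1 = frametime_stats(sample)
print(f"avg {avg:.1f} fps, 99th pct {p99:.1f} ms, 1% low {low1:.1f} fps")
```

The point of the exercise: an average FPS number hides those 40 ms spikes almost completely, while the 99th-percentile and 1%-low figures make them obvious.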
Is there any chance that you could also do IQ testing of both brands side by side and not just of game quality settings on 1 brand?
Is it representative of the overall gameplay experience or just of a particular stage, area, or scene? i.e. Anand used to do a waterfall scene that performed much worse on AMD cards, but it only occurs in that spot.
No, actually I don't.
The 680 was overclocked to beat Tahiti at release; it would be much better suited at 170 W than 195 W, since the performance difference is minimal for the power difference.
I don't find 97 W to be insane; it provides a decent amount of performance over the 7870 for that power allotment.
I dunno what this talk is about a dual-GPU setup being "required" to play it at 2560x resolution. My single 680 is handling it just fine at 2560x1440. I haven't benchmarked it, but it's smooth gameplay, which is all that should matter for a game like this (IMO). I'm just using FXAA of course, but everything else is on. This plays better with a 360 controller too, IMO.
Generally if you are busy wrestling with a controller during a game you are less likely to notice that 18-20 fps is "semi-choppy".
We pick areas that utilize all of the game's supported graphical features, which in this game means tessellation, TressFX, ambient occlusion, shadowing, and so on. We want our run-throughs to represent everything the game is capable of.
Whereas the "benchmark" built into the game is built to stress TressFX, and not the other effects the game is capable of. We look for a balanced run-through area that represents the whole game. Certain "benchmarks" don't do that.
I'm confused here. You claim that you look for a balanced run-through area that represents the whole game, but at the same time the article states that you "looked for scenes, levels, or areas which produced lower framerates than others", i.e. basically the hardest areas in the game, and as such not a balanced run-through representative of the whole game. So which is it, a balanced area or the hardest area?
Both approaches certainly have their merits, I'm just a bit confused about the exact criteria you're using here.
I have a question about Tomb Raider performance... there is a setting to output 24 Hz, and my projector handles 24 Hz. Would it be better to change it to that setting (instead of 60), turn on vsync, and beef up the settings? How playable is a game at 24 fps if it is rock solid? I know film looks good at 24, but I don't know about gaming...
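On the 24 Hz question, the frame-time budget arithmetic is worth spelling out: with plain double-buffered vsync, a single frame that misses its window at 24 Hz gets displayed twice, i.e. a momentary drop to 12 fps, which is why "rock solid" matters so much at that refresh rate. A quick sketch of just the arithmetic, nothing game-specific:

```python
# Frame-time budget at a few refresh rates. With double-buffered vsync,
# every frame has to finish inside this window or it is held for two
# refreshes (a visible judder at low refresh rates like 24 Hz).
for hz in (24, 30, 60):
    budget_ms = 1000.0 / hz
    print(f"{hz} Hz vsync -> every frame must finish in {budget_ms:.1f} ms")
```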
http://www.rage3d.com/board/showthread.php?threadid=33999519
A short but sweet email from AMD informs us that a patch for Tomb Raider is due tomorrow, giving a nice performance update for Radeon GPU owners - up to 25% more performance in some cases.
Quote:
Crystal Dynamics is releasing a new patch this Friday, that will enhance AMD Radeons performance significantly (up to 25%).
No details on the specifics of the increase, but it sounds promising.
I like your reviews, Brent, but I feel you place too much emphasis on your subjective "playable" settings, as I often find your settings and frame rates completely unplayable, making the benches worthless to me.
Personally, I think more apples-to-apples benches at different graphical settings would be more useful; then users can see for themselves the performance at each graphical level and decide which card performs better at what they consider playable settings.
WTH?
Benching subjective "playable" settings is the whole point of HardOCP's testing procedure.
I'm judging the uarch based on the best SKUs.
Do you think it's a fair comparison given that Titan is an updated design that was released, what, nearly a year after AMD's GCN? That's a long time.
Likewise, if AMD releases GCN 2.0 in a top SKU later this year and it beats Titan in these metrics, is it a fair comparison to then say Titan is crap?
P.S. The answer is no, in both cases.
The "best" uarch is not just about perf/watt. If it needs a massive perf/mm2 deficit to attain a small perf/watt advantage (referring to your own linked chart, the 7970 and 7800 series aren't far behind and in places are even winning), that feeds into the final perf/$, which is what consumers actually care about. $350 vs $1000 is not a small difference, I hope we can agree on that.
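To put rough numbers on that argument: the prices below are the $350 and $1000 figures from above, but the performance index, board power, and die area are made-up placeholders, purely to show how the three ratios can pull in different directions:

```python
# Illustrative only: prices come from the discussion above; the
# performance index, board power, and die area are placeholder numbers,
# not benchmark results for any real card.
cards = {
    # name: (relative performance, board power W, die area mm^2, price $)
    "card A": (100, 250, 365, 350),
    "card B": (130, 250, 550, 1000),
}

for name, (perf, watts, mm2, price) in cards.items():
    print(f"{name}: perf/W {perf / watts:.2f}, "
          f"perf/mm2 {perf / mm2:.2f}, perf/$ {perf / price:.3f}")
```

With numbers like these, "card B" wins perf/watt by a modest margin while losing perf/mm2 and losing perf/$ by a wide one, which is exactly the trade-off being argued about.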
I appreciate the support for OGSSAA out of the box in these titles (Sleeping Dogs, Witcher 2, the latest build of Project CARS, and Metro Last Light also seem to have built-in OGSSAA support). I love them. They're better than nothing. However, I think SGSSAA could be better. They are not promoting OGSSAA, just giving it as an option, free of any downsampling hassle. 4xSSAA is by far the single best IQ setting in Tomb Raider, and probably the most expensive too.
Since NVIDIA switched to non-hot-clocked, smaller cores, it's probably not as true as it was when NVIDIA had fat shaders running at twice the core speed.
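For a rough sense of the scale of that change, here's the shader-throughput arithmetic with approximate, from-memory reference clocks for a hot-clocked Fermi card (GTX 580) versus a Kepler card (GTX 680); treat the numbers as ballpark, not spec-sheet exact:

```python
# Rough, from-memory reference specs; 2 FLOPs per core per clock (FMA).
def tflops(cores, shader_ghz):
    return cores * 2 * shader_ghz / 1000.0

# GTX 580: 512 cores on a hot clock of ~1.54 GHz (2x the core clock).
# GTX 680: 1536 cores at the ~1.0 GHz core clock, no hot clock.
print(f"GTX 580 ~{tflops(512, 1.54):.2f} TFLOPS single precision")
print(f"GTX 680 ~{tflops(1536, 1.0):.2f} TFLOPS single precision")
```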
If there were any questions left over which is better, Tahiti or Kepler, I think GK110 pretty much summed it up.
There really should be no question as to who has the best high performance uarch on the market currently...