
FuryX now = 980ti 1080p/1440p > 4k

I think it's right to remove outliers, especially when their integrity is in question, or they were bent over a barrel with GameWorks.

http://www.pcgamer.com/project-cars...entionally-crippled-performance-on-amd-cards/

That's fair if those outliers are either showing ambiguous results that don't match up with the rest of the review sites (Wolfenstein: The New Order), or if the game was blatantly coded for one GPU vendor with total disregard for the competing vendor's products (Project CARS). The point I was making is that you cannot attribute the improvements in AMD's cards relative to NV's solely to newer drivers, because those 2 games were heavily influencing the results given how much of an outlier each was.

Yeah, I never really understood the complaint about AMD drivers. Maybe long ago that was a big issue, but these days they are on top of their game. I believe they are coming out with a huge set of drivers soon, Omega 2. Who knows how much that will change things up.

https://www.techpowerup.com/217086/amd-readies-catalyst-omega-2015-drivers-for-november.html

Complaints about AMD drivers more often than not come from NV users who haven't used AMD drivers in years. Based on the driver support we've seen since the R9 280X/290/290X, AMD deserves an A for its support. What about Kepler? Look at the 780, OG Titan, 680/770. Yeah, not much more needs to be said. Those cards have bombed.

The bolded part is why doing quantitative analysis on game benchmarks is completely idiotic in the first place. The only benchmarks that should matter to any gamer are the ones for games they actually play.

Yes, so if I plan to play TW3/FO4, Metal Gear Solid V, GTA V and SW BF for most of 2015/1H of 2016, I wouldn't care to look at the averages of any review, since those are irrelevant to me. That means if I am upgrading to a new card and plan on spending 100s of hours on those games, those games alone are what primarily drives my GPU choice. I couldn't care less what the average is in 20-30 games I won't play.

Since we can't read people's minds when recommending GPUs, on AT we compare videocards based on averages, because it's impossible to quantify who plays what games, what fraction of their time they spend playing game XYZ, etc. If Gamer A spends the majority of his/her time playing Project CARS, WoW or games that highly favour NV hardware, it's self-explanatory what videocard Gamer A is going to buy. This changes nothing about what is a fair method of comparing videocards, as opposed to games.

Also, to keep things consistent, the same sites that removed games that highly favoured AMD (DiRT Showdown) are now blatantly including games like Project CARS that highly favour NV. To be professional, there has to be some consistency to the methodology.

This is the key difference between TPU/Computerbase/Sweclockers/AT and HardOCP. The purpose of the former is to represent as wide a range as possible of modern games and their underlying engines (by including many diverse games/engines, these reviewers can capture performance trends when making GPU recommendations) that a gamer might play in the immediate or near future, and to gauge where various GPU products land on that basis.

The purpose of HardOCP's GPU reviews is not to review videocards but to review specific game experiences, by looking at the IQ and performance a specific videocard provides in that specific game. That means a site like HardOCP might review 3, 5, or 7 games, but they review each of those games individually. In theory, this is an excellent approach and is unique in the industry. The flaw in HardOCP's intended methodology is that they claim, per their owners/editors, to review games, but then draw conclusions about videocards at the end. This is completely contradictory to their ideology. What they should do instead is compile a chart of games and which videocards provide the best gaming experiences in those games based on their testing, as sketched below. This way someone could easily select 10-15 games from their database and quickly compare which videocards provide what level of experience without looking at percentages or actual FPS data.
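In practice, that chart could be as simple as a lookup keyed on (game, card). A minimal sketch in Python of the idea; every game, card, and settings string below is a made-up placeholder, not actual HardOCP data:

[code]
# Hypothetical per-game "best playable experience" chart in the spirit of the
# suggestion above -- all entries invented for illustration.
experience = {
    ("The Witcher 3", "GTX 980 Ti"): "1440p, Ultra, HairWorks on",
    ("The Witcher 3", "Fury X"):     "1440p, Ultra, HairWorks off",
    ("GTA V",         "GTX 980 Ti"): "1440p, Very High",
    ("GTA V",         "Fury X"):     "1440p, Very High",
}

def compare(games, cards):
    """Print the recorded best-experience settings for each game/card pair."""
    for game in games:
        for card in cards:
            rating = experience.get((game, card), "not tested")
            print(f"{game:15} | {card:10} | {rating}")

compare(["The Witcher 3", "GTA V"], ["GTX 980 Ti", "Fury X"])
[/code]

Someone could then feed in just the games they actually play and ignore the rest of the database entirely.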

OTOH, sites like TPU/Computerbase/AT/Sweclockers/PCGamesHardware/TechSpot are looking to compare videocards, not games. That's why these reviewers need a fair representation of how well a videocard might perform in 50, 100, 150 other games. That's the whole point of using 15-20 games in reviews: to try to get a more accurate average (a rough sketch of how such an average is typically computed follows below).
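For reference, the "average" in these reviews is usually a relative performance index rather than a raw FPS mean: each game is normalized to a baseline card, and the per-game ratios are then aggregated (TPU's relative performance charts work roughly this way; the exact weighting varies by site). A rough sketch with invented FPS numbers:

[code]
from statistics import geometric_mean

# Made-up per-game FPS for two cards (illustration only, not review data).
fps = {
    "Card A": {"TW3": 52, "GTA V": 68, "Project CARS": 90, "Ryse": 45},
    "Card B": {"TW3": 55, "GTA V": 64, "Project CARS": 67, "Ryse": 52},
}

baseline = "Card B"
for card, results in fps.items():
    # Normalize each game to the baseline card, then aggregate the ratios.
    # The geometric mean is one common choice for averaging ratios.
    ratios = [results[game] / fps[baseline][game] for game in results]
    index = geometric_mean(ratios) * 100
    print(f"{card}: relative performance index = {index:.1f}%")
[/code]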

If you don't like the idea of averages, not a problem, but we can't ask 1000+ members on AT what games they play. Since all the data is available to them, as mentioned earlier, they are free to buy an NV card for WoW, Project CARS, etc. There have been quite a few posters on AT who bought a 750 Ti/960 over an R9 290 for WoW, and no one says anything because their intentions are genuine and well understood.
 
Designed for GCN? That makes it an engine biased towards AMD.

This makes it very hard to recommend any game based on these engines for benchmarking.

Like I said: opening Pandora's box is never a good idea.
Sure, most of the new engines are designed for GCN. It is not even possible to optimize for Maxwell, for example, because NVIDIA doesn't provide the details for that.
 
This is why I always say the number one question when recommending GPUs is: what style of gamer are you? If you play triple-A titles at release, then NVIDIA has the advantage. If you play titles well after release, with all patches, AMD has the advantage. And so on for other scenarios.
 
This is why I always say the number one question when recommending GPUs is: what style of gamer are you? If you play triple-A titles at release, then NVIDIA has the advantage. If you play titles well after release, with all patches, AMD has the advantage. And so on for other scenarios.

Translation: people with money buy NVIDIA. It's pretty much always been this way. Do game developers want you to preorder a game when it is new and $60, or 3 years down the line when it is $5 on Steam? As a game developer, which side are you going to put more effort into optimizing for: the dominant market-share leader with users that buy new, or the minority bargain shoppers years down the line?
 
Translation: people with money buy NVIDIA. It's pretty much always been this way. Do game developers want you to preorder a game when it is new and $60, or 3 years down the line when it is $5 on Steam? As a game developer, which side are you going to put more effort into optimizing for: the dominant market-share leader with users that buy new, or the minority bargain shoppers years down the line?

I don't know about all that. He was just giving a couple of examples. People who play RTS games, at least in the past, seemed to get a little more out of NVIDIA, possibly due to lower-overhead drivers. Those playing MMOs have a similar experience. Nowadays, many FPS games that do lots of physics tend to prefer AMD, unless it's a GameWorks/PhysX game.

There are lots of scenarios. The point is, pick what works best for your gaming habits.
 
Most people, when they are buying a GPU, are buying for at least 3 years. The overwhelming majority of people are NOT buying for a single game or even a few games.

Interesting. Do you have any data to back this up, or are you just making up an assumption to validate your opinion? And just to clarify, I said "games they actually play", as in plural. I never said anything about one game.


Performance indexes give a weighted average, which tells a potential buyer the ballpark estimate in terms of performance. This is then used to extrapolate. It's an imperfect process.

Only it doesn't. You still have to look at the individual benchmarks to see how that average was calculated. There is rarely one card that is best at every game and setting, so you have to examine the individual games you are likely to play to determine what is best for you (see the toy example below). I stand by my point: if you are buying a video card based on a single aggregate score without looking at the individual benchmarks, you are an idiot.
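A toy illustration of why the aggregate alone can mislead; both cards and all FPS numbers below are invented:

[code]
# Two hypothetical cards with the *same* average but opposite strengths
# (all FPS numbers invented for illustration).
card_x = {"Shooter": 80, "RTS": 40, "MMO": 60}
card_y = {"Shooter": 40, "RTS": 80, "MMO": 60}

def avg(results):
    return sum(results.values()) / len(results)

print(avg(card_x), avg(card_y))            # 60.0 60.0 -- identical aggregate
print(card_x["RTS"], "vs", card_y["RTS"])  # 40 vs 80 -- obvious pick for an RTS player
[/code]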
 
Ryse = 8% faster for AMD @ 1440
P CARS = 34% faster for NVIDIA @ 1440
Not even close to being comparable.

The 8% at 1440p is an outlier, because NVIDIA is "overall" 8% faster without Project CARS. Were PC still in the suite, the GTX 980 Ti would be around 10% faster.

So yes, Ryse is skewing the picture and should be removed as well.
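The arithmetic behind that "around 10%" is easy to check. Assuming a 20-game suite where the 980 Ti leads by 8% in the other 19 games and by 34% in Project CARS (the rough figures quoted in this thread, not exact review data):

[code]
# How one outlier shifts a simple 20-game average lead
# (figures are the rough ones quoted in this thread, not review data).
leads = [8.0] * 19 + [34.0]           # 19 games at +8%, Project CARS at +34%
with_pcars    = sum(leads) / len(leads)
without_pcars = sum(leads[:19]) / 19

print(f"With Project CARS:    +{with_pcars:.1f}%")    # +9.3%
print(f"Without Project CARS: +{without_pcars:.1f}%") # +8.0%
[/code]

One title out of twenty moves the average lead by more than a full percentage point.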
 
This is why I always say the number one question when recommending GPUs is: what style of gamer are you? If you play triple-A titles at release, then NVIDIA has the advantage. If you play titles well after release, with all patches, AMD has the advantage. And so on for other scenarios.

Only in GameWorks titles does NV have the advantage on release. There's only a handful each year, mostly from Ubifail, fortunately for the rest of us who don't have an NV GPU.

Fallout 4 is GameWorks; if NV destroys AMD in that, it's going to look really bad for AMD.
 
Only in GameWorks titles does NV have the advantage on release. There's only a handful each year, mostly from Ubifail, fortunately for the rest of us who don't have an NV GPU.

Fallout 4 is GameWorks; if NV destroys AMD in that, it's going to look really bad for AMD.
If you play major triple-A titles (many of which are GameWorks), then yes, go NVIDIA... That's kind of my point. AMD doesn't have the same bias working for them.

GameWorks sucks though, so buying NVIDIA to make GameWorks work kind of well is hilariously dumb to me. Any game that has issues because of GameWorks moves from a game to purchase to a game to acquire by some other means.
 
This has been debated and refuted over and over again. The hyperbole is getting annoying.

GTX 780 debut (TPU relative performance chart):
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780/images/perfrel_1920.gif

GTX 780 Ti debut (TPU relative performance chart):
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780_Ti/images/perfrel_1920.gif

I draw your attention to the 680 vs 7970 GHz results and the 7970 GHz vs 780 (big lead for the 780).

Then compare to now (TPU relative performance chart from the GTX 980 Ti Lightning review):
http://tpucdn.com/reviews/MSI/GTX_980_Ti_Lightning/images/perfrel_1920_1080.png

The 770, which was 5% faster than the 680, is now well behind the 280X, which itself is slower than the 7970 GHz.

Compare the 780 vs 280X (the 7970 GHz is a bit faster, which would make the gap even smaller).
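A caveat when reading those charts side by side: each TPU chart is typically normalized to a different 100% baseline card, so the raw percentages aren't directly comparable across reviews; the ratio between two cards within the same chart is what carries over. A quick sketch, with invented percentages standing in for the chart values:

[code]
# Relative-performance percentages from two hypothetical charts with
# different 100% baseline cards (values invented for illustration).
chart_2013 = {"GTX 780": 100, "HD 7970 GHz": 86}  # baseline: GTX 780
chart_2015 = {"GTX 780": 71,  "R9 280X": 68}      # baseline: a newer card

# Within one chart, the ratio of two cards is baseline-independent:
print(chart_2013["GTX 780"] / chart_2013["HD 7970 GHz"])  # ~1.16 at launch
print(chart_2015["GTX 780"] / chart_2015["R9 280X"])      # ~1.04 later
[/code]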
 
If you play major triple-A titles (many of which are GameWorks), then yes, go NVIDIA... That's kind of my point. AMD doesn't have the same bias working for them.

GameWorks sucks though, so buying NVIDIA to make GameWorks work kind of well is hilariously dumb to me. Any game that has issues because of GameWorks moves from a game to purchase to a game to acquire by some other means.

Someone should make a list of major AAA titles and whether each is GameWorks and running crippled on AMD.
 
GTX 780 debut (TPU relative performance chart):
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780/images/perfrel_1920.gif

GTX 780 Ti debut (TPU relative performance chart):
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780_Ti/images/perfrel_1920.gif

I draw your attention to the 680 vs 7970 GHz results and the 7970 GHz vs 780 (big lead for the 780).

Then compare to now (TPU relative performance chart from the GTX 980 Ti Lightning review):
http://tpucdn.com/reviews/MSI/GTX_980_Ti_Lightning/images/perfrel_1920_1080.png

The 770, which was 5% faster than the 680, is now well behind the 280X, which itself is slower than the 7970 GHz.

Compare the 780 vs 280X (the 7970 GHz is a bit faster, which would make the gap even smaller).

How does that prove Kepler performance has bombed (meaning it got worse)? All you're proving is that AMD dropped the ball with launch driver performance and allowed nVIDIA to get away with using GK104 as a flagship. How do I know these graphs you posted are even comparable? The ones in the OP aren't.

It's also easy for me to find games that show Kepler outperforming equivalent AMD cards (GameWorks or otherwise, I don't care), but I don't feel like filling this thread with more stupid graphs.
 
GTX 780 debut (TPU relative performance chart):
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780/images/perfrel_1920.gif

GTX 780 Ti debut (TPU relative performance chart):
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780_Ti/images/perfrel_1920.gif

I draw your attention to the 680 vs 7970 GHz results and the 7970 GHz vs 780 (big lead for the 780).

Then compare to now (TPU relative performance chart from the GTX 980 Ti Lightning review):
http://tpucdn.com/reviews/MSI/GTX_980_Ti_Lightning/images/perfrel_1920_1080.png

The 770, which was 5% faster than the 680, is now well behind the 280X, which itself is slower than the 7970 GHz.

Compare the 780 vs 280X (the 7970 GHz is a bit faster, which would make the gap even smaller).

7970 and 780 weren't even in the same category when the 780 launched. It's pretty lame where it stands now.

[quote="Hi-Fi Man, post: 37804860"]How does that prove Kepler performance has bombed (meaning it got worse)? All you're proving is that AMD dropped the ball with launch driver performance and allowed nVIDIA to get away with using GK104 as a flagship. How do I know these graphs you posted are even comparable? The ones in OP aren't.

It's also easy for me to find games that show Kepler outperforming equivalent AMD cards (GameWorks or otherwise, I don't care) but I don't feel like filling this thread with more stupid graphs.[/QUOTE]

Well, it depends on how you look at it. Either AMD's drivers got better, or NVIDIA's architecture needs driver babysitting, or some other explanation. Either way, expect this to happen to the 980 Ti and 980 as well.
 
How does that prove Kepler performance has bombed (meaning it got worse)? All you're proving is that AMD dropped the ball with launch driver performance and allowed nVIDIA to get away with using GK104 as a flagship.

It's also easy for me to find games that show Kepler outperforming equivalent AMD cards (GameWorks or otherwise, I don't care) but I don't feel like filling this thread with more stupid graphs.
This is what I've been saying.
It doesn't show Kepler being bad; it shows AMD improving performance over time (or you can chalk it up to poor launch drivers).

If AMD launched with the performance NVIDIA does, rather than gaining that performance over time, AMD would be ahead. If the Fury X had launched with the performance it will eventually get through driver improvements (and hadn't launched with pump issues), I would have purchased one. Instead I didn't, because I'm not waiting for performance to arrive over time; I want it at the time of purchase.

It's nice for the used market, though. I'm enjoying mature AMD driver speeds on Hawaii after my 7950, and I'll probably keep picking up used AMD cards on the cheap until AMD has a lead at launch, and then I'll buy.
 
Well, it depends on how you look at it. Either AMD's drivers got better, or NVIDIA's architecture needs driver babysitting, or some other explanation. Either way, expect this to happen to the 980 Ti and 980 as well.

It's both, actually. Kepler is a unique architecture with its roots in Fermi. For those who pay attention to the details, it's been known that Kepler has always needed careful driver optimization to achieve high efficiency, due to its scheduler and the ALU count per SMX. Maxwell's ALU count per SMM was reduced and its scheduler was improved/changed, making it easier to fully utilize an SMM and thus removing the need for the driver to "babysit" to get good SMM usage. So I highly doubt this will be a recurring theme.
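The arithmetic behind that: using NVIDIA's published figures of 192 CUDA cores per Kepler SMX versus 128 per Maxwell SMM, each with four warp schedulers issuing 32-wide warps, a Kepler scheduler has to dual-issue (i.e., the compiler/driver must find instruction-level parallelism) just to keep its cores fed, while a Maxwell scheduler maps exactly one warp instruction per cycle:

[code]
# Why Kepler needed more driver/compiler help than Maxwell -- back-of-the-
# envelope numbers from NVIDIA's published SMX/SMM descriptions.
WARP = 32  # threads covered by one issued instruction

for name, cores, schedulers in [("Kepler SMX", 192, 4), ("Maxwell SMM", 128, 4)]:
    per_sched = cores / schedulers
    print(f"{name}: {per_sched:.0f} cores/scheduler -> needs "
          f"{per_sched / WARP:.1f} instructions issued per cycle to fill them")
# Kepler SMX:  48 cores/scheduler -> 1.5 instructions per cycle (ILP required)
# Maxwell SMM: 32 cores/scheduler -> 1.0 (one warp instruction suffices)
[/code]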
 
It's both, actually. Kepler is a unique architecture with its roots in Fermi. For those who pay attention to the details, it's been known that Kepler has always needed careful driver optimization to achieve high efficiency, due to its scheduler and the ALU count per SMX. Maxwell's ALU count per SMM was reduced and its scheduler was improved/changed, making it easier to fully utilize an SMM and thus removing the need for the driver to "babysit" to get good SMM usage. So I highly doubt this will be a recurring theme.

Wait, so it is bombing (as in, it got worse)? If you're saying Kepler needed "babysitting" to get optimal performance (which it did) but is no longer getting babysat, isn't that the reason why it's not performing as well as it should? Since the focus is mainly on Maxwell now, isn't that the reason for Kepler bombing? Thus, the statement that Kepler isn't performing as it should is justified?
 
It's both, actually. Kepler is a unique architecture with its roots in Fermi. For those who pay attention to the details, it's been known that Kepler has always needed careful driver optimization to achieve high efficiency, due to its scheduler and the ALU count per SMX. Maxwell's ALU count per SMM was reduced and its scheduler was improved/changed, making it easier to fully utilize an SMM and thus removing the need for the driver to "babysit" to get good SMM usage. So I highly doubt this will be a recurring theme.

It's a software scheduler, so why not drop the improvements on Kepler too? Maybe Maxwell will suck less, relatively, in a year or two.
 
It's both, actually. Kepler is a unique architecture with its roots in Fermi. For those who pay attention to the details, it's been known that Kepler has always needed careful driver optimization to achieve high efficiency, due to its scheduler and the ALU count per SMX. Maxwell's ALU count per SMM was reduced and its scheduler was improved/changed, making it easier to fully utilize an SMM and thus removing the need for the driver to "babysit" to get good SMM usage. So I highly doubt this will be a recurring theme.

There's the answer to the question you posed - "How does that prove Kepler performance has bombed (meaning it got worse)?"
 
Wait, so it is bombing (as in, it got worse)? If you're saying Kepler needed "babysitting" to get optimal performance (which it did) but is no longer getting babysat, isn't that the reason why it's not performing as well as it should? Since the focus is mainly on Maxwell now, isn't that the reason for Kepler bombing? Thus, the statement that Kepler isn't performing as it should is justified?

You're not understanding what I said at all. It isn't bombing, because performance hasn't regressed. I did say, however, that Kepler needs careful optimization from the driver to perform at max efficiency. These optimizations are still happening, just not always at launch time, much like with AMD's cards.
 
It's a software scheduler, so why not drop the improvements on Kepler too? Maybe Maxwell will suck less, relatively, in a year or two.

Kepler doesn't have a "software scheduler"; it has a static scheduler. Only the scheduling of instructions is done in software (by the compiler).
 