
[guru3d] Total War: Warhammer DX12 benched


McGraw

Member
Oct 16, 2014
36
0
0
First of all, what a piece of garbage the GTX 960 is. Still shaking my head at all the times people defended that choice over an R9 380. Like, seriously? Kepler plummeted to its death two years after release, and people are shocked that Maxwell cards are starting to do the same?

Second, yet another DX12 title that shows AMD winning. As others said above, Pascal is definitely starting to look like merely a shrunken Maxwell ("Paxwell"), and nVidia is apparently stuck on this for the next two years. I have a feeling Vega and Polaris are going to do very, very well, especially in modern DX12 games like Battlefield 1, Forza 6, Arma 3, Watch Dogs 2, Star Citizen, and Deus Ex, among others.
Arma 3 shouldn't be in that list.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,015
91
Shame they didn't test it without AMD's MLAA. But I guess they weren't allowed.

Also a small note, the settings used are "beyond Ultra".
Yeah, sad that they used a superior AA technique. I don't see you complaining when games use HBAO+ instead of HDAO.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,015
91
Someone help me out here. Are these new drivers or something? Because I remember seeing this game run like crap on everything other than a 1080. If you OC that 980ti it will be at the top of the charts with the 1080. I got your sour grapes right here.
This is DX12, earlier benchmarks were DX11.
 

Piroko

Senior member
Jan 10, 2013
905
79
91
Don't bet on it, this is the benchmark fairy tale (GPU load only). The game will still run all its logic/AI/whatever on a single core. Sure, zooming around the map will be faster with DX12, but that's not what makes this game.
Any source or just speculation? Because the benchmark is running with AI and everything and seems to hold up just fine as well as scale to at least 8 threads...

Shame they didn't test it without AMD's MLAA. But I guess they weren't allowed.

Also a small note, the settings used are "beyond Ultra".
Beyond-ultra settings are also above 50 fps for most GPUs even in 1440p, so I'm fine with that. A retest with more playable 4K settings would be welcome though.
 

swilli89

Golden Member
Mar 23, 2010
1,538
1,143
136
I know, it's completely ridiculous. Someone with a 290X can still survive and play today's games pretty well, actually. I think that's pretty unbelievable, but that's where we are.
The 290X was the same price as a vanilla 780. Utterly insane value AMD gave to its buyers by engineering a chip that was forward-thinking.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,015
91
The 290X was the same price as a vanilla 780. Utterly insane value AMD gave to its buyers by engineering a chip that was forward-thinking.
The 780 was actually $625-650 versus the 290X's $550.

Seems like Polaris will offer that same amazing price/perf we saw then.
 

TheELF

Diamond Member
Dec 22, 2012
3,290
402
126
Because the benchmark is running with AI and everything and seems to hold up just fine as well as scale to at least 8 threads..
Any source or just speculation?
Every in-game benchmark since the beginning of time has been scripted GPU load and nothing else, and this is no different.

The fact alone that it scales to 8 cores is a dead giveaway: it's chopping up render workload the same way that, for example, Cinebench does.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
639
126
Any source or just speculation?
Every in-game benchmark since the beginning of time has been scripted GPU load and nothing else, and this is no different.

The fact alone that it scales to 8 cores is a dead giveaway: it's chopping up render workload the same way that, for example, Cinebench does.
You ask for a source for his claim about AI, then you turn around and claim it's chopping up the workload in some particular way with no source of your own?
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,015
91
Any source or just speculation?
Every in-game benchmark since the beginning of time has been scripted GPU load and nothing else, and this is no different.

The fact alone that it scales to 8 cores is a dead giveaway: it's chopping up render workload the same way that, for example, Cinebench does.
Funny, since the developers claim the opposite, and I'm pretty sure I've linked you this before:

What should you expect out of a non-synthetic benchmark?

But what is it exactly that you are going to see in a benchmark that is measuring actual gameplay performance? If you run the Ashes of the Singularity Benchmark, what you are seeing will not be a synthetic benchmark. Synthetic benchmarks can be useful, but they do not give an accurate picture to an end user as to what expect in real world scenarios.

Our benchmark run is going to dump a huge amount of data which we caution may take time and analysis to interpret correctly. For example, though we felt obligated to put an overall FPS average, we don't feel that it's a very useful number. As a practical matter, PC gamers tend to be more interested in the minimum performance they can expect.

People want a single number to point to, but the reality is that things just aren’t that simple. Real world test and data are like that. Our benchmark mode of Ashes isn’t actually a specific benchmark application, rather it’s simply a 3 minute game script executing with a few adjustments to increase consistency from run to run.

What makes it not a specific benchmark application? By that, we mean that every part of the game is running and executing. This means AI scripts, audio processing, physics, firing solutions, etc. It's what we use to measure the impact of gameplay changes so that we can better optimize our code.

Because games have different draw call needs, we’ve divided the benchmark into different subsections, trying to give equal weight to each one. Under the normal scenario, the driver overhead differences between D3D11 and D3D12 will not be huge on a fast CPU. However, under medium and heavy the differences will start to show up until we can see massive performance differences. Keep in mind that these are entire app performance numbers, not just graphics.
http://oxidegames.com/2015/08/16/the-birth-of-a-new-api/

What is your source?
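The average-versus-minimum point Oxide makes above is easy to see numerically. Here's a minimal sketch (hypothetical frame-time data, not output from the actual benchmark) of how an overall-FPS average can hide stutter that a 1%-low metric exposes:

```python
def fps_stats(frame_times_ms):
    """Return (average fps, 1%-low fps) from a list of frame times in ms."""
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    # 1% low: the average of the slowest 1% of frames, converted to fps
    slowest = sorted(frame_times_ms, reverse=True)
    worst_1pct = slowest[:max(1, len(slowest) // 100)]
    low_fps = 1000.0 / (sum(worst_1pct) / len(worst_1pct))
    return avg_fps, low_fps

# A run that is mostly 16.7 ms frames (60 fps) with occasional 50 ms hitches:
frames = [16.7] * 95 + [50.0] * 5
avg, low = fps_stats(frames)
# The average still lands around 54 fps, but the 1% low is 20 fps - exactly
# the stutter an overall-FPS number glosses over.
```

Which is why Oxide says the single average number "isn't that simple" and gamers care about the minimums.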
 

n0x1ous

Platinum Member
Sep 9, 2010
2,524
181
106
The 780 was actually $625-650 versus the 290X's $550.

Seems like Polaris will offer that same amazing price/perf we saw then.
At launch of the 780, yes, but the 780 was dropped to $499 after Hawaii was released.
 

Piroko

Senior member
Jan 10, 2013
905
79
91
Any source or just speculation?
Every in-game benchmark since the beginning of time has been scripted GPU load and nothing else, and this is no different.
I don't think you realize that these strategy benchmark scenes work a little differently from your typical Tomb Raider drive-by camera in a static scene. This is a normal benchmark load:
https://youtu.be/qRbm_zJFEdU?t=76
That's quite far from "scripted GPU load and nothing else", since all those units pathfind and interact individually (they have to, due to how Total War handles low/medium/high settings), even if the overall fight result might be precalculated in every aspect (it's not, to the best of my knowledge; that would make for enormous replay files).

The fact alone that it scales to 8 cores is a dead giveaway: it's chopping up render workload the same way that, for example, Cinebench does.
That's probably the best compliment anyone has given DX12 so far. But it's also nothing more than an assumption from you, built on your first assumption.
 

Xenochus

Junior Member
May 31, 2016
6
1
11
So I guess the people at SemiAccurate were right in saying those Pascal cards are going to be unsustainable disasters in DX12 vs. AMD?
 

AtenRa

Lifer
Feb 2, 2009
13,597
2,689
126
R9 380X to R9 390X = 390X 50% faster

R9 390X to Fury X = Fury X 19% faster

Still, Fury is not scaling as it should. Is it the different memory or something else?
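For context on that scaling question, here's a back-of-the-envelope comparison (a sketch using the public shader counts: 380X 2048, 390X 2816, Fury X 4096; it deliberately ignores clock speeds, bandwidth, and front-end limits):

```python
# Shader counts from public specs; "% faster" figures as quoted in this thread.
shaders = {"R9 380X": 2048, "R9 390X": 2816, "Fury X": 4096}
observed_gain = {("R9 380X", "R9 390X"): 50, ("R9 390X", "Fury X"): 19}

for (a, b), obs in observed_gain.items():
    from_shaders = (shaders[b] / shaders[a] - 1) * 100
    print(f"{a} -> {b}: {from_shaders:.1f}% more shaders, {obs}% faster observed")
# 380X -> 390X: 37.5% more shaders yet 50% faster (clocks and bandwidth help),
# while Fury X has ~45.5% more shaders than the 390X but is only 19% faster.
```

The Fury X side of that mismatch is exactly the scaling question being asked.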

 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
All the evidence was there I think.

I mean, when you looked at the 2013 parts on paper (bus sizes, compute capacity, etc.), AMD hardware looked like a clear winner. Plus we all knew the other mitigating factors, namely that the consoles were based on GCN. The only evidence we had that Kepler was better than Hawaii was the benchmarks of games we could already buy; anyone who tried to read the tea leaves could see past that.
I guess that's the thing, we're talking about reading the tea leaves so to speak. By looking at history, can we assume that in a year Fury X will be pulling ahead of GTX 1080?
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Yeah sad that they used a superior AA technique.
How is it superior? I think it's a wash. The thing I don't like about MLAA is that it blurs 2D objects and in-game text a little too much, perhaps even the edges. But to each their own, I guess.

http://www.hardocp.com/article/2011/07/18/nvidias_new_fxaa_antialiasing_technology/4#.V1C9VLhcRBc

It's been a while since I've heard about MLAA, but it would have been nice if the benchmark had the option to use normal MSAA or no AA to see what kind of hit there is per video card/brand. Guess we'll have to wait for the June DX12 patch to find out.
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Would the 960 have done better with 4gb of Vram?

I see that Tonga did okay with 2, but maybe it makes a difference with Maxwell?
The 960 is held back by memory bandwidth. Always was. Many benchmarks I have done gained 10%-15% by taking the memory from 7.0 GHz to 8.2.
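A quick sanity check on that overclock claim (a sketch assuming the 960's 128-bit memory bus, with "GHz" meaning the effective GDDR5 transfer rate):

```python
def bandwidth_gbs(bus_width_bits, effective_gtps):
    """Peak memory bandwidth in GB/s: bytes per transfer times transfer rate."""
    return bus_width_bits / 8 * effective_gtps

stock = bandwidth_gbs(128, 7.0)    # 112.0 GB/s
oc = bandwidth_gbs(128, 8.2)       # 131.2 GB/s
gain_pct = (oc / stock - 1) * 100  # about 17% more bandwidth
# A ~17% bandwidth bump producing 10-15% real gains is consistent with
# the card being bandwidth-limited.
```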

Compared to the 970, the 960 loses a bit less than a 3rd of the 970's shader power, gets cut down to
 

boozzer

Golden Member
Jan 12, 2012
1,549
17
81
I guess that's the thing, we're talking about reading the tea leaves so to speak. By looking at history, can we assume that in a year Fury X will be pulling ahead of GTX 1080?
In this case, I would bet on no. The gap is too big, in DX11 at least. DX12 is another ball game.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,015
91
How is it superior? I think it's a wash. The thing I don't like about MLAA is that it blurs 2D objects and in-game text a little too much, perhaps even the edges. But to each their own, I guess.

http://www.hardocp.com/article/2011/07/18/nvidias_new_fxaa_antialiasing_technology/4#.V1C9VLhcRBc

It's been a while since I've heard about MLAA, but it would have been nice if the benchmark had the option to use normal MSAA or no AA to see what kind of hit there is per video card/brand. Guess we'll have to wait for the June DX12 patch to find out.
Same Perf as FXAA:

http://www.hardocp.com/article/2011/09/12/deus_ex_human_revolution_gameplay_performance_review/8

Image quality:

http://www.hardocp.com/article/2011/09/12/deus_ex_human_revolution_gameplay_performance_review/9

The fog is missing in FXAA:



http://www.techpowerup.com/forums/threads/does-fxaa-really-beat-mlaa-in-iq-a-little-comparison….151170/
 

poohbear

Platinum Member
Mar 11, 2003
2,284
5
81
From what we've seen so far, consumer Pascal just looks like a Maxwell shrink with a new feature here and there, relying on clock speed to drive performance up. So Maxwell's weaknesses that have appeared lately will probably also haunt Pascal throughout its life. As more DX12 games come out, we'll see if this is the case. Man, 20nm failing as it did sure screwed up the game for both sides of the fence.

Pro/HPC GP100 is probably another thing altogether, considering its SMs are different and GCN-like in some ways, but we probably won't see that in a GeForce version anytime soon, and considering the "GP102 = 1.5x GP104" rumors, like GM200 was to GM204, we probably never will.


Volta should be a brand-new architecture with a different set of strengths and weaknesses, but that's for 2018. Hell, Pascal appeared out of nowhere in roadmaps when 20nm failed; if the foundries hadn't hit a wall these past few years, we would've had Volta now, not Pascal (Paxwell?).



The GTX 960 is GM206; x06 chips used to be the GT 440 tier of cards before nV moved the midrange x04 chips to the high-end price tag, and that's clearly showing here.
Pascal IS a die shrink. It's not a new architecture. Volta is the new architecture. This is not a secret or anything. Nvidia's marketing is slick, but they never explicitly claimed it's a new architecture.
 

NTMBK

Diamond Member
Nov 14, 2011
9,375
2,840
136
Still Fury is not scaling as it should. Is it the different Memory or something else ??
It's the imbalance. Fury has lots of shader power, but AMD didn't beef up the front end to feed it.
 
