Deus Ex: Mankind Divided system specs revealed


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You claimed the "majority" of NVidia Gameworks titles are terribly broken and unoptimized at launch, and yet you focus on only one? And not only do you focus on only one, you focus on only one feature; Hairworks. How is Hairworks any different than AMD's TressFX in Tomb Raider which had terrible performance on NVidia at launch?

TressFX only had terrible performance at launch because at that time NV did not have access to (direct) source code of the game. Once they started to actually optimize the drivers for Tomb Raider, the game ran like butter on their cards. I am not going to spend a lot of time finding reviews or digging through the data but it's been proven many times over that TressFX 1.0 was more efficient than HairWorks. By now, AMD already has TressFX 3.0 which is much more efficient than the early implementations were in Tomb Raider.

I'll tell you how it's different. Hairworks can at least justify some of the performance hit, since it's applied to MULTIPLE entities on screen at the same time and it improves IQ tremendously for the fur or hair of animals and monsters. TressFX on the other hand was restricted to Lara Croft, and even then it bombed performance.

Revisionist history, huh? GTX680 managed 44 fps average with 2xSSAA (!) and 7970 managed 50 fps average at 1080p with everything else on Ultra. That was with almost no optimizations around launch.

13632141234v2TkTbPdM_3_3.gif


Here is a modern comparison between The Witcher 3 and Tomb Raider:

HairWorks.png


"With HairWorks disabled the minimum frame rate of the R9 290 is 3x greater, while the GTX 780 saw a 2.5x increase and the GTX 980 a 2.2x increase. The average frame rate of the GTX 980 was boosted by 20fps, that's 36% more performance. The GTX 780 saw a 36% increase in average frame rate which was much needed going from just 36fps at 1080p to a much smoother 49fps. The R9 290X enjoyed a massive 75% performance jump with HairWorks disabled, making it 17% slower than the GTX 980 -- it all started to make sense then."

GTX780 and R9 290 ran Tomb Raider like butter, but the 2 cards choked in The Witcher 3 with max HairWorks.

TressFX.png
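Just to make the math in that TechSpot quote explicit, here is a quick sanity check (a small Python sketch using only the GTX 780 numbers stated outright: 36 fps with HairWorks on, 49 fps with it off at 1080p):

```python
# Sanity check on the HairWorks hit quoted above, using only the GTX 780
# figures given explicitly: 36 fps with HairWorks on, 49 fps with it off.
def percent_gain(fps_off, fps_on):
    """Relative performance gained by disabling the effect."""
    return (fps_off - fps_on) / fps_on * 100

print(f"GTX 780: {percent_gain(49, 36):.0f}% faster with HairWorks off")  # ~36%
```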


That's what I'm getting at you see. At least NVidia's Gameworks stuff improves IQ over the baseline, even if it has a huge performance hit. AMD's Gaming Evolved actually degrades IQ and lowers performance, which is a double whammy. Their CHS in particular is a joke, and so is HDAO:

Sounds like something I'd read from an NV marketing pamphlet. DE:MD may be the best looking 2016 PC game, with graphics surpassing Crysis 3. The game actually looks way better than in all the previous trailers leading up to release -- the complete opposite of The Witcher 3 and Assassin's Creed Unity.

The joke is that CHS is broken in Deus Ex MD, and so is their temporal ambient occlusion. Both of these settings on max quality actually degrade IQ, whilst still incurring a massive performance hit.

Assassin's Creed Unity that you hyped on forums for months and then defended for more months was broken for almost 6 months straight. Some would argue it was never fixed.

If Deus Ex MD had used PCSS and HBAO+ instead, it would not only look better but run better :)

The interesting part is that you see everything in black and white regarding PCSS vs. CHS. It takes 5 minutes to show that PCSS is not the definitive shadow rendering choice.

NV's PCSS has 3 major issues you failed to address:

1. NVIDIA PCSS fails to accurately portray the amount of detail seen in the real world; the detail lost is often excessive.
2. It blurs the entire shadow around characters/objects, but that is NOT how soft shadows work. A soft shadow does not mean the entire shadow is a blur.
3. It starts the transparency effect closer to the character, which is again incorrect.

This example highlights all of these errors:

-> The shadow details from the leaves/tree are all washed out/degraded
-> The shadow being cast by the main character is rendered the wrong way --> in the real world a shadow gets lighter and blurrier the farther it is from the object. In this instance it is clear NV's PCSS is rendering incorrectly.

1432446325kun5585ora_4_6_l.png


NV's PCSS got it completely wrong -- they rendered the shadow completely opposite of real life with more blur and transparency near the character and more shadow detail away from the character.

16940480686_eec435de36_b.jpg


795612_detay.jpg


One other thing that happens with NVIDIA PCSS is that the shadow becomes lighter in color, more transparent, and in some cases the shadowing is almost removed from view altogether. The end result is that NV's PCSS often appears to mimic shadows in games from 5-7 years ago, where all the detail was completely washed out. If you think those are good shadows, that's your opinion, but I'd pick GTA V's Softest or AMD's CHS implementation any day in these instances:

1432446325kun5585ora_4_7_l.png


1432446325kun5585ora_4_8_l.png


This also highlights all 3 errors I mentioned earlier:

> transparency and lack of detail around the character's feet - wrong
> blur for the entire character's shadow - wrong
> lack of shadow detail from power lines/trees - wrong

1432446325kun5585ora_4_10_l.png


HardOCP even commented "Another example of how shadows can seemingly disappear under NVIDIA PCSS is shown above. In the screenshots you can see these power lines are visible in the world and across the back of the legs of our character with Softest and AMD CHS. However, with NVIDIA PCSS the shadows are extremely hard to see. This is why we question the strength of diffusion into the distance of shadows under NVIDIA PCSS."

In fact, shadows are such a complex topic that it's not even clear-cut whether PCSS, CHS or other implementations are superior. "It depends".
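For context, here is the textbook PCSS penumbra estimate as a minimal Python sketch. This is just the reference formula commonly cited from NVIDIA's original PCSS paper, not what GTA V or any particular driver actually ships; the blur width depends on the receiver-to-blocker distance and on an assumed light size, which is exactly why different scenes and different tuning produce such different-looking results:

```python
# Minimal sketch of the textbook PCSS penumbra estimate. Reference formula
# only; real implementations add blocker searches, filtering and tuning.
def pcss_penumbra_width(d_receiver, d_blocker, light_size):
    """Penumbra (blur) width at the shaded point; depths are in light space."""
    return (d_receiver - d_blocker) * light_size / d_blocker

# Ground far behind a tree branch gets a wide, washed-out penumbra...
print(pcss_penumbra_width(d_receiver=10.0, d_blocker=2.0, light_size=0.5))  # 2.0
# ...while ground right behind the occluder stays nearly sharp.
print(pcss_penumbra_width(d_receiver=2.1, d_blocker=2.0, light_size=0.5))   # 0.025
```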

In the real world you can have defined shadows like this:

13165748-Dark-shadows-from-a-bush-in-the-snow-with-dramatic-clouds-in-black-and-white--Stock-Photo.jpg


or completely blurry like this:

20278092-Largas-sombras-de-los-rboles-y-sendero-en-un-bosque-Blanco-y-negro--Foto-de-archivo.jpg


One could just as easily argue that in certain games/places, any of the implementations "could look like real life":

1432446325kun5585ora_4_12_l.png


There are instances where CHS > PCSS, where PCSS > CHS and where both are flawed and worse than alternative methods, and yet you conclusively paint PCSS superior to CHS.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Total Warhammer is the exact same:

It's not the same. TW:W has amazing CPU optimization under DX12. I am amazed you decided to leave that out.

z8ExJ5ptM7gMmY8QxZN4oB.png


vs. DX12

XxhXDJKP2x6Z9jPMP7XgjD.png


It's not the developer's or AMD's fault that NV decided to create a 3rd consecutive GPU generation that lacks proper DX12 hardware support. It was 100% NV's decision when they made Kepler, Maxwell and Pascal. NV chose to prioritize DX11 over DX12; we can only presume they thought most of 2016-2018 would still be the DX11 era and that they would go all-in on DX12 with Volta. No need to start dismissing DX12 as broken just because some DX12 implementations failed to live up to the hype. Lumping TW:W in with RoTR and Hitman as a failed DX12 implementation is frankly disingenuous.

GTX 1060 destroys RX 480 in OpenGL in Doom.

OpenGL performance in Doom is irrelevant since GTX1060 runs faster under Vulkan too. If there are glitches under Vulkan in Doom with NV cards, that's not AMD's fault either.

With Vulkan, it's reversed, but only because Vulkan uses GCN shader intrinsics which is basically like console optimizations that can directly map to the hardware. Until ID implement the NVidia shader intrinsics, it's not a fair comparison. When they do, and you can be sure they will, then we can come to a final decision.

Architecture specific GPU/CPU optimizations are the exact type of optimizations that are fair in PC gaming and what we want. We want developers to max out all the capabilities of Pascal against Maxwell and Kepler and of GCN 4.0 vs. GCN 1.0, or Skylake over Sandy/Haswell. NV decided to take the high road and baked these optimizations into GW's DLLs. Did anyone stop NV from working with iD to create architecture specific optimizations for Kepler, Maxwell or Pascal? Once the game becomes an official GWs title, I am pretty sure AMD is not allowed to alter any code inside the game for its GPUs or optimize any of that GWs code by working with the developer.

That said, I think AMD's Gaming Evolved program leaves much to be desired. The graphics effects like CHS are second rate compared to what you find with NVidia's Gameworks..

The list of well-optimized, good looking AMD GE titles is huge. OTOH, the list of well-optimized NV GWs titles is tiny. I remember a stretch of about 2 years when nearly every AAA GWs title released was an unoptimized pile, with broken AMD performance, broken CF and insane GPU demands without next-gen graphics to show for it. DE: MD is none of those things. It demands high end hardware because it looks great, and yet it supports both SLI and CF day 1.

Deus Ex Mankind Divided would have been a better looking game, and probably even a better performing game had it been a GW title rather than a GE title.

You forgot to mention that your statement might have held true on Pascal, but everything else from Kepler to Maxwell to all AMD cards would have had sub-par performance in such a title. Have you forgotten already how the GTX960 and 780/780Ti were performing close to each other in Project CARS, until NV fans voiced their concerns to NV and, as if by magic, Kepler performance improved?

Based on the track record of most GWs titles, any game that is officially a GWs title now almost universally ensures that older NV GPUs and AMD GPUs perform like garbage and CF is broken. No thanks! I mean really, R9 295X2 and GTX1080 are performing close to each other on Day 1 of release. Let me know when that happens with a GWs title. With AMD GE titles, even if AMD has the edge, NV GPUs still perform well relatively speaking. In GW titles like Anno 2205, GTX1060 beats Fury X and 970 beats the Fury. Is that a joke?

Good thing NV won't be able to ruin BF1. Fury X may end up beating 980Ti but we know for sure 980Ti will still perform well. That's the difference between GimpWorks and AMD GE.
 
Last edited:

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Unreal CEO specifically said Unreal Engine was developed on nVidia hardware, and therefore ran best on GeForce. If any titles were to be taken seriously, it'd certainly not be Unreal titles.

Yes, you wouldn't want to test with what is possibly the most widely used game engine on the planet.o_O
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Personally, I don't get the hype behind PCSS or CHS ...

PCSS is limited to one occluder for accurate results and CHS is only a filtering algorithm ...
 

escrow4

Diamond Member
Feb 4, 2013
3,339
122
106
The last 4 pages have nothing to do with Mankind Divided. Why is GTA V's shadow performance being brought up? Plus all the DX 12 rubbish? Mod should gut recent posts and transfer them to another thread.
 
  • Like
Reactions: Sweepr

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
According to Techspot, MD is a CPU lightweight:

Yes, that is true, at least as it pertains to the built-in benchmark. With a modern fast quad-core, the game is almost entirely GPU limited. It will be interesting to see if DX12 allows faster and/or higher threaded CPUs to open up a lead. I also would like to see CPU scaling with faster GPUs than the 1080 -- 1080 SLI, Titan XP and Titan XP SLI.

BTW, good thing you sold your 780Ti back then. This game is an official death sentence for Kepler.

HD7970 is 31% faster than 680
HD7970Ghz/R9 280X is 36% faster than 770
R9 290 is 42% faster than 780
R9 290X is 43% faster than the 780Ti

The performance of 980Ti looks wrong in this chart though.

Medium.png


Fury/Nano beating the 980 by 33-36% is also a huge win here. I remember many on this forum recommended buying a 980 and overclocking it, as opposed to buying the Nano or the Fury. I recall TechSyndicate and some other sites pointed to the Nano and Fury's superior performance at higher resolutions, suggesting that once next gen games arrived, these cards would be faster than the 980 even at 1080p. Fast forward a year and we are exactly there. Better performance at higher resolutions in older games translated to better performance at 1080p in newer games. As was often the case during the Kepler era, AMD's more forward-looking architectures and superior performance at higher resolutions were ignored even during the Maxwell era when it came to 970 vs. 390 and 980 vs. Fury.

Maybe NV can release a couple drivers and dramatically improve performance for older generation of cards?
 
Last edited:

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
I'll tell you how it's different. Hairworks can at least justify some of the performance hit, since it's applied to MULTIPLE entities on screen at the same time and it improves IQ tremendously for the fur or hair of animals and monsters. TressFX on the other hand was restricted to Lara Croft, and even then it bombed performance. And even worse, it actually looked terrible. Lara's hair looked like she was underwater or in space most of the time.. :D

MD actually uses PureHair too, and seemingly on more than just Adam Jensen. It's most noticeable on female NPCs with ponytails.

That's what I'm getting at you see. At least NVidia's Gameworks stuff improves IQ over the baseline, even if it has a huge performance hit.

Batman: Arkham Knight would beg to differ. A Gameworks title, it ran like a tank at best and was completely broken at worst on both AMD and Nvidia GPUs, was capped at 30 fps, and was actually missing a bunch of effects from the console versions! It was so bad that it was infamously taken down from Steam and customers were offered full refunds even beyond Steam's refund policy. How did Gameworks help that game, exactly?

Are there any Deus Ex DX12 benchmarks?

If I remember correctly, Unreal Engine is an nVidia thingy just like DX used to be.

http://www.anandtech.com/show/10594/nvidia-releases-paragon-game-ready-pack
http://www.anandtech.com/show/10593/amd-bundles-together-deus-ex-mankind-divided-and-amd-fx-cpus

I can't wait for the Deus Ex DX12 results.

Unfortunately, even if AMD gains a lot from DX12 in Deus Ex, these two titles are too subjective to be taken seriously. Deus Ex may look and perform great overall, but it still comes across as a vendor-slanted game, just as UE4 games do on AMD.

It is not the same as id's DOOM scenario, where AMD had a deep OpenGL handicap.
Deus Ex Mankind Divided doesn't use Unreal Engine. It uses Eidos' proprietary Dawn Engine, derived from the Glacier engine used by the recent Hitman games.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
what a joke. Tress FX is easily the best hair implementation in terms of performance vs IQ tradeoff. Hairworks in Witcher 3 was terrible and crippled performance on all cards. It did hurt AMD cards more than Nvidia Maxwell. Surprisingly even Kepler was massacred.

TressFX is, or should I say was, absolute garbage. The hair animation is completely ridiculous, as it makes Lara's hair look like she's in a zero G environment. Plus, it's also prone to freaking out as you can see in this video:


The rest of your post is just you quoting what other people have said. Whatever you think about Hairworks and TressFX, there are a few facts that cannot be ignored:

1) Hairworks in the Witcher 3 was used on multiple entities, that could also be shown on screen at the same time. Yes the performance hit was substantial, but considering the IQ enhancement (particularly to animals and monsters), it's somewhat understandable.

2) TressFX on the other hand was limited to a single character, which begs the question: if TressFX is "easily the best hair implementation in terms of performance vs IQ tradeoff," as you put it, then why wasn't it used more liberally in Tomb Raider?

I'll tell you why, because it's garbage that has extremely poor scaling. At least I can admit that NVidia should have made Hairworks more scalable in regards to tessellation factor and MSAA from the beginning, so as to accommodate slower rigs.

But TressFX is even worse, in the sense that it had zero scaling capability beyond a single character, and not only that, the hair animation was completely ridiculous!
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
interesting, 460 is beating the 7950 at medium (best playable settings I guess for both), that's like half the amount of SPs and even more dramatic memory bandwidth gap.

TressFX is kind of inefficient, I think. They added it to TR2013 because it was an Xbox 360 port, so the base game left a lot of room for the GPUs to do extra work...
but at the same time they are using a version of it on ROTR (Xbone is the base version), even on the Xbox One, which is limited to 768 SPs at 850MHz and isn't even running the game at a full 1080P/30FPS. So they are clearly trying to get the most performance and don't have a lot of room left, which means the current version running on GCN must be pretty efficient, or it wouldn't make sense to use it on the Xbone, unless they really, absolutely believe it's an important feature for the game's visuals. But again, it's just for the main character.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Here is a modern comparison between The Witcher 3 and Tomb Raider:

HairWorks.png


"With HairWorks disabled the minimum frame rate of the R9 290 is 3x greater, while the GTX 780 saw a 2.5x increase and the GTX 980 a 2.2x increase. The average frame rate of the GTX 980 was boosted by 20fps, that's 36% more performance. The GTX 780 saw a 36% increase in average frame rate which was much needed going from just 36fps at 1080p to a much smoother 49fps. The R9 290X enjoyed a massive 75% performance jump with HairWorks disabled, making it 17% slower than the GTX 980 -- it all started to make sense then."

GTX780 and R9 290 ran Tomb Raider like butter, but the 2 cards choked in The Witcher 3 with max HairWorks.

TressFX.png

I see relatively the same performance hit all things considered. HairWorks affected multiple characters. TressFX in TR did not and had major visual problems (the aura over her shoulders causing floating hair). Not to mention the hair physics were completely unrealistic (though visually impressive) - Lara looked like a character in a Pantene commercial. Honestly neither effect is worth its performance hit unless you have performance to spare.

The interesting part is that you see everything in black and white regarding PCSS vs. CHS. It takes 5 minutes to show that PCSS is not the definitive shadow rendering choice.

NV's PCSS has 3 major issues you failed to address:

1. NVIDIA PCSS fails to accurately portray the amount of detail seen in the real world; the detail lost is often excessive.
2. It blurs the entire shadow around characters/objects, but that is NOT how soft shadows work. A soft shadow does not mean the entire shadow is a blur.
3. It starts the transparency effect closer to the character, which is again incorrect.

This example highlights all of these errors:

-> The shadow details from the leaves/tree are all washed out/degraded
-> The shadow being cast by the main character is rendered the wrong way --> in the real world a shadow gets lighter and blurrier the farther it is from the object. In this instance it is clear NV's PCSS is rendering incorrectly.

1432446325kun5585ora_4_6_l.png


NV's PCSS got it completely wrong -- they rendered the shadow completely opposite of real life with more blur and transparency near the character and more shadow detail away from the character.

16940480686_eec435de36_b.jpg


795612_detay.jpg


One other thing that happens with NVIDIA PCSS is that the shadow becomes lighter in color, more transparent, and in some cases the shadowing is almost removed from view altogether. The end result is that NV's PCSS often appears to mimic shadows in games from 5-7 years ago, where all the detail was completely washed out. If you think those are good shadows, that's your opinion, but I'd pick GTA V's Softest or AMD's CHS implementation any day in these instances:

1432446325kun5585ora_4_7_l.png


1432446325kun5585ora_4_8_l.png


This also highlights all 3 errors I mentioned earlier:

> transparency and lack of detail around the character's feet - wrong
> blur for the entire character's shadow - wrong
> lack of shadow detail from power lines/trees - wrong

1432446325kun5585ora_4_10_l.png


HardOCP even commented "Another example of how shadows can seemingly disappear under NVIDIA PCSS is shown above. In the screenshots you can see these power lines are visible in the world and across the back of the legs of our character with Softest and AMD CHS. However, with NVIDIA PCSS the shadows are extremely hard to see. This is why we question the strength of diffusion into the distance of shadows under NVIDIA PCSS."

In fact, shadows are such a complex topic that it's not even clear-cut whether PCSS, CHS or other implementations are superior. "It depends".

In the real world you can have defined shadows like this:

13165748-Dark-shadows-from-a-bush-in-the-snow-with-dramatic-clouds-in-black-and-white--Stock-Photo.jpg


or completely blurry like this:

20278092-Largas-sombras-de-los-rboles-y-sendero-en-un-bosque-Blanco-y-negro--Foto-de-archivo.jpg


One could just as easily argue that in certain games/places, any of the implementations "could look like real life":

1432446325kun5585ora_4_12_l.png


There are instances where CHS > PCSS, where PCSS > CHS and where both are flawed and worse than alternative methods, and yet you conclusively paint PCSS superior to CHS.

I think your views are subjective. PCSS is simulating a less intense/more diffuse lighting source. CHS is simulating a very close and bright light source. Neither is 'wrong', per se, but they are different. It is also important to note that better looking is not equivalent to most realistic. CHS is, in my opinion, by far the best looking but also by far the least realistic (it is consistently sharp - too sharp to be natural) and so it loses its appeal. Most of the time, though, shadows do tend to be rather diffuse and blocky.

As far as realism goes

PCSS is realistic for those kinds of hazy sunny days. CHS is realistic for those insanely bright clear days near the equator. IMO - softest shadows looks the best.

I think Nvidia is forgetting the cardinal rule of optimization - go for 90% of the effect at 50% of the cost. You have to use some quick and dirty algorithms, and I think AMD is far better at that.
 
  • Like
Reactions: Carfax83

dogen1

Senior member
Oct 14, 2014
739
40
91
Unreal CEO specifically said Unreal Engine was developed on nVidia hardware, and therefore ran best on GeForce. If any titles were to be taken seriously, it'd certainly not be Unreal titles.

Maybe, but now that they have Paragon running on and optimized for consoles, it should run pretty well on AMD hardware.

I think Nvidia is forgetting the cardinal rule of optimization - go for 90% of the effect at 50% of the cost. You have to use some quick and dirty algorithms, and I think AMD is far better at that.

I think this is true with regards to Hairworks. But they don't always do that. FXAA is a good example (especially with the over-aggressive "sub-pixel AA" blur turned off), and as I said before, HBAO+ is also fast. It easily looks better and runs better than the original HBAO. OK, VXAO is again a bit like Hairworks, but it looks amazing, and it's not that slow.
 
Last edited:

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Yes, that is true, at least as it pertains to the built-in benchmark. With a modern fast quad-core, the game is almost entirely GPU limited. It will be interesting to see if DX12 allows faster and/or higher threaded CPUs to open up a lead. I also would like to see CPU scaling with faster GPUs than the 1080 -- 1080 SLI, Titan XP and Titan XP SLI.

BTW, good thing you sold your 780Ti back then. This game is an official death sentence for Kepler.

HD7970 is 31% faster than 680
HD7970Ghz/R9 280X is 36% faster than 770
R9 290 is 42% faster than 780
R9 290X is 43% faster than the 780Ti

The performance of 980Ti looks wrong in this chart though.

Medium.png


Fury/Nano beating the 980 by 33-36% is also a huge win here. I remember many on this forum recommended buying a 980 and overclocking it, as opposed to buying the Nano or the Fury. I recall TechSyndicate and some other sites pointed to the Nano and Fury's superior performance at higher resolutions, suggesting that once next gen games arrived, these cards would be faster than the 980 even at 1080p. Fast forward a year and we are exactly there. Better performance at higher resolutions in older games translated to better performance at 1080p in newer games. As was often the case during the Kepler era, AMD's more forward-looking architectures and superior performance at higher resolutions were ignored even during the Maxwell era when it came to 970 vs. 390 and 980 vs. Fury.

Maybe NV can release a couple drivers and dramatically improve performance for older generation of cards?

Look at the 970 and the 980Ti. Now tell me with a straight face that this is completely normal.
 

escrow4

Diamond Member
Feb 4, 2013
3,339
122
106
You could say a new game has to make the latest architecture look good otherwise Pascal wouldn't sell, but there are architectural changes in Pascal to account for at least some of that difference.
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Everybody I knew who used TressFX in the first Tomb Raider thought it was OK - plus I ran it fine on a GTX 660 Ti even though it was an AMD sponsored effect. PureHair is based on the same technology, is very well optimised in the newest Tomb Raider game and has almost no performance hit. Both games seemed fine to me when I played them - some people here obviously never played both games, so they want to call it bad since it was not from Nvidia for some weird E-PEEN reason. Yet Nvidia thought PureHair was great:

PureHair is Crystal Dynamics and Square Enix's hair rendering technology, which like our own HairWorks technology adds tens of thousands of hair strands to a character model. These hairs act realistically, swaying and moving in concert with character movement, and can be affected by water, wind and snow, and are lit and shaded in real-time by the scene.

If you can afford the cost, enabling PureHair increases the realism and fidelity of Lara's hair as she runs, jumps and blasts her way through Rise of the Tomb Raider. If not, the look of the basic hair is OK, and there is a basic level of approximated movement. For most though, 'On' is the recommended level, adding realistic hair at a minimal performance cost during gameplay.

Looks like in Deus Ex: Mankind Divided it is used even more. Nvidia thinks it is a good effect with minimal performance cost (their words) during gameplay.

I liked the effects in The Witcher 3, but too much tessellation was used, as you could see with the tessellation sliders provided by CD Projekt Red to help performance, or even the AMD driver slider; turning the tessellation factor down a bit made performance go up with little visual cost.
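Here is a rough back-of-the-envelope sketch of why that slider helps so much. It assumes, purely for illustration, that the generated segments per guide hair scale linearly with the tessellation factor, and the strand count is made up:

```python
# Back-of-the-envelope only: assumes generated segments scale linearly with
# the tessellation factor; the 20,000 guide-hair count is hypothetical.
def amplified_segments(guide_hairs, tess_factor):
    """Rough count of tessellated hair segments the GPU has to process."""
    return guide_hairs * tess_factor

full = amplified_segments(guide_hairs=20_000, tess_factor=64)     # uncapped factor
clamped = amplified_segments(guide_hairs=20_000, tess_factor=16)  # driver/slider clamp
print(f"{full:,} vs {clamped:,} segments -> {full / clamped:.0f}x less geometry work")
```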
 

Maverick177

Senior member
Mar 11, 2016
411
70
91
I adjust the Tess level in Witcher 3 to 16x from default and I gain around 3~4 fps with zero image quality deficit.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
another performance benchmark

http://www.purepc.pl/karty_graficzn...ind_divided_premiera_bez_directx_12?page=0,10
http://www.purepc.pl/karty_graficzn...ind_divided_premiera_bez_directx_12?page=0,11

The RX 480 performs admirably at Very High settings, providing performance on par with the R9 390X / Fury. At Ultra settings it loses quite a bit of performance and falls behind the R9 390. Given that frame rates are much more enjoyable at Very High, and that even the GTX 1070 / Fury X cannot average 60 fps at 1080p Ultra, the RX 480 is performing extremely well, providing 80% of GTX 1070 performance at 65% of the price. With DX12 the RX 480 could provide 60 fps at Very High settings, which would be praiseworthy.
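Quick check on that price/performance claim, assuming launch MSRPs of roughly $239 for the RX 480 8GB and $379 for the GTX 1070 (street prices obviously vary):

```python
# Rough perf-per-dollar check; the MSRPs are assumptions, not review quotes.
RX480_PRICE, GTX1070_PRICE = 239, 379   # approximate launch MSRPs, USD
RELATIVE_PERF = 0.80                    # RX 480 at ~80% of GTX 1070, per the post

price_ratio = RX480_PRICE / GTX1070_PRICE  # ~0.63, i.e. roughly 65% of the price
value_ratio = RELATIVE_PERF / price_ratio  # ~1.27x the performance per dollar
print(f"{price_ratio:.0%} of the price, {value_ratio:.2f}x perf per dollar")
```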
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
TressFX only had terrible performance at launch because at that time NV did not have access to (direct) source code of the game. Once they started to actually optimize the drivers for Tomb Raider, the game ran like butter on their cards. I am not going to spend a lot of time finding reviews or digging through the data but it's been proven many times over that TressFX 1.0 was more efficient than HairWorks. By now, AMD already has TressFX 3.0 which is much more efficient than the early implementations were in Tomb Raider.

I'm not going to get into a long winded debate about TressFX vs Hairworks. All of us know the merits and demerits of each technology, just that some of us are less honest about them. I will say though I don't consider TressFX as a direct comparison to Hairworks for aforementioned reasons, namely that the former focuses on a single character, whilst the latter can be applied to multiple characters.

Also, there is no TressFX 3.0. TressFX ended with Tomb Raider. PureHair is used in both RotR and Deus Ex MD, and the tech is developed by Crystal Dynamics and Square Enix. AMD may have provided the foundation with TressFX, but other developers have continued to refine and improve it.

Sounds like something I'd read from an NV marketing pamphlet. DE:MD may be the best looking 2016 PC game, with graphics surpassing Crysis 3.

Surely this is hyperbolic on your part. Do you even have the game? I have both, and whilst I think Deus Ex MD looks great, it's not on the same level as Crysis 3 when it comes to tech, refinement and especially optimization. There are too many graphics bugs in the game, such as CHS having a limited draw distance and the temporal AO introducing shimmering to surfaces.

Assassin's Creed Unity that you hyped on forums for months and then defended for more months was broken for almost 6 months straight. Some would argue it was never fixed.

Yeah, but AC Unity was somewhat revolutionary. No game before then, or even since, has had such an astounding level of detail for both exterior and interior buildings at the kind of SCALE that ACU has, with the exception of Star Citizen, and that game hasn't been released yet..

There are instances where CHS > PCSS, where PCSS > CHS and where both are flawed and worse than alternative methods, and yet you conclusively paint PCSS superior to CHS.

Not going to get into any lengthy debate about CHS and PCSS, as it's likely subjective when it comes to preference. To me PCSS looks a lot more realistic as it more accurately depicts diffusion due to distance.

But neither is perfect at all..

It's not the same. TW:W has amazing CPU optimization under DX12. I am amazed you decided to leave that out.

I already addressed those graphs in the thread in the CPU forum. Those graphs were created at the launch of the game, and they involve the SCRIPTED benchmark.

After patches and driver updates, the performance landscape has shifted. This is what it looks like now:

[attached TW:W benchmark chart]


Apparently the DX12 path is broken somewhat, as DX11 is still faster for NVidia..

It's not the developer's or AMD's fault that NV decided to create a 3rd consecutive GPU generation that lacks proper DX12 hardware support. It was 100% NV's decision when they made Kepler, Maxwell and Pascal. NV chose to prioritize DX11 over DX12; we can only presume they thought most of 2016-2018 would still be the DX11 era and that they would go all-in on DX12 with Volta. No need to start dismissing DX12 as broken just because some DX12 implementations failed to live up to the hype. Lumping TW:W in with RoTR and Hitman as a failed DX12 implementation is frankly disingenuous.

If TW:W is a good DX12 implementation, why is NVidia still faster under DX11? And if NVidia lacks proper DX12 hardware support, why is Pascal ahead of the Fury X, and why are the reference clocked, air cooled GTX 980 Ti and Titan XM behind AMD's fastest water cooled DX12 monster by just a hair?

Also, why is the CPU scaling so crappy? It's obvious that the Pascal cards are being CPU bottlenecked in that graph I posted..

OpenGL performance in Doom is irrelevant since GTX1060 runs faster under Vulkan too. If there are glitches under Vulkan in Doom with NV cards, that's not AMD's fault either.

OpenGL performance in Doom isn't irrelevant, as it took ID two months to come up with the Vulkan renderer. That's two months, where NVidia was bitch slapping AMD like a red headed step child in Doom. Lots of people played the game under AMD's inferior OpenGL performance, and will likely never play it again.

Architecture specific GPU/CPU optimizations are the exact type of optimizations that are fair in PC gaming and what we want. We want developers to max out all the capabilities of Pascal against Maxwell and Kepler and of GCN 4.0 vs. GCN 1.0, or Skylake over Sandy/Haswell. NV decided to take the high road and baked these optimizations into GW's DLLs. Did anyone stop NV from working with iD to create architecture specific optimizations for Kepler, Maxwell or Pascal? Once the game becomes an official GWs title, I am pretty sure AMD is not allowed to alter any code inside the game for its GPUs or optimize any of that GWs code by working with the developer.

I think you misunderstood me. I never said that architecture specific optimizations are unfair. I said that forming a final conclusion on Doom's performance, when ID haven't yet implemented the NVidia architecture specific optimizations, is unfair.

We'll see what the final performance looks like when ID releases the NV shader intrinsics and asynchronous compute update for Doom.

The list of well-optimized, good looking AMD GE titles is huge. OTOH, the list of well-optimized NV GWs titles is tiny.

Not sure if you're serious or not. First off, the sheer number of AMD GE titles in comparison to NV GW titles is tiny. That alone makes the comparison rather absurd.

You can look at this page and see the massive number of NVidia GW and NVidia assisted titles.. There's 41 pages.

Now go to AMD's website, and you'll see that AMD's list is ridiculously small by comparison..
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Everybody I knew who used TressFX in the first Tomb Raider thought it was OK - plus I ran it fine on a GTX 660 Ti even though it was an AMD sponsored effect. PureHair is based on the same technology, is very well optimised in the newest Tomb Raider game and has almost no performance hit. Both games seemed fine to me when I played them - some people here obviously never played both games, so they want to call it bad since it was not from Nvidia for some weird E-PEEN reason. Yet Nvidia thought PureHair was great:

PureHair is great. TressFX was crap. And no, the two are not the same. PureHair is a tremendous refinement of TressFX by Crystal Dynamics and Square Enix. I'm pretty sure that AMD has no direct involvement with the development of PureHair, having already given up their TressFX source code.

Looks like in Deus Ex: Mankind Divided it is used even more. Nvidia thinks it is a good effect with minimal performance cost (their words) during gameplay.

Deus Ex MD uses PureHair much more liberally than RotTR did, but it still does not match HairWorks' flexibility and scaling. HW is used on Geralt and several animals and monsters in The Witcher 3. In Deus Ex MD, it seems that only a small number of notable characters have it.

I liked the effects in The Witcher 3, but too much tessellation was used, as you could see with the tessellation sliders provided by CD Projekt Red to help performance, or even the AMD driver slider; turning the tessellation factor down a bit made performance go up with little visual cost.

This is one thing that I will agree on. NVidia should have made the technology even more scalable by providing sliders for the tessellation factor from the very beginning. But for its debut, I'd say that HW was very successful.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Since you are way off topic, stop with the arguing. If you don't, I am going to report your post.

What you call arguing, I call debating. And you, along with RS, are neck deep in it as well, since I obviously haven't been debating with myself. Just now that you're losing, you want to stop and report me, LOL!

Besides, this entire thread has been going on and off topic since the very start.
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
Could someone explain to me why the 980 Ti is slower than a 290/470/1060? What is this game made of?