
[GameGPU] Dying Light - Horrible game engine CPU optimizations

Well, it is known that Nvidia drivers have less CPU overhead under DX, and therefore they may be trying to make CPU performance as lackluster as possible in their GameWorks titles in order to make their GPUs perform better in those games (compared to AMD).
 
Played this for a bit last night, pretty fun game. I had maxed settings with around 50% draw distance and performance was around 50FPS. Seems like it might have a pretty long story as well, played for a couple hours and am at like 3% complete.
 
No, it is just killing AMD, not gaming.

The point of TWIMTBP and GE titles was that the hardware manufacturer works closely with the developer to push out more advanced graphical effects. That in itself was already pushing the boundaries of what's acceptable, since a company with more money could send more programmers to alter the game's code to take full advantage of its GPU architecture, essentially turning this practice into a contest of which firm has more human and financial capital to throw at developers. However, despite that, while NV cards did run better in many TWIMTBP titles, it was very rare that the performance was atrocious and CF simply didn't work.

With GW, NV now sends SDK/closed-source code to the developer to insert into the game, and it's optimized for NV products. That is far worse than the TWIMTBP program or GE. The results speak for themselves - almost all GW games run like total garbage and have bugs, glitches, and SLI and especially CF problems. If GW showed results similar to TWIMTBP or GE, no one would care that much, but since GW started, performance, overall level of polish, and multi-GPU support for AMD have fallen off a cliff. There is no way all of this is just a coincidence, considering how recent GW is and the track record of GW titles in regard to their performance and level of engine optimization quality.

Before you say I am biased against GW, I actually don't like GE either and would rather the developer implement and optimize PC games themselves -- that way I know it's brand-agnostic.
http://forums.anandtech.com/showpost.php?p=37069258&postcount=326

The problem with GE and GW is that whenever I am going to switch GPUs, I am always going to be wondering if my NV card will run GE games well or if my AMD card / CF will not work in GW titles. Why would I want that as a consumer? I want the game engine to scale well with a more powerful card, regardless of whether it's NV or AMD, since I don't know if my next upgrade will be NV or AMD. If games are specifically / purposely optimized for AMD/NV, that's not a possible outcome, and now I have to research which of the games I play/like run better on which brand. That's total BS to me, and I don't see how it helps me as a GPU consumer. The other consequence is that if my cards tank in GW games and I am currently on AMD, I just end up skipping the GW title or buying it at $5-10. That in turn actually hurts the developer of the game!
 
Until we see more developers using GW, I'm not sure the sample size is large enough to determine whether GW is the problem or a poor developer is the problem. Unlike the current GTX 970 fiasco from NV, I think we should wait a bit before getting the pitchforks out over GW and poor game performance.
 
How many GW titles are NOT Ubisoft? If they are mostly all Ubisoft, I think we can all mostly agree that Ubisoft just sucks balls, though overall I agree with Russian and wish none of these programs existed.
 
I just tested the same test scene as in this video (I ended the test scene at 6:39 in the video, for easier testing) with the same settings, using Fraps + Flac:
https://www.youtube.com/watch?v=-EQ-bH7g9sk

It would be good if someone with a GTX 980 / R9 290 / 290X could post their results here with Flac as well.
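For anyone who wants to compare runs numerically rather than by eyeballing a video, frame-time logs like the ones Fraps records are easy to crunch. A minimal sketch, assuming cumulative frame-end timestamps in milliseconds (the helper name and demo numbers are mine, not from any particular tool):

```python
def fps_stats(timestamps_ms):
    """From cumulative frame-end timestamps (ms), compute each
    frame's render time and the average fps over the whole run."""
    frame_times = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    total_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    avg_fps = len(frame_times) / total_s
    return frame_times, avg_fps

# Demo: five timestamps = four frames rendered 20 ms apart -> 50 fps average.
times, avg = fps_stats([0.0, 20.0, 40.0, 60.0, 80.0])
```

Averaging over the whole run is what makes results from different people comparable, as long as everyone uses the same test scene and settings.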
 
The way I see it, unless someone besides Nvidia or AMD provides this kind of code at a similar or lower cost, you will see more GameWorks- or TressFX-like stuff in games. Game developers are eating this stuff up and asking for more, and it seems like this is where the future is going. If you don't believe me, just check out this UE4 thread: https://forums.unrealengine.com/showthread.php?53735-NVIDIA-GameWorks-Integration Nvidia is letting all UE4 subscribers use it for free and is promising to include FLEX and VXGI.
 
I just finished uploading my benchmark session as well, in case anyone is interested. 🙂

The first two runs use the same settings, and the last two use the same settings as well.

Dying Light 1920x1080 High GTX 970 @1.5GHz Core i5-2500K @4.8GHz - 65fps

Dying Light 1920x1080 High Radeon 7950 @1.1GHz Core i7-860 @3.9GHz - 38fps

Dying Light 1920x1080 Medium GTX 570 @850MHz Q9550 @4GHz - 43fps

Dying Light 1920x1080 Medium 5850 @950MHz Q9550 @4GHz - 36fps

To tell the truth, even with reduced settings for the 570 and the 5850, it does not look too bad. Certainly more playable and better looking than Unity.
 
Overall I agree, most of the games with GameWorks are poorly optimized. In this case, though, I don't see how we can blame Nvidia for a modern game being designed with a single-threaded engine. That would seem to be on the developers. It is possible that something in GameWorks is causing this single-thread CPU dependence, but we have not seen it in other games using GameWorks.
 
Just tried Dying Light, everything maxed at 1080p and no problems at all. Constant 60fps with some frame drops to 30fps during cutscenes (in-game, not pre-rendered; I think it was because the game does not have triple buffering, so I should enable NVIDIA adaptive v-sync). One strange thing is that video memory utilization quickly reached 3532MB and never went above that ceiling at any cost. What could that mean? Very strange, I would say. I had a few occasional stutters, but that could not be because the whole 3.5GB partition was utilized. So I set shadow mapping to medium instead, and VRAM utilization stayed up to 3GB.
A 970 user from the GeForce forums. Interesting experience. Looks like a 3.5GB card after all, lmao. I feel sorry for the people that bought a GeForce 960 2GB at launch. Clearly, you want a 3GB-plus card for 2015 and moving forward.
 

Thanks for the benchmarks. Watching the first video, there seem to be a lot of frametime spikes; for example, at 6:30 there is a horrible lag spike. But 210 ms means 1000 / 210 ≈ 4.8 fps for that frame, right? I mean, frametime spikes should show up as framerate dips.
Anyway, I'll wait and see if they improve the performance; right now it's unacceptable imo.
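The arithmetic here checks out: the instantaneous framerate implied by a single frame is just 1000 divided by that frame's time in milliseconds. A quick sketch (the function name is mine):

```python
def frametime_to_fps(frametime_ms):
    """Instantaneous framerate implied by one frame's render time (ms)."""
    return 1000.0 / frametime_ms

# A 210 ms spike works out to roughly 4.8 fps for that single frame,
# which is why one long frame feels like a hitch even when the
# average fps over the whole second stays high.
spike_fps = frametime_to_fps(210.0)
```

An average-fps counter smooths such spikes away, which is exactly why frametime graphs show dips that fps overlays hide.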
 

You are welcome, my friend, and welcome to the forum.

I have grown accustomed to seeing spikes like that while benchmarking, especially in new games with new drivers. That's why I like to benchmark lengthy gaming periods and not just the 30-60 seconds that some benchmarkers use. I also prefer variety in my benchmark sessions, if possible: things like an indoor session, an outdoor session, a small fight, a real-time cutscene, etc.

That being said, I retested the game with vsync and a framerate limiter, and the spikes were cut in half not only in frequency but also in severity. That's why while gaming I also use settings that can keep up with 60fps, vsynced and frame-limited.

The game itself is very smooth with vsync, and you barely notice any visible spikes. In the heat of the game you barely have time to notice these things anyway.
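Spike frequency and severity can both be quantified from a frame-time log: count the frames slower than some threshold, and average the worst 1% of frame times (the "1% low" idea). This helper and its threshold are illustrative, not something Fraps outputs directly:

```python
def spike_stats(frame_times_ms, threshold_ms=33.3):
    """Count frames slower than the threshold (spikes) and report the
    mean of the worst 1% of frame times -- a common stutter metric."""
    spike_count = sum(1 for t in frame_times_ms if t > threshold_ms)
    ordered = sorted(frame_times_ms, reverse=True)
    worst = ordered[:max(1, len(ordered) // 100)]
    severity_ms = sum(worst) / len(worst)
    return spike_count, severity_ms

# Demo: a mostly-60fps run (16.7 ms frames) with two big hitches.
count, severity = spike_stats([16.7] * 98 + [50.0, 210.0])
```

Comparing these two numbers before and after enabling vsync plus a frame limiter would put figures on the "half the frequency, half the severity" observation above.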

Just to be certain, since I have only had the 970 for three days now, I also tested with stock clocks on the GPU, to rule out any memory/main chip errors, as well as with a mildly overclocked CPU, and the same spikes occurred.

I'll be on the lookout once both the game and drivers are updated, to see if any better performance can be obtained.
 