NVIDIA has been pretty vocal about their anticipation of Vulkan, whereas they tend to downplay the importance of DX12.
Where did you pull this from?
Fiji is bottlenecked on the front end, and its caching has issues keeping the CUs operating efficiently.
Actually, it is because Fiji is a mish-mash of AMD architectures, blending current designs with technology AMD will use in the future.
From Hawaii, AMD carried over the caching system and shaders.
From Tonga, they added color compression.
From Polaris, they took the HBM memory controller.
But the biggest problem we see in Fiji is that first part: the cache system. 16 KB of L1 cache per CU and 128 KB of L2 cache per CU. The problem is that you have 512 GB/s of memory bandwidth that cannot be utilized, simply because the caches are so small.
The effect? We have seen slides showing how inefficiently the Fury X utilizes its memory bandwidth.
Polaris is supposed to get rid of this. Let's hope they have finally done it properly.
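To make the scale of the mismatch concrete, here is a rough back-of-envelope sketch in C++. The per-CU cache figures and the 512 GB/s number are the post's own (they may not match AMD's actual cache organization); the 64-CU count is Fiji's. The arithmetic is purely illustrative.

```cpp
// Back-of-envelope arithmetic for the cache-vs-bandwidth claim above.
// Cache figures are the post's (16 KB L1 / 128 KB L2 per CU), which may
// not match AMD's real cache organization; 64 CUs is Fiji's count.
#include <cstdio>

int main() {
    const double cus           = 64.0;
    const double l1PerCuKiB    = 16.0;
    const double l2PerCuKiB    = 128.0;
    const double bandwidthGBps = 512.0;

    const double totalL1MiB = cus * l1PerCuKiB / 1024.0;  // 1 MiB
    const double totalL2MiB = cus * l2PerCuKiB / 1024.0;  // 8 MiB
    const double perFrameGB = bandwidthGBps / 60.0;       // peak traffic per 60 fps frame

    printf("Aggregate L1: %.1f MiB, aggregate L2: %.1f MiB\n",
           totalL1MiB, totalL2MiB);
    printf("Peak DRAM traffic per 60 fps frame: %.2f GB\n", perFrameGB);
    // A few MiB of on-chip cache in front of ~8.5 GB of potential traffic
    // per frame: once working sets exceed the caches, the CUs stall on
    // memory and the 512 GB/s cannot be turned into shader throughput.
    return 0;
}
```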
GPUOpen will eliminate this by exposing the code to developers and to any GPU maker, who can then supply devs with shaders optimized for their architectures.
It also reduces the load on the GPU driver teams, making developer relations more important and costing less in software R&D.
Not going to happen, for this reason:
NV has more $$ to sponsor game developers and studios, with PR deals and incentives to join GameWorks instead of going with AMD's open-source approach.
Games that NV sponsors will not feature async compute until NV's hardware is ready for it.
Tomb Raider was originally an AMD title, with TressFX 3.0 and async compute being showcased and advertised. NV somehow managed to take over the sponsorship and ship the game bundled with their GPUs, removed DX12/async compute (even the devs said that is what it originally ran on for the Xbone), renamed TressFX to PureHair, and released pre-release builds that ran very poorly on AMD GPUs.
What we can learn from this is that NV has the $$ to throw around, bribing developers to go down the route that is best for NV at the expense of AMD. We know AMD simply cannot compete with this approach, since they lack the $$.
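For readers wondering what "async compute" actually means at the API level, here is a minimal D3D12 sketch (Windows-only, error handling omitted, variable names are mine): a game exposes it by submitting work on a second, compute-type queue that hardware such as GCN can execute concurrently with the graphics queue.

```cpp
// Minimal sketch of async compute at the D3D12 API level: a second,
// COMPUTE-type command queue that the GPU may schedule alongside the
// graphics (DIRECT) queue. Windows-only; error handling trimmed.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics work is submitted on a DIRECT queue...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...while work submitted on a COMPUTE queue can overlap with it on
    // hardware that runs both queues concurrently (e.g. GCN's ACEs).
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
    return 0;
}
```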
In other words, NVIDIA does what it needs to in order to make sure games run best on NVIDIA hardware. Good to know.
Cool, happy to know NVIDIA's got its customers' back!
Nice, all the more reason for me to prefer NVIDIA hardware, since I know NV's got my back.
Sucks for AMD and its customers, doesn't it? No wonder NV has 80% market share -- they take care of their customers.
Look, all that matters is the end result. If NV makes the better gear for the stuff I want to play, I'm buying NV cards. If AMD gets it done and makes its cards superior for my use, then I'll buy them. No use sitting around whining that NV does things to make sure that its hardware runs modern games better.
Yeah, also on the CPU side it was custom-tailored to run on 6 AMD cores, and yet on PCs it runs better on a single AMD module (or 1 core + HT on Intel). Somehow Intel intervened and turned a multithreaded game into a single-threaded one... poor AMD gets screwed over left and right... right?
At some point people have to see past the conspiracy theories and consider the fact that the consoles are incredibly weak, and that what works well for them won't necessarily work well on PC; hence the many incredibly bad console ports of the last few years.
Built-in (pre-scripted) benchmarks will always work better on AMD GPUs; a wicked mind would say that AMD makes uarchs (GCN) exclusively for benchmarks, since that is pretty much the only scenario where you will see enough "action" at once for AMD's GPUs to get an edge. But somehow this still makes people think the cards will perform exactly the same way in gameplay.
None of those things Silverforce mentioned improved performance on NVIDIA hardware; all they did was potentially decrease performance on AMD hardware. How do end users with NVIDIA graphics cards benefit?
Yeah, NV wins, not by increasing their own performance but by limiting the other side's. What benefit is that to any of us? The only one it helps is NVIDIA. NVIDIA users aren't affected one way or the other, except that when they want to buy a new card for better performance, they're not being offered all the options.
You're arguing that computers are different from consoles, and therefore something that works well on consoles shouldn't be used, even though it works well on computers as well? Good job!
The thing which irks many developers about GameWorks is that the code is closed source. The shaders tend to be optimized for NVIDIA's CUDA architecture, meaning that GameWorks, aside from the tessellation hoopla, utilizes long-running shaders. This is perfectly suited to the 32-lane-wide SIMD design which CUDA incorporates. GCN uses a 16-wide SIMD design, so GCN prefers many simple shaders working in parallel.
GameWorks, being closed source, forces AMD to do guesswork when optimizing a game's driver profile: guessing which shader was used and replacing it with an AMD-optimized one. AMD doesn't have as strong a driver team as NVIDIA, so drivers take time to release (Game Ready drivers).
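A toy model of that SIMD-width difference, assuming the publicly known shapes (64-thread wavefronts on GCN's 16-wide SIMDs, 32-thread warps on NVIDIA's 32-wide units); the one-instruction-per-issue cycle model is deliberately simplistic.

```cpp
// Toy illustration of the SIMD-width point above. A GCN wavefront is 64
// threads issued over a 16-wide SIMD (4 cycles per instruction); an
// NVIDIA warp is 32 threads on a 32-wide unit (1 cycle). The widths are
// public architecture facts; the cycle model is deliberately simple.
#include <cstdio>

// Cycles to issue one instruction for every thread of a thread group.
int issueCycles(int groupThreads, int simdWidth) {
    return (groupThreads + simdWidth - 1) / simdWidth;  // ceiling division
}

int main() {
    printf("GCN wavefront (64 threads, 16-wide SIMD): %d cycles/instr\n",
           issueCycles(64, 16));
    printf("CUDA warp     (32 threads, 32-wide SIMD): %d cycles/instr\n",
           issueCycles(32, 32));
    // A long-running shader keeps a 32-wide unit busy on its own; GCN
    // instead hides its 4-cycle issue cadence by keeping many independent
    // wavefronts in flight, hence "many simple shaders in parallel".
    return 0;
}
```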
GPUOpen will eliminate this by exposing the code to developers and to any GPU maker, who can then supply devs with shaders optimized for their architectures.
It also reduces the load on the GPU driver teams, making developer relations more important and costing less in software R&D.
None of those things Silverforce mentioned improved performance on NVIDIA hardware; all they did was potentially decrease performance on AMD hardware. How do end users with NVIDIA graphics cards benefit?
NVIDIA may have lost performance too, with not going DX12 being one clue pointing that out.
Cool, happy to know NVIDIA's got its customers' back!
Oh, you mean cool that Nvidia bribes developers not to develop better and more efficient games. 😀
Nice, all the more reason for me to prefer NVIDIA hardware, since I know NV's got my back.
Yeah, NV's got your back right up until their next gen launches; then you'd better upgrade or get shafted.
Sucks for AMD and its customers, doesn't it? No wonder NV has 80% market share -- they take care of their customers.
Sure they do. Ask those poor Kepler owners. 😀
Look, all that matters is the end result. If NV makes the better gear for the stuff I want to play, I'm buying NV cards. If AMD gets it done and makes its cards superior for my use, then I'll buy them. No use sitting around whining that NV does things to make sure that its hardware runs modern games better.
Telling a lie over and over again does not make it the truth. GameWorks causes games to perform poorly on both Nvidia and AMD hardware; the only thing it does is make it slightly more bearable on Nvidia's latest gen. :thumbsdown:
Alright, complete dummy question here: the R9 280X supports DX12? I thought it was a DX11-point-something card?
Built-in (pre-scripted) benchmarks will always work better on AMD GPUs; a wicked mind would say that AMD makes uarchs (GCN) exclusively for benchmarks, since that is pretty much the only scenario where you will see enough "action" at once for AMD's GPUs to get an edge. But somehow this still makes people think the cards will perform exactly the same way in gameplay.
What are you basing this on?
Don't confuse feature sets with supporting the API. DX11.2 is a feature set, just like DX12_1. You can support DX12_1 features without supporting every DX12 function.
This.
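A small sketch of that distinction, using real D3D12 entry points: device creation asks only for a minimum feature level (11_0 covers GCN 1.0 cards like the R9 280X on Windows 10), while optional capability tiers are queried separately. Illustrative only; error handling trimmed.

```cpp
// Sketch of "supports the API" vs "supports every feature": a DX11-era
// GPU can create a real D3D12 device at feature level 11_0, then report
// which optional DX12 capability tiers it actually has. Windows-only.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Succeeds on any driver exposing at least feature level 11_0,
    // DX11-era GPUs included; this is "supporting the API".
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        puts("No D3D12 device available");
        return 1;
    }

    // Optional capabilities are queried separately; a device can run
    // DX12 without offering every optional feature tier.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));
    printf("Resource binding tier: %d\n",
           static_cast<int>(opts.ResourceBindingTier));
    return 0;
}
```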
ARK: Survival Evolved was supposed to have a DX12 patch in late August 2015, but guess what??? They still haven't released it yet.
UE4 DX12 support isn't that great yet, so this is not a surprise.