Posting here to keep the other thread to results.
My current view of the benchmark is that the thinking behind it was flawed. Keeping to the lowest common denominator is not what games will do. I think we overestimate the difficulty of implementing features that some GPUs do not support (writing different code paths). It may take a lot of work, but it's work that does not need to be done over and over for each game. The game engine is the core of the game and, if I understand it correctly, once the engine has proper support, every game built on it should too (maybe with some tweaking).
I think the better approach would be to target the same visuals while exploiting, as much as is reasonable, every DX12 feature.
This is really a question of what a dedicated benchmark should represent. On one hand, it could say this GPU can pull off identical visuals faster than that GPU while exploiting features the other GPU might not have. On the other hand, it could just say this GPU is faster than that GPU when both use the same tools to produce the same visuals, even if it would lose once the other GPU used the more efficient tools it's capable of using.
IMO, since the software can check what the GPU supports and proceed appropriately, the former case is more representative.
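To make that concrete, here is a minimal sketch of what "check what the GPU supports and proceed appropriately" could look like in an engine. The struct, hex values, and path names are illustrative stand-ins (the feature-level constants mirror D3D12's `D3D_FEATURE_LEVEL` values, and in a real engine the caps would come from `ID3D12Device::CheckFeatureSupport`); this is not the benchmark's actual code.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hypothetical capability report: stand-ins for what D3D12's
// CheckFeatureSupport would tell the engine about the GPU.
struct GpuCaps {
    uint32_t featureLevel;   // e.g. 0xb000 for FL 11_0, 0xc100 for FL 12_1
    bool     asyncCompute;   // can compute queues overlap graphics work?
};

// Pick a render path once, at engine start-up. This is the "write it
// once" argument: every game built on the engine inherits the decision.
std::string choose_render_path(const GpuCaps& caps) {
    if (caps.featureLevel >= 0xc100)            // FL 12_1 hardware
        return caps.asyncCompute ? "fl12_1_async" : "fl12_1";
    if (caps.featureLevel >= 0xc000)            // FL 12_0 hardware
        return caps.asyncCompute ? "fl12_0_async" : "fl12_0";
    return "fl11_0";                            // Kepler-level fallback
}
```

Each path would target the same visuals but use whichever features the hardware actually exposes, rather than forcing everything down the 11_0 route.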
Some background: the DX12 benchmark uses feature level 11_0 and what looks like a less advanced form of asynchronous compute, probably to put all hardware on the same field. It is limited to Kepler-level DX12 support.
There are also odd things in the technical guide that I won't touch on because I'm not familiar with them, e.g. their use of ray tracing and the effects they chose for both graphics tests. Those did not sound like things games are likely to use heavily, but they might be suitable for showing off hardware capabilities.