Meanwhile, on planet Earth, DICE has been working on the DX12 path for the Frostbite engine ever since they put Mantle in BF4, since, as everybody keeps saying, DX12 is basically Mantle.
So if after all these years of working on it they still manage to mess it up that badly, it does not look very promising at all.
According to the link, the low-level aspect seems to be working well in the beta: an FX-8350 + Nano is ~50% faster at 720p under DX12. They probably still have work to do on the graphics side of things, and of course drivers, to match and surpass DX11 at higher resolutions. Let's not forget it's just a beta.

Some users in this forum said that Maxwell is finished and end of life; however, that end-of-life product is manhandling AMD's best GPU.
A GTX 980 Ti is 20-30% faster than a Fury X.
http://www.pcgameshardware.de/Battl...chnik-Systemanforderungen-Benchmarks-1206197/
Makes all those claims of "Advanced API" support plastered all over the marketing materials and packaging of a certain IHV even more laughable. If there were the slightest chance that DX12 would do better on serial-API-optimised (by design) hardware, the DX12 implementation would be much further along. Just a pity the marketing doesn't match the reality.
Right, which explains why the GTX 1080 with its 2560 shaders and 256-bit bus is killing the Fury X with its 4096 shaders and 4096-bit bus in Ashes of the Singularity, the greatest and most polished example of DX12 optimization so far.
Not sure what you mean by the bus width and SP count comparison.
Could you elaborate?
1440p maxed out (default resolution scale) runs fine. I also increased the FOV from 70 HOR to 90 HOR. Framerates are in the 60s and 70s, with high-action scenes sometimes running in the 50s. With FreeSync, no problems here. Fine with 4 GB of VRAM.
Not sure if I want FXAA or TAA for this one. I didn't get to play much last night, so maybe I'll compare them today (well, I played over an hour actually, but had too much fun to bother comparing). But neither of them really seemed that great here at first glance.
And as far as gameplay goes, I'm pleased with what they did with the bolt-action rifles. Body-shot one-hit kills would
A custom 980 Ti is faster than a custom 1070, so no surprise here.
It's only at 1330 MHz... and still 5% faster than an 1850 MHz 1070.

The 980 Ti has more overclocking headroom than the GTX 1070, AFAIK. Not a surprise.
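For anyone who wants the back-of-the-envelope version of that: normalizing the quoted 5% lead by the quoted clocks gives a rough per-MHz comparison, and you can project what an aftermarket boost clock might do. This is a toy calculation; the ~1450 MHz figure is my assumption for a typical custom 980 Ti, and real performance does not scale perfectly linearly with core clock.

```cpp
// Back-of-the-envelope from the numbers in this thread; assumes perfectly
// linear scaling with core clock, which real GPUs don't quite achieve.
#include <cstdio>

int main() {
    const double tiClock = 1330.0, tiLead = 1.05;   // 980 Ti, 5% ahead
    const double gpClock = 1850.0;                  // GTX 1070
    // Work done per MHz, normalized to the 1070.
    std::printf("per-clock ratio: %.2fx\n", tiLead * gpClock / tiClock);
    // Hypothetical aftermarket 980 Ti boost (my assumption, ~1450 MHz).
    std::printf("lead at 1450 MHz: %.0f%%\n",
                (tiLead * 1450.0 / tiClock - 1.0) * 100.0);
}
```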
The reason why NVidia has the edge in DX11, is because they've tuned their driver to create worker threads using the CPU to parallelize rendering. Basically, NVidia's DX11 driver has been doing what DX12 was designed to do for years, which is to lower CPU overhead and parallelize rendering. Ironically, this also explains why NVidia doesn't get as large a gain when going to DX12 as AMD.
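If it helps to picture the difference: below is a minimal C++ sketch of the pattern, using stand-in types rather than the real D3D11/D3D12 interfaces. Under DX11 the application talks to a single immediate context and NVidia's driver farms work out to its own threads behind the app's back; DX12 instead lets the application itself record command lists on multiple threads and submit them in order.

```cpp
// Illustrative stand-ins for command lists -- NOT the real D3D12 API.
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Command { int drawId; };          // stand-in for a recorded draw call
using CommandList = std::vector<Command>;

// Each thread records its own command list independently -- the core idea
// DX12 exposes to applications (and that NVidia's DX11 driver approximates
// internally with its own worker threads).
static void recordRange(CommandList& out, int first, int count) {
    out.reserve(count);
    for (int i = 0; i < count; ++i)
        out.push_back(Command{first + i});  // "record" one draw
}

int main() {
    const int kThreads = 4, kDrawsPerThread = 1000;
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;

    for (int t = 0; t < kThreads; ++t)
        workers.emplace_back(recordRange, std::ref(lists[t]),
                             t * kDrawsPerThread, kDrawsPerThread);
    for (auto& w : workers) w.join();

    // Submission stays ordered and single-threaded, like queue submission.
    int submitted = 0;
    for (const auto& cl : lists) submitted += static_cast<int>(cl.size());
    std::printf("submitted %d draws from %d threads\n", submitted, kThreads);
}
```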
Source desperately needed. This sounds 100% fake. No way this is true. There is zero chance any DX11 implementation can reach the same low levels of overhead as DX12 (which is what DX12 is "designed to do," among other things) by the very nature of it.
I don't know where the tribes went off the rails and partisanized low-overhead APIs, but the fact of the matter is that DX11 is a high level of abstraction with high levels of overhead, and DX12 is low abstraction, low overhead. It's a very simple and very well understood trade-off that's been happening in software development for 30 years. C++ vs. assembly is the same thing. Python vs. C++. Writing your own webserver in C is probably faster than using whatever one comes out of the box in your PHP framework of choice, but it takes a lot more time and skill.
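A toy illustration of that trade-off, since it's the crux of the argument: the "high-level" draw below re-validates and tracks state on every call, while the "low-level" one trusts the caller and just appends a command. All names are hypothetical; no real driver is remotely this simple, but the shape of the per-call cost difference is the same.

```cpp
// Toy contrast: a "high-level" API that validates and re-checks state on
// every call vs. a "low-level" API that trusts the caller. Hypothetical
// names; this models the abstraction/overhead trade-off, not any real driver.
#include <chrono>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

std::map<std::string, int> gState;           // driver-side state tracking
std::vector<int> gCommands;

void drawHighLevel(int id) {                 // DX11-ish: checks per call
    if (gState["pipeline"] < 0) return;      // validation on every draw
    gState["lastDraw"] = id;                 // bookkeeping on every draw
    gCommands.push_back(id);
}

void drawLowLevel(int id) {                  // DX12-ish: caller is trusted
    gCommands.push_back(id);
}

template <typename F>
long long timeIt(F f, int n) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) f(i);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}

int main() {
    const int n = 1000000;
    gState["pipeline"] = 1;
    gCommands.reserve(2 * n);
    std::printf("high-level: %lld us\n", timeIt(drawHighLevel, n));
    std::printf("low-level:  %lld us\n", timeIt(drawLowLevel, n));
}
```

The point being: DX11 makes the driver do that per-call bookkeeping for you, and DX12 makes it your problem in exchange for lower overhead.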
Computerbase benchmarks... Surprise: the reference 1070 is 10% faster than the reference 980 Ti. Too bad no one buys reference models. An aftermarket 980 Ti is faster than an aftermarket 1070. I don't know why they even test reference models.
They should only test aftermarket cards. Those are the cards people actually buy.
https://www.computerbase.de/2016-08/battlefield-1-beta-benchmark/
My point is that the Fury X is a much wider GPU than the GTX 1080 and should theoretically be capable of greater exploitation of "parallel APIs" like DX12, but it still loses to the latter, which is a narrower but deeper design.
Basically, I'm making fun of the whole NVidia "serial API optimized design" nonsense.
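To put numbers on "wider vs. deeper," here's the spec-sheet arithmetic (public figures, rounded; FP32 throughput counted as shaders x 2 FMA ops x clock):

```cpp
// Spec-sheet arithmetic (public numbers, rounded): why "wider" doesn't
// automatically win. FP32 throughput = shaders x 2 ops (FMA) x clock.
#include <cstdio>

int main() {
    // Fury X: 4096 SPs @ ~1.05 GHz; 4096-bit HBM @ 512 GB/s.
    double furyTflops = 4096 * 2 * 1.05e9 / 1e12;
    // GTX 1080: 2560 shaders @ ~1.73 GHz boost; 256-bit G5X @ 320 GB/s.
    double gp104Tflops = 2560 * 2 * 1.73e9 / 1e12;
    std::printf("Fury X:   %.1f TFLOPS, 512 GB/s\n", furyTflops);
    std::printf("GTX 1080: %.1f TFLOPS, 320 GB/s\n", gp104Tflops);
    // Half the bus width and 62% of the shaders, yet roughly the same
    // raw FP32 throughput once clocks are accounted for.
}
```

So once clocks are accounted for, the narrower, higher-clocked chip lands at roughly the same raw FP32 throughput; raw width alone doesn't decide anything.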