
Battlefield 1 Benchmarks (Gamegpu & the rest)

Actually it's the truth so far, looking at their DX11 vs DX12/Vulkan results in other games. Couple that with the async compute driver, which is still coming god knows when.
Also, remember DX12 for Fermi? It's nowhere to be found. So much for "100% DX12 for all DX11 GPUs."

But simply saying "FUD" solves everything, I guess.

Anyway, this is a benchmark thread for BF1, so please stick to discussing this game. For other topics, make a separate thread; we have enough derailing already.

Nice to see another optimized game from DICE; other devs can learn a thing or two. Also, if anyone wants to try it, you can play it right now if you have EA Access, but only for 10 hours.
 
This is unsubstantiated FUD.

I think at this point it's been demonstrated exhaustively throughout this entire forum.
1) DX12 driver optimisations are not possible in the same way DX11 ones were, because the API layer is now much thinner -> significantly less control over what Nvidia, AMD and Intel can do to optimise DX12 'drivers'.
2) Maxwell - no async compute after more than a year of promises that it'll come soon.
3) "This is unsubstantiated FUD." is not a valid argument; why is it 'FUD'?

@swilli89 is spot on when he says "Either their software, hardware, or a combination of both aren't well adapted to future APIs unfortunately for nvidia buyers"
Kepler hasn't aged well at all, and neither will Maxwell (for DX12; DX11 seems to be completely fine, though). Saying Nvidia cards aren't adapted for future APIs (DX12 and Vulkan) seems like a perfectly tenable thing to say, especially when there's data showing a very obvious performance regression in Vulkan and DX12 games on older Nvidia cards. On the other hand, "This is unsubstantiated FUD" seems like a knee-jerk defense mechanism for buyer's remorse - akin to blocking your ears and covering your eyes because you don't want to hear what you think to be false. Bring up evidence to counter what has been posted, or don't bother posting at all.
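Point 1 can be illustrated with a toy sketch (the class names are illustrative stand-ins, not real D3D types): under a DX11-style runtime the driver tracks resource state and inserts hazards itself, which gives it room to optimize behind the scenes; under a DX12-style API the application records every transition explicitly, leaving the driver far less to work with.

```python
# Illustrative-only model of "thick" vs "thin" graphics API layers.
# Not real D3D types; just the division of responsibility.

class DX11StyleDriver:
    """DX11-style: the driver tracks resource state and inserts
    barriers itself, so it can reorder/optimize them."""
    def __init__(self):
        self.state = {}
        self.barriers_inserted = 0

    def use(self, resource, needed_state):
        # Driver decides when a transition is actually needed.
        if self.state.get(resource) != needed_state:
            self.barriers_inserted += 1
            self.state[resource] = needed_state

class DX12StyleCommandList:
    """DX12-style: the application must record every transition
    explicitly; the driver mostly just executes what it's given."""
    def __init__(self):
        self.commands = []

    def resource_barrier(self, resource, before, after):
        self.commands.append(("barrier", resource, before, after))

drv = DX11StyleDriver()
drv.use("tex0", "shader_read")          # driver inserts the barrier for you

cl = DX12StyleCommandList()
cl.resource_barrier("tex0", "render_target", "shader_read")  # the app's job now
```

In the DX12-style case, a redundant or badly placed barrier is the application's bug, and there is little a driver update can do about it - which is roughly why per-game "driver optimisations" have less leverage under DX12.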
 
That's because there's a performance regression of the 1080 in DX12 so you're GPU bound...
That's true, but not the point of what I was saying. The poster I was replying to said DX11 was faster only because they were testing with a 5960X. That is not correct. DX11 is faster with the 6700 as well and equal with the 4770.

Edit: also, I would not consider the 4770 GPU bound in DX11, but it is in DX12.
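The GPU-bound vs CPU-bound distinction can be sketched with a toy frame-time model (the millisecond figures below are made up for illustration, not taken from these benchmarks):

```python
# Toy model: each frame waits on whichever of the CPU or GPU stage
# takes longer, so FPS = 1000 / max(cpu_ms, gpu_ms).

def fps(cpu_ms, gpu_ms):
    """Frames per second when the frame is limited by the slower stage."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# GPU-bound case (high resolution/settings): a faster CPU changes nothing.
print(fps(cpu_ms=6.0, gpu_ms=12.0))   # ~83.3 FPS
print(fps(cpu_ms=4.0, gpu_ms=12.0))   # still ~83.3 FPS

# CPU-bound case (low resolution, fast GPU): CPU differences show up.
print(fps(cpu_ms=6.0, gpu_ms=3.0))    # ~166.7 FPS
print(fps(cpu_ms=4.0, gpu_ms=3.0))    # 250 FPS
```

This is why a CPU test only tells you anything if the GPU stage is kept well under the CPU stage's frame time.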
 
Edit: also, I would not consider the 4770 GPU bound in DX11, but it is in DX12.

b1_proz_11.png


b1_proz_12.png


The 4770 is basically even in DX11 vs DX12. It is definitely CPU bound in DX11, as the 1080 drops from 121/139 -> 109/125.

All the other CPUs benefit from DX12 or are basically even.
 
OMG, my rig is going to wreck this game. This is great news for me. SLI works well, and oh, that 2600K! Wow. I would have liked to see a 6-core chip thrown in there, but it won't matter anyway. A 2600K with a moderate OC will be enough to thrash this game, let alone something with more cores. i5s are dying. Sad but true. Was that a Skylake i5 being edged out by an ancient Sandy Bridge?
Also, an FX-8350 neck and neck with a new Skylake chip under DX12? Good god. Did I mention that i5s are dead?
Also, I bet Zen would be a beast in this game. As good as or better than anything else out there, I bet. Not many games use cores like this one seems to.
 
I would have liked to see a 6 core chip thrown in there, but it won't matter anyway

The 5960X is 8-core and the 6700 performs the same, so it looks like it tops out around 4 physical cores / 8 threads.
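That topping-out behaviour is roughly what Amdahl's law predicts once part of the per-frame CPU work is inherently serial. The parallel fraction below is an assumed illustrative value, not something measured from Frostbite:

```python
# Amdahl's law: with parallel fraction p of the per-frame CPU work,
# speedup on n threads over 1 thread is 1 / ((1 - p) + p / n).
# p = 0.85 is an assumed value for illustration, not a measurement.

def speedup(p, n):
    """Ideal Amdahl's-law speedup for n threads."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.85
for n in (2, 4, 8, 16):
    print(f"{n:>2} threads: {speedup(p, n):.2f}x")
# The relative gain shrinks with each doubling of threads: 8 -> 16
# buys far less than 2 -> 4 did, so an 8C/16T chip ends up close to
# a 4C/8T one once any GPU limit is nearby.
```

A real engine also pays synchronization overhead per extra thread, which flattens the curve even faster than this ideal model suggests.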

Also, an FX 8350 neck and neck with a new Skylake chip under DX12? Good god. Did I mention that i5's are dead?

Yeah, I feel bad for everyone who cheaped out on Pentiums / i3s over similarly priced 8320s, or who used i7s from the 2000 / 3000 / 4000 series.
 
They perform the same due to a GPU bottleneck, as usual. A proper test would be 1080 SLI or a Titan X @ 1080p.

The 5960X is 8-core and the 6700 performs the same, so it looks like it tops out around 4 physical cores / 8 threads.



Yeah, I feel bad for everyone who cheaped out on Pentiums / i3s over similarly priced 8320s, or who used i7s from the 2000 / 3000 / 4000 series.

I feel bad for the people who suggested i3s, or even worse, 2C/2T Pentiums, over any Vishera, only to push an agenda. And even worse for the poor lads who paid attention to that backhanded advice.

Ultimately, DX12 is good for everyone. Less money on the CPU --> more budget allocated to the GPU. Who wouldn't want that? We know: the people who say DX12 is the son of the devil just because Paxwell can't do well on this API, aside from the UE4 games or blatantly suspicious synthetic benches such as Time Spy. Time to save a whole lot of posts for when Volta arrives and DX12 suddenly becomes good again. Yeah, just like when it was first announced and Nvidia was endorsing it with that five-year development time lie.
 
I feel bad for the people who suggested i3s, or even worse, 2C/2T Pentiums, over any Vishera, only to push an agenda. And even worse for the poor lads who paid attention to that backhanded advice.
So in one of the few game engines that can take advantage of lots of cores, running with an API that can take advantage of lots of cores, the FX-8350 barely gets 10 FPS more than the i3-6100, and that's bad for i3s how, exactly?
If anything, it's a testament to how ridiculously outdated the FX cores are; they should be able to completely destroy a dual-core CPU since they have six more cores.
 
Ultimately, DX12 is good for everyone. Less money on the CPU --> more budget allocated to the GPU. Who wouldn't want that? We know: the people who say DX12 is the son of the devil just because Paxwell can't do well on this API, aside from the UE4 games or blatantly suspicious synthetic benches such as Time Spy. Time to save a whole lot of posts for when Volta arrives and DX12 suddenly becomes good again. Yeah, just like when it was first announced and Nvidia was endorsing it with that five-year development time lie.

What? DX12 performs fantastically on Maxwell and Pascal in Tomb Raider and Gears of War. Only Gaming Evolved games are broken.

It is funny that Nixxes was able to implement DX12 in their first iteration without any problems in Tomb Raider, and yet their DX12 implementation in Deus Ex: Mankind Divided is still broken and totally useless. Maybe there is a correlation between AMD, money and certain publishers and developers...
 
I wouldn't hold your breath. Nvidia's driver team has been about as reliable as your Georgia secondary against Tennessee's Hail Marys when it comes to putting out DX12 updates. Either their software, hardware, or a combination of both aren't well adapted to future APIs, unfortunately for Nvidia buyers.

We are talking about an 8 - 10 FPS difference here between the 1060 6GB and the RX 480 at 1440p. Hardly the stuff of legends...

I'll take my lower power draw, the card size (I have the Gigabyte Mini), and the quieter card any day of the week with differences this small.
 
We are talking about an 8 - 10 FPS difference here between the 1060 6GB and the RX 480 at 1440p. Hardly the stuff of legends...

I'll take my lower power draw, the card size (I have the Gigabyte Mini), and the quieter card any day of the week with differences this small.

On top of this, GTX 1060 in DX11 performs exactly like RX 480/390X in DX12, with (much) better minimum FPS.

http://www.sweclockers.com/test/22807-snabbtest-battlefield-1-i-directx-11-och-directx-12

There is room for improvement as well, just like AotS's DX12 implementation on NVIDIA cards (launch vs now).
 
Best-implemented DX12 game yet, and it's a bloodbath o_0

Looking forward to more benchmarks.. This is one of the biggest games in 2016, so this is pretty much a big deal 🙂
No, it isn't. It's a DirectX 11 game with a DX12 back-end renderer.
 
On top of this, GTX 1060 in DX11 performs exactly like RX 480/390X in DX12, with (much) better minimum FPS.

http://www.sweclockers.com/test/22807-snabbtest-battlefield-1-i-directx-11-och-directx-12

There is room for improvement as well, just like AotS's DX12 implementation on NVIDIA cards (launch vs now).

For those not translating the text, Sweclockers talks about DX12 having more stutters and irritating "micro stops" for both AMD and Nvidia, but it's worse for AMD than for Nvidia
 
A proper test would be with a full 64-player game going. That's what sucks up the CPU time.

Most review sites are too chicken to do that. They are so terrified of not being able to maintain perfect repeatability that they totally sacrifice any kind of useful data in these multiplayer games by doing CPU tests in single player. But still, avoiding GPU bottlenecks is critical if you want to see relative CPU performance. Of course, most sites get this part wrong as well.
 
Most review sites are too chicken to do that. They are so terrified of not being able to maintain perfect repeatability that they totally sacrifice any kind of useful data in these multiplayer games by doing CPU tests in single player. But still, avoiding GPU bottlenecks is critical if you want to see relative CPU performance. Of course, most sites get this part wrong as well.

If they did, then fans of a particular CPU maker would cry foul if their preferred brand does worse than expected 😛



Your purposeful, inflammatory post was unnecessary.


esquared
Anandtech Forum Director
 
Testing Battlefield games for CPU performance in single player is so dumb it's basically misleading. Reviewers need to test 64-player MP for CPU or not bother putting up CPU tests at all. Nobody cares how great the single-player campaign runs when you'll play it for 1/400th of the total time you play BF.
 
For those not translating the text, Sweclockers talks about DX12 having more stutters and irritating "micro stops" for both AMD and Nvidia, but it's worse for AMD than for Nvidia

That's terrible. Hopefully official drivers will fix some of these issues.

DX12 is not looking like how I envisioned it...
 
For those not translating the text, Sweclockers talks about DX12 having more stutters and irritating "micro stops" for both AMD and Nvidia, but it's worse for AMD than for Nvidia
The culprit is Sweclockers' test bench which has 8GB of RAM. DX12 tends to increase memory usage, and 16GB is recommended so it's not a surprise. Battlefront also had poor frame times if you had 8GB.

16GB seems to be the new 8GB, at least for gaming.
 
So in one of the few game engines that can take advantage of lots of cores, running with an API that can take advantage of lots of cores, the FX-8350 barely gets 10 FPS more than the i3-6100, and that's bad for i3s how, exactly?
If anything, it's a testament to how ridiculously outdated the FX cores are; they should be able to completely destroy a dual-core CPU since they have six more cores.

He's not talking about the latest i3; I guess he means the older ones, the ones many people recommended over the FX 8350. It's good that you recognize that the older FX 8350 beats the newest i3 in today's titles. I play everything on my FX machine, where the limiting part is the 7970 I have with it.
 
The culprit is Sweclockers' test bench which has 8GB of RAM. DX12 tends to increase memory usage, and 16GB is recommended so it's not a surprise. Battlefront also had poor frame times if you had 8GB.

16GB seems to be the new 8GB, at least for gaming.

I agree with you here!

Surprised it took so long for 16 GB to become the new 8 GB!
 
That's terrible. Hopefully official drivers will fix some of these issues.

DX12 is not looking like how I envisioned it...

DX12 sucks right now. It adds unnecessary burden to the developer in search of gains that just aren't worth it, especially given that gaming PCs have much faster CPUs than the consoles.

There is a reason DX12-like APIs didn't really come to PC until AMD started aggressively pushing Mantle.
 
DX12 sucks right now. It adds unnecessary burden to the developer in search of gains that just aren't worth it, especially given that gaming PCs have much faster CPUs than the consoles.

There is a reason DX12-like APIs didn't really come to PC until AMD started aggressively pushing Mantle.

Doom is going to keep coming up whenever you try bagging on DX12 and low-level APIs in general. There's a learning curve involved with these APIs, as we've seen, but the gains are very real and worthwhile.
 