
Guru3D: Star Wars: Battlefront Beta VGA graphics performance benchmarks

And this is a DX11 game with the option to use feature level 11_1?

Neat how Fiji can match the GM200 in this case even though it's not using bindless resources or asynchronous compute, two of the biggest performance-boosting features, which have been left out for good measure ...
 
This puts the whole argument "a card that scores better at higher resolution tends to last longer" completely on its head.

Absolutely not. The whole point is to look at overall trends and pay attention to outliers in context. What has been the overall trend in the last 20 years of GPUs? The card that performs better at higher resolutions lasts longer in probably 90% of previous GPU generations. Conversely, the card that performs better at 1080p but worse at higher resolutions eventually loses most of its advantage in future games at 1080p as those games become more GPU-demanding. There are of course exceptions to this rule, but generally speaking the 960 is likely a far worse bet for keeping over 2-3 years than the R9 380/280X, and especially the 290. Considering it's still possible to buy an R9 290 for $245 in the US, while most 960 4GB cards still sell for $200, it's shocking that people still keep buying the 960. Well, I guess it's shocking to me since I wrongly presumed that most mainstream GPU buyers would eventually wake up and start doing GPU research, but I guess I am wrong generation after generation. It also doesn't help when "professional" sites like TechReport or HardOCP recommend GTX950/960 cards over far superior options, fully ignoring VRAM bottlenecks, the raw GPU horsepower that will come into play with future games, the wide range of performance across more than 7-8 games, etc. And guess who the mainstream gamer is going to listen to? 😛

Can GTX950/960/950 4GB owners get a refund check now? Nope. Back to upgrade land in 2016. I am sure NV is loving that.

Whoa! Could they be more biased if they tried?

What do you mean? I guess I am missing something? Looks like the older architectures (Fermi and VLIW-4) are getting hammered -- possibly some internal bottlenecks across the board -- pixel, shaders, textures, etc. R9 280X is 120% faster than the HD6950. Remember when people said HD7970 was a small upgrade from HD6970? :awe:

While the FPS are not directly comparable to Guru3D's testing, the overall standing seems somewhat similar.

960 < 770 < 380 < 280X < 780 < Titan < 970 < 780Ti < 290 < 980 < 390X < Fury < Titan X < 980Ti.

They don't have the flagship Fury X in their charts though.

I found this interesting:

"The graphics load is comparable to the online battles, but the CPU load is lower. One exception is our memory measurements with 8 and 16 GiB of main memory, which we recorded on the more demanding Hoth map in Walker Assault mode with 40 players. Note that the frametimes here are not 1:1 reproducible. Nevertheless, the difference is clearly visible and at times even noticeable: the frametimes with only 8 GiB of RAM are significantly worse, and this could be intensified further by slower-clocked memory. The 16 GiB of main memory that DICE lists as a recommendation certainly seems sensible for undiluted gaming enjoyment."

[PCGH frametime charts: Hoth 20v20 with 8 GiB RAM vs. 16 GiB RAM]


This is one of the few times if not the first where I am seeing a real benefit in frame times moving from 8GB to 16GB of RAM.
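Frametime comparisons like the 8 GiB vs. 16 GiB one above are usually summarized with averages and high percentiles, since a few long frames ruin smoothness even when the average looks fine. A minimal sketch in Python (the sample values are made up for illustration, not PCGH's actual data):

```python
# Hypothetical frametime samples in milliseconds (invented numbers standing in
# for the 8 GiB vs 16 GiB captures discussed above).
frametimes_8gib = [16.7, 16.9, 33.4, 17.0, 16.8, 45.1, 16.6, 17.2, 38.0, 16.9]
frametimes_16gib = [16.7, 16.8, 17.1, 16.9, 17.0, 16.8, 16.7, 17.2, 16.9, 16.8]

def percentile(samples, pct):
    """Nearest-rank percentile: the value below which pct% of samples fall."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

for label, data in (("8 GiB", frametimes_8gib), ("16 GiB", frametimes_16gib)):
    avg = sum(data) / len(data)
    p99 = percentile(data, 99)
    print(f"{label}: avg {avg:.1f} ms, 99th percentile {p99:.1f} ms")
```

The 99th-percentile number is what exposes the hitching in the 8 GiB case: the average barely moves, but the worst frames are more than twice as long.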

This game is still in beta though, so chances are we'll see 2-3 more AMD/NV driver releases and game patches that should improve performance further for both companies.
 
I wonder how things will change when the DX12 patch comes out.

Did they confirm DX12? Last I checked, DICE hadn't decided and said it could be DX12 or Vulkan.

If it's just a patch, I don't think Battlefront would be using major features, more like BF4 -> Mantle for lower CPU overhead and better frame latency in multiplayer.
 
Did they confirm DX12? Last I checked, DICE hadn't decided and said it could be DX12 or Vulkan.

If it's just a patch, I don't think Battlefront would be using major features, more like BF4 -> Mantle for lower CPU overhead and better frame latency in multiplayer.

I'm under the impression it's still getting a DX12 patch, at least from what little I read following the game.

I recall back in April (or some time then) they wanted DX12 to be the minimum for the game. I interpreted that as how much they wanted to work with DX12.
 
Did they confirm DX12? Last I checked, DICE hadn't decided and said it could be DX12 or Vulkan.

If it's just a patch, I don't think Battlefront would be using major features, more like BF4 -> Mantle for lower CPU overhead and better frame latency in multiplayer.

Never played BF4 enough to even try Mantle. But DA:I's Mantle only did one thing: make the super-long loading times even longer...
 
Never played BF4 enough to even try Mantle. But DA:I's Mantle only did one thing: make the super-long loading times even longer...

It worked really well in both BF4 (MP and frame latency) and DA:I, and then it didn't. EA dropped support for it after DICE said Mantle was a proof of concept only. Actually, Mantle still works in BF4 for the R9 290X, but not for GCN 1.2 GPUs, where it's messed up.
 
Oh, there's no MSAA in the graphics options.



Which is about time, because MSAA doesn't do much in deferred rendering engines like Frostbite 3 but still hurts performance.

They kept the Resolution Scale option, which is great: built-in SSAA without the blurriness issues of DSR.

I think MSAA was dropped in favor of a temporal solution. FXAA wasn't designed to be used in conjunction with MSAA anyway.
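For reference, a resolution-scale slider like the one mentioned above simply renders the scene internally at scale x display resolution and downsamples to the screen, which is why it behaves like SSAA. A rough sketch of the arithmetic (function and variable names are illustrative, not Frostbite's actual settings):

```python
def internal_resolution(display_w, display_h, scale):
    """Render-target size for a given resolution-scale factor.

    scale 1.0 = native rendering; scale 2.0 renders 4x the pixels
    (2x in each dimension), i.e. roughly 4x supersampling.
    """
    return round(display_w * scale), round(display_h * scale)

# 1920x1200 at a 150% resolution scale renders internally at 2880x1800:
print(internal_resolution(1920, 1200, 1.5))
```

This also shows why raising the scale is so expensive: the pixel cost grows with the square of the factor, so 150% scale is already 2.25x the shading work.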
 
Also, unlike on PC, while AMD cards are faster, both AMD and NV deliver excellent performance. The only card with poor performance is the GTX 960 2GB, but we know why: the 960 is not a good video card. I suspected for a long time that the 960 would start falling apart rather quickly, looking over various benchmarks online and seeing the trends. If you remove the 960 from those benchmarks, the 970/780Ti/980 still offer great performance.

I hope more people start to see how crippled the 960 is as an x60-series NV card and stop recommending it. It's turning into a major disaster for anyone who fell for NV's marketing/hype around that card. It's just unfortunate that professional reviewers lapped it up and hyped it with gold and silver awards, which further misled the average gamer into thinking it was a safe purchase. I guess I can't say I didn't provide warnings since January 2015. :ninja:

The 960 would be a good card for my mITX Core i3 system, where I value its low TDP. I have a slight preference for AMD and would take the 285/380 if not for their somewhat too-high power consumption.
 
The 960 would be a good card for my mITX Core i3 system, where I value its low TDP. I have a slight preference for AMD and would take the 285/380 if not for their somewhat too-high power consumption.

Here is a guide from EVGA on how to further reduce power draw and increase performance per watt and overclocking headroom:

DISABLE THE FANS

 
The 960 would be a good card for my mITX Core i3 system, where I value its low TDP. I have a slight preference for AMD and would take the 285/380 if not for their somewhat too-high power consumption.

You could use powertune and the 380 would still be quite a bit faster.
 
Here is a guide from EVGA on how to further reduce power draw and increase performance per watt and overclocking headroom:

DISABLE THE FANS


So how much more overclocking headroom would a user get from 3 extra watts? 1 MHz?

And also, doesn't better cooling reduce power draw anyway?
 
Just tried this beta out. The survival mode on Tatooine runs flawlessly on High on my GTX 670 (with factory overclock).

All Ultra at 1920x1200 runs at ~50 fps, with drops to 40 in the MSI Afterburner graph.

Maybe the multiplayer is more demanding.
 
Holy crap, I didn't even see that. Something fishy is going on there. Haha.

This puts the whole argument "a card that scores better at higher resolution tends to last longer" completely on its head.

Eh, they're all slideshows at that point anyway.
 
Just tried this beta out. The survival mode on Tatooine runs flawlessly on High on my GTX 670 (with factory overclock).

All Ultra at 1920x1200 runs at ~50 fps, with drops to 40 in the MSI Afterburner graph.

Maybe the multiplayer is more demanding.

I was also just playing for an hour on the small multiplayer map at 1200p.

7970 GHz, stock clocks.

Ultra
60 fps avg/high
50 fps low

Vsync is on; with it off, the highs hit 90 fps.
 