It's CPU limited / AMD CPU overhead. That's why the Fury X has almost the same performance as the 390X at 1080p.
Google translator
http://images.nvidia.com/geforce-co...interactive-comparison-001-on-vs-off-rev.html
If you've got an NV card, force HQ 16x AF in the drivers. By default, the game seems to be really awful at it.
What's with the whining about SLI? It always seems based on large assumptions about the feasibility or ease of implementation.
News Flash! DirectX and OpenGL don't support multi-GPU (at least not in a way that works well with games). Do you really think Nvidia will convince any developer to write their engine a certain way (or modify it, assuming that's feasible or even possible), including forgoing optimizations that are incompatible with AFR, just so a few % of users will be happy?
The reality is that occasionally or even often, some games, even big games, won't support SLI. Whichever new API becomes popular might solve the issue, but who knows. I'm not convinced every development house will bother.
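To make the AFR point concrete, here's a toy Python model (all numbers hypothetical, not measured from any real engine) of why a renderer that reads the previous frame's output, e.g. temporal AA or reprojection, wrecks two-GPU scaling:

```python
# Toy model of Alternate Frame Rendering (AFR): two GPUs render alternating
# frames. The timings below are made-up illustrative numbers.

RENDER_MS = 16.0  # time for one GPU to render a frame (hypothetical)
COPY_MS = 8.0     # time to ship the previous frame's buffer between GPUs (hypothetical)

def afr_ms_per_frame(two_gpus, reads_previous_frame):
    """Effective milliseconds per displayed frame."""
    if not two_gpus:
        return RENDER_MS
    if not reads_previous_frame:
        # Frames are independent, so the two GPUs overlap fully and the
        # effective frame time halves: near-perfect 2x scaling.
        return RENDER_MS / 2
    # Frame N can't start until frame N-1's buffer arrives from the other
    # GPU, so the transfer lands on the critical path of every frame.
    return (RENDER_MS + COPY_MS) / 2

# 16.0 ms single GPU, 8.0 ms with clean AFR, 12.0 ms once a
# previous-frame dependency forces an inter-GPU copy each frame.
```

This is exactly the kind of optimization conflict the post above is describing: a perfectly sensible single-GPU technique quietly serializes the two cards.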
I'm not sure if you're kidding or serious. Some GameWorks features can realistically only be run on a 980 Ti, which is a VERY small % of users, yet Nvidia STILL invests in getting those features into games...
Again, not sure if you're serious... SLI not working in games, especially when Nvidia has a hand in those games, is VERY, VERY annoying.
Multi GPU (SLI/CrossfireX) is f*****g complicated. You cannot begin to conceive of the number of failure cases that are involved until you see them in person. I suspect that more than half of the total software effort within the IHVs is dedicated strictly to making multi-GPU setups work with existing games. If you've ever tried to independently build an app that uses multi GPU - especially if, god help you, you tried to do it in OpenGL - you may have discovered this insane rabbit hole.
Does anyone know a workaround, like an Inspector profile that works? I game at 1440p/120Hz with 980 Tis, and 70 fps is not good enough; I don't want to turn down settings.
Apparently a profile is being worked on.
As for now, maybe this will help. Not sure if there's another profile that will work.
https://forums.geforce.com/default/topic/900670/how-to-solve-sli-rendering-issue-with-just-cause-3-/
Most developers who care about PC gamers work with AMD/NV to get multi-GPU support working unless the game engine itself is 100% incompatible (UE4 or Company of Heroes 2). In this case, I have not seen any evidence that the Avalanche Engine 3.0 is incompatible, since it works in Mad Max. That tells me it is 100% the developer's responsibility to work with NV/AMD to get multi-GPU working. I guess the developer just decided not to spend any $ on multi-GPU support for one reason or another.
Side-by-side video footage proves without a shadow of a doubt that Just Cause 3 is a console port with trivial improvements in shadows and heat haze effect.
My guess is the decision not to spend any $ on multi-GPU support or on the PC version's technical graphics is because this studio doesn't have the resources for that. Sales of Mad Max weren't that stellar.
Based on the amount of user review feedback, JC3 is looking like it's not doing that well in sales.
Can't expect much from a game that was clearly made primarily for consoles and just ported to PC for extra sales as an afterthought.
Time to learn, fellas:
I'm not really sure why you're saying it's a port. All versions of the game were developed simultaneously.
Anyway, SLI may be possible (and is apparently coming), but my point was that people shouldn't just expect it to be in every game. It's not a trivial addition; everything I've heard about it from developers suggests it's the complete opposite in many cases.
https://www.youtube.com/watch?v=pBBw5nssai0
Very interesting: AMD GPUs seem to have performance problems under heavy load that may not be apparent in most benchmarks.
Yeah, the actual papers they put out a while ago go into more detail and are pretty interesting. Clustered deferred shading, a new shadow management system(it sort of works in tandem with the cluster system iirc), new LOD system etc.
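For anyone curious what "clustered" means there, here's a rough sketch of clustered light binning (grid sizes, bounds, and names are illustrative, not the actual Avalanche engine code): the view volume is split into a 3D grid of clusters, each light is assigned to the clusters its radius touches, and shading then only loops over the lights binned into the current cluster.

```python
# Minimal clustered light binning sketch. GRID and view bounds are
# hypothetical; a real engine clusters in view space with a logarithmic
# depth slicing, but the binning idea is the same.

GRID = (4, 4, 8)                  # clusters along x, y, z (made up)
VIEW_MIN, VIEW_MAX = -100.0, 100.0

def cluster_index(p):
    """Map a point (x, y, z) to its (ix, iy, iz) cluster coordinate."""
    idx = []
    for axis in range(3):
        t = (p[axis] - VIEW_MIN) / (VIEW_MAX - VIEW_MIN)
        idx.append(min(GRID[axis] - 1, max(0, int(t * GRID[axis]))))
    return tuple(idx)

def bin_lights(lights):
    """lights: list of (center, radius). Returns {cluster: [light ids]}."""
    bins = {}
    for lid, (center, radius) in enumerate(lights):
        # Conservatively cover the light's bounding box with clusters.
        lo = cluster_index(tuple(c - radius for c in center))
        hi = cluster_index(tuple(c + radius for c in center))
        for ix in range(lo[0], hi[0] + 1):
            for iy in range(lo[1], hi[1] + 1):
                for iz in range(lo[2], hi[2] + 1):
                    bins.setdefault((ix, iy, iz), []).append(lid)
    return bins
```

The payoff is that a pixel in a given cluster shades against a handful of nearby lights instead of every light in the scene, which is what makes huge open worlds with many light sources tractable.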
My personal opinion: this game is very poorly optimized on the GPU side for its level of mediocre graphics, requiring an overclocked 780 or a 290/Titan to hit 60 fps @ 1080p for console-level visuals. Mad Max, made on the same Avalanche Engine 3.0, looks and runs better.
Just Cause 3 is a HUGE game. BF3's maps don't hold a candle in terms of size. Battlefront's maps are even smaller than most BF3 and BF4 maps.
GTAV would be a better comparison, but I would agree that GTAV is a better looking game.
Aren't you generalizing there? The only AMD GPU that was tested was R9 380, not 280X/380X/290/290X/R9 295X2/390/390X/Fury/Nano/Fury X but your post implies some global issue. Let's take a look from that video:
In the beginning, the 960 gets pummeled into the ground more or less the entire time during the first 1:10 of that video, but he doesn't emphasize that at all, despite the R9 380 being that much closer to the 60 fps mark.
At the 0:22 mark, the GTX 960 is at 48 fps, the R9 380 at 60 fps.
At the 0:29 mark, the GTX 960 is at 45 fps, the R9 380 at 57 fps.
At the 0:51-0:52 mark, the GTX 960 is at 48 fps, the R9 380 at 70 fps.
At the 1:06 mark, the GTX 960 is at 48 fps, the R9 380 at 60 fps.
None of this is important to him? OK, I guess sub-60 fps is suddenly not worth talking about.
Then once he gets to the 1:15 mark, he starts talking about large latency spikes while ignoring how the 960 was bombing the entire time up to 1:15?
Then at the 1:49 mark he talks about how firing the mini-gun depresses the performance of the 380 but 960 is at 40-41 fps vs. 380 at 38-39 fps. Then he proceeds to show how both of these cards cannot even come close to 50 fps during those scenes. In other words, both are too slow during gun fights/explosions for smooth 60 fps PC gaming standard.
The way he presented the data right there is not very objective because he seems to suggest that R9 380 is having some 'major' issues while ignoring that in other parts of the game the 960 is the one that's struggling and has major deficits against the 380. His analysis actually shows that neither the 960 nor the 380 is good enough to get 60 fps @ 1080P maxed out in JC3 and a faster graphics card is required. While in some scenes 960 has the edge, in others 380 has the edge but neither is great. That's what I got out of that video.
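For what it's worth, averaging the four spot readings quoted above makes that first-minute gap concrete:

```python
# Spot fps readings from the video at 0:22, 0:29, 0:51, and 1:06.
gtx960 = [48, 45, 48, 48]
r9_380 = [60, 57, 70, 60]

avg_960 = sum(gtx960) / len(gtx960)  # 47.25 fps average
avg_380 = sum(r9_380) / len(r9_380)  # 61.75 fps average
print(avg_380 - avg_960)             # 14.5 fps in the 380's favor
```

So across those four samples the 380 averages about 14.5 fps ahead, which makes the one-sided framing of the later latency spikes look even stranger.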
As far as the FX-8350 is concerned, it's not at all surprising that an i5-4690K would smoke it. Unless he overclocked the FX-8350 to 4.7-4.8GHz, it's a foregone conclusion that it won't be as fast as an i5-4690K. So basically the moral of the story is to get an R9 290/390/970, because the sub-$200 desktop GPU landscape right now just presents bad value.
Was watching a Russian youtube channel tonight and the main editor said the same thing -- every GPU under R9 390/970 is crap and isn't worth buying for a good modern AAA experience. Better to spend a little more upfront and enjoy great gaming experience over the next 2 years. I agree.
Too bad all of that technical speak means little in this case when the end result doesn't translate to a next gen PC game. The graphics are so outdated, it's ridiculous that cards like GTX960/380 cannot max this game out at 1080P @ 60 fps.
Concluding that this is a disadvantage for AMD Radeons and not just the 380 seems pretty logical, especially considering AMD's history of worse DX11 overhead, but I'm sure they can do more testing.
Concluding it affects all radeons is totally premature from one test on a midrange card. We have no idea what would actually be causing it, your DX11 overhead theory is just one and not a very well supported one since we do not have any more evidence other than this video. It could be the particular CPU/GPU combination that one reviewer has. It could be a lot of things.
It could be driver related, vram memory bandwidth related, ROP related, system RAM related, architecture specific... the point is we do not know and making speculative conclusions based on too little evidence helps no one.
My theory is that you will see no problems like this on a 290 due to the doubled ROPs. My theory is exactly as well supported as your dx11 overhead theory, which is to say, it's pure speculation.
More data needed.
Well, they're not there to make the game look better really, but to maintain performance in stressful scenarios. The problem is that most of the really stressful scenarios involve a ton of CPU limited physics and destruction.
The game might not look as nice as others, but I think they traded that at least partially for having a massive draw distance. So instead of making each tree look way better, they decided to just show way more of them.
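That tradeoff is basically a LOD budget decision: a toy sketch (thresholds and names made up, not from the actual engine) of distance-based LOD selection, which is what lets an engine afford to draw way more trees instead of nicer ones:

```python
# Illustrative distance-based LOD pick. Thresholds are hypothetical:
# near objects get the full mesh, far ones get cheaper representations,
# and anything beyond the last threshold is culled.
LOD_THRESHOLDS = [
    (50.0, "lod0_full"),       # full-detail mesh up close
    (200.0, "lod1_reduced"),   # simplified mesh at mid range
    (800.0, "lod2_billboard"), # flat impostor in the distance
]

def pick_lod(distance):
    for max_dist, lod in LOD_THRESHOLDS:
        if distance <= max_dist:
            return lod
    return None  # beyond draw distance: not drawn at all

print(pick_lod(30.0))    # lod0_full
print(pick_lod(500.0))   # lod2_billboard
print(pick_lod(1000.0))  # None
```

Push the last threshold out and the horizon fills with cheap billboards for almost no extra cost per object, which fits what the game appears to be doing with its forests.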